Faced with dwindling bee colonies, scientists are arming queens with robots and smart hives
https://robohub.org/faced-with-dwindling-bee-colonies-scientists-are-arming-queens-with-robots-and-smart-hives/ (Sun, 31 Dec 2023)

By Farshad Arvin, Martin Stefanec, and Tomas Krajnik

Be it the news or the dwindling number of creatures hitting your windscreens, it will not have evaded you that the insect world is in bad shape.

In the last three decades, the global biomass of flying insects has shrunk by 75%. Among the trend’s most notable victims is the world’s most important pollinator, the honeybee. In the United States, 48% of honeybee colonies died in 2023 alone, making it the second deadliest year on record. This significant loss is due in part to colony collapse disorder (CCD), the sudden disappearance of bees. In contrast, European countries report lower but still worrisome rates of colony losses, ranging from 6% to 32%.

This decline causes many of our essential food crops to be under-pollinated, a phenomenon that threatens our society’s food security.

Debunking the sci-fi myth of robotic bees

So, what can be done? Given pesticides’ role in the decline of bee colonies, commonly proposed solutions include a shift away from industrial farming and toward less pesticide-intensive, more sustainable forms of agriculture.

Others tend to look toward the sci-fi end of things, with some scientists imagining that we could eventually replace live honeybees with robotic ones. Such artificial bees could interact with flowers like natural insects, maintaining pollination levels despite the declining numbers of natural pollinators. This vision of artificial pollinators has inspired ingenious designs of insect-sized robots capable of flying.

In reality, such inventions say more about engineers’ fantasies than they do about reviving bee colonies, so slim are their prospects of materialising. First, these artificial pollinators would have to be equipped for much more than just flying. Daily tasks carried out by the common bee include searching for plants, identifying flowers, unobtrusively interacting with them, locating energy sources, ducking potential predators, and dealing with adverse weather conditions. Robots would have to perform all of these tasks in the wild with a very high degree of reliability, since any broken-down or lost robot can cause damage and spread pollution. Second, it remains to be seen whether our technological knowledge would even be capable of manufacturing such inventions. This is without even mentioning the price tag of a swarm of robots capable of substituting the pollination provided by a single honeybee colony.

Inside a smart hive

Bees on one of Hiveopolis’s augmented hives.
Hiveopolis, provided by the author

Rather than trying to replace honeybees with robots, our two latest projects funded by the European Union propose that robots and honeybees actually team up. Were these to succeed, struggling honeybee colonies could be transformed into bio-hybrid entities consisting of biological and technological components with complementary skills. This would hopefully boost and secure the colonies’ population growth, as more bees survive harsh winters and yield more foragers to pollinate surrounding ecosystems.

The first of these projects, Hiveopolis, investigates how the complex decentralised decision-making mechanism in a honeybee colony can be nudged by digital technology. Begun in 2019 and set to end in March 2024, the experiment introduces technology into three observation hives, each containing 4,000 bees, compared with the roughly 40,000 bees of a normal colony.

The foundation of an augmented honeycomb.
Hiveopolis, provided by the author

Within this honeybee smart home, combs have integrated temperature sensors and heating devices, allowing the bees to enjoy optimal conditions inside the colony. Since bees tend to snuggle up to warmer locations, the combs also enable us to direct them toward different areas of the hive. And as if that control weren’t enough, the hives are also equipped with a system of electronic gates that monitors the insects’ movements. Both technologies allow us to decide not only where the bees store honey and pollen, but also when they vacate the combs so that we can harvest honey. Last but not least, the smart hive contains a robotic dancing bee that can direct foraging bees toward areas with plants to be pollinated.

Due to the experiment’s small scale, it is impossible to draw conclusions on the extent to which our technologies may have prevented bee losses. However, there is little doubt that what we have seen thus far gives reason to be hopeful. We can confidently assert that our smart beehives allowed colonies to survive extreme cold during the winter in a way that wouldn’t otherwise be possible. Precisely assessing how many bees these technologies have saved would require upscaling the experiment to hundreds of colonies.

Pampering the queen bee

Our second EU-funded project, RoboRoyale, focuses on the honeybee queen and her courtyard bees, with robots in this instance continuously monitoring and interacting with her Royal Highness.

Come 2024, we will equip each hive with a group of six bee-sized robots, which will groom and feed the honeybee queen to affect the number of eggs she lays. Some of these robots will be equipped with royal jelly micro-pumps to feed her, while others will feature compliant micro-actuators to groom her. These robots will then be connected to a larger robotic arm with infrared cameras that will continuously monitor the queen and her vicinity.

A RoboRoyale robot arm susses out a honeybee colony.
RoboRoyale, provided by the author

As the photo to the right and the one below show, we have already been able to successfully introduce the robotic arm into a living colony. There it continuously monitored the queen and determined her whereabouts through light stimuli.

Emulating the worker bees

In a second phase, it is hoped the bee-sized robots and robotic arm will be able to emulate the behaviour of the workers, the female bees lacking reproductive capacity who attend to the queen and feed her royal jelly. Rich in water, proteins, carbohydrates, lipids, vitamins and minerals, this nutritious substance secreted by the glands of worker bees enables the queen to lay thousands of eggs a day.

Worker bees also engage in cleaning the queen, which involves licking her. During such interactions, they collect some of the queen’s pheromones and disperse them throughout the colony as they move across the hive. The presence of these pheromones controls many of the colony’s behaviours and notifies the colony of a queen’s presence. For example, in the event of the queen’s demise, a new queen must be quickly reared from an egg laid by the late queen, leaving only a narrow time window for the colony to react.

One of RoboRoyale’s first experiments consisted of simple interactions with the queen bee through light stimuli. In the coming months, the robotic arm will stretch out to physically touch and groom her.
RoboRoyale, provided by the author

Finally, it is believed worker bees may also act as the queen’s guides, leading her to lay eggs in specific comb cells. The size of these cells can determine whether the queen lays a diploid or haploid egg, resulting in the bee developing into either a drone (male) or a worker (female). Taking over these guiding duties could affect nothing less than the colony’s entire reproductive rate.

How robots can prevent bee cannibalism

This could have another virtuous effect: preventing cannibalism.

During tough times, such as long periods of rain, bees have to make do with little pollen intake. This forces them to feed young larvae to older ones so that at least the older larvae have a chance to survive. Through RoboRoyale, we will look not only to reduce the chances of this behaviour occurring, but also to quantify the extent to which it occurs under normal conditions.

Ultimately, our robots will enable us to deepen our understanding of the very complex regulation processes inside honeybee colonies through novel experimental procedures. The insights gained from these new research tracks will be necessary to better protect these valuable social insects and ensure sufficient pollination in the future – a high stakes enterprise for food security.


This article is the result of The Conversation’s collaboration with Horizon, the EU research and innovation magazine.

The Conversation

Farshad Arvin is a member of the Department of Computer Science at Durham University in the UK. The research of Farshad Arvin is primarily funded by the EU H2020 and Horizon Europe programmes.

Martin Stefanec is a member of the Institute of Biology at the University of Graz. He has received funding from the EU programs H2020 and Horizon Europe.

Tomas Krajnik is a member of the Institute of Electrical and Electronics Engineers (IEEE). The research of Tomas Krajnik is primarily funded by the EU H2020 Horizon programme and the Czech National Science Foundation.

Mobile robots get a leg up from a more-is-better communications principle
https://robohub.org/mobile-robots-get-a-leg-up-from-a-more-is-better-communications-principle/ (Sat, 19 Aug 2023)

Getting a leg up from mobile robots comes down to getting a bunch of legs. Georgia Institute of Technology

By Baxi Chong (Postdoctoral Fellow, School of Physics, Georgia Institute of Technology)

Adding legs to robots that have minimal awareness of the environment around them can help the robots operate more effectively in difficult terrain, my colleagues and I found.

We were inspired by mathematician and engineer Claude Shannon’s communication theory about how to transmit signals over distance. Instead of spending a huge amount of money to build the perfect wire, Shannon illustrated that it is good enough to use redundancy to reliably convey information over noisy communication channels. We wondered if we could do the same thing for transporting cargo via robots. That is, if we want to transport cargo over “noisy” terrain, say fallen trees and large rocks, in a reasonable amount of time, could we do it by just adding legs to the robot carrying the cargo and do so without sensors and cameras on the robot?

Most mobile robots use inertial sensors to gain an awareness of how they are moving through space. Our key idea is to forget about inertia and replace it with the simple function of repeatedly making steps. In doing so, our theoretical analysis confirms our hypothesis of reliable and predictable robot locomotion – and hence cargo transport – without additional sensing and control.

To verify our hypothesis, we built robots inspired by centipedes. We discovered that the more legs we added, the better the robot could move across uneven surfaces without any additional sensing or control technology. Specifically, we conducted a series of experiments where we built terrain to mimic an inconsistent natural environment. We evaluated the robot locomotion performance by gradually increasing the number of legs in increments of two, beginning with six legs and eventually reaching a total of 16 legs.
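To get a feel for why redundancy alone helps, consider a toy model that is ours for illustration only, not the analysis in the paper: treat each leg as having some independent chance of finding useful ground contact on rough terrain, and ask how likely it is that enough legs are in contact at any moment. The contact probability of 0.5 and the threshold of three legs below are assumptions chosen purely to illustrate the trend.

```python
from math import comb

def contact_probability(n_legs: int, p_contact: float, min_contacts: int) -> float:
    """Chance that at least `min_contacts` of `n_legs` find useful ground
    contact, treating each leg as an independent Bernoulli trial."""
    return sum(
        comb(n_legs, k) * p_contact**k * (1 - p_contact)**(n_legs - k)
        for k in range(min_contacts, n_legs + 1)
    )

# If any single leg finds purchase only half the time on "noisy" terrain,
# adding legs sharply raises the odds that enough legs are in contact to move.
for n in (6, 10, 16):
    print(n, "legs:", round(contact_probability(n, 0.5, 3), 3))
```

With those assumed numbers, the chance of having at least three legs in contact rises from about 66% with six legs to more than 99% with 16 – the same more-is-better logic Shannon applied to noisy communication channels.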

Navigating rough terrain can be as simple as taking it a step at a time, at least if you have a lot of legs.

As the number of legs increased, we observed that the robot exhibited enhanced agility in traversing the terrain, even in the absence of sensors. To further assess its capabilities, we conducted outdoor tests on real terrain to evaluate its performance in more realistic conditions, where it performed just as well. There is potential to use many-legged robots for agriculture, space exploration and search and rescue.

Why it matters

Transporting things – food, fuel, building materials, medical supplies – is essential to modern societies, and effective goods exchange is the cornerstone of commercial activity. For centuries, transporting material on land has required building roads and tracks. However, roads and tracks are not available everywhere. Places such as hilly countryside have had limited access to cargo. Robots might be a way to transport payloads in these regions.

What other research is being done in this field

Other researchers have been developing humanoid robots and robot dogs, which have become increasingly agile in recent years. These robots rely on accurate sensors to know where they are and what is in front of them, and then make decisions on how to navigate.

However, their strong dependence on environmental awareness limits them in unpredictable environments. For example, in search-and-rescue tasks, sensors can be damaged and environments can change.

What’s next

My colleagues and I have taken valuable insights from our research and applied them to the field of crop farming. We have founded a company that uses these robots to efficiently weed farmland. As we continue to advance this technology, we are focused on refining the robot’s design and functionality.

While we understand the functional aspects of the centipede robot framework, our ongoing efforts are aimed at determining the optimal number of legs required for motion without relying on external sensing. Our goal is to strike a balance between cost-effectiveness and retaining the benefits of the system. Currently, we have shown that 12 is the minimum number of legs for these robots to be effective, but we are still investigating the ideal number.


The Research Brief is a short take on interesting academic work.

The Conversation

The author has received funding from the NSF-Simons Southeast Center for Mathematics and Biology (Simons Foundation SFARI 594594), the Georgia Research Alliance (GRA.VL22.B12), the Army Research Office (ARO) MURI program, Army Research Office Grant W911NF-11-1-0514 and a Dunn Family Professorship.

The author and his colleagues have one or more pending patent applications related to the research covered in this article.

The author and his colleagues have established a start-up company, Ground Control Robotics, Inc., partially based on this work.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Titan submersible disaster underscores dangers of deep-sea exploration – an engineer explains why most ocean science is conducted with crewless submarines
https://robohub.org/titan-submersible-disaster-underscores-dangers-of-deep-sea-exploration-an-engineer-explains-why-most-ocean-science-is-conducted-with-crewless-submarines/ (Wed, 28 Jun 2023)

Researchers are increasingly using small, autonomous underwater robots to collect data in the world’s oceans. NOAA Teacher at Sea Program, NOAA Ship PISCES, CC BY-SA

By Nina Mahmoudian (Associate Professor of Mechanical Engineering, Purdue University)

Rescuers spotted debris from the tourist submarine Titan on the ocean floor near the wreck of the Titanic on June 22, 2023, indicating that the vessel suffered a catastrophic failure and the five people aboard were killed.

Bringing people to the bottom of the deep ocean is inherently dangerous. At the same time, climate change means collecting data from the world’s oceans is more vital than ever. Purdue University mechanical engineer Nina Mahmoudian explains how researchers reduce the risks and costs associated with deep-sea exploration: Send down subs, but keep people on the surface.

Why is most underwater research conducted with remotely operated and autonomous underwater vehicles?

When we talk about water studies, we’re talking about vast areas. And covering vast areas requires tools that can work for extended periods of time, sometimes months. Having people aboard underwater vehicles, especially for such long periods of time, is expensive and dangerous.

One of the tools researchers use is remotely operated vehicles, or ROVs. Basically, there is a cable between the vehicle and operator that allows the operator to command and move the vehicle, and the vehicle can relay data in real time. ROV technology has progressed a lot to be able to reach the deep ocean – up to a depth of 6,000 meters (19,685 feet). It’s also better able to provide the mobility necessary for observing the seabed and gathering data.

Autonomous underwater vehicles provide another opportunity for underwater exploration. They are usually not tethered to a ship. They are typically programmed ahead of time to do a specific mission, and while they are underwater they usually don’t have constant communication. At some interval, they surface, relay all the data they have gathered, recharge or swap their battery, and receive new instructions before submerging again and continuing their mission.

What can remotely operated and autonomous underwater vehicles do that crewed submersibles can’t, and vice versa?

Crewed submersibles are exciting for the public and those involved, and helpful for the increased capabilities humans bring in operating instruments and making decisions, much as in crewed space exploration. However, they are much more expensive than uncrewed exploration because of the required size of the platforms and the need for life-support and safety systems. Crewed submersibles today cost tens of thousands of dollars a day to operate.

Use of unmanned systems will provide better opportunities for exploration at less cost and risk in operating over vast areas and in inhospitable locations. Using remotely operated and autonomous underwater vehicles gives operators the opportunity to perform tasks that are dangerous for humans, like observing under ice and detecting underwater mines.

Remotely operated vehicles can operate under Antarctic ice and other dangerous places.

How has the technology for deep ocean research evolved?

The technology has advanced dramatically in recent years due to progress in sensors and computation. There has been great progress in miniaturization of acoustic sensors and sonars for use underwater. Computers have also become more miniaturized, capable and power efficient. There has been a lot of work on battery technology and connectors that are watertight. Additive manufacturing and 3D printing also help build hulls and components that can withstand the high pressures at depth at much lower costs.

There has also been great progress toward increasing autonomy using more advanced algorithms, in addition to traditional methods for navigation, localization and detection. For example, machine learning algorithms can help a vehicle detect and classify objects, whether stationary like a pipeline or mobile like schools of fish.

What kinds of discoveries have been made using remotely operated and autonomous underwater vehicles?

One example is underwater gliders. These are buoyancy-driven autonomous underwater vehicles. They can stay in water for months. They can collect data on pressure, temperature and salinity as they go up and down in water. All of these are very helpful for researchers to have an understanding of changes that are happening in oceans.

One of these platforms traveled across the North Atlantic Ocean from the coast of Massachusetts to Ireland for nearly a year in 2016 and 2017. The amount of data that was captured in that amount of time was unprecedented. To put it in perspective, a vehicle like that costs about $200,000. The operators were remote. Every eight hours the glider came to the surface, got connected to GPS and said, “Hey, I am here,” and the crew basically gave it the plan for the next leg of the mission. If a crewed ship was sent to gather that amount of data for that long it would cost in the millions.

In 2019, researchers used an autonomous underwater vehicle to collect invaluable data about the seabed beneath the Thwaites glacier in Antarctica.

Energy companies are also using remotely operated and autonomous underwater vehicles for inspecting and monitoring offshore renewable energy and oil and gas infrastructure on the seabed.

Where is the technology headed?

Underwater systems are slow-moving platforms, and if researchers can deploy them in large numbers that would give them an advantage for covering large areas of ocean. A great deal of effort is being put into coordination and fleet-oriented autonomy of these platforms, as well as into advancing data gathering using onboard sensors such as cameras, sonars and dissolved oxygen sensors. Another aspect of advancing vehicle autonomy is real-time underwater decision-making and data analysis.

What is the focus of your research on these submersibles?

My team and I focus on developing navigational and mission-planning algorithms for persistent operations, meaning long-term missions with minimal human oversight. The goal is to respond to two of the main constraints in the deployment of autonomous systems. One is battery life. The other is unknown situations.

The author’s research includes a project to allow autonomous underwater vehicles to recharge their batteries without human intervention.

For battery life, we work on at-sea recharging, both underwater and at the surface. We are developing tools for autonomous deployment, recovery, recharging and data transfer for longer missions at sea. For unknown situations, we are working on recognizing and avoiding obstacles and adapting to different ocean currents – basically allowing a vehicle to navigate rough conditions on its own.

To adapt to changing dynamics and component failures, we are working on methodologies to help the vehicle detect the change and compensate to be able to continue and finish the mission.

These efforts will enable long-term ocean studies including observing environmental conditions and mapping uncharted areas.

The Conversation

Nina Mahmoudian receives funding from the National Science Foundation and the Office of Naval Research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We need to discuss what jobs robots should do, before the decision is made for us
https://robohub.org/we-need-to-discuss-what-jobs-robots-should-do-before-the-decision-is-made-for-us/ (Sat, 29 Apr 2023)

Shutterstock / Frame Stock Footage

By Thusha Rajendran (Professor of Psychology, The National Robotarium, Heriot-Watt University)

The social separation imposed by the pandemic led us to rely on technology to an extent we might never have imagined – from Teams and Zoom to online banking and vaccine status apps.

Now, society faces an increasing number of decisions about our relationship with technology. For example, do we want our workforce needs fulfilled by automation, migrant workers, or an increased birth rate?

In the coming years, we will also need to balance technological innovation with people’s wellbeing – both in terms of the work they do and the social support they receive.

And there is the question of trust. When humans should trust robots, and vice versa, is a question our Trust Node team is researching as part of the UKRI Trustworthy Autonomous Systems hub. We want to better understand human-robot interactions – based on an individual’s propensity to trust others, the type of robot, and the nature of the task. This, and projects like it, could ultimately help inform robot design.

This is an important time to discuss what roles we want robots and AI to take in our collective future – before decisions are taken that may prove hard to reverse. One way to frame this dialogue is to think about the various roles robots can fulfill.

Robots as our servants

The word “robot” was first used by the Czech writer Karel Čapek in his 1920 sci-fi play Rossum’s Universal Robots. It comes from the word “robota”, meaning drudgery or donkey work. This etymology suggests robots exist to do work that humans would rather not. And there should be no obvious controversy, for example, in tasking robots to maintain nuclear power plants or repair offshore wind farms.

The more human a robot looks, the more we trust it. Antonello Marangi/Shutterstock

However, some service tasks assigned to robots are more controversial, because they could be seen as taking jobs from humans.

For example, studies show that people who have lost movement in their upper limbs could benefit from robot-assisted dressing. But this could be seen as automating tasks that nurses currently perform. Equally, it could free up time for nurses and care workers – sectors that are currently very short-staffed – to focus on other tasks that require more sophisticated human input.

Authority figures

The dystopian 1987 film Robocop imagined the future of law enforcement as autonomous, privatised, and delegated to cyborgs or robots.

Today, some elements of this vision are not so far away: the San Francisco Police Department has considered deploying robots – albeit under direct human control – to kill dangerous suspects.

This US military robot is fitted with a machine gun to turn it into a remote weapons platform. US Army

But having robots as authority figures needs careful consideration, as research has shown that humans can place excessive trust in them.

In one experiment, a “fire robot” was assigned to evacuate people from a building during a simulated blaze. All 26 participants dutifully followed the robot, even though half had previously seen the robot perform poorly in a navigation task.

Robots as our companions

It might be difficult to imagine that a human-robot attachment would have the same quality as that between humans or with a pet. However, increasing levels of loneliness in society might mean that for some people, having a non-human companion is better than nothing.

The Paro Robot is one of the most commercially successful companion robots to date – and is designed to look like a baby harp seal. Yet research suggests that the more human a robot looks, the more we trust it.

The Paro companion robot is designed to look like a baby seal. Angela Ostafichuk / Shutterstock

A study has also shown that different areas of the brain are activated when humans interact with either another human or a robot. This suggests our brains may recognise interactions with a robot differently from human ones.

Creating useful robot companions involves a complex interplay of computer science, engineering and psychology. A robot pet might be ideal for someone who is not physically able to take a dog for its exercise. It might also be able to detect falls and remind someone to take their medication.

How we tackle social isolation, however, raises questions for us as a society. Some might regard efforts to “solve” loneliness with technology as the wrong solution for this pervasive problem.

What can robotics and AI teach us?

Music is a source of interesting observations about the differences between human and robotic talents. Committing errors in the way humans constantly do, but robots might not, appears to be a vital component of creativity.

A study by Adrian Hazzard and colleagues pitted professional pianists against an autonomous disklavier (an automated piano with keys that move as if played by an invisible pianist). The researchers discovered that, eventually, the pianists made mistakes. But they did so in ways that were interesting to humans listening to the performance.

This concept of “aesthetic failure” can also be applied to how we live our lives. It offers a powerful counter-narrative to the idealistic and perfectionist messages we constantly receive through television and social media – on everything from physical appearance to career and relationships.

As a species, we are approaching many crossroads, including how to respond to climate change, gene editing, and the role of robotics and AI. However, these dilemmas are also opportunities. AI and robotics can mirror our less-appealing characteristics, such as gender and racial biases. But they can also free us from drudgery and highlight unique and appealing qualities, such as our creativity.

We are in the driving seat when it comes to our relationship with robots – nothing is set in stone, yet. But to make educated, informed choices, we need to learn to ask the right questions, starting with: what do we actually want robots to do for us?

The Conversation

Thusha Rajendran receives funding from the UKRI and EU. He would like to acknowledge evolutionary anthropologist Anna Machin’s contribution to this article through her book Why We Love, personal communications and draft review.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Robots are everywhere – improving how they communicate with people could advance human-robot collaboration
https://robohub.org/robots-are-everywhere-improving-how-they-communicate-with-people-could-advance-human-robot-collaboration/ (Mon, 17 Apr 2023)

‘Emotionally intelligent’ robots could improve their interactions with people. Andriy Onufriyenko/Moment via Getty Images

By Ramana Vinjamuri (Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County)

Robots are machines that can sense the environment and use that information to perform an action. You can find them nearly everywhere in industrialized societies today. There are household robots that vacuum floors and warehouse robots that pack and ship goods. Lab robots test hundreds of clinical samples a day. Education robots support teachers by acting as one-on-one tutors, assistants and discussion facilitators. And medical robots such as prosthetic limbs can enable someone to grasp and pick up objects with their thoughts.

Figuring out how humans and robots can collaborate to effectively carry out tasks together is a rapidly growing area of interest to the scientists and engineers who design robots, as well as to the people who will use them. For successful collaboration between humans and robots, communication is key.

Robotics can help patients recover physical function in rehabilitation. BSIP/Universal Images Group via Getty Images

How people communicate with robots

Robots were originally designed to undertake repetitive and mundane tasks and operate exclusively in robot-only zones like factories. Robots have since advanced to work collaboratively with people, with new ways for humans and robots to communicate with each other.

Cooperative control is one way to transmit information and messages between a robot and a person. It involves combining human abilities and decision making with robot speed, accuracy and strength to accomplish a task.

For example, robots in the agriculture industry can help farmers monitor and harvest crops. A human can control a semi-autonomous vineyard sprayer through a user interface, as opposed to manually spraying their crops or broadly spraying the entire field and risking pesticide overuse.

Robots can also support patients in physical therapy. Patients who had a stroke or spinal cord injury can use robots to practice hand grasping and assisted walking during rehabilitation.

Another form of communication, emotional intelligence perception, involves developing robots that adapt their behaviors based on social interactions with humans. In this approach, the robot detects a person’s emotions when collaborating on a task, assesses their satisfaction, then modifies and improves its execution based on this feedback.

For example, if the robot detects that a physical therapy patient is dissatisfied with a specific rehabilitation activity, it could direct the patient to an alternate activity. Facial expression and body gesture recognition ability are important design considerations for this approach. Recent advances in machine learning can help robots decipher emotional body language and better interact with and perceive humans.

Robots in rehab

Questions like how to make robotic limbs feel more natural and capable of more complex functions like typing and playing musical instruments have yet to be answered.

I am an electrical engineer who studies how the brain controls and communicates with other parts of the body, and my lab investigates in particular how the brain and hand coordinate signals between each other. Our goal is to design technologies like prosthetic and wearable robotic exoskeleton devices that could help improve function for individuals with stroke, spinal cord and traumatic brain injuries.

One approach is through brain-computer interfaces, which use brain signals to communicate between robots and humans. By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in stroke rehabilitation. Brain-computer interfaces may also help restore some communication abilities and physical manipulation of the environment for patients with motor neuron disorders.

Brain-computer interfaces could allow people to control robotic arms by thought alone. Ramana Kumar Vinjamuri, CC BY-ND

The future of human-robot interaction

Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments.

As robots are increasingly working hand in hand with people, the ethical questions and challenges they pose cannot be ignored. Concerns surrounding privacy, bias and discrimination, security risks and robot morality need to be seriously investigated in order to create a more comfortable, safer and trustworthy world with robots for everyone. Scientists and engineers studying the “dark side” of human-robot interaction are developing guidelines to identify and prevent negative outcomes.

Human-robot interaction has the potential to affect every aspect of daily life. It is the collective responsibility of both the designers and the users to create a human-robot ecosystem that is safe and satisfactory for all.

The Conversation

Ramana Vinjamuri receives funding from the National Science Foundation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Our future could be full of undying, self-repairing robots – here’s how
https://robohub.org/our-future-could-be-full-of-undying-self-repairing-robots-heres-how/ (Wed, 01 Feb 2023)

Robotic head, 3D illustration (frank60/Shutterstock)

By Jonathan Roberts (Professor in Robotics, Queensland University of Technology)

With generative artificial intelligence (AI) systems such as ChatGPT and StableDiffusion being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.

Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.

Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting T-1000 robot from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?

Energy for ‘life’

Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.

By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates energy supply in the world of robotics. Most robots are powered by the chemistry of batteries.

An alternative battery type has been proposed that uses nuclear waste and ultra-thin diamonds at its core. The inventors, a San Francisco startup called Nano Diamond Battery, claim a possible battery life of tens of thousands of years. Very small robots would be an ideal user of such batteries.

But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from the Berkeley Lab and UMass Amherst in the US demonstrated tiny nanobots could get their energy from chemicals in the liquid they swim in.

The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.

Repairing and copying oneself

Of course, an undying robot might still need occasional repairs.

Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated a research robot called PR2 tightening its own screw using a screwdriver. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.

Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, and then walked off.

Another unusual concept for repair is to use other things a robot might find in the environment to replace its broken part.

Last year, scientists reported how dead spiders can be used as robot grippers. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots to become part of the robot.

The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper. Preston Innovation Laboratory/Rice University

A robot colony?

From all these recent developments, it’s quite clear that in principle, a single robot may be able to live forever. But there is a very long way to go.

Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.

The ultimate solution may be one of large colonies or swarms of tiny robots who share a common brain, or mind. After all, this is exactly how many species of insects have evolved.

The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is not contained within any of the ants.

This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.

Ant colonies can contain ‘memories’ that are distributed between many individual insects. frank60/Shutterstock

Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.

Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we’ll also have the chance to build in some safeguards.

The Conversation


Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge – an international drone competition.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five ways drones will change the way buildings are designed
https://robohub.org/five-ways-drones-will-change-the-way-buildings-are-designed/ (Mon, 02 Jan 2023)

elwynn/Shutterstock

By Paul Cureton (Senior Lecturer in Design (People, Places, Products), Lancaster University) and Ole B. Jensen (Professor of Urban Theory and Urban Design, Aalborg University)

Drones are already shaping the face of our cities – used for building planning, heritage, construction and safety enhancement. But, as studies by the UK’s Department of Transport have found, swathes of the public have a limited understanding of how drones might be practically applied.

It’s crucial that the ways drones are affecting our future are understood by the majority of people. As experts in design futures and mobility, we hope this short overview of five ways drones will affect building design offers some knowledge of how things are likely to change.

Infographic showcasing other ways drones will influence future building design. Nuri Kwon, Drone Near-Futures, Imagination Lancaster, Author provided

1. Creating digital models of buildings

Drones can take photographs of buildings, which are then used to build 3D models in computer-aided design software.

These models have accuracy to within a centimetre, and can be combined with other data, such as 3D scans of interiors using drones or laser scanners, in order to provide a completely accurate picture of the structure for surveyors, architects and clients.

Using these digital models saves time and money in the construction process by providing a single source that architects and planners can view.
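As a concrete illustration of that photo-to-model step (the article does not say which software surveyors actually use), one widely used open-source route is COLMAP’s automatic reconstruction pipeline, which turns a folder of overlapping drone photographs into a 3D model that can then be imported into CAD tools. The folder names below are placeholders.

```python
# A minimal sketch, assuming COLMAP is installed and on the PATH.
# It runs the automatic reconstructor over a folder of overlapping drone
# photos and writes a sparse and dense 3D model into the workspace folder.
import subprocess

subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", "survey_project",  # placeholder output folder
        "--image_path", "drone_photos",        # placeholder folder of drone photos
    ],
    check=True,
)
```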

2. Heritage simulations

Studio Drift are a multidisciplinary team of Dutch artists who have used drones to construct images through theatrical outdoor drone performances at damaged national heritage sites such as the Notre Dame in Paris, Colosseum in Rome and Gaudí’s Sagrada Familia in Barcelona.

Drones could be used in the near-future in a similar way to help planners to visualise the final impact of restoration or construction work on a damaged or partially finished building.

3. Drone delivery

The arrival of drone delivery services will see significant changes to buildings in our communities, which will need to provide for docking stations at community hubs, shops and pick-up points.

Wingcopter are one of many companies trialling delivery drones. Akash 1997, CC BY-SA

There are likely to be landing pads installed on the roofs of residential homes and dedicated drone-delivery hubs. Research has shown that drones can help with the last mile of any delivery in the UK, Germany, France and Italy.

Architects of the future will need to add these facilities into their building designs.

4. Drones mounted with 3D printers

Research projects from the architecture, design, planning and consulting firm Gensler, and from a consortium led by Imperial College London and Empa (comprising University College London, the University of Bath, the University of Pennsylvania, Queen Mary University of London and the Technical University of Munich), have been experimenting with drones carrying mounted 3D printers. These drones would work at speed to construct emergency shelters or repair buildings at significant heights, without the need for scaffolding, or in difficult-to-reach locations, providing safety benefits.

Gensler have already used drones for wind turbine repair and researchers at Imperial College are exploring bee-like drone swarms that work together to construct blueprints. The drones coordinate with each other to follow a pre-defined path in a project called Aerial Additive Manufacturing. For now, the work is merely a demonstration of the technology, and not working on a specific building.

In the future, drones with mounted 3D printers could help create highly customised buildings at speed, but how this could change the workforce and the potential consequences for manual labour jobs is yet to be understood.

5. Agile surveillance

Drones offer new possibilities for surveillance away from the static, fixed nature of current systems such as closed circuit television.

Drones with cameras and sensors relying on complex software systems such as biometric indicators and “face recognition” will probably be the next level of surveillance applied by governments and police forces, as well as providing security monitoring for homeowners. Drones would likely be fitted with monitoring devices, which could communicate with security or police forces.

Drones used in this way could help our buildings become more responsive to intrusions, and adaptable to changing climates. Drones may move parts of the building such as shade-creating devices, following the path of the sun to stop buildings overheating, for example.

The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

How shoring up drones with artificial intelligence helps surf lifesavers spot sharks at the beach
https://robohub.org/how-shoring-up-drones-with-artificial-intelligence-helps-surf-lifesavers-spot-sharks-at-the-beach/ (Sat, 05 Nov 2022)

A close encounter between a white shark and a surfer. Author provided.

By Cormac Purcell (Adjunct Senior Lecturer, UNSW Sydney) and Paul Butcher (Adjunct Professor, Southern Cross University)

Australian surf lifesavers are increasingly using drones to spot sharks at the beach before they get too close to swimmers. But just how reliable are they?

Discerning whether that dark splodge in the water is a shark or just, say, seaweed isn’t always straightforward and, in reasonable conditions, drone pilots generally make the right call only 60% of the time. While this has implications for public safety, it can also lead to unnecessary beach closures and public alarm.

Engineers are trying to boost the accuracy of these shark-spotting drones with artificial intelligence (AI). While they show great promise in the lab, AI systems are notoriously difficult to get right in the real world, so remain out of reach for surf lifesavers. And importantly, overconfidence in such software can have serious consequences.

With these challenges in mind, our team set out to build the most robust shark detector possible and test it in real-world conditions. By using masses of data, we created a highly reliable mobile app for surf lifesavers that could not only improve beach safety, but help monitor the health of Australian coastlines.

A white shark being tracked by a drone. Author provided.

Detecting dangerous sharks with drones

The New South Wales government is investing more than A$85 million in shark mitigation measures over the next four years. Of all approaches on offer, a 2020 survey showed drone-based shark surveillance is the public’s preferred method to protect beach-goers.

The state government has been trialling drones as shark-spotting tools since 2016, and with Surf Life Saving NSW since 2018. Trained surf lifesaving pilots fly the drone over the ocean at a height of 60 metres, watching the live video feed on portable screens for the shape of sharks swimming under the surface.

Identifying sharks by carefully analysing the video footage in good conditions seems easy. But water clarity, sea glitter (sea-surface reflection), animal depth, pilot experience and fatigue all reduce the reliability of real-time detection to a predicted average of 60%. This reliability falls further when conditions are turbid.

Pilots also need to confidently identify the species of shark and tell the difference between dangerous and non-dangerous animals, such as rays, which are often misidentified.

Identifying shark species from the air.

AI-driven computer vision has been touted as an ideal tool to virtually “tag” sharks and other animals in the video footage streamed from the drones, and to help identify whether a species nearing the beach is cause for concern.

AI to the rescue?

Early results from previous AI-enhanced shark-spotting systems have suggested the problem has been solved, as these systems report detection accuracies of over 90%.

But scaling these systems to make a real-world difference across NSW beaches has been challenging.

AI systems are trained to locate and identify species using large collections of example images and perform remarkably well when processing familiar scenes in the real world.

However, problems quickly arise when they encounter conditions not well represented in the training data. As any regular ocean swimmer can tell you, every beach is different – the lighting, weather and water conditions can change dramatically across days and seasons.

Animals can also frequently change their position in the water column, which means their visible characteristics (such as their outline) changes, too.

All this variation makes it crucial for training data to cover the full gamut of conditions, or that AI systems be flexible enough to track the changes over time. Such challenges have been recognised for years, giving rise to the new discipline of “machine learning operations”.

Essentially, machine learning operations explicitly recognises that AI-driven software requires regular updates to maintain its effectiveness.

Examples of the drone footage used in our huge dataset.

Building a better shark spotter

We aimed to overcome these challenges with a new shark detector mobile app. We gathered a huge dataset of drone footage, and shark experts then spent weeks inspecting the videos, carefully tracking and labelling sharks and other marine fauna in the hours of footage.

Using this new dataset, we trained a machine learning model to recognise ten types of marine life, including different species of dangerous sharks such as great white and whaler sharks.

And then we embedded this model into a new mobile app that can highlight sharks in live drone footage and predict the species. We worked closely with the NSW government and Surf Lifesaving NSW to trial this app on five beaches during summer 2020.
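The article does not describe the model architecture or the app’s internals, so the following is only a rough sketch of what frame-by-frame detection on a drone feed can look like. It assumes a generic YOLO-style detector loaded through the ultralytics package, a hypothetical weights file shark_detector.pt and made-up class names.

```python
# A hedged sketch, not the authors' actual system: run a trained detector
# over a drone video frame by frame and flag dangerous species.
import cv2
from ultralytics import YOLO

DANGEROUS = {"white_shark", "whaler_shark", "bull_shark"}  # assumed class names

model = YOLO("shark_detector.pt")           # hypothetical trained weights
video = cv2.VideoCapture("drone_feed.mp4")  # or a live stream URL from the drone

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Each result carries the boxes, class ids and confidence scores for one frame.
    for result in model(frame, verbose=False):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            if label in DANGEROUS and float(box.conf) > 0.5:
                print(f"Dangerous shark detected: {label} ({float(box.conf):.2f})")

video.release()
```

In a real deployment the alert logic would typically be smoothed over several consecutive frames rather than triggered by a single detection.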

A drone in Surf Life Saving NSW livery preparing to go on patrol. Author provided.

Our AI shark detector did quite well. It identified dangerous sharks on a frame-by-frame basis 80% of the time, in realistic conditions.

We deliberately went out of our way to make our tests difficult by challenging the AI to run on unseen data taken at different times of year, or from different-looking beaches. These critical tests on “external data” are often omitted in AI research.

A more detailed analysis turned up common-sense limitations: white, whaler and bull sharks are difficult to tell apart because they look similar, while small animals (such as turtles and rays) are harder to detect in general.

Spurious detections (like mistaking seaweed as a shark) are a real concern for beach managers, but we found the AI could easily be “tuned” to eliminate these by showing it empty ocean scenes of each beach.

An example of where the AI gets it wrong – seaweed identified as sharks. Author provided.
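One common way to apply this kind of tuning (an assumption on our part, not a description of the authors’ pipeline) is to add label-free “background” frames of empty ocean to the training set and fine-tune, since YOLO-style trainers treat unlabelled images as pure negatives. The paths, dataset file and weights file below are hypothetical.

```python
# A hedged sketch: suppress false positives by adding empty-ocean frames as
# negative examples, then fine-tuning the detector on the enlarged dataset.
import shutil
from pathlib import Path
from ultralytics import YOLO

backgrounds = Path("empty_ocean_frames")     # frames containing no animals at all
train_images = Path("dataset/images/train")  # existing labelled training images

# Copy the background frames in without label files: images with no labels
# are treated as pure background during training.
for frame in backgrounds.glob("*.jpg"):
    shutil.copy(frame, train_images / frame.name)

model = YOLO("shark_detector.pt")            # hypothetical current weights
model.train(data="dataset.yaml", epochs=20)  # fine-tune with the added negatives
```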

The future of AI for shark spotting

In the short term, AI is now mature enough to be deployed in drone-based shark-spotting operations across Australian beaches. But, unlike regular software, it will need to be monitored and updated frequently to maintain its high reliability of detecting dangerous sharks.

An added bonus is that such a machine learning system for spotting sharks would also continually collect valuable ecological data on the health of our coastline and marine fauna.

In the longer term, getting the AI to look at how sharks swim and using new AI technology that learns on-the-fly will make AI shark detection even more reliable and easy to deploy.

The NSW government has new drone trials for the coming summer, testing the usefulness of efficient long-range flights that can cover more beaches.

AI can play a key role in making these flights more effective, enabling greater reliability in drone surveillance, and may eventually lead to fully-automated shark-spotting operations and trusted automatic alerts.

The authors acknowledge the substantial contributions from Dr Andrew Colefax and Dr Andrew Walsh at Sci-eye.

The Conversation

This article appeared in The Conversation.

A new type of material called a mechanical neural network can learn and change its physical properties to create adaptable, strong structures
https://robohub.org/a-new-type-of-material-called-a-mechanical-neural-network-can-learn-and-change-its-physical-properties-to-create-adaptable-strong-structures/ (Thu, 20 Oct 2022)

This connection of springs is a new type of material that can change shape and learn new properties. Jonathan Hopkins, CC BY-ND

By Ryan H. Lee (PhD Student in Mechanical and Aerospace Engineering, University of California, Los Angeles)

A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me.

Architected materials – like this 3D lattice – get their properties not from what they are made out of, but from their structure. Ryan Lee, CC BY-ND

The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.

My colleagues and I based our new material’s architecture on that of an artificial neural network – layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. We hypothesized that a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.

To find out if a mechanical lattice would be able to adopt and maintain new properties – like taking on a new shape or changing directional strength – we started off by building a computer model. We then selected a desired shape for the material as well as input forces and had a computer algorithm tune the tensions of the connections so that the input forces would produce the desired shape. We did this training on 200 different lattice structures and found that a triangular lattice was best at achieving all of the shapes we tested.
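As a much-simplified illustration of that training loop, the sketch below tunes a one-dimensional chain of three springs rather than the paper’s two-dimensional triangular lattice; the applied force and target displacements are made up for the example. An off-the-shelf optimiser adjusts each connection’s stiffness until the input force produces the desired shape.

```python
# An illustrative sketch only, not the paper's model: tune the stiffnesses of a
# 1D chain of springs so a fixed end force produces a chosen set of displacements.
import numpy as np
from scipy.optimize import minimize

F = 1.0                             # assumed input force (arbitrary units)
target = np.array([0.5, 0.8, 1.0])  # assumed desired node displacements

def displacements(stiffness):
    # In a series chain, node i moves by F times the summed compliance up to spring i.
    return F * np.cumsum(1.0 / stiffness)

def shape_error(log_k):
    k = np.exp(log_k)               # optimise in log-space to keep stiffness positive
    return np.sum((displacements(k) - target) ** 2)

result = minimize(shape_error, x0=np.zeros(3), method="Nelder-Mead")
tuned_k = np.exp(result.x)
print("tuned stiffnesses:", tuned_k.round(3))
print("achieved shape:   ", displacements(tuned_k).round(3))
```

In the team’s physical prototype, the adjustable electromechanical springs play the role of these tuned stiffness values.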

Once the many connections are tuned to achieve a set of tasks, the material will continue to react in the desired way. The training is – in a sense – remembered in the structure of the material itself.

We then built a physical prototype lattice with adjustable electromechanical springs arranged in a triangular lattice. The prototype is made of 6-inch connections and is about 2 feet long by 1½ feet wide. And it worked. When the lattice and algorithm worked together, the material was able to learn and change shape in particular ways when subjected to different forces. We call this new material a mechanical neural network.

The prototype is 2D, but a 3D version of this material could have many uses. Jonathan Hopkins, CC BY-ND

Why it matters

Besides some living tissues, very few materials can learn to be better at dealing with unanticipated loads. Imagine a plane wing that suddenly catches a gust of wind and is forced in an unanticipated direction. The wing can’t change its design to be stronger in that direction.

The prototype lattice material we designed can adapt to changing or unknown conditions. In a wing, for example, these changes could be the accumulation of internal damage, changes in how the wing is attached to a craft or fluctuating external loads. Every time a wing made out of a mechanical neural network experienced one of these scenarios, it could strengthen and soften its connections to maintain desired attributes like directional strength. Over time, through successive adjustments made by the algorithm, the wing adopts and maintains new properties, adding each behavior to the rest as a sort of muscle memory.

This type of material could have far reaching applications for the longevity and efficiency of built structures. Not only could a wing made of a mechanical neural network material be stronger, it could also be trained to morph into shapes that maximize fuel efficiency in response to changing conditions around it.

What’s still not known

So far, our team has worked only with 2D lattices. But using computer modeling, we predict that 3D lattices would have a much larger capacity for learning and adaptation. This increase is due to the fact that a 3D structure could have tens of times more connections, or springs, that don’t intersect with one another. However, the mechanisms we used in our first model are far too complex to support in a large 3D structure.

What’s next

The material my colleagues and I created is a proof of concept and shows the potential of mechanical neural networks. But to bring this idea into the real world will require figuring out how to make the individual pieces smaller and with precise properties of flex and tension.

We hope new research in the manufacturing of materials at the micron scale, as well as work on new materials with adjustable stiffness, will lead to advances that make powerful smart mechanical neural networks with micron-scale elements and dense 3D connections a ubiquitous reality in the near future.

The Conversation

Ryan Lee has received funding from the Air Force Office of Scientific Research.

This article appeared in The Conversation.

‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie https://robohub.org/killer-robots-will-be-nothing-like-the-movies-show-heres-where-the-real-threats-lie/ Wed, 19 Oct 2022 12:13:23 +0000 http://robohub.org/?guid=a4e14eead0959783928f634693fa3916

Ghost Robotics Vision 60 Q-UGV. US Space Force photo by Senior Airman Samuel Becker

By Toby Walsh (Professor of AI at UNSW, Research Group Leader, UNSW Sydney)

You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies would watch the latest Bond movie to see what technologies might be coming their way.

Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming sex robot courtroom drama Dolly.

I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a 2011 short story by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.

The real killer robots

Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic Metropolis.

But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.

Indeed, contrary to recent fears, robots may never be sentient.

It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and Nagorno-Karabakh.

A war transformed

Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots.

On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store.

And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms.

This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a major drone power.

And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies.

Robot manufacturers are, however, starting to push back against this future.

A pledge not to weaponise

Last week, six leading robotics companies pledged they would never weaponise their robot platforms. The companies include Boston Dynamics, which makes the Atlas humanoid robot (capable of an impressive backflip) and the Spot robot dog, which looks like it’s straight out of the Black Mirror TV series.

This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised an open letter signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a global disarmament award.

However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.

We have, for example, already seen third parties mount guns on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was assassinated by Israeli agents using a robot machine gun in 2020.

Collective action to safeguard our future

The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.

Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation.

More important than any pledge from robotics companies, therefore, is the fact that the UN Human Rights Council has recently and unanimously decided to explore the human rights implications of new and emerging technologies like autonomous weapons.

Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation.

Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.

The Conversation

Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

Tesla’s Optimus robot isn’t very impressive – but it may be a sign of better things to come https://robohub.org/teslas-optimus-robot-isnt-very-impressive-but-it-may-be-a-sign-of-better-things-to-come/ Tue, 04 Oct 2022 07:18:32 +0000 http://robohub.org/?guid=16f868e8c9cf035eaab55a3e83a1f198

By Wafa Johal (Senior Lecturer, Computing & Information Systems, The University of Melbourne)

In August 2021, Tesla CEO Elon Musk announced the electric car manufacturer was planning to get into the robot business. In a presentation accompanied by a human dressed as a robot, Musk said work was beginning on a “friendly” humanoid robot to “navigate through a world built for humans and eliminate dangerous, repetitive and boring tasks”.

Musk has now unveiled a prototype of the robot, called Optimus, which he hopes to mass-produce and sell for less than US$20,000 (A$31,000).

At the unveiling, the robot walked on a flat surface and waved to the crowd, and was shown doing simple manual tasks such as carrying and lifting in a video. As a robotics researcher, I didn’t find the demonstration very impressive – but I am hopeful it will lead to bigger and better things.

Why would we want humanoid robots?

Most of the robots used today don’t look anything like people. Instead, they are machines designed to carry out a specific purpose, like the industrial robots used in factories or the robot vacuum cleaner you might have in your house.

So why would you want one shaped like a human? The basic answer is they would be able to operate in environments designed for humans.

Unlike industrial robots, humanoid robots might be able to move around and interact with humans. Unlike robot vacuum cleaners, they might be able to go up stairs or traverse uneven terrain.

And as well as practical considerations, the idea of “artificial humans” has long had an appeal for inventors and science-fiction writers!

Room for improvement

Based on what we saw in the Tesla presentation, Optimus is a long way from being able to operate with humans or in human environments. The capabilities of the robot showcased fall far short of the state of the art in humanoid robotics.

The Atlas robot made by Boston Dynamics, for example, can walk outdoors and carry out flips and other acrobatic manoeuvres.

And while Atlas is an experimental system, even the commercially available Digit from Agility Robotics is much more capable than what we have seen from Optimus. Digit can walk on various terrains, avoid obstacles, rebalance itself when bumped, and pick up and put down objects.

Bipedal walking (on two feet) alone is no longer a great achievement for a robot. Indeed, with a bit of knowledge and determination you can build such a robot yourself using open source software.

There was also no sign in the Optimus presentation of how it will interact with humans. This will be essential for any robot that works in human environments: not only for collaborating with humans, but also for basic safety.

It can be very tricky for a robot to accomplish seemingly simple tasks such as handing an object to a human, but this is something we would want a domestic humanoid robot to be able to do.

Sceptical consumers

Others have tried to build and sell humanoid robots in the past, such as Honda’s ASIMO and SoftBank’s Pepper. But so far they have never really taken off.

Amazon’s recently released Astro robot may make inroads here, but it may also go the way of its predecessors.

Consumers seem to be sceptical of robots. To date, the only widely adopted household robots are the Roomba-like vacuum cleaners, which have been available since 2002.

To succeed, a humanoid robot will need to be able to do something humans can’t, to justify the price tag. At this stage the use case for Optimus is still not very clear.

Hope for the future

Despite these criticisms, I am hopeful about the Optimus project. It is still in the very early stages, and the presentation seemed to be aimed at recruiting new staff as much as anything else.

Tesla certainly has plenty of resources to throw at the problem. We know it has the capacity to mass produce the robots if development gets that far.

Musk’s knack for gaining attention may also be helpful – not only for attracting talent to the project, but also to drum up interest among consumers.

Robotics is a challenging field, and it’s difficult to move fast. I hope Optimus succeeds, both to make something cool we can use – and to push the field of robotics forward.

The Conversation

Wafa Johal receives funding from the Australian Research Council.

This article appeared in The Conversation.

Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers https://robohub.org/why-household-robot-servants-are-a-lot-harder-to-build-than-robotic-vacuums-and-automated-warehouse-workers/ Fri, 09 Sep 2022 09:18:00 +0000 http://robohub.org/?guid=0293dd438e0be0f3871b9aa14c00335d

Who wouldn’t want a robot to handle all the household drudgery? Skathi/iStock via Getty Images

By Ayonga Hereid (Assistant Professor of Mechanical and Aerospace Engineering, The Ohio State University)

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its powerful vacuum cleaners, announced that it plans to build the U.K.’s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.

As a robotics researcher, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.

Robots that can handle a variety of domestic chores are an age-old staple of science fiction.

Handling objects

One major difference between digital and robotic devices is that household robots need to manipulate objects through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes.

State-of-the-art AI and machine learning algorithms perform well in simulated environments. But contact with objects in the real world often trips them up. This happens because physical contact is often difficult to model and even harder to control. While a human can easily perform these tasks, there are significant technical hurdles for household robots to reach human-level ability to handle objects.

Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient force, or haptic, feedback humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.

It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market.

A sophisticated robotic kitchen is already on the market, but it operates in a highly structured environment, meaning all of the objects it interacts with – cookware, food containers, appliances – are where it expects them to be, and there are no pesky humans to get in the way.

They like structure

In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robot’s movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.

Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it also requires clearing or avoiding other obstacles in the workspace to be able to reach the item and perform given tasks. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.

For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range.

Keeping it simple

While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks.

The vision for household humanoid robots like the proposed Tesla Bot is of an artificial servant capable of handling any mundane task. Courtesy Tesla

Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.

There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.

Not to mention that people’s homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today’s mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings.

A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.


The Conversation

Ayonga Hereid does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research https://robohub.org/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research/ Sun, 16 Jan 2022 10:45:50 +0000 http://robohub.org/?guid=e3b07adbe24e56543370908e9c15054f

Humanitarian groups have been calling for a ban on autonomous weapons. Wolfgang Kumm/picture alliance via Getty Images

By James Dawes

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.

Lethal errors and black boxes

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.

The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.


The Conversation

This is an updated version of an article originally published on September 29, 2021.

James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Robots can be companions, caregivers, collaborators — and social influencers https://robohub.org/robots-can-be-companions-caregivers-collaborators-and-social-influencers/ Fri, 26 Nov 2021 11:02:30 +0000 http://robohub.org/?guid=a70d72942c892f1bba3396dcc9b01f98

Robots and artificial intelligence are poised to increase their influence in our everyday lives. (Shutterstock)

By Shane Saunderson

In the mid-1990s, there was research going on at Stanford University that would change the way we think about computers. The Media Equation experiments were simple: participants were asked to interact with a computer that acted socially for a few minutes after which, they were asked to give feedback about the interaction.

Participants would provide this feedback either on the same computer (No. 1) they had just been working on or on another computer (No. 2) across the room. The study found that participants responding on computer No. 2 were far more critical of computer No. 1 than those responding on the same machine they’d worked on.

People responding on the first computer seemed to not want to hurt the computer’s feelings to its face, but had no problem talking about it behind its back. This phenomenon became known as the computers as social actors (CASA) paradigm because it showed that people are hardwired to respond socially to technology that presents itself as even vaguely social.

The CASA phenomenon continues to be explored, particularly as our technologies have become more social. As a researcher, lecturer and all-around lover of robotics, I observe this phenomenon in my work every time someone thanks a robot, assigns it a gender or tries to justify its behaviour using human, or anthropomorphic, rationales.

What I’ve witnessed during my research is that while few are under any delusions that robots are people, we tend to defer to them just like we would another person.

Social tendencies

While this may sound like the beginnings of a Black Mirror episode, this tendency is precisely what allows us to enjoy social interactions with robots and place them in caregiver, collaborator or companion roles.

The positive aspects of treating a robot like a person are precisely why roboticists design them as such — we like interacting with people. As these technologies become more human-like, they become more capable of influencing us. However, if we continue to follow the current path of robot and AI deployment, these technologies could emerge as far more dystopian than utopian.

The Sophia robot, manufactured by Hanson Robotics, has been on 60 Minutes, received honorary citizenship from Saudi Arabia, holds a title from the United Nations and has gone on a date with actor Will Smith. While Sophia undoubtedly highlights many technological advancements, few surpass Hanson’s achievements in marketing. If Sophia truly were a person, we would acknowledge its role as an influencer.

However, worse than robots or AI being sociopathic agents — goal-oriented without morality or human judgment — these technologies become tools of mass influence for whichever organization or individual controls them.

If you thought the Cambridge Analytica scandal was bad, imagine what Facebook’s algorithms of influence could do if they had an accompanying, human-like face. Or a thousand faces. Or a million. The true value of a persuasive technology is not in its cold, calculated efficiency, but its scale.

Seeing through intent

Recent scandals and exposures in the tech world have left many of us feeling helpless against these corporate giants. Fortunately, many of these issues can be solved through transparency.

There are fundamental questions that are important for social technologies to answer because we would expect the same answers when interacting with another person, albeit often implicitly. Who owns or sets the mandate of this technology? What are its objectives? What approaches can it use? What data can it access?

Since robots could have the potential to soon leverage superhuman capabilities, enacting the will of an unseen owner, and without showing verbal or non-verbal cues that shed light on their intent, we must demand that these types of questions be answered explicitly.

As a roboticist, I get asked the question, “When will robots take over the world?” so often that I’ve developed a stock answer: “As soon as I tell them to.” However, my joke is underpinned by an important lesson: don’t scapegoat machines for decisions made by humans.

I consider myself a robot sympathizer because I think robots get unfairly blamed for many human decisions and errors. It is important that we periodically remind ourselves that a robot is not your friend, your enemy or anything in between. A robot is a tool, wielded by a person (however far removed), and increasingly used to influence us.

The Conversation

Shane receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC). He is affiliated with the Human Futures Institute, a Toronto-based think tank.

This article appeared in The Conversation.

To swim like a tuna, robotic fish need to change how stiff their tails are in real time https://robohub.org/to-swim-like-a-tuna-robotic-fish-need-to-change-how-stiff-their-tails-are-in-real-time/ Tue, 05 Oct 2021 08:07:30 +0000 https://robohub.org/to-swim-like-a-tuna-robotic-fish-need-to-change-how-stiff-their-tails-are-in-real-time/
Researchers have been building robotic fish for years, but the performance has never approached the efficiency of real fish. Daniel Quinn, CC BY-NC

By Daniel Quinn

Underwater vehicles haven’t changed much since the submarines of World War II. They’re rigid, fairly boxy and use propellers to move. And whether they are large manned vessels or small robots, most underwater vehicles have one cruising speed where they are most energy efficient.

Fish take a very different approach to moving through water: Their bodies and fins are very flexible, and this flexibility allows them to interact with water more efficiently than rigid machines. Researchers have been designing and building flexible fishlike robots for years, but they still trail far behind real fish in terms of efficiency.

What’s missing?

I am an engineer and study fluid dynamics. My labmates and I wondered if something in particular about the flexibility of fish tails allows fish to be so fast and efficient in the water. So, we created a model and built a robot to study the effect of stiffness on swimming efficiency. We found fish swim so efficiently over a wide range of speeds because they can change how rigid or flexible their tails are in real time.

A sketch of a human-powered helicopter with a large spiral propeller on top.
Leonardo Da Vinci designed a propeller-driven helicopter in 1481.
Leonardo Da Vinci/Wikimedia Commons

Why are people still using propellers?

Fluid dynamics applies to both liquids and gases. Humans have been using rotating rigid objects to move vehicles for hundreds of years – Leonardo Da Vinci incorporated the concept into his helicopter designs, and the first propeller-driven boats were built in the 1830s. Propellers are easy to make, and they work just fine at their designed cruise speed.

It has only been in the past couple of decades that advances in soft robotics have made actively controlled flexible components a reality. Now, marine roboticists are turning to flexible fish and their amazing swimming abilities for inspiration.

When engineers like me talk about flexibility in a swimming robot, we are usually referring to how stiff the tail of the fish is. The tail is the entire rear half of a fish’s body that moves back and forth when it swims.

Consider tuna, which can swim up to 50 mph and are extremely energy efficient over a wide range of speeds.

Tuna are some of the fastest fish in the ocean.

The tricky part about copying the biomechanics of fish is that biologists don’t know how flexible they are in the real world. If you want to know how flexible a rubber band is, you simply pull on it. If you pull on a fish’s tail, the stiffness depends on how much the fish is tensing its various muscles.

The best that researchers can do to estimate flexibility is film a swimming fish and measure how its body shape changes.

Visualization of a fish swimming with colorful representations of water flow.
Visualizing how water flows around the fish tail showed that tail stiffness had to increase as the square of swimming speed for a fish to be most efficient.
Qiang Zhong and Daniel Quinn, CC BY-ND

Searching for answers in the math

Researchers have built dozens of robots in an attempt to mimic the flexibility and swimming patterns of tuna and other fish, but none have matched the performance of the real things.

In my lab at the University of Virginia, my colleagues and I ran into the same questions as others: How flexible should our robot be? And if there’s no one best flexibility, how should our robot change its stiffness as it swims?

We looked for the answer in an old NASA paper about vibrating airplane wings. The report explains how when a plane’s wings vibrate, the vibrations change the amount of lift the wings produce. Since fish fins and airplane wings have similar shapes, the same math works well to model how much thrust fish tails produce as they flap back and forth.

Using the old wing theory, postdoctoral researcher Qiang Zhong and I created a mathematical model of a swimming fish and added a spring and pulley to the tail to represent the effects of a tensing muscle. We discovered a surprisingly simple hypothesis hiding in the equations. To maximize efficiency, muscle tension needs to increase as the square of swimming speed. So, if swimming speed doubles, stiffness needs to increase by a factor of four. To swim three times faster while maintaining high efficiency, a fish or fish-like robot needs to pull on its tendon about nine times harder.
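
As a numeric illustration of that tuning rule (not code from the study, and with made-up reference values k0 and u0), the gain schedule is simply quadratic in speed:

```python
# Minimal sketch of the stiffness-vs-speed rule described above: for efficient
# swimming, tail stiffness (tendon tension) scales with the square of speed.
# k0 and u0 are hypothetical reference values, not measurements from the study.
def tail_stiffness(speed, k0=1.0, u0=0.5):
    """Gain schedule: k = k0 * (speed / u0) ** 2."""
    return k0 * (speed / u0) ** 2

for speed in (0.5, 1.0, 1.5):   # m/s: doubling speed gives 4x stiffness, tripling gives 9x
    print(f"speed {speed:.1f} m/s -> relative stiffness {tail_stiffness(speed):.1f}")
```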

To confirm our theory, we simply added an artificial tendon to one of our tunalike robots and then programmed the robot to vary its tail stiffness based on speed. We then put our new robot into our test tank and ran it through various “missions” – like a 200-meter sprint where it had to dodge simulated obstacles. With the ability to vary its tail’s flexibility, the robot used about half as much energy on average across a wide range of speeds compared to robots with a single stiffness.

Two people standing with a fish robot over a tank of water.
Qiang Zhong (left) and Daniel Quinn designed their robot to vary its stiffness as it swam at different speeds.
Yicong Fu, CC BY-ND

Why it matters

While it is great to build one excellent robot, the thing my colleagues and I are most excited about is that our model is adaptable. We can tweak it based on body size, swimming style or even fluid type. It can be applied to animals and machines whether they are big or small, swimmers or flyers.

For example, our model suggests that dolphins have a lot to gain from the ability to vary their tails’ stiffness, whereas goldfish don’t get much benefit due to their body size, body shape and swimming style.

The model also has applications for robotic design. Higher energy efficiency when swimming or flying – which also means quieter robots – would enable radically new missions for vehicles and robots that currently have only one efficient cruising speed. In the short term, this could help biologists study river beds and coral reefs more easily, enable researchers to track wind and ocean currents at unprecedented scales or allow search and rescue teams to operate farther and longer.

In the long term, I hope our research could inspire new designs for submarines and airplanes. Humans have only been working on swimming and flying machines for a couple centuries, while animals have been perfecting their skills for millions of years. There’s no doubt there is still a lot to learn from them.

The Conversation

Daniel Quinn receives funding from The National Science Foundation and The Office of Naval Research.

This article appeared in The Conversation.

Fish fins are teaching us the secret to flexible robots and new shape-changing materials https://robohub.org/fish-fins-are-teaching-us-the-secret-to-flexible-robots-and-new-shape-changing-materials/ Fri, 20 Aug 2021 17:35:10 +0000 https://robohub.org/fish-fins-are-teaching-us-the-secret-to-flexible-robots-and-new-shape-changing-materials/

By Francois Barthelat

Flying fish use their fins both to swim and glide through the air. Smithsonian Institution/Flickr

The big idea

Segmented hinges in the long, thin bones of fish fins are critical to the incredible mechanical properties of fins, and this design could inspire improved underwater propulsion systems, new robotic materials and even new aircraft designs.

A pink and pale colored fish tail with thin lines radiating out from the base.
The thin lines in the tail of this red snapper are rays that allow the fish to control the shape and stiffness of its fins.
Francois Barthelat, CC BY-ND

Fish fins are not simple membranes that fish flap right and left for propulsion. They probably represent one of the most elegant ways to interact with water. Fins are flexible enough to morph into a wide variety of shapes, yet they are stiff enough to push water without collapsing.

The secret is in the structure: Most fish have rays – long, bony spikes that stiffen the thin membranes of collagen that make up their fins. Each of these rays is made of two stiff rows of small bone segments surrounding a softer inner layer. Biologists have long known that fish can change the shape of their fins using muscles and tendons that push or pull on the base of each ray, but very little research has been done looking specifically at the mechanical benefits of the segmented structure.

A pufferfish uses its small but efficient fins to swim against, and maneuver in, a strong current.

To study the mechanical properties of segmented rays, my colleagues and I used theoretical models and 3D-printed fins to compare segmented rays with rays made of a non-segmented flexible material.

We showed that the numerous small, bony segments act as hinge points, making it easy to flex the two bony rows in the ray side to side. This flexibility allows the muscles and tendons at the base of rays to morph a fin using minimal amounts of force. Meanwhile, the hinge design makes it hard to deform the ray along its length. This prevents fins from collapsing when they are subjected to the pressure of water during swimming. In our 3D-printed rays, the segmented designs were four times easier to morph than continuous designs while maintaining the same stiffness.

Photos of a straight ray and a bent ray showing how pulling on one half and pushing on the other half of a ray will make it bend.
The segmented nature of fish fin rays allows them to be easily morphed by pulling at the bottom of the ray.
Francois Barthelat, CC BY-ND

Why it matters

Morphing materials – materials whose shape can be changed – come in two varieties. Some are very flexible – like hydrogels – but these materials collapse easily when you subject them to external forces. Morphing materials can also be very stiff – like some aerospace composites – but it takes a lot of force to make small changes in their shape.

Image showing how 3D printed continuous and segmented fin rays bend.
It requires much more force to control the shape of a continuous 3D-printed ray (top two images) than to morph a segmented ray (bottom two images).
Francois Barthelat, CC BY-ND

The segmented structure design of fish fins overcomes this functional trade-off by being highly flexible as well as strong. Materials based on this design could be used in underwater propulsion and improve the agility and speed of fish-inspired submarines. They could also be incredibly valuable in soft robotics and allow tools to change into a wide variety of shapes while still being able to grasp objects with a lot of force. Segmented ray designs could even benefit the aerospace field. Morphing wings that could radically change their geometry, yet carry large aerodynamic forces, could revolutionize the way aircraft take off, maneuver and land.

What still isn’t known

While this research goes a long way in explaining how fish fins work, the mechanics at play when fish fins are bent far from their normal positions are still a bit of a mystery. Collagen tends to get stiffer the more deformed it gets, and my colleagues and I suspect that this stiffening response – together with how collagen fibers are oriented within fish fins – improves the mechanical performance of the fins when they are highly deformed.

What’s next

I am fascinated by the biomechanics of natural fish fins, but my ultimate goal is to develop new materials and devices that are inspired by their mechanical properties. My colleagues and I are currently developing proof-of-concept materials that we hope will convince a broader range of engineers in academia and the private sector that fish fin-inspired designs can provide improved performance for a variety of applications.

The Conversation

Francois Barthelat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

The social animals that are inspiring new behaviours for robot swarms https://robohub.org/the-social-animals-that-are-inspiring-new-behaviours-for-robot-swarms/ Sun, 12 May 2019 22:43:11 +0000 https://robohub.org/the-social-animals-that-are-inspiring-new-behaviours-for-robot-swarms/

By Edmund Hunt, University of Bristol

From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.

“Swarm robotics” took off in the early 2000s, an early example being the “s-bot” (short for swarm-bot). This is a fully autonomous robot that can perform basic tasks including navigation and the grasping of objects, and which can self-assemble into chains to cross gaps or pull heavy loads. More recently, “TERMES” robots have been developed as a concept in construction, and the “CoCoRo” project has developed an underwater robot swarm that functions like a school of fish that exchanges information to monitor the environment. So far, we’ve only just begun to explore the vast possibilities that animal collectives and their behaviour can offer as inspiration to robot swarm design.

Swarm behaviour in birds – or robots designed to mimic them?
EyeSeeMicrostock/Shutterstock

Robots that can cooperate in large numbers could achieve things that would be difficult or even impossible for a single entity. Following an earthquake, for example, a swarm of search and rescue robots could quickly explore multiple collapsed buildings looking for signs of life. Threatened by a large wildfire, a swarm of drones could help emergency services track and predict the fire’s spread. Or a swarm of floating robots (“Row-bots”) could nibble away at oceanic garbage patches, powered by plastic-eating bacteria.

A future where floating robots powered by plastic-eating bacteria could tackle ocean waste.
Shutterstock

Bio-inspiration in swarm robotics usually starts with social insects – ants, bees and termites – because colony members are highly related, which favours impressive cooperation. Three further characteristics appeal to researchers: robustness, because individuals can be lost without affecting performance; flexibility, because social insect workers are able to respond to changing work needs; and scalability, because a colony’s decentralised organisation is sustainable with 100 workers or 100,000. These characteristics could be especially useful for doing jobs such as environmental monitoring, which requires coverage of huge, varied and sometimes hazardous areas.

Social learning

Beyond social insects, other species and behavioural phenomena in the animal kingdom offer inspiration to engineers. A growing area of biological research is in animal cultures, where animals engage in social learning to pick up behaviours that they are unlikely to innovate alone. For example, whales and dolphins can have distinctive foraging methods that are passed down through the generations. This includes forms of tool use – dolphins have been observed breaking off marine sponges to protect their beaks as they go rooting around for fish, like a person might put a glove over a hand.

Bottlenose dolphin playing with a sponge. Some have learned to use them to help them catch fish.
Yann Hubert/Shutterstock

Forms of social learning and artificial robotic cultures, perhaps using forms of artificial intelligence, could be very powerful in adapting robots to their environment over time. For example, assistive robots for home care could adapt to human behavioural differences in different communities and countries over time.

Robot (or animal) cultures, however, depend on learning abilities that are costly to develop, requiring a larger brain – or, in the case of robots, a more advanced computer. But the value of the “swarm” approach is to deploy robots that are simple, cheap and disposable. Swarm robotics exploits the reality of emergence (“more is different”) to create social complexity from individual simplicity. A more fundamental form of “learning” about the environment is seen in nature – in sensitive developmental processes – which do not require a big brain.

‘Phenotypic plasticity’

Some animals can change behavioural type, or even develop different forms, shapes or internal functions, within the same species, despite having the same initial “programming”. This is known as “phenotypic plasticity” – where the genes of an organism produce different observable results depending on environmental conditions. Such flexibility can be seen in the social insects, but sometimes even more dramatically in other animals.

Most spiders are decidedly solitary, but in about 20 of 45,000 spider species, individuals live in a shared nest and capture food on a shared web. These social spiders benefit from having a mixture of “personality” types in their group, for example bold and shy.

Social spiders (Stegodyphus) spin collective webs in Addo Elephant Park, South Africa.
PicturesofThings/Shutterstock

My research identified a flexibility in behaviour where shy spiders would step into a role vacated by absent bold nestmates. This is necessary because the spider colony needs a balance of bold individuals to encourage collective predation, and shyer ones to focus on nest maintenance and parental care. Robots could be programmed with adjustable risk-taking behaviour, sensitive to group composition, with bolder robots entering into hazardous environments while shyer ones know to hold back. This could be very helpful in mapping a disaster area such as Fukushima, including its most dangerous parts, while avoiding too many robots in the swarm being damaged at once.
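
A minimal sketch of that idea in Python (purely illustrative, with invented boldness scores rather than anything from the spider study): roles are reassigned from the current group composition, so if bold robots are lost, the boldest of the remaining “shy” robots step into the vacated exploring role.

```python
# Illustrative sketch of group-composition-sensitive risk taking. The boldness
# values and the number of explorers needed are made-up parameters.
def assign_roles(boldness, n_explorers):
    """Give 'explore' to the n boldest robots and 'maintain' to the rest."""
    ranked = sorted(range(len(boldness)), key=lambda i: boldness[i], reverse=True)
    explorers = set(ranked[:n_explorers])
    return ["explore" if i in explorers else "maintain" for i in range(len(boldness))]

swarm = [0.9, 0.8, 0.4, 0.3, 0.2]                 # boldness score of each robot
print(assign_roles(swarm, n_explorers=2))         # the two boldest explore
print(assign_roles(swarm[2:], n_explorers=2))     # bold robots lost: shy ones step up
```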

The ability to adapt

Cane toads were introduced to Australia in the 1930s as a pest control measure, and have since become an invasive species themselves. In new areas cane toads are seen to be somewhat social. One reason for their growth in numbers is that they are able to adapt to a wide temperature range, a form of physiological plasticity. Swarms of robots with the capability to switch power consumption mode, depending on environmental conditions such as ambient temperature, could be considerably more durable if we want them to function autonomously for the long term. For example, if we want to send robots off to map Mars then they will need to cope with temperatures that can swing from -150°C at the poles to 20°C at the equator.
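
Here is a toy sketch of that kind of physiological plasticity in software; the thresholds and mode names are illustrative assumptions, not values from any deployed rover.

```python
# Switch power mode with ambient temperature, inspired by cane toads' tolerance
# of a wide temperature range. Thresholds and modes are made up for illustration.
def power_mode(temp_c: float) -> str:
    if temp_c < -80:
        return "hibernate"       # minimal duty cycle; keep electronics from freezing
    if temp_c < -20:
        return "low_power"       # reduced sensing and slower movement
    return "full_operation"

for t in (-150, -50, 20):        # roughly Mars polar, mid-latitude, equatorial daytime
    print(f"{t} C -> {power_mode(t)}")
```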

Cane toads can adapt to temperature changes.
Radek Ziemniewicz/Shutterstock

In addition to behavioural and physiological plasticity, some organisms show morphological (shape) plasticity. For example, some bacteria change their shape in response to stress, becoming elongated and so more resilient to being “eaten” by other organisms. If swarms of robots can combine together in a modular fashion and (re)assemble into more suitable structures this could be very helpful in unpredictable environments. For example, groups of robots could aggregate together for safety when the weather takes a challenging turn.

Whether it’s the “cultures” developed by animal groups that are reliant on learning abilities, or the more fundamental ability to change “personality”, internal function or shape, swarm robotics still has plenty of mileage left when it comes to drawing inspiration from nature. We might even wish to mix and match behaviours from different species, to create robot “hybrids” of our own. Humanity faces challenges ranging from climate change affecting ocean currents, to a growing need for food production, to space exploration – and swarm robotics can play a decisive part given the right bio-inspiration.

The Conversation

Edmund Hunt, EPSRC Doctoral Prize Fellow, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Robots guarded Buddha’s relics in a legend of ancient India https://robohub.org/robots-guarded-buddhas-relics-in-a-legend-of-ancient-india/ Sun, 28 Apr 2019 22:32:49 +0000 https://robohub.org/robots-guarded-buddhas-relics-in-a-legend-of-ancient-india/

Two small figures guard the table holding the Buddha’s relics. Are they spearmen, or robots? British Museum, CC BY-NC-SA


By Adrienne Mayor

As early as Homer, more than 2,500 years ago, Greek mythology explored the idea of automatons and self-moving devices. By the third century B.C., engineers in Hellenistic Alexandria, in Egypt, were building real mechanical robots and machines. And such science fictions and historical technologies were not unique to Greco-Roman culture.

In my recent book “Gods and Robots,” I explain that many ancient societies imagined and constructed automatons. Chinese chronicles tell of emperors fooled by realistic androids and describe artificial servants crafted in the second century by the female inventor Huang Yueying. Techno-marvels, such as flying war chariots and animated beings, also appear in Hindu epics. One of the most intriguing stories from India tells how robots once guarded Buddha’s relics. As fanciful as it might sound to modern ears, this tale has a strong basis in links between ancient Greece and ancient India.

The story is set in the time of kings Ajatasatru and Asoka. Ajatasatru, who reigned from 492 to 460 B.C., was recognized for commissioning new military inventions, such as powerful catapults and a mechanized war chariot with whirling blades. When Buddha died, Ajatasatru was entrusted with defending his precious remains. The king hid them in an underground chamber near his capital, Pataliputta (now Patna) in northeastern India.

A sculpture depicting the distribution of the Buddha’s relics.
Los Angeles County Museum of Art/Wikimedia Commons

Traditionally, statues of giant warriors stood on guard near treasures. But in the legend, Ajatasatru’s guards were extraordinary: They were robots. In India, automatons or mechanical beings that could move on their own were called “bhuta vahana yanta,” or “spirit movement machines” in Pali and Sanskrit. According to the story, it was foretold that Ajatasatru’s robots would remain on duty until a future king would distribute Buddha’s relics throughout the realm.

Ancient robots and automatons

A statue of Visvakarman, the engineer of the universe.
Suraj Belbase/Wikimedia Commons, CC BY-SA

Hindu and Buddhist texts describe the automaton warriors whirling like the wind, slashing intruders with swords, recalling Ajatasatru’s war chariots with spinning blades. In some versions the robots are driven by a water wheel or made by Visvakarman, the Hindu engineer god. But the most striking version came by a tangled route to the “Lokapannatti” of Burma – Pali translations of older, lost Sanskrit texts, only known from Chinese translations, each drawing on earlier oral traditions.

In this tale, many “yantakara,” robot makers, lived in the Western land of the “Yavanas,” Greek-speakers, in “Roma-visaya,” the Indian name for the Greco-Roman culture of the Mediterranean world. The Yavanas’ secret technology of robots was closely guarded. The robots of Roma-visaya carried out trade and farming and captured and executed criminals.

Robot makers were forbidden to leave or reveal their secrets – if they did, robotic assassins pursued and killed them. Rumors of the fabulous robots reached India, inspiring a young artisan of Pataliputta, Ajatasatru’s capital, who wished to learn how to make automatons.

In the legend, the young man of Pataliputta finds himself reincarnated in the heart of Roma-visaya. He marries the daughter of the master robot maker and learns his craft. One day he steals plans for making robots, and hatches a plot to get them back to India.

Certain of being slain by killer robots before he could make the trip himself, he slits open his thigh, inserts the drawings under his skin and sews himself back up. Then he tells his son to make sure his body makes it back to Pataliputta, and starts the journey. He’s caught and killed, but his son recovers his body and brings it to Pataliputta.

Once back in India, the son retrieves the plans from his father’s body, and follows their instructions to build the automated soldiers for King Ajatasatru to protect Buddha’s relics in the underground chamber. Well hidden and expertly guarded, the relics – and robots – fell into obscurity.

The sprawling Maurya Empire in about 250 B.C.
Avantiputra7/Wikimedia Commons, CC BY-SA

Two centuries after Ajatasatru, Asoka ruled the powerful Mauryan Empire in Pataliputta, 273-232 B.C. Asoka constructed many stupas to enshrine Buddha’s relics across his vast kingdom. According to the legend, Asoka had heard the legend of the hidden relics and searched until he discovered the underground chamber guarded by the fierce android warriors. Violent battles raged between Asoka and the robots.

In one version, the god Visvakarman helped Asoka to defeat them by shooting arrows into the bolts that held the spinning constructions together; in another tale, the old engineer’s son explained how to disable and control the robots. At any rate, Asoka ended up commanding the army of automatons himself.

Exchange between East and West

Is this legend simply fantasy? Or could the tale have coalesced around early cultural exchanges between East and West? The story clearly connects the mechanical beings defending Buddha’s relics to automatons of Roma-visaya, the Greek-influenced West. How ancient is the tale? Most scholars assume it arose in medieval Islamic and European times.

But I think the story could be much older. The historical setting points to technological exchange between Mauryan and Hellenistic cultures. Contact between India and Greece began in the fifth century B.C., a time when Ajatasatru’s engineers created novel war machines. Greco-Buddhist cultural exchange intensified after Alexander the Great’s campaigns in northern India.

Inscriptions in Greek and Aramaic on a monument originally erected by King Asoka at Kandahar, in what is today Afghanistan.
World Imaging/Wikimedia Commons

In 300 B.C., two Greek ambassadors, Megasthenes and Deimachus, resided in Pataliputta, which boasted Greek-influenced art and architecture and was the home of the legendary artisan who obtained plans for robots in Roma-visaya. Grand pillars erected by Asoka are inscribed in ancient Greek and name Hellenistic kings, demonstrating Asoka’s relationship with the West. Historians know that Asoka corresponded with Hellenistic rulers, including Ptolemy II Philadelphus in Alexandria, whose spectacular procession in 279 B.C. famously displayed complex animated statues and automated devices.

Historians report that Asoka sent envoys to Alexandria, and Ptolemy II sent ambassadors to Asoka in Pataliputta. It was customary for diplomats to present splendid gifts to show off cultural achievements. Did they bring plans or miniature models of automatons and other mechanical devices?

I cannot hope to pinpoint the original date of the legend, but it is plausible that the idea of robots guarding Buddha’s relics melds both real and imagined engineering feats from the time of Ajatasatru and Asoka. This striking legend is proof that the concepts of building automatons were widespread in antiquity and reveals the universal and timeless link between imagination and science.

Adrienne Mayor is the author of:

Gods and Robots: Myths, Machines, and Ancient Dreams of TechnologyThe Conversation

Princeton University Press provides funding as a member of The Conversation US.

Adrienne Mayor, Research Scholar, Classics and History and Philosophy of Science, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Technology and robots will shake labour policies in Asia and the world https://robohub.org/technology-and-robots-will-shake-labour-policies-in-asia-and-the-world/ Fri, 25 Jan 2019 21:19:14 +0000 https://robohub.org/technology-and-robots-will-shake-labour-policies-in-asia-and-the-world/

Developing countries must begin seriously considering how technological changes will impact labour trends. KC Jan/Shutterstock


By Asit K. Biswas, University of Glasgow and Kris Hartley, The Education University of Hong Kong

In the 21st century, governments cannot ignore how changes in technology will affect employment and political stability.

The automation of work – principally through robotics, artificial intelligence (AI) and the Internet of things (IoT), collectively known as the Fourth Industrial Revolution – will provide an unprecedented boost to productivity and profit. It will also threaten the stability of low- and mid-skilled jobs in many developing and middle-income countries.

From labour to automation

Developing countries must begin seriously considering how technological changes will impact labour trends. Technology now looms as large a disruptive force as the whims of global capital, if not larger.

China has for decades increased its global contribution to manufacturing value-added goods, now enjoying a competitive position in Apple products, household appliances, and technology. In the process, the country has made historic progress lifting its citizens out of poverty.

China has accomplished this by raising worker productivity through technology and up-skilling (improving or acquiring new skills), and higher wages have predictably followed.

However, this trend is also compelling manufacturers to relocate some low-skill production to Southeast Asia. US-China trade disputes could exacerbate this trend.

Relocation of manufacturing activity has been an economic boon for workers in countries like Vietnam and Indonesia. However, the race among global manufacturers to procure the cheapest labour brings no assurances of long-term growth and prosperity to any one country.

Governments in developing countries must parlay the proceeds of ephemeral labour cost advantages into infrastructure investment, industrial upgrading and worker upskilling. China has done this to better effect than many.

The growth in sophistication and commercial feasibility of robotics, IoT, and other automation technologies will impact jobs at nearly every skill level. More broadly, the fallout from technological advancement may replicate the disruptive geographic shifts in production once resulting from labour cost arbitrage.

Political blowback

After many decades of globalisation, a borderless economy has emerged in which capital and production move freely to locations with the greatest investment returns and lowest cost structures. This has prompted a pattern of global economic restructuring, generating unprecedented growth opportunities for developing countries.

Workers have been rewarded for their personal efforts in education and skill development, while millions have been lifted from poverty.

Given advancements in technology and the associated impact on livelihoods, it is time to consider how the next chapter of global development will play out politically. Automation will be a highly disruptive force by most economic, social, and political measures. Few countries – developed or otherwise – will escape this challenge.

Some Western countries, including the United States, are already experiencing a populist political wave fuelled in part by the economic grievances of workers displaced from once stable, middle-class manufacturing jobs. Similar push-back may erupt in countries already embroiled in nationalist politics, including India.

Growing populations and the automation of work will soon mix to create unemployment crises, with serious implications for domestic political stability.

As education systems flood the employment market with scores of ambitious graduates, one of the greatest challenges governments face is how to generate well-paying jobs.

Further, vulnerable workers will include not only new entrants but also experienced workers, some of whom are continuously and aggressively up-skilling in anticipation of more lucrative employment.

In India, over 1 million people enter the working-age population every month. More than 8 million new jobs are needed each year to maintain current employment levels.

India’s young population is becoming increasingly pessimistic about their employment prospects. Although official statistics are unreliable, since a large percentage of work occurs in the informal sector in positions such as domestic workers, coolies, street vendors and other transient positions lacking contracts, indications are that India may be facing the prospect of jobless growth.

Insufficient skill levels in much of the workforce are impeding India’s effort to accelerate growth in high-productivity jobs. Thus, the country’s large-scale manufacturers, both domestically and internationally owned, are turning to robots to ensure consistent, reliable, and efficient production.

Urbanisation also adds to India’s employment challenge. The promise of higher-paying jobs has lured many rural workers into urban areas, but these workers are often illiterate and lack sufficient skills. This was not always a concern, as these workers could find menial factory jobs. Robots are now doing much of the low-skilled work that migrant workers were once hired to do.

Towards a future of stable livelihoods

The lingering socio-economic imperative for many governments is to replace eliminated jobs. According to The World Economic Forum, “inequality represents the greatest societal concern associated with the Fourth Industrial Revolution.”

However, the WEF and others have given little useful guidance on how to address this challenge. How should the economy absorb multitudes of variously skilled workers displaced by technology?

People aspire to economic and social mobility more than ever before, particularly as they observe wealth rising ostentatiously all around them – on the streets, in the news, and among seemingly lucky friends and acquaintances. Sadly, the aspirations of most will go unfulfilled.

One way forward is said to be through up-skilling by retraining workers to operate and maintain technology systems. However, this seems to be a paradox, as workers would be training robots to eventually take jobs held by humans. If a major driver of automation is reduction or elimination of labour costs, one cannot expect all displaced workers to enjoy stable and continuing employment opportunities.

Despite political promises about employment growth from high-tech industries and the technological transformation of primary sectors, the tension between the drive for technology-based efficiency and the loss of jobs is undeniable and may have no clear resolution.

Societies have reacted to global economic restructuring in discouraging ways, indulging in nationalism, racism, militarism, and arbitrary economic protectionism. Populist opportunists and foul-tempered troglodytes have ridden reactionary rhetoric into positions of political power, raging against what former White House chief strategist Steve Bannon calls the “liberal postwar international order.” At the same time, left-leaning solutions such as universal basic income face significant fiscal and political headwinds.

The 21st century will see increased disruptions to once-stable work life, due to technological progress and the continuing liberalisation of global capital and production. Early indications about how countries will respond – haphazardly and with no clear long-term strategy – are not encouraging.The Conversation

Asit K. Biswas, Visiting professor, University of Glasgow and Kris Hartley, Assistant professor, The Education University of Hong Kong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
How robots are helping doctors save lives in the Canadian North https://robohub.org/how-robots-are-helping-doctors-save-lives-in-the-canadian-north/ Mon, 07 Jan 2019 00:17:29 +0000 https://robohub.org/how-robots-are-helping-doctors-save-lives-in-the-canadian-north/

Remote presence technology enables a medic to perform an ultrasound at the scene of an accident.
(University of Saskatchewan), Author provided

Ivar Mendez, University of Saskatchewan

It is the middle of the winter and a six-month-old child is brought with acute respiratory distress to a nursing station in a remote community in the Canadian North.

The nurse realizes that the child is seriously ill and contacts a pediatric intensivist located in a tertiary care centre 900 kilometres away. The intensivist uses her tablet to activate a remote presence robot installed in the nursing station and asks the robot to go to the assessment room.

The robot autonomously navigates the nursing station corridors and arrives at the assessment room two minutes later. With the help of the robot’s powerful cameras, the doctor “sees” the child and talks to the nurse and the parents to obtain the medical history. She uses the robot’s stethoscope to listen to the child’s chest, measures the child’s oxygen blood saturation with a pulse oximeter and performs an electrocardiogram.

With the robot’s telestrator (an electronic device which enables the user to write and draw freehand over a video image) she helps the nurse to start an intravenous line and commences therapy to treat the child’s life-threatening condition.

This is not science fiction. This remote presence technology is currently in use in Saskatchewan, Canada — to provide care to acutely ill children living in remote Northern communities.

Treating acutely ill children

Advances in telecommunication, robotics, medical sensor technology and artificial intelligence (AI) have opened the door for solutions to the challenge of delivering remote, real-time health care to underserviced rural and remote populations.

A team uses a remote presence robot to see a patient in the emergency room.
(University of Saskatchewan), Author provided

In Saskatchewan, we have established a remote medicine program that focuses on the care of the most vulnerable populations — such as acutely ill children, pregnant women and the elderly.

We have demonstrated that with this technology about 70 per cent of acutely ill children can be successfully treated in their own communities. In similar communities without this technology, all acutely ill children need to be transported to a tertiary care centre.

We have also shown that this technology prevents delays in diagnosis and treatment and results in substantial savings to the health-care system.

Prenatal ultrasounds for Indigenous women

Remote communities often lack access to diagnostic ultrasonography services. This gap disproportionately affects Indigenous pregnant women in the Canadian North and results in increases in maternal and newborn morbidity and mortality.

We are pioneering the use of an innovative tele-robotic ultrasound system that allows an expert sonographer to perform a diagnostic ultrasound study, in real time, in a distant location.

Research shows that robotic ultrasonography is comparable to standard sonography and is accepted by most patients.

The first tele-robotic ultrasonography systems have been deployed to two northern Saskatchewan communities and are currently performing prenatal ultrasounds.

Emergency room trauma assessment

Portable remote presence devices that use available cellular networks could also be used in emergency situations, such as trauma assessment at the scene of an accident or transport of a victim to hospital.

For example, emergency physicians or trauma surgeons could perform real-time ultrasonography of the abdomen, thorax and heart in critically injured patients, identify life-threatening injuries and start life-saving treatment.

Wearable remote presence devices such as Google Glass are the next step in remote presence health care for underserviced populations.

For example, a local nurse and a specialist in a tertiary care centre thousands of kilometres away could together assess an acutely ill patient in an emergency room in a remote community, through the nurse’s eyes.

A nurse examines a patient with Google Glass.
(University of Saskatchewan), Author provided

Although remote presence technology may be applied initially to emergency situations in remote locations, its major impact may be in the delivery of primary health care. We can imagine the use of mobile remote presence devices by health professionals in a wide range of scenarios — from home-care visits to follow-up mental health sessions — in which access to medical expertise in real time would be just a computer click away.

A paradigm shift in health-care delivery

The current model of centralized health care, where the patient has to go to a hospital or a clinic to receive urgent or elective medical care, is inefficient and costly. Patients have to wait many hours in emergency rooms. Hospitals run at overcapacity. Delays in diagnosis and treatment cause poor outcomes or even death.

Underserviced rural and remote communities and the most vulnerable populations such as children and the elderly are the most affected by this centralized model.

Remote presence technologies have the potential to shift this — so that we can deliver medical care to a patient anywhere. In this decentralized model, patients requiring urgent or elective medical care will be seen, diagnosed and treated in their own communities or homes and patients requiring hospitalization will be triaged without delay.

This technology could have important applications in low-resource settings. Cellular network signals around the globe and rapidly increasing bandwidth will provide the telecommunication platform for a wide range of mobile applications.

Low-cost, dedicated remote-presence devices will increase access to medical expertise for anybody living in a geographical area with a cellphone signal. This access will be especially beneficial to people in developing countries where medical expertise is insufficient or not available.

The future of medical care is not in building more or bigger hospitals but in harnessing the power of technology to monitor and reach patients wherever they are — to preserve life, ensure wellness and speed up diagnosis and treatment.The Conversation

Ivar Mendez, Fred H. Wigmore Professor and Unified Head of the Department of Surgery, University of Saskatchewan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
The Montréal Declaration: Why we must develop AI responsibly https://robohub.org/the-montreal-declaration-why-we-must-develop-ai-responsibly/ Sun, 09 Dec 2018 22:48:20 +0000 https://robohub.org/the-montreal-declaration-why-we-must-develop-ai-responsibly/ Yoshua Bengio, Université de Montréal

I have been doing research on intelligence for 30 years. Like most of my colleagues, I did not get involved in the field with the aim of producing technological objects, but because I have an interest in the abstract nature of the notion of intelligence. I wanted to understand intelligence. That’s what science is: Understanding.

However, when a group of researchers ends up understanding something new, that knowledge can be exploited for beneficial or harmful purposes.

That’s where we are — at a turning point where the science of artificial intelligence is emerging from university laboratories. For the past five or six years, large companies such as Facebook and Google have become so interested in the field that they are putting hundreds of millions of dollars on the table to buy AI firms and then develop this expertise internally.

The progression in AI has since been exponential. Businesses are very interested in using this knowledge to develop new markets and products and to improve their efficiency.

So, as AI spreads in society, there is an impact. It’s up to us to choose how things play out. The future is in our hands.

Killer robots, job losses

From the get-go, the issue that has concerned me is that of lethal autonomous weapons, also known as killer robots.

While there is a moral question because machines have no understanding of the human, psychological and moral context, there is also a security question because these weapons could destabilize the world order.

Another issue that quickly surfaced is that of job losses caused by automation. We asked the question: Why? Who are we trying to relieve, and from what? The trucker isn’t happy on the road? He should be replaced by… nobody?

We scientists seemingly can’t do much. Market forces determine which jobs will be eliminated or those where the workload will be lessened, according to the economic efficiency of the automated replacements. But we are also citizens who can participate in a unique way in the social and political debate on these issues precisely because of our expertise.

Computer scientists are concerned with the issue of jobs. That is not because they will suffer personally. In fact, the opposite is true. But they feel they have a responsibility and they don’t want their work to potentially put millions of people on the street.

Revising the social safety net

So strong support exists, therefore, among computer scientists — especially those in AI — for a revision of the social safety net to allow for a sort of guaranteed wage, or what I would call a form of guaranteed human dignity.

The objective of technological innovation is to reduce human misery, not increase it.

It is also not meant to increase discrimination and injustice. And yet, AI can contribute to both.

Discrimination is not so much due, as we sometimes hear, to the fact AI was conceived by men because of the alarming lack of women in the technology sector. It is mostly due to AI learning from data that reflects people’s behaviour. And that behaviour is unfortunately biased.

In other words, a system that relies on data that comes from people’s behaviour will have the same biases and discrimination as the people in question. It will not be “politically correct.” It will not act according to the moral notions of society, but rather according to common denominators.

Society is discriminatory and these systems, if we’re not careful, could perpetuate or increase that discrimination.

There could also be what is called a feedback loop. For example, police forces use this kind of system to identify neighbourhoods or areas that are more at-risk. They will send in more officers… who will report more crimes. So the statistics will strengthen the biases of the system.
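A toy simulation of that feedback loop, under the deliberately crude assumption that recorded crime is proportional to patrol presence rather than to actual crime, might look like the following (every number here is illustrative):

# Toy feedback loop: two areas with identical true crime rates, but the area
# that starts with more patrols records more crime, which earns it even more
# patrols in the next round. All values are made up for illustration.
true_crime_rate = {"area_A": 10, "area_B": 10}   # identical underlying reality
patrols = {"area_A": 6, "area_B": 4}             # area_A starts slightly over-policed

for year in range(5):
    # Recorded crime scales with how many officers are there to record it.
    recorded = {a: true_crime_rate[a] * patrols[a] / 10 for a in patrols}
    # Shift one patrol unit toward whichever area recorded more crime.
    high = max(recorded, key=recorded.get)
    low = min(recorded, key=recorded.get)
    if patrols[low] > 1:
        patrols[high] += 1
        patrols[low] -= 1
    print(year, recorded, patrols)

Despite the two areas being identical in reality, the recorded statistics diverge year after year, which is exactly the self-reinforcing bias described above.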

The good news is that research is currently being done to develop algorithms that will minimize discrimination. Governments, however, will have to bring in rules to force businesses to use these techniques.

Saving lives

There is also good news on the horizon. The medical field will be one of those most affected by AI — and it’s not just a matter of saving money.

Doctors are human and therefore make mistakes. So as we develop these systems with more data, fewer mistakes will occur. Such systems are more precise than the best doctors. Doctors are already using these tools so they don’t miss important elements such as cancerous cells that are difficult to detect in a medical image.

There is also the development of new medications. AI can do a better job of analyzing the vast amount of data (more than what a human would have time to digest) that has been accumulated on drugs and other molecules. We’re not there yet, but the potential is there, as is more efficient analysis of a patient’s medical file.

We are headed toward tools that will allow doctors to make links that otherwise would have been very difficult to make and will enable physicians to suggest treatments that could save lives.

The chances of the medical system being completely transformed within 10 years are very high and, obviously, the importance of this progress for everyone is enormous.

I am not concerned about job losses in the medical sector. We will always need the competence and judgment of health professionals. However, we need to strengthen social norms (laws and regulations) to allow for the protection of privacy (patients’ data should not be used against them) as well as to aggregate that data to enable AI to be used to heal more people and in better ways.

The solutions are political

Because of all these issues and others to come, the Montréal Declaration for Responsible Development of Artificial Intelligence is important. It was signed Dec. 4 at the Society for Arts and Technology in the presence of about 500 people.

It was forged on the basis of vast consensus. We consulted people on the internet and in bookstores and gathered opinion in all kinds of disciplines. Philosophers, sociologists, jurists and AI researchers took part in the process of creation, so all forms of expertise were included.

There were several versions of this declaration. The first draft was at a forum on the socially responsible development of AI organized by the Université de Montréal on Nov. 2, 2017.

That was the birthplace of the declaration.

Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner. Current laws are not always well adapted to these new situations.

And that’s where we get to politics.

The abuse of technology

Matters related to ethics or abuse of technology ultimately become political and therefore belong in the sphere of collective decisions.

How is society to be organized? That is political.

What is to be done with knowledge? That is political.

I sense a strong willingness on the part of provincial governments as well as the federal government to commit to socially responsible development.

Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.

Montréal has been at the forefront of this sense of awareness for the past two years. I also sense the same will in Europe, including France and Germany.

Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.

And in this debate, I have come to realize that society has given me a voice — that governments and the media were interested in what I had to say on these topics because of my role as a pioneer in the scientific development of AI.

So, for me, it is now more than a responsibility. It is my duty. I have no choice.The Conversation

Yoshua Bengio, Professeur titulaire, Département d’informatique et de recherche opérationnelle, Université de Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Worried about AI taking over the world? You may be making some rather unscientific assumptions https://robohub.org/worried-about-ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions/ Sat, 10 Nov 2018 00:13:58 +0000 https://robohub.org/worried-about-ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions/ Eleni Vasilaki, Professor of Computational Neuroscience, University of Sheffield


Phonlamai Photo/Shutterstock

Should we be afraid of artificial intelligence? For me, this is a simple question with an even simpler, two letter answer: no. But not everyone agrees – many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity.

Clearly, your view on whether AI will take over the world will depend on whether you think it can develop intelligent behaviour surpassing that of humans – something referred to as “super intelligence”. So let’s take a look at how likely this is, and why there is much concern about the future of AI.

Humans tend to be afraid of what they don’t understand. Fear is often blamed for racism, homophobia and other sources of discrimination. So it’s no wonder it also applies to new technologies – they are often surrounded with a certain mystery. Some technological achievements seem almost unrealistic, clearly surpassing expectations and in some cases human performance.

No ghost in the machine

But let us demystify the most popular AI techniques, known collectively as “machine learning”. These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky but the truth is it is all down to some rather mundane statistics.

The machine, which is a program, or rather an algorithm, is designed with the ability to discover relationships within provided data. There are many different methods that allow us to achieve this. For example, we can present the machine with images of handwritten letters (a-z), one by one, and ask it to tell us which letter we are showing each time. We have already provided the possible answers – it can only be one of (a-z). At the beginning the machine says a letter at random and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, the next time it is presented with the same letter, it is more likely to give us the correct answer. As a consequence, the machine improves its performance over time and “learns” to recognise the alphabet.

In essence, we have programmed the machine to exploit common relationships in the data in order to achieve the specific task. For instance, all versions of “a” look structurally similar, but different to “b”, and the algorithm can exploit this. Interestingly, after the training phase, the machine can apply the obtained knowledge on new letter samples, for example written by a person whose handwriting the machine has never seen before.
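To make the mechanics concrete, here is a minimal sketch of that kind of supervised learning in Python, using scikit-learn’s bundled handwritten-digit images as a stand-in for the letters example (the dataset, model and split sizes are illustrative choices, not anything described by the author):

# Minimal supervised-learning sketch: show labelled images, let the model
# adjust itself, then test it on handwriting it has never seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                          # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=2000)       # a simple statistical classifier
model.fit(X_train, y_train)                     # "correcting" the model on known answers

# The trained model generalises to handwriting it was never shown during training.
print("accuracy on unseen samples:", model.score(X_test, y_test))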

We do give AI answers.
Chim/Shutterstock

Humans, however, are already good at reading. Perhaps a more interesting example is Google DeepMind’s artificial Go player, which has surpassed every human player of the game. It clearly learns in a way different to humans – playing a number of games with itself that no human could play in a lifetime. It has been specifically instructed to win and told that the actions it takes determine whether it wins or not. It has also been told the rules of the game. By playing the game again and again it can discover in each situation what the best action is – inventing moves that no human has played before.
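A stripped-down illustration of that learn-from-winning idea is sketched below. It is emphatically not DeepMind’s method (which combines deep neural networks with tree search); it is a toy in which an agent is told only whether each attempt “won” and, by trying again and again, estimates which action is best. All names and probabilities are made up:

# Toy reward-driven learning: the agent only ever sees win/lose feedback,
# yet over many trials it discovers which action wins most often.
import random

win_probability = {"move_a": 0.2, "move_b": 0.5, "move_c": 0.8}  # hidden from the agent
value = {a: 0.0 for a in win_probability}    # agent's running estimate of each action
counts = {a: 0 for a in win_probability}

for trial in range(10_000):
    # Mostly exploit the best-looking action, sometimes explore the others.
    if random.random() < 0.1:
        action = random.choice(list(win_probability))
    else:
        action = max(value, key=value.get)
    won = random.random() < win_probability[action]   # the only feedback given
    counts[action] += 1
    value[action] += (float(won) - value[action]) / counts[action]  # running average

print("estimated win rates:", {a: round(v, 2) for a, v in value.items()})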

Toddlers versus robots

Now does that make the AI Go player smarter than a human? Certainly not. AI is very specialised to particular types of tasks and it doesn’t display the versatility that humans do. Humans develop an understanding of the world over years that no AI has achieved or seems likely to achieve anytime soon.

The fact that AI is dubbed “intelligent” is ultimately down to the fact that it can learn. But even when it comes to learning, it is no match for humans. In fact, toddlers can learn by just watching somebody solving a problem once. An AI, on the other hand, needs tonnes of data and loads of tries to succeed on very specific problems, and it struggles to generalise its knowledge to tasks very different from those it was trained on. So while humans develop breathtaking intelligence rapidly in the first few years of life, the key concepts behind machine learning are not so different from what they were one or two decades ago.

Toddler brains are amazing.
Mcimage/Shutterstock

The success of modern AI is less due to a breakthrough in new techniques and more due to the vast amount of data and computational power available. Importantly, though, even an infinite amount of data won’t give AI human-like intelligence – we need to make significant progress on developing artificial “general intelligence” techniques first. Some approaches to doing this involve building a computer model of the human brain – which we’re not even close to achieving.

Ultimately, just because an AI can learn, it doesn’t really follow that it will suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is and we certainly have little idea how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean that it would be more successful.

Personally, I am more concerned by how humans use AI. Machine learning algorithms are often thought of as black boxes, and less effort is made in pinpointing the specifics of the solution our algorithms have found. This is an important and frequently neglected aspect as we are often obsessed with performance and less with understanding. Understanding the solutions that these systems have discovered is important, because we can also evaluate if they are correct or desirable solutions.

If, for instance, we train our system in a wrong way, we can also end up with a machine that has learned relationships that do not hold in general. Say for instance that we want to design a machine to evaluate the ability of potential students in engineering. Probably a terrible idea, but let us follow it through for the sake of the argument. Traditionally, this is a male dominated discipline, which means that training samples are likely to be from previous male students. If we don’t make sure, for instance, that the training data are balanced, the machine might end up with the conclusion that engineering students are male, and incorrectly apply it to future decisions.
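As a hedged illustration of that pitfall, the sketch below builds a synthetic, male-heavy training set (not data from any real admissions system) and shows that the resulting classifier rates two applicants with identical exam scores differently purely because of gender:

# Synthetic illustration of a skewed training set. Historical "engineering
# student" records are 90% male, so the model picks up gender as a signal and
# applies it to future applicants. All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 300 past engineering students (90% male) and 300 non-students (50% male).
# Feature columns: [is_male, exam_score].
students = np.column_stack([rng.choice([0, 1], 300, p=[0.1, 0.9]),
                            rng.normal(75, 8, 300)])
others = np.column_stack([rng.choice([0, 1], 300, p=[0.5, 0.5]),
                          rng.normal(65, 8, 300)])
X = np.vstack([students, others])
y = np.array([1] * 300 + [0] * 300)          # 1 = became an engineering student

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two future applicants with the same exam score, differing only in gender:
print("male applicant:  ", model.predict_proba([[1, 70.0]])[0, 1])
print("female applicant:", model.predict_proba([[0, 70.0]])[0, 1])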

Machine learning and artificial intelligence are tools. They can be used in a right or a wrong way, like everything else. It is the way that they are used that should concern us, not the methods themselves. Human greed and human unintelligence scare me far more than artificial intelligence.The Conversation

]]>
Robots can learn a lot from nature if they want to ‘see’ the world https://robohub.org/robots-can-learn-a-lot-from-nature-if-they-want-to-see-the-world/ Tue, 31 Jul 2018 19:38:08 +0000 https://robohub.org/robots-can-learn-a-lot-from-nature-if-they-want-to-see-the-world/

‘Seeing’ through robot eyes.
Shutterstock/TrifonenkoIvan


By Michael Milford, Queensland University of Technology and Jonathan Roberts, Queensland University of Technology

Vision is one of nature’s amazing creations that has been with us for hundreds of millions of years. It’s a key sense for humans, but one we often take for granted: that is, until we start losing it or we try and recreate it for a robot.

Many research labs (including our own) have been modelling aspects of the vision systems found in animals and insects for decades. We draw heavily upon studies like those done in ants, in bees and even in rodents.

To model a biological system and make it useful for robots, you typically need to understand both the behavioural and neural basis of that vision system.

The behavioural component is what you observe the animal doing and how that behaviour changes when you mess with what it can see, for example by trying different configurations of landmarks. The neural components are the circuits in the animal’s brain underlying visual learning for tasks, such as navigation.

Recognising faces

Recognition is a fundamental visual process for all animals and robots. It’s the ability to recognise familiar people, animals, objects and landmarks in the world.

Because of its importance, facial recognition comes partly “baked in” to natural systems: human babies, for example, are able to recognise faces quite early on.

Along those lines, some artificial face recognition systems are based on how biological systems are thought to function. For example, researchers have created sets of neural networks that mimic different levels of the visual processing hierarchy in primates to create a system that is capable of face recognition.

Recognising places

Visual place recognition is an important process for anything that navigates through the world.

Place recognition is the process by which a robot or animal looks at the world around it and is able to reconcile what it’s currently seeing with some past memory of a place, or in the case of humans, a description or expectation of that place.

Before the advent of GPS navigation, we may have been given instructions like “drive along until you see the church on the left and take the next right hand turn”. We know what a typical church looks like and hence can recognise one when we see it.

This place recognition may sound like an easy task, until one encounters challenges such as appearance change – for example, the change in appearance caused by day-night cycles or by adverse weather conditions.

Visually recognising a place is straightforward … until the appearance of that place changes drastically.
Michael Milford

Another challenge in visually recognising a place is viewpoint change: changes in how a place appears if you view it from a different perspective.

An extreme example of this is encountered when retracing a route along a road for the first time – you are encountering everything in the environment from the opposite viewpoint.

When viewed from opposing viewpoints, the same place appears very different.
neyro2008 / Alexander Zelnitskiy / Maxim Popov / 123rf.com / 1 Year, 1,000km: The Oxford RobotCar Dataset.

Creating a robotic system that can recognise this place despite these challenges requires the vision system to have a deeper understanding of what is in the environment around it.
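One crude way a robot might attempt place recognition, before worrying about either challenge, is to shrink each view to a tiny grayscale thumbnail and compare thumbnails pixel by pixel. The sketch below (using made-up arrays rather than real camera images) shows the idea, and also why a drastic appearance change such as nightfall breaks it:

# Naive place recognition: shrink each view to a tiny grayscale thumbnail and
# compare pixel by pixel. Works when appearance is stable, fails when the same
# place looks very different (e.g. day vs night).
import numpy as np

def thumbnail_distance(img_a, img_b, size=(8, 8)):
    """Sum of absolute differences between two block-averaged thumbnails."""
    def shrink(img):
        h, w = img.shape
        img = img[:h - h % size[0], :w - w % size[1]]      # trim to a multiple
        bh, bw = img.shape[0] // size[0], img.shape[1] // size[1]
        return img.reshape(size[0], bh, size[1], bw).mean(axis=(1, 3))
    return float(np.abs(shrink(img_a) - shrink(img_b)).sum())

# Fake data: a stored memory of a place, the same place revisited with mild
# sensor noise, and the same place at "night" (much darker).
rng = np.random.default_rng(1)
memory = rng.uniform(0, 255, (64, 64))
revisit_day = memory + rng.normal(0, 5, memory.shape)
revisit_night = memory * 0.2

print("same place, daytime:", thumbnail_distance(memory, revisit_day))
print("same place, night:  ", thumbnail_distance(memory, revisit_night))

The daytime revisit scores a small distance while the night revisit scores a huge one, even though both are the same place, which is precisely the appearance-change problem described above.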

Sensing capability

Visual sensing hardware has advanced rapidly over the past decade, in part driven by the proliferation of highly capable cameras in smartphones. Modern cameras are now matching or surpassing even the more capable natural vision systems, at least in certain aspects.

For example, a consumer camera can now see in the dark as well as a dark-adapted human eye.

New smartphone cameras can also record video at 1,000 frames per second, enabling the potential for robotic vision systems that operate at a higher frequency than a human vision system.

Specialist robotic vision sensors such as the Dynamic Vision Sensor (DVS) are even faster, but they only report changes in the brightness of each pixel, rather than its absolute colour. You can see the difference here in a walk around Hyde Park in London:

Not all robot cameras have to be like conventional cameras either: roboticists use specialist cameras based on how animals such as ants see the world.
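Returning to the DVS mentioned above: a rough emulation of event-style sensing (an assumption about its behaviour, not vendor code) can be built from ordinary frames by reporting only the pixels whose brightness changed significantly, as sketched here:

# Rough event-camera emulation: instead of full frames, output only the pixels
# whose log-brightness changed by more than a threshold, with the sign of change.
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Yield (frame_index, row, col, polarity) for significant brightness changes."""
    log_prev = np.log1p(frames[0].astype(float))
    for i, frame in enumerate(frames[1:], start=1):
        log_now = np.log1p(frame.astype(float))
        diff = log_now - log_prev
        rows, cols = np.where(np.abs(diff) > threshold)
        for r, c in zip(rows, cols):
            yield (i, int(r), int(c), 1 if diff[r, c] > 0 else -1)
        log_prev = log_now

# Tiny synthetic example: a bright square moves one pixel to the right per frame.
frames = np.zeros((3, 5, 5), dtype=np.uint8)
frames[0, 1:3, 1:3] = 200
frames[1, 1:3, 2:4] = 200
frames[2, 1:3, 3:5] = 200
print(list(frames_to_events(frames)))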

Required resolution?

One of the fundamental questions in all vision-based research for robots and animals is what visual resolution (or visual acuity) is required to “get the job done”.

For many insects and animals such as rodents, a relatively low visual resolution is all they have access to – equivalent to a camera with a few thousand pixels in many cases (compared with a modern smartphone which has camera resolutions ranging from 8 Megapixels to 40 Megapixels).

Bees navigate effectively using a relatively low resolution visual sensing capability.
Bogdan Mircea Hoda / 123rf.com

The required resolution varies greatly depending on the task – for some navigation tasks, only a few pixels are required, both for animals such as ants and bees and for robots.

But for more complex tasks – such as self-driving cars – much higher camera resolutions are likely to be required.

If cars are ever to reliably recognise and predict what a human pedestrian is doing, or intending to do, they will likely require high resolution visual sensing systems that can pick up subtle facial expressions and body movement.

A tension between bio-inspiration and pragmatism

For roboticists looking to nature for inspiration, there is a constant tension between mimicking biology and capitalising on the constant advances in camera technology.

While biological vision systems were clearly superior to cameras in the past, constant rapid advancement in technology has resulted in cameras with superior sensing capabilities to natural systems in many instances. It’s only sensible that these practical capabilities should be exploited in the pursuit of creating high performance and safe robots and autonomous vehicles.

But biology will still play a key role in inspiring roboticists. The natural kingdom is superb at making highly capable vision systems that consume minimal space, computational and power resources, all key challenges for most robotic systems.

Michael Milford, Professor, Queensland University of Technology and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

]]>
How robot math and smartphones led researchers to a drug discovery breakthrough https://robohub.org/how-robot-math-and-smartphones-led-researchers-to-a-drug-discovery-breakthrough/ Thu, 22 Feb 2018 17:32:06 +0000 http://robohub.org/how-robot-math-and-smartphones-led-researchers-to-a-drug-discovery-breakthrough/
By Ian Haydon, University of Washington

Robotic movement can be awkward.

For us humans, a healthy brain handles all the minute details of bodily motion without demanding conscious attention. Not so for brainless robots – in fact, calculating robotic movement is its own scientific subfield.

My colleagues here at the University of Washington’s Institute for Protein Design have figured out how to apply an algorithm originally designed to help robots move to an entirely different problem: drug discovery. The algorithm has helped unlock a class of molecules known as peptide macrocycles, which have appealing pharmaceutical properties.

One small step, one giant leap

Roboticists who program movement conceive of it in what they call “degrees of freedom.” Take a metal arm, for instance. The elbow, wrist and knuckles are movable and thus contain degrees of freedom. The forearm, upper arm and individual sections of each finger do not. If you want to program an android to reach out and grasp an object or take a calculated step, you need to know what its degrees of freedom are and how to manipulate them.

The more degrees of freedom a limb has, the more complex its potential motions. The math required to direct even simple robotic limbs is surprisingly abstruse; Ferdinand Freudenstein, a father of the field, once called the calculations underlying the movement of a limb with seven joints “the Mount Everest of kinematics.”
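To see what those degrees of freedom mean in practice, here is a minimal sketch (illustrative only, not Freudenstein’s formulation) of forward kinematics for a planar arm: given the joint angles, it computes where the fingertip ends up. The notoriously hard inverse problem runs the other way, from a desired fingertip position back to a set of angles:

# Forward kinematics of a planar arm: each joint angle is one degree of freedom.
# Given the angles, computing the fingertip position is straightforward; the hard
# problem (inverse kinematics) is recovering angles that reach a desired position.
import math

def fingertip(joint_angles, link_lengths):
    """Return (x, y) of the end of a planar chain of rotating joints."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle                      # each joint rotates the rest of the arm
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# A three-joint arm (shoulder, elbow, wrist) rotated by 30, 45 and -20 degrees.
angles = [math.radians(a) for a in (30, 45, -20)]
print(fingertip(angles, link_lengths=[1.0, 0.8, 0.4]))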

https://youtu.be/V9a6vvq3H3w

Freudenstein developed his kinematics equations at the dawn of the computer era in the 1950s. Since then, roboticists have increasingly relied on algorithms to solve these complex kinematic puzzles. One algorithm in particular – known as “generalized kinematic closure” – bested the seven joint problem, allowing roboticists to program fine control into mechanical hands.

Molecular biologists took notice.

Many molecules inside living cells can be conceived of as chains with pivot points, or degrees of freedom, akin to tiny robotic arms. These molecules flex and twist according to the laws of chemistry. Peptides and their elongated cousins, proteins, often must adopt precise three-dimensional shapes in order to function. Accurately predicting the complex shapes of peptides and proteins allows scientists like me to understand how they work.

Mastering macrocycles

While most peptides form straight chains, a subset, known as macrocycles, form rings. This shape offers distinct pharmacological advantages. Ringed structures are less flexible than floppy chains, making macrocycles extremely stable. And because they lack free ends, some can resist rapid degradation in the body – an otherwise common fate for ingested peptides.

Macrocycles have a circular ‘main chain’ (shown as thick lines) and many ‘side chains’ (shown as thin lines). The macrocycle on the left — cyclosporin — evolved in a fungus. The one on the right was designed on a computer. Credit: Ian Haydon/Institute for Protein Design

Natural macrocycles such as cyclosporin are among the most potent therapeutics identified to date. They combine the stability benefits of small-molecule drugs, like aspirin, and the specificity of large antibody therapeutics, like herceptin. Experts in the pharmaceutical industry regard this category of medicinal compounds as “attractive, albeit underappreciated.”

“There is a huge diversity of macrocycles in nature – in bacteria, plants, some mammals,” said Gaurav Bhardwaj, a lead author of the new report in Science, “and nature has evolved them for their own particular functions.” Indeed, many natural macrocycles are toxins. Cyclosporin, for instance, displays anti-fungal activity yet also acts as a powerful immunosuppressant in the clinic, making it useful as a treatment for rheumatoid arthritis or to prevent rejection of transplanted organs.

A popular strategy for producing new macrocycle drugs involves grafting medicinally useful features onto otherwise safe and stable natural macrocycle backbones. “When it works, it works really well, but there’s a limited number of well-characterized structures that we can confidently use,” said Bhardwaj. In other words, drug designers have only had access to a handful of starting points when making new macrocycle medications.

To create additional reliable starting points, his team used generalized kinematic closure – the robot joint algorithm – to explore the possible conformations, or shapes, that macrocycles can adopt.

Adaptable algorithms

As with keys, the exact shape of a macrocycle matters. Build one with the right conformation and you may unlock a new cure.

Modeling realistic conformations is “one of the hardest parts” of macrocycle design, according to Vikram Mulligan, another lead author of the report. But thanks to the efficiency of the robotics-inspired algorithm, the team was able to achieve “near-exhaustive sampling” of plausible conformations at “relatively low computational cost.”
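The real generalized kinematic closure algorithm solves the ring-closure constraint analytically; as a much cruder stand-in that conveys the idea, the sketch below randomly samples joint angles for a planar chain and keeps only conformations whose free end lands back near the start (pure rejection sampling, for illustration only):

# Crude stand-in for conformational sampling with a closure constraint:
# sample random joint angles for a planar chain of equal-length links and count
# the conformations whose free end returns close to the starting point (a "ring").
import math, random

def end_point(angles, link=1.0):
    x = y = heading = 0.0
    for a in angles:
        heading += a
        x += link * math.cos(heading)
        y += link * math.sin(heading)
    return x, y

closed = 0
for _ in range(100_000):
    angles = [random.uniform(-math.pi, math.pi) for _ in range(8)]  # 8-link toy "macrocycle"
    x, y = end_point(angles)
    if math.hypot(x, y) < 0.2:          # the ring closes (end meets start)
        closed += 1

print(f"kept {closed} closed conformations out of 100,000 random samples")

The point of the comparison is efficiency: blind rejection sampling like this discards almost everything, whereas the closure algorithm builds the constraint in from the start, which is why near-exhaustive sampling becomes affordable.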

Supercomputer not necessary – smartphones performed the design calculations. Credit: Los Alamos National Laboratory

The calculations were so efficient, in fact, that most of the work did not require a supercomputer, as is usually the case in the field of molecular engineering. Instead, thousands of smartphones belonging to volunteers were networked together to form a distributed computing grid, and the scientific calculations were doled out in manageable chunks.

With the initial smartphone number crunching complete, the team pored over the results – a collection of hundreds of never-before-seen macrocycles. When a dozen such compounds were chemically synthesized in the lab, nine were shown to actually adopt the predicted conformation. In other words, the smartphones were accurately rendering molecules that scientists can now optimize for their potential as targeted drugs.

The team estimates the number of macrocycles that can confidently be used as starting points for drug design has jumped from fewer than 10 to over 200, thanks to this work. Many of the newly designed macrocycles contain chemical features that have never been seen in biology.

To date, macrocyclic peptide drugs have shown promise in battling cancer, cardiovascular disease, inflammation and infection. Thanks to the mathematics of robotics, a few smartphones and some cross-disciplinary thinking, patients may soon see even more benefits from this promising class of molecules.

Ian Haydon, Doctoral Student in Biochemistry, University of Washington

This article was originally published on The Conversation. Read the original article.

The Conversation ]]>
Drones, volcanoes and the ‘computerisation’ of the Earth https://robohub.org/drones-volcanoes-and-the-computerisation-of-the-earth/ Fri, 29 Dec 2017 23:57:30 +0000 http://robohub.org/drones-volcanoes-and-the-computerisation-of-the-earth/

The Mount Agung volcano spews smoke, as seen from Karangasem, Bali. EPA-EFE/MADE NAGI


By Adam Fish

The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with the flights in and out of the island, leaving people stranded while the experts try to work out what the volcano will do next.

But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.

Amazon is developing drone delivery in the UK, drones are delivering blood in Rwanda, and in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.

In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.

I’ve been involved in two different drone volcano missions. A third mission will begin in a few days. In the first, we used drones to create a 3D map of the volcano accurate to within 20cm. With this information, we could see whether the volcano was actually growing in size – key evidence that it is about to blow up.

The second mission involved flying a carbon dioxide and sulphur dioxide smelling sensor through the plume. An increase in these gases can tell us if an eruption looms. There was a high degree of carbon dioxide and that informed the government to raise the threat warning to the highest level.
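A minimal sketch of how such sensor readings might be turned into an alert, with entirely illustrative thresholds and numbers rather than the team’s actual processing, could look like this:

# Illustrative alerting on gas-sensor readings from a drone flight: flag the
# flight if peak CO2 rises well above an assumed background level.
baseline_ppm = 450                       # assumed background CO2 concentration
alert_factor = 2.0                       # flag readings twice the background

flight_readings_ppm = [460, 510, 980, 1250, 1100, 640]   # made-up plume transect

peak = max(flight_readings_ppm)
if peak > alert_factor * baseline_ppm:
    print(f"elevated CO2 (peak {peak} ppm): consider raising the alert level")
else:
    print(f"CO2 near background (peak {peak} ppm)")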

In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.

What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 meters to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.

More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.

The internet of nature

But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things but this is the internet of nature. This is the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet. In this case, the air itself, to better understand the guts of a volcano.

These flying sensors, it is hoped, will give volcanologists what anthropologist Stefan Helmreich called abduction – or a predictive and prophetic “argument from the future”.

But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?

There is something not fully comprehended – or more ominously not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big data crunching algorithms capable of generating and responding to their own artificial intelligence.

These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.

In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.

Adam Fish, Senior Lecturer in Sociology and Media Studies, Lancaster University

This article was originally published on The Conversation. Read the original article.

]]>
What the robots of Star Wars tell us about automation, and the future of human work https://robohub.org/what-the-robots-of-star-wars-tell-us-about-automation-and-the-future-of-human-work/ Tue, 19 Dec 2017 21:17:30 +0000 http://robohub.org/what-the-robots-of-star-wars-tell-us-about-automation-and-the-future-of-human-work/
BB-8 is an “astromech droid” who first appeared in The Force Awakens.
Lucasfilm/IMDB

By Paul Salmon, University of the Sunshine Coast

Millions of fans all over the world eagerly anticipated this week’s release of Star Wars: The Last Jedi, the eighth in the series. At last we will get some answers to questions that have been vexing us since 2015’s The Force Awakens.

Throughout the franchise, the core characters have been accompanied by a number of much-loved robots, including C-3PO, R2-D2 and more recently, BB-8 and K2-SO. While often fulfilling the role of wise-cracking sidekicks, these and other robots also play an integral role in events.

Interestingly, they can also tell us useful things about automation, such as whether it poses dangers to us and whether robots will ever replace human workers entirely. In these films, we see the good, bad and ugly of robots – and can thus glean clues about what our technological future might look like.

The fear of replacement

One major fear is that robots and automation will replace us, despite work design principles that tell us technology should be used as a tool to assist, rather than replace, humans. In the world of Star Wars, robots (or droids as they are known) mostly assist organic lifeforms, rather than completely replace them.

R2-D2 and C3PO in A New Hope.
Lucasfilms/IMDB

So for instance, C-3PO is a protocol droid who was designed to assist in translation, customs and etiquette. R2-D2 and the franchise’s new darling, BB-8, are both “astromech droids” designed to assist in starship maintenance.

In the most recent movie, Rogue One, an offshoot of the main franchise, we were introduced to K2-SO, a wisecracking advanced autonomous military robot who was caught and reprogrammed to switch allegiance to the rebels. K2-SO mainly acts as a co-pilot, for example when flying a U-Wing with the pilot Cassian Andor to the planet of Eadu.

In most cases then, the Star Wars droids provide assistance – co-piloting ships, helping to fix things, and even serving drinks. In the world of these films, organic lifeforms are still relied upon for most skilled work.

When organic lifeforms are completely replaced, it is generally when the work is highly dangerous. For instance, during the duel between Anakin and Obi-Wan on the planet Mustafar in Revenge of the Sith, DLC-13 mining droids can be seen going about their work in the planet’s hostile lava rivers.

Further, droid armies act as the frontline in various battles throughout the films. Perhaps, in the future, we will be OK with losing our jobs if the work in question poses a significant risk to our health.

K2-SO in Rogue One.
Lucasfilm/IMDB

However, there are some exceptions to this trend in the Star Wars universe. In the realm of healthcare, for instance, droids have fully replaced organic lifeforms. In The Empire Strikes Back a medical droid treats Luke Skywalker after his encounter with a Wampa, a yeti-like snow beast on the planet Hoth. The droid also replaces his hand following his battle with Darth Vader on the planet Bespin.

Likewise, in Revenge of the Sith, a midwife droid is seen delivering the siblings Luke and Leia on Polis Massa.

https://youtu.be/ohsLZxQYBk8

Perhaps this is one area in which Star Wars has it wrong: here on earth, full automation is a long way off in healthcare. Assistance from robots in healthcare is the more realistic prospect and is, in fact, already here. Indeed, robots have been assisting surgeons in operating theatres for some time now.

Automated vehicles

Driverless vehicles are currently flavour of the month – but will we actually use them? In Star Wars, despite the capacity for spacecraft and starships to be fully automated, organic lifeforms still take the controls. The spaceship Millennium Falcon, for example, is mostly flown by the smuggler Han Solo and his companion Chewbacca.

Most of the Star Wars starship fleet (A-wings, X-wings, Y-wings, TIE fighters, Star Destroyers, starfighters and more) ostensibly possesses the capacity for fully automated flight; however, these ships are mostly flown by organic lifeforms. In The Phantom Menace the locals on Tatooine have even taken to building and manually racing their own “pod racers”.

It seems likely that here on earth, humans too will continue to prefer to drive, fly, sail, and ride. Despite the ability to fully automate, most people will still want to be able to take full control.

Flawless, error proof robots?

Utopian visions often depict a future where sophisticated robots will perform highly skilled tasks, all but eradicating the costly errors that humans make. This is unlikely to be true.

A final message from the Star Wars universe is that the droids and advanced technologies are often far from perfect. In our own future, costly human errors may simply be replaced by robot designer errors.

R5-D4, the malfunctioning droid of A New Hope.
Lucasfilm/IMDB

The B1 Battle Droids seen in the first and second Star Wars films lack intelligence and frequently malfunction. C-3PO is notoriously error prone and his probability-based estimates are often wide of the mark.

In the fourth film, A New Hope, R5-D4 (another astromech droid) malfunctions and explodes just as the farmer Owen Lars is about to buy it. Other droids are slow and clunky, such as the GNK Power droid and HURID-327, the groundskeeper at the castle of Maz Kanata in The Force Awakens.

The much feared scenario, whereby robots become so intelligent that they eventually take over, is hard to imagine with this lot.

Perhaps the message from the Star Wars films is that we need to lower our expectations of robot capabilities, in the short term at least. Cars will still crash, mistakes will still be made, regardless of whether humans or robots are doing the work.

Paul Salmon, Professor of Human Factors, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

We built a robot care assistant for elderly people – here’s how it works https://robohub.org/we-built-a-robot-care-assistant-for-elderly-people-heres-how-it-works/ Fri, 24 Nov 2017 17:06:43 +0000 http://robohub.org/we-built-a-robot-care-assistant-for-elderly-people-heres-how-it-works/

Credit: Trinity College Dublin

By Conor McGinn, Trinity College Dublin

Not all robots will take over human jobs. My colleagues and I have just unveiled a prototype care robot that we hope could take on some of the more mundane work of looking after elderly and disabled people and those with conditions such as dementia. This would leave human carers free to focus on the more personal parts of the job. The robot could also do things humans don’t have time to do now, like keeping a constant check on whether someone is safe and well, while allowing them to keep their privacy.

Our robot, named Stevie, is designed to look a bit (but not too much) like a human, with arms and a head but also wheels. This is because we need it to exist alongside people and perform tasks that may otherwise be done by a human. Giving the robot these features helps people realise that they can speak to it and perhaps ask it to do things for them.

Stevie can perform some of its jobs autonomously, for example reminding users to take medication. Other tasks are designed to involve human interaction. For example, if a room sensor detects a user may have fallen over, a human operator can take control of the robot, use it to investigate the event and contact the emergency services if necessary.
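As a rough illustration of how that split between autonomous and human-in-the-loop tasks might be structured in software, here is a minimal sketch in Python. The event types, class names and operator callbacks are assumptions made for this example; they do not describe the real Stevie system.

```python
# Minimal sketch of a sensor-triggered escalation workflow for a care robot.
# All names and event types here are illustrative assumptions, not details
# of the actual Stevie software.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class RoomEvent:
    kind: str        # e.g. "medication_due" or "possible_fall"
    room: str
    timestamp: datetime


class CareRobotController:
    def __init__(self, notify_operator, call_emergency):
        # Callbacks are injected so the escalation path stays testable.
        self.notify_operator = notify_operator
        self.call_emergency = call_emergency

    def handle(self, event: RoomEvent):
        if event.kind == "medication_due":
            # Routine reminder: handled autonomously.
            self.speak("It's time to take your medication.")
        elif event.kind == "possible_fall":
            # Safety-critical event: a human operator takes over, investigates
            # through the robot and decides whether to escalate.
            if self.notify_operator(event) == "escalate":
                self.call_emergency(event.room)

    def speak(self, text: str):
        print(f"[robot] {text}")


# Example usage with stubbed operator and emergency-call functions.
robot = CareRobotController(
    notify_operator=lambda event: "escalate",
    call_emergency=lambda room: print(f"[robot] Calling emergency services to the {room}"),
)
robot.handle(RoomEvent("possible_fall", "bedroom", datetime.now()))
```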

Credit: Trinity College Dublin

Stevie can also help users stay socially connected. For example, the screens in the head can facilitate a Skype call, eliminating the challenges many users face using telephones. Stevie can also regulate room temperatures and light levels, tasks that help to keep the occupant comfortable and reduce possible fall hazards.

None of this will mean we won’t need human carers anymore. Stevie won’t be able to wash or dress people, for example. Instead, we’re trying to develop technology that helps and complements human care. We want to combine human empathy, compassion and decision-making with the efficiency, reliability and continuous operation of robotics.

One day, we might be able to develop care robots that can help with more physical tasks, such as helping users out of bed. But these jobs carry much greater risks to user safety and we’ll need to do a lot more work to make this happen.

Stevie would provide benefits to carers as well as elderly or disabled users. The job of a professional care assistant is incredibly demanding, often involving long, unsocial hours in workplaces that are frequently understaffed. As a result, the industry suffers from extremely low job satisfaction. In the US, more than 35% of care assistants leave their jobs every year. By taking on some of the more routine, mundane work, robots could free carers to spend more time engaging with residents.

Of course, not everyone who is getting older or has a disability may need a robot. And there is already a range of affordable smart technology that can help people by controlling appliances with voice commands or notifying caregivers in the event of a fall or accident.

Credit: Trinity College Dublin

Smarter than smart

But for many people, this type of technology is still extremely limited. For example, how can someone with hearing problems use a conventional smart hub such as the Amazon Echo, a device that communicates exclusively through audio signals? What happens if someone falls and they are unable to press an emergency call button on a wearable device?

Stevie overcomes these problems because it can communicate in multiple ways. It can talk, make gestures, show facial expressions and display text on its screen. In this way, it follows the principles of universal design, because it is designed to adapt to the needs of the greatest possible number of users, not just the able majority.

We hope to have a version of Stevie ready to sell within two years. We still need to refine the design, decide on and develop new features, and make sure it complies with major regulations. All this needs to be guided by extensive user testing, so we are planning a range of pilots in Ireland, the UK and the US starting in summer 2018. This will help us achieve a major milestone on the road to developing robots that really do make our lives easier.

This article was originally published on The Conversation. Read the original article.

Three concerns about granting citizenship to robot Sophia https://robohub.org/three-concerns-about-granting-citizenship-to-robot-sophia/ Thu, 02 Nov 2017 18:11:01 +0000 http://robohub.org/three-concerns-about-granting-citizenship-to-robot-sophia/

Citizen Sophia. Flickr/AI for GOOD Global Summit, CC BY

I was surprised to hear that a robot named Sophia was granted citizenship by the Kingdom of Saudi Arabia.

The announcement last week followed the Kingdom’s commitment of US$500 billion to build a new city powered by robotics and renewables.

One of the most honourable concepts for a human being, to be a citizen and all that brings with it, has been given to a machine. As a professor who works daily on making AI and autonomous systems more trustworthy, I don’t believe human society is ready yet for citizen robots.

To grant a robot citizenship is a declaration of trust in a technology that I believe is not yet trustworthy. It brings social and ethical concerns that we as humans are not yet ready to manage.

https://youtu.be/03QduDcu5wc

Who is Sophia?

Sophia is a robot developed by the Hong Kong-based company Hanson Robotics. Sophia has a female face that can display emotions. Sophia speaks English. Sophia makes jokes. You could have a reasonably intelligent conversation with Sophia.

Sophia’s creator is Dr David Hanson, a 2007 PhD graduate from the University of Texas.

Sophia is reminiscent of “Johnny 5”, the robot who became the first robotic US citizen at the end of Short Circuit 2 (1988), the sequel to the 1986 movie Short Circuit. But Johnny 5 was a mere idea, something dreamt up by comic science fiction writers S. S. Wilson and Brent Maddock.

Did the writers imagine that in around 30 years their fiction would become a reality?

Risk to citizenship

Citizenship – in my opinion, the most honourable status a country grants to its people – is facing an existential risk.

As a researcher who advocates for designing autonomous systems that are trustworthy, I know the technology is not ready yet.

We have many challenges that we need to overcome before we can truly trust these systems. For example, we don’t yet have reliable mechanisms to assure us that these intelligent systems will always behave ethically and in accordance with our moral values, or to protect us against them taking a wrong action with catastrophic consequences.

Here are three reasons I think it is a premature decision to grant Sophia citizenship.

1. Defining identity

Citizenship is granted to a unique identity.

Each of us, humans I mean, possesses a unique signature that distinguishes us from any other human. When we get through customs without talking to a human, our identity is automatically established using an image of our face, iris and fingerprint. My PhD student establishes human identity by analysing humans’ brain waves.

What gives Sophia her identity? Her MAC address? A barcode, a unique skin mark, an audio mark in her voice, an electromagnetic signature similar to human brain waves?

These and other technological identity management protocols are all possible, but they do not establish Sophia’s identity – they can only establish hardware identity. What then is Sophia’s identity?

To me, identity is a multidimensional construct. It sits at the intersection of who we are biologically and cognitively, and as defined by every experience, culture and environment we have encountered. It’s not clear where Sophia fits in this description.

2. Legal rights

For the purposes of this article, let’s assume that Sophia the citizen robot is able to vote. But who is making the decision on voting day – Sophia or the manufacturer?

Presumably, Sophia the citizen is also “liable” to pay income taxes, because Sophia has a legal identity independent of its creator, the company.

Sophia must also have the right to equal protection under the law, like other citizens.

Consider this hypothetical scenario: a policeman sees Sophia and a woman each being attacked by a person. That policeman can only protect one of them: who should it be? Is it right if the policeman chooses Sophia because Sophia walks on wheels and has no skills for self-defence?

Today, the artificial intelligence (AI) community is still debating what principles should govern the design and use of AI, let alone what the laws should be.

The most recent list proposes 23 principles known as the Asilomar AI Principles. Examples of these include: Failure Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).

3. Social rights

Let’s talk about relationships and reproduction.

As a citizen, will Sophia, the humanoid emotional robot, be allowed to “marry” or “breed” if Sophia chooses to? Students from North Dakota State University have taken steps to create a robot that self-replicates using 3D printing technologies.

If more robots join Sophia as citizens of the world, perhaps they too could claim their rights to self-replicate into other robots. These robots would also become citizens. With no resource constraints on how many children each of these robots could have, they could easily exceed the human population of a nation.

As voting citizens, these robots could create societal change. Laws might change, and suddenly humans could find themselves in a place they hadn’t imagined.


This article was originally published on The Conversation. Read the original article.

Robots won’t steal our jobs if we put workers at center of AI revolution https://robohub.org/robots-wont-steal-our-jobs-if-we-put-workers-at-center-of-ai-revolution/ Fri, 01 Sep 2017 23:10:30 +0000 http://robohub.org/robots-wont-steal-our-jobs-if-we-put-workers-at-center-of-ai-revolution/


Future robots will work side by side with humans, just as they do today.
Credit: AP Photo/John Minchillo

By Thomas Kochan, MIT Sloan School of Management and Lee Dyer, Cornell University

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Lessons from history

There is no question that coming technologies like AI will eliminate some jobs, just as those of the past did.

The invention of the steam engine was supposed to reduce the number of manufacturing workers. Instead, their ranks soared.
Lewis Hine

More than half of the American workforce was involved in farming in the 1890s, back when it was a physically demanding, labor-intensive industry. Today, thanks to mechanization and the use of sophisticated data analytics to handle the operation of crops and cattle, fewer than 2 percent are in agriculture, yet their output is significantly higher.

But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.

So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.

It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.

Instead, let’s focus on the changes they’ll make to how people work.

It’s about tasks, not jobs

To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.

And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”

One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.

To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.

Creating an integrated strategy

We have been exploring these issues for years as part of our ongoing discussions on how to remake labor for the 21st century. In our recently published book, “Shaping the Future of Work: A Handbook for Change and a New Social Contract,” we describe why society needs an integrated strategy to gain control over how future technologies will affect work.

And that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.

Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.

The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.

An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.

In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.

Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.

Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that increase both productivity and the level of service by augmenting the tasks humans do.

Worker wisdom

But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.

And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.

Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.

None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.

Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.

In sum, companies that engage their workforce when they design and implement new technologies will be best-positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs better than anyone and the many tasks they entail, they will be better able to “give wisdom to the machines.”

Thomas Kochan, Professor of Management, MIT Sloan School of Management and Lee Dyer, Professor Emeritus of Human Resource Studies and Research Fellow, Center for Advanced Human Resource Studies (CAHRS), Cornell University

This article was originally published on The Conversation. Read the original article.

Does the next industrial revolution spell the end of manufacturing jobs? https://robohub.org/does-the-next-industrial-revolution-spell-the-end-of-manufacturing-jobs/ Tue, 25 Jul 2017 04:26:05 +0000 http://robohub.org/does-the-next-industrial-revolution-spell-the-end-of-manufacturing-jobs/

By Jeff Morgan, Trinity College Dublin

Robots have been taking our jobs since the 1960s. So why are politicians and business leaders only now becoming so worried about robots causing mass unemployment?

It comes down to the question of what a robot really is. While science fiction has often portrayed robots as androids carrying out tasks in much the same way as humans, the reality is that robots take much more specialised forms. Traditional 20th century robots were automated machines and robotic arms building cars in factories. Commercial 21st century robots are supermarket self-checkouts, automated guided warehouse vehicles, and even burger-flipping machines in fast-food restaurants.

Ultimately, humans haven’t become completely redundant because, while these robots may be very efficient, they’re also kind of dumb. They do not think, they just act, in very accurate but very limited ways. Humans are still needed to work around robots, doing the jobs the machines can’t and fixing them when they get stuck. But this is all set to change thanks to a new wave of smarter, better-value machines that can adapt to multiple tasks. This change will be so significant that it will create a new industrial revolution.

The fourth industrial revolution.
Christoph Roser, CC BY-SA

Industry 4.0

This era of “Industry 4.0” is being driven by the same technological advances that enable the capabilities of the smartphones in our pockets. It is a mix of low-cost and high-power computers, high-speed communication and artificial intelligence. This will produce smarter robots with better sensing and communication abilities that can adapt to different tasks, and even coordinate their work to meet demand without the input of humans.

In the manufacturing industry, where robots have arguably made the most headway of any sector, this will mean a dramatic shift from centralised to decentralised collaborative production. Traditional robots focused on single, fixed, high-speed operations and required a highly skilled human workforce to operate and maintain them. Industry 4.0 machines are flexible, collaborative and can operate more independently, which ultimately removes the need for a highly skilled workforce.

 

For large-scale manufacturers, Industry 4.0 means their robots will be able to sense their environment and communicate in an industrial network that can be run and monitored remotely. Each machine will produce large amounts of data that can be collectively studied using what is known as “big data” analysis. This will help identify ways to improve operating performance and production quality across the whole plant, for example by better predicting when maintenance is needed and automatically scheduling it.
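As a toy illustration of the kind of analysis described above, the sketch below learns a machine’s normal vibration envelope from a baseline window and flags maintenance when recent readings drift persistently outside it. The simulated sensor values and the 3-sigma threshold are assumptions made for this example, not a description of any specific Industry 4.0 product.

```python
# Illustrative threshold-based predictive maintenance on streaming sensor
# data; the simulated readings and thresholds are assumptions, not real data.
import numpy as np

rng = np.random.default_rng(0)

# Simulated vibration amplitude: stable operation, then a drift as a bearing wears.
healthy = rng.normal(loc=1.0, scale=0.05, size=500)
wearing = rng.normal(loc=1.4, scale=0.05, size=50)
readings = np.concatenate([healthy, wearing])

# Learn the "normal" operating envelope from an initial baseline window.
baseline = readings[:200]
upper_limit = baseline.mean() + 3 * baseline.std()

# Flag maintenance once most recent samples sit above the envelope.
window = 20
alerts = [
    i for i in range(window, len(readings))
    if np.mean(readings[i - window:i] > upper_limit) > 0.8
]

if alerts:
    print(f"Schedule maintenance: sustained drift detected at sample {alerts[0]}")
```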

For small-to-medium manufacturing businesses, Industry 4.0 will make it cheaper and easier to use robots. It will create machines that can be reconfigured to perform multiple jobs and adjusted to work on a more diverse product range and different production volumes. This sector is already beginning to benefit from reconfigurable robots designed to collaborate with human workers and analyse their own work to look for improvements, such as BAXTER, SR-TEX and CareSelect.

Helping hands.
Rethink Robotics

While these machines are getting smarter, they are still not as smart as us. Today’s industrial artificial intelligence operates at a narrow level: it gives the appearance of human intelligence exhibited by machines, but that intelligence is designed, in every detail, by humans.

What’s coming next is known as “deep learning”. Similar to big data analysis, it involves processing large quantities of data in real time to make decisions about what is the best action to take. The difference is that the machine learns from the data so it can improve its decision making. A perfect example of deep learning was demonstrated by Google’s AlphaGo software, which taught itself to beat the world’s greatest Go players.
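The distinction can be made concrete with a toy example: a fixed rule never changes, whereas a learning system updates its parameters from every new observation and so improves its decisions over time. The sketch below uses a single online logistic-regression unit rather than a deep network, and the data are invented, but the learning loop is the point.

```python
# Toy contrast between a fixed rule and a system that learns from data.
# A single online logistic-regression unit, not deep learning proper, but it
# shows decisions improving as data stream in. The data are invented.
import numpy as np

rng = np.random.default_rng(1)

# Hidden relationship the machine must discover from examples:
# accept a part when 2*temperature - 1*pressure > 0.
X = rng.normal(size=(1000, 2))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(float)

w = np.zeros(2)      # learned weights: the system starts knowing nothing
lr = 0.1
correct = 0
for x, target in zip(X, y):
    p = 1 / (1 + np.exp(-x @ w))      # current prediction
    correct += int((p > 0.5) == target)
    w += lr * (target - p) * x        # update: learn from this example

print("accuracy over the stream:", correct / len(X))
print("learned weights:", w)          # direction approaches the hidden rule
```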

The turning point in applying artificial intelligence to manufacturing could come with the application of special microchips called graphical processing units (GPUs). These enable deep learning to be applied to extremely large data sets at extremely fast speeds. But there is still some way to go and big industrial companies are recruiting vast numbers of scientists to further develop the technology.


Impact on industry

As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.

Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.

Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.

But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.

Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.

This article was originally published on The Conversation. Read the original article.

Asimov’s Laws won’t stop robots harming humans so we’ve developed a better solution https://robohub.org/asimovs-laws-wont-stop-robots-harming-humans-so-weve-developed-a-better-solution/ Mon, 17 Jul 2017 21:46:33 +0000 http://robohub.org/asimovs-laws-wont-stop-robots-harming-humans-so-weve-developed-a-better-solution/

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.

Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.


The Three Laws

Asimov’s Three Laws are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots.

One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioural goals, such as preventing harm to humans or protecting a robot’s existence, can mean different things in different contexts. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.

Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.

When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly “natural” ways. It typically only requires them to model how the real world works but doesn’t need any specialised artificial intelligence programming designed to deal with the particular scenario.

But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.
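In the information-theoretic formulation behind this work, empowerment is usually defined as the channel capacity between an agent’s possible action sequences and the states they lead to; in a deterministic world this reduces to the logarithm of the number of distinct states the agent can reach within a time horizon. The grid world below is a toy sketch of that idea under those assumptions, not code from the project.

```python
# Toy illustration of empowerment as "keeping options open". In a
# deterministic grid world, n-step empowerment reduces to log2 of the number
# of distinct states reachable within n steps. The grid is an assumption
# for illustration only.
from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]


def step(state, action, blocked, size):
    x, y = state[0] + action[0], state[1] + action[1]
    if (x, y) in blocked or not (0 <= x < size and 0 <= y < size):
        return state          # bumping into a wall leaves the state unchanged
    return (x, y)


def empowerment(state, blocked, size, horizon=2):
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a, blocked, size)
        reachable.add(s)
    return log2(len(reachable))


open_room = set()
corner_trap = {(0, 1), (1, 1), (1, 0)}   # walls boxing the agent into a corner

print("empowerment in open space:", empowerment((2, 2), open_room, 5))
print("empowerment when boxed in:", empowerment((0, 0), corner_trap, 5))
```

An empowerment-maximising robot would prefer the open room to the boxed-in corner, and, by the same measure applied to the human’s state, would avoid actions that box the human in.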


Robots could adapt to new situations

Using this general principle rather than predefined rules of behaviour would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule “don’t push humans”, a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn’t push them.

In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimise the overall harm to humans by keeping them confined and “protected”. But our principle would avoid such a scenario because it would mean a loss of human empowerment.

While empowerment provides a new way of thinking about safe robot behaviour, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots’ behaviour, and how to keep robots – in the most naive sense – “ethical”.

This article was originally published on The Conversation. Read the original article.

Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology https://robohub.org/helping-or-hacking-engineers-and-ethicists-must-work-together-on-brain-computer-interface-technology/ Thu, 22 Jun 2017 13:00:21 +0000 http://robohub.org/helping-or-hacking-engineers-and-ethicists-must-work-together-on-brain-computer-interface-technology/
A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.


Picking up on P300 signals

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “event-related potentials.” In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It’s a positive peak of electricity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.

For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.
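A single EEG trial is usually too noisy to show the P300 directly; the standard trick is to record many stimulus-locked epochs and average them, so that the time-locked peak survives while the noise cancels out. The simulation below is illustrative only: the waveform shape, amplitudes and noise level are invented, not recorded data.

```python
# Illustrative simulation: averaging stimulus-locked EEG epochs reveals a
# P300-like peak near 300 ms. All amplitudes and noise levels are invented.
import numpy as np

rng = np.random.default_rng(42)
fs = 250                               # sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)          # 0-800 ms after the stimulus

# A P300-like template: a positive bump centred near 300 ms, ~5 microvolts.
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# 100 "oddball" epochs: the small evoked response buried in larger noise.
epochs = p300 + rng.normal(scale=10e-6, size=(100, t.size))

single_trial = epochs[0]
average = epochs.mean(axis=0)          # noise shrinks roughly as 1/sqrt(N)

peak_ms = 1000 * t[np.argmax(average)]
print(f"averaged response peaks near {peak_ms:.0f} ms after the stimulus")
print(f"single-trial noise ({single_trial.std() * 1e6:.0f} uV) is larger than "
      f"the {p300.max() * 1e6:.0f} uV evoked response")
```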


Reading your mind using P300s

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.

It also can be used to determine what you know, in what’s called a “guilty knowledge test.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then shown many images repeatedly of both unrelated and related items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with the image of the item he took.

Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you know that you’re being probed and what the stimuli are.
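In practice, calibration often means estimating a subject-specific P300 template from labelled training epochs and then scoring new epochs against it. The correlation-based scorer below is a deliberately simple stand-in for the stronger classifiers real BCIs use, and the simulated signals are invented for illustration.

```python
# Sketch of subject-specific calibration: build a P300 template from labelled
# training epochs, then score new epochs by correlation with that template.
# Real systems use stronger classifiers (e.g. LDA); the data here are invented.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 0.8, 1 / 250)                       # 0-800 ms epochs
true_response = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))


def record_epochs(n, contains_p300):
    """Simulate n single-trial epochs, with or without the evoked response."""
    signal = true_response if contains_p300 else 0.0
    return signal + rng.normal(scale=10e-6, size=(n, t.size))


# --- calibration: labelled training data from this specific user ------------
train_target = record_epochs(80, True)      # epochs where the oddball appeared
train_other = record_epochs(80, False)      # epochs where it did not
template = train_target.mean(axis=0) - train_other.mean(axis=0)


def p300_score(epoch):
    # Correlation of a new epoch with the user's own calibrated template.
    return np.corrcoef(epoch, template)[0, 1]


# --- test: average a few repetitions (as P300 spellers do), then score ------
test_target = record_epochs(10, True).mean(axis=0)
test_other = record_epochs(10, False).mean(axis=0)
print("score for epochs containing the oddball:", round(p300_score(test_target), 2))
print("score for epochs without the oddball:   ", round(p300_score(test_other), 2))
```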

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealthy. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has collected data suggesting this is possible. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.

What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming.


With great power comes great responsibility

The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants.
Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their ethical concerns about brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.


Working on ethics while tech’s in its infancy

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.

Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing continue to improve, it will become easier to use devices like these continuously, and easier to extract personal information from an unknowing individual as well. The safest advice would be to not use these devices at all.

The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.

The truth is no stranger than fiction when it comes to robots https://robohub.org/the-truth-is-no-stranger-than-fiction-when-it-comes-to-robots/ Wed, 21 Aug 2013 19:57:50 +0000 http://robohub.org/?p=18519

Killer robots, a problem as old as voodoo. Source: x-ray delta one.

By Kathleen Richardson, University College London

Robots represent the cutting edge in science. For decades we have been promised a bright future in which these human-like machines will become so advanced that we won’t be able to tell the difference between them and us. But are technologists really dabbling in the unknown in their work or merely ripping a page out of their favourite sci-fi novel?

Robots existed in fiction long before science made them a reality. In the 1920s, Czech playwright Karel Čapek wanted to create a character that could reflect the dehumanisation of society, the obsession with production and the jubilant celebration of technological progress that often resulted in the horror of the battlefields.

Rossum’s Universal Robots. Source: fortinbras

Having already experimented with using different non-human characters like newts and salamanders to reflect on human life and existence, Čapek made “the Robot” a central character in his play R.U.R. (Rossum’s Universal Robots). The Robot was a particular kind of “other” who looked and acted like a human being but lacked something unique – feelings. It was not the product of a mother and father but of a production line. For Čapek, it seems, the robot is an inherently political character, a revolutionary even.

But even this was not the first time that artificial beings had been used by creative writers. The cultural narrative of creation goes back to a time when humans first began to craft objects from material things. Some of these objects were shaped to look like humans. Take the Venus figurines that date back at least 35,000 years, or dolls, which have long been more than just innocent playthings for children in some cultures. Dolls can be magical talismans and, for some, the miniature representation of the human form was a useful way to control the human adult it was supposed to represent. The particular ways in which humans are represented is culturally specific but the desire to represent is, and always has been, universal.

So what is the modern technology of robotics doing that is so different from all these fictional exercises in imitating the human form? The roboticists and technologists of today would have you believe that their work is grounded in scientific reality when they seek the next big breakthrough in artificial intelligence. Cyberneticians and futurologists make claims as if the issues they address were never before considered in human society. But they are in fact more swept up in fantasy than ever before.

All attempts to represent the human form tap into a timeless motivation to know who we are: the mystery of life, reproduction, childhood and attachments to other humans, animals and nature.

What is exciting about AI and technology is that these provide new ways of representing the human form. But the debate about what that means is so confused and ridiculous at times it can leave futurologists lost in their own fantasies. In the 1960s, Marvin Minsky was so optimistic about the new field of AI that he believed machines would outsmart human beings by the end of the 20th century. This is, in part, what inspired Arthur C. Clarke when he speculated about the future of intelligence in 2001: A Space Odyssey.

Ray Kurzweil is another case in point. In books such as The Singularity is Near: When Humans Transcend Biology, Kurzweil is forever predicting that we will merge with machines and be able to upload our “complete” consciousness into them. This idea is emerging as the next big challenge in robotics but it could equally be viewed as a basic feature of human cultural existence.

I’m “uploading” my consciousness right now into this article. A visual artist, when she paints, is also “uploading” her consciousness. Consciousness is just another way of saying psychic life – the life and impulses of the individual as a member of a family and collective. Arguably, any human being that has ever created anything has transferred aspects of their consciousness to artificial materials.

The fiction is now being created by the scientists. AI roboticists are given free rein to project any fantasy they like about their technology and how it will irrevocably change what it means to be human. We have been asking the same question since the beginning of time in different ways. The only difference now is that those building the robots and AI systems believe their work is unique rather than part of an ongoing process, and also stand to acquire a lot of money along the way.

Kathleen Richardson is affiliated with Department of Anthropology, University College London


This article was originally published at The Conversation.
Read the original article.
