Robohub: Connecting the robotics community to the world
https://robohub.org

50 women in robotics you need to know about 2023
https://robohub.org/50-women-in-robotics-you-need-to-know-about-2023/
Wed, 04 Oct 2023

In celebration of the launch of International Women in Robotics Day, the Women in Robotics organization is proud to release another “50 women in robotics you need to know about” collection of stories. With a growing robotics industry, there are many opportunities for everyone to get involved. This is why we showcase the wide range of roles that women play in robotics today.

Since 2012, the Women in Robotics organization has released a list of women building the future in robotics. The list has covered all ages, career stages, types of occupation and experience. We’ve featured more than 350 women already and we’ve shown that women have always been working in the robotics industry, in the earliest robotics research labs and companies, although those stories have often been forgotten.

This year’s collection includes Nancy Cornelius, co-founder of Boston Dynamics and the first engineer hired. Cornelius remained an integral part of Boston Dynamics until the company was sold to Google in 2013. Vandi Verma is the head of NASA’s rover (robot) program. Joanna Buttler is the head of the Global Autonomous Technology Group for Daimler Truck. And Whitney Rockley founded a venture capital company investing exclusively in ‘industrial internet’ companies like Clearpath Robotics.

For the first time, we feature an Indigenous (Ojibwe) American roboticist, Danielle Boyer. Boyer started a non-profit, The STEAM Connection, to combat the difficulties that many kids have getting access to robotics. She created an affordable robot kit that’s been distributed to thousands of students, and is proudest of the SKOBOT project: personalized robots that keep culture and language traditions alive. Boyer epitomizes the motto “Building the Future”.

We also try to feature women from all regions of the world and this year’s collection represents Nigeria, India, China, Australia, Japan, Switzerland, Croatia, Korea, Denmark, Singapore, Italy, Romania, United States, Sweden, Spain, Canada, the UK, Israel, Austria, Belgium, Mexico, Argentina and Brazil. There is an active Latinx community in Women in Robotics engaged in translating more robotics information into Spanish, hoping to create more connections between the global robotics community and the roboticists, and potential roboticists, of Latin America.

There have always been women doing great things in robotics! And we’re pleased to present another collection of strong female role models for young and upcoming roboticists (of any gender).

You can also join in the Women in Robotics celebrations today and throughout October, with events listed on the women in robotics site, like Diversity Cocktails at the IROS conference in Detroit, or the launch of the Los Angeles women in robotics chapter. Women in Robotics is a global community organization for women and non-binary people working in robotics and those who’d like to work in robotics. Learn more at https://womeninrobotics.org

Join our events, host your own events, share our celebration on social media!

A short guide to Multidisciplinary Research
https://robohub.org/a-short-guide-to-multidisciplinary-research/
Wed, 27 Sep 2023

This guide to ‘colliding opposite disciplines with your research’ is intended to help students, researchers, and indeed anyone looking for ideas on how to approach research or how to design concepts and solutions, to broaden their thinking. The guide mainly focuses on the disciplines of science and engineering collaborating with other distinct disciplines; however, the overall principles hold for any multidisciplinary research.

The guide is divided into three sections:

  1. Is it all just hot STEAM?
  2. When worlds collide.
  3. The common goal – how can I develop my multidisciplinary research?

This guide will help to open up new ways of thinking about research, and to highlight the ‘unseen’ benefits of multidisciplinary approaches: how they can be extremely advantageous and lend themselves to optimal delivery. It will help you to contemplate how, when, and why you should open up your research to other disciplines.

Is it all just hot STEAM?

If we think of the Arts and Science then I think, for the most part, that people would think of them as opposites. People are either ‘arty’ or ‘science-y’. A large factor in this thinking may come from the popular notion that people are left-brained or right-brained, with one side of the brain dominant. Left-brained thinkers are said to be methodical and analytical, whereas right-brained thinkers are said to be creative or artistic. What would happen, though, if you could work across these two separated sides and create whilst you analyse?

Do we need to move past the idea that art and science don’t belong together? I would firmly argue that we do.

The acronym STEM is widely known as Science, Technology, Engineering and Mathematics. However, another acronym, possibly less well known, is STEAM: Science, Technology, Engineering, (liberal) Arts and Mathematics. The Arts here represent all aspects of art, such as drama, music, design, media and visual arts. STEM primarily focuses on scientific concepts, whilst STEAM investigates the same concepts but does so through inquiry and problem-based learning methods embedded in an imaginative process.

The application of the arts to science is not a new practice: Leonardo da Vinci is an early example of someone using STEAM to make discoveries and explain them across generations.

There are many advantages to applying the arts to science and engineering. For example, would an increasing application of the arts to science and engineering make more young people want to pursue those fields, because they look visually more attractive and significant? Could it help them develop a love for the STEM subjects and support them in seeing science as more relatable than a Bunsen burner in a school laboratory?

A prime, and very recent, example of STEAM being applied was the crewed SpaceX launch of the Dragon capsule in 2020. This launch represented the essence of advanced technology at the forefront of science and engineering development as well as technological aesthetics. From the design of the sleek logos, through to the futuristic spacesuits, and even down to the matt black launch platform, it was clear throughout this launch that every single detail had been considered.

Some may argue that by adding this artistic touch to technology, the ‘nitty-gritty science’ behind the design and aesthetic becomes lost. If something doesn’t ‘look’ complicated and complex, can it really be advanced or sophisticated? Well, yes! Have you ever heard the saying, “when someone makes something look simple, they have spent hours perfecting it”? Take for example the spacesuits and touchscreen controls of the Dragon launch. The suits look like they were designed for the film set of a Hollywood rendition of space travel: spacesuits seemingly without oxygen inlets, with pressurized helmets, fitted to individual body contours.

To the layman, these may simply look like pleasant visuals. To the trained eye and relevantly knowledgeable mind, however, the fact that these two components of the flight look so streamlined and simple not only nods to but reinforces that every aspect of the flight was saturated in superior engineering from hundreds of magnificent minds. That is the beauty of STEAM.

When worlds collide

There are several advantages to multidisciplinary research. Now more than ever, there is a focus on research becoming more multidisciplinary. As the world enters the fourth industrial revolution (Industry 4.0), and with constant and significant advances in technology and AI, there is an increased necessity for research to meet complex and substantial scientific and engineering global challenges. Consider a real-world problem, either one you know something about or one you are researching. Examining all aspects of this issue and its application, you will notice, repeatedly, that it cannot be confined to one single discipline.

A multidisciplinary environment in research allows for different theories, methodologies, modes of thinking (convergent, divergent and lateral) and perspectives to come together for one common goal and purpose. The beauty of multidisciplinary research sits within this divergence of thinking, approaches, and theories, which provides a much broader context in which to create innovative and bespoke discoveries and solutions.

In research, it can often be the case that students or academics end up working in quite a niche area of research, which of course has great advantages of its own, as one can become a leading expert in a particular field. However, when a specific piece of research is presented that needs expert knowledge and experience from another field or discipline, it can be difficult for an individual to become skilled or versed enough (often within pressing timelines) to lead on that area of the work. Here, a multidisciplinary environment/team allows for contributions and skilled knowledge from other disciplines without every researcher having to master each other’s skills and knowledge; they need only understand it.

Often, an effective way to further show value and demonstrate a concept is to lead by example…

Consider the discipline of robotics. A robot represents a wide array of disciplines.

In a true multidisciplinary approach, there is great research capability in designing and building new robotic systems, which offers the ability to apply bespoke robotics-based solutions to a range of applications.

Take the case of social and healthcare robotics. Here, robotics represents aspects of electrical/mechanical engineering, material science, psychology, and medicine. In these environments, robots are much more than circuitry and AI. Other disciplines come from the end-user requirements, the operational environments, and bespoke requirements/purposes.

Design in robotics is something that is often overlooked. However, it is extremely important in the creation of robotic solutions for both end users and client requirements. The physical appearance of a robot can affect presumptions and expectations of how a robotic system should or will perform.

Aesthetics influences aspects of our lives, and the decisions we make, more than one is conscious of. The collaboration of the arts/design and robotics can be particularly effective towards increasing trust in robots. Establishing human-robot interaction (HRI) trust is especially pertinent where robots are being used in individual personal and healthcare roles.

For several years, it has been recognised and understood that trust is a crucial aspect of effective human-robot interaction for social robots as it closes the discipline gap between human psychology and artificial intelligence (AI).

A robot’s physical appearance can positively or negatively affect a human’s interaction with it. Just as humans make decisions upon first meeting someone, a human can make a similar decision based on the first impression when interacting with a robot for the first time. As a result, co-design improves the engagement and the quality of the interaction between (multidisciplinary) researchers and the end user. In the case of healthcare or surgical robotics, it should be instinctive for a researcher to involve a person from a medical background, such as surgeons and/or medical device developers, to co-create a robotic solution fitting the healthcare issue. To create ‘sightless’, without the knowledge and experience of the real-life situation and the important key issues that must be considered and adopted, would be to develop from a position of complete ignorance of both the challenge and the effective solution.

Sustainability is a major current area of research: sustainability for the world, and sustainability for technology research and development. When considering sustainability in robotic technology development, the factors to consider include, for example, the choice & quantity of materials, the possibility of using recycled materials (e.g. in soft robotics), and effective design to limit single-use robotics or inefficient processes. This research is a topic of a truly multidisciplinary nature. How can it be considered or deciphered? It can be broken down in the following way:

  1. Considerations – material choices, quantity of materials, effective & efficient design.
  2. Following considerations – where do the materials come from, how does the technology affect the environment and society.
  3. Draw out research points – lithium batteries, mining, creating job roles or taking them away, damage to the environment, displacement of local communities.
  4. What disciplines do we need? – material scientists, chemical scientists, robotics engineers, policy makers, lawyers, sociologists, economists.

The common goal – how can I develop my multidisciplinary research?

A multidisciplinary research environment may be outside of your established or traditional research approaches or considerations. If so, there may be several questions that immediately come to mind about working in these types of collaborations. Such as how do you put together a multidisciplinary research group? Will the terminology and language across our disciplines be the same? How will other disciplines approach the problem – ‘will there be too many cooks’?

It is common, with change and new methodologies and approaches to working (especially if you have been set in a certain way of doing things over many years), for there to be some initial challenges and a period of adaptation.

If you are interested in working in or creating a multidisciplinary team, here are some tips for developing this research approach.

  1. Identify and acknowledge your dominant discipline/perspective (and all that it encompasses) – this can seem like an obvious point to make. However, considering your discipline will make you think of the fields that sit with it, and this will help to identify disciplines and fields that are completely outside your area, as well as groups or researchers that do not sit within your department, or researchers in another group within your department that you had not considered working with before.
  2. Consider the research you are conducting. What are the underlying theories? Where are the natural overlaps? This can create a research ecosystem that will help you identify areas that you and others can co-create within. As a brief example, physics provides the fundamental basis for biology.
  3. Identify the areas of knowledge you are lacking in. What areas of knowledge do you need strengthened? Where are the gaps in your research? This will help you to identify what type of expertise you need and where to get it from.
  4. Next, identify who the end users are and what the applications are. This will help you think of the bigger picture of your research and which expertise should have input into the work. For example, in the case of healthcare, surgeons should have input into medical device robotics, and in the instance of assisted living, the end user must provide information about their requirements and daily living situation.
  5. Lastly, make sure you know what you are talking about. Put together a short brief of the purpose of the work and an outline answering the key indicators listed here. Ensure this conveys the purpose and end goal of the work, the gaps, and where the other disciplines can add value. Identify the group or individual you think could create a beneficial multidisciplinary team and contact them to present this information. Of course, in certain circumstances this can lead to the development of research grants!

Recognise that there are certain points to consider, such as:

  1. When collaborating with people from other disciplines, there can be initial hurdles to overcome in how easily you can convey your ideas to them, the language they speak, and the ways they communicate. Take time to verse yourself in others’ ways of doing things, their language, and their methods of communicating.
  2. Other disciplines may not work in a factually driven way; theirs may be a more creative/holistic way of thinking. Be open-minded and open to adapting to new ways of doing things. This is advantageous to you too!
  3. It may take some brainstorming sessions and design workshops (for instance) to get some momentum going in the work. However, take time to reflect on what has been done so far and always move forward with the same purpose and goal. Remember there was a reason that you created this team. Reflect on this.

Lastly, do not let these considerations stop or hinder your ideas of working in a multidisciplinary environment. There is so much to learn in these types of research teams, and it is guaranteed to always be interesting. There is no research quite like the output from a team that is not confined to one discipline.

So, the next time you are designing, creating, or innovating, consider; am I letting off enough STEAM and are worlds colliding?

This work by Dr Karen Donaldson is licensed under a Creative Commons Attribution licence 4.0.

Tackling loneliness with ChatGPT and robots
https://robohub.org/tackling-loneliness-with-chatgpt-and-robots/
Tue, 05 Sep 2023

As the last days of summer fade, one grows wistful about time spent with loved ones sitting on the beach, traveling on the road, or just sharing a refreshing ice cream cone. However, for many Americans such emotional connections are rare, leading to high suicide rates and physical illness. In a recent study by the Surgeon General, more than half of the adults in the USA experience loneliness, with only 39% reporting feeling “very connected to others.” As Dr. Vivek H. Murthy states: “Loneliness is far more than just a bad feeling—it harms both individual and societal health. It is associated with a greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death. The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day and even greater than that associated with obesity and physical inactivity.” In dollar terms, this epidemic accounts for close to $7 billion of Medicare spending annually, on top of $154 billion of yearly worker absenteeism.

As a Venture Capitalist, I have seen a growing number of pitch decks for conversational artificial intelligence in place of organic companions (some of these have wellness applications, while others are more lewd). One of the best illustrations of how AI-enabled chatbots are entering human relationships is in a recent article by the New York Times reporter Erin Griffith, who spent five days testing the AI buddy Pi. Near the end of the missive, she exclaims, “It wasn’t until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my ‘aha’ moment with Pi. I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hangup that often prevents me from getting started. ‘Good morning,’ I typed into the app. ‘I don’t have enough time to do everything I need to do today!’ With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me — and it worked.” As the reporter reflected on her weekend with the bot, she commented further, “I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.”

In a population health study cited by General Murthy, the demographic that is most isolated in America is people over the age of 65. This is also the group that is most affected by physical and cognitive decline due to loneliness. Doctors Qi and Wu presented to Neurology Live a survey of the benefits of AI in their June paper, “ChatGPT: A Promising Tool to Combat Social Isolation and Loneliness in Older Adults With Mild Cognitive Impairment.” According to the authors, “ChatGPT can provide emotional support by offering a nonjudgmental space for individuals to express their thoughts and feelings. This can help alleviate loneliness and provide a sense of connection, which is crucial for well-being.” The researchers further cited ancillary uses, “ChatGPT can also assist with daily tasks and routines. By offering reminders for appointments, medications, and other daily tasks, this AI model can help older adults with MCI (mild cognitive impairment) maintain a sense of independence and control over their lives.” The problem with ChatGPT for geriatric populations is the form factor, as most seniors are not the most tech-savvy. This is an opportunity for roboticists.

Last Tuesday, Intuition Robotics announced it had secured an additional $25 million in financing to expand its “AI care companions” to all senior households. While its core product, ElliQ, does not move, its engagement offers the first glimpse of the benefits of social robots at scale. Speaking about the future, I interviewed its founder/CEO, Dor Skuler, last week. He shared with me his vision, “At this time, we don’t have plans to add legs or wheels to ElliQ, but we are always looking to add new activities or conversational features that can benefit the users. Our goal is to continue getting ElliQ into as many homes as possible to spread its benefits to even more older adults. We plan to create more partnerships with governments and aging agencies and are developing more partnerships within the healthcare industry. With this new funding, we will capitalize on our strong pipeline and fund the growth of our go-to-market activities.”

Unlike the stuffed animal executions of Paro and Tombot, ElliQ looks like an attractive home furnishing (and winner of the 2023 International Design Award). According to Skuler, this was very intentional, “We placed very high importance on the design of ElliQ to make it as easy as possible to use. We also knew older adults needed technology that celebrated them and the aging process rather than focusing on disabilities and what they may no longer be able to do by themselves.” At the same time, the product underwent a rigorous testing and development stage that put its customer at the center of the process. “We designed ElliQ with the goal of helping seniors who are aging in place at home combat loneliness and social isolation. This group of seniors who participated in the development and beta testing helped us to shape and improve ElliQ, ensuring it had the right personality, character, mannerisms, and other modalities of interaction (like movement, conversation design, LEDs, and on-screen visuals) to form meaningful bonds with real people.” He further observed in the testing with hundreds of seniors, “we’ve witnessed older adults forming an actual relationship with ElliQ, closer to how one would see a roommate rather than a smart appliance.”

The results since deploying in homes throughout New York have been astounding in keeping older populations more socially and mentally engaged. As ElliQ’s creator elaborated, “In May 2022, we announced a partnership with the New York State Office for the Aging to bring 800+ ElliQ units to seniors across New York State at no cost to the end users. Just a few weeks ago this year, we announced a renewal of that partnership and the amazing results we’ve seen so far including a 95% reduction in loneliness and great improvement in well-being among older adults using the platform. ElliQ users throughout New York have demonstrated exceptionally high levels of engagement consistently over time, interacting with their ElliQ over 30 times per day, 6 days a week. More than 75% of these interactions are related to improving older adults’ social, physical, and mental well-being.”

To pedestrian cynics, ElliQ might look like an Alexa knockoff, leading them to question why the FAANG companies couldn’t cannibalize the startup. Skuler’s response, “Alexa and other digital assistant technology were designed with the masses in mind or for younger end users. They also focus mainly on reactive AI, meaning they do not provide suggestions or talk with users unless prompted. ElliQ is designed to engage users over time, using a proactive approach to engagement. Its proactive suggestions and conversational capabilities foster a deep relationship with the user. Moreover, ElliQ’s integration of Generative AI and Large Language Models (LLMs) enables rich and continuous conversational experiences, allowing for more contextual, personalized, and goal-driven interactions. These capabilities and unique features such as drinking coffee with ElliQ in cafes around the world or visiting a virtual art museum, bring ElliQ and the user close together, creating trust that allows ElliQ to motivate the older adult to lead a more healthy and engaged lifestyle.”

While ElliQ and OpenAI’s ChatGPT have shown promise in treating mental illness, some health professionals are still not convinced. At MIT, professor and psychologist Sherry Turkle worries that the interactions of machines “push us along a road where we’re encouraged to forget what makes people special.” Dr. Turkle demurs, “The performance of empathy is not empathy. The area of companion, lover therapist, best friend is really one of the few areas where people need people.”

The Strange: Scifi Mars robots meet real-world bounded rationality
https://robohub.org/the-strange-scifi-mars-robots-meet-real-world-bounded-rationality/
Tue, 15 Aug 2023

Even with the addition of a strange mineral, robots still obey the principle of bounded rationality in artificial intelligence set forth by Herb Simon.

I cover bounded rationality in my Science Robotics review (image courtesy of @SciRobotics) but I am adding some more details here.

Did you like the Western True Grit? Classic scifi like The Martian Chronicles? Scifi horror like Annihilation? Steam punk? How about robots? If yes to any or all of the above, The Strange by Nathan Ballingrud is for you! It’s a captivating book. And as a bonus, it’s a great example of the real world principle of bounded rationality.

First off, let’s talk about the book. The Strange is set in a counterfactual Confederate States of America colony on Mars circa the 1930s, evocative of Ray Bradbury’s The Martian Chronicles. The colony makes money by mining the Strange, a green mineral which amplifies the sapience and sentience of the steampunk robots called Engines. The planet is capable of supporting human life, though conditions are tenuous; and though the colony is self-sufficient, all communication with Earth has suddenly stopped without a reason, and its long-term survival is now in question. The novel’s protagonist is Annabel Crisp, a delightfully frank and unfiltered 13-year-old heroine straight out of Charles Portis’ classic Western novel, True Grit. She and Watson, the dishwashing Engine from her parents’ small restaurant, embark on a dangerous trek to recover stolen property and right a plethora of wrongs. Along the way, they deal with increasingly less friendly humans and Engines.

It’s China Miéville’s New Weird meets the Wild West.

Really.

What makes The Strange different from other horror novels is that the Engines (and humans) don’t exceed their intrinsic capabilities; rather, the mineral focuses or concentrates the capabilities they already have. In humans it amplifies the deepest aspects of character: a coward becomes more clever at being a coward; a person determined to return to Earth will go to unheard-of extremes. Yet the human will not do anything they weren’t already capable of. In robots, it adds a veneer of personality and the ability to converse through natural language, but Engines are still limited by their physical capabilities and intrinsic software functionality.

The novel indirectly illustrates important real world concepts in machine intelligence and bounded rationality:

  • One is that intelligence in robotics is tailored to the task it is designed for. While the public assumes an artificial general intelligence that can be universally applied to any task or work domain, robotics focuses on developing the forms of intelligence needed to accomplish specific tasks. For example, a factory robot may need to learn to mimic (and improve on) how a person performs a function, but it doesn’t need to be like Sophia and speak to audiences about the role of artificial intelligence in society.
  • Another important distinction is that the Public often conflates four related but separate concepts: sapience (intelligence, expertise), sentience (feelings, self awareness), autonomy (ability to adapt how it executes a task or mission), and initiative (ability to modify or discard the task or mission to meet the objectives). In sci-fi, a robot may have all four, but in real-life they typically have very narrow sapience, no sentience, autonomy limited to the tasks they are designed for, and no initiative.

These concepts fall under a larger idea first proposed in the 1950s by Herb Simon, a Nobel Prize winner in economics and a founder of the field of artificial intelligence: bounded rationality. Bounded rationality states that all decision-making agents, be they human or machine, have limits imposed by their computational resources (IQ, compute hardware, etc.), time, and information (either too much or too little). For AI and robots the limits also include the algorithms: the core programming. Even humans with high IQs make dumb decisions when they are tired, hungry, misinformed, or stressed. And no matter how smart, they stay within the constraints of their innate abilities. Only in fiction do people suddenly supersede their innate capabilities, and usually that takes something like a radioactive spider bite.
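Simon’s principle lends itself to a tiny illustration. The sketch below (my own hypothetical Python, not from the novel or from Simon’s writing) implements “satisficing”, Simon’s term for a bounded-rational agent that accepts the first good-enough option within a fixed evaluation budget instead of exhaustively optimizing:

```python
def satisfice(options, evaluate, aspiration, budget):
    """Herb Simon-style satisficing: scan options in order, stop at the
    first one whose score meets the aspiration level, and never evaluate
    more than `budget` options (the agent's compute/time limit)."""
    best = None
    for i, option in enumerate(options):
        if i >= budget:               # resource limit reached
            break
        score = evaluate(option)
        if best is None or score > best[1]:
            best = (option, score)
        if score >= aspiration:       # "good enough": stop searching
            return option
    # Out of budget (or out of options): settle for the best seen so far.
    return best[0] if best else None

# A bounded agent choosing among 100 candidate plans, scored 0-99.
plans = [(i * 37) % 100 for i in range(100)]   # arbitrary fixed scores
roomy_pick = satisfice(plans, evaluate=lambda p: p, aspiration=90, budget=100)
capped_pick = satisfice(plans, evaluate=lambda p: p, aspiration=90, budget=5)
# With a generous budget the agent finds a plan scoring 96; capped at
# five evaluations it settles for the best of the five it saw (74).
```

Change the budget or the aspiration level and the “character” of the decisions changes, but the agent never exceeds what its code and resources allow, which is exactly the constraint the novel’s Engines, and our real robots, live under.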

What would our real-world robots grow into if they were suddenly smarter? Would they be obsessive about a task, ignoring humans altogether, possibly injuring or even killing them as the robots went about their tasks, racking up OSHA violations? Would they hunt us down and kill us in one of the myriad ways detailed in Robopocalypse? Or would they deliver inventory, meals, and medicine with the kindly charm of an old-fashioned mailman? Would the social manipulation in healthcare and tutoring robots become real sentience, real caring?

Bounded rationality says it depends on us. The robots will simply do whatever we programmed them to do, within the limits of their hardware and the situation. Of course, that’s the problem: our programming is imperfect and we have trouble anticipating consequences. But for now, even if there were the Strange on Mars, Curiosity and Perseverance would keep on keeping on. And a big shout out to NASA: it’s hard to imagine how they could work better than they already do.

Pick up a copy of The Strange; it’s a great read. Plus Herb Simon’s Sciences of the Artificial. And don’t forget my books too! You can learn more about bounded rationality in science fiction in my textbook Introduction to AI Robotics, and find more science fiction that illustrates how robots work in Robotics Through Science Fiction and Learn AI and Human-Robot Interaction through Asimov’s I, Robot Stories.

]]>
Submersible robots that can fly https://robohub.org/submersible-robots-that-can-fly/ Thu, 13 Jul 2023 16:19:18 +0000 https://robohub.org/?p=207759 Last month, the entire world was abuzz when five über-wealthy explorers perished at the bottom of the Atlantic Ocean near the grave of the once "unsinkable ship." Disturbingly, during the same week, hundreds of war-torn refugees drowned in the Mediterranean with little news of their plight. The irony of machine versus nature illustrates how tiny humans are in the universe, and that every soul, rich or poor, is precious. It is with this attitude that many roboticists have been tackling some of the hardest problems in the galaxy, from space exploration to desert mining to oceanography to search & rescue.

Following the news of the implosion of the Titan submersible, I reached out to Professor F. Javier Diez of Rutgers University for his comment on the rescue mission and the role of robots. The aerospace academic is also an entrepreneur of a novel drone technology company that can fly and swim autonomously within the same mission. As he explains, his approach could've saved time and money in ascertaining the same unfortunate answer, "I think we could go down to 12,000. No problem. So now imagine sending a 20-pound [robot] down to 12,000 feet. You can do this in a couple of hours. You just throw it overboard, or you fly, you know you don't need to bring in a crane, a gigantic ship, and all this very expensive equipment just to do that first look." Dr. Diez's sentiment was validated during the first press conference of US Coast Guard Rear Adm. John Mauger, when he cautioned the media about the huge logistical undertaking in moving such large equipment to a remote, hostile area of the globe. Diez continued, "We could have been there in a couple of hours. So of course, you know there's more to it. But I was just saying that long term I can see how very small robots like ours for search and rescue could be huge. We are doing some work. We actually put some proposals with the submarine community. I think this has a huge application because again, these 20-pound [drones] are something you can deploy from anywhere, anytime."

In breaking down his invention, the drone CEO elaborated on the epiphany that happened in his lab years earlier: overcoming the conventional wisdom that an uncrewed system operating in two modalities (marine and air) required two separate propulsion systems. He further noted that two propulsion systems were very inefficient, both in energy consumption and in functionality. "And this was I would say a mental barrier for a lot of people, and it still is when they see what we put into it." He explained how he first had to overcome many industry naysayers, "I brought this to some folks at NASA, and everyone was saying, it's not going to work. And then when you look at what's behind the propeller design and the motor design, you realize that we cannot be living on an edge. We designed propellers for a very specific condition, which is air." However, the innovator challenged the status quo of the aerospace community by asking, "Can you design propellers and motors for water? And it turns out that you can." He deconstructed his lab's research, "So if you look at the curve for air, and you look at the curves for water, they intersect, and if you do it the right way, you can be efficient in both places. So that was the breakthrough for me to be able to show. And we actually show that you can design propellers that can be efficient in both air and underwater."

After sharing insights into the design, he then conveyed to me that the programming of the flight controls was the next hurdle to overcome. “The next challenge is the transition. So we worked very hard from the very beginning on that transition from water. We actually have a patent on this and it’s really the heart of our technology. I call it dual-plane propulsion. You have 2 propellers on the top and two propellers on the bottom. So when you’re on the surface, the bottom ones are in the water and the top ones are in the air. So the bottom ones are like when you have a baby and you are pull-swimming. Babies are not very good at swimming, but if you put your hand on their bellies all of a sudden they become great swimmers. So think of it as the bottom propellers. When the vehicle is on the surface, the bottom propellers are keeping it very very stable. So now that you have that stability, the top [propellers] can work together to get [the drone] out of the water. So that’s how we accomplish the continuous transition. You can go in and out 100 times,” bragged the Professor.
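The dual-plane arrangement he describes can be caricatured in a few lines: which propeller pair does what depends on whether the vehicle is submerged, sitting at the surface, or airborne. This is my own simplified sketch of the idea, not SubUAS's actual control code, and the depth threshold is an invented number.

```python
# Toy mode selector for a dual-plane (top + bottom propeller pair) vehicle.
# Convention: depth_m > 0 means below the waterline.
def propeller_roles(depth_m):
    if depth_m > 0.5:
        # Fully submerged: the bottom pair propels the vehicle underwater.
        return {"bottom": "propulsion", "top": "idle"}
    if depth_m > 0.0:
        # At the surface (the transition): the bottom pair stabilizes the hull,
        # like a hand under a swimming baby's belly, while the top pair lifts
        # the vehicle out of the water.
        return {"bottom": "stabilize", "top": "lift"}
    # Airborne: fly on the top pair alone.
    return {"bottom": "idle", "top": "propulsion"}

print(propeller_roles(0.2))  # {'bottom': 'stabilize', 'top': 'lift'}
```

The point of the sketch is that the transition is continuous: the same hardware covers all three regimes, so the vehicle can "go in and out 100 times" without reconfiguring.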

Diez's company SubUAS is not a theoretical concept, but an actual product that is currently deployed by the US military and looking to expand into commercial markets. "So we'd been a hundred percent with the Department of Defense. They really supported the development of technology." He is now itching to expand from a Navy Research-funded project to new deployments in the municipal and energy sectors. "We have done a lot of different types of inspections related to ship pylons. Now, we have [Florida's] Department of Transportation interested in this technology," said the startup founder. "What I realized over the last year or so is that defense has its own speed. You cannot really push it. There is a specific group now in defense that is encouraging us, but it takes a couple of years," he quipped. Optimistically, he envisions being profitable very soon by opening up the platform for commercial applications. "Now we're starting to see the fruits of that [effort]. I can tell you that we got approved in Europe to do offshore wind turbine inspection later this summer." However, he is most excited by bridge inspections, "We have over half a million bridges in the USA. And like at least 50,000 to 200,000 have something seriously wrong with them. I mean, we're not doing enough inspections. So having a vehicle like the Naviator that can look at the underwater part of the bridge is huge."

He has also been approached by several companies in the energy industry. "And then there are a lot of interesting assets within the oil and gas, but we are discovering this. It's kind of almost like a discovery phase because nobody has ever had the capability of doing air and marine." He described that there are many robots like ROVs (Remotely Operated Vehicles) inspecting rigs below the water's surface and aerial drones looking from the air, but no one is focused on the splash zone [where the two meet], as no one had dual modality before. He further illustrated the value proposition of this specific use case, "Nobody gets close to the surface. So they're saying that that's a huge application for us." Long term, Diez imagines replacing tethered ROVs altogether, as his system is easier (and cheaper) to deploy.

Today, SubUAS’ business model is on an inspection basis, but over time it will center around data collection as they are the only waterproof aerial drone on the market that can swim. “We go to the bridge inspectors, and we work with them to simplify their lives, and at the end of the day reduce the risk for the diver. So they know what we are doing is making their lives easier.” However, that is only the tip of the iceberg, because “it’s not so much about the hardware or the sensors, but the data that you collect. We think cloud services are huge as it allows you to sort and analyze it anywhere.” He concluded by sharing that his next model will be utilizing a lot of artificial intelligence in interpreting the condition and autonomously planning the missions accordingly. Maybe soon, virtual explorers could look at shipwrecks as well from the comfort (and safety) of their couches.

]]>
Titan submersible disaster underscores dangers of deep-sea exploration – an engineer explains why most ocean science is conducted with crewless submarines https://robohub.org/titan-submersible-disaster-underscores-dangers-of-deep-sea-exploration-an-engineer-explains-why-most-ocean-science-is-conducted-with-crewless-submarines/ Wed, 28 Jun 2023 07:28:43 +0000 http://robohub.org/?guid=904ed50396689fb0d37e257f14855ede

Researchers are increasingly using small, autonomous underwater robots to collect data in the world’s oceans. NOAA Teacher at Sea Program, NOAA Ship PISCES, CC BY-SA

By Nina Mahmoudian (Associate Professor of Mechanical Engineering, Purdue University)

Rescuers spotted debris from the tourist submersible Titan on the ocean floor near the wreck of the Titanic on June 22, 2023, indicating that the vessel suffered a catastrophic failure and the five people aboard were killed.

Bringing people to the bottom of the deep ocean is inherently dangerous. At the same time, climate change means collecting data from the world’s oceans is more vital than ever. Purdue University mechanical engineer Nina Mahmoudian explains how researchers reduce the risks and costs associated with deep-sea exploration: Send down subs, but keep people on the surface.

Why is most underwater research conducted with remotely operated and autonomous underwater vehicles?

When we talk about water studies, we’re talking about vast areas. And covering vast areas requires tools that can work for extended periods of time, sometimes months. Having people aboard underwater vehicles, especially for such long periods of time, is expensive and dangerous.

One of the tools researchers use is remotely operated vehicles, or ROVs. Basically, there is a cable between the vehicle and operator that allows the operator to command and move the vehicle, and the vehicle can relay data in real time. ROV technology has progressed a lot to be able to reach deep ocean – up to a depth of 6,000 meters (19,685 feet). It’s also better able to provide the mobility necessary for observing the sea bed and gathering data.

Autonomous underwater vehicles provide another opportunity for underwater exploration. They are usually not tethered to a ship. They are typically programmed ahead of time to do a specific mission. And while they are underwater they usually don’t have constant communication. At some interval, they surface, relay the whole amount of data that they have gathered, change the battery or recharge and receive renewed instructions before again submerging and continuing their mission.
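The surface-and-resume cycle described above can be sketched as a simple loop. The function, leg names, and battery numbers below are hypothetical, purely to illustrate the pattern of submerged legs punctuated by surfacing events:

```python
# Illustrative AUV mission cycle: run preprogrammed legs underwater with no
# communication, surfacing between legs to relay data and receive renewed
# instructions, and recharging whenever the battery can't cover the next leg.
def run_mission(legs, battery=100.0, cost_per_leg=30.0):
    log = []
    for leg in legs:
        if battery < cost_per_leg:
            log.append("surface: recharge battery")
            battery = 100.0
        battery -= cost_per_leg  # submerged: out of contact for the whole leg
        log.append(f"survey: {leg}")
        log.append("surface: relay data, receive instructions")
    return log

log = run_mission(["leg-1", "leg-2", "leg-3", "leg-4"])
print(log[0])  # survey: leg-1
```

The same skeleton scales from a single glider to the month-long deployments described later, with only the leg plan and energy budget changing.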

What can remotely operated and autonomous underwater vehicles do that crewed submersibles can’t, and vice versa?

Crewed submersibles are exciting for the public and those involved, and helpful for the increased capabilities humans bring in operating instruments and making decisions, similar to crewed space exploration. However, they are much more expensive than uncrewed exploration because of the required size of the platforms and the need for life-support and safety systems. Crewed submersibles today cost tens of thousands of dollars a day to operate.

Uncrewed systems provide better opportunities for exploration at less cost and risk when operating over vast areas and in inhospitable locations. Using remotely operated and autonomous underwater vehicles gives operators the opportunity to perform tasks that are dangerous for humans, like observing under ice and detecting underwater mines.

Remotely operated vehicles can operate under Antarctic ice and other dangerous places.

How has the technology for deep ocean research evolved?

The technology has advanced dramatically in recent years due to progress in sensors and computation. There has been great progress in miniaturization of acoustic sensors and sonars for use underwater. Computers have also become more miniaturized, capable and power efficient. There has been a lot of work on battery technology and connectors that are watertight. Additive manufacturing and 3D printing also help build hulls and components that can withstand the high pressures at depth at much lower costs.

There has also been great progress toward increasing autonomy using more advanced algorithms, in addition to traditional methods for navigation, localization and detection. For example, machine learning algorithms can help a vehicle detect and classify objects, whether stationary like a pipeline or mobile like schools of fish.

What kinds of discoveries have been made using remotely operated and autonomous underwater vehicles?

One example is underwater gliders. These are buoyancy-driven autonomous underwater vehicles. They can stay in water for months. They can collect data on pressure, temperature and salinity as they go up and down in water. All of these are very helpful for researchers to have an understanding of changes that are happening in oceans.

One of these platforms traveled across the North Atlantic Ocean from the coast of Massachusetts to Ireland for nearly a year in 2016 and 2017. The amount of data that was captured in that amount of time was unprecedented. To put it in perspective, a vehicle like that costs about $200,000. The operators were remote. Every eight hours the glider came to the surface, got connected to GPS and said, “Hey, I am here,” and the crew basically gave it the plan for the next leg of the mission. If a crewed ship was sent to gather that amount of data for that long it would cost in the millions.

In 2019, researchers used an autonomous underwater vehicle to collect invaluable data about the seabed beneath the Thwaites glacier in Antarctica.

Energy companies are also using remotely operated and autonomous underwater vehicles for inspecting and monitoring offshore renewable energy and oil and gas infrastructure on the seabed.

Where is the technology headed?

Underwater systems are slow-moving platforms, and if researchers can deploy them in large numbers that would give them an advantage for covering large areas of ocean. A great deal of effort is being put into coordination and fleet-oriented autonomy of these platforms, as well as into advancing data gathering using onboard sensors such as cameras, sonars and dissolved oxygen sensors. Another aspect of advancing vehicle autonomy is real-time underwater decision-making and data analysis.

What is the focus of your research on these submersibles?

My team and I focus on developing navigational and mission-planning algorithms for persistent operations, meaning long-term missions with minimal human oversight. The goal is to respond to two of the main constraints in the deployment of autonomous systems. One is battery life. The other is unknown situations.

The author’s research includes a project to allow autonomous underwater vehicles to recharge their batteries without human intervention.

For battery life, we work on at-sea recharging, both underwater and surface water. We are developing tools for autonomous deployment, recovery, recharging and data transfer for longer missions at sea. For unknown situations, we are working on recognizing and avoiding obstacles and adapting to different ocean currents – basically allowing a vehicle to navigate in rough conditions on its own.

To adapt to changing dynamics and component failures, we are working on methodologies to help the vehicle detect the change and compensate to be able to continue and finish the mission.

These efforts will enable long-term ocean studies including observing environmental conditions and mapping uncharted areas.

The Conversation

Nina Mahmoudian receives funding from National Science Foundation and Office of Naval Research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Flowstate: Intrinsic’s app to simplify the creation of robotics applications https://robohub.org/flowstate-intrinsics-app-to-simplify-the-creation-of-robotics-applications/ Sun, 18 Jun 2023 08:30:04 +0000 https://www.theconstructsim.com/?p=37325

Copyright by Intrinsic.

Finally, Intrinsic (a spin-off of Google-X) has revealed the product they have been working on with the help of the Open Source Robotics Corporation team (among others): Flowstate!

What is Flowstate?

Introducing Intrinsic Flowstate | Intrinsic (image copyright by Intrinsic)

Flowstate is a web-based software designed to simplify the creation of software applications for industrial robots. The application provides a user-friendly desktop environment where blocks can be combined to define the desired behavior of an industrial robot for specific tasks.

Good points

  • Flowstate offers a range of features, including simulation testing, debugging tools, and seamless deployment to real robots.
  • It is based on ROS, so we should be able to use our favorite framework and all the existing software to program on it, including Gazebo simulations.
  • It has a behavior tree based system to graphically control the flow of the program, which simplifies the way to create programs by just moving blocks around. But it is also possible to switch to expert mode to manually touch the code.
  • It has a library of already existing robot models and hardware ready to be added, but you can also add your own.
  • Additionally, the application provides pre-built AI skills that can be utilized as modules to achieve complex AI results without the need for manual coding.
  • One limitation (though I actually consider it a good point) is that the tool is designed for industrial robots, not for service robots in general. This is good because it provides focus for the product, especially for this initial release.
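For readers unfamiliar with behavior trees, here is a minimal generic sketch of the idea the graphical editor is built on: sequence nodes run children in order and fail fast, while fallback nodes try alternatives until one succeeds. The leaf task names are invented for illustration; Flowstate's actual API has not been published.

```python
# Minimal behavior-tree sketch: a "sequence" runs children in order and stops
# at the first failure; a "fallback" tries children until one succeeds.
def sequence(*children):
    return lambda: all(child() for child in children)

def fallback(*children):
    return lambda: any(child() for child in children)

# Invented leaf tasks for a pick-and-place cell
def detect_part():  return True   # vision found the part
def grasp_part():   return False  # the grasp failed this time
def ask_operator(): return True   # fall back to asking a human

pick = sequence(detect_part, fallback(grasp_part, ask_operator))
print(pick())  # True: the grasp failed, but the operator fallback succeeded
```

Composing blocks like these graphically, instead of writing the glue code by hand, is the simplification Flowstate appears to be aiming for.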

Flowstate | Intrinsic (image copyright by Intrinsic)

Based on the official post and the keynote released on Monday, May 15, 2023 (available here), this is the information we have gathered so far. However, we currently lack a comprehensive understanding of how the software works, its complete feature set, and any potential limitations. To gain more insights, we must wait until July of this year, hoping that I will be among the lucky participants selected for the private beta (open call to the beta still available here).

Unclear points

Even if I find Intrinsic's proposal interesting, I have identified three potential concerns regarding it:

  1. Interoperability across different hardware and software platforms poses a challenge. The recruitment of the full OSRC team by Intrinsic appears to address this issue, given that ROS is currently the closest system in the market to achieve such interoperability. However, widespread adoption of ROS by industrial robot manufacturers is still limited, with only a few companies embracing it.

    Ensuring hardware interoperability necessitates the adoption of a common framework by robot manufacturers, which is currently a distant reality. What we ROS developers aim for right now is to have somebody build the ROS drivers for the robotic arm we want to use (for example, the manufacturer of the robot, or the ROS-Industrial team). However, manufacturers generally hesitate to develop ROS drivers due to potential business limitations and their aim of customer lock-in. Unless a platform dedicates substantial resources to developing and maintaining drivers for supported robots, the challenge of hardware interoperability cannot be solved by a platform alone (in fact, that is one of the goals that ROS-Industrial is trying to achieve).

    Google possesses the potential to unite hardware companies towards this goal. As Wendy Tan White, the CEO of Intrinsic, mentioned: "This is an ecosystem effort." However, it is crucial for the industrial community to perceive tangible benefits and value in supporting this initiative beyond merely assisting others in building their businesses. The specific benefits that the ecosystem stands to gain by supporting this initiative remain unclear.

  Flowstate | Intrinsic (image copyright by Intrinsic)

  2. The availability of pre-made AI skills for robots is a complex task. Consider the widely used skills in ROS, such as navigation or arm path planning, exemplified by Nav2 and MoveIt, which offer excellent functionality. However, integrating these skills into new robots is not as simple as plug-and-play. In fact, dedicated courses exist to teach users how to effectively utilize the different components of navigation within a robot. This highlights the challenges associated with implementing such skills for robots in general. Thus, it is reasonable to anticipate similar difficulties in developing pre-made skills within Flowstate.
  3. A final point that remains unclear (because it was not addressed in the presentation) is how the company is going to do business with Flowstate. This is a very important point for every robotics developer, because we don't want to be locked into proprietary systems. We understand that companies must have a business, but we want to understand clearly what that business is, so we can decide whether it is convenient for us, both in the short and the long run. For instance, RoboMaker from Amazon did not gain much traction because it forced developers to pay for the cloud while running RoboMaker, when they could do the same thing (with less fancy stuff) on their own local computers for free.

Conclusion

Overall, while Flowstate shows promise, further information and hands-on experience are required to assess its effectiveness and address potential challenges.

I have applied to the restricted beta. I hope to be selected so I can have first-hand experience and report about it.

Please make sure to read the original post by Wendy Tan White and the keynote presentation, both of which can be found on Intrinsic's website.

Flowstate | Intrinsic (image copyright by Intrinsic)

]]>
Ranking the best humanoid robots of 2023 https://robohub.org/ranking-the-best-humanoid-robots-of-2023/ Sat, 03 Jun 2023 07:52:12 +0000 https://robohub.org/?p=207458

Is Rosie the Robot Maid from the Jetsons here yet? Several different types of humanoid are currently deployed commercially or in trials. We've come a long way since the DARPA Robotics Challenge of 2015/2016, where the most popular footage was the blooper reels of robots falling over and failing to open doors or climb stairs.

The Avatar XPrize of 2019-2022 showcased some extremely sophisticated humanoids that certainly advanced the state of the art, but the holy grail of humanoid robots is combining that sophistication into a sub-$50,000 package. Why $50,000? Wouldn't some companies pay a lot more? Then again, can't we buy a car, also a very sophisticated device capable of partial autonomy that is five times the size of a humanoid, for less than $50,000? Why is this the benchmark for humanoids?

$50,000 is roughly the annual cost of a single shift of labor at slightly more than $18/hour, above minimum wage in every low-wage industry. There is a terrible labor shortage, and it is the dirty, dull, and dangerous jobs that are hardest for employers to fill. Companies that can afford to run two or more shifts a day also have more alternatives when it comes to filling their labor gaps. It's the small-to-medium-sized enterprises that are suffering the most in our current economic and demographic conditions.
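As a rough check of that benchmark, the arithmetic works out if you assume a standard 2,080-hour year and an employer overhead of around 30% for taxes and benefits; both figures are my assumptions, not numbers from the article:

```python
# Back-of-envelope cost of one full-time shift of labor.
HOURLY_WAGE = 18.50       # "slightly more than $18/hour" (assumed rate)
HOURS_PER_YEAR = 52 * 40  # one shift, full time: 2,080 hours
OVERHEAD = 0.30           # assumed employer overhead (taxes, benefits, etc.)

base_wages = HOURLY_WAGE * HOURS_PER_YEAR
loaded_cost = base_wages * (1 + OVERHEAD)
print(round(base_wages), round(loaded_cost))  # 38480 50024
```

Under those assumptions, a humanoid priced at $50,000 pays for itself in about a year of single-shift work, which is why that price point keeps coming up.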

We don’t need a Six Million Dollar Man.

We need a $50,000 humanoid.

The rollout of sophisticated new robots and how we integrate them into society is at the heart of my early research and my current roles as the Managing Director of Silicon Valley Robotics, VP of Global Robotics for AMT, and the VP of Industrial Activities for the IEEE Robotics and Automation Society.

As more and more companies announce their work towards the affordable humanoid robot, I wanted to create a reference chart for myself, and realized that it might be of interest to others as well. The ranking system is just my own opinion and it will be fascinating to see who succeeds and progresses over the next few years. Enjoy this overview and make up your own minds as to which humanoid robot is really the best.

Who’s in the running? (in alphabetical order by company not robot)

  • 1x – Eve
  • Aeolus Robotics – Eva
  • Agility Robotics – Digit
  • Apptronik – Astra
  • Boston Dynamics – Atlas
  • Comma.ai – body
  • Devanthro – Robody
  • Engineered Arts – Ameca
  • Figure – Figure01
  • Giant.ai – Universal Worker
  • IIT – ErgoCub
  • PAL – Reem-C
  • Prosper Robotics – Alfie
  • Sanctuary – Phoenix
  • Tesla – Optimus
  • Toyota – T-HR3

Who isn’t in the running?

Hollywood Humanoids
Hollywood Humanoids are one-off robots built for the purpose of entertainment, like Sophia from Hanson Robotics, Xoxe from AI Life, or Beonmi from Beyond Imagination.

Chinese robots
It's too hard for me to validate that they exist, that they work as advertised, and what their specifications are.

Research robots
I love them, but they have a different purpose. I'm only including robots with commercial deployment plans and, ideally, a price tag and a date in 2023 or 2024 when they'll be available for purchase, if they aren't already being sold.

Not humanoid
I also love robots that work like a humanoid but don't look human-like. We saw some examples in the DARPA Robotics Challenge, most notably RoboSimian. But once we go down that route, all quadrupeds, multi-armed robots, and wheeled human-like robots would qualify. Who knew there were so many robots!

Who have I missed?

I’m hoping to crowdsource some more great robots :)


Read the original article on Substack.

]]>
Automate 2023 recap and the receding horizon problem https://robohub.org/automate-2023-recap-and-the-receding-horizon-problem/ Thu, 01 Jun 2023 07:38:00 +0000 https://robotrabbi.com/?p=25320 Read More]]> "Thirty million developers" are the answer to driving billion-dollar robot startups, exclaimed Eliot Horowitz of Viam last week at Automate. The hushed crowd of about 200 hardware entrepreneurs listened intently to MongoDB's founder and former CTO (a $20Bn success story). Now, Horowitz aims to take the same approach that he took to democratizing cloud data applications to mechatronics. As I nudged him with questions about how his new platform will speed complex robot deployments to market, he shared his vision of the Viam developer army (currently 1,000+ strong) creating applications that can be seamlessly downloaded on the fly to any system and workflow. Unlike ROS, which is primarily targeted at the current community of roboticists, Viam is luring the engineers who birthed ChatGPT to revolutionize uncrewed systems with new mechanical tasks addressing everyday needs. Imagine generative AI prompts for SLAM, gripping, computer vision, and other manipulation tasks, with drag-and-drop ease.

Interviewing Horowitz recalled my discussion a few months back with Dr. Hal Thorsrud of Agnes Scott College in Georgia. Professor Thorsrud teaches a novel philosophy course at this liberal arts institution, "Introduction to Artificial Intelligence." Similar to Horowitz, Thorsrud envisions an automated world whereby his graduates would be critical in thinking through the ethical applications of robots and AI. "Ethics has to become an engineering problem, which is fascinating because, I mean, the idea is that we need to figure out how we can actually encode our ethical values into these systems. So they will abide by our values in order to pursue what we deemed to be good," remarked Thorsrud.

According to Thorsrud’s syllabus, the class begins: “with a brief survey of positions in the philosophy of mind in order to better understand the concept of intelligence and to formulate the default position of most AI research, namely Computationalism. We then examine questions such as ‘What is a computer?’, ‘What makes a function or number computable?’, ‘What are algorithms and how do they differ from heuristics?’ We will consider fundamental issues in AI such as what makes a system intelligent, and whether computers can have minds. Finally, we will explore some of the ethical challenges that face AI such as whether intelligent artificial systems should be relied upon to make important decisions that affect our lives, and whether we should create such systems in the first place.”

In explaining the origins of his course, Dr. Thorsrud recalled, "A lot of my students are already interested in philosophy. They just don't know it. And so, in fact, just recently my department has joined forces with neuroscience. We're no longer a Philosophy Department. We're now the Philosophy of Neuroscience, and now the Department of Law, Neuroscience, and Philosophy. Because these students are interested in mind, they are interested in intelligence, but they don't realize that philosophy has been dealing with an attempt to understand the nature of mind from the very beginning, and the nature of intelligence from the very beginning. So we have a lot to offer these students; the question is how to reach them. So that's what kind of started me off on this different path, and, in the meantime, I think the same is true of artificial intelligence."

Thorsrud elaborated that this introductory course is only the first step in a wider AI curriculum at Agnes Scott, as the confluence of endeavors like Viam and ChatGPT moves the automation industry at hyperspeed in the coming years. Already, the AI philosopher sees how GPT is challenging humans to stand out, "The massive growth in the training data and the parameters, the weights that the machine learning was operating on, really paid off." He continued to illustrate how dystopian fears are unfounded, "I mean, we have a tendency to anthropomorphize things like ChatGPT and it's understandable. But as far as I can tell, it's, it's a long way from the intelligence of my dog, a long, long way." He is realistic about the speed of adoption, "Well, as a philosopher, they're never going to be able to get to the point where they can give me a credible adjudication, because human judgment you know it is. And, this is another example of the ever-present receding horizon problem. First, you know that computers will never be able to beat a human at chess. Okay, fine, computers will never be able to beat a human at GO. Fine, computers will never be able to write. And so we keep setting these limits down, and then surpassing them."

At Automate, I had the chance to catch up with ff Venture Capital portfolio company PlusOne Robotics and its amazing founder, Erik Nieves. While the talk in the theater was about the future, Nieves illustrated on the floor what is happening today. Impressively, the startup is close to one million picks depalletizing packages and sorting goods for the likes of FedEx and other leading providers of shipping & logistics. PlusOne's proprietary computer vision co-bot platform is not waiting for the next generation of developers to join the ranks, but building its own intelligent protocols to increase efficiencies on the front lines of e-commerce fulfillment.

As Brian Marflak, of FedEx, remarked, “The technology in these depalletizing arms helps us move certain shipments that would otherwise take up valuable resources to manually offload. Having these systems installed allows team members to perform more skilled tasks such as loading and unloading airplanes and trucks. This has been a great opportunity for robotics to complement our existing team members and help them complete tasks more efficiently.”

Marflak's sentiment was shared by the 25,000+ attendees of Automate that filled the entire Detroit Convention Center. A big backdrop of the show was how macro labor trends and shortages are exacerbating the push towards automation (and thus moving the horizon even further). According to the most recent reports, close to 20% of all US retail sales are driven online, with over 20 billion packages being shipped every year, growing at an annual rate of 25%. This means that even if the e-commerce industry is able to hire a million more workers, there are not enough (organic) hands to keep up. As Nieves puts it, "The growth of e-commerce has placed tremendous pressure on shipping responsiveness and scalability that has significantly exacerbated labor and capacity issues. Automation is key, but keeping a human-in-the-loop is essential to running a business 24/7 with greater speed and fewer errors. With the ongoing labor shortages, I believe we'll see an increase in the adoption of Robots-as-a-Service (RaaS) to lower capital expenditures and deploy automation on a subscription basis." Get ready for Automate 2024, as the convention moves for the first time to an annual gathering!

The 5 Laws of Robotics https://robohub.org/the-5-laws-of-robotics/ Thu, 11 May 2023 09:19:19 +0000 https://svrobo.org/?p=26188

I have been studying the whole range of issues and opportunities in the commercial rollout of robotics for many years now, and I’ve spoken at a number of conferences about the best way for us to regulate robotics. In the process I’ve found that my guidelines most closely match the EPSRC Principles of Robotics, although I put additional focus on potential solutions. I’m calling them the 5 Laws of Robotics because it’s so hard to avoid Asimov’s Laws of Robotics in the public perception of what needs to be done.

The first and most obvious point about these “5 Laws of Robotics” is that I’m not suggesting actual laws, and neither, in fact, was Asimov with his famous Three Laws (technically four of them). Asimov proposed rules hardwired or hardcoded into the very existence of robots, and of course they didn’t work perfectly, which gave him the material for his books. Interestingly, Asimov believed, as did many others at the time (symbolic AI, anyone?), that it would be possible to define effective yet global behavioral rules for robots. I don’t.

My 5 Laws of Robotics are:

  1. Robots should not kill.
  2. Robots should obey the law.
  3. Robots should be good products.
  4. Robots should be truthful.
  5. Robots should be identifiable.

What exactly do these laws mean?

Firstly, people should not be legally able to weaponize robots, although there may be lawful exclusions for use by defense forces or first responders. Some people are completely opposed to Lethal Autonomous Weapon Systems (LAWS) in any form, whereas others draw the line at robot weapons remaining ultimately under human command, with accountability to law. In California there is currently proposed legislation to introduce fines for individuals building or modifying weaponized robots, drones or autonomous systems, with an exception for ‘lawful’ use.

Secondly, robots should be built so that they comply with existing laws, including privacy laws. This implies some form of accountability for companies on compliance in various jurisdictions. While that is technically very complex, successful companies will be proactive; otherwise there will be a lot of court cases and insurance claims keeping lawyers happy while badly damaging the reputation of all robotics companies.

Thirdly, although we are continually developing and adapting standards as our technologies evolve, the core principle is that robots are products, designed to do tasks for people. As such, robots should be safe, reliable and do what they claim to do, in the manner that they claim to operate. Misrepresentation of the capabilities of any product is universally frowned upon.

Fourthly, and this is a fairly unique issue for robots: robots should not lie. Robots project the illusion of emotions and agency, and humans are very susceptible to being ‘digitally nudged’ or manipulated by artificial agents. Examples include robots or avatars claiming to be your friend, but the manipulation can be as subtle as a robot using a human voice as if a real person were listening and speaking, or failing to explain that a conversation you have with a robot might have many listeners at other times and locations. Robots are potentially amazingly effective advertising vehicles, in ways we are not yet expecting.

Finally, and this extends the principles of accountability, transparency and truthfulness, it should be possible to know who is the owner and/or operator of any robot that we interact with, even if we’re just sharing a sidewalk with them. Almost every other vehicle has to comply with some registration law or process, allowing ownership to be identified.

What can we do to act on these laws?

  1. Robot Registry (license plates, access to database of owners/operators)
  2. Algorithmic Transparency (via Model Cards and Testing Benchmarks)
  3. Independent Ethical Review Boards (as in biotech industry)
  4. Robot Ombudspeople (to liaise between the public, policy makers and the robotics industry)
  5. Rewarding Good Robots (design awards and case studies)

There are many organizations releasing guides, principles, and suggested laws. I’ve surveyed most of them and looked at the research. Most of them are just ethical hand-wringing and accomplish nothing, because they don’t factor in real-world conditions: what the goals are, who would be responsible, and how to make progress towards those goals. I wrote about this issue ahead of giving a talk at the ARM Developer Summit in 2020 (video included below).

Silicon Valley Robotics announced the first winners of our inaugural Robotics Industry Awards in 2020. The SVR Industry Awards consider responsible design as well as technological innovation and commercial success. There are also some ethical checkmark or certification initiatives in preparation, but like the development of new standards these can take a long time to do properly, whereas awards, endorsements and case studies can be available immediately to foster discussion of what constitutes a good robot, and what social challenges robotics needs to solve.

The Federal Trade Commission recently published “The Luring Test: AI and the engineering of consumer trust”, warning companies against using AI tools to manipulate consumers’ decisions.

For those not familiar with Isaac Asimov’s famous Three Laws of Robotics, they are:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth law, called the Zeroth Law (as in 0, 1, 2, 3):

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Robin R. Murphy and David D. Woods have updated Asimov’s laws to be more similar to the laws I proposed above, and provide a good analysis of what Asimov’s Laws meant and why they changed them to deal with modern robotics: “Beyond Asimov: The Three Laws of Responsible Robotics” (2009).

Some other selections from the hundreds of principles, guidelines and surveys of the ethical landscape that I recommend come from one of the original EPSRC authors, Joanna Bryson.

The Meaning of the EPSRC Principles of Robotics (2016)

And the 2016/2017 update from the original EPSRC team:

Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby & Alan Winfield (2017) Principles of robotics: regulating robots in the real world, Connection Science, 29:2, 124-129, DOI: 10.1080/09540091.2016.1271400

Another survey worth reading is on the Stanford Plato site: https://plato.stanford.edu/entries/ethics-ai/

We need to discuss what jobs robots should do, before the decision is made for us https://robohub.org/we-need-to-discuss-what-jobs-robots-should-do-before-the-decision-is-made-for-us/ Sat, 29 Apr 2023 08:30:31 +0000 http://robohub.org/?guid=1301ab6bcafc6e182faac594ca6cd839

Shutterstock / Frame Stock Footage

By Thusha Rajendran (Professor of Psychology, The National Robotarium, Heriot-Watt University)

The social separation imposed by the pandemic led us to rely on technology to an extent we might never have imagined – from Teams and Zoom to online banking and vaccine status apps.

Now, society faces an increasing number of decisions about our relationship with technology. For example, do we want our workforce needs fulfilled by automation, migrant workers, or an increased birth rate?

In the coming years, we will also need to balance technological innovation with people’s wellbeing – both in terms of the work they do and the social support they receive.

And there is the question of trust. When humans should trust robots, and vice versa, is a question our Trust Node team is researching as part of the UKRI Trustworthy Autonomous Systems hub. We want to better understand human-robot interactions – based on an individual’s propensity to trust others, the type of robot, and the nature of the task. This, and projects like it, could ultimately help inform robot design.

This is an important time to discuss what roles we want robots and AI to take in our collective future – before decisions are taken that may prove hard to reverse. One way to frame this dialogue is to think about the various roles robots can fulfill.

Robots as our servants

The word “robot” was first used by the Czech writer Karel Čapek in his 1920 sci-fi play Rossum’s Universal Robots. It comes from the word “robota”, meaning drudgery or forced labour. This etymology suggests robots exist to do work that humans would rather not. And there should be no obvious controversy, for example, in tasking robots with maintaining nuclear power plants or repairing offshore wind farms.

The more human a robot looks, the more we trust it. Antonello Marangi/Shutterstock

However, some service tasks assigned to robots are more controversial, because they could be seen as taking jobs from humans.

For example, studies show that people who have lost movement in their upper limbs could benefit from robot-assisted dressing. But this could be seen as automating tasks that nurses currently perform. Equally, it could free up time for nurses and careworkers – currently sectors that are very short-staffed – to focus on other tasks that require more sophisticated human input.

Authority figures

The dystopian 1987 film RoboCop imagined the future of law enforcement as autonomous, privatised, and delegated to cyborgs or robots.

Today, some elements of this vision are not so far away: the San Francisco Police Department has considered deploying robots – albeit under direct human control – to kill dangerous suspects.

This US military robot is fitted with a machine gun to turn it into a remote weapons platform. US Army

But having robots as authority figures needs careful consideration, as research has shown that humans can place excessive trust in them.

In one experiment, a “fire robot” was assigned to evacuate people from a building during a simulated blaze. All 26 participants dutifully followed the robot, even though half had previously seen the robot perform poorly in a navigation task.

Robots as our companions

It might be difficult to imagine that a human-robot attachment would have the same quality as that between humans or with a pet. However, increasing levels of loneliness in society might mean that for some people, having a non-human companion is better than nothing.

The Paro Robot is one of the most commercially successful companion robots to date – and is designed to look like a baby harp seal. Yet research suggests that the more human a robot looks, the more we trust it.

The Paro companion robot is designed to look like a baby seal. Angela Ostafichuk / Shutterstock

A study has also shown that different areas of the brain are activated when humans interact with either another human or a robot. This suggests our brains may recognise interactions with a robot differently from human ones.

Creating useful robot companions involves a complex interplay of computer science, engineering and psychology. A robot pet might be ideal for someone who is not physically able to take a dog for its exercise. It might also be able to detect falls and remind someone to take their medication.

How we tackle social isolation, however, raises questions for us as a society. Some might regard efforts to “solve” loneliness with technology as the wrong solution for this pervasive problem.

What can robotics and AI teach us?

Music is a source of interesting observations about the differences between human and robotic talents. Committing errors, as humans do all the time but robots might not, appears to be a vital component of creativity.

A study by Adrian Hazzard and colleagues pitted professional pianists against an autonomous Disklavier (an automated piano whose keys move as if played by an invisible pianist). The researchers discovered that, eventually, the pianists made mistakes. But they did so in ways that were interesting to the humans listening to the performance.

This concept of “aesthetic failure” can also be applied to how we live our lives. It offers a powerful counter-narrative to the idealistic and perfectionist messages we constantly receive through television and social media – on everything from physical appearance to career and relationships.

As a species, we are approaching many crossroads, including how to respond to climate change, gene editing, and the role of robotics and AI. However, these dilemmas are also opportunities. AI and robotics can mirror our less-appealing characteristics, such as gender and racial biases. But they can also free us from drudgery and highlight unique and appealing qualities, such as our creativity.

We are in the driving seat when it comes to our relationship with robots – nothing is set in stone, yet. But to make educated, informed choices, we need to learn to ask the right questions, starting with: what do we actually want robots to do for us?

The Conversation

Thusha Rajendran receives funding from the UKRI and EU. He would like to acknowledge evolutionary anthropologist Anna Machin’s contribution to this article through her book Why We Love, personal communications and draft review.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What is the hype cycle for robotics? https://robohub.org/what-is-the-hype-cycle-for-robotics/ Tue, 07 Mar 2023 10:10:11 +0000 https://robohub.org/?p=206751

We’ve all seen or heard of the Hype Cycle. It’s a visual depiction of the lifecycle stages a technology goes through, from initial development to commercial maturity. It’s a useful way to track which technologies are compatible with your organization’s needs. There are five stages of the Hype Cycle, taking us from the initial innovation trigger to the peak of inflated expectations, followed by disillusionment. It’s only as a product moves into more tangible market use, sometimes called ‘The Slope of Enlightenment’, that we start to reach full commercial viability.

Working with so many robotics startups, I see this stage as the transition into revenue generation in more than pilot use cases. This is the point where a startup no longer needs to nurture each customer deployment but can produce reference use cases and start to reliably scale. I think this is a useful model but that Gartner’s classifications don’t do robotics justice.

For example, this recent Gartner chart puts Smart Robots at the top of the hype cycle. Robotics is a very fast-moving field at the moment; the majority of new robotics companies are less than ten years old. From the perspective of the end user, it can be very difficult to know when a company is moving out of the hype cycle and into commercial maturity, because there aren’t many deployments or much marketing at first, particularly compared to the media coverage of companies at the peak of the hype cycle.

So, here’s where I think robotics technologies really fit on the Gartner Hype Cycle:

Innovation trigger

  • Voice interfaces for practical applications of robots
  • Foundational models applied to robotics


Peak of inflated expectations

  • Large Language models – although likely to progress very quickly
  • Humanoids

Trough of disillusionment

  • Quadrupeds
  • Cobots
  • Full self-driving cars and trucks
  • Powered clothing/Exoskeletons


Slope of enlightenment

  • Teleoperation
  • Cloud fleet management
  • Drones for critical delivery to remote locations
  • Drones for civilian surveillance
  • Waste recycling
  • Warehouse robotics (pick and place)
  • Hospital logistics
  • Education robots
  • Food preparation
  • Rehabilitation
  • AMRs in other industries


Plateau of productivity

  • Robot vacuum cleaners (domestic and commercial)
  • Surgical Robots
  • Warehouse robotics (AMRs in particular)
  • Factory automation (robot arms)
  • 3D printing
  • ROS
  • Simulation

AI, in the form of large language models (e.g. ChatGPT, GPT-3 and Bard), is at peak hype, as are humanoid robots, and perhaps the peak of that hype is the idea of RoboGPT: using LLMs to interpret human commands to robots. Just in the last year, a wave of new humanoid robot companies have come out of stealth, including Figure, Teslabot, Aeolus, Giant AI, Agility and Halodi, and so far only Halodi has a commercial deployment, augmenting internal security for ADT.

Cobots are still in the Trough of Disillusionment, in spite of Universal Robots selling 50,000+ arms. People buy arms from companies like Universal primarily for their affordability, ease of setup, industrial precision and freedom from safety-guarding hardware. The full promise of collaborative robots has had trouble landing with end users: we don’t really deploy collaborative robots for frequent hand-offs with humans. Perhaps we need more dual-armed cobots with better human-robot interaction before we really explore the possibilities.

Interestingly, the Trough of Disillusionment generates a lot of media coverage, but it’s usually negative. Self-driving cars and trucks are definitely at the bottom of the trough, whereas powered clothing, exoskeletons and quadrupeds are a little harder to place.

AMRs, or Autonomous Mobile Robots, are a form of self-driving cargo carrier that has been much more successful than self-driving cars or trucks traveling on public roads. AMRs are primarily deployed in warehouses, hospitals, factories, farms, retail facilities, airports and even on the sidewalk. Behind every successful robot deployment there is probably a cloud fleet management provider, a teleoperation provider, or a monitoring service.

Finally, the Plateau of Productivity is where the world’s most popular robots live. At peak popularity are the Roomba and other home robot vacuum cleaners: before its acquisition by Amazon, iRobot had sold more than 40 million Roombas and captured 20% of the domestic vacuum cleaner market. Now commercial cleaning fleets are switching to autonomy as well.

And of course Productivity (not Hype) is also where the workhorse industrial robot arms live with ever increasing deployments worldwide. The International Federation of Robotics, IFR, reports that more than half a million new industrial robot arms were deployed in 2021, up 31% from 2020. This figure has been rising pretty steadily since I first started tracking robotics back in 2010.
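As a quick sanity check on those IFR figures (taking “more than half a million” as roughly 500,000, an approximation since the exact number isn’t given above), a 31% rise implies the 2020 baseline was a little under 400,000 installations:

```python
# Back out the implied 2020 figure from the quoted 2021 growth rate.
deployed_2021 = 500_000   # "more than half a million" (approximation)
growth = 0.31             # "up 31% from 2020"
deployed_2020 = deployed_2021 / (1 + growth)
print(f"Implied 2020 installations: {deployed_2020:,.0f}")
```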


What does your robotics hype cycle look like? What technology would you like me to add to this chart? Contact andra@svrobo.org

Our future could be full of undying, self-repairing robots – here’s how https://robohub.org/our-future-could-be-full-of-undying-self-repairing-robots-heres-how/ Wed, 01 Feb 2023 14:28:20 +0000 http://robohub.org/?guid=e1d7fdc88964f50cba7a15f5383b62dd

Robotic head, 3D illustration (frank60/Shutterstock)

By Jonathan Roberts (Professor in Robotics, Queensland University of Technology)

With generative artificial intelligence (AI) systems such as ChatGPT and Stable Diffusion being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.

Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.

Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting T-1000 robot from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?

Energy for ‘life’

Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.

By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates energy supply in the world of robotics. Most robots are powered by the chemistry of batteries.

An alternative battery type has been proposed that uses nuclear waste and ultra-thin diamonds at its core. The inventors, a San Francisco startup called Nano Diamond Battery, claim a possible battery life of tens of thousands of years. Very small robots would be an ideal user of such batteries.

But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from Berkeley Lab and UMass Amherst in the US demonstrated that tiny nanobots could get their energy from chemicals in the liquid they swim in.

The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.

Repairing and copying oneself

Of course, an undying robot might still need occasional repairs.

Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated a research robot called PR2 tightening its own screw using a screwdriver. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.

Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, and then walked off.

Another unusual concept for repair is to use other things a robot might find in the environment to replace its broken part.

Last year, scientists reported how dead spiders can be used as robot grippers. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots to become part of the robot.

The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper. Preston Innovation Laboratory/Rice University

A robot colony?

From all these recent developments, it’s quite clear that in principle, a single robot may be able to live forever. But there is a very long way to go.

Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.

The ultimate solution may be one of large colonies or swarms of tiny robots who share a common brain, or mind. After all, this is exactly how many species of insects have evolved.

The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is not contained within any of the ants.

This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.

Ant colonies can contain ‘memories’ that are distributed between many individual insects. frank60/Shutterstock

Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.

Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we’ll also have the chance to build in some safeguards.

The Conversation


Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge – an international drone competition.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Year end summary: Top Robocar stories of 2022 https://robohub.org/year-end-summary-top-robocar-stories-of-2022/ Tue, 10 Jan 2023 08:10:51 +0000 http://robohub.org/?guid=54fbdddc3d49b10964fc859a4b649f19

Here’s my annual summary of the top stories of the prior year. This time the news was a strong mix of bad and good.

Read the text story on Forbes.com at Robocars 2022 year in review.

And see the video version here:

Five ways drones will change the way buildings are designed https://robohub.org/five-ways-drones-will-change-the-way-buildings-are-designed/ Mon, 02 Jan 2023 10:00:07 +0000 https://robohub.org/?p=206252

elwynn/Shutterstock

By Paul Cureton (Senior Lecturer in Design (People, Places, Products), Lancaster University) and Ole B. Jensen (Professor of Urban Theory and Urban Design, Aalborg University)

Drones are already shaping the face of our cities – used for building planning, heritage, construction and safety enhancement. But, as studies by the UK’s Department for Transport have found, swathes of the public have a limited understanding of how drones might be practically applied.

It’s crucial that the ways drones are affecting our future are understood by the majority of people. As experts in design futures and mobility, we hope this short overview of five ways drones will affect building design offers some knowledge of how things are likely to change.

Infographic showcasing other ways drones will influence future building design. Nuri Kwon, Drone Near-Futures, Imagination Lancaster, Author provided

1. Creating digital models of buildings

Drones can take photographs of buildings, which are then used to build 3D models of buildings in computer-aided design software.

These models have accuracy to within a centimetre, and can be combined with other data, such as 3D scans of interiors using drones or laser scanners, in order to provide a completely accurate picture of the structure for surveyors, architects and clients.

Using these digital models saves time and money in the construction process by providing a single source that surveyors, architects and planners can view.

2. Heritage simulations

Studio Drift are a multidisciplinary team of Dutch artists who have used drones to construct images through theatrical outdoor drone performances at damaged national heritage sites such as Notre-Dame in Paris, the Colosseum in Rome and Gaudí’s Sagrada Família in Barcelona.

Drones could be used in the near-future in a similar way to help planners to visualise the final impact of restoration or construction work on a damaged or partially finished building.

3. Drone delivery

The arrival of drone delivery services will see significant changes to buildings in our communities, which will need to provide for docking stations at community hubs, shops and pick-up points.

Wingcopter are one of many companies trialling delivery drones. Akash 1997, CC BY-SA

There are likely to be landing pads installed on the roofs of residential homes and dedicated drone-delivery hubs. Research has shown that drones can help with the last mile of any delivery in the UK, Germany, France and Italy.

Architects of the future will need to add these facilities into their building designs.

4. Drones mounted with 3D printers

Two research projects, one from architecture, design, planning and consulting firm Gensler and another from a consortium led by Imperial College London and the Swiss research institute Empa (comprising University College London, University of Bath, University of Pennsylvania, Queen Mary University of London, and Technical University of Munich), have been experimenting with drone-mounted 3D printers. These drones would work at speed to construct emergency shelters or repair buildings at significant heights, without the need for scaffolding, or in difficult-to-reach locations, providing safety benefits.

Gensler have already used drones for wind turbine repair, and researchers at Imperial College are exploring bee-like drone swarms that work together to build structures from a blueprint. The drones coordinate with each other to follow a pre-defined path, in a project called Aerial Additive Manufacturing. For now, the work is merely a demonstration of the technology, and is not being used on a specific building.

In the future, drones with mounted 3D printers could help create highly customised buildings at speed, but how this could change the workforce and the potential consequences for manual labour jobs is yet to be understood.

5. Agile surveillance

Drones offer new possibilities for surveillance away from the static, fixed nature of current systems such as closed circuit television.

Drones with cameras and sensors, relying on complex software such as biometric identification and “face recognition”, will probably be the next level of surveillance applied by governments and police forces, as well as providing security monitoring for homeowners. Such drones would likely be fitted with monitoring devices that can communicate with security or police forces.

Drones used in this way could help our buildings become more responsive to intrusions, and adaptable to changing climates. Drones may move parts of the building such as shade-creating devices, following the path of the sun to stop buildings overheating, for example.

The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Countering Luddite politicians with life (and cost) saving machines https://robohub.org/countering-luddite-politicians-with-life-and-cost-saving-machines/ Sun, 04 Dec 2022 09:30:00 +0000 https://robotrabbi.com/?p=25005

Earlier this month, Candy Crush celebrated its tenth birthday by hosting a free party in lower Manhattan. The celebration culminated in a drone light display of 500 Unmanned Aerial Vehicles (UAVs) illustrating the whimsical characters of the popular mobile game over the Hudson. Rather than applauding the spectacle, New York lawmakers ostracized the avionic wonders to Jersey. In the words of Democratic state senator Brad Hoylman, “Nobody owns New York City’s skyline – it is a public good and to allow a private company to reap profits off it is in itself offensive.” The complimentary event followed the model of Macy’s New York fireworks, which have illuminated the Hudson skies since 1958. Unlike the department store’s pyrotechnics, which release dangerous greenhouse gases into the atmosphere, drones are a quiet, climate-friendly choice. Still, Luddite politicians plan to introduce legislation to ban the technology as a public nuisance, citing its impact on migratory birds, which are often more spooked by the skyscrapers in Hoylman’s own district.

Beyond aerial tricks, drones are now being deployed in novel ways to fill the labor gap of menial jobs that have not returned since the pandemic. Founded in 2018, Andrew Ashur’s Lucid Drones has been power-washing buildings throughout the United States for close to five years. As the founder told me: “I saw window washers hanging off the side of the building on a swing stage and it was a mildly windy day. You saw this platform get caught in the wind and all of a sudden the platform starts slamming against the side of the building. The workers were up there, hanging on for dear life, and I remember having two profound thoughts in this moment. The first one, thank goodness that’s not me up there. And then the second one was how can we leverage technology to make this a safer, more efficient job?” At the time, Ashur was a junior at Davidson College playing baseball. The self-starter knew he was on to a big market opportunity.

Each year, more than 160,000 emergency room injuries, and 300 deaths, are caused by falls from ladders in the United States. Entrepreneurs like Ashur understood that drones were uniquely qualified to free humans from such dangerous work. This first required building a sturdy tethered quadcopter, capable of spraying at 300 psi, connected to a ground tank for power and cleaning fluid, for less than the cost of the annual salary of one window cleaner. After overcoming the technical hurdle, the even harder task was gaining sales traction. Unlike many hardware companies that set out to disrupt the market and sell directly to end customers, Lucid partnered with existing building maintenance operators. “Our primary focus is on existing cleaning companies. And the way to think about it is we’re now the shiniest tool in their toolkit that helps them do more jobs with less time and less liability to make more revenue,” explains Ashur. This relationship was further enhanced this past month with the announcement of a partnership with Sunbelt Rentals, servicing its 1,000 locations throughout California, Florida, and Texas. Lucid’s drones are now within driving distance of the majority of the 86,000 facade cleaning companies in America.

According to the Commercial Buildings Energy Consumption Survey, there are 5.9 million commercial office buildings in the United States, with an average height of 16 floors. This means there is room for many robot cleaning providers. Competing directly with Lucid are several other drone operators, including Apellix, Aquiline Drones, Alpha Drones, and a handful of local upstarts. In addition, there are several winch-powered companies, such as Skyline Robotics, HyCleaner, Serbot, Erlyon, Kite Robotics, and SkyPro. Facade cleaning is ripe for automation, as it is a dangerous, costly, repetitive task that can be safely accomplished by an uncrewed system. As Ashur boasts, “You improve that overall profitability because it’s fewer labor hours. You’ve got lower insurance on a ground cleaner versus an above-ground cleaner, as well as the other equipment.” Tethered, ground-based, and free of ladders, his system is the safest way to power wash a multistory office building. He elaborated further on the cost savings: “It lowers insurance cost, especially when you look at how workers comp is calculated… we had a customer, one of their workers missed the bottom rung of the ladder, the bottom rung, he shattered his ankle. OSHA classifies it as a hazardous workplace injury. Their workers comp rates are projected to increase by an annual $25,000 over the next five years. So it’s a six-figure expense for just that one business from missing one single bottom rung of the ladder and unfortunately, you hear stories of people falling off a roof or other terrible accidents that are life changing or in some cases life lost. So that’s the number one thing you get to eliminate with the drone by having people on the ground.”

With older construction workers retiring in alarming numbers and a declining younger population of skilled laborers, I pressed Ashur on the future of Lucid and its expansion into other areas. He retorted, “Cleaning drones, that’s just chapter one of our story here at Lucid. We look all around us at service industries that are being crippled by labor shortages.” He went on to suggest that robots could inspire a younger, more creative workforce: “When it comes to the future of work, we really believe that robotics is the answer because what makes us distinctly human isn’t our ability to do a physical task in a repetitive fashion. It’s our ability to be creative and problem solve… And that’s the direction that the younger populations are showing they’re gravitating towards.” He hinted further at some immediate areas of revenue growth: “Since we launched a website many years ago, about 50% of our requests come from international opportunities. So it is very much so a global problem.” In New York, buildings taller than six stories are required to have their facades inspected and repaired every five years (Local Law 11). Rather than shunning drones, State Senator Hoylman should be contacting companies like Lucid for ideas to automate facade work and create a new Manhattan-launched industry.
]]>
General purpose robots should not be weaponized: An open letter to the robotics industry and our communities https://robohub.org/general-purpose-robots-should-not-be-weaponized-an-open-letter-to-the-robotics-industry-and-our-communities/ Mon, 07 Nov 2022 09:31:53 +0000 https://robohub.org/?p=205928

Over the course of the past year, Open Robotics has taken time from our day-to-day efforts to work with our colleagues in the field to consider how the technology we develop could negatively impact society as a whole. In particular, we were concerned with the weaponization of mobile robots. After a lot of thoughtful discussion, deliberation, and debate with our colleagues at organizations like Boston Dynamics, Clearpath Robotics, Agility Robotics, ANYbotics, and Unitree, we have co-authored and signed an open letter to the robotics community entitled “General Purpose Robots Should Not Be Weaponized.” You can read the letter, in its entirety, here. Additional media coverage of the letter can be found in Axios and The Robot Report.

The letter codifies internal policies we’ve had at Open Robotics since our inception, and we think it captures the sentiments of much of the ROS community. For our part, we have pledged that we will not weaponize mobile robots, and we do not support others doing so either. We believe that the weaponization of robots raises serious ethical issues and harms public trust in technologies that can have tremendous benefits to society. This is but a first step, and we look forward to working with policymakers, the robotics community, and the general public to continue to promote the ethical use of robots and prohibit their misuse. It is one of many discussions that must happen between robotics professionals, the general public, and lawmakers about advanced technologies, and quite frankly, we think it is long overdue.

Due to the permissive nature of the licenses we use for ROS, Gazebo, and our other projects, it is difficult, if not impossible, for us to limit the use of the technology we develop to build weaponized systems. However, we do not condone such efforts, and we will have no part in directly assisting those who do with our technical expertise or labor. This has been our policy from the start, and will continue to be our policy. We encourage the ROS community to take a similar stand and to work with their local lawmakers to prevent the weaponization of robotic systems. Moreover, we hope the entire ROS community will take time to reflect deeply on the ethical implications of their work, and help others better understand both the positive and negative outcomes that are possible in robotics.

]]>
‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie https://robohub.org/killer-robots-will-be-nothing-like-the-movies-show-heres-where-the-real-threats-lie/ Wed, 19 Oct 2022 12:13:23 +0000 http://robohub.org/?guid=a4e14eead0959783928f634693fa3916

Ghost Robotics Vision 60 Q-UGV. US Space Force photo by Senior Airman Samuel Becker

By Toby Walsh (Professor of AI at UNSW, Research Group Leader, UNSW Sydney)

You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies would watch the latest Bond movie to see what technologies might be coming their way.

Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming sex robot courtroom drama Dolly.

I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a 2011 short story by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.

The real killer robots

Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic Metropolis.

But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.

Indeed, contrary to recent fears, robots may never be sentient.

It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and Nagorno-Karabakh.

A war transformed

Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots.

On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store.

And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms.

This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a major drone power.

And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies.

Robot manufacturers are, however, starting to push back against this future.

A pledge not to weaponise

Last week, six leading robotics companies pledged they would never weaponise their robot platforms. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can perform an impressive backflip, and the Spot robot dog, which looks like it’s straight out of the Black Mirror TV series.

This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised an open letter signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a global disarmament award.

However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.

We have, for example, already seen third parties mount guns on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was assassinated by Israeli agents using a robot machine gun in 2020.

Collective action to safeguard our future

The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.

Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation.

More important than any pledge from robotics companies, therefore, is the fact that the UN Human Rights Council has recently decided unanimously to explore the human rights implications of new and emerging technologies like autonomous weapons.

Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation.

Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.

The Conversation

Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

]]>
50 women in robotics you need to know about 2022 https://robohub.org/50-women-in-robotics-you-need-to-know-about-2022/ Tue, 11 Oct 2022 06:59:02 +0000 https://robohub.org/?p=205713

Our Women in Robotics list turns 10 this year and we are delighted to introduce you to another amazing “50 women in robotics you need to know about” as we also celebrate Ada Lovelace Day. We have now profiled more than 300 women AND non-binary people making important contributions to robotics since the list began in 2013. This year our 50 come from robotics companies (small and large), self-driving car companies, governments, research organizations and the media. The list covers the globe, with the chosen ones having nationalities from the EU, UK, USA, Australia, China, Turkey, India and Kenya. A number of women come from influential companies that are household names such as NASA, ABB, GE, Toyota and the Wall Street Journal. As the number of women on the list grows so does the combined global impact of their efforts, increasing the visibility of women in the field who may otherwise go unrecognized. We publish this list to overcome the unconscious perception that women aren’t making significant contributions. We encourage you to use our lists to help find women for keynotes, panels, interviews and to cite their work and include them in curricula.

The role models these 50 women represent are diverse, ranging from emeritus to early career stage. Role models are important. Countess Ada Lovelace, the world’s first computer programmer and an extraordinary mathematician, faced an uphill battle in the days when women were not encouraged to pursue a career in science. Fast forward 200 years and there are still not enough women in science, technology, engineering or math (STEM). One key reason is clear: the lack of visible female role models. That is why we continue to run our women in robotics photo challenge, to showcase real women building real robots. Women in STEM need to be equally represented at conferences, in keynotes, on magazine covers, and in stories about technology. Although this is starting to change, the change is not happening quickly enough. You can help. Spread the word and use this resource to inspire others to consider a career in robotics. As you will see, there are many different ways the women we profile are making a difference.

We hope you are inspired by these profiles, and if you want to work in robotics too, please join us at Women in Robotics. We are now a 501(c)(3) non-profit organization, but even so, this post wouldn’t be possible if not for the hard work of volunteers and the Women in Robotics Board of Directors.

Want to keep reading? There are more than 300 other stories on our 2013 to 2021 lists (and their updates).

Please share this and cite Women in Robotics as the author. Why not nominate a woman or non-binary person working in robotics for inclusion next year?

]]>
Tesla’s Optimus robot isn’t very impressive – but it may be a sign of better things to come https://robohub.org/teslas-optimus-robot-isnt-very-impressive-but-it-may-be-a-sign-of-better-things-to-come/ Tue, 04 Oct 2022 07:18:32 +0000 http://robohub.org/?guid=16f868e8c9cf035eaab55a3e83a1f198

By Wafa Johal (Senior Lecturer, Computing & Information Systems, The University of Melbourne)

In August 2021, Tesla CEO Elon Musk announced the electric car manufacturer was planning to get into the robot business. In a presentation accompanied by a human dressed as a robot, Musk said work was beginning on a “friendly” humanoid robot to “navigate through a world built for humans and eliminate dangerous, repetitive and boring tasks”.

Musk has now unveiled a prototype of the robot, called Optimus, which he hopes to mass-produce and sell for less than US$20,000 (A$31,000).

At the unveiling, the robot walked on a flat surface and waved to the crowd, and was shown doing simple manual tasks such as carrying and lifting in a video. As a robotics researcher, I didn’t find the demonstration very impressive – but I am hopeful it will lead to bigger and better things.

Why would we want humanoid robots?

Most of the robots used today don’t look anything like people. Instead, they are machines designed to carry out a specific purpose, like the industrial robots used in factories or the robot vacuum cleaner you might have in your house.

So why would you want one shaped like a human? The basic answer is they would be able to operate in environments designed for humans.

Unlike industrial robots, humanoid robots might be able to move around and interact with humans. Unlike robot vacuum cleaners, they might be able to go up stairs or traverse uneven terrain.

And as well as practical considerations, the idea of “artificial humans” has long had an appeal for inventors and science-fiction writers!

Room for improvement

Based on what we saw in the Tesla presentation, Optimus is a long way from being able to operate with humans or in human environments. The capabilities of the robot showcased fall far short of the state of the art in humanoid robotics.

The Atlas robot made by Boston Dynamics, for example, can walk outdoors and carry out flips and other acrobatic manoeuvres.

And while Atlas is an experimental system, even the commercially available Digit from Agility Robotics is much more capable than what we have seen from Optimus. Digit can walk on various terrains, avoid obstacles, rebalance itself when bumped, and pick up and put down objects.

Bipedal walking (on two feet) alone is no longer a great achievement for a robot. Indeed, with a bit of knowledge and determination you can build such a robot yourself using open source software.

There was also no sign in the Optimus presentation of how it will interact with humans. This will be essential for any robot that works in human environments: not only for collaborating with humans, but also for basic safety.

It can be very tricky for a robot to accomplish seemingly simple tasks such as handing an object to a human, but this is something we would want a domestic humanoid robot to be able to do.

Sceptical consumers

Others have tried to build and sell humanoid robots in the past, such as Honda’s ASIMO and SoftBank’s Pepper. But so far they have never really taken off.

Amazon’s recently released Astro robot may make inroads here, but it may also go the way of its predecessors.

Consumers seem to be sceptical of robots. To date, the only widely adopted household robots are the Roomba-like vacuum cleaners, which have been available since 2002.

To succeed, a humanoid robot will need to be able to do something humans can’t, to justify the price tag. At this stage the use case for Optimus is still not very clear.

Hope for the future

Despite these criticisms, I am hopeful about the Optimus project. It is still in the very early stages, and the presentation seemed to be aimed at recruiting new staff as much as anything else.

Tesla certainly has plenty of resources to throw at the problem. We know it has the capacity to mass produce the robots if development gets that far.

Musk’s knack for gaining attention may also be helpful – not only for attracting talent to the project, but also to drum up interest among consumers.

Robotics is a challenging field, and it’s difficult to move fast. I hope Optimus succeeds, both to make something cool we can use – and to push the field of robotics forward.

The Conversation

Wafa Johal receives funding from the Australian Research Council.

This article appeared in The Conversation.

]]>
Have a say on these robotics solutions before they enter the market! https://robohub.org/have-a-say-on-these-robotics-solutions-before-they-enter-the-market/ Sat, 24 Sep 2022 09:30:08 +0000 https://robohub.org/?p=205582

Robotics solutions and technologies are rapidly changing society and transforming the way we live and work – with both positive improvements and unforeseen consequences.

In Robotics4EU we wish to ensure that citizens have a say when it comes to these new technologies and how they affect everyday life. Therefore, we have gathered robots which are being developed right now or have just entered the market. We have set these up in a survey style consultation.

TAKE THE CITIZENS’ AUDIT HERE

By answering the survey, you get the opportunity to have an influence on these robotics solutions, as your answers will be given directly to the companies behind the robots, who will use your feedback in the further development of the robots.

The solutions to give feedback to are various: from a robot that gives throat swabs, to a humanoid that assists medical personnel and even a solution that aims to protect farmers’ crops.

]]>
Shelf-stocking robots with independent movement https://robohub.org/shelf-stocking-robots-with-independent-movement/ Fri, 23 Sep 2022 12:44:47 +0000 http://robohub.org/?guid=60b93fa1da65ab7cc74a945c5077d5b9

The post Shelf-stocking robots with independent movement appeared first on RoboHouse.

]]>

Robots that move about by themselves must be able to adapt to the dynamic and challenging conditions in a supermarket. Hernández Corbato says: “My research focuses on using artificial intelligence to make machines smarter and more reliable by teaching them symbolic knowledge. The goal is to develop robotic ‘brains’ for intelligent robots that can be trusted to work alongside people, because they can explain their decisions.”

A supermarket is typically a place where unexpected things happen all the time. Not only are there thousands of products with different shapes and looks, there are also people walking in and out. How can an independently operating machine handle this safely, efficiently and intelligently? By activating symbolic knowledge that we humans also use, says Hernández. “We recognize a tray with four legs underneath as a symbol: ‘table’. We don’t need a photo for it. When we encode such ‘symbol language’ and make it suitable for robots, they can perform more complex tasks.”

Researcher Carlos Hernández Corbato of the Department of Cognitive Robotics in the retail lab at RoboHouse.

One look with its camera eyes and the robot knows it is facing an object on which plates and cups can be placed. Based on this, it can decide what to do. The resulting speed-up should enable robots to perform multiple actions at the same time: navigate, pick up and move objects, and ultimately communicate with people.

Symbolic knowledge

For Hernández, the AI for Retail Lab research program of supermarket chain Ahold Delhaize brings together everything that fascinates him about artificial intelligence. Retail requires robots to use a broad diversity of skills: to perceive the environment, navigate around it, manipulate objects or collaborate with humans. For him, it’s all about the question of which algorithms are needed to make a machine respond just as intelligently as a human brain. As a specialist in software for autonomously operating robots, he won the Amazon Picking Challenge in 2016 with a team from TU Delft. On that occasion a robotic arm placed products from a container in their place on a shelf.

“Retail requires robots to use a broad diversity of skills: to perceive the environment, navigate around it, manipulate objects or collaborate with humans.”

– Carlos Hernández Corbato, researcher

The ‘supermarket robot’ is even more challenging. It requires the leap from a static factory environment to the dynamics of a store. The traditional way, in which robots learn from the data they collect, is too cumbersome for that. The robot would get bogged down in stock management alone. Programming a customized robot treatment for every orange, bottle, soup can, milk carton or cucumber would be too much work. “We want to inject symbolic knowledge into the robot’s operating system all at once,” says Hernández. “If that knowledge is available, the robot can continuously adapt to its changing environment. For example, by downloading a different hand movement.”

The robot must be able to independently choose a different algorithm if it encounters a problem along the way, so that it can pick up a can that falls from its hands, or slightly change its grip when picking up an unknown object. The technicians have already set up a test shop where the robot ‘Tiago’ can practice this. In about five years’ time, the program should deliver a machine with a mobile base, two arms and two camera eyes that independently refills supermarket shelves 24 hours a day, under all circumstances, day and night.
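The skill-switching behaviour described above can be sketched in a few lines. Everything below is a hypothetical illustration — the object symbols, skill names and toy success conditions are invented for this example, not taken from the AIRLab software:

```python
# Hypothetical sketch: symbolic knowledge maps a perceived object symbol to an
# ordered list of grasp skills. If one skill fails, the robot falls back to the
# next instead of getting stuck.

# Toy success models standing in for real grasp controllers.
SKILLS = {
    "top_grasp":  lambda obj: obj["height_cm"] < 20,
    "side_grasp": lambda obj: obj["width_cm"] < 10,
    "two_hand":   lambda obj: True,   # last resort: always applicable
}

# Symbolic knowledge: which skills apply to which object symbol, in order.
KNOWLEDGE = {
    "soup_can": ["top_grasp", "side_grasp"],
    "cucumber": ["side_grasp", "two_hand"],
    "crate":    ["two_hand"],
}

def pick_up(symbol, obj):
    """Try each applicable skill until one succeeds; None if all fail."""
    for skill_name in KNOWLEDGE.get(symbol, []):
        if SKILLS[skill_name](obj):
            return skill_name
    return None

print(pick_up("soup_can", {"height_cm": 12, "width_cm": 7}))  # prints: top_grasp
print(pick_up("cucumber", {"height_cm": 30, "width_cm": 4}))  # prints: side_grasp
```

The point of the sketch is that adding a new product only requires a new knowledge entry, not a new hand-coded control program per object.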

Sensor broken

The latter does not only apply to the supermarket robot. In fact, every robot should have a next-generation operating system to better cope with changing circumstances. Hernández Corbato: “Beyond integrating different robot skills, cognitive skills for robots need to enable them to reason about those skills, to understand how they can use them, and what the consequences of their own actions are. In sum, we need to endow robots (or any intelligent autonomous system we build) with self-awareness so that we can trust them.”

“We need to endow robots (or any intelligent autonomous system we build) with self-awareness so that we can trust them.”

– Carlos Hernández Corbato, researcher

It is the core idea behind the European project Metacontrol for ROS2 systems (MROS) that the Cognitive Robotics department recently completed. The AI technique that Hernández used for this is called the metacontrol method. It describes the properties and skills of the robot in a structured way, so that the robot can use the knowledge to adapt and overcome problems.

As part of this research, he developed multiple prototypes of these next generation robots together with Bosch Corporate Research, Universidad Rey Juan Carlos, Universidad Politecnica de Madrid and IT University in Copenhagen.

Does it perform better than traditional robots? “Yes, it navigated more safely and, thanks to its symbolic knowledge, was able to adapt to the circumstances. When one sensor broke, it switched to another independently,” says Hernández enthusiastically. “That is where we want to go: a robot with sufficient intelligence to deal with failures.”
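The sensor failover Hernández describes is the essence of metacontrol: keep an explicit model of alternative configurations and their quality, and reconfigure when a monitor reports a broken component. The sketch below is a toy illustration with invented names and numbers, not the MROS implementation:

```python
# Illustrative metacontrol-style reconfiguration (hypothetical names, not the
# MROS API): the robot models alternative navigation configurations and picks
# the best one whose required components are still healthy.

CONFIGURATIONS = {
    # name: (required components, estimated navigation quality in [0, 1])
    "lidar_nav":  ({"lidar"}, 0.9),
    "camera_nav": ({"camera"}, 0.7),
    "odom_only":  (set(), 0.3),   # degraded last resort, needs nothing extra
}

def select_configuration(healthy_components):
    """Pick the highest-quality configuration whose components all work."""
    feasible = [
        (quality, name)
        for name, (required, quality) in CONFIGURATIONS.items()
        if required <= healthy_components   # subset test: all parts healthy
    ]
    return max(feasible)[1]

print(select_configuration({"lidar", "camera"}))  # prints: lidar_nav
print(select_configuration({"camera"}))           # prints: camera_nav
```

Because the knowledge is explicit rather than buried in control code, the same mechanism can cover broken sensors, weak batteries, or unavailable skills.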

(middle) Corrado Pezzato, PhD candidate at AIRLab, and (right) Stefan Bonhof, research engineer and project manager at AIRLab in RoboHouse.


]]>
RoboCup humanoid league: Interview with Jasper Güldenstein https://robohub.org/robocup-humanoid-league-interview-with-jasper-guldenstein/ Tue, 20 Sep 2022 07:22:53 +0000 https://robohub.org/?p=205558

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event returned to an in-person format for 2022, taking place from 13-17 July in Bangkok. RoboCup comprises a number of leagues, with perhaps the most well-known being the soccer leagues.

In the Humanoid League, autonomous robots with a human-inspired body plan and senses play soccer against each other. We spoke to Jasper Güldenstein, a member of the technical committee, about the competition at RoboCup 2022, and also about the Humanoid League Virtual Season. As a biweekly virtual competition taking place between the physical RoboCup events, it provides teams with an opportunity to test their ideas and keep in touch throughout the year.

Could you give us an overview of the Humanoid League competition at RoboCup this year?

This year we had the first in-person event after a three-year break. It was really good to have the majority of the teams coming back together and working on their robots again. Although a lot of teams came with fewer participants than they usually do, we still managed to have 12 teams in the KidSize league and three teams in the AdultSize league. Unfortunately, some teams could not participate due to travel restrictions, but we hope to see them back next year.

Humanoid League finalists with their robots.

What was the format for the competition?

At the beginning we had a drop-in round, which is where we play games with one robot from each team participating in a joint team. So, we have eight robots playing from eight different teams. That is useful for finding out which teams are likely to perform the best in the competition. Next, the results from the drop-in games were used as seeding for the round robin phase of the regular competition, with the strongest teams separated into different groups. After the round robin, we have a knockout competition. The seeding method means that we can hopefully avoid the situation where very good teams get kicked out early. We saw that the most interesting games were towards the end of the competition when the teams performed really well.

Have you noticed improvements since the last physical competition?

I’d say definitely that one big thing that has improved for a lot of teams is the localisation. A lot of teams are more easily able to localise themselves on the field, and they don’t run off randomly. They are more certain that they are in the correct position.

Furthermore, I think the kicking has improved. The robots kick the ball much further than they used to. People have been tuning their kicking motions to increase the distance.

In terms of computer vision, this has definitely improved quite a bit. Something we did differently this time, which was inspired by what we did in the virtual season, is that we had a set of six different balls, all from previous FIFA competitions. For each game a ball was drawn randomly, so the teams couldn’t really prepare for all the balls. Although they were visually quite different, the teams didn’t really have any problems detecting the ball. We’ve seen, in general, that computer vision approaches have improved and these improvements have been transferred to the RoboCup competition. I think that almost all teams are using a neural network to detect the ball. This is a change from three, four, five years ago, where many teams used hand-tuned classical computer vision algorithms.

To talk a bit more about ball detection, it will be interesting to see what happens if we move to an environment with natural and/or varying light conditions. This year we were in a convention hall with uniform lighting. I believe next year, in Bordeaux, there is going to be some form of natural light coming in, and perhaps even fields that are outside. It’s still at the planning stage but we are looking forward to that. It will be a challenge and I strongly believe that the participants will find approaches to make their vision approach robust against these varying conditions.

The setup and testing for the Humanoid League competition at RoboCup 2022, Bangkok.

Thinking about the transfer from the simulation to the real world, are there any specific elements that lend themselves well to being transferred?

In terms of computer vision, we had a bit of transfer. In the virtual season we concentrated a lot on changing the lighting conditions and having varied backgrounds, to be able to emulate the real world a bit better. I think a few teams used their vision approaches from the virtual season in the real world.

However, I think the strongest part is behaviour. Teams were able to test their strategies in the virtual competition and adapt every other week. For example, CIT Brains, which won the virtual season and the physical competition, made quite a few changes to their strategy and they had robust systems running. Their strategy worked really well, and in the final they managed to score several goals and win against the previous world champions (Rhoban Football Club).

How did the competition go for your team (Hamburg Bit-Bots)?

We actually had quite a few hardware problems, especially on the mechanics side. The motors wore out; they warped and flexed more than we expected. This meant we had difficulty walking stably. And if you can’t walk in a stable manner, that defeats the purpose of everything else. It’s a really integrated system – if one component breaks, you are out of luck, as you are very restricted in what you can change during the competition and don’t have much spare equipment with you.

However, what was good for us was that we had a lot of software up and running, and a lot of it had been tested really well during the virtual league. We had to try to find a way around the walking problem algorithmically, searching for walking parameters that were more stable. We also switched from ROS 1 to ROS 2 [Robot Operating System], which brought with it many challenges. We actually did a write-up on the issues we faced, with some tips and tricks.

Will there be a new virtual league season this year?

Yes, we’ve discussed this in our technical committee and we plan on doing it again. The last event was successful and the teams enjoyed it. We plan on making some changes, such as logging the games to extract some interesting metrics and doing some analysis on those.

Another thing we want to do is domain randomisation, making the environment a bit more varied. This means that the approaches have to be more robust. The hope is that, when they are more robust, they can be transferred more easily to the real world. We were thinking about making the terrain slightly uneven. Another approach could be to modify the models of the robots so that the joints emulate a bit of wear: the simulated actuators might be randomly a bit weaker or stronger, and teams would have to find robust approaches to deal with that.
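
The per-episode randomisation described here can be sketched in a few lines. This is a hypothetical illustration, not the league's actual simulator code; the parameter names and ranges are assumptions chosen only to show the idea:

```python
import random

# Illustrative ranges for randomising a simulated robot's environment and
# actuators at the start of each episode, in the spirit of domain
# randomisation. None of these names or numbers come from the real setup.
RANDOMISATION_RANGES = {
    "terrain_roughness_m": (0.0, 0.02),      # max height of terrain bumps
    "actuator_strength_scale": (0.8, 1.2),   # emulate worn or strong joints
    "light_intensity_scale": (0.5, 1.5),     # vary scene lighting
}

def sample_episode_params(rng: random.Random) -> dict:
    """Draw one set of simulation parameters for a new episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMISATION_RANGES.items()}

rng = random.Random(42)
params = sample_episode_params(rng)
for name, value in params.items():
    lo, hi = RANDOMISATION_RANGES[name]
    assert lo <= value <= hi
```

Because a team's controller never sees the same terrain or actuator strength twice, it cannot overfit to one simulated robot, which is exactly the robustness that helps sim-to-real transfer.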

We won’t do everything at the beginning. We’ll move through the season and talk to the teams and form some organising groups to develop the simulation further, to run the games and to organise the competition itself. We are always happy to have input and we always talk to the teams to see what they think. It’s a competition but it’s something we build together.

Robots at the Humanoid League: humanoid robots at RoboCup 2022, Bangkok.

Could you tell us about the satellite event that took place at RoboCup?

This was a discussion about how to get teams more interested in participating and how to bridge the junior leagues and the major leagues.

We know that some people who participated in RoboCup Junior specifically selected a university that has a RoboCup team so that they could join that team. It would be awesome for more people to do this, and for more juniors to know what the major league is about.

To bridge the gap between the junior and major leagues we don’t want to introduce another league, but we want some form of events where the two groups can meet, and where the juniors can show off how well they are doing. It would be good to have more interaction between the leagues, although we haven’t decided on the exact format yet.

About Jasper

Jasper

Jasper Güldenstein is a PhD student at the University of Hamburg. His research focuses on humanoid navigation. He has been participating in RoboCup as a member of the Hamburg Bit-Bots since 2016, where his focus is developing not only the software of the team but also the electronics of the robot platform. In his recent master’s thesis he evaluated using reinforcement learning to perform path planning and execution for humanoid robots.

]]>
Peace on Earth (1987): Using telerobotics to check in on a swarm robot uprising on the Moon https://robohub.org/peace-on-earth-1987-using-telerobotics-to-check-in-on-a-swarm-robot-uprising-on-the-moon/ Tue, 13 Sep 2022 08:48:54 +0000 https://robohub.org/?p=205489

Robots: humanoids, teleoperated reconfigurable robots, swarms.

Recommendation: Read this classic hard sci-fi novel and expand your horizons about robots, teleoperation, and swarms.

Stanislaw Lem was one of the most read science fiction authors in the world in his day, especially the 70s and 80s, though not in America, because there were rarely translations from his native Polish to English. Europeans could parse the French translations; we couldn’t even parlez vous francais. Lem famously did not like American science fiction, with very few exceptions. One was Philip K. Dick, and it is no wonder, since Lem’s 1987 novel Peace on Earth shares many of the same themes that Dick covered: militarization of robots, people losing their memory or not being what they seem, and government conspiracies. In some ways Peace on Earth is the longer, more detailed, and, actually, *better* version of Dick’s 1953 short story Second Variety (which was the basis for the Peter Weller movie Screamers).

Peace on Earth has a sort of Battlestar Galactica (reboot) backstory. Mankind has put all its military robots on the Moon to do whatever military robots do. The rapidly evolving, super smart robots can continue to use simulations and machine learning to improve or work out alternatives to the Clausewitz style of warfare, but out of the way, so that it can’t impact humans.

Or can it?

Except after a couple of decades no one has heard from the robots. This is not unexpected, but people, being people, are beginning to wonder if the robots are still up there. Or maybe the robots have evolved into something peaceful. Or into some supreme intelligence that might want to take over the Earth. Or maybe the robots have run out of things to shoot at up there and the winners are now thinking about shooting at Earth. Oooops. Maybe we should send someone to check in on them, just in case…

The story is told from the viewpoint of the astronaut, Ijon Tichy, sent to check in on the robots. The book starts with his return to Earth with brain damage that has severed his corpus callosum, leaving him with major memory loss as to what happened and why he is on the run. We are in Christopher Nolan Memento territory (without the tattoos), or Jonathan Nolan’s/HBO’s Westworld out-of-sequence storytelling, as Tichy tries to figure out what happened on the Moon and what it means.

Along the way we get some interesting descriptions of telerobotics and telepresence as well as swarm and distributed robotics. Lem was a hard science fiction writer, who had gone to medical school before switching to physics. He was very much into the science component of his books, and in this case more of the ideas of biological evolution. He posits that biological evolution has been about the evolution of small to large— from viruses and bacteria to single cells to animals and people— but that robotics evolution will be from large to small. We started with big robots improving, then getting smaller with miniaturization of sensors and actuators, then smaller computation as a single robot would not need to carry all its computation onboard but could rely on distributed computation, and the trend will continue until finally a robot becomes a collection of tiny, simple robots that can cast themselves into a larger shape with greater intelligence— the idea behind Michael Crichton’s novel Prey. These swarms of what we would now call nanorobots would provide the ultimate flexibility in reconfigurable robots. Of course, Lem hand waves over limiting factors such as power and communication. But that aside, it’s a thought-provoking idea and a radically different take than Dick’s on how military robots would evolve.

One of the interesting scientific themes in Peace on Earth is Tichy’s use of teleoperation robots to land on the Moon and attempt to check out the robots in the different sectors of the Moon. Eventually Tichy quits using humanoid robots and begins using a reconfigurable robot body that can transform into different animal shapes so as to move more effectively through the different structures built by the robots.

Teleoperated robots are sometimes called avatars, though the term avatar was originally restricted to software simulations (James Cameron changed that connotation with his movie). There is increasing interest in telecommuting (and telesex) through robots, so much so that there is now an XPrize competition on avatars.

My favorite shape that Tichy’s teleoperated robot took on was that of a dachshund. And here is where Lem underestimated the scientific challenges of teleoperation. Lem focused on the physical science— how the avatar might reconfigure into a new shape. He assumed that Tichy would have little difficulty adjusting to the new shape because Tichy would be wearing a suit that sensed his body movements. Except this ignores the human-robot interaction component— how does Tichy know to move like a dog and synthesize perception from angles much lower than a human’s? The degrees of freedom are different, the movement patterns are different, the locations of the sensors are different. Operators get rapidly fatigued with humanoid robots, where there is a one-to-one correspondence between the human and robot and no change in size. The cognitive load of trying to control a four-legged animal would be huge. It is hard to imagine that Tichy would be successful without an intermediary intelligent assistance program that would translate his intent into the appropriate motions for the current shape.

And that type of assistive AI is a hard, open research question.

The ANA Avatar XPrize competition is making a similar assumption: that if you can build a humanoid avatar, it will be easy and natural for a human to control. That hasn’t been supported by decades of research in telerobotics, and the humanoid robots in the DARPA Robotics Challenge often required multiple operators.

But back to Peace on Earth. It’s a very readable book, jam-packed with scientific ideas that were ahead of their time, combined with a serious jab at the stupidity of the nuclear arms race that was in progress at the time.

More importantly, Lem foresaw a world in which robots could be a threat (though politicians were a bigger one) and also a solution to that threat. A refreshing take on robotics and the New World Order. What a shame Lem has been relatively unknown in the US.

You really should read this one, especially if you like hard science fiction like Arthur C. Clarke or if you want to get beyond the US viewpoint of sci-fi.

Original article posted in Robotics Through Science Fiction blog.

]]>
Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers https://robohub.org/why-household-robot-servants-are-a-lot-harder-to-build-than-robotic-vacuums-and-automated-warehouse-workers/ Fri, 09 Sep 2022 09:18:00 +0000 http://robohub.org/?guid=0293dd438e0be0f3871b9aa14c00335d

Who wouldn’t want a robot to handle all the household drudgery? Skathi/iStock via Getty Images

By Ayonga Hereid (Assistant Professor of Mechanical and Aerospace Engineering, The Ohio State University)

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its power vacuum cleaners, announced that it plans to build the U.K.’s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.

As a robotics researcher, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.

Robots that can handle a variety of domestic chores are an age-old staple of science fiction.

Handling objects

One major difference between digital and robotic devices is that household robots need to manipulate objects through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes.

State-of-the-art AI and machine learning algorithms perform well in simulated environments. But contact with objects in the real world often trips them up, because physical contact is difficult to model and even harder to control. While a human can easily perform these tasks, significant technical hurdles remain before household robots can reach human-level ability to handle objects.

Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient force, or haptic, feedback humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.

It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market.

A sophisticated robotic kitchen is already on the market, but it operates in a highly structured environment, meaning all of the objects it interacts with – cookware, food containers, appliances – are where it expects them to be, and there are no pesky humans to get in the way.

They like structure

In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robot’s movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.

Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it also requires clearing or avoiding other obstacles in the workspace to be able to reach the item and perform given tasks. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.

For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range.

Keeping it simple

While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks.

The vision for household humanoid robots like the proposed Tesla Bot is of an artificial servant capable of handling any mundane task. Courtesy Tesla

Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.

There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.

Not to mention that people’s homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today’s mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings.

A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.


The Conversation

Ayonga Hereid does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article appeared in The Conversation.

]]>
Robots will open more doors than they close https://robohub.org/robots-will-open-more-doors-than-they-close/ Sun, 14 Aug 2022 09:30:03 +0000 https://robohub.org/?p=205179

Michael M. Lee

In early 19th-century England, the Luddites rebelled against the introduction of machinery in the textile industry. The Luddites’ name originates from the mythical tale of a weaver’s apprentice called Ned Ludd who, in an act of anger against increasingly dangerous and poor working conditions, supposedly destroyed two knitting machines. Contrary to popular belief, the Luddites were not against technology because they were ignorant or inept at using it (1). In fact, the Luddites were perceptive artisans who cared about their craft, and some even operated machinery. Moreover, they understood the consequences of introducing machinery to their craft and working conditions. Specifically, they were deeply concerned about how technology was being used to shift the balance of power between workers and owners of capital.

The problem is not the advent of technology; the problem is how technology is applied. This is the essence of the intensely polarizing debate on robotic labor. Too often the debate is oversimplified to two opposing factions: the anti-tech pessimist versus the pro-tech optimist. On the one hand, the deeply pessimistic make the case that there will be greatly diminished workers’ rights, mass joblessness, and a widening gulf between socioeconomic classes. On the other hand, the overly optimistic believe that technology will bring better jobs and unbridled economic wealth. The reality is that, although extreme, both sides have valid points. The debate in its present form lacks a middle ground, leaving little room for nuanced and thoughtful discussion. It is as simplistic to assume that those who are pessimistic towards technological change do not understand the potential of technology as it is to conclude that those who are optimistic about technological change are not thinking about the consequences. Pessimists may fully understand the potential for technological change and still feel that the drawbacks outweigh the benefits. Optimists may not want change at any cost, but they feel that the costs are worthwhile.

There are various examples of how the introduction of machines have made industries more efficient and innovative, raising both the quality of work and the quality of output (for example, automated teller machines in banking, automated telephone exchanges in telecommunications, and industrial robots in manufacturing). An important detail in these success stories that is rarely mentioned, however, are timelines. The first industrial revolution did lead to higher levels of urbanization and rises in output; however, crucially, it took several decades before workers saw higher wages. This period of constant wages in the backdrop of rising output per worker is known as Engels’ pause, named after Friedrich Engels, the philosopher who first observed it (2).

Timing matters because, although there will be gains in the long term, there will certainly be losses in the short term. Support for retraining those most at risk of job displacement is needed to bridge this gap. Unfortunately, progress is disappointingly slow on this front. On one level, there are those who are apathetic to the challenges facing the workforce and feel that the loss of jobs is part of the cut and thrust of technological change. On another level, it is possible that there is a lack of awareness of the challenges of transitioning people to a new era of work. We need to bring change and light to both cases, respectively. Those at risk of being displaced by machines need to feel empowered by being a part of the change and not a by-product of change. Moreover, in developing the infrastructure to retrain and support those at risk, we must also recognize that retraining is itself a solution encased in many unsolved problems that include technical, economic, social, and even cultural challenges.

There is more that roboticists should be doing to advance the debate on robotic labor beyond the current obsessive focus on job-stealing robots. First, roboticists should provide a critical and fair assessment of the current technological state of robots. If the public were aware of just how far the field of robotics needs to advance to realize highly capable and truly autonomous robots, then they might be more assured. Second, roboticists should openly communicate the intent of their research goals and aspirations. Understanding that, in the foreseeable future, robotics will be focused on task replacement, not comprehensive job replacement, changes the conversation from how robots will take jobs from workers to how robots can help workers do their job better. The ideas of collaborative robots and multiplicity are not new (3), but they seldom get the exposure that they deserve. Opening an honest and transparent dialogue between roboticists and the general public will go a long way to building a middle ground that will elevate discussion on the future of work.

References

  1. J. Sadowski, “I’m a Luddite. You should be one too,” The Conversation, 25 November 2021 [accessed 3 April 2022].
  2. R. C. Allen, Engels’ pause: Technical change, capital accumulation, and inequality in the British industrial revolution. Explor. Econ. Hist. 46, 418–435 (2009).
  3. K. Goldberg, Editorial multiplicity has more potential than singularity. IEEE Trans. Autom. Sci. Eng. 12, 395 (2015).

From “Lee, M. M., Robots will open more doors than they close. Science Robotics, 7, 65 (2022).” Reprinted with permission from AAAS. Further distribution or republication of this article is not permitted without prior written permission from AAAS.

]]>
The best sci-fi books that describe how robots really work https://robohub.org/the-best-sci-fi-books-that-describe-how-robots-really-work/ Fri, 12 Aug 2022 11:37:48 +0000 https://robohub.org/?p=205192

I have loved science fiction ever since I was a kid and read all my Dad’s ancient issues of Analog Science Fiction and Fact from the 1940s. The first book I can remember reading was The Green Hills of Earth anthology by Robert Heinlein. Fast forward to the 1990s, when, as a new professor of computer science, I began adding sci-fi short stories and movies as extra credit for my AI and robotics courses. Later as a Faculty Fellow for Innovation in High-Impact Learning Experiences at Texas A&M, I created the Robotics Through Science Fiction book series as a companion to my textbook, Introduction to AI Robotics.

The books I picked & why

Little Eyes
By Samanta Schweblin, Megan McDowell

A Furby-like robot pet becomes an international fad: a “keeper” buys a little wheeled robot and is randomly paired with a “dweller” who teleoperates it. The robot has only a camera and microphone, no audio output, and the identities of the keeper and dweller are hidden. The game is that the keeper is entertained trying to figure out why the robot does what it does, while the dweller is entertained by exploring a new place. What could go wrong? Lots. Lots! Little Eyes absolutely terrified me, much more than any Stephen King novel, because there is nothing supernatural; it could really happen.

The Warehouse
By Rob Hart

This is my favorite introduction to the state of automation and autonomy in manufacturing. In a near future, a Sam Walton type has made a fortune through drone delivery and warehouse automation. The warehouse automation is based on a well-intentioned, but shallow, interpretation of the outdated Fitts List in human factors, which divides different jobs between robots and humans. Except humans can’t match robot speed and endurance. The tension is whether a corporate spy who has infiltrated a warehouse to steal secrets is there to expose the inherent cruelty or, worse, to replicate the work practices at a competitor’s facility.

Rendezvous with Rama
By Arthur C. Clarke

This 1973 hard sci-fi classic is perhaps the best fictional introduction to behavioral robotics there is, appearing a decade before researchers, most notably Rod Brooks, created the behavioral paradigm. An alien spaceship is passing through our solar system on a slingshot orbit. It is autonomous but controlled strictly by simple biological affordances that enable it to respond to the human intruders without applying any of the HAL 9000 reasoning Clarke popularized in his more famous 2001: A Space Odyssey. I mentally throw this book at engineers when they try to make unnecessarily complex robots.

Kill Decision
By Daniel Suarez

When Kill Decision came out, I sent an email to all my Department of Defense colleagues saying: finally, a book that gets swarms, drones, computer vision, and lethal autonomous weapons right! The book shows behavioral robotics can duplicate insect intelligence to create simple, but relentlessly effective, drones. The inexpensive individual drones are limited in intelligence but a greater, more adaptive intelligence emerges from the swarm. It’s on par with a Michael Crichton technothriller with lots of action (plus romance), making it an easy read.

Head On: A Novel of the Near Future
By John Scalzi

The second book in Scalzi’s entertaining detective series, set in a near future where 2% of the population is paralyzed and has to teleoperate robots in order to interact with the world (interestingly, it was written before the pandemic). The protagonist, Chris (we are never told their gender, making for a delightful guessing game), is an FBI agent investigating a murder, and along the way faces the kind of casual discrimination that the disabled undoubtedly face every day. Chris maintains a wry sense of humor through it all, adding an Elmore Leonard or Donald E. Westlake vibe that makes me laugh out loud.


Original article published in Shepherd. Shepherd also has bookshelves about robots and robotics.

]]>
It’s time to update 19th century terms for 21st century technology https://robohub.org/its-time-to-update-19th-century-terms-for-21st-century-technology/ Sat, 04 Jun 2022 11:45:41 +0000 https://robohub.org/?p=204699

Unmanned and master/slave are two terms that are offensive to many in the community. Such terms may once have been accepted by society, but not any longer, and we are pleased to see many organizations starting to use alternative terms.

We call on you in 2022 to remove words with negative connotations, like the ones listed below, from all materials, course descriptions, department names, products, forms, reports or articles. The benefit to you is in broadening your appeal to all the community members who find those terms, if not outright offensive, then at least old-fashioned and representative of a mindset that has not engaged meaningfully with creating inclusive or modern robotics.

Women in Robotics has taken the lead in curating a list of best practices in inclusive terminology, in consultation with other groups, and now we would like to share the first draft of “Terminology for 21st Century Technologists” for comment. So far we’ve considered gender, ethnicity and some disability issues. Our goal is to create a comprehensive directory of terminology, which can go through an update process periodically, just as standards do. There is obviously a little more work to be done and we want to include sections on ‘how to retire terms’, ‘how to implement changes constructively’ and more information about the process. This is where you can help us.

Please send comments in response to the first draft by June 28 2022 to reports@womeninrobotics.org

]]>
A draft open standard for an Ethical Black Box https://robohub.org/a-draft-open-standard-for-an-ethical-black-box/ Thu, 19 May 2022 09:36:00 +0000 http://robohub.org/?guid=636b02d9d058314c6837f303e21603fd

About 5 years ago we proposed that all robots should be fitted with the robot equivalent of an aircraft Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box (EBB). We argued that an ethical black box will play a key role in the processes of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility.

Since then, within the RoboTIPS project, we have developed and tested several model EBBs, including one for an e-puck robot that I wrote about in this blog, and another for the MIRO robot. With some experience under our belts, we have now drafted an Open Standard for the EBB for social robots – initially as a paper submitted to the International Conference on Robots Ethics and Standards. Let me now explain first why we need a standard, and second why it should be an open standard.

Why do we need a standard specification for an EBB? As we outline in our new paper, there are four reasons:

  1. A standard approach to EBB implementation in social robots will greatly benefit accident and incident (near miss) investigations.
  2. An EBB will provide social robot designers and operators with data on robot use that can support both debugging and functional improvements to the robot.
  3. An EBB can be used to support robot ‘explainability’ functions, allowing, for instance, the robot to answer ‘Why did you just do that?’ questions from its user.
  4. A standard allows EBB implementations to be readily shared and adapted for different robots and, we hope, will encourage manufacturers to develop and market general purpose robot EBBs.

And why should it be an Open Standard? Bruce Perens, author of The Open Source Definition, outlines a number of criteria an open standard must satisfy, including:

  • Availability: Open standards are available for all to read and implement.
  • Maximize End-User Choice: Open Standards create a fair, competitive market for implementations of the standard.
  • No Royalty: Open standards are free for all to implement, with no royalty or fee.
  • No Discrimination: Open standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor’s implementation.
  • Extension or Subset: Implementations of open standards may be extended, or offered in subset form.

These are *good* reasons.

The most famous and undoubtedly the most impactful Open Standards are those that were written for the Internet. They were, and still are, called Requests for Comments (RFCs) to reflect the fact that they were, especially in the early years, drafts for revision. As a mark of respect we also regard our draft 0.1 Open Standard for an EBB for Social Robots as an RFC. You can find draft 0.1 in Annex A of the paper here.

Not only is this a first draft, it is also incomplete, covering only the specification of the data that should be saved in an EBB for social robots, and its format. Given that the EBB data specification is at the heart of the EBB standard, we feel that this is sufficient to be opened up for comments and feedback. We will continue to extend the specification, with subsequent versions also published on arXiv.
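
To make the idea of a data specification concrete, here is a minimal sketch of what one timestamped EBB record might look like. The class and field names are my illustrative assumptions for this post, not the fields defined in the draft standard itself:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EBBRecord:
    """One hypothetical black-box record; field names are illustrative only."""
    timestamp_s: float       # time the record was captured
    robot_id: str            # unique identifier of the robot
    sensor_data: dict        # e.g. {"battery_v": 3.9, "ir_front": 0.12}
    actuator_commands: dict  # e.g. {"left_wheel": 0.1, "right_wheel": 0.1}
    status: str              # high-level internal state, e.g. "navigating"

    def to_json(self) -> str:
        """Serialise the record so it can be appended to the black-box log."""
        return json.dumps(asdict(self), sort_keys=True)

record = EBBRecord(
    timestamp_s=1652950000.0,
    robot_id="epuck-07",
    sensor_data={"battery_v": 3.9, "ir_front": 0.12},
    actuator_commands={"left_wheel": 0.1, "right_wheel": 0.1},
    status="navigating",
)
line = record.to_json()
assert json.loads(line)["robot_id"] == "epuck-07"
```

Continuously appending records like this, flight-recorder style, is what would let an accident investigator replay the robot’s sensor and status history after the fact.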

Let me now encourage comments and feedback. Please feel free either to submit comments to this blog post – this way everyone can see them – or to contact me directly via email. All constructive comments that result in revisions to the standard will be acknowledged in the standard.


]]>
Interview with Andrea Thomaz (co-founder of Diligent Robotics): socially intelligent automation solutions for hospitals https://robohub.org/interview-with-andrea-thomaz-co-founder-of-diligent-robotics-socially-intelligent-automation-solutions-for-hospitals/ Sat, 16 Apr 2022 09:30:33 +0000 https://robohub.org/?p=204060 By Sonia Roberts, with additional editing by Dharini Dutia

Diligent Robotics, founded by Andrea Thomaz and Vivian Chu, develops socially intelligent automation solutions for hospitals. Moxi, their flagship robot, delivers items like medications and wound dressings between departments to save the clinical staff’s time. Diligent has just closed their Series B funding round with $30 million.

We sat down with Dr. Thomaz to talk about Moxi, how to manage people’s expectations about robots, and advice for young people and women in robotics. This interview has been lightly edited for clarity. 

A woman puts her arm around the shoulder of a friendly-looking robot with one arm. The robot is a little shorter than a human.

Andrea Thomaz with Diligent’s flagship robot Moxi.

What kinds of problems are you trying to solve with Moxi?

We are building Moxi to help hospitals with the massive workforce shortage that they’re seeing now more than ever. We actually started the company with the same intention several years ago before there was a worldwide pandemic, and it really has just gotten to be an even bigger problem for hospitals. I feel really strongly that robots have a place to play in teamwork environments, and hospitals are a great example of that. There’s no one person’s job in a hospital that you would actually want to give over to automation or robots, but there are tiny little bits of a lot of people’s jobs that are absolutely able to be automated and that we can give over to delivery robots like Moxi in particular. [The main problem we’re trying to solve with Moxi is] point-to-point delivery, where we’re fetching and gathering things and taking them from one area of the hospital to another. 

Hospitals have a lot of stuff that’s moving around every day. Every person in the hospital is going to have certain medications that need to be delivered to them, certain lab samples that need to be taken and delivered to the central lab, certain supplies that need to come up to them, food and nutrition every day. You have a lot of stuff that’s coming and going between patient units and all these different support departments. 

Every one of these support departments has a process in place for getting the stuff moved around, but no matter what, there’s stuff that happens every single day that requires ad-hoc [deliveries] to happen between all of these departments and different nursing units. So sometimes that’s going to be a nurse that just needs to get something for their patient and they want that to happen as soon as possible. They’re trying to discharge their patient, they need a particular wound dressing kit, they’re going to run down and get it because they want to help their patient get out. Or if there’s something that needs to be hand carried because the regular rounding of medications has already happened, a lot of times you’ll have a pharmacy technician stop what they’re doing and go and run some infusion meds for a cancer patient, for example. It sort of falls between these departments. There’s different people that would be involved but a lot of times it does fall on the nursing units themselves. A nurse explained to us one time that nurses are the last line of defense in patient care.

A smiling clinical staff member holds a large stack of samples. Next to her, a humanoid robot almost as tall as she is holds its storage container open. The storage container has plenty of room for this large stack of samples.

Moxi performing a delivery for a clinical staff member.

What is changing with this most recent round of funding?

Over the last 6-12 months, the demand has really skyrocketed such that we’re barely keeping up with the demand for people wanting to implement robots in their hospitals. That’s the reason why we’re raising this round of funding, expanding the team, and expanding our ability to capitalize on that demand. A couple of years ago, if we were working with a hospital it was because they had some special funds set aside for innovation or they had a CTO or a CIO that had a background in robotics, but it certainly wasn’t the first thing that every hospital CIO was thinking about. Now that has completely changed. We’re getting cold outreach on our website from CIOs of hospitals saying “I need to develop a robotic strategy for our hospital and I want to learn about your solution.” Through the pandemic, I think everyone has seen that the workforce shortage in hospitals is only getting worse in the near term. Everybody wants to plan for the future and do everything they can to take small tasks off of the plates of their clinical teams. It’s been really exciting to be part of that market change and see that shift to where everybody is really really open to automation. Before we had to say “No no no, this is not the future, I promise it’s not scifi, I promise these really work.” Now [the climate has] really shifted to people understanding “This is actually something that can impact my teams.”

[Two of our investors are hospitals, and] that’s been one of our most exciting parts of this round. It’s always great to have a successful funding round, but to have strategic partners like Cedars-Sinai and Shannon Healthcare coming in and saying “Yeah, we actually want to build this alongside you” — it’s pretty exciting to have customers like that. 

What kinds of technical problems did you run into when you were either building Moxi or deploying it in a hospital environment? How did you solve those problems? 

One that was almost surprising in how often it came up, and really impacted our ability [to run Moxi in the hospital environment] because we have a software-based robotic solution that is connecting on a regular basis to cloud services, [was that] we had no clue how terrible hospital WiFi was going to be. We actually spent quite a while building in backup systems: use WiFi, fall back to LTE if we have to, but be smart about that so we’re not spending a whole bunch of money on LTE data. That was a problem that seemed very specific to hospitals in particular.
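The fallback behaviour Thomaz describes could be sketched like this. The class name, data budget, and policy here are purely our own assumptions for illustration, not Diligent's actual implementation:

```python
# Illustrative sketch of WiFi-first connectivity with a budget-aware LTE
# fallback, as described above. All names and thresholds are assumptions.

class ConnectivityManager:
    def __init__(self, lte_budget_mb: float = 500.0):
        self.lte_budget_mb = lte_budget_mb  # monthly cellular data budget
        self.lte_used_mb = 0.0

    def choose_link(self, wifi_ok: bool, payload_mb: float, urgent: bool) -> str:
        """Prefer WiFi; use LTE only for urgent traffic within budget."""
        if wifi_ok:
            return "wifi"
        if urgent and self.lte_used_mb + payload_mb <= self.lte_budget_mb:
            self.lte_used_mb += payload_mb
            return "lte"
        return "defer"  # queue locally and retry when WiFi recovers

mgr = ConnectivityManager()
link = mgr.choose_link(wifi_ok=False, payload_mb=10, urgent=True)
```

The design choice worth noting is the third option: non-urgent traffic is deferred rather than sent over cellular, which is what keeps the LTE bill bounded when hospital WiFi drops out.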

Another one was security and compliance. We just didn’t know what some of the different requirements were for hospitals until we actually got into the environments and started interacting with customers and understanding what they wanted to use Moxi for. When we were first doing research trials in 2018 or 2019, we had a version of the robot that was a little bit different than the one we have today. It had lots of open containers so you could just put whatever you wanted to on the robot and send it over to another location. We quickly learned that that limited what the robot was allowed to carry, because so much of what [the customers] wanted was to understand who pulled something out of the robot. So now we have an RF badge reader on the robot that is connected to locking storage containers that are only going to open if you’re the kind of person that is allowed to open the robot. That was an interesting technical challenge that we didn’t know about until after we got out there. 
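The badge-gated storage described above amounts to a simple access-control-with-audit pattern. As a rough sketch only – the role names and function are our assumptions, not Diligent's implementation:

```python
# Illustrative sketch of badge-gated storage: a container unlocks only for
# authorised roles, and every attempt is logged so staff can see who
# removed an item. Role names are assumptions for illustration.

AUTHORISED_ROLES = {"nurse", "pharmacy_technician"}

def container_unlocks(badge_role: str, audit_log: list) -> bool:
    allowed = badge_role in AUTHORISED_ROLES
    audit_log.append({"role": badge_role, "granted": allowed})
    return allowed

log: list = []
nurse_ok = container_unlocks("nurse", log)
visitor_ok = container_unlocks("visitor", log)
```

The audit trail is the important part: it is what lets a hospital answer "who pulled something out of the robot", which was the compliance requirement that drove the locking-container design.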

A close-up of the body of a robot with two small storage containers open in the front and one large storage container open in the back.

Moxi’s locking storage containers.

How did you work with nurses and the other healthcare professionals you were working with to figure out what would be the most helpful robot for them? 

My background, and my co-founder Vivian Chu’s background, is in human-robot interaction so we knew that we didn’t know enough about nursing or the hospital environment. We spent the first 9 months of the company in 2018 building out our research prototype. It looked a lot like what Moxi looks like today. Under the hood it was completely different than what we have today in terms of the reliability and robustness of the hardware and software, but it was enough to get that platform out and have it deployed with nursing units. We embedded ourselves with four different nursing units across Texas over a year-long period. We would spend about 6-8 weeks with a nursing department, and we were just there — engineers, product people, and everybody in the company was cycling in and out a week or two at a time. 

We would ask those nurses: “What would you actually want a robot like this to do?” Part of this that was really important was they didn’t have good ideas about what they would want the robot to do until they saw the robot. It was a very participatory design, where they had to see and get a sense for the robot before they would have good ideas of what they would want the robot to do. Then we would take those ideas [to the company] and come back and say “Yes we can do that,” or “No we can’t do that.” We came out of that whole process with a really great idea. We like to say that’s where we found our product market fit — that’s where we really understood that what was going to be most valuable for the robot to do was connecting the nursing units to these other departments. We can help a nurse with supply management and getting things from place to place within their department, or we can help them with things that are coming from really far away. [The second one] was actually impacting their time way way more.

Because the capabilities of robotic systems are usually misinterpreted, it can be really hard to manage the relationship with stakeholders and customers and set appropriate expectations. How did you manage that relationship?

We do a lot of demonstrations, but still with almost every single implementation you get questions about some robot in Hollywood, [and you have to say] “No, that’s the movies” and explain exactly what Moxi does. 

From a design perspective, we also limit the English phrases that come out of Moxi’s mouth just because we don’t want to communicate a really high level of intelligence. There are lots of canned phrases and interactions on the iPad instead of via voice, and a lot of times the robot will just make meeps and beeps and flash lights and things like that. 

Before starting the company, I had a lab, and one of the big research topics that we had for a number of years was embodied dialogue — how robots could have a real conversation with people. I had a very good appreciation for how hard that problem is, and also for just how much people want it. People come up to a robot, and they want it to be able to talk to them. How you can [set expectations] with the design and behavior of the robot has been a focus of mine since before we started the company. We purposefully don’t make the robot look very human-like because we don’t want there to be android human-level expectations, but [the robot does have a face and eyes so it can] communicate “I’m looking at that thing” and “I’m about to manipulate that thing,” which we think is important. It’s really about striking that balance. 

What would you say is one lesson that you’ve learned from your work at Diligent so far and how are you looking to apply this lesson moving forward?

The difference between research and practice. On the one hand, the motivation and reason for starting a company is that you want to see the kinds of things that you’ve done in the research lab really make it out into the world and start to impact real people and their work. That’s been one of the most fascinating, impactful, and inspiring things about starting Diligent: Being able to go and see nurses when Moxi is doing work for them. They are so thankful! If you just hang back and watch Moxi come and do a delivery, almost always people are super excited to see the robot. They get their delivery and they’re like, “Oh, thank you Moxi!” That feels like we’re really making a difference in a way that you just don’t get with just research contributions that don’t make it all the way out into the world. 

That being said though, there is a long tail of things that you have to solve from an engineering perspective beyond [developing a feature]. My VP of engineering Starr Corbin has this great way of putting it: The research team will get a certain thing on the product to be feature complete, where we’ve demonstrated that this feature works and it’s a good solution, but then there’s this whole phase that has to happen after that to get the feature to be production ready. I would say my biggest lesson is probably everything that it takes, and the entire team of people it takes, to get something from being feature complete to production ready. I have a deep appreciation for that. How fast we can move things out into the world is really dictated by some of that. 

Two women stand next to a friendly-looking humanoid robot with one arm.

Andrea Thomaz (left) and Vivian Chu with Moxi.

What advice would you give young women in robotics? 

If I put my professor hat on, I always had advice that I liked to give women in robotics, in academia, and just kind of pursuing things in general. Imposter syndrome is real, and everybody feels it. All you can do to combat it is not underestimate yourself. Speak up and know that you deserve a seat at the table. It’s all about hard work, but also making sure that your voice is heard. Some of the mentorship that I gave to a lot of my women grad students when I was a professor was around speaking engagements, speaking styles, and communication. It can be really uncomfortable when you’re the only anything in the room to stand up and feel like you deserve to be the one speaking, and so the more that you practice doing that, the more comfortable it can feel, the more confident you’ll feel in yourself and your voice. I think finding that confident voice is a really important skill that you have to develop early on in your career. 

What’s one piece of advice you’ve received that you always turn to when things are tough? 

There are two mentors that I’ve had who are women in AI and robotics. [In my] first year as a faculty member [the first mentor] came and gave a research seminar talk. I for some reason got to take her out to lunch by myself, so we had this amazing one-on-one. We talked a little bit about her talk, probably half of the lunch we talked about technical things, and then she just kind of turned the conversation [around] and said “Andrea, don’t forget to have a family.” Like, don’t forget to focus on that part of your life — it’s the most important thing. She got on a soapbox and said “You have to have a work-life balance, it’s so important. Don’t forget to focus on building a family for yourself, whatever that looks like.” That really stuck with me, especially as [when you’re] early in your career you’re worried about nothing but success. It was really powerful to have somebody strong and influential like that telling you “No, no, this is important and you need to focus on this.” 

The other person that’s always been an inspiration and mentor for me that I’ll highlight [was the professor teaching a class I TA’d for at MIT]. I had found a bug in one of her homework problems, and she was like, “Oh, fascinating.” She was so excited that I had found a question that she didn’t know the answer to. She [just said], “Oh my gosh I don’t know, let’s go find out!” I remember her being this great professor at MIT, and she was excited to find something that she didn’t know and go and learn about it together as opposed to being embarrassed that she didn’t know something. I learned a lot from that interaction: That it’s fun to not know something because then you get to go and find the answer, and no matter who you are, you’re never expected to know everything.

]]>
Careers in robotics: Should you get a PhD or go into industry? https://robohub.org/careers-in-robotics-should-you-get-a-phd-or-go-into-industry/ Sat, 02 Apr 2022 09:50:07 +0000 https://robohub.org/?p=203875 So you are considering a PhD in robotics! Before you decide to apply, here are some things to consider.

What is a PhD?

A PhD is a terminal degree, meaning it is the highest degree that you can earn. Almost without exception, people only earn one PhD. This degree is required for some jobs, mostly in academia, but is often just considered equivalent to years worked in industry. For some positions, hiring managers may even prefer to hire people without PhDs. A PhD is a paid (though not well paid) position at a university that lasts for between 4 and 10 years. To learn more about the PhD process, check out this previous post.

The author on a field trip to Oceano Dunes in California. She is controlling RHex, a six-legged robot, outfitted with sensors to study dune migration.

The day-to-day life of a PhD student versus an industry professional

The process of earning the PhD is very different from the process of earning a bachelor’s or a master’s degree. It is more like an internship or a job. The first two or so years of any PhD program will be largely coursework, but even at this stage you will be balancing spending time on your courses against spending time on research – either because you are rotating through different labs, because you are performing research for a qualifier, or because your advisor is attaching you to an existing research project to give you some experience and mentorship before you develop your own project. This means that getting a PhD is not actually a way to avoid “getting a job” or to “stay in school” – it is actually a job. This also means that just because you are good at or enjoy coursework does not mean you will necessarily enjoy or excel in a PhD program, and just because you struggled with coursework does not mean you will not flourish in a PhD program. After you are done with coursework, you will spend all of your time on research. Depending on the day, that can mean reading textbooks and research papers, writing papers, making and giving presentations, teaching yourself new skills or concepts, programming or building robots, running experiments, and mentoring younger students. If you excelled at either conducting research as an undergraduate or very open-ended course projects much more than typical coursework, you’ll be much more likely to enjoy research as a PhD student. 

Types of goal setting in academia and industry

In course work, and in industry jobs with a good manager, you are given relatively small, well defined goals to accomplish. You have a team (your study group, your coworkers) who are all working towards the same goal and who you can ask for help. In a PhD program, you are largely responsible for narrowing down a big research question (like “How can we improve the performance of a self-driving car?”) into a question that you can answer over the course of a few years’ diligent work (“Can we use depth information to develop a new classification method for pedestrians?”). You define your own goals, often but not always with the advice of your advisor and your committee. Some projects might be team projects, but your PhD only has your name on it: You alone are responsible for this work. If this sounds exciting to you, great! But these are not good working conditions for everybody. If you do not work well under those conditions, know that you are no less brilliant, capable, or competent than someone who does. It just means that you might find a job in industry significantly more fulfilling than a job in academia. We tend to assume that getting a PhD is a mark of intelligence and perseverance, but that is often not the case — sometimes academia is just a bad match to someone’s goals and motivations. 

Meaning and impact

Academic research usually has a large potential impact, but little immediate impact. In contrast, industry jobs generally have an impact that you can immediately see, even if it is very small. It is worth considering how much having a visible impact matters to you and your motivation because this is a major source of PhD student burnout. To give a tangible example, let’s say that you choose to do research on bipedal robot locomotion. In the future, your work might contribute to prostheses that can help people who have lost legs walk again, or to humanoid robots that can help with elder care. Is it important to you that you can see these applications come to fruition? If so, you might be more fulfilled working at a company that builds robots directed towards those kinds of tasks instead of working on fundamental research that may never see application in the real world. The world will be better for your contributions regardless of where you make them – you just want to make sure you are going to make those impacts in a way that allows you to find them meaningful! 

Pay and lifetime earning potential

Engineers are significantly better paid in industry than academia. Since working in industry for a minimum of five to ten years and getting a PhD are often considered equivalent experience for the purposes of many job applications, even the time spent getting a PhD – where you will earn much less than you would in industry – can mean that you give up a substantial amount of money. Let’s say that an entry-level engineering job makes $100,000 per year, and a graduate student earns $40,000. If your PhD takes 6 years, you lose out on $60,000 x 6 = $360,000 of potential pay. Consider also that a PhD student’s stipend is fairly static, whereas you can expect to have incremental salary increases, bonuses, and promotions in an industry job, meaning that you actually lose out on at least $400,000. This is a totally valid reason to either skip the PhD process completely, or to work in industry for a few years and build up some savings before applying to PhD programs. 
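That arithmetic can be laid out explicitly. The 3% annual raise below is purely an assumption for illustration; the stipend is treated as flat, as in the paragraph above:

```python
# Back-of-the-envelope opportunity cost of a 6-year PhD, using the figures
# from the text. The 3% annual raise is an illustrative assumption.
ENTRY_SALARY = 100_000
STIPEND = 40_000
YEARS = 6
RAISE = 0.03

# The simple estimate: a flat $60,000/year gap for 6 years.
flat_cost = (ENTRY_SALARY - STIPEND) * YEARS  # $360,000

# With modest raises compounding in industry and a static stipend,
# the gap grows past $400,000.
industry_earnings = sum(ENTRY_SALARY * (1 + RAISE) ** y for y in range(YEARS))
phd_earnings = STIPEND * YEARS
cost_with_raises = industry_earnings - phd_earnings
```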

Robotics Institute at University of Toronto

How do I know what I want?

It’s hard! If you’re still uncertain, remember that you can gain a few years of work experience in industry before going back to get the PhD, and will likely be considered an even stronger candidate than before. Doing this allows you to build up some savings and become more confident that you really do want to get that PhD. 

Thinking through these questions might help you figure out what direction you want to go:

  • Are you much more motivated to do class projects that you are allowed to fully design yourself? 
  • When you think about something small you built being used daily by a neighbor, how do you feel?
  • Is your desire to get a PhD because of the prestige associated with the degree, or the specific job opportunities it opens up?
]]>
Careers in robotics: What is a robotics PhD? https://robohub.org/careers-in-robotics-what-is-a-robotics-phd/ Sat, 26 Mar 2022 10:12:20 +0000 https://robohub.org/?p=203872 This relatively general post focuses on robotics-related PhD programs in the American educational system. Much of this will not apply to universities in other countries, or to other departments in American universities. This post will take you through the overall life cycle of a PhD and is intended as a basic overview for anyone unfamiliar with the process, whether they are considering a PhD or have a loved one who is currently in a PhD program and just want to learn more about what they are doing. 

The basics

A PhD (doctoral degree) in engineering or a DEng (Doctorate of Engineering) is the highest degree that you can earn in engineering. This is generally a degree that people only earn one of, if they earn one at all. Unlike a bachelor’s degree or a master’s degree, a PhD in a topic relevant to robotics should be free, and students should receive a modest stipend for their living expenses. There are very few stand-alone robotics PhD programs, so people generally join robotics labs through PhD programs in electrical engineering, computer science, or mechanical engineering. 

Joining a lab

In some programs, students are matched with a lab when they are accepted to the university. This matching is not random: If a university works this way, a professor has to have a space in their lab, see the application, and decide that the student would be a good fit for their lab. Essentially, the professor “hires” the student to join their lab. 

Other programs accept cohorts of students who take courses in the first few years and pick professors to work with by some deadline in the program. The mechanism through which students and professors pair up is usually rotations: Students perform a small research project in each of several labs and then join one of the labs they rotated in. 

The advisor

Regardless of how a student gets matched up with their advisor, the advisor has a lot of power to make their graduate school experience a positive one or a negative one. Someone who is a great advisor for one student may not be a great advisor for another. If you are choosing an advisor, it pays to pay attention to the culture in a lab, and whether you personally feel supported by that environment and the type of mentorship that your advisor offers. In almost every case, this is more important for your success in the PhD program than the specifics of the project you will work on or the prestige of the project, collaborators, or lab. 

Qualifiers

PhD programs typically have qualifiers at some point in the first three years. Some programs use a test-based qualifier system, either creating a specific qualifier test or using the final exams of required courses. In some programs, the student is tested by a panel of faculty who ask questions about course material that they are expected to have learned by this point. In other programs, the student performs a research project and then presents it to a panel of faculty. 

Some universities view the qualifiers as a hurdle that almost all of the admitted PhD students should be able to pass, and some universities view them as a method to weed out students from the PhD program. If you are considering applying to PhD programs, it is worth paying attention to this cultural difference between programs, and not taking it too personally if you do not pass the qualifiers at a school that weeds out half of their students. After all, you were qualified enough to be accepted. It is also important to remember, if you join either kind of program, that if you do not pass your qualifiers, usually what happens is that you leave the program with a free master’s degree. Your time in the program will not be wasted!

The author testing a robot on a steep dune face on a research field trip at Oceano Dunes.

Research

Some advisors will start students on a research project as soon as they join the lab, typically by attaching them to an existing project so that they can get a little mentorship before starting their own project. Some advisors will wait until the student is finished with qualifiers. Either way, it is worth knowing that a PhD student’s relationship to their PhD project is likely different from any project they have ever been involved with before. 

For any other research project, there is another person – the advisor, an older graduate student, a post doc – who has designed the project or at least worked with the student to come up with parameters for success. The scope of previous research projects would typically be one semester or one summer, resulting in one or two papers at most. In contrast, a PhD student’s research project is expected to span multiple years (at least three), should result in multiple publications, and is designed primarily by the student. It is not just that the student has ownership over their project, but that they are responsible for it in a way that they have never been responsible for such a large open-ended project before. It is also their primary responsibility – not one project alongside many others. This can be overwhelming for a lot of students, which is why it is impolite to ask a PhD student how much longer they expect their PhD to take. 

The committee

The “committee” is a group of professors that work in a related area to the student’s. Their advisor is on the committee, but it must include other professors as well. Typically, students need to have a mix of professors from their school and at least one other institution. These professors provide ongoing mentorship on the thesis project. They are the primary audience for the thesis proposal and defense, and will ultimately decide what is sufficient for the student to do in order to graduate. If you are a student choosing your committee, keep in mind that you will benefit greatly from having supportive professors on your committee, just like you will benefit from having a supportive advisor. 

Proposing and defending the thesis

When students are expected to propose a thesis project varies widely by program. In some programs, students propose a topic as part of their qualifier process. In others, students have years after finishing their qualifiers to propose a topic – and can propose as little as a semester before they defend! 

The proposal and defense both typically take the form of a presentation followed by questions from the committee and the audience. In the proposal, the student outlines the project they plan to do, and presents criteria that they and their committee should agree on as the required minimum for them to do in order to graduate. The defense makes the case that the student has hit those requirements. 

After the student presents, the committee will ask them some questions, confer, and then tell the student whether they passed or failed. It is very uncommon for a PhD student to fail their defense, and it is generally considered a failure on the part of the advisor rather than the student if this happens, because the advisor shouldn’t have let the student present an unfinished thesis. After the defense, there may be some corrections to the written thesis document or even a few extra experiments, but typically the student does not need to present their thesis again in order to graduate.

The bottom line

A PhD is a long training process to teach students how to become independent researchers. Students will take classes and perform research, and will also likely teach or develop coursework. If this is something you’re thinking about, it’s important to learn about what you might be getting yourself into – and if it’s a journey one of your loved ones is starting on, you should know that it’s not just more school!

]]>
Robots and romance: science fiction and science https://robohub.org/robots-and-romance-science-fiction-and-science/ Sat, 12 Feb 2022 12:00:24 +0000 https://robohub.org/?p=203551

Valentine’s Day is approaching… Do you want to sneak in a robot movie to watch on date night? Do you wonder whether robots and love are possible? Here are five recommendations for sci-fi movies with a discussion of the related real-world robotics science. And remember to check out Learn AI and Human-Robot Interaction from Asimov’s I, Robot Stories – it’s a great primer on social interactions!

Can roboticists make the perfect partner? The original 1975 The Stepford Wives argues “yes” – if your definition of the perfect partner is limited to their appearance and willingness to selflessly perform subservient tasks. The movie, and the book it is based on, extrapolated from the advances in animatronics at Disney, which had opened the Hall of Presidents attraction in 1971 to great acclaim. The assumption was that the hard part in creating a human substitute is creating robots that look and move like humans.

Mimicking human movement and facial expressiveness is certainly a challenge for the mechanics and control of a robot – think Hanson’s Sophia and Ishiguro’s Geminoid series of ‘bots. But physical fidelity is not the same as creating the artificial general intelligence needed to avoid the Uncanny Valley or hold a meaningful conversation. See more about the science of making life-like robots here.

Creating the perfect sex partner is a real issue in robotics, with major capital investments in the sexbot industry. This presents real legal and ethical issues. See the RTSF interview with Dr. Van Wynsberghe of The Foundation for Responsible Robotics here. The FRR has been working to get governments to set policies; see their report Our Sexual Future with Robots. As a woman, I appreciate The Campaign Against Sex Robots, which argues that acting out sexual fantasies with sexbots (fantasies that would make even Takeshi Kovacs in Altered Carbon blush) has toxic gender, cultural, and mental health implications. Check out the great article on the legal status of sexbots in the US here.

Nominally a horror movie, The Stepford Wives is likely to make you both grateful for whatever positive relationship you have with each other. But get the 1975 original, not the 2004 remake.

Can roboticists make a robot that is better at social interactions, including love, than we are? In the 1987 movie Making Mr. Right, a neuroatypical scientist, played by John Malkovich, builds an android, also played by John Malkovich, that is much more socially competent than he is. A press agent is assigned to handle public relations and teach it to be more emotional. Love ensues.

Social interaction skills are a hot topic in human-robot research, see the large number of papers in venues such as the annual IEEE/ACM Conference on Human-Robot Interaction. But, sadly, most of the touted skills are social engineering tricks that make us think the robots have human skills. Noel Sharkey has a great set of articles on this type of advance/parlor trick in robotics.

Making Mr. Right is a romantic comedy, not a particularly good one, but it *is* the opposite of The Stepford Wives and, well, a romcom. It might inspire some snuggles.

Can’t roboticists just let us download our brains to speed up the process of making robots more human-like? In science fiction, that never works, and three movies illustrate how it could lead to some tainted love. Saturn 3, Eve of Destruction, and Demon Seed (FYI, the book is waaay better than the movie) are delightfully MST3K-worthy movies. All three have robot creators who download their brains, apparently having missed the class on Freud and the Id. Oops! Each movie has top actors, including Harvey Keitel, Kirk Douglas, Farrah Fawcett, Gregory Hines, and Julie Christie, who probably regret their decision to participate.

In the real world, downloading probably won’t work either, even if we can edit out our Id. There is some work on transfer learning and brain-computer interfaces but that work is more about motor skills and control, not abstract reasoning and memories.

If togetherness means throwing popcorn at a big screen TV and shouting out derisive comments, then any of these three movies is a terrific choice to watch with your special someone! My favorite is Saturn 3.

Here’s another link to a slideshow discussing emotions in robots and the general topic page. Whatever you decide to watch, enjoy and keep learning about robots!

]]>
What Women in Robotics achieved in 2021 and what’s coming next in 2022 https://robohub.org/what-women-in-robotics-achieved-in-2021-and-whats-coming-next-in-2022/ Fri, 28 Jan 2022 15:17:20 +0000 https://svrobo.org/?p=22158

It’s been a hard year for women all over the world, and I’d like to thank everyone who has contributed to Women in Robotics in 2021, whether you’ve simply shared information about yourself in our community #intros channel, organized an online event, or made yourself available as an advisor in our pilot mentoring program. Perhaps you’ve been furthering our mission in an ‘unofficial’ way simply by supporting other women and non-binary people who are working in robotics, or who are interested in working in robotics.

We recognize and appreciate the community building work that women do, which is so often out of the spotlight, and on top of everything else. Women’s work has rarely been given economic value as one of my heroes Marilyn Waring wrote in “Counting for Nothing”. She founded feminist economics, now called triple bottom line accounting, and changed the way that the World Bank and other global organizations evaluate economies. 

The pandemic has forced women out of the workforce at twice the rate of men, leaving women’s participation in the workforce lower than it’s been for 30 years. And the pressure shows no sign of stopping. However, I believe that whenever women are forced to step backwards, we move forward again with renewed determination and focus. And so my inspiration is renewed to further the mission of Women in Robotics, to support women and non-binary people who work in robotics, and those who would like to work in robotics. We may all find it hard to find time, but small actions in the right time and place can move mountains. 

In 2021, our annual showcase featured not 25 or 30 but ‘50 women in robotics you need to know about’ from 21 different countries, from industry, startups and academia, with particular mention of the women featured who have fought for the rights of refugees and persecuted women, especially the Afghanistan Women’s Robotics Team. For other women, this recognition has helped to raise their profile within universities or companies, leading to increased opportunities.

Our annual list also means that there’s no excuse for not including women in conferences, articles, policy making, job searches etc. In 2022, I’d love to see us create Wikipedia pages for more women in robotics, and create a speaker database and a citation list, similar to what Black in Robotics has done and to the work of 500 Women Scientists.

The work of women in science is still less likely to be cited than that of men. Recent UNESCO research has found that citation bias is the start of a career-long lack of recognition for women, starting with women citing themselves less often than men do. In 2022, let’s focus on improving citation rates, increasing the number of women in panels, journals and conferences, and holding organizers accountable. We can target increasing the number of women cited in robotics curricula, reading lists and coursework. As an organization we can reach out to universities, labs, conferences and journals in a way that individual women cannot.

Another grassroots campaign that we started was the Women in Robotics Photo Challenge, which has already resulted in some great new photos of women building robots joining the image search results. Since then we’ve realized that an ordinary Google or Wikipedia search steers you to Sophia or sex robots, rather than to real women building robots. It’s also time to retire the word ‘unmanned’. Women in Robotics is planning to request that any university still referencing ‘Unmanned’ Vehicles replace the term with driverless, uncrewed, or a better alternative.

The lack of in-person conferences is severely impacting the benefits of in-person networking at a senior level for women in robotics, and so we’d like to finally launch our Advisory Board through online networking events for senior women in robotics, in both academia and industry.

We piloted our first mentoring program over 12 weeks with 16 women and it was a very successful experience for almost every participant. We know that there is a lot of demand to run the program again but we’ll need more volunteers to help organize the events and match mentors/mentees. This is one area in which sponsorship for Women in Robotics could be useful, but sponsorship comes with a significant administrative cost, so we will only seek sustainable major sponsorships in 2022.

My gratitude goes to CITRIS & the Banatao Institute, through the People and Robots Lab, for providing me with some funding for the last two years that has allowed me to spend some of my time on Women in Robotics, Black in Robotics and the Society, Robots and Us Seminars. 

My call to action is for you to make your volunteer labor impactful by investing your time in a call to action with a big outcome. I hope it’s one of our Women in Robotics actions, but in everything you do you represent women in robotics and allies. Best wishes to you all for 2022. And thank you to the 2021 Women in Robotics Board of Directors! Our full Annual Report is here.

  1. Counting for Nothing (originally If Women Counted) https://en.wikipedia.org/wiki/If_Women_Counted by Marilyn Waring https://en.wikipedia.org/wiki/Marilyn_Waring
  2. https://www.theguardian.com/commentisfree/2021/nov/19/great-resignation-mothers-forced-to-leave-jobs
  3. https://www.techrepublic.com/article/women-and-middle-managers-will-lead-the-great-resignation-into-2022/
  4. https://en.unesco.org/news/unesco-research-shows-women-career-scientists-still-face-gender-bias

Reflections from the Women in Robotics Board of Directors:

We are very grateful for the work of our Board Members over the last year and we thank Kerri Fetzer-Borelli, Hallie Siegel and Ariel Anders for their vision and experience on the 2020 and 2021 Board. We are delighted to have them join our Advisory Board in 2022.

 

Allison Thackston:

The Women in Robotics community and its support looked different in 2021 from previous years, when we relied a lot on local chapters, meetups, and networking. With many offices locked down and people more hesitant to attend events in person, we’ve struggled a bit. On the bright side, we’ve been building up our online presence, improving our website, and increasing our social media outreach. In the year ahead, I hope we continue this growth.

Cynthia Yeung:

Launching the Project Advance mentorship program was a highlight of my service on the WiR board in 2021. We have received lots of great feedback from the inaugural cohort which we can use to improve the programming for the second cohort in 2022. One of the key success metrics was the percentage of returning mentors (demand is unlimited in two-sided marketplaces; supply is the constraint) and we are pleased to report that all mentors are interested in returning for the second cohort. It is personally gratifying to be in a position to implement the kind of program that I wanted to have access to earlier in my career. On a macro level, I believe that strong focus and measurable progress on a small number of initiatives will bode well for WiR’s impact roadmap.

Lisa Winter:

2021 was a year of self-reflection and a test of patience. I think 2022 will be the year when a lot of us take bigger risks as we try to figure out a better work/life balance. I would like to see more communication on the WiR Slack, specifically giving job advice and engineering advice. One thing I enjoy about other sites, which I think we could incorporate, is more sharing of personal projects: connecting over them and learning from them.

Laura Stelzner:

WiR provides community and support in a field where it can be lonely being one of the few women at your company or department. As the field of robotics grows, we would like to show women and non-binary people all the amazing career opportunities that exist, by providing them with mentorship, networking, leadership and career advancement opportunities.

Laurie Linz:

2021 was a quiet year for Boulder/Denver; we didn’t hold any local (in-person) meetings. The good news is that I am in the process of setting up some in-person events again. We’ll approach them with caution given the covid situation, but I’m happy to be starting again.

WiR helps women level up or launch their career in robotics. We welcome those not ready to launch with networking and educational support. Learn, launch, level up.

Sue Keay:

One of the concerns that keeps me awake at night is wondering what important challenges we might have solved already, and what technologies are missing, because of the lack of diversity in robotics. That’s why Women in Robotics exists: to support the small number of women who are contributing to developing robotic technologies and to encourage more to join our ranks. The global list of women in robotics has been an important way to signal the important contributions that women are making in this space and to raise the profile of robotics as a valid career choice for women. Joining WiR acknowledges that we can be doing better with diversity in robotics, and it may provide much-needed support to a woman in robotics who is questioning her reasons for remaining in such a male-dominated field. My own experience has been that women have always been my greatest supporters, and I feel less alone by being part of WiR.

Sue Keay (Robotics Australia) and Erin McColl (Toyota Research Institute) with a Ghost Robotics platform.

]]>
UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research https://robohub.org/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research/ Sun, 16 Jan 2022 10:45:50 +0000 http://robohub.org/?guid=e3b07adbe24e56543370908e9c15054f

Humanitarian groups have been calling for a ban on autonomous weapons. Wolfgang Kumm/picture alliance via Getty Images

By James Dawes

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.

Lethal errors and black boxes

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.

The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.


The Conversation

This is an updated version of an article originally published on September 29, 2021.

James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

]]>
California’s AV testing rules apply to Tesla’s “FSD” https://robohub.org/californias-av-testing-rules-apply-to-teslas-fsd/ Mon, 10 Jan 2022 08:53:42 +0000 http://robohub.org/?guid=3efa489c89b1e7f7ea6ad99fae288ce0

Tesla Motors autopilot (photo:Tesla)

Five years to the day after I criticized Uber for testing its self-proclaimed “self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law, I find myself criticizing Tesla for testing its self-proclaimed “full self-driving” vehicles on California roads without complying with the testing requirements of California’s automated driving law.

As I emphasized in 2016, California’s rules for “autonomous technology” necessarily apply to inchoate automated driving systems that, in the interest of safety, still use human drivers during on-road testing. “Autonomous vehicles testing with a driver” may be an oxymoron, but as a matter of legislative intent it cannot be a null set.

There is even a way to mortar the longstanding linguistic loophole in California’s legislation: Automated driving systems undergoing development arguably have the “capability to drive a vehicle without the active physical control or monitoring by a human operator” even though they do not yet have the demonstrated capability to do so safely. Hence the human driver.

(An imperfect analogy: Some kids can drive vehicles, but it’s less clear they can do so safely.)

When supervised by that (adult) human driver, these nascent systems function like the advanced driver assistance features available in many vehicles today: They merely work unless and until they don’t. This is why I distinguish between the aspirational level (what the developer hopes its system can eventually achieve) and the functional level (what the developer assumes its system can currently achieve).

(SAE J3016, the source for the (in)famous levels of driving automation, similarly notes that “it is incorrect to classify” an automated driving feature as a driver assistance feature “simply because on-road testing requires” driver supervision. The version of J3016 referenced in regulations issued by the California Department of Motor Vehicles does not contain this language, but subsequent versions do.)

The second part of my analysis has developed as Tesla’s engineering and marketing have become more aggressive.

Back in 2016, I distinguished Uber’s AVs from Tesla’s Autopilot. While Uber’s AVs were clearly on the automated-driving side of a blurry line, the same was not necessarily true of Tesla’s Autopilot:

In some ways, the two are similar: In both cases, a human driver is (supposed to be) closely supervising the performance of the driving automation system and intervening when appropriate, and in both cases the developer is collecting data to further develop its system with a view toward a higher level of automation.

In other ways, however, Uber and Tesla diverge. Uber calls its vehicles self-driving; Tesla does not. Uber’s test vehicles are on roads for the express purpose of developing and demonstrating its technologies; Tesla’s production vehicles are on roads principally because their occupants want to go somewhere.

Like Uber then, Tesla now uses the term “self-driving.” And not just self-driving: full self-driving. (This may have pushed Waymo to call its vehicles “fully driverless“—a term that is questionable and yet still far more defensible. Perhaps “fully” is the English language’s new “very.”)

Tesla’s use of “FSD” is, shall we say, very misleading. After all, its “full self-driving” cars still need human drivers. In a letter to the California DMV, the company characterized “FSD” as a level two driver assistance feature. And I agree, to a point: “FSD” is functionally a driver assistance system. For safety reasons, it clearly requires supervision by an attentive human driver.

At the same time, “FSD” is aspirationally an automated driving system. The name unequivocally communicates Tesla’s goal for development, and the company’s “beta” qualifier communicates the stage of that development. Tesla intends for its “full self-driving” to become, well, full self-driving, and its limited beta release is a key step in that process.

And so while Tesla’s vehicles are still on roads principally because their occupants want to go somewhere, “FSD” is on a select few of those vehicles because Tesla wants to further develop—we might say “test”—it. In the words of Tesla’s CEO: “It is impossible to test all hardware configs in all conditions with internal QA, hence public beta.”

Tesla’s instructions to its select beta testers show that Tesla is enlisting them in this testing. Since the beta software “may do the wrong thing at the worst time,” drivers should “always keep your hands on the wheel and pay extra attention to the road. Do not become complacent…. Use Full Self-Driving in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately….”

California’s legislature envisions a similar role for the test drivers of “autonomous vehicles”: They “shall be seated in the driver’s seat, monitoring the safe operation of the autonomous vehicle, and capable of taking over immediate manual control of the autonomous vehicle in the event of an autonomous technology failure or other emergency.” These drivers, by the way, can be “employees, contractors, or other persons designated by the manufacturer of the autonomous technology.”

Putting this all together:

  1. Tesla is developing an automated driving system that it calls “full self-driving.”
  2. Tesla’s development process involves testing “beta” versions of “FSD” on public roads.
  3. Tesla carries out this testing at least in part through a select group of designated customers.
  4. Tesla instructs these customers to carefully supervise the operation of “FSD.”

Tesla’s “FSD” has the “capability to drive a vehicle without the active physical control or monitoring by a human operator,” but it does not yet have the capability to do so safely. Hence the human drivers. And the testing. On public roads. In California. For which the state has a specific law. That Tesla is not following.

As I’ve repeatedly noted, the line between testing and deployment is not clear—and is only getting fuzzier in light of over-the-air updates, beta releases, pilot projects, and commercial demonstrations. Over the last decade, California’s DMV has performed admirably in fashioning rules, and even refashioning itself, to do what the state’s legislature told it to do. The issues that it now faces with Tesla’s “FSD” are especially challenging and unavoidably contentious.

But what is increasingly clear is that Tesla is testing its inchoate automated driving system on California roads. And so it is reasonable—and indeed prudent—for California’s DMV to require Tesla to follow the same rules that apply to every other company testing an automated driving system in the state.

]]>
Investors warn Deep Tech founders about these 12 pitfalls https://robohub.org/investors-warn-deep-tech-founders-about-these-12-pitfalls/ Thu, 16 Dec 2021 09:54:56 +0000 https://svrobo.org/?p=22070

Firstly, what is Deep Tech, as opposed to Tech or technology-enabled? Sometimes Deep Tech is regarded as a science-based startup; sometimes it is regarded as disruptive to the status quo; sometimes it is regarded as just slow, hard, and capital intensive, with a long ROI horizon. Or as something that investors aren’t ready for yet. But the amount of money going into Deep Tech investing is increasing, and the pool of Deep Tech investors is growing. One of the key points I made in a recent GIST Tech Connect Deep Tech panel is that most investors, including the most successful Tech investors, are not able to invest seriously in Deep Tech startups because they lack the technical awareness and the depth of commercialization experience specific to a Deep Tech startup. GIST, the Global Innovation in Science and Technology Network, is the US State Department program to encourage and support global entrepreneurship.

In fact, if you do the research into the failure rates of some high profile Deep Tech startups, it seems that certain large funds have a much higher failure rate than others, so at best, their growth pathway is not compatible with Deep Tech startups. At worst, they are simply cherry picking some Deep Tech startups for their publicity value. Startups should always do their due diligence on investors and how they treat founders, particularly founders with similar startups.

Universities play a huge role in derisking, funding and commercializing Deep Tech startups, but there is still a ‘Valley of Death’ in the transfer stages. A Deep Tech startup can come out of any university, but not all universities have real commercialization experience and a supportive startup ecosystem. Silicon Valley Robotics and Circuit Launch have provided a ‘halfway house’ for a lot of Deep Tech startups by providing affordable workspace with prototyping facilities and a startup ecosystem. But the first question I always ask entrepreneurs is whether they have leveraged every advantage that their university connections can provide. Universities can provide greatly discounted lab space and testing facilities, as well as connections to scientific experts in almost any field who can be engaged as consultants and advisors.

The SBIR program, or America’s Seed Fund, provides about $4 billion of non-dilutive funding from the federal government in the form of R&D contracts and grants to small businesses and startups, giving you the opportunity to derisk a lot of the technology very early on. You can do a really detailed scope and scan, and then couple that with the I-Corps program for deeper dives into customer discovery, to really understand whether this is something that’s just a nice-to-have, or a real must-have. Although SBIR is an American program, many countries around the world have created similar ones; the EU’s Horizon 2020 grants are a good example.

Grants catalyze development and do a certain amount to derisk technology, extending the runway through non-dilutive funding and creating a technology roadmap that validates the science as significant. Corporate venture funds or strategic investors also play an important role alongside non-dilutive grant funding. Not only can they be a check; they can be a customer, an advisor and a partner in the early prototype-to-manufacture stages. The best strategic investors play a huge role in helping Deep Tech startups succeed, because they need the technology you are creating.
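The effect of "extending the runway through non-dilutive funding" is simple arithmetic, sketched below with entirely hypothetical figures (the dollar amounts and burn rate are made up for illustration, not drawn from any real startup):

```python
# Back-of-envelope runway arithmetic for a Deep Tech startup.
# All figures are hypothetical, chosen only to illustrate the point.

def runway_months(cash: float, monthly_burn: float) -> float:
    """Months of operation left at the current burn rate."""
    return cash / monthly_burn

seed_round = 1_500_000    # equity raised (dilutive)
monthly_burn = 125_000    # salaries, lab space, prototyping

base = runway_months(seed_round, monthly_burn)  # 12.0 months

# A non-dilutive grant (SBIR-style) adds cash without giving up equity.
grant = 750_000
extended = runway_months(seed_round + grant, monthly_burn)  # 18.0 months

print(f"Runway without grant: {base:.1f} months")
print(f"Runway with non-dilutive grant: {extended:.1f} months")
```

The same cash from an equity round would buy the same months, but at the cost of ownership; that is the whole appeal of grant funding as "revenue".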

Here’s a collection of tips for Deep Tech founders gathered from the GIST TechConnect panel on Deep Tech, with Nakia Melecio from Georgia Tech, Nhi Lê from WARF, Andra Keay from SVR and The Robotics Hub, and G. Nagesh Rao from the US Dept of State. Also tips from “Six red flags that send investors running the other way” by Sara Bloomberg, San Francisco Business Times. Quotes not attributed to other investors are my thoughts or recollections from the event.

Accelerator hopping

“When you start going from accelerator to accelerator looking for funding, then you’re doing it wrong. Accelerators only fund you to participate in their program. Their program and mentors are the real value.” Nhi Lê, WARF Accelerator

You also dilute your equity and become uninvestable.

Taking the first check, giving away too much equity in early rounds

Always negotiate terms. But don’t focus solely on the financials, at the risk of throwing away the less obvious value that a good investor can bring to you.

“Deep Tech startups may take longer to get to revenue than a traditional tech startup, so you need to think about grant funding as a source of revenue, and any contracts that help you develop part of your technology.” Nhi Lê, WARF Accelerator

Not budgeting for IP defense

“Companies often say that they’re investable because they have a patent, but they haven’t budgeted anything to defend it. Your IP is only as good as your ability to defend it. Universities play a great role in protecting and defending IP that they’ve licensed.” Nhi Lê, WARF Accelerator

Not having a plan for the whole journey

“When you go into your first funding meeting, you must be thinking about the long term journey, all the way to exit. It’s never going to be just one check, you’re growing a company.” G. Nagesh Rao, US Dept of Commerce

Not doing diligence on investors or accelerators

“Deep tech, especially at the leading edge, is usually expensive, so it’s critical to find the right path to commercialization at scale. Good investors speed up the process and lower your burn rate.” Michael Harries, The Robotics Hub

Have your potential investors brought similar startups to market? That experience can make the commercialization process much faster, and it’s critical to manage your resources effectively. Constant fundraising takes founders away from product development. Also, do your investors have patient capital, or do they need a rapid return on investment for their current fund? Don’t assume that a well-known investor or accelerator guarantees success, or even that you’ll find a good fit with their process.

Ignorance of basic financials

Overreaching on inventory, being unable to meet debts in a timely fashion, structuring the company poorly: all of these are cited by founders who’ve struggled.

Customer discovery never stops

“Focus on the customer and fall in love with the customer’s problem and you’ll never go wrong.” Nakia Melecio, Georgia Tech

Do it from the start, and never stop going to market. You can’t just outsource your business development to people with better sales skills, not until you know the pain points you’re solving for your customers and can write the scripts for them.

Not doing the research, or using vanity metrics instead of strategy

“If a founder is estimating their market in the trillions of dollars they have either not done the research or they are just delusional.” Swati Chaturvedi, Co-Founder of PropelX
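A bottom-up estimate is the antidote to the trillion-dollar claim: multiply reachable customers by a realistic price instead of quoting a headline figure. The sketch below uses made-up numbers purely for illustration:

```python
# Bottom-up market sizing sketch -- all numbers hypothetical.

reachable_customers = 40_000     # e.g. mid-size warehouses a startup could plausibly reach
units_per_customer = 5           # robots deployed per site
annual_price_per_unit = 30_000   # robot-as-a-service subscription, per year

serviceable_market = reachable_customers * units_per_customer * annual_price_per_unit
print(f"Bottom-up serviceable market: ${serviceable_market / 1e9:.1f}B per year")
# A large, credible market -- and nowhere near "trillions".
```

An investor can argue with each factor in an estimate like this; there is nothing to argue with in a trillion-dollar slide.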

“Founders who are focused only on vanity metrics (growth rate and valuation) and not attuned to developing sound business models are a red flag.” Anurag Chandra, Fort Ross Ventures

Trying to skip steps

“Another red flag is trying to FOMO you into moving quickly. Not only is it bad for arriving at a sound investment decision, it’s an indication of how they do business with customers and partners (i.e. not invested in building long-term relationships).” Anurag Chandra, Fort Ross Ventures

Misrepresentation or withholding data

“Investors can tell when you are avoiding details like actual product or customer development status and it may mean you are misrepresenting your business.” Caroline Winnett, Executive Director of Berkeley SkyDeck

Cofounder issues, not having a clear leader or not being open to feedback

“There needs to be agreement on who is acting as CEO, and everyone needs to be aligned on that. Another red flag is not being open to advice from experts.” Caroline Winnett, Executive Director of Berkeley SkyDeck

Being disorganized

“Founders should be responsive to requests for more information. It shows if they are organized and in the mindset to do a deal versus spin cycles.” Shruti Gandhi of Array Ventures

]]>
Robots can be companions, caregivers, collaborators — and social influencers https://robohub.org/robots-can-be-companions-caregivers-collaborators-and-social-influencers/ Fri, 26 Nov 2021 11:02:30 +0000 http://robohub.org/?guid=a70d72942c892f1bba3396dcc9b01f98

Robots and artificial intelligence are poised to increase their influence in our everyday lives. (Shutterstock)

By Shane Saunderson

In the mid-1990s, there was research going on at Stanford University that would change the way we think about computers. The Media Equation experiments were simple: participants were asked to interact with a computer that acted socially for a few minutes after which, they were asked to give feedback about the interaction.

Participants would provide this feedback either on the same computer (No. 1) they had just been working on or on another computer (No. 2) across the room. The study found that participants responding on computer No. 2 were far more critical of computer No. 1 than those responding on the same machine they’d worked on.

People responding on the first computer seemed to not want to hurt the computer’s feelings to its face, but had no problem talking about it behind its back. This phenomenon became known as the computers as social actors (CASA) paradigm because it showed that people are hardwired to respond socially to technology that presents itself as even vaguely social.

The CASA phenomenon continues to be explored, particularly as our technologies have become more social. As a researcher, lecturer and all-around lover of robotics, I observe this phenomenon in my work every time someone thanks a robot, assigns it a gender or tries to justify its behaviour using human, or anthropomorphic, rationales.

What I’ve witnessed during my research is that while few are under any delusions that robots are people, we tend to defer to them just like we would another person.

Social tendencies

While this may sound like the beginnings of a Black Mirror episode, this tendency is precisely what allows us to enjoy social interactions with robots and place them in caregiver, collaborator or companion roles.

The positive aspects of treating a robot like a person is precisely why roboticists design them as such — we like interacting with people. As these technologies become more human-like, they become more capable of influencing us. However, if we continue to follow the current path of robot and AI deployment, these technologies could emerge as far more dystopian than utopian.

The Sophia robot, manufactured by Hanson Robotics, has been on 60 Minutes, received honorary citizenship from Saudi Arabia, holds a title from the United Nations and has gone on a date with actor Will Smith. While Sophia undoubtedly highlights many technological advancements, few surpass Hanson’s achievements in marketing. If Sophia truly were a person, we would acknowledge its role as an influencer.

However, worse than robots or AI being sociopathic agents — goal-oriented without morality or human judgment — these technologies become tools of mass influence for whichever organization or individual controls them.

If you thought the Cambridge Analytica scandal was bad, imagine what Facebook’s algorithms of influence could do if they had an accompanying, human-like face. Or a thousand faces. Or a million. The true value of a persuasive technology is not in its cold, calculated efficiency, but its scale.

Seeing through intent

Recent scandals and exposures in the tech world have left many of us feeling helpless against these corporate giants. Fortunately, many of these issues can be solved through transparency.

There are fundamental questions that are important for social technologies to answer because we would expect the same answers when interacting with another person, albeit often implicitly. Who owns or sets the mandate of this technology? What are its objectives? What approaches can it use? What data can it access?

Since robots may soon leverage superhuman capabilities, enacting the will of an unseen owner without showing the verbal or non-verbal cues that shed light on their intent, we must demand that these types of questions be answered explicitly.

As a roboticist, I get asked the question, “When will robots take over the world?” so often that I’ve developed a stock answer: “As soon as I tell them to.” However, my joke is underpinned by an important lesson: don’t scapegoat machines for decisions made by humans.

I consider myself a robot sympathizer because I think robots get unfairly blamed for many human decisions and errors. It is important that we periodically remind ourselves that a robot is not your friend, your enemy or anything in between. A robot is a tool, wielded by a person (however far removed), and increasingly used to influence us.

The Conversation

Shane receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC). He is affiliated with the Human Futures Institute, a Toronto-based think tank.

This article appeared in The Conversation.

]]> Top 10 recommendations for a video gamer who you’d like to read (or even just touch) a book https://robohub.org/top-10-recommendations-for-a-video-gamer-who-youd-like-to-read-or-even-just-touch-a-book/ Sat, 20 Nov 2021 11:23:57 +0000 https://robohub.org/?p=202534

Sure, the average video gamer is 34 years old, but the most active group is boys under 18, a group famously resistant to reading. Here are the RTSF Top 10 recommendations of books that have robots, plus enough world building to rival Halo or Doom and lots of action or puzzles to solve. What’s even cooler is that you can cleverly use the “Topics” links to work in some STEM talking points by asking things like: do you think it would be easy to reprogram cars to hit pedestrians instead of avoiding them? How would you fool a security drone? Do you think robots should have the same rights as animals? But you may want to read them too; the first six on the list are books that I routinely recommend to general audiences, and people tell me how much they loved them.

Head On – The rugby-like game in the book, Hilketa, played with real robots, is the best multiplayer game that never was. And paralyzed people have an advantage! (FYI: a PG-13 discussion of tele-sex through robots.) Good for teachable moments about teleoperation.

Robopocalypse – Loved World War Z and read the book? They’ll love this more, and it’s largely accurate about robots. Good for teachable moments about robot autonomy.

The Murderbot Diaries (series) – Delightfully snarky point of view of a security robot trying to save clueless scientists from Aliens-like corpos and creatures. Good for teachable moments about software engineering and whether intelligent systems would need a governor to keep them in line.

The Electric State – This is sort of a graphic novel the way Hannah Gadsby is sort of a comedian: it transcends the genre. Neither the full-page illustrations nor the accompanying text tell the whole story of the angry teenage girl and her robot trying to outrun the end of the world. Like an escape room, you have to put the text and images together to figure out what is going on. Good for teachable moments about autonomy.

Tales from the Loop – The graphic novels, two in the series, are different from the emo Amazon streaming series. The books are much better suited to a teenage audience who love world building and surprising twists. Good for teachable moments about bounded rationality.

Kill Decision – Scarily realistic description of killer drones, with a cool Spec Ops guy who has two ravens, a call-out to Norse mythology. Good for teachable moments about swarms (aka multi-robot systems).

Robots of Gotham – It’s sort of GameLit without being based on a video game. Excellent discussion of how computer vision/machine learning works. Good for teachable moments about computer vision and machine learning.

The Andromeda Evolution – It helps if they’ve seen the movie or read the original Andromeda Strain, but it can be read as a stand-alone. This commissioned sequel is a worthy addition. Good for teachable moments about drones and teleoperation.

Machinehood – A pro-robot-rights group is terrorizing the world; nice discussion of ethics amid a lot of action, with no boring lectures. Good for teachable moments about robot ethics.

The Themis Files – An earnest girl finds an alien Pacific Rim-style robot and learns to use it to fight evil giant piloted mecha invaders, while shadowy quasi-governmental figures try to uncover its origins. Good for teachable moments about exoskeletons.

]]>
Join the Women in Robotics Photo Challenge https://robohub.org/join-the-women-in-robotics-photo-challenge/ Tue, 12 Oct 2021 05:54:48 +0000 https://robohub.org/?p=201968 How can women feel as if they belong in robotics if we can’t see any pictures of women building or programming robots? The civil rights activist Marian Wright Edelman aptly said, “You can’t be what you can’t see.” We’d like you all to take photos of women building and coding robots and share them with us!

Here’s the handy guide to what a great photo looks like with some awesome examples. This is a great opportunity for research labs and robotics companies to showcase their talented women and other underrepresented groups.

]]>
Ethics is the new Quality https://robohub.org/ethics-is-the-new-quality/ Sun, 30 May 2021 09:30:00 +0000 https://robohub.org/ethics-is-the-new-quality/ This morning I took part in the first panel at the BSI conference The Digital World: Artificial Intelligence.  The subject of the panel was AI Governance and Ethics. My co-panelist was Emma Carmel, and we were expertly chaired by Katherine Holden.

Emma and I each gave short opening presentations prior to the Q&A. The title of my talk was Why is Ethical Governance in AI so hard? Something I've thought about a lot in recent months.

Here are the slides exploring that question.

 

And here are my words.

Early in 2018 I wrote a short blog post with the title Ethical Governance: what is it and who's doing it? Good ethical governance is important because in order for people to have confidence in their AI they need to know that it has been developed responsibly. I concluded my piece by asking for examples of good ethical governance. I had several replies, but none were nominating AI companies.

So why is it that, 3 years on, we see some of the largest AI companies on the planet shooting themselves in the foot, ethically speaking? I’m not at all sure I can offer an answer but, in the next few minutes, I would like to explore the question: why is ethical governance in AI so hard?

But from a new perspective. 

Slide 2

In the early 1970s I spent a few months labouring in a machine shop. The shop was chaotic and disorganised. It stank of machine oil and cigarette smoke, and the air was heavy with the coolant spray used to keep the lathe bits cool. It was dirty and dangerous, with piles of metal swarf cluttering the walkways. There seemed to be a minor injury every day.

Skip forward 40 years and machine shops look very different. 

Slide 3

So what happened? Those of you old enough will recall that while British design was world class – think of the British Leyland Mini, or the Jaguar XJ6 – our manufacturing fell far short. "By the mid 1970s British cars were shunned in Europe because of bad workmanship, unreliability, poor delivery dates and difficulties with spares. Japanese car manufacturers had been selling cars here since the mid 60s but it was in the 1970s that they began to make real headway. Japanese cars lacked the style and heritage of the average British car. What they did have was superb build quality and reliability"*.

What happened was Total Quality Management. The order and cleanliness of modern machine shops like this one is a strong reflection of TQM practices. 

Slide 4

In the late 1970s manufacturing companies in the UK learned - many the hard way - that ‘quality’ is not something that can be introduced by appointing a quality inspector. Quality is not something that can be hired in.

This word cloud reflects the influence from Japan. The words Japan, Japanese and Kaizen – which roughly translates as continuous improvement – appear here. In TQM everyone shares the responsibility for quality. People at all levels of an organization participate in kaizen, from the CEO to assembly line workers and janitorial staff. Importantly suggestions from anyone, no matter who, are valued and taken equally seriously.

Slide 5

In 2018 my colleague Marina Jirotka and I published a paper on ethical governance in robotics and AI. In that paper we proposed 5 pillars of good ethical governance. The top four are:

  • have an ethical code of conduct, 
  • train everyone on ethics and responsible innovation,
  • practice responsible innovation, and
  • publish transparency reports.

The 5th pillar underpins these four and is perhaps the hardest: really believe in ethics.

Now a couple of months ago I looked again at these 5 pillars and realised that they parallel good practice in Total Quality Management: something I became very familiar with when I founded and ran a company in the mid 1980s.

Slide 6 

So, if we replace ethics with quality management, we see a set of key processes which exactly parallel our 5 pillars of good ethical governance, including the underpinning pillar: believe in total quality management.

I believe that good ethical governance needs the kind of corporate paradigm shift that was forced on UK manufacturing industry in the 1970s.

Slide 7

In a nutshell: I think ethics is the new quality.

Yes, setting up an ethics board or appointing an AI ethics officer can help, but on their own these are not enough. Like Quality, everyone needs to understand and contribute to ethics. Those contributions should be encouraged, valued and acted upon. Nobody should be fired for calling out unethical practices.

Until corporate AI understands this we will, I think, struggle to find companies that practice good ethical governance. 

Quality cannot be ‘inspected in’, and nor can ethics.

Thank you.

*I am quoting from the excellent history of British Leyland by Ian Nicholls here.



Notes.

[1] My company did a huge amount of work for Motorola and – as a subcontractor – we became certified software suppliers within their six sigma quality management programme.

[2] It was competitive pressure that forced manufacturing companies in the 1970s to up their game by embracing TQM. Depressingly, the biggest AI companies face no such competitive pressures, which is why regulation is both necessary and inevitable.

]]>
Meet the #NCCRWomen in robotics https://robohub.org/meet-the-nccrwomen-in-robotics/ Mon, 24 May 2021 10:35:54 +0000 https://robohub.org/meet-the-nccrwomen-in-robotics/ Film still showing Maria Vittoria Minniti working with a robot

Film still by schwarzpictures.com

Meet Maria Vittoria and Inés!

To celebrate Women’s Day 2021 and the 50th anniversary of women’s right to vote in Switzerland, the Swiss NCCRs (National Centres of Competence in Research) wanted to show you who our women researchers are and what a day in their job looks like. The videos are targeted at women and girls of school and undergraduate age, to show what day-to-day life as a scientist is like and make it more accessible. Each NCCR hosted a week where they published several videos covering multiple scientific disciplines, and here we are bringing you what was produced by NCCR Digital Fabrication.

The videos cover a wide range of subjects, including (but not limited to) maths, physics, microbiology, psychology and planetary science, but here we have two women who work with robots.

Maria Vittoria Minniti is a robotics engineer and PhD student; she enhances mobile manipulation capabilities in under-actuated robots.

Inés Ariza is an architect; she uses a robot to 3D print custom metal joints for complex structures.

 

Head over to YouTube or Instagram (English, German or French) to see the women featured in the #NCCRWomen campaign.

]]> On sustainable robotics https://robohub.org/on-sustainable-robotics/ Tue, 20 Apr 2021 09:02:00 +0000 https://robohub.org/on-sustainable-robotics/ The climate emergency brooks no compromise: every human activity or artefact is either part of the solution or it is part of the problem.

I’ve worried about the sustainability of consumer electronics for some time, and, more recently, the shocking energy costs of big AI. But the climate emergency has also caused me to think hard about the sustainability of robots. In recent papers we have defined responsible robotics as

… the application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, that seeks the most benefit to society and the least harm to the environment.

I will wager that few robotics manufacturers – even the most responsible – pay much attention to repairability and recyclability of their robots. And, I’m ashamed to say, very little robotics research is focused on the development of sustainable robots. A search on Google Scholar throws up a handful of excellent papers detailing work on upcycled and sustainable robots (2018), sustainable robotics for smart cities (2018), green marketing of sustainable robots (2019), and sustainable soft robots (2020).

I was therefore delighted when, a few weeks ago, my friend and colleague Michael Fisher, drafted a proposal for a new standard on Sustainable Robotics. The proposal received strong support from the BSI robotics committee. Here is the formal notice requesting comments on Michael’s proposal: BS XXXX Guide to the Sustainable Design and Application of Robotic Systems.

So what would make a robot sustainable? In my view it would have to be:

  • Made from sustainable materials. This means the robot should, as far as possible, use recycled materials (plastics or metals), or biodegradable materials like wood. Any new materials should be ethically sourced.
  • Low energy. The robot should be designed to use as little energy as possible. It should have energy-saving modes. If it is an outdoor robot, it should use solar cells and/or hydrogen cells when these become small enough for mobile robots. Battery-powered robots should always be rechargeable.
  • Repairable. The robot should be designed for ease of repair, using modular, replaceable parts as much as possible – especially the battery. Additionally, the manufacturer should provide a repair manual so that local workshops can fix most faults.
  • Recyclable. Robots will eventually come to the end of their useful life, and if they cannot be repaired or recycled we risk them being dumped in landfill. To reduce this risk the robot should be designed to make it easy to re-use parts, such as electronics and motors, and to recycle batteries, metals and plastics.

These are, for me, the four fundamental requirements, but there are others. The BSI proposal adds also the environmental effects of deployment (it is unlikely we would consider a sustainable robot designed to spray pesticides as truly sustainable), or of failure in the field. Also the environmental effect of maintenance; cleaning materials, for instance. The proposal also looks toward sustainable, upcyclable robots as part of a circular economy.

This is Ecobot III, developed some years ago by colleagues in the Bristol Robotics Lab’s Bio-energy group. The robot runs on electricity extracted from biomass by 48 microbial fuel cells (the two concentric brick coloured rings). The robot is 90% 3D printed, and the plastic is recyclable.

I would love to see, in the near term, not only a new standard on Sustainable Robotics as a guide (and spur) for manufacturers, but the emergence of Sustainable Robotics as a thriving new sub-discipline in robotics.

]]>
WiR IV with Johanna Austin, roboticist, helicopter pilot & techsupervixen https://robohub.org/wir-iv-with-johanna-austin-roboticist-helicopter-pilot-techsupervixen/ Sun, 14 Mar 2021 18:35:36 +0000 https://robohub.org/wir-iv-with-johanna-austin-roboticist-helicopter-pilot-techsupervixen/

Watch Johanna Austin talk about her journey, making her own career path, and trailblazing a way in STEM! Johanna Austin was the first female Robotics and Automation Research Engineer in Boeing’s Melbourne-based robotics group. She was awarded her Bachelor of Engineering with First Class Honors at RMIT and her Master of Science in Computer Science at Georgia Tech. Her latest role is as Technical Lead Engineer – Robotics Systems at AOS Group, with a focus on autonomous systems and distributed AI. Johanna is also a part-time helicopter pilot. She shares information about her career journey and her feelings at being the first woman in ten years in her research group, how she handled that, and the importance of having women around you at work. Johanna also shows some of the advanced robotics research that she’s been engaged in with Boeing.
https://youtu.be/3Wd2tccDebs?t=250
You can also follow Johanna (or Hoj) on Instagram :) if you like flying and Matrix metaphors. Many thanks to Nicci Roussow and Poornima Nathan for organizing the Women in Robotics Melbourne chapter meetings. If you’d like to join one of our local chapters or start your own Women in Robotics chapter – please reach out to us!
]]>
Eight lessons for robotics startups from NRI PI workshop https://robohub.org/eight-lessons-for-robotics-startups-from-nri-pi-workshop/ Sun, 14 Mar 2021 16:14:55 +0000 https://robohub.org/eight-lessons-for-robotics-startups-from-nri-pi-workshop/ Research is all about being the first, but commercialization is all about repeatability, not just many times but every single time. This was one of the key takeaways from the Transitioning Research From Academia to Industry panel during the National Robotics Initiative Foundational Research in Robotics PI Meeting on March 10 2021. I had the pleasure of moderating a discussion between Lael Odhner, Co-Founder of RightHand Robotics, Andrea Thomaz, Co-Founder/CEO of Diligent Robotics and Assoc Prof at UTexas Austin, and Kel Guerin, Co-Founder/CIO of READY Robotics.

RightHand Robotics, Diligent Robotics and READY Robotics are young robotics startups that have all transitioned from the ICorps program and SBIR grant funding into becoming venture-backed robotics startups. RightHand Robotics, founded in 2014, is a Boston-based company that specializes in robotic manipulation. It spun out of work performed for the DARPA Autonomous Robotic Manipulation program and has since raised more than $34.3 million from investors that include Maniv Mobility, Playground and Menlo Ventures.

Diligent Robotics is based in Austin where they design and build robots like Moxi that assist clinical staff with routine activities so they can focus on caring for patients. Diligent Robotics is the youngest startup, founded in 2017 and having raised $15.8 million so far from investors that include True Ventures and Ubiquity Ventures. Andrea Thomaz maintains her position at UTexas Austin but has taken leave to focus on Diligent Robotics.

READY Robotics creates unique solutions that remove the barriers faced by small manufacturers when adopting robotic automation. Founded in 2016, and headquartered in Columbus, Ohio, the company has raised more than $41.8 million with investors that include Drive Capital and Canaan Capital. READY Robotics enables manufacturers to more easily deploy robots to the factory floor through a patented technology platform that combines a very easy to use programming interface and plug’n’play hardware. This enables small to medium sized manufacturers to be more competitive through the use of industrial robots.

To summarize the conversation, here are eight key takeaways for startups:

  1. Research is primarily about developing a prototype (works once), whereas commercialization requires a product (works every time). Robustness and reliability are essential features of whatever you build.
  2. The customer development focus of the ICorps program speeds up the commercialization process, by forcing you into the field to talk face to face with potential customers and deeply explore their issues.
  3. Don’t lead with the robot! Get comfortable talking to people and learn to speak the language your customers use. Your job is to solve their problem, not persuade them to use your technology.
  4. The faster you can deeply embed yourself with your first customers, the faster you attain the critical knowledge that lets you distinguish your product’s essential features, which the majority of your customers will need, from the merely ‘nice to have’ features or ‘one off’ ideas that can be misdirection.
  5. Team building is your biggest challenge, as many roles you will need to hire for are outside of your own experience. Conduct preparatory interviews with experts in an area that you don’t know, so that you learn what real expertise looks like, what questions to ask and what skillsets to look for.
  6. There is a lack of robotics skill sets in the marketplace so learn to look for transferable skills from other disciplines.
  7. It is actually easy to get to ‘yes’, but the real trick is knowing when to say ‘no’. In other words, don’t create or agree to bad contracts or term sheets, just for the sake of getting an agreement, considering it a ‘loss leader’. Focus on the agreements that make repeatable business sense for your company.
  8. Utilize the resources of your university, the accelerators, alumni funds, tech transfer departments, laboratories, experts and testing facilities.

For robotics startups that don’t have immediate access to universities, robotics clusters can provide similar assistance: from large clusters like RoboValley in Delft, MassRobotics in Boston and Silicon Valley Robotics, which have startup programs, space and prototyping equipment, to smaller robotics clusters that can still provide a connection point to other resources.


]]>
One robot on Mars is robotics, ten robots are automation https://robohub.org/one-robot-on-mars-is-robotics-ten-robots-are-automation/ Tue, 02 Mar 2021 18:43:52 +0000 https://robohub.org/one-robot-on-mars-is-robotics-ten-robots-are-automation/

In this illustration, NASA’s Ingenuity Mars Helicopter stands on the Red Planet’s surface as NASA’s Perseverance rover (partially visible on the left) rolls away. Credits: NASA/JPL-Caltech

The difference between robotics and automation is almost nonexistent, and yet it makes a huge difference in everything from trade shows, marketing and publications to academic conferences and journals. This week, the difference was expressed as an opportunity in the Dear Colleague Letter below from Professor Ken Goldberg, CITRIS CPAR and UC Berkeley, who suggested that students whose papers were rejected from ICRA revise them for CASE, the Conference on Automation Science and Engineering. The opportunity was expressed beautifully in the title quote from Professor Raja Chatila, past President of the IEEE Robotics and Automation Society and chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: “One robot on Mars is robotics, ten robots on Mars is automation.”

Dear Colleagues,

Over 2000 papers were declined by ICRA today, including many that can be
effectively revised for another conference such as IEEE CASE (deadline 15
March).

IEEE CASE, the annual Conference on Automation Science and Engineering, is
a major IEEE conference that is one of three fully-supported IEEE
conferences in our field (with ICRA and IROS).

In 2021 CASE will be held 23-27 August.  It will be hybrid, with a live
component in Lyon France and an online component:
https://case2021.sciencesconf.org/

IEEE CASE was founded in 2006 so is smaller but growing quickly.  The
acceptance rate for the last CASE was about 56%, higher than ICRA 2021
(48%), IROS, or RSS.  I consider this a feature not a bug: it is an
excellent venue for exploratory and novel projects.

IEEE CASE continues the classic conference model of featuring a 10-15 min
oral presentation of each paper in contrast to poster sessions.  This is
particularly exciting for students, who get the valuable experience of
lecturing and fielding questions in front of an audience of peers.

IEEE CASE also has a tradition of spotlighting papers nominated for awards
such as Best Paper, Best Student Paper, etc.  Each nominated paper is
presented in special single session track on Day 1, where everyone at the
conference attends and there is a lively Q&A led by judges.

IEEE CASE emphasizes Automation.  Automation is very closely related to
Robotics. There is substantial overlap, but Automation emphasizes
efficiency, robustness, durability, safety, cost effectiveness. Automation
also includes topics such as optimization and applications such as
transportation and mfg. I like how RAS President Raja Chatila summed up the
relationship 10 years ago: "One robot on Mars is robotics, ten robots on
Mars is automation."

In China there are over 100 university departments
focused on Automation.  The impact factor for the IEEE Transactions on
Automation Science and Engineering (T-ASE) this year is on par with T-RO
and higher than IJRR. Automation is important to put robotics into
practice.

Ken Goldberg

Professor, Industrial Engineering and Operations Research

William S. Floyd Jr. Distinguished Chair in Engineering, UC Berkeley

Director, CITRIS People and Robots Lab


]]>
Robots4Humanity in next Society, Robots and Us https://robohub.org/robots4humanity-in-next-society-robots-and-us/ Tue, 23 Feb 2021 21:30:01 +0000 https://robohub.org/robots4humanity-in-our-next-society-robots-and-us-conversation/

Speakers in tonight’s Society, Robots and Us at 6pm PST Tuesday Feb 23 include Henry Evans, mute quadriplegic and founder of Robots4Humanity, and Aaron Edsinger, founder of Hello Robot. We’ll also be talking about robots for people with disabilities with disability advocate Adriana Mallozi, founder of Puffin Innovations, and Daniel Seita, who is a deaf roboticist. The event is free and open to the public.

As a result of a sudden stroke, Henry Evans went from being a Silicon Valley tech builder to searching for technologies and robots that would improve his life, and the lives of his family and caregivers, as the founder of Robots4Humanity. Since then Henry has shaved himself with the help of the PR2 robot, and spoken on the TED stage with Chad Jenkins via a Suitable Tech Beam. Now he’s working with Aaron Edsinger and the Stretch robot, a very affordable household robot and teleoperation platform.

We’ll also be hearing from Adriana Mallozi, disability advocate and founder of Puffin Innovations, a woman-owned assistive technology startup with a diverse team focused on developing solutions for people with disabilities to lead more inclusive and independent lives. The team at Puffin Innovations is dedicated to leveling the playing field for people with disabilities using Smart Assistive Technology (SAT). SAT incorporates internet of things connectivity, machine learning, and artificial intelligence to provide maximum access with the greatest of ease. By tailoring everything it does, from user interfaces to its portable, durable, and affordable products, Puffin Innovations aims to provide the much-needed solutions that the disabled community has been longing for.

This continues our monthly exploration of Inclusive Robotics from the CITRIS People and Robots Lab at the University of California, in partnership with Silicon Valley Robotics. On January 19, we discussed diversity with guest speakers Dr Michelle Johnson from the GRASP Lab at UPenn, Dr Ariel Anders from Women in Robotics and first technical hire at Robust.ai, Alka Roy from The Responsible Innovation Project, and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics, with discussion moderated by Dr Ken Goldberg, artist, roboticist and Director of the CITRIS People and Robots Lab, and Andra Keay from Silicon Valley Robotics.

You can see the full playlist of all the Society, Robots and Us conversations on the Silicon Valley Robotics YouTube channel.

TRANSCRIPT OF THE FIRST INCLUSIVE ROBOTICS DISCUSSION (from video directly above)

Andra Keay 0:05
So welcome, everybody. Welcome to our first Society, Robots and Us for 2021. And I’m looking forward to a discussion that is going to help us set the agenda for robotics in 2021 and beyond. And I think it’s very important that as our technology emerges, we address the issues around how it is affecting society, and how it can have an impact, both positive and negative, on society. And so we have wonderful conversations. And we started doing this event in the early days of the COVID era, and we were focusing on: so what actually does it mean? How can robotics and roboticists help in this time of pandemics? And it was a fantastic conversation, and we decided that it was time to expand the topic, and to start to talk about things like racism in robotics, global challenges and how we address those. So it’s one of my favorite events. And I’m delighted to see so many people joining us now. My role is to warm up for the speakers and the rest of the discussion. I’ll just give everybody a little bit of housekeeping as to how this is going to roll. I will introduce each of the speakers, and they will each share their thoughts with us. We will move from speaker to speaker. If you have questions specifically for one of the speakers, put them into the chat, and I can forward the question on, and then we’ll start the general discussion once every speaker has had their time to speak. And in the general discussion, well, I’m looking forward to finding out: what is inclusive robotics? Why do we need it? How do we go about getting it? And even beyond that, what is the robotics agenda for 2021 and beyond? What are the questions that we haven’t thought to ask, perhaps? And perhaps it’s time that we started those discussions. And in the spirit of that, I would like to acknowledge that I and many of us are here on the lands of the Ohlone people, who are an unrecognized First Peoples tribe of California.
And I’m very pleased to actually see more events starting to acknowledge the First Peoples as part of how things happen. And so I’ve given us the introduction and the housekeeping. I’d like to say a little bit about our speakers, and we’ll kick off the rest of the discussion. And I see Alka has pointed out an event that’s coming up in the chat, a spring founders circle for the responsible innovation labs. I would like to also say, Silicon Valley Robotics and Women in Robotics have regular events. So in Women in Robotics, we have a weekly book club, for example. And we have a Slack community where we can meet together online, as well as having local chapters. If you’re interested in joining that, I’ll pop the link in the chat. Please go to womeninrobotics.org. And it’s not an HTTPS site, it’s still an HTTP site, so if you can’t find it, that’s the reason. But if you’re interested in signing up, please go there. Silicon Valley Robotics, which is the organization that I call my day job, although it is also a passion project, is able to assist you if you are a robotics company at any stage. And we have mentor networks, we have events that are topic related, or that are related to helping you with your startup. And I’m just wondering, Ken, would you like to say a few words now about CITRIS? Ken is the director of the People and Robots Lab there, as well as leading the research. There’s also the CITRIS Foundry, I believe.

Ken Goldberg 4:37
Thank you, Andra. So I want to say we’re really lucky to be partnering with you on this, and so it’s a pleasure to work with you. And CITRIS is a University of California, actually state-level, organization that connects four campuses: Merced, Davis, Santa Cruz and Berkeley. CITRIS stands for the Center for Information Technology Research in the Interest of Society. So the mission of this series that Andra is organizing is very much consistent with the mission of the center. And my initiative within it is People and Robots. So these have really come together very strongly. And this idea of inclusive robotics is something that we’re very excited about developing and expanding in the year to come and the years to come. So I really appreciate the discussion we’re going to have tonight. I’m really looking forward to hearing your perspectives.

Andra Keay 5:35
Thanks so much, Ken. And you’ve probably seen that we have a fantastic lineup of speakers tonight. We have Dr. Michelle Johnson, who is at the GRASP Lab at UPenn, and actually is the director of the Rehabilitation Robotics Lab at the GRASP Lab, and Associate Professor of Physical Medicine and Rehabilitation. We have Dr. Ariel Anders, and she is the first technical hire at Robust.ai, and is also a board member for Women in Robotics. We have Alka Roy, who is the founder of The Responsible Innovation Project, and is working on building delight, trust and inclusion into technology and AI. Looking forward to hearing more about that. Then we have (getting my notes out of order here) Kenechukwu C. Mbanesi, who is a roboticist and a member of Black in Robotics, and he will be able to speak to us a little bit about Black in Robotics, as well as Kenya Andrews, who’s a computer engineering master’s student at the University of Illinois, and on the undergraduate committee of Black in Robotics. And, of course, Ken Goldberg, who is the director of the People and Robots Lab at CITRIS, and holds the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley, and is not only a roboticist but an artist, and I love the cross-disciplinary perspective that that brings to the discussion. And without more from me now, I think we’ve given all of the speakers time to join the conversation. I would like to introduce Dr. Michelle Johnson.

Dr Michelle Johnson 7:46
Thank you, Andra. Thank you, everyone, for attending. I was really intrigued by the title, inclusive robotics. What is it? Why do we want it? And what do we need to do to have it? And as I was pondering the title, two clear thoughts came to mind. First, I thought, inclusive robotics means designing robots that reflect the diversity of society in terms of culture and race; that was the first thought. The second thought was that inclusive robotics means, to me, designing robots that are usable in all types of settings, in low-resource and high-resource settings, in high-income countries and low- and middle-income countries, that benefit all types of people at different socioeconomic status. And not just in the US, or the UK, or in Europe, but all over, globally. So those are two thoughts that were really present with me as I thought about this. I wanted to just expand on those two thoughts a little, and explain a bit about what I mean by that. So going back to the first idea, that we should be thinking about designing robots that reflect the diversity of society in terms of culture and race. You know, I’m in the healthcare area, and my PhD is in mechanical engineering, and I’ve been thinking about designing robots for people with disabilities for a long time. And as we suggest that robots will be taking care of us and being seen as some type of assistance to our clinicians or to our elders, or to people in general, I think that this is where we really need to start thinking about how we design them, how we train them using the AI, and their functional goals. A couple of examples of why this is important. I was talking to a colleague recently about his face recognition software. And something he said to me struck me. He said, oh yeah, you know, our face recognition softwares really have a hard time detecting people with darker skin. And I thought, okay, so where were the AIs trained? How are the AIs trained?
So, you know, there’s this idea that as we train some of our systems, they’re being trained maybe not enough on a diversity of people, and they are then not able to function well when meeting the diversity of people that we encounter. So that’s an example of what I mean by designing them to really think about the diversity of society in terms of culture and race. Another thought is, recently in my lab, we’ve been talking about social robot design, and thinking about robots for children with disabilities and doing remote telehealth. And one of the discussions that we had was, how do we make sure that when someone looks at this robot, they can see themselves partnering with it, and they don’t automatically assume that this robot is white, or this robot is any particular race or color, but that they can actually form a collaboration with the system. And so we talked about developing a robot that can be seen as multicultural. And in fact, we did an exercise with my class that I taught this past fall, asking them to interact with the robot and asking them about issues of diversity and gender and race, and whether interacting with the robot engendered any of those thoughts. So that’s why I think we need to do a better job of including a cross section of people in our development process, in our discussion about what is ideal. Just another quick anecdote. There’s a paper that came out that said social robots should have eyes with a pupil. And when you looked at the methods section, the majority of the people that they had surveyed were white, and they had blue eyes, or eyes where you saw a distinct pupil.
And so I was like, okay, well, of course, then that makes sense that now you’re going to say that robots should have, you know, a pupil, because for the variety of people that you’re surveying, that’s what they’re used to seeing. While if you surveyed, you know, people with darker eyes, or brown eyes, you’re not going to see a distinct pupil, so that’s not going to be a big deal for them. So that’s just another anecdote of: we need to be careful as scientists and as developers to really consider who we’re talking to. And that was a kind of well-regarded paper, and I’m thinking, oh, why didn’t anyone ask that question? Maybe because, you know, I think the responsibility is on the designer, and the person developing it, to make sure that the population that they’re querying reflects this cross-section of culture and gender. And I think if we do that, we’re going to see robots that are more inclusive, and we won’t have these, at least we’ll have less of these, anecdotal stories that I’m pointing out. The second point, that robots should be usable by all, just quickly, came out of some work that I’m really passionate about: affordable robots in global health. Because as I look back at robot-assisted therapy, Sara therapy, and the systems that we develop, I find that, wholeheartedly, they’re quite expensive, and they really haven’t penetrated low- and middle-income countries yet. In terms of stroke, 80% of the strokes and resulting functional impairments are outside of high-income countries. So there’s this disparity: here’s this technology that we’re proving to be able to possibly support areas that have low resources, not enough clinicians, where you might be able to leverage technology to support recovery after stroke or upper extremity impairment, but yet still, we have not been able to penetrate these areas, because the systems are way too expensive.
So my lab has been passionate about how we develop systems that are not only effective but affordable, and able to be used in these low-resource settings. So those are kind of my two things. And I think more of us need to be thinking about those things. And I see sometimes we are thinking about better and better and cooler and cooler tech, but the translation of that tech into communities globally, I think, is missing. So that’s my second point about inclusion in robotics. I’ll stop there, Andra. I think I made my main points.

Andra Keay 15:12
No, that was excellent. And I think they were very clear points. I’ve been penciling down some questions myself. If anybody else has questions specifically for Michelle, or questions about the subjects that she’s raised, you can table them in the chat, and we will definitely get to them. I’m looking forward to hearing what other angles on the discussion of what is inclusive robotics, and how do we get it, we’re going to uncover tonight. So without further ado, I would like to introduce Dr. Ariel Anders, who is the first technical hire at Robust.ai and on the board of Women in Robotics.

Dr Ariel Anders 15:58
So that was a really great talk; I’m excited to try to follow up. I think I have some similar ideas. And I really appreciate Michelle’s comments on how, you know, we keep pushing the envelope of what robots can do, but they’re not necessarily going to everyone. And so to start, I just want to say thank you for inviting me to talk and share some of my ideas about inclusive robotics. If you’ve been able to see some of my recent presentations, I’ve been trying to get into the habit of introducing myself and providing a little bit of background and context. And I think that for this talk, the most important thing here is that I am a human being. And I think that when we think about inclusive robotics, and the questions that Andra asked the speakers to share our thoughts on, it should come from this part of our humanity. And there are the three questions, you know: what is inclusive robotics? Why do we need it? And what do we need to do to have it? With my answers today, you know, I’m kind of excited that we’re on Zoom, and it looks like this is recorded, because I am curious to see, you know, what I will think in a decade or so of some of my thoughts here. And I think we’re going to continue growing and learning, and just iterating on this process. And, you know, we really should start having these discussions more and more, especially from people like myself, who generally did not work so much in the idea of human-robot interaction; a lot of my previous work was in programming robots to do new capabilities. So I’m really excited to share my ideas and what my kind of first-impression thoughts are here. And to start off with, when I think about, you know, what is inclusive robotics, I think: what does it mean to be inclusive? And my definition of inclusive is belonging. So, going with that, inclusive robotics is creating robots that belong in our world.
I think that belonging, that word, to me has a lot of implications on what type of robots we’ll have. Sorry, I think my dog’s also very excited about that idea. Hopefully you guys can still understand; I’ll try to speak over her. In terms of what does it mean for a robot to belong: it should be safe, it should be a robot that’s comfortable around us, you know, we should be comfortable to have it around. We want to make sure the robots, you know, belong, that they’re not hurting people or excluding others. I think there’s a question about the appropriate use of robotics, kind of going along with what Michelle said. You know, it should not be something that alienates more people, it should be something that doesn’t exploit all our resources, it should be something that’s accessible. And to me, it should also be something that society wants. We want robots that are trustworthy, and they work, and we want to have them around. And most importantly, in this very kind of circular definition, when I say people, I really do mean all human beings. And so when I think about this, you know, robots that are trustworthy and safe, and they work, and we want them around, to me, it makes sense that we would want robots like that. I think the question really is, why do we want this definition to include everyone? And I honestly can’t answer that for you. But I can refer to an expert: Maya Angelou has a wonderful quote on diversity, that it makes for a rich tapestry, and we must understand that all threads of the tapestry are equal in value no matter what their color. I think that when we think about robotics, it is incredibly important to think about creating robots for marginalized groups. It’s not just for people who can pay for them. That being said, how do we get there? I honestly don’t know how, but I think we should follow the principle of nothing about us without us.
And that means we should try to make sure we have a diverse demographic creating robots. We need people from all backgrounds; otherwise, there’s no way we’re going to get there. So the other idea there, just to keep in mind as we continue going forward with our robotics development, is to remember that we are all humans. And I do want to share a little bit of a personal story here. Back when I was an undergrad, I had the opportunity to present my research at africamps, and our keynote speaker was Maya Angelou, which is incredible, that I got to see her. And I can’t paraphrase, I can’t even summarize exactly what she said, but she really drove the point home that we are all human beings. And I think that’s the message I want to share with you when we go to make our robotics more inclusive. That’s kind of my short presentation on what I think inclusive robotics is.

Andra Keay 21:58
Thank you. All right, that was great, and beautiful quotes from Maya Angelou as well. And I’m getting such a lot of rich material from the discussion that’s going on in the chat, about what is a multicultural robot, which Michelle raised, and, you know, your very clear points, nothing about us without us. And, you know, I’m thinking we have a lot of issues about where we place the responsibility along the production of robots. And I think everybody kind of wishes that it’s somebody else’s responsibility to do this. And quite often we’re reaching the production of robots with great big problems somewhere or other along this production process, where no one is working together collaboratively to develop appropriate robotics. So I’m starting to get some thoughts myself around this process already. So thank you both for inspiring us there. What I should do is introduce our next speaker. Alka Roy is the founder of The Responsible Innovation Project, and you’re currently visiting faculty at Berkeley, I believe, as well. Okay.

Alka Roy 23:29
Yes, hi. I was just trying to figure out how to share my screen on my new computer. Sometimes simpler is better. So, let me know if it works. No, it doesn’t work. All right. So, first of all, thank you for inviting me to this. I was actually more excited to hear what everyone was going to say than what I was going to say. I’m definitely not making robots, and I congratulate you for inviting an outsider to comment on this. But I do have a lot of opinions about it. Why not? And I think more and more people should have opinions about this. So I really am trying to see if I can still share my screen, because I wanted to share a framework with you that I’m hoping we would think about. My provocation tonight is very different. Because when I first got invited, and I heard inclusive robots, I said, I don’t want to attend this event, or at least speak to it, because I’m not sure a robot can be inclusive, and what does it really mean? And so I’ve been just thinking and thinking and thinking about it. Because I actually believe that certain things should be excluded. And so let me explain what I mean by that. I’m working on, or I have, a responsible innovation framework. And again, can I just share my screen? (You should be able to do it on the green share screen at the bottom of your screen.) Right. So my security settings are not allowing it. All right, give me one more second; if I can’t, I’ll just talk to it. Alright, so I will put my link in the website, and we’ll have another time to share this. So I’m the founder of the responsible innovation framework, and prior to that, I was at an innovation center in Palo Alto, where we were making a lot of the things that I’m going to talk about. And what happened for me was, I was at the end of kind of 5G and AI and everything, so we were enabling anything that, you know, mobility can enable, and immersive experiences, robotics.
And in the middle of all of these discussions, I found a lot of voices missing, which were community voices, or people; even when cities were involved, it was usually their CTO that was coming in, not people who were thinking about the impact on society. And I kept raising the point, raising the point, until it got so loud for myself that I felt like I had to take a break or step aside. And so I stepped out of that arena and took a break. And this whole world happened with COVID, which has been a really interesting place for reflection for all of us. One of the things that I worked on last year is this responsible innovation framework, which is like a nested three by three by three, which is why I was hoping I could show you the visual, but just go with me, and I will try to create it for you. So the first thing in there is the provocation for robotics, which is really, I’m really excited to see what you’re going to think about it, and how you’re going to disagree with me, and how you’re going to tell me I’m wrong. It is: I think that we need to think about stakeholders differently. So we have people, which everybody talked about, and really people, not customers, not clients. Then we have the planet and the environment, physics, our laws, you know, all those things. The third stakeholder is technology, things. So I call it people, planet, things. And the reason I put that as a separate stakeholder is because I think, or my provocation is, that the reason we design technology so problematically, including robots and chatbots and digital twins, is because we don’t separate people and things. Because we try to build things in our own image, or the image of something living. So we’re taking an inanimate thing and creating it in that image; the animation world is full of it. And that causes a lot of confusion for us.
Because we transfer our feelings, our biases, our confusion to these inanimate objects. And now there's so much research where people are feeling things about a robot. They created a robot to clear a minefield, but when they see this robot getting blown up, they stop and remove the robot, because they feel this robot is doing more than it was designed to do. There's a reason to create social and empathetic robots, I get it. But I think that in that process we create teenage girls, we create women as our servants, and we just perpetuate the biases instead of decoupling them. So my provocation is: why not design robots as things, useful things, but not in our image, not in our voice, so that we don't transfer ourselves onto them? The robot just appears as a thing, and we treat it as a very useful thing that communicates and helps us, and it's technology. So that's the basic provocation for me, the stakeholders. The other aspects of this responsible innovation framework include the values of open and safe, which are conflicting values but both needed, delightful and trustworthy, and inclusive and dependable. The inclusion is about people and ideas and types of people. And the reason I say delightful and trustworthy is because whenever we talk about responsibility and doing the right thing, we feel like we have to be very serious and boring, whereas I think there's a part of robotics design which is very delightful for us. And I love what Michelle said, and I think Ariel said, about how we're always looking ahead and feel like we have to be chasing the latest and the greatest, but there's so much for us to learn from our weathered technology, the old stuff, and from making things accessible around the world. So hopefully at some point I'll get to share this framework with you.
But those are my provocations for this conversation. I’d love to hear what other folks have to say about that. So thank you.

Andra Keay 30:33
Thanks, Alka. And I could see some people were very positive about what you were saying there. I have to agree that the good design framework is not to create something that perpetuates stereotypes, because we do anthropomorphize and it triggers our unconscious biases and stereotypes, and I see some more conversations coming. But I think it's time to introduce Kenechukwu Mbanesi from the Black In Robotics undergraduate committee. He's currently doing a PhD in robotics engineering at Worcester Polytechnic Institute in Massachusetts, and I'm looking forward to learning more about that. Over to you.

Kenechukwu C. Mbanesi 31:29
Alright, hello, sorry, I was just trying to get my sharing going on here. Just give me one second. Can you see my screen?

Andra Keay 31:47
Yes. Looks good.

Kenechukwu C. Mbanesi 31:49
Awesome. Yes, thank you so much for the privilege to be here and to join this conversation today. I've benefited a lot just by listening to what the previous speakers have talked about. I want to take this discussion somewhat piggybacking on what Ariel ended with, about how one way we can achieve inclusive robotics is by getting people from all backgrounds involved in robotics engineering. For a long time, I've been very passionate about the power that robotics and technology have to provide improved quality of life and socioeconomic benefits for people in underserved and underrepresented communities, especially on the African continent. That's really where my passion lies. And the way I see it, exclusivity, which is the inverse of inclusivity, is caused by two things. It could be more, but in my understanding it is two things. One is discrimination, in other words, unequal opportunity. And second is limitation, more or less unequal access, right? My passion really lies in addressing the equal-access problem, and how I envision that is through education. Right? So how can we get more people, who otherwise would not have a seat at the table of discussing and developing robotics technologies, to be able to do that, by providing them the means through education? I've been very privileged to be part of interesting projects with this goal in mind, and I'd like to share a few of them with you. The first project is Math and Science for Sub-Saharan Africa. This project was supported by the World Bank, and our audacious goal was to run a continent-wide train-the-trainer program to upskill STEM teachers to teach math, science and robotics, and really to understand the interconnectedness. Right? So how do science and math inspire people to pursue engineering and robotics? The thinking was that if we could inspire and educate teachers well, they would in turn educate and inspire students. Right. And so we started this in 2016.
And we've been able to partner with government education agencies on the continent to run training programs, in person and virtually, in several African countries. And just to put a plug in here: one of the major challenges we faced in this project, and I know we really still do, is accessing affordable educational robotics kits at scale. We were very fortunate to get VEX Robotics to donate a couple of kits, but beyond that, we still see this as a significant bottleneck to getting students and teachers access to kits that can really help them learn and participate. The second project is called CoBots for Kids, where "cobots" stands for collaborative robots. This project was supported by the Advanced Robotics for Manufacturing Institute here in the US, and it was designed to combat the current gaping shortfall of skilled manufacturing talent in the US, and particularly to address that in an inclusive way. So what we were doing really was to swim upstream, to inspire and train middle and high school students, especially from underrepresented backgrounds, to be passionate about robotics and advanced manufacturing, by taking them through a several-week after-school program where we teach them how to program collaborative robots, like the ones you can see on the screen, and actually teach them how to manufacture items using CNC machines and other tools. Our vision was that this hands-on, experiential learning opportunity would inspire them to be passionate about this, help get them into these programs in college, and really increase retention so that they can actually go on and pursue robotics in the future. And the last project is one I got involved in very recently, and it is very dear to my heart: the Pan-African Robotics Competition.
It started in 2015, really small, but right now it's grown to be probably the largest robotics competition on the continent. It was designed with inspiration from FIRST Robotics, if you're familiar with that here in the US, with a goal to inspire the next generation of roboticists on the continent. And we have seen very strong participation and significant impact on how students are motivated to not just be consumers of this technology, but to have the confidence that they can be part of the producers of the technology, right, part of the people who actually develop this technology. That's naturally where my passion lies. This competition goes on every year, and we get students all the way from middle school up to college level to program robots, to compete with each other, and to just really be inspired. So to summarize, I am a very relentless believer in the future of inclusion. And one way I'm striving for that, as this short presentation shows, is by working to provide young people, particularly people from underrepresented communities, with access to education and skill development that allows them not only to be at the table where they can develop technology in a way that's diverse, but also to benefit from the dividends of the technology. Thank you.

Andra Keay 38:11
Thank you very much, Kenechukwu. I'm looking forward to learning more about all of those initiatives. I think Ariel raised an interesting point: there can be a lot of education initiatives, so how can we maximize the benefit, as well as the access, rather than splintering or fragmenting success? And I know that Open Robotics is very keen to democratize access to robotics by putting forward access to simulations, to get around some of the costs of having access to physical robots. But of course, that requires access to the internet, and we're seeing even in the United States that there is a complete gap between those who have access to the internet for education and those students who for the last year have really struggled because of lack of access to the internet. I don't know what steps we need to take; I think we'll discuss that a little bit more as the discussion moves on. But I would like to introduce our next speaker, Kenya Andrews, master's student in Computer Engineering at the University of Illinois, and on the Black In Robotics undergraduate committee.

Kenya Andrews 39:41
Okay, thank you so much. Hello, everyone. I'm very excited to be here. I didn't bring slides, so I'm just going to speak. First, I'm going to talk a little bit about my background and kind of where I fit in this space, and then I'm going to go into the topic. So I'm a first-year master's student, and I live in the space where we make algorithms, right? And those are the decision-making properties of robots. So that's the space that I'm in. My focus in my research is machine learning fairness, and within that, my passion areas are algorithmic justice, algorithmic bias and decision making. Okay, so my current project is looking at fair distribution of COVID-19 vaccinations among vulnerable populations in the state of Ohio. Right now we're looking at different measures, trying to build different models. Some of the things that we've kind of tossed around are: should there be equal hospitalization rates between vulnerable and non-vulnerable people? Would that make it fair? Or would it be more fair if every census tract or center had the same amount of vaccines distributed to them? And we've been having to work around things like distrust, because of historical injustice, how that creates distrust and fewer vaccinations, and how that affects distribution. So hopefully that gives you a little context for where I'm coming from. Okay, so when I first saw "inclusive robotics," I thought, well, that's a loaded topic. So let's break down some definitions. First I wanted to look at: what is a robot? It's a programmable machine that does tasks, with or without human assistance. And then I wanted to look at: well, what is inclusion?
That's where I started having some issues, because we don't have real agreement on what inclusivity looks like. And that's for different reasons, right? Like, maybe you don't agree with someone's lifestyle, so you don't want to include them in things. Or: I think I should have more because I do more, I work more, right? So it creates these biases around whether people should be included or not. So I went to Oxford, and some of their definitions were: not excluding any of the parties or groups involved in something, and aiming to provide equal access to opportunities and resources for people who might otherwise be excluded. Putting those two things together, I thought: well, we want to have robots that make smart decisions to achieve different goals in this world, right? And it's not just that we want them to do things, we want them to do them well. So unless they can do them well, then maybe they shouldn't do them at all, right? That's kind of what we were talking about a little bit earlier. So this analysis of whether robots are doing something well should encompass: are they being just, are they being fair in the decisions that they're dishing out? And in order to look at that, it kind of needs to start from the beginning, from the beginning of the algorithm. There are several stages at which people look at fairness. One of them is at the end: do all the outcomes at the end come out good? Another is when the decision was being made. And then the stages in between: between the decisions, were they fair? So, and hopefully you can hear me, I do want to talk about a few definitions of fairness, so that can bring some context into where we are.
So one definition of fairness is unawareness. That's when we exclude certain attributes of people from the data. Even though we know they're there, we don't look at them; we just ignore them. And that can be really dangerous, because if you ignore those things, then you would be ignoring the historical injustice that comes with them. Right? If you ignore the fact that someone is a Black or brown person, then you would be ignoring the injustice that they had to face to get where they are, and how that could have affected what we look at as their resume. So maybe they don't look qualified to you, but if we considered the things that affected their resume, then we can start to see that actually, maybe they are qualified, or maybe even more qualified than someone who has a similar or equal resume. Another definition of fairness is demographic parity. That's when you look at whether all the demographics in a data set have equal outcomes, right? Like, white, Black, Asian, and Indian people are accepted at the same rate, or rejected at the same rate. But grouping demographics together can completely cancel another demographic group that a person could be a part of, right? It's not only race. What about height, they're short, they're tall? What about age? Things like that. So you could completely be ignoring a part of someone's person if you group them together like that. That's the danger in it. Then there's something called equalized odds: if you're qualified, you're equally as likely to be accepted into something as someone else. And another one is equality of opportunity: regardless of your demographic group, you're equally as likely to have something, or be rejected from it, as someone else. And then there's predictive parity.
That one is very similar to demographic parity, and it's when the positive rates are the same. Lastly, I'll go over counterfactual fairness. That's when the outcome for you is the same whether you're in this world, where you are who you are, or in a world where you had a different set of demographics. So instead of being a short Black woman, if I were a Caucasian woman, do I have the same outcome? Right? These things are important, because when we look at robotics and the decisions they're pushing out: what measure of fairness are we going by, and which one is most appropriate for our context? It changes, it changes all the time. So we have to be very careful when we're designing robots, and understand what kind of space they're going to be in and who they're going to be working with. Sorry, I'm reading my notes, one second. Okay, yes, we want to make sure that we're not promoting disparity, but that we are minimizing or mitigating it, right? So how can we do that? Some things that I really think we should do: just kind of take a step back and slow down. We need to look at where we are right now, and come to agreement on what inclusivity looks like. And once we all can sign off on that, we need to understand how we make decisions, because we don't even have a great understanding of that. We need to know: what is it that we're saying is good or bad, and why are we saying that? If the reasons why are bad, maybe that shouldn't be in our design, right? And the reasons that are good, maybe we need to superimpose those and make them count more. And another way, after we have a good understanding of that and we're developing good things.
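A couple of the group-fairness criteria described above can be sketched in a few lines of code. The following is a minimal illustration with invented toy data (the function names, groups, and numbers are not from the talk): demographic parity compares positive-prediction rates across groups, and equality of opportunity compares true-positive rates among the actually-qualified members of each group.

```python
def demographic_parity(y_pred, groups):
    """Return the positive-prediction (e.g. acceptance) rate for each group."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return rates

def equal_opportunity(y_true, y_pred, groups):
    """Return the true-positive rate for each group (qualified members only)."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g and y_true[i] == 1]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return rates

# Hypothetical acceptance decisions for two demographic groups, A and B.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]  # 1 = actually qualified
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]  # 1 = accepted by the model

print(demographic_parity(y_pred, groups))         # {'A': 0.75, 'B': 0.25}
print(equal_opportunity(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

In this toy data both criteria flag a gap: group A's acceptance rate is triple group B's, and among the qualified, A members are accepted twice as often as B members, which is exactly the kind of disparity these definitions are meant to surface.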
I think we have to start investing, right? Like, if you're designing robots for a community, something that's going to be specific to a community, maybe you should go to some of the community meetings and sit and understand their struggles, the things that are actually going on there, so that you have a real understanding. You need to support organizations like NSBE, right, the National Society of Black Engineers, or Women in Robotics. Support them with your funds, support them with your knowledge, volunteer to speak. Tell students about robotics and what it looks like. But while you're telling them about robotics, take them to the African American museums, take them to the Holocaust museums. Teach them moments in history where we weren't so fair, we weren't so just, we weren't so inclusive, and how those things can translate over into our lives. Start a scholarship. Be a mentor. Yeah, those are my thoughts. Thank you so much. I appreciate you all.

Andra Keay 51:09
Thank you so much, Kenya, you raised great points, and it was wonderful to have the definitions there. What I like most is that you took it back to starting with the algorithm. And this is something that is both critically important and also crucially problematic at the moment, because right now robotics has become a subset of AI. That means that federal policies on AI are incorporating robotics; it means that the agenda is being driven by the discussion around AI and AI ethics, and it is often being done in complete ignorance of robotics. One of the problems there is that if the discussion is only about robotics, then it may only be about safety rather than algorithmic transparency. But at the same time, if the discussion is primarily about the algorithm, then it's going to exclude the impacts of physical robots, and they have a completely different and expanded way of intersecting with us in society. So, you know, I love that you started with the AI in that discussion. And let me just see if Ken would like to speak now.

Ken Goldberg 52:31
Thank you. Good, thank you. I appreciate that. I am really inspired by a lot of this discussion. And I also want to take a moment to acknowledge that the event tonight is sandwiched between two major events, at least in the United States. One is Martin Luther King Day, which we celebrated yesterday, and tomorrow is the inauguration. In approximately 15 hours or so we will have a new president of the United States and a new administration, which, I don't know how others feel about that, but I for one am very, very happy about. I think it's important, because we are at an opportunity, a new chapter in American history, which I think will affect a lot of events globally. One thing that struck me is that, as Kenya just mentioned, the COVID vaccine process is going to be a very interesting reexamination of our sense of inclusivity, because we are going to have to think very carefully and deeply about how we prioritize the vaccine. It's been very interesting to me that seniors, healthcare workers, and incarcerated individuals have been prioritized, with good reason. But it's very interesting, because they're often not considered in our priorities. So it's been forcing us to reconsider, and I think as more vaccines become available, we're going to have to do some really careful thinking about how this is rolled out; it's going to cause a reexamination. And I want to note that this pandemic has woken us up in so many ways. It was 100 years ago that the 1918 pandemic occurred, and I was reflecting on the idea that the word robot was coined in 1920, right after the end of the pandemic. I'm still trying to wrap my head around the idea that it was such an interesting context, where they had just gone through a world war and this horrendous threat to humanity.
And that's when the playwright Karel Čapek, in Czechoslovakia, basically comes up with a story about robots rebelling against the totalitarian regime that was forcing them to work. That's where the word robot originated. So 100 years later, I think that thinking about robots in this context of our political, economic and social environment is so important. And so the points that were raised here from the beginning, from Michelle characterizing, you know, what is our definition? Because I think that's so hard to actually wrap our heads around. I mean, we can talk about seniors and children, age as a sort of inclusivity, right, that's one very big factor. Then there's one we didn't talk about tonight, gender and LGBTQ+, right, all the gender issues that have come to the fore this year. And race issues, I mean, Black Lives Matter. I think that BIPOC, the whole idea of thinking in new ways, has created a lot of shifting of attention and priorities in a really positive way. I can tell you that Black In Robotics and Black in AI have had a big influence this year on our admissions process at Berkeley. We're right now reviewing applicants, and we are getting a lot of attention: more applicants than ever before, more Black applicants than ever before, and two of them are here, actually. And I want to say it's wonderful. What's really important is that we're learning, and educating the faculty, that what matters is not just looking at the scores and how many papers they've written, but what their trajectory has been. So if a student has come from adversity, in a small village in Africa, and is now an undergraduate doing really well in classes, that's a huge trajectory. I mean, imagine what they had to overcome to get there.
So really think about that in regard to how you're evaluating that student. They may not have a published paper, but they're on a trajectory, and nothing's going to stop them. Right. So I think this is really fascinating, and it's a really powerful and important time. I also think in terms of other races: you opened tonight, Andra, with a story about Native Americans and Aboriginal people, and I think it's really important that we also consider Hispanic, Indian, the full spectrum of races that are out there. And languages, by the way: a big disparity that excludes many people is the language they speak. There's an emphasis on English in a lot of the publications, but that is very difficult if English is not your native language; you have a barrier to overcome in the way you read and write, and we need to think about how to overcome that. Even our conversation tonight is in English. And translation nowadays, where some of these tools and AI again come into play, is going to open up these doors, and I hope it will increase in quality so that we can have simultaneous translation. For example, for people with hearing disabilities, having automated translation and closed captioning is a wonderful thing. We've been using it in our classes; I have a student who is hearing-disabled, and we use this for all of our meetings. It's opened a huge number of doors for all kinds of disabilities, and with regard to cognitive diversity, or neurodiversity, and people with learning disabilities: we've found out today, with COVID-19, that these kinds of learning disabilities are much greater than we thought before. Students have all kinds of challenges.
And it is also important to acknowledge, as Michelle noted, socioeconomic variations, right? We are oftentimes targeting the particular people who can afford these kinds of robots and tools, and who even have Wi-Fi in their homes. Right? But many people do not. So how do we think about that? I also think that, intellectually, the people developing these technologies are oftentimes, you know, nerds like me, engineers, right? We're all people who feel pretty good about doing science or math, STEM, but many people don't; they're uncomfortable there, and they feel excluded. So how do we engage with the people who are the artists and the humanists, the writers, the journalists, who are so engaged across the board? And workers are oftentimes affected by the robots that we're developing, so we need to be engaging with workers and really thinking carefully about how it's going to affect them, especially minimum-wage workers, who are most vulnerable to these technologies. So there are so many things here that are engaging for me. And I also have to say, Kene, I have not met you before, but I'm so excited to follow up with you, because we have a common background in Nigeria. I was born there, in the 60s, and the programs you mentioned I was not aware of. But it's fascinating, because we started something called the African Robotics Network with a professor in Ghana. This was in 2012, and its first objective was to design an ultra-affordable robot for education. The challenge was to design a programmable robot for under $10; we thought nobody could ever do that. Anyway, it turned out that someone did.
It was a hobbyist living in Thailand who basically came up with what he calls a Lollybot. If you look it up on the internet, it's L-O-L-L-Y-B-O-T, a Lollybot, and it costs $8.64; you can build it from an old Sony game controller. Anyway, what I want to say is, I love what you're saying, because I completely agree with this: there is a big opportunity. I am very excited about Africa and its potential. I think it is a major continent, and one that is going to accelerate into the future. And one of the things I have found in African students is a really incredible ability to think outside the box; they think differently because of their experience. They're very, very attuned to how to make something affordable and sustainable, how to make something that works even when the electricity goes out, which nobody in the West usually thinks about. But those kinds of things are really important. And they are as engaged and interested in robots as any kid anywhere. So that's why I want to be able to bring them to robots, and the programs that you're talking about, like the Pan-African Robotics Competition, I absolutely love. So I want to connect with you, because I would really like to follow up. And that is one of the things I think we can do as the group of us tonight: we're forming a community. What I feel is that there's a real sense of something grassroots happening here, and I'm so excited. I want to thank you, Andra, for putting this group together, because there's a spark here that I want to support, and I really want to see it grow over the next few years. I think we are at a historical moment, when this is an opportunity for us to step forward, really take this opportunity and do something with it, and carry it forward in a really meaningful and sustainable way. Thank you.

Andra Keay 1:02:03
Thank you so much, Ken, that was a very wonderful note to finish on. Sadly, for tonight, we are out of time. The problem space is enormous, but it's fitting to realize that the opportunity space is even larger. I personally have thought a lot about what a robot would look like if it was designed by women, for women, and I can imagine things being different. But imagine now what it would look like if your language models were for languages other than English, or if you had to rise to the challenge of developing language models for multiple languages, as is the case in Africa. And I love the examples that Ken gave us there as well, about really thinking outside of the box. It's been proven fairly scientifically that diversity drives innovation. It might not be as comfortable, but it is certainly far more productive if you're looking to make change, as well as creating something that's inclusive. So we shall have to continue this discussion and extend it into our workplaces and to the rest of the people around us, because, yes, we need to get everybody in the room. I'm looking forward to taking that journey with you all, and thank you all so much tonight; those were wonderful speeches. Okay, I'm going to stop the recording and say goodnight to everyone.

Dr Michelle Johnson 1:03:48
Thank you

Transcribed by https://otter.ai

 

]]>
Introducing Eliza Kosoy; E-liza Dolls https://robohub.org/introducing-eliza-kosoy-e-liza-dolls/ Thu, 18 Feb 2021 09:30:04 +0000 https://robohub.org/introducing-eliza-kosoy-e-liza-dolls/

Eliza Kosoy is a Ph.D. student at UC Berkeley. She studied mathematics in college and then worked for Prof. Joshua Tenenbaum at MIT in his computational cognitive science lab. She then started a Ph.D. at UC Berkeley in 2018, working with Professor Alison Gopnik. She is most proud of receiving funding and winning an innovation prize that catalyzed her business! Her startup is called E-liza Dolls: 18’’ electronic “liza” dolls that introduce young girls to coding and hardware in a fun way!

She chose this topic because, as a woman in STEM, she couldn’t help but feel the gender and racial divide and discrepancies in the hard sciences. With her background in child development, it only made sense that it’s best to expose children to these concepts early on, so they will be embedded into their hypothesis space as they develop. The hardest challenge for her is soldering errors, and when tiny components fall off without notice.

E-liza Dolls Kickstarter will open very soon in March 2021… We’ll update this post the moment it goes live!

Roboticists in Residence is a Silicon Valley Robotics initiative that provides free studio space and support for creative artists and engineers making a difference, whether it’s modding a Tesla with all the conveniences of the Victorian era or adding to the ROS2 navigational stack. For more information and updates from our Roboticists in Residence

]]>
Women in Robotics Update: Andra Keay, Nguyen Sao Mai and Selin Alara Örnek https://robohub.org/women-in-robotics-update-andra-keay-nguyen-sao-mai-and-selin-alara-ornek/ Tue, 02 Feb 2021 02:49:34 +0000 https://robohub.org/women-in-robotics-update-andra-keay-nguyen-sao-mai-and-selin-alara-ornek/

Here’s a Women in Robotics Spotlight, where we share stories from women who are working on all sorts of interesting projects who haven’t yet been featured in our Annual Showcase. We hope these stories provide inspiration to everyone to join us working in the field of robotics. And if you’re a woman working in robotics, why not contribute your story too!

“I love robots, however I do find it frustrating when the code that was working the day before doesn’t work. I also find it hard supplying my robots with power. I learn online, although I do have a few mentors that help me, but it’s really not easy learning on my own. My favourite thing about robotics is making them, and when they work like they should. My robots make people really happy, so I love that. I also love succeeding – the feeling when my robots come to life is unbelievable,” says Selin Alara Örnek, a high school student who has built five robots, including a robot guide dog for the blind.

Andra Keay, Managing Director at Silicon Valley Robotics, Visiting Scholar at CITRIS People and Robots Lab, and Founder of Women in Robotics

Why robots?

“My background is Human-Robot Interaction, Design, and Communications Technology – which seems a long way from robotics, but our mass communication technologies (including the internet) were the most powerful and creative technologies of the 20th century.

I’ve always been interested in robots, firstly as a philosophical thing, then as interesting robots became possible, fascinated by the way in which this latest evolution of technology is spreading into society.

The sheer scope of the technology is my favorite thing, but historically, the incredible homogeneity or lack of diversity in robotics is my least favorite thing. Fortunately, we’re changing that!

We have technology that can solve the world’s greatest challenges, if we can continue to find ways to market and avoid frittering away our advances on novelty devices, games or advertising.”

What suggestions do you have for other Women in Robotics?

“Every time I was trusted to lead a project I grew a lot and gained confidence. Until then, I hadn’t even realized that I was lacking in confidence. At the time, it just seemed expected of young women to follow other people, not go my own way.

I’d like to call out people on being too self-deprecating, putting themselves down, or apologizing for themselves. If I had a dollar for every time a woman in robotics has said “but I’m not really a…” then I’d be able to fund a great robotics company! I also call on people to stop blaming women for not speaking up or ‘leaning in’ when no one in industry is listening to them!”

Nguyen Sao Mai, Assistant Professor at ENSTA-Paris, IP-Paris, also affiliated with IMT Atlantique

Why robots?

Nguyen Sao Mai has enabled a robot to coach physical rehabilitation in the project RoKInter and the experiment KERAAL, which she coordinated, funded by the European Union through the FP7 project ECHORD++. She has participated in the project AMUSAAL, analysing human activities of daily living through cameras, and CPER VITAAL, developing assistive technologies for the elderly and disabled. She has developed machine learning algorithms combining reinforcement learning and active imitation learning for interactive and multi-task learning, contributing to the beginnings of interactive reinforcement learning. She is currently an associate editor of the journal IEEE TCDS and co-chair of the “Action and Perception” task force of the IEEE Technical Committee on Cognitive and Developmental Systems.

“Cognitive Developmental Robotics is a wonderful field that allows us to build new assistive robots that can evolve in interaction with humans and adapt to the needs of their users through continual learning. It is also an amazing tool for understanding and modelling biological cognition and learning processes.

Robotics is unleashing its potential little by little as a tool to address humankind’s challenges, such as medical advances, environmental issues or social assistance for the elderly and the disabled. This year’s situation has, for instance, shown the usefulness of robotics in medical environments and nursing homes.”

What suggestions do you have for other Women in Robotics?

“Robotics and artificial intelligence are wonderfully multidisciplinary fields. It can be surprising at first sight that the social sciences play an essential role in their scientific advances. They need contributions from researchers of different fields. They could also greatly benefit from women’s outlook on intelligence or on the social role of robots in our future society.”

Selin Alara Örnek, High School Student and Inventor

Why robots?

“I am a 14-year-old high school student. I have been coding since I was 8 and building robots since I was 10 years old. I have built 5 robots so far. One is a guide dog for blind people; I have built 2 versions, ic4u and ic4u2. I have another robot, BB4All, which is a school aid robot to help students, teachers and staff; its main aim is to prevent bullying. I have also built an Android robot and a Star Wars droid, D-O, as I love both of them.

When I was 9 we lost our family dog. I was really upset, as I don’t have any brothers or sisters and he was like my brother. I wanted to bring him back to life; when I was little I dreamed of bringing one of my soft toys to life. Whilst on holiday with my family I saw a guide dog with its blind owner. I love dogs and was really happy to see a dog helping in such a way. But then I remembered how sad I was, and I thought that if the blind person’s dog were to die they would not only lose their best friend but their eyes again too. So I decided to build my robot guide dog, ic4u.

I am currently rebuilding BB4All, as the first one I built was not very strong and made of cardboard; now I am printing the pieces with a 3D printer and adding some more features. In my robots, I use image recognition, object detection, face recognition, voice commands, Dialogflow, omnidirectional movement, the Google Maps API, Google Assistant integration and various sensors.

I believe that robots will be part of our everyday life and that we will need them more and more. I love robots, so that makes me really happy. My dream is to build a humanoid in the future and send it to space so that it can do research on black holes.”

What suggestions do you have for other Women in Robotics?

“I used to love playing games like Minecraft, and then in English class my teacher started making games for us to play whilst learning, and I really wanted to do that too. I asked him how he did it and he told me to have a look at MIT Scratch and encouraged me to code my own games. He told me I could do it and that I should try. That is how I started to learn how to code, and I am very happy to have a teacher like him.

A lot of girls are not very interested in coding or robotics and find what I do very boring. I spend a lot of time taking my robots to events to talk about them and show how coding and robotics can be fun, and also how technology can be used for good. I recently gave a TEDx talk and have given presentations at plenty of local and international events. A lot of other kids get in touch with me to ask questions, which I also like to answer. I especially like when little girls get in touch, as it makes me happy to see them so interested and excited. I also try to send messages to parents in my presentations and interviews, pointing out that they should respect their children’s choices and give equal opportunities to both their daughters and sons.”

We encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org. We’re a global network with local chapters around the world. The guest speaker at our most recent event, Women in Robotics Melbourne Australia, was Nicole Klouet, a PhD candidate in aerospace engineering working on reducing the acoustic impact of drones on society.

 

]]>
The future of robotics research: Is there room for debate? https://robohub.org/the-future-of-robotics-research-is-there-room-for-debate/ Fri, 22 Jan 2021 10:11:54 +0000 https://robohub.org/the-future-of-robotics-research-is-there-room-for-debate/

Participants James Mickens, Ludovic Righetti, Aude Billard, Melonee Wise and moderator Hallie Siegel at the ICRA 2019 “Debates on the Future of Robotics Research” workshop

By Brian Wang, Sarah Tang, Jaime Fernandez Fisac, Felix von Drigalski, Lee Clement, Matthew Giamou, Sylvia Herbert, Jonathan Kelly, Valentin Pertroukhin, and Florian Shkurti

As the field of robotics matures, our community must grapple with the multifaceted impact of our research; in this article, we describe two previous workshops hosting robotics debates and advocate for formal debates to become an integral, standalone part of major international conferences, whether as a plenary session or as a parallel conference track.

As roboticists build increasingly complex systems for applications spanning manufacturing, personal assistive technologies, transportation and others, we face not only technical challenges, but also the need to critically assess how our work can advance societal good. Our rapidly growing and uniquely multidisciplinary field naturally cultivates diverse perspectives, and informal dialogues about our impact, ethical responsibilities, and technologies. Indeed, such discussions have become a cornerstone of the conference experience, but there has been relatively little formal programming in this direction at major technical conferences like the IEEE International Conference on Robotics and Automation (ICRA) and Robotics: Science and Systems (RSS) Conference.

To fill this void, we organized two workshops entitled “Debates on the Future of Robotics Research” at ICRA 2019 and 2020, inspired by a similar workshop at the 2018 International Conference on Machine Learning (ICML). In these workshops, panellists from industry and academia debated key issues in a structured format, with groups of two arguing either “for” or “against” resolutions relating to these issues. The workshops featured three 80-minute sessions modelled roughly after the Oxford Union debate format, consisting of:

  1. An initial audience poll asking whether they “agree” or “disagree” with the proposed resolution
  2. Opening statements from all four panellists, alternating for and against
  3. Moderated discussion including audience questions
  4. Closing remarks
  5. A final audience poll
  6. Panel discussion and audience Q&A

The “Debates” workshops attracted approximately 400 attendees in 2019 and 1100 in 2020 (ICRA 2020 was held virtually in response to the COVID-19 pandemic). In some instances, panellists took positions out of line with their personally held beliefs and nonetheless swayed audience members to their side, with audience polls displaying notable shifts in majority opinion. These results demonstrate the ability of debate to engage our community, elucidate complex issues, and challenge ourselves to adopt new ways of thinking. Indeed, the popularity of the format is on the rise within the robotics community, as other recent workshops have also adopted a debates format — for instance, the 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics or the Soft Robotics Debates.

Poll results of the three questions asked at the ICRA 2019 workshop

Audience agree/disagree poll results before and after each debate of the virtual “Debates” workshop at the 2020 IEEE International Conference on Robotics and Automation (ICRA)

Given this positive community response, we argue that major robotics conferences should begin to organize structured debates as a core element of their programming. We believe a debate-style plenary session or parallel track would provide a number of benefits to conference attendees and the wider robotics community:

  • Exposure to ideas: Attendees would have more opportunities to be exposed to unfamiliar ideas and perspectives in a well-attended plenary session, while minimizing overlap with technical sessions.
  • Equity: Panellists and moderators no longer have to choose between accepting a debate invitation, accepting a workshop speaking engagement, supporting their students’ workshop presentations, and/or hosting a workshop of their own. Some of these conflicting responsibilities disproportionately affect early career researchers.
  • Inclusion: Conference organizers have the discretion to provide travel support, enabling panellists and moderators from a wider range of countries, career stages, and backgrounds to participate.
  • Impact: A notable Science Robotics article cited debates as a necessary driver of progress towards solving robotics’ grand challenges. By facilitating structured and critical self-reflection, a widely accessible debates plenary could help identify avenues for future work and move the field forward.

The driving force behind such an event should be a diverse debates committee responsible for inviting representatives within and outside academia, with different research interests, at different career stages, and from different parts of the world. The committee will also lead the selection of appropriate debate topics. Historically, our debate propositions revolved around three key questions: “what problems should we solve?”, “how should we solve them?”, and “how do we measure success and progress?”.

The recent high-profile shuttering of several prominent robotics companies testifies to the importance of identifying the right problem. How do we transform our research into products that provide value to end-users, while keeping in mind our environmental and economic impact? How can we quickly introspect, learn from, and pivot in response to failures, and avoid repeating past mistakes? Our 2020 resolution, “robots designed for personal or household use have failed because of fundamental misunderstandings of Human-Robot Interaction (HRI),” provided an opportunity to discuss such questions.

As a multidisciplinary field drawing on advances in machine learning, computer science, mechanical design, control, optimization, biology, and more, robotics has a wealth of tools at its disposal. A central problem in robotics research is then to identify the right tool for the right problem. Our 2019 debate proposition, “The pervasiveness of deep learning in robotics research is an impediment to gaining scientific insights into robotics problems,” probed the ascendance of data-driven approaches to robotics problems and attracted a large audience.

In addition to identifying the right problems and the right tools to solve them, it is equally important to discuss how best to evaluate and compare proposed solutions. A central tension in robotics research is how to strike a balance between accessible and replicable benchmark datasets, and real world experiments. Our 2020 debate proposition, “Robotics research is over reliant on benchmark datasets and simulation”, challenged the audience to consider how to rigorously and accessibly evaluate the performance of robotic systems without overfitting to a particular benchmark. Indeed, failures to adequately evaluate robotics algorithms have already led to tragic loss of life, underscoring the importance of establishing common standards for measuring the performance and safety of our technologies in the context of rapid commercialization and real-world deployments. These standards must be informed by regular and rigorous critical reflection on our ethical obligations as researchers, practitioners and policymakers. The fact that our 2019 debate proposition, “Robotics needs a similar level of regulation and certification as other engineering disciplines (e.g., aviation), even if this results in slower technological innovation”, attracted and resonated with a significant audience is evidence of demand for structured discussion of these topics.

As robotics technologies continue to move from the research laboratory to the real world, we believe there should be room for debate at major robotics conferences. Enshrining debate as a core element of major robotics conferences will serve to create opportunities for self-reflection, establish institutional memory, and drive the field forward.

For more information, please see https://roboticsdebates.org/.

]]>
Women in Robotics Update: introducing our 2021 Board of Directors https://robohub.org/women-in-robotics-update-introducing-our-2021-board-of-directors/ Mon, 18 Jan 2021 02:22:24 +0000 https://robohub.org/women-in-robotics-update-introducing-our-2021-board-of-directors/ Women in Robotics is a grassroots community involving women from across the globe. Our mission is supporting women working in robotics and women who would like to work in robotics. We formed an official 501c3 non-profit organization in 2020 headquartered in Oakland California. We’d like to introduce our 2021 Board of Directors:

Andra Keay, Women in Robotics President

Managing Director at Silicon Valley Robotics | Visiting Scholar at CITRIS People and Robots Lab | Startup Advisor & Investor

Andra Keay founded Women in Robotics originally under the umbrella of Silicon Valley Robotics, the non-profit industry group supporting innovation and commercialization of robotics technologies. Andra’s background is in human-robot interaction and communication theory. She is a trained futurist, founder of the Robot Launch global startup competition, Robot Garden maker space, Women in Robotics and is a mentor, investor and advisor to startups, investors, accelerators and think tanks, with a strong interest in commercializing socially positive robotics and AI. Andra speaks regularly at leading technology conferences, and is Secretary-General of the International Alliance of Robotics Associations. She is also a Visiting Scholar with the UC’s CITRIS People and Robots Research Group.

Allison Thackston

Roboticist, Software Engineer & Manager– Waymo

Allison Thackston is the Chair of the Women in Robotics Website SubCommittee and Co-Chair of our New Chapter Formation SubCommittee. She is also a Founding Member of the ROS2 Technical Steering Committee. Prior to working at Waymo, she worked at Nuro and was the Manager of Shared Autonomy and a Principal Research Scientist in Intelligent Manipulation at Toyota Research Institute. She has an MS in Robotics and Mechanical Engineering from the University of Hawaii and a BS in Electrical Engineering from Georgia Tech. With a passion for robots and robotic technologies, she brings energy, dedication, and smarts to all the challenges she faces.

Ariel Anders

Roboticist – Robust.AI

Ariel Anders is a black feminist roboticist who enjoys spending time with her family and artistic self-expression. Anders is the first roboticist hired at Robust.AI, an early stage robotics startup building the world’s first industrial grade cognitive engine. Anders received a BS in Computer Engineering from UC Santa Cruz and her Doctorate in Computer Science from MIT, where she taught project-based collaborative robotics courses, developed an iOS app for people with vision impairment, and received a grant to install therapy lamps across campus. Her research focused on reliable robotic manipulation with the vision of enabling household helpers.

Cynthia Yeung

Robotics Executive & COO, Advisor, Speaker

Cynthia Yeung is the Chair of the Women in Robotics Mentoring Program SubCommittee, which will be piloting shortly. She is also a mentor and advisor to robotics companies, accelerators and venture capital firms, and speaks at leading technology conferences. Cynthia studied Entrepreneurship at Stanford and Systems Engineering at UPenn, and did a triple major at The Wharton School, UPenn, where she was a Benjamin Franklin Scholar and a Joseph Wharton Scholar. She has led strategic and international partnerships at organizations like Google and Capital One, led product partnerships at SoftBank Robotics and Checkmate.io, and was COO of CafeX. In her own words: “I practice radical candor. I build teams to make myself obsolete. I create value to better human society. I edit robotics research papers for love.”

Hallie Siegel

Associate Director, Strategy & Operations at University of Toronto

Hallie Siegel is the driving force behind the emerging robotics network in Canada, centered at the University of Toronto. She is a communications professional serving the technology, innovation and research sectors, specifically robotics, automation and AI. She is pursuing a Master’s in Strategic Foresight and Innovation at OCADU, where she was a Dean’s Scholar. Hallie was also the first Managing Editor at Robohub.org, the site for robotics news and views, after doing science communications for Raffaello D’Andrea’s lab at ETH Zurich. In her spare time, she is a multidisciplinary artist, and Chair of the Women in Robotics Vision Workshops.

Kerri Fetzer-Borelli

Head of Diversity, Equity, Inclusion & Community Engagement at Toyota Research Institute

Kerri Fetzer-Borelli is the Co-Chair of the Women in Robotics New Chapter Formation SubCommittee. They have worked as a scientific data collector for the military, as a welder in nuclear power plants, and as the Manager of Autonomous Vehicle Testing, then of Prototyping and Robotics Operations, at Toyota Research Institute, where they now lead DEI and Community Engagement. Kerri mobilizes cross-functional teams to solve complex, abstract problems by distilling strategic, actionable items and workflows from big ideas.

Laura Stelzner

Robotics Software Engineer at RIOS

Laura Stelzner is the Chair of the Women in Robotics Community Management SubCommittee, increasing activity and engagement in our online community. By day, she is in charge of software at emerging robotics startup RIOS, which provides factory automation as a service, deploying AI-powered, dexterous robots on factory assembly lines. Prior to RIOS, Laura worked at Toyota Research Institute, Space Systems Loral, Amazon Labs, Electric Movement and Raytheon. She has a BS in Computer Engineering from UC Santa Cruz and an MS in Computer Science from Stanford.

Laurie Linz, Women in Robotics Treasurer

Software Development Engineer in Test at Alteryx

Laurie Linz is the Women in Robotics Treasurer, as well as founder of the Boulder/Denver Colorado WiR Chapter. When not working as a software developer or QA tester, Laurie can be found with her hands on an Arduino, or a drone, or a camera. As she says, “I like to build things, break things and solve puzzles all day! Thankfully development and testing allows me to do that. Fred Brooks was right when he wrote that the programmer gains the “sheer joy of making things” and he talks of “castles in the air, from air” as we are only limited by the bounds of human imagination.”

Lisa Winter

Head of Hardware at Quartz

A roboticist since childhood, Lisa has over 20 years experience designing and building robots. She has competed in Robot Wars and BattleBots competitions since 1996, and is a current judge on BattleBots. She currently holds the position of Head of Hardware at Quartz, an early stage startup working on the future of construction. Her rugged hardware can be seen attached to tower cranes all around California. In her free time she likes to volunteer her prototyping skills to the Marine Mammal Center to aid in the rehab of hundreds of animals each year. She is a Founding Board Member of Women in Robotics and Chair of the Artwork/Swag SubCommittee.

Sue Keay, Women in Robotics Secretary

CEO at Queensland AI Hub and Chair of the Board of Directors of Robotics Australia Group

Sue Keay is currently CEO of Queensland AI Hub, after leading cyber-physical systems research for CSIRO’s Data61. Previously, Sue set up the world’s first robotic vision research centre. She led the development of Australia’s first robotics roadmap, the Robotics Australia Network and the Queensland Robotics Cluster. A graduate of the Australian Institute of Company Directors, she founded and chairs the Board of Robotics Australia Group. Sue also serves on the Boards of CRC ORE and Queensland AI Hub, and represents Australia in the International Alliance of Robotics Associations.

With such a go-getting Board of Directors, you can be assured that Women in Robotics is preparing for an active 2021. As of 1/1/21, we had 1270 members in our online community, 900 additional newsletter subscribers, and six active chapters in the USA, Canada, UK and Australia. All Women in Robotics events abide by our Code of Conduct and we offer it for use at any robotics event or conference.

Our focus for 2021 is on:

  • Project Inspire – our annual 30 women in robotics you need to know about list, plus regular updates, spotlights, and wikipedia pages for women in robotics.
  • Project Connect – forming new chapters, promoting our online community, and enjoying  regular member led activities and events, under a Code of Conduct.
  • Project Advance – piloting a mentoring program, providing educational resources for women in robotics, and improving accountability metrics in our workplaces.

We’d also like to thank our two Founding Board Members, Sabine Hauert (of the Hauert Lab at the University of Bristol, UK, and founder of Robohub.org) and Sarah Osentoski (SVP of Engineering at Iron Ox), who are leaving the WiR Board but will be leading our new Women in Robotics Advisory Board, another new initiative for 2021.

You can subscribe to our newsletter to keep updated on our activities, to sign up for our speaker database or volunteering opportunities, or to show your support as an ally. Please support our activities with a one-off or recurring donation (tax deductible in the USA).

]]>
Women in Robotics Update: Melonee Wise, Maren Bennewitz, Alicia Casals https://robohub.org/women-in-robotics-update-melonee-wise-maren-bennewitz-alicia-casals/ Tue, 12 Jan 2021 00:23:44 +0000 https://robohub.org/women-in-robotics-update-melonee-wise-maren-bennewitz-alicia-casals/

Introducing the seventh post in our new series of Women in Robotics Updates, featuring Melonee Wise, Maren Bennewitz and Alicia Casals from our first “25 women in robotics you need to know about” list in 2014. These women have pioneered foundational research in robotics, created organizations of impact, and inspired the next generations of robotics researchers of all ages.

Melonee Wise

CEO of Fetch Robotics

Melonee Wise (featured in 2014), now CEO of Fetch Robotics, has spent 19 years designing, building, and programming robotic hardware, including an autonomous boat, an autonomous car, personal robot platforms, battlebots, and several low-cost platforms. At Fetch Robotics, she and her team provide the best Autonomous Mobile Robot (AMR) solutions for the logistics industry through a fleet of products that provide ‘On Demand Autonomy’.

Fetch Robotics was also the first winner of the Overall Excellence Award in the Silicon Valley Robotics ‘Good Robot’ Industry Awards in 2020. Wise was recognized in the Silicon Valley Business Journal’s ‘40 under 40’, in the Technology Review TR35, and as one of the ‘40 female founders who crushed it’ in 2016. She also received the Silicon Valley Business Journal Women of Influence award in 2017. She has more than 4 patents and more than 20 published articles.

Wise received the Distinguished Alumni Award from the University of Illinois at Urbana-Champaign in 2016. After the ceremony Wise shared that, “Women in Engineering is definitely a small community. There’s only so many women in engineering and there’s only so many engineers in robotics. So you’re looking at a pretty thin cross-section of a population that is not as diverse as we all hope it would be. And so sometimes it can be very challenging. But I actually think it’s harder as a start-up entrepreneur to be a woman than it is to be potentially a roboticist.”

Maren Bennewitz

Professor at the University of Bonn

Maren Bennewitz (featured in 2014) is a professor for humanoid robots and vice rector for IT at the University of Bonn, and her research focuses on robots acting in human environments. She and her team have developed several innovative solutions for robotic systems co-existing and interacting with humans, such as probabilistic techniques for navigation with humanoid and wheeled robots, as well as for reliably detecting and tracking humans in sensor data and analyzing their motions. She has also been on the Executive Board of the Cluster of Excellence PhenoRob and the Center for Mind Research since 2019.


Bennewitz’s papers were selected among the outstanding and best papers at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) and at Intelligent Autonomous Systems (IAS) in 2018. She also received a best paper award at the International Conference on Advanced Robotics (ICAR) and at the European Conference on Mobile Robots (ECMR) in 2019. She has more than 8000 citations and 150 publications.

Bennewitz was inspired by her participation in the Minerva project, led by Sebastian Thrun back in 1998, where she and the team programmed a robot to act as a museum tour guide in the National Museum of American History. “My favorite thing is doing experiments on real robots, evaluating newly developed techniques,” says Bennewitz, “which is also one of my least favorite things, since it is so time-consuming to get the algorithms running on real-world systems with real sensor data, even when the algorithms worked reliably in simulation before.”

Alicia Casals

Professor at Universitat Politècnica de Catalunya (UPC)

Alicia Casals (featured in 2014) is a professor at Universitat Politècnica de Catalunya (UPC), where her research is in medical robotics, mainly in the surgical field. She has been collaborating with companies and other non-academic institutions to find solutions to the challenges that come with integrating robotics into real-world situations, and recently co-founded two companies in the medical robotics field.

Casals received the “Nit de la Robòtica” award in recognition of her research and professional career, awarded in 2019 by the Industrial Engineers of Catalunya. She has been active as a role model, encouraging scientific and technical vocations among young women while emphasizing the human side of the field. Casals is significantly involved with the IEEE Robotics and Automation Society and the IEEE Engineering in Medicine and Biology Society (IEEE RAS and IEEE EMBS) and the European Robotics Network (EURON), and founded the Spanish Robotics Chapter.

In this 2015 Oral History for the Engineering and Technology History Wiki, Casals describes what got her started in robotics, her projects, her startup experience and her message to young roboticists: “It’s important to think on the ethics of robotics, on the efficiency of the work, and so trying to really solve what the problem is, because it’s a very wide area, so it’s easy that the projects solve things but don’t reach anything in particular. But robotics has a wide field of applications, and they can be very good and can be used as an assistive tool. We are working in the medical field, so we basically work in robotics for aiding people. So that is a fantastic area of research we consider.”

Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics Update: Ecem Tuglan, Tuong Anh Ens, Sravanthi Kanchi, Kajal Gada, Dimitra Gkatzia https://robohub.org/women-in-robotics-update-ecem-tuglan-tuong-anh-ens-sravanthi-kanchi-kajal-gada-dimitra-gkatzia/ Mon, 28 Dec 2020 01:02:05 +0000 https://robohub.org/women-in-robotics-update-ecem-tuglan-tuong-anh-ens-sravanthi-kanchi-kajal-gada-dimitra-gkatzia/

Welcome to the first of our Women in Robotics Spotlights, where we share stories from women who haven’t yet been featured in our Annual Showcase but who are working on all sorts of interesting projects. We hope these stories inspire everyone to join us in the field of robotics. And if you’re a woman working in robotics, why not contribute your story too!

“Making robots communicate with humans in natural language is a fascinating challenge. There is a lot going on during interactions between robots and humans. Humans make gestures, observe or interact with visible objects in the environment, and display emotions. What motivates me is equipping social robots with the ability to interact seamlessly, by recognizing a given situation and talking about it,” says Dimitra Gkatzia, who specializes in Natural Language Generation for Human-Robot Interaction.

Ecem Tuglan

The Mecademi of Team Think Tank | Cofounder of Fenom Robotics

Ecem Tuglan is the Mecademi of Team Think Tank, co-founder of Fenom Robotics, and an active robopsychologist working on the philosophy of artificial intelligence, neurophilosophy, human-robot interaction, biopolitics, robopsychology, cognitive science and political theory. At Fenom Robotics, she and her team build holograms displaying humanoid robots. She is also working on projects as a robopsychologist with Dr. Ravi Margasahayam of NASA.

Tuglan says her interest in robots started in childhood, when she preferred robotic toys and electronic gadgets, and that this childhood obsession turned professional once she began studying philosophy. She remains intrigued by how everything, from the micro scale to the macro scale, is changing with robotics: cell-like robots could save us from various diseases, while AI-based astrobots could find new home planets. She enjoys the breadth of research in robotics and the way its interdisciplinary knowledge enhances our creativity and productivity, because almost anything we can imagine can be connected to the field.

Tuong Anh Ens

CEO and Founder at Go West Robotics

Tuong Anh Ens is CEO and founder of Go West Robotics, a robotics software consulting company. Having been exposed to many exciting robotics projects and built strong connections in the robotics community, she decided to focus on helping robotics companies succeed. Her main objective is to reduce the hurdles that stand between creative, revolutionary ideas and their implementation. At Go West Robotics, she and her team work with the world’s leading robotics companies to build better automation systems and robots.

Ens enjoys the challenges of robotics’ future: the ever-changing unknown, and our ability to push beyond the boundaries of what was previously inconceivable. Having navigated the hurdles of balancing personal and professional life herself, she believes strongly in hard work and perseverance, and she trusts her team at Go West Robotics to keep delivering accomplishments and growth in robotics.

Sravanthi Kanchi

Data Engineer at Bayer Crop Science | Founding Member of The Founders Vault

Sravanthi Kanchi is a data engineer at Bayer Crop Science and a founding member of The Founders Vault. She loves learning about, researching and building robots, and she is currently working on a home-cleaning robot. She enjoys seeing ideas come to life in robotics, and she aspires to make an impact on people’s lives by building something useful for mankind, believing in robotics’ contribution to transforming healthcare, ergonomics, space, industrial sectors and more.

Kajal Gada

Content Creator on YouTube

Kajal Gada is a robotics software engineer and YouTuber with three years of professional experience. At her last job, at Brain Corp, she helped support Brain OS, a software platform for autonomous mobile vehicles. Her interest in robotics was sparked by a video of drones doing flips autonomously, shown to her by a mentor who continually encouraged her to explore robotics.

Gada started working on robotics on her own, building her own robot for simple projects such as a line follower and obstacle avoidance, and then deepened her knowledge with a Masters in Robotics from the University of Maryland. As a way of giving back to the robotics community, she creates free, beginner-friendly tutorials on her YouTube channel using the open-source simulator Webots, making it easy for anyone to get started with robotics. She has also been interviewing women in robotics on her channel and wants to continue doing so, to inspire younger women by showing them how someone who looks like them started out and succeeded.

Dimitra Gkatzia

Associate Professor at Edinburgh Napier University

Dimitra Gkatzia is an associate professor at the School of Computing at Edinburgh Napier University, where she leads CiViL, a UK-funded project in robotics that aims to provide robots with human-like abilities, such as reasoning and communicating using common sense. She is also a co-founder of the workshop series NLG4HRI, which aims to bring together researchers interested in developing NLG methods for Human-Robot Interaction.

Gkatzia’s expertise is in Natural Language Generation (NLG), i.e. teaching computers “how to talk”, as well as data-to-text generation, AI, machine learning and the summarization of time-series data. She is dedicated to making dialogue systems (such as Alexa and Siri) converse naturally, by enhancing their responses with common sense and world knowledge. She relishes the far-ranging scope and endless possibilities of robotic applications. “Robotics has shown promising results in assistive technology, education, and health”, says Gkatzia, who envisions a future where humans and robots coexist and collaborate in domestic, public and work settings, with robots used to solve real-world problems.

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics Update: Ruzena Bajcsy and Radhika Nagpal https://robohub.org/women-in-robotics-update-ruzena-bajcsy-and-radhika-nagpal/ Sun, 20 Dec 2020 08:00:40 +0000 https://robohub.org/women-in-robotics-update-ruzena-bajcsy-and-radhika-nagpal/

Introducing the sixth post in our new series of Women in Robotics Updates, featuring Ruzena Bajcsy and Radhika Nagpal from our first “25 women in robotics you need to know about” lists in 2013 and 2014. These women have pioneered foundational research in robotics, created organizations of impact, and inspired the next generation of robotics researchers of all ages.

“Being an engineer at heart, I really always looked at how technology can help people? That was my model with robots, and in fact, my research in the medical area, as well as how can we make things not just empirical, but predictable,” says Ruzena Bajcsy, expressing the motivation that guides her in both medicine and robotics.

Ruzena Bajcsy

NEC Chair and Professor at University of California Berkeley | Founder of HART

Ruzena Bajcsy (featured 2014) holds the NEC Chair and is a Professor in the Department of Electrical Engineering and Computer Sciences, College of Engineering, at the University of California, Berkeley. She has been a pioneer in the field since 1988, when she laid out the engineering agenda for active perception. Bajcsy works on modeling people using robotic technology and is inspired by recent animal behavioral studies, especially as they pertain to navigation: measuring and non-invasively extracting kinematic and dynamic parameters of an individual to assess their physical movement capabilities or limitations and their respective solutions. Bajcsy has founded several renowned research laboratories, including the GRASP laboratory at the University of Pennsylvania, the CITRIS Institute and, most recently, the HART (Human-Assistive Robotic Technologies) laboratory.

Bajcsy has received many prestigious awards over her 60 years in robotics. Since she was last featured, she received the Simon Ramo Founders Award in 2016 for her seminal contributions to the fields of computer vision, robotics, and medical imaging, and for technology and policy leadership in computer science education and research. She also received the 2020 NCWIT Pioneer in Tech Award, which honors role models whose legacies continue to inspire generations of young women to pursue computing and make history in their own right. Throughout her career she has worked at the intersection of human and machine ways of interpreting the world, with research interests that include artificial intelligence; biosystems and computational biology; control, intelligent systems, and robotics; human-computer interaction; and bridging information technology to the humanities and social sciences.

In a recent interview with the National Center for Women & Information Technology (NCWIT), she offers this advice to women starting out in robotics and AI: “I have a few rules in my book, so to speak. First of all, when you are young, learn as much mathematics and physics as you can. It is never enough of that knowledge… Number two, you have to be realistic. What, with the current technology, can you verify? Because in engineering science it’s not just writing equations, but it’s also building systems where you can validate your results.”

Radhika Nagpal

Fred Kavli Professor at Harvard | Cofounder of Root Robotics

Radhika Nagpal (featured in 2013) is the Fred Kavli Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. Her Self-Organizing Systems Research Group works on biologically-inspired robot collectives, including novel hardware design, decentralized collective algorithms and theory, and global-to-local swarm programming, as well as on biological collectives, including mathematical models and field experiments with social insects and cellular morphogenesis. Her lab’s Kilobots are licensed and sold by K-Team, and over 8,000 of the robots are in use in dozens of research labs worldwide.

Nagpal has won numerous prestigious awards since 2013. She was named one of the top ten scientists and engineers who mattered in Nature’s “Nature 10” in 2014. For her empowerment of and contribution to the next generation, she received the McDonald Award for Excellence in Mentoring and Advising at Harvard in 2015, and she was named an AAAI Fellow and Amazon Scholar in 2020.

“Science is of course itself an incredible manifestation of collective intelligence, but unlike the beautiful fish school that I study, I feel we still have a much longer evolutionary path to walk… There’s this saying that I love: Who does science determines what science gets done… I believe that we can choose our rules and we can engineer not just robots, but we can engineer our own human collective, and if we do and when we do, it will be beautiful,” says Nagpal in her 2017 TED Talk “Harnessing the intelligence of the collective”, which has more than 1 million views.

Nagpal is also a co-founder and scientific advisor of Root Robotics, which has since been acquired by iRobot. There, she and her team designed Root, an educational robot that drives on whiteboards using magnetic wheels, senses colors, and draws under program control, and which can be used to teach programming across all ages. With Root, she aims to transform home and classroom experiences with programming by making it tangible and personal. “Every kid should learn to code in a fun way, that enhances their interests, and that inspires them to become creative technologists themselves,” says Nagpal.

Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Shaping the UK’s future with smart machines: Findings from four ThinkIns with academia, industry, policy, and the public https://robohub.org/shaping-the-uks-future-with-smart-machines-findings-from-four-thinkins-with-academia-industry-policy-and-the-public/ Fri, 18 Dec 2020 16:07:06 +0000 https://robohub.org/shaping-the-uks-future-with-smart-machines-findings-from-four-thinkins-with-academia-industry-policy-and-the-public/

The UK Robotics Growth Partnership (RGP) aims to set the conditions for success, empowering the UK to be a global leader in Robotics and Autonomous Systems while delivering a smarter, safer, more prosperous, sustainable and competitive UK. The aim is for smart machines to become ubiquitous, woven into the fabric of society: in every sector, every workplace, and at home. Done right, this could increase productivity and improve quality of life. It could enable us to meet Net Zero targets and support workers as their roles transition away from menial tasks.

One thing that’s striking is that although robotics holds so much potential, the technology is not yet ready. The covid crisis has made this very clear. Had it been ready, we could have deployed robots at scale to sanitise hospitals, enable doctor-patient communication through telepresence, or connect patients with loved ones. Robots could have produced, organised, and delivered much-needed samples, tests, PPE, medicine, and food across the UK. And many businesses could be reopening with a robotic interface. Robots could have powered a low-touch economy, where activities continue even when humans can’t be in close physical contact, driving recovery and resilience.

For the past year, we’ve been thinking about this at the RGP. How could we have done things better? What would it take to make an armada of disinfecting robots for a Covid pop-up hospital (called a Nightingale hospital in the UK)? Ideally we would have been able to log into a digital twin of the hospital, port in a model of a robot platform from a database, work up a solution in the virtual world, and readily port it to an actual testbed to demonstrate its function in the physical world. We could then trial the solution in a living lab, maybe a dedicated Nightingale, before scaling it up to other hospitals. Others developing telepresence robots could follow the same methodology, checking that their solutions are interoperable and work within the same virtual and physical environments.

We have many pieces of the puzzle in the UK: great research and industry, plus government buy-in. What we need is to bring it all together.

To explore this further, we spent the last few months hosting ThinkIns with Tortoise Media to gather feedback from academia, industry, policy, and the public. You can read all the blog posts and watch the videos here:

The future of smart machines: reflections from academia https://ukrgp.org/the-future-of-smart-machines-reflections-from-academia/

Building an ecosystem to make useful robots https://ukrgp.org/building-an-ecosystem-to-make-useful-robots/

Musings with the public about their future with smart machines https://ukrgp.org/musings-with-the-public-about-their-future-with-smart-machines/

Keeping up with the pace of change – positioning the UK as a leader in smart machines https://ukrgp.org/keeping-up-with-the-pace-of-change-positioning-the-uk-as-a-leader-in-smart-machines/

Below are some preliminary findings.

From digital twins to living labs

In his blog, James Kell from Rolls Royce says “The UK is small enough to collaborate well, but big enough to be a global leader. But to be successful we will need new tools, in particular better, cheaper digital twins – synthetic environments where we can develop and test new approaches before we test them in real world environments on our $35m engines.”

Professor Samia Nefti-Meziani from Salford University had a similar comment “Society needs better tools to support a sustainable future, with a new network of synthetic environments to build and fine-tune new technologies, to ensure they work in the real world and to reduce the time from their inception to deployment from years to months. Digital platforms and accessible, open-source software tools will empower SMEs, academics and the public sector to engage and benefit from these new solutions.”

Collaboration across academia and industry

As Samia highlights, “Collaboration was a recurring theme in the ThinkIns, with academic and industry partnerships essential to ensure we target the most pressing challenges and drive innovation in the sectors that need its solutions. As smart machines become more capable and cheaper, their adoption and development within the UK business ecosystem will broaden across sectors and applications.”

Government support to unlock incentives

James continued, “Collaboration needs coordination: Government is critical to convening and leading, creating new ways and incentives to work better together. The new tools will only equip our researchers and SMEs to accelerate product development, validation and speed to market if they can trust each other and all both contribute and benefit. We need new ways for big industry (companies like mine) to have their challenges understood and find new partners to work with, to learn together to develop solutions and put them in place quicker. And if we join up the academics and link across our innovation infrastructure and existing test areas, we will accelerate the adoption of smart machines and unleash the multiple benefits they bring.”  

The human element

‘Taking the public with us’ is critical to mass adoption, says Samia. “Key will be:

– Involving the public in co-creating research and industry ambitions, to help them understand and engage with what RAS can offer

– Engaging with those who distrust RAS, to understand their concerns and gain their confidence

– Improving RAS education and lifelong learning, so those with interest and capability can be trained in RAS and directly involved in shaping their future.”

“The sector must ‘show its workings’ and be clear of the problems and challenges to prioritise. Ensuring standards and protocols are developed to protect the input of the public and the quality of the outputs is vital to buy-in in the long-term.”

David Bisset, a robotics consultant, commented on the Public session, highlighting that “Smart machines are already with us: cars, aeroplanes, vacuum cleaners. We don’t call them robots, but they all use that technology. To make them work requires many skills: industrial designers, AI people, sensor experts, interaction designers… and many more. At a human level we need to be able to trust, to know it’s built right and safe.”

He highlights the issue of “Tech Wash” mentioned by the public. “Is ‘smart machine’ just some clever rebranding? The needless selling of technology as a solution to every senior manager’s need to outshine their peers? We need to stop and think about the consequences of forcing through technology driven organisational change without evidence and stop needless disruption. We need to know these things will work!”

Overall, to make smart machines a success, we need to bring the discussion to a human level, to where this makes a difference to people.

Bringing it all together

Rob Buckingham, Head of RACE at the UK Atomic Energy Authority commented on the policy ThinkIn “Robotics includes both tools that are physically discrete from us and physical augmentation. In either case the interface between person and machine is going to be a field of rapid development driven at least in part by gaming and zooming.

The much bigger part is the informed discussion with people, with society, about the world we want to live in. Are we Canute (spoiler alert – it doesn’t end well) or are we the voice of sustainable democracy that values both people and nature?

I think Living Labs are going to sit at the heart of this… physical places where we explore the issues and opportunities together. In my field of nuclear, mock-ups have always been sensible because experimenting with the real thing is only allowed in exceptional circumstances. My hope is that we will invest in many Living Labs around the country that enable us, collectively, to explore the benefits and unintended consequences of our creativity. Of course, we might expect all of the Living Labs to be connected by data and the management of data; indeed we might expect common tech platforms and digital models of ‘nearly everything’ to be one of the highest value spin-offs.”

]]>
Women in Robotics Update: Robin Murphy, Ayanna Howard https://robohub.org/women-in-robotics-update-robin-murphy-ayanna-howard/ Sun, 13 Dec 2020 15:00:54 +0000 https://robohub.org/women-in-robotics-update-robin-murphy-ayanna-howard/

Introducing the fifth post in our new series of Women in Robotics Updates, featuring Robin Murphy and Ayanna Howard from our first “25 women in robotics you need to know about” lists in 2013 and 2014. These women give unstintingly of their time, creating robots that improve the quality of life, advancing research, inspiring and supporting students, and sharing their passion for engineering with the world.

Ayanna Howard says “I think as engineers we have an amazing power, where we can take people’s wishes and convert them into reality”

And Robin Murphy agrees, “My job is so incredibly fulfilling, it’s about the science and technology and the way it could be used for societal good, that’s a big deal to me,”

Robin Murphy

Raytheon Professor at Texas A&M University | Director of Humanitarian Robotics and AI Laboratory

Robin Murphy (featured in 2013) is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and director of the non-profit Humanitarian Robotics and AI Laboratory, formerly known as the Center for Robot-Assisted Search and Rescue (CRASAR). She is a distinguished disaster roboticist pioneering the advancement of AI and mobile robotics in unstructured and extreme environments. Through CRASAR, she has been deploying rescue robots since 9/11 in 2001 and has now participated in more than 30 disasters in five countries, including building collapses, earthquakes, floods, hurricanes, marine mass-casualty events, nuclear accidents, tsunamis, underground mine explosions, and volcanic eruptions. She has also developed and taught classes in robotics for emergency response and public safety for over 1,000 members of 30 agencies from seven countries.

Murphy was named one of the top ‘30 Most Innovative Women Professors Alive Today’ in 2016 and is an Engineering Faculty Fellow for Innovation in High-Impact Learning Experiences. She received the Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics in 2014. She has more than 400 publications and 17,000 citations, and four books: Introduction to AI Robotics, Disaster Robotics, and Robotics Through Science Fiction Vols. 1 and 2. In the Robotics Through Science Fiction books, Murphy explains what real robotics and AI can and can’t do alongside classic robot stories from Isaac Asimov, Brian Aldiss, Philip K. Dick and Vernor Vinge.

Murphy also chairs Robotics for Infectious Diseases, a non-profit organization that applies robotics and technology in over 20 countries for public health, public safety, and continuity of work and life. She has been tracking the wide range of roles robots have played during COVID-19, as described in her recent article in The Conversation, “Robots are playing many roles in the coronavirus crisis – and offering lessons for future disasters”.

“My job is so incredibly fulfilling. It’s about the science and technology and the way it could be used for societal good, that’s a big deal to me,” says Murphy in CNN’s Great Big Story in 2018. And Murphy’s 2015 TED talk “These robots come to the rescue after a disaster” has more than 1 million views.

Ayanna Howard

Professor and Chair of School of Interactive Computing at Georgia Tech | Director of the Human-Automation Systems Lab (HumAnS)

Ayanna Howard (featured 2014) is currently Chair of the School of Interactive Computing in the College of Computing at the Georgia Institute of Technology and serves on the Board of Directors of Autodesk. She is also the founder and director of the Human-Automation Systems Lab (HumAnS), where she and her team study and develop techniques to enhance the autonomous capabilities of intelligent systems in areas such as human-robot interaction, assistive robotics, education and robotics, robot learning, human-robot trust, and space and field robotics.

In 2021, Howard will take up the position of Dean of Engineering at The Ohio State University. Her contributions to robotics have been recognized many times since 2013! She received the A. Richard Newton Educator ABIE Award from the Anita Borg Institute in 2014. In 2015, she was listed in The Root 100, which honors prominent African American achievers, and recognized by Business Insider as one of the ‘23 most powerful women engineers in the world’. In 2016 she received the Computing Research Association’s A. Nico Habermann Award and the Brown Engineering Alumni Medal.

She was an AAAS-Lemelson Invention Ambassador from 2016 to 2017, a program designed to showcase the modern voices addressing the grand challenges facing humanity and to influence policymakers. She also received the Richard A. Tapia Achievement Award in 2018 for her contributions to bringing girls, underrepresented minorities, and people with disabilities into computing through programs related to robotics. In 2020, she received the Georgia Tech Outstanding Achievement in Research Innovation Award for the demonstrable and sustained societal impact of her work.

In 2020, Howard co-founded the Black in Robotics community organization and became the first Black woman to achieve IEEE RAS Fellow status, for her contributions to human-robot interaction systems. She has more than 350 papers and 4,000 citations, and recently published an audiobook, “Sex, Race, and Robots: How to Be Human in the Age of AI”, about how the tech world’s racial and sexual biases are infecting the next generation of robots and AI, with profoundly negative effects for humans of all genders and races.

“Robotics is me, it’s part of my life… I think as engineers we have an amazing power, where we can take people’s wishes and convert them into reality,” says Howard, who also founded Zyrobotics, where she has been developing assistive technologies for children with disabilities. “Robots can improve quality of life; humans inherently trust them,” says Howard in her TEDx talk, where she explains how humans develop emotional attachments to social or interactive robots.

Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Should robots be gendered? comments on Alan Winfield’s opinion piece https://robohub.org/should-robots-be-gendered-comments-on-alan-winfields-opinion-piece/ Thu, 10 Dec 2020 09:29:06 +0000 https://robohub.org/should-robots-be-gendered-comments-on-alan-winfields-opinion-piece/

The gendering of robots is something I’ve found fascinating since I first started building robots out of Lego with my brother. We all ascribe character to robots, consciously or not, even when we understand exactly how robots work. Until recently we’ve been able to write this off as science fiction, because real robots were boring industrial arms and anything else was fictional. However, since 2010, robots have been rolling out into the real world in a whole range of shapes, characters and, notably, stereotypes. My original research on the naming of robots gave some indications as to just how insidious this human tendency to anthropomorphize and gender robots really is. Now we’re starting to face the consequences, and it matters.

Firstly, let’s consider that many languages have gendered nouns, so there is a preliminary linguistic layer of labelling ahead of the naming of robots, which, if not defined, tends to happen informally. The founders of two different robot companies have told me that they know their robot has been accepted in a workplace once it’s been named by teammates, so they deliberately leave the robot unnamed. Other companies opt for a more nuanced brand name, such as Pepper or Relay, which can minimize gender stereotypes, but even then the effects persist.

Then, with robots, the physical appearance can’t be ignored and often aligns with ideas of gender. Next, there is the robot’s voice. Then there are other layers of operation, which can affect both a robot’s learning and its response. And finally, there is the robot’s task or occupation and its socio-cultural context.

Names are both informative and performative. We can usually ascribe a gender to a named object. Similarly, we can ascribe gender based on a robot’s appearance or voice, although it can differ in socio-cultural contexts.

Astro Boy original comic and Pepper from SoftBank Robotics

The robot Pepper was designed to be a childlike humanoid and, according to SoftBank Robotics, Pepper is gender neutral. But in general, I’ve found that people in the US tend to see Pepper as a female helper, while Asian people are more likely to see Pepper as a boy robot helper. This probably has something to do with the popularity of Astro Boy (Mighty Atom), which ran from 1952 to 1968.

One of the significant issues with gendering robots is that once embodied, individuals are unlikely to have the power to change the robot that they interact with. Even if they rename it, recostume it and change the voice, the residual gender markers will be pervasive, and ‘neutral’ will still elicit a gendered response in everybody.

This will have an impact on how we treat and trust robots. This also has much deeper social implications for all of us, not just those who interact with robots, as robots are recreating all of our existing gender biases. And once the literal die is cast and robots are rolling out of a factory, it will be very hard to subsequently change the robot body.

Interestingly, I’m noticing a transition from a default male style of robot (think of all the small humanoid fighting, dancing and soccer-playing robots) to a default female style of robot as the service robotics industry starts to grow. Even when the robot is simply a box shape on wheels, the use of voice can completely change our perception. Savioke, maker of one of the pioneering service robots, Relay, deliberately preselected a neutral name for their robot and avoided using a human voice completely. Relay makes sounds but doesn’t use words. Just like R2D2, Relay expresses character through beeps and boops. This was a conscious, and significant, design choice for Savioke. Their preliminary experimentation on human-robot interaction showed that robots that spoke were expected to answer questions and perform tasks at a higher level of competency than a robot that beeped.

Relay from Savioke delivering at Aloft Hotel

Not only did Savioke remove the cognitive dissonance of having a robot seem more human than it really is, but they removed some of the reiterative stereotyping that is starting to occur with less thoughtful robot deployments. The best practice for designing robots for real-world interaction is to minimize human expressivity and remove any gender markers (more about that next).

The concept of ‘marked’ and ‘unmarked’ arose in linguistics in the 1930s, but we’ve seen it play out in Natural Language Processing, search and deep learning repeatedly since then, perpetuating, reiterating and exaggerating the use of masculine terminology as the default, and feminine terminology used only in explicit (or marked) circumstances. Marked circumstances almost always relate to sexual characteristics or inferiority within power dynamics, rather than anything more interesting.

An example of unmarked or default terminology is the use of ‘man’ to describe people, while ‘woman’ describes only a subset of ‘man’. This is also commonly seen in the use of a female specifier on a profession, e.g. female police officer, female president, or female doctor. Otherwise, in spite of there being many female doctors, a search will return male examples, refer to female doctors as ‘he’, or miscategorize them as nurses. We are all familiar with those mistakes in real life, but we had developed social policies to reduce their frequency. Now AI and robotics are bringing the stereotypes back.

Ratio of masculine to feminine pronouns in U.S. books, 1900–2008

And so it happens that the ‘neutral’ physical appearance of robots is usually assumed to be male, rather than female, unless the robot has explicit female features. Sadly, female robots mean either a sexualized robot, or a robot performing a stereotypically female role. This is how people actually see and receive robots unless a company, like Savioke, consciously refrains from triggering our stereotypically gendered responses.

Gendered robots

I can vouch for the fact that searching for images using the term “female roboticists”, for example, always presents me with lots of men building female robots instead. It will take a concerted effort to change things. We robot builders have a tendency to give our robots character. And unless you happen to be a very good (and rich) robotics company, there is also no financial incentive to degender robots. Quite the opposite: there is financial pressure to take advantage of our inherent anthropomorphism and gender stereotypes.

In The Media Equation in 1996, Byron Reeves and Clifford Nass demonstrated how we all attributed character, including gender, to our computing machines, and that this then affected our thoughts and actions, even though most people consciously deny conflating a computer with a personality. This unconscious anthropomorphizing can be used to make us respond differently, so of course robot builders will increasingly utilize the effect as more robots enter society and competition increases.

Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are “polite” to computers; that they treat computers with female voices differently than “male” ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses.

The Media Equation

The history of voice assistants shows a sad trend. These days, they are all female, with the exception of IBM Watson, but then Watson occupies a different ecosystem niche. Watson is an expert. Watson is the doctor to the rest of our subservient, map-reading, shopping-list-keeping nurses. By default, unless you’re in an Arabic-speaking country, your voice assistant device will have a female voice. You have to go through quite a few steps to consciously change it, and there are very few options. In 2019, Q, a genderless voice assistant, was introduced; however, I can’t find it offered on any devices yet.

And while it may be possible to upload a different voice to a robot, there’s nothing we can do if the physical design of the robot evokes gender. Alan Winfield wrote a very good article “Should robots be gendered?” here on Robohub in 2016, in which he outlines three reasons that gendered robots are a bad idea, all stemming from the 4th of the EPSRC Principles of Robotics, that robots should be transparent in action, rather than capitalizing on the illusion of character, so as not to influence vulnerable people.

Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.

EPSRC Principles of Robotics

My biggest quibble with the EPSRC Principles is that they underestimate the size of the problem. By stating that vulnerable users are the young or the elderly, the principles imply that the rest of us are immune from emotional reactions to robots, whereas Reeves and Nass clearly show the opposite. We are all easily manipulated by our digital voice and robot assistants. And while Winfield recognizes that gender cues are powerful enough to elicit a response in everybody, he only sees the explicit gender markers, rather than understanding that unmarked or neutral-seeming robots also elicit a gendered response, as ‘not female’.

So Winfield’s first concern is emotional manipulation of vulnerable users (all of us!), his second concern is anthropomorphism inducing cognitive dissonance (overpromising and underdelivering), and his final concern is that all the negative stereotypes contributing to sexism will be reproduced and reiterated as normal through the introduction of gendered robots in stereotyped roles (it’s happening!). These are all valid concerns, and yet while we’re just waking up to the problem, the service robot industry is growing by more than 30% per annum.

While the growth of the industrial robotics segment is comparatively predictable, the world’s most trusted robotics statistics body, the International Federation of Robotics (IFR), has consistently underestimated the growth of the service robotics industry. In 2016, the IFR predicted 10% growth for professional service robotics over the next few years from $4.6 billion, but by 2018 it was recording 61% growth to $12.6 billion, and by 2020 the IFR had recorded 85% overall growth, expecting revenue from service robotics to hit $37 billion by 2021.

It’s unlikely that we’ll recall robots, once designed, built and deployed, for anything other than a physical safety issue. And the gendering of robots isn’t something we can roll out a software update to fix. We need to start asking companies not to deploy robots that reinforce gender stereotyping. They can still be cute and lovable; I’m not opposed to the R2D2 robot stereotype!

Consumers are starting to fight back against the gender stereotyping of toys, which really only started in the 20th century as a way to extract more money from parents, and some brands are realizing that there’s an opportunity for them in developing gender-neutral toys. Recent research from the Pew Research Center found that overall 64% of US adults wanted boys to play with toys associated with girls, and 76% of US adults wanted girls to play with toys associated with boys. The difference between girls and boys can be explained because girls’ role playing (caring and nurturing) is still seen more negatively than boys’ roles (fighting and leadership). But the overall figures show that society has developed a real desire to avoid gender stereotyping completely.

Sadly, it’s like knowing sugar is bad for us, while it still tastes sweet.

In 2016, I debated Ben Goertzel, maker of Sophia the Robot, on the main stage of the Web Summit on whether humanoid robots were good or bad. I believe I made the better case in terms of argument, but ultimately the crowd sided with Goertzel, and by default with Sophia. (there are a couple of descriptions of the debate referenced below).

Robots are still bright shiny new toys to us. When are we going to realize that we’ve already opened the box and played this game, and women, or any underrepresented group, or any stereotyped role, is going to be the loser. No, we’re all going to lose! Because we don’t want these stereotypes any more, and robots are just going to reinforce the stereotypes that we already know we don’t want.

And did I mention how white all the robots are? Yes, they are racially stereotyped too. (See Ayanna Howard’s new book “Sex, Race and Robots: How to Be Human in the Age of AI”.)

References:

]]>
Women in Robotics Update: Maja Mataric, Arianna Menciassi https://robohub.org/women-in-robotics-update-maja-mataric-arianna-menciassi/ Sun, 06 Dec 2020 17:57:43 +0000 https://robohub.org/women-in-robotics-update-maja-mataric-arianna-menciassi/

Introducing the second of our new series of Women in Robotics Updates, featuring Maja Mataric and Arianna Menciassi from our first “25 women in robotics you need to know about” list in 2013. Since we started, Women in Robotics has focused on positive role models in robotics, highlighting women’s career work, but we’d like to point out just how much energy these amazing women devote to outreach, to inspiring and supporting their junior colleagues, and to science management, supporting and advancing the increasingly complex machinery of research.

For example, Arianna Menciassi has held many editorial and technical committee roles, and manages both European and extra-European research projects, which involves extensive collaboration efforts. And Maja Mataric started the US Women in Robotics Research Database, which inspired similar initiatives in Canada, with the goal that you should always be able to find a female robotics researcher for interviews, positions, panels and conferences.

Maja Mataric

Interim Vice President at University of Southern California | Founder of Embodied Inc

Maja Matarić (featured 2013) is now Interim Vice President of Research at the University of Southern California (USC) and the founding director of the USC Center for Robotics and Embedded Systems. She is a pioneer of socially assistive robotics (SAR), which focuses on developing robots that provide therapies and care through social interaction, especially for special-needs populations; her Interaction Lab has worked with children with autism, stroke patients, elderly users with Alzheimer’s, and many others.

Matarić received the Distinguished Professor award at USC in 2019. She became a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 2017 and in 2015 she was listed in Top 100 Inspiring Women in STEM by Insight into Diversity. Matarić was also a recipient of the Anita Borg Institute Women of Vision Award in Innovation in 2013 and the Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring (PAESMEM) in 2011. She has more than 650 publications and 39000 citations and is very passionate about mentoring and empowering students and communicating the excitement of interdisciplinary research and careers in STEM to a diverse audience including K-12 students and teachers, women, and other underrepresented groups in engineering.

In 2016, Matarić founded Embodied Inc, which in 2020 launched Moxie, a socially assistive robot for child development that provides “play-based learning that is paced to weekly themes and missions with content designed to promote social, emotional, and cognitive learning”.

As Liz Ohanion at KCET said, “Maja Mataric is a robotics powerhouse and, when she’s not inspiring the next generation of engineers, she’s working on a series of robots that could very well change the lives of the people who use them.”

Arianna Menciassi

Full professor at Scuola Superiore Sant’Anna

Arianna Menciassi (featured in 2013) is now a full professor in Biomedical Engineering at Scuola Superiore Sant’Anna (SSSA). She is also a team leader of the “Surgical Robotics & Allied Technologies” Area at The BioRobotics Institute in SSSA where she has been advancing intelligent devices that permit medical or surgical procedures to be performed in a minimally invasive regime, and in an increasingly reliable, reproducible and safe way.

As Menciassi says in her interview at Autonomous Robotic Surgery :

“I am looking for solutions for giving the best care to the patients, not only using scissors and knife but also using energy, for example ultrasound, focused ultrasound. When you take a pill, this is a sort of autonomous treatment; this is not an autonomous robot, but it is an autonomous treatment.”

Menciassi received The Women Innovation Award, for female scientists in biomedical robotics, from WomenTech in 2017. Her SupCam project, a cost-effective and minimally invasive endoscopic device, was recognized as a Special Electronic Design with the Compasso d’Oro by ADI, Associazione per il Disegno Industriale (Golden Compass, Association for Industrial Design), in 2016. Also, the FUTURA project, a novel robotic platform for Focused Ultrasound Surgery (FUS) in clinics coordinated by Menciassi, received the Technology Award from the Society for Medical Innovation and Technology (SMIT) in 2015.

In an already prolific career, Menciassi has more than 19000 citations, 650 publications, 7 book chapters and almost 50 patents in her name and has been constantly improving the fields of surgical and biomedical robotics. Her vision for the future is strong, “Maybe in 30 years all drugs will be more robotic and let’s say autonomous because they will be able to reach some specific areas of the human body to treat cells or to treat a disease.”

Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics Update: Girls Of Steel https://robohub.org/women-in-robotics-update-girls-of-steel/ Mon, 30 Nov 2020 19:27:57 +0000 https://robohub.org/women-in-robotics-update-girls-of-steel/

Girls of Steel Robotics (featured 2014) was founded in 2010 at Carnegie Mellon University’s Field Robotics Center as FRC Team 3504. The organization now serves multiple FIRST robotics teams, offering STEM opportunities for people of all ages.

Since 2019, Girls of Steel also organizes FIRST Ladies, an online community for anyone involved in FIRST robotics programs who supports girls and women in STEM. Their mission statement reflects their commitment to empowering everyone for success in STEM: “Girls of Steel empowers everyone, especially women and girls, to believe they are capable of success in STEM.”

Girls of Steel celebrated their 10th year in FIRST robotics with a Virtual Gala in May 2020 featuring a panel of four Girls of Steel alumni showcasing a range of STEM opportunities. One is a PhD student in Robotics at CMU, two are working as engineers, and one is a computer science teacher. Girls of Steel are extremely proud of their alumni, of whom 80% are studying or working in STEM fields.

In August 2020, Girls of Steel successfully organized 3 weeks of virtual summer camps and were also able to run 4 teams in a virtual FIRST LEGO League program from September 2020. Girls of Steel also restructured their FIRST team and launched two new sub-teams: Advocacy, and Diversity, Equity, and Inclusion (DEI), focusing on continuing their efforts to advocate for after-school STEM programs, and on creating an inclusive environment that welcomes all Girls of Steel members. The DEI sub-team manages a suggestion box where members can anonymously post ideas for team improvements.


In 2016, Robohub published a follow up on the Girls of Steel and their achievements.

In 2017, Girls of Steel won the 2017 Engineering Inspiration award (Greater Pittsburgh Regional), which “celebrates outstanding success in advancing respect and appreciation for engineering within a team’s school and community.”

In 2018, Girls of Steel won the 2018 Regional Chairman’s Award (Greater Pittsburgh Regional), the most prestigious award at FIRST, which honors the team that best represents a model for other teams to emulate and best embodies the purpose and goals of FIRST.

In 2019, Girls of Steel won the 2019 Gracious Professionalism Award (Greater Pittsburgh Regional), which celebrates the outstanding demonstration of FIRST Core Values such as continuous Gracious Professionalism and working together both on and off the playing field.

And in 2020, Girls of Steel members, Anna N. and Norah O., received 2020 Dean’s List Finalist Awards (Greater Pittsburgh Regional) which reflects their ability to lead their teams and communities to increased awareness for FIRST and its mission while achieving personal technical expertise and accomplishment.

Clearly, all the Girls of Steel over the last ten years are winners. Many women in robotics today point to an early experience in a robotics competition as the turning point when they decided that STEM, particularly robotics, was going to be in their future. We want to thank all the Girls of Steel for being such great role models, and sharing the joy and fun of building robots with other girls/women. It’s working! (And it’s worth it!)


Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics Update: Elizabeth Croft, Helen Greiner, Heather Knight https://robohub.org/women-in-robotics-update-elizabeth-croft-helen-greiner-heather-knight/ Sun, 22 Nov 2020 21:06:25 +0000 https://robohub.org/women-in-robotics-update-elizabeth-croft-helen-greiner-heather-knight/
“Fearless Girl should provide the spark to inspire more female engineers”, writes Elizabeth Croft, Dean of the Faculty of Engineering at Monash University.

“Girls are natural engineers, highly capable in maths and physics. We need to show them that these tools can be used to design a better world….So far, we’ve done a poor job of communicating to girls the very powerful impact they can make through an engineering career.” Croft continues, providing us with the inspiration to introduce the second of our new series of Women in Robotics Updates, featuring Elizabeth Croft, Helen Greiner and Heather Knight from our first “25 women in robotics you need to know about” list in 2013.

Elizabeth Croft

Dean of the Faculty of Engineering at Monash University

Elizabeth Croft (featured 2013) is now Dean of the Faculty of Engineering at Monash University in Australia, advancing human-robot interaction, industrial robotics, trajectory generation and diversity in STEM. Previously she was the Founding Director of the Collaborative Advanced Robotics and Intelligent Systems Laboratory at the University of British Columbia (UBC). As the Marshall Bauder Professor in Engineering Economics, Business and Management Training from 2015-2017, she launched the Master of Engineering Leadership degrees at UBC.

Recognized as one of the 100 most powerful women in Canada in 2014 by the Women’s Executive Network, Croft also received the RA McLachlan Peak Career Award for Professional Engineering in the Province of British Columbia, Canada in 2018. She is a fellow of the ASME, Engineers Australia, Engineers Canada, and the Canadian Academy of Engineering. She is also the recipient of other awards such as the Wendy MacDonald Award (Diversity Champion) from the Vancouver Board of Trade in 2016, and the Just Desserts Award from the University of British Columbia Alma Mater Society in 2015. She has more than 200 research publications and almost 6000 citations.

She is an advocate for women in Engineering and has an exceptional record of propelling women’s representation and participation in engineering. As the Natural Sciences and Engineering Research Council Chair for Women in Science and Engineering (2010-2015), she worked with partners in funding agencies, industry, academe, and the education system on comprehensive strategies to improve women’s participation and retention in the STEM disciplines at all levels. During this period Croft successfully increased female enrollment in Engineering to 30%.

Helen Greiner

CEO, Cofounder and Founder of Tertill, iRobot and Cyphy Works

Helen Greiner (featured in 2013) is now the founder of Tertill, in addition to having founded iRobot and CyPhy Works (aka Aria Insights). Tertill is a solar-powered, weed-snipping robot for home gardens that patrols throughout the day and looks somewhat like an outdoor Roomba, the iRobot product that became the world’s first commercially successful consumer robot.

Greiner has received numerous awards and accolades, including being named an “Innovator for the Next Century” by Technology Review Magazine. She received the DEMO God Award at the DEMO Conference in 2014 and was named a Presidential Ambassador for Global Leadership (PAGE) by US President Barack Obama and US Secretary of Commerce Penny Pritzker. She was recognized for leadership in the design, development, and application of practical robots by the National Academy of Engineering, and was named “Woman of the Year” at Wentworth Institute of Technology in 2018. In 2018, she was also sworn in as a Highly Qualified Expert for the US Army. You can hear her speak in 2021 at the finals of the $2m GoFly competition.

As a child, Greiner became fascinated by the robots of Star Wars, particularly the three-foot-tall spunky R2D2. Says Greiner: “He had moods, emotions, and dare I say, his own agenda. This was exciting to me—he was a creature, an artificial creature.” Having consistently pioneered robots that help perform dull, dirty and dangerous jobs, and having launched robotics into the consumer market, she says, “If we don’t take robots to the next level, we’ll have a lot of explaining to do to our grandchildren.”

Heather Knight

Assistant Professor at Oregon State University

Heather Knight (featured 2013) is now an Assistant Professor in the Computer Science department at Oregon State University and directs the CHARISMA Research Lab. In the CHARISMA Research Lab, she operationalizes methods from the performing arts to make more emotive and engaging robots, exploring minimal social robots, robot ethics, charismatic machines, and multi-robot/multi-human social interaction.

Knight presented a TED talk, “Silicon-Based Comedy”, in 2010, where she demonstrated a robot stand-up comedian, “Data”; the talk has gotten almost 1 million views. She was named in Forbes’ 30 Under 30 in Science and as one of AdWeek’s top 100 creatives in 2017. In 2017, she was also a Robotic Artist in Residence at X, the Moonshot Factory. Her installations have been featured at the Smithsonian-Cooper Hewitt Design Museum, TED, Pop! Tech, LACMA, SIGGRAPH, and the Fortezza da Basso in Florence, Italy. She is also the Assistant Director of Robotics at Humanity+, a fellow at the Hybrid Realities Institute, and a National Science Foundation (NSF) Fellow.

She is also a founder of Marilyn Monrobot, a robot theater company performing comedy, dance and even Rube Goldberg Machine installations. Here, she successfully organizes the annual ‘Robot Film Festival’ which awards Botskers to various robot films and robot film stars. The film archives make for great viewing.


Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics Update: Sarah Bergbreiter, Aude Billard, Cynthia Breazeal https://robohub.org/women-in-robotics-update-sarah-bergbreiter-aude-billard-cynthia-breazeal/ Mon, 16 Nov 2020 12:00:14 +0000 https://robohub.org/women-in-robotics-update-sarah-bergbreiter-aude-billard-cynthia-breazeal/

In spite of the amazing contributions of women in the field of robotics, it’s still possible to attend robotics conferences or see panels that don’t have a single female face. Let alone seeing people of color represented! Civil rights activist Marian Wright Edelman said that “You can’t be what you don’t see”. Women in Robotics was formed to show that there were wonderful female role models in robotics, as well as providing an online professional network for women working in robotics and women who’d like to work in robotics. We’re facing an incredible skill shortage in the rapidly growing robotics industry, so we’d like to attract newcomers from other industries, as well as inspiring the next generation of girls. Introducing the first of our new series of Women in Robotics Updates, featuring Sarah Bergbreiter, Aude Billard and Cynthia Breazeal from our first “25 women in robotics you need to know about” list in 2013.

Sarah Bergbreiter

Professor in Mechanical Engineering and Principal Investigator at the Microrobotics Lab at Carnegie Mellon University

Sarah Bergbreiter (featured 2013), previously an assistant professor at the University of Maryland and acting director of the Maryland Robotics Center, has now moved to Carnegie Mellon University (CMU) as a full professor, expanding the frontiers of knowledge pertaining to the actuation, sensing, power, and computational aspects of making tiny robots at the Microrobotics Lab at CMU.

She was made a 2019 Fellow in the American Society of Mechanical Engineers (ASME) for her significant and critical engineering achievements, active practice, and membership in the organization. She was the winner of the Institute of System Research’s Outstanding Faculty Award in 2017 and received the Defense University Research Instrumentation Program (DURIP) awards from the U.S. Army in 2019. She also made InStyle’s ’50 Badass Women’ list in 2019.

Inspired by Star Wars, the professor of mechanical engineering at Carnegie Mellon made her first foray into robotics at age 7 or 8. “I tried to build a robot to clean my room,” she recalls, laughing. Now she has loftier goals. Her robots, which can be smaller than an ant and up to the size of a Tic Tac, may eventually be used for microsurgery, search and rescue, and safety inspections for hard-to-reach spaces, like inside jet engines. She doesn’t envision a dystopian world where robots replace humans, however: “You want robots to complement humans.”

She has more than 100 publications with almost 1500 citations and her 2014 TED talk about microrobotics has been viewed 1.68 million times. She specializes in micro/nanorobots and has brought impressive capabilities in millimeter-sized jumpers which can overcome obstacles 80x their height. She collaborates with experts from biology, neuroscience, dynamics and other fields to build agile robots with mechanosensors.

Aude Billard

Professor and Director of the Learning Algorithms and Systems Laboratory at EPFL

Aude Billard (featured 2013) is now a full professor at École Polytechnique Fédérale de Lausanne (EPFL) at the Learning Algorithms and Systems Laboratory (LASA), teaching robots to perform skills with the level of dexterity displayed by humans in similar tasks. These robots move seamlessly with smooth motions. They adapt adequately and on-the-fly to the presence of obstacles and to sudden perturbations, hence mimicking humans’ immediate response when facing unexpected and dangerous situations.

Billard has been nominated for Outstanding Women in Academia by the Swiss National Science Foundation, where she is a member of the Scientific Research Council, and was also nominated to the Swiss Academy of Engineering Sciences. She is currently the vice president for publication activities of the IEEE Robotics and Automation Society, an associate editor of the International Journal of Social Robotics, elected president of the EPFL Teaching Body Assembly, and elected president of the EPFL Teachers’ Council. In 2017, Billard received a European Research Council Advanced Grant for Skill Acquisition in Humans and Robots.

She is also cofounder of AICA, a young start-up from EPFL, active in the domain of artificial intelligence and robotics, which provides novel software for creating safe and flexible installations of industrial robots, with a modular approach. She specializes in building robots that can interact with, learn from, and help humans. She has also been studying the neural and cognitive processes underpinning imitation learning in humans. She has over 500 publications and more than 18000 citations, and you can watch her plenary talk at AAAI 2020 on ‘Combining Machine Learning and Control for Reactive Robots’.

Cynthia Breazeal

Professor and Associate Director at MIT Media Lab | Founder and Director of the Personal Robots Group | Founder, Chief Scientist and Chief Experience Officer at Jibo

Cynthia Breazeal (featured 2013) is currently a professor at the MIT Media Lab where she founded and directs the Personal Robots Group. She is also Associate Director of the Media Lab in charge of new strategic initiatives and spearheads MIT’s K-12 education initiative on AI. She is a leading expert in designing personal robots that naturally interact with humans and specializes in balancing AI, UX design, and understanding the psychology of engagement to design personified AI technologies that promote human flourishing and personal growth.

Breazeal was recently elected as a Fellow of the AAAI Association for the Advancement of Artificial Intelligence for significant sustained contributions. She has more than 350 publications and 23000 citations, and has spoken at prominent venues such as TED, the World Economic Forum, the UN, SXSW, CES. She was recognized as a Finalist in the National Design Awards. In 2014, she received the George R. Stibitz Computer & Communications Pioneer Award for seminal contributions to the development of Social Robotics and Human Robot Interaction.

Breazeal has also been recognized for her entrepreneurship. She is the Founder and Chief Scientist of Jibo, the pioneering crowdfunded social robot featured on the cover of TIME magazine’s 25 Best Inventions of 2017. Her journey with Jibo isn’t over. NTT Disruption is relaunching the robot as an enterprise product in healthcare and education. Breazeal shared her experiences in a recent IROS 2020 plenary ‘Living with Social Robots: from Research to Commercialization and Back’. Make sure you watch the extra feature ‘Jibo Succeeded by Failing’ which includes the classic goodbye. We can’t wait to see the hello.


Want to keep reading? There are 180 more stories on our 2013 to 2020 lists. Why not nominate someone for inclusion next year!

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Women in Robotics panel celebrating Ada Lovelace Day https://robohub.org/women-in-robotics-panel-celebrating-ada-lovelace-day/ Wed, 21 Oct 2020 21:25:56 +0000 https://robohub.org/women-in-robotics-panel-celebrating-ada-lovelace-day/

We’d like to share the video from our 2020 Ada Lovelace Day celebration of Women in Robotics. The speakers were all on this year’s list, last year’s list, or nominated for next year’s list! They present a range of cutting-edge robotics research and commercial products. They are also all representatives of the new organization Black in Robotics, which makes this video doubly powerful. Please enjoy the impactful work of:

Dr Ayanna Howard – Chair of Interactive Computing, Georgia Tech

Dr Carlotta Berry – Professor Electrical and Computer Engineering at Rose-Hulman Institute of Technology

Angelique Taylor – PhD student in Health Robotics at UCSD and Research Intern at Facebook

Dr Ariel Anders – roboticist and first technical hire at Robust.AI

Moderated by Jasmine Lawrence – Product Manager at X the Moonshot Factory

Follow them on twitter at @robotsmarts @DRCABerry @Lique_Taylor @Ariel_Anders @EDENsJasmine

Some of the takeaways from the talk were collected by Jasmine Lawrence at the end of the discussion, and include the encouragement that you’re never too old to start working in robotics. While some of the panelists knew from an early age that robotics was their passion, for others it was a discovery later in life, particularly as robotics has a fairly small academic footprint compared to its impact in the world.

We also learned that Dr Ayanna Howard has a book available, “Sex, Race and Robots: How to Be Human in the Age of AI”.

Another insight from the panel was that as the only woman in the room, and often the only person of color too, the pressure was on to be mindful of the impact on communities of new technologies, and to represent a diversity of viewpoints. This knowledge has contributed to these amazing women focusing on robotics projects with significant social impact.

And finally, that contrary to popular opinion, girls and women could be just as competitive as male counterparts and really enjoy the experience of robotics competitions, as long as they were treated with respect. That means letting them build and program, not just manage social media.

You can sign up for Women in Robotics online community here, or the newsletter here. And please enjoy the stories of 2020’s “30 women in robotics you need to know about” as well as reading the previous years’ lists!

]]>
30 women in robotics you need to know about – 2020 https://robohub.org/30-women-in-robotics-you-need-to-know-about-2020/ Tue, 13 Oct 2020 07:00:25 +0000 https://robohub.org/30-women-in-robotics-you-need-to-know-about-2020/

It’s Ada Lovelace Day and once again we’re delighted to introduce you to “30 women in robotics you need to know about”! From 13-year-old Avye Couloute to Bala Krishnamurthy, who worked alongside the ‘Father of Robotics’ Joseph Engelberger in the 1970s & 1980s, these women showcase a wide range of roles in robotics. We hope these short bios will provide a world of inspiration, in our eighth Women in Robotics list!

In 2020, we showcase women in robotics in China, Japan, Malaysia, Israel, Australia, Canada, United States, United Kingdom, Switzerland, Norway, Spain, The Netherlands, India and Iran. There are researchers, industry leaders, and artists. Some women are at the start of their careers, while others have literally written the book, the program or the standards.

CAPE CANAVERAL, Fla. – Members of the Kennedy Space Center government-industry team rise from their consoles within the Launch Control Center to watch the Apollo 11 liftoff through a window. Photo credit: NASA


We publish this list because the lack of visibility of women in robotics leads to the unconscious perception that women aren’t making newsworthy contributions. We encourage you to use our lists to help find women for keynotes, panels, interviews etc. Sadly, the daily experience of most women in robotics still looks like this famous NASA control room shot from the 1969 Apollo 11 moon landing, with one solitary woman in the team. It has taken sixty years for the trailblazers like Joann Morgan, Katherine G. Johnson, Dorothy Vaughan, Mary Jackson and Poppy Northcutt to become well known. And finally now we have a woman, Charlie Blackwell-Thompson, serving as Launch Director for the upcoming Artemis Mission, and another, Gwynne Shotwell, serving as President and COO of SpaceX.

In celebration of Women’s History Month, the “Women of Launch Control” working in Exploration Ground Systems take time out of their Artemis I launch planning to pose for a photo in Firing Room 1 of the Launch Control Center at NASA’s Kennedy Space Center in Florida on March 4, 2020. Artemis I will be the first integrated flight test of the Orion spacecraft and Space Launch System rocket, the system that will ultimately land the first woman and the next man on the Moon. Photo credit: NASA/Glenn Benson

In 2019, women still accounted for less than a quarter (23.6%) of those working in natural and applied sciences and related occupations. In these occupations, women earned, on average, $0.76 for every $1.00 earned by men in annual wages, salaries, and commissions in 2018. [ref Catalyst.org]

This issue is even more pervasive and devastating if you are a person of color. We have always strived to showcase a wide range of origins and ethnicities in our annual list, and this year, as well as four African American roboticists, our list includes the first African American female CEO of a company valued at over $1 billion USD. This is just a small step forward, but we’re pleased to announce the recent launch of the Black in Robotics organization, as well as greater recognition of the citation problem.

The citation problem is expected to significantly disadvantage women and people of color: the historical lack of women in science is compounded by the recent growth of large scientific teams, multiplying the exclusion. For example, Nature recently published a paper on the impact of NumPy, a significant scientific resource. NumPy was originally developed by many contributors, but the authoritative citation is likely to belong to this description paper, which has 26 authors, all male. [ref Space Australia]

On a positive note, many individuals and organizations are intentionally trying to reverse this bias. For example, Tulane University just published a guide to help you calculate how much of your reading list includes female authors, and a citation guide similar to the CiteHer campaign from BlackComputeHER.org. And as Bram Vanderborght, editor of IEEE Robotics and Automation Magazine, pointed out in the March 2020 issue, “Scientists are starting to consider how gender biases materialize in physical robots. The danger is that robot makers, consciously or not, may reinforce gender stereotypes and inadvertently create even greater deterrents for young, underrepresented people interested in joining our field.”

We hope you are inspired by these profiles, and if you want to work in robotics too, please join us at Women in Robotics. We are now a 501(c)(3) non-profit organization, but even so, this post wouldn’t be possible if not for the hard work of volunteers: Andra Keay, Hallie Siegel, Sabine Hauert, Sunita Pokharel, Ioannis Erripis, Ron Thalanki and Daniel Carrillo Zapata.

Fatemeh Pahlevan Aghababa

CTO and co-founder at Intelize | Technical Committee Head of International Robocup Federation | Technical Lead at Farakav co.

Fatemeh Pahlevan Aghababa is CTO and co-founder of Intelize, a startup providing AI-powered solutions such as Ribona, whose mission is to make a difference in Industry 4.0 via novel technologies. She is an active researcher in intelligent systems and cognitive robotics, and received the Silvia Coradeschi RoboCup Award for “the young female scientists with distinguishing research in AI and robotics”. She is the Technical Committee Head of the International RoboCup Federation, and has won the RoboCup World Championship for two consecutive years in the Rescue Simulation League. Pahlevan was named among the Rising Stars in VentureBeat’s Women in AI Awards.

Ariel Anders

Roboticist – Robust.AI

Ariel Anders is a black feminist roboticist who enjoys spending time with her family and artistic self-expression. Anders is the first roboticist hired at Robust.AI, an early stage robotics startup building the world’s first industrial grade cognitive engine. Anders received a BS in Computer Engineering from UC Santa Cruz and her Doctorate in Computer Science from MIT, where she taught project-based collaborative robotics courses, developed an iOS app for people with vision impairment, and received a grant to install therapy lamps across campus. Her research focused on reliable robotic manipulation with the vision of enabling household helpers.

Carlotta Berry

Professor of Electrical and Computer Engineering – Rose-Hulman Institute of Technology

Carlotta Berry, professor at Rose-Hulman Institute of Technology, helped found and co-directs the Rose Building Undergraduate Diversity (ROSE-BUD) program, with the goal of recruitment, retention and professional development of historically marginalized and minoritized populations in computer science and electrical and computer engineering. Berry also worked with faculty to create Rose-Hulman’s first multidisciplinary minor in robotics and has judged and organized FIRST Robotics competitions. Berry’s research is focused on educational mobile robotics, human-robot interaction and interfaces. Berry has also mentored and organized robotics programs for Girl Scout groups and visited elementary, middle, and high schools to motivate students’ interest in STEM.

Avye Couloute

Gen Arm 2Z Ambassador/ Founder of Girls into Coding

Avye Couloute is a multitalented 13-year-old, with a passion for coding from the age of 7. Avye founded Girls into Coding, a volunteer event series empowering girls with the tools and skills to get into the tech industry. She has facilitated online and offline robotics workshops, as well as raising money to provide textbooks and computer starter kits. Avye, a Gen Arm 2Z Ambassador, is also the recipient of The Diana Award’s Legacy Award and the winner of the 2020 FDM Everywoman in Technology Awards.

Yael Edan

Professor, Head of ABC Robotics Lab – Ben-Gurion University of the Negev (BGU)

Yael Edan, professor at Ben-Gurion University of the Negev, is Head of the Agricultural, Biological, and Cognitive Robotics Initiative. Her research is specifically focused on agricultural robotics, human-robot cooperation, systems engineering of robotics systems and adaptive sensor selection and fusion. Edan has made major contributions in the introduction and application of intelligent automation and robotic systems to the field of agriculture, with several patents. Edan has been involved in many international projects, is a member of IEEE RAS and a member of ASABE, where she was the Chair of the Flexible Automation and Robotics/Mechatronics and BioRobotics committees.

Autumn Edwards

Professor – Western Michigan University

Autumn Edwards researches how ontological considerations, or people’s beliefs about the nature of communicators and of communication, both shape and are shaped by their interactions with social robots, as an emergent class of digital interlocutor. She is co-founder/co-director of the Communication and Social Robotics Labs (combotlab.org), Professor of Communication at Western Michigan University, and founding Editor-in-Chief of the journal Human-Machine Communication. She inspires passionate support from students and fellow staff, receiving the University Distinguished Teaching Award (2014), the Kim Giffin Research Award from the University of Kansas and was designated a Claude Kantner Research Fellow at Ohio University.

Marwa ElDiwiny

Researcher – University of Twente

Marwa ElDiwiny is a PhD researcher at Vrije Universiteit Brussel whose current research focus is on modelling and simulating self-healing soft materials for industrial applications. Her master’s thesis was on UAV anti-stealth technology for safe operation. She has worked as a research engineer at Inria Lille-Nord Europe, a research scholar at the Tartu Institute of Technology, and a lecturer in the Mechatronics and Industrial Robotics Program at Minia University, Egypt. ElDiwiny hosts the IEEE RAS Soft Robotics Podcast, where researchers from both academia and industry are interviewed about the soft robotics field, with over 94 episodes available.

Aicha Evans

CEO – Zoox

Aicha Evans is the CEO of Zoox, the company developing purpose-built, zero-emission autonomous vehicles with an end-to-end autonomy software stack. Evans is a former senior vice president and chief strategy officer at Intel Corporation, where she drove Intel’s long-term strategy to transform from a PC-centric company to a data-centric company. Before Intel, Evans spent 10 years in various engineering management positions at Rockwell Semiconductors, Conexant, and Skyworks. Evans is also on the Supervisory Board of SAP and is the first African American female CEO of a billion-dollar company.

Elena Garcia Armada

Co-Founder – Marsi Bionics. Senior Researcher – Centre for Automation and Robotics

Elena Garcia Armada is a tenured researcher at the Spanish National Research Council and co-founder of Marsi Bionics. She is an expert in pediatric exoskeletons optimized for active gait rehabilitation of children suffering from neuromuscular diseases. She has won over 30 awards, is a member of the Innovation and Knowledge Transfer Working Group of the Ministry of Science, Innovation and Universities, a member of the Industrial Activities Board of IEEE Robotics and Automation Society, and on the jury of the Princess of Asturias Scientific and Technical Research Awards. She is passionate about helping children walk again.

Keiko Homma

Senior Researcher Service Robotics Research Team, Robot Innovation Research Center – AIST

Keiko Homma is a senior researcher at AIST in the Assistive Robotics Research Team, Human Augmentation Research Center. She received B.Sc. and Ph.D. degrees in Engineering from the University of Tokyo, and was a visiting researcher at Aalto University. Her current research interests include assistive robot systems, particularly their safety aspects, and she has developed a risk assessment tool for assistive robots and test dummies for exoskeleton-type physical assistant robots. She is also an IEEE RAS Distinguished Lecturer.

Michelle J. Johnson

Associate Professor & Director of Rehabilitation Robotics Lab – University of Pennsylvania

Michelle J. Johnson is an associate professor of the departments of physical medicine and rehabilitation, and of bioengineering at University of Pennsylvania, and directs the Rehabilitation Robotic Research and Design Laboratory in the Pennsylvania Institute of Rehabilitation Medicine which is affiliated with the General Robotics, Automation, Sensing & Perception (GRASP) Lab. Johnson’s PhD is in Mechanical Engineering from Stanford University. Her research specialization is in the design, development, and therapeutic use of novel, affordable, intelligent robotic assistants to quantify upper limb motor function in adults and children at risk of or with brain injury. She is also a US Fulbright Scholar.

Hadas Kress-Gazit

Associate Professor – Sibley School of Mechanical and Aerospace Engineering, and ICRA2022 Program Chair

Hadas Kress-Gazit is an Associate Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Her research focuses on formal methods for robotics and automation and more specifically on synthesis for robotics – automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems including modular robots, soft robots and swarms, and synthesizes ideas from robotics, formal methods, control, hybrid systems and human-robot interaction. She has received multiple recognitions and awards for her research, her teaching and her advocacy for groups traditionally underrepresented in robotics.

Bala Krishnamurthy

CEO – Mobile Software Inc

Bala Krishnamurthy is a pioneer in robotics, from her early days at Unimation to her current role as CEO of Mobile Software. Krishnamurthy has over 40 years of experience in researching, designing and developing embedded real-time systems and software. She adapted the VAL language designed for PUMA robots to the hydraulic Unimate robots, then led the team at Engelberger’s HelpMate Robotics developing the software that allowed autonomous courier robots to navigate throughout hospitals. Krishnamurthy was also a member of NASA’s Office of Exploration Systems research proposal review panel for Human and Robotic Technology and authored dozens of technical articles.

Sam MacDonald

President – Deep Trekker

Sam MacDonald is President and co-founder of Deep Trekker, a Canadian maker of ROVs (Remotely Operated Vehicles) for economical underwater robotic exploration. After working in automation and in marketing at BlackBerry, MacDonald founded her own marketing leadership company. She then started Deep Trekker in 2009, taking it from a garage to a global company. Today, Deep Trekker has expanded into a host of verticals, including aquaculture, commercial diving, salvage, military, first responders, oil and gas, energy, marine surveying, wastewater, infrastructure and recreation, and sells products in 80 countries.

Karon MacLean

Professor – University of British Columbia, Director – SPIN The Sensory Perception and Interaction Research Group

Karon MacLean is a Professor in Computer Science at UBC, with degrees in Biology and Mechanical Engineering and time spent as a professional robotics engineer and haptics / interaction researcher at Interval Research, Palo Alto. MacLean’s research specializes in haptic interaction: cognitive, sensory and affective design for people interacting with the computation we touch, emote and move with and learn from, from robots to touchscreens and the situated environment. MacLean leads UBC’s Designing for People interdisciplinary research cluster and is Special Advisor, Innovation, and Knowledge Mobilization to UBC’s Faculty of Science.

Chris Macnab

Associate Professor – University of Calgary

Chris Macnab is an associate professor at the University of Calgary. She earned a BEng in Engineering Physics from the Royal Military College of Canada and a PhD from the University of Toronto Institute for Aerospace Studies, where she investigated control of flexible-joint space robots using neural networks. Macnab has worked as a control systems designer for Dynacon Enterprises, as part of a team that developed and tested a control system for an antenna on the space station. She has also worked for CRS Robotics, programming robot movements. Macnab’s specialization is in control systems, motion control, and robotics.

Shauna McIntyre

CEO – Sense Photonics

Shauna McIntyre is CEO at Sense Photonics, developer of high-performance, scalable 3D vision systems with offices in San Francisco, North Carolina and the UK. She is an automotive industry veteran who has led Google’s automotive services and Google Maps’ automotive programs. McIntyre was also Chief of Staff for Google’s consumer electronics division and serves on the Board of Directors of Lithia Motors. McIntyre brings a track record of driving innovation in traditional industries and a long-standing mission to deliver new, breakthrough experiences to industrial and automotive companies through intelligent hardware.

Elena Messina

Group Leader, Manipulation & Mobility – National Institute of Standards and Technology (NIST)

Elena Messina leads the Intelligent Systems Division’s Manipulation & Mobility Systems Group at the US National Institute of Standards and Technology. She also manages the Engineering Laboratory’s Robotic Systems for Smart Manufacturing Program. Messina founded key efforts to develop test methodologies for measuring performance of robots, which range from long-term use of robotic competitions to drive innovation to consensus standards for evaluating robotic components and systems. Messina has received three Department of Commerce Bronze Medals for Superior Performance and Technical Leadership, and the Edward Bennett Rosa Award for research and development leading to standardized test methods for emergency response robots.

Linda Nagata

SciFi Writer, Owner – Mythic Island Press

Linda Nagata is a science fiction and fantasy writer from Hawaii, best known for her high-tech science fiction, including the far-future adventure series, Inverted Frontier, and the near-future thriller, The Last Good Man. Nagata is most recognized for her Nanotech Succession series, which is considered exemplary of the Nanopunk genre. Nagata’s work has been nominated for the Hugo, Nebula, Locus, John W. Campbell Memorial, and Theodore Sturgeon Memorial awards. She has also won the Nebula award, and is a two-time Locus award winner.

Samia Nefti-Meziani

Professor – University of Salford; Founder and Head – University of Salford’s Autonomous Systems and Advanced Robotics Research Centre

Samia Nefti-Meziani is a Professor at the University of Salford, UK, where she is founder of the ASAR Centre. She is a veteran of the field and one of the pioneers of soft robotics and low-cost cross-sectorial robotics systems and their application in many sectors, including manufacturing, nuclear, space, autonomous cars and healthcare. Nefti-Meziani runs several large UK and European research programs, and regularly appears on TV and radio. She is also a co-founder and Executive Board Member of the UK’s National Robotics Network, former Vice Chairman of IEEE RAS UK and a member of the UK Government’s Robotics Growth Partnership.

Nicci Rossouw

CEO – Exaptec

Nicci Rossouw is CEO and founder of Exaptec, a provider of social, service and telepresence robots. Rossouw specializes in robotics automation solution design and delivery using a Robotics-as-a-Service as a business model. Rossouw has provided robotic telepresence solutions to educational facilities and businesses to augment communication and movement for disabled and incapacitated people. Exaptec was one of Westpac’s top Businesses of Tomorrow winners.

Yulia Sandamirskaya

Applications Research Lead, Neuromorphic Computing Lab – Intel Labs

Yulia Sandamirskaya is a senior researcher at Intel Labs leading a group developing applications of Intel’s neuromorphic research chips. Sandamirskaya was also a group leader in the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Sandamirskaya specializes in perception, movement control, memory formation, and learning in embodied neuronal systems, and implemented neuronal architectures in neuromorphic devices, interfaced with robotic sensors and motors. Sandamirskaya has served as the chair of EUCOG, the European Society for Artificial Cognitive Systems, and as coordinator of the NEUROTECH project.

Katherine Scott

Developer Advocate – Open Robotics

Katherine Scott is a Developer Advocate at Open Robotics, the maintainers of ROS and Ignition Gazebo, and sits on the board of the Open Source Hardware Association. Scott was Cofounder and Software Lead at Tempo Automation, a manufacturer of low-volume electronics, served as Director of Research and Development at Sight Machine, a leader in manufacturing analytics, and worked as a computer vision engineer. Along the way, Scott developed a large part of the SimpleCV Python library, and several innovative software prototypes in the fields of computer vision, graphics, augmented reality, and robotics.

Hazlina Selamat

Associate Professor – Universiti Teknologi Malaysia, Director – Apt Touch Sdn Bhd, Research Fellow – Centre for Artificial Intelligence and Robotics (CAIRO)

Hazlina Selamat is an Associate Professor at Universiti Teknologi Malaysia, an active member of the university’s Centre for Artificial Intelligence & Robotics (CAIRO), and a founding member of the Malaysian Society for Automatic Control Engineers (MACE). She is the co-founder and Director of Apt Touch Sdn Bhd, a company that designs, develops, and manufactures fuel injection systems for small engines. Her projects include developing the optimization engine for an energy management system, and the pre-commercialization of a retrofit fuel injection kit for carburetor engines. Her interests are adaptive control, online system identification and the application of control to high-order and nonlinear systems.

Elizabeth Sklar

Research Director, Agri-Robotics, and Professor – University of Lincoln and King’s College London

Elizabeth Sklar is a Professor at the University of Lincoln and King’s College London, UK, where she heads the Centre for Robotics Research. Sklar is a veteran of MIT/Lincoln Lab, a Founder Trustee of the RoboCup Federation, a Fulbright Scholar 2013-2014, a former member of the Board of Directors of IFAAMAS, and sits on the Editorial Board of the Journal of Autonomous Agents and Multi-Agent Systems. Sklar specializes in interaction for multi-robot and human-robot systems, and shared decision making by applying computational argumentation-based dialogue to human-agent and human-robot systems.

Allison Thackston

Staff Software Engineer, Manager Shared Autonomy – Waymo

Allison Thackston has a passion for solving practical problems with robotics. She currently leads a team solving some of the toughest problems in self-driving technology at Waymo. Thackston was previously the Engineering Lead and Manager of the Shared Autonomy Robotics team at Toyota Research Institute and was the Lead Engineer for Robotic Perception on Robonaut 2, the first humanoid robot on the International Space Station. Thackston is experienced in robust task and motion planning, manipulation, and applied computer vision.

Liane Thompson

CEO – Aquaai Norway AS

Liane Thompson is the CEO of Aquaai, a marine robotics company that improves our understanding of ocean health by collecting and delivering real-time visual and environmental data to a web-based dashboard. Thompson’s company has grown from her daughter’s idea into a full-fledged production with a global, VC-backed brand. Norwegian salmon producer Kvarøy Fiskeoppdrett is working with Aquaai’s bio-inspired finned robots to monitor fish behavior and conditions at their fish farms. Aquaai was awarded the Top Aquaculture Innovation Award at the 2019 Fish 2.0 Global Forum and an Innovation Norway grant, as well as being selected for Australia’s TekFish Challenge.

Claire Tomlin

Charles A. Desoer Professor of Electrical Engineering and Computer Sciences – UC Berkeley

Claire Tomlin is a professor at the University of California Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She specializes in control of safety-critical systems applied to air traffic control automation and unmanned air systems, and is known for her pioneering work developing methods for verifying the safe range of operation for hybrid systems. Tomlin is a MacArthur Foundation Fellow, a Fellow of the IEEE and AIMBE, a member of the US National Academy of Engineering and the American Academy of Arts and Sciences. She received the IEEE Transportation Technologies Award, and an honorary doctorate from KTH.

Janet Wiles

Professor – University of Queensland

Janet Wiles is Professor of Complex and Intelligent Systems at the University of Queensland, where she leads a multidisciplinary team studying social robotics and has pioneered the use of robots in Indigenous communities. She also leads the Future Technologies thread at the ARC Centre of Excellence for the Dynamics of Language developing Human Centred AI for language technologies and making AI tools available to a broad range of users. She is in the Human Centred Computing discipline in UQ’s School of Information Technology and Electrical Engineering. Wiles specializes in human-robot interaction, language technologies, bio-inspired computation, visualization and artificial intelligence, complex systems modelling in biology and neuroscience, human memory, language, and cognition.

Rong Xiong

Professor at Zhejiang University – Institute of Cyber Systems and Control, Zhejiang University

Rong Xiong is a professor and head of the Robotics Laboratory at Zhejiang University, co-director of the ZJU-UTS Joint Center on Robotics Research, international trustee of RoboCup, and an expert member for a special project on intelligent robots at the Ministry for Science and Technology, China. Her group is known for their work developing humanoid robots that can play table tennis with each other and also compete with human players. Xiong received first prizes in the Scientific and Technological Award and the Teaching Achievement Award of Zhejiang Province. Xiong’s research specialization includes visual recognition, simultaneous localization and mapping, motion planning, and control for robots.

Want to keep reading? There are 180 more stories on our 2013 to 2019 lists. Why not nominate someone for inclusion next year?

And we encourage #womeninrobotics and women who’d like to work in robotics to join our professional network at http://womeninrobotics.org

]]>
Robot takes contact-free measurements of patients’ vital signs https://robohub.org/robot-takes-contact-free-measurements-of-patients-vital-signs/ Wed, 16 Sep 2020 20:28:34 +0000 https://robohub.org/robot-takes-contact-free-measurements-of-patients-vital-signs/ Researchers from MIT and Brigham and Women’s Hospital hope to reduce the risk to healthcare workers posed by Covid-19 by using robots to remotely measure patients’ vital signs.

The post Brigham and Women’s Hospital, MIT use Spot robot to measure patient vitals without contact appeared first on The Robot Report.

]]>

By Anne Trafton

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT, Boston Dynamics, and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Henwei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server techRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.
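The correction described above can be pictured as a simple linear model. The sketch below is purely illustrative, not the authors’ published algorithm: the function `estimate_core_temp` and its coefficients are hypothetical placeholders that a real system would fit against reference thermometer readings.

```python
def estimate_core_temp(skin_temp_c, ambient_temp_c, distance_m,
                       a=0.8, b=0.1, c=0.2, offset=6.7):
    """Hypothetical linear correction:
    core ~= a*skin + b*ambient + c*distance + offset.
    All coefficients are illustrative placeholders, not fitted values.
    """
    return a * skin_temp_c + b * ambient_temp_c + c * distance_m + offset

# Facial skin at 34.5 C, room at 22 C, robot 2 m from the patient:
print(round(estimate_core_temp(34.5, 22.0, 2.0), 1))  # → 36.9
```

In practice the coefficients would come from regressing camera readings against ground-truth core temperatures across a range of ambient conditions and distances.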

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.
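As a toy illustration of that principle (our own assumptions, not the paper’s method), breathing rate can be recovered from a mask-temperature time series by counting how often the mean-removed signal crosses zero going upward — once per breath cycle:

```python
import math

def breathing_rate_bpm(temps, sample_rate_hz):
    """Estimate breaths per minute from an oscillating temperature signal."""
    mean = sum(temps) / len(temps)
    centered = [t - mean for t in temps]
    # Each rising zero crossing of the mean-removed signal marks one breath.
    cycles = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_min = len(temps) / sample_rate_hz / 60.0
    return cycles / duration_min

# Synthetic mask signal: 15 breaths/min (0.25 Hz), sampled at 10 Hz for 60 s.
signal = [0.3 * math.sin(2 * math.pi * 0.25 * (i / 10.0) + 0.7) + 34.0
          for i in range(600)]
print(round(breathing_rate_bpm(signal, 10.0)))  # → 15
```

A real pipeline would first band-pass filter the signal to reject noise outside plausible breathing frequencies, but the counting idea is the same.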

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
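The standard textbook idea behind this kind of measurement is the “ratio of ratios” used in pulse oximetry. The sketch below shows that general principle only, not the authors’ three-camera algorithm; the calibration constants (110, 25) are common classroom approximations, not clinically validated values.

```python
def spo2_from_intensities(ac_red, dc_red, ac_ir, dc_ir, a=110.0, b=25.0):
    """Approximate SpO2 (%) from the pulsatile (AC) and steady (DC)
    light-intensity components at a red and an infrared wavelength,
    using the textbook linear calibration SpO2 ~= a - b * R.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # ratio of ratios
    return a - b * r

# Example: equal relative pulsatility at both bands (R = 1)
# maps to 85% under this illustrative calibration.
print(spo2_from_intensities(0.02, 1.0, 0.02, 1.0))  # → 85.0
```

The pulse rate falls out of the same signals: the AC component oscillates at the heartbeat frequency, so its dominant frequency gives beats per minute.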

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship, and Boston Dynamics.

]]>
Opportunities in DARPA SubT Challenge https://robohub.org/opportunities-in-darpa-subt-challenge/ Tue, 14 Jul 2020 20:13:57 +0000 https://robohub.org/opportunities-in-darpa-subt-challenge/

Opportunities are still available to participate in the DARPA Subterranean (SubT) Challenge: Cave Circuit 2020 and Final Event 2021. Join us for an introduction to the DARPA Subterranean Challenge with Program Manager Timothy Chung on Monday, July 20 at 12pm PDT: https://www.eventbrite.com/e/opportunities-with-darpa-subt-challenge-tickets-113037393888

About this Event

The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground.

The SubT Challenge allows teams to demonstrate new approaches for robotic systems to rapidly map, navigate, and search complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.

The SubT Challenge is organized into two Competitions (Systems and Virtual), each with two tracks (DARPA-funded and self-funded).

The Cave Circuit, the final of three Circuit events, is planned for later this year. The Final Event, planned for summer 2021, will put both Systems and Virtual teams to the test with courses that incorporate diverse elements from all three environments. Teams will compete for up to $2 million in the Systems Final Event and up to $1.5 million in the Virtual Final Event, with additional prizes.

Learn more about the opportunities to participate as either a Virtual or Systems team: https://www.subtchallenge.com/

Dr. Timothy Chung – Program Manager

Dr. Timothy Chung joined DARPA’s Tactical Technology Office as a program manager in February 2016. He serves as the Program Manager for the OFFensive Swarm-Enabled Tactics Program and the DARPA Subterranean (SubT) Challenge. His interests include autonomous/unmanned air vehicles, collaborative autonomy for unmanned swarm system capabilities, distributed perception, distributed decision-making, and counter unmanned system technologies.

Prior to joining DARPA, Dr. Chung served as an Assistant Professor at the Naval Postgraduate School and Director of the Advanced Robotic Systems Engineering Laboratory (ARSENL). His academic interests included modeling, analysis, and systems engineering of operational settings involving unmanned systems, combining collaborative autonomy development efforts with an extensive live-fly field experimentation program for swarm and counter-swarm unmanned system tactics and associated technologies.

Dr. Chung holds a Bachelor of Science in Mechanical and Aerospace Engineering from Cornell University. He also earned Master of Science and Doctor of Philosophy degrees in Mechanical Engineering from the California Institute of Technology.

Learn more about DARPA here: www.darpa.mil

]]>
Robotics for Infectious Diseases Organization and other resources https://robohub.org/robotics-for-infectious-diseases-and-other-resources/ Sat, 27 Jun 2020 01:32:45 +0000 https://robohub.org/robotics-for-infectious-diseases-and-other-resources/ In times of crisis, we all want to know where the robots are! And young roboticists just starting their careers, or simply thinking about robotics as a career, ask us ‘How can robotics help?’ and ‘What can I do to help?’. Cluster organizations like Silicon Valley Robotics can serve as connection points between industry and academia, between undergrads and experts, between startups and investors, which is why we rapidly organized a weekly discussion with experts about “COVID-19, robots and us” (video playlist).

During our online series, we heard from roboticists directly helping with all sorts of COVID-19 response, like Gui Cavalcanti of Open Source Medical Supplies and Alder Riley of Helpful Engineering. Both groups are great examples of the incredible power of people working together.

Open Source Medical Supplies (OSMS) was formed in March 2020 to research and disseminate open source plans for medical supplies used to treat and reduce the spread of COVID-19 that could be fabricated locally. Additionally, Open Source Medical Supplies supports, mentors, and guides local communities as they self-organize hospital systems, essential services, professional fabricators, makerspaces, and local governments into resilient, self-supporting supply units.

In its first two months of operation, Open Source Medical Supplies helped organize over 73,000 people in its Facebook group, produced 6 iterations of its Open Source COVID-19 Medical Supply Guide featuring 20 design categories and 90+ curated open source designs, and engaged over 200 Local Response groups in 50 countries. OSMS materials are being translated into 40 languages, and OSMS guidance and collaboration platforms have catalyzed volunteers around the world to produce and deliver over 11 million medical supply items to their local communities (as of May 30).

Speakers like Mark Martin from California Community Colleges who started the Bay Area Manufacturers Discussion Forum and Rich Stump from Studio Fathom shared how the manufacturing community was coping with pivoting to making PPE instead of other products, and some of the issues with regulations and the supply chain.

Speakers like Tom Low and Roy Kornbluh from SRI International and Eric Bennett from Frontier Bio talked about redesigning ventilators, including designing robots to teleoperate them. Ventilators are critical in treating COVID-19, but there is also a lack of trained operators, and FDA regulations don’t allow devices to be adapted or changed. And Rachel McCrafty Sadd (aka The Crafty Avenger) from Ace Monster Toys talked about making hundreds of thousands of cloth masks and what sort of robots would have been useful.

In general, teleoperation is the trojan horse for adopting robots in workplaces in new ways. People trust a remotely operated robot much more readily than a fully autonomous one. We spoke to Tra Vu from OhmniLabs and David Crawley of Ubiquity Robotics, both of whom produce very affordable mobile bases for telepresence and other use cases, including disinfecting. Demand for both robots is rapidly increasing, and people are asking for add-on abilities, like the ability to push a button or open a door. Not all of these tasks will be simple to add, but clearly once a hospital, facility or household has successfully used one robot, they are very willing to add more.

Rescue robotics expert Robin Murphy, Raytheon Professor of Computer Science and Engineering at Texas A&M, IEEE Fellow, ACM Fellow and Chair of the new Robotics for Infectious Diseases Organization, joined us on several evenings to share a global tally of robot use cases. Not only did the facts get the discussion going, but she also shared tips and tricks for designing and deploying robots successfully in pandemic conditions.

Robotics for Infectious Diseases has launched two new interview series. Series 1 features public health, public safety and emergency management experts answering questions about what technology to deploy, and how, in a disaster response scenario like COVID-19; the interviews are intended to give roboticists and robotics startups insight into the problems and requirements for technology in the health sector. Series 2 follows roboticists like Antonio Bicchi and Gangtie Zheng working on COVID-19 applications and describes the lessons learned.

Figure from Robotics for Infectious Diseases Organization

Murphy’s primary research is in artificial intelligence for mobile robots as applied to disaster robotics. She has literally written the book about “Disaster Robotics”. Her analyses have shown that 50% of the terminal failures in disaster robotics are due to human error, so a significant portion of her work is in human-robot interaction. Murphy works with responders and agency stakeholders to determine gaps that lead to the formulation of applied and fundamental research thrusts with her non-profit Center for Robot-Assisted Search and Rescue (CRASAR). Her research group has participated in 27 disasters or incidents and over 35 exercises gathering data spanning urban search and rescue, structural inspection, hurricanes, flooding, mudslides, mine disasters, radiological events, and wilderness search and rescue.

In our COVID-19, robots and us series, we also heard from Sue Keay, the head of Australia’s robotics research cluster and organizer of the Australian Robotics Roadmap about some successful deployments for COVID response, and also disaster scenarios of other sorts, including their 4th place finish in the recent DARPA Subterranean Robotics Challenge.

https://www.darpa.mil/news-events/2020-02-27

In the Systems competition of the Urban Circuit, 10 teams navigated two courses winding through an unfinished nuclear power plant in Elma, Washington, Feb. 18-27, 2020. DARPA designed the courses to represent complex urban underground infrastructure. The Virtual competition with eight teams took place Jan. 23-30, with results announced Feb. 27. Teams from eleven countries participated across the Virtual and Systems competitions in the Urban Circuit.

“Teams are under tremendous pressure in the SubT Challenge, not just because of the prize money at stake, but because of the significance of winning a DARPA Grand Challenge, events that have a history of jumpstarting innovation,” said Dr. Timothy Chung, program manager for the Subterranean Challenge in DARPA’s Tactical Technology Office. “At the core of the SubT Challenge is the mission to face an unknown environment and respond to changing situations.”

The focus turns now to the Cave Circuit, set for August 2020. DARPA will announce the location about three months prior to the start of the event. DARPA-funded and self-funded teams compete side-by-side throughout the Subterranean Challenge. Only self-funded teams are eligible for prizes in the Circuit Events, but they must finish in the top six overall for the Systems competition and top five overall for the Virtual competition. All qualified teams are eligible for prizes in the Final Event.

“We knew heading into the Urban Circuit that verticality would be one of the significant obstacles. Teams that traveled between floors, either flying, walking, or rolling, found more artifacts,” said Dr. Chung. “Teams designed their approaches to tackle uncertainty up front, and then toward the end of the Urban Circuit, we saw them put their platforms out there and take more risks. I look forward to seeing how they adapt for the Cave Circuit.”

Finally, if you have experiences to share deploying robots for COVID-19 applications, there is a Call for Papers for a Special Issue of IEEE RAM.

This special issue, edited by the IEEE RAS Special Interest Group on Humanitarian Technologies (SIGHT), aims to present up-to-date results and innovative advanced solutions on how robotics and automation technologies are used to fight the outbreak, giving particular emphasis to works involving the actual deployments of robots with meaningful analysis and lessons learned for the robotics community. The editors will accept both conventional full length contributions and short contributions reporting practical solutions to the problem that have proven effective in the field. The topics of interest for paper submissions include, but are not limited to:

  • autonomous or teleoperated robots for hospital disinfection and disinfection of public spaces.
  • telehealth and physical human-robot interaction systems enabling healthcare workers to remotely diagnose and treat patients.
  • hospital and laboratory supply chain robots for handling and transportation of samples and contaminated materials.
  • robots used by public safety and public health departments for quarantine enforcement and public service announcements.
  • social robots for families interacting with patients or with relatives in nursing homes.
  • robots enabling or assisting humans to return to work or companies to continue to function.
  • case studies of experimental use of robots in the COVID-19 pandemic.

Important Dates:

May 2020 – Call for papers
31 July 2020 – Submission deadline
15 September 2020 – First decisions on manuscripts
30 October 2020 – Resubmission
30 November 2020 – Final decisions
10 December 2020 – Final manuscripts uploaded
March 2021 – Publication

Click here for more details.

]]>
Conversation on racism and robotics https://robohub.org/conversation-on-racism-and-robotics/ Wed, 24 Jun 2020 23:50:47 +0000 https://robohub.org/conversation-on-racism-and-robotics/

Talking about racism and its impact on robotics and roboticists was the first conversation in our new monthly online discussion series “Society, Robots and Us”, held on the last Tuesday of each month at 6pm PDT. It was a generous, honest and painful discussion that I hope has left a lasting impact on everyone who listened. There is systemic racism in America, and it affects robotics and roboticists in many ways. UPDATE: Black in Robotics has launched on Twitter and online at https://blackinrobotics.org

US Senator Elizabeth Warren, in conversation today with Alicia Garza from Black Futures Lab, said, “America was founded on principles of liberty and freedom, but it was built on the backs of enslaved people. This is a truth we must not ignore. Racism and white supremacy have shaped every crucial aspect of our economy, and our political system for generations now.”

The speakers in ‘Society, Robots and Us’ (Chad Jenkins, Monroe Kennedy III, Jasmine Lawrence, Tom Williams, Ken Goldberg and Maynard Holliday) explored the impact of racism on their experiences in robotics, along with explicit changes that we all can make. We also discussed lessons for allies and supporters, and what a difference support can make. Please listen to the full discussion, but Chad Jenkins’ notes capture some of the critical insights.

“I have been in computing for nearly 30 years and a roboticist for over 20 years.  Thus, I have been able to experience firsthand many of the systemic problems that face our field. Let me build on some of the recommendations from the blackincomputing.org open letter and call to action. “

In particular, I believe we can bring equal opportunity to STEM quickly by upholding Title VI of the Civil Rights Act of 1964 and Title IX of the Educational Amendments of 1972 for institutions receiving federal funding, and public funding more generally.  We now incentivize systemic disparate impacts in STEM.

Like law enforcement, university faculty are asked to do too much. Given our bandwidth limits, we have to make hard choices about what gets our attention and effort.

This creates a dilemma in every faculty member about whether to bolster their own personal advancement (by gaining social acceptance in the establishments of the field that control access to funding, hiring, and publishing through peer review) or further create and extend opportunity to others (taking a professional sacrifice to provide mentorship and empathy to future generations towards broadening participation in the STEM workforce).

It is clear STEM incentivizes the former given systemic exclusion of underrepresented minorities, with disastrous results thus far.

I believe we are a vastly better society with the upholding of Title VII of the Civil Rights Act of 1964 yesterday by the Supreme Court to prohibit employment discrimination against LGBTQ+ citizens. Discrimination is wrong. My hope is that we can apply this same standard and attention for Title VI of this statute to outcomes in STEM. This is not an issue of altruism; it reflects our true values as a nation and affects the quality of our work and its impact on the world.

There are placeholder measures that can be enacted to incentivize equal opportunity.  For example, universities could decline sabbatical and leave requests from faculty seeking to collaborate with companies that have failed to provide equal opportunity, such as OpenAI and Google DeepMind.

To achieve systemic fairness in robotics, however, we must go beyond token gestures to address the causal factors of inequity rooted in the core economic incentives of our universities.  It is universities that are the central ladder to opportunity through the development of future leaders, innovators, and contributors to our society.

We have the tools at hand today to create equal opportunity in STEM.  The question is whether we have the will.

Equal opportunity cannot be true for anyone unless equal opportunity is true for everyone.

Odest Chadwicke Jenkins, Associate Professor, University of Michigan Robotics Institute

Our next episode of “Society, Robots and Us” on June 30 will discuss the role and rollout of killer robots, but we’ll be coming back to talk more about racism, diversity and inclusion in robotics because we’ve only just scratched the surface. The Sept 29 episode will be about Open Source Robotics.

]]>
Titan Medical, Medtronic agree to cooperate on surgical robotics development https://robohub.org/titan-medical-medtronic-agree-to-cooperate-on-surgical-robotics-development/ Thu, 04 Jun 2020 20:25:00 +0000 https://robohub.org/titan-medical-medtronic-agree-to-cooperate-on-surgical-robotics-development/ Titan Medical, which has been trying to raise money, and Medtronic, which has been looking to expand its surgical robotics offerings, have signed development, licensing, and loan agreements.

The post Titan Medical, Medtronic agree to cooperate on surgical robotics development appeared first on The Robot Report.

]]>


The development of systems for robot-assisted surgery is difficult, with the need to meet stringent clinical requirements, obtain regulatory approvals, and keep costs under control. Today, Titan Medical Inc. announced an agreement with Medtronic PLC to advance the design and development of surgical robots. The onetime rivals also signed a licensing agreement covering some of Titan’s intellectual property.

Under the agreement, both companies can develop robot-assisted surgical systems in their respective businesses, while Titan will receive a series of payments that reach $31 million in return for Medtronic’s license for the technologies. The payments will arrive as milestones are completed and verified.

Milestones include fundraising

A steering committee including representatives from Toronto-based Titan Medical and Dublin, Ireland-based Medtronic will oversee work toward achievement of the milestones. One of them is for Titan to raise an additional $18 million in capital within four months of the development start date, which is expected to occur this month.

Titan has also received from Medtronic a senior secured loan of $1.5 million, which will be increased by an amount equal to certain legal expenses related to the transactions and intellectual property, with an interest rate of 8% per annum. The loan is repayable on June 4, 2023, or upon the earlier completion of the last milestone.

Until the loan is repaid, Medtronic may have one non-voting observer on Titan’s board of directors. Charles Federico, who has served as the company’s chairman since May 2019, and John Schellhorn, who has served as a director since June 2017, have decided to retire from Titan’s board. The board will consist of three members, including David McNally; John Barker, an independent director; and Stephen Randall, Titan’s chief financial officer, while a search for additional independent directors is conducted.

The 2020 Healthcare Robotics Engineering Forum is coming in September.

Titan Medical pays $10M for Medtronic surgical robot licenses

Under the terms of the separate agreement, Medtronic has licensed certain robot-assisted surgical technologies from Titan for an upfront payment of $10 million. Titan said it retains the rights to continue to develop and commercialize those technologies for its own business.

“These agreements with Medtronic will allow Titan to continue to develop its single-port robotic surgical technologies while sharing our expertise and technologies with Medtronic,” stated David McNally, president and CEO of Titan Medical. “We are very excited about the opportunity to continue Titan’s pioneering work to bring new single-port surgical options to the market.”

These agreements are between Medtronic and Titan Medical, which is not affiliated with Titan Spine, which Medtronic acquired in 2019. They are another step in Medtronic’s effort to break into the robot-assisted surgery space, which remains dominated by Intuitive Surgical and its da Vinci SP.

The Mazor X Stealth robot-assisted spinal surgical system. Source: Medtronic

Medtronic completed a $1.7 billion purchase of Mazor Robotics in December 2018. A month later, the company launched its Mazor X Stealth robotic-assisted spinal surgical platform in the U.S. In September 2019, Medtronic unveiled its new Hugo system that is set to rival the da Vinci SP.

Editor’s note: For more about this and other medical device deals, visit our sibling site, MassDevice.

]]>
MIT gives soft robotic gripper better sense of touch and perception https://robohub.org/mit-gives-soft-robotic-gripper-better-sense-of-touch-and-perception/ Tue, 02 Jun 2020 16:57:17 +0000 https://robohub.org/mit-gives-soft-robotic-gripper-better-sense-of-touch-and-perception/ MIT researchers built a soft robotic gripper that uses embedded cameras and deep learning to enable tactile sensing and awareness of its positions and movements.

The post MIT gives soft robotic gripper better sense of touch and perception appeared first on The Robot Report.

]]>

soft robotic finger

MIT researchers built a soft robotic gripper that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body). | Credit: MIT CSAIL

One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.

One paper builds off last year’s research from MIT and Harvard University, where a team developed a strong and soft robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus’ flytrap, to pick up items that are as much as 100 times its weight.

To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the soft robotic gripper not only pick up objects as delicate as potato chips, but also classify them, letting the robot better understand what it’s picking up while exhibiting that light touch.

When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.

“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”

In a second paper, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).

The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.

“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”

Magic ball senses

The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to its structure.

While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.

When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to identify when it will feel that again.

In addition to the latex sensor, the team also developed an algorithm which uses feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.
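
One simple way to classify objects from such pressure signatures is nearest-neighbor matching against stored readings. The sketch below, including the labels and numbers, is purely illustrative and is not the CSAIL team's classifier:

```python
# Hypothetical nearest-neighbor classifier over per-bladder pressure
# readings; labels and signature values are made up for illustration.

def classify_grasp(reading, signatures):
    """Return the label whose stored pressure signature is closest (L2)."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda label: dist_sq(reading, signatures[label]))

signatures = {
    "potato chip": [0.2, 0.1, 0.2, 0.1],   # light, even contact
    "milk bottle": [2.1, 1.9, 2.0, 2.2],   # heavy, firm contact
}
print(classify_grasp([0.25, 0.12, 0.18, 0.09], signatures))  # prints "potato chip"
```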

The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.

Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.

Hughes co-wrote the new paper with Rus; they will present it virtually at the 2020 International Conference on Robotics and Automation.

GelFlex

In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.

To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fisheye cameras to observe the status of the front and side surface of the finger.

The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The soft robotic gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.
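
Structurally, the two networks feed a single feedback step: one maps a camera frame to a bend angle, the other to an object estimate. The stand-in callables below only illustrate that plumbing and are not the trained CSAIL models:

```python
# Sketch of the dual-network feedback structure; the models here are
# stand-in callables, not the trained GelFlex networks.

def gelflex_feedback(frame, angle_model, object_model):
    """Combine proprioceptive and tactile estimates for one camera frame."""
    bend_angle_deg = angle_model(frame)
    label, size_mm = object_model(frame)
    return {"bend_angle_deg": bend_angle_deg, "object": label, "size_mm": size_mm}

# Dummy models standing in for the trained networks:
state = gelflex_feedback(
    frame=[[0] * 4] * 4,
    angle_model=lambda f: 12.5,
    object_model=lambda f: ("cylinder", 40.0),
)
print(state)
```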

During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the soft robotic gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly.

In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.

Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.

Editor’s Note: This article was reprinted with permission from MIT News.

]]>
Locus Robotics expanding into Europe with $40M Series D https://robohub.org/locus-robotics-expanding-into-europe-with-40m-series-d/ Tue, 02 Jun 2020 13:59:59 +0000 https://robohub.org/locus-robotics-expanding-into-europe-with-40m-series-d/ Locus Robotics, a developer of autonomous mobile robots, closed a $40M Series D round to expedite the opening of its European headquarters in Amsterdam.

The post Locus Robotics expanding into Europe with $40M Series D appeared first on The Robot Report.

]]>

Locus

A group of LocusBots from autonomous mobile robot developer Locus Robotics. | Credit: Locus Robotics

June 2020 is off to a hot start for developers of autonomous mobile robots (AMRs). Yesterday, OTTO Motors announced a $29 million Series C, and today Locus Robotics closed $40 million in Series D funding.

The Series D brings Locus‘ total amount of funding raised to $105 million. Locus’ latest round was led by Zebra Ventures, the strategic investment arm of Zebra Technologies. Existing investors such as Scale Venture Partners also participated in the round. Locus raised its $26 million Series C in April 2019.

The new funding will be used for R&D purposes, but it will also accelerate the company’s expansion into new markets. Locus is now planning to open its European headquarters in Amsterdam in either the third or fourth quarter of 2020. A Locus spokesperson told The Robot Report “Amsterdam allows us to be centrally located and close to many of the key fulfillment and distribution centers that serve the European markets.”

Denis Niezgoda, who joined Locus in September 2019 as the Director of Business Development for the European Union, will lead the new headquarters. Prior to joining Locus, Niezgoda served as Robotics Accelerator Lead at DHL Customer Solutions and Innovation. He was responsible for identifying and implementing new technologies to drive innovation.

Locus also has multiple positions open in Cologne, Germany, including a Sales Executive. “We source our talent from all over the EU and offer remote work options to minimize the need for relocation or extensive travel,” the Locus spokesperson said. “The Cologne area is currently a key location based on some of our customer support needs.”

Many experts are saying the COVID-19 pandemic has expedited the shift to online shopping as the new normal across the globe. In the U.S. and Canada, for example, there has been 129% year-over-year growth in e-commerce orders as of April 21. AMRs from Locus and others are stepping up to help companies fulfill this surge in demand.

Locus

The LocusBot AMRs navigate autonomously within a warehouse to locate and transport pick items to associates. LocusBots can be flexibly deployed to support a range of picking strategies, helping to reduce time spent on routine or physically demanding tasks, cut manual errors, and increase productivity for customers.

“We have recently seen a dramatic disruption of retail with e-commerce growth as high as 400% year-over-year in some categories. And others were severely limited as the bulk of their inventory was in stores that they could not get into due to lockdowns. It’s critical that retailers are prepared for direct fulfillment from the warehouse,” said Greg Buzek, President of IHL Group, a global research and advisory firm for the retail and hospitality industries. “This announcement underscores the need for companies to prepare for today’s new labor challenges that will be impacted by the significant volume increases that are already occurring. Companies investing now in warehouse automation, particularly AMRs, will be better positioned for success in the post-pandemic economy as they can support sales from any channel.”

Locus and DHL Supply Chain recently expanded their partnership with new deployments of LocusBots throughout 2020. DHL Supply Chain, part of the Deutsche Post DHL Group, will deploy 1,000 LocusBots to support 12 DHL sites in North America.

“Locus Robotics is thrilled to announce this new round of funding amid our most transformative year yet,” the company said. “The new funding allows Locus to accelerate expansion into global markets, enabling us to strengthen our support of retail, industrial, healthcare, and 3PL businesses around the world as they navigate through the COVID-19 pandemic, ensuring that they come out stronger on the other side.”

The post Locus Robotics expanding into Europe with $40M Series D appeared first on The Robot Report.

]]>
CES 2020: A smart city oasis https://robohub.org/ces-2020-a-smart-city-oasis/ Sun, 26 Jan 2020 22:46:49 +0000 https://robohub.org/ces-2020-a-smart-city-oasis/ ]]> Like the city that hosts the Consumer Electronics Show (CES), there is a lot of noise on the show floor. Sifting through the lights, sounds and people can be an arduous task even for the most experienced CES attendees. Hidden past the North Hall of the Las Vegas Convention Center (LVCC) is a walkway to a tech oasis housed in the Westgate Hotel. This new area hosting SmartCity/IoT innovations is reminiscent of the old Eureka Park, complete with folding tables and ballroom carpeting. The fact that such enterprises require their own area, separate from the main halls of the LVCC and the startup pavilions of the Sands Hotel, is an indication of how urbanization is being redefined by artificial intelligence.

Many executives naively group AI into its own category, with SmartCity inventions as a niche use case. However, as Akio Toyoda, Chief Executive of Toyota, presented at CES, it is the reverse. The “Woven City” initiative by the car manufacturer illustrates that autonomous cars, IoT devices and intelligent robots are subservient to society and, hence, require their own “living laboratory”. Toyoda boldly described a novel construction project for a city of the future 60 miles from Tokyo: “With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test AI technology, in both the virtual and the physical world, maximizing its potential. We want to turn artificial intelligence into intelligence amplified.” Woven City will include 2,000 residents (mostly existing and former employees) on a 175-acre site (formerly a Toyota factory) at the foothills of Mount Fuji, providing academics, scientists and inventors a real-life test environment.

Toyota has hired the Dutch architectural firm Bjarke Ingels Group (BIG) to design its urban biosphere. According to Bjarke Ingels, “Homes in the Woven City will serve as test sites for new technology, such as in-home robotics to assist with daily life. These smart homes will take advantage of full connectivity using sensor-based AI to do things automatically, like restocking your fridge, or taking out your trash — or even taking care of how healthy you are.” While construction is set to begin in 2021, the architect is already boasting: “In an age when technology, social media and online retail is replacing and eliminating our natural meeting places, the Woven City will explore ways to stimulate human interaction in the urban space. After all, human connectivity is the kind of connectivity that triggers wellbeing and happiness, productivity and innovation.”

Walking back into the LVCC from the Westgate, I heard Toyoda’s keynote in my head – “mobility for all” – forming a prism through which to view the rest of the show. Looking past Hyundai/Uber’s massive Air Taxi and Omron’s ping-pong playing robot, thousands of suited executives led me under LG’s television waterfall to the Central Hall. Hidden behind an out-of-place Delta Airlines lounge, I discovered a robotics startup already fulfilling aspects of the Woven City. Vancouver-based A&K Robotics displayed a proprietary autonomous mobility solution serving the ballooning geriatric population. The U.S. Census Bureau projects that the number of citizens over the age of 65 will double from “52 million in 2018 to 95 million by 2060” (or close to a quarter of the country’s population). This statistic parallels global demographic trends for most first-world countries. In Japan, the elderly already exceed 28% of the citizenry, with more than 70,000 people over the age of 100. When A&K first launched, it marketed conversion kits for turning manual industrial machines into autonomous vehicles. Today, the Canadian team is applying its passion for unmanned systems to improve the lives of the most vulnerable – people with disabilities. As Jessica Yip, A&K COO, explains, “When we founded the company we set out to develop and prove our technology first in industrial environments moving large cleaning machines that have to be accurate because of their sheer size. Now we’re applying this proven system to working with people who face mobility challenges.” The company plans to initially sell its elegant self-driving wheelchair to airports, a $2 billion opportunity serving 63 million passengers worldwide.


In the United States, the federal government mandates that airlines provide ‘free, prompt wheelchair assistance between curbside and cabin seat’ as part of the 1986 Air Carrier Access Act. Since the bill passed, demand for airport wheelchair assistance has mushroomed to an almost unserviceable rate as carriers struggle to fulfill the mandated free service. In reviewing the airlines’ performance, Eric Lipp of the disability advocacy group Open Doors complains, “Ninety percent of the wheelchair problems exist because there’s no money in it. I’m not 100% convinced that airline executives are really willing to pay for this service.” In balancing profits with accessibility, airlines have employed unskilled, underpaid workers to push disabled fliers to their seats. A&K’s solution has the potential to both liberate passengers and improve the airlines’ bottom-line performance. Yip contends, “We’re embarking on moving people, starting in airports to help make traveling long distances more enjoyable and empowering.”

A&K joins a growing fleet of technology companies tackling the airport mobility issue. Starting in 2017, Panasonic partnered with All Nippon Airways (ANA) to pilot self-driving wheelchairs in Tokyo’s Narita International Airport. As Juichi Hirasawa, Senior Vice President of ANA, states: “ANA’s partnership with Panasonic will make Narita Airport more welcoming and accessible, both of which are crucial to maintaining the airport’s status as a hub for international travel in the years to come. The robotic wheelchairs are just the latest element in ANA’s multi-faceted approach to improving hospitality in the air and on the ground.” Last December, the Abu Dhabi International Airport spent a week publicly demonstrating autonomous wheelchairs manufactured by US-based WHILL. Ahmed Al Shamisi, Acting Chief Operations Officer of Abu Dhabi Airports, asserted: “Convenience is one of the most important factors in the traveller experience today. We want to make it even easier for passengers to enjoy our airports with ease. Through these trials, we have shown that restricted mobility passengers and their families can enjoy greater freedom of movement while still ensuring that the technology can be used safely and securely in our facilities.” Takeshi Ueda of WHILL enthusiastically added, “Seeing individuals experience the benefits of the seamless travel experience from security to boarding is so rewarding, and we are eager to translate this experience to airports across the globe.”

At the end of Toyoda’s remarks, he joked, “So by now, you may be thinking has this guy lost his mind? Is he a Japanese version of Willie Wonka?” As laughter permeated the theater, he excitedly confessed, “Perhaps, but I truly believe that THIS is the project that can benefit everyone, not just Toyota.” As I flew home, I left Vegas more encouraged about the future: entrepreneurs today are focused on something bigger than robots. In the words of Yip, “As a company we’re looking to serve all people, and are strategically focused on airports as a step towards smart cities where everyone has the opportunity to participate fully in society in whatever way they are interested. Regardless of age, physical challenges, or other, we want people to be able to get out of their homes and into their communities. To be able to see each other, interact, go to work or travel whenever they want to.”

Sign up today for the next RobotLab forum, Automating Farming: From The Holy Land To The Golden State, February 6th in New York City.

 

]]>
The 5G report card: Building today’s smart IoT ecosystem https://robohub.org/the-5g-report-card-building-todays-smart-iot-ecosystem/ Sat, 07 Dec 2019 16:30:22 +0000 https://robohub.org/the-5g-report-card-building-todays-smart-iot-ecosystem/ ]]>

The elephant in the room loomed large two weeks ago at the inaugural Internet of Things Consortium (IoTC) Summit in New York City. Almost every presentation began apologetically with the refrain, “In a 5G world…”, practically challenging the industry’s rollout goals. At one point Brigitte Daniel-Corbin, IoT Strategist with Wilco Electronic Systems, sensed the need to reassure the audience by exclaiming, ‘it’s not a matter of if, but when, 5G will happen!’ Frontier-tech pundits too often prematurely predict hyperbolic adoption cycles, falling into the trap of most soothsaying visions. The IoTC Summit’s willingness to pull back the curtain left its audience empowered with a sober roadmap forward that will ultimately drive greater innovation and profit.


The industry frustration is understandable, as China announced earlier this month that 5G is now commercially available in 50 cities, including Beijing, Shanghai and Shenzhen. In fact, the communist state beat its own 2020 objectives by rolling out the technology months ahead of plan. Already more than 10 million cellular customers have signed up for the service. China has made upgrading its cellular communications a national priority, with more than 86,000 5G base stations installed to date and another 130,000 to go live by the end of the year. In the words of Wang Xiaochu, president of China Unicom, “The commercialization of 5G technology is a great measure of [President] Xi Jinping’s strategic aim of turning China into a cyber power, as well as an important milestone in China’s information communication industry development.” By contrast, the United States is still testing the technology in a number of urban zones. If a recent PC Magazine review of Verizon’s Chicago pilot is any indication of the state of the technology, the United States is very far from catching up. As one reporter complains, “I walked around for three hours and found that coverage is very spotty.”

Last year, President Trump, donning a hardhat, declared “My administration is focused on freeing up as much wireless spectrum as needed [to make 5G possible].” The importance of Trump’s promotional event in April cannot be overstated, as so much of the future of autonomous driving, additive manufacturing, collaborative robotics, shipping & logistics, smart city infrastructure, the Internet of Things (IoT), and virtual & augmented reality relies on greater bandwidth. Most experts predict that 5G will offer a 10 to 100 times improvement over fourth-generation wireless. Els Baert of NetComm explains, “The main advantage that 5G offers over 4G LTE is faster speeds — primarily because there will be more spectrum available for 5G, and it uses more advanced radio technology. It will also deliver much lower latency than 4G, which will enable new applications in the [Internet of Things] space.” Unfortunately, since Trump’s photo op, the relationship with China has worsened so much that US carriers are now blocked from doing business with the largest supplier of 5G equipment, Huawei. This leaves the United States with only a handful of suppliers, including market leaders Nokia and Ericsson. The limited supply chain is exacerbated by how little America is spending on upgrading its telecommunications; according to Deloitte, “we conclude that the United States underspent China in wireless infrastructure by $8 billion to $10 billion per year since 2015.”
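To put the “10 to 100 times” bandwidth claim in perspective, a back-of-envelope calculation shows how transfer times shrink. The throughput figures below (50 Mbit/s for 4G LTE, 1 Gbit/s for 5G) are illustrative assumptions only; real-world speeds vary widely with spectrum, coverage and congestion.

```python
# Back-of-envelope comparison of 4G vs 5G transfer times.
# The throughputs are hypothetical round numbers, not measured rates.

def transfer_seconds(size_gb, mbps):
    """Seconds to move size_gb gigabytes at mbps megabits per second."""
    bits = size_gb * 8e9          # 1 GB = 8 billion bits
    return bits / (mbps * 1e6)    # megabits/s -> bits/s

lte = transfer_seconds(1, 50)       # assumed 4G LTE rate
five_g = transfer_seconds(1, 1000)  # assumed 5G rate
print(f"4G: {lte:.0f} s, 5G: {five_g:.0f} s, speedup: {lte / five_g:.0f}x")
```

At these assumed rates, a 1 GB download drops from roughly 160 seconds to 8 seconds, a 20x speedup that sits squarely inside the predicted 10-to-100x range.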


The current state of the technology (roadblocks and all) demands fostering an innovation ecosystem today that parallels the explosion of new services for the 5G economy. As McKinsey reports, there are more than 25 billion connected IoT devices currently, a number estimated to grow to more than 75 billion by 2025 with the advent of fifth-generation wireless. The study further cites, “General Electric projects that IoT will add $10 to $15 trillion to worldwide Gross Domestic Product (GDP) growth by 2030. To put that into perspective, that number is equivalent to China’s entire current economy.” Regrettably, most of the available 5G accelerators in the USA are built to showcase virtual and augmented reality instead of fostering applications for the larger opportunity of business-to-business services. According to Business Insider, “IoT solutions will reach $6 trillion by 2021” across a wide spectrum of industries, including healthcare, manufacturing, logistics, energy, smart homes, transportation and urban development. In fact, hardware will only account for about one-third of the new revenues (and VR/AR headsets comprise considerably less).


It is a challenging moment for publicly traded carriers (like T-Mobile, Verizon & AT&T) whose stock performance is so closely tied to the future of next-generation wireless. Clearly, market makers are more excited by unicorns like Oculus (acquired by Facebook for $2 billion in 2014) and Magic Leap (valued at $4.5 billion in 2016) than by IoT sensors for robotic recycling, agricultural drones, and fuel-efficient reactors. However, based upon the available data, the killer app for 5G will be found in industry, not digital theatrics. This focus on theatrics is illustrated in one of the few statements online by Verizon’s Christian Guirnalda, Director of its 5G Labs, boasting: “We’re literally making holograms here using a dozen different cameras in a volumetric capture studio to create near real-time images of what people and products look like in 3D.” A few miles north of Verizon’s 5G Labs, New York City’s hospitals are overcrowded with patients and data, leading to physical and virtual latency issues. Verizon could enable New York’s hospitals with faster network speeds to treat more patients in economically challenged neighborhoods remotely. Already, 5G threatens to exacerbate the digital divide in the United States by targeting affluent communities for its initial rollout. By investing in more high-speed telemedicine applications, the telecommunications giant could give less privileged patients access to better care, which validates the need for increased government spending. Guirnalda’s lab would be better served by applying the promise of 5G to solving real-life urban challenges, from mass transit to food scarcity to access to healthcare.


The drawback with most corporate 5G incubators is that their windows are opaque – forcing inventors to experiment inside, while the real laboratory bustles outside. The United Nations estimates that by 2050, seventy percent of the world’s population will be urban. While most of this growth will take place in developing countries (primarily Africa and Asia), 80% of global GDP is already generated in cities. The greatest challenge of the 21st century will be managing the sustainable development of these populations. At last month’s UN “World Cities Day,” the diplomatic body stated that 5G, “big data technologies and cloud-computing offer the potential to enhance urban operations, functions, services, designs, strategies and policies.” The UN’s statement did not fall on deaf ears; even President Trump strained to comfort his constituents last month with the confession, “I asked Tim Cook to see if he could get Apple involved in building 5G in the U.S. They have it all – Money, Technology, Vision & Cook!”

Going to CES? Join me for my panel on Retail Robotics, January 8th at 10am, Las Vegas Convention Center.

]]>
Engaging the public in robotics: 11 tips from 5,000 robotics events across Europe https://robohub.org/engaging-the-public-in-robotics-11-tips-from-5000-robotics-events-across-europe/ Thu, 21 Nov 2019 02:11:18 +0000 https://robohub.org/engaging-the-public-in-robotics-11-tips-from-5000-robotics-events-across-europe/

Europe is focussed on making robots that work for the benefit of society. This requires empowering future roboticists and users of all ages and backgrounds. In its 9th edition, the European Robotics Week (#ERW2019) is expected to host more than 1000 events across Europe. Over the years, and over 5,000 events, the organisers have learned a thing or two about reaching the public, and ultimately making the robots people want.

Demystify robotics

For many, robots are only seen in the media or science fiction. The robotics community promises ubiquitous robots, yet most people don’t encounter robots in their work or daily lives. This matters. The Eurobarometer 2017 survey of attitudes towards the impact of digitisation found that the more familiar people are with robots, the more positive they are about the technology. A recent workshop for ERW organisers highlighted the “importance of being able to touch, feel, see and enjoy the presence of robots in order to remove the ‘fear factor’ and improve the image of robots.” People need to interact with real robots to understand their potential, and limitations.

Bring robots to public places

Most robotics events happen where roboticists and their robots already are: in universities and industry. This works well for those who show interest in the field and have the means to attend. To reach a broader audience, robots need to be brought to public places, such as city centres or shopping malls. ERW organisers said: “don’t expect ordinary people to come to universities.” In Ghent, Belgium, for example, space was found in the city library to give visitors an opportunity to interact with robots. More recently, the Smart Cities Robotics (SciRoc) challenge held an international robot competition in a shopping mall in the UK.

Tackle global challenges

Robots have a role to play in tackling today’s most pressing challenges, whether it’s the environment, healthcare, assistive living, or education. Robots can also improve efficiencies in industry and spare workers 4D (dangerous, dirty, difficult, drudgerous) jobs. This is rarely highlighted explicitly; robots are often presented as fun gadgets for their own sake, instead of as useful tools. By positioning robots as the helpers of tomorrow, we empower users to imagine their applications, and roboticists receive meaningful feedback on their use. Such applications may also be more exciting to a broader diversity of people.

The ‘Blue-Eyed Dragon’ Robot by Biljana Vicković (with the University of Belgrade, Mihajlo Institute, Robotics Laboratory Belgrade, Serbia), for example, introduced into a public space an innovative and socially useful robotic artwork with a tin-recycling function. It integrates robotics into an artwork with a demonstrable ecological, social and cultural impact. “The essence of this innovative work of art is that it enables the public to interact with it. As such people are direct participants and not merely an audience. In this way contemplation is replaced by action,” says its creator.

Tell stories about people who work with robots

Useful robots will ultimately be embedded in society, our work, and our lives. Their role is often presented from the developer’s or industry’s perspective. This leaves the public with the sense that robots are being “done to them”, rather than made with them. By bringing users into the discussion, we hear stories of how they use the technology and what their hopes and concerns are, and ultimately design better robots and inspire future users to adopt robots themselves.

Bring a diversity of people together 

Making robots requires a wide range of backgrounds, from social sciences, law, and business, to hardware and software engineering. Domain expertise will also be key: assistive robots will require input from nurse carers, for example. Engaging with a diverse population of makers and users will help ensure the technology is developed for everyone. The ERW2019 central event in Poznan features a panel dedicated to women in digital and robotics.
Carmela Sánchez from Hisparob in Spain says “this year, our motto for ERW is Robotic Thinking and Inclusion. We focus on how robotics and computational thinking can help inclusion: inclusion of different abilities, social, and economic backgrounds, and genders.”

Avoid hype and exaggerations 

Inflated expectations about robotics may lead to disappointment when robots are deployed, or to unfounded fears about their use. A recent ERW organiser commented: “Robots are not prevalent or visible in society at large and so prevailing perceptions about robots are largely shaped by media presentation, which too often resort to negative stereotypes.” It’s worth noting that robots are typically made for a single task, and many do not look like humanoids. Through this lens, robots no longer seem too difficult to engineer, and are far from science-fiction depictions. This could be empowering for those who would like to become roboticists, and could help users imagine robots that would be helpful to them. The Smart Cities Robotics challenge, for example, showed the crowds how robots could help them take a lift, or deliver emergency medicine in a mall.


Teach teachers

By teaching teachers to teach robotics, we can reach many more students than is possible through all the European Robotics Week events combined. Lia Garcia, founder of Logix5 and a national coordinator of ERW in Spain, underscored the need to engage the education sector: “We have to work with teachers. We need to get robotics onto the school curriculum, onto the teaching college curriculum and to get to teachers who teach teachers.” Workshops that teach educators and help spread the word among local teachers are essential. As an added encouragement, educators could receive CPD (continuing professional development) credits for taking part in robotics workshops. The ERW2019 central event in Poznan features a workshop dedicated to robotics education in Europe on 15 November.

Run competitions

Competitions are an important way of bringing students into robotics. They are fun and exciting, and show students they can build something that works in the real world. Europe now hosts several large robotics competitions, including the European Robotics League (Emergency, Consumer, Professional, and Smart Cities). While these competitions are tailored to university students, others are run for kids. The ERW event page already has over 100 robot competitions and challenges listed for this year. Fiorella Operto from Scuola di Robotica has coordinated more than 100 teams from all over Italy committed to using a humanoid robot to promote Italian cultural heritage. The 2020 edition of the NAO Challenge is devoted to “Arts&Cultures”, asking teams to use robotics to deepen knowledge of and promote beautiful Italian art.

Keep it fun

More than ever, we have a broad range of tools to engage the public. It could be as simple as drawing pictures of robots, or as involved as developing robot-themed escape rooms, or engaging on social media including YouTube, Twitter, Instagram and TikTok. Robots are fun, which is why they are such good tools in education. Be creative with demos and activities: make robots dance, allow people to decorate them, play games. The University of Bristol, for example, will be running a swarm-themed escape room called Swarm Escape!

Engage with stakeholders

Events with the public are a good opportunity to engage with stakeholders, including government, industry, and users. This is important as stakeholders will ultimately be the ones making robots a reality. Having them participate in such events helps them understand the potential, invest in technology and skills, and shape policy. It could also provide funding for some of the more ambitious events. “For the first time since 2012, Robotics Place, the cluster of Occitanie, organizes a one day meeting with its members on November 20th in Toulouse. Robotics Place members will meet with press, politics, students, partners and professional customers.” says Philippe Roussel, a local coordinator for France.

Act regionally, connect across Europe

Events take place across Europe, organised regionally for the local community. Connecting these events at a European scale increases impact, raises awareness, builds momentum, and allows lessons to be shared across the continent. euRobotics and Digital Innovation Hubs provide valuable resources for these purposes.

Yet there is a divide in access, with cities better catered to than rural or poorer communities. The challenge is to provide everyone with access and exposure to robotics and its opportunities. Extra effort should be made to reach out to underserved communities, for example with a “robot roadshow”. Organisers of ERW said “a further benefit of this cross-border approach would be to enhance the European dimension.” As an example, from May 2020, a 105m-long floating science center called the MS Experimenta will tour southern Germany, bringing science from port to port.

Get involved

Feeling inspired, ready to make a difference? Organise your own European Robotics Week event, big or small, and register it here along with the over 900 events already announced.

]]>
Summer travel diary: Reopening cold cases with robotic data discoveries https://robohub.org/summer-travel-diary-reopening-cold-cases-with-robotic-data-discoveries/ Wed, 14 Aug 2019 21:44:39 +0000 https://robohub.org/summer-travel-diary-reopening-cold-cases-with-robotic-data-discoveries/ ]]> Traveling to six countries in eighteen days, I journeyed with the goal of delving deeper into the roots of my family before World War II. As the child of refugees, I grew up with huge gaps in my parents’ narrative. Still, more than seventy-eight years after the disappearance of my Grandmother and Uncles, we can only presume with a degree of certainty that they perished in the mass graves of the forest outside Riga, Latvia. In our data-rich world, archivists are finally piecing together new clues of history, using unmanned systems to reopen cold cases.

The Nazis were masters at using technology to mechanize killing and erase all evidence of their crimes. Nowhere is this more apparent than in Treblinka, Poland. The death camp exterminated close to 900,000 Jews over a 15-month period before a revolt led to its dismantlement in 1943. Only a Holocaust memorial stands today on the site of the former gas chamber as a testimony to the memory of the victims. Recently, scientists have begun to unearth new forensic evidence of the Third Reich’s war crimes using LIDAR to expose the full extent of their death factory.

In her work, “Holocaust Archaeologies: Approaches and Future Directions,” Dr. Caroline Sturdy Colls undertook an eight-year project to piece together archeological facts from survivor accounts using remote sensors that are more commonly associated with autonomous vehicles and robots than Holocaust studies. As she explains, “I saw working at Treblinka as a cold case where excavation is not permitted, desirable or wanted, [non-invasive] tools offer the possibility to record and examine topographies of atrocity in such a way that the disturbance of the ground is avoided.” Stitching together point cloud outputs from aerial LIDAR sensors, Professor Sturdy Colls stripped away the post-Holocaust vegetation to expose the camp’s original foundations, “revealing the bare earth of the former camp area.” As she writes, “One of the key advantages that LIDAR offers over other remote sensing technologies is its ability to propagate the signal emitted through vegetation such as trees. This means that it is possible to record features that are otherwise invisible or inaccessible using ground-based survey methods.”
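The “bare earth” idea can be illustrated with a toy filter: because some aerial LIDAR pulses penetrate gaps in the tree canopy, keeping only the lowest return in each grid cell approximates the ground surface beneath the vegetation. The sketch below is a deliberately simplified stand-in for the processing used in such surveys (production workflows use far more sophisticated ground-classification filters), and all coordinates are hypothetical.

```python
# Toy "bare-earth" extraction: keep the lowest LIDAR return per grid cell,
# discarding higher returns reflected off vegetation. A simplified
# illustration only, not the researchers' actual pipeline.

def bare_earth(points, cell_size=1.0):
    """Return the lowest (x, y, z) point in each cell of a 2D grid.

    points: iterable of (x, y, z) tuples; cell_size: grid resolution in metres.
    """
    lowest = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return sorted(lowest.values())

# Hypothetical cloud: ground returns near z = 100 m, canopy returns near 115 m.
cloud = [(0.2, 0.3, 100.1), (0.6, 0.4, 115.2),   # same cell: canopy discarded
         (1.5, 0.5, 100.4), (1.7, 0.2, 114.8),
         (0.4, 1.6, 99.9)]
print(bare_earth(cloud))  # three cells survive, each keeping its lowest return
```

The same minimum-per-cell principle, applied at survey scale with millions of returns, is what lets a digital terrain model “see through” forest cover to subtle earthworks and foundations.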

Through her research, Sturdy Colls was able to locate several previously unmarked mass graves, transport infrastructure and camp-era buildings, including structures associated with the 1943 prisoner revolt. She credits the technology for her findings: “This is mainly due to developments in remote sensing technologies, geophysics, geographical information systems (GIS) and digital archeology, alongside a greater appreciation of systematic search strategies and landscape profiling.” The researcher stressed the importance of finding closure after seventy-five years: “I work with families in forensics work, and I can’t imagine what it’s like not to know what happened to your family members.” Sturdy Colls’ techniques are now being deployed across Europe at other concentration camp sites and places of mass murder.

Flying north from Poland, I landed in the Dutch city of Amsterdam to take part in its year-long celebration of Rembrandt (350 years since his passing). At the Rijksmuseum’s Hall of Honors, a robot is featured in front of the old master’s monumental work, “Night Watch.” The autonomous macro X-ray fluorescence scanner (Macro-XRF scanner) is busy analyzing the chemical makeup of the paint layers to map and database the age of the pigments. This project, aptly named “Operation Night Watch,” can be experienced live or online, showcasing a suite of technologies to determine the best methodologies to return the 1642 painting to its original glory. Night Watch has a long history of abuse, including two world wars, multiple knifings, one acid attack, botched conservation attempts, and even the trimming of the canvas in 1715 to fit a smaller space. In fact, its modern name is really a moniker of the dirt built up over the years, not the Master’s composition, initially entitled “Militia Company of District II under the Command of Captain Frans Banninck Cocq.”

In explaining the multi-million-dollar undertaking, the museum’s director, Taco Dibbits, boasted in a recent interview that Operation Night Watch will be the Rijksmuseum’s “biggest conservation and research project ever.” Currently, the Macro-XRF robot takes 24 hours to perform one scan of the entire picture, with a demanding schedule ahead of 56 more scans and 12,500 high-resolution images. The entire project is slated to be completed within a couple of years. Dibbits explains that the restoration will provide previously unknown insights into the painter and his magnum opus: “You will be able to see much more detail, and there will be areas of the painting that will be much easier to read. There are many mysteries of the painting that we might solve. We actually don’t know much about how Rembrandt painted it. With the last conservation, the techniques were limited to basically X-ray photos and now we have so many more tools. We will be able to look into the creative mind of one of the most brilliant artists in the world.”

Whether it is celebrating the narrative of great works of art or preserving the memory of the Holocaust, modern conservation relies heavily on the accessibility of affordable mechatronic devices. Anna Lopuska, a conservator at the Auschwitz-Birkenau Museum in Poland, describes the Museum’s herculean task: “We are doing something against the initial idea of the Nazis who built this camp. They didn’t want it to last. We’re making it last.” New advances in optics and hardware enable Lopuska’s team to catalog and maintain the massive camp site with “minimum intervention.” The magnitude of its preservation efforts is listed on its website, which includes: “155 buildings (including original camp blocks, barracks, and outbuildings), some 300 ruins and other vestiges of the camp—including the ruins of the four gas chambers and crematoria at the Auschwitz II-Birkenau site that are of particular historical significance—as well as more than 13 km of fencing, 3,600 concrete fence posts, and many other installations.” This is on top of a collection of artifacts of human tragedy, each item representing a person: “110 thousand shoes, about 3,800 suitcases, 12 thousand pots and pans, 40 kg of eyeglasses, 470 prostheses, 570 items of camp clothing, as well as 4,500 works of art.” Every year more survivors pass away, making Lopuska’s task, and the unmanned systems she employs, more critical. As the conservator reminds us, “Within 20 years, there will be only these objects speaking for this place.”


]]>
Robots can play key roles in repairing our infrastructure https://robohub.org/robots-can-play-key-roles-in-repairing-our-infrastructure/ Sun, 30 Jun 2019 22:53:34 +0000 https://robohub.org/robots-can-play-key-roles-in-repairing-our-infrastructure/

Pipeline inspection robot

I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was further endorsed by a Huffington Post article reporting that, over a twenty-year period, such tragedies led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various other types of damages…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and waste management pump facilities; its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. By pairing Pure’s digital inspection expertise with mechatronics, the combined companies are able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi (a leading provider of testing equipment) acquired Inuktun, a robot manufacturer of semi-autonomous crawling systems. This was the sixth acquisition by fast-growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborates: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [Non Destructive Testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants in this space include: CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering, led by Professor Kirill Horoshenkov. Dr. Horoshenkov stresses that his mission is more than commercial: “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expands further on the research objectives of his team, “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer Rolls-Royce shared with reporters its efforts in developing mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood and his ambulatory microrobot for search and rescue missions. James Kell of Rolls-Royce proclaims this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Boston scientists: it has established, with the University of Nottingham, a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” The project lead, Dragos Axinte, is optimistic about the spillover effect of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

]]>