Silicon Valley Robotics – Robohub (https://robohub.org) – Connecting the robotics community to the world

California is the robotics capital of the world
https://robohub.org/california-is-the-robotics-capital-of-the-world/
Sun, 12 Nov 2023

I came to the Silicon Valley region in 2010 because I knew it was the robotics center of the world, but it certainly doesn’t get anywhere near the media attention that some other robotics regions do. In California, robotics technology is a small fish in a much bigger technology pond, and that tends to conceal how important Californian companies are to the robotics revolution.

This conservative dataset from Pitchbook [Vertical: Robotics and Drones] covers 7,166 robotics and drones companies, although a more customized search would surface closer to 10,000 robotics companies worldwide. Regions ordered by company count:

  • North America 2802
  • Asia 2337
  • Europe 2285
  • Middle East 321
  • Oceania 155
  • South America 111
  • Africa 63
  • Central America 13
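
For readers who want to slice these numbers themselves, here is a minimal sketch tallying the regional counts exactly as listed above. Note the regional figures sum to more than the 7,166-company dataset, presumably because companies with locations in multiple regions are counted in each region (the same locations-versus-headquarters caveat noted for the US state table below):

```python
# Pitchbook regional counts as listed above (company locations, not HQs).
regions = {
    "North America": 2802,
    "Asia": 2337,
    "Europe": 2285,
    "Middle East": 321,
    "Oceania": 155,
    "South America": 111,
    "Africa": 63,
    "Central America": 13,
}

total = sum(regions.values())
for name, count in sorted(regions.items(), key=lambda kv: -kv[1]):
    # Share is of the listed regional totals, not of the 7,166 unique companies.
    print(f"{name:16s} {count:5d}  ({count / total:5.1%} of listed locations)")
print(f"{'Total':16s} {total:5d}")
```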


USA robotics companies by state

  1. California = 843 (667) *first number = all company locations; number in brackets = headquarters
  2. Texas = 220 (159)
  3. New York = 193 (121)
  4. Massachusetts = 191 (135)
  5. Florida = 136 (95)
  6. Pennsylvania = 113 (89)
  7. Washington = 85 (61)
  8. Colorado = 83 (57)
  9. Virginia = 81 (61)
  10. Michigan = 70 (56)
  11. Illinois = 66 (43)
  12. Ohio = 65 (56)
  13. Georgia = 64 (46)
  14. New Jersey = 53 (36)
  15. Delaware = 49 (18)
  16. Maryland = 48 (34)
  17. Arizona = 48 (37)
  18. Nevada = 42 (29)
  19. North Carolina = 39 (29)
  20. Minnesota = 31 (25)
  21. Utah = 30 (24)
  22. Indiana = 29 (26)
  23. Oregon = 29 (20)
  24. Connecticut = 27 (22)
  25. DC = 26 (12)
  26. Alabama = 25 (21)
  27. Tennessee = 20 (18)
  28. Iowa = 17 (14)
  29. New Mexico = 17 (15)
  30. Missouri = 17 (16)
  31. Wisconsin = 15 (12)
  32. North Dakota = 14 (8)
  33. South Carolina = 13 (11)
  34. New Hampshire = 13 (12)
  35. Nebraska = 13 (11)
  36. Oklahoma = 10 (8)
  37. Kentucky = 10 (7)
  38. Kansas = 9 (9)
  39. Louisiana = 9 (8)
  40. Rhode Island = 8 (6)
  41. Idaho = 8 (6)
  42. Maine = 5 (5)
  43. Montana = 5 (4)
  44. Wyoming = 5 (3)
  45. Mississippi = 3 (1)
  46. Arkansas = 3 (2)
  47. Alaska = 3 (3)
  48. Hawaii = 2 (1)
  49. West Virginia = 1 (1)
  50. South Dakota = 1 (0)

Note: the number in brackets counts HQ locations, whereas the first number counts all company locations. The resulting rankings are practically the same either way.


ASIA robotics companies by country

  1. China = 1350
  2. Japan = 283
  3. India = 261
  4. South Korea = 246
  5. Israel = 193
  6. Hong Kong = 72
  7. Russia = 69
  8. United Arab Emirates = 50
  9. Turkey = 48
  10. Malaysia = 35
  11. Taiwan = 21
  12. Saudi Arabia = 19
  13. Thailand = 13
  14. Vietnam = 12
  15. Indonesia = 10
  16. Lebanon = 7
  17. Kazakhstan = 3
  18. Iran = 3
  19. Kuwait = 3
  20. Oman = 3
  21. Qatar = 3
  22. Pakistan = 3
  23. Philippines = 2
  24. Bahrain = 2
  25. Georgia = 2
  26. Sri Lanka = 2
  27. Azerbaijan = 1
  28. Nepal = 1
  29. Armenia = 1
  30. Burma/Myanmar = 1

Countries with no robotics companies in this dataset: Yemen, Iraq, Syria, Turkmenistan, Afghanistan, Jordan, Uzbekistan, Kyrgyzstan, Tajikistan, Bangladesh, Bhutan, Mongolia, Cambodia, Laos, North Korea, East Timor.


UK/EUROPE robotics companies by country

  1. United Kingdom = 443
  2. Germany = 331
  3. France = 320
  4. Spain = 159
  5. Netherlands = 156
  6. Switzerland = 140
  7. Italy = 125
  8. Denmark = 115
  9. Sweden = 85
  10. Norway = 80
  11. Poland = 74
  12. Belgium = 72
  13. Russia = 69
  14. Austria = 51
  15. Turkey = 48
  16. Finland = 45
  17. Portugal = 36
  18. Ireland = 28
  19. Estonia = 24
  20. Ukraine = 22
  21. Czech Republic = 19
  22. Romania = 19
  23. Hungary = 18
  24. Lithuania = 18
  25. Latvia = 15
  26. Greece = 15
  27. Bulgaria = 11
  28. Slovakia = 10
  29. Croatia = 7
  30. Slovenia = 6
  31. Serbia = 6
  32. Belarus = 4
  33. Iceland = 3
  34. Cyprus = 2
  35. Bosnia & Herzegovina = 1

Countries with no robotics companies in this dataset: Andorra, Montenegro, Albania, Macedonia, Kosovo, Moldova, Malta, Vatican City.


CANADA robotics companies by region

  1. Ontario = 144
  2. British Columbia = 60
  3. Quebec = 53
  4. Alberta = 34
  5. Manitoba = 7
  6. Saskatchewan = 6
  7. Newfoundland & Labrador = 2
  8. Yukon = 1

Regions with no robotics companies in this dataset: Nunavut, Northwest Territories.

The robots of #IROS2023
https://robohub.org/the-robots-of-iros2023/
Wed, 11 Oct 2023

The International Conference on Intelligent Robots and Systems (IROS) showcases leading-edge research in robotics. IROS was held in Detroit, MI, Oct 1-5, and showcased not only research but also the latest commercialization in robotics, particularly from companies selling robots for research or components of the hardware/software stack. The conference focuses on future directions in robotics and the latest approaches, designs and outcomes. It also provides an opportunity to network with the world’s leading roboticists.

Highlights included seeing Silicon Valley Robotics members Foxglove, Hello Robot, Anyware Robotics and Tangram Vision, as well as Open Robotics and Intrinsic talking up ROS 2 and the upcoming ROSCon 2023. Intrinsic sponsored a ROS/IROS meetup, and Clearpath Robotics sponsored the Diversity Cocktails event. OhmniLabs sponsored three telepresence robots, which were in constant demand touring the expo, the competition floor and the poster sessions. I also met Sol Robotics from the Bay Area, whose unique robot arm structure is extremely stable and able to carry a lot of weight.

There were plenty of rolling and roaming robots, like the Diablo from Direct Drive Tech (the world’s first direct-drive self-balancing wheeled-leg robot), along with legged platforms from Deep Robotics, Unitree Robotics, Fourier Intelligence, HEBI and Westwood Robotics, and wheeled robots from Clearpath, Otto, Husarion, HEBI and more. Although it wasn’t on the expo floor, the Disney keynote session was another highlight, with a live robot demo on stage.

And Franka Emika fans will be pleased to hear that not only did the company win a ‘best paper’ award, but the imminent demise of the company is much overstated. It’s a German thing. There are many investors and purchasers lined up to keep the company going while it restructures. And watch out for Psyonic! Psyonic’s ability hands and arms, the world’s first touch-sensing bionic arms, are being used by Apptronik (whose humanoid is being developed for NASA) as well as by people with disabilities.

IROS Exhibitor gallery
Full list of IROS Exhibitors is here.

SVR Guide to Robotics Research and Education 2023
https://robohub.org/svr-guide-to-robotics-research-and-education-2023/
Tue, 22 Aug 2023

In the last decade we have seen more robotics innovation becoming real products and companies than in the entire history of robotics.

Furthermore, the greater Silicon Valley and San Francisco Bay Area is at the center of this ‘Cambrian Explosion in Robotics’ as Dr Gill Pratt, Director of Robotics at Toyota Research Institute described it. In fact, two of the very first robots were developed right here.

In 1969 at Stanford, Vic Scheinman designed the first electric, computer-controlled robot arm. After successful pilots and interest from General Motors, Unimation acquired the design and released the PUMA, or Programmable Universal Machine for Assembly. Unimation was eventually acquired by Staubli, and the PUMA became one of the most successful industrial robots of all time.

Shakey, developed at SRI International from 1966 to 1972, was the first mobile robot able to perceive and reason about its surroundings; Life magazine called it the world’s first electronic person in 1970. Shakey pioneered many advances in computer vision, path planning and control systems that are still in use today.

These companies have been at the heart of Silicon Valley Robotics, the regional robotics ecosystem/association, but we have also seen enormous growth in new robotics companies and startups in the last decade.

And all of them are hiring.

This volume serves as a guide for students interested in studying robotics in any capacity. Robotics jobs range from service technician, electrical or mechanical engineer, control systems and computer science, to interaction or experience designer, human factors and industrial design.

All these skills are in great demand at robotics companies around the world, and people with robotics experience are sought after everywhere. Robotics is a complex multidisciplinary field, which gives you the opportunity to develop problem-solving skills and a holistic approach.

The robotics industry also needs people with the skills to grow a business, not just build robots: product and project management, human resources, sales, marketing and operations.

Get involved in robotics – the industry of the 21st century.

The guide

The 5 Laws of Robotics
https://robohub.org/the-5-laws-of-robotics/
Thu, 11 May 2023

I have been studying the whole range of issues and opportunities in the commercial rollout of robotics for many years now, and I’ve spoken at a number of conferences about the best way for us to look at regulating robotics. In the process I’ve found that my guidelines most closely match the EPSRC Principles of Robotics, although I place additional focus on potential solutions. And I’m calling them the 5 Laws of Robotics because it’s so hard to avoid Asimov’s Laws of Robotics in the public perception of what needs to be done.

The first and most obvious point about these “5 Laws of Robotics” is that I’m not suggesting actual laws, and neither, actually, was Asimov with his famous 3 Laws (technically 4 of them). Asimov proposed something hardwired or hardcoded into the existence of robots, and of course that didn’t work perfectly, which gave him the material for his books. Interestingly, Asimov believed, as did many others at the time (symbolic AI, anyone?), that it would be possible to define effective yet global behavioral rules for robots. I don’t.

My 5 Laws of Robotics are:

  1. Robots should not kill.
  2. Robots should obey the law.
  3. Robots should be good products.
  4. Robots should be truthful.
  5. Robots should be identifiable.

What exactly do those laws mean?

Firstly, people should not be legally able to weaponize robots, although there may be lawful exclusions for use by defense forces or first responders. Some people are completely opposed to Lethal Autonomous Weapon Systems (LAWS) in any form, whereas others draw the line at robot weapons being ultimately under human command, with accountability to law. Currently there is proposed legislation in California to introduce fines for individuals building or modifying weaponized robots, drones or autonomous systems, with an exception for ‘lawful’ use.

Secondly, robots should be built so that they comply with existing laws, including privacy laws. This implies some form of accountability for companies on compliance in various jurisdictions. While that is technically very complex, successful companies will be proactive; otherwise there will be a lot of court cases and insurance claims keeping lawyers happy while badly damaging the reputation of all robotics companies.

Thirdly, although we are continually developing and adapting standards as our technologies evolve, the core principle is that robots are products, designed to do tasks for people. As such, robots should be safe, reliable and do what they claim to do, in the manner that they claim to operate. Misrepresentation of the capabilities of any product is universally frowned upon.

Fourthly, and this is a fairly unique capability of robots: robots should not lie. Robots create the illusion of emotions and agency, and humans are very susceptible to being ‘digitally nudged’ or manipulated by artificial agents. Examples include robots or avatars claiming to be your friend, but manipulation can be as subtle as a robot using a human voice as if there were a real person listening and speaking, or not explaining that a conversation you’re having with a robot might have many listeners at other times and locations. Robots are potentially amazingly effective advertising vehicles, in ways we are not yet expecting.

Finally, and this extends the principles of accountability, transparency and truthfulness, it should be possible to know who is the owner and/or operator of any robot that we interact with, even if we’re just sharing a sidewalk with them. Almost every other vehicle has to comply with some registration law or process, allowing ownership to be identified.

What can we do to act on these laws?

  1. Robot Registry (license plates, access to database of owners/operators)
  2. Algorithmic Transparency (via Model Cards and Testing Benchmarks)
  3. Independent Ethical Review Boards (as in biotech industry)
  4. Robot Ombudspeople (to liaise between the public, policy makers and the robotics industry)
  5. Rewarding Good Robots (design awards and case studies)
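
As an illustration of the first action item, a robot registry could be as simple as a public lookup from a visible plate ID to the responsible owner and operator. The sketch below is purely hypothetical: the record fields, plate format and names are invented for illustration, not drawn from any existing registry:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotRecord:
    plate_id: str  # identifier displayed on the robot, like a license plate
    owner: str     # legal owner, accountable under existing law
    operator: str  # day-to-day operator, supporting identifiability
    contact: str   # public point of contact, e.g. an ombudsperson's office

class RobotRegistry:
    def __init__(self) -> None:
        self._records: dict = {}

    def register(self, record: RobotRecord) -> None:
        self._records[record.plate_id] = record

    def lookup(self, plate_id: str) -> Optional[RobotRecord]:
        # Anyone sharing a sidewalk with a robot could query this.
        return self._records.get(plate_id)

# Hypothetical example entry.
registry = RobotRegistry()
registry.register(RobotRecord("SVR-0042", "Acme Delivery Inc.",
                              "Acme Ops LLC", "ombuds@example.org"))
print(registry.lookup("SVR-0042").owner)
```

The interesting design questions are policy, not code: who runs the database, what queries are public, and how operator changes are audited.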

There are many organizations releasing guides, principles and suggested laws. I’ve surveyed most of them and looked at the research. Most are just ethical hand-wringing and accomplish nothing, because they don’t factor in real-world conditions: what the goals are, who would be responsible, and how to make progress towards the goals. I wrote about this issue ahead of giving a talk at the ARM Developer Summit in 2020 (video included below).

Silicon Valley Robotics announced the first winners of our inaugural Robotics Industry Awards in 2020. The SVR Industry Awards consider responsible design as well as technological innovation and commercial success. There are also some ethical checkmark or certification initiatives in preparation, but, like the development of new standards, these can take a long time to do properly. Awards, endorsements and case studies, by contrast, can be available immediately to foster discussion of what constitutes a good robot and which social challenges robotics needs to solve.

The Federal Trade Commission recently published “The Luring Test: AI and the engineering of consumer trust”, which speaks directly to these concerns about consumer manipulation.

For those not familiar with Isaac Asimov’s famous Three Laws of Robotics, they are:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth (called the Zeroth Law, as in 0, 1, 2, 3):

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Robin R. Murphy and David D. Woods have updated Asimov’s laws to be more similar to the laws I propose above, and they provide a good analysis of what Asimov’s Laws meant and why they changed them to deal with modern robotics: “Beyond Asimov: The Three Laws of Responsible Robotics” (2009).

Some other selections from the hundreds of principles, guidelines and surveys of the ethical landscape that I recommend come from one of the original EPSRC authors, Joanna Bryson.

The Meaning of the EPSRC Principles of Robotics (2016)

And the 2016/2017 update from the original EPSRC team:

Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby & Alan Winfield (2017) Principles of robotics: regulating robots in the real world, Connection Science, 29:2, 124-129, DOI: 10.1080/09540091.2016.1271400

Another survey worth reading is on the Stanford Plato site: https://plato.stanford.edu/entries/ethics-ai/

The robots of CES 2023
https://robohub.org/the-robots-of-ces-2023/
Wed, 25 Jan 2023

Robots were on the main expo floor at CES this year, and these weren’t just cool robots for marketing purposes. I’ve been tracking robots at CES for more than 10 years, watching the transition from robot toys to real robots. Increasing prominence has been given to self-driving cars, LiDARs and eVTOL drones, but in my mind it was the inclusion of John Deere and agricultural robots last year that confirmed CES was incorporating more industry and more real machines, not just gadgets.

In fact, according to the organizing body, the Consumer Technology Association (CTA), CES no longer stands for the Consumer Electronics Show. CES now just stands for CES, one of the world’s largest technology expos.

Eve from Halodi Robotics shakes hands at CES 2023 with Karinne Ramirez-Amaro, associate professor at Chalmers University of Technology and head of IEEE Robotics and Automation Society’s Women in Engineering chapter. (Image source: Andra Keay)

The very first robot I saw was Eve from Halodi Robotics, exhibiting in the ADT Commercial booth. I am a big fan of this company. Not only do they have great robotics technology, which is very safe and collaborative, but I’ve watched them go from an angel-funded startup to their first commercial deployments, providing 140 robots to ADT. One of their secrets has been spending the last year working closely with ADT to fine-tune the first production features of Eve, focusing on indoor security and working alongside people. In the future, Halodi has potential for many other applications, including eldercare.

https://www.youtube.com/watch?v=xv3U9FoBuRA

Another robot company (and robot) that I’m a big fan of is Labrador Systems, with their mobile tray-fetching robot for eldercare. Labrador exhibited their mobile robot in the AARP Innovation Lab pavilion, and they are rolling out robots both in homes and in aged-care facilities. Two units are pictured; the platform can raise or lower depending on whether it needs to reach a countertop or fridge to retrieve items, like drinks and medications, or lower itself to become a bed- or chair-side table. The units can be commanded by voice or tablet, or scheduled to travel between designated ‘bus stops’, using advanced localization and mapping. The team at Labrador have a wealth of experience at other consumer robotics companies.

Two Retrievers from Labrador Systems in the AARP Innovation Lab Pavilion at CES 2023. (Image source: Andra Keay)

I first met Sampriti Bhattacharyya, pictured below with her autonomous electric boat, when she was still doing her robotics PhD at MIT, dreaming about starting her own company. Five short years later, she’s the proud founder of Navier, with not one but two working prototypes of the ‘boat of the future’. The Navier 30 is a 30-foot intelligent electric hydrofoil with a range of 75 nautical miles and a top speed of 35 knots. Not only is the electric hydrofoil 90% more energy efficient than traditional boats, it also eliminates seasickness with a super smooth ride. Sampriti is planning to revolutionize public transport for cities that span waterways, like San Francisco, Boston or New York.

Navier’s ‘boat of the future’ with founder Sampriti Bhattacharyya, plus an extra stowaway quadruped robot from Unitree. (Image source: Andra Keay)

Another rapidly evolving robotics company is Yarbo. Starting out five years ago as a prototype snow-blowing robot called Snowbot, after years of solid R&D it has evolved into Yarbo, a modular family of smart yard tools. Imagine a mobile base with smart docking that can be converted from a lawn mower to a snow blower or a leaf blower. It navigates autonomously, and it’s electric, of course.

And robotics companies like these are making waves at CES. I met French startup Acwa Robotics earlier in 2022 and was so impressed that I nominated them as an IEEE Entrepreneurship Star. Water utilities around the world are faced with aging and damaged infrastructure and inaccessible underground pipes, which are responsible for huge amounts of water loss and road and building damage. Acwa’s intelligent robot travels inside the pipes without stopping water flow and provides rapid, precisely localized inspection data that can pinpoint leaks, cracks and deterioration. Acwa was nominated for honors in the Smart Cities category and also won a CES Best of Innovation Award.

Acwa Robotics and CES 2023 Best of Innovation Awards (Image Source: Acwa Robotics)

Some other robotics companies and startups worth looking at were Apex.ai, Caterpillar, Unitree, Bosch Rexroth, Waymo, Zoox, Whill, Meropy, Artemis Robotics, Fluent Pet and Orangewood. Let me know who I missed! According to the app, 278 companies tagged themselves as Robotics, 73 as Drones, 514 as Vehicle Tech, and 722 as Startups. I’d say the overall number of exhibitors and attendees was down on previous years, but there were definitely more robots.

Women in Tech leadership resources from IMTS 2022
https://robohub.org/women-in-tech-leadership-resources-from-imts-2022/
Thu, 29 Sep 2022

There have been quite a few events recently focusing on Women in Robotics, Women in Manufacturing, Women in 3D Printing, in Engineering, and in Tech Leadership. IMTS 2022, one of the largest tradeshows in the US, kicked off with a Women Make Manufacturing Move Reception featuring Allison Grealis, President of Women in Manufacturing, Andra Keay, President of Women in Robotics, and Kristin Mulherin, President of Women in 3D Printing, ahead of a program packed with curated technical content and leadership sessions (see the resource list below).

On Tuesday, I also moderated the A3 webinar “Robots and Beyond Roundtable: How Women in Robotics and Automation are Changing Manufacturing” with Joanne Moretti, CRO of Fictiv; Jackie Ram, VP of Operations, IAM Robotics; Jessica Moran, SVP and General Manager, Berkshire Grey; and Mikell Taylor, Principal Technical Program Manager, Amazon Robotics.

And on Thursday, Sept. 15, I moderated a panel on “Reaching New Heights: Women in Tech Leadership” with Meaghan Ziemba, Owner of Z-Ink Solutions and Host of the Mavens of Manufacturing podcast; Nicole Wolter, President & CEO, HM Manufacturing; and Laura Elan, Senior Director of Cybersecurity, MxD. We discussed so many great resources during our panel pre-call that we wanted to put together a list that could be shared more widely after the events were over.

Resources for Women in Tech Leadership:

Suggestions from Nicole Wolter:

  • Sporting experience is a great pathway into leadership for women who often miss out on formal leadership training or experiences.
  • Find mentors and champions (male or female) with an eye towards helping you develop your career pathway and deal with your next challenges.
  • Build on your strengths; while we can always improve, we should embrace our strengths.
  • The Goal is a great book about running a company. (now also available as a Business Graphic Novel)
  • Crucial Conversations: Tools for Talking When Stakes Are High by Joseph Grenny, Kerry Patterson and Ron McMillan

From Meaghan Ziemba:

From Laura Elan:

Other Resources:

Hardworking women deserve footwear that is both beautiful and strong. Xena Workwear’s women’s steel toe boots & safety shoes are handcrafted with high-quality materials, look stunning, and are not bulky or masculine. Each style is ASTM certified and handcrafted to help you feel safe and confident. (not just footwear!)

But wait, there’s more!

Additionally, you can find interviews with some of these kickass women in the IMTS+ coverage. 

In IMTS+ In Conversation With, host Tim Shinbara, Chief Technology Officer of AMT – The Association for Manufacturing Technology, interviews Barbara Humpton, President & CEO, Siemens Corporation.

https://www.imts.com/watch/video-details/In-Conversation-With-Barbara-Humpton/199

Join IMTS+  Host, Marley Kayden for IMTS Unwind from Wednesday at the show. Featuring live interviews with: Nicole Wolter, President & CEO, HM Manufacturing; Aneesa Muthana, President, CEO, and Owner of Pioneer Service, Inc.; Yvonne Wiedemann, President & Owner of CAM Logic; Austin Schmidt, President of Additive Engineering Solutions; Max Egan, and Travis Egan, Chief Revenue Officer, AMT – The Association for Manufacturing Technology.

https://www.imts.com/watch/video-details/IMTS-Unwind-Wednesday-September-14-2022/198

Join IMTS+ Host Marley Kayden for IMTS Today (Wednesday, September 14), featuring: Andra Keay, Vice President of Global Robotics; Robby Komljenovic, Chairman & CEO, Acieta; Richard Browning, Technologist and Founder of Gravity Industries; and Barbara Humpton, President & CEO of Siemens.

Join IMTS+ Host Marley Kayden for IMTS Today, Friday, September 16: Featuring: Dr. Jeffrey Ahrstrom, CEO, Ingersoll; Meaghan Ziemba, Owner, Z-Ink Solutions, Founder & Host, Mavens of Manufacturing; Mitch Free Founder & CEO, ZYCI and Trusted Source; and John Dyck, CEO, CESMII: The Smart Manufacturing Institute.

https://www.imts.com/watch/video-details/IMTS-Today-Friday-September-16-2022/204

Join IMTS+ Host, Marley Kayden for IMTS Unwind. Featuring live interviews with: Tim Shinbara, Chief Technology Officer, AMT – The Association for Manufacturing Technology; Jeremy Nyenhuis, Owner of J3 Machine & Engineering, LLC.; and Courtney Tate, Owner of Ontime Quality Machining. Additional live interviews with social media influencers James Soto, Founder & CEO, INDUSTRIAL and Partnership Advocate; and Charli K. Matthews, Founder & CEO, Empowering Pumps & Equipment, and Champion of Women.

https://www.imts.com/watch/video-details/IMTS-Unwind-Monday-September-12-2022/188

Seeing the robots at #ICRA2022 through the eyes of a robot
https://robohub.org/seeing-the-robots-at-icra2022-through-the-eyes-of-a-robot/
Fri, 17 Jun 2022

Accessibility@ICRA2022 and OhmniLabs provided three OhmniBots for the conference, allowing students, faculty and interested industry members to attend the expo and poster sessions remotely. The deployment of telepresence robots was greatly appreciated by everyone, from people with medical conditions or visa issues preventing them from attending, to people who were able to sample a little of the conference as an introduction, including future conference organizers.

There were more than 60 requests to “Tour ICRA 2022 via Telepresence Robots” from as far away as Africa, India, the Asia Pacific, Australia and Europe, as well as from across the USA. 24.6% of requests came from interested industry people, with an additional 1.8% from the investment community, 1.8% from the media and 5.3% from the general public.

ICRA 2022 General Chair George Pappas commented that “The telepresence robots were a big hit! It added a very nice innovation and was discussed very positively on social media, particularly from registrants that could not make it.”

Thuc Vu, the CEO of OhmniLabs, the company that provided ICRA with telepresence robots, was delighted to see a conference that was taking a meaningful step for people with accessibility issues. OhmniLabs provide telepresence robots for hospitals and schools as well as for manufacturing or workplace settings and have seen first hand the difference that having a physical ‘body’ and the ability to move around makes in the experience, compared to a video call.

One ICRA researcher presented their poster session via robot, and several faculty members who couldn’t attend in person were able to be there for their students’ poster sessions. 12.5% of the requests for telepresence robots came from people who had a paper or poster accepted at ICRA but couldn’t travel. Ahalya Ravendran, a PhD student at the Sydney Institute for Robotics and Intelligent Systems (SIRIS) at Sydney University, said:

“Did I tell y’all I bot-suit myself for my conference #ICRA2022 poster session? @RobotLaunch was kind enough to allocate a slot for me during my poster session and oh I had a lot of fun touring with the telepresence robot, via @OhmniLabs

“Hi from Robotic Imaging Team”

Sabine Hauert, Publicity Chair at ICRA, President of Robohub and Associate Professor of Swarm Engineering at Bristol University, was unable to attend ICRA in person due to pregnancy, but she still visited her students’ poster sessions and explored the conference expo, saying:

“That was fun! The height of the telepresence robot is even accurate. Thanks @OhmniLabs, @RobotLaunch and @ieee_ras_icra for making it possible for this very pregnant lady to attend #ICRA2022 remotely. Congrats on the nice work @m_alhafnawi.”

Sabine’s PhD student Merihan Alhafnawi was delighted,

“Such an amazing surprise having @sabinehauert visit my poster remotely at @ieee_ras_icra! I presented MOSAIX, a swarm of expressive robot Tiles that we designed & built. Work done with Sabine, @DrEdmundHunt, @skadge & Paul O’Dowd. Check out photos from both perspectives #ICRA2022”

Specially Appointed Assistant Professor at Tohoku University in Japan, Seyed Amir Tafrishi said, “I was lucky to join #icra2022 physically by my avatar robot, where two of our papers will be presented. Thanks to #Ohmnilabs for the fantastic experience with the telepresence robot! It was a great experience to talk with different experts. P.S. Person on the robot’s screen is me! :))”

Many people gave us additional information to help us understand why they wanted to access ICRA via telepresence robots, ranging from people stuck in war torn countries, to economic or visa difficulties traveling from some parts of the world. Corporate policy contributed to several people’s inability to attend in person, as sustainably minded companies seek to minimize their carbon impact. 

There was also interest from roboticists researching the telepresence experience as well as from future robotics conference organizers. And there were a wide range of medical reasons underpinning the requests for telepresence. Some medical issues are temporary conditions but some roboticists have permanent conditions requiring accessibility support. As telepresence robots become more of the norm, we will be helping to empower researchers who are otherwise isolated from the rest of the community. 

Kavita Krishnaswamy is a Google Lime Scholar and Microsoft Research Fellow, with a Ford Foundation Predoctoral and National Science Foundation Graduate Research Fellowship. She is currently a Ph.D. candidate in Computer Science at University of Maryland working with Dr. Tim Oates. Kavita was one of the researchers who was only able to attend ICRA through the use of a telepresence robot, due to her permanent medical conditions. 

As a professional researcher with a physical disability, Kavita is highly motivated to develop robotics systems to provide assistance and increase independence for people with disabilities. She is developing prototype robotics systems to support transferring, repositioning and personal care, with a focus on usable interfaces for people with severe disabilities. 

Kavita was interviewed by Robohub while attending ICRA 2015 via telepresence robot, and was excited to hear that IEEE RAS and OhmniLabs are working on making telepresence a permanent conference feature.

https://robohub.org/robotic-assistive-devices-for-independent-living/

And of course, without dedicated volunteers managing the robots and helping visitors maneuver around the conference, we wouldn’t have been able to do anything. Many thanks to everyone who volunteered, the visitors who shared their experiences with us, and to the team at OhmniLabs for their generosity and support.

Interested in getting an Ohmni yourself? Reach out to Ohmni team at sales@ohmnilabs.com.

Links to original tweets or posts used in the report:

https://twitter.com/sabinehauert/status/1529809074338021377

https://twitter.com/m_alhafnawi/status/1529790038246141952

https://twitter.com/girlinrobotics/status/1529282946095804417

https://www.linkedin.com/feed/update/urn:li:activity:6935214535754555392/


]]>
Unable to attend #ICRA2022 for accessibility issues? Or just curious to see robots? https://robohub.org/unable-to-attend-icra2022-for-accessibility-issues-or-just-curious-to-see-robots/ Tue, 17 May 2022 06:36:53 +0000 https://svrobo.org/?p=22623

We can now offer you a telepresence robot tour of the ICRA 2022 expo hall, competitions and poster sessions, thanks to generous support from our friends at OhmniLabs. OhmniLabs builds human-centric robots that elevate quality of life for billions of people worldwide, and it builds all its robots right here in Silicon Valley using advanced additive manufacturing.

Join more than 5000 roboticists, researchers and industry professionals from 89 different countries in Pennsylvania for a fascinating showcase of robotics thought leadership. There will be 12 keynote speakers, 6 industry and entrepreneurial forums, 10 competitions, almost 60 workshops and 1500 papers presented. On top of that, more than 80 robotics companies will demo their technologies, ranging from Agility Robotics to Zebra Technologies/Fetch Robotics.

There are many things that can make it difficult to attend an in-person conference in the United States, and so the ICRA Organizing Committee, the IEEE Robotics and Automation Society and OhmniLabs would like to help you attend ICRA virtually. Priority access will go to robotics researchers and students who are unable to travel, particularly authors of a paper or poster, but we welcome applications from people who are simply curious about robots as well.

Three OhmniBots will be in the main exhibition hall (with all the other robots) from opening to closing on Tuesday May 24th, Wednesday May 25th and Thursday May 26th, with time slots aligning with Poster Sessions, networking breaks and Expo Hall hours. The application form allows you to select several time slots, and we’ll give you feedback as soon as possible about your application, but we won’t be able to confirm your final booking time(s) until Monday May 23.

Telepresence Robot access is also available for media tours, ICRA sponsors, and members of Black in Robotics, Women in Robotics or Open Robotics who'd like to join the networking events. Generally, the robots are limited to the Expo Floor, but we might be able to make special arrangements.

Contact one of the Accessibility Chairs: AndraKeay@ieee.org with subject [telepresence tour]

Or one of the Media Chairs: danicarzap@scientificagitation.com with subject [media]

Let us know why you need to tour ICRA by telepresence robot!

]]>
Seamless transitions between autonomous robot capabilities and human intervention in construction robotics https://robohub.org/seamless-transitions-between-autonomous-robot-capabilities-and-human-intervention-in-construction-robotics/ Tue, 15 Mar 2022 15:02:24 +0000 https://svrobo.org/?p=22249

Congratulations to the winners of the best paper award of the International Association for Automation and Robotics in Construction 2021. The team of Cynthia Brosque, Elena Galbally, Prof. Martin Fischer and Prof. Oussama Khatib did excellent groundwork for construction robotics. With permission, Silicon Valley Robotics is reposting the first parts of the paper below. It is also available on the IAARC website.

GOLDBECK US Inc. helped define construction robotics use-cases that generate real-world value on their job sites. In the first phase of the project, they collected data to understand the tasks on site holistically. Subsequently, simulations of how robots perform the job were created. Using this information, the researchers at Stanford University's Center for Integrated Facility Engineering (CIFE) then developed suitable robotic prototypes. GOLDBECK supplied physical building components to test the prototypes under realistic conditions.

GOLDBECK is now looking for robotics companies to help transition this research into a real-world robotic application!

Abstract

Due to their unstructured and dynamic nature, construction sites present many challenges for robotic automation of tasks. Integrating human-robot collaboration (HRC) is critical for task success and implementation feasibility. This is particularly important for contact-rich tasks and other complex scenarios which require a level of reasoning that cannot be accomplished by a fully autonomous robot. Currently, many solutions rely on precise teleoperation that requires one operator per robot. Alternatively, one operator may oversee several semi-autonomous robots. However, the operators do not have the sensory feedback needed to adequately leverage their expertise and craftsmanship. Haptic interfaces allow for intuitive human-robot collaboration by providing rich contact feedback. This paper presents two human-robot collaboration solutions for welding and joint sealing through the use of a haptic device. Our approach allows for seamless transitions between autonomous robot capabilities and human intervention with rich contact feedback. Additionally, this work opens the door to intuitive programming of new tasks through haptic human demonstration.

1 Introduction

In recent years, progress in mobility, manipulation skills, and AI reasoning has started to enable the use of robots in space, underwater, homes, agriculture, and construction [1]. A particularly important area of interest is the automation of dangerous, strenuous, and labor-intensive tasks [2].

Construction sites are especially challenging environments for autonomous robots because of their highly unpredictable and unstructured nature [3, 4]. Hence, fully autonomous robots that replace human labor are not the most feasible or ideal solution. The majority of current approaches rely on a human operator who oversees a single-task autonomous robot. The operator receives only visual feedback and, lacking an intuitive interface, is limited in the type of input and failure recovery they can provide. [4] attributed this lack of technical flexibility in construction robots to the fact that early construction solutions imitated systems initially developed for industrial fabrication [5].

Some tasks are structured enough to be autonomously performed by a robot with little human input, but many require a more flexible approach that incorporates a higher degree of human reasoning and intuition [6]. Given this reality, a method to design construction robots should be flexible enough to allow varying levels of human-robot collaboration depending on the task.

Haptic devices (Fig.1) provide an effective interface for collaboration by allowing the human to (1) feel the contact forces between the robot’s end-effector and the environment [7], and (2) easily intervene and control the robot motion in scenarios that the autonomous behaviors are not able to handle successfully [8]. Additionally, data from these haptic interventions can be collected and used to learn new autonomous skills. Remote robot control using a haptic interface has been tested in fields such as surgery [9] and underwater exploration [1], but has not yet been widely implemented in construction.

In previous work, the authors explored human-robot collaboration solutions to five hazardous and repetitive construction tasks: installing drywall, painting, bolting, welding, and sealing precast concrete slab joints [10]. Our industry partner, Goldbeck, was interested in automating these assembly and finishing tasks that require on-site, repetitive manual effort, ergonomically challenging positions, and working from dangerous heights. [10] outlines a method for designing collaborative robotic solutions with haptic feedback and assessing their feasibility in simulation.

In this paper, we focus on two of the previously explored tasks (steel welding and sealing precast concrete joints) and apply the aforementioned method to design more flexible collaborative solutions. Different from [10], we propose relying primarily on the robot’s functional autonomy and using haptics as an effective and intuitive way to intervene in unexplored or failure scenarios. Force data from the recovery strategy employed by the operator can be recorded and used to learn from demonstration and augment the robot’s autonomy. Over time, the robot will require less human intervention. This higher degree of autonomy could allow a single operator to supervise many robots at once, overcoming the problems of teleoperation in which one operator per robot is needed.
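The autonomy-first loop with haptic intervention that the authors describe could be sketched roughly as follows. This is an illustrative Python sketch under stated assumptions, not the paper's implementation; every name in it (`control_step`, `Intervention`, the `(position, force)` command tuples) is hypothetical:

```python
# Illustrative sketch of an autonomy-first control loop with haptic takeover:
# the robot follows its autonomous behavior until a failure is detected, then
# the human's haptic command passes through and is logged as a demonstration.
# All names here are hypothetical, not from the paper.

from dataclasses import dataclass, field

@dataclass
class Intervention:
    """Force/position samples recorded while the human is in control."""
    samples: list = field(default_factory=list)

def control_step(autonomous_cmd, haptic_cmd, failure_detected, log: Intervention):
    """Select the command source for one control cycle.

    autonomous_cmd / haptic_cmd: (position, force) tuples for this cycle.
    failure_detected: True when the autonomous behavior cannot proceed.
    """
    if failure_detected:
        # Human takes over through the haptic interface; record the
        # demonstration so it can later augment the robot's autonomy.
        log.samples.append(haptic_cmd)
        return haptic_cmd
    return autonomous_cmd

log = Intervention()
# Nominal cycle: autonomy stays in charge.
cmd = control_step(((0.1, 0.2), 5.0), ((0.1, 0.25), 6.0), False, log)
assert cmd == ((0.1, 0.2), 5.0)
# Failure cycle: the haptic command passes through and is logged.
cmd = control_step(((0.1, 0.2), 5.0), ((0.1, 0.25), 6.0), True, log)
assert cmd == ((0.1, 0.25), 6.0) and len(log.samples) == 1
```

The design point the sketch captures is that, over time, the logged interventions shrink as learned autonomy covers more failure cases, so one operator can supervise many robots.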

2 Related Work

While factories have typically separated workers from robots due to safety concerns, human-robot collaboration cannot be overlooked in construction, as robots and humans share one workspace [2]. This requires devising solutions that allow us to effectively combine the workers’ expertise with the robots’ autonomous skills.

Construction literature has studied the use of teleoperation devices [11, 12, 7], particularly focusing on construction machinery, such as excavators. These solutions often involve cameras for visual feedback and GPS sensors for navigation, which can be sufficient to accomplish low dexterity tasks with increased operator safety. However, [7] states that complex tasks involving contact greatly benefit from additional sensory feedback such as tactile information. Furthermore, teleoperation solutions rely heavily on the operator’s guidance and do not fully exploit the autonomous capabilities of the robot.

A different set of collaborative solutions currently used onsite uses semi-autonomous robots with a human supervisor who oversees tasks such as drywall installation, concrete drilling, and layout [13]. The supervisor can provide simple inputs to the robot, such as when to start or stop the operation, while the robot handles the rest of the task. This approach makes better use of modern robotics capabilities and allows a single operator to manage several robots. However, the interfaces used to provide inputs to the robot are often too simplistic to allow recovery from failure.

In the event of a robot failure during task execution, joysticks and control pendants do not always provide enough feedback for the operator to intervene in an effective way that enables timely task completion. Additionally, there is currently no streamlined way to learn from the operator’s intervention and use this data to improve the robot’s autonomous capabilities.

By allowing the operator to feel the contact between the robot and its environment, haptic devices increase the range of scenarios in which the operator can aid in failure recovery [14]. Additionally, we can easily record both force and position data during the operator’s intervention. These human demonstrations of recovery strategies can allow the robot to learn new skills [15] and augment its functional autonomy.
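As a toy illustration of how such recorded interventions might be aggregated, the sketch below averages several recorded (position, force) demonstrations point-wise into a single reference trajectory. This is a deliberately simplified stand-in with a hypothetical `average_demonstrations` helper, not the learning-from-demonstration method cited in [15]:

```python
# Hypothetical sketch: combine several recorded human recovery demonstrations
# (each a list of (position, force) samples) into one averaged reference
# trajectory that the robot could replay autonomously. Names are illustrative.

def average_demonstrations(demos):
    """Point-wise average of demonstrations, truncated to the shortest one."""
    n = len(demos)
    length = min(len(d) for d in demos)
    avg = []
    for i in range(length):
        pos = sum(d[i][0] for d in demos) / n
        force = sum(d[i][1] for d in demos) / n
        avg.append((pos, force))
    return avg

demos = [
    [(0.0, 1.0), (0.1, 2.0)],
    [(0.2, 3.0), (0.3, 4.0)],
]
assert average_demonstrations(demos) == [(0.1, 2.0), (0.2, 3.0)]
```

A real system would time-align demonstrations of different lengths and learn a policy rather than a fixed average, but the sketch shows why logging both force and position during intervention matters.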

Haptic devices have been used by the construction industry in combination with virtual reality for task training purposes [16]. The technology has allowed workers to train in a safe environment with realistic task conditions. However, haptics are still a novel technology in construction applications and field use has not been reported.

Current algorithms for haptic control of robots [17] can handle large communication delays, making them effective interfaces for remote intervention at long distances. In [1] an operator haptically controls an underwater ocean exploration robot from a distance of 100m.

Finally, [18] provides an example that integrates two modalities of robot control: autonomous robot behavior and expert human-guided motion interactions. In this study, a group of mobile robot arms successfully installed drywall boards in simulation with flexible human intervention.

This body of prior work illustrates how keeping the human in the loop with adequate feedback can facilitate successful task automation in complex, unstructured environments. Moreover, it highlights the value of haptics as a way to provide a flexible and effective interface for human-robot collaboration as well as teaching robots new autonomous skills.

Full Text of Paper


]]>
Automated mining inspection against the odds https://robohub.org/automated-mining-inspection-against-the-odds/ Wed, 23 Feb 2022 08:07:59 +0000 https://svrobo.org/?p=22192

Image from Rajant.

In this comprehensive IM report, learn how Rajant Corporation, PBE Group, and Australian Droid + Robot, as part of an MSHA (U.S. Department of Labor)-backed mine safety mission, achieved a historic unmanned inspection at one of the largest underground room-and-pillar limestone operations in the US.

Using ten ADR Explora XL unmanned robots, a Rajant wireless Kinetic Mesh below-ground communication network, and PBE hardware and technology, the team achieved a horizontal mobile infrastructure distance of 1.7 km. This allowed the unmanned robots to record high-definition video and LiDAR data to create a virtual 3D model for assessing the condition of the mine, in the deepest remote underground mine inspection in history.

The inspection made it possible for MSHA to conclude within a very short time that it was safe to re-enter the operation and begin remediation efforts, which included allowing mine personnel back into the mine to re-establish power and communications, after which mining was able to recommence quickly at the site.

The project, in many ways, is the ultimate example of necessity breeding innovation. It also showcased the capability of Rajant wireless mesh networks underground to facilitate autonomous mining operations where underground Wi-Fi would not be up to the task.

]]>
What Women in Robotics achieved in 2021 and what’s coming next in 2022 https://robohub.org/what-women-in-robotics-achieved-in-2021-and-whats-coming-next-in-2022/ Fri, 28 Jan 2022 15:17:20 +0000 https://svrobo.org/?p=22158

It’s been a hard year for women all over the world, and I’d like to thank everyone who has contributed to Women in Robotics in 2021, whether you’ve simply shared information about yourself in our community #intros channel, or organized an online event, or made yourself available as an advisor in our pilot mentoring program. Perhaps you’ve been furthering our mission in an ‘unofficial’ way simply by supporting other women, and non-binary people, who are working in robotics, or who are interested in working in robotics. 

We recognize and appreciate the community-building work that women do, which is so often out of the spotlight and on top of everything else. Women's work has rarely been given economic value, as one of my heroes, Marilyn Waring, wrote in "Counting for Nothing". She founded feminist economics, now called triple bottom line accounting, and changed the way the World Bank and other global organizations evaluate economies.

The pandemic has forced women out of the workforce at twice the rate of men, leaving women’s participation in the workforce lower than it’s been for 30 years. And the pressure shows no sign of stopping. However, I believe that whenever women are forced to step backwards, we move forward again with renewed determination and focus. And so my inspiration is renewed to further the mission of Women in Robotics, to support women and non-binary people who work in robotics, and those who would like to work in robotics. We may all find it hard to find time, but small actions in the right time and place can move mountains. 

In 2021, our annual showcase featured not 25 or 30 but ‘50 women in robotics you need to know about’ from 21 different countries, from industry, startups and academia, with particular mention of the women featured who have fought for the rights of refugees and persecuted women, especially the Afghanistan Women’s Robotics Team. For other women, this recognition has helped to raise their profile within universities or companies, leading to increased opportunities.

Our annual list also means that there's no excuse for not including women in conferences, articles, policy making, job searches, etc. In 2022, I'd love to see us create Wikipedia pages for more women in robotics, and create a speaker database and a citation list, similar to what Black in Robotics has done and the work of 500 Women Scientists.

The work of women in science is still less likely to be cited than that of men. Recent UNESCO research has found that citation bias is the start of career long lack of recognition for women, starting with women citing themselves less often than men do. In 2022, let’s focus on improving citation rates, increasing the number of women in panels, journals and conferences, or holding organizers accountable. We can target increasing the number of women cited in robotics curricula, reading lists and coursework. As an organization we can reach out to universities, labs, conferences and journals in a way that individual women can not.

Another grassroots campaign that we started was the Women in Robotics Photo Challenge, which has already resulted in some great new photos of women building robots joining the image search results. Since then we've realized that an ordinary Google or Wikipedia search steers you to Sophia or sex robots rather than to real women building robots. It's also time to retire the word 'unmanned'. Women in Robotics is planning to request that any university still referencing 'Unmanned' Vehicles substitute driverless, uncrewed or a better term.

The lack of in-person conferences is severely impacting the benefits of in-person networking at a senior level for women in robotics, and so we'd like to finally launch our Advisory Board through online networking events for senior women in robotics, in both academia and industry.

We piloted our first mentoring program over 12 weeks with 16 women and it was a very successful experience for almost every participant. We know that there is a lot of demand to run the program again but we’ll need more volunteers to help organize the events and match mentors/mentees. This is one area in which sponsorship for Women in Robotics could be useful, but sponsorship comes with a significant administrative cost, so we will only seek sustainable major sponsorships in 2022.

My gratitude goes to CITRIS & the Banatao Institute, through the People and Robots Lab, for providing me with some funding for the last two years that has allowed me to spend some of my time on Women in Robotics, Black in Robotics and the Society, Robots and Us Seminars. 

My call to action is for you to make your volunteer labor impactful by investing your time in a call to action with a big outcome. I hope it’s one of our Women in Robotics actions, but in everything you do you represent women in robotics and allies. Best wishes to you all for 2022. And thank you to the 2021 Women in Robotics Board of Directors! Our full Annual Report is here.

  1. Counting for Nothing (originally If Women Counted) https://en.wikipedia.org/wiki/If_Women_Counted by Marilyn Waring https://en.wikipedia.org/wiki/Marilyn_Waring
  2. https://www.theguardian.com/commentisfree/2021/nov/19/great-resignation-mothers-forced-to-leave-jobs
  3. https://www.techrepublic.com/article/women-and-middle-managers-will-lead-the-great-resignation-into-2022/
  4. https://en.unesco.org/news/unesco-research-shows-women-career-scientists-still-face-gender-bias

Reflections from the Women in Robotics Board of Directors:

We are very grateful for the work of our Board Members over the last year and we thank Kerri Fetzer-Borelli, Hallie Siegel and Ariel Anders for their vision and experience on the 2020 and 2021 Board. We are delighted to have them join our Advisory Board in 2022.


Allison Thackston:

The Women in Robotics community and its support in 2021 were different from previous years, when we relied a lot on local chapters, meetups, and networking. With many offices locked down and people more hesitant to attend events in person, we've struggled a bit. On the bright side, we've been building up our online presence, improving our website, and increasing our social media outreach. In the year ahead, I hope we continue this growth.

Cynthia Yeung:

Launching the Project Advance mentorship program was a highlight of my service on the WiR board in 2021. We have received lots of great feedback from the inaugural cohort which we can use to improve the programming for the second cohort in 2022. One of the key success metrics was the percentage of returning mentors (demand is unlimited in two-sided marketplaces; supply is the constraint) and we are pleased to report that all mentors are interested in returning for the second cohort. It is personally gratifying to be in a position to implement the kind of program that I wanted to have access to earlier in my career. On a macro level, I believe that strong focus and measurable progress on a small number of initiatives will bode well for WiR’s impact roadmap.

Lisa Winter:

2021 was a year of self-reflection and a test of patience. I think 2022 will be the year when a lot of us take bigger risks as we try to figure out a better work/life balance. I would like to see more communication on the WiR Slack, specifically giving job advice and engineering advice. What I enjoy about other sites that I think we could incorporate is more sharing of personal projects; connecting over them and also learning.

Laura Stelzner:

WiR provides community and support in a field where it can be lonely being one of the few women at your company/department. As the field of robotics grows, we would like to show women and non-binary people all the amazing career opportunities that exist, by providing them with mentorship, networking, leadership and career advancement opportunities.

Laurie Linz:

2021 was a quiet year for Boulder/Denver; we didn't hold any local (in-person) meetings. The good news is that I am in the process of setting up some in-person events again. We'll approach them with caution given the COVID situation, but I'm happy to be starting again.

WiR helps women level up or launch their career in robotics. We welcome those not ready to launch with networking and educational support. Learn, launch, level up.

Sue Keay:

One of the concerns that keeps me awake at night is wondering what important challenges we might have solved already and what technologies are missing because of the lack of diversity in robotics. That’s why Women in Robotics exists, to help to support the small number of women who are contributing to developing robotic technologies and to encourage more to join our ranks. The global list of women in robotics has been an important way to signal the important contributions that women are making in this space and to raise the profile of robotics as a valid career choice for women. Joining WiR is acknowledging that we can be doing better with diversity in robotics and may provide much needed support to a woman in robotics who may be questioning their reason for remaining in such a male-dominated field. My own experience has been that women have always been my greatest supporters and I feel less alone by being part of WiR.

Sue Keay (Robotics Australia) and Erin McColl (Toyota Research Institute) with a Ghost Robotics platform.

]]>
Robotics innovation infiltrates 2022 Consumer Electronics Show https://robohub.org/robotics-innovation-infiltrates-2022-consumer-electronics-show/ Sat, 08 Jan 2022 17:15:06 +0000 https://svrobo.org/?p=22121

Casinos and robots? It must be CES time! A few years ago, the only robots at CES were toys. And as the robot toy makers at Ologic can attest, having your robot featured as the leading image for CES was still no guarantee that your robot would make it into production (AMP is pictured above). Luckily, Ologic has transferred its consumer electronics experience into building robots of every other kind. And CES now showcases robots of every shape and size, from autonomous cars, trucks and construction robotics, to food production, health and rehabilitation robots. The 2022 CES Innovation Awards recognize a range of robotics technologies as Honorees, and feature three in the "Best of Innovation" category as well.


See & Spray

By John Deere

Best of Innovation

Robotics

Honoree

Vehicle Intelligence & Transportation


See & Spray is a technologically advanced, huge robot for the agriculture industry that leverages computer vision and machine learning to detect the difference between plants and weeds and precisely spray herbicide only on the weeds. This groundbreaking plant-level management technology gives a machine the gift of vision and reduces herbicide use by up to 80 percent, benefiting the farmer, the surrounding community and the environment. This revolutionary approach is unprecedented in both the technology and agriculture industries.


Leica BLK ARC

By Leica Geosystems, part of Hexagon

Best of Innovation

Robotics


The BLK ARC is an autonomous laser scanning module for robotics applications. It provides a safe and autonomous way to capture, in 3D, the images and data of areas that are difficult or dangerous for humans to reach. Professionals in architecture/engineering/construction (AEC), manufacturing, plant, public safety, and media and entertainment attach the BLK ARC to a robotic carrier to capture data used to build 3D models and recreate situations, for example man-made or natural disasters, automotive plants, bridges, and movie location sets. The product's speed, accuracy, flexibility, and modular design provide fully autonomous mobile LiDAR scanning and navigation.


WHILL Model F – Foldable Personal EV

By WHILL, Inc.

Best of Innovation

Accessibility


WHILL Model F is a foldable personal electric vehicle for everyone, including seniors and people who have difficulty walking. Unlike wheelchairs, which are typically built for patients in a medical environment, Model F is designed to fit an active lifestyle for everyday use. It is lightweight and foldable, which makes it easy to load in cars and convenient when traveling by air. The WHILL smartphone app allows you to drive and lock the Model F remotely, as well as check key device information such as total mileage and battery level.



DROWay – Intelligent UATM Ground Control Platform

By CLROBUR Co., Ltd. / HANSEO UNIVERSITY

Honoree

Drones & Unmanned Systems


DROWay is an integrated, expandable multi-mobile platform that goes beyond former mobility control services, offered as part of the DROW4D service. Its features include: multi-heterogeneous mobility hardware integrated control with a real-time monitoring control system; automatic multiple flight path generation and flight simulation in a 4D airspace (corridor); indoor/outdoor swarm flight and simulation (DROW World/DROW ART Manager); and AI-based ground control data management and analysis.


DJ25 – Fuel cell powered VTOL (with JOUAV)

By Doosan Mobility Innovation

Honoree

Drones & Unmanned Systems


DJ25 is the world's first hydrogen fuel cell VTOL (Vertical Take-Off and Landing) commercial drone solution. Doosan and JOUAV successfully integrated Doosan's advanced PEMFC (Proton-Exchange Membrane Fuel Cell) technology into a VTOL airframe, a collaboration that extends flight time to up to five and a half hours. DJ25 can cover up to 500 km in a single flight, making it suitable for long-distance inspections and large-scale site surveying and mapping compared to ordinary battery drones. The hydrogen-powered version also produces low noise and no vibration during power generation, and it is stable and efficient when carrying various mission equipment.


Mini GUSS

By GUSS Automation

Honoree

Drones & Unmanned Systems


Mini GUSS is the world's first and only autonomous vineyard, hops, berry, and high-density orchard sprayer.


Drone Charging Station “ON STATION”

By ISON

Honoree

Drones & Unmanned Systems


The High Mast Drone Charging Station consists of a pole, a drone hangar, a control box, and other accessories as needed by the user. The high-mast pole can be raised and lowered automatically by the control box, and the drone hangar can be stored up to 50 feet high. The drone hangar and other accessories such as an anemometer, CCTV camera, rain sensor, solar panel module, LED lights, or even a weapon (M60) can be added to the pole to suit the end user's circumstances.


Delivery AMR:Mighty-D3

By Piezo Sonic Corporation

Honoree

Drones & Unmanned Systems


Mighty-D3, an AMR for transportation, is designed based on the technology of Japanese lunar exploration robots and uses a Piezo Sonic Motor for its steering mechanism. It follows Piezo Sonic's 3C concept (Cool, Cute, Compact), combining design and functionality. Mighty-D3 is capable of climbing over bumps of up to 15 cm, turning on the spot, and moving horizontally, allowing it to avoid obstacles and travel autonomously to its destination in both outdoor and indoor environments. Mighty-D3 can be used as a transportation support robot for urban areas, hospitals, commercial facilities, and large apartment buildings.


WasteShark

By RanMarine Technology BV

Honoree

Drones & Unmanned Systems


To contribute to the huge challenge of cleaning and monitoring our world’s waters, RanMarine Technology developed the patented WasteShark. It is the world’s first autonomous aquadrone designed for clearing marine plastic waste. At a running cost of 20% of other marine waste removal solutions and with zero emission operation, the intelligent WasteShark is easy to operate and maintain. The WasteShark is our first model and our roadmap includes products based on the WasteShark such as the OilShark and MegaShark. It is accessible to public and private entities that aim to contribute to restoring the marine environment to its natural state.


SPIDER-GO ; Warehouse Inventory Automation System

By TACTRACER CO., LTD.

Honoree

Drones & Unmanned Systems


SPIDER-GO is an automated warehouse inventory system that periodically scans and updates the inventory status in real time. The key features include automated modelling of the warehouse layout, presenting the inventories on a 3D map, and inspecting the inventory status through remote control. It is also equipped with functions for CCTV monitoring, fire detection, and remote updating. Conventional warehouse management systems have had issues like frequent errors in loading and picking, and discrepancies between the system and the actual inventories due to human input errors. SPIDER-GO can innovatively improve visibility and efficiency for proper warehouse inventory management.


Whiz Gambit – 2-in-1 AI-powered cleaning & disinfection solution

By Avalon SteriTech Limited

Honoree

Robotics


Whiz Gambit is a 2-in-1 AI-powered cleaning and disinfection robotic solution jointly developed by Avalon SteriTech and SoftBank Robotics Group. The robot is the first disinfection robot to achieve the Performance Mark from SGS, the world's leading verification and testing company, with proven efficacy in eliminating >99% of microbial bioburden. Importantly, Whiz Gambit greatly minimizes potential health risks in communal areas with its effective, consistent cleaning and disinfection performance. It has been a trusted partner for hospitality groups, shopping malls, schools, and offices around the world.


The Essentials; The Ultimate Building Blocks for Mobile Robots

By Avular

Honoree

Robotics


Avular proudly presents the world’s first end-to-end solution for mobile robots. The Essentials are modular hardware and software building blocks, specifically designed to (1) rapidly build new mobile robot applications, (2) turn existing machines into autonomous ones, and (3) develop robust and scalable end-products. The Essentials cover all the core functionalities a mobile robot needs, allowing you to focus on your specific application. Our products already enable entrepreneurs and engineers to build robots that make the world a better place in the fields of food, safety, energy, and more.


CygLiDAR_h3 (2D/3D Dual Solid State LiDAR)

By CYGBOT CO., LTD.

Honoree

Robotics


CygLiDAR_h3 is a solid-state LiDAR with no mechanically moving parts that can measure in 2D and 3D at the same time. Most robots use a 2D LiDAR to identify their position and a 3D camera to detect obstacles while driving. With CygLiDAR, you get 2D and 3D data together from one product, allowing robots to use fewer sensors and making them more competitive in design and cost.


Doosan NINA (New Inspiration New Angle)

By Doosan Robotics

Honoree

Robotics


As the achievement of Project NINA (New Inspiration. New Angle.), Doosan created an unprecedented camera robot system that empowers everyone to work like a professional, unlocking next-level creativity. To democratize filming robots, Doosan built software with an intuitive UI that helps content creators film effortlessly, including sharing presets of camera movements on Doosan’s platform. Another key part of the system is that the robot shoots through 360 degrees and tracks its subjects, simplifying complex camera moves to allow filming from any angle. It can be controlled by hand and/or joypad, so people with no robot-operation experience can handle it without difficulty.


DOTSTAND V1

By DOTHEAL Co.,Ltd.

Honoree

Computer Peripherals & Accessories, Fitness & Sports, Robotics


Using computers for extended hours with poor posture results in forward head posture (FHP) syndrome, a very common problem that causes neck pain and fatigue. Doctor visits and therapy are both time-consuming and expensive. DOT STAND is a patent-pending smart monitor stand that uses an AI sensor to analyze the user and automatically adjusts any monitor’s position to induce better posture – without the user even noticing. DOT STAND relieves pain, improves concentration and work efficiency, and improves the average cervical curve (ARA) by up to 8%.


Hey-Bot(AI-based, Smart Disinfection&Guide Robot)

By Hills Engineering co.,ltd. / HANSEO UNIVERSITY

Honoree

Robotics


‘Hey-Bot’ is an AI-based self-driving disinfection and guide robot for convention centers, hospitals, negative-pressure wards, and other important but vulnerable places. The robot protects people during the coronavirus pandemic by minimizing the chance of infection: it effectively kills the virus and limits unnecessary social contact by guiding people to their destinations and sanitizing a given area using its self-driving, guide, and disinfection functions. ‘Hey-Bot’ is a highly advanced, intelligent disinfection and guide robot that can help open a safe and convenient ‘living with coronavirus’ era in which people can live and work with reassurance.


RGB-D AI Robot

By Industrial Technology Research Institute (ITRI)

Honoree

Robotics


The RGB-D AI Robot is the world’s first collaborative robot that includes smart 3D vision as a built-in standard for high-precision object recognition. The integration of a 3D camera and MEMS laser scanning projector, along with the auto labeling technology, allows the robot to quickly learn through visually acquired data and perform hand-eye coordination skills. The miniature size of the optical sensor makes it easy to install in robotic arms and can lower installation costs. This innovation can be applied to human-robot collaboration in both manufacturing and service sectors.


Lumotive Meta-Lidar™ Platform

By Lumotive

Honoree

Robotics, Vehicle Intelligence & Transportation


The Lumotive Meta-Lidar™ Platform is the mobility industry’s smallest and most cost-effective 3D sensing solution comprising a Lumotive tiny lidar sensor with patented beam steering technology, real-time control software, and a reference system design. Unlike previous generation lidar systems that use mechanical spinning assemblies and are known for being big and expensive, Lumotive’s solution is tiny and scalable for a range of size, performance and power requirements.  The Meta-Lidar Platform removes barriers to greater proliferation of 3D sensing in automotive, industrial and consumer applications.


Monarch Tractor, MK-5

By Monarch Tractor

Honoree

Robotics


Monarch Tractor, the world’s first fully electric, driver-optional smart tractor, enhances farmers’ existing operations, alleviates labor shortages, and maximizes yields. The award-winning Monarch Tractor combines electrification, automation, machine learning, and data analysis to set a new standard in tractor technology and capabilities.


Ted

By Naio technologies

Honoree

Robotics


Ted offers a sustainable, serviceable, and smart winegrowing solution combining cutting-edge robotics and AI. Ted is the first robot dedicated to vineyards: an alternative to herbicides that respects your soils and improves your working conditions.


Autonomous Box Truck

By Gatik

Honoree

Vehicle Intelligence & Transportation


Gatik, the leader in autonomous middle-mile logistics, delivers goods safely and efficiently using its fleet of light- and medium-duty trucks. The company focuses on short-haul, B2B logistics for Fortune 500 retailers such as Walmart and Loblaw. Gatik’s Class 3-6 autonomous box trucks are live for customers in multiple markets including Texas, Arkansas, Louisiana, and Ontario. In April 2021, Gatik launched a strategic collaboration with Isuzu North America, marking the first time a global OEM has worked in the medium-duty category with a middle-mile trucking company.


XenoLidar-X

By XenomatiX – True-Solid-State-LiDAR

Honoree

Vehicle Intelligence & Transportation


XenoLidar-X is a true-solid-state LiDAR designed for high resolution and accurate analysis of the vehicle’s surroundings. It is a stand-alone, off-the-shelf solution with no moving parts, intended for autonomous and industrial applications. XenoLidar-X comes in two versions: Xact for mid-range, and Xpert for long-range measurements. Both versions include XenoWare, the 3D point cloud software enabling perception solutions in ADAS and up to fully autonomous driving.


4Sight M

By AEye, Inc.

Honoree

Vehicle Intelligence & Transportation


4Sight™ M is a high-performance, software-configurable adaptive LiDAR sensor designed to meet the diverse performance and functional requirements of autonomous and partially automated applications. Its solid-state reliability, combined with industry-leading LiDAR performance (extended range from 1 cm to 1,000 meters), integrated intelligence, and advanced vision capabilities (10-20x more accurate than camera-only systems), makes it an ideal fit for mobility, trucking, transit, construction, rail, intelligent traffic systems, and aerospace and defense markets.


Eos Embedded Perception Software

By Algolux

Honoree

Vehicle Intelligence & Transportation


Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicle (AV) systems today cannot support many driving environments, weather conditions, and challenging scenarios. Low light, snow, rain, and even tunnels are just some of the situations where perception systems often fail due to a lack of robustness. Eos is an award-winning embedded perception solution that delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather. Its efficient end-to-end deep learning architecture can be quickly personalized to any camera lens/sensor configuration or for multi-sensor fusion.


Nova Lidar

By Cepton Technologies

Honoree

Vehicle Intelligence & Transportation


Nova is a miniature, wide-angle, near-range lidar built for modern vehicles at a target volume price below $100, to help increase vehicle safety and enable autonomous driving capabilities. Powered by Cepton’s patented MMT®, Nova can be elegantly embedded all around a vehicle to provide a complete 360° view of its immediate surroundings without disrupting its design aesthetics. Nova is designed to help minimize perception blind spots and reduce accidents and vehicle damage. Nova is a transformational sensor that fundamentally changes the game for near-range sensing applications.


Automated Steering Actuator

By Nexteer Automotive

Honoree

Vehicle Intelligence & Transportation


Nexteer’s Automated Steering Actuator offers high safety coverage through a combination of software, electrical-hardware, mechanical, and sealing redundancies, plus the increased durability and reliability required for the higher loads of heavier vehicles (battery mass) and the more demanding use cases of Shared Autonomous Vehicles with no driver as manual backup. The Automated Steering Actuator facilitates broader adoption of Shared Autonomous Vehicles (SAVs), like people movers, enabling them to carry higher loads at higher speeds than current last-mile, geofenced, and neighborhood vehicles on the market.


1-box Brake System for Highly Autonomous Driving

By Mando Corporation

Honoree

Vehicle Intelligence & Transportation


Mando’s Integrated Dynamic Brake (IDB) is a 1-box electro-hydraulic brake system that replaces and integrates multiple components of a traditional brake system into one unit. This reduces vehicle mass and simplifies assembly by requiring fewer electrical and hydraulic components. ‘IDB for Highly Autonomous Driving’ is a world-first product that secures braking functional redundancy within the same 1-box system, eliminating the need for additional components. In the unlikely event of a failure, it provides fully redundant brake functionality and performance.


Hammerhead™

By NODAR

Honoree

Vehicle Intelligence & Transportation


NODAR’s Hammerhead™ camera-based 3D vision software is a crucial component in the development of ADAS and autonomous vehicles, bringing safety, advanced performance, and cost-effectiveness to the automotive market. The Hammerhead platform delivers reliable, ultra-precise, real-time 3D sensing at long range (up to 1,000 meters), providing better-than-LiDAR quality at the price point of camera technology. Between 2025 and 2030, a projected 250 million vehicles will require L2-and-above autonomy to understand their environment and provide the high level of safety required as autonomous systems take over more of the driving function.
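Long-range camera-based 3D sensing of this kind generally rests on the stereo relation Z = f·B/d (depth from focal length, baseline, and disparity). A quick sketch with purely hypothetical numbers (NODAR does not publish these parameters here) shows why a wide baseline helps at long range:

```python
# Stereo depth from disparity: Z = f * B / d (pinhole camera model).
# All numbers below are hypothetical illustrations, not NODAR specs.

def depth_m(focal_px, baseline_m, disparity_px):
    """Depth in meters from focal length (px), baseline (m), disparity (px)."""
    return focal_px * baseline_m / disparity_px

f = 2000.0        # assumed focal length in pixels
target_z = 500.0  # target range in meters

# Disparity available at 500 m for a short vs. wide baseline:
for baseline in (0.12, 1.2):  # meters
    d = f * baseline / target_z
    print(f"baseline {baseline} m -> disparity {d:.2f} px at {target_z} m")
```

A 10x wider baseline yields 10x more disparity at the same range, which is the basic reason widely separated cameras can resolve depth much further than a compact stereo head.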


TriEye SEDAR

By TriEye

Honoree

Vehicle Intelligence & Transportation


TriEye’s SEDAR (Spectrum Enhanced Detection And Ranging) is the ultimate imaging and ranging solution for automated driving applications. SEDAR’s sensing modality, uniquely operating in the Short-Wave Infrared (SWIR) spectrum, enables imaging and 3D mapping of the road in all visibility conditions – in one sensing modality. SEDAR is based on two significant world-first innovations: TriEye’s Raven (HD CMOS-based SWIR sensor) and TriEye’s UltraBlaze (eye-safe SWIR pulsed laser illumination source). TriEye’s first-of-its-kind technology is suited for mass production, offering a 10x cost reduction compared to current LiDAR solutions on the market.


Phoenix Perception Radar

By Arbe

Honoree

Vehicle Intelligence & Transportation


Arbe’s Perception Radar, Phoenix, revolutionizes autonomous vehicle sensing, providing unmatched safety to the market. Ultra-high resolution transforms radar into a sensor that supports advanced perception capabilities at a mass-market price, with top performance in all environments, weather, and lighting conditions. It is the first radar to detect stationary objects – a notorious stumbling block for automotive radars – meeting NHTSA and NCAP requirements and resolving the factors behind ADAS- and Autopilot-related accidents. With advanced free-space mapping and object tracking in all corner cases, Phoenix closes the sensor gap to achieve truly safe autonomous driving and Vision Zero.


‘Real-to-real’ – Unique ADAS Technology

By Cognata LTD – Ella Shlosberg

Honoree

Vehicle Intelligence & Transportation


Cognata is proud to launch its ‘real-to-real’ technology, which accelerates global ADAS validation programs to meet worldwide regulatory requirements. The proprietary AI technology developed by Cognata transforms super-high-resolution datasets into sensor inputs “as seen” by new sensors in new vehicle applications. This means it is now possible to reuse previously collected video data to test a camera in a new position or with a new optical path, record multiscopic images, and apply AI-SLAM-based image transformation.


Robot Express for Smart Transportation and Logistics

By Mindtronic AI

Honoree

Smart Cities


Robot Express is a smart logistics service that leverages the public transportation network for delivering goods. One advantage is that sender and receiver can use a bus stop as an express station. With the density and frequency of shuttle buses, both service coverage and delivery time can be improved. Another advantage is energy saving, because the shuttle transports not only passengers but also goods at the same time.


Maicat

By Macroact Inc.

Honoree

Smart Home


The social robot Maicat incorporates AI into robotics, combining the appealing characteristics of a cute, engaging companion pet with the intelligence of AI to generate empathetic, personalized experiences for the user. Autonomous control and AI enable the robot to understand and adapt to its environment, while its mobility and its passive and active sensors help it navigate and operate in the home. Easy integration with other smart home technology via third-party applications makes it the center of the smart home. Being with Maicat adds another dimension to your life!


Mind-linked Bathbot

By Amorepacific

Honoree

Health & Wellness


Mind-linked Bathbot is the first beauty solution that produces bath bombs with customized fragrances and colors instantly, based on an individual’s electroencephalogram (EEG) data. The EEG signals are measured in real-time using a wearable headset and combined with a technology that has been developed to enable the quantitative evaluation of emotional indicators. These emotional indicators enable the identification of the fragrance and color that will provide a unique experience to the user. This user-optimized information is then transmitted to the Bathbot (bath bomb-making robot). This is the first technological invention to produce a customized bath bomb within a minute.


AVATAR 3D system (AVT-2020-700)

By AVATAR

Honoree

Health & Wellness


The AVATAR 3D system is a next-generation system for quantifying animal behavior experiments, replacing existing preclinical scoring based on human judgment. A fusion of multi-view vision hardware and AI software, it quantifies all behaviors of mice – the representative preclinical test animal – in real time without markers, and simulates them in a 3D virtual space.


rebless

By H ROBOTICS INC.

Honoree

Health & Wellness


rebless™ is an FDA-registered robotic, exercise therapy device for both upper and lower limbs, providing motion to the elbow, wrist, ankle, and knee joints. With multiple operating modes, rebless™ allows for passive, active, active-assisted and active-resisted exercise and range of motion measurement, so therapy can be customized based on each individual patient’s condition and progress. Continuous Passive Motion (CPM) and Assist-as-Needed technology allow patients to change exercising modes depending on their rehabilitation abilities.


OMO Smart Trash Can

By GD International Inc.

Honoree

Home Appliances


The OmO Smart Can is a new, first-of-its-kind self-sealing and touchless trash can. A built-in motion sensor and compatibility with voice assistants like Alexa and Google Assistant give you a hands-free approach to tidying up. When it’s time to take out the trash, an internal thermal sealing mechanism seals the trash in and opens the inner bin to release a ready-to-throw bag. A replacement bag automatically deploys once the bin closes and is ready for the next dispense.



Narwal – World’s First Self-Cleaning Robot Mop and Vacuum

By Narwal Robotics Corporation

Honoree

Home Appliances


Narwal J2 is a robotic cleaner with an automatic mop-cleaning base station. Users can select between vacuum and mop modules to meet their cleaning needs. The spinning mops press against the floor with 10 N of force, which greatly enhances the cleaning effect. The robot uses LiDAR and SLAM technologies to intelligently navigate and map the environment. When the intelligent algorithm detects that the mopping pads are dirty, the robot automatically returns to the base station to wash the pads, then resumes work where it stopped. Users never need to handle dirty mops, keeping their hands away from allergens and dirt.



Nailbot

By Preemadonna Inc.

Honoree

Home Appliances


Nailbot is a connected, at-home manicure device that instantly prints any photo, emoji, image, or self-created design on nails. Along with the product experience comes a built-in community of Gen Z creators via a mobile app art marketplace. Preemadonna is the maker of Nailbot.



The full list of CES Innovation Award Honorees is here.

]]>
What happened in robotics in 2021? https://robohub.org/what-happened-in-robotics-in-2021/ Wed, 05 Jan 2022 11:58:07 +0000 https://robohub.org/?p=202878

Here are some postcards from 2021 – wishing you all the best for 2022!

Founded and Funded in 2021

According to Crunchbase, 26 robotics startups were founded and funded in 2021. Many others were founded but not funded, or funded but not founded. :)

AION Prosthetics
Electronics, Manufacturing, Medical Device, Robotics
AION Prosthetics develops a prosthetic system designed to provide an adjustable, durable, and affordable future for amputees.
http://aionprosthetics.com/

Atorika
Augmented Reality, EdTech, Education, Edutainment, Leisure, Personal Development, Robotics, Subscription Service, Virtual Reality
Atorika offers edutainment that adapts to any child.
https://www.atorika.fr/

BOTINKIT
Big Data, Robotics
Robots

ContRoL
Autonomous Vehicles, Mechanical Engineering
ContRoL focuses on developing a range of vehicles for controlled release.
https://control-create.mcmaster.ca/

DIWÖ
Artificial Intelligence, Drones, Machine Learning, Robotics, Software
Diwö is accelerating the transition into safe AI using autonomous UAVs and Robotics
https://www.xn--diw-una.com/

Drone Express
Artificial Intelligence, Delivery Service, Drones
Drone Express is a full-service logistics company that uses airborne autonomous drones for local package delivery.
https://droneexpress.ai

Eco City
Artificial Intelligence, CleanTech, Machine Learning, Mobile Apps, Robotics, Service Industry, Smart Cities
Mobile App, Robotic, AI, Machine Learning, Deep Learning, E-service

Eight Knot
Artificial Intelligence, Navigation, Robotics
Eight Knot designs and develops autonomous navigation system for water mobility using robotics & AI.
https://8kt.jp/

Elexir
Automotive, Autonomous Vehicles, Cyber Security, Software
We redefine mobility with the truly digital car
https://www.elexir.eu/en/

General Systems
Construction, Industrial Automation, Robotics
Construction Tech, Robotics, B2B, Automated Building Masonry
https://www.generalsystems.tech

Helgen Technologies
Agriculture, Consulting, Farming, Mining, Oil and Gas, Robotics, Software, Software Engineering, Waste Management
Software and hardware services for industrial robotics
https://www.helgen.tech

Mach9 Robotics
Machine Learning, Robotics, Software
Mach9 Robotics builds integrated hardware and software to make utility infrastructure inspection more accurate at a lower cost.
https://www.mach9.io/

Meili Technologies
Automotive, Autonomous Vehicles, Software
Meili provides automatic, contactless, in-vehicle medical emergency detection and interface with EMS to protect riders and make roads safer.
https://www.meilitechnologies.com

Mowito
Artificial Intelligence, Industrial Automation, Intelligent Systems, Robotics, Software
Mowito provides software tools for mobile robots, to enable them to navigate intelligently in indoor facilities.
https://mowito.in/

Muncho
Food and Beverage, Food Delivery, Hardware, Robotics, Transportation
Pizza cooked en-route to your door & delivered in as little as 5 minutes. Currently piloting in Philadelphia.
https://www.muncho.com

Outlift AI
Artificial Intelligence, Health Care, Information Technology, Robotics
Outlift AI develops robotic process automation designed to assist and automate back-office healthcare work.
https://www.outliftai.com/

PhiGent Robotics
3D Technology, Artificial Intelligence, Autonomous Vehicles
PhiGent Robotics offers autonomous driving solutions.

Qiangua Technology
Automotive, Autonomous Vehicles
Qiangua Technology is a Chinese autonomous driving truck company.

Serve Robotics
Food Delivery, Logistics, Robotics
Serve Robotics connects people with what they need locally via robots that are designed to serve people.
http://www.serverobotics.com

Socian Technologies
Aerospace, Artificial Intelligence, Drones, Machine Learning, Software
We create safer societies using AI-enhanced UAV technologies.
https://socian.io/

Tergeo Technologies
Industrial Automation, Machinery Manufacturing, Robotics, Waste Management
Tergeo Technologies is a developer of robotic solutions to sanitation challenges.
https://www.tergeotech.com

Urban Machine
Building Material, Robotics
Salvaging the past to build the future – Stealth Startup
http://www.urbanmachine.build

Wiingy
E-Learning, Education, Robotics
Wiingy is a unique combination of multiple learning and skill development methods, including 1:1 live classes and DIY robotics kits.
https://www.wiingy.com/

Xbotod Technologies Ltd
Artificial Intelligence, Cloud Computing, Electronics, Embedded Systems, Internet of Things, Robotics
Shaping the next generation of technology and cities with Artificial Intelligence & Internet of Things (AIoT)
http://www.xbotod.com

Zbeetle
Artificial Intelligence, Electronics, Robotics
Zbeetle is a robotics innovation company engaged in producing cleaning robots.
http://www.zbeetle.com

Ziknes
Machinery Manufacturing, Robotics
Ziknes develops printing technology for the industrial manufacturing of metals.
https://www.ziknes.com/

News

Robot density nearly doubled globally

The use of industrial robots in factories around the world is accelerating at a high rate: 126 robots per 10,000 employees is the new average of global robot density in the manufacturing industries – nearly double the number of five years ago (2015: 66 units). This is according to the 2021 World Robotics report. By region, […]
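Robot density is simply installed industrial robots per 10,000 manufacturing employees, so the report’s figures can be sanity-checked with a line of arithmetic (the country numbers below are made up for illustration):

```python
# Robot density = industrial robots per 10,000 manufacturing employees.
# Global averages quoted above: 126 (2020) vs 66 (2015).

def robot_density(robots, employees):
    return robots / employees * 10_000

# Hypothetical country: 50,000 installed robots, 4 million manufacturing workers.
print(robot_density(50_000, 4_000_000))  # -> 125.0, close to the global average

growth = 126 / 66
print(f"{growth:.2f}x growth since 2015")  # ~1.91x, i.e. "nearly double"
```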

2022 robotics predictions from industry experts

Leading robotics experts such as Juan Aparicio and Ken Goldberg, share what they’ll be keeping an eye on in 2022.


Investors warn Deep Tech founders about these 12 pitfalls

Firstly, what is Deep Tech as opposed to Tech or technology enabled? Sometimes Deep Tech is regarded as a science based startup, sometimes it is regarded as disruptive to the status quo, sometimes it is regarded just as slow and hard, capital intensive, with a long ROI horizon. Or as something that investors aren’t ready […]

Mind-controlled robots now one step closer

Researchers teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.

Top 10 robotics stories of December

China’s new five-year plan for robotics and Toronto banning sidewalk robots topped our coverage in December 2021.


Creating the human-robotic dream team

Using autonomous vehicle guidelines, a team has developed a system to improve interactions between people and robots. How people interact safely with robots is at the forefront of today’s research in automation and manufacturing, explains a researcher, one of several who are working to develop systems that allow humans and robots to interact safely and efficiently.

How AI and robotics are reconstructing a 2,000-year-old fresco in Pompeii

Computer scientists and archeologists are working together to solve this ancient puzzle.

Bonus material!

Children as Social Robot Designers – IEEE Spectrum

What happens when you let kids design their own social robot from scratch.

Holiday robot videos 2021 (updated)

Happy holidays everyone! Here are some more robot videos to get you into the holiday spirit. Have a last-minute holiday robot video of your own that you’d like to share? Send your submissions to daniel.carrillozapata@robohub.org […]

]]>
Investors warn Deep Tech founders about these 12 pitfalls https://robohub.org/investors-warn-deep-tech-founders-about-these-12-pitfalls/ Thu, 16 Dec 2021 09:54:56 +0000 https://svrobo.org/?p=22070

Firstly, what is Deep Tech as opposed to Tech or technology-enabled? Sometimes Deep Tech is regarded as a science-based startup; sometimes it is regarded as disruptive to the status quo; sometimes it is regarded simply as slow and hard, capital intensive, with a long ROI horizon – or as something that investors aren’t ready for yet. But the amount of money going into Deep Tech investing is increasing, and the pool of Deep Tech investors is growing. One of the key points I made in a recent GIST TechConnect Deep Tech panel is that most investors, including the most successful Tech investors, are not able to invest seriously in Deep Tech startups because they lack the technical awareness and the depth of commercialization experience specific to a Deep Tech startup. GIST, the Global Innovation through Science and Technology network, is the US State Department program to encourage and support global entrepreneurship.

In fact, if you research the failure rates of some high-profile Deep Tech startups, it seems that certain large funds have a much higher failure rate than others. At best, their growth pathway is not compatible with Deep Tech startups; at worst, they are simply cherry-picking some Deep Tech startups for their publicity value. Startups should always do their due diligence on investors and how they treat founders, particularly founders of similar startups.

Universities play a huge role in derisking, funding, and commercializing Deep Tech startups, but there is still a ‘Valley of Death’ at the transfer stage. A Deep Tech startup can come out of any university, but not all universities have real commercialization experience and a supportive startup ecosystem. Silicon Valley Robotics and Circuit Launch have provided a ‘halfway house’ for many Deep Tech startups by offering affordable workspace with prototyping facilities and a startup ecosystem. But the first question I always ask entrepreneurs is whether they have leveraged every advantage their university connections can provide. Universities can offer greatly discounted lab space and testing facilities, as well as connections to scientific experts in almost any field who can be engaged as consultants and advisors.

The SBIR program, or America’s Seed Fund, provides about $4 billion in non-dilutive funding from the federal government in the form of R&D dollars, contracts, and grants to small businesses and startups, giving you the opportunity to derisk a lot of the technology very early on. You can do a detailed scope and scan, then couple that with the I-Corps program to do deeper dives into customer discovery – to really understand whether this is something that’s just a nice-to-have or a real must-have. Although SBIR is a US program, many countries around the world have created similar ones; a good example is the EU Horizon 2020 grants.

Grants catalyze and do a certain amount to de-risk technology, extending the runway through non-dilutive funding and creating a technology roadmap that validates the science as significant. Corporate venture funds or strategic investors also play an important role alongside non-dilutive grant funding. Not only can they write a check, they can be a customer, an advisor, and a partner in the early prototype-to-manufacturing stages. The best strategic investors play a huge role in helping Deep Tech startups succeed, because they need the technology you are creating.

Here’s a collection of tips for Deep Tech founders gathered from the GIST TechConnect panel on Deep Tech with Nakia Melecio (Georgia Tech), Nhi Lê (WARF), Andra Keay (SVR and The Robotics Hub), and G. Nagesh Rao (US Dept of State), along with tips from “Six red flags that send investors running the other way” by Sara Bloomberg, San Francisco Business Times. Quotes not attributed to other investors are my thoughts or recollections from the event.

Accelerator hopping

“When you start going from accelerator to accelerator looking for funding, then you’re doing it wrong. Accelerators only fund you to participate in their program. Their program and mentors are the real value.” Nhi Lê, WARF Accelerator

You also dilute your equity and become uninvestable.
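The dilution math is worth seeing once: every program or round that takes a slice multiplies down the founders’ remaining stake, so it compounds quickly. A minimal sketch with hypothetical percentages:

```python
# Founder ownership after successive equity grants compounds multiplicatively:
# stake_after = stake_before * (1 - fraction_given_up).
# All percentages below are hypothetical, for illustration only.

def remaining_stake(fractions_given_up, start=1.0):
    stake = start
    for f in fractions_given_up:
        stake *= 1 - f
    return stake

# Three accelerators at ~7% each, followed by a 20% seed round:
stake = remaining_stake([0.07, 0.07, 0.07, 0.20])
print(f"Founders retain {stake:.1%}")  # roughly 64%
```

Each accelerator slice looks small in isolation, but together they consume a meaningful chunk of the cap table before a priced round even starts, which is part of why accelerator hopping makes a startup harder to invest in.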

Taking the first check, giving away too much equity in early rounds

Always negotiate terms. But don’t focus solely on the financials at the risk of throwing away the less obvious value that a good investor can bring you.

“Deep Tech startups may take longer to get to revenue than a traditional tech startup, so you need to think about grant funding as a source of revenue, and any contracts that help you develop part of your technology.” Nhi Lê, WARF Accelerator

Not budgeting for IP defense

“Companies often say that they’re investable because they have a patent, but they haven’t budgeted anything to defend it. Your IP is only as good as your ability to defend it. Universities play a great role in protecting and defending IP that they’ve licensed.” Nhi Lê, WARF Accelerator

Not having a plan for the whole journey

“When you go into your first funding meeting, you must be thinking about the long term journey, all the way to exit. It’s never going to be just one check, you’re growing a company.” G. Nagesh Rao, US Dept of Commerce

Not doing diligence on investors or accelerators

“Deep tech, especially at the leading edge, is usually expensive, so it’s critical to find the right path to commercialization at scale. Good investors speed up the process and lower your burn rate.” Michael Harries, The Robotics Hub

Have your potential investors brought similar startups to market? Having that experience can make the commercialization process much faster, and it’s critical to manage your resources effectively: constant fundraising takes founders away from product development. Also, do your investors have patient capital, or do they need a rapid return on investment for their current fund? Don’t assume that a well-known investor or accelerator guarantees you success, or that you’ll even find a good fit with their process.

Ignorance of basic financials

Overreaching on inventory, being unable to meet debts in a timely fashion, structuring the company poorly: all of these are cited by founders who’ve struggled.

Customer discovery never stops

“Focus on the customer and fall in love with the customer’s problem and you’ll never go wrong.” Nakia Melecio, Georgia Tech

Do it from the start, and never stop going to market. You can’t just outsource your business development to people with better sales skills, not until you know the pain points you’re solving for your customers and you can write the scripts for them.

Not doing the research, or using vanity metrics instead of strategy

“If a founder is estimating their market in the trillions of dollars they have either not done the research or they are just delusional.” Swati Chaturvedi, Co-Founder of PropelX

“Founders who are focused only on vanity metrics (growth rate and valuation) and not attuned to developing sound business models are a red flag.” Anurag Chandra, Fort Ross Ventures

Trying to skip steps

“Another red flag is trying to FOMO you into moving quickly. Not only is it bad for arriving at a sound investment decision, it’s an indication of how they do business with customers and partners (i.e. not invested in building long term relationships).” Anurag Chandra, Fort Ross Ventures

Misrepresentation or withholding data

“Investors can tell when you are avoiding details like actual product or customer development status and it may mean you are misrepresenting your business.” Caroline Winnett, Executive Director of Berkeley SkyDeck

Cofounder issues, not having a clear leader or not being open to feedback

“There needs to be agreement on who is acting as CEO, and everyone needs to be aligned on that. Another red flag is not being open to advice from experts.” Caroline Winnett, Executive Director of Berkeley SkyDeck

Being disorganized

“Founders should be responsive to requests for more information. It shows if they are organized and in the mindset to do a deal versus spin cycles.” Shruti Gandhi of Array Ventures

]]>
An inventory of robotics roadmaps to better inform policy and investment https://robohub.org/an-inventory-of-robotics-roadmaps-to-better-inform-policy-and-investment/ Mon, 29 Nov 2021 13:40:04 +0000 https://svrobo.org/?p=22068

Much excellent work has been done, by many organizations, to develop ‘Roadmaps for Robotics’ in order to steer government policy, innovation investment, and the development of standards and commercialization. However, unless you took part in the roadmapping activity, it can be very hard to find these resources. Silicon Valley Robotics, in partnership with the Industrial Activities Board of the IEEE Robotics and Automation Society, is compiling an up-to-date resource list of robotics, AIS and AI roadmaps, national or otherwise. This initiative will allow us all to access the best robotics commercialization advice from around the world, to compare and contrast various initiatives and their regional effectiveness, and to provide guidance for countries and companies without robotics roadmaps of their own.

Another issue making it harder to find recent robotics roadmaps is the subsumption of robotics into the AI landscape, at least in some national directives. Or it may appear not as robotics but as ‘AIS’, standing for Autonomous Intelligent Systems, such as in the work of OCEANIS, the Open Community for Ethics in Autonomous aNd Intelligent Systems, which hosts a global standards repository. And finally there are subcategories of robotics, e.g. autonomous vehicles, self-driving cars, drones, or surgical robotics, all of which may have their own roadmaps. This is not an exhaustive list, but with your help we can continue to evolve it.

Do you know of robotics roadmaps not yet included? Please share them with us. 

]]>
What is the best simulation tool for robotics? https://robohub.org/what-is-the-best-simulation-tool-for-robotics/ Tue, 20 Jul 2021 18:09:56 +0000 https://robohub.org/what-is-the-best-simulation-tool-for-robotics/

What is the best simulation tool for robotics? This is a hard question to answer because many people (or their companies) specialize in one tool or another, and some simulators are better at one aspect of robotics than at others. When I’m asked to recommend the best simulation tool for robotics, I have to find an expert and hope that they are current across a wide range of simulation tools. That is why I took particular note of the recent review paper from Australia’s CSIRO, “A Review of Physics Simulators for Robotics Applications” by Jack Collins, Shelvin Chand, Anthony Vanderkop, and David Howard, published in IEEE Access (Volume 9).

“We have compiled a broad review of physics simulators for use within the major fields of robotics research. More specifically, we navigate through key sub-domains and discuss the features, benefits, applications and use-cases of the different simulators categorised by the respective research communities. Our review provides an extensive index of the leading physics simulators applicable to robotics researchers and aims to assist them in choosing the best simulator for their use case.”

Simulation underpins robotics because it’s cheaper, faster and more robust than real robots. While there are some guides that benchmark simulators against real world tasks there isn’t a comprehensive review. A more thorough review can address gaps and needs in research and research challenges for simulation. The authors focus on seven sub-domains: Mobile Ground Robotics; Manipulation; Medical Robotics; Marine Robotics; Aerial Robotics; Soft Robotics and Learning for Robotics.
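To make the sub-domain breakdown concrete, a shortlist can be kept as a simple lookup table. The simulator names below are common community choices I have filled in as illustrative placeholders, not the paper’s rankings; consult the review’s comparison tables for the authors’ actual feature-by-feature analysis.

```python
# Illustrative sketch only: map each sub-domain from the review to a few
# widely used simulators. The names are common community choices,
# NOT the paper's rankings -- check the comparison tables for those.
SIMULATORS_BY_DOMAIN = {
    "mobile ground robotics": ["Gazebo", "Webots", "CoppeliaSim"],
    "manipulation": ["PyBullet", "MuJoCo", "CoppeliaSim"],
    "medical robotics": ["SOFA", "Gazebo"],
    "marine robotics": ["UUV Simulator", "Gazebo"],
    "aerial robotics": ["AirSim", "Gazebo", "Flightmare"],
    "soft robotics": ["SOFA", "Elastica"],
    "learning for robotics": ["PyBullet", "MuJoCo", "Isaac Gym"],
}

def shortlist(domain: str) -> list[str]:
    """Return candidate simulators for a research sub-domain."""
    key = domain.lower()
    if key not in SIMULATORS_BY_DOMAIN:
        raise ValueError(f"unknown sub-domain: {domain!r}")
    return SIMULATORS_BY_DOMAIN[key]

print(shortlist("Manipulation"))  # -> ['PyBullet', 'MuJoCo', 'CoppeliaSim']
```

A table like this is only a starting point; the review’s rankings also weigh factors such as physics fidelity and whether a simulator is still actively supported.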

I’m going to cut to the chase and provide a copy of the final comparison tables for each sub-domain, but for anyone interested in acting on these recommendations, I recommend reading the rationale behind the rankings in the full review article. The authors also consider whether or not a simulator is actively supported. Handy to know! The paper is also an excellent source of information about various historic and current robotics competitions.

Mobile Ground Robotics:

TABLE 1 Feature Comparison Between Popular Robotics Simulators

TABLE 2 Feature Comparison Between Popular Robotics Simulators Used for Mobile Ground Robotics

Manipulation:

TABLE 3 Feature Comparison for Popular Robotics Simulators Used for Manipulation

Medical Robotics:

TABLE 4 Feature Comparison of Popular Robotics Simulators Used for Medical Robotics

Marine Robotics:

TABLE 5 Feature Comparison of Popular Simulators Used for Marine Robotics

Aerial Robotics:

TABLE 6 Feature Comparison of Popular Simulators Used for Aerial Robotics

Soft Robotics:

TABLE 7 Feature Comparison of Popular Simulators Used for Soft Robotics

Learning for Robotics:

TABLE 8 Feature Comparison of Popular Simulators Used in Learning for Robotics

Conclusions:

As robotics makes more use of deep learning, simulators that can deal with data on the fly become necessary, and they are also a potential solution for simulation problems around points of contact or collisions. Rather than combining multiple simulation methods to better abstract the real world in these boundary situations, the answer may be to insert neural networks, trained to replicate the properties of difficult phenomena, into the simulator. There is further discussion of differentiable simulation, levels of abstraction, and the expansion of libraries, plug-ins, toolsets, benchmarking and algorithmic integration, all increasing both the utility and complexity of simulation for robotics.
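The idea of swapping a learned model in for a hard-to-simulate phenomenon can be sketched in a few lines. This is my own illustrative toy, not code from the paper: a polynomial regression stands in for the neural network, “trained” on noisy synthetic contact measurements, then queried inside an ordinary simulation step.

```python
import numpy as np

# Toy sketch (not from the paper): replace a difficult analytic contact model
# with a surrogate fitted to observed data, then query it inside the sim loop.
rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 0.01, 200)            # penetration depths (m)
force = 1e4 * depth + 5e5 * depth**2           # "real" contact behaviour
measured = force + rng.normal(0.0, 0.5, 200)   # noisy measurements

coeffs = np.polyfit(depth, measured, deg=2)    # "train" the surrogate

def contact_force(d: float) -> float:
    """Learned stand-in for the analytic contact force at penetration d."""
    return float(np.polyval(coeffs, max(d, 0.0)))

def step(x: float, v: float, dt: float = 1e-3, mass: float = 1.0):
    """One explicit-Euler step: gravity plus the surrogate contact force."""
    f = contact_force(-x) if x < 0 else 0.0    # contact only below ground
    v = v + dt * (f / mass - 9.81)
    return x + dt * v, v
```

In a real system the polynomial would be a neural network trained on physical measurements; the point is that the simulator’s stepping code does not care which model it is calling.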

As the field of simulation for robotics grows, so does the need for metrics that capture the accuracy of the real-world representation. “Finally, we predict that we will see further research into estimating and modeling uncertainty of simulators.”

This may have been the first review article on simulation for robotics but hopefully not the last. There’s a clear need to study and measure the field. I found the sections on soft robotics and learning for robotics particularly interesting, as the paper discussed the difficulties of simulation in these fields. And please attribute any errors in this summary to my mistakes. Read the full review here: https://ieeexplore.ieee.org/document/9386154

Published in: IEEE Access ( Volume: 9)
Page(s): 51416 – 51431
Date of Publication: 25 March 2021 
Electronic ISSN: 2169-3536
Publisher: IEEE
Funding Agency:
CCBY – IEEE is not the copyright holder of this material. Please follow the instructions via https://creativecommons.org/licenses/by/4.0/ to obtain full-text articles and stipulations in the API documentation.
]]>
Eight lessons for robotics startups from NRI PI workshop https://robohub.org/eight-lessons-for-robotics-startups-from-nri-pi-workshop/ Sun, 14 Mar 2021 16:14:55 +0000 https://robohub.org/eight-lessons-for-robotics-startups-from-nri-pi-workshop/ Research is all about being the first, but commercialization is all about repeatability, not just many times but every single time. This was one of the key takeaways from the Transitioning Research From Academia to Industry panel during the National Robotics Initiative Foundational Research in Robotics PI Meeting on March 10 2021. I had the pleasure of moderating a discussion between Lael Odhner, Co-Founder of RightHand Robotics, Andrea Thomaz, Co-Founder/CEO of Diligent Robotics and Assoc Prof at UTexas Austin, and Kel Guerin, Co-Founder/CIO of READY Robotics.

RightHand Robotics, Diligent Robotics and READY Robotics are young robotics startups that have all transitioned from the ICorps program and SBIR grant funding into becoming venture-backed robotics startups. RightHand Robotics, founded in 2014, is a Boston-based company that specializes in robotic manipulation. It was spun out of work performed for the DARPA Autonomous Robotic Manipulation program and has since raised more than $34.3 million from investors that include Maniv Mobility, Playground and Menlo Ventures.

Diligent Robotics is based in Austin where they design and build robots like Moxi that assist clinical staff with routine activities so they can focus on caring for patients. Diligent Robotics is the youngest startup, founded in 2017 and having raised $15.8 million so far from investors that include True Ventures and Ubiquity Ventures. Andrea Thomaz maintains her position at UTexas Austin but has taken leave to focus on Diligent Robotics.

READY Robotics creates unique solutions that remove the barriers faced by small manufacturers when adopting robotic automation. Founded in 2016, and headquartered in Columbus, Ohio, the company has raised more than $41.8 million with investors that include Drive Capital and Canaan Capital. READY Robotics enables manufacturers to more easily deploy robots to the factory floor through a patented technology platform that combines a very easy to use programming interface and plug’n’play hardware. This enables small to medium sized manufacturers to be more competitive through the use of industrial robots.

To summarize the conversation, here are eight key takeaways for startups.

  1. Research is primarily involved in developing a prototype (works once), whereas commercialization requires a product (works every time). Robustness and reliability are essential features of whatever you build.
  2. The customer development focus of the ICorps program speeds up the commercialization process, by forcing you into the field to talk face to face with potential customers and deeply explore their issues.
  3. Don’t lead with the robot! Get comfortable talking to people and learn to speak the language your customers use. Your job is to solve their problem, not persuade them to use your technology.
  4. The faster you can deeply embed yourself with your first customers, the faster you gain the critical knowledge that lets you separate your product’s essential features, which the majority of your customers will need, from the merely ‘nice to have’ features or ‘one off’ ideas that can be misdirection.
  5. Team building is your biggest challenge, as many roles you will need to hire for are outside of your own experience. Conduct preparatory interviews with experts in an area that you don’t know, so that you learn what real expertise looks like, what questions to ask and what skillsets to look for.
  6. There is a lack of robotics skill sets in the marketplace so learn to look for transferable skills from other disciplines.
  7. It is actually easy to get to ‘yes’, but the real trick is knowing when to say ‘no’. In other words, don’t create or agree to bad contracts or term sheets, just for the sake of getting an agreement, considering it a ‘loss leader’. Focus on the agreements that make repeatable business sense for your company.
  8. Utilize the resources of your university, the accelerators, alumni funds, tech transfer departments, laboratories, experts and testing facilities.

And for robotics startups that don’t have immediate access to universities, robotics clusters can provide similar assistance: from large clusters like RoboValley in Odense, MassRobotics in Boston and Silicon Valley Robotics, which have startup programs, space and prototyping equipment, to smaller robotics clusters that can still provide a connection point to other resources.


]]>
Robots4Humanity in next Society, Robots and Us https://robohub.org/robots4humanity-in-next-society-robots-and-us/ Tue, 23 Feb 2021 21:30:01 +0000 https://robohub.org/robots4humanity-in-our-next-society-robots-and-us-conversation/

Speakers in tonight’s Society, Robots and Us at 6pm PST Tuesday Feb 23 include Henry Evans, mute quadriplegic and founder of Robots4Humanity, and Aaron Edsinger, founder of Hello Robot. We’ll also be talking about robots for people with disabilities with Disability Advocate Adriana Mallozi, founder of Puffin Innovations, and Daniel Seita, who is a deaf roboticist. The event is free and open to the public.

As a result of a sudden stroke, Henry Evans turned from being a Silicon Valley tech builder into searching for technologies and robots that would improve his life, and the life of his family and caregivers, as the founder of Robots4Humanity. Since then Henry has shaved himself with the help of the PR2 robot, and spoken on the TED stage with Chad Jenkins in a Suitable Tech Beam. Now he’s working with Aaron Edsinger and the Stretch Robot which is a very affordable household robot and teleoperation platform.

We’ll also be hearing from Adriana Mallozi, Disability Advocate and founder of Puffin Innovations, a woman-owned assistive technology startup with a diverse team focused on developing solutions for people with disabilities to lead more inclusive and independent lives. The team at Puffin Innovations is dedicated to leveling the playing field for people with disabilities using Smart Assistive Technology (SAT). SAT incorporates internet of things connectivity, machine learning, and artificial intelligence to provide maximum access with the greatest of ease. By tailoring everything they do, from user interfaces to their portable, durable, and affordable products, Puffin Innovations will use its Smart Assistive Technology to provide the much-needed solutions that the disabled community has been longing for.

This continues our monthly exploration of Inclusive Robotics from the CITRIS People and Robots Lab at the University of California, in partnership with Silicon Valley Robotics. On January 19, we discussed diversity with guest speakers Dr Michelle Johnson from the GRASP Lab at UPenn, Dr Ariel Anders from Women in Robotics and first technical hire at Robust.ai, Alka Roy from The Responsible Innovation Project, and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics, with discussion moderated by Dr Ken Goldberg, artist, roboticist and Director of the CITRIS People and Robots Lab, and Andra Keay from Silicon Valley Robotics.

You can see the full playlist of all the Society, Robots and Us conversations on the Silicon Valley Robotics youtube channel.

TRANSCRIPT OF THE FIRST INCLUSIVE ROBOTICS DISCUSSION (from video directly above)

Andra Keay 0:05
So welcome, everybody. Welcome to our first Society, Robots and Us for 2021. And I’m looking forward to a discussion that is going to help us set the agenda for robotics in 2021 and beyond. And I think it’s very important that as our technology emerges, we address the issues around how it is affecting society, and how it can have an impact both positive and negative on society. And so we have wonderful conversations. And we started doing this event in the early days of the COVID era, and we were focusing on, so what actually does it mean? How can robotics and roboticists help in this time of pandemics? And it was a fantastic conversation, and we decided that it was time to expand the topic, and to start to talk about things like racism in robotics, global challenges and how we address those. So it’s one of my favorite events. And I’m delighted to see so many people joining us now. My role is to warm up for the speakers and the rest of the discussion. I’ll just give everybody a little bit of housekeeping as to how this is going to roll. I will introduce each of the speakers, and they will each share their thoughts with us. We will move from speaker to speaker. If you have questions specifically for one of the speakers, put them into the chat, and I can forward the question on, and then we’ll start the general discussion once every speaker has had their time to speak. And in the general discussion, well, I’m looking forward to finding out: what is inclusive robotics? Why do we need it? How do we go about getting it? And even beyond that, what is the robotics agenda for 2021 and beyond? What are the questions that we haven’t thought to ask, perhaps, and perhaps it’s time that we started those discussions. And in the spirit of that, I would like to acknowledge that I and many of us are here on the lands of the Ohlone people, who are an unrecognized First Peoples tribe of California. 
And I’m very pleased to actually see more events starting to acknowledge the First Peoples as part of how things happen. And so, I’ve given us the introduction and the housekeeping, I’d like to say a little bit about our speakers, and we’ll kick off the rest of the discussion. And I see Alka has pointed out an event that’s coming up in the chat, a spring founders circle for the responsible innovation labs. I would like to also say, Silicon Valley Robotics and Women in Robotics have regular events. So in Women in Robotics, we have a weekly book club, for example. And we have a Slack community where we can meet together online, as well as having local chapters. If you’re interested in joining that, I’ll pop the link in the chat. Please go to womeninrobotics.org. And it’s not an HTTPS site, it’s still an HTTP site. So if you can’t find it, that’s the reason. But if you’re interested in signing up, please go there. Silicon Valley Robotics, which is the organization that I call my day job, although it is also a passion project, is able to assist you if you are a robotics company at any stage. And we have mentor networks, we have events that are topic related, or that are related to helping you with your startup. And I’m just wondering, Ken, would you just like to say a few words now about CITRIS, where Ken is the director of the People and Robots Lab, as well as the research there. There’s also the CITRIS Foundry, I believe.

Ken Goldberg 4:37
Thank you, Andra. So I want to say we’re really lucky to be partnering with you on this, and so it’s a pleasure to work with you. And CITRIS is a University of California, actually state level, organization that connects four of the campuses: Merced, Davis, Santa Cruz and Berkeley. CITRIS stands for the Center for Information Technology Research in the Interest of Society. So the mission of this series that Andra is organizing is very much consistent with the mission of the center. And my initiative within it is People and Robots. So these have really come together very strongly. And this idea of inclusive robotics is something that we’re very excited about developing and expanding in the year to come and the years to come. So I really appreciate the discussion we’re going to have tonight, I’m really looking forward to hearing your perspectives.

Andra Keay 5:35
Thanks so much again, Ken. And you’ve probably seen that we have a fantastic lineup of speakers tonight. We have Dr. Michelle Johnson, who is at the GRASP Lab at UPenn, and actually is the director of the Rehabilitation Robotics Lab at the GRASP Lab, and Associate Professor of Physical Medicine and Rehabilitation. We have Dr. Ariel Anders, and she is the first technical hire at Robust.ai, and is also a board member for Women in Robotics. We have Alka Roy, who is the founder of the Responsible Innovation Project, and is working on building delight, trust and inclusion into technology and AI. Looking forward to hearing more about that. Then we have, getting my notes out of order here, then we have Kenechukwu C. Mbanesi, who is a roboticist, and my notes are totally out of order now, and is a member of Black in Robotics, and he will be able to speak to us a little bit about Black in Robotics. As well, Kenya Andrews, who’s a computer engineering master’s student at the University of Illinois, and on the undergraduate committee of Black in Robotics. And, of course, Ken Goldberg, who is the director of the People and Robots Lab at CITRIS, and the distinguished William S. Floyd Jr. Chair in Engineering at UC Berkeley, and is not only a roboticist, but an artist, and I love the cross disciplinary perspective that that brings to the discussion. And without more from me now, I think we’ve given all of the speakers time to join the conversation. I would like to introduce Dr. Michelle Johnson.

Dr Michelle Johnson 7:46
Thank you, Andra. Thank you, everyone, for attending. I was really intrigued by the title, inclusive robotics. What is it? Why do we want it? And what do we need to do to have it? And as I was pondering the title, two clear thoughts came to mind. First, I thought, inclusive robotics means designing robots that reflect the diversity of society in terms of culture and race; that was the first thought. The second thought was that inclusive robotics means, to me, designing robots that are usable in all types of settings, in low resource and high resource settings, in high income countries and low and middle income countries, that benefit all types of people at different socioeconomic status. And not just in the US, or the UK, or in Europe, but all over, globally. So those are two thoughts that were really present with me. As I thought about this, I wanted to just expand on those two thoughts a little, and explain a bit about what I mean by that. So going back to the first idea, that we should be thinking about designing robots that reflect the diversity of society in terms of culture and race. You know, I’m in the healthcare area, and my PhD is in ME, and I’ve been thinking about designing robots for people with disabilities for a long time, and as we suggest that robots will be taking care of us and being seen as some type of assistance to our clinicians or to our elders, or to people in general, I think that this is where we really need to start thinking about how we design them, how we train them using AI, and their functional goals. A couple of examples of why this is important. I was talking to a colleague recently about his face recognition software. And something he said to me struck me. He said, oh yeah, you know, our face recognition softwares really have a hard time detecting people with darker skin. And I thought, okay, so where are the AIs, how are the AIs trained? 
So you know, this idea that as we train some of our systems, they’re, they’re, they’re being trained, maybe not enough on a diversity of people. But they are then not able to function well, when, when meeting the diversity of people that we encounter. So that’s an example of what I what I mean by kind of designing them to really think about the diverse diversity of society in terms of culture and race. Another thought is, recently in my lab, we’ve been talking about social robot design, and thinking about robots for children with disabilities and doing remote telehealth. And one of the discussions that we had was, how do we make sure that this robot, when someone looks at it can be they can see themselves partnering with it, and they don’t automatically assume that this robot is white, or this robot is any particular race or color, but that they can actually form a collaboration with the system. And so we talked about developing a robot that kind of can be seen as multicultural. And in fact, we did an exercise with my class that I taught this past fall about asking them to interact with the robot and asking them about issues of diversity and gender, and race, and that whether interacting the robot with the robot engendered any of those thoughts. So those are that’s kind of I think we need to do a better job of including cross section of people in our development process in our discussion about what is ideal. Just another quick anecdote. There’s a paper that came out that said, robots social robots should have eyes with a pupil. And when you looked at the message section, the majority of the people that they had surveyed were white, and they had blue eyes, or eyes that you saw distinct pupil. 
And and so I was like, Okay, well, of course, then that makes sense that now you’re gonna say that robot should have you know, pupil, because the variety of people that you’re serving, you know, that’s what they’re used to seeing, while if you serve dade, you know, people with darker eyes, or brown or but you’re not going to see a distinct pupil. So that’s not going to be a big deal with those. So that’s just another anecdote of, we need to be careful as scientists and as developers to really consider who we’re talking to, so that we’re not, and that was a kind of well bred paper. And I’m thinking, Oh, Why didn’t anyone ask that question? Maybe because, you know, I think the responsibility is on the designer, and the person developing it to make sure that their, their population that they’re querying reflects this cross culture and gender. And I think if if we do that, we’re going to see robots that are more inclusive, and we won’t have these. At least we’ll have less of these anecdotal stories that I’m pointing out. The second point about low robots should be usable by all just quickly is that came out of kind of some work that I I’m really passionate about affordable robots in global health. Because as I look back at robot assisted therapy, Sara therapy and the systems that we develop, I find that wholeheartedly. They’re they’re quite expensive, and they really haven’t penetrated low and middle income countries yet, in terms of stroke, 80% of the strokes and functional impairments that are resulting are outside of high income countries. So there’s this disparity in terms of here’s this technology that we’re proving to be able to possibly support in areas that have low resource, not enough clinicians where you might be able to leverage technology to support recovery after stroke or upper extremity impairment. But yet still, we have not been able to penetrate these areas, because the systems are way too expensive. 
So I mean, my lab has been passionate about how do we develop systems that are not only effective but affordable and able to be used in these low resource settings. So that’s kind of my two things. And I think more of us need to kind of be thinking about those things. And I see sometimes we are thinking about like better and better and better and cooler and cooler tech, but the translation of that tech into communities Globally, I think is missing. So that’s my second point about inclusion in robotics. I’ll stop there, Andra, I think I made my main points.

Andra Keay 15:12
No, that was excellent. And I think they were very clear points. I’ve been penciling down some questions myself. If anybody else has questions specifically for Michelle, or questions about the subjects that she’s raised, you can table them in the chat, and we will definitely get to them. I’m looking forward to hearing what other angles on the discussion of what is inclusive robotics, and how do we get it, we’re going to uncover tonight. So without further ado, I would like to introduce Dr. Ariel Anders, who is the first technical hire at Robust.ai and on the board of Women in Robotics.

Dr Ariel Anders 15:58
So that was a really great talk; I’m excited to try to follow up. I think I have some similar ideas, and I really appreciate Michelle’s comments on how do we, you know, we keep pushing the envelope of what robots can do, but they’re not necessarily going to everyone. And so to start, I just want to say thank you for inviting me to talk and share some of my ideas about inclusive robotics. If you’ve been able to see some of my recent presentations, I’ve been trying to get into the habit of introducing myself and providing a little bit of background and context. And I think that for this talk, the most important thing here is that I am a human being. And I think that when we think about inclusive robotics, and the questions that Andra asked the speakers to share our thoughts on, it should come from this part of our humanity. And there are the three questions, you know: what is inclusive robotics? Why do we need it? And what do we need to do to have it? I think that my answers today, you know, I’m kind of excited that we’re on Zoom, and it looks like this is recorded, because I am curious to see, you know, what do I think in a decade or so on some of my thoughts here. And I think we’re going to continue growing and learning, and just reiterating on this process. And, you know, we really should start having these discussions more and more, especially from people like myself, who really generally did not work so much in the idea of human robot interaction; a lot of my previous work was in programming robots to do new capabilities. So I’m really excited to share my ideas and what my kind of first impression thoughts are here. And to start off with, when I think about, you know, what is inclusive robotics, I think, what does it mean to be inclusive? And my definition of inclusive is belonging. So, going with that, inclusive robotics is creating robots that belong in our world. 
I think that belonging that word to me has a lot of implications on what type of robots will have. Sorry, I think I think my dogs also very excited about that idea. Hopefully, you guys can still understand we’ll try to speak over her in terms of what does it mean for a robot to belong? It should be safe, it should be a robot that’s comfortable around us, you know, we should be comfortable to have it or around. We want to make sure the robots are, you know, they belong, they’re probably not hurting people because out exclude others. I think there’s a question about the appropriate use of robotics kind of going along what Michelle said, you know, it should not be something that alienates more people, it should be something that doesn’t exploit all our resources, it should be something that’s accessible. And to me, it should also be something that society wants. We want robots that are trustworthy, and they work and we want to have them around. And most importantly, in this very kind of circular definition. When I say people, I really do mean all human beings. And so when I think about this, you know, robots that are trustworthy and safe, and they work we want them around. To me. It makes sense that we would want robots like that. I think the question really is, why do we want this definition? To include everyone. And I honestly can’t answer that for you. But I can refer to an expert, Maya Angelou has a wonderful quote on diversity, and that it makes for a rich tapestry. And we must understand that all threads of the tapestry are equal in value no matter what their color. I think that when we think about robotics, it is incredibly important to think about creating robots for marginalized groups. It’s not just for people who can pay for them. That being said, How do we get there, and I honestly don’t know how, but I think we should follow the principle of nothing about us without us. 
That means we should make sure we have a diverse demographic creating robots; we need people from all backgrounds, otherwise there's no way we're going to get there. The other idea to keep in mind as we continue with our robotics development is to remember that we are all humans. I want to share a little personal story here. Back when I was an undergrad, I had the opportunity to present my research at africamps, and our keynote speaker was Maya Angelou, which is incredible, that I got to see her. I can't paraphrase or even summarize exactly what she said, but she really drove the point home that we are all human beings. That's the message I want to share with you as we go to make our robotics more inclusive. That's my short presentation on what I think inclusive robotics is.

Andra Keay 21:58
Thank you. All right, that was great, and beautiful quotes from Maya Angelou as well. I'm getting such a lot of rich material from the discussion going on in the chat about what a multicultural robot is, which Michelle raised, and your very clear point: nothing about us without us. I'm thinking we have a lot of issues about where we situate responsibility along the production of robots. I think everybody kind of wishes that it's somebody else's responsibility, and quite often we reach the end of the production process with great big problems somewhere along it, rather than everyone working together collaboratively to develop appropriate robotics. So I'm starting to get some thoughts myself around this process; thank you both for inspiring us there. What I should do is introduce our next speaker. Alka Roy is the founder of The Responsible Innovation Project, and currently visiting faculty at Berkeley, I believe. Over to you, Alka.

Alka Roy 23:29
Yes, hi. I was just trying to figure out how to share my screen on my new computer. Sometimes simpler is better. Let me know if it works. No, it doesn't work. All right. First of all, thank you for inviting me to this. I was actually more excited to hear what everyone else was going to say than what I was going to say. I'm definitely not making robots, and I congratulate you for inviting an outsider to comment on this; but I do have a lot of opinions about it, and why not? I think more and more people should have opinions about this. I'm still trying to see if I can share my screen, because I wanted to share a framework with you. My provocation tonight is very different, because when I first got invited and heard "inclusive robots," I said I didn't want to attend this event, or at least speak at it, because I'm not sure a robot can be inclusive, and what does it really mean? So I've been thinking and thinking about it, because I actually believe that certain things should be excluded. Let me explain what I mean by that. I'm working on a responsible innovation framework. (Can I just share my screen? You should be able to do it on the green share-screen button at the bottom. Right, my security settings are not allowing it. Give me one more second. All right, I'll just talk to it; I'll put my link on the website and we'll have another time to share this.) So: I'm the founder of the responsible innovation framework, and prior to that I was at an innovation center in Palo Alto, where we were making a lot of the things I'm going to talk about. I was at the end of 5G and AI and everything, so we were enabling anything that mobility can enable: immersive experiences, robotics.
In the middle of all of these discussions, I found a lot of voices missing, which were community voices. Even when cities were involved, it was usually their CTO that was coming in, not people thinking about the impact on society. I kept raising the point, raising the point, until it got so loud for myself that I felt I had to take a break or step aside. So I stepped out of that arena and took a break. Then this whole world happened with COVID, which has been a really interesting place for reflection for all of us. One of the things I worked on last year is this responsible innovation framework, which is like a nested three-by-three-by-three, which is why I was hoping I could show you the visual; but just go with me and I will try to describe it for you. The first thing in there is the provocation for robotics (and I'm really excited to see what you think about it, how you're going to disagree with me, how you're going to tell me I'm wrong), which is that I think we need to think about stakeholders differently. So we have people, which everybody talked about: really people, not customers, not clients. Then we have the planet and the environment: physics, our laws, all those things. The third stakeholder is technology, things. So I call it people, planet, things. The reason I put things as a separate stakeholder is my provocation that the reason we design technology so problematically, including robots and chatbots and digital twins, is that we don't separate people and things. We try to build things in our own image, or the image of something living. So we're taking an inanimate thing and creating it in that image; the animation world is full of it. And that causes a lot of confusion for us.
Because we transfer our feelings, we transfer our biases, we transfer our confusion to these inanimate objects. There's now so much research where people are feeling things about a robot: they've created a robot to clear a minefield, but when they see this robot getting blown up, they stop and remove the robot, because they feel it is doing more than it was designed to do. There's a reason to create social and empathetic robots, I get it. But I think that in that process we create teenage girls, we create women as our servers, and we just perpetuate these biases unless we can decouple them. So my provocation is: why not design robots as things, useful things, but not in our image, not in our voice, so that we don't transfer all of that? It just appears as a thing, and we treat it as a very useful thing that communicates and helps us, and it's technology. So that's the basic provocation for me: the stakeholders. The other aspects of this responsible innovation framework include the values of open and safe (which are conflicting values, but both needed), delightful and trustworthy, and inclusive and dependable. There, inclusion is about people and ideas and types of people. The reason I say delightful and trustworthy is that whenever we talk about responsibility, about doing the right thing, we feel like we have to be very serious and boring; whereas I think robotics design can also be delightful for us. And I love what Michelle said, and I think Ariel said, about how we're always looking ahead and feel we have to be chasing the latest and the greatest. But there's so much for us to learn from our weathered technology, the old stuff, and from making things accessible around the world. So hopefully at some point I'll get to share this framework with you.
But those are my provocations for this conversation. I’d love to hear what other folks have to say about that. So thank you.

Andra Keay 30:33
Thanks, Alka. I can see some people are very positive about what you're saying there. I have to agree that the good design framework is to not create something that perpetuates stereotypes, because we do anthropomorphize, and it triggers our unconscious biases and stereotypes. I see some more conversations coming, but I think it's time to introduce Kenechukwu Mbanesi from the Black In Robotics undergraduate committee. He's currently doing a PhD in robotics engineering at Worcester Polytechnic Institute in Massachusetts, and I'm looking forward to learning more about that. Over to you.

Kenechukwu C. Mbanesi 31:29
Alright, hello. Sorry, I was just trying to get my screen sharing going here. Just give me one second. Can you see my screen?

Andra Keay 31:47
Yes. Looks good.

Kenechukwu C. Mbanesi 31:49
Awesome. Yes, thank you so much for the privilege to be here and join this conversation today. I've benefited a lot just by listening to the previous speakers. I want to take this discussion somewhat piggybacking on what Ariel ended with: how one way we can achieve inclusive robotics is by getting people from all backgrounds involved in robotics engineering. For a long time, I've been very passionate about the power robotics and technology have to provide improved quality of life and socio-economic benefits for people in underserved and underrepresented communities, especially on the African continent. That's really where my passion lies. As I see it, exclusivity, which is the inverse of inclusivity, is caused by two things (it could be more, but in my understanding it's two). One is discrimination, in other words unequal opportunity. The second is limitation, more or less unequal access. My passion really lies in addressing the equal-access problem, and how I envision that is through education. How can we get more people who otherwise would not have a seat at the table of discussing and developing robotics technologies to be able to do that, by providing them the means through education? I've been very privileged to be part of interesting projects with this goal in mind, and I'd like to share a few of them with you. The first project is Math and Science for Sub-Saharan Africa. This project was supported by the World Bank, and our audacious goal was to run a continent-wide train-the-trainer program to upskill STEM teachers to teach math, science, and robotics, and really to understand the interconnectedness: how do science and math inspire people to pursue engineering and robotics? The thinking was that if we could inspire and educate teachers well, they would in turn educate and inspire students. We started this in 2016.
We've been able to partner with government education agencies on the continent to run training programs, in person and virtually, in several African countries. And just to put a plug in here: one of the major challenges we faced in this project, and I know we still do, is accessing affordable educational robotics kits at scale. We were very fortunate to get VEX Robotics to donate some kits, but beyond that we still see this as a significant bottleneck to getting students and teachers access to kits that can really help them learn and participate. The second project is called Cobots for Kids; "cobot" stands for collaborative robot. This project was supported by the Advanced Robotics for Manufacturing Institute here in the US, and it was designed to combat the current gaping shortfall of skilled manufacturing talent in the US, and particularly to address that in an inclusive way. What we were doing, really, was to swim upstream: to inspire and train middle and high school students, especially from underrepresented backgrounds, to be passionate about robotics and advanced manufacturing. We take them through a several-week after-school program where we teach them how to program collaborative robots (the UR robots you can see on the screen) and how to actually manufacture items using CNC machines and other equipment. Our vision was that this hands-on, experiential learning opportunity would inspire them, help get them into these programs in college, and increase retention so they can actually go on to pursue robotics in the future. The last project is one I got involved in very recently, and it's very dear to my heart: the Pan-African Robotics Competition.
It started in 2015, really small, but right now it's grown to be probably the largest robotics competition on the continent. It was designed with inspiration from FIRST Robotics, if you're familiar with that here in the US, with a goal to inspire the next generation of roboticists on the continent. We have seen very strong participation and significant impact on how students are motivated to not just be consumers of this technology, but to have the confidence that they can be part of the producers, part of the people who actually develop this technology. That's naturally where my passion lies. This competition goes on every year, and we get students all the way from middle school up to college level to program robots, compete with each other, and just really be inspired. So, to summarize: I am a very relentless believer in the future of inclusion, and one way I'm striving for that, as this short presentation shows, is by working to provide young people, particularly from underrepresented communities, with access to education and skill development. That allows them not only to be at the table where they can develop technology in a way that's diverse, but also to benefit from the dividends of the technology. Thank you.

Andra Keay 38:11
Thank you very much, Kenechukwu; I'm looking forward to learning more about all of those initiatives. I think Ari raised an interesting point: there can be a lot of education initiatives, so how can we maximize the benefit, as well as the access, rather than splintering or fragmenting success? I know that Open Robotics is very keen to democratize access to robotics by putting forward access to simulations, to get around some of the costs of having access to physical robots. But of course, that requires access to the internet, and we're seeing even in the United States that there is a complete gap between those who have internet access for education and those students who, for the last year, have really struggled because of the lack of it. I don't know what steps we need to take; I think we'll discuss that a little more as the discussion moves on. But I would like to introduce our next speaker: Kenya Andrews, master's student in computer engineering at the University of Illinois, and on the Black In Robotics undergraduate committee.

Kenya Andrews 39:41
Okay, thank you so much. Hello, everyone. I'm very excited to be here. I didn't bring slides, so I'm just going to speak. First I'm going to talk a little bit about my background and where I fit in this space, and then I'm going to go into the topic. I'm a first-year master's student, and I live in the space where we make algorithms, which are the decision-making properties of robots. My research focus is machine learning fairness, and within that my passion areas are algorithmic justice, algorithmic bias, and decision making. My current project is looking at fair distribution of COVID-19 vaccinations among vulnerable populations in the state of Ohio. Right now we're looking at different measures and trying to build different models. Some of the things we've tossed around are: should there be equal hospitalization rates between vulnerable and non-vulnerable people, and would that make it fair? Or would it be more fair if every census tract or center had the same number of vaccines distributed to them? We've also had to work around things like distrust: how historical injustice creates distrust, and fewer vaccinations, and how that affects distribution. Hopefully that gives you a little context for where I'm coming from. Okay, so when I first saw "inclusive robotics," I thought: well, that's a loaded topic, so let's break down some definitions. First I wanted to look at: what is a robot? It's a programmable machine that does a task, with or without human assistance. Then I wanted to look at: well, what is inclusion?
That's where I started having some issues, because we don't have real agreement on what inclusivity looks like, and for different reasons. Maybe you don't agree with someone's lifestyle, so you don't want to include them in things; or you think, "I should have more because I do more, I work more." It creates these biases around whether people should or should not be included. So I went to Oxford, and some of their definitions were: not excluding any of the parties or groups involved in something, and aiming to provide equal access to opportunities and resources for people who might otherwise be excluded. Putting those two things together, I thought: well, we want robots that make smart decisions to achieve different goals in this world. And it's not just that we want them to do things; we want them to do them well. Unless they can do them well, then maybe they shouldn't do them at all; that's kind of what we were talking about a little earlier. So this analysis of whether robots are doing something well should encompass: are they being just, are they being fair, in the decisions they're dishing out? To look at that, you need to start from the beginning: from the beginning of the algorithm. There are several stages at which people look at fairness. One of them is at the end: do all the outcomes come out good? Another is while the decision is being made: in the stages in between the decisions, were they fair? So (hopefully you can hear me) I want to talk about a few definitions of fairness, to bring some context to where we are.
One definition of fairness is unawareness. That's when we exclude certain attributes of people from the data: even though we know they're there, we don't look at them, we just ignore them. That can be really dangerous, because if you ignore those things, you're ignoring the historical injustice that comes with them. If you ignore the fact that someone is a Black or brown person, you're ignoring the injustice they had to face to get where they are, and how that could have affected what we look at as their resume. Maybe they don't look qualified to you; but if we considered the things that affected their resume, we might start to see that actually, maybe they are qualified, or even more qualified than someone with a similar or equal resume. Another definition of fairness is demographic parity. That's when you look at whether all the demographics in a dataset have equal outcomes: whether white, Black, Asian, and Indian people are accepted at the same rate, or rejected at the same rate. But grouping demographics together can completely cancel out another demographic group a person belongs to. It's not only race: what about height, what about age, things like that? You could completely ignore a part of someone's person if you group them together like that; that's the danger in it. Then there's something called equalized odds: if you're qualified, you're equally as likely to be accepted into something as someone else. Another one is equality of opportunity: regardless of your demographic group, you're equally as likely to have something, or be rejected for it, as someone equally qualified. And then there's predictive parity.
Predictive parity is very similar to demographic parity; it's when the positive rates are the same. And lastly, I'll go over counterfactual fairness. That's when the outcome for you is the same in this world, where you are who you are, versus a world where you had a different set of demographics. So instead of being a short Black woman, if I were a Caucasian woman and a mother, would I have the same outcome? These things are important, because when we look at robotics and the decisions they're pushing out: what measure of fairness are we going by, and which one is most appropriate for our context? It changes all the time. So we have to be very careful when we're designing robots, and understand what kind of space they're going to be in and who they're going to be working with. Sorry, I'm reading my notes, one second. Okay, yes: we want to make sure that we're not promoting disparity, but that we are minimizing or mitigating it. How can we do that? Some things I really think we should do: just take a step back and slow down. We need to look at where we are right now and come to agreement on what inclusivity looks like. Once we can all sign off on that, we need to understand how we make decisions, because we don't even have a great understanding of that. We need to know: what is it that we're saying is good or bad, and why are we saying it? If the reasons why are bad, maybe that shouldn't be in our design; and for the reasons that are good, maybe we need to superimpose those and make them count even more. That comes after we have a good understanding of all this and we're developing good things.
I also think we have to start investing. If you're designing robots for a community, something that's going to be specific to a community, maybe you should go to some of the community meetings and sit and understand their struggles, the things that are actually going on there, so that you have a real understanding. You need to support organizations like NSBE, the National Society of Black Engineers, or Women in Robotics: support them with your funds, support them with your knowledge, volunteer to speak. Tell students about robotics and what it looks like. But while you're telling them about robotics, take them to the African American museums, take them to the Holocaust museums; teach them moments in history where we weren't so fair, weren't so just, weren't so inclusive, and how those things can translate over into our lives. Start a scholarship, be a mentor. Those are my thoughts. Thank you so much; I appreciate you all.
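The group-fairness measures Kenya walks through can be made concrete in a few lines of code. This is an illustrative sketch, not code from the talk: the toy decisions, the group labels, and the helper functions (`selection_rate`, `true_positive_rate`) are assumptions of mine, showing how demographic parity and equality of opportunity are checked on a model's binary accept/reject decisions.

```python
# Toy illustration of two group-fairness measures on binary decisions.
# y_hat: model's accept (1) / reject (0) decisions
# y:     true qualification labels
# g:     demographic group label for each person

def selection_rate(y_hat, g, group):
    """Demographic parity looks at this: fraction of a group accepted."""
    picks = [p for p, grp in zip(y_hat, g) if grp == group]
    return sum(picks) / len(picks)

def true_positive_rate(y_hat, y, g, group):
    """Equality of opportunity looks at this: acceptance rate among the qualified."""
    tp = sum(1 for p, t, grp in zip(y_hat, y, g) if grp == group and t == 1 and p == 1)
    pos = sum(1 for t, grp in zip(y, g) if grp == group and t == 1)
    return tp / pos

# Hypothetical decisions for two groups, "A" and "B"
y_hat = [1, 0, 1, 1, 0, 1, 0, 0]
y     = [1, 0, 1, 0, 1, 1, 0, 0]
g     = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Demographic parity: are the groups accepted at the same overall rate?
dp_gap = abs(selection_rate(y_hat, g, "A") - selection_rate(y_hat, g, "B"))

# Equality of opportunity: among the truly qualified (y == 1),
# are the groups accepted at the same rate?
eo_gap = abs(true_positive_rate(y_hat, y, g, "A") - true_positive_rate(y_hat, y, g, "B"))

print("demographic parity gap:", dp_gap)   # 0 would mean equal selection rates
print("equal opportunity gap: ", eo_gap)   # 0 would mean equal TPR across groups
```

A gap of zero on one measure does not imply a gap of zero on another, which is Kenya's point: the measures can conflict, and which one is appropriate depends on the context the robot or algorithm is deployed in.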

Andra Keay 51:09
Thank you so much, Kenya; you raised great points, and it was wonderful to have the definitions there. What I liked most is that you took it back to starting with the algorithm. This is something that is both critically important and also crucially problematic at the moment, because right now robotics has become a subset of AI. That means federal policies on AI are incorporating robotics; it means the agenda is being driven by the discussion around AI and AI ethics, and it is often being done in complete ignorance of robotics. One of the problems is that if the discussion is only about robotics, then it may only be about safety rather than algorithmic transparency; but at the same time, if the discussion is primarily about the algorithm, then it's going to exclude the impacts of the physical robot, and physical robots have a completely different and expanded way of intersecting with us in society. So I love that you started with the AI in that discussion. Let me just see if Ken would like to speak now.

Ken Goldberg 52:31
Thank you. I appreciate that. I am really inspired by a lot of this discussion. I also want to take a moment to acknowledge that the event tonight is sandwiched between two major events, at least in the United States. One is Martin Luther King Day, which we celebrated yesterday, and tomorrow is the inauguration: in approximately fifteen hours we will have a new president of the United States and a new administration. I don't know how others feel about that, but I for one am very, very happy about it. I think it's important because we are at an opportunity, a new chapter in American history, which I think will affect a lot of events globally. One thing that struck me is what Kenya just mentioned: the COVID vaccine process is going to be a very interesting re-examination of our sense of inclusivity, because we are going to have to think very carefully and deeply about how we prioritize the vaccine. It's been very interesting to me that seniors, healthcare workers, and incarcerated individuals have been prioritized, with good reason; it's interesting because they're often not considered in our priorities. So it's been forcing us to reconsider, and I think as more vaccines become available, we're going to have to do some really careful thinking about how the rollout happens. I also want to note that this pandemic has woken us up in so many ways. It was a hundred years ago that the 1918 pandemic occurred, and I was reflecting on the idea that the word "robot" was coined in 1920, right after the end of that pandemic. I'm still trying to wrap my head around the idea that it was such an interesting context: they had just gone through a world war and this horrendous threat to humanity.
And that's when the playwright Karel Čapek, in Czechoslovakia, basically comes up with a story about robots rebelling against the totalitarian regime that was forcing them to work; that's where the word robot originated. So a hundred years later, I think that thinking about robots in this context of our political, economic, and social environment is so important. The points raised here, from the beginning with Michelle characterizing what our definition is: I think that's so hard to actually wrap our heads around. We can talk about seniors and children; age is one sort of inclusivity, one very big factor. Then there's one we didn't talk about tonight: gender, and LGBTQ people, all the gender issues that have come to the fore this year. And race issues: Black Lives Matter, BIPOC, the whole idea of thinking in new ways, which has created a lot of shifting of attention and priorities in a really positive way. I can tell you that Black In Robotics and Black in AI have had a big influence this year on our admissions process at Berkeley. We're right now reviewing applicants, and we're getting a lot of attention: more applicants than ever before, more Black applicants than ever before, and two of them are here, actually. I want to say it's wonderful, because what's really important is that we're learning, and educating the faculty, that it's not just about looking at the scores or how many papers an applicant has written, but what their trajectory has been. If a student has come from adversity in a small village in Africa, and now is an undergraduate doing really well in classes, that's a huge trajectory; imagine what they had to overcome to get there.
So really think about that in regard to how you're evaluating that student. They may not have a published paper, but they're on a trajectory, and nothing's going to stop them. So I think this is really fascinating, and it's a really powerful and important time. I also think in terms of other races: you opened tonight, Andra, with a story about Native Americans and Aboriginal people, which I think is really important; we should also consider Hispanic, Indian, the full spectrum of races out there. And languages, by the way: a big disparity that excludes many people is the language they speak. There's an emphasis on English in a lot of the publications, and that is very difficult when it's not your native language; you have a barrier to overcome in the way you read and write. We need to think about how to overcome that; even our conversation tonight is in English. This is where some of the AI translation tools again come into play. They are going to open up these doors, and I hope they will increase in quality so that we can have simultaneous translation. For example, for people with hearing disabilities, having automated closed captioning is a wonderful thing. We've been using it in our classes; I have a student who is hearing-disabled, and we use it for all of our meetings. It's opened a huge number of doors for all kinds of disabilities. With regard to cognitive diversity, or neurodiversity, and people with learning disabilities: we're finding out today, with COVID-19, that these kinds of learning disabilities are much greater than we thought before; students have all kinds of challenges.
It's also important to acknowledge, as Michelle noted, the socio-economic variations: we are oftentimes targeting the particular people who can afford these kinds of robots and tools, who even have Wi-Fi in their homes; but many people do not. So how do we think about that? I also think that, intellectually, the people developing these things are oftentimes nerds like me: engineers, people who feel pretty good about doing science or math or STEM. But many people don't have that; they're uncomfortable there, and they feel excluded. So how do we engage with the artists and the humanists, the writers, the journalists, who are so engaged across the board? And workers are oftentimes the ones affected by the robots we're developing, so we need to be engaging with workers and thinking carefully about how it's going to affect them, especially minimum-wage workers, who are most vulnerable to these technologies. So there are so many things here that are engaging for me. I also have to say: Kenechukwu, I have not met you before, but I'm so excited to follow up with you, because we have a common background in Nigeria. I was born there, in the sixties. The programs you mentioned I was not aware of, but it's fascinating, because we started something called the African Robotics Network with a professor in Ghana. This was in 2012, and its first objective was a challenge to build an ultra-affordable, programmable robot for education: something under $10. We thought nobody could ever do that. Anyway, it turned out that someone did.
It was a hobbyist living in Thailand, who basically came up with what he calls a Lollybot. If you look it up on the internet, it’s L-O-L-L-Y-B-O-T, and it costs $8.64; you can build it from an old Sony game controller. Anyway, what I want to say is, I love what you’re saying, because I completely agree: there is a big opportunity. I am very excited about Africa and its potential; I think it is a major continent that is going to accelerate into the future. And one of the things I’ve found in African students is a really incredible ability to think outside the box, because they think differently as a result of their experience. They’re very, very attuned to how to make something affordable and sustainable, how to make something that works even when the electricity goes out, which nobody in the West usually thinks about. But those kinds of things are really important. And they are as engaged and interested in robots as any kid anywhere. So that’s why I want to be able to bring robots to them, and the programs that you’re talking about, like the Pan-African Robotics Competition, I absolutely love. So I want to connect with you, because I would really like to follow up. That is one of the things I think we can do as the group of us tonight: we’re forming a community. What I feel is that there’s a real sense of something grassroots happening here, and I’m so excited. I want to thank you, Andra, for putting this group together, because there’s a spark here that I want to support, and I really want to see it grow over the next few years. I think we are at a historical moment, an opportunity for us to step forward, take this opportunity, and carry it forward in a really meaningful and sustainable way. Thank you.

Andra Keay 1:02:03
Thank you so much, Ken, that was a wonderful note to finish on. Sadly, for tonight, we are out of time. The problem space is enormous, but it’s fitting to realize that the opportunity space is even larger. I personally have thought a lot about what a robot would look like if it was designed by women, for women, and I can imagine things being different. But imagine now what it would look like if your language models were for languages other than English, or if you had to rise to the challenge of developing language models for multiple languages, as is the case in Africa. And I love the examples that Ken gave us there as well about really thinking outside of the box. It’s been proven fairly scientifically that diversity drives innovation. It might not be as comfortable, but it is certainly far more productive if you’re looking to make change, as well as creating something that’s inclusive. So we shall have to continue this discussion and extend it into our workplaces and to the rest of the people around us, because, yes, we need to get everybody in the room. I’m looking forward to taking that journey with you all. Thank you so much tonight; that was a wonderful speech. Okay, I’m going to stop the recording and say goodnight to everyone.

Dr Michelle Johnson 1:03:48
Thank you

Transcribed by https://otter.ai

 

]]>
Introducing Eliza Kosoy; E-liza Dolls https://robohub.org/introducing-eliza-kosoy-e-liza-dolls/ Thu, 18 Feb 2021 09:30:04 +0000 https://robohub.org/introducing-eliza-kosoy-e-liza-dolls/

Eliza Kosoy is a Ph.D. student at UC Berkeley. She studied mathematics in college and then worked for Prof. Joshua Tenenbaum at MIT in his computational cognitive science lab. In 2018 she started a Ph.D. at UC Berkeley, working with Professor Alison Gopnik. She is most proud of receiving funding and winning an innovation prize that catalyzed her business!  Her startup is called E-liza Dolls: 18’’ electronic “liza” dolls that introduce young girls to coding and hardware in a fun way!

She chose this topic because, as a woman in STEM, she couldn’t help but feel the gender and racial divide and discrepancies in the hard sciences. With her background in child development, it only made sense to expose children to these concepts early on, so they will be embedded into their hypothesis space as they develop. The hardest challenge for her is soldering errors, when tiny components fall off without notice.

E-liza Dolls Kickstarter will open very soon in March 2021… We’ll update this post the moment it goes live!

Roboticists in Residence is a Silicon Valley Robotics initiative that provides free studio space and support for creative artists and engineers making a difference, whether it’s modding a Tesla with all the conveniences of the Victorian era or adding to the ROS2 navigational stack. For more information and updates from our Roboticists in Residence

]]>
Field Robotics: A new, high-quality, online and open-access journal https://robohub.org/field-robotics-a-new-high-quality-online-and-open-access-journal/ Sun, 07 Feb 2021 09:00:20 +0000 https://robohub.org/field-robotics-a-new-high-quality-online-and-open-access-journal/ A robot in a field

Image credit: wata1219 on flickr (CC BY-NC-ND 2.0)

It has been almost half a year since the mass resignation of the editors and editorial board of the Journal of Field Robotics. In a new turn of events, Peter Corke has recently relaunched Field Robotics as an online open-access journal with the old editorial board. Field Robotics deals with the fundamentals of robotics in unstructured and dynamic environments. Papers are now being accepted at their website.

The story of the mass resignation was reported on Silicon Valley Robotics on 26 August, in their post Is it farewell to the Journal of Field Robotics?, which we will reproduce below.

Original post from Silicon Valley Robotics

2020 is proving to be a watershed year. First, COVID-19 forced the cancellation (e.g. ROSCon 2020, Hannover Messe) or the complete redesign of almost every major robotics conference. Now it seems that the science publishing community is also undergoing a sea change, as yesterday’s mass resignation of the editors and editorial board of the Journal of Field Robotics suggests. Stay posted for updates about the future of a field robotics research journal.

August 25, 2020

Dear Colleagues,

We would like to inform you about an upcoming major transition for the Journal of Field Robotics.

After 15 years of service, John Wiley and Sons, the publisher, has decided not to renew the contracts of the Editor in Chief (Sanjiv Singh) and the Managing Editor (Sanae Minick), and hence our term will expire at the end of 2020.

This comes after two years of discussions in which new Wiley representatives and the Editorial Board failed to converge on a common set of principles and procedures by which the journal should operate. The Editorial Board has unanimously decided to resign.

We do want to assure the authors who have papers under review that we see it as our responsibility to bring these documents to resolution during our term. We will continue to process new submissions until the end of the year. More about the future at the end of this note.

While the issue at the heart of our disagreement with Wiley is about academic independence, it should be noted that there is a structural issue here. Scholarly publishing is broadly in flux at the moment in the search for a sustainable model. Currently, academics are not paid to create and review articles. In fact, they often have to pay fees to publish, and readers have to pay to access the work through a pay-per-view system or through subscriptions.

Plan S, an international consortium, makes the dilemma clear:

“Monetising the access to new and existing research results is profoundly at odds with the ethos of science (Merton, 1973)… In the 21st century, science publishers should provide a service to help researchers disseminate their results. They may be paid fair value for the services they are providing, but no science should be locked behind paywalls!”

While this moment calls for creativity and collaboration with the scholarly community to find new models, Wiley is intent on making broad changes to the way that the Journal of Field Robotics is operated, guided mostly by an economic calculation to increase revenue and decrease costs. To do this, they have unilaterally decided to change the terms of the contract that had been constant since the JFR was started in 2005. Wiley confronted a similar case (the European Law Journal) with a similar effect: the entire editorial board resigned in January 2020.

Wiley insists that the new contract is covered under a confidentiality agreement that not even the Editorial Board can examine. What we can say is that the net effect of Wiley’s demands would make the Editors contractors to the publisher rather than having them respond to the board. We see this as a breach of academic autonomy.

In resigning, the Editorial Board of the Journal of Field Robotics reaffirms its commitment to dissemination and discussion of research. In the near future we will announce a new forum for research in Field Robotics that will maintain the academic integrity of our editorial process while also ensuring open dissemination of your research.

The Editorial Board of the Journal of Field Robotics

  • Simon Lacroix, LAAS
  • David Wettergreen, CMU
  • Cédric Pradalier, GeorgiaTech Lorraine
  • Tim Barfoot, University of Toronto
  • Roland Siegwart, ETH
  • Giuseppe Loianno, NYU
  • Henrik I Christensen, UC San Diego
  • Marco Hutter, ETH
  • Kazuya Yoshida, Tohoku University
  • Aarne Halme, Aalto University
  • Hanumant Singh, Northeastern University
  • Matthew Berkemeier, Continental
  • Anibal Ollero, University of Seville
  • Jonathan Roberts, QUT
  • Hajime Asama, University of Tokyo
  • Satoshi Tadokoro, Tohoku University
  • Raja Chatila, Sorbonne University
  • Peter Corke, QUT
  • Matt Spenko, Illinois Institute of Technology
  • Larry Matthies, NASA JPL
  • Salah Sukkarieh, University of Sydney
  • Stefan Williams, University of Sydney
  • Sanjiv Singh, CMU
]]>
DOE’s E-ROBOT Prize targets robots for construction and built environment https://robohub.org/does-e-robot-prize-targets-robots-for-construction-and-built-environment/ Sat, 23 Jan 2021 12:00:42 +0000 https://robohub.org/does-e-robot-prize-targets-robots-for-construction-and-built-environment/ Silicon Valley Robotics is pleased to announce that we are a Connector organization for the E-ROBOT Prize and other DOE competitions on the American-Made Network. There is $2 million USD available in up to ten prizes for Phase One of the E-ROBOT Prize, and $10 million USD available in Phase Two. Individuals or teams can sign up for the competition; the online platform offers opportunities to connect with potential team members, as do competition events organized by Connector organizations. Please cite Silicon Valley Robotics as your Connector organization when entering the competition.

Silicon Valley Robotics will be hosting E-ROBOT Prize information and connection events as part of our calendar of networking and Construction Robotics Network events. The first event will be on February 3rd at 7pm PST at our monthly robot show-and-tell event, “Bots&Beer”, and you can register here. We’ll be announcing more Construction Robotics Network events very soon.

E-ROBOT stands for Envelope Retrofit Opportunities for Building Optimization Technologies. Phase One of the E-ROBOT Prize looks for solutions in sensing, inspection, mapping or retrofitting for building envelopes, and the deadline is May 19, 2021. Phase Two will focus on holistic rather than individual solutions, i.e. bringing together the full stack of sensing, inspection, mapping and retrofitting.

The overarching goal of E-ROBOT is to catalyze the development of minimally invasive, low-cost, and holistic building envelope retrofit solutions that make retrofits easier, faster, safer, and more accessible for workers. Successful competitors will deliver significant advancements in robot technologies for the energy efficiency retrofit industry and develop building envelope retrofit technologies that meet the following criteria:

  • Holistic: The solution must include mapping, retrofit, sensing, and inspection.
  • Low cost: The solution should reduce costs significantly when compared to current state-of-the-art solutions. The target for reduction in costs should be based on a 50% reduction from the baseline costs of a fully implemented solution (not just hardware, software, or labor; the complete fully implemented solution must be considered). If costs are not at the 50% level, there should be a significant energy efficiency gain achieved.
  • Minimally invasive: The solution must not require building occupants to vacate the premises or require envelope teardown or significant envelope damage.
  • Utilizes long-lasting materials: Retrofit is done with safe, nonhazardous, and durable (30+ year lifespan) materials.
  • Completes time-efficient, high-quality installations: The results of the retrofit must meet common industry quality standards and be completed in a reasonable timeframe.
  • Provides opportunities to workers: The solution enables a net positive gain in terms of the workforce by bringing high tech jobs to the industry, improving worker safety, enabling workers to be more efficient with their time, improving envelope accessibility for workers, and/or opening up new business opportunities or markets.
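The low-cost criterion above amounts to a simple pass/fail arithmetic check on the fully implemented solution's cost. As an illustration only (the function and names below are mine, not the DOE's), it could be expressed as:

```python
def meets_cost_target(baseline_cost: float, solution_cost: float,
                      required_reduction: float = 0.50) -> bool:
    """True if the fully implemented solution cuts at least the
    required fraction (default 50%) from the baseline cost."""
    if baseline_cost <= 0:
        raise ValueError("baseline cost must be positive")
    reduction = (baseline_cost - solution_cost) / baseline_cost
    return reduction >= required_reduction

# Hypothetical figures, purely for illustration:
print(meets_cost_target(100_000, 45_000))  # True: a 55% reduction
print(meets_cost_target(100_000, 60_000))  # False: only a 40% reduction
```

Note that per the criteria, falling short of the 50% threshold is not automatically disqualifying if the solution delivers a significant energy-efficiency gain instead.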

The E-ROBOT Prize provides a total of $5 million in funding, including $4 million in cash prizes for competitors and an additional $1 million in awards and support to network partners.

Through this prize, the U.S. Department of Energy (DOE) will stimulate technological innovation, create new opportunities for the buildings and construction workforce, reduce building retrofit costs, create a safer and faster retrofit process, ensure consistent, high-quality installations, enhance construction retrofit productivity, and improve overall energy savings of the built environment.

The E-ROBOT Prize is made up of two phases that will fast-track efforts to identify, develop, and validate disruptive solutions to meet building industry needs. Each phase will include a contest period when participants will work to rapidly advance their solutions. DOE invites anyone, individually or as a team, to compete to transform a conceptual solution into product reality.

]]>
Who are the Visionary companies in robotics? See the 2020 SVR Industry Award winners https://robohub.org/who-are-the-visionary-companies-in-robotics-see-the-2020-svr-industry-award-winners/ Sat, 19 Dec 2020 23:23:55 +0000 https://robohub.org/who-are-the-visionary-companies-in-robotics-see-the-2020-svr-industry-award-winners/ These Visionary companies have a big idea and are well on their way to achieving it, although it isn’t always an easy road for any really innovative technology. In the case of Cruise, that meant testing self-driving vehicles on the streets of San Francisco, one of the hardest driving environments in the world. Some of our Visionary Awards go to companies who are opening up new market applications for robotics, such as Built Robotics in construction, Dishcraft in food services, Embark in self-driving trucks, Iron Ox in urban agriculture and Zipline in drone delivery. Some are building tools or platforms that the entire robotics industry can benefit from, such as Agility Robotics, Covariant, Formant, RobustAI and Zoox. The companies in our Good Robot Awards also show that ‘technologies built for us have to be built by us’.


Agility Robotics builds robots that go where people go, to do pragmatically useful work in human environments. Digit, Agility Robotics’ humanoid robot with both mobility and manipulation capabilities, is commercially available and has been shipping to customers since July 2020. Digit builds on two decades of research and development from the team on human-like dynamic mobility and manipulation, and can handle unstructured indoor and outdoor terrain. Digit is versatile and can do a range of different jobs that have been designed around a human form factor.

In October 2020, Agility Robotics closed a $20 million Series A round led by DCVC and Playground Global, bringing their total funds raised to $29 million. The investment enables the company to meet the demand from logistics providers, e-commerce retailers and others for robots that can work alongside humans to automate repetitive, physically demanding or dangerous work safely and scalably, even in the majority of spaces that are not purpose-built for automation.


Built Robotics transforms heavy equipment for the $1 trillion earthmoving industry into autonomous robots using its proprietary AI Guidance Systems. Built Robotics combines sensors such as GPS, cameras, and IMUs with advanced software, and the systems can be installed on standard equipment from any manufacturer. The technology allows equipment operators to oversee a fleet of vehicles working autonomously in parallel.

Built Robotics is backed by some of the top investors in Silicon Valley — including Founders Fund, NEA, and Next47 — and has raised over $48M to date. They have targeted markets in which they can have a big impact, such as earthmoving, clean energy, gas pipelines, trenching, and new housing developments. Built Robotics has partnered with one of the largest labor unions in North America, the IUOE, to help train and develop the next generation of equipment operator.

“At the end of the day, robots are just tools in the hands of skilled operators, and we believe that the best-trained workers equipped with our technology will fundamentally change the future of construction,” said Noah Ready-Campbell, CEO of Built Robotics. “Together we can build and maintain the critical infrastructure our country needs.”


Covariant is building the Covariant Brain, a universal AI to give robots the ability to see, reason and act on the world around them. Bringing practical AI Robotics into the physical world is hard. It involves giving robots a level of autonomy that requires breakthroughs in AI research. That’s why Covariant assembled a team that has published cutting-edge research papers at the top AI conferences and journals, with more than 50,000 collective citations. In addition to their research, they’ve also brought together a world-class engineering team to create new types of highly robust, reliable and performant cyber-physical systems.

Instead of learning to master specific tasks separately, Covariant robots learn general abilities such as robust 3D perception, physical affordances of objects, few-shot learning and real-time motion planning. This allows them to adapt to new tasks just like people do — by breaking down complex tasks into simple steps and applying general skills to complete them. In 2020, Covariant raised a $40 million Series B round from investors such as Index Ventures, Lux Capital and Baidu Ventures, bringing their total funding to $67 million. They’ve also developed partnerships with logistics and robotics companies such as Knapp Ag. and ABB, showcasing successful order pick rates at faster than human speeds.


Self-driving technology, the integration of robotics, AI and simulation, is the hardest engineering challenge of our generation. So it’s only fitting that Cruise autonomous vehicles are on the road in San Francisco navigating some of the most challenging and unpredictable driving environments, because the best way to bring self-driving technology to the world is to expose it to the same unique and complex traffic scenarios human drivers face every day.

Cruise became the industry’s first unicorn when GM acquired the company in 2016. Cruise is building the world’s most advanced all-electric, self-driving vehicles to safely connect people with the places, things, and experiences they care about. In the first three months of the COVID-19 pandemic, Cruise made more than 125,000 contactless deliveries of groceries and meals to San Francisco’s most vulnerable underserved populations. And as of December 4, Cruise has started driverless testing in San Francisco.


Dishcraft’s mission is to create happy, productive, sustainable workplaces by making automation accessible to food service operations. Dishcraft Daily® delivers clean dishes every day as a full-service ‘dishwashing as a service’ to dining operations in business, education, and healthcare, providing measurable environmental benefits compared to using disposable wares.

Dishcraft provides environmental and financial efficiencies for both dine-in and to-go businesses once you calculate the hidden costs of normal restaurant or food service operation. Dishcraft has raised over $25 million from investors including Baseline Ventures, First Round Capital, and Lemnos. The company’s dishwashing as a service is now being used by dozens of companies, including hospitals, around the Bay Area. Since the advent of COVID-19, there’s been an increased demand for food safe and sterile processes in the food service industry.


Embark’s technology is already moving freight for five Fortune 500 companies in the southwest U.S. By moving real freight through its purpose-built transfer hubs, Embark is setting a new standard for how driverless trucks will move freight in the future. Embark has compiled many firsts for automated trucks, including driving across the country, operating in rain and fog, and navigating between transfer hubs. Embark is advancing the state of the art in automated trucks and bringing safe, efficient commercial transport closer every day.

Started as a University of Waterloo startup and then going through Y Combinator, Embark has raised more than $117 million from top investors like DCVC and Sequoia Capital. Embark is assembling a world-class group of engineers from companies like Tesla, Google, Audi and NASA, alongside a professional operations team that averages over a million miles per driver, with the goal of developing a system tailored to the demands of real-world trucking.


Autonomous robots are awesome, but if you want to run a business with them, you’ll need a robust operations platform that connects people, processes, sensors and robots, and provides fleet-wide management, control, and analytics at scale. That is where Formant comes in.

Formant bridges the gap between autonomous systems and the people running them. Its robot data and operations platform provides organizations with a command center that can be used to operate, observe, and analyze the performance of a growing fleet of heterogeneous robots, empowering customers to deploy faster, scale while reducing overhead, and maximize the value of autonomous robots and the data they collect.

So far in 2020, Formant’s robot data and operations platform is supporting dozens of different customers with a multitude of robot types and is deployed on thousands of autonomous devices worldwide. Formant’s customers span robot manufacturers, robot-as-a-service providers, and enterprises with robotic installations and represent a variety of industries, from energy to agriculture to warehouse automation.


Iron Ox operates autonomous robotic greenhouses that grow fresh, pesticide-free farm products accessible everywhere. It leverages plant science, machine learning, and robotics to increase the availability, quality, and flavor of leafy greens and culinary herbs, enabling consumers to access naturally grown and chemical-free farm products.

Iron Ox is reimagining the modern farm, utilizing robotics and AI to grow fresh, consistent, and responsibly farmed produce for everyone. From the development of multiple robot platforms to their own custom hydroponic, seeding, and harvesting systems, Iron Ox is taking a system-level approach to creating the ideal farm. The company’s experienced team of growers, plant scientists, software engineers, and hardware engineers are passionate about bringing forward this new wave of technology to grow local, affordable fresh produce.


Robust.AI is building the world’s first industrial-grade cognitive engine, with a stellar team that has attracted $22.5 million in seed and Series A funding from Jazz Ventures, Playground Global, Fontinalis, Liquid 2, Mark Leslie and Jaan Tallinn. Robust’s stated mission is to overhaul the software stack that runs many existing robots, in order to make them function better in complex environments and be safer to operate around humans.

The all-star team of founders are Gary Marcus and Rodney Brooks, both pioneers in AI and robotics, Mohamed Amer from SRI International, Anthony Jules from Formant and Redwood Robotics, and Henrik Christensen, author of the US National Robotics Roadmaps.

“Finding market fit is as important in robots and AI systems as any other product,” Brooks said in a statement. “We are building something we believe most robotics companies will find irresistible, taking solutions from single-purpose tools that today function in defined environments, to highly useful systems that can work within our world and all its intricacies.”


Zipline is a California-based automated logistics company that designs, manufactures, and operates drones to deliver vital medical products. Zipline’s mission is to provide every human on Earth with instant access to vital medical supplies. In 2014, Zipline started flying medical supplies in Africa, and has gone on to fly more than 39,000 deliveries worldwide and raise over $233 million in funding.

Zipline has built the world’s fastest and most reliable delivery drone, the world’s largest autonomous logistics network, and a truly amazing team. Zipline designs and tests its technology in Half Moon Bay, California. The company assembles the drones and the technology that powers its distribution centers in South San Francisco. Zipline performs extensive flight testing in Davis, California, and operates distribution centers around the planet with teams of local operators.


Zoox is working on the full stack for Robo-taxis, providing mobility-as-a-service. Operating at the intersection of design, computer science, and electro-mechanical engineering, Zoox is a multidisciplinary team working to imagine and build an advanced mobility experience that will support the future needs of urban mobility for both people and the environment.

In December 2018, Zoox became the first company to gain approval to provide self-driving transport services to the public in California. In January 2019, Zoox appointed a new CEO, Aicha Evans, who was previously the Chief Strategy Officer at Intel and became the first African-American CEO of a $1B company. Zoox raised a total of $1B in funding over six rounds, and on June 26, 2020, Amazon and Zoox signed a “definitive merger agreement” under which Amazon will acquire Zoox for over $1.2 billion. Zoox’s ground-up technology, which includes developing zero-emission vehicles built specifically for autonomous use, could be used to augment Amazon’s logistics operations.


You can see the full list of our Good Robot Awards in Innovation, Vision, Commercialization and our Community Champions here at https://svrobo.org/awards and we’ll be sharing articles about each category of award winners throughout the week.

]]>
What does Innovation look like in robotics? See the SVR 2020 Industry Award winners https://robohub.org/what-does-innovation-look-like-in-robotics-see-the-svr-2020-industry-award-winners/ Fri, 18 Dec 2020 20:18:21 +0000 https://robohub.org/what-does-innovation-look-like-in-robotics-see-the-svr-2020-industry-award-winners/

Self-driving vehicles would not be possible without sensors, so it’s not surprising to see two small new sensors in the 2020 Silicon Valley Robotics ‘Good Robot’ Innovation Awards: the Velabit from Velodyne and the nanoScan3 from SICK. We showcase three other innovations in component technology, the FHA-C with Integrated Servo Drive from Harmonic Drive, the radically new Inception Drive from SRI International, and Qualcomm’s RB5 processor, all ideal for building robots.

Our other Innovation Awards go to companies with groundbreaking new robots; from the tensegrity structure of Squishy Robotics, which will help in both space exploration and disaster response on Earth, to the Dusty Robotics full-scale FieldPrinter for the construction industry, and Titan from FarmWise for agriculture, which was also named one of Time’s Best Inventions for 2020. Finally, we’re delighted to see innovation in robotics that is affordable and collaborative enough for home robot applications, with Stretch from Hello Robot and Eve from Halodi Robotics.

The Velabit, a game-changing lidar sensor, leverages Velodyne’s innovative lidar technology and manufacturing partnerships for cost optimization and high-volume production, making high-quality 3D lidar sensors readily accessible to everyone. The Velabit is smaller than a deck of playing cards, and it shatters the price barrier at $100 per sensor. The compact, mid-range Velabit is highly configurable for specialized use cases and can be embedded almost anywhere. Gatik and May Mobility are just two of the pioneers in autonomous vehicle technology using Velodyne lidar.

The nanoScan3 from SICK is the world’s smallest safety laser scanner and is based on their latest patented Time-Of-Flight technology. Not only does it provide the most robust protection for stationary and mobile robots, but being a LiDAR, it simultaneously supports navigation and other measurement-based applications.

Founded in 1946, SICK sensors help robots make more intelligent decisions and give them the ability to sense objects, the environment, or their own position. SICK, and their west coast distributor EandM, offer solutions for all challenges in the field of robotics: Robot Vision, Safe Robotics, End-of-Arm Tooling, and Position Feedback.
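For readers unfamiliar with the time-of-flight principle behind sensors like the nanoScan3, the underlying physics is simple: a light pulse travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that generic principle (not SICK's implementation or API):

```python
# Speed of light in a vacuum, m/s.
C = 299_792_458.0

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured round-trip pulse time (seconds) to distance (metres).
    distance = (speed of light * round-trip time) / 2, since the pulse
    covers the sensor-to-target path twice."""
    return C * round_trip_s / 2.0

# A target 10 m away returns the pulse after roughly 66.7 nanoseconds.
t = 2 * 10.0 / C
print(tof_distance_m(t))  # 10.0
```

The tiny times involved (tens of nanoseconds per metre) are why practical time-of-flight scanners need high-speed timing electronics rather than clever software.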

The FHA-C Mini Series from Harmonic Drive is a family of extremely compact actuators that deliver high torque with exceptional accuracy and repeatability. The revolutionary FHA-C with Integrated Servo Drive eliminates the need for an external drive and greatly improves wiring while retaining high-positional accuracy and torsional stiffness in a compact housing. This new mini actuator product is ideal for use in robotics.

The Qualcomm Robotics RB5 Platform supports the development of the next generation of high-compute, AI-enabled, low power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors that can be connected by 5G. The QRB5165 processor, customized for robotics applications, offers a powerful heterogeneous computing architecture coupled with the leading 5th generation Qualcomm® Artificial Intelligence (AI) Engine delivering 15 Trillion Operations Per Second (TOPS) of AI performance. It’s designed to achieve peak performance while being able to also support small battery-operated robots with challenging power and thermal dissipation requirements. The platform offers support for Linux, Ubuntu and Robot Operating System (ROS) 2, as well as pre-integrated drivers for various cameras, sensors and connectivity.
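The quoted 15 TOPS figure is a peak throughput, so it only gives a rough upper bound on inference rate. A back-of-envelope sketch (the model size and utilization figures below are hypothetical, chosen purely for illustration):

```python
# Peak AI Engine throughput: 15 TOPS = 15e12 operations per second.
PEAK_OPS_PER_S = 15e12

def max_inferences_per_s(ops_per_inference: float,
                         utilization: float = 1.0) -> float:
    """Upper bound on inferences/second for a model needing
    `ops_per_inference` operations, at a given fraction of peak."""
    return PEAK_OPS_PER_S * utilization / ops_per_inference

# e.g. a hypothetical 5-GOP vision model at an optimistic 50% utilization:
print(max_inferences_per_s(5e9, utilization=0.5))  # 1500.0
```

Real throughput depends on memory bandwidth, precision, and the power/thermal limits the platform is designed around, so achieved rates will be well below this bound.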

The latest breakthrough from SRI Robotics is a novel ultra-compact, infinitely variable transmission that is an order of magnitude smaller and lighter than existing technologies. The Inception Drive is a new transmission that can reverse the direction of the output relative to input without clutches or extra stages, dramatically increasing total system efficiency in applications including robotics, transportation, and heavy industry.

Squishy Robotics’ rapidly deployable, air-droppable, mobile sensor robots provide lifesaving, cost-saving information in real time, enabling faster, better-informed data-driven decisions. The company’s robots provide first responders with location and chemical sensor data as well as the visual information needed to safely plan a mitigation response, all from a safe distance away from the “hot zones.” The scalable and reconfigurable robots can carry customized, third-party equipment (e.g., COTS sensors, emergency medical aid supplies, or specialized radio components) in a variety of deployment scenarios.

The company’s first target market is the HazMat and CBRNE (chemical, biological, radiological, nuclear, and explosive) response market, enabling lifesaving maneuvers and securing the safety of first responders by providing situational awareness and sensor data in uncharted terrains. The robots can be quickly deployed by ground or be dropped from drones or other aerial vehicles and then be used in a variety of ways, including remote monitoring, disaster response, and rescue assistance. A spin-off of prior work with NASA on robots for space exploration, the company’s Stationary Robot has been successfully dropped from airplanes from heights of up to 1,000 ft; the company’s Mobile Robot can traverse rugged and uneven territory.

Dusty Robotics develops innovative robotics technology that powers high-accuracy mobile printers for the construction industry. Dusty’s novel robotics algorithms enable the system to print construction layouts on concrete decks with 1-millimeter precision, which is a breakthrough in the industry.

Construction industry veterans who are normally skeptical about new innovations have all embraced Dusty’s FieldPrinter as the solution to critical problems in the industry. Layout today involves a number of manual steps, each of which has the potential to introduce errors into the process. Errors increase building cost and delay time to completion. Dusty’s robotic layout printer automates the BIM->field workflow and is poised to be the first widely adopted robotic technology in the field across the construction industry.

For vegetable growers who face increased growing costs and new environmental and regulatory pressures, the FarmWise suite of data-driven services harnesses plant-level data to drive precise field actions in order to streamline farm operations and increase food production efficiency.

Titan FT-35, the automated weeding robot from FarmWise Labs, has just been named one of Time Magazine’s ‘Best Inventions of 2020’. Titan consists of a driverless tractor and a smart implement that uses deep learning to distinguish crops from weeds and mechanically removes the weeds from farmers’ fields. Thanks to the trust and collaborative effort of visionary growers, the FarmWise idea of a machine that could kill weeds without using chemicals went from proof-of-concept to commercialized product.

Hello Robot has reinvented the mobile manipulator. In July 2020 they launched Stretch, the first capable, portable, and affordable mobile manipulator designed specifically to assist people in the home and workplace. At a fraction of the cost, size, and weight of previous capable mobile manipulators, Stretch’s novel design is a game changer.

Stretch has a low-mass, contact-sensitive body with a compact footprint and a slender telescopic manipulator, so that it weighs only 51 lb. Stretch is ready for autonomous operation as well as teleoperation, with Python interfaces, ROS integration and open-source code. In the future, mobile manipulators will enhance the lives of older adults, people with disabilities, and caregivers. Hello Robot is working to build a bridge to this future.

Halodi Robotics has developed the EVE humanoid robot platform using its patented REVO1 actuators to enable truly capable and safe humanoid robots. The robots have been commercialized, and the first commercial customer pilots are being planned for next year in security, health care and retail.

By developing a new actuator and a differential rope-based transmission system, the company has overcome many of the obstacles preventing the development of capable and safe robots. Impact energies of less than a thousandth of those of comparable systems mean that the system can be inherently safe around humans and in human environments.

Halodi Robotics is using the EVE platform to pilot humanoid robotics into new areas while their next-generation robot, Sarah, is being developed for a 2022 launch.

You can see the full list of our Good Robot Awards in Innovation, Vision, Commercialization and our Community Champions here at https://svrobo.org/awards and we’ll be sharing articles about each category of award winners throughout the week.

]]>
Let’s talk about the future of Air cargo. https://robohub.org/lets-talk-about-the-future-of-air-cargo/ Fri, 18 Dec 2020 08:01:25 +0000 https://robohub.org/lets-talk-about-the-future-of-air-cargo/ You invest in the future you want to live in. I want to invest my time in the future of rapid logistics.

Three years ago I set out on a journey to build a future where one-day delivery is available anywhere in the world by commercializing high-precision, low-cost automated airdrops. In the beginning, the vision seemed almost so grand and unachievable as to be silly. A year ago we began assembling a top-notch team of engineers, aviators and business leaders to help solve this problem. After a lot of blood, sweat and tears, we arrive at the present day with the announcement of our $8M seed round, backed by some amazing capital partners and a growing coalition excited and engaged to accelerate DASH to the next chapter. With this occasion, we have been reflecting a lot on the journey and the “why” that inspired this endeavor all those years ago.


Why Does This Problem Exist?

To those of us fortunate enough to live in large, well-infrastructured metropolitan cities, deliveries and logistics aren’t issues we often consider. We expect our Amazon Prime, UPS, and FedEx packages to arrive the next day or within the standard 3-5 business days. If you live anywhere else, these networks grind to a halt. For all its scale, Amazon Prime offers free 2-day shipping to less than 60 percent of zip codes in the US. The rural access index shows that over 3 billion people live in rural settings, and over 700 million people don’t live within easy reach of all-weather roads at all. Ask manufacturers in need of critical spare parts in Montana, earthquake rescue personnel in Nepal, grocery store owners in mountainous Colombia, or anyone on the thousands of inhabited islands of the Philippines if rapid logistics feels solved or affordable. The short answer – it’s not.

Before that package is delivered to your door it requires a middle mile solution to move from region to region. There is only one technology that can cross oceans, mountains, and continents in a single day, and that is air cargo.

Air cargo accounts for less than one percent of all pounds delivered, but over 33 percent of all shipping revenue globally. We collectively believe in air cargo and rely on it for our most critical and immediate deliveries, including a growing share of e-commerce and just-in-time deliveries. If you want something fast, it’s coming by airplane. There is no substitute.

However, the efficiency and applications of air cargo break down when the plane has to land. While a 737 can fly over 600 mph and thousands of miles, it requires hundreds of millions of dollars in infrastructure, airports, and ground trucking to get cargo from the airport to your local warehouse, making it very costly for commercial deliveries. That ground infrastructure has to exist on every island in the Philippines, every mountain town in Colombia and every town in Nepal. It has to reach both sides of every mountain or island, anywhere you want things fast. Even when you can land at a modern airport, takeoff and fuel burn during climb can account for upwards of 30 percent of an entire flight’s fuel use, and landing and takeoff cycles drive insurance and maintenance costs. This problem is so intrinsic to air cargo and logistics that it almost seems natural. Of course flyover states and rural areas don’t get cheap, fast, and convenient deliveries. Are you going to land that 737 at 20 towns on the way from LA to New York City? We fly over potential customers on our way to big urban cities with modern infrastructure, even though only a minority of the world’s population lives there. Something has to change.

Our solution

The solution is simple in thought, yet it has been one of the most complex tasks I’ve had the honor of working on in my engineering career: land the package, not the plane. By commercializing high-precision, low-cost airdrops you can decouple airplanes from landings, runways and trucks. Suddenly a delivery to rural Kansas is just as fast and cost-effective as one to a major coastal city. Fuel, insurance, utilization rate, service improvements, coverage area, and so many more metrics improve overnight in significant ways if an existing aircraft can deliver directly to the last-mile sorting facility and bypass much of the complexity, cost and infrastructure needed for traditional hub-and-spoke networks.

DASH Systems performing air drop tests in Southern California (image from DASH Systems)

Perhaps the most common question I received when I started DASH was: why hasn’t [insert your preferred enterprise organization here] done this before? Without taking a detour into why large enterprises historically struggle with innovation, the simple answer is: because now is the time. Advancements in IoT and low size, weight and power flight controllers, coupled with a career implementing automation in safety-critical environments, meant that the necessary ingredients were ready. Tremendous credit is due to some of the most brilliant engineers, scientists and developers I’ve had the pleasure of working with, who took to task carving raw ideas and rough prototypes into aerospace-grade commercial products, all with the bravery to work outside the confines of existing aerospace textbooks.

Beyond the intricacies of technology was a personal impetus. My father’s family has origins in Barbados; during hurricane season we would make the call, once the phone lines were restored, to ask “is everything okay?” It often felt like a roll of the dice whether they would be spared that year, in a sick game of roulette that someone else would lose. With islands, by definition, nearly all help and aid has to come from abroad. But how can supplies be distributed when ports are destroyed, runways damaged and roads washed out? To me, it is a moral imperative to help, but also to build self-sustaining commercial solutions that can scale to help more people in the future.

This thought process was put to the test in 2017, just weeks after I started seriously contemplating the ideas that became DASH, when Hurricane Maria hit Puerto Rico. I awoke, like millions of others, to witness one of the worst hurricanes to make landfall in 100 years. That day we started making calls; 10 days later we were flying inland in a rented Cessna 208, delivering thousands of pounds of humanitarian supplies via airdrop to cut-off communities. The takeaway: if this could be done safely and legally on an idle FedEx feeder aircraft, and if those on the ground were willing and ready for rapid logistics at the same price they would have paid anyway, why did it have to wait for a natural disaster to strike? DASH exists because there is no technology, process, or company that can honestly claim delivery to anywhere, or even most places, in under 2 days. Those of us in large cities have come to enjoy and expect it, yet in the same breath we cut the conversation short for those geographically situated elsewhere. Our solution exists, and with the hard work of an amazingly talented team and excellent partners it will continue to scale and grow until one day that claim can be made.

Our Future

The story of DASH is far from over. Our vision is rapid logistics anywhere, and there is a flight path ahead of us to get there. Today, DASH is advancing the state of the art in precision airdrop technology; tomorrow we are looking to deliver into your community, wherever it is and whatever the circumstances. The entire globe deserves the same level of service and convenience. The list of people who have helped DASH get to where we are today is too long to thank everyone, and it grows longer every day. Instead I can offer this: look to the skies, and you may see your next delivery safely and precisely coming down to a location near you.

 

Joel Ifill is the founder and CEO of DASH Systems. He can be found at www.dashshipping.com and reached at inquiries@dashshipping.com. DASH is always on the hunt for talented roboticists, engineers and developers who enjoy aviation; inquire at HR@DASHshipping.com.

]]>
Meet the robotics community champions in the SVR Good Robot Industry Awards https://robohub.org/meet-the-robotics-community-champions/ Thu, 17 Dec 2020 07:25:58 +0000 https://robohub.org/meet-the-robotics-community-champions/

If robotics is the technology of the 21st century, rather than biotech, then we have some serious work to do. This week marks the ‘beginning of the end’ of the coronavirus pandemic as a vaccine is deployed in the US. The Wall Street Journal recently profiled the incredible effort of Pfizer and BioNTech, who pioneered a novel messenger RNA (mRNA) approach and got it into production in a tenth to a quarter of the normal vaccine development time. It undoubtedly takes a team, but the WSJ article “How Pfizer delivered a COVID vaccine in record time” highlights the efforts of two men, CEO Albert Bourla and manufacturing chief Mike McDermott, and one woman, head of Pfizer’s vaccine research Dr Kathrin Jansen, in this achievement. And the WSJ feature makes more fuss about CEO Albert Bourla’s Greek heritage than about Dr Kathrin Jansen’s gender. Now the WSJ isn’t exactly a left-wing propaganda machine, so this reflects the strides that the biological sciences have made in diversity in the last fifty years. Given the growing shortage of professionals in computer science, robotics and AI occupations, including basic manufacturing, and given the basic human right of empowering everyone with access to equal opportunities, it is clear that systemic inequity is still at work in some new technologies like robotics. Biotech shows that hardware and innovation are doable. Sadly, Silicon Valley shows that equity is harder than hardware.

Silicon Valley Robotics announces its inaugural ‘Good Robot’ Industry Awards this week, celebrating Innovation in products, Vision in action, and the Commercialization of new technologies that offer us the chance to address global challenges. The computer age did not usher in the increases in productivity that were anticipated. Unlike the advent of tractors or electricity, productivity gains from technology have largely stagnated since shortly after the Second World War. This is in spite of large amounts of public research and development funding for advanced computing technologies like robotics and AI. In the meantime, the negative impacts of many technology advances (like plastics) continue to wage war on the planet. But there is no point in promoting a Luddite vision of a technology-free era (of high infant mortality, short average lifespans and child labor); instead we can use technology wisely to address areas where the gain to society will be great.

We want to also recognize the work of robotics community champions who do all sorts of (often unsung) work that advances the science and technology of robotics, from research, to production and employment. These awardees serve as great examples of how providing support for robotics supports all of us.

Community Champion Award:

Companies:

NASA Intelligent Robotics Group
Open Robotics
PickNik Robotics
Robohub
SICK
Willow Garage (best to see the Red Hat series How to Start a Robot Revolution)

Individuals:

Alex Padilla
Ayanna Howard
Evan Ackerman
Frank Tobe
Henrik Christensen
Joy Buolamwini
Katherine Scott
Khari Johnson
Louise Poubel
Mark Martin
Rodney Brooks
Rumman Chowdhury
Timnit Gebru

Silicon Valley Robotics appreciates the contributions made by all of our inaugural Community Champions! And we look forward to next year, because there are many of you out there who are making not just good robots, but a better robotics industry. One of the things I love most about robotics is being around so many people who are passionate about using technology to improve the world. It can be frustrating that the world is sometimes so resistant to change, but in ten short years the robotics industry has gone from insignificant (in Silicon Valley terms) to unicorns. We all have an opportunity to be part of changing the world for the better, like our Community Champions.

Make no mistake, this is not an issue for women or black and brown people to solve. Without an accurate reflection of the society in which our technologies will be used, we will not produce the best technologies, and we will not be attractive or competitive as an industry that is fighting for the best talent. Diversity and equity in robotics should worry everybody in robotics. We need robots to solve our greatest global challenges. And we need global talent to do this.

I dream of seeing a Silicon Valley Robotics industry cluster and robotics education program in every country, hand in hand with non-profit programs like Women in Robotics and Black in Robotics to support workers entering what is still not an equitable work environment.

*The Silicon Valley Robotics Board is incredibly supportive; however, this commentary is entirely my own opinion piece. I put my Ruth Bader Ginsburg socks on today. I highly recommend them.

About Silicon Valley Robotics

Silicon Valley Robotics (SVR) supports the innovation and commercialization of robotics technologies, as a non-profit industry association. Our first strategic plan focused on connecting startups with investment, and since our founding in 2010, our membership has grown tenfold, reflecting our success in increasing investment into robotics. We believe that with robotics, we can improve productivity, meet labor shortages, get rid of jobs that treat humans like robots and finally create precision, personalized food, mobility, housing and health technologies. For more information, please visit https://svrobo.org

SOURCE: Silicon Valley Robotics (SVR)

CONTACT: Andra Keay andra@svrobo.org

]]>
Robohub wins Champion Award in SVR ‘Good Robot’ Industry Awards https://robohub.org/robohub-wins-champion-award-in-svr-good-robot-industry-awards/ Wed, 16 Dec 2020 17:00:48 +0000 https://robohub.org/robohub-wins-champion-award-in-svr-good-robot-industry-awards/

President: Sabine Hauert

Founded: 2012

HQ:  Switzerland

Robohub is an online platform and non-profit that brings together leading communicators in robotics research, start-ups, business, and education from around the world, focused on connecting the robotics community to the public. It can be difficult for the public to find free, high-quality information about robotics. At Robohub, we enable roboticists to share their stories in their own words by providing them with a social media platform and editorial guidance. This means that our readers get to learn about the latest research and business news, events and opinions, directly from the experts.

Since 2012, Robohub and its international community of volunteers have published over 300 Robohub Podcast episodes, 7,000 blog posts, videos and more, reaching 1M pageviews every year and more than 30k followers on social media. You can follow Robohub on Twitter at @robohub.

]]>
Should robots be gendered? comments on Alan Winfield’s opinion piece https://robohub.org/should-robots-be-gendered-comments-on-alan-winfields-opinion-piece/ Thu, 10 Dec 2020 09:29:06 +0000 https://robohub.org/should-robots-be-gendered-comments-on-alan-winfields-opinion-piece/

The gendering of robots is something I’ve found fascinating since I first started building robots out of legos with my brother. We all ascribe character to robots, consciously or not, even when we understand exactly how robots work. Until recently we’ve been able to write this off as science fiction stuff, because real robots were boring industrial arms and anything else was fictional. However, since 2010, robots have been rolling out into the real world in a whole range of shapes, characters and notably, stereotypes. My original research on the naming of robots gave some indications as to just how insidious this human tendency to anthropomorphize and gender robots really is. Now we’re starting to face the consequences and it matters.

Firstly, let’s consider that many languages have gendered nouns, so there is a preliminary linguistic layer of labelling ahead of the naming of robots, which, if not defined, tends to happen informally. The founders of two different robot companies have told me that they know their robot has been accepted in a workplace when it’s been named by teammates, so they deliberately leave the robot unnamed. Other companies focus on a more nuanced brand name, such as Pepper or Relay, which can minimize gender stereotypes, but even then the effects persist.

With robots, the physical appearance can’t be ignored, and it often aligns with ideas of gender. Next, there is the robot’s voice. Then there are other layers of operation which can affect both a robot’s learning and its responses. And finally, there is the robot’s task or occupation and its socio-cultural context.

Names are both informative and performative. We can usually ascribe a gender to a named object. Similarly, we can ascribe gender based on a robot’s appearance or voice, although it can differ in socio-cultural contexts.

Pepper robot
Astro Boy comic

Astro Boy original comic and Pepper from SoftBank Robotics

The robot Pepper was designed to be a childlike humanoid and, according to SoftBank Robotics, Pepper is gender neutral. But in general, I’ve found that people in the US tend to see Pepper as a female helper, while people in Asia are more likely to see Pepper as a boy robot helper. This probably has something to do with the popularity of Astro Boy (Mighty Atom) from 1952 to 1968.

One of the significant issues with gendering robots is that once embodied, individuals are unlikely to have the power to change the robot that they interact with. Even if they rename it, recostume it and change the voice, the residual gender markers will be pervasive and ‘neutral’ will still elicit a gender response in everybody.

This will have an impact on how we treat and trust robots. This also has much deeper social implications for all of us, not just those who interact with robots, as robots are recreating all of our existing gender biases. And once the literal die is cast and robots are rolling out of a factory, it will be very hard to subsequently change the robot body.

Interestingly, I’m noticing a transition from a default male style of robot (think of all the small humanoid fighting, dancing and soccer-playing robots) to a default female style as the service robotics industry starts to grow. Even when the robot is simply a box shape on wheels, the use of voice can completely change our perception. Savioke, maker of the pioneering service robot Relay, deliberately preselected a neutral name and avoided using a human voice completely. Relay makes sounds but doesn’t use words; just like R2D2, Relay expresses character through beeps and boops. This was a conscious, and significant, design choice for Savioke. Their preliminary experimentation on human-robot interaction showed that robots that spoke were expected to answer questions and perform tasks at a higher level of competency than robots that beeped.

Relay from Savioke delivering at Aloft Hotel

Not only did Savioke remove the cognitive dissonance of having a robot seem more human than it really is, but they also removed some of the reiterative stereotyping that is starting to occur in less thoughtful robot deployments. Best practice for designing robots for real-world interaction is to minimize human expressivity and remove any gender markers (more on that next).

The concept of ‘marked’ and ‘unmarked’ arose in linguistics in the 1930s, but we’ve seen it play out in Natural Language Processing, search and deep learning repeatedly since then, perpetuating, reiterating and exaggerating the use of masculine terminology as the default, and feminine terminology used only in explicit (or marked) circumstances. Marked circumstances almost always relate to sexual characteristics or inferiority within power dynamics, rather than anything more interesting.

An example of unmarked or default terminology is the use of ‘man’ to describe people, but ‘woman’ to describe only a subset of ‘man’. This is also commonly seen in the use of a female specifier on a profession, i.e. ‘female police officer’, ‘female president’, or ‘female doctor’. Otherwise, in spite of there being many female doctors, a search will return male examples, call female doctors ‘he’, or miscategorize them as nurses. We are all familiar with these mistakes in real life, but we had developed social policies to reduce their frequency. Now AI and robotics are bringing the stereotypes back.

Ratio of masculine to feminine pronouns in U.S. books, 1900-2008

And so it happens that the ‘neutral’ physical appearance of robots is usually assumed to be male, rather than female, unless the robot has explicit female features. Sadly, female robots mean either a sexualized robot, or a robot performing a stereotypically female role. This is how people actually see and receive robots unless a company, like Savioke, consciously refrains from triggering our stereotypically gendered responses.

Gendered robots

I can vouch for the fact that searching for images using the term “female roboticists”, for example, always presents me with lots of men building female robots instead. It will take a concerted effort to change things. Robot builders have the tendency to give our robots character. And unless you happen to be a very good (and rich) robotics company, there is also no financial incentive to degender robots. Quite the opposite. There is financial pressure to take advantage of our inherent anthropomorphism and gender stereotypes.

In The Media Equation in 1996, Byron Reeves and Clifford Nass demonstrated that we all attribute character, including gender, to our computing machines, and that this then affects our thoughts and actions, even though most people consciously deny conflating a computer with a personality. This unconscious anthropomorphizing can be used to make us respond differently, so of course robot builders will increasingly utilize the effect as more robots enter society and competition increases.

Can human beings relate to computer or television programs in the same way they relate to other human beings? Based on numerous psychological studies, this book concludes that people not only can but do treat computers, televisions, and new media as real people and places. Studies demonstrate that people are “polite” to computers; that they treat computers with female voices differently than “male” ones; that large faces on a screen can invade our personal space; and that on-screen and real-life motion can provoke the same physical responses.

The Media Equation

The history of voice assistants shows a sad trend. These days, they are all female, with the exception of IBM Watson, but then Watson occupies a different ecosystem niche. Watson is an expert. Watson is the doctor to the rest of our subservient, map-reading, shopping-list-keeping helper nurses. By default, unless you’re in Arabia, your voice assistant device will have a female voice. You have to go through quite a few steps to consciously change it, and there are very few options. In 2019, Q, a genderless voice assistant, was introduced; however, I can’t find it offered on any devices yet.

And while it may be possible to upload a different voice to a robot, there’s nothing we can do if the physical design of the robot evokes gender. Alan Winfield wrote a very good article “Should robots be gendered?” here on Robohub in 2016, in which he outlines three reasons that gendered robots are a bad idea, all stemming from the 4th of the EPSRC Principles of Robotics, that robots should be transparent in action, rather than capitalizing on the illusion of character, so as not to influence vulnerable people.

Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.

EPSRC Principles of Robotics

My biggest quibble with the EPSRC Principles is that they underestimate the size of the problem. By stating that vulnerable users are the young or the elderly, the principles imply that the rest of us are immune from emotional reactions to robots, whereas Reeves and Nass clearly show the opposite. We are all easily manipulated by our digital voice and robot assistants. And while Winfield recognizes that gender cues are powerful enough to elicit a response in everybody, he only sees the explicit gender markers, rather than understanding that unmarked or neutral-seeming robots also elicit a gendered response, as ‘not female’.

So Winfield’s first concern is emotional manipulation of vulnerable users (all of us!), his second is anthropomorphism inducing cognitive dissonance (over-promising and under-delivering), and his final concern is that all the negative stereotypes contributing to sexism will be reproduced and reiterated as normal through the introduction of gendered robots in stereotyped roles (it’s happening!). These are all valid concerns, and yet while we’re just waking up to the problem, the service robot industry is growing by more than 30% per annum.

While the growth of the industrial robotics segment is comparatively predictable, the world’s most trusted robotics statistics body, the International Federation of Robotics (IFR), has consistently underestimated the growth of the service robotics industry. In 2016, the IFR predicted 10% annual growth for professional service robotics over the next few years from \$4.6 billion, but by 2018 it was recording 61% growth to \$12.6B, and by 2020 the IFR had recorded 85% overall growth, expecting revenue from service robotics to hit \$37B by 2021.
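To see the scale of that underestimate, here is a back-of-the-envelope comparison using only the figures cited above (the 10% annual prediction from a \$4.6B base versus the \$12.6B recorded two years later); the function name and the two-year window are illustrative assumptions, not IFR methodology:

```python
def project(base_revenue_bn: float, annual_growth: float, years: int) -> float:
    """Compound a starting revenue figure forward at a constant annual growth rate."""
    return base_revenue_bn * (1 + annual_growth) ** years

# IFR's 2016 prediction: 10% annual growth from a $4.6B base, two years out.
predicted_2018 = project(4.6, 0.10, 2)   # ~ $5.57B
actual_2018 = 12.6                       # what the IFR actually recorded for 2018

print(f"Predicted 2018 revenue: ${predicted_2018:.2f}B")
print(f"Actual 2018 revenue:    ${actual_2018:.1f}B")
print(f"Underestimate factor:   {actual_2018 / predicted_2018:.1f}x")
```

On these numbers, the recorded 2018 revenue comes out more than twice the projection, which is the gap the paragraph above is pointing at.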

It’s unlikely that we’ll recall robots, once designed, built and deployed, for anything other than a physical safety issue. And the gendering of robots isn’t something we can roll out a software update to fix. We need to start requesting companies to not deploy robots that reinforce gender stereotyping. They can still be cute and lovable, I’m not opposed to the R2D2 robot stereotype!

Consumers are starting to fight back against the gender stereotyping of toys, which really only started in the 20th century as a way to extract more money from parents, and some brands are realizing that there’s an opportunity for them in developing gender-neutral toys. Recent research from the Pew Research Center found that 64% of US adults want boys to play with toys associated with girls, and 76% want girls to play with toys associated with boys. The difference between girls and boys can be explained because girls’ role playing (caring and nurturing) is still seen as more negative than boys’ (fighting and leadership). But the overall numbers show that society has developed a real desire to avoid gender stereotyping completely.

Sadly, it’s like knowing sugar is bad for us, while it still tastes sweet.

In 2016, I debated Ben Goertzel, maker of Sophia the Robot, on the main stage of the Web Summit on whether humanoid robots were good or bad. I believe I made the better case in terms of argument, but ultimately the crowd sided with Goertzel, and by default with Sophia. (there are a couple of descriptions of the debate referenced below).

Robots are still bright shiny new toys to us. When are we going to realize that we’ve already opened the box and played this game, and that women, or any underrepresented group, or anyone in a stereotyped role, is going to be the loser? No, we’re all going to lose! Because we don’t want these stereotypes any more, and robots are just going to reinforce the stereotypes that we already know we don’t want.

And did I mention how white all the robots are? Yes, they are racially stereotyped too. (See Ayanna Howard’s new book “Sex, Race and Robots: How to be human in an age of AI”)

References:

]]>
Why we need a robot registry https://robohub.org/why-we-need-a-robot-registry/ Thu, 26 Nov 2020 11:33:59 +0000 https://robohub.org/why-we-need-a-robot-registry/ Robots are rolling out into the real world, and we need to meet the emerging challenges in a responsible fashion, but one that doesn’t block innovation. At the recent ARM Developers Summit 2020, I shared my suggestions for five practical steps that we could undertake at a regional, national or global level as part of the Five Laws of Robotics presentation (below).

The Five Laws of Robotics are drawn from the EPSRC Principles of Robotics, first developed in 2010 and maintained as a living document, workshopped by experts across many relevant disciplines. These five principles are practical and concise, embracing the majority of the principles expressed across a wide range of ethics documents. I will explain each in more detail.

  1. There should be no killer robots.
  2. Robots should (be designed to) obey the law.
  3. Robots should (be designed to) be good products.
  4. Robots should be transparent in operation.
  5. Robots should be identifiable.

EPSRC says that robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. More information is at the Campaign to Stop Killer Robots.

Humans, not robots, are the responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.

Robots are products. They should be designed using processes which assure their safety and security. Quality guidelines, processes and standards already exist.

Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit users; instead, their machine nature should be made transparent.

It should be possible to find out who is responsible for any robot. My suggestion here is that robots in public spaces require a license plate: a clear identification of the robot and the responsible organization.

As well as speaking about the Five Laws of Robotics, I introduced five practical proposals to help us respond at a regional, national and global level.

  1. Robot Registry (license plates, access to database of owners/operators)
  2. Algorithmic Transparency (via Model Cards and Testing Benchmarks)
  3. Independent Ethical Review Boards (as in biotech industry)
  4. Robot Ombudspeople to liaise between public and policy makers
  5. Rewarding Good Robots (design awards and case studies)
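
To make the registry idea concrete, here is a minimal sketch of what a plate-to-operator lookup could look like. Everything in it (the RobotRegistration fields, the plate format, the example operator) is a hypothetical illustration, not a proposed standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RobotRegistration:
    """One registry record; a hypothetical schema for illustration only."""
    plate: str            # identifier displayed on the robot, like a license plate
    operator: str         # organization responsible for the robot
    contact: str          # how the public or an ombudsperson reaches the operator
    deployment_area: str  # where the robot operates

class RobotRegistry:
    """Toy in-memory registry mapping plates to responsible operators."""

    def __init__(self) -> None:
        self._records = {}

    def register(self, record: RobotRegistration) -> None:
        self._records[record.plate] = record

    def lookup(self, plate: str) -> Optional[RobotRegistration]:
        # A member of the public reads the plate off a robot and queries it.
        return self._records.get(plate)

registry = RobotRegistry()
registry.register(RobotRegistration(
    plate="SVR-0042",
    operator="Example Robotics Inc.",
    contact="ops@example.com",
    deployment_area="Downtown sidewalk delivery",
))
print(registry.lookup("SVR-0042").operator)  # Example Robotics Inc.
```

The point of the sketch is that identification is cheap to implement; the hard part is agreeing on who runs the database and who may query it.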

Silicon Valley Robotics is about to announce the first winners of our inaugural Robotics Industry Awards. The SVR Industry Awards consider responsible design as well as technological innovation and commercial success. There are also some ethical checkmark or certification initiatives in preparation, but like the development of new standards, these can take a long time to do properly. Awards, endorsements and case studies, by contrast, can be available immediately to foster the discussion of what constitutes a good robot and which social challenges robotics needs to solve.

In fact, the robot registry suggestion was recently picked up by Stacey Higginbotham in IEEE Spectrum. Silicon Valley Robotics is putting together these policy suggestions for the new White House administration.

]]>
Exploring the DARPA SubTerranean Challenge https://robohub.org/exploring-the-darpa-subterranean-challenge/ Mon, 27 Jul 2020 18:29:18 +0000 https://robohub.org/exploring-the-darpa-subterranean-challenge/ The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground. On July 20, Dr. Timothy Chung, the DARPA SubT Challenge Program Manager, joined Silicon Valley Robotics to discuss the upcoming Cave Circuit and Subterranean Challenge Finals, and the opportunities that still exist for individual and team entries in both the Virtual and Systems Challenges, as shown in the video below.

The SubT Challenge allows teams to demonstrate new approaches for robotic systems to rapidly map, navigate, and search complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.

The SubT Challenge is organized into two Competitions (Systems and Virtual), each with two tracks (DARPA-funded and self-funded).

SYSTEMS COMPETITION RESULTS

Teams in the Systems Competition completed four total runs: two 60-minute runs on each of two courses, Experimental and Safety Research. The courses varied in difficulty and included 20 artifacts each. Teams earned points by correctly reporting artifact locations to within five meters. The final score was the total of each team’s best score from each of the courses. In instances of a points tie, team rank was determined by (1) the earliest time the last artifact was successfully reported, averaged across the team’s best runs on each course; (2) the earliest time the first artifact was successfully reported, averaged across the team’s best runs on each course; and (3) the lowest average time across all valid artifact reports, averaged across the team’s best runs on each course.
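
The scoring and tie-break rules above read naturally as a lexicographic sort key. The sketch below is my own illustration of that rule, not DARPA’s scoring code; each team keeps one best run per course, recorded as a score plus the timestamps (in seconds) of its valid artifact reports.

```python
from statistics import mean

def rank_key(best_runs):
    """best_runs: list of (score, report_times), one entry per course."""
    total = sum(score for score, _ in best_runs)
    avg_last = mean(max(times) for _, times in best_runs)   # tie-break (1)
    avg_first = mean(min(times) for _, times in best_runs)  # tie-break (2)
    avg_all = mean(mean(times) for _, times in best_runs)   # tie-break (3)
    # Higher total score ranks first; earlier averaged times break ties.
    return (-total, avg_last, avg_first, avg_all)

# Two hypothetical teams tied on points (10 each).
teams = {
    "A": [(5, [100, 900, 1800]), (5, [200, 1500])],
    "B": [(5, [100, 700]), (5, [300, 1200])],
}
ranking = sorted(teams, key=lambda name: rank_key(teams[name]))
print(ranking)  # ['B', 'A']: B reported its last artifacts earlier on average
```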

The Tunnel Circuit final scores were as follows:

25 Explorer, DARPA-funded
11 CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
10 CTU-CRAS, self-funded winner of the $200,000 Tunnel Circuit prize
9 MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
7 CSIRO Data61, DARPA-funded
5 CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
2 NCTU (National Chiao Tung University), self-funded
2 Robotika, self-funded
1 CRETISE (Collaborative Robot Exploration and Teaming In Subterranean Environments), DARPA-funded
1 PLUTO (Pennsylvania Laboratory for Underground Tunnel Operations), DARPA-funded
0 Coordinated Robotics, self-funded

The Urban Circuit final scores were as follows:

16 CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
11 Explorer, DARPA-funded
10 CTU-CRAS-NORLAB (Czech Technical University in Prague – Center for Robotics and Autonomous Systems – Northern Robotics Laboratory), self-funded winner of $500,000 first place prize
9 CSIRO Data61, DARPA-funded
7 CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
4 Coordinated Robotics, self-funded winner of the $250,000 second place prize
4 MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
2 NCTU (National Chiao Tung University), self-funded
2 Robotika, self-funded
1 NUS SEDS, (National University of Singapore Students for Exploration and Development of Space), self-funded

VIRTUAL COMPETITION RESULTS

The Virtual competitors developed advanced software for their respective teams of virtual aerial and wheeled robots to explore tunnel environments, with the goal of finding various artifacts hidden throughout the virtual environment and reporting their locations and types to within a five-meter radius during each 60-minute simulation run. A correct report is worth one point and competitors win by accruing the most points across multiple, diverse simulated environments.

The Tunnel Circuit final scores were as follows:

50 Coordinated Robotics, self-funded
21 BARCS, DARPA-funded
14 SODIUM-24 Robotics, self-funded
9 Robotika, self-funded
7 COLLEMBOLA, DARPA-funded
1 Flying Fitches, self-funded
0 AAUNO, self-funded
0 CYNET.ai, self-funded

The Urban Circuit final scores were as follows:

150 BARCS (Bayesian Adaptive Robot Control System), DARPA-funded
115 Coordinated Robotics, self-funded winner of the $250,000 first place prize
21 Robotika, self-funded winner of the $150,000 second place prize
17 COLLEMBOLA (Communication Optimized, Low Latency Exploration, Map-Building and Object Localization Autonomy), DARPA-funded
7 Flying Fitches, self-funded winner of the $100,000 third place prize
7 SODIUM-24 Robotics, self-funded
2 CYNET.ai, self-funded
0 AAUNO, self-funded

2020 Cave Circuit and Finals

The Cave Circuit, the final of three Circuit events, is planned for later this year. The Final Event, planned for summer 2021, will put both Systems and Virtual teams to the test with courses that incorporate diverse elements from all three environments. Teams will compete for up to $2 million in the Systems Final Event and up to $1.5 million in the Virtual Final Event, with additional prizes.

Learn more about the opportunities to participate with either a Virtual or Systems team: https://www.subtchallenge.com/

Dr. Timothy Chung joined DARPA’s Tactical Technology Office as a program manager in February 2016. He serves as the Program Manager for the OFFensive Swarm-Enabled Tactics Program and the DARPA Subterranean (SubT) Challenge.

Prior to joining DARPA, Dr. Chung served as an Assistant Professor at the Naval Postgraduate School and Director of the Advanced Robotic Systems Engineering Laboratory (ARSENL). His academic interests included modeling, analysis, and systems engineering of operational settings involving unmanned systems, combining collaborative autonomy development efforts with an extensive live-fly field experimentation program for swarm and counter-swarm unmanned system tactics and associated technologies.

Dr. Chung holds a Bachelor of Science in Mechanical and Aerospace Engineering from Cornell University. He also earned Master of Science and Doctor of Philosophy degrees in Mechanical Engineering from the California Institute of Technology.

Learn more about DARPA here: www.darpa.mil

]]>
RSS 2020 – all the papers and videos! https://robohub.org/rss-2020-all-the-papers-and-videos/ Sat, 18 Jul 2020 21:11:07 +0000 https://robohub.org/rss-2020-all-the-papers-and-videos/

RSS 2020 was held virtually this year, from the RSS Pioneers Workshop on July 11 to the Paper Awards and Farewell on July 16. Many talks are now available online, including the 103 accepted papers, each presented as an online Spotlight Talk on the RSS YouTube channel, and of course the plenaries and much of the workshop content as well. We’ve tried to link here to all of the goodness from RSS 2020.

The RSS Keynote on July 15 was delivered by Josh Tenenbaum, Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and CSAIL. It was titled “It’s all in your head: Intuitive physics, planning, and problem-solving in brains, minds and machines”.

Abstract: I will overview what we know about the human mind’s internal models of the physical world, including how these models arise over evolution and developmental learning, how they are implemented in neural circuitry, and how they are used to support planning and rapid trial-and-error problem-solving in tool use and other physical reasoning tasks. I will also discuss prospects for building more human-like physical common sense in robots and other AI systems.

RSS 2020 introduced the new RSS Test of Time Award, given to the highest-impact papers published at RSS (and potentially journal versions thereof) at least ten years ago. Impact may mean that a paper changed how we think about problems or about robot design, that it brought fully new problems to the attention of the community, or that it pioneered a new approach to robot design or problem solving. With this award, RSS wants to foster discussion of the long-term development of our field. The award is an opportunity to reflect on and discuss the past, which is essential to making progress in the future. The awardee’s keynote is therefore complemented with a Test of Time Panel session devoted to this important discussion.

This year’s Test of Time Award goes to a pair of papers for pioneering an information smoothing approach to the SLAM problem via square root factorization, its interpretation as a graphical model, and the widely used GTSAM free software repository.

Abstract: Many estimation, planning and optimal control problems in robotics have an optimization problem at their core. In most of these optimization problems, the objective function is composed of many different factors or terms that are local in nature, i.e., they only depend on a small subset of the variables. 10 years ago the Square Root SAM papers identified factor graphs as a particularly insightful way of modeling this locality structure. Since then we have realized that factor graphs can represent a wide variety of problems across robotics, expose opportunities to improve computational performance, and are beneficial in designing and thinking about how to model a problem, even aside from performance considerations. Many of these principles have been embodied in our evolving open source package GTSAM, which puts factor graphs front and central, and which has been used with great success in a number of state of the art robotics applications. We will also discuss where factor graphs, in our opinion, can break in
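
The locality the abstract describes is easy to see in a toy problem. The sketch below is plain Python for illustration (a real system would use GTSAM itself): it assembles the normal equations of a 1D pose chain factor by factor, with each factor writing only into the rows and columns of the variables it touches, then solves by elimination.

```python
# Toy 1D SLAM: a prior on x0 plus two odometry measurements.
# Each factor is (linear coefficients per variable, measured value).
factors = [
    ({0: 1.0}, 0.0),           # prior:    x0      = 0
    ({0: -1.0, 1: 1.0}, 1.0),  # odometry: x1 - x0 = 1
    ({1: -1.0, 2: 1.0}, 1.0),  # odometry: x2 - x1 = 1
]
n = 3

# Assemble H = A^T A and g = A^T b factor by factor; each factor touches
# only its own variables, which is exactly the locality structure.
H = [[0.0] * n for _ in range(n)]
g = [0.0] * n
for coeffs, z in factors:
    for i, ai in coeffs.items():
        g[i] += ai * z
        for j, aj in coeffs.items():
            H[i][j] += ai * aj

# Solve H x = g by Gauss-Jordan elimination (real solvers use sparse
# Cholesky or QR, which is where the "square root" in Square Root SAM
# comes from).
for col in range(n):
    pivot = H[col][col]
    for j in range(col, n):
        H[col][j] /= pivot
    g[col] /= pivot
    for row in range(n):
        if row != col and H[row][col] != 0.0:
            f = H[row][col]
            for j in range(col, n):
                H[row][j] -= f * H[col][j]
            g[row] -= f * g[col]

print([round(x, 6) for x in g])  # least-squares poses, approximately [0, 1, 2]
```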

The RSS 2020 Plenary Sessions highlighted Early Career Awards for researchers Byron Boots, Luca Carlone and Jeannette Bohg. Byron Boots is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University.

Title: Perspectives on Machine Learning for Robotics

Abstract: Recent advances in machine learning are leading to new tools for designing intelligent robots: functions relied on to govern a robot’s behavior can be learned from a robot’s interaction with its environment rather than hand-designed by an engineer. Many machine learning methods assume little prior knowledge and are extremely flexible; they can model almost anything! But this flexibility comes at a cost. The same algorithms are often notoriously data hungry and computationally expensive, two problems that can be debilitating for robotics. In this talk I’ll discuss how machine learning can be combined with prior knowledge to build effective solutions to robotics problems. I’ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I’ll focus on the use of simulation and expert advice to augment learning. I’ll discuss how imperfect models can be leveraged to rapidly update simple control policies and how imitation can accelerate reinforcement learning. I will also show how we have applied some of these ideas to an autonomous off-road racing task that requires impressive sensing, speed, and agility to complete.

Title: The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding

Abstract: Robot perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception.

This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems.
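
To get a feel for why extreme outlier rates are hard, consider estimating even a single scalar when 70% of the measurements are garbage. The sketch below is my own one-dimensional illustration of consensus maximization, not the certifiable algorithms from the talk (which come with formal performance guarantees): plain averaging is dragged far off, while a consensus search recovers the value.

```python
import random
from statistics import mean

random.seed(0)
true_value = 5.0
# 30 tightly clustered inlier measurements plus 70 gross outliers.
data = [true_value + random.gauss(0, 0.01) for _ in range(30)]
data += [random.uniform(50, 100) for _ in range(70)]

# Naive least squares (the mean) is ruined by the outliers.
naive = mean(data)

# Consensus maximization in 1D: pick the measurement that agrees with the
# most others within a threshold, then average only the agreeing subset.
eps = 0.1
best = max(data, key=lambda x: sum(abs(x - y) <= eps for y in data))
robust = mean(y for y in data if abs(y - best) <= eps)

print(f"naive={naive:.2f} robust={robust:.2f}")
```

Note that even a classical robust estimator like the median breaks down past 50% outliers; getting guarantees at 99% outliers, as described in the talk, requires far stronger machinery than this sketch.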

The second effort targets high-level understanding. While humans are able to quickly grasp the geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments over an extended period of time.

Title: A Tale of Success and Failure in Robotics Grasping and Manipulation

Abstract: In 2007, I was a naïve grad student and started to work on vision-based robotic grasping. I had no prior background in manipulation, kinematics, dynamics or control. Yet, I dove into the field by re-implementing and improving a learning-based method. While making some contributions, the proposed method also had many limitations partly due to the way the problem was framed. Looking back at the entire journey until today, I find that I have learned the most about robotic grasping and manipulation from observing failures and limitations of existing approaches – including my own. In this talk, I want to highlight how these failures and limitations have shaped my view on what may be some of the underlying principles of autonomous robotic manipulation. I will emphasise three points. First, perception and prediction will always be noisy, partial and sometimes just plain wrong. Therefore, one focus of my research is on methods that support decision-making under uncertainty due to noisy sensing, inaccurate models and hard-to-predict dynamics. To this end, I will present a robotic system that demonstrates the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. I will also talk about work that funnels uncertainty by enabling robots to exploit contact constraints during manipulation.

Second, a robot has many more sensors than just cameras and they all provide complementary information. Therefore, one focus of my research is on methods that can exploit multimodal information such as vision and touch for contact-rich manipulation. It is non-trivial to manually design a manipulation controller that combines modalities with very different characteristics. I will present work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. And third, choosing the right robot action representation has a large influence on the success of a manipulation policy, controller or planner. While believing many years that inferring contact points for robotic grasping is futile, I will present work that convinced me otherwise. Specifically, this work uses contact points as an abstraction that can be re-used by a diverse set of robot hands.

Inclusion@RSS hosted a panel, “On the Future of Robotics”, to discuss how we can have an inclusive robotics community and its impact on the future of the field. Moderator: Matt Johnson-Roberson (University of Michigan) with Panelists: Tom Williams (Colorado School of Mines), Eduard Fosch-Villaronga (Leiden University), Lydia Tapia (University of New Mexico), Chris Macnab (University of Calgary), Adam Poulsen (Charles Sturt University), Chad Jenkins (University of Michigan), Kendall Queen (University of Pennsylvania), Naveen Kuppuswamy (Toyota Research Institute).

The RSS community is committed to increasing the participation of groups traditionally underrepresented in robotics (including but not limited to: women, LGBTQ+, underrepresented minorities, and people with disabilities), especially people early in their studies and career. Such efforts are crucial for increasing research capacity, creativity, and broadening the impact of robotics research.

The RSS Pioneers Workshop for senior Ph.D. students and postdocs was modelled on the highly successful HRI Pioneers Workshop and took place on Saturday, July 11. The goal of RSS Pioneers is to bring together a cohort of the world’s top early career researchers to foster creativity and collaborations surrounding challenges in all areas of robotics, as well as to help young researchers navigate their next career stages. The workshop included a mix of research and career talks from senior scholars in the field, from both academia and industry, research presentations from attendees, and networking activities, with a poster session where Pioneers got a chance to externally showcase their research.

Content from the various workshops on July 12 and 13 may be available through the various workshop websites.

RSS 2020 Accepted Workshops

Sunday, July 12

WS Title Organizers Virtual Session Link
WS1-2 Reacting to contact: Enabling transparent interactions through intelligent sensing and actuation (Ankit Bhatia, Aaron M. Johnson, Matthew T. Mason) [Session]
WS1-3 Certifiable Robot Perception: from Global Optimization to Safer Robots (Luca Carlone, Tat-Jun Chin, Anders Eriksson, Heng Yang) [Session]
WS1-4 Advancing the State of Machine Learning for Manufacturing Robotics (Elena Messina, Holly Yanco, Megan Zimmerman, Craig Schlenoff, Dragos Margineantu) [Session]
WS1-5 Advances and Challenges in Imitation Learning for Robotics (Scott Niekum, Akanksha Saran, Yuchen Cui, Nick Walker, Andreea Bobu, Ajay Mandlekar, Danfei Xu) [Session]
WS1-6 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics (Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian) [Session]
WS1-7 ROS Carpentry Workshop (Katherine Scott, Mabel Zhang, Camilo Buscaron, Steve Macenski) N/A
WS1-8 Perception and Control for Fast and Agile Super-Vehicles II (Varun Murali, Phillip Foehn, Davide Scaramuzza, Sertac Karaman) [Session]
WS1-9 Robotics Retrospectives (Jeannette Bohg, Franziska Meier, Arunkumar Byravan, Akshara Rai) [Session]
WS1-10 Heterogeneous Multi-Robot Task Allocation and Coordination (Harish Ravichandar, Ragesh Ramachandran, Sonia Chernova, Seth Hutchinson, Gaurav Sukhatme, Vijay Kumar) [Session]
WS1-11 Learning (in) Task and Motion Planning (Danny Driess, Neil T. Dantam, Lydia E. Kavraki, Marc Toussaint) [Session]
WS1-12 Performing Arts Robots & Technologies, Integrated (PARTI) (Naomi Fitter, Heather Knight, Amy LaViers) [Session]
WS1-13 Robots in the Wild: Challenges in Deploying Robust Autonomy for Robotic Exploration (Hannah Kerner, Amy Tabb, Jnaneshwar Das, Pratap Tokekar, Masahiro Ono) [Session]
WS1-14 Emergent Behaviors in Human-Robot Systems (Erdem Bıyık, Minae Kwon, Dylan Losey, Noah Goodman, Stefanos Nikolaidis, Dorsa Sadigh) [Session]

Monday, July 13

WS Title Organizers Virtual Session Link
WS2-1 Interaction and Decision-Making in Autonomous Driving (Rowan McAllister, Liting Sun, Igor Gilitschenski, Daniela Rus) [Session]
WS2-2 2nd RSS Workshop on Robust Autonomy: Tools for Safety in Real-World Uncertain Environments (Andrea Bajcsy, Ransalu Senanayake, Somil Bansal, Sylvia Herbert, David Fridovich-Keil, Jaime Fernández Fisac) [Session]
WS2-3 AI & Its Alternatives in Assistive & Collaborative Robotics (Deepak Gopinath, Aleksandra Kalinowska, Mahdieh Nejati, Katarina Popovic, Brenna Argall, Todd Murphey) [Session]
WS2-4 Benchmarking Tools for Evaluating Robotic Assembly of Small Parts (Adam Norton, Holly Yanco, Joseph Falco, Kenneth Kimble) [Session]
WS2-5 Good Citizens of Robotics Research (Mustafa Mukadam, Nima Fazeli, Niko Sünderhauf) [Session]
WS2-6 Structured Approaches to Robot Learning for Improved Generalization (Arunkumar Byravan, Markus Wulfmeier, Franziska Meier, Mustafa Mukadam, Nicolas Heess, Angela Schoellig, Dieter Fox) [Session]
WS2-7 Explainable and Trustworthy Robot Decision Making for Scientific Data Collection (Nisar Ahmed, P. Michael Furlong, Geoff Hollinger, Seth McCammon) [Session]
WS2-8 Closing the Academia to Real-World Gap in Service Robotics (Guilherme Maeda, Nick Walker, Petar Kormushev, Maru Cabrera) [Session]
WS2-9 Visuotactile Sensors for Robust Manipulation: From Perception to Control (Alex Alspach, Naveen Kuppuswamy, Avinash Uttamchandani, Filipe Veiga, Wenzhen Yuan) [Session]
WS2-10 Self-Supervised Robot Learning (Abhinav Valada, Anelia Angelova, Joschka Boedecker, Oier Mees, Wolfram Burgard) [Session]
WS2-11 Power On and Go Robots: ‘Out-of-the-Box’ Systems for Real-World Applications (Jonathan Kelly, Stephan Weiss, Paolo Robuffo Giordano, Valentin Peretroukhin) [Session]
WS2-12 Workshop on Visual Learning and Reasoning for Robotic Manipulation (Kuan Fang, David Held, Yuke Zhu, Dinesh Jayaraman, Animesh Garg, Lin Sun, Yu Xiang, Greg Dudek) [Session]
WS2-13 Action Representations for Learning in Continuous Control (Tamim Asfour, Miroslav Bogdanovic, Jeannette Bohg, Animesh Garg, Roberto Martín-Martín, Ludovic Righetti) [Session]

RSS 2020 Accepted Papers

Paper ID Title Authors Virtual Session Link
1 Planning and Execution using Inaccurate Models with Provable Guarantees Anirudh Vemula (Carnegie Mellon University)*; Yash Oza (CMU); J. Bagnell (Aurora Innovation); Maxim Likhachev (CMU) Virtual Session #1
2 Swoosh! Rattle! Thump! – Actions that Sound Dhiraj Gandhi (Carnegie Mellon University)*; Abhinav Gupta (Carnegie Mellon University); Lerrel Pinto (NYU/Berkeley) Virtual Session #1
3 Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image Danny Driess (Machine Learning and Robotics Lab, University of Stuttgart)*; Jung-Su Ha (); Marc Toussaint () Virtual Session #1
4 Elaborating on Learned Demonstrations with Temporal Logic Specifications Craig Innes (University of Edinburgh)*; Subramanian Ramamoorthy (University of Edinburgh) Virtual Session #1
5 Non-revisiting Coverage Task with Minimal Discontinuities for Non-redundant Manipulators Tong Yang (Zhejiang University)*; Jaime Valls Miro (University of Technology Sydney); Yue Wang (Zhejiang University); Rong Xiong (Zhejiang University) Virtual Session #1
6 LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices Radu Alexandru Rosu (University of Bonn)*; Peer Schütt (University of Bonn); Jan Quenzel (University of Bonn); Sven Behnke (University of Bonn) Virtual Session #1
7 A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty Valentin Peretroukhin (University of Toronto)*; Matthew Giamou (University of Toronto); W. Nicholas Greene (MIT); David Rosen (MIT Laboratory for Information and Decision Systems); Jonathan Kelly (University of Toronto); Nicholas Roy (MIT) Virtual Session #1
8 Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication Brian Reily (Colorado School of Mines)*; Christopher Reardon (ARL); Hao Zhang (Colorado School of Mines) Virtual Session #1
9 OverlapNet: Loop Closing for LiDAR-based SLAM Xieyuanli Chen (Photogrammetry & Robotics Lab, University of Bonn)*; Thomas Läbe (Institute for Geodesy and Geoinformation, University of Bonn); Andres Milioto (University of Bonn); Timo Röhling (Fraunhofer FKIE); Olga Vysotska (Autonomous Intelligent Driving GmbH); Alexandre Haag (AID); Jens Behley (University of Bonn); Cyrill Stachniss (University of Bonn) Virtual Session #1
10 The Dark Side of Embodiment – Teaming Up With Robots VS Disembodied Agents Filipa Correia (INESC-ID & University of Lisbon)*; Samuel Gomes (IST/INESC-ID); Samuel Mascarenhas (INESC-ID); Francisco S. Melo (IST/INESC-ID); Ana Paiva (INESC-ID U of Lisbon) Virtual Session #1
11 Shared Autonomy with Learned Latent Actions Hong Jun Jeon (Stanford University)*; Dylan Losey (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #1
12 Regularized Graph Matching for Correspondence Identification under Uncertainty in Collaborative Perception Peng Gao (Colorado School of Mines)*; Rui Guo (Toyota Motor North America); Hongsheng Lu (Toyota Motor North America); Hao Zhang (Colorado School of Mines) Virtual Session #1
13 Frequency Modulation of Body Waves to Improve Performance of Limbless Robots Baxi Zhong (Georgia Tech)*; Tianyu Wang (Carnegie Mellon University); Jennifer Rieser (Georgia Institute of Technology); Abdul Kaba (Morehouse College); Howie Choset (Carnegie Mellon University); Daniel Goldman (Georgia Institute of Technology) Virtual Session #1
14 Self-Reconfiguration in Two-Dimensions via Active Subtraction with Modular Robots Matthew Hall (The University of Sheffield)*; Anil Ozdemir (The University of Sheffield); Roderich Gross (The University of Sheffield) Virtual Session #1
15 Singularity Maps of Space Robots and their Application to Gradient-based Trajectory Planning Davide Calzolari (Technical University of Munich (TUM), German Aerospace Center (DLR))*; Roberto Lampariello (German Aerospace Center); Alessandro Massimo Giordano (Deutches Zentrum für Luft und Raumfahrt) Virtual Session #1
16 Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications Roma Patel (Brown University)*; Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #1
17 Fast Uniform Dispersion of a Crash-prone Swarm Michael Amir (Technion – Israel Institute of Technology)*; Freddy Bruckstein (Technion) Virtual Session #1
18 Simultaneous Enhancement and Super-Resolution of Underwater Imagery for Improved Visual Perception Md Jahidul Islam (University of Minnesota Twin Cities)*; Peigen Luo (University of Minnesota-Twin Cities); Junaed Sattar (University of Minnesota) Virtual Session #1
19 Collision Probabilities for Continuous-Time Systems Without Sampling Kristoffer Frey (MIT)*; Ted Steiner (Charles Stark Draper Laboratory, Inc.); Jonathan How (MIT) Virtual Session #1
20 Event-Driven Visual-Tactile Sensing and Learning for Robots Tasbolat Taunyazov (National University of Singapore); Weicong Sng (National University of Singapore); Brian Lim (National University of Singapore); Hian Hian See (National University of Singapore); Jethro Kuan (National University of Singapore); Abdul Fatir Ansari (National University of Singapore); Benjamin Tee (National University of Singapore); Harold Soh (National University Singapore)* Virtual Session #1
21 Resilient Distributed Diffusion for Multi-Robot Systems Using Centerpoint JIANI LI (Vanderbilt University)*; Waseem Abbas (Vanderbilt University); Mudassir Shabbir (Information Technology University); Xenofon Koutsoukos (Vanderbilt University) Virtual Session #1
22 Pixel-Wise Motion Deblurring of Thermal Videos Manikandasriram Srinivasan Ramanagopal (University of Michigan)*; Zixu Zhang (University of Michigan); Ram Vasudevan (University of Michigan); Matthew Johnson-Roberson (University of Michigan) Virtual Session #1
23 Controlling Contact-Rich Manipulation Under Partial Observability Florian Wirnshofer (Siemens AG)*; Philipp Sebastian Schmitt (Siemens AG); Georg von Wichert (Siemens AG); Wolfram Burgard (University of Freiburg) Virtual Session #1
24 AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos Laura Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley) Virtual Session #1
25 Provably Constant-time Planning and Re-planning for Real-time Grasping Objects off a Conveyor Belt Fahad Islam (Carnegie Mellon University)*; Oren Salzman (Technion); Aditya Agarwal (CMU); Maxim Likhachev (Carnegie Mellon University) Virtual Session #1
26 Online IMU Intrinsic Calibration: Is It Necessary? Yulin Yang (University of Delaware)*; Patrick Geneva (University of Delaware); Xingxing Zuo (Zhejiang University); Guoquan Huang (University of Delaware) Virtual Session #1
27 A Berry Picking Robot With A Hybrid Soft-Rigid Arm: Design and Task Space Control Naveen Kumar Uppalapati (University of Illinois at Urbana Champaign)*; Benjamin Walt (University of Illinois at Urbana Champaign); Aaron Havens (University of Illinois Urbana Champaign); Armeen Mahdian (University of Illinois at Urbana Champaign); Girish Chowdhary (University of Illinois at Urbana Champaign); Girish Krishnan (University of Illinois at Urbana Champaign) Virtual Session #1
28 Iterative Repair of Social Robot Programs from Implicit User Feedback via Bayesian Inference Michael Jae-Yoon Chung (University of Washington)*; Maya Cakmak (University of Washington) Virtual Session #1
29 Cable Manipulation with a Tactile-Reactive Gripper Siyuan Dong (MIT); Shaoxiong Wang (MIT); Yu She (MIT)*; Neha Sunil (Massachusetts Institute of Technology); Alberto Rodriguez (MIT); Edward Adelson (MIT, USA) Virtual Session #1
30 Automated Synthesis of Modular Manipulators’ Structure and Control for Continuous Tasks around Obstacles Thais Campos de Almeida (Cornell University)*; Samhita Marri (Cornell University); Hadas Kress-Gazit (Cornell) Virtual Session #1
31 Learning Memory-Based Control for Human-Scale Bipedal Locomotion Jonah Siekmann (Oregon State University)*; Srikar Valluri (Oregon State University); Jeremy Dao (Oregon State University); Francis Bermillo (Oregon State University); Helei Duan (Oregon State University); Alan Fern (Oregon State University); Jonathan Hurst (Oregon State University) Virtual Session #1
32 Multi-Fidelity Black-Box Optimization for Time-Optimal Quadrotor Maneuvers Gilhyun Ryou (Massachusetts Institute of Technology)*; Ezra Tal (Massachusetts Institute of Technology); Sertac Karaman (Massachusetts Institute of Technology) Virtual Session #1
33 Manipulation Trajectory Optimization with Online Grasp Synthesis and Selection Lirui Wang (University of Washington)*; Yu Xiang (NVIDIA); Dieter Fox (NVIDIA Research / University of Washington) Virtual Session #1
34 VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation Ryan Hoque (UC Berkeley)*; Daniel Seita (University of California, Berkeley); Ashwin Balakrishna (UC Berkeley); Aditya Ganapathi (University of California, Berkeley); Ajay Tanwani (UC Berkeley); Nawid Jamali (Honda Research Institute); Katsu Yamane (Honda Research Institute); Soshi Iba (Honda Research Institute); Ken Goldberg (UC Berkeley) Virtual Session #1
35 Spatial Action Maps for Mobile Manipulation Jimmy Wu (Princeton University)*; Xingyuan Sun (Princeton University); Andy Zeng (Google); Shuran Song (Columbia University); Johnny Lee (Google); Szymon Rusinkiewicz (Princeton University); Thomas Funkhouser (Princeton University) Virtual Session #2
36 Generalized Tsallis Entropy Reinforcement Learning and Its Application to Soft Mobile Robots Kyungjae Lee (Seoul National University)*; Sungyub Kim (KAIST); Sungbin Lim (UNIST); Sungjoon Choi (Disney Research); Mineui Hong (Seoul National University); Jaein Kim (Seoul National University); Yong-Lae Park (Seoul National University); Songhwai Oh (Seoul National University) Virtual Session #2
37 Learning Labeled Robot Affordance Models Using Simulations and Crowdsourcing Adam Allevato (UT Austin)*; Elaine Short (Tufts University); Mitch Pryor (UT Austin); Andrea Thomaz (UT Austin) Virtual Session #2
38 Towards Embodied Scene Description Sinan Tan (Tsinghua University); Huaping Liu (Tsinghua University)*; Di Guo (Tsinghua University); Xinyu Zhang (Tsinghua University); Fuchun Sun (Tsinghua University) Virtual Session #2
39 Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving Zhangjie Cao (Stanford University); Erdem Biyik (Stanford University)*; Woodrow Wang (Stanford University); Allan Raventos (Toyota Research Institute); Adrien Gaidon (Toyota Research Institute); Guy Rosman (Toyota Research Institute); Dorsa Sadigh (Stanford) Virtual Session #2
40 Deep Drone Acrobatics Elia Kaufmann (ETH / University of Zurich)*; Antonio Loquercio (ETH / University of Zurich); Rene Ranftl (Intel Labs); Matthias Müller (Intel Labs); Vladlen Koltun (Intel Labs); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #2
41 Active Preference-Based Gaussian Process Regression for Reward Learning Erdem Biyik (Stanford University)*; Nicolas Huynh (École Polytechnique); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #2
42 A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play Shray Bansal (Georgia Institute of Technology)*; Jin Xu (Georgia Institute of Technology); Ayanna Howard (Georgia Institute of Technology); Charles Isbell (Georgia Institute of Technology) Virtual Session #2
43 Data-driven modeling of a flapping bat robot with a single flexible wing surface Jonathan Hoff (University of Illinois at Urbana-Champaign)*; Seth Hutchinson (Georgia Tech) Virtual Session #2
44 Safe Motion Planning for Autonomous Driving using an Adversarial Road Model Alex Liniger (ETH Zurich)*; Luc Van Gool (ETH Zurich) Virtual Session #2
45 A Motion Taxonomy for Manipulation Embedding David Paulius (University of South Florida)*; Nicholas Eales (University of South Florida); Yu Sun (University of South Florida) Virtual Session #2
46 Aerial Manipulation Using Hybrid Force and Position NMPC Applied to Aerial Writing Dimos Tzoumanikas (Imperial College London)*; Felix Graule (ETH Zurich); Qingyue Yan (Imperial College London); Dhruv Shah (Berkeley Artificial Intelligence Research); Marija Popovic (Imperial College London); Stefan Leutenegger (Imperial College London) Virtual Session #2
47 A Global Quasi-Dynamic Model for Contact-Trajectory Optimization in Manipulation Bernardo Aceituno-Cabezas (MIT)*; Alberto Rodriguez (MIT) Virtual Session #2
48 Vision-Based Goal-Conditioned Policies for Underwater Navigation in the Presence of Obstacles Travis Manderson (McGill University)*; Juan Camilo Gamboa Higuera (McGill University); Stefan Wapnick (McGill University); Jean-François Tremblay (McGill University); Florian Shkurti (University of Toronto); David Meger (McGill University); Gregory Dudek (McGill University) Virtual Session #2
49 Spatio-Temporal Stochastic Optimization: Theory and Applications to Optimal Control and Co-Design Ethan Evans (Georgia Institute of Technology)*; Andrew Kendall (Georgia Institute of Technology); Georgios Boutselis (Georgia Institute of Technology ); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #2
50 Kernel Taylor-Based Value Function Approximation for Continuous-State Markov Decision Processes Junhong Xu (INDIANA UNIVERSITY)*; Kai Yin (Vrbo, Expedia Group); Lantao Liu (Indiana University, Intelligent Systems Engineering) Virtual Session #2
51 HMPO: Human Motion Prediction in Occluded Environments for Safe Motion Planning Jaesung Park (University of North Carolina at Chapel Hill)*; Dinesh Manocha (University of Maryland at College Park) Virtual Session #2
52 Motion Planning for Variable Topology Truss Modular Robot Chao Liu (University of Pennsylvania)*; Sencheng Yu (University of Pennsylvania); Mark Yim (University of Pennsylvania) Virtual Session #2
53 Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning Archit Sharma (Google)*; Michael Ahn (Google); Sergey Levine (Google); Vikash Kumar (Google); Karol Hausman (Google Brain); Shixiang Gu (Google Brain) Virtual Session #2
54 Compositional Transfer in Hierarchical Reinforcement Learning Markus Wulfmeier (DeepMind)*; Abbas Abdolmaleki (Google DeepMind); Roland Hafner (Google DeepMind); Jost Tobias Springenberg (DeepMind); Michael Neunert (Google DeepMind); Noah Siegel (DeepMind); Tim Hertweck (DeepMind); Thomas Lampe (DeepMind); Nicolas Heess (DeepMind); Martin Riedmiller (DeepMind) Virtual Session #2
55 Learning from Interventions: Human-robot interaction as both explicit and implicit feedback Jonathan Spencer (Princeton University)*; Sanjiban Choudhury (University of Washington); Matt Barnes (University of Washington); Matthew Schmittle (University of Washington); Mung Chiang (Princeton University); Peter Ramadge (Princeton); Siddhartha Srinivasa (University of Washington) Virtual Session #2
56 Fourier movement primitives: an approach for learning rhythmic robot skills from demonstrations Thibaut Kulak (Idiap Research Institute)*; Joao Silverio (Idiap Research Institute); Sylvain Calinon (Idiap Research Institute) Virtual Session #2
57 Self-Supervised Localisation between Range Sensors and Overhead Imagery Tim Tang (University of Oxford)*; Daniele De Martini (University of Oxford); Shangzhe Wu (University of Oxford); Paul Newman (University of Oxford) Virtual Session #2
58 Probabilistic Swarm Guidance Subject to Graph Temporal Logic Specifications Franck Djeumou (University of Texas at Austin)*; Zhe Xu (University of Texas at Austin); Ufuk Topcu (University of Texas at Austin) Virtual Session #2
59 In-Situ Learning from a Domain Expert for Real World Socially Assistive Robot Deployment Katie Winkle (Bristol Robotics Laboratory)*; Severin Lemaignan; Praminda Caleb-Solly; Paul Bremner; Ailie Turton (University of the West of England); Ute Leonards Virtual Session #2
60 MRFMap: Online Probabilistic 3D Mapping using Forward Ray Sensor Models Kumar Shaurya Shankar (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #2
61 GTI: Learning to Generalize across Long-Horizon Tasks from Human Demonstrations Ajay Mandlekar (Stanford University); Danfei Xu (Stanford University)*; Roberto Martín-Martín (Stanford University); Silvio Savarese (Stanford University); Li Fei-Fei (Stanford University) Virtual Session #2
62 Agbots 2.0: Weeding Denser Fields with Fewer Robots Wyatt McAllister (University of Illinois)*; Joshua Whitman (University of Illinois); Allan Axelrod (University of Illinois); Joshua Varghese (University of Illinois); Girish Chowdhary (University of Illinois at Urbana Champaign); Adam Davis (University of Illinois) Virtual Session #2
63 Optimally Guarding Perimeters and Regions with Mobile Range Sensors Siwei Feng (Rutgers University)*; Jingjin Yu (Rutgers Univ.) Virtual Session #2
64 Learning Agile Robotic Locomotion Skills by Imitating Animals Xue Bin Peng (UC Berkeley)*; Erwin Coumans (Google); Tingnan Zhang (Google); Tsang-Wei Lee (Google Brain); Jie Tan (Google); Sergey Levine (UC Berkeley) Virtual Session #2
65 Learning to Manipulate Deformable Objects without Demonstrations Yilin Wu (UC Berkeley); Wilson Yan (UC Berkeley)*; Thanard Kurutach (UC Berkeley); Lerrel Pinto (); Pieter Abbeel (UC Berkeley) Virtual Session #2
66 Deep Differentiable Grasp Planner for High-DOF Grippers Min Liu (National University of Defense Technology)*; Zherong Pan (University of North Carolina at Chapel Hill); Kai Xu (National University of Defense Technology); Kanishka Ganguly (University of Maryland at College Park); Dinesh Manocha (University of North Carolina at Chapel Hill) Virtual Session #2
67 Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation Ahalya Prabhakar (Northwestern University)*; Ian Abraham (Northwestern University); Annalisa Taylor (Northwestern University); Millicent Schlafly (Northwestern University); Katarina Popovic (Northwestern University); Giovani Diniz (Raytheon); Brendan Teich (Raytheon); Borislava Simidchieva (Raytheon); Shane Clark (Raytheon); Todd Murphey (Northwestern Univ.) Virtual Session #2
68 Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints Shushman Choudhury (Stanford University)*; Jayesh Gupta (Stanford University); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford); Jeannette Bohg (Stanford) Virtual Session #2
69 Latent Belief Space Motion Planning under Cost, Dynamics, and Intent Uncertainty Dicong Qiu (iSee); Yibiao Zhao (iSee); Chris Baker (iSee)* Virtual Session #2
70 Learning of Sub-optimal Gait Controllers for Magnetic Walking Soft Millirobots Utku Culha (Max-Planck Institute for Intelligent Systems); Sinan Ozgun Demir (Max Planck Institute for Intelligent Systems); Sebastian Trimpe (Max Planck Institute for Intelligent Systems); Metin Sitti (Carnegie Mellon University)* Virtual Session #3
71 Nonparametric Motion Retargeting for Humanoid Robots on Shared Latent Space Sungjoon Choi (Disney Research)*; Matthew Pan (Disney Research); Joohyung Kim (University of Illinois Urbana-Champaign) Virtual Session #3
72 Residual Policy Learning for Shared Autonomy Charles Schaff (Toyota Technological Institute at Chicago)*; Matthew Walter (Toyota Technological Institute at Chicago) Virtual Session #3
73 Efficient Parametric Multi-Fidelity Surface Mapping Aditya Dhawale (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #3
74 Towards neuromorphic control: A spiking neural network based PID controller for UAV Rasmus Stagsted (University of Southern Denmark); Antonio Vitale (ETH Zurich); Jonas Binz (ETH Zurich); Alpha Renner (Institute of Neuroinformatics, University of Zurich and ETH Zurich); Leon Bonde Larsen (University of Southern Denmark); Yulia Sandamirskaya (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland)* Virtual Session #3
75 Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping Cristian Bodnar (University of Cambridge)*; Adrian Li (X); Karol Hausman (Google Brain); Peter Pastor (X); Mrinal Kalakrishnan (X) Virtual Session #3
76 Scaling data-driven robotics with reward sketching and batch reinforcement learning Serkan Cabi (DeepMind)*; Sergio Gómez Colmenarejo (DeepMind); Alexander Novikov (DeepMind); Ksenia Konyushova (DeepMind); Scott Reed (DeepMind); Rae Jeong (DeepMind); Konrad Zolna (DeepMind); Yusuf Aytar (DeepMind); David Budden (DeepMind); Mel Vecerik (Deepmind); Oleg Sushkov (DeepMind); David Barker (DeepMind); Jonathan Scholz (DeepMind); Misha Denil (DeepMind); Nando de Freitas (DeepMind); Ziyu Wang (Google Research, Brain Team) Virtual Session #3
77 MPTC – Modular Passive Tracking Controller for stack of tasks based control frameworks Johannes Englsberger (German Aerospace Center (DLR))*; Alexander Dietrich (DLR); George Mesesan (German Aerospace Center (DLR)); Gianluca Garofalo (German Aerospace Center (DLR)); Christian Ott (DLR); Alin Albu-Schaeffer (Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR)) Virtual Session #3
78 NH-TTC: A gradient-based framework for generalized anticipatory collision avoidance Bobby Davis (University of Minnesota Twin Cities)*; Ioannis Karamouzas (Clemson University); Stephen Guy (University of Minnesota Twin Cities) Virtual Session #3
79 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans Antoni Rosinol (MIT)*; Arjun Gupta (MIT); Marcus Abate (MIT); Jingnan Shi (MIT); Luca Carlone (Massachusetts Institute of Technology) Virtual Session #3
80 Robot Object Retrieval with Contextual Natural Language Queries Thao Nguyen (Brown University)*; Nakul Gopalan (Georgia Tech); Roma Patel (Brown University); Matthew Corsaro (Brown University); Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #3
81 AlphaPilot: Autonomous Drone Racing Philipp Foehn (ETH / University of Zurich)*; Dario Brescianini (University of Zurich); Elia Kaufmann (ETH / University of Zurich); Titus Cieslewski (University of Zurich & ETH Zurich); Mathias Gehrig (University of Zurich); Manasi Muglikar (University of Zurich); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #3
82 Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations Lin Shao (Stanford University)*; Toki Migimatsu (Stanford University); Qiang Zhang (Shanghai Jiao Tong University); Kaiyuan Yang (Stanford University); Jeannette Bohg (Stanford) Virtual Session #3
83 A Variable Rolling SLIP Model for a Conceptual Leg Shape to Increase Robustness of Uncertain Velocity on Unknown Terrain Adar Gaathon (Technion – Israel Institute of Technology)*; Amir Degani (Technion – Israel Institute of Technology) Virtual Session #3
84 Interpreting and Predicting Tactile Signals via a Physics-Based and Data-Driven Framework Yashraj Narang (NVIDIA)*; Karl Van Wyk (NVIDIA); Arsalan Mousavian (NVIDIA); Dieter Fox (NVIDIA) Virtual Session #3
85 Learning Active Task-Oriented Exploration Policies for Bridging the Sim-to-Real Gap Jacky Liang (Carnegie Mellon University)*; Saumya Saxena (Carnegie Mellon University); Oliver Kroemer (Carnegie Mellon University) Virtual Session #3
86 Manipulation with Shared Grasping Yifan Hou (Carnegie Mellon University)*; Zhenzhong Jia (SUSTech); Matthew Mason (Carnegie Mellon University) Virtual Session #3
87 Deep Learning Tubes for Tube MPC David Fan (Georgia Institute of Technology )*; Ali Agha (Jet Propulsion Laboratory); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #3
88 Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions Jason Choi (UC Berkeley); Fernando Castañeda (UC Berkeley); Claire Tomlin (UC Berkeley); Koushil Sreenath (Berkeley)* Virtual Session #3
89 Fast Risk Assessment for Autonomous Vehicles Using Learned Models of Agent Futures Allen Wang (MIT)*; Xin Huang (MIT); Ashkan Jasour (MIT); Brian Williams (Massachusetts Institute of Technology) Virtual Session #3
90 Online Domain Adaptation for Occupancy Mapping Anthony Tompkins (The University of Sydney)*; Ransalu Senanayake (Stanford University); Fabio Ramos (NVIDIA, The University of Sydney) Virtual Session #3
91 ALGAMES: A Fast Solver for Constrained Dynamic Games Simon Le Cleac’h (Stanford University)*; Mac Schwager (Stanford, USA); Zachary Manchester (Stanford) Virtual Session #3
92 Scalable and Probabilistically Complete Planning for Robotic Spatial Extrusion Caelan Garrett (MIT)*; Yijiang Huang (MIT Department of Architecture); Tomas Lozano-Perez (MIT); Caitlin Mueller (MIT Department of Architecture) Virtual Session #3
93 The RUTH Gripper: Systematic Object-Invariant Prehensile In-Hand Manipulation via Reconfigurable Underactuation Qiujie Lu (Imperial College London)*; Nicholas Baron (Imperial College London); Angus Clark (Imperial College London); Nicolas Rojas (Imperial College London) Virtual Session #3
94 Heterogeneous Graph Attention Networks for Scalable Multi-Robot Scheduling with Temporospatial Constraints Zheyuan Wang (Georgia Institute of Technology)*; Matthew Gombolay (Georgia Institute of Technology) Virtual Session #3
95 Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks Guangyao Shi (University of Maryland)*; Pratap Tokekar (University of Maryland); Lifeng Zhou (Virginia Tech) Virtual Session #3
96 Eyes-Closed Safety Kernels: Safety of Autonomous Systems Under Loss of Observability Forrest Laine (UC Berkeley)*; Chih-Yuan Chiu (UC Berkeley); Claire Tomlin (UC Berkeley) Virtual Session #3
97 Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations Glen Chou (University of Michigan)*; Necmiye Ozay (University of Michigan); Dmitry Berenson (U Michigan) Virtual Session #3
98 Nonlinear Model Predictive Control of Robotic Systems with Control Lyapunov Functions Ruben Grandia (ETH Zurich)*; Andrew Taylor (Caltech); Andrew Singletary (Caltech); Marco Hutter (ETHZ); Aaron Ames (Caltech) Virtual Session #3
99 Learning to Slide Unknown Objects with Differentiable Physics Simulations Changkyu Song (Rutgers University); Abdeslam Boularias (Rutgers University)* Virtual Session #3
100 Reachable Sets for Safe, Real-Time Manipulator Trajectory Design Patrick Holmes (University of Michigan); Shreyas Kousik (University of Michigan)*; Bohao Zhang (University of Michigan); Daphna Raz (University of Michigan); Corina Barbalata (Louisiana State University); Matthew Johnson Roberson (University of Michigan); Ram Vasudevan (University of Michigan) Virtual Session #3
101 Learning Task-Driven Control Policies via Information Bottlenecks Vincent Pacelli (Princeton University)*; Anirudha Majumdar (Princeton) Virtual Session #3
102 Simultaneously Learning Transferable Symbols and Language Groundings from Perceptual Data for Instruction Following Nakul Gopalan (Georgia Tech)*; Eric Rosen (Brown University); Stefanie Tellex (Brown University); George Konidaris (Brown) Virtual Session #3
103 A social robot mediator to foster collaboration and inclusion among children Sarah Gillet (Royal Institute of Technology)*; Wouter van den Bos (University of Amsterdam); Iolanda Leite (KTH) Virtual Session #3

The RSS Foundation is the governing body behind the Robotics: Science and Systems (RSS) conference. The foundation was started and is run by volunteers from the robotics community who believe that an open, high-quality, single-track conference is an important component of an active and growing scientific discipline.

]]>
Silicon Valley Bank reports on ‘The Future of Robotics’ https://robohub.org/silicon-valley-bank-reports-on-the-future-of-robotics/ Sat, 30 May 2020 23:10:48 +0000 https://robohub.org/silicon-valley-bank-reports-on-the-future-of-robotics/ You know robotics has ‘made it’ when Silicon Valley Bank (SVB) is reporting on it. Just five years ago, SVB barely had a hardware division, let alone a robotics and frontier tech team. This report itself shows the maturity of the field of robotics, and that’s also one of the key takeaways. There may be fewer deals in robotics, but the deals are getting bigger, as consolidation in new robotics markets starts to happen.

“Robotics is the latest advent in the multi-century trend toward the automation of production. The number of industrial robots, a key component of Industry 4.0, is accelerating. These machines are built by major multinationals and, increasingly, venture-backed startups.

As the segment continues to mature, data are coming in that allow founders, investors and policymakers to establish a framework for thinking about these companies. In this special sector report, we take a data-driven approach to emerging topics in the industry, including business models, performance metrics and capitalization trends.

Finally, we zoom out and consider how automation affects the labor market. In our view, the social implications of this industry will be massive and will require continuous examination by those driving this technology forward.”

Austin Badger, Director of Frontier Tech Practice at Silicon Valley Bank

Beyond the startup funding information, the report offers a valuable assessment of the economics of automation, from the shift from CapEx to OpEx and annual recurring revenue (ARR), to the shift from automation to productivity to wealth creation. While it's clear that automation increases wealth and productivity, there are still justifiable fears that automation will reduce labor opportunities. At the same time, this is likely to be primarily an issue for the developing countries currently serving as cheap labor for the world's on-the-move manufacturing facilities.

Silicon Valley Robotics works closely with Silicon Valley Bank to help startups grow. SVB participates in our In-Depth Networks and Forums. You can download the SVB report “The Future of Robotics” here: https://www.svb.com/trends-insights/reports/the-future-of-robotics

]]>
ICRA 2020 launches with plenary panel ‘COVID-19: How Roboticists Can Help’ https://robohub.org/icra-2020-launches-with-plenary-panel-covid-19-how-roboticists-can-help/ Sat, 30 May 2020 22:47:43 +0000 https://robohub.org/icra-2020-launches-with-plenary-panel-covid-19-how-roboticists-can-help/ ICRA is the largest robotics meeting in the world and is the flagship conference of the IEEE Robotics & Automation Society. It is thus our honor and pleasure to welcome you to this edition, although the current exceptional circumstances did not allow us to organize it in Paris as planned with the glimpse and splendor that our wonderful robotics community deserves. Now, for sure, Virtual ICRA 2020, the first online ICRA, will be one of the most memorable ICRA editions ever! [Message from the General & Program Chairs]

Live Plenary Panel – COVID-19: How Roboticists Can Help?

Our first Plenary is a hot topic panel on COVID-19 Pandemic & Robotics, moderated by Ken Goldberg and chaired by Wolfram Burgard. Catch it on Big Screen or on IEEE.TV.

Proudly featuring:

Robin Murphy
Brad Nelson
Richard Voyles
Kris Hauser
Antonio Bicchi
Andra Keay
Gangtie Zheng
Ayanna Howard

Kerstin Thurow
Helen Greiner
Howie Choset
Guang-Zhong Yang

Join us for the virtual conference taking place May 31 to August 31, with sessions available both live and on demand. Plenaries and keynotes will be featured every afternoon (Central European Time) from June 1 to June 15, with live interactive Q&A sessions with the speakers. Our goal is to bring cutting-edge ICRA sessions to our community around the globe and to provide opportunities to network with like-minded professionals from around the world. We hope that this offering reaches new members of our community and creates engaging discussions within the virtual conference platform.

Schedule

Virtual workshops 31 May to 30 June
Award ceremony 5 June
Plenary talks 1 – 17 June
Paper discussions 1 June  –  31 August
Conference recorded material 1 June  –  31 August
RAS Member Events 1 June  –  31 August
Plenaries
  • Lydia E. Kavraki, Planning in Robotics and Beyond – Tuesday June 2, 1PM UTC
  • Yann LeCun, Self-Supervised Learning & World Models – Wednesday June 3, 1PM UTC
  • Jean-Paul Laumond, Geometry of Robot Motion: from the Rolling Car to the Rolling Man – Thursday June 4, 1PM UTC
Keynotes
  • Allison Okamura, Haptics for Humans in a Physically Distanced World – Monday June 8, 1PM UTC
  • Kerstin Dautenhahn, Human-Centred Social Robotics: Autonomy, Trust and Interaction Challenges – Tuesday June 9, 1PM UTC
  • Pieter Abbeel, Can Deep Reinforcement Learning from pixels be made as efficient as from state? – Wednesday June 10, 1PM UTC
  • Jaeheung Park, Compliant Whole-body Control for Real-World Interactions – Thursday June 11, 1PM UTC
  • Cordelia Schmid, Automatic Video Understanding – Friday June 12, 1PM UTC
  • Cyrill Stachniss, Robots in the Fields: Directions Towards Sustainable Crop Production – Monday June 15, 1PM UTC
  • Toby Walsh, How long before Killer Robots? – Tuesday June 16, 1PM UTC
  • Hajime Asama, Robot Technology for Super Resilience – Remote Technology for Response to Disasters, Accidents, and Pandemic – Wednesday June 17, 1PM UTC

Special RAS Events

There are also several virtual gatherings for IEEE Robotics and Automation Society (RAS) society members and Students. Scroll below for more information.

RAS Meet the Leaders (formerly Lunch with Leaders)

RAS Meet the Leaders is the virtual equivalent of the popular RAS Lunch with Leaders event traditionally held at IEEE RAS’s flagship conferences: ICRA, CASE, and IROS.

Meet the Leaders is planned for multiple dates and time zones to accommodate the international robotics community. Each Leader will begin with an informal 5-minute presentation about their career, followed by a question and answer session.

Participants (students and young professionals) may sign up for ONE session to participate in a relaxed chat with academic and industry leaders from around the world.

The following Leaders are confirmed for the dates and times listed below (check back often for additional sessions):

  • Tuesday, June 2nd @ 12:00 PDT / 19:00 GMT
    Aleksandra Faust, 2020 RAS Early Industry Career Award in Robotics and Automation
  • Wednesday, June 3rd @ 10:00 am AEST / 00:00 GMT
    Peter Corke, 2020 RAS George Saridis Leadership Award, (and colleagues)
  • Thursday, June 4th @ 8:00 pm JST / 11:00 GMT
    Toshio Fukuda, IEEE President
  • Thursday, June 4th @ 13:00 EDT / 17:00 GMT
    Jaydev Desai, RAS AdCom Class of 2022
  • Monday, June 8th @ 12:30 JST / 03:30 GMT
    Zhidong Wang, RAS VP Electronic Products and Services Board, Yasushi Nakauchi RAS VP Financial Activities Board, Yasuhisa Hirata, RAS AdCom Class of 2022
  • Tuesday, June 9th @ 1:00 am KST / 16:00 GMT (Monday, June 8)
    Frank Park, RAS President Elect
  • Tuesday, June 9th @ 12:00 CDT / 17:00 GMT
    Lydia Kavraki, 2020 RAS Pioneer Award Winner
  • Thursday, June 11th @ 10:00 am PDT / 17:00 GMT
    Allison Okamura, Editor-in-Chief of RA-L, and Marcia O’Malley, IROS 2020 Program Chair
  • Thursday, June 11th @ 12:00 pm PDT / 19:00 GMT
    Dieter Fox, 2020 RAS Pioneer Award Winner
  • Friday, June 12th @ 3:00 pm CEST / 13:00 GMT
    Torsten Kroeger, RAS Vice President of Conference Activities
  • Registration Form (Required): https://app.smartsheet.com/b/form/3834d7362695475f915f52b1653439c9
]]>
RAC ‘Emerging Trends in Retail Robotics’ report released https://robohub.org/rac-emerging-trends-in-retail-robotics-report-released/ Sat, 30 May 2020 21:06:46 +0000 https://robohub.org/rac-emerging-trends-in-retail-robotics-report-released/ Robots are increasingly being deployed in retail environments. The reasons for this include: to relieve staff from the performance of repetitive and mundane tasks; to reallocate staff to more value-added, customer-facing activities; to realize operational improvements; and, to utilize real-time in-store generated data. Due to the impact of the 2020 Coronavirus outbreak, we can now add a new reason to use robots in retail: to assist with customer and employee safety.

In this Research Article, the Retail Analytics Council at NWU presents information on the benefits associated with deploying robots in stores. Estimates of the size of the global retail robot market are advanced. The impact of the Coronavirus outbreak on demand for robots in the grocery industry is discussed as well. This is followed by a review of U.S. retail robot deployments and a discussion of some emerging applications.

In summary, we find that the trend toward deploying robots in retail environments is accelerating. The reasons for this include their functional utility, advances in AI, and the ability to address both labor challenges and customer and employee safety concerns. The introduction of new uses of real-time, in-store generated data is another advantage. Further, the movement toward multimodal robots that are efficient at performing various functions adds to the value equation. We also find that changing consumer behavior to increase online purchases, especially in grocery, is a major impetus fueling this movement. Finally, establishing industry standards, which is ongoing, will fuel adoption.

Previous impediments to adoption, which are not detailed here, are also at play. These, for the most part, include issues of cost and training. The costs of robots will decrease, and the ROI will greatly increase, as complex computing moves off the payload via 5G and sensor costs continue to decrease. Increased vendor competition will also be a factor. The cost and complexity associated with environmental training are also being addressed via the introduction of synthetic data.

As the industry is still in its infancy, there are few reliable studies regarding market size. Estimates range from $4.8 billion to $19 billion in the 2015 to 2018 time frame, to as much as $52 billion by 2025. In April 2018, Bekryl Market Analysts published its Global Retail Robots Market Size Analysis, 2018-2028. Bekryl estimates the global retail robot market at $19 billion in 2018 and projects that the market will grow at a CAGR of 12.7 percent over the next ten years.

Now consider a different perspective. Verified Market Research valued the global retail robotics market at $4.78 billion in 2018, but expects a much more rapid growth rate of 31.89 percent from 2019 to 2026, reaching $41.67 billion by 2026. In 2016, yet another point of view was advanced by consulting firm Roland Berger, which stated “[t]he segment of robots designed for retail stores is emerging in a global robotics market that is already significant ($19 billion in 2015) and growing steadily ($52 billion in 2025).”
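The figures above can be sanity-checked with a quick compound-growth calculation. The sketch below (the helper functions are illustrative, not from any of the cited reports) projects the Bekryl estimate forward at its quoted CAGR, and derives the growth rate implied by the Verified Market Research start and end values:

```python
def implied_cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value, and a span in years."""
    return (end / start) ** (1 / years) - 1

def project(start, cagr, years):
    """Future value after compounding `start` at `cagr` for `years` years."""
    return start * (1 + cagr) ** years

# Bekryl: $19B in 2018 compounding at 12.7% for ten years
print(round(project(19, 0.127, 10), 1))              # ≈ 62.8 ($B by 2028)

# Verified Market Research: $4.78B (2018) -> $41.67B (2026), i.e. 8 years
print(round(implied_cagr(4.78, 41.67, 8) * 100, 1))  # ≈ 31.1 (% per year)
```

The implied rate of roughly 31 percent per year is consistent with the 31.89 percent the firm quotes, which suggests the forecast window is measured from the 2018 valuation.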

As the current Coronavirus pandemic constrains consumers’ ability to shop in stores, there is ample evidence that a shift to online purchasing is occurring in select categories, particularly grocery. To realize operating efficiencies while meeting this increased demand, grocery retailers, which represent the largest segment currently invested in robotics technology, are expected to accelerate their rate of investment.

The pressing question is whether this current movement to online grocery purchases during the pandemic represents a more permanent shift in consumer behavior. Consumers seem to think so. For example, in an April 2020 survey, 43 percent of adults said they were somewhat or very likely to
continue ordering groceries online once the pandemic ends (see Chart 11). McKinsey & Company’s COVID-19 U.S. Digital Sentiment Survey found that fully “75 percent of people using digital channels for the first time indicate that they will continue to use them when things return to normal.”

In conclusion, we see the pace of retail robot adoption accelerating, especially in the grocery segment. Technology advancements surrounding deployments in stores, backroom/warehouses, and delivery applications will continue. Deployment costs and deployment times will fall, which will increase ROI, as will multi-functional payloads that perform a variety of tasks. Emerging innovations will add interesting new use cases. Increasing use of real-time in-store data, and its application and integration, will also create additional value. Finally, ongoing efforts to establish industry standards will aid industry adoption.

Silicon Valley Robotics is on the Robotics and AI Advisory Board of the Retail Analytics Council at Northwestern University, where you can download the full report “Emerging Trends in Retail Robotics”.

]]>
Open Problems for Robots in Surgery and Healthcare https://robohub.org/open-problems-for-robots-in-surgery-and-healthcare/ Fri, 15 May 2020 19:46:14 +0000 https://robohub.org/open-problems-for-robots-in-surgery-and-healthcare/ * Please register at:
https://robotsinsurgeryandhealthcare.eventbrite.com

The COVID-19 pandemic is increasing global demand for robots that can 
assist in surgery and healthcare. This symposium focuses on recent 
advances and open problems in robot-assisted tele-surgery and 
tele-medicine and needs for new research and development. The online 
format will encourage active dialogue among faculty, students, 
professionals, and entrepreneurs.

Featuring:
Gary Guthart, CEO, Intuitive Surgical
Robin Murphy, Texas A&M
Pablo Garcia Kilroy, VP Research, Verb Surgical
Allison Okamura, Professor, Stanford
David Noonan, Director of Research, Auris Surgical
Jaydev Desai, Director, Georgia Tech Center for Medical Robotics
Nicole Kernbaum, Principal Engineer, Seismic Powered Clothing
Monroe Kennedy III, Professor, Stanford

Presented by the University of California Center for Information 
Technology Research in the Interest of Society (CITRIS) and the Banatao 
Institute “People and Robots” Initiative, SRI International, and 
Silicon Valley Robotics.

Schedule:

   *  09:30-10:00: Conversation with Robin Murphy, Texas A&M and Director of 
Robotics for Infectious Diseases, and Andra Keay, Director of Silicon 
Valley Robotics
   *  10:00-10:30: Conversation with Gary Guthart, CEO Intuitive Surgical 
and Ken Goldberg, Director of CITRIS People and Robots Initiative
   *  10:30-11:00: Conversation with Pablo Garcia Kilroy, VP Research 
Verb Surgical and Tom Low, Director of Robotics at SRI International
   *  11:00-11:15: Coffee Break
   *  11:15-11:45: Conversation with David Noonan, Director of Research, 
Auris Surgical and Nicole Kernbaum
   *  11:45-12:45: Keynote by Jaydev Desai, Director, Georgia Tech Center 
for Medical Robotics
   *  12:45-13:15: Conversation with Allison Okamura, Stanford and Monroe 
Kennedy III, Stanford

]]>
From SLAM to Spatial AI https://robohub.org/from-slam-to-spatial-ai/ Tue, 12 May 2020 21:49:15 +0000 https://robohub.org/from-slam-to-spatial-ai/ You can watch this seminar here at 1PM EDT (10AM PDT) on May 15th.

 Andrew Davison (Imperial College London)


Abstract: To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic ‘Spatial AI’ perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors which form full systems, and I will cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.

Biography: Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. His long-term research focus is on SLAM (Simultaneous Localisation and Mapping) and its evolution towards general ‘Spatial AI’: computer vision algorithms which enable robots and other artificial devices to map, localise within and ultimately understand and interact with the 3D spaces around them. With his research group and collaborators he has consistently developed and demonstrated breakthrough systems, including MonoSLAM, KinectFusion, SLAM++ and CodeSLAM, and recent prizes include Best Paper at ECCV 2016 and Best Paper Honourable Mention at CVPR 2018. He has also had strong involvement in taking this technology into real applications, in particular through his work with Dyson on the design of the visual mapping system inside the Dyson 360 Eye robot vacuum cleaner and as co-founder of applied SLAM start-up SLAMcore. He was elected Fellow of the Royal Academy of Engineering in 2017.

Robotics Today Seminars

“Robotics Today – A series of technical talks” is a virtual robotics seminar series. The goal of the series is to bring the robotics community together during these challenging times. The seminars are scheduled on Fridays at 1PM EDT (10AM PDT) and are open to the public. The format of the seminar consists of a technical talk live captioned and streamed via Web and Twitter (@RoboticsSeminar), followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students that will moderate audience questions.

Stay up to date with upcoming seminars with the Robotics Today Google Calendar (or download the .ics file) and view past seminars on the Robotics Today Youtube Channel. And follow us on Twitter!

Upcoming Seminars

Seminars will be broadcast at 1PM EDT (10AM PDT) here.


22 May 2020: Leslie Kaelbling (MIT)


29 May 2020: Allison Okamura (Stanford)


12 June 2020: Anca Dragan (UC Berkeley)

Past Seminars

We’ll post links to the recorded seminars soon!


]]>
Audience Choice HRI 2020 Demo https://robohub.org/audience-choice-hri-2020-demo/ Sun, 10 May 2020 18:58:07 +0000 https://robohub.org/audience-choice-hri-2020-demo/ Welcome to the voting for the Audience Choice Demo from HRI 2020 (voting closed on May 14 at 11:59PM BST). Each of these demos showcases an aspect of Human-Robot Interaction research, and alongside the “Best Demo” award, we’re offering an “Audience Choice” award. You can see the video and abstract for each demo here. You can also register for the Online HRI 2020 Demo Discussion and Award Presentation on May 21 at 4:00 PM BST.


1. Demonstration of A Social Robot for Control of Remote Autonomous Systems José Lopes, David A. Robb, Xingkun Liu, Helen Hastie

Abstract: There are many challenges when it comes to deploying robots remotely including lack of situation awareness for the operator, which can lead to decreased trust and lack of adoption. For this demonstration, delegates interact with a social robot who acts as a facilitator and mediator between them and the remote robots running a mission in a realistic simulator. We will demonstrate how such a robot can use spoken interaction and social cues to facilitate teaming between itself, the operator and the remote robots.


2. Demonstrating MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders Michael Suguitan, Randy Gomez, Guy Hoffman

Abstract: We developed a method for modifying emotive robot movements with a reduced dependency on domain knowledge by using neural networks. We use hand-crafted movements for a Blossom robot and a classifying variational autoencoder to adjust affective movement features by using simple arithmetic in the network’s learned latent embedding space. We will demonstrate the workflow of using a graphical interface to modify the valence and arousal of movements. Participants will be able to use the interface themselves and watch Blossom perform the modified movements in real time.


3. An Application of Low-Cost Digital Manufacturing to HRI Lavindra de Silva, Gregory Hawkridge, German Terrazas, Marco Perez Hernandez, Alan Thorne, Duncan McFarlane, Yedige Tlegenov

Abstract: Digital Manufacturing (DM) broadly refers to applying digital information to enhance manufacturing processes, supply chains, products and services. In past work we proposed a low-cost DM architecture, supporting flexible integration of legacy robots. Here we discuss a demo of our architecture using an HRI scenario.


4. Comedy by Jon the Robot John Vilk, Naomi T. Fitter

Abstract: Social robots might be more effective if they could adapt in playful, comedy-inspired ways based on heard social cues from users. Jon the Robot, a robotic stand-up comedian from the Oregon State University CoRIS Institute, showcases how this type of ability can lead to more enjoyable interactions with robots. We believe conference attendees will be both entertained and informed by this novel demonstration of social robotics.


5. CardBot: Towards an affordable humanoid robot platform for Wizard of Oz Studies in HRI Sooraj Krishna, Catherine Pelachaud

Abstract: CardBot is a cardboard-based programmable humanoid robot platform designed for inexpensive and rapid prototyping of Wizard of Oz interactions in HRI, incorporating technologies such as Arduino, Android and Unity3d. The table demonstration showcases the design of the CardBot and its wizard controls, such as animating movements and coordinating speech and gaze, for orchestrating an interaction.


6. Towards Shoestring Solutions for UK Manufacturing SMEs Gregory Hawkridge, Benjamin Schönfuß, Duncan McFarlane, Lavindra de Silva, German Terrazas, Liz Salter, Alan Thorne

Abstract: In the Digital Manufacturing on a Shoestring project we focus on low-cost digital solution requirements for UK manufacturing SMEs. This paper shows that many of these fall in the HRI domain while presenting the use of low-cost and off-the-shelf technologies in two demonstrators based on voice assisted production.


7. PlantBot: A social robot prototype to help with behavioral activation in young people with minor depression Max Jan Meijer, Maaike Dokter, Christiaan Boersma, Ashwin Sadananda Bhat, Ernst Bohlmeijer, Jamy Li

Abstract: The PlantBot is a home device that shows iconographic or simple lights to depict actions that it requests a young person (its user) to do as part of Behavioral Activation therapy. In this initial prototype, a separate conversational speech agent (i.e., Amazon Alexa) is wizarded to act as a second system the user can interact with.


8. TapeBot: The Modular Robotic Kit for Creating the Environments Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, JongSuk Choi

Abstract: Various types of modular robotic kits, such as the Lego Mindstorms [1], the edutainment robot kit by ROBOTIS [2], and the interactive face components of FacePartBot [3], have been developed and suggested to increase children’s creativity and to teach robotic technologies. By adopting a modular design scheme, these robotic kits enable children to design various robotic characters with plenty of flexibility and creativity, such as humanoids, robotic animals, and robotic faces. However, because a robot is an artifact that perceives an environment and responds to it accordingly, it can also be characterized by the environment it encounters. Thus, in this study, we propose a modular robotic kit that is aimed at creating an interactive environment to which a robot produces various responses.

We chose intelligent tapes to build the environment for the following reasons: First, we presume that decreasing consumers’ expectations toward the functionalities of robotic products may increase their acceptance of the products, because this avoids a mismatch between the functions expected based on their appearances and the actual functions of the products [4]. We believe that tape, which is found in everyday life, is a perfect material to lower consumers’ expectations toward the product and will be helpful for consumer acceptance of it. Second, tape is a familiar and enjoyable material for children, and it can be used as a flexible module, which users can cut into whatever size they want and attach and detach with ease.

In this study, we developed a modular robotic kit for creating an interactive environment, called the TapeBot. The TapeBot is composed of the main character robot and the modular environments, which are the intelligent tapes. Although previous robotic kits focused on building a robot, the TapeBot allows its users to focus on the environment that the robot encounters. By reversing the frame of thinking, we expect that the TapeBot will promote children’s imagination and creativity by letting them develop creative environments to design the interactions of the main character robot.


9. A Gesture Control System for Drones used with Special Operations Forces Marius Montebaur, Mathias Wilhelm, Axel Hessler, Sahin Albayrak

Abstract: Special Operations Forces (SOF) are facing extreme risks when prosecuting crimes in uncharted environments like buildings. Autonomous drones could potentially save officers’ lives by assisting in those exploration tasks, but an intuitive and reliable way of communicating with autonomous systems is yet to be established. This paper proposes a set of gestures that are designed to be used by SOF during operation for interaction with autonomous systems.


10. CoWriting Kazakh: Learning a New Script with a Robot – Demonstration Bolat Tleubayev, Zhanel Zhexenova, Thibault Asselborn, Wafa Johal, Pierre Dillenbourg, Anara Sandygulova

Abstract: This interdisciplinary project aims to assess and manage the risks relating to the transition of the Kazakh language from Cyrillic to Latin in Kazakhstan, in order to address the challenges of a) teaching and motivating children to learn a new script and its associated handwriting, and b) training and providing support for all demographic groups, in particular the senior generation. We present a system demonstration that proposes to assist and motivate children to learn a new script with the help of a humanoid robot and a tablet with stylus.


11. Voice Puppetry: Towards Conversational HRI WoZ Experiments with Synthesised Voices Matthew P. Aylett, Yolanda Vazquez-Alvarez

Abstract: In order to research conversational factors in robot design the use of Wizard of Oz (WoZ) experiments, where an experimenter plays the part of the robot, are common. However, for conversational systems using a synthetic voice, it is extremely difficult for the experimenter to choose open domain content and enter it quickly enough to retain conversational flow. In this demonstration we show how voice puppetry can be used to control a neural TTS system in almost real time. The demo hopes to explore the limitations and possibilities of such a system for controlling a robot’s synthetic voice in conversational interaction.



12. Teleport – Variable Autonomy across Platforms Daniel Camilleri, Michael Szollosy, Tony Prescott

Abstract: Robotics is a very diverse field with robots of different sizes and sensory configurations created with the purpose of carrying out different tasks. Different robots and platforms each require their own software ecosystem and are coded with specific algorithms which are difficult to translate to other robots.

VOTING CLOSED ON THURSDAY MAY 14 AT 11:59 PM BST [British Summer Time]

]]>
Where are the robots when you need them! https://robohub.org/where-are-the-robots-when-you-need-them/ Mon, 27 Apr 2020 00:44:22 +0000 https://robohub.org/where-are-the-robots-when-you-need-them/ Looking at the Open Source COVID-19 Medical Supplies production tally of handcrafted masks and faceshields, we’re trying to answer that question in our weekly discussions about ‘COVID-19, robots and us’. We talked to Rachel ‘McCrafty’ Sadd, who has been building systems and automation for COVID mask making as the founder of the Project Mask Making and #distillmyheart projects in the SF Bay Area, an artist, and Executive Director of the Ace Monster Toys makerspace/studio. Rachel has been organizing volunteers and automating workflows to get 1700 cloth masks hand sewn and distributed to people at risk before the end of April. “Where’s my f*king robot!” was the theme of her short presentation.

If you think that volunteer efforts aren’t able to make a dent in the problem, here’s the most recent (4/20/20) production tally for the group Open Source COVID-19 Medical Supplies, who speak regularly on this web series: volunteers across 45 countries have so far produced 2,315,559 pieces of PPE. And that’s not counting the #distillmyheart masks. Here’s Rachel’s recent interview on KTVU. Those masks aren’t going to make themselves, people!

We also heard from Robin Murphy, Expert in Rescue Robotics & Raytheon Professor at Texas A&M University, who updated her slides on the way in which robots are being used in COVID-19 response. You can find more information on Robotics for Infectious Diseases, an organization formed in response to the Ebola outbreak and chaired by Dr Murphy. There is also a series of interviews answering any questions a roboticist might have about deploying robots with public health, public safety and emergency managers.

Next we heard from Missy Cummings, Expert in Robotics Safety & Professor at Duke University. “I’ve been doing robotics testing and certification for almost 10 years now. I started out in drones, and then kind of did a segue over into driverless cars, and I also work on medical systems. So I work in this field of safety-critical systems, where the operation of the robot, in terms of the drone or the car or the medical robot, can actually do damage to people if not designed correctly. Here’s a link to a paper that I’ve written for AI Magazine that’s really looking at the maturity of driverless cars.”

“I spent a ridiculous amount of time on Capitol Hill trying to be a middle ground between, yes, these are good technologies, we want to do research and investment and keep building capacity. But no, we’re not ready to have widespread deployment yet. And I don’t care what Elon Musk says, you’re not getting full self-driving anytime soon.”

“Any reasoning system has to go through four levels of reasoning. You start at the basics, what we call skill-based reasoning, then you go up to rule, knowledge and expert-based reasoning. And so where do we see that in cars? When you learned to drive you had to learn skill-based reasoning, which was learning, for example, how to track lane lines on the road. Once you did that, maybe 20 minutes to learn, then you never actually have a problem with it again.”

“So once you have the cognitive capacity, once you’ve learned the skills, then you have enough spare mental capacity to think about rule-based reasoning. And that’s when you start to understand: okay, I see this octagon in front of me, it is a stop sign, it’s red, I know what it means, and there’s a set of procedures that go along with stopping that I’m going to follow when I see it. Then once you have learned the rules of the environment, you have the spare capacity to start thinking about knowledge-based reasoning. The big jump between rule- and knowledge-based reasoning is the ability to judge under uncertainty. So this is where you start to see the uncertainty arrow growing. When you go up to knowledge-based reasoning, you are starting to have to make guesses about the world with imperfect information.”

“So I like to show this picture of a stop sign partially obscured by vegetation. There are many, many driverless car computer vision systems right now that cannot figure this out: if they see a partially obscured stop sign, they just cannot see it, because they don’t see the way that we see. They don’t see the complete picture, and so they don’t judge that there’s a stop sign there. And you might have seen the recent case of the Tesla being tricked by a 35 mile per hour sign modified with a little bit of tape to make it read 85 miles per hour. It’s a really good illustration of just how brittle machine learning and deep learning are when it comes to perceptual reasoning. And then we get to the highest level of reasoning, where you really have to make judgments under maximum uncertainty.”

“I love this illustration of a stop sign with four different arrows. You cannot do this: you cannot turn left, you cannot go right, you cannot go straight and you cannot go back. So I’d be curious to see what any driverless car would do in this situation. Because what do you do? You have to break one of these rules, you have to make a judgment, you have to figure out the best possible way to get yourself out of the situation. And it means that you’re going to have to break rules inside the system. The expert-based and knowledge-based reasoning there is what we call top-down reasoning: you take the experience and judgment you’ve gained as you’ve gotten older in life and had more experiences, and you bring that to bear to make better guesses about what you should do in the future.”

“Bottom-up reasoning is essentially what is happening in machine learning. You’re taking all the bits and bytes of information from the world, and then processing that to make some kind of action. So right now computers are really good at skill-based reasoning and some rule-based reasoning, while humans are really the best at knowledge and expert-based reasoning. This is something we call functional allocation. But the problem is there’s a big break between rule and knowledge. Driverless cars cannot do this right now. Until we can make that jump into knowledge and expert-based reasoning, what are we going to have to do?” [Missy Cummings, Robot Safety Expert]

Michael Sayre, CEO of Cognicept, said “I’m working with a company called Cognicept as CEO. We’re essentially solving a lot of the problems that Missy highlighted, which is that, you know, when we look at autonomy and moving robots into the real world, there are a lot of complexities about the real world that cause what we call edge case failure in these systems. And so what we’ve built is essentially a system that allows a confused robot to dial out for a human operator.”

“Human in the loop is not a new idea; this is something that self-driving cars have used. What we’ve built is essentially a system and a service that allows this confused robot to dial out for help on a real-time basis. We essentially listen for intervention requests from robots. That can be an error code or, you know, some kind of failure of the system, timeouts, whatever it is really. We can listen for that event, and then we cause a ticket to be registered in our system, which our operators will then see. That connects them to the robot; they can kind of get a sense of what’s happening in the robot’s environment, they can get sensor information populated in a 3D canvas, we can see videos and so forth, that allow the operator to make judgments on the robot’s behalf.”

“Self-driving vehicles are probably not the best example for us. But maybe you would be able to use our system in something like a last-mile delivery vehicle, which will face a lot of the same problems. Maybe the robot is uncertain about whether it can cross the road. We can have a look at the video feed from that robot, understand what the traffic signals are saying, or what the environment looks like, and then give the robot a command to essentially help it get past the scenario that caused that edge case failure. So we see this as sort of a way to help get robots into more useful service.”

Savioke robots deployed in hospitals

“You know, right now, even a 1% failure rate for a lot of these applications can be a deal breaker. Especially for self-driving cars, as everybody mentioned, the cost of failure is really high. But even in other, less critical cases, like in-building delivery, if something is spinning around in a circle or not performing its job, it causes people to lose confidence in the system and stop using it. And during the time that it’s confused, it’s not performing its function. So we essentially built this system as a way to bring robots into a broader range of applications and improve the uptime of the system, so that it doesn’t get stuck during its operation.”

“Similarly, we have robots that get lost in spaces that are widely variable. So, you know, a warehouse that has boxes or pallets that move in and out of the space very frequently. That’s going to confuse the robot, because its internal map is static in most cases, and when you have a very large change, the robot is going to be confused about its location and then not be able to proceed with its normal operation. That’s something we can help with: we essentially will be able to look at the robot’s environment, understand where it is in its space, and then update its location. Again, you know, we look at different types of obstacles. A plastic bag is not really an obstacle, you can run through that. But on a LIDAR, it shows up the same way as a pile of bricks.”

Anybotics concept delivery with Continental

“So by having a human-in-the-loop element, we are able to sort of handle these edge case failures and get robots to perform functions that they wouldn’t otherwise be able to perform, and be useful in applications that were maybe too challenging for full autonomy. I think a lot of it has to do with how dangerous the robot in question is. So, you know, for a self-driving vehicle, very dangerous, right? We’ve got a half ton of steel moving at relatively rapid speeds. This is a dangerous system.”

“On the other hand, with in-building delivery robots, we’re doing some work in quarantine zones, making deliveries in buildings that allow social distancing to be maintained. We can put needed supplies inside of this delivery robot and send it in a building to the delivery room. So worst case scenario, we might bump into somebody. It’s just inconvenient and might sort of ruin either the economics or the usefulness of the robot. That would be a good case for these less critical systems: things like in-building delivery, material handling and logistics spaces. Maybe a picking arm, like a robot arm pulling things out of a box and putting them into a shipping container, or into another robot for in-building delivery.”

“While we try to get as fast as we can, you’re still talking about 30 seconds, maybe, before you can really respond to the problem in a meaningful way. So, you know, 30 seconds is an eternity for a self-driving vehicle, whereas for an in-building delivery robot, it’s not a big deal. So I think the answer is pretty application dependent and also system dependent: how dangerous is the system inherently?” [Michael Sayre, CEO of Cognicept]
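The intervention flow Sayre describes (a robot raises an error event, a ticket is registered, and a human operator inspects the situation and sends back a command) can be sketched in a few lines. This is an illustrative mock-up of the general pattern, not Cognicept’s actual API; every class, field and error code here is invented:

```python
import itertools
import queue
from dataclasses import dataclass
from typing import Optional


@dataclass
class Ticket:
    """One intervention request from a confused robot (names are illustrative)."""
    ticket_id: int
    robot_id: str
    error_code: str
    resolution: Optional[str] = None


class InterventionService:
    """Listens for robot error events and queues them as operator tickets."""

    def __init__(self) -> None:
        self._pending: "queue.Queue[Ticket]" = queue.Queue()
        self._ids = itertools.count(1)

    def report_error(self, robot_id: str, error_code: str) -> Ticket:
        # The robot "dials out": an error code or timeout becomes a ticket.
        ticket = Ticket(next(self._ids), robot_id, error_code)
        self._pending.put(ticket)
        return ticket

    def next_ticket(self) -> Ticket:
        # An operator picks up the oldest unresolved request.
        return self._pending.get_nowait()

    def resolve(self, ticket: Ticket, command: str) -> Ticket:
        # The operator's judgment goes back to the robot as a command.
        ticket.resolution = command
        return ticket


# A lost delivery robot asks for help; the operator relocalizes it.
service = InterventionService()
service.report_error("delivery-bot-7", "LOCALIZATION_LOST")
ticket = service.next_ticket()
service.resolve(ticket, "set_pose: loading_dock")
```

In a real deployment the queue would sit behind a network service and the operator would see camera feeds and sensor data before answering, but the ticket lifecycle above is the core of the human-in-the-loop pattern.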

Rex St John, ARM IoT Ecosystem Evangelist, presented an unusual COVID-19 response topic. “This isn’t quite a robotics topic. But a few weeks ago, I began working on a project called Rosetta@home. So if you’re not familiar, there are a lot of researchers studying protein folding and other aspects of biological research, and they don’t have the funding to pay for supercomputer time. So what they do is distribute the research workloads to volunteer networks all around the world through this program called BOINC. There are a lot of these programs: there’s SETI@home, and Rosetta@home, and Folding@home. And there are all these people that volunteer their extra compute cycles by downloading this client. Then researchers upload jobs to the cloud, and those jobs are distributed to these home computers.”

“So because I work at Arm, we realized that Folding@home and Rosetta@home are two projects which are used specifically to study protein folding. They did not have Arm 64-bit clients available, which means you can’t run them on a Raspberry Pi 4 or some of the newer Arm hardware. So there were a lot of people in the community wanting to help out with Folding@home and Rosetta@home, which are now being used extensively by researchers specifically to study COVID-19. So we put together this community project, and it came together very, very quickly, because once everybody learned about this opportunity, they jumped on board. These guys from Neocortix, a startup out of San Jose, jumped on this, and their CTO ported all the key libraries from Rosetta@home to Arm 64-bit, and within a week or two we were up to 793 Arm 64-bit devices supporting researchers studying COVID-19. So anybody that wants to help out: if you’ve got a Raspberry Pi 4 or another Arm 64-bit device, you can install Rosetta@home on it and begin crunching on proteins to help researchers fight COVID-19.” https://www.neocortix.com/coronavirus

“You can see this is the spike protein right there of COVID-19. COVID-19 uses the spike protein to sort of latch on to the receptors of human cells, and that’s how it kind of invades your body. So they’re doing a lot of work to understand the structure and behavior of that spike protein on Rosetta@home and Folding@home.” [Rex St John, ARM IoT Evangelist]

Scientific illustration of the Coronavirus spike glycoprotein

Ken Goldberg, Director of the CITRIS People and Robots Initiative, said “I do have one thought that I’d like to share that occurred to me this week, which is that I wonder if we’re shifting from what used to be called the ‘High Tech, High Touch’ concept from John Naisbitt. He wrote ‘Megatrends’, about how, as we moved toward more high tech, we’d just as much crave that touch. And I wonder if we’re moving toward a low-touch future where we actually will see new value in things that don’t involve touch.” “It’s been so interesting for me, you know, to be in the house. I’ve gotten a whole new appreciation for things like washing machines and even vacuum cleaners. They’re incredible, these mechanisms that help us do things; rather than us reaching down and touching everything, they basically do it for us.”

“I’d been thinking about this even before the pandemic. There are a lot of things out there, like robot vending machines, that I was a little skeptical about. And I thought, well, I don’t really see what’s the big advantage; given a choice I’d rather have a human making a hamburger or coffee. But now I’m starting to really think that equation has changed, and I wonder if that’s going to change permanently. In other words, are we actually going to see a real trend toward things like these robot coffee-making baristas and robot burgers, like Creator, the company in San Francisco, or Miso Robotics, which is developing fast food making robots? I think it’s time to really reevaluate those trends, because I think there is going to be an actual visceral appeal for this kind of low-touch future.” [Ken Goldberg, CITRIS People and Robots]

There’ll be more next week on Tuesday, April 28, so sign up for ‘COVID-19, robots and us’ with guest speakers focusing on regulations, risks and opportunities:

  • Chelsey Colbert, Policy Counsel at the Future of Privacy Forum
  • Michael Froomkin, Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law
  • Ryan Calo, Lane Powell & D. Wayne Gittinger Endowed Professorship of Law
  • Sue Keay, Research Director at CSIRO Data61 and the Australian national COVID response
  • Robin Murphy, Rescue Robotics Expert and Raytheon Professor at Texas A&M University
  • Thomas Low, Director of Robotics SRI International
  • Ken Goldberg, Director of CITRIS People and Robots Initiative
  • Andra Keay, Director of Silicon Valley Robotics and founder of Women in Robotics
]]>
Can robots make food service safer for workers? https://robohub.org/can-robots-make-food-service-safer-for-workers/ Mon, 20 Apr 2020 00:25:27 +0000 https://robohub.org/can-robots-make-food-service-safer-for-workers/ Health care workers are not the only unwilling frontline workers in essential services at increased risk of COVID-19. According to the Washington Post on April 12, “At least 41 grocery workers have died of the coronavirus and thousands more have tested positive in recent weeks”. At the same time, grocery stores are seeing a surge in demand and are currently hiring. The food industry is also seeing increasing adoption of robots, both in the back-end supply chain and in the food retail and food service sectors.

“Grocery workers are risking their safety, often for poverty-level wages, so the rest of us can shelter in place,” said John Logan, director of labor and employment studies at San Francisco State University. “The only way the rest of us are able to stay home is because they’re willing to go to work.” [Washington Post April 12 2020]

In our April 7th edition of “COVID-19, robots and us”, we heard from Linda Pouliot, CEO and founder of Dishcraft Robotics and Vipin Jain, CEO and founder of Blendid. Both provided us with insights into robotics in the food industry and the difficulties and joys of adapting robots to COVID-19 work.

Dishcraft Robotics is Linda Pouliot’s fourth startup, and her second robotics startup. She previously cofounded Neato Robotics, the number two player in automated vacuuming. Dishcraft Robotics is a clean-dish delivery service for cafeterias and large kitchens. Each day they deliver customized clean dishes to the customer, collect the dirty dishes, and return them to a central hub where custom robots wash them.

“We initially thought we would develop the robot and install it directly into dish rooms. And then we found that it was a really high hurdle for an operation to have a big capital expense, and that we could solve their problem in a really frictionless way by simply doing the delivery, because there’s no upfront cost for them. There is no risk for them to try it. And in fact, we give a free couple-week trial, and we have found that once you start with Dishcraft, you immediately convert, because the service solves all their labor problems. It is also more sustainable. And in today’s environment, it’s very, very sanitary.”

“We give them carts full of clean dishes; they are our own proprietary dishes. The way our robots work is using magnetics, so it requires using our specialized wares. We just bring in flexible, clean dishes and we give them a collection system where they can drop off. It’s very organized, it’s ADA compliant, and it’s pretty space saving compared to where people normally store their dishes. So it’s worked really well for us. And that means for a customer there are no construction costs and no months of planning. You know, it’s just been pretty delightful for both us and them.”

“Currently we service essential businesses and we have started to offer our services to hospitals to help in this time of need. Dish rooms are very small and cramped spaces, and we realized that workers in them were unlikely to be able to socially distance. Also, today the hospitals are overloaded. Some have twice the volume of what they normally expect a dish room to have to process. And so this was a great opportunity for them to use robots, use our service, and keep their existing staff very safe.” [Linda Pouliot, April 7 2020]

Vipin Jain is the CEO and founder of Blendid, a food robotics kiosk startup. After about 5 years of R&D, Blendid opened their first kiosks just over a year ago, in March 2019, at the University of San Francisco, followed by Sonoma State University. While they were initially doubtful that people were ready to have their food or drink made by robots, they’ve seen great interest.

“We were told that automating food was too hard and people won’t like it. People don’t want robots to make the food, people said. But people have different tastes, and standardized food doesn’t cut it anymore. So you need a better solution, so that I can get what I like to eat any time of the day, wherever I am. And that can only be done using a lot of data, AI and robotics.”

“We were just getting ready to start deploying into retail, corporate and hospital environments when COVID hit. So I wish we were a little bit further ahead in terms of deployment, because as you can imagine, food is an essential service. We all want food, we all want access to food, but in this environment we are fearful about the safety of the people working on food preparation, because we don’t know how the food was handled.” [Vipin Jain, April 7 2020]

Rich Stump is one of the cofounders of Studio Fathom, which provides prototyping, product development and production services. Fathom specializes in 3D printing and additive manufacturing, dealing with multiple materials and leveraging traditional manufacturing alongside, which allows them to solve some interesting engineering and product development challenges. Fathom has been very active in the COVID-19 response.

“There are three main initiatives that we have going right now with COVID-19. Some of them are potentially revenue generating, but mostly we’re just trying to help with our expertise and capabilities. The first is obviously the PPE challenges we have. We have a facility in Asia, in southern China, and so we have a vast amount of supply chain resources. So we called in our colleagues over there and we found a number of factories that had FDA-approved PPE supplies that were overrun from the flattening of the curve in Asia.”

“The news has been talking about investment into 3M and Honeywell ramping up production in the US, which is absolutely great. The problem is, it’s just not going to get done in time, right? Anytime you buy 150 thermoforming machines and try to ramp up production to make N95 masks, it’s going to take weeks or potentially months to get any volume. So what we were trying to do is match hospitals and senior homes and folks that needed PPE with the supply chain resources that we had in Asia.”

“We’ve had a bunch of roadblocks, to the point where we’ve tried to reach out to state government officials to try to remove some of the roadblocks with customs and tariffs and freight constraints. And so that’s been an interesting challenge. But we’ve been able to connect, I think, over 1.5 million supplies and counting into the US from Asia. It’s not our everyday business, but we have resources, so we tried to help there.”

“The second initiative was around the test kits. Obviously there’s been a shortage of test kits. And it turns out, interestingly enough, one of the big shortage items is the nasopharyngeal swab that goes back into your nasal cavity. The main manufacturer was in Italy, and obviously with everything that happened in Italy, that shut down a lot of the supply. There’s been a community of about 50 folks who have come out of the 3D printing community to see if we can 3D print these swabs. And we’ve made a lot of progress.”

“There are now four manufacturers approved to 3D print nasopharyngeal swabs. Next week, we’ll get into production of these in at least the hundreds of thousands, if not millions, in order to support the short-term need. Long term, it probably doesn’t make sense to 3D print them given the cost base. But using 3D printing technology, we can design swabs that could potentially perform better than the traditional swab. So that’s been a fun and challenging project at the same time.”

“And the third initiative is around the ventilators. You have folks like Lawrence Livermore developing a ventilator with spare parts that are available today. And you have JPL, the NASA team, trying to develop their own ventilator. Then you have the large automotive manufacturers who have partnered up with various existing ventilator manufacturers and are trying to ramp up production. By today, I think more than 270 people have come to us with either a ventilator design, a mask design, or some type of shield design, just over the last 10 days. There’s just a tremendous amount of activity. And it’s great because everyone wants to help, but at the same time it presents a lot of challenges, because there’s so much effort going in so many different directions that you worry that, you know, all these efforts won’t end up being impactful by the time we need the supplies.” [Rich Stump, April 7 2020]

Mark Martin is the Director of Industry and Workforce Development for California Community Colleges and founder of the Bay Area Manufacturers Discussion Forum. He’s very connected with manufacturers in the region and has a lot of insights into what’s going on in the manufacturing sector.

“Many of the manufacturing companies are being hurt and having to furlough people. There are about 8000 in the Bay Area and we’re trying to help them repurpose. For some it’s relatively easy: maybe you have an injection mold facility, and you can injection mold these face shields, or some of the parts of the face shield. You have to make the molds, but it’s still within your core business. Others are contract manufacturers that could theoretically assemble ventilators because they’re already doing medical products. Ventilators are a complicated product, but not necessarily more complicated than what some of these companies are already doing.”

“But for others, you might have to get specialized machines, which can take a long time. And then you need to have the expertise around. If you haven’t done medical products before, do you know how to handle the equipment and ensure QC? And the FDA approves factories, not just the designs. I created a list of Bay Area manufacturers and I got almost 200 responses with things they think they can manufacture, and I’ve supplied that list to the state government.”

“And community colleges have makerspaces or fab labs. So where I’m located, at Laney College in Oakland, we started working on face shields a week and a half ago, because that was relatively simple for us to be able to do. And we printed 500 face shields for Highland Hospital. It took us a day to do it with six people. And I asked the hospital what their daily use was for face shields and they said 600 a day. So I was like, all right, that was a day’s worth for this one hospital.”

“And then we actually took a design that was done by Stratasys for 3D printing and modified it a little bit. And in like two days, we got a molder set up for injection molding, although it’s going to take a couple of weeks to finish. And honestly, I have no idea if we’re going to need it in a couple of weeks, because I have no idea what the demand is.”

“Apple’s bringing in over a million a week, and others are doing this hundreds or even thousands at a time in basements and little shops. But we don’t know if the demand is 5 million a week or 50 million.”

“So that’s the thing the manufacturers are all trying to figure out. I kept wondering why the government didn’t take a little bit of control. Even just financial incentives to say: we will backstop you if you supply PPE, so that if the demand falls out, you don’t lose money.” [Mark Martin, April 7 2020]

Ken Goldberg is the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley, Director of the CITRIS People and Robots Initiative, and a regular guest on ‘COVID-19, robots and us’. Today, in keeping with the theme of food robotics, Ken explained the reasoning behind one of his latest projects, AlphaGarden.

“This is a FarmBot that you can buy from a local company in San Luis Obispo for about $3,000. Our garden includes 14 species of edible herbs and plants, and they’re all growing in close proximity. The challenge is that they’re competing for increasingly scarce light and water. So what happens is that this starts to get quite out of control. Right now the garden has been acting very autonomously, because we haven’t been able to get into the greenhouse, and so it’s starting to decay, and certain invasive species like mint have essentially taken over.”

“We’re trying to optimize diversity, that is, to maximize the number of different plant types that are growing. But the key idea here is that we’re using AI to simulate the garden, because a grow cycle takes three months. So we have a simulator that can run at 10,000 times the speed of nature. We have 64 gardens but thousands of parameters to fine-tune the system. Then we evaluate each of those gardens to figure out how to tune the parameters for the watering and pruning devices.”

“This is an ongoing project but the end goal is to be able to learn a policy for successful gardening because polyculture is much more labor intensive than monoculture. The reason that I call it an art project is because it’s extremely difficult for AI and robotics. It’s a big challenge. I’m not at all convinced we’re actually going to succeed. We’re really putting AI to the test. But just this week we did a deep learning method based on the data that we collected from our simulator. And in time we hope to learn how to be able to automate some of these very difficult tasks like organic gardening.” [Ken Goldberg April 7 2020]
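The loop Goldberg describes (simulate grow cycles far faster than real time, score each simulated garden for diversity, then adjust the watering and pruning parameters) can be sketched as a simple random search. Everything below is a hypothetical stand-in: the toy simulator, its species “vigor” and water “need” values, and the entropy-based diversity score are illustrative assumptions, not the project’s actual code.

```python
import math
import random

N_SPECIES = 14  # the garden grows 14 edible species

def simulate_garden(water, prune, seed=0):
    """Toy stand-in for the garden simulator: return each species'
    share of final coverage after one simulated grow cycle."""
    rng = random.Random(seed)
    vigor = [1.0 + rng.random() for _ in range(N_SPECIES)]
    vigor[0] *= 3.0 * (1.0 - prune)  # species 0 ('mint') dominates when unpruned
    need = [0.2 + 0.6 * i / N_SPECIES for i in range(N_SPECIES)]
    growth = [v * min(water / n, 1.0) for v, n in zip(vigor, need)]
    total = sum(growth)
    return [g / total for g in growth]

def diversity(coverage):
    """Shannon entropy of the coverage distribution: higher = more diverse."""
    return -sum(p * math.log(p) for p in coverage if p > 0)

def tune(n_trials=200, seed=42):
    """Random search over watering/pruning settings, scoring each
    candidate across many fast simulated grow cycles."""
    rng = random.Random(seed)
    best_params, best_score = None, -1.0
    for _ in range(n_trials):
        water = rng.uniform(0.1, 1.0)
        prune = rng.uniform(0.0, 1.0)
        score = sum(diversity(simulate_garden(water, prune, s))
                    for s in range(16)) / 16
        if score > best_score:
            best_params, best_score = (water, prune), score
    return best_params, best_score

params, score = tune()
```

Here diversity is measured as the entropy of species coverage, so a simulated garden where the mint crowds everything out scores low. The real project evaluates far richer simulated state with deep learning, but the tune-by-fast-simulation structure is the same idea.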

The moderator of the weekly ‘COVID-19, robots and us’ discussions, Andra Keay, is the Managing Director of Silicon Valley Robotics, supporting innovation and commercialization of robotics technologies. Andra was also the Industry Co-Chair of the Human-Robot Interaction Conference 2020, which is taking place online. In the April 3 Industry Talks Session, Chris Roberts from Cambridge Consultants described the process of working out what sort of robotics or automation made sense for food production.

“Chris described a project with the food service industry to look at using robotics and AI. And of course what he wanted to build was a robot dipping chips and flipping burgers and all the rest, but on evaluation, that was not the best direction to go. In this instance, AI optimization of locations was the best option. And this is something I’ve heard from a number of other people who’ve been doing robots for the actual preparation of food itself. For example, Creator, who make those fantastic hamburgers, didn’t build a robot arm imitating the way humans cook. Instead, they completely re-architected the entire process of constructing a burger so that it could be done mechanically.” [Andra Keay, April 7 2020]

There’s much more to hear in our weekly discussion on ‘COVID-19, robots and us’ from April 7 2020 with guest expert speakers:

  • Linda Pouliot, CEO of Dishcraft Robotics
  • Vipin Jain, CEO of Blendid
  • Rich Stump, Principal at Studio Fathom
  • Mark Martin, Industry & Workforce Development, California Community Colleges and Bay Area Manufacturing Discussion Forum
  • Ken Goldberg, Director of CITRIS People and Robots Initiative and William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley

Moderated by Andra Keay, Silicon Valley Robotics.

]]>
HRI 2020 Keynote: Stephanie Dinkins https://robohub.org/hri-2020-keynote-stephanie-dinkins/ Sat, 18 Apr 2020 18:45:48 +0000 https://robohub.org/hri-2020-keynote-stephanie-dinkins/ “Community, craft, and the vernacular in artificially intelligent systems” takes the position that everyone participating in society is an expert in our experiences within the community infrastructures which inform the makeup of robotic entities.

Though we may not be familiar with the jargon used in specialized professional contexts, we share the vernacular of who we are as people and communities, and the intimate sense that we are being learned. We understand that our data and collaboration are valuable, and that our ability to successfully cooperate with the robotic systems proliferating around us is well served by the creation of qualitatively informed systems that understand, and perhaps even share, the aims and values of the humans they work with.

Using her art practice, which interrogates a humanoid robot and seeks to create culturally specific voice interactive entities as a case in point, Dinkins examines how interactions between humans and robots are reshaping human-robot and human-human relationships and interactions. She ponders these ideas through the lens of race, gender, and aging. She argues communities on the margins of tech production, code, and the institutions creating the future must work to upend, circumvent, or reinvent the algorithmic systems increasingly controlling the world, including robotics, that maintain us.

Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Page 221 | https://doi.org/10.1145/3319502.3374844

]]>
Robots providing social support while we’re social distancing https://robohub.org/robots-providing-social-support-while-were-social-distancing/ Fri, 17 Apr 2020 20:51:48 +0000 https://robohub.org/robots-providing-social-support-while-were-social-distancing/ Wired Magazine recently called for us to, post pandemic, “ditch our tech-enabled tools of social distancing”. But are our telepresence robots creating emotional distancing, or are they actually improving our emotional lives? This week in our weekly “COVID-19, robots and us” discussion with experts, we’re looking at the topic of virtual presence and emotional contact, as well as many other practical ways that robotics can make a difference in pandemic times.

Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics, was involved in the very first use of robots in a disaster scenario, at 9/11. Since then she’s been involved in multiple disaster responses worldwide, including the Ebola outbreak in 2014-2016. During the US Ebola outbreak, the White House Office of Science and Technology Policy, and then later the NSF, held a series of workshops; she and Ken Goldberg were among those who participated, working with public health officials and groups such as Doctors Without Borders.

“Some of the lessons learned about infectious diseases in general, and for COVID in particular, are that there are really five big categories of uses of robots. Most everybody immediately thinks of clinical applications, or things that are directly related to the health care industry, but the role of robots is far broader than that. It’s not always about replacing doctors. It’s about how robots can assist in any way they can in this whole large, complex enterprise of a disaster.” [Robin Murphy, March 31 2020]

Ross Mead’s company Semio develops software for robot operation that focuses on how people will live, work and play with robots in their everyday lives. “We’re building an operating system and app ecosystem for in home personal robots. And all of our software relies on natural language user interfaces, just speech and body language or other social cues.”

“Right now, as it pertains to COVID-19, we are working with a team of content creators from a company called copy to develop conversational content, similar to chatbots or voice-based skills, that’s geared towards informing users about or helping mitigate the spread of COVID-19. We’re also developing socially aware navigation systems for mobile robots in natural human environments. I would love to talk about use cases for social robots, even telepresence robots, as well as the impacts of social isolation in these times.” [Ross Mead, March 31 2020]

Therapeutic robot seal Paro in an aged care facility.

Wendy Ju studies people’s interaction with autonomous cars, systems or robots. She focuses on things like nonverbal cues, and the things people do to coordinate and collaborate with one another. Her PhD dissertation was on the design of implicit interactions, something a lot of us take for granted or consider static, not dynamic. Through her work on autonomous cars, she’s been exploring the subtle cues that convey critical information.

“If we get these things wrong, it’s life or death. I think we’re starting to understand that a lot of the things that we think of as interaction are only the top layer of what we’re actually doing all the time with other people. And if we don’t understand those lower layers, it could kill us.”

Pedestrians and driverless car interaction

“So last week, I put together a proposal to study how people are interacting with one another in the city around the social distancing policy. And I agree the name is not perfect, but I think it also gets to the heart of what’s important to do. In this epidemic, there’s a halo effect around our social interactions because we know they’re necessary and good for us. And so people think, well, I shouldn’t go to the grocery store or the hospital, but surely it can’t be bad to go visit my neighbor, or surely it can’t be bad to go see my grandmother. These kinds of inclinations will kill us when taken to scale.”

“When we say social distancing, we’re saying, yes, school is good, but school is bad in this situation; church is good, but church is bad in this situation. We’re really getting at the thing that we are so tempted to do, which is literally the thing that we’re trying to stop right now. I think that’s why they call it social distancing. And it does definitely have a physical corollary. I’m interested to see afterwards, for those people who were playing basketball and those who were playing soccer, are those places where people got more sick or not? We don’t actually know all the different mechanisms of transmission for the disease. And I think later on, we’ll be able to figure it out.” [Wendy Ju, March 31 2020]

Cory Kidd is the CEO and founder of Catalia Health, which uses social robots for medical care management. Catalia Health has done extensive clinical trials prior to commercial roll out and leads the world in understanding robots for medical care.

“The concept of chronic disease management of course is not new; it’s just that the usual model is very human powered. We do it in clinical settings, in the doctor’s office, in the hospital, and we send people out to homes, and a lot of the work is done by calling patients on the phone to check in on them. We replace all of those by putting an actual physical robot in the patient’s home to talk to them.”

Catalia Health’s Mabu at home with Michelle Chin

“So what we’re doing on the AI side is generating conversations for patients on the fly, for whatever condition they’re dealing with, and we build these around specific conditions. The robotic piece of it is really driven by the psychology around why we would rather all be in a room together, as opposed to gathering around our computers and staring into the screen on Zoom. We intuitively get that physical presence is different.”

“When we’re face to face with someone, they’re more engaged, we create stronger relationships and a number of other things. Research showed that those differences actually carry over into the future. When you put a cute little robot in front of someone that can look them in the eyes while it’s talking to them, we actually get a lot of the effects of face to face interaction. And so we’ve leveraged that to build chronic disease care management programs. Over the last couple of years, we’ve been rolling these out largely in specialty pharmacy, so we work with some of the largest pharma manufacturers in the world, like Pfizer. We’re helping patients across a number of different conditions really keep track of how they’re doing, to stay on therapy and stay out of the hospital using our AI and robotics platform.”

“The current situation around the world is really highlighting the need for more of this kind of technology.” [Cory Kidd March 31 2020]

Ken Goldberg is the director of the CITRIS People and Robots Initiative, and the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley. Both Ken and Robin are among the authors of a recent editorial in Science Robotics, “Combating COVID-19 – The role of robotics in managing public health and infectious diseases”.

Medical personnel works inside one of the emergency structures that were set up to ease procedures outside the hospital of Brescia, Northern Italy, Tuesday, March 10, 2020. For most people, the new coronavirus causes only mild or moderate symptoms, such as fever and cough. For some, especially older adults and people with existing health problems, it can cause more severe illness, including pneumonia. (Claudio Furlan/LaPresse via AP)

“There’s this fundamental issue that I’ve been thinking a lot about, which is protecting the health care workers, especially where they’re now having to provide these tests for huge numbers of people. Swabbing is quite an uncomfortable and invasive process. Is there any way that we might be able to automate that at some point? I don’t think that’s going to happen anytime soon. But it’s an interesting goal that we could move in that direction.”

“The other is the idea that intubation is an incredibly difficult process and very risky, because a lot of droplet vaporizing happens. That’s another area where it would be very helpful if it could be teleoperated. Right now, the state of the art in telemedicine, telesurgery in particular, and these types of procedures is not ready for the situation we’re facing now. We are nowhere near capable of doing that. And so I think this is a really important wake-up call to start to develop these technologies.”

Also in the discussion, Jessica Armstrong, a mechanical engineer at SuitX and local coordinator for Open Source COVID-19 Medical Supplies, gave us updates on local PPE activities and on how community grassroots initiatives like OSCMS and Helpful Engineering have been catalyzing networks of people to sew masks and gowns, laser cut face shields, 3D print parts for PPE and medical equipment, and develop new designs for emergency ventilators and respirators, while we’re still waiting for manufacturers and the supply chain to meet the demand.

Perhaps most critically, groups like OSCMS and Helpful Engineering validate and share designs for PPE so that people aren’t wasting time designing their own solutions, nor putting health care workers at risk with badly designed homemade PPE.

Our second weekly discussion about “COVID-19, robots and us” from March 31 is now available online and as a podcast. You can sign up to join the audience for the next episodes here.

Special guests were Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics; Ross Mead, CEO of Semio and VP of AI LA; Wendy Ju, Interaction Design Professor at Cornell; Cory Kidd, CEO of Catalia Health, maker of medical social robots; Ken Goldberg, Director of CITRIS People and Robots Initiative; and Jessica Armstrong, mechanical engineer at SuitX and local coordinator for Open Source COVID-19 Medical Supplies. Moderated by Andra Keay, Managing Director of Silicon Valley Robotics, with extra help from Erin Pan, Silicon Valley Robotics, and Beau Ambur from Kickstarter.

]]>
HRI 2020 Keynote: Ayanna Howard https://robohub.org/hri-2020-keynote-ayanna-howard/ Mon, 13 Apr 2020 01:41:44 +0000 https://robohub.org/hri-2020-keynote-ayanna-howard/ Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily life activities. Alas, associated with this increased acceptance is a concern with the ethical ramifications as we start becoming more dependent on these devices [1]. Studies, including our own, suggest that people tend to trust, and in some cases overtrust, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, when human judgment should still have priority at times, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, modelling the behavior for future scenarios and, if possible, introducing system behaviors in order to mitigate it. In this talk, we will discuss a number of human-robot interaction studies we conducted to examine this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.

References

  1. A. Howard, J. Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics Journal, 24(5), pp 1521–1536, October 2018.
  2. A. R. Wagner, J. Borenstein, A. Howard, “Overtrust in the Robotic Age: A Contemporary Ethical Challenge,” Communications of the ACM, 61(9), Sept. 2018.
  3. J. Borenstein, A. Wagner, A. Howard, “Overtrust of Pediatric Healthcare Robots: A Preliminary Survey of Parent Perspectives,” IEEE Robotics and Automation Magazine, 25(1), pp. 46–54, March 2018.

Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Page 1 | https://doi.org/10.1145/3319502.3374842


]]>
HRI 2020 Online Day One https://robohub.org/hri-2020-online-day-one/ Tue, 07 Apr 2020 22:27:33 +0000 https://robohub.org/hri-2020-online-day-one/ HRI 2020 has already kicked off with workshops and the Industry Talks Session on April 3; however, the first release of videos has only just gone online, with the welcome from General Chairs Tony Belpaeme, IDLab, University of Ghent, and James Young, University of Manitoba.

There is also a welcome from the Program Chairs Hatice Gunes from the University of Cambridge and Laurel Riek from UC San Diego, requesting that we all engage with the participants’ papers and videos.

The theme of this year’s conference is “Real World Human-Robot Interaction,” reflecting on recent trends in our community toward creating and deploying systems that can facilitate real-world, long-term interaction. This theme also reflects a new theme area we have introduced at HRI this year, “Reproducibility for Human Robot Interaction,” which is key to realizing this vision and helping further our scientific endeavors. This trend was also reflected across our other four theme areas, including “Human-Robot Interaction User Studies,” “Technical Advances in Human-Robot Interaction,” “Human-Robot Interaction Design,” and “Theory and Methods in Human-Robot Interaction.”

The conference attracted 279 full paper submissions from around the world, including Asia, Australia, the Middle East, North America, South America, and Europe. Each submission was overseen by a dedicated theme chair and reviewed by an expert group of program committee members, who worked together with the program chairs to define and apply review criteria appropriate to each of the five contribution types. All papers went through a strict double-blind review process, followed by a rebuttal period, and shepherding where deemed appropriate by the program committee. Ultimately the committee selected 66 papers (23.6%) for presentation as full papers at the conference. As the conference is jointly sponsored by ACM and IEEE, papers are archived in the ACM Digital Library and IEEE Xplore.

Along with the full papers, the conference program and proceedings include Late Breaking Reports, Videos, Demos, a Student Design Competition, and an alt.HRI section. Out of 183 total submissions, 161 (88%) Late Breaking Reports (LBRs) were accepted and will be presented as posters at the conference. A full peer-review and meta-review process ensured that authors of LBR submissions received detailed feedback on their work. Nine short videos were accepted for presentation during a dedicated video session. The program also includes 12 demos of robot systems that participants will have an opportunity to interact with during the conference. We continue to include an alt.HRI session in this year’s program, consisting of 8 papers (selected out of 43 submissions, 19%) that push the boundaries of thought and practice in the field. We are also continuing the Student Design Competition with 11 contenders, to encourage student participation in the conference and enrich the program with design inspiration and insights developed by student teams. The conference will include 6 full-day and 6 half-day workshops on a wide array of topics, in addition to the selective Pioneers Workshop for early-career HRI students.

Keynote speakers will reflect the interdisciplinary nature and vigour of our community. Ayanna Howard, the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology, will talk about ‘Are We Trusting AI Too Much? Examining Human-Robot Interactions in the Real World’; Stephanie Dinkins, a transmedia artist who creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories, will also speak; and Dr Lola Canamero, Reader in Adaptive Systems and Head of the Embodied Emotion, Cognition and (Inter-)Action Lab in the School of Computer Science at the University of Hertfordshire in the UK, will talk about ‘Embodied Affect for Real-World HRI’.

The Industry Talks Session was held on April 3, and we are particularly grateful to the sponsors who have remained with HRI 2020 through the transition to a virtual format. Karl Fezer from ARM, Chris Roberts from Cambridge Consultants, Ker-Jiun Wang from EXGWear and Tony Belpaeme from IDLab at Ghent University were able to join me for the first Industry Talks Session at HRI 2020 – a very insightful discussion!

The HRI2020 proceedings are available from the ACM digital library.

Full papers:
https://dl.acm.org/doi/proceedings/10.1145/3319502

Companion Proceedings (alt.HRI, Demonstrations, Late-Breaking Reports, Pioneers Workshop, Student Design Competitions, Video Presentations, Workshop Summaries):
https://dl.acm.org/doi/proceedings/10.1145/3371382

]]>
COVID-19, robots and us – weekly online discussion https://robohub.org/covid-19-robots-and-us-weekly-online-discussion/ Tue, 24 Mar 2020 00:51:18 +0000 https://robohub.org/covid-19-robots-and-us-weekly-online-discussion/ Silicon Valley Robotics and the CITRIS People and Robots Initiative are hosting a weekly “COVID-19, robots and us” online discussion with experts from the robotics and health community on Tuesdays at 7pm (California time – PDT). You can sign up for the free event here.

Guest speakers this week are:

Prof Ken Goldberg, UC Berkeley Director of the CITRIS People and Robots Initiative.

Alder Riley, Founder at ideastostuff and a coordinator at Helpful Engineering. Helpful Engineering is a rapidly growing global network created to design, source and execute projects that can help people suffering from the COVID-19 crisis worldwide.

Tra Vu, COO at Ohmnilabs, a telepresence robotics and 3D printing startup

Mark Martin, Regional Director of Advanced Manufacturing Workforce Development, California Community Colleges

Gui Cavalcanti, CEO/Cofounder of Breeze Automation and Founder of Open Source Covid-19 Medical Supplies Group. The Open Source COVID-19 Medical Supplies Group is a rapidly growing Facebook group formed to evaluate, design, validate, and source the fabrication of open source emergency medical supplies around the world, given a variety of local supply conditions.

Andra Keay, Managing Director of Silicon Valley Robotics and Visiting Scholar at CITRIS People and Robots Initiative will act as moderator.

Beau Ambur, Outreach, Design and Technology Lead for Kickstarter will be coordinating technology for us.

]]>
Transience, Replication, and the Paradox of Social Robotics https://robohub.org/transience-replication-and-the-paradox-of-social-robotics/ Wed, 30 Oct 2019 03:52:18 +0000 https://robohub.org/transience-replication-and-the-paradox-of-social-robotics/

with Guy Hoffman
Robotics Researcher, Cornell University

An Art, Technology, and Culture Colloquium, co-sponsored by the Center for New Music and Audio Technologies and CITRIS People and Robots (CPAR), presented with Berkeley Arts + Design as part of Arts + Design Mondays.

As we continue to develop social robots designed for connectedness, we struggle with paradoxes related to authenticity, transience, and replication. In this talk, I will attempt to link together 15 years of experience designing social robots with 100-year-old texts on transience, replication, and the fear of dying. Can there be meaningful relationships with robots who do not suffer natural decay? What would our families look like if we all choose to buy identical robotic family members? Could hand-crafted robotics offer a relief from the mass-replication of the robot’s physical body and thus also from the mass-customization of social experiences?

About Guy Hoffman

Dr. Guy Hoffman is an Assistant Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC2) group, studying the algorithms, interaction schema, and designs enabling close interactions between people and personal robots in the workplace and at home. Among other firsts, Hoffman developed the world’s first human-robot joint theater performance and the first real-time improvising human-robot Jazz duet. His research papers have won several top academic awards, including Best Paper awards at robotics conferences in 2004, 2006, 2008, 2010, 2013, 2015, 2018, and 2019. His TEDx talk is one of the most viewed online talks on robotics, watched more than 3 million times.

About the Art, Technology, and Culture Colloquium

Founded by Prof. Ken Goldberg in 1997, the ATC lecture series is an internationally respected forum for creative ideas. Always free of charge and open to the public, the series is coordinated by the Berkeley Center for New Media and has presented over 200 leading artists, writers, and critical thinkers who question assumptions and push boundaries at the forefront of art, technology, and culture including: Vito Acconci, Laurie Anderson, Sophie Calle, Bruno Latour, Maya Lin, Doug Aitken, Pierre Huyghe, Miranda July, Billy Kluver, David Byrne, Gary Hill, and Charles Ray.

Fall 2019 – Spring 2020 Series Theme: Robo-Exoticism

In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. A century later, increasing populism, inequality, and xenophobia require us to reconsider our assumptions about labor, trade, political stability, and community. At the same time, advances in artificial intelligence and robotics, fueled by corporations and venture capital, challenge our assumptions about the distinctions between humans and machines. To explore potential linkages between these trends, “Robo-Exoticism” characterizes a range of human responses to AI and robots that exaggerate both their negative and positive attributes and reinforce fears, fantasies, and stereotypes.

Robo-Exoticism Calendar

09/09/19 Robots Are Creatures, Not Things
Madeline Gannon, Artist / Roboticist, Pittsburgh, PA
Co-sponsored by the Jacobs Institute for Design Innovation and CITRIS People and Robots (CPAR)

09/23/19 The Copper in my Cooch and Other Technologies
Marisa Morán Jahn, Artist, Cambridge, MA and New York, NY
Co-sponsored by the Wiesenfeld Visiting Artist Lecture Series and the Jacobs Institute for Design Innovation

10/21/19 Non-Human Art
Leonel Moura, Artist, Lisbon
Co-sponsored by the Department of Spanish & Portuguese and CITRIS People and Robots (CPAR)

11/4/19 Transience, Replication, and the Paradox of Social Robotics
Guy Hoffman, Robotics Researcher, Cornell University
Co-sponsored by the Center for New Music and Audio Technologies and CITRIS People and Robots (CPAR)

01/27/20 Dancing with Robots: Expressivity in Natural and Artificial Systems
Amy LaViers, Robotics, Automation, and Dance (RAD) Lab
Co-sponsored by the Department of Theater, Dance, and Performance Studies and CITRIS People and Robots (CPAR)

02/24/20 In Search for My Robot: Emergent Media, Racialized Gender, and Creativity
Margaret Rhee, Assistant Professor, SUNY Buffalo; Visiting Scholar, NYU
Co-sponsored by the Department of Ethnic Studies and the Department of Comparative Literature

03/30/20 The Right to Be Creative
Margarita Kuleva, National Research University Higher School of Economics, Moscow
Invisible Russia: Participatory Cultures, Their Practices and Values
Natalia Samutina, National Research University Higher School of Economics, Moscow
Co-sponsored by the Department of Slavic Languages and Literature and Department of the History of Art and the Arts Research Center

04/06/20 Artist Talk
William Pope.L, Artist
Presented by the Department of Art Practice

04/13/20 Teaching Machines to Draw
Tom White, New Zealand
Co-sponsored by Autolab and CITRIS People and Robots (CPAR)

For more information:

http://atc.berkeley.edu/

Contact: info.bcnm [​at​] berkeley.edu, 510-495-3505

ATC Director: Ken Goldberg
BCNM Director: Nicholas de Monchaux
Arts + Design Director: Shannon Jackson
BCNM Liaisons: Lara Wolfe, Laurie Macfee

ATC Highlight Video from F10-S11 Season (2 mins)
http://j.mp/atc-highlights-hd

ATC Audio-Video Archive on Brewster Kahle’s Internet Archive:
http://tinyurl.com/atc-internet-archive

ATC on Facebook:
https://www.facebook.com/cal-atc

ATC on Twitter:
https://www.twitter.com/cal_atc

]]>
Catalia Health and Pfizer collaborate on robots for healthcare https://robohub.org/catalia-health-and-pfizer-collaborate-on-robots-for-healthcare/ Tue, 17 Sep 2019 01:17:04 +0000 https://robohub.org/catalia-health-and-pfizer-collaborate-on-robots-for-healthcare/ New robot platform improves patient experience using AI to help patients navigate barriers and health care challenges

SAN FRANCISCO, Sept. 12, 2019 /PRNewswire/ — Catalia Health and Pfizer today announced they have launched a pilot program to explore patient behaviors outside of clinical environments and to test the impact regular engagement with artificial intelligence (AI) has on patients’ treatment journeys. The 12-month pilot uses the Mabu® Wellness Coach, a robot that uses artificial intelligence to gather insights into symptom management and medication adherence trends in select patients.

The Mabu robot can interact with patients using AI algorithms to engage in tailored, voice-based conversations. Mabu “talks” with patients about how they are feeling and helps answer questions they may have about their treatment. The Mabu Care Insights Platform then delivers detailed data and insights to clinicians at a specialty pharmacy provider to help human caregivers initiate timely and appropriate outreach to the patient. The goal is to help better manage symptoms and address patient questions in real-time.

“At Catalia Health we’ve seen firsthand the benefits that AI has brought to healthcare for both the patient and the healthcare systems,” said Cory Kidd, founder and CEO of Catalia Health. “Our work with Pfizer allows us to engage with patients on a larger scale and therefore gain access to more insights and data that we hope can improve health outcomes.”

Mabu is helping to deliver personalized care by gaining insights that allow the specialty pharmacy to reach out to patients as they express challenges in managing their conditions. Mabu also generates health tips and reminders to help patients get additional information about their condition and treatment that may help them along the way. Over time, it is our goal that Mabu can help patients navigate barriers and health care challenges that are often a part of managing a chronic disease.

“The healthcare system is overburdened, and as a result, patients often seek more-coordinated care and information. Through this collaboration with Catalia Health, we hope to learn through real-time data and insights about challenges patients face, outside the clinical setting, with the goal to improve their treatment journeys in the future,” said Lidia Fonseca, Chief Digital and Technology Officer at Pfizer. “This pilot is an example of how we are working to develop digital companions for all our medicines to better support patients in their treatment journeys.”

The pilot program was officially announced on stage at the National Association of Specialty Pharmacy’s Annual Meeting & Expo on September 10, 2019. Initial pilot data will be available in the coming months. For more information, visit www.cataliahealth.com

About Catalia Health

Catalia Health is a San Francisco-based patient care management company founded by Cory Kidd, Ph.D., in 2014. Catalia Health provides an effective and scalable solution for individuals managing chronic disease or taking medications on an ongoing basis. The company’s AI-powered robot, Mabu, enables healthcare providers and pharmaceutical companies to better support patients living with chronic illness. Mabu uses a voice-based interface designed for simple, intuitive use by a wide variety of patients in remote care environments. The cloud-based platform delivers a unique conversation each time a patient talks with Mabu.

Catalia Health’s care management programs are tailored to increase clinically appropriate medication adherence, improve symptom management and reduce the likelihood that a patient is readmitted to the hospital after being discharged.

For more information, visit www.cataliahealth.com

]]>
Robo-Exoticism is the theme for 2019/20 Art, Technology and Culture Colloquiums https://robohub.org/robo-exoticism-is-the-theme-for-2019-20-art-technology-and-culture-colloquiums/ Sun, 08 Sep 2019 23:25:58 +0000 https://robohub.org/robo-exoticism-is-the-theme-for-2019-20-art-technology-and-culture-colloquiums/
Manus – at World Economic Forum 2018

Madeline Gannon’s “Robots Are Creatures, Not Things” will be the first talk of the Fall 2019 – Spring 2020 season of the Colloquiums at UC Berkeley’s Center for New Media, at 6pm on Sept 9th.

Dr. Madeline Gannon is a multidisciplinary designer inventing better ways to communicate with machines. In her work, Gannon seeks to blend knowledge from design, robotics, and human-computer interaction to innovate at the intersection of art and technology. Gannon designs her research to engage with wide audiences across scientific and cultural communities: her work has been exhibited at international cultural institutions, published at ACM conferences, and covered by diverse global media outlets. Her 2016 interactive installation, Mimus, even earned her the nickname, “The Robot Whisperer.”

Mimus – a curious robot

She is a three-time World Economic Forum Cultural Leader, and serves as a council member on the World Economic Forum Global Council for IoT, Robotics, & Smart Cities. Gannon holds a Ph.D. in computational design from Carnegie Mellon University, a master’s in architecture from Florida International University, and is a Research Fellow at the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University.

Her work “Robots Are Creatures, Not Things” questions how we should coexist with intelligent, autonomous machines. After 50 years of promises and potential, robots are beginning to leave the lab to live in the wild with us. In this lecture, Dr. Madeline Gannon discusses how art and technology are merging to forge new futures for human-robot relations. She shares her work in convincing robots to do things they were never intended to do: from transforming a giant industrial robot into a living, breathing mechanical creature, to taming a horde of autonomous robots to behave more like a pack of animals. By pushing the boundaries of human-robot interaction, her work shows that robots can be not only useful, but also meaningful additions to our everyday lives.

Quipt – gestural control of industrial robots

Founded in 1997, the ATC series is an internationally respected forum for creative ideas. The ATC series, free of charge and open to the public, is coordinated by the Berkeley Center for New Media and has presented over 170 leading artists, writers, and critical thinkers who question assumptions and push boundaries at the forefront of art, technology, and culture including: Vito Acconci, Laurie Anderson, Sophie Calle, Bruno Latour, Maya Lin, Doug Aitken, Pierre Huyghe, Miranda July, Billy Kluver, David Byrne, Gary Hill, and Charles Ray.

The current ATC Director is robotics professor Ken Goldberg, who is behind this season’s “Robo-Exoticism” theme and is also the Director of the CITRIS People and Robots Initiative and head of the AutoLab at UC Berkeley.

In 1920, Karel Čapek coined the term “robot” in a play about mechanical workers organizing a rebellion to defeat their human overlords. A century later, increasing populism, inequality, and xenophobia require us to reconsider our assumptions about labor, trade, political stability, and community. At the same time, advances in artificial intelligence and robotics, fueled by corporations and venture capital, challenge our assumptions about the distinctions between humans and machines. To explore potential linkages between these trends, “Robo-Exoticism” characterizes a range of human responses to AI and robots that exaggerate both their negative and positive attributes and reinforce fears, fantasies, and stereotypes.

“A Century of Art and Technology in the Bay Area” (essay)

Location:

Monday Evenings, 6:30-8:00pm
Osher Auditorium
BAMPFA, Berkeley, CA
More information
Lectures are free and open to the public. Sign up for the ATC Mailing List!

]]>
2019 Robot Launch startup competition is open! https://robohub.org/2019-robot-launch-startup-competition-is-open/ Wed, 07 Aug 2019 22:30:07 +0000 https://robohub.org/2019-robot-launch-startup-competition-is-open/

It’s time for Robot Launch 2019 Global Startup Competition! Applications are now open until September 22nd, 6pm PDT. Finalists may receive up to $500k in investment offers, plus space at top accelerators and mentorship at the Silicon Valley Robotics co-work space.

Winners in previous years include high profile robotics startups and acquisitions:

2018: Anybotics from ETH Zurich, with Sevensense and Hebi Robotics as runners-up.

2017: Semio from LA, with Appellix, Fotokite, Kinema Systems, BotsAndUs and Mothership Aeronautics as runners-up in the Seed and Series A categories.

]]>
Women in robotics on International Women’s Day 2019 – updated https://robohub.org/women-in-robotics-on-international-womens-day-2019/ Fri, 08 Mar 2019 07:29:57 +0000 https://robohub.org/women-in-robotics-on-international-womens-day-2019/ What does a day in the life of a woman working with robots look like? We asked members of WomeninRobotics.org to volunteer “a paragraph and a picture” for this first patchwork representation of the field. And if you’re a woman working in robotics or interested in the field, join us! (updated with content from Odyssey Foundation in Nigeria)

The Odyssey Educational Foundation robotics program provides a way for girls to get exposure to the field while also discovering their passion for STEM-based careers. It has allowed more girls to feel confident in these types of roles, and it has helped the boys grow confident that the girls belong there. Girls are half the population, and there really isn’t any reason why girls shouldn’t see how fun, exciting and rewarding engineering can be.

Children have a natural curiosity that lends itself to science, technology, math, and engineering. At the Odyssey Educational Foundation we inspire young girls to build on their innate desire for answers by exploring engineering concepts in a fun, hands-on way. We offer after-school programs, camps, and special events designed to present young girls with challenging yet accessible engineering activities from which they can learn and grow.

Australian nurse Anne Elvin recently travelled to Brisbane to present a talk at the Queensland eHealth Innovation Showcase. Anne presented an insider’s perspective on what it was like to work with Softbank’s Pepper robot at the Townsville Hospital. Pepper’s message about flu and the importance of vaccination and hand hygiene was very simple, but the user experience provided by the incredible programming of our collaborative partners at the Australian Centre for Robotic Vision was extraordinary. As part of an innovation project created by Anne, Pepper brought a new layer of engagement in health, and the robot became very popular amongst staff, patients, and volunteers at the Townsville Hospital. While some people were initially sceptical about the presence of a robot in a hospital, even the most sceptical quickly became accustomed to seeing the friendly little robot and began to treat Pepper as a sort of mascot or ambassador for health. Anne and her work with Pepper have paved the way for introducing other social robots into the Australian health system.

Lisa Winter with MiniTento and a middle school robotics team.

Lisa Winter is a mechanical engineer at Quartz, building hardware to identify, track, and predict everything that moves on a construction site. Her hobby of building robots started at the age of 10, when she fought in Robot Wars, and continued as she competed in BattleBots until 2016. In her spare time she likes to talk to kids about the importance of STEM. Seen here, Lisa and her robot ‘Mini Tento’ are with a middle school Lego robot building team.

Meka and Natalia Diaz Rodriguez

Natalia Diaz Rodriguez is Assistant Professor of Artificial Intelligence at the Autonomous Systems and Robotics Lab (U2IS) at ENSTA ParisTech, in the Autonomous Systems and Robotics (computer vision) group and the INRIA Flowers team (flowers.inria.fr), which works on developmental robotics. Her research interests include deep, reinforcement and unsupervised learning, (state) representation learning, explainable AI and AI for social good. She works on open-ended learning and continual/lifelong learning for applications in computer vision and robotics. Her background is in knowledge engineering (semantic web, ontologies and knowledge graphs) and she is interested in explainable AI and neural-symbolic approaches to practical applications of AI.

She received a Computer Engineering degree from the University of Granada (UGR, Spain) and a double Ph.D. from Åbo Akademi University (Finland), jointly with UGR, on Artificial Intelligence and Semantic and Fuzzy Modelling for Human Behaviour Recognition in Smart Spaces. She has worked on R&D at CERN (Switzerland) and at Philips Research (Netherlands) in the Personal Health Dept., completed a postdoctoral stay at the University of California Santa Cruz, and worked in industry in Silicon Valley at Stitch Fix (San Francisco, CA), a recommendation service for fashion delivery with humans in the loop.

She has participated in a range of international projects and is a Management Committee member of the EU COST (European Cooperation in Science and Technology) Action AAPELE.EU (Algorithms, Architectures and Platforms for Enhanced Living Environments, www.aapele.eu) and the EU H2020 DREAM project (www.robotsthatdream.eu). She was a Google Anita Borg Scholar in 2014, a Heidelberg Laureate Forum fellow in 2014 and 2017, and received the Nokia Scholarship, among other awards.

Cristina Zaga

Cristina Zaga is a Ph.D. candidate at the HMI group (University of Twente) and a visiting scholar at the RiG lab (Cornell University). Cristina’s doctoral research focuses on designing “robothings”, everyday robotic objects and toys, to promote children’s prosocial behaviors in collaborative play. She studies how robots communicate intent and social qualities through movement and nonverbal actions alone, defining a framework for non-anthropomorphic robots. Currently, she works on developing approaches and toolkits for research through design and participatory practices to bring together roboticists, designers and stakeholders. She envisions a future of robothings, robotics embedded in everyday objects, that meaningfully interact with people to empower them, steering away from reinforcing existing biases in society and from paternalism. In her after-hours, Cristina explores artistic interventions to advance the discourse on human-centered robotics, using speculative design to make what she calls poetic robots. Her work in HRI interaction design for robothings received an HRI student design competition award and has been exhibited at the Eindhoven Design Week 2017. Cristina is one of the founders of the Child-Robot Interaction international workshop series and co-organizer of the workshop Robots for Social Good. Cristina was selected as a Google Women Techmakers Scholar 2018 for her research quality and her support in empowering women and children in STEM.

Ecem Tuğlan

Ecem Tuğlan is Co-Founder of Fenom Robotics, which is building the world’s first hologram-displaying robot. She is also founder of Revulation4.0, the world’s first digital and printable clothing label, which will release its first collection soon.

She is a professional robopsychologist recognized by NASA. In June 2016 she presented her original paper “Do Androids Sense of Electric Deja-vu?” to Dr. Ravi Margasahayam of NASA.

She graduated from Ege University’s philosophy and sociology department, and took courses in teaching and consulting psychology at Dokuz Eylul University. She is also an amateur photographer and painter; her pictures have been exhibited on the Saatchi Gallery’s website and the History Channel’s website. IASSR, IBAD and ECSBS have invited her to different countries to give presentations on artificial intelligence, philosophy and robopsychology. She has taught philosophy lessons at the Oxford Creative Writing Center.

She also advocates for robot rights.

PR2 and Laurel Riek

Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California, San Diego, with joint appointments in the Department of Emergency Medicine and Contextual Robotics Institute. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming, computer vision, and healthcare engineering, with a focus on autonomous robots that work proximately with people. Riek’s current research interests include long term learning, robot perception, and personalization; with applications in critical care, neurorehabilitation, and manufacturing.

Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge and a B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008, working on learning and vision systems for robots, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CAREER Award, AFOSR Young Investigator Award, Qualcomm Research Award, and was named one of ASEE’s 20 Faculty Under 40.

Nao and Deanna Hood

Deanna Hood is an electrical engineer whose work focuses on humanitarian applications of engineering and robotics, with projects spanning accessibility, education and healthcare. Examples of her work include a brain-controlled car, with applications for people living with paralysis; a low-cost USB stethoscope for diagnosing childhood pneumonia in developing countries; and the first robotic partner for children with handwriting difficulties: a robot that children can teach how to write, so that even those at the bottom of their class can benefit from “learning by teaching”. These projects have resulted in a number of academic publications as well as international print and TV media coverage such as by Reuters and Discovery Channel, and saw Deanna as a finalist for TED2013.
Most recently, during her time at the Open Source Robotics Foundation, Deanna worked on the Robot Operating System (ROS), which is referred to as the “lingua franca” of robot developers and is used in applications as diverse as autonomous cars, Antarctic research robots, and robots on the International Space Station. For her efforts in advancing society’s perception of engineering, Deanna has been recognised as a Google Anita Borg Memorial Scholar, a John Kindler Memorial Medallist, an Erasmus Mundus Scholar, and as a finalist for the Pride of Australia Young Leader Medal. This is in addition to various academic medals for placing at the top of her three degrees despite starting university at age 15.

Image: DARPA Project Manager Erin McColl with CyberPhysical Systems Research Director Sue Keay, both with CSIRO’s Data61 next to a hexapod robot being trialled for the DARPA Sub-T challenge.

Cyber-Physical Systems Research Director Sue Keay: Here I am pictured with our DARPA Project Manager, Erin McColl. One of the most exciting projects in the portfolio I’ve inherited now that I am the Research Director for Cyber-Physical Systems within Australia’s CSIRO Data61 is our work on the DARPA Sub-T Challenge. The aim of the challenge is to develop innovative technologies that will augment operations underground. We are the only non-US team included in the Challenge. We are currently testing technology to rapidly map, navigate, and search underground environments. The three-year Subterranean Challenge is funded by the US Defense Advanced Research Projects Agency (DARPA).

Joanne Pransky and patients

Dubbed the ‘real-life Susan Calvin’ by Isaac Asimov in 1989, Joanne Pransky, the World’s First Robotic Psychiatrist®, has been tracking the robot evolution for over three decades. Her focus is on the use and marketing of robots as well as the critical psychological issues of the relationship between humans and robots. The field of robotic psychiatry, which she pioneered in 1986, is no longer science fiction, and she is accepting new robo-patients ready to be integrated into society.

Nissia Sabri at Novanta: We chose to highlight women across the company, all the way from our President to the factory floor. As the description below shows, these women make critical components for surgical robotics!

Celera Motion precision components and subsystems enabled ~1 million robotic surgeries in 2018. Here are some of the women who contributed to the advancement of innovative technology in the field of robotics. This great team at Celera Motion is part of Novanta, focused on delivering innovations that matter.

Allison Thackston

Allison Thackston is the Engineering Lead and Manager of the Shared Autonomy Robotics team at Toyota Research Institute. Her team focuses on developing advanced robotic teleoperation technologies that enable robots and people to seamlessly and safely work together.
Allison previously held the position of Lead Manipulation Researcher/Project Manager at Toyota Partner Robotic Group where she investigated robust task and motion planning manipulation strategies in unstructured environments. Before joining Toyota, she was the Lead Engineer for Robotic Perception on Robonaut 2, the first humanoid robot on the International Space Station. There, she was responsible for software development and applied vision research to facilitate the cooperation between robots and people.
Allison holds degrees in Electrical and Mechanical Engineering. Her thesis focused on collision avoidance during supervised robotic manipulation.

The women of Omron Adept in California.

Omron Adept Technologies, Inc. is a robotics company under Omron Corporation. More specifically, we are part of the Industrial Automation group at Omron. Our company is unique in that it develops industry-leading electronics, mechanics, and software for a broad spectrum of robots for the global market. Recently we have seen significant growth in the presence of women in our company: the rate of women in engineering has increased by almost 10% in the last three years, and the number of women in engineering management positions is around 25%. Women are present on almost all engineering teams: systems, software (fixed and mobile), applications, electrical, quality, and marketing. Two of our colleagues have also been named among the “25 Women in Robotics that you Need to Know About”: Noriko Kageki in 2014 and Casey Schultz in 2018.

Audrey Roberts is a sophomore studying Mechanical Engineering at the University of Southern California. At USC, Audrey does robotics research in Professor Maja Matarić’s Interaction Lab. Currently, she is excited to be working under PhD student Lauren Klein, exploring the ability of socially assistive robotics to increase exploratory motor movement. This research is aimed in particular at infants at risk for developmental delay. Furthermore, Audrey is part of the USC Rocket Propulsion Lab, where she works on a team that designs and builds the mechanical components for the rockets’ avionics systems. Audrey will be interning at Microsoft this summer and hopes to continue exploring human-computer interaction in the future as a hardware engineer.

Melonee Wise

Melonee Wise is the CEO of Fetch Robotics, which is delivering on-demand automation solutions for the logistics industry. She was the second employee at Willow Garage, a research and development laboratory extremely influential in the advancement of robotics.  She led a team of engineers developing next-generation robot hardware and software, including ROS, the PR2 and TurtleBot.  Melonee was a 2015 recipient of MIT Technology Review’s TR 35 award for innovators under the age of 35. In 2017, Business Insider named her as one of eight CEOs changing the way we work. Under her leadership, the company won the MODEX Innovation award for the materials handling industry.

Roxanna Pakkar

Roxanna Pakkar is a junior studying electrical engineering at University of Southern California. She is a research assistant in the USC Interaction Lab where she has assisted in projects including a robotic system intended to improve the social interaction skills of children with Autism and a study demonstrating the role of augmented reality in improving expressiveness in human-robot interaction. She has also led her own study within the lab on help-seeking behaviors with robot tutors. In addition, Roxanna has interned at NASA JPL in the Robotics and Mobility Section, working on a swarm autonomy platform. This summer she is interning as a product engineer at Microsoft and she hopes to continue work in human-robot interaction and collaboration in the future.

I am Rania Rayyes, a PhD student at TU Braunschweig in Germany.
I am doing my PhD in Robotics and Machine Learning. My research focuses on learning robot models, i.e., learning the robot actions required to accomplish specific tasks. For this purpose, I am developing novel intrinsic-motivation machine learning methods.

Nicole Mirnig is a passionate researcher in social robots and human-robot interaction. She recently finished her PhD on the essentials of robot feedback at the University of Salzburg, Austria. Nicole’s research focus lies in human-robot cooperation, taking into account the different factors that foster a positive user experience. Her latest work is dedicated to systematically researching erroneous robot behavior, and was well received by both international media and the research community. She aims to make robots understand when they have made a mistake and react accordingly. Another hot topic for Nicole is robot humor and how it can be exploited for an enjoyable user experience.


Tori Fujinami, Robotics Engineer, Cobalt Robotics
Robotics is exciting because the applications are endless, only limited by the people designing them! The Cobalt robot specifically is inspiring because it is not the kind of technology intended to replace people or their valuable skills, but rather enhance people’s capabilities and collaborate with existing systems.
Rachel Domagalski, Systems Engineer, Cobalt Robotics
I got interested in robotics because robots sit at an interesting intersection of software, data, and hardware development, and they provide a way to positively augment people’s lives. In particular, Cobalt is exciting to me because our robots combine friendlier human-robot interactions with a practical application of the technology.

International Women’s Day is a chance to showcase Women in Robotics

This International Women’s Day, Universal Robots pays tribute to the women in robotics. Thanks to growing awareness of the untapped potential of the vast Indian female workforce, entrepreneurs have committed themselves to seeing that every part of society gets the opportunity to prove itself professionally. Udhaiyam Dhall, one of the leading consumer food manufacturers in India, has a workforce that is 75% female, working alongside Universal Robots’ collaborative robots (cobots).

Universal Robots is proud to associate itself with manufacturing businesses and non-profit healthcare organisations such as Aurolab, which chooses to employ and train local women, aged 18 and above, to manufacture high-quality eye care products. The women pride themselves on their work, showing their dedication by standing nine hours a day to keep operations running smoothly. To improve their working conditions and grow production, Aurolab decided to deploy Universal Robots’ cobots alongside its workforce, which was retrained to operate the robots for high-precision work. The benefit for the workforce is that they acquire meaningful new skills with the implementation of the cobots. Aurolab employees are now able to manage the cobots’ operation with the simple click of a button and check on the machines once every hour or so.

Female employees in other industries, such as automotive, metal, and machinery, have had similar experiences. Rameshwari, a female assembly line operator at Bajaj Auto, says that she is grateful to be able to work with Universal Robots’ cobots, as she is now able to achieve high-quality output. She and her female colleagues find the cobots interesting and easy to operate, as all the physically challenging parts of the job are taken care of. The robots had to be comfortable for the staff to operate; besides fitting in seamlessly with the workforce, in each case cobots from Universal Robots were picked for their affordability, reduced power consumption, and safety, including a protective stop that cuts power when an unexpected load is applied. In these ways, robotics has not only opened doors to the female workforce, but has empowered it and instilled a pride that will see yet more women entering this field in India and all over the globe.

JOIN US! WOMENINROBOTICS.ORG

]]>
Join the World MoveIt! Day code sprint on Oct 25 2018 https://robohub.org/join-the-world-moveit-day-code-sprint-on-oct-25-2018/ Sat, 20 Oct 2018 03:00:39 +0000 https://robohub.org/join-the-world-moveit-day-code-sprint-on-oct-25-2018/

World MoveIt! Day is an international hackathon to improve the MoveIt! code base, documentation, and community. We hope to close as many pull requests and issues as possible and explore new areas of features and improvements for the now seven-year-old framework. Everyone is welcome to participate from their local workplace, simply by working on open issues. In addition, a number of companies and groups host meetings at their sites all over the world. A video feed will unite the various locations and enable more collaboration. Maintainers will take part at some of these locations.

 

Locations

  • Note that the Tokyo and Singapore locations will have their events on Friday the 26th, not Thursday the 25th.

General Information Contacts

  • Dave Coleman, Nathan Brooks, Rob Coleman // PickNik Consulting

Signup

Please state your intent to join the event on this form. Note that specific locations will have their own signups in addition to this form.

If you aren’t near an organized event we encourage you to have your own event in your lab/organization/company and video conference in to all the other events. We would also like to mail your team or event some MoveIt! stickers to schwag out your robots!

Logistics

What version of MoveIt! should you use?

We recommend the Kinetic LTS branch/release. The Melodic release is also a good choice but is new and has been tested less. The Indigo branch is considered stable and frozen – and only critical bug fixes will be backported.

For your convenience, a VirtualBox image for ROS Kinetic on Ubuntu 16.04 is available here.

Finding Where You Can Help

Suggested areas for improvement are tracked on MoveIt’s GitHub repo via several labels:

  • moveit day candidate labels issues as possible entry points for participants in the event. This list will grow longer before the event.
  • simple improvements indicates the issue can probably be tackled in a few hours, depending on your background.
  • documentation suggests new tutorials, changes to the website, etc.
  • assigned aids developers to find issues that are not already being worked on.
  • no label – of course issues that are not marked can still be worked on during World MoveIt! day, though they will likely take longer than one day to complete.

If you would like to help the MoveIt! project by tackling an issue, claim the issue by commenting “I’ll work on this” and a maintainer will add the label “assigned”. Feel free to ask further questions in each issue’s comments. The developers will aim to reply to WMD-related questions before the event begins.

If you have ideas and improvements for the project, please add your own issues to the tracker, using the appropriate labels where applicable. It’s fine if you want to then claim them for yourself.

Further needs for documentation and tutorials improvement can be found directly on the moveit_tutorials issue tracker.

Other larger code sprint ideas can be found on this page. While they will take longer than a day the ideas might provide a good reference for other things to contribute on WMD.

Documentation

Improving our documentation is at least as important as fixing bugs in the system. Please add to our Sphinx and Markdown-based documentation within our packages and on the MoveIt! website. If you have extensively studied an aspect of MoveIt! that is not currently well documented, please convert your notes into a pull request in the appropriate location. If you’ve started a conversation on the mailing list or elsewhere in which a more experienced developer explained a concept, consider converting that answer into a pull request to help others with the same question in the future.

For more details on modifying documentation, see Contributing.

Video Conference and IRC

Join the conversation on IRC with #moveit at irc.freenode.net. For those new to IRC try this web client.

Join the video conference on Appear.In.

Sponsorship

We’d like to thank the following sponsors:

PickNik Consulting, Iron Ox, Fraunhofer IPA, ROS-Industrial Asia Pacific Consortium, Tokyo Opensource Robotics Kyokai Association, OMRON SINIC X Corporation, and Southwest Research Institute.

]]>
Jillian Ogle is the first ‘Roboticist in Residence’ at Co-Lab https://robohub.org/jillian-ogle-is-the-first-roboticist-in-residence-at-co-lab/ Thu, 27 Sep 2018 20:12:28 +0000 https://robohub.org/jillian-ogle-is-the-first-roboticist-in-residence-at-co-lab/

Currently also featured on the cover of MAKE magazine, Jillian Ogle is a robot builder, game designer, and the founder of Let’s Robot, a live-streaming interactive robotics community where users can control real robots via chatroom commands, or put their own robots online. Some users can even make money with their robots on the Let’s Robot platform, which allows viewers to make micropayments to access some robots. All you need is a robot doing something that’s interesting to someone else, whether it’s visiting new locations or letting the internet shoot ping pong balls at you while you work!

As the first ‘Roboticist in Residence’ at Co-Lab in Oakland, Ogle has access to all the equipment and 32,000 sq ft of space, providing her robotics community with a super-sized robot playground for her live broadcasts, plus the company of fellow robot geeks. Co-Lab is the new coworking space at Circuit Launch, supported by Silicon Valley Robotics, and provides mentors, advisors and community events, as well as electronics and robotics prototyping equipment.

You can meet Ogle at the next Silicon Valley Robotics speaker salon “The Future of Robotics is FUN” on Sept 4 2018. She’ll be joined by Circuit Launch COO Dan O’Mara and Mike Winter, Battlebot World Champion and founder of new competition ‘AI or Die’. Small and cheap phone powered robots are becoming incredibly intelligent and Ogle and Winter are at the forefront of pushing the envelope.

Ogle sees Let’s Robot as the start of a new type of entertainment, where the relationship between viewers and content is two-way and interactive, particularly because robots can go places that some of us can’t, like the Oscars. Ogle has ironed out a lot of the problems with telepresence robotics, including faster response times for two-way commands. Plus it’s more entertaining than old-school telepresence, with robots able to take a range of actions in the real world.

While the majority of the robots are still small and toylike, Ogle believes that this is just the beginning of the way we’ll interact with robots in the future. Interaction is Ogle’s strength: she started her career as an interactive and game designer, previously worked at companies like Disney, and was a participant in Intel’s Software Innovators program.

“I started all this by building dungeons out of cardboard and foam in my living room. My background was in game design, so I’m like, ‘Let’s make it a game.’ There’s definitely a narrative angle you could push; there’s also the real-world exploration angle. But I started to realize it’s a little bigger than that, right? With this project, you can give people access to things they couldn’t access by themselves.” said Jillian talking to Motherboard.

Here are the instructions from Makezine for connecting your own robot to Let’s Robot. The robot side software is open source, and runs on most Linux-based computers. There is even  an API that allows you to fully customize the experience. If you’re building your own, start here.

Most of the homebrew robots on Let’s Robot use the following components:

  • Raspberry Pi or other single-board computer. The newest Raspberry Pi has onboard Wi-Fi; you just need to point it at your access point.
  • SD card with Raspbian or NOOBS installed. You can follow our guide to get our software to run on your robot, and pair it with the site: letsrobot.tv/setup.
  • Microcontroller, such as Arduino. The Adafruit motor hat is also popular.
  • Camera to see
  • Microphone to hear
  • Speaker to let the robot talk
  • Body to hold all the parts
  • Motors and servos to move and drive around
  • LEDs and sensors to make things interesting
  • And a battery to power it all

A lot of devices and robots are already supported by Let’s Robot software, including the GoPiGo Robot, and Anki Cozmo. If you have an awesome robot just sitting on the shelf collecting some dust, this could be a great way to share it with everyone! There’s also a development kit called “Telly Bot” which works out of the box with the letsrobot.tv site. See you online!
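The chat-to-robot control loop at the heart of a build like this can be sketched in a few lines of Python. This is a hypothetical illustration, not the Let’s Robot client software itself: the `MotorDriver` stub, the command names, and the speed values are all assumptions, and on real hardware you would replace the stub with GPIO or motor-HAT calls.

```python
# Minimal sketch of a chat-command handler for a homebrew telepresence robot.
# MotorDriver is a stand-in for real motor hardware: it records each drive
# command instead of toggling GPIO pins or an Adafruit motor HAT.

class MotorDriver:
    """Stand-in for motor hardware; logs each (left, right) speed command."""
    def __init__(self):
        self.log = []

    def drive(self, left, right):
        # left/right wheel speeds, each in [-1.0, 1.0]
        self.log.append((left, right))

# Map single-word chat commands to (left, right) wheel speeds (assumed names).
COMMANDS = {
    "forward": (1.0, 1.0),
    "backward": (-1.0, -1.0),
    "left": (-0.5, 0.5),
    "right": (0.5, -0.5),
    "stop": (0.0, 0.0),
}

def handle_chat(message, motors):
    """Parse one chat message; execute it if it is a known command."""
    cmd = message.strip().lower()
    if cmd in COMMANDS:
        motors.drive(*COMMANDS[cmd])
        return True
    return False  # ordinary chatter is ignored

motors = MotorDriver()
for msg in ["forward", "hello robot!", "LEFT", "stop"]:
    handle_chat(msg, motors)

print(motors.log)  # three drive commands; "hello robot!" is ignored
```

In a real build, the loop above would be fed by the streaming platform’s chat feed rather than a fixed list, with `drive()` wired to actual motor pins.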

 

 

]]>
ANYbotics wins ICRA 2018 Robot Launch competition! https://robohub.org/anybotics-wins-icra-2018-robot-launch-competition/ Wed, 30 May 2018 20:48:44 +0000 http://robohub.org/anybotics-wins-icra-2018-robot-launch-competition/

The four-legged design of ANYmal allows the robot to conquer difficult terrain such as gravel, sand, and snow. Photo credit: ETH Zurich / Andreas Eggenberger.

ANYbotics led the way in the ICRA 2018 Robot Launch Startup Competition on May 22, 2018 at the Brisbane Convention & Exhibition Centre in Australia. Although ANYbotics pitched last of the 10 startups presenting, they clearly won over the judges and audience. As competition winners, ANYbotics received a $3,000 prize from QUT bluebox, Australia’s robotics accelerator (currently taking applications for 2018!), plus Silicon Valley Robotics membership and mentoring from The Robotics Hub.

ANYbotics is a Swiss startup creating fabulous four-legged robots like ANYmal, built around a core component: ANYdrive, a highly integrated modular robotic joint actuator. Founded in 2016 by a group of ETH Zurich engineers, ANYbotics is a spin-off company of the Robotic Systems Lab (RSL), ETH Zurich.

ANYmal moves and operates autonomously in challenging terrain and interacts safely with the environment. As a multi-purpose robot platform, it is applicable on industrial indoor or outdoor sites for inspection and manipulation tasks, in natural terrain or debris areas for search and rescue tasks, or on stage for animation and entertainment. Its four legs allow the robot to crawl, walk, run, dance, jump, climb, carry — whatever the task requires.

https://youtu.be/lESsdD3o78k

ANYdrive is a highly integrated modular robotic joint actuator that guarantees

  • very precise, low-impedance torque control,
  • high impact robustness,
  • safe interaction,
  • intermittent energy storage and peak power amplification

Motor, gear, titanium spring, sensors, and motor electronics are incorporated in a compact and sealed (IP67) unit and connected by an EtherCAT and power bus. With ANYdrive joint actuators, any kinematic structure, such as a robot arm or leg, can be built without additional bearings, encoders, or power electronics.

ANYdrive’s innovative design allows for highly dynamic movements and collision maneuvers without damage from impulsive contact forces, and at the same time for highly sensitive force controlled interaction with the environment. This is of special interest for robots that should interact with humans, such as collaborative and mobile robots.
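The torque-control idea behind a spring-in-series actuator of this kind can be illustrated with a short simulation. This is a generic textbook sketch of series-elastic torque control, not ANYdrive’s actual controller: joint torque is estimated from the deflection of the spring between motor and joint (τ = k·(θ_motor − θ_joint)), and a simple proportional law servos the motor position to track a desired torque. The stiffness and gain values are made up for illustration.

```python
# Illustrative series-elastic torque control loop (NOT ANYdrive's controller).
# Torque is sensed through spring deflection and regulated with a simple
# proportional law; stiffness and gain are arbitrary example values.

K_SPRING = 100.0   # spring stiffness [Nm/rad] (made-up value)
KP = 0.005         # proportional gain on torque error [rad/Nm] (made-up value)

def spring_torque(theta_motor, theta_joint):
    """Estimate joint torque from the deflection of the series spring."""
    return K_SPRING * (theta_motor - theta_joint)

def torque_control_step(theta_motor, theta_joint, tau_desired):
    """One control step: nudge the motor position to reduce torque error."""
    tau = spring_torque(theta_motor, theta_joint)
    error = tau_desired - tau
    return theta_motor + KP * error  # new motor position command

# Simulate tracking a 2 Nm torque command against a locked joint.
theta_motor, theta_joint = 0.0, 0.0
for _ in range(200):
    theta_motor = torque_control_step(theta_motor, theta_joint, tau_desired=2.0)

tau_final = spring_torque(theta_motor, theta_joint)
print(round(tau_final, 3))  # → 2.0
```

The soft spring is what makes the interaction "low-impedance": contact forces deflect the spring instead of slamming into a rigid gear train, which is why such designs tolerate impacts while still allowing sensitive force control.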

ICRA 2018 finalists and judges; Roland Siegwart from ETH Zurich, Juliana Lim from SGInnovate, Yotam Rosenbaum from QUT bluebox, Martin Duursma from Main Sequence Ventures and Chris Moehle from The Robotics Hub Fund.

The ICRA 2018 Robot Launch Startup Competition was judged by experienced roboticists, investors and entrepreneurs. Roland Siegwart is a Professor at ETH Zurich’s Autonomous Systems Lab and cofounder of many successful robotics spinouts. Juliana Lim is Head of Talent from SGInnovate, a Singapore venture capital arm specializing in pre-seed, seed, startup, early-stage, and Series A investments in deep technologies, starting with artificial intelligence (AI) and robotics.

Yotam Rosenbaum is the ICT Entrepreneur in Residence at QUT bluebox, building on successful exits from global startups. Martin Duursma is a venture partner in Main Sequence Ventures, Australia’s new innovation fund specializing in AI, robotics and deep tech like biotech, quantum computing and the space industry. Chris Moehle is the managing partner at The Robotics Hub Fund, who may invest up to $250,000 in the overall winner of the Robot Launch Startup Competition 2018.

Organized by Silicon Valley Robotics, the Robot Launch competition is in its 5th year and has seen hundreds of startups from more than 20 countries around the globe. The MC for the evening, Silicon Valley Robotics Director Andra Keay, said: “Some of the best robotics startups come from places like Switzerland or Australia, but to get funding and to grow fast, they usually need to spend some time in Silicon Valley.”

“The Robot Launch competition allows us to reach startups from all over the world and get them in front of top investors. Many of these startups have gone on to win major events and awards like TechCrunch Battlefield and CES Innovation Awards. So we know that robotics is also coming of age.”

As well as ANYbotics, the other 9 startups gave great pitches. In order of appearance they were:

  • Purple Robotics
  • Micromelon Robotics
  • EXGwear
  • HEBI Robotics
  • Abyss Solutions
  • EyeSyght
  • Niska Retail Robotics
  • Aubot
  • Sevensense

Purple Robotics creates drones for work, which fly 3x longer than, or carry 3x the payload of, existing commercial drones, thanks to their innovative design. They are not standard quadrocopters, but they use the same battery technology. Purple Robotics drones are also gust-resistant, providing maximum stability in the air and enabling them to fly closer to structures.

Micromelon creates a seamless integration between visual and text coding, with the ability to translate between the two languages in real time. Students and teachers are able to quickly begin programming the wireless robots. The teacher dashboard and software are designed to work together to assist teachers who may have minimal experience in coding to guide a class of students through the transition. Students can backtrack to blocks, see how their program looks as text, or view both at once, so they are supported throughout the entire journey.

EXGwear is currently developing a “hands-free”, intuitive interaction method in the form of a portable wearable device that is extremely compact, non-obtrusive, and comfortable to wear for long hours, to help disabled people solve their daily problems interacting with the environment. Our first product, EXGbuds, a customizable earbud-like device, is based on patent-pending biosensing technology and a machine-learning-enabled app. It can measure eye movement and facial expression physiological signals at extremely high accuracy to generate user-specific actionable commands for seamless interaction with smart IoT and robotic devices.

HEBI Robotics produces Lego-like robotic building blocks. Our platform consists of hardware and software that make it easy to design, build, and program world-class robots quickly. Our hardware platform is robust, flexible, and safe. Our cross-platform software development tools take care of the difficult math required to develop a robot so that the roboticist can focus on the creative aspects of robot design.

Abyss Solutions delivers key innovations in Remotely Operated Vehicles (ROVs) and sensor technology to collect high fidelity, multi-modal data comprehensively across underwater inspections. By pushing the state-of-the-art in machine learning and data analytics, accurate and efficient condition assessments can be conducted and used to build an asset database. The database is able to grow over repeat inspection and the objectivity of the analytics enables automated change tracking. The output is a comprehensive asset representation that can enable efficient risk management for critical infrastructure.

EyeSyght is TV for your fingers. As humans we use our senses to gather information, analyse the environment around us, and create a mental picture of our surroundings. But what about touch? When we operate our smartphones, tablets, and computers, we interact with a flat piece of glass. Now, through the use of haptic feedback, electrical impulses, and ultrasound, EyeSyght will enable any surface to render shapes, textures, depth, and much more.

Niska Retail Robotics is reimagining retail, starting with ice cream. “Customer demands are shifting away from products and towards services and experiences.” (CSIRO, 2017) Niska creates wonderful customer experiences with robot servers scooping out delicious gourmet ice cream for you, 24/7.

Aubot (‘au’ means ‘to meet’ in Japanese; pronounced “our-bot”) is focused on building robots that help us in our everyday lives. The company was founded in April 2013 by Marita Cheng, Young Australian of the Year 2012. Our first product, Teleport, is a telepresence robot. Teleport will reduce people’s need to travel while allowing them greater freedom to explore new surroundings. In the future, aubot aims to combine Jeva and Teleport to create a telepresence robot with an arm attached.

Sevensense (still based at the ETH Zurich Autonomous Systems Lab) provides a visual localization system tailored to the needs of professional service robots. The use of cameras instead of laser rangefinders enables our product to perform more reliably, particularly in dynamic and geometrically ambiguous environments, and allows for a cost advantage. In addition, we offer market-specific application modules along with the engineering services to successfully apply our product on the customer’s machinery.

We thank all the startups for sharing their pitches with us – the main hall at ICRA was packed and we look forward to hearing from more startups in the next rounds of Robot Launch 2018.

]]>
Robot Launch 2018 in full swing – like Tennibot! https://robohub.org/robot-launch-2018-in-full-swing-like-tennibot/ Tue, 24 Apr 2018 16:00:20 +0000 http://robohub.org/robot-launch-2018-in-full-swing-like-tennibot/ With the Robot Launch 2018 competition in full swing – deadline May 15 for entries wanting to compete on stage in Brisbane at ICRA 2018 – we thought it was time to look at last years’ Robot Launch finalists. And a very successful bunch they are too!

Tennibot won the CES 2018 Innovation Award, was covered in media like Times, Discovery Channel and LA Times.  Tennibot also won $40,000 from the Alabama Launchpad competition and are launching a crowdfunding campaign today!

Tennibot uses computer vision and artificial intelligence to locate/pick up tennis balls and navigate on the court. Tennibot is the world’s first autonomous tennis ball collector. The Tennibot team has already won the Tennis Industry Innovation Challenge. So, if you think that Tennis + Robots = Your kind of sport – then head over to Tennibot.com to learn more and purchase your Tennibot before it’s too late!
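Color-based detection of the kind a tennis-ball-collecting robot's vision system might build on can be sketched in pure Python. This is a toy illustration under assumed thresholds, not Tennibot's actual algorithm: it scans a tiny synthetic RGB image for the optic yellow of a tennis ball and returns the centroid of the matching pixels, which a robot could then steer toward.

```python
# Toy color-threshold "ball detector" (illustrative only; not Tennibot's code).
# Scans an RGB image (list of rows of (r, g, b) tuples) for pixels in the
# optic-yellow range of a tennis ball and returns the centroid of the matches.

def is_ball_pixel(r, g, b):
    """Rough threshold for tennis-ball optic yellow (assumed values)."""
    return r > 150 and g > 180 and b < 120

def find_ball(image):
    """Return the (row, col) centroid of ball-colored pixels, or None."""
    hits = [(y, x)
            for y, row in enumerate(image)
            for x, (r, g, b) in enumerate(row)
            if is_ball_pixel(r, g, b)]
    if not hits:
        return None
    cy = sum(y for y, _ in hits) / len(hits)
    cx = sum(x for _, x in hits) / len(hits)
    return (cy, cx)

# A 4x4 synthetic image: green court with a 2x2 yellow "ball" at lower right.
COURT = (40, 90, 50)
BALL = (220, 230, 60)
image = [[COURT] * 4 for _ in range(4)]
for y in (2, 3):
    for x in (2, 3):
        image[y][x] = BALL

print(find_ball(image))  # → (2.5, 2.5)
```

A production system would work on real camera frames (typically in HSV space, with a trained detector rather than fixed thresholds), but the locate-then-navigate structure is the same.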

Other 2017 finalists include 

  • Semio, from California, has a software platform for developing and deploying social robot skills.
  • Apellix, from Florida, provides software-controlled aerial robotic systems that use tethered and untethered drones to move workers out of harm’s way.
  • Mothership Aeronautics, from Silicon Valley, has a solar-powered drone capable of ‘infinity cruise’, where more power is generated than consumed.
  • Kinema Systems, an impressive approach to logistics challenges from the original Silicon Valley team that developed ROS.
  • BotsandUs, a highly awarded UK startup with a beautifully designed social robot for retail.
  • Fotokite, a smart team from ETH Zurich with a unique approach to using drones in large-scale venues.
  • C2RO, from Canada, is creating an expansive cloud-based AI platform for service robots.
  • krtkl, from Silicon Valley, makes a high-end embedded board designed for both prototyping and deployment.

Apellix was also the winner of the Automate 2017 startup competition. Mothership has raised a $1.25 million seed round from the likes of Draper Ventures. Kinema Systems has just won the NVIDIA Inception Challenge out of more than 200 entrants, splitting the $1 million prize money with two other AI startups. BotsAndUs have trialled Bo in more than 11,000 customer service interactions. Krtkl is focused on revenue, not fundraising; C2RO is building partnerships with companies like Qihan; and Fotokite just won the $1 million Genius NY competition.

Fotokite CEO Christopher McCall holds a ceremonial $1 million check after the company received the top prize in the Genius NY business competition at the Marriott Syracuse Downtown on Monday. (Rick Moriarty | rmoriarty@syracuse.com) via Syracuse.com

 

NVIDIA CEO and founder Jensen Huang, Kinema Systems CEO and founder Sachin Chitta, NVIDIA founder Chris Malachowsky, NVIDIA VP of healthcare and AI business development Kimberly Powell. via NVIDIA blog

You can watch the pitch presentation here: https://youtu.be/BzcrREvD8k0

You’ll also see some other familiar names from the shortlist for 2017, not to mention lots of success for our 2016 top startups. We can’t wait to see who will be finalists in 2018!

The Robot Launch startup competition has been running since 2014 and has helped robotics startups reach investors, build a reputation and grow their markets. We’ve had entries from all over the world and one of the significant trends has been how rapidly the stage of startup entrants has advanced. We now judge startups in several divisions: Preseed, Seed and PostSeed (or Pre Series A)

Do you have a startup idea, a prototype or a seed stage startup in robotics, sensors or AI?

Submit your entries by May 15 2018, if you want to be selected to pitch on the main stage of ICRA 2018 on May 22 in Brisbane Australia for a chance to win $3000 AUD prize from QUT bluebox!

The top 10 startups will pitch live on stage to a panel of investors and mentors including:

  • Martin Duursma, Main Sequence Ventures
  • Chris Moehle, The Robotics Hub Fund
  • Yotam Rosenbaum, QUT bluebox
  • Roland Siegwart, ETH Zurich

Entries are also in the running for a place in the QUT bluebox accelerator*, the Silicon Valley Robotics Accelerator*, mentorship from all the VC judges and potential investment of up to $250,000 from The Robotics Hub Fund*.  (*conditions apply – details on application)

CONDITIONS:

Pre Seed category consists of an idea and proof of concept or prototype – customer validation is also desirable.

Seed category consists of a startup younger than 24 months, with less than $250k previous investment.

Post Seed category consists of a startup younger than 36 months, with less than $2.5m previous investment.

CAN’T MAKE IT TO AUSTRALIA?

No problems, mate! We’ll be continuing the Robot Launch competition with additional rounds in the US and in Europe throughout the summer. Go ahead and enter now anyway!

Enter the Robot Launch Startup Competition at ICRA 2018 here.

FOR YOUR GUIDE ON GOOD PITCH DOCUMENTS:

A sample Investor One Pager can be seen here. And your pitch should cover the content described in Nathan Gold’s 13 slide format.

 

]]>
Robotics innovations at CES 2018 https://robohub.org/robotics-innovations-at-ces2018/ Mon, 15 Jan 2018 17:58:55 +0000 http://robohub.org/robotics-innovations-at-ces2018/

The 2018 Nissan Leaf receives CES2018 Tech For a Better World Innovation Award.

Cars, cars, cars, cars. CES2018, the Consumer Technology Association’s massive annual expo, was full of self-driving, electric, and augmented cars. Every hardware startup should visit CES before they build anything. It has to be the most humbling experience any small robotics startup could have. CES2018 is what big marketing budgets look like. And as robotics shifts more and more to consumer-facing products, this is what the competition looks like.

CES2018 covered a massive record breaking 2.75 million net square feet of exhibit space, featuring more than 3,900 exhibitors, including some 900 startups in the Eureka Park Innovation Zone. More than 20,000 products launched at CES 2018.

Whill’s new Model Ci intelligent personal electric vehicle

Robomart’s self driving vehicles will bring you fresh food

“The future of innovation is on display this week at CES, with technology that will empower consumers and change our world for the better,” said Gary Shapiro, president and CEO, CTA. “Every major industry is represented here at CES 2018, with global brands and a record-number of startups unveiling products that will revolutionize how we live, work and play. From the latest in self-driving vehicles, smart cities, AI, sports tech, robotics, health and fitness tech and more, the innovation at CES 2018 will further global business and spur new jobs and new markets around the world.”

In 2014, we helped produce a special “Robots on the Runway” event to bring robotics to CES. Fast forward four short years and new robots were everywhere at CES2018, ranging from agbots, tennisbots, drones, robot arms, robot prosthetics and robot wheelchairs, to the smart home companion and security robots.

Tennibot, a Robot Launch 2018 Finalist and CES2018 Innovation Award Winner

Soft Robotics a CES2018 Innovation Award Winner

It was inspiring to see so many Silicon Valley Robotics members or Robot Launch startup competition alumni winning Innovation Awards at CES2018, including Sproutel’s My Aflac Duck, Soft Robotics, Tennibot, Foldimate, Whill, Buddy from Blue Frog Robotics and Omron Adept’s robot playing ping pong.

Buddy from Blue Frog Robotics

For startups the big question is – do you build the car? Or the drone? Or do you do something innovative with the hardware, or create a platform for it? CES2018 is also shifting towards industrial and enterprise facing with their new Smart Cities Marketplace, joining the AI, Robotics, AR & VR marketplaces, and a slew of others.

With some 300,000 net square feet of automotive exhibit space, the vehicle footprint at CES makes it the fifth largest stand-alone automotive show in the U.S., and the exhibits were backed up by conference sessions with politicians and policy makers.

Intel celebrated innovation, explored what’s next for big data and set a Guinness World Record with its Shooting Star Mini Drone show – a software-controlled fleet of 100 drones flown without GPS by a single pilot.

CES2018 was also a little conflicted about the rise of robotics. The marketing message this year was “Let’s Go Humans”, celebrating human augmentation. However, as the second ad panel shows, CTA recognizes that their main attraction is showcasing the latest technologies, not necessarily the greatest technologies.

And from the looks of crowds around certain exhibits in the CES2018 Innovation Zone, after the carfest that was this year’s CES, all things Quantum will be the next big frontier. But I don’t think you have to be a Goliath at CES2018 to win the hardware market. I was most impressed by a couple of ‘Davids’ not ‘Goliaths’ in my admittedly very short CES2018 tour.

IBM’s quantum computing display at CES2018 Innovation Zone

Vesper’s VM1010 – First ZeroPower Listening MEMS Microphone

For example, Vesper’s VM1010 is the first ZeroPower Listening MEMS microphone – a piezoelectric embedded microphone chip that enables virtually powerless voice recognition. With voice interfaces being the primary method of communicating with all of these robots and smart devices, this little chip is worth its weight in cryptocurrency.

And there were robot pets everywhere. But forget the robot dogs and robot cats, shiny and metallic, plastic pastel or fur covered and cute as they were. Once again, I’m betting on the David of the field and plugging Petronics’ Mousr. I was as hooked as a feline on catnip when I saw the smart mouse in action. After a successful Kickstarter, Mousr is now available for preorder with expected delivery in March 2018. I’ve been badly burned ordering robots from crowdfunding campaigns more than once, but I ordered a Mousr anyway.

One of many many robot pets at CES2018. But gorgeous!

Mousr, the tail that wags the cat from Petronics

CES2018 also predicted that 2018 tech industry revenue will reach $351 billion – a 3.9 percent increase over 2017. For the slides and more information, visit here.

]]>
Robot Launch 2017: Deadline Sept 3 https://robohub.org/robot-launch-2017-call-for-startups/ Wed, 16 Aug 2017 10:02:16 +0000 http://robohub.org/robot-launch-2017-call-for-startups/

The Robotics Hub, in collaboration with Silicon Valley Robotics, is looking to invest up to $500,000 in robotics, AI and sensor startups! Finalists also receive exposure on Robohub and space in the new Silicon Valley Robotics Cowork Space. Plus you get to pitch your startup to an audience of top VCs, investors and experts. Entries close Sept 3.

In previous Robot Launch competitions we’ve had hundreds of entries from more than 20 countries around the world. Our finalists have reached the finals of major startup competitions like TechCrunch Disrupt, and gone on to raise millions of dollars in funding and form strong industry partnerships, such as working with the Siemens Frontier Program.

Our semifinalists will also be featured on Robohub, which means they’ll reach an audience of approximately 100,000 viewers. Everyone who enters gets incredibly valuable feedback from top robotics VCs, investors and experts.

CRITERIA: Your startup should be under 5 years old, with less than $2 million in funding. You should have a great new robotics technology and business model. Your startup is related to robotics, AI, simulation, sensors or autonomous vehicles. ENTER NOW.

Robot Launch is supported by Silicon Valley Robotics to help more robotics startups present their technology and business models to prominent investors. Silicon Valley Robotics is the not-for-profit industry group supporting innovation and commercialization in robotics technologies. The Robotics Hub is the first investor in advanced robotics and AI startups, helping them get from ‘zero to one’ with its network of robotics and market experts.

Please share this in your networks and let us know if you’d like to be a judge, mentor or can offer a prize for Robot Launch 2017. Just email Andra [andra @ robotlaunch.com].

Learn more about previous Robot Launch competitions here.

]]>
ARM Institute West Coast meeting review https://robohub.org/arm-institute-west-coast-meeting-review/ Thu, 29 Jun 2017 09:16:38 +0000 http://robohub.org/arm-institute-west-coast-meeting-review/

Robotics manufacturing in the US will be getting federal support to match business or startup investments via the new Advanced Robotics Manufacturing (ARM) Institute. Perhaps more importantly, the ARM Institute can act as a conduit to connect and amplify robotics innovations between regions of the USA. As the global robotics ecosystem becomes flooded with interest, and investors, any technological lead the USA currently has is rapidly disappearing.

The ARM Institute is now one of 14 Manufacturing USA institutes and the 8th funded by the DOD. Each Manufacturing USA Institute focuses on a technology area critical to future competitiveness – such as additive manufacturing, integrated photonics, or smart sensors.  The federal government has committed over $1 billion, matched by over $2 billion in non-federal investment, across the Manufacturing USA network.

At the time of the launch, the ARM Institute had attracted 267 industry and academic partners, with a commitment of $173 million to be added to the $80 million DOD investment. Of course, this level of funding is not very much when compared to European and Asian investments into advanced manufacturing.

The European SPARC program is the largest research and innovation initiative in civilian robotics in the world. It was launched in 2014 by the joint public-private partnership between the European Commission and the robotics industry and academia. Investments under this joint initiative are expected to reach 2.8 billion euro with 700 million euro in financial investments coming from the European Commission under Horizon 2020 over 7 years.

Image via UK-RAS Network

In 2015, Japanese Prime Minister Shinzo Abe called for a “robotics revolution”, a five-year plan to increase the use of intelligent machines and boost sales. Abe urged companies to “spread the use of robotics from large-scale factories to every corner of our economy and society”, increasing the use of intelligent machines in manufacturing, supply chains, construction, and health care, while expanding robotics sales from 600 billion yen ($6.4 billion) annually to 2.4 trillion yen by 2020. The 2020 Olympics being hosted in Tokyo is also pushing innovation forward rapidly, with plans for Tokyo to have self-driving vehicles, and robots in as many places as possible.

The Korean government is also planning to invest 500 billion won into robotics manufacturing, between 2016 and 2020, in over 80 pilot projects and corporate research centers for robotics manufacturing. After growing at a compound rate of 17 per cent a year, the robot market will be worth $135bn by 2019, according to IDC, a tech research firm, with Asia now accounting for the majority of all robot spending. In 2014, President Xi Jinping of China called for a “robot revolution” that would transform first China, and then the world. “Our country will be the biggest market for robots,” he said in a speech to the Chinese Academy of Sciences, “but can our technology and manufacturing capacity cope with the competition? Not only do we need to upgrade our robots, we also need to capture markets in many places.”

China is now the world’s largest purchaser of industrial robots from overseas, but is now creating Chinese robot manufacturing companies. China is also now the leader in global robotics patent filing. The ARM Institute in the USA will need to use their comparatively small amount of federal grant money with very clear focus in order to maintain competitiveness in the global environment.

“Robotics are increasingly necessary to achieve the level of precision required for defense and other industrial manufacturing needs,” according to the Department of Defense (DOD), “but the capital cost and complexity of use often limits small to mid-size manufacturers from utilizing the technology. The ARM Institute’s mission therefore is to create and then deploy robotic technology by integrating the diverse collection of industry practices and institutional knowledge across many disciplines – sensor technologies, end-effector development, software and artificial intelligence, materials science, human and machine behavior modeling, and quality assurance – to realize the promises of a robust manufacturing innovation ecosystem.”

On June 15, the ARM Institute held an Informational and Technology Review Meeting in Los Angeles for ecosystem partners from all over the US. As one of the industry partners, Silicon Valley Robotics was represented, and I’m providing a summary of my notes on the event and presentations. Speakers were: SK Gupta, USC; Howie Choset, ARM Institute; Lisa Masciantonio, ARM Institute; Suzy Teele, ARM Institute; Elena Messina, NIST; and Valerie Patrick, Boston Consulting Group; followed by breakout sessions for specific technological areas.

The meeting had two main purposes. The first was to show how rapidly the ARM Institute is taking shape in terms of organization, staff and membership structure, and to walk members and potential members through the membership and project application processes. While staying faithful to the original mission documents in the 2016 proposal, the Institute is now able to take formal memberships, not just letters of intent. It is still recruiting for the position of CEO and other vacancies, although some interim appointments are simply working through a more stringent vetting process. It should also be noted that the DOD has said there will be money assigned to projects outside of the ARM Institute funding, where a company or startup might receive an extension of ARM funding or a completely separate commission.

The second purpose was to align the ARM Institute with the most up-to-date industry challenges. The intention is to develop regional councils and hold regular meetings in order to stay at the forefront of the rapid change in robotics manufacturing innovation and to deliver meaningful assistance. Silicon Valley Robotics members can contact me for a full copy of my notes.

]]>
Silicon Valley Robot Block Party attracts over 1000 attendees https://robohub.org/silicon-valley-robot-block-party-attracts-over-1000-attendees/ Thu, 20 Apr 2017 10:29:51 +0000 http://robohub.org/silicon-valley-robot-block-party-attracts-over-1000-attendees/

The 2017 Silicon Valley Robot Block Party set a new high for attendance with over 1000 robot fans plus investors, exhibitors and media. 45 different companies, organizations and groups were represented on the day, April 12, 2017, and the Jabil Blue Sky Innovation Center proved to be the perfect host for what is now the longest running National Robotics Week celebration.

“Robotics has emerged as one of the most important technologies in the 21st century, impacting almost every part of society, from self-driving cars, to improved outcomes in medicine, to taking care of our aging parents, to teaching our next generation of engineers and scientists,” says John Dulchinos, VP Strategic Capabilities, Jabil. “Silicon Valley has become one of the leading areas for the advancement and commercialization of robotics technologies.”

It would be hard to pick a star of the show when watching the smiles on children’s faces throughout the day. There were big robots, small robots, mobile robots, robot arms, humanoid robots, toy robots, robots you could ride on or in and even robot insects. The Robot Block Party is a blend of professional robotics, the latest in robotics research and startup innovation, and school clubs, hobbyists and makers, so the event provided entertainment for all from investors to juniors.

Companies at the Robot Block Party: Jabil, Radius Innovation, Intel, Fetch Robotics, EandM Robotics, SICK Sensors, Harmonic Drive, SRI International, Toyota Research Institute, Savioke, ABB Robotics, Olympus Controls, Universal Robots, Starship Technology, Zume Pizza, Silicon Segway, SAKE Robotics, MITSUI Chemicals, NorthEastern University, BEST Tensegrity Lab UC Berkeley, Catalia Health, Chime, Electric Movement, Augmented Pixels, Viking Team 6688, Central Park STEAM Robotics, RoboTerra, Dash Robotics, Techy Kids, USPTO, SF Drone School, sUAS News, GIGAmacro, Homebrew Robotics, Ubiquity Robotics, Point1 Seconds, EBSB, Emoshape, Carrender Robotics, Robot Garden, Tempo Automation, RMUS Dynamics, Beetl Robotics, krtkl, Greppy, and Let’s Robot.

This year there were speakers at the Robot Block Party, with a keynote from Chris Anderson, CEO of 3D Robotics. Anderson talked about the power of the democratization of technology and how the continuing reduction of costs leads to the exponential expansion of access to technology, and then to innovations. His new initiative DIY Robocars is following the same path as DIY Drones – a consumer open source drone community. The success of DIY Drones heralded the successful commercialization of the drone industry and led Anderson to found 3D Robotics. The transition to 1/10th-scale autonomous vehicle building seems like a timely reflection of the increasing commercialization and accessibility of self-driving vehicles.

John Dulchinos, President of Silicon Valley Robotics and VP of Global Automation at Jabil, spoke about the role of robotics in reshoring manufacturing and ensuring American economic growth. Andrew Dresner, Principal Engineer at Interbotix Labs, described many uses of the Intel Joule in developing innovative robotics projects.

Alex Kernbaum of SRI International showcased a range of new technologies from MotoBot to Microbot Factories. Rich Mahoney talked about Superflex, a smart assistive garment and SRI spinout that is now being commercialized in a range of scenarios, from assisting elders and children with muscular dystrophy, to helping factory workers.

We also heard from UC Berkeley’s BEST Tensegrity Lab which is creating new robotics skeletons and structures suitable for space exploration. Will Vaughan, from Savioke talked about some upcoming new roles for Relay the hotel delivery robot. And Robert DeNeve, from Brite Lab talked about the benefits of local manufacturing.

Another innovation at the 2017 Robot Block Party was the VC office hours and startup pitch competition. Many startups applied, and 6 finalists battled it out live on stage with 5-minute pitches to a panel of judges: Heather Andrus, Managing Director of Radius Innovations; Tobin Fisher, CEO/Founder of Vantage Robotics; John Dulchinos, President of Silicon Valley Robotics and VP Global Automation at Jabil; and Cyril Ebersweiler, CEO/Founder of HAX.

The startups ranged from drone technology to virtual reality with robots and the winning pitch came from Ross Mead, founder and CEO of Semio. Semio has a developer framework for building social interactions for any robot, something the judges saw as having enormous potential in this new world of social and collaborative robots.

See some of the highlights from ABC 7 and NTDTV International.


Photos from the event

]]>
Congratulations to Apellix: 2017 Automate Launch Pad winner https://robohub.org/congratulations-to-apellix-2017-automate-launch-pad-winner/ Tue, 11 Apr 2017 09:00:59 +0000 http://robohub.org/congratulations-to-apellix-2017-automate-launch-pad-winner/

Apellix worker bee. Source: YouTube

Congratulations to Apellix, the winner of the Automate 2017 Startup Launch Pad competition. Honorable mentions go to Kinema Systems and SAKE Robotics. Apellix received a $10,000 check sponsored by GE Ventures. The judging panel consisted of Steve Taub, GE Ventures; Oliver Mitchell, Autonomy Ventures; Chris Moehle, Robotics Hub; and Melonee Wise, Fetch Robotics.

The field of startups was very strong this year, and the 8 finalists represented a broad range of new applications with relevance to industrial robotics, but not exclusively industrial. Silicon Valley Robotics was proud to organize this biennial event with A3 and the Robotic Industries Association.

Oliver Mitchell from Autonomy Ventures has posted a great rundown of the competition and all the finalists at his blog – The Robot Rabbi.

Here’s a short summary of the finalists:

Automate Launch Pad Competition Finalists
Andros Robotics – Enabling low-cost collaborative robots, and commoditizing force control expertise in the custom motion system development market. Robots equipped with CFCM actuators will have truly collaborative qualities, enabling safety and modes of operation like teach-and-replay, thanks to force feedback and high inherent back-drivability.
Apellix – Platform-as-a-Service for industrial workers performing critical but dangerous tasks. The patent-pending Apellix Worker Bee robotics system physically interacts with and modifies its environment to move workers out of harm’s way.
Augmented Pixels – Localization and mapping technology (SLAM SDK) optimized for low CPU usage. The company works on the development of an advanced platform for autonomous navigation for drones and robots in GPS-denied environments.  It also develops a hardware-optimized solution for indoor navigation for mobile phones and AR/VR glasses with low power consumption.
HEBI Robotics – Modular series elastic actuators designed to function as full-featured robotic components. The modules quickly create custom robots of virtually any configuration, from wheeled robots to collaborative robotic arms with multiple degrees of freedom.
Kinema Systems – Addresses the depalletizing problem where boxes are picked off a pallet and placed onto a conveyor. The Kinema Pick product combines a custom 3D/2D sensor with 3D vision, deep learning, and motion-planning software to provide an easily configurable solution for end-customers. By design, the Kinema Pick self-learns and does not require extensive individual training before it can start operating.
Robotic Materials – Integrated tactile sensing and robotic manipulation. The patent-pending sensors and control system is the first and only effective tactile sensing solution designed to improve and expand collaborative robot applications. The combination of proximity, contact, and force sensing enables robots to accurately identify, grasp, and manipulate previously unknown parts, such as changing CNC parts in a manufacturing environment without expensive reprogramming.
SAKE Robotics – Robotic grippers that are inexpensive, durable, lightweight and very capable, for use in service robotics. The core technologies include a tendon-based architecture that is low wear, super strong, and very scalable.
Vention – A machine-design platform, enabling users to build machines from a web-browser in just a few days. The platform is an “AI-enabled” cloud CAD application that integrates an ever-growing library of industrial “Lego-style” modules. Structural, motion, and control parts are fully compatible with one another, saving time typically wasted in compatibility assessment. Upon design completion, users can purchase their design directly from the 3D interface.
]]>
Japan’s World Robot Summit posts challenges for teams https://robohub.org/japans-world-robot-summit-posts-challenges-for-teams/ Mon, 20 Mar 2017 15:21:18 +0000 http://robohub.org/japans-world-robot-summit-posts-challenges-for-teams/

Japan is holding a huge robot celebration in 2018 in Tokyo and in 2020 in Aichi and Fukushima, hosted by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). This is a commercial robotics Expo and a series of robotics Challenges, with the goal of bringing together experts from around the world to advance human-focused robotics.

The World Robot Summit website was just launched on March 2, 2017. The results of tenders for standard robot platforms for the competitions will be announced soon, and the first trials for competition teams should happen in summer 2017.

There are a total of 8 challenges that fall into 4 categories: Industrial Robotics, Service Robotics, Disaster Robotics and Junior.

Industrial: Assembly Challenge – quick and accurate assembly of model products containing technical components required in assembling industrial products and other goods.

Service: Partner Robot Challenge – setting tasks equivalent to housework and making robots that complete such tasks – utilizing a standard robot platform.

Service: Automation of Retail Work Challenge – making robots that complete retail tasks, e.g. stocking and replenishing shelves with multiple types of products such as foods, interacting with customers and staff, and cleaning restrooms.

Disaster: Plant Disaster Prevention Challenge – inspecting or maintaining infrastructure based on set standards, e.g. opening and closing valves, exchanging consumable supplies, and searching for disaster victims.

Disaster: Tunnel Disaster Response and Recovery Challenge – collecting information and providing emergency response in case of a tunnel disaster, e.g. saving lives and removing vehicles from tunnels.

Disaster: Standard Disaster Robotics Challenge – assessing standard performance levels, e.g. mobility, sensing, information collection, wireless communication, remote control, on-site deployment and durability, required in disaster prevention and response.

Junior (aged 19 or younger): School Robot Challenge – making robots to complete tasks that might be useful in a school environment – utilizing a standard robot platform.

Junior (aged 19 or younger): Home Robot Challenge – setting tasks equivalent to housework and making robots that complete such tasks.

The World Robot Summit, Challenge, Expo and Symposiums are looking for potential teams and major sponsors. 

For more information, you can email: Wrs@keieiken.co.jp

]]>
Supporting Women in Robotics on International Women’s Day.. and beyond. https://robohub.org/supporting-women-in-robotics-on-international-womens-day-and-beyond/ Wed, 08 Mar 2017 19:40:37 +0000 http://robohub.org/supporting-women-in-robotics-on-international-womens-day-and-beyond/ International Women’s Day is raising discussion about the lack of diversity and role models in STEM and the potential negative outcomes of bias and stereotyping in robotics and AI. Let’s balance the words with positive actions. Here’s what we can all do to support women in robotics and AI, and thus improve diversity, innovation and reduce skills shortages for robotics and AI.

Join WomeninRobotics.org – a network of women working in robotics (or who aspire to work in robotics). We are a global discussion group supporting local events that bring women together for peer networking. We recognize that lack of support and mentorship in the workplace holds women back, particularly if there is only one woman in an organization/company.

Although the main group is only for women, we are going to start something for male ‘Allies’ or ‘Champions’. So men, you can join women in robotics too! Women need champions and while it would be ideal to have an equal number of women in leadership roles, until then, companies can improve their hiring and retention by having visible and vocal male allies. We all need mentors as our careers progress.

Women also need visibility and high-profile projects for their careers to progress on par. One way of improving that is to showcase the achievements of women in robotics. Read and share all four years’ worth of our annual “25 Women in Robotics you need to know about” – that’s more than 100 women already, because we have some groups in there. (There have always been a lot of women on the core team at Robohub.org, so we love showing our support.) Our next edition will come out on October 10, 2017 to celebrate Ada Lovelace Day.

Change starts at the top of an organization. It’s very hard to hire women if you don’t have any women, or if they can’t see pathways for advancement in your organization. However, there are many things you can do to improve your hiring practices. Some are surprisingly simple, yet effective. I’ve collected a list and posted it at Silicon Valley Robotics – How to hire women.

And you can invest in women entrepreneurs. Studies show that you get a higher rate of return, and a higher likelihood of success, from investments in female founders. And yet, proportionately, investment in women is much less. You don’t need to be a VC to invest in women either. Kiva.org is matching loans today, and $25 can empower an entrepreneur anywhere in the world. #InvestInHer

And our next Silicon Valley/ San Francisco Women in Robotics event will be on March 22 at SoftBank Robotics – we’d love to see you there – or in support!

]]>
Yves Behar designs a security robot for Cobalt Robotics https://robohub.org/yves-behar-designs-a-security-robot-for-cobalt-robotics/ Thu, 02 Mar 2017 14:00:41 +0000 http://robohub.org/yves-behar-designs-a-security-robot-for-cobalt-robotics/

Cobalt Robotics has launched their stylish security robot. The robot was designed by Yves Behar, and as a fabric-covered robot, it’s putting a new spin on soft robotics! Behar’s goal was to create a robot that didn’t conform to Hollywood stereotypes but instead serves as an augmentation of human ability and an enhancement to the human environment.

“Creating the right form for Cobalt is crucial to its success. As a service for security and concierge, it becomes part of an office culture. This balance between approachability and discretion became a thematic challenge throughout the design process. We decided that the robot should not adopt a humanoid personality. Instead, it should aesthetically align with the furniture and décor of the office environment. The Cobalt robot’s semi-cylindrical self-driving mechanism, sensors and cameras are covered by a tensile fabric skirt. This helps maximize the access and usability of the internal technologies, creates airflow to prevent overheating, and conveys a soft and friendly persona.” said Behar.

Cobalt Robotics was founded by Travis Deyle and Erik Schluntz, former GoogleX and SpaceX engineers. After a thorough analysis of the various emerging service robotics industries, they focused on the security industry rather than retail, logistics, or hospitality, because the economics made the most sense.

“A fleet of Cobalt robots is comparable to an extremely competent guard with superhuman capabilities and omnipresent situational awareness across an entire organization,” said Cobalt CEO and Co-Founder Travis Deyle.

Security is necessary but it’s often cost prohibitive for companies to provide a 24-hour security presence. The Cobalt robot allows security to have a presence so that they can remote in, see what’s going on, look for intruders, and it also serves a purpose for the employees. If something bad happens, it’s currently on the employee to either call the police or fumble around looking for the security number of their corporate office. Cobalt lets them go up to the robot and immediately get a person to talk to.

“One of the core fundamental values of Cobalt is to enable human-to-machine interactions,” said Erik Schluntz, Cobalt CTO and Co-Founder. “The way we do that is designing a robot to interact with and around people.”

Cobalt worked with world-renowned designer Yves Behar and his company, fuseproject, to define the robot’s form and interactions.

“As robotics and AI touch more areas of our daily lives, the role of the designer is to make these technologies accessible, augment our abilities and create our best possible future,” says industrial designer Yves Behar. “The Cobalt design is very different in that it is made of fabric and aluminum, an aesthetic more akin to furniture and workspaces than a Hollywood robot.”

Using extremely capable sensors (day-night 360° cameras, thermal cameras, depth cameras, LIDAR, etc.) and cutting-edge algorithms (machine learning, semantic mapping, novelty detection, and deep neural networks), the Cobalt robot detects and flags security-relevant conditions or anomalies — things like people, doors & windows, suspicious items, items that have moved or changed, and water leaks. Bloomberg Beta and Promus Ventures led Cobalt’s seed round, with participation from Haystack, Subtraction Capital, Comet Labs and various individual angel investors.

“Our fund has been searching for the most immediately useful applications of robotics, and Cobalt has found one. We look forward to seeing safer and better workplaces, served by Cobalt,” said Roy Bahat, head of Bloomberg Beta.

]]>
Shakey is first robot to receive IEEE Milestone award https://robohub.org/shakey-is-first-robot-to-receive-ieee-milestone-award/ Tue, 28 Feb 2017 14:00:22 +0000 http://robohub.org/shakey-is-first-robot-to-recieve-ieee-milestone-award/

Shakey the Robot, the world’s first mobile intelligent robot, developed at SRI International between 1966 and 1972, was the first robot to be honored with a prestigious IEEE Milestone in Electrical Engineering and Computing. The IEEE Milestone program honors significant inventions, locations or events related to electrical engineering and computing that have benefitted humanity, and which are at least 25 years old.

“Shakey was groundbreaking in its ability to perceive, reason about and act in its surroundings,” said Bill Mark, Ph.D., president of SRI’s Information and Computing Sciences Division. “We are thrilled that Shakey has received this prestigious recognition from the IEEE as it is a testament to its profound influence on modern robotics and AI techniques even to this day.”

The original Shakey robot is on display at the Computer History Museum, where it is the centerpiece of the Artificial Intelligence portion of its “Revolution: The First 2000 Years of Computing” exhibition. In 1970, Life magazine referred to Shakey as the “first electronic person”, and National Geographic also carried a picture of Shakey in an article on the present uses and future possibilities of computers. Shakey was also inducted into Carnegie Mellon’s Robot Hall of Fame in 2004.

The Shakey project was initiated by Charles A. Rosen, who envisioned it not just as a “mobile automaton”, but as an experimental platform for integrating all the subfields of artificial intelligence as then understood. Logical reasoning, autonomous plan creation, robust real-world plan execution, machine learning, computer vision, navigation, and communication in ordinary English were integrated in a physical system for the first time. Nils J. Nilsson, Bertram Raphael and Peter E. Hart led the project subsequent to Rosen.

In more specific technical terms, Shakey is historically significant for three distinct reasons: (1) Its control software was structured—a first for robots—in a layered architecture that became a model for subsequent robots; (2) Its computer vision, planning and navigation methods have been used not only in many subsequent robots but in a wide variety of consumer and industrial applications; and (3) Shakey served as an existence proof that encouraged later developers to develop more advanced robots.

The significance of these contributions is captured by an unsolicited quote from James Kuffner, who, as of 2016, had led robotics research at Google for seven years. In a private communication, he wrote: “It is truly amazing how both in terms of architecture and algorithms the Shakey project was ahead of its time and became a model for future robot systems for half a century”.

1.1 Layered Control Software for Robots

Shakey’s control software was structured as a multi-level hierarchy with physical actions at the lowest levels, autonomous planning in a middle level, and plan execution (with error recovery) at the top level [Ref. 1: SRI-AIC Tech Note 323]. This design has been adopted by many subsequent robots. An outstanding example is STANLEY, the self-driving vehicle that won the DARPA Grand Challenge in 2005 for driving itself across the Mojave Desert. Sebastian Thrun, the project leader, wrote (in a personal communication), “…at the core we had layers just like Shakey. Figure 5 in this paper summarizes the high-level software architecture, which should look familiar.” [Ref. 2: Thrun] An inspection of Figure 5 will confirm the layered software design at the core of the “Planning and Control” section.
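A minimal sketch of the idea (not SRI’s actual code — all class and goal names here are hypothetical) helps make the three-layer structure concrete: primitive physical actions at the bottom, a planner in the middle, and an executive on top that monitors execution and recovers from errors by replanning.

```python
# A minimal sketch (not SRI's actual code) of a Shakey-style
# three-layer control hierarchy: primitive actions, a planner,
# and a top-level executive with simple error recovery.

class LowLevelActions:
    """Lowest layer: primitive physical actions."""
    def execute(self, action):
        print(f"executing primitive: {action}")
        return True  # pretend the motor command succeeded

class Planner:
    """Middle layer: turns a goal into a sequence of primitives."""
    def plan(self, goal):
        # A real planner would search; here we map goals to canned plans.
        return {"reach_door": ["turn_left", "forward", "forward"]}.get(goal, [])

class Executive:
    """Top layer: monitors plan execution and replans on failure."""
    def __init__(self):
        self.actions = LowLevelActions()
        self.planner = Planner()

    def achieve(self, goal):
        for step in self.planner.plan(goal):
            if not self.actions.execute(step):
                return self.achieve(goal)  # simple error recovery: replan
        return True

Executive().achieve("reach_door")
```

The same shape — deliberative planning above, reactive execution below — is recognizable in STANLEY’s architecture as described in the Thrun quote above.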

1.2 Shakey’s Algorithms

Of the many computational methods developed in the course of the Shakey project, three in particular have had long-term impacts both on technology and on the daily lives of all of us.

1.2.1 The “Hough” Transform for Detecting Lines in Images

In 1962 Paul Hough patented a method for detecting co-linear points in images by transforming image points to straight lines in a transform space. His method was not widely used because his transform space is infinite in extent and therefore computationally infeasible. In 1972, Peter E. Hart and Richard O. Duda introduced a new sinusoidal version of the transform that eliminated this difficulty (though they did not rename the transform) [Ref. 3: Hough]. The history of this invention was later documented by Hart. [Ref. 4: Hough History]
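The key to the Hart–Duda version is the bounded parameterization rho = x·cos(theta) + y·sin(theta), which maps each edge point to a sinusoid in a finite (rho, theta) accumulator; collinear points vote for the same cell. The following is an illustrative sketch of that voting scheme (not the original implementation):

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180):
    """Accumulate votes in (rho, theta) space for the given edge points.

    Uses the bounded rho = x*cos(theta) + y*sin(theta) parameterization
    introduced by Hart and Duda (a sketch, not their original code).
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = 2 * img_diag + 1  # rho ranges over [-diag, +diag]
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + img_diag, np.arange(n_theta)] += 1
    return acc, thetas

# Collinear points on the horizontal line y = 5 all vote for one cell.
pts = [(x, 5) for x in range(10)]
acc, thetas = hough_lines(pts, img_diag=20)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(rho_idx - 20, np.degrees(thetas[theta_idx]))  # rho ≈ 5, theta near 90°
```

Because the accumulator is finite, the search for lines reduces to finding peaks in a fixed-size array — the property that made the method computationally feasible where Hough’s original unbounded transform space was not.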

Hart’s version of the Hough transform is one of the most widely-used algorithms in computer vision. It has been used for decades in applications like visual inspection in manufacturing. By 2014 it started appearing in automobiles, where it enables a safety feature that alerts the driver if the car is drifting out of lane.

According to Google Scholar, the referenced paper had been cited nearly 5,000 times as of 2015. According to the US Patent and Trademark Office database, 2,115 US patents reference the Hough Transform by that same date.

1.2.2 STRIPS “Rules”; Real World Plan Execution and Error Recovery

Shakey’s planning system was named STRIPS [Ref. 5: STRIPS] (for Stanford Research Institute Problem Solver). STRIPS represented each action available to it by a set of three “Rules”: the action’s pre-conditions, delete list, and add list. This representation is a practical solution to a famous problem in Artificial Intelligence called the Frame Problem. STRIPS, and particularly STRIPS Rules, were the basis of many subsequent planning systems, as this quote shows: “…the STRIPS representation and reasoning framework was used as the basis for most automatic planning research for many years.” [Ref. 6: STRIPS Retro]
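The three-part representation can be sketched in a few lines. This is a toy illustration (not SRI’s code; the “push box” operator and fact names are hypothetical): the world state is a set of facts, an operator applies only when its pre-conditions hold, and applying it removes the delete list and adds the add list — everything else is left untouched, which is how the representation sidesteps the Frame Problem.

```python
# Toy sketch of the STRIPS action representation (not SRI's code):
# an operator has pre-conditions, a delete list, and an add list,
# applied to a state modeled as a set of facts.

def applicable(state, op):
    """An operator applies only when its pre-conditions hold."""
    return op["pre"] <= state

def apply_op(state, op):
    """Remove the delete list, add the add list; all other facts persist."""
    assert applicable(state, op)
    return (state - op["delete"]) | op["add"]

# Hypothetical Shakey-style operator: push a box from room A to room B.
push_box = {
    "pre":    {"robot_in_A", "box_in_A", "connected_A_B"},
    "delete": {"robot_in_A", "box_in_A"},
    "add":    {"robot_in_B", "box_in_B"},
}

state = {"robot_in_A", "box_in_A", "connected_A_B"}
new_state = apply_op(state, push_box)
print(new_state)  # robot and box are now in room B; connectivity persists
```

A planner then searches for a sequence of such operators whose cumulative effect transforms the initial state into one satisfying the goal.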

STRIPS plans were “learned” so they could be used in future problems, and they also were integrated into a plan execution monitoring and error-recovery system called PLANEX. A seminal paper [Ref. 7: STRIPS.PLANEX] on this system is among the most re-published papers in the history of artificial intelligence, the most recent re-publication occurring more than 20 years after initial publication. [Ref. 8: Re.Pub]

1.2.3 The A* Shortest Path Algorithm

The A* algorithm [Ref. 9: AStar] provably computes the shortest (or, in general, minimum-cost) path through a network, and provably does so with minimum computation (as measured by the number of branch points considered). These attractive properties have made A*, and its later elaborations and variants, the path-finding algorithm of choice for a wide variety of applications. These include computing driving directions (whether by a web service or a car navigation system), planning the paths of characters in video games [Ref. 10: Woodcock], parsing strings, and plotting the paths of Mars rover vehicles [Ref. 11: DStar].
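A compact sketch shows the core of the algorithm (a minimal illustration, not the original 1968 code; the grid, obstacle, and function names are invented for the example): a priority queue ordered by g + h, where g is the cost so far and h is an admissible heuristic that never overestimates the remaining cost.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: returns a minimum-cost path from start to goal.

    neighbors(n) yields (next_node, step_cost); h is an admissible
    heuristic (it never overestimates the remaining cost).
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical 4x4 grid with two blocked cells; Manhattan distance
# to the goal is an admissible heuristic on a 4-connected grid.
blocked = {(1, 0), (1, 1)}

def neighbors(p):
    x, y = p
    for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nxt[0] <= 3 and 0 <= nxt[1] <= 3 and nxt not in blocked:
            yield nxt, 1

def manhattan(p):
    return abs(p[0] - 3) + abs(p[1] - 0)

path, cost = a_star((0, 0), (3, 0), neighbors, manhattan)
print(cost)  # length of the shortest route around the obstacle
```

The heuristic is what distinguishes A* from uniform-cost search: the better h estimates the true remaining cost (without overestimating), the fewer branch points the search considers.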

By 2015, according to Google Scholar, the referenced A* paper had been cited nearly 5,000 times. By the same date, according to the US Patent and Trademark Office database, 460 US patents reference A*.

1.3 Shakey as an Existence Proof for Intelligent Robots

At the 2015 meeting of the International Conference on Robotics and Automation, there was a special session called a Celebration of the 50th Anniversary of Shakey. The session included a discussion by a distinguished panel: Prof. Ruzena Bajcsy (UC Berkeley, Director of CITRIS), Rodney Brooks (former head of the CS/AI Lab at MIT, founder of both iRobot and Rethink Robotics), Peter Hart (Shakey project leader and the most-cited author in the field of Robotics according to Google Scholar), Nils Nilsson (Shakey project leader, former Chair of CS at Stanford), James Kuffner (Director of Robotics Research at Google), Prof. Benjamin Kuipers (University of Michigan), and Prof. Manuela Veloso (endowed Chair in AI and Robotics at CMU). The panel was asked to name Shakey’s biggest impact or major contribution. The panel’s consensus was that the totality of Shakey—the first physical system with computational abilities to perceive, reason and act—was the single biggest contribution. A video of the panel discussion can also be viewed here.

As cast on a bronze plaque at SRI International, the IEEE Milestone’s citation reads: “Stanford Research Institute’s Artificial Intelligence Center developed the world’s first mobile, intelligent robot, SHAKEY. It could perceive its surroundings, infer implicit facts from explicit ones, create plans, recover from errors in plan execution, and communicate using ordinary English. SHAKEY’s software architecture, computer vision, and methods for navigation and planning proved seminal in robotics and in the design of web servers, automobiles, factories, video games, and Mars rovers.”

References:

http://ieeemilestones.ethw.org/Milestone-Proposal:Shakey:_The_World%E2%80%99s_First_Mobile,_Intelligent_Robot,_1972

https://www.sri.com/work/timeline-innovation/timeline.php?timeline=computing-digital#!&innovation=shakey-the-robot

https://www.sri.com/newsroom/press-releases/sri-internationals-shakey-robot-be-honored-ieee-milestone-computer-history

CleverPet is a robot that humans (and dogs) love https://robohub.org/cleverpet-is-a-robot-that-humans-and-dogs-love/ Wed, 08 Feb 2017 09:30:29 +0000 http://robohub.org/cleverpet-is-a-robot-that-humans-and-dogs-love/

The largest markets for robots are the places where there aren’t any people around. We often call those jobs the dirty, dull and dangerous ones. But then there are devices like CleverPet, which play with your dog when you aren’t home. What’s not to love about playing with dogs? And yet there is a huge industry growing up around looking after your pet when you simply aren’t available to do it. CleverPet won the 2015 Robot Launch startup competition and took home 1st place at CES 2016.

In 2017, we are seeing many ‘robots’ advertise themselves as being able to play with your pets or even your children and parents when you aren’t home. What’s really involved in creating a good experience with a smart home robot? CleverPet is very tightly focused in how it delivers value.

Interview with Leo Trottier, Founder & CEO of CleverPet (edited for clarity)

What is CleverPet?

CleverPet Hub is like a game console for dogs. It teaches dogs to play games and keeps them from getting bored so that you can feel better about the time your dog needs to spend alone at home without you. It has three touchpads that light up in colors that your dogs can see, it has a speaker so your dog can hear your voice, and it’s got a microphone so you can hear your dog bark. In the morning, instead of putting your dog’s food in a bowl, you put it in the CleverPet Hub, and then our software gives your dog a single piece of his food when he does the right action for the game. There’s also a mobile app that will teach you how to use the Hub, and that allows you to see your dog’s progress.

I love your description of CleverPet as a game console for dogs. But as someone who’s not a dog owner, it looks rather complicated. How many dogs manage to get past Level 1 on this?

Most dogs are able to get to the final level.

The techniques that we’re using have been used for decades by trainers to teach agility and tricks to dogs. They’ve also been used in university contexts with a variety of other animals: monkeys, birds, cats, mice, flies. We know that, given the right curriculum, most dogs can be taught to touch touchpads in a sequence, though of course some dogs will take longer to learn it than others.

The limiting factor is not the cognitive capacity of the dogs, but their motivation to learn. The problem isn’t getting from Step 5 to Step 10; it’s getting from Step 0 to Step 1, and from Step 1 to Step 2. Those early steps are the most challenging, and it all depends on how curious, food-motivated, fearless, and open-minded the dog is.

This is a challenging UI (user interface) problem, because on the one hand this is a product for interacting with pets, and on the other hand, the product has to interact with the consumer — the person who’s buying it, setting it up, and monitoring it.

Can you describe a little bit about each of these two streams?

Both the person and the dog have to learn a sequence of actions, and at first, the progress is gated primarily by what the dog is able to do at any given point in time.

In the beginning, it’s easier for a human to understand what the goal of the sequence is, because we can use words to describe to a person what the dog should be doing. For example: “Right now we’re teaching the dog to use these touchpads.”

It’s trickier for the dog. People sometimes overestimate what their dog can actually understand, and so they might do things like grab their dog’s paw and put it onto the touchpad.

But we don’t want people to interfere with the learning that their dog is doing because if the dog develops a negative association with using the Hub, it will set the whole learning process back. On the other hand, if this is something that the dog discovers on his own, then he is likely to associate the Hub instead with a new thing that he can do in the world, and that is interesting and exciting to him.

Is there a significant population of pet owners who feel that their pets need something like CleverPet?

We know that there are people who feel bad about needing to leave their dog alone at home. I have hundreds of quotes from people saying things like, “My dog has separation anxiety and I can’t afford dog daycare. I feel horrible about it, and I wish I could give him something to do during the day.” That’s the ‘why buy?’— the thing that everyone can understand, and why most people make the initial purchase.

But what is much more interesting and powerful is the excitement of having a new way to interact with your dog. Most people have a relatively small area of potential interaction with their dog, because it only consists of the times when they can be together (which is intrinsically limited) and because there is no data associated with it. But by adding data, adding more time, and also adding insight through analytics, we can greatly expand the interaction surface in a way that I think people who love their dogs and are desperate to understand them better will find deeply exciting.

People love talking about their dogs and showing pictures of them. But now they’re not just going to be showing you pictures, they’re going to be showing you pictures and data and charts and explanations about when their dog leveled up, and why they do well on particular days and not other days, and what kind of progress their dog is making. We’re really excited about developing fodder for this kind of conversation — it will be the reason why we go from being a company that is selling a hundred thousand CleverPet Hubs to one that is selling tens of millions.

Are you able to extrapolate information about the mood of pets from this?

With this version of the Hub, we’ll be able to make indirect inferences about mood. For example, by looking at how active or inactive the dog is, especially as compared with previous days, we can recognize whether he’s feeling high energy or low energy, which might imply an underlying health issue.

Other pet devices are either tele-operated or pure auto-feed or auto-play. CleverPet stands out by having a smart interaction that adapts to the individual animal. Can you tell us more?

The key thing that we’ve got is this combination of input from the person, input from the dog, and all of that is through a flexible interface that is the source of a good chunk of the dog’s food for the day. All this is fed into a framework that allows for relatively sensitive analysis and adaptation.

What’s great about computerized interaction is you can be extremely precise: you can provide the feedback at exactly the right time and you can be extremely consistent. In other words, you only provide the feedback when the animal does the right thing, and that is something that humans are notoriously bad at.

Computerized interaction also provides great memory, so you can look at what the dog did last week, or yesterday, or ten minutes ago, and you can precisely change the way the feedback works based on all that information.

Finally, you can provide the interaction all the time — over the course of thousands of hours a year, rather than merely an hour a day. And this lets you become a fixture in the dog’s world.

These techniques are well developed in neuroscience and cognitive science, but the potential for computerized interaction with animals is not commonly understood. I did four or five years of cognitive science before I knew that this was available.

What led you from Cognitive Science to CleverPet? And why choose animal behavior rather than impacting human behavior?

If you asked someone what makes humans special, as compared to other animals, they’ll say: insight, creativity, reason. But anthropologists will say that what made the development of humanity possible over the last 20,000 years are technological developments — like agriculture, science and the development of writing — that allowed us the luxury to develop our insight, creativity and reason.

But if technology enabled humans to make these big advancements, what could technology make possible for animals? Especially animals as cognitively sophisticated as dogs appear to be?

Now, the counter-argument to that is you look at the performance of babies, at 1 and 2 years old, and compare them to the performances of dogs, and there are things that babies can do that dogs can’t and you can’t attribute the cognitive performance of babies to technological development. There’s a kind of recursive property of trained ability and connections that exist between mother and child, and it might just be that we don’t know what the inputs are to dogs in order to mold their behavior and elevate their performance.

There are dogs that seem to have some very sophisticated abilities. In the same way that there are humans who can do physics, although they’re in the minority, I believe there’s likely the same amount of heterogeneity in dogs, which means there’s also potential for us to develop specialization in dogs that gives them additional ability, just as there’s specialization in people. There are huge generalizations that people tend to make about human ability and dog ability, and I think those are for the most part wrong-headed, because there are massive amounts of individual difference.

When did you launch the first CleverPet prototypes into the wild, as it were, and what have you learned along the way?

We did a lot of internal work ourselves. Last summer we set ourselves a rigorous program of sending three CleverPet Hubs out every Friday, for about 12 weeks, and we learned a lot there. There’s still much more to learn: a lot of the value that will develop around this technology has yet to be invented, yet to be developed, yet to be written. But it can only really be developed once this product is on the market and sold. We now have a bunch of CleverPet Hubs coming in from China that we’re very excited about, and we’ll be distributing them so we can begin the next stage of learning. Note – Since this interview, the first batch of CleverPet Hubs was distributed, and the company is now taking orders for Christmas.

It’s very challenging to turn a research interest into a consumer product, and so, why this specific market and product?

To me the most powerful way of teaching people something is to demonstrate it to them. If I think it would be valuable for people to understand the process of learning and cognitive potential better, what better way to do that than to demonstrate it for them in their home with an animal that they love? That is to me the most powerful way of opening people up to that possibility and making them more curious about it. That’s why it needed to be a consumer product.

What are your hopes and plans for the future?

We want CleverPet to become a household name. We want millions of dogs to be happier and lead more fulfilled lives, and for tens of millions of people to feel even closer to their dog, and have a much better understanding of what they’re capable of. And we want a general acceptance that there is a whole other world of cognitive behavior change available, and that technology can help us get there.


CleverPet

CleverPet uses smart hardware to offer animals engagement anytime, automatically, whether their humans are home or not. Founded in 2013 by Leo Trottier (Clever Executive Officer), Dr. Philip Meier (Clever Product Officer) and Dan Knudsen, PhD (Clever Science Officer, Clever Technology Officer), CleverPet took root and continues to grow in San Diego, California. In 2015 CleverPet won the Robot Launch global startup competition run by Silicon Valley Robotics. CleverPet uses advanced cognitive and behavioral science techniques to develop elegant, durable technology solutions for the animals we love.

SILICON VALLEY ROBOTICS

Silicon Valley Robotics is the industry group for robotics and AI companies in the Greater San Francisco Bay Area, and is a not-for-profit (501c6) that supports innovation and commercialization of robotics technologies. We host the Silicon Valley Robot Block Party, networking events, investor forums, a directory, and a jobs board, and we provide additional services and information for members, such as these reports.

We’ll be releasing additional essays from the reports every week or so. Or, read the full reports at: https://svrobo.org/reports

Call for startups at Automate https://robohub.org/call-for-startups-at-automate/ Wed, 01 Feb 2017 17:48:14 +0000 http://robohub.org/call-for-startups-at-automate/

The Association for Advancing Automation (A3) has announced a call for startup companies in robotics, machine vision and motion control for its Automate Launch Pad Competition. The competition will be held at the Automate 2017 Show and Conference in Chicago, Illinois on April 5, 2017. The presenting sponsor of the competition is GE. The event is co-produced with Silicon Valley Robotics.

Automation applications in robotics, vision, motion and motors are impacting many industries today. Recently, there has been a groundswell of startup companies introducing new products in the sector. The Automate Launch Pad Startup Competition seeks out these startups to generate awareness of their technology and help them find new funding.

Eight (8) semi-finalist companies will be invited to participate in the competition that features a grand prize of $10,000. Companies will have three minutes to pitch their technology to a panel of investors and automation experts. Eligible companies include those in the automation space (robotics, vision, motion control, etc.) who were founded in the last 5 years; raised less than US $2 million since creation; and are not affiliated with a larger group. All semi-finalists will be provided booth space on the Automate show floor, putting them in front of an expected audience of over 20,000 people interested in automation. The exhibit space must be staffed during Automate show hours.

“We recognize the critical role of startup companies in driving innovation and bringing forth new technologies to foster continued growth,” said Jeff Burnstein, President of A3. “We encourage startups to take advantage of this opportunity to spread awareness and take their company to the next level.”

To apply for the Automate Launch Pad Competition, fill out the form here. Deadline for applications is February 17, 2017.

Catalia Health uses social robots to improve health outcomes https://robohub.org/catalia-health-uses-social-robots-to-improve-health-outcomes/ Tue, 17 Jan 2017 10:18:18 +0000 http://robohub.org/catalia-health-uses-social-robots-to-improve-health-outcomes/

Credit: Catalia Health

Catalia Health is leading the surge in social robotics, with Mabu, their patient care management system. Catalia Health likes to be seen primarily as a health company that utilizes robots, rather than a robotics company. This focus on solving real world problems while shipping a product has seen Catalia attract both customers and investors, and recently close their Series A round.



Interview with Cory Kidd, Founder & CEO of Catalia Health

(edited for clarity)

What is Catalia Health?

Catalia Health is a patient care management company. We focus on helping patients adhere to their treatment, whether that be taking medication, or managing chronic disease over the long term. That’s the focus of what we do, and part of how we deliver this to patients is through a cute little robot called Mabu who engages with patients through conversation. She’s a little over a foot tall, and can sit wherever you want to put her … on a countertop or bedside table … and she has big eyes that make eye contact with you while you’re talking to her. Conversations with her might last a minute or two, or maybe five or ten minutes; it really depends on the individual patient and what they want to talk about.


Mabu has a touch screen on the front that she can use to display information, but our overall focus is to create an engaging relationship between the technology and the patient. The reason that we use the robot — as opposed to just delivering this through a phone screen or a tablet or PC — is about psychology and not about technology. When we are in front of a robot that has eyes that can look at us and blink, we tend to be more engaged, and we find the robot to be more credible and informative than if the same information were delivered to us through an app. While we have a lot of healthcare applications that we’re looking to build, the core of this is really just basic psychology: how can we create engagement that lasts for a long time? Psychologists have studied the benefits of face-to-face communication for decades.

Is speech the primary interaction that people have with Mabu?

Our platform’s primary means of interaction is conversation, but this can happen in more than one way. For example, when Mabu is talking, she also displays what she is saying on her screen, to make it easy for anyone to understand what’s going on. And when I reply, I can speak back to her, or I can touch a button or location on the screen. And if I’m not at home, I can also get a reminder via text message … in the future this might happen through an app or other desktop interface.

The physical robot is the thing that’s creating the engagement — the relationship — but we can interact with people through other forms of technology as well.

Does the conversation with Mabu end at home? How is information transferred to the healthcare provider?

We do send information summaries back to health care providers — a pharmacist or physician or some other caregiver — but the overall problem we are trying to help with is that the healthcare system simply doesn’t have enough people to manage chronic disease at scale. So while our technology might also enable tele-operation or tele-presence, the focus of our business is to be able to carry out an autonomous one-on-one interaction with patients in real time.

Is the patient, or end user, your customer?


Patients get a lot of benefit from our platform, but they are not the ones who are paying for it.

Our direct customers are pharmaceutical manufacturers and healthcare providers. They provide programs to help patients be more effective at taking their medications and managing their conditions, so in their eyes we are another tool in their arsenal.

Can you tell us about your first deployments?

The places where we are rolling out first are where there are existing care management programs already in place, and these tend to be in areas such as oncology and immunology where higher-end drugs are being used. Talking robots are very new and different, so we wanted our contract structure to look as similar as possible to existing offerings. These were the areas where there were already contract types that we could follow into market. We rolled out the first several hundred units in the first half of 2016 and are bringing patients onto the platform by the end of the year.

What is your business model? Is it “Robots as a Service”?

In terms of the patient relationship, our robot is key. But in terms of our business model and contracts, we don’t think of our robot as the key piece of what we’re delivering. We use a service model for care management; our customers pay us on a per patient per month basis.

What does interaction with your service look like from the patient’s perspective?

If you want to see what the patient interaction with Mabu actually looks like, we have a short video at cataliahealth.com.

Once the patient plugs Mabu in, the robot comes alive and starts talking. The conversation starts off with greetings and small talk (such as “Good morning, great to see you!”) and then moves on to whatever issue is relevant to the patient at that point in time. Maybe this is simply to check in on whether the patient has taken their medication, or maybe the patient is at a point in their treatment where it’s common to experience certain side effects, and the conversation is about how best to mitigate those for that patient. It really depends on the particular condition or treatment the patient is dealing with. We do a lot of research on each condition before rolling the platform out to patients, in order to build an understanding of common treatment challenges into the application.

In the background, the conversation is being crafted in real time for that patient.


When Mabu first comes out of the box, we know a little about the patient’s medical condition — perhaps what drugs they are on — but we don’t know much else. So from that very first conversation we start learning about and adapting to the patient’s individual personality and the treatment issues they are facing. Mabu largely directs the conversations, but the patient has a lot of say in terms of where that conversation goes. As we build more conversations and more AI into the platform, we are able to craft appropriate conversations for the patient.

This will very quickly become applicable to a lot more drugs and a lot more disease states. Let’s look at side effects, for example. Our first conversations about side effects will be new, but there are many common side effects among drugs. So while we are starting out in just a handful of areas, our goal is to help any patient who’s dealing with a condition on an ongoing basis to better manage their care, and to provide information back to their caregivers so that they can be more effective in supporting them.

Is it valid to be concerned about robots being used to replace human companionship?

We certainly don’t think of this as robots replacing people; we think of it as robots augmenting people.

One of the big challenges in healthcare today is that there are not enough caregivers to deliver healthcare the way we need it. Almost half our population is managing a chronic disease in this country, and there are very similar rates in advanced nations around the world; if we look at the rate of people dealing with health issues on an ongoing basis, it approaches two-thirds to three-quarters of the population.

People might get to see their doctor for fifteen minutes every two months, but that’s not much time, and it’s not an effective way to provide the ongoing care that is needed. We simply don’t have enough people to manage healthcare the way we did 50 or 100 years ago.


Patients need reminders, and they need answers to all the little questions that come up — and that’s where technology like this comes in. We see our service as a way for the people who are providing health care — doctors, nurses, and other trained caregivers — to more effectively reach a larger group of patients. We are not trying to be people’s doctors, we are trying to help their doctor do a much more effective job.

What kind of feedback have you received so far?

Broadly speaking, the feedback has been very positive. People tend to like the interaction right from the very first conversation, and they like how Mabu adapts to them.

We have a great solution that we’ve shown can effectively help many patients, but we still have a lot to learn. We are really excited about the amount of data that we’re going to be getting back from hundreds of person-months of interaction with our platform this year, and how we’re going to use that to improve conversations and personalize them to every patient.


Thanks to social platforms like Siri, Jibo and Amazon’s Echo, we are starting to get used to having conversations with our devices. But you’ve taken a very specialized path into the market. Why did you pick this pathway and business model?

Scalability — being able to provide care to a growing number of patients — is a big challenge in health care. I spent about a year before launching Catalia Health really digging into the US healthcare market to explore the business opportunities. We were thinking broadly around medication adherence and chronic disease management, talking to potential customers and trying to understand where there was a need for this kind of technology. The quick answer was that it is needed pretty much everywhere within the healthcare system. The question of how to provide healthcare in a cost-effective and scalable way is definitely a challenge here in the US, and also in most other nations in the world. We see an enormous opportunity for using technology to provide scalable personalized care.

Of all the robotics and AI movies that have come out in the past five or ten years, Robot and Frank offers the vision that comes closest to what we’re doing. The goal of the robot in that movie was to help Frank live healthier by building a relationship with him. We have the same underlying premise in what we’re doing: our technology is focused on building a relationship with the patient, because once we can do that, then we can talk to them about their health care. By comparison, usage rates on healthcare apps are incredibly low; most patients don’t pick them up after the first or second try. But as it turns out, there are particular psychological aspects of how people interact with robots that make them really effective at helping to solve this challenge.

Do you see ways that other robotics companies can leverage what you’ve learned so far?

The broad lesson is to understand where there is a real human or business need. Asking “Where is there a problem that I can solve?” rather than asking “Where can I build a robot?” or “What market can I serve?”

It’s also important to understand what the existing marketplace looks like for those kinds of solutions right now, because the solution today may look very different. Our robot is an alternative to talking to a pharmacist on the phone, and it’s a very different solution, but understanding what the business model is for that kind of service, how those contracts work, who the players are in the space — I think that’s something that any company would be smart to take a look at and understand deeply before trying to compete in those markets.

You’re tackling one of the largest growing areas of our economy, and you’re doing it with a combination of data, AI and robotics. What do you think has changed in the past couple of years to make robotics a viable solution to a broader range of applications?

One of the biggest changes has been in the cost of building both hardware and software. Our robot is pretty simple; we’re not doing anything cutting edge in terms of the physical device that we’re building. But ten years ago producing our device might have cost 100 times what it does today, and that would have limited us to a small set of business models and it would have been very hard to make money.

With the cost of building the technology drastically lowered, it has enabled us to do something very different today than what we could have done five years ago. Today we can build cutting edge technology at a reasonable price point and therefore deliver a cost effective solution.


CATALIA HEALTH

Catalia Health is a patient health management system using social robotics. Founded in 2013 by Dr. Cory Kidd, Catalia Health builds on years of research into Human-Robot Interaction starting at MIT’s Media Lab and continuing with social robot startups like Intuitive Automata. In June 2015, Khosla Ventures led a $1.25 million seed round in Catalia Health for the first trial customer engagements. Catalia Health is on a mission to address both sides of the healthcare equation: improving patients’ health and extending the capabilities and efficiency of healthcare companies.

SILICON VALLEY ROBOTICS

Silicon Valley Robotics is the industry group for robotics and AI companies in the Greater San Francisco Bay Area, and is a not-for-profit (501(c)(6)) that supports innovation and commercialization of robotics technologies. We host the Silicon Valley Robot Block Party, networking events, investor forums, a directory, and a jobs board, and we provide additional services and information for members, such as these reports.

We’ll be releasing additional essays from the reports every week or so. You can read full reports by visiting the website.


If you enjoyed this article you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

HAX takes robotics to market in 2017 https://robohub.org/hax-takes-robotics-to-market-in-2017/ Thu, 12 Jan 2017 17:09:00 +0000 http://robohub.org/hax-takes-robotics-to-market-in-2017/

If you attended CES 2017 last week you may have seen more than 70 HAX-powered startups in Eureka Park, the ‘playground of innovation’. As service robotics steals the spotlight, we wanted to showcase some of the ways that accelerators and programs like HAX help grow hardware and robotics startups, including taking them to market.

Here’s an interview with Cyril Ebersweiler, Founder and Managing Director of HAX, excerpted from the new “Service Robotics Case Studies 2” report by Silicon Valley Robotics, the industry association.

Interview with Cyril Ebersweiler, Founder & Managing Director of HAX (edited for clarity)

You say HAX is the world’s first hardware accelerator. Can you tell us how it evolved?

We started the venture back in 2011 in a garage in Shenzhen. We had Eric Migicovsky from Pebble, who had just launched their first campaign, Ian Bernstein from Sphero, and there was also Zach Smith from Makerbot, who would later join us at HAX … a lot of people and technology were converging back then in Shenzhen.

One thing we discovered while leveraging the supply chain for these startups was that Shenzhen was a good place to prototype — and not just consumer electronics hardware (which was the modus operandi back then), but also extremely complex hardware for the health, robotics, and fabrication spaces. The robotics and fabrication spaces were particularly interesting. There are five thousand — maybe fifteen thousand — individual parts inside a robot, and it takes a lot of time, money and resources to build a prototype. By being in Shenzhen, where we had access to prototyping machines, we could accomplish this at record speed.

As more and more startups joined the ranks, we started to develop a better understanding of what it would take to bring those companies to market. Early on we created themes around our first incubation programs, and this has since become part of our philosophy at HAX. Four years and 145 companies later (as of mid 2016), we have HAX Lifestyle, HAX Health, HAX Robotics, HAX Infra, and HAX Fab, with dedicated resources, expertise, processes, curricula, and distribution channels that can help push those products to market as fast as possible.

HAX is still running, and has been scaling up — we have about fifteen people on staff — and we have changed offices almost every year. This year we are moving into a 30,000 sq foot office in the middle of the Huaqiangbei electronics market because we have a lot of lines that are continuously manufacturing and creating new products over there. Some of them are really big, like Makeblock, for example, which has 170 employees already.

After operating for a few years — we’ve done 69 Kickstarter campaigns with HAX Lifestyle alone — I started to receive a lot of requests from our network: our startups needed help improving their marketing and sales. So eighteen months ago I moved to San Francisco to set up a follow-up to our accelerator program, called HAX Boost, which is run by a former Target executive and focuses on sales funnels and marketing for each of our different themes. If you are a HAX Lifestyle company, for example, we’ll focus on getting you into retail and teach you how to build everything you need to talk to a buyer and test your products in store… test your pricing, point of sale, and packaging, etc. We also help with networking, and will travel to meet buyers and scale up the sales process. At HAX Health, on the other hand, the distribution channels are hospitals, doctors, and gyms — so we take a very different angle there.

Since we started HAX Boost, we’ve had three cohorts with thirty-two companies going through, not all of which have been through our accelerator program. We also have external companies joining us for a sales and marketing bootcamp, which is a lot shorter — just 42 days. The goal is to get these companies from zero to $5 million in revenue (which is extremely ambitious in the world of entrepreneurship and venture capital), so that they know they have a market for their product before they manufacture it. Then they can go back to Shenzhen, and if they’ve done their job well, their DFM (design for manufacture) will be less painful and they will get to market faster because they have already grown their distribution on the other side. Essentially, we make them ‘kiss’ the other side a little earlier than usual, and we foster those relationships at scale so that instead of taking twelve months to get into stores, it takes only three.

How big are your cohorts at HAX Boost?

We do ten startups at a time. While the accelerator programs are fifteen weeks long (because you actually have to build a product in that time), HAX Boost is just forty-two days long because companies already have their product ready. Forty-two days is an ideal length of time to focus on marketing.

TechCrunch called you “the most active investor in crowdfunded hardware” — but that’s only one aspect of what you do, isn’t it?

Yes. We’ve had 145 companies go through our programs now, as of mid 2016 … 115 through our accelerator and 30 through Boost. These have included 65 Kickstarter campaigns, mostly lifestyle products, where we have raised an average of about $450,000 per campaign. If you consider that to raise $100,000 puts you in the top 1% of Kickstarter campaigns, that puts us in the top 0.01% every single time we launch a campaign. We represent 8% of all the $1M-plus campaigns on Kickstarter as well. So it sounds like crowdfunding investment is our focus, but it’s only half (or less) of what we do. Lifestyle companies represent only roughly 40% of what we do. The rest is divided between health, infrastructure, robotics and fabrication. These kinds of companies are B2B for the most part and require a different level of attention when it comes down to technology, obviously, but also business models — which are the most interesting aspect of this work, particularly in the robotics and fab space.


Many of the companies we feature in this report are B2B2C, where the customer is not the final interaction point. I think we’ll be seeing more and more robotics companies in this space … what are the challenges they face, and how have you been able to smooth the way for them?

There are three very obvious challenges on the robotics side.

One is the definition of a robot. We may define this as a machine that makes autonomous decisions and autonomous movements, but there is public confusion around this definition. What is a robot, and why does that matter? It matters because the public’s vision of what a robot is will influence the success of these companies, whether they are making robots in the formal sense or not.

Another challenge is that roboticists deal with extremely complex environments, and the obvious trap for any startup is to become too enamored with the technology, or too busy making it work, and never get to the specific application.

A third challenge is trying to do everything, or wanting to become a platform, because that seems to be the Holy Grail. But we’ve already seen many approaches to building robot platforms, and they don’t always work immediately. Take the PR2, the Personal Robot platform from Willow Garage. It was supposed to be something that could be shared and open sourced to create different kinds of robotics applications, but only a few places ever had a PR2; it worked as a platform only if you count Savioke and all the other startups that came out of Willow Garage.

Another important challenge is the business model. We’ve been pushing to find ways for robots to be more than a box and software sold to a client. I’m referring to “Robotics as a Service” of course, i.e., getting a monthly payment at scale. Why does this matter? It’s easier to be profitable and to attract investors if it’s not about paying for the device.

Historically, extremely high-tech robots designed for a single task were sold for $100,000 – $200,000 apiece. The industry knew this model well, and recurring revenue came mainly from maintenance contracts. But the obvious trend is that robots are getting less and less expensive as time goes on … they are following the path of consumer electronics, which are getting both more powerful and less expensive at the same time. Some of the robots we are seeing today cost just a few thousand dollars, which on the one hand could mean that you can sell a lot of them, but on the other hand, you are trying to sell the value at the moment as well. Lots to figure out.

One of the challenges of Robots as a Service is that robots are still physical and they still require a lot of maintenance, which pushes the price point up even though the cost of the hardware is coming down. This seems inherently less scalable. Do you think these obstacles have been overcome?

Not entirely. Most startups are in the phase of trying. I don’t know the perfect definition of RaaS, because simply making the robot isn’t enough. It has to be tied to actual value being created by the robot itself. Take the RaaS model that Simbe is using: their clients are billed per item scanned, so the more items the robot scans, the more revenue they generate. Or in the case of Avidbots, the more square feet the robot cleans, the more Avidbots earns. But it’s not the end user paying for it. It’s really the corporation behind all this infrastructure that is paying for it, and it’s still too early to know whether they will be willing to scale up.
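The per-unit billing idea described above can be made concrete in a few lines. The following is a hypothetical sketch only — the class, rate, and numbers are invented for illustration and do not reflect Simbe’s or Avidbots’ actual pricing — but it shows the essential property of RaaS: revenue scales with the value the robot delivers, not with devices sold.

```python
from dataclasses import dataclass

@dataclass
class RaasMeter:
    """Hypothetical usage-based Robots-as-a-Service billing meter."""
    rate_per_unit: float  # e.g. dollars per item scanned, or per sq ft cleaned
    units: int = 0

    def record(self, n: int) -> None:
        # Each unit of work the robot completes accrues billable usage.
        self.units += n

    def invoice(self) -> float:
        # The bill tracks value delivered, not the hardware itself.
        return self.units * self.rate_per_unit

# A shelf-scanning robot billed per item scanned (illustrative rate):
meter = RaasMeter(rate_per_unit=0.002)
meter.record(50_000)    # items scanned this month
print(meter.invoice())  # 100.0
```

Under this model, a robot that scans twice as many items produces twice the invoice — which is why the interview stresses that the robot must be tied to measurable value before the model works.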

That said, I think that hardware reliability is in some ways the bigger question, as these robots are just getting to market at scale and we don’t yet know how long they will last out there. Few robots have run more than a thousand miles today, so all this is up in the air.


Are there other areas in the retail and consumer goods chain that you would encourage startups to look at?

Robot arms are getting better, cheaper, more accurate and less dangerous, so I think they will start to pop up in all sorts of places and will come in many forms. At first they will probably be used in commercial environments, for example in restaurant kitchens. In the retail space, robot arms could fill shelves and stock inventory, or they could also be used to deliver goods from the store. Today we are a little constrained in our thinking of how a robot arm can be used, but as they become more application-specific they will just become better at what they are supposed to do.

It’s a little more complex if you’re thinking about the consumer market. People tend to think of consumer robots as companions — robot pets, for example. But consumer robots will no doubt differentiate as well. There is a HAX company called Trainerbot that launched this year on Kickstarter. They are building a ping pong robot that teaches you how to play and trains you — and you can imagine that could be done with many sports. When you add computer vision and sophisticated movement, some people might think of these robots as even better companions.

Obviously there is a period of land grab before the field becomes specialized. Right now we are in a phase where most companies want to be a platform, or want to have the killer app. I think it’s slowly becoming more competition-specific, but it will continue to specialize.

Being based for such a long time in China where there has been such growth in recent years, what do you see ahead?

One thing that isn’t well acknowledged is that China is starting to come up with its own robots. It now has a pretty good technical base and is catching up on know-how. It’s an obvious market for China to serve its need for robots domestically, so that is something to watch for.

Also, we have seen a commoditization of manufacturing machines, from laser cutters and 3D printers, to production machines, CNC machines, and furnaces — anything that makes a product come alive, as it were. Here in the manufacturing center of the world, the level of automation that has already been achieved is phenomenal. Smartphones are produced with barely any human touching them; even the touch screen is tested by robot fingers. Though it’s easier to create a fully automated robot factory if you have only one product to build, there are fewer and fewer people in factories here: there’s nobody in the injection molding department, for example, but you have robot arms and conveyor belts (which at some point will be replaced by mobile robots), so that trend will continue. Robot arms are going to replace a lot of jobs.

But I think the tipping point for robotics is going to happen when robots start creating jobs. One can imagine that some robotics companies might reappropriate themselves, with the value of the robot essentially coming from the production that comes out of it. Dispatch is a great example of a company that has to figure out whether or not it is better to create its own network and rent it — the same is true with most robot companies. Avidbots could become a cleaning company if it wanted to, and it would be a better one, for example. Or Rational Robotics could rent their industrial painting machines to someone who wants to build their own garage. We haven’t seen that yet, but I’m expecting to see it pretty soon.

What is most exciting to me about robotics is that the business model is all down to what you want to do with the future of robots: Do you want them to replace human workers? Or do you want robots to be creators of value in the very first place?


HAX

HAX is an accelerator for hardware companies founded in 2011 in Shenzhen by Cyril Ebersweiler and Sean O’Sullivan. The HAX accelerator program selects 30 startups a year for seed investment in a cohort program and, as of 2016, the HAX Boost program provides an additional marketing bootcamp. Although HAX is well known for backing successfully crowdfunded consumer hardware, a significant number of startups now moving through HAX are enterprise robotics startups. HAX is investing in five technology themes: lifestyle, health, robotics, infrastructure, and fabrication/prototyping. HAX is funded by SOSV, the accelerator venture fund founded by Sean O’Sullivan.

We’ll be releasing additional essays from the reports every week or so. You can also read the full reports at: https://svrobo.org/reports

Looking towards service robotics in 2017 https://robohub.org/looking-towards-service-robotics-in-2017/ Fri, 30 Dec 2016 12:30:36 +0000 http://robohub.org/looking-towards-service-robotics-in-2017/

Sophisticated household robots are only just starting to show up in our lives, but all the building blocks for a veritable “Cambrian explosion” of robotics are there, as Gill Pratt described it when he was running the recent DARPA Robotics Challenge. The service robotics industry is emerging, and we will soon be seeing robots of all shapes and sizes making their first forays into our everyday lives.

This is borne out by the recent explosion in robotics and AI funding, which has seen robotics investments increase exponentially over the last five years. While approximately $1 billion was invested in robotics between 2009 and 2014, roughly the same amount was invested in 2015 alone, and 2016 is on track to double the total investment again.

According to the CBInsights October 2016 report into “The State of Enterprise: Robotics”, these new investments are largely clustered in autonomous vehicles and the service robotics industry, with strong growth noted in enterprise focused robotics companies.

“Over 50 enterprise-focused robotics companies have raised $259M across 63 deals this year (as of 10/5/2016). These companies are building robots for industrial automation, manufacturing, warehouse automation, and restaurant services, among other tasks.

At the current run-rate, deals to enterprise robotics startups are projected to cross 80 this year, beating last year’s record of 73.”

Figure from CB Insights blog “Robotics Startups Funding”

As recently as five years ago, the robotics landscape looked simple. While drone and autonomous vehicle technologies were beginning to emerge and cause some excitement, most of the robotics industry was industrial, and the remainder of the landscape was comparatively arid and boring: a few hospital robots, some robot vacuum cleaners, and a few toys and educational robots.

The emergence of the service robotics industry as a serious economic force has taken many people by surprise. It has also made it more difficult to make sense of the robotics landscape, particularly because the new service robotics businesses impact so many markets and applications, making categorization and statistics difficult to obtain, and trends hard to analyze.

Source: Yahoo Finance.

The impact of the internet on the retail sector is a marked example. The demise of physical retail has been hyped since the first dot com bubble, and perhaps this hype has lulled us into a false sense of security. But the huge decline in value of most major US retailers over the past decade is simply shocking. At the same time, robotics offers a potential solution to the pending economic disaster that is threatening the bricks and mortar retail industry. Early signs indicate that robotics is bringing manufacturing back to the US, enabling small batch, just-in-time production in a range of areas from automotive to electronics to biomedical.

Now it seems service robotics solutions in the retail and consumer spaces are offering new ways for brick and mortar companies to likewise maintain their competitive edge. And helping to shape these solutions are a number of Silicon Valley startups featured in this report: Catalia Health, Cleverpet, Bossa Nova, Fetch Robotics, Savioke, Marble, Dispatch, Dishcraft, Momentum Machines, Eatsa, Fellow Robots and Simbe. Indeed, Silicon Valley is at the epicenter of the emerging service robotics industry.


Since 2010 Silicon Valley Robotics has been tracking early stage robotics startups and supporting them as they grow. Many of these companies are creating fundamentally new interactions and products, and as they do so, are leveraging the latest business model concepts, including various twists on “Robots as a Service” and “cloud robotics”. With so much new ground being broken, we saw a need to share their stories so that the industry as a whole can learn and grow, too.

Our first report on “Service Robotics Case Studies in Silicon Valley 2015” looked at enterprise robotics companies like Fetch Robotics, Adept, Fellow Robots, and Savioke, who occupied the space between backroom logistics and front-of-house customer service.

The new “Service Robotics Case Studies in Silicon Valley 2016” takes a deeper dive to explore how robotics companies are driving changes in how we interact — not just with technology — but with our pets, our health care providers, and our retail experience.

  1. CleverPet (B2C) >> changing how people interact with their pets and delivering data-driven interactions at the far end of the spectrum of the new behavioral robotics.
  2. Catalia Health (B2B2C) >> changing how people interact with their medication and life choices, but also how healthcare providers interact with patients >> intelligence built in by experts; allowing for smarter, data-driven interactions.
  3. Simbe (B2B) >> allowing retail managers to interact more efficiently with their stock

A clear theme is that each of these robotics startups can unlock data-driven insights with potentially enormous additional business upside by leveraging the ubiquitous connectivity that is being made possible by cloud computing, and more specifically, cloud robotics. Emerging cloud robotics technology enables far more than the simple ability to continuously update and upgrade a physical product, or even to augment the intelligence of that product; it allows robots to become active mobile big data collectors.

Another important theme is how the RaaS or “Robots as a Service” model is maturing, with a range of transactions and applications (e.g. commission-based vs rental-based etc.) to complement the roll out of cloud robotics. Robots are no longer being marketed as simple products for sale, but as sophisticated tools for gaining insights and creating value and driving customer change.

Our second set of case studies and commentaries looks at how accelerators and investors are supporting the drive towards RaaS and finding business models that map robotics technologies into existing business structures, in areas ranging from health to consumer, hospitality to retail.

  1. HAX >> helping startups focus on business models via programs like HAX Boost
  2. Lemnos Labs >> seeing opportunities for RaaS, working with startups on developing their business plan
  3. Comet Labs >> analyzing how intelligent machines can reimagine retail from logistics to customer experience

Every case study and commentary in this report underscores the fundamental importance of human-robot interaction to the service robotics business model — a trend that is sure to define the industry well into the future. With so many ‘humans in the loop’, it will become critical that we design robots and businesses that set a high bar for safety, privacy, and ethics. For this reason, we are pleased to announce that Silicon Valley Robotics has just launched a “Good Robot Design Council” with industry expert advisers, in an effort to develop design guidelines for the robotics industry that can help fill the ethical gap between standards and laws.

To that same end, our next service robotics reports will focus on self-driving vehicles, agriculture, and other ‘niche’ verticals, with an emphasis on the increasing importance of human-robot interaction, and a look at which aspects of design make good robots and successful businesses.



Bay Area Robotics Symposium 2016 https://robohub.org/bay-area-robotics-symposium-2016/ Mon, 21 Nov 2016 12:31:56 +0000 http://robohub.org/bay-area-robotics-symposium-2016/

The 2016 Bay Area Robotics Symposium was held at Stanford on November 18th. The annual event alternates between Stanford and Berkeley, with the goal of bringing together roboticists from the Bay Area. The program featured:

  • Presentations by Stanford and Berkeley faculty;
  • Talks by bay area industry representatives;
  • Keynote by Stuart Russell;
  • Student poster session.

Recorded sessions are now online and can be found here:

https://www.youtube.com/watch?v=xJOxEL0_3_E

What is good robot design? https://robohub.org/what-is-good-robot-design/ Tue, 08 Nov 2016 13:05:27 +0000 http://robohub.org/what-is-good-robot-design/

Banksy robot and barcode graffiti in New York, USA.

Let’s stop talking about bad robots and start talking about what makes a robot good. A good or ethical robot must be carefully designed. Good robot design is much more than just the physical robot, and at the same time, good robot design is about ‘less’. Less means no extra features, and in robotics that includes not adding unnecessary interactions. It may seem like a joke, but humanoids are not always the best robots.

‘Less’ is the closing principle of the “10 principles of good design” from world-famous industrial designer Dieter Rams. Design thinking has framed discussion guidelines for good robot design, as ethicists, philosophers, lawyers, designers and roboticists try to proactively create the best possible robots for the 21st century.

Silicon Valley Robotics has launched a Good Robot Design Council with our “5 Laws of Robotics” that are:

  • Robots should not be designed as weapons.
  • Robots should comply with existing law, including privacy.
  • Robots are products: and as such, should be safe, reliable and not misrepresent their capabilities.
  • Robots are manufactured artifacts: the illusion of emotions and agency should not be used to exploit vulnerable users.
  • It should be possible to find out who is responsible for any robot.

These have been adapted from the EPSRC 2010 “Principles of Robotics” and we greatly thank all the researchers and practitioners who are informing all of us about this ongoing topic.

Silicon Valley also sits at the epicenter of the emerging service robotics industry, whose robots are no longer just factory workers but will be interacting with us in many ways: at home, at work, even on holiday.

In 2015, we produced our first Service Robotics Case Studies featuring robotics companies: Fetch Robotics, Fellow Robots, Adept and Savioke. We will shortly release our second report featuring: Catalia Health, Cleverpet, RobotLab and Simbe.

Design guidelines can not only create delightful products, but can also fill the ethical gap between standards and laws.

After all, if our robots behave badly, we have only ourselves to blame.



Announcing the Robot Launch Shortlist https://robohub.org/announcing-the-robot-launch-shortlist/ Thu, 27 Oct 2016 19:45:30 +0000 http://robohub.org/announcing-the-robot-launch-shortlist/

It’s time to showcase the Robot Launch semifinalists, or ‘The Shortlist’. Every week over the next month you will meet 9-10 very interesting robotics and AI startups from all over the world, and you will have a chance to vote each week on their short pitch video. The most popular startup overall will be awarded the Robohub Readers’ Choice award.

The Shortlist startups range from agricultural to humanoid, from consumer to industrial and from hardware to robotics software. Some are so new they don’t have names yet. But in alphabetical order “The Shortlist” is: Aatonomy, Acro, AirZaar, bridgeOS, Choitek, Cubit, Emobie Labs, FoldiMate, Franklin Robotics, Halodi, Industrial robots with 3D vision, Internet of Robots, Kamigami Robots, LabsCubed, Modular Science, Mothership Aeronautics, MOTI, Parihug, Robolink, Robotics Materials, Robotics platform for EDU and DIY, SD3D, Semio, Bioprint startup, Tactile Robots, The Virtual Robotics Toolkit, Track mounted mobile robots, UniExo, UnNamed and ViDi Systems.

While you are voting for your choice, our panel of top VCs and investors will be giving feedback to the startups and selecting startups for regional and industry awards. Judges include Intel Capital, Grishin Robotics, Root.vc, Comet Labs, InnoSpring, PropelX, QUT bluebox, ElevenTwo Capital, Sony Ventures, Singularity U’s Explorers Fund and Robotics Hub.

Odense Robotics and Invest in Odense are offering an incubation award for the best European startup team (2 persons): travel to and one month’s housing in Odense for 2 persons (travel/housing value 1500 USD); access for one month (during May/June 2017) to all Odense Robotics StartUp Hub facilities, the Robot Innovation Hall, and the other startups in the Hub; and preparation for the June pitch session before the board of the Odense Robotics StartUp Hub, where they might be selected as one of two startups to enter the Hub for 12 months.

 

Silicon Valley Robotics will offer a startup membership to all of The Shortlist, providing introductions to investors in Silicon Valley, access to networking events, and hosting in the new Robot Launchpad accelerator space. We are also coordinating demo opportunities at expos around the world. There will also be “most promising startup” awards for the MENA region and Australia/NZ from other robotics cluster and entrepreneurial organizations, including QUT bluebox.

We’re still looking for more award sponsors; if you have something to offer a robotics startup, contact andra [at] robotlaunch.com.

Who are the most active robotics investors? https://robohub.org/who-are-the-most-active-robotics-investors/ Wed, 19 Oct 2016 14:20:03 +0000 http://robohub.org/who-are-the-most-active-robotics-investors/


You may be surprised, but I’m not. These are the people I see regularly both in Silicon Valley and overseas interacting with the robotics community. That makes them the smart money (most of the time). According to CB Insights, the 7 most active robotics investors over the last 5 years are: Eclipse Ventures, High-Tech Gründerfonds, Lux, Intel Capital, Sequoia China, CRV, and Visionaire Ventures.

As CB Insights demonstrates, old school ‘smart money’ is still making investments in robotics — just at a slower pace. Overall, the last five years have seen global robotics equity funding grow to $2.6 billion across 405 deals.

Source: CB Insights


Eclipse Ventures is a $125m hardware fund which ‘backs iconic entrepreneurs building vertically integrated companies incorporating hardware, software and data.’ Some of their portfolio companies include: Kinema Systems, Marble, Modbot, Rise Robotics, and Clearpath Robotics.

High-Tech Gründerfonds is Germany’s leading and most active seed-stage investor across cleantech, biotech and robotics, with €576m in two funds. Portfolio companies include: REVOBOTIK, Bionic Robotics, Magazino, Reactive Robotics, and Medineering.

Lux Capital has $700m under management and ‘invests in emerging science and technology ventures at the outermost edges of what is possible’. Some of their portfolio companies include: Saildrone, Tempo Automation, CyPhy Works, and Auris Surgical Robots.

Intel Capital has had more portfolio exits than any other venture capital firm since 2005. Intel Capital is stage agnostic and invests across a wide range of technologies. Portfolio companies include: Ninebot, Yuneec, Savioke, and Persimmon Technologies.

Sequoia has invested in an unprecedented number of enormously successful companies, including Apple, Google, Electronic Arts, LinkedIn, Dropbox, and WhatsApp. Today, Sequoia has robust connections to the four most innovative and fastest-changing economies in the world: China, India, Israel, and the United States. Sequoia China portfolio companies include: Ninebot, Makeblock, Quotient Kinematics Machine, and DJI Innovations.

CRV, aka Charles River Ventures, has over $2.1b under management and more than 40 years of experience across 16 funds. Some portfolio companies include: Jibo, Wonder Workshop, Airobotics, and Rethink Robotics.

Visionaire Ventures has just closed a second $200m fund and is investing in companies from artificial intelligence to machine/deep learning, robotic automation, visual perception, agricultural and digital health technologies. Portfolio companies include: CANVAS Technology, Modbot, Savioke, and Zipline International.

]]>
Launching the Women in Robotics network https://robohub.org/launching-the-women-in-robotics-network/ Tue, 11 Oct 2016 16:35:55 +0000 http://robohub.org/launching-the-women-in-robotics-network/

Women in Robotics is a new international professional community. If you are a woman who works in robotics, or who aspires to, we invite you to join.

Harvard Business Review cites recent studies showing that visibility and mentoring are critical for retaining women in technology careers. Computer science degrees awarded to women have dropped below 20%, and this is reflected in robotics.

Our goals are threefold:

  1. To encourage local network events. Women get the most professional benefit, mentoring and networking from small events.
  2. To create a global communication platform. The new Women in Robotics network connects individuals around the world for discussion, coordination and inspiration. 
  3. To improve the visibility of women in robotics. We’ve been working on this since 2013 through our annual 25 Women In Robotics You Should Know About list, generating awareness about amazing women working in robotics at every career stage.

Women in Robotics is the culmination of many ’think global and act local’ moments, starting with our own personal experience growing careers in robotics as isolated individuals, but coming together in the globally distributed Robohub and Silicon Valley Robotics not-for-profit groups. Women in Robotics organizers include Andra Keay in Silicon Valley, Sabine Hauert in Bristol, Sue Keay in Brisbane, Hallie Siegel in Toronto, and Kassie Perlongo in Warwick. We’re hoping you’ll join us.

Women already create their own effective networking and mentoring events locally. But robotics is an international phenomenon, with employment and mentoring opportunities across the globe, and it will be so much more effective and more visible when we can connect groups and organizers with each other and with other women.

Ada Lovelace Day is on Tuesday, October 11th. In celebration of this amazing pioneer, let’s mobilize. We’ll be generating as much buzz about Women in Robotics as we can that week, through our annual Women in Robotics list, by encouraging local events, sharing images, and featuring women roboticists in action. We need your help!

Here’s what you can do:

  • Accept your slack invitation – or ask for one at https://goo.gl/forms/GbWa6JmHnTieU0jP2
  • Once you’ve joined, invite other women to join
  • Organize a women in robotics event (all it takes is 2 or more women!)
  • Take photos of your women in robotics event and share them to Slack or Facebook – we’d love to see a montage giving women in robotics greater visibility.
  • Share the annual Women in Robotics list
  • Get stories about women in robotics into your local news/universities etc.
  • Share your stories with us

Impromptu Women in Robotics event with Valery Komissarova, Erin Rapacki and Andra Keay.

And don’t wait to join Women in Robotics – Ada Lovelace Day is just around the corner, and all it takes is two or more women to hold a Women in Robotics event!

]]>
Welcoming new global partners https://robohub.org/welcoming-new-global-partners/ Mon, 10 Oct 2016 11:06:21 +0000 http://robohub.org/welcoming-new-global-partners/

Silicon Valley Robotics held an innovation tour for some of our global partners ahead of the RoboBusiness conference in San Jose. Representatives from the Odense Robotics Cluster, Invest in Denmark and the Denmark Innovation Center joined up with Sumitomo Mitsui Bank, Siemens, A3/RIA, Robotics Hub and the Office of Science and Technology Policy for a ’round the Bay’ innovation tour.

The day started with a discussion on cluster strategies at the Denmark Innovation Center in Palo Alto, then moved to SRI International for a lab tour, where we saw both historical robots like Shakey and the latest innovations like Motobot and Superflex.


After lunch, we visited Otherlab in San Francisco to see companies like Pneubotics and Otherlab Orthotics. Then we finished the afternoon off at The Switch in Livermore, with a visit to Positron Dynamics and the Robot Garden maker space. The brews on tap rounded out a very lively day of discussion.

We also welcome new Global Partners: Blue Ocean Robotics and Harmonic Drive. Blue Ocean Robotics is looking for local robotics companies to partner with in developing robotics solutions for the global market. Odense Robotics can also offer assistance in reaching partners and markets in Europe.


Plus, if you’re looking towards Asian markets and partners, SMBC can help. We also had another innovation tour very recently with SPRING Singapore and there are significant subsidies for US robotics companies launching into Singapore.

If you have any photos you’d like to share, or want to become a member of Silicon Valley Robotics, get in contact with Andra.

]]>
Farewell to Vic Scheinman, inventor of the modern robot arm https://robohub.org/farewell-to-vic-scheinman-inventor-of-the-modern-robot-arm/ Thu, 29 Sep 2016 17:07:34 +0000 http://robohub.org/farewell-to-vic-scheinman-inventor-of-the-modern-robot-arm/

Victor Scheinman with an early robot arm at Stanford, 1968.

We are sad to learn that Victor Scheinman passed away on September 20, from complications of heart disease. He was 73. Victor Scheinman was the inventor of the Stanford Arm, the first all-electric 6-axis mechanical manipulator for assembly and automation that was capable of computer control. Scheinman commercialized the robot arm as the PUMA, or Programmable Universal Machine for Assembly, which is used in almost every industrial application today.

Scheinman developed the Stanford Arm while a mechanical engineering student at the Stanford Artificial Intelligence Lab (SAIL) in 1969. According to the Stanford Infolab, the original design is still in use today, although one of the first PUMA arms is now in the Smithsonian Museum.

“The kinematic configuration of the arm is non-anthropomorphic (not humanoid) with 6 joints (5 revolute, 1 prismatic) and links configured such that the mathematical computations (arm solutions) were simplified to speed up computations. Brakes were used on all joints to hold the arm in position while the computer computed the next trajectory or attended to other time shared activities. Drives are DC electric motors, Harmonic Drive and spur gear reducers, potentiometers for position feedback, analog tachometers for velocity feedback and electromechanical brakes for locking joints. Slip clutches were also used to prevent drive damage in the event of a collision. Other enhancements include a servoed, proportional electric gripper with tactile sense contacts on the fingers, and a 6 axis force/torque sensor in the wrist.

This robot arm was one of two mounted on a large table with computer interfaced video (vidicon) cameras and other special tools and tooling. The facility was used by students and researchers for over 20 years for Hand-Eye projects and for teaching purposes, as it was well characterized, reliable and easily maintained. Eventually, it was augmented with commercial electric robots and newer Stanford designs, but the Blue arm, nearly identical is still in occasional use in the Robotics laboratory on this floor.”
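The non-anthropomorphic layout described above (six joints: five revolute, one prismatic) is exactly the kind of structure that makes closed-form “arm solutions” tractable. As a rough illustration only (the Denavit-Hartenberg parameters below are placeholders, not the real Stanford Arm’s values), forward kinematics for such a 5R-1P arm can be sketched like this:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def stanford_style_fk(q):
    """Forward kinematics for a 5R-1P arm in the Stanford Arm's style.

    q = [t1, t2, d3, t4, t5, t6]: joint 3 is prismatic, the rest revolute.
    Link offsets and twists here are illustrative placeholders only.
    """
    t1, t2, d3, t4, t5, t6 = q
    params = [
        (t1, 0.4, 0.0, -np.pi / 2),   # shoulder rotation
        (t2, 0.1, 0.0,  np.pi / 2),   # shoulder pitch
        (0.0, d3, 0.0, 0.0),          # prismatic boom extension
        (t4, 0.0, 0.0, -np.pi / 2),   # wrist roll
        (t5, 0.0, 0.0,  np.pi / 2),   # wrist pitch
        (t6, 0.1, 0.0, 0.0),          # wrist twist / flange
    ]
    T = np.eye(4)
    for theta, d, a, alpha in params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 pose of the end effector in the base frame
```

Because the prismatic joint sits between two simple revolute groups, the position and orientation equations decouple nicely, which is what let the SAIL computers of the era solve them quickly between trajectory updates.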


In 1973, Scheinman started Vicarm to commercialize the Stanford Arm. Vicarm sold the design to Joseph Engelberger’s Unimation in 1977 and development of the PUMA systems continued. In 1980, Scheinman left Unimation and parent company General Motors to join Automatix where he continued to create new robotics systems including the modular Robot World system that was acquired by Yaskawa.


Local roboticist John Meadows, of Able Designs, worked with Vic Scheinman in the 1970s and remained in close contact, attending Silicon Valley Robotics events like the Robot Block Party together.

“It has now been almost fifty years since I met Vic at Raychem and first heard his plans to start a robot company. This became known as Vicarm and I worked with him in designing the PUMA robots which were the first electric anthropomorphic arms. Vicarm became a part of Unimation after which Vic moved on to start Automatix and Robot World.

Vic was beyond any doubt a major influence on my career in automation and robotics; he now has his place in the Pantheon of the great contributors to technology.”

Scheinman was awarded the Robotics Industries Association’s Joseph F. Engelberger Award for technology in 1986, in recognition of his accomplishments. In 1990, Scheinman received the Leonardo da Vinci Award from the American Society of Mechanical Engineers, its top award in product design and invention. The Robotics History Project also has an interview with Victor Scheinman.

From the obituary published in the Almanac News: Updated information on the memorial can be found and memories, photos, and thoughts can be shared on Facebook or by emailing memorial@vicarm.com.

]]>
Last call for startups – Robot Launch 2016 https://robohub.org/last-call-for-startups-robot-launch-2016/ Sat, 13 Aug 2016 17:51:38 +0000 http://robohub.org/last-call-for-startups-robot-launch-2016/

The robotics industry is maturing. The quality of startups in particular has really changed over the last 2-3 years, and this is backed up by increasing investment levels, with over $1 billion invested in robotics in 2015. The scope of market areas where we now see robotics startups is also changing. There are now social robots for health and education, robots doing service tasks in hospitality, retail and logistics, and consumer robots tackling garden and maintenance tasks. There are also new industrial, manufacturing and inspection robots, plus new sensor, software and robotics infrastructure opportunities.

We’ve also seen a real change in how polished robotics startups are, including entrants in the Robot Launch competition – fair warning to all you late entrants: the bar is getting high, and entries close August 15! But the Robot Launch competition is about startups at the start of their journey, and it’s fascinating to see how they continue to grow and mature after the competition. We featured some Robot Launch alumni in our last post. Here are some more Robot Launch alumni stories!

Robotics Technologies of Tennessee was a 2014 finalist, winning the Silicon Valley Robotics award which included showcasing their wall climbing welding and inspection robots at SOLID, the first O’Reilly hardware conference. RTT went on to win the SOLID showcase award.

“SOLID was a who’s who of the maker movement, robotics and connected devices,” said Steve Glovsky, EVP of Robotic Technologies of Tennessee, one of the three startups that got to showcase at the event. “We made amazing contacts, discussed possible collaborations and were serendipitously asked to participate in ‘once in a lifetime’ projects. We even made the 11 o’clock news! Robot Launch 2014 allowed us to connect with people imagining and solving similar challenges in ways we might have missed without participating in the contest and SOLID.”

Welding robot, by Robotics Technologies of Tennessee.


Since then RTT has received several grants and contracts, built smaller, faster, more adaptable robots, and found a new market in the nuclear industry. One of RTT’s new robots was prominently featured in an Electric Power Research Institute (EPRI) Journal article titled “EPRI Research Supports Longer Service Lives for Spent Fuel Dry Casks.” The article reports on the industry’s collective mindset shift from storing spent fuel for relatively short periods to much longer ones, and the role that new robotics technologies play in enabling safe change.

Scanse.io will ship its first low-cost LiDAR in December 2016, after a successful crowdfunding campaign in which Scanse raised more than its goal of $230,000, with over 1,000 backers. If you missed out, they are still accepting preorders for the LiDAR at $255 each.

Sweep specifications:

  • Dual Returns
  • 120 grams
  • 40m Range
  • High ambient light tolerance
  • Immunity to malicious interference
  • 500 Points per Second
  • 360° Horizontal FOV
  • Low Power Consumption

Scanse founders Kent Williams and Tyson Messori actually entered Robot Launch 2014 with a robotic ground vehicle – and then withdrew their entry because they weren’t happy with the prototype. They just couldn’t get good enough navigation. They needed a better, more affordable sensor.

Kent and Tyson reentered Robot Launch 2015 as Scannable with the first version of their new affordable LiDAR unit. They made it to the finals, and less than a year later, Scanse.io has launched!

Are you the next Robot Launch super startup? This weekend is the last chance for startups to enter the 2016 Robot Launch startup competition and maybe join the ranks of some very successful new robotics companies. Deadline for entries is August 15 at 18:00 PDT.

]]>
Where are the previous Robot Launch winners now? https://robohub.org/where-are-the-previous-robot-launch-winners-now/ Thu, 21 Jul 2016 19:00:50 +0000 http://robohub.org/where-are-the-previous-robot-launch-winners-now/ Photo source: LEKA


Catch up with past winners: CleverPet (2015), Preemadonna/Nailbot (2015 runner-up), and Leka (2014).

Will you be our next Robot Launch winner? There are 25 days left to register! You can register at the 2016 Robot Launch global online startup competition.


Grand prize winner (2015): CleverPet

CleverPet is a smart dog feeder that lets you play, teach, and connect with your pet all day. More than just an automatic dog feeder, CleverPet teaches your dog games, dispenses rewards, and adapts the gameplay as your pet gets more skilled. Robot Launch judges said they picked CleverPet because they were “a polished company that is poised to serve a big market that people take seriously,” noting that CleverPet’s adaptability and focus on play distinguish it from other robotic pet products, and that the team understands the needs of both pets and their owners. CleverPet received a prize combination of $5,000 in cash and $5,000 in Amazon Web Services credit, courtesy of CSIRO, SMBC and iRobot.

https://www.youtube.com/watch?v=jp2qqQ3RKTw

Since winning last year, CleverPet was unveiled to the public at the Consumer Electronics Show (CES), January 2016. Co-founder Leo Trottier successfully pitched CleverPet to a delighted and enthusiastic crowd, winning first place in the startup pitch battle.

The CleverPet Hub will be sold through their website, although they are not accepting orders yet; you can sign up to be emailed when orders open. The Hub will be priced at $299 (USD).


1st Runner Up (2015): Preemadonna

Preemadonna designed a nail decorating robot aimed at girls and young women. Nailbot uses thermal inkjet technology, computer vision and a smartphone to allow you to paint custom designs on your fingernails.

Since being awarded 1st Runner Up, Nailbot has amassed a collaborative community of testers, hackers, designers, and nail art fans—early adopters and recruits via the Preemadonna Ambassadors program. Walia has also partnered with nonprofits like Maker Girl to help test, design, and get the word out. Nailbot is currently in beta, with a limited trial release heading out later this year; after that, they’ll head into mass production. The waitlist currently has over 20,000 people, and a preorder campaign will run in September 2016.

“Since Robot Launch, we’ve been humbled and excited by the response from the tech community and from girls and boys that want to be a part of Preemadonna and help bring the Nailbot to market. In fact, we have over 20,000 people on our waitlist and will launch a pre-order campaign in the early Fall! We encourage students to get involved with our Nailbot journey by submitting to one of our Ambassador challenges (design Nailbot art, win a DIY printer or share your story on our blog!). Robot Launch was a terrific launchpad for Preemadonna to introduce the Nailbot to the world.” Pree Walia, CEO of Preemadonna & inventor of the Nailbot.

They also recently won the Audience Award in the inaugural Vision Tank competition, selected by Embedded Vision Summit attendees based on innovative vision-based products. Other awards include being a finalist at both TechCrunch Disrupt SF’s 2015 Startup Battlefield and Girls in Tech’s 2015 Lady Pitch Night, as well as being included in Robohub’s 25 Women in Robotics list.


Grand Prize winner (2014): LEKA

LEKA is a robotic toy that helps autistic children learn to regulate their own emotions through play. The robotic companion is designed specifically for children with special needs, to spark their motivation and help them learn, play and progress.

After a recent successful Indiegogo campaign, Leka Inc. (formerly Moti) developed a product roadmap for full-scale development. They plan to ensure that the first version of LEKA meets all safety and security standards for the American and European markets:

  • July 2016 – Release the first manufactured prototype.
  • November 2016 – Release the second, improved and optimized manufacturing prototype.
  • December 2016 – The last prototype will be released just before Christmas, as we want to present LEKA to Santa’s workshop.
  • January 2017 – LEKA will go through the validation process; production of the pre-series products will start in February.
  • April 2017 – Begin manufacturing the finished products.
  • May 2017 – LEKA will finally be wrapped and delivered to your door.

You can also register at leka.io to keep in the loop for when they officially launch.

Will you be our next Robot Launch winner? Register at 2016 Robot Launch global online startup competition. Pitch your robotics startup online to an audience of top VCs, investors and experts, with live finals in Silicon Valley.

Entries close August 15.



]]>