Robot Talk Episode 103 – Keenan Wyrobek https://robohub.org/robot-talk-episode-103-keenan-wyrobek/ Fri, 20 Dec 2024 13:57:07 +0000 https://robohub.org/?p=211550

Claire chatted to Keenan Wyrobek from Zipline about drones for delivering life-saving medicine to remote locations.

Keenan Wyrobek is co-founder and head of product and engineering at Zipline, the world’s first drone delivery service, whose focus is delivering life-saving medicine to the most difficult-to-reach places on Earth. Prior to Zipline, Keenan was a co-founder and director of the Personal Robotics Program at Willow Garage, where he was involved in launching the Robot Operating System (ROS) and shipping the PR2, the first personal robot for software R&D. Keenan has spent years delivering high-tech products to market across a range of fields, including consumer electronics and medical robotics.

Robot Talk Episode 102 – Isabella Fiorello https://robohub.org/robot-talk-episode-102-isabella-fiorello/ Fri, 13 Dec 2024 13:31:24 +0000 https://robohub.org/?p=211505

Claire chatted to Isabella Fiorello from the University of Freiburg about plant-inspired robots made from living materials.

Isabella Fiorello is a Junior Group Leader and Principal Investigator of the Bioinspired Plant-hybrid Materials group at the University of Freiburg in Germany. She has a Master’s Degree in Industrial Biotechnology from the University of Turin in Italy and a PhD in Biorobotics from Scuola Superiore Sant’Anna in Italy. Her research focusses on the development of biologically-inspired microfabricated living materials able to precisely interact with complex unstructured surfaces for applications in precision agriculture, smart fabrics, space, and soft robotics.

Robot Talk Episode 101 – Christos Bergeles https://robohub.org/robot-talk-episode-101-christos-bergeles/ Fri, 06 Dec 2024 13:09:03 +0000 https://robohub.org/?p=211458

Claire chatted to Christos Bergeles from King’s College London about micro-surgical robots to deliver therapies deep inside the body.

Christos Bergeles received his PhD in Robotics from ETH Zurich in Switzerland in 2011. As a Professor at King’s College London, he directs the “Robotics and Vision in Medicine Lab” whose mission is to develop micro-surgical robots that deliver regenerative therapies deep inside the human body. He holds funding for the development of instrumentation that delivers stem cells to diseased retinal layers in the eye. He and his team are very active in public engagement and patient involvement activities, such as New Scientist Live and the Royal Society Summer Science Exhibition.

Robot Talk Episode 100 – Mini Rai https://robohub.org/robot-talk-episode-100-mini-rai/ Fri, 29 Nov 2024 13:05:24 +0000 https://robohub.org/?p=211396

Claire chatted to Mini Rai from Orbit Rise about orbital and planetary robots.

Mini Rai is the founding Director of Orbit Rise Ltd and an honorary Professor at the University of Lincoln. Previously, she was the Global Chair in Robotic Engineering at the University of Lincoln and an Associate Professor at the Surrey Space Centre. Mini has over 27 years of research and innovation experience in Space Engineering and Technology. With deep-rooted expertise in robotics, automation, control and systems engineering, she has led a large and diverse portfolio of national and international programmes on space robotic missions, spanning orbital and planetary robotics.

Robot Talk Episode 99 – Joe Wolfel https://robohub.org/robot-talk-episode-99-joe-wolfel/ Fri, 22 Nov 2024 13:14:12 +0000 https://robohub.org/?p=211371

Claire chatted to Joe Wolfel from Terradepth about autonomous submersible robots for collecting ocean data.

Joe Wolfel is the CEO and founder of Terradepth. He is passionate about helping people make better and faster decisions regarding what we do (and don’t do) in the ocean. Terradepth designs and builds ocean-going robots at scale, deploys them, and delivers data through an ocean data platform tailored for the maritime community. Prior to Terradepth, Joe helped start a couple of other companies, worked as a management consultant, and served as a US Navy SEAL officer with deployments throughout the Middle East and Africa. Joe was educated at the US Naval Academy.

Robot Talk Episode 98 – Gabriella Pizzuto https://robohub.org/robot-talk-episode-98-gabriella-pizzuto/ Fri, 15 Nov 2024 13:09:41 +0000 https://robohub.org/?p=211342

Claire chatted to Gabriella Pizzuto from the University of Liverpool about intelligent robotic manipulators for laboratory automation.

Gabriella Pizzuto is a Lecturer in Robotics and Chemistry Automation at the University of Liverpool. She is also a Royal Academy of Engineering Research Fellow and ECR Co-Chair on the EPSRC AI Hub in Chemistry. She has a Ph.D. in Computer Science from the University of Manchester, where she was also a Marie Skłodowska-Curie early-stage researcher and a visiting scholar at the University of Edinburgh and the Italian Institute of Technology. She was then a postdoctoral research associate at the Edinburgh Centre for Robotics, prior to joining the University of Liverpool.

Online hands-on science communication training – sign up here! https://robohub.org/online-hands-on-science-communication-training-sign-up-here/ Wed, 13 Nov 2024 08:54:31 +0000 https://aihub.org/?p=16320

On Friday 22 November, the IEEE Robotics and Automation Society will host an online science communication training session for robotics and AI researchers. The tutorial will introduce you to science communication and help you create your own story through hands-on activities.

Date: 22 November 2024
Time: 10:00 – 13:00 EST (07:00 – 10:00 PST, 15:00 – 18:00 GMT, 16:00 – 19:00 CET)
Location: Online – worldwide
Registration
Website

Science communication is essential. It helps demystify robotics and AI for a broad range of people including policy makers, business leaders, and the public. As a researcher, mastering this skill can not only enhance your communication abilities but also expand your network and increase the visibility and impact of your work.

In this three-hour session, leading science communicators in robotics and AI will teach you how to clearly and concisely explain your research to non-specialists. You’ll learn how to avoid hype, how to find suitable images and videos to illustrate your work, and where to start with social media. We’ll hear from a leading robotics journalist on how to deal with media and how to get your story out to a wider audience.

This is a hands-on session with exercises for you to take part in throughout the course. Therefore, please come prepared with an idea about a piece of research you’d like to communicate about.

Agenda

Part 1: How to communicate your work to a broader audience

  • The importance of science communication
  • How to produce a short summary of your research for communication via social media channels
  • How to expand your outline to write a complete blog post
  • How to find and use suitable images
  • How to avoid hype when communicating your research
  • Unconventional ways of doing science communication

Part 2: How to make videos about your robots

  • The value of video
  • Tips on making a video

Part 3: Working with media

  • Why bother talking to media anyway?
  • How media works and what it’s good and bad at
  • How to pitch media a story
  • How to work with your press office

Speakers:
Sabine Hauert, Professor of Swarm Engineering, Executive Trustee AIhub / Robohub
Lucy Smith, Senior Managing Editor AIhub / Robohub
Laura Bridgeman, Audience Development Manager IEEE Spectrum
Evan Ackerman, Senior Editor IEEE Spectrum

Sign up here.

Robot Talk Episode 97 – Pratap Tokekar https://robohub.org/robot-talk-episode-97-pratap-tokekar/ Fri, 08 Nov 2024 12:47:46 +0000 https://robohub.org/?p=211268

Claire chatted to Pratap Tokekar from the University of Maryland about how teams of robots with different capabilities can work together.

Pratap Tokekar is an Associate Professor in the Department of Computer Science and the Institute for Advanced Computer Studies at the University of Maryland, and an Amazon Scholar. Previously, he was a Postdoctoral Researcher at the GRASP lab of University of Pennsylvania and later, an Assistant Professor at Virginia Tech. He has a degree in Electronics and Telecommunication from the College of Engineering Pune in India and a Ph.D. in Computer Science from the University of Minnesota. He received the Amazon Research Award in 2022, and the NSF CAREER award in 2020.

Robot Talk Episode 96 – Maria Elena Giannaccini https://robohub.org/robot-talk-episode-96-maria-elena-giannaccini/ Fri, 01 Nov 2024 12:56:15 +0000 https://robohub.org/?p=211205

Claire chatted to Maria Elena Giannaccini from the University of Aberdeen about soft and bioinspired robotics for healthcare and beyond.

Maria Elena Giannaccini has a degree in Biomedical Engineering from the Università di Pisa in Italy. She conducted her Master’s thesis at Scuola Superiore Sant’Anna as part of the EU-funded OCTOPUS project. In 2015, she obtained her PhD in Robotics at the Bristol Robotics Laboratory, where she focussed on developing safe, variable-stiffness robotic devices. She worked at the University of Bristol on the soft, bioinspired TacTip sensor and a soft robotic artificial larynx. In 2019, Elena was appointed as a Lecturer in Robotics at the University of Aberdeen, where she pioneered research in soft robotics.

Robot Talk Episode 95 – Jonathan Walker https://robohub.org/robot-talk-episode-95-jonathan-walker/ Fri, 25 Oct 2024 11:42:07 +0000 https://robohub.org/?p=211163

Claire chatted to Jonathan Walker from Innovate UK about translating robotics research into the commercial sector.

Jonathan Walker is the Innovation Lead for Robotics and Sensors at Innovate UK. He is working with government, universities, businesses and cross-sector teams to accelerate the development and uptake of robotics in the UK. Areas of particular interest are the built environment, the circular economy, and helping people live independently for longer. Jonathan wants to support these themes through cluster development, funded collaborative R&D, skills programmes ranging from school outreach, apprenticeships and T-levels to CDTs, business support, and leveraging private investment.

Robot Talk Episode 94 – Esyin Chew https://robohub.org/robot-talk-episode-94-esyin-chew/ Fri, 18 Oct 2024 11:39:23 +0000 https://robohub.org/?p=211109

Claire chatted to Esyin Chew from Cardiff Metropolitan University about service and social humanoid robots in healthcare and education.

Esyin Chew is the Director of the EUREKA Robotics Centre, one of 11 specialist robotics centres in the UK, which reaches underprivileged communities with over 120 humanoid robots. She has led million-pound government- or industry-funded global projects across the UK, EU, Australia, Malaysia, China and Indonesia, including the British Council award-winning Global PIE programme for Women in STEAM-H. Esyin has impacted numerous underprivileged communities, particularly girls and women in the education and healthcare sectors, refugees, and the Orang Asli (Indigenous people).

Robot Talk Episode 93 – Matt Beane https://robohub.org/robot-talk-episode-93-matt-beane/ Fri, 11 Oct 2024 11:27:35 +0000 https://robohub.org/?p=211074

Claire chatted to Matt Beane from the University of California, Santa Barbara about how humans can learn to work with intelligent machines.

Matt Beane conducts field research on robots and AI in the workplace, focusing on positive exceptions applicable to the broader world of work. He has published his award-winning research in top management journals and presented on the TED stage. He’s been recognized as a Human-Robot Interaction Pioneer and named to the Thinkers50 Radar list. Matt is an assistant professor in the Technology Management department at the University of California, Santa Barbara, and a Digital Fellow with Stanford’s Digital Economy Lab and MIT’s Initiative on the Digital Economy.

Robot Talk Episode 92 – Gisela Reyes-Cruz https://robohub.org/robot-talk-episode-92-gisela-reyes-cruz/ Fri, 04 Oct 2024 11:31:51 +0000 https://robohub.org/?p=211033

Claire chatted to Gisela Reyes-Cruz from the University of Nottingham about how humans interact with, trust and accept robots.

Gisela Reyes-Cruz investigates human-computer and human-robot interaction to understand everyday life interactions with technologies, as well as trust in them and public acceptance. These technologies include autonomous and robotic systems: from mobile apps that have, or may have, a component that works autonomously, to robots that can navigate a physical space on their own, such as telepresence robots. The goal of Gisela’s work is to inform responsible system design and design practices.

Robot Talk Episode 91 – John Leonard https://robohub.org/robot-talk-episode-91-john-leonard/ Fri, 27 Sep 2024 11:31:28 +0000 https://robohub.org/?p=210976

Claire chatted to John Leonard from Massachusetts Institute of Technology about autonomous navigation for underwater vehicles and self-driving cars.

John Leonard is a Professor of Mechanical and Ocean Engineering at Massachusetts Institute of Technology (MIT) and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation and mapping for autonomous underwater vehicles, self-driving vehicles, and other types of mobile robots. He has a degree in Electrical Engineering and Science from the University of Pennsylvania and a PhD in Engineering Science from the University of Oxford. He is a Technical Advisor at Toyota Research Institute.

Interview with Jerry Tan: Service robot development for education https://robohub.org/interview-with-jerry-tan-service-robot-development-for-education/ Wed, 18 Sep 2024 08:51:06 +0000 https://robohub.org/?p=210914
At the International Joint Conference on Artificial Intelligence (IJCAI) 2023, I had the opportunity to interview Jerry Tan from Lattel Robotics, a company dedicated to promoting AI-focused robotics education and training. They work closely with the RoboCup@Home Education initiative, supporting schools and institutions in introducing AI and service robot development to students. Their goal is to equip learners with practical AI application skills in computer vision, autonomous navigation, object manipulation and speech interactions.

Through their AI robotics and AI applications workshops, Lattel Robotics offers an introduction to robot operating system (ROS)-based AI applications development in service robotics. As a hardware partner for the RoboCup@Home Education initiative, they assist schools and institutions in competing in AI robotic challenges by developing applications that address real-world problems. Their AI learning solutions include setting up AI laboratories, designing curriculums and developing courseware.

Andrea: Thank you for taking the time to speak with me. It’s a pleasure to meet you. Could you please tell me more about who can benefit from the Jupiter2 platform, and what exactly it is designed to do?

Jerry: Just about anyone with a programming background can use it to get started developing their own AI applications through service robot development, using open-source software.

Andrea: What kind of sensors do your robots use? Just microphones and cameras, or is there more?

Jerry: Yes, we use a range of sensors, including RGB-D cameras and LIDAR sensors for depth perception. In addition, we have microphones and speakers for speech interaction, along with robotic arms for object manipulation and mobile platforms for navigation.

Andrea: So, I have a little question about this. Do you have an API (Application Programming Interface) that students and researchers can work through?

Jerry: It’s not always necessary, as the robot is equipped with its own laptop that acts as a central processing unit. Getting started is easier because we have integrated the hardware platform with open-source software such as OpenCV and YOLO, using the Robot Operating System (ROS1 and ROS2). These are popular tools among AI researchers and developers.
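
To give a sense of what this kind of ROS-based integration can look like, here is a minimal, hypothetical sketch of a ROS 2 node that subscribes to a camera topic, converts each frame to an OpenCV image via cv_bridge, and runs a YOLO detector from the ultralytics package. The node name, topic name and model file are illustrative assumptions, not details of the Jupiter2 platform.

```python
# Hypothetical sketch: a ROS 2 node that feeds camera frames to a YOLO detector.
# The node name, topic name and model file are assumptions, not Jupiter2 specifics.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from ultralytics import YOLO


class DetectorNode(Node):
    def __init__(self):
        super().__init__('yolo_detector')
        self.bridge = CvBridge()            # converts ROS Image messages to OpenCV arrays
        self.model = YOLO('yolov8n.pt')     # small pretrained model (assumed)
        self.sub = self.create_subscription(
            Image, '/camera/color/image_raw', self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        results = self.model(frame, verbose=False)   # run detection on one frame
        for box in results[0].boxes:
            label = self.model.names[int(box.cls)]
            self.get_logger().info(f'Detected {label} ({float(box.conf):.2f})')


def main():
    rclpy.init()
    rclpy.spin(DetectorNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```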

Andrea: It sounds intriguing, but the setup process seems complicated. Integration often takes time.

Jerry: The robotic platform is already integrated and ready to be deployed. For our AI learning device, Juno2, all you need to do is connect it to a laptop via a USB port and you can immediately boot up the Linux (Ubuntu) environment with ROS, without any software installation.

AI learning device, Juno2.

Andrea: How do you work with schools?

Jerry: We’re an education solution provider, supporting schools and institutions that want to introduce practical AI applications learning in the classroom. Programming experience in Python or C++ is a prerequisite for AI applications development, so we provide a Python introduction workshop for complete beginners. Then, using a train-the-trainers approach, we offer 3-day AI robotics workshops using Jupiter2 or 2-day AI applications workshops using Juno2. Through these workshops, educators are given an overview of the AI applications modules, which enables them to customise the material for education, training or even research purposes.

Andrea: What if someone doesn’t have direct access to the robot? Can they work remotely?

Jerry: Absolutely. You can connect via remote desktop software from your computer and continue programming. As mentioned just now, we also have a smaller device called Juno2, designed for classroom use. You don’t need the full robot setup in this case; you can use this device for computer vision and speech interaction applications. It’s a plug-and-play solution that works with any laptop or operating system, automatically loading Ubuntu, ROS and our Jupiter software development frameworks. It’s perfect for conducting online classes, as students can continue learning from home or wherever they are.

Andrea: If you have a robot like this, can additional components, such as speakers, be added to it?

Jerry: Yes, definitely. That’s a good question. Jupiter2 is an open platform that is customisable and reconfigurable. If you have an engineering background, you can modify the hardware based on the requirements. This flexibility allows both beginners and advanced users to focus on their specific areas of interest, whether it is software programming or hardware customisation.

Andrea: Have you exported these gadgets before?

Jerry: Yes, we have exported to Europe, Africa, and South and North America. So far, more than 30 schools, universities and institutions globally have adopted our learning platforms for education, training, research and/or robotic competitions.

Andrea: Fantastic, thank you for your time and insights. I wish you continued success in the future!

You can find more information on the Lattel Robotics website.

Jerry Tan is the Managing Director of Lattel Robotics. A German-trained engineer turned entrepreneur, Jerry currently runs AI robotics education and training companies in Malaysia and Singapore. Striving to empower anyone to get started with practical AI applications, Jerry works closely with schools and academic institutions to set up laboratories and develop courseware.

#RoboCup2024 – daily digest: 21 July https://robohub.org/robocup2024-daily-digest-21-july/ Sun, 21 Jul 2024 19:48:45 +0000 https://aihub.org/?p=15404

A break in play during a Small Size League match.

Today, 21 July, saw the competitions draw to a close in a thrilling finale. In the third and final of our round-up articles, we provide a flavour of the action from this last day. If you missed them, you can find our first two digests here: 19 July | 20 July.

My first port of call this morning was the Standard Platform League, where Dr Timothy Wiley and Tom Ellis from Team RedbackBots, RMIT University, Melbourne, Australia, demonstrated an exciting advancement that is unique to their team. They have developed an augmented reality (AR) system with the aim of enhancing the understanding and explainability of the on-field action.

The RedbackBots travelling team for 2024 (L-to-R: Murray Owens, Sam Griffiths, Tom Ellis, Dr Timothy Wiley, Mark Field, Jasper Avice Demay). Photo credit: Dr Timothy Wiley.

Timothy, the academic leader of the team, explained: “What our students proposed at the end of last year’s competition, to make a contribution to the league, was to develop an augmented reality (AR) visualization of what the league calls the team communication monitor. This is a piece of software that gets displayed on the TV screens to the audience and the referee, and it shows you where the robots think they are, information about the game, and where the ball is. We set out to make an AR system of this because we think it’s so much better to view it overlaid on the field. What the AR lets us do is project all of this information live on the field as the robots are moving.”

The team has been demonstrating the system to the league at the event, with very positive feedback. In fact, one of the teams found an error in their software during a game whilst trying out the AR system. Tom said that they’ve received a lot of ideas and suggestions from the other teams for further developments. This is one of the first (if not the first) AR systems to be trialled across the competition, and the first time one has been used in the Standard Platform League. I was lucky enough to get a demo from Tom and it definitely added a new level to the viewing experience. It will be very interesting to see how the system evolves.

Mark Field setting up the MetaQuest3 to use the augmented reality system. Photo credit: Dr Timothy Wiley.

From the main soccer area I headed to the RoboCupJunior zone, where Rui Baptista, an Executive Committee member, gave me a tour of the arenas and introduced me to some of the teams that have been using machine learning models to assist their robots. RoboCupJunior is a competition for school children, and is split into three leagues: Soccer, Rescue and OnStage.

I first caught up with four teams from the Rescue league. Robots identify “victims” within re-created disaster scenarios, varying in complexity from line-following on a flat surface to negotiating paths through obstacles on uneven terrain. There are three different strands to the league: 1) Rescue Line, where robots follow a black line which leads them to a victim, 2) Rescue Maze, where robots need to investigate a maze and identify victims, 3) Rescue Simulation, which is a simulated version of the maze competition.

Team Skollska Knijgia, taking part in the Rescue Line, used a YOLO v8 neural network to detect victims in the evacuation zone. They trained the network themselves with about 5000 images. Also competing in the Rescue Line event was Team Overengeniering2, who used YOLO v8 neural networks for two elements of their system: the first model detects victims in the evacuation zone as well as the walls, while the second model is used during line following, allowing the robot to detect when the black line (used for the majority of the task) changes to a silver line, which indicates the entrance to the evacuation zone.

Left: Team Skollska Knijgia. Right: Team Overengeniering2.
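
For a flavour of what the detection side of such an entry involves, here is a minimal, hypothetical sketch using the ultralytics YOLOv8 API to fine-tune a model on victim images and run it on a single frame. The dataset file, class names and paths are illustrative assumptions, not the teams’ actual configurations.

```python
# Hypothetical sketch of training and using a YOLOv8 detector for victim detection.
# The dataset file and paths are assumptions, not the teams' actual setup.
import numpy as np
from ultralytics import YOLO

# Fine-tune a small pretrained model on a custom dataset of victim images.
# 'victims.yaml' would list the train/val image folders and the class names.
model = YOLO('yolov8n.pt')
model.train(data='victims.yaml', epochs=50, imgsz=640)

# At competition time, load the trained weights and run them on a camera frame.
trained = YOLO('runs/detect/train/weights/best.pt')
frame = np.zeros((640, 640, 3), dtype=np.uint8)   # stand-in for a real camera frame
results = trained(frame)
for box in results[0].boxes:
    print(trained.names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```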

Team Tanorobo! were taking part in the maze competition. They also used a machine learning model for victim detection, training on 3000 photos for each type of victim (these are denoted by different letters in the maze). They also took photos of walls and obstacles to avoid misclassification. Team New Aje were taking part in the simulation contest. They used a graphical user interface to train their machine learning model and to debug their navigation algorithms. They have three different algorithms for navigation, with varying computational cost, which they can switch between depending on their location in the maze and its complexity.

Left: Team Tanorobo! Right: Team New Aje.

I met two of the teams who had recently presented in the OnStage event. Team Medic’s performance was based on a medical scenario and included two machine learning elements: the first is voice recognition, for communication with the “patient” robots, and the second is image recognition to classify X-rays. Team Jam Session’s robot reads American Sign Language symbols and uses them to play a piano. They used the MediaPipe detection algorithm to find different points on the hand, and random forest classifiers to determine which symbol was being displayed.

Left: Team Medic Bot. Right: Team Jam Session.
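
To illustrate the general approach Team Jam Session described, here is a minimal, hypothetical sketch that extracts hand landmarks with MediaPipe and classifies them with a scikit-learn random forest. The placeholder training data, label set and file path are assumptions made purely for illustration; the team’s actual pipeline may differ.

```python
# Hypothetical sketch of sign classification from MediaPipe hand landmarks.
# The training data below is random placeholder data, purely for illustration.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(image_bgr):
    """Return a flat vector of 21 (x, y, z) hand landmarks, or None if no hand is found."""
    result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    pts = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in pts]).flatten()  # 63 values per hand

# In a real system these would come from labelled images of each sign;
# here they are random placeholders so the sketch runs end to end.
X_train = np.random.rand(200, 63)
y_train = np.random.choice(list('ABC'), size=200)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)

# Classify the sign shown in a camera frame (the image path is an assumption).
frame = cv2.imread('frame.jpg')
if frame is not None:
    features = landmark_features(frame)
    if features is not None:
        print('Predicted sign:', clf.predict([features])[0])
```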

Next stop was the humanoid league, where the final match was in progress. The arena was packed to the rafters with crowds eager to see the action.

Standing room only to see the Adult Size Humanoids.

The finals continued with the Middle Size League, with the home team Tech United Eindhoven beating BigHeroX by a convincing 6-1 scoreline. You can watch the livestream of the final day’s action here.

The grand finale featured the winners of the Middle Size League (Tech United Eindhoven) against five RoboCup trustees. The humans ran out 5-2 winners, their superior passing and movement too much for Tech United.

#RoboCup2024 – daily digest: 20 July https://robohub.org/robocup2024-daily-digest-20-july/ Sat, 20 Jul 2024 18:34:07 +0000 https://aihub.org/?p=15402

The Standard Platform Soccer League in action.

This is the second of our daily digests from RoboCup2024 in Eindhoven, The Netherlands. If you missed the first digest, which gives some background to RoboCup, you can find it here.

Competitions continued across all the leagues today, with participants vying for a place in Sunday’s finals.

The RoboCup@Work league focusses on robots in work-related scenarios, utilizing ideas and concepts from other RoboCup competitions to tackle open research challenges in industrial and service robotics.

I arrived at the arena in time to catch the advanced navigation test. Robots have to autonomously navigate, picking up and placing objects at different work stations. In this advanced test, caution tape is added to the arena floor, which the robots should avoid travelling over. There is also a complex placing element where teams have to put an object that they’ve collected into a slot – get the orientation or placement of the object slightly wrong and it won’t fall into the slot.

The RoboCup@Work arena just before competition start.

Eight teams are taking part in the league this year. Executive Committee member Asad Norouzi said that there are plans to introduce a sub-league which would provide an entry point for new teams or juniors to get into the league proper.

I caught up with Harrison Burns, Mitchell Torok and Jasper Arnold from Team MiRobot. They are based at the University of New South Wales and are attending RoboCup for the first time.

Team MiRobot from UNSW.

The team actually only started six months ago, so final preparations have been a bit stressful. However, the experience has been great fun, and the competition has gone well so far. Like most teams, they’ve had to make many refinements as the competition has progressed, leading to some late nights.

One notable feature of the team’s robot is the bespoke, in-house-designed grasping mechanism on the end of the arm. The team note that “it has good flexible jaws, so when it grabs round objects it actually pulls the object directly into it. Because it uses a linear motion, compared to a lot of other rotating jaws, it has a lot better reliability for picking up objects”.

Here is some footage from the task, featuring Team b-it-bots and Team Singapore.

In the Middle Size Soccer league (MSL), teams of five fully autonomous robots play with a regular-size FIFA ball. Teams are free to design their own hardware, but all sensors have to be on-board and there is a maximum size and a weight limit of 40 kg for the robots. The research focus is on mechatronics design, control and multi-agent cooperation at plan and perception levels. Nine teams are competing this year.

I spoke to António Ribeiro, who is a member of the technical committee and part of Team LAR@MSL from the University of Minho, Portugal. The team started in 1998, but António and most of his colleagues on the current team have only been involved in the MSL since September 2022. The robots have evolved as the competition has progressed, and further improvements are in progress. Refinements so far have included communication, the detection system, and the control system. They are pleased with the improvements from the previous RoboCup. “Last year we had a lot of hardware issues, but this year the hardware seems pretty stable. We also changed our coding architecture and it is now much easier and faster for us to develop code because we can all work on the code at the same time on different modules”.

António cited versatility and cost-effective solutions as strengths of the team. “Our robot is actually very cheap compared to other teams. We use a lot of old chassis, and our solutions always go to the lowest cost possible. Some teams have multiple thousand dollar robots, but, for example, our vision system is around $70-80. It works pretty well – we need to improve the way we handle it, but it seems stable”.

Team LAR@MSL

The RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. A set of benchmark tests is used to evaluate the robots’ abilities and performance in a realistic non-standardized home environment setting. These tests include helping to prepare breakfast, clearing the table, and storing groceries.

I arrived in time to watch the “stickler for the rules” challenge, where robots have to navigate different rooms and make sure that the people inside (“guests” at a party) are sticking to four rules: 1) there is one forbidden room – if a guest is in there the robot must alert them and ask them to follow it into another room, 2) everyone must have a drink in their hand – if not, the robot directs them to a shelf with drinks, 3) no shoes to be worn in the house, 4) there should be no rubbish left on the floor.

After watching an attempt from the LAR@Home robot, Tiago from the team told me a bit about the robot. “The goal is to develop a robot capable of multi general-purpose tasks in home and healthcare environments.” With the exception of the robotic arm, all of the hardware was built by the team. The robot has two RGBD cameras, two LIDARs, a tray (where the robot can store items that it needs to carry), and two emergency stop buttons that deactivate all moving parts. Four omnidirectional wheels allow the robot to move in any direction at any time. The wheels have independent suspension systems which guarantees that they can all be on the ground at all times, even if there are bumps and cables on the venue floor. There is a tablet that acts as a visual interface, and a microphone and speakers to enable communication between humans and the robot, which is all done via speaking and listening.

Tiago told me that the team have talked to a lot of healthcare practitioners to find out the main problems faced by elderly people, and this inspired one of their robot features. “They said that the two main injury sources are from when people are trying to sit down or stand up, and when they are trying to pick something up from the floor. We developed a torso that can pick objects from the floor one metre away from the robot”.

The LAR@Home team.


You can keep up with the latest news direct from RoboCup here.

Click here to see all of our content pertaining to RoboCup.

#RoboCup2024 – daily digest: 19 July https://robohub.org/robocup2024-daily-digest-19-july/ Fri, 19 Jul 2024 16:10:00 +0000 https://aihub.org/?p=15398

The main soccer arena.

RoboCup is an international scientific initiative with the goal of advancing the state of the art in intelligent robots. As part of this initiative, a series of competitions and events is held throughout the year. The main showcase event is an international affair, with teams travelling from far and wide to put their machines through their paces.

This year, RoboCup is being held in three arenas in the Genneper Parken, Eindhoven, The Netherlands. The organisers are expecting over 2,000 participants, from 45 different countries, with around 300 teams signed up to take part in the various competitions.

Although RoboCup started out as a football (or soccer) playing competition, other leagues have since been introduced, focussing on robots in industrial, rescue, and home settings. There is even a dedicated league for young roboticists – RoboCupJunior – where participants can take part in either football, rescue, or artistic events.

I am lucky enough to be able to attend this year, and, for the next three days, I’ll be bringing you a daily digest of some of the exciting happenings from Eindhoven.

Today, 19 July, sees the competition in full swing. The main soccer arena, boasting multiple pitches, hosts a number of the different leagues which form RoboCupSoccer.

Some of the pitches in the main soccer arena.

My first port of call was the Standard Platform League, where the round 5 champions cup match between SPQR Team and rUNSWift was taking place. SPQR ran out winners and advanced to round 6. In this league, all teams compete with identical robots (currently the humanoid NAO by Aldebaran). The robots operate fully autonomously, meaning that there is no external control from either humans or computers.

Standard Platform League. Round 5 champions cup match between SPQR Team and rUNSWift.

The Humanoid AdultSize league is arguably the most challenging of the leagues, with many constraints placed on the robots to make them as human-like as possible. For example, they must have roughly human-like body proportions, they need to walk on two legs, and they are only allowed to use human-like sensors (up to two cameras to sense the environment). In this AdultSize competition, two robots from each team compete, and the team members walk behind the robots to catch them in case of a fall. Such a mishap could prove costly in terms of potential hardware damage.

Action from the Humanoid AdultSize League.

The RoboCup Rescue Robot League sees teams developing robotic systems with the goal of enabling emergency responders to perform extremely hazardous tasks from safer stand-off distances. During the competition, teams compete in a round-robin, putting their robots through their paces on a number of different challenges. The leading teams following this initial phase progress to the finals on Sunday. The tasks include navigating in complex environments, opening doors, and sensing. Teams may run the machines completely autonomously, or with some assistive control. More points are awarded for completely autonomous operation.

RoboCup Rescue arena from above.

You can keep up with more RoboCup2024 news here.

Robot Talk Episode 90 – Robotically Augmented People https://robohub.org/robot-talk-episode-90-robotically-augmented-people/ Fri, 21 Jun 2024 12:36:11 +0000 https://robohub.org/?p=210384

Robotics is helping to rehabilitate and increase human abilities in areas like mobility and stamina. Innovations in robotic devices, exoskeletons, and wearable tech aim to offer disabled people different perspectives and new experiences, as well as supporting humans more widely to access, inhabit and work safely in dangerous and extreme conditions. What does the future hold for these technologies and the people they will become a part of?

In this special live recording at the Victoria and Albert Museum as part of the Great Exhibition Road Festival, Claire chatted to Milia Helena Hasbani (Imperial College London), Benjamin Metcalfe (University of Bath) and Dani Clode (Cambridge University) about robotic prosthetics and human augmentation.

 

Milia Helena Hasbani is a researcher in assistive technology at Imperial College London. She is passionate about improving people’s lives through innovation in healthcare and technology in multi-disciplinary environments interfacing with engineers, clinicians, and patients. Her research focuses on the control of active prosthetic arms, combining user intention for wrist movements with a computer vision system for dynamically selecting the grasp type to be used.

Benjamin Metcalfe is a biomedical engineer who specialises in neural interfaces and implanted devices. He is Head of the Department of Electronic & Electrical Engineering at the University of Bath and Deputy Director of the Bath Institute for the Augmented Human. He is also Vice-President (Academic) of the Institute of Physics and Engineering in Medicine. His interests explore the collision between technology and biology and the extent to which engineering can be used to augment and enhance human performance.

Dani Clode is an augmentation and prosthetics designer. She is the Senior Technical Specialist at the Plasticity Lab at Cambridge University and a collaborator of the Alternative Limb Project. Dani’s work investigates the future architecture of our bodies, challenging the perception and boundaries of extending the human form. Her main project, the ‘Third Thumb’, is currently being utilised in collaboration with neuroscientists at Cambridge University, investigating the brain’s ability to adapt to human augmentation.

Robot Talk Episode 89 – Simone Schuerle https://robohub.org/robot-talk-episode-89-simone-schuerle/ Fri, 14 Jun 2024 11:38:11 +0000 https://robohub.org/?p=210349

Claire chatted to Simone Schuerle from ETH Zürich all about microrobots, medicine and science.

Simone Schuerle is Assistant Professor at ETH Zurich in Switzerland, where she heads the Responsive Biomedical System Lab. With her team, she develops diagnostic and therapeutic systems at the nano- and microscale with the aim of tackling a range of challenging problems in medicine. One major focus of her current research is addressing limitations in drug delivery through scalable magnetically enhanced drug transport. In 2014, she co-founded the spin-off MagnebotiX that offers electromagnetic control systems for wireless micromanipulation.

Robot Talk Episode 88 – Lord Ara Darzi https://robohub.org/robot-talk-episode-88-lord-ara-darzi/ Fri, 07 Jun 2024 11:45:02 +0000 https://robohub.org/?p=210315

Claire chatted to Lord Ara Darzi from Imperial College London all about robotic surgery – past, present and future.

Ara Darzi is Co-Director of the Institute of Global Health Innovation at Imperial College London and holds the Paul Hamlyn Chair of Surgery. In 2002, he was knighted for his services to medicine and surgery and in 2007 was introduced as Lord Darzi of Denham to the UK’s House of Lords as the Parliamentary Under-Secretary of State for Health. Professor Darzi leads a large multidisciplinary academic and policy research team, focused on convergence science across engineering, physical and data sciences, specifically in the areas of robotics, sensing, imaging and digital and AI technologies. He is Chair of the NHS Accelerated Access Collaborative, Fellow of the Academy of Medical Sciences and the Royal Society, and Honorary Fellow of the Royal Academy of Engineering.

Robot Talk Episode 87 – Isabelle Ormerod https://robohub.org/robot-talk-episode-87-isabelle-ormerod/ Fri, 31 May 2024 12:36:27 +0000 https://robohub.org/?p=210279

Claire chatted to Isabelle Ormerod from the University of Bristol all about human-centred design and women in robotics.

Isabelle Ormerod is a PhD student at Bristol Robotics Lab in the Design and Manufacturing Futures Lab. Her professional path began in the medical product design industry, where she observed firsthand the application of human factors (HF) processes for dextrous and high-risk procedures. This experience sparked her interest in leveraging data-driven HF approaches in product design. Isabelle is also part of the Leadership team of Women in Robotics UK. This organization is committed to fostering an inclusive community for women and non-binary individuals in the robotics field in the UK.

Robot Talk Episode 86 – Mario Di Castro https://robohub.org/robot-talk-episode-86-mario-di-castro/ Fri, 24 May 2024 12:28:50 +0000 https://robohub.org/?p=210247

Claire chatted to Mario Di Castro from CERN all about robotic inspection and maintenance in hazardous environments.

Mario Di Castro has a Master’s degree in electronic engineering from the University of Naples Federico II in Italy and a PhD in robotics and industrial controls from the Polytechnic University of Madrid in Spain. Since 2011 he has led the Mechatronics, Robotics and Operation section at CERN. The section is responsible for the design, construction, installation, operation and maintenance of robotic systems used for remote maintenance at the CERN accelerator complex. His research interests include tele-robotics, machine learning, and precise motion control in harsh environments.

Congratulations to the #ICRA2024 best paper winners https://robohub.org/congratulations-to-the-icra2024-best-paper-winners/ Mon, 20 May 2024 08:57:02 +0000 https://aihub.org/?p=14966

The 2024 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various categories have been announced. The recipients were revealed during an award luncheon at the conference, which took place from 13-17 May in Yokohama, Japan.


IEEE ICRA Best Paper Award in Automation

Winner

TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester

Finalists

  • A Movable Microfluidic Chip with Gap Effect for Manipulation of Oocytes, Shuzhang Liang, Satoshi Amaya, Hirotaka Sugiura, Hao Mo, Yuguo Dai, and Fumihito Arai
  • Under Pressure: Learning-Based Analog Gauge Reading in the Wild, Maurits Reitsma, Julian Keller, Kenneth Blomqvist, and Roland Siegwart
  • Efficient Composite Learning Robot Control Under Partial Interval Excitation, Tian Shi, Weibing Li, Haoyong Yu, and Yongping Pan
  • MORALS: Analysis of High-Dimensional Robot Controllers via Topological Tools in a Latent Space, Ewerton Vieira, Aravind Sivaramakrishnan, Sumanth Tangirala, Edgar Granados, Konstantin Mischaikow, and Kostas E. Bekris

IEEE ICRA Best Paper Award in Cognitive Robotics

Winner

VLFM: Vision-Language Frontier Maps for Semantic Navigation, Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher

Finalists

  • NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Resilient Legged Local Navigation: Learning to Traverse with Compromised Perception End-to-End, Chong Zhang, Jin Jin, Jonas Frey, Nikita Rudin, Matias Mattamala, Cesar Cadena Lerma, and Marco Hutter
  • Learning Continuous Control with Geometric Regularity from Robot Intrinsic Symmetry, Shengchao Yan, Baohe Zhang, Yuan Zhang, Joschka Boedecker, and Wolfram Burgard
  • Learning Vision-Based Bipedal Locomotion for Challenging Terrain, Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart Jaap Van Marum, Jeremy Dao, Chanho Kim, and Alan Fern

IEEE ICRA Best Paper Award in Robot Manipulation

Winner

SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention, Isabel Leal, Krzysztof Choromanski, Deepali Jain, Avinava Dubey, Jacob Varley, Michael S. Ryoo, Yao Lu, Frederick Liu, Vikas Sindhwani, Tamas Sarlos, Kenneth Oslund, Karol Hausman, Quan Vuong, and Kanishka Rao

Finalists

  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Marin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans, Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, and Shubham Tulsiani
  • Hearing Touch: Audio-Visual Pretraining for Contact-Rich Manipulation, Jared Mejia, Victoria Dean, Tess Hellebrekers, and Abhinav Gupta
  • DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces, Won Kyung Do, Ankush Ankush Dhawan, Mathilda Kitzmann, and Monroe Kennedy
  • Constrained Bimanual Planning with Analytic Inverse Kinematics, Thomas Cohn, Seiji Shaw, Max Simchowitz, and Russ Tedrake

IEEE ICRA Best Paper Award on Human-Robot Interaction

Winner

CoFRIDA: Self-Supervised Fine-Tuning for Human-Robot Co-Painting, Peter Schaldenbrand, Gaurav Parmar, Jun-Yan Zhu, James Mccann, and Jean Oh

Finalists

  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments, Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, and Rainer Stiefelhagen
  • Robot-Assisted Navigation for Visually Impaired through Adaptive Impedance and Path Planning, Pietro Balatti, Idil Ozdamar, Doganay Sirintuna, Luca Fortini, Mattia Leonori, Juan M. Gandarias, and Arash Ajoudani
  • Incremental Learning of Full-Pose Via-Point Movement Primitives on Riemannian Manifolds, Tilman Daab, Noémie Jaquier, Christian R. G. Dreher, Andre Meixner, Franziska Krebs, and Tamim Asfour
  • Supernumerary Robotic Limbs to Support Post-Fall Recoveries for Astronauts, Erik Ballesteros, Sang-Yoep Lee, Kalind Carpenter, and Harry Asada

IEEE ICRA Best Paper Award in Medical Robotics

Winner

Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons

Finalists

  • Intraoperatively Iterative Hough Transform Based In-plane Hybrid Control of Arterial Robotic Ultrasound for Magnetic Catheterization, Zhengyang Li, Magejiang Yeerbulati, and Qingsong Xu
  • Efficient Model Learning and Adaptive Tracking Control of Magnetic Micro-Robots for Non-Contact Manipulation, Yongyi Jia, Shu Miao, Junjian Zhou, Niandong Jiao, Lianqing Liu, and Xiang Li
  • Colibri5: Real-Time Monocular 5-DoF Trocar Pose Tracking for Robot-Assisted Vitreoretinal Surgery, Shervin Dehghani, Michael Sommersperger, Mahdi Saleh, Alireza Alikhani, Benjamin Busam, Peter Gehlbach, Ioan Iulian Iordachita, Nassir Navab, and M. Ali Nasseri
  • Hybrid Volitional Control of a Robotic Transtibial Prosthesis using a Phase Variable Impedance Controller, Ryan Posh, Jonathan Allen Tittle, David Kelly, James Schmiedeler, and Patrick M. Wensing
  • Design and Implementation of a Robotized Hand-held Dissector for Endoscopic Pulmonary Endarterectomy, Runfeng Zhu, Xilong Hou, Wei Huang, Lei Du, Zhong Wu, Hongbin Liu, Henry Chu, and Qing Xiang Zhao

IEEE ICRA Best Paper Award on Mechanisms and Design

Winner

Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu

Finalists

  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Lissajous Curve-Based Vibrational Orbit Control of a Flexible Vibrational Actuator with a Structural Anisotropy, Yuto Miyazaki and Mitsuru Higashimori
  • Dynamic Modeling of Wing-Assisted Inclined Running with a Morphing Multi-Modal Robot, Eric Sihite, Alireza Ramezani, and Gharib Morteza

IEEE ICRA Best Paper Award on Multi-Robot Systems

Winner

Do We Run Large-Scale Multi-Robot Systems on the Edge? More Evidence for Two-Phase Performance in System Size Scaling, Jonas Kuckling, Robin Luckey, Viktor Avrutin, Andrew Vardy, Andreagiovanni Reina, and Heiko Hamann

Finalists

  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Learning for Dynamic Subteaming and Voluntary Waiting in Heterogeneous Multi-Robot Collaborative Scheduling, Williard Joshua Jose and Hao Zhang
  • Asynchronous Distributed Smoothing and Mapping via On-Manifold Consensus ADMM, Daniel Mcgann, Kyle Lassak, and Michael Kaess
  • Uncertainty-Bounded Active Monitoring of Unknown Dynamic Targets in Road-Networks with Minimum Fleet, Shuaikang Wang, Yiannis Kantaros, and Meng Guo

IEEE ICRA Best Paper Award in Service Robotics

Winner

Learning to Walk in Confined Spaces Using 3D Representation, Takahiro Miki, Joonho Lee, Lorenz Wellhausen, and Marco Hutter

Finalists

  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots, Federico Magistri, Rodrigo Marcuzzi, Elias Ariel Marks, Matteo Sodano, Jens Behley, and Cyrill Stachniss
  • CoPAL: Corrective Planning of Robot Actions with Large Language Models, Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, and Michael Gienger
  • CalliRewrite: Recovering Handwriting Behaviors from Calligraphy Images without Supervision, Yuxuan Luo, Zekun Wu, and Zhouhui Lian

IEEE ICRA Best Paper Award in Robot Vision

Winner

NGEL-SLAM: Neural Implicit Representation-based Global Consistent Low-Latency SLAM System, Yunxuan Mao, Xuan Yu, Kai Wang, Yue Wang, Rong Xiong, and Yiyi Liao

Finalists

  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • Deep Evidential Uncertainty Estimation for Semantic Segmentation under Out-Of-Distribution Obstacles, Siddharth Ancha, Philip Osteen, and Nicholas Roy
  • SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud Tracking, Yu Lin, Zhiheng Li, Yubo Cui, and Zheng Fang
  • Ultrafast Square-Root Filter-based VINS, Yuxiang Peng, Chuchu Chen, and Guoquan Huang
  • Universal Visual Decomposer: Long-Horizon Manipulation Made Easy, Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Jason Ma, and Luca Weihs

IEEE ICRA Best Paper Award on Unmanned Aerial Vehicles

Winner

Time-Optimal Gate-Traversing Planner for Autonomous Drone Racing, Chao Qin, Maxime Simon Joseph Michet, Jingxiang Chen, and Hugh H.-T. Liu

Finalists

  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao
  • Co-Design Optimisation of Morphing Topology and Control of Winged Drones, Fabio Bergonti, Gabriele Nava, Valentin Wüest, Antonello Paolino, Giuseppe L’Erario, Daniele Pucci, and Dario Floreano
  • FC-Planner: A Skeleton-guided Planning Framework for Fast Aerial Coverage of Complex 3D Scenes, Chen Feng, Haojia Li, Mingjie Zhang, Xinyi Chen, Boyu Zhou, and Shaojie Shen
  • Sequential Trajectory Optimization for Externally-Actuated Modular Manipulators with Joint Locking, Jaeu Choe, Jeongseob Lee, Hyunsoo Yang, Hai-Nguyen (Hann) Nguyen, and Dongjun Lee
  • Spatial Assisted Human-Drone Collaborative Navigation and Interaction through Immersive Mixed Reality, Luca Morando and Giuseppe Loianno

IEEE ICRA Best Student Paper Award

Winner

Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

IEEE ICRA Best Conference Paper Award

Winners

  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

]]>
Robot Talk Episode 85 – Margarita Chli https://robohub.org/robot-talk-episode-85-margarita-chli/ Fri, 17 May 2024 11:21:41 +0000 https://robohub.org/?p=210201

Claire chatted to Margarita Chli from the University of Cyprus all about vision, navigation, and small aerial drones.

Margarita Chli is a professor of Robotic Vision and the director of the Vision for Robotics Lab at the University of Cyprus and ETH Zurich. Her work has contributed to the first vision-based autonomous flight of a small drone and the first demonstration of collaborative monocular SLAM for a small swarm of drones. Margarita has given invited keynotes at the World Economic Forum in Davos, TEDx, and ICRA, and she was featured in Robohub’s 2016 list of “25 women in Robotics you need to know about”. In 2023, she won an ERC Consolidator Grant to research advanced robotic perception.

]]>
What’s coming up at #ICRA2024? https://robohub.org/whats-coming-up-at-icra2024/ Fri, 10 May 2024 10:28:26 +0000 https://aihub.org/?p=14941

The 2024 IEEE International Conference on Robotics and Automation (ICRA) will take place from 13-17 May, in Yokohama, Japan. The event will feature plenary and keynote talks, technical sessions, posters, workshops and tutorials.

Plenary speakers

There are three plenary talks at the conference this year:

  • Yoky Matsuoka – How to Turn a Roboticist into a Corporate Explorer
  • Sami Haddadin – The Great Robot Accelerator: Collective Learning of Optimal Embodied AI
  • Sunil K Agrawal – Rehabilitation Robotics: How to Improve Daily Functions in People with Impairments?

Keynote talks

There will be 15 keynote talks, given by:

  • Lianqing Liu – Biosyncretic sensing, actuation and intelligence for robotics
  • Dawn M. Tilbury – Digital Twins for Manufacturing Automation
  • Claudio Pacchierotti – Beyond Force Feedback: Cutaneous Haptics in Human-centered Robotics
  • Yu Sun – Medical Robotics for Cell Surgery – Science and Applications
  • Yasuhisa Hirata – Adaptable AI-enabled Robots to Create a Vibrant Society – Moonshot R&D Projects in Japan
  • Calin Belta – Formal Methods for Safety-Critical Control
  • Manuel Catalano – Robots in the Wild: From Research Labs to the Real World
  • Harold Soh – Building Guidance Bridges with Generative Models for Robot Learning and Control
  • Lorenzo Sabattini – Unleashing the power of many: decentralized control of multi-robot systems
  • Myunghee Kim – Human-wearable robot co-adaptation
  • Yoko Yamanishi – Emergent Functions of Electrically-induced Bubbles and Intra-cellular-Cybernetic Avatar
  • Kensuke Harada – Robotic manipulation aiming for industrial applications
  • Iolanda Leite – The Quest for Social Robot Autonomy
  • Rong Xiong – Integration of Robotics and AI: Changes and Challenges
  • Mariana Medina-Sánchez – Tiny Robots, Big Impact: Transforming Gynecological Care

Tutorials

The tutorials will be held on Monday 13 May and Friday 17 May.

Workshops

The workshops will also be held on Monday 13 May and Friday 17 May. There are 73 to choose from this year.

You can see the programme overview here, with a detailed programme available here.

]]>
Octopus inspires new suction mechanism for robots https://robohub.org/octopus-inspires-new-suction-mechanism-for-robots/ Wed, 17 Apr 2024 23:01:00 +0000 https://www.bristol.ac.uk/news/2024/april/octopus-robots.html Suction cup grasping a stone – Image credit: Tianqi Yue

The team, based at Bristol Robotics Laboratory, studied the structures of octopus biological suckers, which have superb adaptive suction abilities enabling them to anchor to rock.

In their findings, published in the journal PNAS today, the researchers show how they were able to create a multi-layer soft structure and an artificial fluidic system to mimic the musculature and mucus structures of biological suckers.

Suction is a highly evolved biological adhesion strategy for soft-body organisms to achieve strong grasping on various objects. Biological suckers can adaptively attach to dry complex surfaces such as rocks and shells, which are extremely challenging for current artificial suction cups. Although the adaptive suction of biological suckers is believed to be the result of their soft body’s mechanical deformation, some studies imply that in-sucker mucus secretion may be another critical factor in helping attach to complex surfaces, thanks to its high viscosity.

Lead author Tianqi Yue explained: “The most important development is that we successfully demonstrated the effectiveness of the combination of mechanical conformation – the use of soft materials to conform to surface shape, and liquid seal – the spread of water onto the contacting surface for improving the suction adaptability on complex surfaces. This may also be the secret behind biological organisms’ ability to achieve adaptive suction.”

Their multi-scale suction mechanism is an organic combination of mechanical conformation and regulated water seal. Multi-layer soft materials first generate a rough mechanical conformation to the substrate, reducing leaking apertures to just micrometres. The remaining micron-sized apertures are then sealed by regulated water secretion from an artificial fluidic system based on the physical model, so that the suction cup achieves long suction longevity on diverse surfaces with minimal overflow.

 

Tianqi added: “We believe the presented multi-scale adaptive suction mechanism is a powerful new adaptive suction strategy which may be instrumental in the development of versatile soft adhesion.

“Current industrial solutions use always-on air pumps to actively generate the suction; however, these are noisy and waste energy.

“With no need for a pump, it is well known that many natural organisms with suckers, including octopuses, some fishes such as suckerfish and remoras, leeches, gastropods and echinoderms, can maintain their superb adaptive suction on complex surfaces by exploiting their soft body structures.”

The findings have great potential for industrial applications, such as providing a next-generation robotic gripper for grasping a variety of irregular objects.

The team now plan to build a more intelligent suction cup, by embedding sensors into the suction cup to regulate suction cup’s behaviour.

Paper

‘Bioinspired multiscale adaptive suction on complex dry surfaces enhanced by regulated water secretion’ by Tianqi Yue, Weiyong Si, Alex Keller, Chenguang Yang, Hermes Bloomfield-Gadêlha and Jonathan Rossiter in PNAS.

]]>
Open Robotics Launches the Open Source Robotics Alliance https://robohub.org/open-robotics-launches-the-open-source-robotics-alliance/ Mon, 18 Mar 2024 23:11:33 +0000 https://robohub.org/?p=209746

The Open Source Robotics Foundation (OSRF) is pleased to announce the creation of the Open Source Robotics Alliance (OSRA), a new initiative to strengthen the governance of our open-source robotics software projects and ensure the health of the Robot Operating System (ROS) Suite community for many years to come. The OSRA will use a mixed membership and meritocratic model, following other successful foundations for open-source projects, including The Linux Foundation and the Eclipse Foundation.

The OSRA is extending an open invitation to all community stakeholders to participate in the technical oversight, direction, development, and support of the OSRF’s open source projects – ROS, Gazebo, Open-RMF, and their infrastructure. Involvement across the robotics ecosystem is crucial to this initiative.

The center of the OSRA will be the Technical Governance Committee (TGC), which will oversee the activities of various Project Management Committees, Technical Committees, Special Interest Groups, and Working Groups. As a charitable program of the OSRF, overall responsibility for the OSRA remains with the OSRF Board.

The Alliance has received early support for our vision from prominent organizations such as NVIDIA, our inaugural Platinum member.

“NVIDIA develops with ROS 2 to bring accelerated computing and AI to developers, researchers, and commercial applications,” said Gordon Grigor, VP Robotics Software, NVIDIA. “As an inaugural platinum member of OSRA, we will collaborate to advance open-source robotics throughout the ecosystem by aiding development efforts, and providing governance and continuity.”

Intrinsic also continues its support of Open Robotics with its inaugural Platinum membership. “From the numerous contributions made by our team at Intrinsic across projects like ROS, Gazebo, and Open-RMF as part of the Open Robotics community, to our acquisition of the Open Source Robotics Corporation (OSRC), we’ve invested deeply in the open source community, and we look forward to continuing our support of the ecosystem as an inaugural member of the OSRA,” said Wendy Tan White, CEO of Intrinsic. Qualcomm Technologies rounds out the inaugural Platinum membership group and signals its commitment to open source robotics as well.

“Qualcomm Technologies is excited to join the Open Source Robotics Alliance (OSRA) to help drive the development of open-source robotics software and foster the growth of the vibrant ROS Suite developer community,” stated Dev Singh, Vice President of Business Development and Head of Robotics & Industrial Automation at Qualcomm Technologies. “Leveraging our longstanding history in on-device AI processing and heterogeneous computing, our comprehensive family of robotics platforms addresses all types of robots to deliver the benefits of AI at the edge.”

NVIDIA, Intrinsic, and Qualcomm Technologies join nine other inaugural members at press time, including Gold members Apex.ai and Zettascale, Silver members Clearpath Robotics, Ekumen, eProsima, and PickNik, and Associate member Silicon Valley Robotics. Initial Supporting Organizations include Canonical and Open Navigation. Incoming members include Bosch and ROS-Industrial, and several others to be announced soon.

Membership applications are now open for organizations and individuals interested in joining the OSRA and supporting the future of open source robotics. Instructions on how to apply and information on the member levels and benefits are available at www.osralliance.org.

About Open Robotics: Open Robotics is the umbrella term for the Open Source Robotics Foundation (OSRF) and its initiatives. Founded in 2012, the OSRF is a California nonprofit public benefit corporation exempt under Section 501(c)(3) of the Internal Revenue Code. Its flagship open-source robotics software, ROS, is the world’s most widely adopted robotics framework suite. For more information about its new OSRA initiative, please visit www.osralliance.org.

]]>
Robot Talk Episode 77 – Patricia Shaw https://robohub.org/robot-talk-episode-77-patricia-shaw/ Mon, 18 Mar 2024 22:50:38 +0000 https://robohub.org/?p=209671

Claire chatted to Patricia Shaw from Aberystwyth University all about home assistance robots, and robot learning and development.

Patricia Shaw is a Senior Lecturer in Computer Science and Robotics at Aberystwyth University. Her current research interests include technology for assistive living and she is currently leading on establishing a new Smart Home Lab at the university. This new lab will be used to research, develop and test a wide range of sensing technology for monitoring activities in the home as well as robots ranging from companions to assistants around the home. She strongly supports public engagement and has been coordinating regional robotics week events for the last 6 years.

]]>
Robot Talk Episode 64 – Rav Chunilal https://robohub.org/robot-talk-episode-64-rav-chunilal/ Sun, 31 Dec 2023 08:46:37 +0000 https://robohub.org/?p=208842 Claire chatted to Rav Chunilal from Sellafield all about robotics and AI for nuclear decommissioning.

Rav Chunilal is the Head of Robotics & Artificial Intelligence at Sellafield Ltd, the UK’s largest nuclear site. He is responsible for developing and deploying Robotics and AI technologies to decommission the site safely, efficiently, and cost-effectively. Under Rav’s leadership, Sellafield Ltd and the Nuclear Decommissioning Authority Group has established itself as a leader in the use of Robotics and AI for nuclear decommissioning. The company has developed and deployed a wide range of robotic technologies to perform tasks such as remote inspection, waste handling, and demolition.

]]>
AI holidays 2023 https://robohub.org/ai-holidays-2023/ Sun, 31 Dec 2023 08:43:52 +0000 https://aihub.org/?p=13873

Thanks to those that sent and suggested AI and robotics-themed holiday videos, images, and stories. Here’s a sample to get you into the spirit this season.


Josh Miller used ChatGPT to create the 12 days of Christmas. Prompt: “I want to make 12 images in an early 1980s era to depict the twelve days of Christmas”.

See the pictures in higher definition here.


“Santa’s Secret: Robots Saved Christmas! | CPS XMAS video 2023” from Chair of Cyber-Physical Systems.


“AI rewriting classic Christmas stories” from GingerPale.


Using PaLM2 to generate haikus.

Prompt: “Write me a haiku about an alternative xmas”.
No gifts, no grand feast,
Just laughter and joy, warmth spread,
Kindness fills the air.

Prompt: “Write a haiku about the winter holiday season”.
Snow blankets the land,
Hearts aglow with love’s warmth,
Joyous cheer surrounds.


“Put a Bow on It | Happy Holidays” from Boston Dynamics.


“Christmas Magic: Animated Backgrounds for an Enchanting Atmosphere” from AI-ArtificialIntelligence.


“XMAS – Episode 3: AI – Christmas music for the HAKOM Christmas party” from HAKOM Time Series.


“Christmas tree in python notebook with Turtle” from Learn Upwards.


“FZI Living Lab Christmas Robotics 2023 – Robotic Reindeer Rescue” from FZI channel.


A letter to Santa generated by PaLM2. Prompt: “Write me a letter to Santa Claus. I don’t want any presents, just inspiration in my machine learning research for 2024.”

Dear Santa Claus,

As the year comes to a close, I find myself reflecting on the remarkable advancements in machine learning that have transformed our world. From self-driving cars to personalized medicine, these innovations have opened up new possibilities and enriched our lives in ways we could have only imagined a few decades ago.

I am writing to you today not to ask for material gifts, but rather for something more precious: inspiration to continue exploring the frontiers of machine learning in the year to come. As a researcher in this field, I am eager to contribute to the development of algorithms and techniques that can further enhance our understanding of the world around us and create solutions to pressing global challenges.

I am particularly fascinated by the potential of machine learning to address the climate crisis. By harnessing the power of data and computation, we can develop more efficient renewable energy sources, optimize transportation systems, and mitigate the impact of extreme weather events. Additionally, I believe that machine learning can play a crucial role in improving education, healthcare, and environmental sustainability.

I know that you have a special ability to see the world in a unique and insightful way. I hope that you can share some of your wisdom with me and guide me towards promising research directions. I am eager to embark on new challenges and make a positive impact on the world through machine learning.

Thank you for considering my request, Santa Claus. I look forward to your guidance and inspiration as I pursue my research endeavors in the year ahead.

With warmest regards,

[Your Name]

]]>
Faced with dwindling bee colonies, scientists are arming queens with robots and smart hives https://robohub.org/faced-with-dwindling-bee-colonies-scientists-are-arming-queens-with-robots-and-smart-hives/ Sun, 31 Dec 2023 08:29:29 +0000 http://robohub.org/?guid=9b6739d9acee77ead04840b46cd2ede9 By Farshad Arvin, Martin Stefanec, and Tomas Krajnik

Be it the news or the dwindling number of creatures hitting your windscreens, it will not have evaded you that the insect world is in bad shape.

In the last three decades, the global biomass of flying insects has shrunk by 75%. Among the trend’s most notable victims is the world’s most important pollinator, the honeybee. In the United States, 48% of honeybee colonies died in 2023 alone, making it the second deadliest year on record. This significant loss is due in part to colony collapse disorder (CCD), the sudden disappearance of bees. In contrast, European countries report lower but still worrisome rates of colony losses, ranging from 6% to 32%.

This decline causes many of our essential food crops to be under-pollinated, a phenomenon that threatens our society’s food security.

Debunking the sci-fi myth of robotic bees

So, what can be done? Given pesticides’ role in the decline of bee colonies, commonly proposed solutions include a shift away from industrial farming and toward less pesticide-intensive, more sustainable forms of agriculture.

Others tend to look toward the sci-fi end of things, with some scientists imagining that we could eventually replace live honeybees with robotic ones. Such artificial bees could interact with flowers like natural insects, maintaining pollination levels despite the declining numbers of natural pollinators. The vision of artificial pollinators contributed to ingenious designs of insect-sized robots capable of flying.

In reality, such inventions are more effective at illustrating engineers’ fantasies than at reviving bee colonies, so slim are their prospects of materialising. First, these artificial pollinators would have to be equipped for much more than just flying. Daily tasks carried out by the common bee include searching for plants, identifying flowers, unobtrusively interacting with them, locating energy sources, ducking potential predators, and dealing with adverse weather conditions. Robots would have to perform all of these in the wild with a very high degree of reliability, since any broken-down or lost robot can cause damage and spread pollution. Second, it remains to be seen whether our technological knowledge would even be capable of manufacturing such inventions. This is without even mentioning the price tag of a swarm of robots capable of substituting the pollination provided by a single honeybee colony.

Inside a smart hive

Bees on one of Hiveopolis’s augmented hives.
Hiveopolis, provided by the author

Rather than trying to replace honeybees with robots, our two latest projects funded by the European Union propose that the robots and honeybees actually team up. Were these to succeed, struggling honeybee colonies could be transformed into bio-hybrid entities consisting of biological and technological components with complementary skills. This would hopefully boost and secure the colonies’ population growth as more bees survive over harsh winters and yield more foragers to pollinate surrounding ecosystems.

The first of these projects, Hiveopolis, investigates how the complex decentralised decision-making mechanism in a honeybee colony can be nudged by digital technology. Begun in 2019 and set to end in March 2024, the experiment introduces technology into three observation hives, each containing 4,000 bees, compared to 40,000 bees for a normal colony.

The foundation of an augmented honeycomb.
Hiveopolis, provided by the author

Within this honeybee smart home, combs have integrated temperature sensors and heating devices, allowing the bees to enjoy optimal conditions inside the colony. Since bees tend to snuggle up to warmer locations, the combs also enable us to direct them toward different areas of the hive. And as if that control weren’t enough, the hives are also equipped with a system of electronic gates that monitors the insects’ movements. Both technologies allow us to decide where the bees store honey and pollen, but also when they vacate the combs so as to enable us to harvest honey. Last but not least, the smart hive contains a robotic dancing bee that can direct foraging bees toward areas with plants to be pollinated.

Due to the experiment’s small scale, it is impossible to draw conclusions on the extent to which our technologies may have prevented bee losses. However, there is little doubt that what we have seen thus far gives reason to be hopeful. We can confidently assert that our smart beehives allowed colonies to survive extreme cold during the winter in a way that wouldn’t otherwise be possible. To precisely assess how many bees these technologies have saved would require upscaling the experiment to hundreds of colonies.

Pampering the queen bee

Our second EU-funded project, RoboRoyale, focuses on the honeybee queen and her courtyard bees, with robots in this instance continuously monitoring and interacting with her Royal Highness.

Come 2024, we will equip each hive with a group of six bee-sized robots, which will groom and feed the honeybee queen to affect the number of eggs she lays. Some of these robots will be equipped with royal jelly micro-pumps to feed her, while others will feature compliant micro-actuators to groom her. These robots will then be connected to a larger robotic arm with infrared cameras, that will continuously monitor the queen and her vicinity.

A RoboRoyale robot arm susses out a honeybee colony.
RoboRoyale, provided by the author

As witnessed by the photo to the right and also below, we have already been able to successfully introduce the robotic arm within a living colony. There it continuously monitored the queen and determined her whereabouts through light stimuli.

Emulating the worker bees

In a second phase, it is hoped the bee-sized robots and robotic arm will be able to emulate the behaviour of the workers, the female bees lacking reproductive capacity who attend to the queen and feed her royal jelly. Rich in water, proteins, carbohydrates, lipids, vitamins and minerals, this nutritious substance secreted by the glands of the worker bees enables the queen to lay up to thousands of eggs a day.

Worker bees also engage in cleaning the queen, which involves licking her. During such interactions, they collect some of the queen’s pheromones and disperse them throughout the colony as they move across the hive. The presence of these pheromones controls many of the colony’s behaviours and notifies the colony of a queen’s presence. For example, in the event of the queen’s demise, a new queen must be quickly reared from an egg laid by the late queen, leaving only a narrow time window for the colony to react.

One of RoboRoyale’s first experiments has consisted of simple interactions with the queen bee through light stimuli. The next months will then see the robotic arm stretch out to physically touch and groom her.
RoboRoyale, provided by the author

Finally, it is believed worker bees may also act as the queen’s guides, leading her to lay eggs in specific comb cells. The size of these cells can determine if the queen lays a diploid or haploid egg, resulting in the bee developing into either a drone (male) or a worker (female) bee. Taking over these guiding duties could affect no less than the colony’s entire reproductive rate.

How robots can prevent bee cannibalism

This could have another virtuous effect: preventing cannibalism.

During tough times, such as long periods of rain, bees have to make do with little pollen intake. This forces them to feed young larvae to older ones so that at least the older larvae have a chance to survive. Through RoboRoyale, we will look not only to reduce the chances of this behaviour occurring, but also to quantify to what extent it occurs under normal conditions.

Ultimately, our robots will enable us to deepen our understanding of the very complex regulation processes inside honeybee colonies through novel experimental procedures. The insights gained from these new research tracks will be necessary to better protect these valuable social insects and ensure sufficient pollination in the future – a high stakes enterprise for food security.


This article is the result of The Conversation’s collaboration with Horizon, the EU research and innovation magazine.

The Conversation

Farshad Arvin is a member of the Department of Computer Science at Durham University in the UK. The research of Farshad Arvin is primarily funded by the EU H2020 and Horizon Europe programmes.

Martin Stefanec is a member of the Institute of Biology at the University of Graz. He has received funding from the EU programs H2020 and Horizon Europe.

Tomas Krajnik is a member of the Institute of Electrical and Electronics Engineers (IEEE). The research of Tomas Krajnik is primarily funded by the EU H2020 Horizon programme and the Czech National Science Foundation.

]]>
Robot Talk Episode 63 – Ayse Kucukyilmaz https://robohub.org/robot-talk-episode-63-ayse-kucukyilmaz/ Sun, 31 Dec 2023 08:28:29 +0000 https://robohub.org/?p=208808

Claire chatted to Ayse Kucukyilmaz from the University of Nottingham about collaboration, conflict and failure in human-robot interactions.

Ayse Kucukyilmaz is Assistant Professor in the School of Computer Science at the University of Nottingham and a founding member of the CHART research group. Her research focus is human-centered robotics, where she specialises in haptic shared and traded control for physical human-robot interaction. Her group primarily works on adjustable autonomy paradigms to enable dynamic switching behaviours between different levels of robotic autonomy (e.g. full human control vs. full autonomy) during shared control of a physical task, enabling effective human-autonomy teaming.

]]>
Interview with Dautzenberg Roman: #IROS2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp. https://robohub.org/interview-with-dautzenberg-roman-iros2023-best-paper-award-on-mobile-manipulation-sponsored-by-omron-sinic-x-corp/ Sun, 19 Nov 2023 09:00:38 +0000 https://robohub.org/?p=208787

Congratulations to Dautzenberg Roman and his team of researchers, who won the IROS 2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp. for their paper “A perching and tilting aerial robot for precise and versatile power tool work on vertical walls“. Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

Our paper shows an aerial robot (think “drone”) which can exert large forces in the horizontal direction, i.e. onto walls. This is a difficult task, as UAVs usually rely on thrust vectoring to apply horizontal forces and thus can only apply small forces before losing control authority. By perching onto walls, our system no longer needs the propulsion to remain at a desired site. Instead we use the propellers to achieve large reaction forces in any direction, also onto walls! Additionally, perching allows extreme precision, as the tool can be moved and re-adjusted, as well as being unaffected by external disturbances such as gusts of wind.

Could you tell us about the implications of your research and why it is an interesting area for study?

Precision, force exertion and mobility are the three (of many) criteria where robots – and those that develop them – make trade-offs. Our research shows that the system we designed can exert large forces precisely with only minimal compromises on mobility. This widens the horizon of conceivable tasks for aerial robots, as well as serving as the next link in automating the chain of tasks needed to perform many procedures on construction sites, or in remote, complex or hazardous environments.

Could you explain your methodology?

The main aim of our paper is to characterize the behavior and performance of the system, and to compare the system to other aerial robots. To achieve this, we investigated the perching and tool positioning accuracy, as well as comparing the applicable reaction forces with other systems.

Further, the paper shows the power consumption and rotational velocities of the propellers for the various phases of a typical operation, as well as how certain mechanisms of the aerial robot are configured. This allows for a deeper understanding of the characteristics of the aerial robot.

What were your main findings?

Most notably, we show the perching precision to be within ±10 cm of a desired location over 30 consecutive attempts and tool positioning to have mm-level accuracy even in a “worst-case” scenario. Power consumption while perching on typical concrete is extremely low and the system is capable of performing various tasks (drilling, screwing) also in quasi-realistic, outdoor scenarios.

What further work are you planning in this area?

Going forward, enhancing the capabilities will be a priority. This relates both to the types of surface manipulations that can be performed, and to the surfaces onto which the system can perch.


About the author

Dautzenberg Roman is currently a Master’s student at ETH Zürich and Team Leader at AITHON. AITHON is a research project which is transforming into a start-up for aerial construction robotics. They are a core team of 8 engineers, working under the guidance of the Autonomous Systems Lab at ETH Zürich and located at the Innovation Park Switzerland in Dübendorf.

]]>
Robot Talk Episode 62 – Jorvon Moss https://robohub.org/robot-talk-episode-62-jorvon-moss/ Fri, 17 Nov 2023 18:17:37 +0000 https://robohub.org/?p=208765

Claire chatted to Jorvon (Odd-Jayy) Moss from Digikey about making robots at home, and robot design and aesthetics.

Commonly known as Odd-Jayy, Jorvon Moss is an accomplished Maker best known for his Robotic Oddities. Jayy’s art background, BFA in Illustration, and self-taught electronics skills have combined to help launch his career and promote the wonderful world of STEAM (Science, Technology, Engineering, Art, Math). This achievement, and the many viral videos under his belt, gained him recognition from major forces in the industry; including Digi-Key Electronics, Tested Inc. with Adam Savage, various electronic and tech Faires, and as the first Black person in Make Magazine.

]]>
California is the robotics capital of the world https://robohub.org/california-is-the-robotics-capital-of-the-world/ Sun, 12 Nov 2023 10:37:43 +0000 https://svrobo.org/?p=27064

I came to the Silicon Valley region in 2010 because I knew it was the robotics center of the world, but it certainly doesn’t get anywhere near the media attention that some other robotics regions do. In California, robotics technology is a small fish in a much bigger technology pond, and that tends to conceal how important Californian companies are to the robotics revolution.

This conservative dataset from Pitchbook [Vertical: Robotics and Drones] provides data for 7166 robotics and drones companies, although a more customized search would provide closer to 10,000 robotics companies worldwide. Regions ordered by size are:

  • North America 2802
  • Asia 2337
  • Europe 2285
  • Middle East 321
  • Oceania 155
  • South America 111
  • Africa 63
  • Central America 13

 

USA robotics companies by state

  1. California = 843 (667) * number of companies followed by number of headquarters
  2. Texas = 220 (159)
  3. New York = 193 (121)
  4. Massachusetts = 191 (135)
  5. Florida = 136 (95)
  6. Pennsylvania = 113 (89)
  7. Washington = 85 (61)
  8. Colorado = 83 (57)
  9. Virginia = 81 (61)
  10. Michigan = 70 (56)
  11. Illinois = 66 (43)
  12. Ohio = 65 (56)
  13. Georgia = 64 (46)
  14. New Jersey = 53 (36)
  15. Delaware = 49 (18)
  16. Maryland = 48 (34)
  17. Arizona = 48 (37)
  18. Nevada = 42 (29)
  19. North Carolina = 39 (29)
  20. Minnesota = 31 (25)
  21. Utah = 30 (24)
  22. Indiana = 29 (26)
  23. Oregon = 29 (20)
  24. Connecticut = 27 (22)
  25. DC = 26 (12)
  26. Alabama = 25 (21)
  27. Tennessee = 20 (18)
  28. Iowa = 17 (14)
  29. New Mexico = 17 (15)
  30. Missouri = 17 (16)
  31. Wisconsin = 15 (12)
  32. North Dakota = 14 (8)
  33. South Carolina = 13 (11)
  34. New Hampshire = 13 (12)
  35. Nebraska = 13 (11)
  36. Oklahoma = 10 (8)
  37. Kentucky = 10 (7)
  38. Kansas = 9 (9)
  39. Louisiana = 9 (8)
  40. Rhode Island = 8 (6)
  41. Idaho = 8 (6)
  42. Maine = 5 (5)
  43. Montana = 5 (4)
  44. Wyoming = 5 (3)
  45. Mississippi = 3 (1)
  46. Arkansas = 3 (2)
  47. Alaska = 3 (3)
  48. Hawaii = 2 (1)
  49. West Virginia = 1 (1)
  50. South Dakota = 1 (0)

Note – the number in brackets is for HQ locations, whereas the first number is for all company locations. The end results and rankings are practically the same.

 

ASIA robotics companies by country

  1. China = 1350
  2. Japan = 283
  3. India = 261
  4. South Korea = 246
  5. Israel = 193
  6. Hong Kong = 72
  7. Russia = 69
  8. United Arab Emirates = 50
  9. Turkey = 48
  10. Malaysia = 35
  11. Taiwan = 21
  12. Saudi Arabia = 19
  13. Thailand = 13
  14. Vietnam = 12
  15. Indonesia = 10
  16. Lebanon = 7
  17. Kazakhstan = 3
  18. Iran = 3
  19. Kuwait = 3
  20. Oman = 3
  21. Qatar = 3
  22. Pakistan = 3
  23. Philippines = 2
  24. Bahrain = 2
  25. Georgia = 2
  26. Sri Lanka = 2
  27. Azerbaijan = 1
  28. Nepal = 1
  29. Armenia = 1
  30. Burma/Myanmar = 1

Countries with no robotics: Yemen, Iraq, Syria, Turkmenistan, Afghanistan, Jordan, Uzbekistan, Kyrgyzstan, Tajikistan, Bangladesh, Bhutan, Mongolia, Cambodia, Laos, North Korea, East Timor.

 

UK/EUROPE robotics companies by country

  1. United Kingdom = 443
  2. Germany = 331
  3. France = 320
  4. Spain = 159
  5. Netherlands = 156
  6. Switzerland = 140
  7. Italy = 125
  8. Denmark = 115
  9. Sweden = 85
  10. Norway = 80
  11. Poland = 74
  12. Belgium = 72
  13. Russia = 69
  14. Austria = 51
  15. Turkey = 48
  16. Finland = 45
  17. Portugal = 36
  18. Ireland = 28
  19. Estonia = 24
  20. Ukraine = 22
  21. Czech Republic = 19
  22. Romania = 19
  23. Hungary = 18
  24. Lithuania = 18
  25. Latvia = 15
  26. Greece = 15
  27. Bulgaria = 11
  28. Slovakia = 10
  29. Croatia = 7
  30. Slovenia = 6
  31. Serbia = 6
  32. Belarus = 4
  33. Iceland = 3
  34. Cyprus = 2
  35. Bosnia & Herzegovina = 1

Countries with no robotics: Andorra, Montenegro, Albania, Macedonia, Kosovo, Moldova, Malta, Vatican City.

 

CANADA robotics companies by region

  1. Ontario = 144
  2. British Columbia = 60
  3. Quebec = 53
  4. Alberta = 34
  5. Manitoba = 7
  6. Saskatchewan = 6
  7. Newfoundland & Labrador = 2
  8. Yukon = 1

Regions with no robotics: Nunavut, Northwest Territories.

]]>
Robot Talk Episode 61 – Masoumeh Mansouri https://robohub.org/robot-talk-episode-61-masoumeh-mansouri/ Fri, 10 Nov 2023 12:00:14 +0000 https://robohub.org/?p=208713

Claire chatted to Masoumeh (Iran) Mansouri from the University of Birmingham about culturally sensitive robots and planning in complex environments.

Masoumeh Mansouri is an Associate Professor in the School of Computer Science at the University of Birmingham. Her research includes two complementary areas: (i) developing hybrid robot planning methods for unstructured environments shared with humans, and (ii) exploring topics at the intersection of cultural theories and robotics. In the latter, her main goal is to study whether/how robots can be culturally sensitive given the broad definitions of culture in different fields of study.

]]>
The 5 levels of Sustainable Robotics https://robohub.org/the-5-levels-of-sustainable-robotics/ Wed, 08 Nov 2023 09:38:51 +0000 https://robohub.org/?p=208696

If you look at the UN Sustainable Development Goals, it’s clear that robots have a huge role to play in advancing the SDGs. However, the field of Sustainable Robotics is more than just the application area. For every application that robotics can improve in sustainability, you also have to address the question – what are the additional costs or benefits all the way along the supply chain? What are the ‘externalities’, or additional costs/benefits, of using robots to solve the problem? Does the use of robotics bring a decrease or an increase to:

  • power costs
  • production costs
  • labor costs
  • supply chain costs
  • supply chain mileage
  • raw materials consumption
  • and raw material choice

Solving our economic and environmental global challenges should not involve adding to the existing problems or creating new ones. So it’s important that we look beyond the first order ways in which robotics can solve global sustainable development goals and address every level at which robotics can have an impact.

Here I propose 5 levels of sustainability to frame the discussion, much as the 5 levels of autonomy have helped define the stages of autonomous mobility.

Level 1: Robots for existing recycling

Level 1 of Sustainable Robotics is simply making existing processes in sustainability more efficient, affordable and deployable. Making recycling better. Companies that are great examples are: AMP Robotics, Recycleye, MachineEx, Pellenc ST, Greyparrot, EverestLabs and Fanuc. Here’s an explainer video from Fanuc.

“Because of AI, because of the robotic arms, we have seen plants recover 10, 20, 30% more than what they have been doing previously,” said JD Ambati, CEO of EverestLabs. “They have been losing millions of dollars to the landfill, and because of AI, they were able to identify the value of the losses and deploy robotic arms to capture that.” [1]

Some other examples of Level 1 use robots to better monitor aquaculture, or robots to clean or install solar farms and wind turbines. If the robotics technology improves existing recycling practices then it is at Level 1 of Sustainable Robotics.

Level 2: Robots enabling new recycling

Level 2 of Sustainable Robotics is where robotics allows new materials to be recycled and in new industry application areas. A great example of this is Urban Machines, which salvages timber from construction sites and transforms it back into usable materials, something that was too difficult to do at any scale previously.

Construction using onsite materials and robotics 3D printing is another example, as seen in the NASA Habitat Challenge, sponsored by Caterpillar, Bechtel and Brick & Mortar Ventures.

Some other examples are the ocean or lake going garbage collecting robots like Waste Shark from Ran Marine, River Cleaning, or Searial Cleaners, a Quebec company whose robots were deployed in the Great Lakes Plastic Cleanup, helping to remove 74,000 plastic pieces from four lakes since 2020.

Searial Cleaners is angling for its BeBot and PixieDrone to be used as janitorial tools for beaches, marinas and golf courses, and the BeBot offers ample room for company branding. The equipment emerged from the mission of the Great Lakes Plastic Cleanup (GLPC) to harness new technologies against litter. The program also uses other devices including the Seabin, which sits in water and sucks in trash, and the Enviropod LittaTrap filter for stormwater drains. [2]

If it’s a brand new way to practice recycling with robotic technology, then it’s at Level 2 of Sustainable Robotics.

Level 3: Robots electrifying everything

One of the biggest sustainability shifts enabled by robotics is the transition from fossil fuel powered transport, logistics and agricultural machinery into BEV, or Battery Electric Vehicle technology. On top of radically reducing emissions, the increasing use of smaller autonomous electric vehicles across first, last and middle mile can change the total number of trips taken, as well as reducing the need for large vehicles that are partially loaded taking longer trips.

Monarch Tractor’s MK-V is the world’s first electric tractor, and is ‘driver optional’, meaning it can be driven or operate autonomously, providing greater flexibility for farmers. Of course the increased use of computer vision and AI across all agrobots increases sustainability, by enabling precision or regenerative agriculture with less need for chemical solutions. Technically, these improvements to agricultural practice are Level 2 of Sustainable Robotics. [3]

However, the use of smaller-sized fully autonomous agricultural robots, such as Meropy, Burro.ai, SwarmFarm, Muddy Machines and Small Robot Company, also reduces the size and soil compaction associated with agricultural machinery, and makes it possible to tend smaller strip farms by machine. [4] This is Level 3 of Sustainable Robotics.

Level 4: Robots built from sustainable materials

The higher the sustainability level, the deeper it is into the actual design and construction of the robot system. Switching to electric from fossil fuels is a small step. Switching to locally sourced or produced materials is another. Switching to recyclable materials is another step towards fully sustainable robotics.

OhmniLabs utilize 3D printing in their robot construction, allowing them to export robots to 47 countries, while also manufacturing locally in Silicon Valley.

Meanwhile, Cornell researchers Wendy Ju and Ilan Mandel have introduced the phrase ‘Garbatrage’ to describe the opportunity to prototype or build robots using components recycled from other consumer electronics, like these hoverboards.

“The time is ripe for a practice like garbatrage, both for sustainability reasons and considering the global supply shortages and international trade issues of the last few years,” the researchers said. [5]

This is a great example of Level 4 of Sustainable Robotics.

Level 5: Self-powering/repairing Robots

Self-powering, self-repairing or self-recycling robots are Level 5 of Sustainable Robotics. In research, there are solutions like MilliMobile: a battery-free autonomous robot capable of operating on harvested solar and RF power. MilliMobile, developed at the Paul G. Allen School of Computer Science & Engineering, is the size of a penny and can steer itself, sense its environment, and communicate wirelessly using energy harvested from light and radio waves.

It’s not just research though. In the last two years, a number of solar-powered agricultural robots have entered the market. Solinftec has a solar-powered spray robot, as have EcoRobotix and AIGEN, which is also powered by wind.

Modular robotics will reduce our material wastage and energy needs by making robots multipurpose, rather than requiring multiple specialist robots. Meanwhile, self-powering and self-repairing technologies will allow robots to enter many previously unreachable areas, including off planet, while removing our reliance on the grid. As robots incorporate self-repairing materials, the product lifecycle is increased. This is Level 5 of Sustainable Robotics.

And in the future?

While we’re waiting for the future, here are a couple of resources for turning your entire company into a sustainable robotics company: Sustainable Manufacturing 101 from the ITA (International Trade Administration), and the Sustainable Manufacturing Toolkit from the OECD.

References

  1. https://www.cnbc.com/2023/08/08/everestlabs-using-robotic-arms-and-ai-to-make-recycling-more-efficient.html
  2. https://www.greenbiz.com/article/great-lakes-are-awash-plastic-can-robots-and-drones-help
  3. https://www.economist.com/science-and-technology/2020/02/06/using-artificial-intelligence-agricultural-robots-are-on-the-rise
  4. https://www.wired.co.uk/article/farming-robots-small-robot-company-tractors
  5. https://news.cornell.edu/stories/2023/09/garbatrage-spins-e-waste-prototyping-gold
]]>
Using language to give robots a better grasp of an open-ended world https://robohub.org/using-language-to-give-robots-a-better-grasp-of-an-open-ended-world/ Mon, 06 Nov 2023 15:25:00 +0000 https://news.mit.edu/2023/using-language-give-robots-better-grasp-open-ended-world-1102

Feature Fields for Robotic Manipulation (F3RM) enables robots to interpret open-ended text prompts using natural language, helping the machines manipulate unfamiliar objects. The system’s 3D feature fields could be helpful in environments that contain thousands of objects, such as warehouses. Images courtesy of the researchers.

By Alex Shipps | MIT CSAIL

Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed.

Inspired by humans’ ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.

F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.

“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we’ve never seen them before.”

Learning “what’s where by looking”

The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they’re required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.

For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.

“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”

The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”

Creating a “digital twin”

F3RM begins to understand its surroundings by taking pictures on a selfie stick. The mounted camera snaps 50 images at different poses, enabling it to build a neural radiance field (NeRF), a deep learning method that takes 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of its surroundings in the form of a 360-degree representation of what’s nearby.

In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses CLIP, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.
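
To make the feature-lifting idea concrete, here is a minimal sketch of the core intuition; the data structures are assumptions (a camera described by a 3x4 projection matrix stored under the key "P", and per-image CLIP-style feature maps as H×W×C arrays), and F3RM itself distils the 2D features into a learned, continuous 3D feature field alongside the NeRF, so the simple averaging below is illustrative rather than the authors’ implementation.

    import numpy as np

    def project(point_3d, camera):
        # Project a 3D point into pixel coordinates with a pinhole camera model.
        # `camera["P"]` is an assumed 3x4 projection matrix.
        p = camera["P"] @ np.append(point_3d, 1.0)
        return p[:2] / p[2]

    def lift_features(point_3d, cameras, feature_maps):
        # Average the 2D feature vectors (e.g. CLIP features) that this 3D point
        # projects onto, across all views that actually see it.
        feats = []
        for cam, fmap in zip(cameras, feature_maps):
            u, v = project(point_3d, cam)
            h, w, _ = fmap.shape
            if 0 <= int(v) < h and 0 <= int(u) < w:
                feats.append(fmap[int(v), int(u)])
        return np.mean(feats, axis=0) if feats else None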

Keeping things open-ended

After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, similarity to the demonstrations the robot has been trained on, and if it causes any collisions. The highest-scored grasp is then chosen and executed.
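
As a rough sketch of that selection step (the function names, weights and data structures below are illustrative assumptions, not F3RM’s actual code), each candidate grasp carries a feature vector queried from the 3D feature field; colliding candidates are discarded, and the rest are scored against the text-query embedding and the demonstration embeddings before the highest-scoring grasp is returned.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def select_grasp(candidates, text_emb, demo_embs, in_collision, w_text=1.0, w_demo=1.0):
        # candidates: list of (grasp_pose, feature_vector) pairs sampled in the scene;
        # in_collision: function mapping a grasp pose to True/False.
        best_pose, best_score = None, -np.inf
        for pose, feat in candidates:
            if in_collision(pose):  # drop grasps that would hit the scene
                continue
            score = (w_text * cosine(feat, text_emb)
                     + w_demo * max(cosine(feat, d) for d in demo_embs))
            if score > best_score:
                best_pose, best_score = pose, score
        return best_pose

In a real system the candidate poses would be sampled densely and refined, but the weighted combination of language relevance, demonstration similarity and collision checking shown here follows the scoring logic described in the article.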

To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT PhD student, CSAIL affiliate, and co-lead author William Shen. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

Shen and Yang wrote the paper under the supervision of Isola, with MIT professor and CSAIL principal investigator Leslie Pack Kaelbling and undergraduate students Alan Yu and Jansen Wong as co-authors. The team was supported, in part, by Amazon.com Services, the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research’s Multidisciplinary University Initiative, the Army Research Office, the MIT-IBM Watson Lab, and the MIT Quest for Intelligence. Their work will be presented at the 2023 Conference on Robot Learning.

]]>
Robot Talk Episode 60 – Carl Strathearn https://robohub.org/robot-talk-episode-60-carl-strathearn/ Fri, 03 Nov 2023 12:05:37 +0000 https://robohub.org/?p=208645

Claire chatted to Carl Strathearn from Edinburgh Napier University about humanoid robots, realistic robot faces and speech.

Carl Strathearn is a researcher interested in creating assistive social humanoid robots with embodied AI systems that appear, function, and interact like humans. He believes that creating realistic humanoid robots is significant to humanity as the human face is the most natural interface for human communication, and by emulating these conditions, we can increase accessibility to state-of-the-art technology for everyone and support people with specific health conditions and circumstances in their day-to-day lives.

]]>
AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design https://robohub.org/aaai-fall-symposium-patricia-alves-oliveira-on-human-robot-interaction-design/ Tue, 31 Oct 2023 06:25:33 +0000 https://robohub.org/?p=208623 An illustration containing electronic devices that are connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia’s dream was to become a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc, before heading to industry as a designer in the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object) that she designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of trying to help children explore their creativity during play. A machine learning algorithm learns the pattern of play that the child has and adapts the robot behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer where she always has in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled to that, the process is governed by core design tenets, such as the customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

]]>
Robots for deep-sea recovery missions in sci-fi and reality https://robohub.org/robots-for-deep-sea-recovery-missions-in-sci-fi-and-reality/ Mon, 30 Oct 2023 11:35:17 +0000 https://robohub.org/?p=208595

A science fiction/science fact review of Three Miles Down by Harry Turtledove, the fictionalized version of the Hughes Glomar Explorer expedition 50 years before the OceanGate Titan tragedy.

My new science fiction/science fact article for Science Robotics is out on why deep ocean robotics is hard, especially when trying to bring up a sunken submarine from 3 miles underwater, which the CIA actually did in 1974. It's even harder if you're trying to bring up an alien spaceship, which is the plot of Harry Turtledove's new sci-fi novel Three Miles Down. It's a delightful Forrest Gump version of that 1974 Hughes Glomar Explorer expedition. Though the expedition took place 50 years before the OceanGate Titan tragedy, the same challenges exist for today's robots. The robotics science in the book is very real; the aliens, not so much.

In 1974, the CIA deployed a 3-mile-long, 6-million-pound robot manipulator to recover a Soviet submarine. The cover story was that Howard Hughes was mining the sea floor for manganese nodules, a fiction that accidentally prompted everyone else to start investing in deep sea mining.

The Glomar Explorer was also a breakthrough in computer control, as the ship had to stay on station and move the arm to the sub in the presence of wind, waves, and currents. All of this was done with an array of Honeywell computers, each with a 16-bit microprocessor, a 5 MHz clock, and 32K words of core memory. Consider that a late-model iPhone uses a 64-bit microprocessor, a 3 GHz clock, 6 GB of RAM and a GPU.

Turtledove takes one major liberty with the otherwise hard science retrospective: the CIA recovering the Soviet sub was in turn a cover story masking the real mission to salvage the alien space ship that apparently collided with the sub!

The dry humor and attention to scientific detail make for an entertaining sci-fi compare-and-contrast between deep sea robotics and computers in the 1970s and the present day. It's a fun read, and not just for roboticists and computer scientists.


]]>
Do humans get lazier when robots help with tasks? https://robohub.org/do-humans-get-lazier-when-robots-help-with-tasks/ Sun, 29 Oct 2023 09:28:09 +0000 https://robohub.org/?p=208589

Image: Shutterstock.com

By Angharad Brewer Gillham, Frontiers science writer

‘Social loafing’ is a phenomenon which happens when members of a team start to put less effort in because they know others will cover for them. Scientists investigating whether this happens in teams which combine work by robots and humans found that humans carrying out quality assurance tasks spotted fewer errors when they had been told that robots had already checked a piece, suggesting they relied on the robots and paid less attention to the work.

Now that improvements in technology mean that some robots work alongside humans, there is evidence that those humans have learned to see them as team-mates — and teamwork can have negative as well as positive effects on people’s performance. People sometimes relax, letting their colleagues do the work instead. This is called ‘social loafing’, and it’s common where people know their contribution won’t be noticed or they’ve acclimatized to another team member’s high performance. Scientists at the Technical University of Berlin investigated whether humans engage in social loafing when they work with robots.

“Teamwork is a mixed blessing,” said Dietlind Helene Cymek, first author of the study in Frontiers in Robotics and AI. “Working together can motivate people to perform well but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot.”

A helping hand

The scientists tested their hypothesis using a simulated industrial defect-inspection task: looking at circuit boards for errors. The scientists provided images of circuit boards to 42 participants. The circuit boards were blurred, and the sharpened images could only be viewed by holding a mouse tool over them. This allowed the scientists to track participants’ inspection of the board.

Half of the participants were told that they were working on circuit boards that had been inspected by a robot called Panda. Although these participants did not work directly with Panda, they had seen the robot and could hear it while they worked. After examining the boards for errors and marking them, all participants were asked to rate their own effort, how responsible for the task they felt, and how they performed.

Looking but not seeing

At first sight, it looked as if the presence of Panda had made no difference — there was no statistically significant difference between the groups in terms of time spent inspecting the circuit boards and the area searched. Participants in both groups rated their feelings of responsibility for the task, effort expended, and performance similarly.

But when the scientists looked more closely at participants’ error rates, they realized that the participants working with Panda were catching fewer defects later in the task, when they’d already seen that Panda had successfully flagged many errors. This could reflect a ‘looking but not seeing’ effect, where people get used to relying on something and engage with it less mentally. Although the participants thought they were paying an equivalent amount of attention, subconsciously they assumed that Panda hadn’t missed any defects.

“It is easy to track where a person is looking, but much harder to tell whether that visual information is being sufficiently processed at a mental level,” said Dr Linda Onnasch, senior author of the study.

The experimental set-up with the human-robot team. Image supplied by the authors.

Safety at risk?

The authors warned that this could have safety implications. “In our experiment, the subjects worked on the task for about 90 minutes, and we already found that fewer quality errors were detected when they worked in a team,” said Onnasch. “In longer shifts, when tasks are routine and the working environment offers little performance monitoring and feedback, the loss of motivation tends to be much greater. In manufacturing in general, but especially in safety-related areas where double checking is common, this can have a negative impact on work outcomes.”

The scientists pointed out that their test has some limitations. While participants were told they were in a team with the robot and shown its work, they did not work directly with Panda. Additionally, social loafing is hard to simulate in the laboratory because participants know they are being watched.

“The main limitation is the laboratory setting,” Cymek explained. “To find out how big the problem of loss of motivation is in human-robot interaction, we need to go into the field and test our assumptions in real work environments, with skilled workers who routinely do their work in teams with robots.”

]]>
Robot Talk Episode 59 – Ffion Llewellyn https://robohub.org/robot-talk-episode-59-ffion-llewellyn/ Fri, 27 Oct 2023 11:05:56 +0000 https://robohub.org/?p=208603

Claire chatted to Ffion Llewellyn from Oshen about sea-faring robots and ocean sensing.

Ffion Llewellyn graduated from Imperial College London in 2022 with a masters in Aeronautical Engineering. Following this, she joined Oshen who are building low cost, autonomous micro-vessels for remote ocean sensing. Ffion has been focused on the integration and testing of sensors onto Oshen’s autonomous micro-vessels, including metocean sensors and hydrophones for the monitoring of marine mammals. Her role also includes the design and manufacture of the micro-vessels, conducting sea trials and analysing the data collected.

]]>
Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman https://robohub.org/code-to-joy-why-everyone-should-learn-a-little-programming-interview-with-michael-littman/ Thu, 26 Oct 2023 05:26:55 +0000 https://robohub.org/?p=208586

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

Could you start by telling us a bit about the book, and who the intended audience is?

The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have. I think it can be very intimidating for a lot of people, but I don’t think it needs to be. I think that the foundation is there for everybody and it’s just a matter of tapping into that and building on top of it. What I’m hoping, and what I’m seeing happening, is that machine learning and AI is helping to meet people part way. The machines are getting better at listening as we try to get better at telling them what to do.

What made you decide to write the book, what was the inspiration behind it?

I’ve taught large introductory computer science classes and I feel like there’s an important message in there about how a deeper knowledge of computing can be very empowering, and I wanted to bring that to a larger audience.

Could you talk a bit about the structure of the book?

The meat of the book talks about the fundamental components that make up programs, or, in other words, that make up the way that we tell computers what to do. Each chapter covers a different one of those topics – loops, variables, conditionals, for example. Within each chapter I talk about the ways in which this concept is already familiar to people, the ways that it shows up in regular life. I point to existing pieces of software or websites where you can make use of that one particular concept to tell computers what to do. Each chapter ends with an introduction to some concepts from machine learning that can help create that particular programming construct. For example, in the chapter on conditionals, I talk about the ways that we use the word “if” in regular life all the time. Weddings, for example, are very conditionally structured, with statements like “if anyone has anything to say, speak now or forever hold your peace”. That’s kind of an “if-then” statement. In terms of tools to play with, I talk about interactive fiction. Partway between video games and novels is this notion that you can make a story that adapts itself while it’s being read. What makes that interesting is this notion of conditionals – the reader can make a choice and that will cause a branch. There are really wonderful tools for being able to play with this idea online, so you don’t have to be a full-fledged programmer to make use of conditionals. The machine learning concept introduced there is decision trees, which is an older form of machine learning where you give a system a bunch of examples and then it outputs a little flowchart for decision making.
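For readers who want to see that "examples in, flowchart out" idea in code, here is a minimal sketch in Python using scikit-learn; the tiny Saturday/full-moon dataset and feature names are invented purely for illustration and are not from the book.

# Minimal sketch of the "examples in, flowchart out" idea behind decision trees,
# using scikit-learn. The toy dataset and feature names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: [is_saturday, is_full_moon] -> whether a passage should be read (1) or skipped (0)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [1, 1, 0, 0]  # in this toy rule, the passage is skipped on Saturdays

tree = DecisionTreeClassifier().fit(X, y)

# export_text prints the learned "flowchart" as nested if/else rules
print(export_text(tree, feature_names=["is_saturday", "is_full_moon"]))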

Do you touch on generative AI in the book?

The book was already in production by the time ChatGPT came out, but I was ahead of the curve, and I did have a section specifically about GPT-3 (pre-ChatGPT) which talks about what it is, how machine learning creates it, and how it itself can be helpful in making programs. So, you see it from both directions. You get the notion that this tool actually helps people tell machines what to do, and also the way that humanity created this tool in the first place using machine learning.

Did you learn anything while you were writing the book that was particularly interesting or surprising?

Researching the examples for each chapter caused me to dig into a whole bunch of topics. This notion of interactive fiction, and that there’s tools for creating interactive fiction, I found pretty interesting. When researching another chapter, I found an example from a Jewish prayer book that was just so shocking to me. So, Jewish prayer books (and I don’t know if this is true in other belief systems as well, but I’m mostly familiar with Judaism), contain things you’re supposed to read, but they have little conditional markings on them sometimes. For example, one might say “don’t read this if it’s a Saturday”, or “don’t read this if it’s a full moon”, or “don’t read if it’s a full moon on a Saturday”. I found one passage that actually had 14 different conditions that you had to check to decide whether or not it was appropriate to read this particular passage. That was surprising to me – I had no idea that people were expected to do so much complex computation during a worship activity.

Why is it important that everybody learns a little programming?

It’s really important to keep in mind the idea that at the end of the day what AI is doing is making it easier for us to tell machines what to do, and we should share that increased capability with a broad population. It shouldn’t just be the machine learning engineers who get to tell computers what to do more easily. We should find ways of making this easier for everybody.

Because computers are here to help, but it’s a two-way street. We need to be willing to learn to express what we want in a way that can be carried out accurately and automatically. If we don’t make that effort, then other parties, companies often, will step in and do it for us. At that point, the machines are working to serve some else’s interest instead of our own. I think it’s become absolutely essential that we restore a healthy relationship with these machines before we lose any more of our autonomy.

Any final thoughts or takeaways that we should bear in mind?

I think there’s a message here for computer science researchers, as well. When we tell other people what to do, we tend to combine a description or a rule, something that’s sort of program-like, with examples, something that’s more data-like. We just intermingle them when we talk to each other. At one point when I was writing the book, I had a dishwasher that was acting up and I wanted to understand why. I read through its manual, and I was struck by how often it was the case that in telling people what to do with the dishwasher, the authors would consistently mix together a high-level description of what they are telling you to do with some particular, vivid examples: a rule for what to load into the top rack, and a list of items that fit that rule. That seems to be the way that people want to both convey and receive information. What’s crazy to me is that we don’t program computers that way. We either use something that’s strictly programming, all rules, no examples, or we use machine learning, where it’s all examples, no rules. I think the reason that people communicate this way with each other is because those two different mechanisms have complementary strengths and weaknesses and when you combine the two together, you maximize the chance of being accurately understood. And that’s the goal when we’re telling machines what to do. I want the AI community to be thinking about how we can combine what we’ve learned about machine learning with something more programming-like to make a much more powerful way of telling machines what to do. I don’t think this is a solved problem yet, and that’s something that I really hope that people in the community think about.


Code to Joy: Why Everyone Should Learn a Little Programming is available to buy now.

Michael Littman

Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation.

]]>
Robo-Insight #6 https://robohub.org/robo-insight-6/ Tue, 24 Oct 2023 13:33:51 +0000 https://robohub.org/?p=208552

Source: OpenAI’s DALL·E 2 with prompt “a hyperrealistic picture of a robot reading the news on a laptop at a coffee shop”

Welcome to the 6th edition of Robo-Insight, a robotics news update! In this post, we are excited to share a range of new advancements in the field and highlight robots’ progress in areas like medical assistance, prosthetics, robot flexibility, joint movement, work performance, AI design, and household cleanliness.

Robots that can aid nurses

In the medical world, researchers from Germany have developed a robotic system designed to help nurses relieve the physical strain associated with patient care. Nurses often face high physical demands when attending to bedridden patients, especially during tasks like repositioning them. Their work explores how robotic technology can assist in such tasks by remotely anchoring patients in a lateral position. The results indicate that the system improved the working posture of nurses by an average of 11.93% and was rated as user-friendly. The research highlights the potential for robotics to support caregivers in healthcare settings, improving both nurse working conditions and patient care.

Arrangement of patient room used in the study. Source.

Robots enhancing bionic hand control

Staying with healthcare, researchers from numerous European institutions have recently achieved a significant breakthrough in robotic prosthetic technology: they successfully implanted a neuromusculoskeletal prosthesis, a bionic hand connected directly to the user’s nervous and skeletal systems, in a person with a below-elbow amputation. This achievement involved surgical procedures to place titanium implants in the radius and ulna bones and transfer severed nerves to free muscle grafts. These neural interfaces provided a direct connection between the prosthesis and the user’s body, allowing for improved prosthetic function and increased quality of life. Their work demonstrates the potential for highly integrated prosthetic devices to enhance the lives of amputees through reliable neural control and comfortable, everyday use.

Schematic and X-ray of a fully integrated human-machine interface in a patient. Source.

Reinforcement learning in soft robotics

Turning our focus to soft robotics, researchers from the Center for Research and Advanced Studies of the National Polytechnic Institute of Mexico and the Universidad Autónoma de Coahuila have proposed an approach that uses reinforcement learning (RL) for motor control of a pneumatic-driven soft robot modeled as continuum media with varying density. This method involves a continuous-time Actor-Critic scheme designed for tracking tasks in a 3D soft robot subject to Lipschitz disturbances. Their study introduces a reward-based temporal difference mechanism and a discontinuous adaptive approach for neural weights in the Critic component of the system. The overall aim is to enable RL to control the complex, uncertain, and deformable nature of soft robots while ensuring stability in real-time control, a crucial requirement for physical systems. This research focuses on the application of RL in managing the unique challenges posed by soft robots.
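For readers unfamiliar with the general actor-critic idea, the sketch below shows a generic, discrete-time temporal-difference update in PyTorch. It is only meant to convey the basic mechanism (a Critic learns a value function and its TD error also drives the Actor) and does not reproduce the paper's continuous-time scheme, reward design or soft-robot model; all sizes are placeholders.

# Generic discrete-time actor-critic TD update (illustrative only).
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 3  # placeholder sizes for a soft-robot state and an actuation command

actor_mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor_mean.parameters()) + list(critic.parameters()), lr=1e-3)

def actor_critic_step(state, action, reward, next_state, gamma=0.99):
    """One TD step: the critic's TD error trains the value estimate and weights the policy update."""
    td_error = reward + gamma * critic(next_state).detach() - critic(state)
    critic_loss = td_error.pow(2).mean()
    dist = torch.distributions.Normal(actor_mean(state), 1.0)
    actor_loss = -(td_error.detach() * dist.log_prob(action).sum(-1, keepdim=True)).mean()
    opt.zero_grad()
    (critic_loss + actor_loss).backward()
    opt.step()

# Example call with random placeholder tensors (a single transition)
s, a, r, s2 = torch.randn(1, obs_dim), torch.randn(1, act_dim), torch.tensor([[0.1]]), torch.randn(1, obs_dim)
actor_critic_step(s, a, r, s2)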

Distinct distortions of a cylindrical-shaped flexible robot. Source.

A teen-sized humanoid robot

Moving on to human-robot interaction, researchers from the University of Texas at Austin’s Human-Centered Robotics Laboratory have introduced a teen-sized humanoid robot named DRACO 3, designed in collaboration with Apptronik. This robot, tailored for practical use in human environments, features proximal actuation and employs rolling contact mechanisms on its lower body, allowing for extensive vertical poses. A whole-body controller (WBC) has been developed to manage DRACO 3’s complex transmissions. This research offers insights into the development and control of humanoids with rolling contact joints, focusing on practicality and performance.

Diagram illustrating the rolling contact joint at the knee. Initial configuration (left) and post-angular displacement (right). Source.

Robots’ impacts on performance

Shifting our focus to psychology, researchers from Technische Universität Berlin have recently investigated the phenomenon of social loafing in human-robot teams. Social loafing refers to reduced individual effort in a team setting compared to working alone. The study involved participants inspecting circuit boards for defects, with one group working alone and the other with a robot partner. Despite working with a reliable robot that marked defects on the boards, participants in the robot condition identified fewer defects than those working alone, suggesting a potential occurrence of social loafing in human-robot teams. This research sheds light on the challenges associated with human-robot collaboration and its impact on individual effort and performance.

Results of solo work vs. robot work. Source.

A robot designed by AI

Changing our focus to robot design, researchers from Northwestern University have developed an AI system that designs robots from scratch, enabling it to create a walking robot in seconds, a task that took nature billions of years to evolve. This AI system runs on a lightweight personal computer, without relying on energy-hungry supercomputers or large datasets, offering the potential to design robots with unique forms rapidly. The system works by iterating on a design, assessing its flaws, and refining the structure in a matter of seconds. It paves the way for a new era of AI-designed tools capable of acting directly on the world for various applications.

3D printer designing robot. Source.

A customizable robot for household organization

Finally, in the field of home robotics, researchers from Stanford, Princeton, Columbia University, and Google have developed TidyBot, a one-armed robot designed to clean spaces according to personal preferences. TidyBot uses a large language model trained on internet data to identify various objects and understand where to put them, making it highly customizable to different preferences. In real-world tests, the robot correctly put away approximately 85% of objects, significantly improving household organization. While TidyBot still has room for improvement, researchers believe it holds great promise for making robots more versatile and useful in homes and other environments.
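To give a flavour of how a language model can generalise a user's tidying preferences to unseen objects, here is a rough sketch. The query_llm function is a stand-in for whatever language-model API is actually used, and the example preferences are invented for illustration rather than taken from the TidyBot paper.

# Rough sketch: using an LLM to generalise a user's tidying preferences to new objects.
# `query_llm` is a placeholder for a real language-model API call.

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real language-model API here")

user_examples = {
    "dirty socks": "laundry basket",
    "empty soda can": "recycling bin",
    "novel": "bookshelf",
}

def where_to_put(new_object: str) -> str:
    # Build a few-shot prompt from the user's known preferences and ask the model to continue it
    lines = [f"{obj} -> {place}" for obj, place in user_examples.items()]
    prompt = (
        "Given these examples of where a user likes objects to go:\n"
        + "\n".join(lines)
        + f"\n{new_object} ->"
    )
    return query_llm(prompt).strip()

# e.g. where_to_put("magazine") might plausibly return "bookshelf"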

Tidybot in training. Source.

Development across this multitude of sectors highlights how flexible and rapidly advancing robotics technology has become, opening up fresh possibilities for its incorporation into a wide range of industries. The continued expansion of the field reflects sustained commitment and offers a glimpse of what these advances could mean in the years ahead.

Sources:

  1. Hinrichs, P., Seibert, K., Arizpe Gómez, P., Pfingsthorn, M., & Hein, A. (2023). A Robotic System to Anchor a Patient in a Lateral Position and Reduce Nurses’ Physical Strain. Robotics, 12(5).
  2. Ortiz-Catalán, M., Zbinden, J., Millenaar, J., D’Accolti, D., Controzzi, M., Clemente, F., Cappello, L., Earley, E. J., Mastinu, E., Kolankowska, J., Munoz-Novoa, M., Jönsson, S., Njel, C., Sassu, P., & Brånemark, R. (2023). A highly integrated bionic hand with neural control and feedback for use in daily life. Science Robotics.
  3. Pantoja-Garcia, L., Parra-Vega, V., Garcia-Rodriguez, R., & Vázquez-García, C. E. (2023). A Novel Actor–Critic Motor Reinforcement Learning for Continuum Soft Robots. Robotics, 12(5).
  4. Bang, S. H., Gonzalez, C., Ahn, J., Paine, N., & Sentis, L. (2023, September 26). Control and evaluation of a humanoid robot with rolling contact joints on its lower body. Frontiers.
  5. Cymek, D. H., Truckenbrodt, A., & Onnasch, L. (2023, August 31). Lean back or lean in? Exploring social loafing in human–robot teams. Frontiers.
  6. Instant evolution: AI designs new robot from scratch in seconds. (n.d.). News.northwestern.edu.
  7. Stanford University. (2023, October 3). Robot provides personalized room cleanup. Stanford News.
]]>
Goal representations for instruction following https://robohub.org/goal-representations-for-instruction-following/ Sun, 22 Oct 2023 08:35:00 +0000 http://bair.berkeley.edu/blog/2023/10/17/grif/

By Andre He, Vivek Myers

A longstanding goal of the field of robot learning has been to create generalist agents that can perform tasks for humans. Natural language has the potential to be an easy-to-use interface for humans to specify arbitrary tasks, but it is difficult to train robots to follow language instructions. Approaches like language-conditioned behavioral cloning (LCBC) train policies to directly imitate expert actions conditioned on language, but require humans to annotate all training trajectories and generalize poorly across scenes and behaviors. Meanwhile, recent goal-conditioned approaches perform much better at general manipulation tasks, but do not enable easy task specification for human operators. How can we reconcile the ease of specifying tasks through LCBC-like approaches with the performance improvements of goal-conditioned learning?

Conceptually, an instruction-following robot requires two capabilities. It needs to ground the language instruction in the physical environment, and then be able to carry out a sequence of actions to complete the intended task. These capabilities do not need to be learned end-to-end from human-annotated trajectories alone, but can instead be learned separately from the appropriate data sources. Vision-language data from non-robot sources can help learn language grounding with generalization to diverse instructions and visual scenes. Meanwhile, unlabeled robot trajectories can be used to train a robot to reach specific goal states, even when they are not associated with language instructions.

Conditioning on visual goals (i.e. goal images) provides complementary benefits for policy learning. As a form of task specification, goals are desirable for scaling because they can be freely generated through hindsight relabeling (any state reached along a trajectory can be a goal). This allows policies to be trained via goal-conditioned behavioral cloning (GCBC) on large amounts of unannotated and unstructured trajectory data, including data collected autonomously by the robot itself. Goals are also easier to ground since, as images, they can be directly compared pixel-by-pixel with other states.
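As a rough illustration of why goal images are so cheap to generate, the sketch below shows hindsight relabeling: any later state in a stored trajectory can be treated as the goal for an earlier step, yielding (state, goal, action) training tuples without any human labels. This is a generic sketch rather than the authors' code.

# Generic hindsight relabeling of an unlabeled trajectory into goal-conditioned examples.
import random

def hindsight_relabel(trajectory, num_samples=4):
    """`trajectory` is a list of (observation, action) pairs; any observation reached
    later in the same trajectory can serve as the goal for an earlier step."""
    examples = []
    for t, (obs, action) in enumerate(trajectory[:-1]):
        for _ in range(num_samples):
            goal_t = random.randint(t + 1, len(trajectory) - 1)
            goal_obs = trajectory[goal_t][0]
            examples.append((obs, goal_obs, action))  # (state, goal, action) training tuple
    return examples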

However, goals are less intuitive for human users than natural language. In most cases, it is easier for a user to describe the task they want performed than it is to provide a goal image, which would likely require performing the task anyways to generate the image. By exposing a language interface for goal-conditioned policies, we can combine the strengths of both goal- and language- task specification to enable generalist robots that can be easily commanded. Our method, discussed below, exposes such an interface to generalize to diverse instructions and scenes using vision-language data, and improve its physical skills by digesting large unstructured robot datasets.

Goal representations for instruction following

The GRIF model consists of a language encoder, a goal encoder, and a policy network. The encoders respectively map language instructions and goal images into a shared task representation space, which conditions the policy network when predicting actions. The model can effectively be conditioned on either language instructions or goal images to predict actions, but we are primarily using goal-conditioned training as a way to improve the language-conditioned use case.

Our approach, Goal Representations for Instruction Following (GRIF), jointly trains a language- and a goal- conditioned policy with aligned task representations. Our key insight is that these representations, aligned across language and goal modalities, enable us to effectively combine the benefits of goal-conditioned learning with a language-conditioned policy. The learned policies are then able to generalize across language and scenes after training on mostly unlabeled demonstration data.

We trained GRIF on a version of the Bridge-v2 dataset containing 7k labeled demonstration trajectories and 47k unlabeled ones within a kitchen manipulation setting. Since all the trajectories in this dataset had to be manually annotated by humans, being able to directly use the 47k trajectories without annotation significantly improves efficiency.

To learn from both types of data, GRIF is trained jointly with language-conditioned behavioral cloning (LCBC) and goal-conditioned behavioral cloning (GCBC). The labeled dataset contains both language and goal task specifications, so we use it to supervise both the language- and goal-conditioned predictions (i.e. LCBC and GCBC). The unlabeled dataset contains only goals and is used for GCBC. The difference between LCBC and GCBC is just a matter of selecting the task representation from the corresponding encoder, which is passed into a shared policy network to predict actions.
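In code, the shared-policy idea might look roughly like the sketch below: two encoders map either an instruction embedding or a (state, goal) feature into the same task-representation space, and a single policy head consumes that representation together with the current observation. The layer sizes and encoder choices are placeholders, not the actual GRIF architecture.

# Minimal sketch of a policy conditioned on either a language- or a goal-derived task representation.
import torch
import torch.nn as nn

class DualConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=256, task_dim=128, act_dim=7):
        super().__init__()
        # Placeholder encoders: one for pre-computed instruction embeddings, one for (state, goal) features
        self.lang_encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, task_dim))
        self.goal_encoder = nn.Sequential(nn.Linear(2 * obs_dim, 256), nn.ReLU(), nn.Linear(256, task_dim))
        # Single shared policy head used by both LCBC and GCBC training
        self.policy = nn.Sequential(nn.Linear(obs_dim + task_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, obs_feat, lang_emb=None, state_goal_feat=None):
        # Select the task representation from whichever modality is available (LCBC vs GCBC)
        task = self.lang_encoder(lang_emb) if lang_emb is not None else self.goal_encoder(state_goal_feat)
        return self.policy(torch.cat([obs_feat, task], dim=-1))

# Usage: policy(obs, lang_emb=e) for language-conditioned prediction,
#        policy(obs, state_goal_feat=g) for goal-conditioned prediction.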

By sharing the policy network, we can expect some improvement from using the unlabeled dataset for goal-conditioned training. However, GRIF enables much stronger transfer between the two modalities by recognizing that some language instructions and goal images specify the same behavior. In particular, we exploit this structure by requiring that language- and goal- representations be similar for the same semantic task. Assuming this structure holds, unlabeled data can also benefit the language-conditioned policy since the goal representation approximates that of the missing instruction.

Alignment through contrastive learning

We explicitly align representations between goal-conditioned and language-conditioned tasks on the labeled dataset through contrastive learning.

Since language often describes relative change, we choose to align representations of state-goal pairs with the language instruction (as opposed to just goal with language). Empirically, this also makes the representations easier to learn since they can omit most information in the images and focus on the change from state to goal.

We learn this alignment structure through an infoNCE objective on instructions and images from the labeled dataset. We train dual image and text encoders by doing contrastive learning on matching pairs of language and goal representations. The objective encourages high similarity between representations of the same task and low similarity for others, where the negative examples are sampled from other trajectories.

When using naive negative sampling (uniform from the rest of the dataset), the learned representations often ignored the actual task and simply aligned instructions and goals that referred to the same scenes. To use the policy in the real world, it is not very useful to associate language with a scene; rather we need it to disambiguate between different tasks in the same scene. Thus, we use a hard negative sampling strategy, where up to half the negatives are sampled from different trajectories in the same scene.
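A simplified version of that objective is sketched below: matching (state-goal, instruction) pairs in a batch are pulled together while every other pairing acts as a negative, and batching several trajectories from the same scene together is one easy way to make those negatives hard. This is a generic InfoNCE sketch, not the exact GRIF loss.

# Symmetric InfoNCE over a batch of aligned (state-goal, instruction) representations.
import torch
import torch.nn.functional as F

def infonce_loss(task_reps_from_goals, task_reps_from_lang, temperature=0.1):
    """Row i of each tensor comes from the same labeled example; every other row in the
    batch serves as a negative. Sampling the batch from a single scene makes negatives hard."""
    z_goal = F.normalize(task_reps_from_goals, dim=-1)
    z_lang = F.normalize(task_reps_from_lang, dim=-1)
    logits = z_goal @ z_lang.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(z_goal.size(0), device=z_goal.device)
    # High similarity is encouraged on the diagonal (matching pairs), low elsewhere
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))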

Naturally, this contrastive learning setup points towards pre-trained vision-language models like CLIP. They demonstrate effective zero-shot and few-shot generalization capability for vision-language tasks, and offer a way to incorporate knowledge from internet-scale pre-training. However, most vision-language models are designed for aligning a single static image with its caption without the ability to understand changes in the environment, and they perform poorly when having to pay attention to a single object in cluttered scenes.

To address these issues, we devise a mechanism to accommodate and fine-tune CLIP for aligning task representations. We modify the CLIP architecture so that it can operate on a pair of images combined with early fusion (stacked channel-wise). This turns out to be a capable initialization for encoding pairs of state and goal images, and one which is particularly good at preserving the pre-training benefits from CLIP.
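One common way to implement that kind of early fusion is to widen the first convolution of a pre-trained image encoder from 3 to 6 input channels, copying and halving the pre-trained weights so the layer initially behaves as before when the two stacked images are identical. The sketch below applies this to a generic patch-embedding convolution; it illustrates the trick rather than the exact modification used in GRIF.

# Widen a 3-channel patch-embedding conv to 6 channels for stacked (state, goal) images.
import torch
import torch.nn as nn

def widen_patch_embedding(conv: nn.Conv2d) -> nn.Conv2d:
    """Return a 6-input-channel copy of `conv`, initialised from the pre-trained weights."""
    new_conv = nn.Conv2d(
        in_channels=6,
        out_channels=conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        bias=conv.bias is not None,
    )
    with torch.no_grad():
        # Duplicate the pre-trained weights across the new channels and halve them, so that
        # identical stacked images initially produce the same output as the original layer
        new_conv.weight.copy_(torch.cat([conv.weight, conv.weight], dim=1) * 0.5)
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv

# Usage: replace the encoder's first conv, then feed torch.cat([state_img, goal_img], dim=1).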

Robot policy results

For our main result, we evaluate the GRIF policy in the real world on 15 tasks across 3 scenes. The instructions are chosen to be a mix of ones that are well-represented in the training data and novel ones that require some degree of compositional generalization. One of the scenes also features an unseen combination of objects.

We compare GRIF against plain LCBC and stronger baselines inspired by prior work like LangLfP and BC-Z. LLfP corresponds to jointly training with LCBC and GCBC. BC-Z is an adaptation of the namesake method to our setting, where we train on LCBC, GCBC, and a simple alignment term. It optimizes the cosine distance loss between the task representations and does not use image-language pre-training.

The policies were susceptible to two main failure modes. They can fail to understand the language instruction, which results in them attempting another task or performing no useful actions at all. When language grounding is not robust, policies might even start an unintended task after having done the right task, since the original instruction is out of context.

Examples of grounding failures

grounding failure 1

“put the mushroom in the metal pot”

grounding failure 2

“put the spoon on the towel”

grounding failure 3

“put the yellow bell pepper on the cloth”

grounding failure 4

“put the yellow bell pepper on the cloth”

The other failure mode is failing to manipulate objects. This can be due to missing a grasp, moving imprecisely, or releasing objects at the incorrect time. We note that these are not inherent shortcomings of the robot setup, as a GCBC policy trained on the entire dataset can consistently succeed in manipulation. Rather, this failure mode generally indicates an ineffectiveness in leveraging goal-conditioned data.

Examples of manipulation failures

manipulation failure 1

“move the bell pepper to the left of the table”

manipulation failure 2

“put the bell pepper in the pan”

manipulation failure 3

“move the towel next to the microwave”

Comparing the baselines, they each suffered from these two failure modes to different extents. LCBC relies solely on the small labeled trajectory dataset, and its poor manipulation capability prevents it from completing any tasks. LLfP jointly trains the policy on labeled and unlabeled data and shows significantly improved manipulation capability from LCBC. It achieves reasonable success rates for common instructions, but fails to ground more complex instructions. BC-Z’s alignment strategy also improves manipulation capability, likely because alignment improves the transfer between modalities. However, without external vision-language data sources, it still struggles to generalize to new instructions.

GRIF shows the best generalization while also having strong manipulation capabilities. It is able to ground the language instructions and carry out the task even when many distinct tasks are possible in the scene. We show some rollouts and the corresponding instructions below.

Policy Rollouts from GRIF

rollout 1

“move the pan to the front”

rollout 2

“put the bell pepper in the pan”

rollout 3

“put the knife on the purple cloth”

rollout 4

“put the spoon on the towel”

Conclusion

GRIF enables a robot to utilize large amounts of unlabeled trajectory data to learn goal-conditioned policies, while providing a “language interface” to these policies via aligned language-goal task representations. In contrast to prior language-image alignment methods, our representations align changes in state to language, which we show leads to significant improvements over standard CLIP-style image-language alignment objectives. Our experiments demonstrate that our approach can effectively leverage unlabeled robotic trajectories, with large improvements in performance over baselines and methods that only use the language-annotated data.

Our method has a number of limitations that could be addressed in future work. GRIF is not well-suited for tasks where instructions say more about how to do the task than what to do (e.g., “pour the water slowly”)—such qualitative instructions might require other types of alignment losses that consider the intermediate steps of task execution. GRIF also assumes that all language grounding comes from the portion of our dataset that is fully annotated or a pre-trained VLM. An exciting direction for future work would be to extend our alignment loss to utilize human video data to learn rich semantics from Internet-scale data. Such an approach could then use this data to improve grounding on language outside the robot dataset and enable broadly generalizable robot policies that can follow user instructions.


This post is based on the following paper:

]]>
Robot Talk Episode 58 – Lorenzo Jamone https://robohub.org/robot-talk-episode-58-lorenzo-jamone/ Fri, 20 Oct 2023 11:16:21 +0000 https://robohub.org/?p=208533 Claire chatted to Lorenzo Jamone from Queen Mary University of London all about robotic hands, dexterity, and the sense of touch.

Lorenzo Jamone is a Senior Lecturer in Robotics at Queen Mary University of London, where he is the founder and director of the CRISP group: Cognitive Robotics and Intelligent Systems for the People. He received a PhD degree in humanoid technologies at the Italian Institute of Technology. He was previously an Associate Researcher at Waseda University in Japan, and at the Instituto Superior Técnico in Portugal. His current research interests include cognitive robotics, robotic manipulation, force and tactile sensing, robot learning.

]]>
New technique helps robots pack objects into a tight space https://robohub.org/new-technique-helps-robots-pack-objects-into-a-tight-space/ Tue, 17 Oct 2023 06:38:00 +0000 https://news.mit.edu/2023/new-technique-helps-robots-pack-objects-tight-space-1017

MIT researchers are using generative AI models to help robots more efficiently solve complex object manipulation problems, such as packing a box with different objects. Image: courtesy of the researchers.

By Adam Zewe | MIT News

Anyone who has ever tried to pack a family-sized amount of luggage into a sedan-sized trunk knows this is a hard problem. Robots struggle with dense packing tasks, too.

For the robot, solving the packing problem involves satisfying many constraints, such as stacking luggage so suitcases don’t topple out of the trunk, heavy objects aren’t placed on top of lighter ones, and collisions between the robotic arm and the car’s bumper are avoided.

Some traditional methods tackle this problem sequentially, guessing a partial solution that meets one constraint at a time and then checking to see if any other constraints were violated. With a long sequence of actions to take, and a pile of luggage to pack, this process can be impractically time consuming.   

MIT researchers used a form of generative AI, called a diffusion model, to solve this problem more efficiently. Their method uses a collection of machine-learning models, each of which is trained to represent one specific type of constraint. These models are combined to generate global solutions to the packing problem, taking into account all constraints at once.

Their method was able to generate effective solutions faster than other techniques, and it produced a greater number of successful solutions in the same amount of time. Importantly, their technique was also able to solve problems with novel combinations of constraints and larger numbers of objects, that the models did not see during training.

Due to this generalizability, their technique can be used to teach robots how to understand and meet the overall constraints of packing problems, such as the importance of avoiding collisions or a desire for one object to be next to another object. Robots trained in this way could be applied to a wide array of complex tasks in diverse environments, from order fulfillment in a warehouse to organizing a bookshelf in someone’s home.

“My vision is to push robots to do more complicated tasks that have many geometric constraints and more continuous decisions that need to be made — these are the kinds of problems service robots face in our unstructured and diverse human environments. With the powerful tool of compositional diffusion models, we can now solve these more complex problems and get great generalization results,” says Zhutian Yang, an electrical engineering and computer science graduate student and lead author of a paper on this new machine-learning technique.

Her co-authors include MIT graduate students Jiayuan Mao and Yilun Du; Jiajun Wu, an assistant professor of computer science at Stanford University; Joshua B. Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Tomás Lozano-Pérez, an MIT professor of computer science and engineering and a member of CSAIL; and senior author Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT and a member of CSAIL. The research will be presented at the Conference on Robot Learning.

Constraint complications

Continuous constraint satisfaction problems are particularly challenging for robots. These problems appear in multistep robot manipulation tasks, like packing items into a box or setting a dinner table. They often involve achieving a number of constraints, including geometric constraints, such as avoiding collisions between the robot arm and the environment; physical constraints, such as stacking objects so they are stable; and qualitative constraints, such as placing a spoon to the right of a knife.

There may be many constraints, and they vary across problems and environments depending on the geometry of objects and human-specified requirements.

To solve these problems efficiently, the MIT researchers developed a machine-learning technique called Diffusion-CCSP. Diffusion models learn to generate new data samples that resemble samples in a training dataset by iteratively refining their output.

To do this, diffusion models learn a procedure for making small improvements to a potential solution. Then, to solve a problem, they start with a random, very bad solution and then gradually improve it.
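In rough pseudocode, that refine-from-noise loop looks like the sketch below, where a learned network proposes a small correction at each step. The schedule, noise levels and denoiser here are placeholders meant only to convey the flavour of diffusion-style sampling, not the actual Diffusion-CCSP sampler.

# Illustrative refine-from-noise loop; `denoiser(x, t)` is an assumed learned correction network.
import torch

def refine_with_diffusion(denoiser, num_objects, steps=50, step_size=0.1):
    """Start from a random 'very bad' layout and repeatedly apply small learned corrections.
    Here x is a (num_objects, 3) tensor of (x, y, rotation) placements."""
    x = torch.randn(num_objects, 3)            # random initial guess, almost certainly invalid
    for t in reversed(range(steps)):
        correction = denoiser(x, t)            # learned small improvement toward satisfying constraints
        noise = torch.randn_like(x) * (t / steps) * 0.05   # shrink exploration noise as we converge
        x = x + step_size * correction + noise
    return x

# e.g. refine_with_diffusion(lambda x, t: -x, num_objects=5) simply pulls all poses toward the origin.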

Using generative AI models, MIT researchers created a technique that could enable robots to efficiently solve continuous constraint satisfaction problems, such as packing objects into a box while avoiding collisions, as shown in this simulation. Image: Courtesy of the researchers.

For example, imagine randomly placing plates and utensils on a simulated table, allowing them to physically overlap. The collision-free constraints between objects will result in them nudging each other away, while qualitative constraints will drag the plate to the center, align the salad fork and dinner fork, etc.

Diffusion models are well-suited for this kind of continuous constraint-satisfaction problem because the influences from multiple models on the pose of one object can be composed to encourage the satisfaction of all constraints, Yang explains. By starting from a random initial guess each time, the models can obtain a diverse set of good solutions.

Working together

For Diffusion-CCSP, the researchers wanted to capture the interconnectedness of the constraints. In packing for instance, one constraint might require a certain object to be next to another object, while a second constraint might specify where one of those objects must be located.

Diffusion-CCSP learns a family of diffusion models, with one for each type of constraint. The models are trained together, so they share some knowledge, like the geometry of the objects to be packed.

The models then work together to find solutions, in this case locations for the objects to be placed, that jointly satisfy the constraints.
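A rough sketch of that composition at sampling time is shown below: each constraint-specific model proposes a correction for only the objects it involves, and the corrections are summed before the shared update is applied. This is illustrative pseudocode under assumed interfaces, not the released Diffusion-CCSP implementation.

# Compose per-constraint correction models into one global update (illustrative only).
import torch

def composed_correction(constraint_models, constraints, poses):
    """`constraints` is a list of (constraint_type, object_indices) tuples;
    `constraint_models[constraint_type](poses[object_indices])` is assumed to return
    a correction of the same shape for just those objects."""
    total = torch.zeros_like(poses)
    for ctype, idx in constraints:
        idx = torch.as_tensor(idx)
        total[idx] += constraint_models[ctype](poses[idx])
    return total

# During sampling, this composed correction replaces the single-model correction:
#   poses = poses + step_size * composed_correction(models, constraints, poses) + noise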

“We don’t always get to a solution at the first guess. But when you keep refining the solution and some violation happens, it should lead you to a better solution. You get guidance from getting something wrong,” she says.

Training individual models for each constraint type and then combining them to make predictions greatly reduces the amount of training data required, compared to other approaches.

However, training these models still requires a large amount of data that demonstrate solved problems. Humans would need to solve each problem with traditional slow methods, making the cost to generate such data prohibitive, Yang says.

Instead, the researchers reversed the process by coming up with solutions first. They used fast algorithms to generate segmented boxes and fit a diverse set of 3D objects into each segment, ensuring tight packing, stable poses, and collision-free solutions.

“With this process, data generation is almost instantaneous in simulation. We can generate tens of thousands of environments where we know the problems are solvable,” she says.
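The reversed data-generation idea can be sketched in a few lines: split a box into segments, drop one randomly sized object into each segment, and record the result as an already-solved packing problem. The toy 2D snippet below, with invented sizes and margins, illustrates the idea rather than the authors' actual pipeline.

# Toy 2D version of "generate the solution first": segment a box, place one object per segment.
import random

def generate_solved_packing(box_w=1.0, box_h=1.0, n_cols=3, n_rows=2, margin=0.05):
    """Create one already-solved packing example by splitting a box into a grid of segments
    and placing a randomly sized rectangle inside each segment, guaranteeing no collisions."""
    cell_w, cell_h = box_w / n_cols, box_h / n_rows
    objects = []
    for r in range(n_rows):
        for c in range(n_cols):
            w = random.uniform(0.3, 1.0) * (cell_w - 2 * margin)
            h = random.uniform(0.3, 1.0) * (cell_h - 2 * margin)
            x = c * cell_w + random.uniform(margin, cell_w - w - margin)
            y = r * cell_h + random.uniform(margin, cell_h - h - margin)
            objects.append({"size": (w, h), "pose": (x, y)})
    return objects  # the sizes define a solvable problem, the poses are its solution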

Trained using these data, the diffusion models work together to determine locations objects should be placed by the robotic gripper that achieve the packing task while meeting all of the constraints.

They conducted feasibility studies, and then demonstrated Diffusion-CCSP with a real robot solving a number of difficult problems, including fitting 2D triangles into a box, packing 2D shapes with spatial relationship constraints, stacking 3D objects with stability constraints, and packing 3D objects with a robotic arm.

This figure shows examples of 2D triangle packing. These are collision-free configurations. Image: courtesy of the researchers.

This figure shows 3D object stacking with stability constraints. Researchers say at least one object is supported by multiple objects. Image: courtesy of the researchers.

Their method outperformed other techniques in many experiments, generating a greater number of effective solutions that were both stable and collision-free.

In the future, Yang and her collaborators want to test Diffusion-CCSP in more complicated situations, such as with robots that can move around a room. They also want to enable Diffusion-CCSP to tackle problems in different domains without the need to be retrained on new data.

“Diffusion-CCSP is a machine-learning solution that builds on existing powerful generative models,” says Danfei Xu, an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at NVIDIA AI, who was not involved with this work. “It can quickly generate solutions that simultaneously satisfy multiple constraints by composing known individual constraint models. Although it’s still in the early phases of development, the ongoing advancements in this approach hold the promise of enabling more efficient, safe, and reliable autonomous systems in various applications.”

This research was funded, in part, by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the MIT-IBM Watson AI Lab, the MIT Quest for Intelligence, the Center for Brains, Minds, and Machines, Boston Dynamics Artificial Intelligence Institute, the Stanford Institute for Human-Centered Artificial Intelligence, Analog Devices, JPMorgan Chase and Co., and Salesforce.

]]>
Interview with Marek Šuppa: insights into RoboCupJunior https://robohub.org/interview-with-marek-suppa-insights-into-robocupjunior/ Mon, 16 Oct 2023 09:30:22 +0000 https://robohub.org/?p=208492

A RoboCupJunior soccer match in action.

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in-person in Thailand last year after two years of a pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students and at the same time it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying to both make it more compelling and captivating for the students and at the same time ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from respective countries form a so-called “SuperTeam” and compete against another “SuperTeam” as if these were distinct teams. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams and they compete on a field that is six times larger than the “standard” fields that are used for the individual games. While in the individual matches each team can play with two robots at most (resulting in a 2v2 game) in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots that play on the SuperTeam field during a SuperTeam match. The setup is very similar to the Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: it was a lot of fun for both the participants as well as the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another and additionally, the referees did not have a way of communicating with the robots. The result was that not only was there little coordination among robots of the same SuperTeam, whenever the game needed to be stopped, the referees had to physically run after the robots on the field to catch them and do a kickoff after a goal was scored. Although hilarious, it’s far from how we would imagine the SuperTeam games to look.

The RoboCupJunior Soccer Standard Communication Modules aim to do both. The module itself is a small device that is attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices themselves also support direct message exchange between robots on a single SuperTeam, meaning the teams do not have to invest into figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, meaning not only that everyone can build their own Standard Communication Module if they’d like but also that the community can participate in its development, which makes it an interesting addition to RoboCupJunior Soccer.

RoboCupJunior Soccer teams getting ready for the competition.

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff”. Although we’ve done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it in a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves did work quite well and for some of us, who happened to have experience with “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also found potential areas for improvement for the future. The modules do not have a power source of their own and were powered by the robots themselves. We didn’t think this would be a problem, but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when a robot accelerates aggressively – which in turn means that some of the modules disconnect when the voltage drops too low. However, it ended up being a nice lesson for everyone involved, one that we can certainly learn from when we design the next iterations.


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we started to observe in recent years which surprised us organisers, to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us, the organisers, work in areas related to robotics, computer science, and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought that it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or difficult to set up for the high school students that comprise the majority of RoboCupJunior students.

Well, to our great surprise, some of the more advanced teams have started to utilise methods and technologies that are very close to the current state of the art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing, and in Soccer to detect the ball, the goals, and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to do all the steps necessary for a successful deployment of this technology: gather a dataset, fine-tune the deep-learning model and deploy it on their robots – all of which is far from trivial and is very close to how these technologies get used in both research and industry.
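The exact tooling differs from team to team, but the workflow described above – gather a dataset, fine-tune a pretrained detector, deploy it on the robot – can be sketched in a few lines using, for example, the open-source Ultralytics YOLO API. The dataset file and the runtime image below are hypothetical placeholders, not any particular team's setup.

```python
# Illustrative fine-tune-and-deploy workflow with the Ultralytics YOLO API.
# "soccer.yaml" (an annotated ball/goal/robot dataset) and "frame.jpg" are
# hypothetical placeholders.
from ultralytics import YOLO

# Start from a small pretrained checkpoint and fine-tune on the custom dataset.
model = YOLO("yolov8n.pt")
model.train(data="soccer.yaml", epochs=50, imgsz=640)

# Export a lightweight format for the robot's onboard computer.
model.export(format="onnx")

# At runtime: detect the ball, the goals and the opponents in a camera frame.
results = model("frame.jpg")
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```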

Although so far we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect their use to become much more prevalent in the future, especially as the technology and the tooling around it become more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.

Action from RoboCupJunior Soccer 2023.

How can people get involved in RCJ (either as a participant or an organiser?)

A very good question!

The best place to start would be the RoboCupJunior website where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCupJunior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa

Marek Suppa

Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he was not smart enough to do all the programming by himself. Since then, he's been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as the Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to give others the chance to have a similar experience by organising the RoboCupJunior competition as part of the Executive Committee.

]]>
Easing job jitters in the digital revolution https://robohub.org/easing-job-jitters-in-the-digital-revolution/ Sun, 15 Oct 2023 10:54:48 +0000 https://robohub.org/?p=208461

The world’s fourth industrial revolution is ushering in big shifts in the workplace. © demaerre, iStock.com

Professor Steven Dhondt has a reassurance of sorts for people in the EU worried about losing their jobs to automation: relax.

Dhondt, an expert in work and organisational change at the Catholic University Leuven in Belgium, has studied the impact of technology on jobs for the past four decades. Fresh from leading an EU research project on the issue, he stresses opportunities rather than threats.

Right vision

‘We need to develop new business practices and welfare support but, with the right vision, we shouldn’t see technology as a threat,’ Dhondt said. ‘Rather, we should use it to shape the future and create new jobs.’

The rapid and accelerating advance in digital technologies across the board is regarded as the world’s fourth industrial revolution, ushering in fundamental shifts in how people live and work.

If the first industrial revolution was powered by steam, the second by electricity and the third by electronics, the latest will be remembered for automation, robotics and artificial intelligence, or AI. It’s known as “Industry 4.0”.

‘Whether it was the Luddite movement in the 1800s, responding to the introduction of automatic spinning machines in the wool industry, or concerns about AI today, questions about technology’s impact on jobs really reflect wider ones about employment practices and the labour market,’ said Dhondt.

He is also a senior scientist at a Netherlands-based independent research organisation called TNO.

The EU project that Dhondt led explored how businesses and welfare systems could better adapt to support workers in the face of technological changes. The initiative, called Beyond4.0, began in January 2019 and wrapped up in June 2023.

While the emergence of self-driving cars and AI-assisted robots holds big potential for economic growth and social progress, it also sounds alarm bells.

More than 70% of EU citizens fear that new technologies will “steal” people’s jobs, according to a 2019 analysis by the European Centre for the Development of Vocational Training.

Local successes

The Beyond4.0 researchers studied businesses across Europe that have taken proactive and practical steps to empower employees.

“We shouldn’t see technology as a threat – rather we should use it to shape the future and create new jobs.”

– Professor Steven Dhondt, BEYOND4.0

One example is a family-run Dutch glass company called Metaglas, which decided that staying competitive in the face of technological changes required investing more in its own workforce.

Metaglas offered workers greater openness with management and a louder voice on the company’s direction and product development.

The move, which the company named “MetaWay”, has helped it retain workers while turning a profit that is being reinvested in the workforce, according to Dhondt.

He said the example shows how much managers’ approach to the whole issue matters in the business world.

‘The technology can be an enabler, not a threat, but the decision about that lies with management in organisations,’ Dhondt said. ‘If management uses technology to downgrade the quality of jobs, then jobs are at risk. If management uses technology to enhance jobs, then you can see workers and organisations learn and improve.’

The Metaglas case has fed into a “knowledge bank” meant to inform business practices more broadly.

Dhondt also highlighted the importance of regions in Europe where businesses and job trainers join forces to support people.

BEYOND4.0 studied the case of the Finnish city of Oulu – once a leading outpost of mobile-phone giant Nokia. In the 2010s, the demise of Nokia’s handset business threatened Oulu with a “brain drain” as the company’s engineers were laid off.

But collaboration among Nokia, local universities and policymakers helped grow new businesses including digital spin-offs and kept hundreds of engineers in the central Finnish region, once a trading centre for wood tar, timber and salmon.

Some Nokia engineers went to the local hospital to work on electronic healthcare services – “e-health” – while others moved to papermaker Stora Enso, according to Dhondt.

Nowadays there are more high-tech jobs in Oulu than during Nokia’s heyday. The BEYOND4.0 team held the area up as a successful “entrepreneurial ecosystem” that could help inform policies and practices elsewhere in Europe.

Income support

In cases where people were out of work, the project also looked to new forms of welfare support.

Dhondt’s Finnish colleagues examined the impact of a two-year trial in Finland of a “universal basic income” – or UBI – and used this to assess the feasibility of a different model called “participation income.”

In the UBI experiment, participants each received a monthly €560 sum, which was paid unconditionally. Although UBI is often touted as an answer to automation, BEYOND4.0’s evaluation of the Finnish trial was that it could weaken the principle of solidarity in society.

The project’s participation income approach requires recipients of financial support to undertake an activity deemed useful to society. This might include, for example, care for the elderly or for children.

While detailed aspects are still being worked out, the BEYOND4.0 team discussed participation income with the government of Finland and the Finnish parliament has put the idea on the agenda for debate.

Dhondt hopes the project’s findings, including on welfare support, will help other organisations better navigate the changing tech landscape.

Employment matchmakers

Another researcher keen to help people adapt to technological changes is Dr Aisling Tuite, a labour-market expert at the South East Technical University in Ireland.

“We wanted to develop a product that could be as useful for people looking for work as for those supporting them.”

– Dr Aisling Tuite, HECAT

Tuite has looked at how digital technologies can help job seekers find suitable work.

She coordinated an EU-funded project to help out-of-work people find jobs or develop new skills through a more open online system.

Called HECAT, the project ran from February 2020 through July 2023 and brought together researchers from Denmark, France, Ireland, Slovenia, Spain and Switzerland.

In recent years, many countries have brought in active labour-market policies that deploy computer-based systems to profile workers and help career counsellors target people most in need of help.

While this sounds highly targeted, Tuite said that in reality it often pushes people into employment that might be unsuitable for them, creating job-retention problems.

‘Our current employment systems often fail to get people to the right place – they just move people on,’ she said. ‘What people often need is individualised support or new training. We wanted to develop a product that could be as useful for people looking for work as for those supporting them.’

Ready to run

HECAT’s online system combines new vacancies with career counselling and current labour-market data.

The system was tested during the project and a beta version is now available via My Labour Market and can be used in all EU countries where data is available.

It can help people figure out where there are jobs and how to be best positioned to secure them, according to Tuite.

In addition to displaying openings by location and quality, the system offers detailed information about career opportunities and labour-market trends including the kinds of jobs on the rise in particular areas and the average time it takes to find a position in a specific sector.

Tuite said feedback from participants in the test was positive.

She recalled one young female job seeker saying it had made her more confident in exploring new career paths, and another who said that knowing the average “jobs wait” eased the stress of hunting.

Looking ahead, Tuite hopes the HECAT researchers can demonstrate the system in governmental employment-services organisations in numerous EU countries over the coming months. 

‘There is growing interest in this work from across public employment services in the EU and we’re excited,’ she said.


(This article was updated on 21 September 2023 to include a reference to Steven Dhondt’s role at TNO in the Netherlands)

Research in this article was funded by the EU.

This article was originally published in Horizon, the EU Research and Innovation magazine.

]]>
Robot Talk Episode 57 – Kate Devlin https://robohub.org/robot-talk-episode-57-kate-devlin/ Fri, 13 Oct 2023 11:10:18 +0000 https://robohub.org/?p=208448

Claire chatted to Kate Devlin from King’s College London about the social and ethical implications of robotics and AI.

Kate Devlin is Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King’s College London. She is an interdisciplinary computer scientist investigating how people interact with and react to technologies, both past and future. Kate is the author of Turned On: Science, Sex and Robots, which examines the ethical and social implications of technology and intimacy. She is Creative and Outreach lead for the UKRI Responsible Artificial Intelligence UK programme — an international research and innovation ecosystem for responsible AI.

]]>
#IROS2023 awards finalists and winners + IROS on Demand free for one year https://robohub.org/iros2023-awards-finalists-and-winners-iros-on-demand-free-for-one-year/ Thu, 12 Oct 2023 11:09:17 +0000 https://robohub.org/?p=208437

Did you have the chance to attend the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) in Detroit? Here we bring you the papers that received an award this year in case you missed them. And good news: you can read all the papers because IROS on Demand is open to the public and freely available for one year from Oct 9th. Congratulations to all the winners and finalists!

IROS 2023 Best Overall and Best Student Paper

Winner of the IROS 2023 Best Paper

  • Autonomous Power Line Inspection with Drones via Perception-Aware MPC, by Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo Carrio, Davide Scaramuzza.

Winner of the IROS 2023 Best Student Paper

  • Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent, by Shihao Cheng, Curt A. Laubscher, Robert D. Gregg.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.
  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Decentralized Swarm Trajectory Generation for LiDAR-based Aerial Tracking in Cluttered Environments, by Longji Yin, Fangcheng Zhu, Yunfan Ren, Fanze Kong, Fu Zhang.
  • Open-Vocabulary Affordance Detection in 3D Point Clouds, by Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen.
  • Discovering Symbolic Adaptation Algorithms from Scratch, by Stephen Kelly, Daniel Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Boddeti, Jie Tan, Esteban Real.
  • Parallel cell array patterning and target cell lysis on an optoelectronic micro-well device, by Chunyuan Gan, Hongyi Xiong, Jiawei Zhao, Ao Wang, Chutian Wang, Shuzhang Liang, Jiaying Zhang, Lin Feng.
  • FATROP: A Fast Constrained Optimal Control Problem Solver for Robot Trajectory Optimization and Control, by Lander Vanroye, Ajay Suresha Sathya, Joris De Schutter, Wilm Decré.
  • GelSight Svelte: A Human Finger-Shaped Single-Camera Tactile Robot Finger with Large Sensing Coverage and Proprioceptive Sensing, by Jialiang Zhao, Edward Adelson.
  • Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model, by Fouad Makiyeh, Francois Chaumette, Maud Marchal, Alexandre Krupa.

IROS Best Paper Award on Agri-Robotics sponsored by YANMAR

Winner

  • Visual, Spatial, Geometric-Preserved Place Recognition for Cross-View and Cross-Modal Collaborative Perception, by Peng Gao, Jing Liang, Yu Shen, Sanghyun Son, Ming C. Lin.

Finalists

  • Online Self-Supervised Thermal Water Segmentation for Aerial Vehicles, by Connor Lee, Jonathan Gustafsson Frennert, Lu Gan, Matthew Anderson, Soon-Jo Chung.
  • Relative Roughness Measurement based Real-time Speed Planning for Autonomous Vehicles on Rugged Road, by Liang Wang, Tianwei Niu, Shuai Wang, Shoukun Wang, Junzheng Wang.

IROS Best Application Paper Award sponsored by ICROS

Winner

  • Autonomous Robotic Drilling System for Mice Cranial Window Creation: An Evaluation with an Egg Model, by Enduo Zhao, Murilo Marques Marinho, Kanako Harada.

Finalists

  • Visuo-Tactile Sensor Enabled Pneumatic Device Towards Compliant Oropharyngeal Swab Sampling, by Shoujie Li, MingShan He, Wenbo Ding, Linqi Ye, xueqian WANG, Junbo Tan, Jinqiu Yuan, Xiao-Ping Zhang.
  • Improving Amputee Endurance over Activities of Daily Living with a Robotic Knee-Ankle Prosthesis: A Case Study, by Kevin Best, Curt A. Laubscher, Ross Cortino, Shihao Cheng, Robert D. Gregg.
  • Dynamic hand proprioception via a wearable glove with fabric sensors, by Lily Behnke, Lina Sanchez-Botero, William Johnson, Anjali Agrawala, Rebecca Kramer-Bottiglio.
  • Active Capsule System for Multiple Therapeutic Patch Delivery: Preclinical Evaluation, by Jihun Lee, Manh Cuong Hoang, Jayoung Kim, Eunho Choe, Hyeonwoo Kee, Seungun Yang, Jongoh Park, Sukho Park.

IROS Best Entertainment and Amusement Paper Award sponsored by JTCF

Winner

  • DoubleBee: A Hybrid Aerial-Ground Robot with Two Active Wheels, by Muqing Cao, Xinhang Xu, Shenghai Yuan, Kun Cao, Kangcheng Liu, Lihua Xie.

Finalists

  • Polynomial-based Online Planning for Autonomous Drone Racing in Dynamic Environments, by Qianhao Wang, Dong Wang, Chao Xu, Alan Gao, Fei Gao.
  • Bistable Tensegrity Robot with Jumping Repeatability based on Rigid Plate-shaped Compressors, by Kento Shimura, Noriyasu Iwamoto, Takuya Umedachi.

IROS Best Industrial Robotics Research for Applications sponsored by Mujin Inc.

Winner

  • Toward Closed-loop Additive Manufacturing: Paradigm Shift in Fabrication, Inspection, and Repair, by Manpreet Singh, Fujun Ruan, Albert Xu, Yuchen Wu, Archit Rungta, Luyuan Wang, Kevin Song, Howie Choset, Lu Li.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Bagging by Learning to Singulate Layers Using Interactive Perception, by Lawrence Yunliang Chen, Baiyu Shi, Roy Lin, Daniel Seita, Ayah Ahmad, Richard Cheng, Thomas Kollar, David Held, Ken Goldberg.
  • Exploiting the Kinematic Redundancy of a Backdrivable Parallel Manipulator for Sensing During Physical Human-Robot Interaction, by Arda Yigit, Tan-Sy Nguyen, Clement Gosselin.

IROS Best Paper Award on Cognitive Robotics sponsored by KROS

Winner

  • Extracting Dynamic Navigation Goal from Natural Language Dialogue, by Lanjun Liang, Ganghui Bian, Huailin Zhao, Yanzhi Dong, Huaping Liu.

Finalists

  • EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera, by Jinkai Li, Jianxin Yang, Yuxuan Liu, ZHEN LI, Guang-Zhong Yang, Yao Guo.
  • Team Coordination on Graphs with State-Dependent Edge Cost, by Sara Oughourli, Manshi Limbu, Zechen Hu, Xuan Wang, Xuesu Xiao, Daigo Shishika.
  • Is Weakly-supervised Action Segmentation Ready For Human-Robot Interaction? No, Let’s Improve It With Action-union Learning, by Fan Yang, Shigeyuki Odashima, Shochi Masui, Shan Jiang.
  • Exploiting Spatio-temporal Human-object Relations using Graph Neural Networks for Human Action Recognition and 3D Motion Forecasting, by Dimitrios Lagamtzis, Fabian Schmidt, Jan Reinke Seyler, Thao Dang, Steffen Schober.

IROS Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp.

Winner

  • A perching and tilting aerial robot for precise and versatile power tool work on vertical walls, by Roman Dautzenberg, Timo Küster, Timon Mathis, Yann Roth, Curdin Steinauer, Gabriel Käppeli, Julian Santen, Alina Arranhado, Friederike Biffar, Till Kötter, Christian Lanegger, Mike Allenspach, Roland Siegwart, Rik Bähnemann.

Finalists

  • Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing, by Luca Lach, Niklas Wilhelm Funk, Robert Haschke, Séverin Lemaignan, Helge Joachim Ritter, Jan Peters, Georgia Chalvatzaki.
  • Efficient Object Manipulation Planning with Monte Carlo Tree Search, by Huaijiang Zhu, Avadesh Meduri, Ludovic Righetti.
  • Sequential Manipulation Planning for Over-actuated UAMs, by Yao Su, Jiarui Li, Ziyuan Jiao, Meng Wang, Chi Chu, Hang Li, Yixin Zhu, Hangxin Liu.
  • On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds, by Holger Klein, Noémie Jaquier, Andre Meixner, Tamim Asfour.

IROS Best RoboCup Paper Award sponsored by RoboCup Federation

Winner

  • Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance, by Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao.

Finalists

  • Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation, by Fabian Clemens Weigend, Shubham Sonawani, Drolet Michael, Heni Ben Amor.
  • Effectively Rearranging Heterogeneous Objects on Cluttered Tabletops, by Kai Gao, Justin Yu, Tanay Sandeep Punjabi, Jingjin Yu.
  • Prioritized Planning for Target-Oriented Manipulation via Hierarchical Stacking Relationship Prediction, by Zewen Wu, Jian Tang, Xingyu Chen, Chengzhong Ma, Xuguang Lan, Nanning Zheng.

IROS Best Paper Award on Robot Mechanisms and Design sponsored by ROBOTIS

Winner

  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.

Finalists

  • Hybrid Tendon and Ball Chain Continuum Robots for Enhanced Dexterity in Medical Interventions, by Giovanni Pittiglio, Margherita Mencattelli, Abdulhamit Donder, Yash Chitalia, Pierre Dupont.
  • c^2: Co-design of Robots via Concurrent-Network Coupling Online and Offline Reinforcement Learning, by Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, Rong Xiong.
  • Collision-Free Reconfiguration Planning for Variable Topology Trusses using a Linking Invariant, by Alexander Spinos, Mark Yim.
  • eViper: A Scalable Platform for Untethered Modular Soft Robots, by Hsin Cheng, Zhiwu Zheng, Prakhar Kumar, Wali Afridi, Ben Kim, Sigurd Wagner, Naveen Verma, James Sturm, Minjie Chen.

IROS Best Paper Award on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi sponsored by IRSI

Winner

  • mCLARI: A Shape-Morphing Insect-Scale Robot Capable of Omnidirectional Terrain-Adaptive Locomotion, by Heiko Dieter Kabutz, Alexander Hedrick, William Parker McDonnell, Kaushik Jayaram.

Finalists

  • Towards Legged Locomotion on Steep, Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Global Localization in Unstructured Environments using Semantic Object Maps Built from Various Viewpoints, by Jacqueline Ankenbauer, Parker C. Lusk, Jonathan How.
  • EELS: Towards Autonomous Mobility in Extreme Environments with a Novel Large-Scale Screw Driven Snake Robot, by Rohan Thakker, Michael Paton, Marlin Polo Strub, Michael Swan, Guglielmo Daddi, Rob Royce, Matthew Gildner, Tiago Vaquero, Phillipe Tosi, Marcel Veismann, Peter Gavrilov, Eloise Marteau, Joseph Bowkett, Daniel Loret de Mola Lemus, Yashwanth Kumar Nakka, Benjamin Hockman, Andrew Orekhov, Tristan Hasseler, Carl Leake, Benjamin Nuernberger, Pedro F. Proença, William Reid, William Talbot, Nikola Georgiev, Torkom Pailevanian, Avak Archanian, Eric Ambrose, Jay Jasper, Rachel Etheredge, Christiahn Roman, Daniel S Levine, Kyohei Otsu, Hovhannes Melikyan, Richard Rieber, Kalind Carpenter, Jeremy Nash, Abhinandan Jain, Lori Shiraishi, Ali-akbar Agha-mohammadi, Matthew Travers, Howie Choset, Joel Burdick, Masahiro Ono.
  • Multi-IMU Proprioceptive Odometry for Legged Robots, by Shuo Yang, Zixin Zhang, Benjamin Bokser, Zachary Manchester.
]]>
The robots of #IROS2023 https://robohub.org/the-robots-of-iros2023/ Wed, 11 Oct 2023 05:35:43 +0000 https://svrobo.org/?p=26921

The International Conference on Intelligent Robots and Systems (IROS) showcases leading-edge research in robotics. IROS was held in Detroit, MI, on Oct 1-5 and showcased not only research but also the latest commercialization in robotics, particularly from providers selling into robotics research or supplying parts of the hardware/software stack. The conference focuses on future directions in robotics, and the latest approaches, designs, and outcomes. It also provides an opportunity to network with the world’s leading roboticists.

Highlights included seeing Silicon Valley Robotics members Foxglove, Hello Robot, Anyware Robotics and Tangram Vision, as well as Open Robotics and Intrinsic talking up ROS 2 and the upcoming ROSCon 23. Intrinsic sponsored a ROS/IROS meetup and Clearpath Robotics sponsored the Diversity Cocktails event. OhmniLabs sponsored 3 telepresence robots, which were in constant demand touring the expo, the competition floor and the poster sessions. I also met Sol Robotics from the Bay Area, which has a unique robot arm structure that is super stable and able to carry a lot of weight.

There were plenty of rolling and roaming robots, like the Diablo from Direct Drive Tech (the world’s first direct-drive self-balancing wheeled-leg robot), along with legged robots from Deep Robotics, Unitree Robotics, Fourier Intelligence, Hebi and Westwood Robotics, and wheeled platforms from Clearpath, Otto, Husarion, Hebi and more. Although they weren’t on the expo floor, the Disney keynote session was another highlight, with a live robot demo on stage.

And Franka Emika fans will be pleased to hear that not only did they win a ‘best paper’ award, but that the imminent demise of the company is much overstated. It’s a German thing. There are many investors/purchasers lined up to keep the company going while they restructure. And watch out for Psyonic! Psyonic’s Ability Hands and arms, the world’s first touch-sensing bionic arms, are being used by Apptronik (humanoid for NASA) as well as by people with disabilities.

IROS Exhibitor gallery

Full list of IROS Exhibitors is here.

]]>
Finger-shaped sensor enables more dexterous robots https://robohub.org/finger-shaped-sensor-enables-more-dexterous-robots/ Sun, 08 Oct 2023 06:59:00 +0000 https://news.mit.edu/2023/finger-shaped-sensor-enables-more-dexterous-robots-1004

MIT researchers have developed a camera-based touch sensor that is long, curved, and shaped like a human finger. Their device, which provides high-resolution tactile sensing over a large area, could enable a robotic hand to perform multiple types of grasps. Image: Courtesy of the researchers

By Adam Zewe | MIT News

Imagine grasping a heavy object, like a pipe wrench, with one hand. You would likely grab the wrench using your entire fingers, not just your fingertips. Sensory receptors in your skin, which run along the entire length of each finger, would send information to your brain about the tool you are grasping.

In a robotic hand, tactile sensors that use cameras to obtain information about grasped objects are small and flat, so they are often located in the fingertips. These robots, in turn, use only their fingertips to grasp objects, typically with a pinching motion. This limits the manipulation tasks they can perform.

MIT researchers have developed a camera-based touch sensor that is long, curved, and shaped like a human finger. Their device provides high-resolution tactile sensing over a large area. The sensor, called the GelSight Svelte, uses two mirrors to reflect and refract light so that one camera, located in the base of the sensor, can see along the entire finger’s length.

In addition, the researchers built the finger-shaped sensor with a flexible backbone. By measuring how the backbone bends when the finger touches an object, they can estimate the force being placed on the sensor.

They used GelSight Svelte sensors to produce a robotic hand that was able to grasp a heavy object like a human would, using the entire sensing area of all three of its fingers. The hand could also perform the same pinch grasps common to traditional robotic grippers.

This gif shows a robotic hand that incorporates three, finger-shaped GelSight Svelte sensors. The sensors, which provide high-resolution tactile sensing over a large area, enable the hand to perform multiple grasps, including pinch grasps that use only the fingertips and a power grasp that uses the entire sensing area of all three fingers. Credit: Courtesy of the researchers

“Because our new sensor is human finger-shaped, we can use it to do different types of grasps for different tasks, instead of using pinch grasps for everything. There’s only so much you can do with a parallel jaw gripper. Our sensor really opens up some new possibilities on different manipulation tasks we could do with robots,” says Alan (Jialiang) Zhao, a mechanical engineering graduate student and lead author of a paper on GelSight Svelte.

Zhao wrote the paper with senior author Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the IEEE Conference on Intelligent Robots and Systems.

Mirror mirror

Cameras used in tactile sensors are limited by their size, the focal distance of their lenses, and their viewing angles. Therefore, these tactile sensors tend to be small and flat, which confines them to a robot’s fingertips.

With a longer sensing area, one that more closely resembles a human finger, the camera would need to sit farther from the sensing surface to see the entire area. This is particularly challenging due to size and shape restrictions of a robotic gripper.

Zhao and Adelson solved this problem using two mirrors that reflect and refract light toward a single camera located at the base of the finger.

GelSight Svelte incorporates one flat, angled mirror that sits across from the camera and one long, curved mirror that sits along the back of the sensor. These mirrors redistribute light rays from the camera in such a way that the camera can see along the entire finger’s length.

To optimize the shape, angle, and curvature of the mirrors, the researchers designed software to simulate reflection and refraction of light.

“With this software, we can easily play around with where the mirrors are located and how they are curved to get a sense of how well the image will look after we actually make the sensor,” Zhao explains.
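The researchers’ simulation software itself isn’t shown in the article, but the core operation such a tool has to evaluate – bouncing a light ray off a mirror surface – follows the standard reflection formula r = d − 2(d·n)n. Here is a toy 2D sketch of that step; the 45-degree mirror pose is an assumption chosen purely for illustration.

```python
# Toy 2D ray-reflection sketch. This is not the authors' simulation software,
# only an illustration of the basic geometry such a tool has to evaluate.
import numpy as np


def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Reflect a ray direction about a surface normal: r = d - 2(d.n)n."""
    n = normal / np.linalg.norm(normal)
    return direction - 2.0 * np.dot(direction, n) * n


# A ray travelling from the camera at the finger base "up" along the finger,
# hitting a flat mirror tilted at 45 degrees (assumed pose).
incoming = np.array([0.0, 1.0])
mirror_normal = np.array([1.0, -1.0])
print(reflect(incoming, mirror_normal))  # -> approximately [1., 0.]: turned 90 degrees
```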

The mirrors, camera, and two sets of LEDs for illumination are attached to a plastic backbone and encased in a flexible skin made from silicone gel. The camera views the back of the skin from the inside; based on the deformation, it can see where contact occurs and measure the geometry of the object’s contact surface.

A breakdown of the components that make up the finger-like touch sensor. Image: Courtesy of the researchers

In addition, the red and green LED arrays give a sense of how deeply the gel is being pressed down when an object is grasped, due to the saturation of color at different locations on the sensor.

The researchers can use this color saturation information to reconstruct a 3D depth image of the object being grasped.

The sensor’s plastic backbone enables it to determine proprioceptive information, such as the twisting torques applied to the finger. The backbone bends and flexes when an object is grasped. The researchers use machine learning to estimate how much force is being applied to the sensor, based on these backbone deformations.
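The article doesn’t say which learning model the team used for this estimate, so the snippet below is only a generic sketch of the idea: regress from flattened image-difference features to a measured torque. The feature size, the ridge regression and the synthetic data are all placeholders, not the GelSight Svelte pipeline.

```python
# Generic sketch: map image-difference features to an applied torque.
# The model choice and the synthetic data below are placeholders only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_features = 200, 32 * 32          # e.g. a downsampled difference image
X = rng.normal(size=(n_samples, n_features))  # flattened image differences
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # "measured" torques (synthetic)

model = Ridge(alpha=1.0).fit(X[:150], y[:150])     # train on the first 150 samples
print("held-out R^2:", model.score(X[150:], y[150:]))
```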

However, combining these elements into a working sensor was no easy task, Zhao says.

“Making sure you have the correct curvature for the mirror to match what we have in simulation is pretty challenging. Plus, I realized there are some kinds of superglue that inhibit the curing of silicone. It took a lot of experiments to make a sensor that actually works,” he adds.

Versatile grasping

Once they had perfected the design, the researchers tested the GelSight Svelte by pressing objects, like a screw, to different locations on the sensor to check image clarity and see how well it could determine the shape of the object.

They also used three sensors to build a GelSight Svelte hand that can perform multiple grasps, including a pinch grasp, lateral pinch grasp, and a power grasp that uses the entire sensing area of the three fingers. Most robotic hands, which are shaped like parallel jaw grippers, can only perform pinch grasps.

A three-finger power grasp enables a robotic hand to hold a heavier object more stably. However, pinch grasps are still useful when an object is very small. Being able to perform both types of grasps with one hand would give a robot more versatility, he says.

Moving forward, the researchers plan to enhance the GelSight Svelte so the sensor is articulated and can bend at the joints, more like a human finger.

“Optical-tactile finger sensors allow robots to use inexpensive cameras to collect high-resolution images of surface contact, and by observing the deformation of a flexible surface the robot estimates the contact shape and forces applied. This work represents an advancement on the GelSight finger design, with improvements in full-finger coverage and the ability to approximate bending deflection torques using image differences and machine learning,” says Monroe Kennedy III, assistant professor of mechanical engineering at Stanford University, who was not involved with this research. “Improving a robot’s sense of touch to approach human ability is a necessity and perhaps the catalyst problem for developing robots capable of working on complex, dexterous tasks.”

This research is supported, in part, by the Toyota Research Institute.

]]>
Robot Talk Episode 56 – Guillaume Doisy https://robohub.org/robot-talk-episode-56-guillaume-doisy/ Fri, 06 Oct 2023 14:57:50 +0000 https://robohub.org/?p=208397

Claire chatted to Guillaume Doisy from Dexory about autonomous warehouse robots, mobile robotics, and software.

Guillaume Doisy is the current Lead Systems Architect at Dexory, the leading UK robotics and AI company building state-of-the-art fully autonomous mobile robots for use in warehouses. Guillaume has a wide range of responsibilities, including building the robots’ ability to function autonomously. The French native has significant expertise in the field of autonomous mobile robots, having begun his career at French start-up Wyca as Chief Robotics Officer. Guillaume is also a long-term contributor to ROS (the Robot Operating System), the open-source robotics software project.

]]>
50 women in robotics you need to know about 2023 https://robohub.org/50-women-in-robotics-you-need-to-know-about-2023/ Wed, 04 Oct 2023 16:50:34 +0000 https://robohub.org/?p=208381

In celebration of the launch of International Women in Robotics Day, the Women in Robotics organization is proud to release another “50 women in robotics you need to know about” collection of stories. With a growing robotics industry there are many opportunities for everyone to get involved. This is why we showcase the wide range of roles that women play in robotics today.

Since 2012, the Women in Robotics organization has released a list of women building the future in robotics. The list has covered all ages, career stages, types of occupation and experience. We’ve featured more than 350 women already and we’ve shown that women have always been working in the robotics industry, in the earliest robotics research labs and companies, although those stories have often been forgotten.

This year’s collection includes Nancy Cornelius, co-founder of Boston Dynamics and the first engineer hired. Cornelius remained an integral part of Boston Dynamics until the company was sold to Google in 2013. Vandi Verma is the head of NASA’s rover (robot) program. Joanna Buttler is the head of the Global Autonomous Technology Group for Daimler Truck. And Whitney Rockley founded a venture capital company investing exclusively in ‘industrial internet’ companies like Clearpath Robotics.

For the first time, we feature an Indigenous (Ojibwe) American roboticist, Danielle Boyer. Boyer started a non-profit, The STEAM Connection, to combat the difficulties that many kids have getting access to robotics. She created an affordable robot kit that’s been distributed to thousands of students, and is proudest of the SKOBOT project: personalized robots that keep culture and language traditions alive. Boyer epitomizes the motto “Building the Future”.

We also try to feature women from all regions of the world and this year’s collection represents Nigeria, India, China, Australia, Japan, Switzerland, Croatia, Korea, Denmark, Singapore, Italy, Romania, United States, Sweden, Spain, Canada, the UK, Israel, Austria, Belgium, Mexico, Argentina and Brazil. There is an active Latinx community in Women in Robotics engaged in translating more robotics information into Spanish, hoping to create more connections between the global robotics community and the roboticists, and potential roboticists, of Latin America.

There have always been women doing great things in robotics! And we’re pleased to present another collection of strong female role models for young and upcoming roboticists (of any gender).

You can also join in the Women in Robotics celebrations today and throughout October, with events listed on the women in robotics site, like Diversity Cocktails at the IROS conference in Detroit, or the launch of the Los Angeles women in robotics chapter. Women in Robotics is a global community organization for women and non-binary people working in robotics and those who’d like to work in robotics. Learn more at https://womeninrobotics.org

Join our events, host your own events, share our celebration on social media!

]]>
World Robotics 2023 report: Asia ahead of Europe and the Americas https://robohub.org/world-robotics-2023-report-asia-ahead-of-europe-and-the-americas/ Mon, 02 Oct 2023 05:46:37 +0000 https://robohub.org/?p=208364

The new World Robotics report recorded 553,052 industrial robot installations in factories around the world – a growth rate of 5% in 2022, year-on-year. By region, 73% of all newly deployed robots were installed in Asia, 15% in Europe and 10% in the Americas.

“The world record of 500,000 units was exceeded for the second year in succession,” says Marina Bill, President of the International Federation of Robotics. “In 2023 the industrial robot market is expected to grow by 7% to more than 590,000 units worldwide.”
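Those headline numbers follow directly from the reported 2022 total; a quick arithmetic check (assuming the stated growth rates apply directly to the stated figures):

```python
# Quick check of the quoted figures.
installed_2022 = 553_052               # reported 2022 installations (+5% year-on-year)
implied_2021 = installed_2022 / 1.05   # the previous record year this one exceeded
forecast_2023 = installed_2022 * 1.07  # the IFR's 7% growth forecast for 2023
print(round(implied_2021))   # ~526,716, still above the 500,000 mark
print(round(forecast_2023))  # ~591,766 -> "more than 590,000 units"
```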

Asia, Europe and the Americas – overview

China is by far the world’s largest market. In 2022, annual installations of 290,258 units broke the previous record set in 2021, growing by 5%. This latest gain is remarkable since it comes on top of the 2021 result, which was itself a 57% jump compared to 2020. To serve this dynamic market, domestic and international robot suppliers have established production plants in China and continuously increased capacity. On average, annual robot installations have grown by 13% each year (2017-2022).

Robot installations in Japan were up by 9% to 50,413 units, exceeding the pre-pandemic level of 49,908 units in 2019. The peak remains 55,240 units, installed in 2018. The country ranks second to China in market size for industrial robots. Annual installations gained 2% on average per year (2017-2022). Japan is the world’s predominant robot manufacturing country, with a market share of 46% of global robot production.

The market in the Republic of Korea rose by 1% – installations reached 31,716 units in 2022. This was the second year of marginal growth, following four years of declining installation figures. The Republic of Korea remains the fourth largest robot market in the world, following the United States, Japan, and China.

Europe

The European Union remains the world’s second largest market (70,781 units; +5%) in 2022. Germany is one of the top five adopters worldwide with a market share of 36% within the EU. Germany’s installations went down by 1% to 25,636 units. Italy follows with a market share of 16% within the EU – installations grew by 8% to 11,475 units. The third largest EU market, France, recorded a regional market share of 10% and gained 13%, installing 7,380 units in 2022.

In the post-Brexit United Kingdom, industrial robot installations were up by 3% to 2,534 units in 2022. This is less than a tenth of Germany’s sales.

The Americas

In the Americas, installations were up 8% to 56,053 units in 2022, surpassing the 2018 peak level (55,212 units). The United States, the largest regional market, accounted for 71% of the installations in the Americas in 2022. Robot installations were up by 10% to 39,576 units. This was just shy of the peak level of 40,373 units achieved in 2018. The main growth driver was the automotive industry, where installations surged by 47% (14,472 units). The share of the automotive industry has now grown back to 37%, followed by the metal and machinery industry (3,900 units) and the electrical/electronics industry (3,732 units).

The two other major markets are Mexico – here installations grew by 13% (6,000 units) – and Canada, where demand dropped by 24% (3,223 units). This was the result of lower demand from the automotive industry – the strongest adopter.

Brazil is an important production site for motor vehicles and automotive parts: The International Organization of Motor Vehicle Manufacturers (OICA) reports an output of 2.4 million vehicles in 2022. This shows the huge potential for automation in the country. Annual installation counts grew rather slowly with cyclical ups and downs. In 2022, 1,858 robots were installed. This was 4% more than in the previous year.

Outlook

The year 2023 will be characterized by a slowdown of the global economic growth. Robot installations in 2023 are not expected to follow this pattern. There is no indication that the overall long-term growth trend will come to an end soon: rather the contrary will be the case. The mark of 600,000 units installed per year worldwide is expected to be reached in 2024.

]]>
#IROS2023: A glimpse into the next generation of robotics https://robohub.org/iros2023-a-glimpse-into-the-next-generation-of-robotics/ Sun, 01 Oct 2023 08:30:36 +0000 https://robohub.org/?p=208353

The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) kicks off today at the Huntington Place in Detroit, Michigan. This year’s theme, “The Next Generation of Robotics,” is a call to young and senior researchers alike to create a forum where the past, present, and future of robotics converge.

The program of IROS 2023 is a blend of theoretical insights and practical demonstrations, designed to foster a culture of innovation and collaboration. Among the highlights are the plenary and keynote talks by eminent personalities in the field of robotics.

Plenaries and keynotes

On the plenary front, Marcie O’Malley from Rice University will delve into the realm of robots that teach and learn with a human touch. Yuto Nakanishi of GITAI, Japan, will share his insights on the challenges of developing space robots for building a moonbase. Matt Johnson-Roberson from Carnegie Mellon University will explore the shared history and convergent future of AI and Robotics.

The keynote sessions are equally thought-provoking. On Monday, October 2nd, Sven Behnke from the University of Bonn, Germany, will discuss the transition from intuitive immersive telepresence systems to conscious service robots, while Michelle Johnson from the University of Pennsylvania, USA, will talk about the journey towards more inclusive rehabilitation robots. Rebecca Kramer-Bottiglio from Yale University, USA, will also share insights on shape-shifting soft robots that adapt to changing tasks and environments.

On Tuesday, October 3rd, Kostas Alexis from the Norwegian University of Science and Technology, Norway, will share experiences from the DARPA Subterranean Challenge focusing on resilient robotic autonomy. Serena Ivaldi from Inria, France, will discuss the transition from humanoids to exoskeletons, aiming at assisting and collaborating with humans. Mario Santillo from Ford Motor Company, USA, will provide a glimpse into the future of manufacturing automation.

The series continues on Wednesday, October 4th, with Moritz Bächer (Switzerland) and Morgan Pope (USA) from Disney Research discussing the design and control of expressive robotic characters. Tetsuya Ogata from Waseda University/AIST, Japan, will delve into deep predictive learning in robotics, optimizing models for adaptive perception and action. Lastly, Teresa Vidal-Calleja from the University of Technology Sydney, Australia, will talk about empowering robots with continuous space and time representations.

Competitions

The competitions segment of IROS 2023 will be a space for innovation and creativity. The Functional Fashion competition invites teams to design and demonstrate robotic clothing that is as aesthetically pleasing as it is functional. The F1/10 Autonomous Racing challenges participants to build a 1:10 scaled autonomous race car and compete in minimizing lap time while avoiding crashes. The Soft Robotics Balloon Robots competition encourages the creation of locomoting and swimming soft robots using balloons as a substrate, exploring rapid design and deployment of soft robotic structures.

Technical programme & demonstrations

The technical sessions and workshops/tutorials at IROS 2023 are designed to foster a rich exchange of ideas among the attendees. These sessions will feature presentations on cutting-edge research and innovative projects from across the globe, providing a platform for researchers to share their findings, receive feedback, and engage in meaningful discussions. In addition, the demonstrations segment will bring theories to life as participants showcase their working prototypes and models, offering a tangible glimpse into the advancements in robotics.

Participate in IROS 2023 remotely with Ohmni telepresence robots

If you are unable to attend the conference in person, OhmniLabs has provided three of their Ohmni telepresence robots to facilitate virtual participation. The telepresence robots will be active from October 2-4 from 9:00 a.m.-6:00 p.m. EDT. You can secure a time slot in advance using this link.

The telepresence robots will allow you to:

  • Explore the exhibit hall and speak with exhibitors
  • Interact with authors and other attendees during interactive poster sessions
  • Attend a plenary or keynote presentation

You can check in real-time here to see if any of the robots are available throughout the day.

Keep an eye on our blog over the following days for updates and results from the best paper awards. And enjoy IROS 2023!

]]>
Robot Talk Episode 55 – Sara Adela Abad Guaman https://robohub.org/robot-talk-episode-55-sara-adela-abad-guaman/ Sat, 30 Sep 2023 11:26:06 +0000 https://robohub.org/?p=208337

Claire chatted to Sara Adela Abad Guaman from University College London about adaptable robots inspired by nature.

Sara Adela Abad Guaman is a Lecturer at University College London’s Mechanical Engineering Department. She is also the head of the Adaptable Robotics Lab. Inspired by biological organisms, Sara aims to develop robots and mechanical systems with enhanced adaptability to variable environmental conditions. Her vision is to use bioinspiration and morphological computation to address global challenges such as climate change, biodiversity loss, and sustainability.

]]>
Robo-Insight #5 https://robohub.org/robo-insight-5/ Mon, 25 Sep 2023 05:48:57 +0000 https://robohub.org/?p=208292

Source: OpenAI’s DALL·E 2 with prompt “a hyperrealistic picture of a robot reading the news on a laptop at a coffee shop”

Welcome to the 5th edition of Robo-Insight, a robotics news update! In this post, we are excited to share a range of new advancements in the field and highlight robots’ progress in areas like human-robot interaction, agile movement, enhanced training methods, soft robotics, brain surgery, medical navigation, and ecological research.

New tools for human-robot interaction

In the realm of human-robot interactions, researchers from around Europe have developed a new tool called HEUROBOX to assess interactions. HEUROBOX offers 84 basic and 228 advanced heuristics for evaluating various aspects of human-robot interaction, such as safety, ergonomics, functionality, and interfaces. It places a strong emphasis on human-centered design, addressing the vital connection between technology and human factors. This tool aims to facilitate seamless collaboration between humans and robots in industrial settings by ensuring robots align with human capabilities and needs, emphasizing productivity and well-being.


Approach for creating a novel method to assess Human-Robot Interaction (HRI) heuristics. Source.

Innovations for enhanced control in agile robotics

Shifting our focus to agile robots, researchers from Zhejiang University have designed a cable-driven snake-like robot for efficient motion in confined spaces. This robot utilizes force transducers and angle sensors to achieve precise dual-loop control. By combining pose feedback from angle sensors and force feedback from transducers, this control strategy enhances the robot’s accuracy and ensures cable force and stiffness, guaranteeing stability and reliability during motion. This innovation has significant potential for various applications, including minimally invasive surgery, nuclear waste handling, in-space inspections, and search and rescue operations in complex environments. The robot’s design and control strategy promises advancements in high-precision robotic systems for engineering applications.
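The control details are in the paper rather than this summary, so the snippet below is only a conceptual sketch of a dual-loop arrangement: an outer loop that tracks a joint angle using the angle-sensor feedback, and an inner loop that regulates cable tension using the force-transducer feedback. The gains, setpoints and interfaces are placeholders, not the published controller.

```python
# Conceptual dual-loop sketch: outer loop on joint angle (pose feedback),
# inner loop on cable tension (force feedback). Gains and interfaces are
# placeholders, not the controller from the cited paper.
class PI:
    def __init__(self, kp: float, ki: float):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


angle_loop = PI(kp=4.0, ki=0.5)    # outer loop driven by the angle sensor
tension_loop = PI(kp=2.0, ki=0.2)  # inner loop driven by the force transducer


def control_step(angle_ref, angle_meas, tension_meas, dt=0.001):
    # The outer loop turns the pose error into a desired cable tension...
    tension_ref = angle_loop.step(angle_ref - angle_meas, dt)
    # ...and the inner loop turns the tension error into a motor command.
    return tension_loop.step(tension_ref - tension_meas, dt)


print(control_step(angle_ref=0.3, angle_meas=0.1, tension_meas=0.0))
```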

CSR Overview. Source.

Better training methods in agile robotics

Keeping within the field of agile robotics, researchers from the University of Zurich have recently pushed the boundaries of this type of robot, focusing on the pivotal role of control systems within them. Their investigation pitted two key methodologies against each other: model-based optimal control (OC) and reinforcement learning (RL). Surprisingly, RL, which enables robots to learn through trial and error, triumphed in a demanding real-world test: autonomous drone racing. RL not only outperformed OC but also surpassed human capabilities, with the agile drone achieving an astonishing peak acceleration exceeding 12 times gravitational acceleration and a remarkable top speed of 108 kilometers per hour. These results illuminate the promising future of agile robotics, where learning-centric approaches like RL pave the way for more efficient control and performance in diverse applications.

The visual progression shows a fast-racing drone being controlled by the RL strategy over time. Source.

New strong and stiff soft robots

Changing our focus to the world of soft robotics, recently researchers from Kangwon National University have presented a soft gripper robot with the ability to vary its stiffness, addressing a major challenge in the field of soft robotics. Unlike complex designs, this gripper achieves stiffness variation through a straightforward mechanism involving pneumatic control and tendons actuated by stepper motors. This innovation allows the gripper to adapt to objects of various shapes, sizes, and weights, expanding its potential applications. The study demonstrates that this gripper can increase its stiffness by up to 145% and handle weights of up to 2.075 kg. Soft robotics, inspired by natural organisms, holds promise in healthcare, manufacturing, exploration, and other fields, and this research contributes to its advancement.

CAD design of gripper mechanism. Source.

Enhanced brain surgery robots

Turning our focus to the medical robotics world, researchers from Harvard Medical School have developed a robotic device poised to enhance neurosurgery by making it less invasive. The team introduced a novel two-armed, joystick-controlled endoscopic robot designed to mimic the dexterity of open surgery but with smaller incisions. This innovation was put to the test in the context of brain tumor resection, a typically invasive procedure. Compared to conventional manual endoscopic tools, the robot offered greater access to the surgical site, enabled bimanual tasks without compressing brain tissue, and often completed tasks more swiftly. These findings open the door to the potential transformation of traditionally open brain surgeries into less invasive endoscopic procedures.

Robotic tube arms with varying degrees of stiffness. Source.

An advanced robotics needle

Along the same lines as medical robotics advancements, a team of researchers led by Professor Ron Alterovitz at the University of North Carolina at Chapel Hill has developed an autonomous robotic needle designed to navigate through intricate lung tissue while avoiding obstacles and important lung structures. The needle uses AI and computer vision to autonomously travel through living tissue, making it a potentially valuable tool for precise medical procedures like biopsies and targeted drug delivery. This development represents a significant step in the field of medical robotics, offering improved accuracy and safety in minimally invasive procedures. The researchers plan to further refine the technology and explore additional medical applications.

The robotic needle emerging from a bronchoscope. Source.

Robots could bee the key to ecological research

Finally, in the ecological field, robotics researchers from Durham University are teaming up with experts from various disciplines to investigate how animals are adapting to ecological challenges, with the aim of mitigating global biodiversity loss. Leading the RoboRoyale project, Dr. Farshad Arvin combines miniature robotics, artificial intelligence, and machine learning to develop robotic bees. These robotic bees are designed to interact with honeybee queens, enhancing their egg-laying and pheromone production, which influences hive behavior. This unique project focuses exclusively on queen bees, using a multi-robot system that learns over time how to optimize their well-being. Simultaneously, the MammalWeb project collects camera trap images to monitor the habits and behaviors of UK mammals, addressing the impact of climate change and human activities on biodiversity. These initiatives represent groundbreaking contributions from the robotics community to ecological research.

A robotic development device. Source.

The continuous evolution across various sectors underscores the adaptable and consistently progressing nature of robotics technology, revealing new opportunities for its integration into diverse industries. The gradual growth in the field of robotics reflects sustained dedication and provides insight into the potential implications of these developments for the future.

Sources:

  1. Apraiz, A., Mulet Alberola, J. A., Lasa, G., Mazmela, M., & Nguyen, H. N. (2023, August 16). Development of a new set of heuristics for the evaluation of human-robot interaction in industrial settings: Heuristics Robots experience (HEUROBOX). Frontiers.
  2. Xu, X., Wang, C., Xie, H., Wang, C., & Yang, H. (2023, September 4). Dual-loop control of cable-driven snake-like robots. MDPI.
  3. Song, Y., Romero, A., Müller, M., Koltun, V., & Scaramuzza, D. (2023). Reaching the limit in autonomous racing: Optimal control versus reinforcement learning. Science Robotics, 8(82).
  4. Mawah, S. C., & Park, Y.-J. (2023, September 11). Tendon-driven variable-stiffness pneumatic soft gripper robot. MDPI.
  5. Price, K., Peine, J., Mencattelli, M., Chitalia, Y., Pu, D., Looi, T., Stone, S., Drake, J. M., & Dupont, P. E. (2023). Using robotics to move a neurosurgeon’s hands to the tip of their endoscope. Science Robotics, 8(82).
  6. Autonomous Medical Robot Successfully Steers Needles Through Living Tissue. (n.d.). Computer Science. Retrieved September 23, 2023.
  7. Durham University. (n.d.). Computer Science research to build robotic bees and monitor mammals. www.durham.ac.uk. Retrieved September 23, 2023.
]]>
Soft robotic tool provides new ‘eyes’ in endovascular surgery https://robohub.org/soft-robotic-tool-provides-new-eyes-in-endovascular-surgery/ Sat, 23 Sep 2023 08:30:47 +0000 https://robohub.org/?p=208284

Scientists at the Max Planck Institute for Intelligent Systems in Stuttgart have developed a soft robotic tool that promises to one day transform minimally invasive endovascular surgery. The two-part magnetic tool can help to visualise in real time the fine morphological details of partial vascular blockages such as stenoses, even in the narrowest and most curved vessels. It can also find its way through severe blockages such as chronic total occlusions. This tool could one day take the perception of endovascular medical devices a step further.

Intravascular imaging techniques and microcatheter procedures are becoming ever more advanced, revolutionizing the diagnosis and treatment of many diseases. However, current methods often fail to accurately detect the fine features of vascular disease, such as those seen from within occluded vessels, due to limitations such as uneven contrast agent diffusion and difficulty in safely accessing occluded vessels. Such limitations can delay rapid intervention and treatment of a patient.

Scientists at the Max Planck Institute for Intelligent Systems in Stuttgart have looked at this problem. They have leveraged the concepts of soft robotics and microfabrication to develop a miniature soft magnetic tool that looks like a very slim eel. This tool may one day take the perception capabilities of endovascular devices one step further. In a paper and in a video, the team shows how the tool, which is propelled forward by the blood flow, travels through the narrowest artificial vessels – whether there is a sharp bend, curve, or obstacle.

When the tool reaches an occlusion such as a partially blocked artery, it performs a wave-like deformation under the external magnetic field (more on that below). The deformed soft body then comes gently into contact with the surrounding occluded structures. Finally, the real-time shape of the device as it is retracted ‘visualizes’ the morphological details inside the vessel, which facilitates drug release at the occlusion as well as the sizing and placement of medical devices such as stents and balloons for subsequent treatment.

When an occlusion is severe and leaves only tiny microchannels for the blood to flow through, the tool can use the force of the blood flow to slide through these narrow channels. The channel it ends up taking indicates to the surgeon which access route to use for the subsequent medical procedure.

“The methods of diagnosing and treating endovascular narrow diseases such as vascular stenosis or chronic total occlusion are still very limited. It is difficult to accurately detect and cross these areas in the very complex network of vessels inside the body”, says Yingbo Yan, who is a guest researcher in the Physical Intelligence Department at MPI-IS. He is the first author of the paper “Magnetically-assisted soft milli-tools for occluded lumen morphology detection”, which was published in Science Advances on August 18, 2023. “We hope that our new soft robotic tool can one day help accurately detect and navigate through the many complex and narrow vessels inside a body, and perform treatments more effectively, reducing potential risks.”

This tiny, soft tool has a 20 mm long magnetic Active Deformation Segment (ADS) and a 5 mm long Fluid Drag-driven Segment (FDS). The magnetization profile of the ADS is pre-programmed with a vibrating-sample magnetometer, which provides a uniform magnetic field. Under an external magnetic field, this segment can deform into a sinusoidal shape, easily adapting to the surrounding environment and taking on various shapes. Continuously monitoring the shape changes of the ADS while retracting it can therefore provide detailed morphological information about partial occlusions inside a vessel.

The FDS was fabricated from a soft polymer. Small beams on its side are bent by the fluidic drag of the incoming flow, so the entire tool is carried towards the area with the highest flow velocity. Tracking the location of the FDS while advancing it can therefore reveal the location and route of the microchannel inside a severe occlusion.
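
To make the shape-based sensing idea concrete, here is a minimal Python sketch of how a clipped deflection profile could trace a lumen: the segment is assumed to deform toward a fixed free amplitude but is limited by the vessel wall, so the deflection recorded at each retraction position estimates the local lumen radius. The geometry, numbers, and variable names are illustrative assumptions, not the authors' method.

```python
# Conceptual sketch (not the authors' code): shape-based lumen sensing.
# Assumption: the ADS deforms toward a sinusoid of fixed free amplitude,
# but the vessel wall clips its deflection; logging the clipped amplitude
# while retracting the tool traces the occlusion's radial profile.
import numpy as np

vessel_length_mm = 40.0
positions = np.linspace(0.0, vessel_length_mm, 200)        # retraction positions
free_amplitude_mm = 1.5                                    # unconstrained ADS deflection

# Hypothetical lumen radius profile with a partial occlusion around 20 mm
lumen_radius = 1.5 - 1.0 * np.exp(-((positions - 20.0) ** 2) / (2 * 3.0 ** 2))

# At each retraction position the recorded deflection is limited by the wall
recorded_deflection = np.minimum(free_amplitude_mm, lumen_radius)

# Wherever the deflection is clipped below the free amplitude, the wall was touched,
# so the recorded value is a direct estimate of the local lumen radius.
estimated_radius = np.where(recorded_deflection < free_amplitude_mm,
                            recorded_deflection, np.nan)

print("Narrowest estimated lumen radius: %.2f mm" % np.nanmin(estimated_radius))
```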

“Detection of vascular diseases in the distal and hard-to-reach vascular regions such as the brain can be more challenging clinically, and our tool could work with Stentbot in the untethered mode”, says Tianlu Wang, a postdoc in the Physical Intelligence Department at MPI-IS and another first author of the work. “Stentbot is a wireless robot used for locomotion and medical functions in the distal vasculature we recently developed in our research group. We believe this new soft robotic tool can add new capabilities to wireless robots and contribute new solutions in these challenging regions.”

“Our tool shows potential to greatly improve minimally invasive medicine. This technology can reach and detect areas that were previously difficult to access. We expect that our robot can help make the diagnosis and treatment of, for instance, stenosis or a CTO more precise and timelier”, says Metin Sitti, Director of the Physical Intelligence Department at MPI-IS, Professor at Koç University and ETH Zurich.

]]>
‘Brainless’ robot can navigate complex obstacles https://robohub.org/brainless-robot-can-navigate-complex-obstacles/ Thu, 21 Sep 2023 05:20:32 +0000 https://robohub.org/?p=208278

By Matt Shipman

Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a “brainless” soft robot that can navigate more complex and dynamic environments.

“In our earlier work, we demonstrated that our soft robot was able to twist and turn its way through a very simple obstacle course,” says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University. “However, it was unable to turn unless it encountered an obstacle. In practical terms this meant that the robot could sometimes get stuck, bouncing back and forth between parallel obstacles.

“We’ve developed a new soft robot that is capable of turning on its own, allowing it to make its way through twisty mazes, even negotiating its way around moving obstacles. And it’s all done using physical intelligence, rather than being guided by a computer.”

Physical intelligence refers to dynamic objects – like soft robots – whose behavior is governed by their structural design and the materials they are made of, rather than being directed by a computer or human intervention.

As with the earlier version, the new soft robots are made of ribbon-like liquid crystal elastomers. When the robots are placed on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion; the warmer the surface, the faster the robot rolls.

However, while the previous version of the soft robot had a symmetrical design, the new robot has two distinct halves. One half of the robot is shaped like a twisted ribbon that extends in a straight line, while the other half is shaped like a more tightly twisted ribbon that also twists around itself like a spiral staircase.

This asymmetrical design means that one end of the robot exerts more force on the ground than the other end. Think of a plastic cup that has a mouth wider than its base. If you roll it across the table, it doesn’t roll in a straight line – it makes an arc as it travels across the table. That’s due to its asymmetrical shape.

“The concept behind our new robot is fairly simple: because of its asymmetrical design, it turns without having to come into contact with an object,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “So, while it still changes directions when it does come into contact with an object – allowing it to navigate mazes – it cannot get stuck between parallel objects. Instead, its ability to move in arcs allows it to essentially wiggle its way free.”
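
The cup analogy can be captured with simple differential-roller kinematics: if the two ends of the robot advance by different distances per revolution, the body follows an arc whose radius is set by that difference. The sketch below uses hypothetical numbers purely to illustrate the geometry; it is not a model of the actual robot.

```python
# Minimal kinematic sketch (illustrative only): an asymmetric roller whose two ends
# advance different distances per revolution follows an arc, like the plastic-cup analogy.
import math

body_length_m = 0.10        # hypothetical distance between the two "ends" of the ribbon robot
dist_end_a = 0.060          # metres advanced per revolution by the straighter end
dist_end_b = 0.045          # metres advanced per revolution by the tightly twisted end

# Differential-roller kinematics: heading change per revolution and turning radius
dtheta = (dist_end_a - dist_end_b) / body_length_m
turn_radius = body_length_m * (dist_end_a + dist_end_b) / (2 * (dist_end_a - dist_end_b))

print(f"Heading change per revolution: {math.degrees(dtheta):.1f} deg")
print(f"Approximate turning radius:    {turn_radius:.2f} m")
```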

The researchers demonstrated the ability of the asymmetrical soft robot design to navigate more complex mazes – including mazes with moving walls – and fit through spaces narrower than its body size. The researchers tested the new robot design on both a metal surface and in sand.

“This work is another step forward in helping us develop innovative approaches to soft robot design – particularly for applications where soft robots would be able to harvest heat energy from their environment,” Yin says.

The paper, “Physically Intelligent Autonomous Soft Robotic Maze Escaper,” appears in the journal Science Advances. First author of the paper is Yao Zhao, a postdoctoral researcher at NC State. Hao Su, an associate professor of mechanical and aerospace engineering at NC State, is co-corresponding author. Additional co-authors include Yaoye Hong, a recent Ph.D. graduate of NC State; Yanbin Li, a postdoctoral researcher at NC State; and Fangjie Qi and Haitao Qing, both Ph.D. students at NC State.

The work was done with support from the National Science Foundation under grants 2005374, 2126072, 1944655 and 2026622.

]]>
Battery-free origami microfliers from UW researchers offer a new bio-inspired future of flying machines https://robohub.org/battery-free-origami-microfliers-from-uw-researchers-offer-a-new-bio-inspired-future-of-flying-machines/ Wed, 20 Sep 2023 05:36:57 +0000 https://robohub.org/?p=208259

Researchers at the University of Washington developed small robotic devices that can change how they move through the air by “snapping” into a folded position during their descent. Shown here is a timelapse photo of the “microflier” falling in its unfolded state, which makes it tumble chaotically and spread outward in the wind. Photo by Mark Stone/University of Washington

By Roger Van Scyoc

On a cool afternoon at the heart of the University of Washington’s campus, autumn, for a few fleeting moments, appears to have arrived early. Tiny golden squares resembling leaves flutter then fall, switching from a frenzied tumble to a graceful descent with a snap.

Aptly named “microfliers” and inspired by Miura-fold origami, these small robotic devices can fold closed during their descent after being dropped from a drone. This “snapping” action changes the way they disperse and may, in the future, help change the way scientists study agriculture, meteorology, climate change and more.

“In nature, you see leaves and seeds disperse in just one manner,” said Kyle Johnson, an Allen School Ph.D. student and a first co-author of the paper on the subject published in Science Robotics. “What we were able to achieve was a structure that can actually act in two different ways.”

When open flat, the devices tumble chaotically, mimicking the descent of an elm leaf. When folded closed, they drop in a more stable manner, mirroring how a maple leaf falls from a branch. Through a number of methods — onboard pressure sensor, timer or a Bluetooth signal — the researchers can control when the devices transition from open to closed, and in doing so, manipulate how far they disperse through the air.

How could they achieve this? By reading between the lines.

“The Miura-ori origami fold, inspired by geometric patterns found in leaves, enables the creation of structures that can ‘snap’ between a flat and more folded state,” said co-senior author Vikram Iyer, an Allen School professor and co-director of the Computing for the Environment (CS4Env) initiative. “Because it only takes energy to switch between the states, we began exploring this as an energy efficient way to change surface area in mid-air, with the intuition that opening or closing a parachute will change how fast an object falls.”

That energy efficiency is key to being able to operate without batteries and scale down the fliers’ size and weight. Fitted with a battery-free actuator and a solar power-harvesting circuit, microfliers boast energy-saving features not seen in larger and heavier battery-powered counterparts such as drones. Yet they are robust enough to carry sensors for a number of metrics, including temperature, pressure, humidity and altitude. Beyond measuring atmospheric conditions, the researchers say a network of these devices could help paint a picture of crop growth on farmland or detect gas leaks near population centers.

“This approach opens up a new design space for microfliers by using origami,” said Shyam Gollakota, the Thomas J. Cable Endowed Professor in the Allen School and director of the school’s Mobile Intelligence Lab who was also a co-senior author. “We hope this work is the first step towards a future vision for creating a new class of fliers and flight modalities.”

Weighing less than half a gram, microfliers require less material and cost less than drones. They also offer the ability to go where it’s too dangerous for a human to set foot.

For instance, Johnson said, microfliers could be deployed when tracking forest fires. Currently, firefighting teams sometimes rappel down to where a fire is spreading. Microfliers could assist in mapping where a fire may be heading and where best to drop a payload of water. Furthermore, the team is working on making more components of the device biodegradable in the case that they can’t be recovered after being released.

“There’s a good amount of work toward making these circuits more sustainable,” said Vicente Arroyos, another Allen School Ph.D. student and first co-author on the paper. “We can leverage our work on biodegradable materials to make these more sustainable.”

Besides improving sustainability, the researchers also tackled challenges relating to the structure of the device itself. Early prototypes lacked the carbon fiber veins that provide the rigidity needed to prevent accidental transitions between states.

The research team took inspiration from elm and maple leaves in designing the microfliers. When open flat, the devices tumble chaotically, similar to how an elm leaf falls from a branch. When they are “snapped” into a folded position, as shown here, they descend in a more stable, straight downward manner like a maple leaf. Photo by Mark Stone/University of Washington

Collecting maple and elm leaves from outside their lab, the researchers noticed that while their origami structures exhibited the bistability required to change between states, they flexed too easily and didn’t have the venation seen in the found foliage. To gain more fine-grained control, they took another cue from the environment.

“We looked again to nature to make the faces of the origami flat and rigid, adding a vein-like pattern to the structure using carbon fiber,” Johnson said. “After that modification, we no longer saw a lot of the energy that we input dissipate over the origami’s faces.”

In total, the researchers estimate that the development of their design took about two years. There’s still room to grow, they added, noting that the current microfliers can only transition from open to closed. They said newer designs, by offering the ability to switch back and forth between states, may offer more precision and flexibility in where and how they’re used.

During testing, when dropped from an altitude of 40 meters, for instance, the microfliers could disperse up to distances of 98 meters in a light breeze. Further refinements could increase the area of coverage, allowing them to follow more precise trajectories by accounting for variables such as wind and inclement conditions.
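
As a back-of-envelope check on those figures (with the wind speed as an assumption, since the article does not state one), lateral drift of roughly wind speed times time aloft implies how long the flier stayed airborne and how slowly it fell on average:

```python
# Back-of-envelope check on the dispersal figures above: if lateral drift is
# roughly wind speed times time aloft, the reported 40 m drop and up-to-98 m
# dispersal imply how long the flier stayed airborne and how slowly it fell.
# The wind speed is an assumption (a "light breeze"), not a figure from the paper.
drop_height_m = 40.0
dispersal_m = 98.0
assumed_wind_m_s = 3.0          # light breeze, assumed

time_aloft_s = dispersal_m / assumed_wind_m_s
mean_descent_m_s = drop_height_m / time_aloft_s

print(f"Implied time aloft:        {time_aloft_s:.0f} s")
print(f"Implied mean descent rate: {mean_descent_m_s:.1f} m/s")
```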

Related to their previous work with dandelion-inspired sensors, the origami microfliers build upon the researchers’ larger goal of creating the internet of bio-inspired things. Whereas the dandelion-inspired devices featured passive flight, reflecting the manner in which dandelion seeds disperse through the wind, the origami microfliers function as complete robotic systems that include actuation to change their shape, active and bi-directional wireless transmission via an onboard radio, and onboard computing and sensing to autonomously trigger shape changes upon reaching a target altitude.
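
For the pressure-based trigger mentioned above, the on-board logic could be as simple as converting pressure to altitude with the standard barometric approximation and folding below a target height. The sketch below is hypothetical; the article does not describe the actual firmware, and the names and thresholds are assumptions.

```python
# Hypothetical trigger sketch: estimate altitude from barometric pressure and
# command the fold once the microflier descends below a target altitude.
# The article describes pressure-, timer- and Bluetooth-based triggers but not
# the firmware itself; all names and thresholds below are illustrative.

SEA_LEVEL_PA = 101_325.0

def altitude_m(pressure_pa: float, sea_level_pa: float = SEA_LEVEL_PA) -> float:
    """Standard barometric approximation used by hobby altimeters."""
    return 44_330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

def should_fold(pressure_pa: float, ground_altitude_m: float, trigger_height_m: float = 25.0) -> bool:
    """Fold when the estimated height above the ground drops below the trigger height."""
    return (altitude_m(pressure_pa) - ground_altitude_m) < trigger_height_m

# Example: a reading of ~101,085 Pa corresponds to roughly 20 m above sea level,
# which is below the 25 m trigger height for a launch site at 0 m elevation.
print(should_fold(pressure_pa=101_085.0, ground_altitude_m=0.0))
```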

“This design can also accommodate additional sensors and payload due to its size and power harvesting capabilities,” Arroyos said. “It’s exciting to think about the untapped potential for these devices.”

The future, in other words, is quickly taking shape.

“Origami is inspired by nature,” Johnson added, smiling. “These patterns are all around us. We just have to look in the right place.”

The project was an interdisciplinary work by an all-UW team. The paper’s co-authors also included Amélie Ferran, a Ph.D. student in the mechanical engineering department, as well as Raul Villanueva, Dennis Yin and Tilboon Elberier, who contributed as undergraduate students studying electrical and computer engineering, and mechanical engineering professors Alberto Aliseda and Sawyer Fuller.

Johnson and Arroyos, who co-founded and currently lead the educational nonprofit AVELA – A Vision for Engineering Literacy & Access, and their teammates have done outreach efforts in Washington state K-12 schools related to the research, including showing students how to create their own bi-stable leaf-out origami structure using a piece of paper. Check out a related demonstration video here, and learn more about the microflier project here and in a related UW News release and GeekWire story.

]]>
Virtual-reality tech is fast becoming more real https://robohub.org/virtual-reality-tech-is-fast-becoming-more-real/ Sat, 16 Sep 2023 08:26:40 +0000 https://robohub.org/?p=208227

Virtual-reality technology could help cure people of phobias including about spiders. © Leena Robinson, Shutterstock.com

By Helen Massy-Beresford

Imagine a single technology that could help a robot perform safety checks at a nuclear plant, cure a person’s arachnophobia and simulate the feeling of a hug from a distant relative.

Welcome to the world of “extended reality”. Researchers funded by the EU have sought to demonstrate its enormous potential.

Relevant research

Their goal was to make augmented reality, in which the real world is digitally enhanced, and virtual reality – a fully computer-generated environment – more immersive for users.

One of the researchers, Erik Hernandez Jimenez, never imagined the immediate relevance of a project that he led when it started in mid-2019. Within a year, the Covid-19 pandemic had triggered countless lockdowns that left people working and socialising through video connections from home.   

‘We thought about how to apply this technology, how to feel human touch even at a distance, when we were all locked at home and contact with others was through a computer,’ said Hernandez Jimenez. 

He coordinated the EU research initiative, which was named TACTILITY and ran from July 2019 until the end of September 2022. 

The TACTILITY team developed a glove that simulates the sense of touch. Users have the sensation of touching virtual objects through electrical pulses delivered by electrodes embedded in the glove.

The sensations range from pushing a button and feeling pressure on the finger to handling a solid object and feeling its shape, dimensions and texture. 
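
One plausible way such a renderer could work, shown purely as an illustration since the article does not detail TACTILITY's encoding, is to map the simulated contact force to pulse amplitude and the surface texture to pulse frequency:

```python
# Illustrative only: one plausible way a tactile renderer could translate a virtual
# contact into electro-tactile pulse parameters. The TACTILITY article does not
# specify its encoding, so the mapping and numbers here are assumptions.
from dataclasses import dataclass

@dataclass
class PulseCommand:
    electrode: int          # which electrode in the glove's fingertip array
    amplitude_ma: float     # perceived pressure ~ pulse amplitude
    frequency_hz: float     # perceived texture ~ pulse repetition rate

def render_contact(electrode: int, contact_force_n: float, surface_roughness: float) -> PulseCommand:
    """Map a simulated contact (force in newtons, roughness in [0, 1]) to a pulse command."""
    amplitude = min(1.0 + 4.0 * contact_force_n, 6.0)       # clamp to a safe ceiling
    frequency = 30.0 + 170.0 * surface_roughness            # smoother surface, lower rate
    return PulseCommand(electrode, amplitude, frequency)

print(render_contact(electrode=3, contact_force_n=0.8, surface_roughness=0.4))
```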

Glove and suit

‘TACTILITY is about including tactile feedback in a virtual-reality scenario,’ said Hernandez Jimenez, who is a project manager at Spanish research institute TECNALIA.

He said the principle could be extended from the glove to a whole body suit. 

Compared with past attempts to simulate touch sensations with motors, the electro-tactile feedback technique produces a more realistic result at a lower cost, according to Hernandez Jimenez. 

This opens up the possibility of making the technology more widely accessible. 

The research bolsters European Commission efforts to develop the virtual-worlds domain, which could provide 860 000 new jobs in Europe this decade as the worldwide sector expands from its 2022 size of €27 billion. 

The EU has around 3 700 companies, research organisations and governmental bodies that operate in this sphere, according to the Commission.

Phobias to factories

The TACTILITY researchers looked at potential healthcare applications. 

“We thought about how to apply this technology, how to feel human touch even at a distance.”

– Erik Hernandez Jimenez, TACTILITY

That’s where spiders come into the picture. They were among the objects in the project’s experiments to mimic touch.

‘One that was quite impressive – although I didn’t like it at all – was feeling a spider or a cockroach crawling over your hand,’ Hernandez Jimenez said.  

A potential use for the technology is treating phobias through exposure therapy in which patients are gradually desensitised to the source of their fear. That could start by virtually “touching” cartoon-like creepy crawlies before progressing to more lifelike versions.  

The tactile glove can also be used in the manufacturing industry, helping the likes of car manufacturers train their workers to perform tricky manoeuvres on the factory floor.

Furthermore, it can help people collaborate more effectively with remotely controlled robots in hazardous environments. An example is a nuclear power plant, where a person in a control room can virtually “feel” what a robot is touching. 

‘They get another sense and another kind of feedback, with more information to perform better checks,’ Hernandez Jimenez said. 

Joyful and playful

Wearables for virtual reality. © Oğuz ‘Oz’ Buruk, 2021

Wearable technologies for virtual-reality environments are also being inspired by the gaming industry. 

Researchers in a second EU-funded project sought to expand the prospects for technologies already widely used for professional purposes. The initiative, called WEARTUAL, ran from May 2019 until late 2021.

“Wearables are fashion items – they’re part of the way we construct our identity.”

– Oğuz ‘Oz’ Buruk, WEARTUAL

‘Our project focused on the more experiential side – joyful and playful activities,’ said Oğuz ‘Oz’ Buruk, who coordinated WEARTUAL and is assistant professor of gameful experience at Tampere University in Finland. 

Until recently, experiencing a virtual-reality environment involved a hand-held controller or head-mounted display. 

The WEARTUAL researchers looked at ways of incorporating wearables worn, for example, on the wrist or ankle into virtual reality to give people a sense of greater immersion. 

That could mean having their avatar – a representative icon or figure in the virtual world – blush when nervous or excited to enhance their ability to express themselves.

On the cusp

The team developed a prototype that could integrate varying physical sensations into the virtual world by transferring to it real-life data such as heart rate.  
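
A minimal sketch of that idea, with thresholds chosen only for illustration rather than taken from WEARTUAL, might map heart rate onto a blush intensity for the avatar:

```python
# Illustrative sketch of the idea described above: stream a real physiological
# signal (here, heart rate) into the virtual world and use it to drive an avatar
# cue such as blushing. Thresholds and the mapping are assumptions, not WEARTUAL's.
def blush_intensity(heart_rate_bpm: float, resting_bpm: float = 65.0, max_bpm: float = 120.0) -> float:
    """Return a blush intensity in [0, 1] that grows as heart rate rises above resting."""
    span = max(max_bpm - resting_bpm, 1.0)
    return min(max((heart_rate_bpm - resting_bpm) / span, 0.0), 1.0)

for bpm in (60, 80, 110):
    print(f"{bpm} bpm -> blush {blush_intensity(bpm):.2f}")
```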

Buruk is interested in how games will look in the “posthuman” era, when people and machines increasingly converge through bodily implants, robotics and direct communication between the human brain and computers.  

He signals that it’s hard to overestimate the eventual impact of advances in this area on everyday life, albeit over varying timescales: wearables are likely to be much more widely used in virtual reality in the next decade, while widespread use of bodily implants is more likely to take 50 to 100 years.

As technology and human bodies become ever more closely linked, the experience of transferring them to a virtual world will be enhanced, encouraging people to spend increasing amounts of time there, according to Buruk.

Virtual-reality technologies are already being used for practical purposes such as gamifying vital information including fire-safety procedures, making it more interactive and easier to learn. This type of use could expand to many areas.

On a very different front, several fashion houses already sell clothes that can be worn in virtual environments, allowing people to express their identity and creativity.  

‘Wearables are fashion items – they’re part of the way we construct our identity,’ Buruk said. ‘Investments in virtual reality, extended reality and augmented reality are increasing every day.’

Research in this article was funded by the EU via the Marie Skłodowska-Curie Actions (MSCA).


This article was originally published in Horizon, the EU Research and Innovation magazine.

]]>
High-tech microscope with ML software for detecting malaria in returning travellers https://robohub.org/high-tech-microscope-with-ml-software-for-detecting-malaria-in-returning-travellers/ Thu, 14 Sep 2023 05:29:24 +0000 https://robohub.org/?p=208220

By Deborah Pirchner

Malaria is an infectious disease claiming more than half a million lives each year. Because traditional diagnosis takes expertise and the workload is high, an international team of researchers investigated if diagnosis using a new system combining an automatic scanning microscope and AI is feasible in clinical settings. They found that the system identified malaria parasites almost as accurately as experts staffing microscopes used in standard diagnostic procedures. This may help reduce the burden on microscopists and increase the feasible patient load.

Each year, more than 200 million people fall sick with malaria and more than half a million of these infections lead to death. The World Health Organization recommends parasite-based diagnosis before starting treatment for the disease caused by Plasmodium parasites. There are various diagnostic methods, including conventional light microscopy, rapid diagnostic tests and PCR.

The standard for malaria diagnosis, however, remains manual light microscopy, during which a specialist examines blood films with a microscope to confirm the presence of malaria parasites. Yet, the accuracy of the results depends critically on the skills of the microscopist and can be hampered by fatigue caused by excessive workloads of the professionals doing the testing.

Now, writing in Frontiers in Malaria, an international team of researchers has assessed whether a fully automated system, combining AI detection software and an automated microscope, can diagnose malaria with clinically useful accuracy.

“At an 88% diagnostic accuracy rate relative to microscopists, the AI system identified malaria parasites almost, though not quite, as well as experts,” said Dr Roxanne Rees-Channer, a researcher at The Hospital for Tropical Diseases at UCLH in the UK, where the study was performed. “This level of performance in a clinical setting is a major achievement for AI algorithms targeting malaria. It indicates that the system can indeed be a clinically useful tool for malaria diagnosis in appropriate settings.”

AI delivers accurate diagnosis

The researchers sampled more than 1,200 blood samples of travelers who had returned to the UK from malaria-endemic countries. The study tested the accuracy of the AI and automated microscope system in a true clinical setting under ideal conditions.

They evaluated samples using both manual light microscopy and the AI-microscope system. By hand, 113 samples were diagnosed as malaria parasite positive; the AI system correctly identified 99 of these as positive, which corresponds to an 88% accuracy rate.
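
That 88% figure follows directly from the reported counts, as a quick check shows:

```python
# Quick check of the reported figure: sensitivity relative to manual microscopy.
positives_by_microscopy = 113   # samples called positive by expert microscopists
detected_by_ai = 99             # of those, correctly flagged by the automated system

sensitivity = detected_by_ai / positives_by_microscopy
print(f"Sensitivity vs. microscopy: {sensitivity:.1%}")   # ~87.6%, reported as 88%
```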

“AI for medicine often posts rosy preliminary results on internal datasets, but then falls flat in real clinical settings. This study independently assessed whether the AI system could succeed in a true clinical use case,” said Rees-Channer, who is also the lead author of the study.

Automated vs manual

The fully automated malaria diagnostic system the researchers put to the test includes hard- as well as software. An automated microscopy platform scans blood films and malaria detection algorithms process the image to detect parasites and the quantity present.

Automated malaria diagnosis has several potential benefits, the scientists pointed out. “Even expert microscopists can become fatigued and make mistakes, especially under a heavy workload,” Rees-Channer explained. “Automated diagnosis of malaria using AI could reduce this burden for microscopists and thus increase the feasible patient load.” Furthermore, these systems deliver reproducible results and can be widely deployed, the scientists wrote.

Despite the 88% accuracy rate, the automated system also falsely identified 122 samples as positive, which can lead to patients receiving unnecessary anti-malarial drugs. “The AI software is still not as accurate as an expert microscopist. This study represents a promising datapoint rather than a decisive proof of fitness,” Rees-Channer concluded.

Read the research in full

Evaluation of an automated microscope using machine learning for the detection of malaria in travelers returned to the UK, Roxanne R. Rees-Channer, Christine M. Bachman, Lynn Grignard, Michelle L. Gatton, Stephen Burkot, Matthew P. Horning, Charles B. Delahunt, Liming Hu, Courosh Mehanian, Clay M. Thompson, Katherine Woods, Paul Lansdell, Sonal Shah, Peter L. Chiodini, Frontiers in Malaria (2023).

]]>
How drones are used during earthquakes https://robohub.org/how-drones-are-used-during-earthquakes/ Wed, 13 Sep 2023 05:49:32 +0000 https://robohub.org/?p=208208

In the realm of disaster response, technology plays a pivotal role in aiding communities during challenging times. In this exploration, we turn our attention to drones and their application in earthquake response, especially how they are being used in the recent Morocco earthquake. This concise video offers valuable insights into the practical uses of drones and the considerations surrounding their deployment during earthquake-related crises.

]]>
Making life friendlier with personal robots https://robohub.org/making-life-friendlier-with-personal-robots/ Mon, 11 Sep 2023 09:30:00 +0000 https://news.mit.edu/2023/making-life-friendlier-personal-robots-sharifa-alghowinem-0910

Sharifa Alghowinem, a research scientist in the Media Lab’s Personal Robots Group, poses with Jibo, a friendly robot companion developed by Professor Cynthia Breazeal. Credits: Gretchen Ertl

By Dorothy Hanna | Department of Mechanical Engineering

“As a child, I wished for a robot that would explain others’ emotions to me,” says Sharifa Alghowinem, a research scientist in the Media Lab’s Personal Robots Group (PRG). Growing up in Saudi Arabia, Alghowinem says she dreamed of coming to MIT one day to develop Arabic-based technologies, and of creating a robot that could help herself and others navigate a complex world.

In her early life, Alghowinem faced difficulties with understanding social cues and never scored well on standardized tests, but her dreams carried her through. She earned an undergraduate degree in computing before leaving home to pursue graduate education in Australia. At the Australian National University, she discovered affective computing for the first time and began working to help AI detect human emotions and moods, but it wasn’t until she came to MIT as a postdoc with the Ibn Khaldun Fellowship for Saudi Arabian Women, which is housed in the MIT Department of Mechanical Engineering, that she was finally able to work on a technology with the potential to explain others’ emotions in English and Arabic. Today, she says her work is so fun that she calls the lab “my playground.” 

Alghowinem can’t say no to an exciting project. She found one with great potential to make robots more helpful to people by working with Jibo, a friendly robot companion developed by the founder of the Personal Robots Group (PRG) and the social robot startup Jibo Inc., MIT Professor and Dean for Digital Learning Cynthia Breazeal. Breazeal’s research explores the potential for companion robots to go far beyond assistants who obey transactional commands, like requests for the daily weather, adding items to shopping lists, or controlling lighting. At the MIT Media Lab, the PRG team designs Jibo to make him an insightful coach and companion to advance social robotics technologies and research. Visitors to the MIT Museum can experience Jibo’s charming personality.

Alghowinem’s research has focused on mental health care and education, often working with other graduate students and Undergraduate Research Opportunity Program students in the group. In one study, Jibo coached young and older adults via positive psychology. He adapted his interventions based on the verbal and non-verbal responses he observed in the participants. For example, Jibo takes in the verbal content of a participant’s speech and combines it with non-verbal information like prolonged pauses and self-hugs. If he concludes that deep emotions have been disclosed, Jibo responds with empathy. When the participant doesn’t disclose, Jibo asks a gentle follow-up question like, “Can you tell me more?”
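
The decision rule described here can be summarized in a few lines of illustrative Python (a simplification for exposition, not the group's implementation):

```python
# A simplified sketch of the decision logic described above (not the PRG code):
# combine verbal content with non-verbal cues, respond empathically when a deep
# disclosure is detected, and otherwise ask a gentle follow-up question.
def choose_response(transcript: str, pause_seconds: float, self_hug_detected: bool) -> str:
    emotional_words = {"sad", "lonely", "worried", "afraid", "stressed"}
    verbal_disclosure = any(word in transcript.lower() for word in emotional_words)
    nonverbal_disclosure = pause_seconds > 3.0 or self_hug_detected

    if verbal_disclosure and nonverbal_disclosure:
        return "That sounds really hard. Thank you for sharing that with me."
    return "Can you tell me more?"

print(choose_response("I've been feeling lonely lately", pause_seconds=4.2, self_hug_detected=False))
```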

Another project studied how a robot can effectively support high-quality parent and child interactions while reading a storybook together. Multiple PRG studies work together to learn what types of data are needed for a robot to understand people’s social and emotional states.

Research Scientist Sharifa Alghowinem (left) and visiting students Deim Alfozan and Tasneem Burghleh from Saudi Arabia’s Prince Sultan University, interact with Jibo. Credits: Gretchen Ertl

“I would like to see Jibo become a companion for the whole household,” says Alghowinem. Jibo can take on different roles with different family members such as a companion, reminding elders to take medication, or as a playmate for children. Alghowinem is especially motivated by the unique role Jibo could play in emotional wellness, and playing a preventative role in depression or even suicide. Integrating Jibo into daily life provides the opportunity for Jibo to detect emerging concerns and intervene, acting as a confidential resource or mental health coach. 

Alghowinem is also passionate about teaching and mentoring others, and not only via robots. She makes sure to meet individually with the students she mentors every week and she was instrumental earlier this year in bringing two visiting undergraduate students from Prince Sultan University in Saudi Arabia. Mindful of their social-emotional experience, she worked hard to create the opportunity for the two students, together, to visit MIT so they could support each other. One of the visiting students, Tasneem Burghleh, says she was curious to meet the person who went out of her way to make opportunities for strangers and discovered in her an “endless passion that makes her want to pass it on and share it with everyone else.”

Next, Alghowinem is working to create opportunities for children who are refugees from Syria. Still in the fundraising stage, the plan is to equip social robots to teach the children English language and social-emotional skills and provide activities to preserve cultural heritage and Arabic abilities.

“We’ve laid the groundwork by making sure Jibo can speak Arabic as well as several other languages,” says Alghowinem. “Now I hope we can learn how to make Jibo really useful to kids like me who need some support as they learn how to interact with the world around them.”

]]>
Fiber-infused ink enables 3D-printed heart muscle to beat https://robohub.org/fiber-infused-ink-enables-3d-printed-heart-muscle-to-beat/ Sat, 09 Sep 2023 06:05:58 +0000 https://robohub.org/?p=208170

This illustration shows a 3D printed heart ventricle engineered with fiber-infused ink. Credit: Harvard SEAS

By Kat J. McAlpine / SEAS Communications

Over the last decade, advances in 3D printing have unlocked new possibilities for bioengineers to build heart tissues and structures. Their goals include creating better in vitro platforms for discovering new therapeutics for heart disease, the leading cause of death in the United States, responsible for about one in every five deaths nationally, and using 3D-printed cardiac tissues to evaluate which treatments might work best in individual patients. A more distant aim is to fabricate implantable tissues that can heal or replace faulty or diseased structures inside a patient’s heart.

In a paper published in Nature Materials, researchers from Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University report the development of a new hydrogel ink infused with gelatin fibers that enables 3D printing of a functional heart ventricle that mimics beating like a human heart. They discovered the fiber-infused gel (FIG) ink allows heart muscle cells printed in the shape of a ventricle to align and beat in coordination like a human heart chamber.

“People have been trying to replicate organ structures and functions to test drug safety and efficacy as a way of predicting what might happen in the clinical setting,” says Suji Choi, research associate at SEAS and first author on the paper. But until now, 3D printing techniques alone have not been able to achieve physiologically-relevant alignment of cardiomyocytes, the cells responsible for transmitting electrical signals in a coordinated fashion to contract heart muscle.

“We started this project to address some of the inadequacies in 3D printing of biological tissues.”

– Kevin “Kit” Parker

The innovation lies in the addition of fibers within a printable ink. “FIG ink is capable of flowing through the printing nozzle but, once the structure is printed, it maintains its 3D shape,” says Choi. “Because of those properties, I found it’s possible to print a ventricle-like structure and other complex 3D shapes without using extra support materials or scaffolds.”


This video shows the spontaneous beating of a 3D-printed heart muscle. Credit: Harvard SEAS.

To create the FIG ink, Choi leveraged a rotary jet spinning technique developed in the lab of Kevin “Kit” Parker, Ph.D. that fabricates microfiber materials using an approach similar to the way cotton candy is spun. Postdoctoral researcher and Wyss Lumineer Luke MacQueen, a co-author on the paper, proposed the idea that fibers created by the rotary jet spinning technique could be added to an ink and 3D printed. Parker is a Wyss Associate Faculty member and the Tarr Family Professor of Bioengineering and Applied Physics at SEAS.

“When Luke developed this concept, the vision was to broaden the range of spatial scales that could be printed with 3D printers by dropping the bottom out of the lower limits, taking it down to the nanometer scale,” Parker says. “The advantage of producing the fibers with rotary jet spinning rather than electrospinning” – a more conventional method for generating ultrathin fibers – “is that we can use proteins that would otherwise be degraded by the electrical fields in electrospinning.”

Using the rotary jet to spin gelatin fibers, Choi produced a sheet of material with a similar appearance to cotton. Next, she used sonication – sound waves – to break that sheet into fibers about 80 to 100 micrometers long and about 5 to 10 micrometers in diameter. Then, she dispersed those fibers into a hydrogel ink.

“This concept is broadly applicable – we can use our fiber-spinning technique to reliably produce fibers in the lengths and shapes we want.”

– Suji Choi

The most difficult aspect was troubleshooting the desired ratio between fibers and hydrogel in the ink to maintain fiber alignment and the overall integrity of the 3D-printed structure.

As Choi printed 2D and 3D structures using FIG ink, the cardiomyocytes lined up in tandem with the direction of the fibers inside the ink. By controlling the printing direction, Choi could therefore control how the heart muscle cells would align.

The tissue-engineered 3D ventricle model. Credit: Harvard SEAS

When she applied electrical stimulation to 3D-printed structures made with FIG ink, she found it triggered a coordinated wave of contractions in alignment with the direction of those fibers. In a ventricle-shaped structure, “it was very exciting to see the chamber actually pumping in a similar way to how real heart ventricles pump,” Choi says.

As she experimented with more printing directions and ink formulas, she found she could generate even stronger contractions within ventricle-like shapes.

“Compared to the real heart, our ventricle model is simplified and miniaturized,” she says. The team is now working toward building more life-like heart tissues with thicker muscle walls that can pump fluid more strongly. Despite not being as strong as real heart tissue, the 3D-printed ventricle could pump 5-20 times more fluid volume than previous 3D-printed heart chambers.

The team says the technique can also be used to build heart valves, dual-chambered miniature hearts, and more.

“FIGs are but one tool we have developed for additive manufacturing,” Parker says. “We have other methods in development as we continue our quest to build human tissues for regenerative therapeutics. The goal is not to be tool driven – we are tool agnostic in our search for a better way to build biology.”

Additional authors include Keel Yong Lee, Sean L. Kim, Huibin Chang, John F. Zimmerman, Qianru Jin, Michael M. Peters, Herdeline Ann M. Ardoña, Xujie Liu, Ann-Caroline Heiler, Rudy Gabardi, Collin Richardson, William T. Pu, and Andreas Bausch.

This work was sponsored by SEAS; the National Science Foundation through the Harvard University Materials Research Science and Engineering Center (DMR-1420570, DMR-2011754); the National Institutes of Health and National Center for Advancing Translational Sciences (UH3HL141798, 225 UG3TR003279); the Harvard University Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI) which is supported by the National Science Foundation (ECCS-2025158, S10OD023519); and the American Chemical Society’s Irving S. Sigal Postdoctoral Fellowships.

]]>
Interview with Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation” https://robohub.org/interview-with-jean-pierre-sleiman-author-of-the-paper-versatile-multicontact-planning-and-control-for-legged-loco-manipulation/ Wed, 06 Sep 2023 10:46:12 +0000 https://robohub.org/?p=208161

Picture from paper “Versatile multicontact planning and control for legged loco-manipulation“. © American Association for the Advancement of Science

We had the chance to interview Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation”, recently published in Science Robotics.

What is the topic of the research in your paper?
The research topic focuses on developing a model-based planning and control architecture that enables legged mobile manipulators to tackle diverse loco-manipulation problems (i.e., manipulation problems inherently involving a locomotion element). Our study specifically targeted tasks that would require multiple contact interactions to be solved, rather than pick-and-place applications. To ensure our approach is not limited to simulation environments, we applied it to solve real-world tasks with a legged system consisting of the quadrupedal platform ANYmal equipped with DynaArm, a custom-built 6-DoF robotic arm.

Could you tell us about the implications of your research and why it is an interesting area for study?
The research was driven by the desire to make such robots, namely legged mobile manipulators, capable of solving a variety of real-world tasks, such as traversing doors, opening/closing dishwashers, manipulating valves in an industrial setting, and so forth. A standard approach would have been to tackle each task individually and independently by dedicating a substantial amount of engineering effort to handcraft the desired behaviors:

This is typically achieved through the use of hard-coded state-machines in which the designer specifies a sequence of sub-goals (e.g., grasp the door handle, open the door to a desired angle, hold the door with one of the feet, move the arm to the other side of the door, pass through the door while closing it, etc.). Alternatively, a human expert may demonstrate how to solve the task by teleoperating the robot, recording its motion, and having the robot learn to mimic the recorded behavior.

However, this process is very slow, tedious, and prone to engineering design errors. To avoid this burden for every new task, the research opted for a more structured approach in the form of a single planner that can automatically discover the necessary behaviors for a wide range of loco-manipulation tasks, without requiring any detailed guidance for any of them.

Could you explain your methodology?
The key insight underlying our methodology was that all of the loco-manipulation tasks that we aimed to solve can be modeled as Task and Motion Planning (TAMP) problems. TAMP is a well-established framework that has been primarily used to solve sequential manipulation problems where the robot already possesses a set of primitive skills (e.g., pick object, place object, move to object, throw object, etc.), but still has to properly integrate them to solve more complex long-horizon tasks.

This perspective enabled us to devise a single bi-level optimization formulation that can encompass all our tasks, and exploit domain-specific knowledge, rather than task-specific knowledge. By combining this with the well-established strengths of different planning techniques (trajectory optimization, informed graph search, and sampling-based planning), we were able to achieve an effective search strategy that solves the optimization problem.

The main technical novelty in our work lies in the Offline Multi-Contact Planning Module, depicted in Module B of Figure 1 in the paper. Its overall setup can be summarized as follows: Starting from a user-defined set of robot end-effectors (e.g., front left foot, front right foot, gripper, etc.) and object affordances (these describe where the robot can interact with the object), a discrete state that captures the combination of all contact pairings is introduced. Given a start and goal state (e.g., the robot should end up behind the door), the multi-contact planner then solves a single-query problem by incrementally growing a tree via a bi-level search over feasible contact modes jointly with continuous robot-object trajectories. The resulting plan is enhanced with a single long-horizon trajectory optimization over the discovered contact sequence.
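
The structure of that bi-level search can be sketched schematically: an outer search over discrete contact modes, with an inner feasibility check standing in for the continuous robot-object trajectory optimization. The toy example below only mirrors that structure; the contact pairings are made up and the inner check is a stub.

```python
# Schematic sketch of the bi-level search described above (not the paper's code):
# an outer search over discrete contact modes, with an inner check that stands in
# for the continuous robot-object trajectory optimization.
from collections import deque

CONTACT_PAIRINGS = ("gripper-handle", "left_foot-door", "gripper-door_edge")

def neighbours(mode: frozenset) -> list:
    """Candidate successor modes: make or break one contact pairing at a time."""
    return [mode ^ {pairing} for pairing in CONTACT_PAIRINGS]   # symmetric difference toggles it

def trajectory_feasible(mode_from: frozenset, mode_to: frozenset) -> bool:
    """Stand-in for the inner trajectory optimization over robot and object motion."""
    return True                                # the toy example accepts every transition

def plan(start: frozenset, goal: frozenset):
    """Breadth-first growth of a contact-mode tree with a feasibility check on each edge."""
    queue, parents = deque([start]), {start: None}
    while queue:
        mode = queue.popleft()
        if mode == goal:
            seq = []
            while mode is not None:
                seq.append(sorted(mode))
                mode = parents[mode]
            return list(reversed(seq))
        for nxt in neighbours(mode):
            if nxt not in parents and trajectory_feasible(mode, nxt):
                parents[nxt] = mode
                queue.append(nxt)
    return None

print(plan(frozenset(), frozenset({"gripper-handle", "left_foot-door"})))
```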

What were your main findings?
We found that our planning framework was able to rapidly discover complex multi-contact plans for diverse loco-manipulation tasks, despite having provided it with minimal guidance. For example, for the door-traversal scenario, we specify the door affordances (i.e., the handle, back surface, and front surface), and only provide a sparse objective by simply asking the robot to end up behind the door. Additionally, we found that the generated behaviors are physically consistent and can be reliably executed with a real legged mobile manipulator.

What further work are you planning in this area?
We see the presented framework as a stepping stone toward developing a fully autonomous loco-manipulation pipeline. However, we see some limitations that we aim to address in future work. These limitations are primarily connected to the task-execution phase, where tracking behaviors generated on the basis of pre-modeled environments is only viable under the assumption of a reasonably accurate description, which is not always straightforward to define.

Robustness to modeling mismatches can be greatly improved by complementing our planner with data-driven techniques, such as deep reinforcement learning (DRL). So one interesting direction for future work would be to guide the training of a robust DRL policy using reliable expert demonstrations that can be rapidly generated by our loco-manipulation planner to solve a set of challenging tasks with minimal reward-engineering.

About the author

Jean-Pierre Sleiman received the B.E. degree in mechanical engineering from the American University of Beirut (AUB), Lebanon, in 2016, and the M.S. degree in automation and control from Politecnico Di Milano, Italy, in 2018. He is currently a Ph.D. candidate at the Robotic Systems Lab (RSL), ETH Zurich, Switzerland. His current research interests include optimization-based planning and control for legged mobile manipulation.

]]>
Robo-Insight #4 https://robohub.org/robo-insight-4/ Sun, 03 Sep 2023 08:16:40 +0000 https://robohub.org/?p=208121

Source: OpenAI’s DALL·E 2 with prompt “a hyperrealistic picture of a robot reading the news on a laptop at a coffee shop”

Welcome to the 4th edition of Robo-Insight, a biweekly robotics news update! In this post, we are excited to share a range of new advancements in the field and highlight robots’ progress in areas like mobile applications, cleaning, underwater mining, flexibility, human well-being, depression treatments, and human interactions.

Simplified mobile robot behavior adaptations

In the world of system adaptations, researchers from Eindhoven University of Technology have introduced a methodology that bridges the gap between application developers and control engineers in the context of mobile robots’ behavior adaptation. This approach leverages symbolic descriptions of robots’ behavior, known as “behavior semantics,” and translates them into control actions through a “semantic map.” This innovation aims to simplify motion control programming for autonomous mobile robot applications and facilitate integration across various vendors’ control software. By establishing a structured interaction layer between application, interaction, and control layers, this methodology could help manage the complexity of mobile robot applications, potentially leading to more efficient underground exploration and navigation systems.

The frontal perspective of the mobile platform (showcases hardware components with blue arrows). Source.
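
As an illustration of what a "semantic map" might look like in practice, the sketch below translates symbolic behavior labels into parameters for a constraint-based controller. The labels, fields, and values are hypothetical and only show the shape of such a mapping, not the paper's implementation.

```python
# Illustrative sketch of a "semantic map": translate a symbolic behaviour label,
# as an application developer would write it, into parameters for a constraint-
# based motion controller. All labels, fields and values are hypothetical.
SEMANTIC_MAP = {
    "pass_through_narrow_corridor": {"max_speed": 0.3, "lateral_clearance": 0.05, "keep_heading": True},
    "approach_charging_dock":       {"max_speed": 0.1, "lateral_clearance": 0.02, "keep_heading": True},
    "cross_open_hall":              {"max_speed": 1.0, "lateral_clearance": 0.30, "keep_heading": False},
}

def controller_constraints(behaviour: str) -> dict:
    """Look up the control-level constraints that realise a symbolic behaviour."""
    try:
        return SEMANTIC_MAP[behaviour]
    except KeyError:
        raise ValueError(f"No semantic mapping defined for behaviour '{behaviour}'")

print(controller_constraints("pass_through_narrow_corridor"))
```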

New robot for household clean-ups

Speaking of helpful robots, Princeton University has created a robot named TidyBot to address the challenge of household tidying. Unlike simple tasks such as moving objects, real-world cleanup requires a robot to differentiate between objects, place them correctly, and avoid damaging them. TidyBot accomplishes this through a combination of physical dexterity, visual recognition, and language understanding. Equipped with a mobile robotic arm, a vision model, and a language model, TidyBot can identify objects, place them in designated locations, and even infer proper actions with an 85% accuracy rate. The success of TidyBot demonstrates its potential to handle complex household tasks.

TidyBot at work. Source.
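
The placement step can be illustrated with a small sketch: a few user examples are generalized into category-level rules that send each detected object to a receptacle. TidyBot derives such rules with a large language model; here they are hard-coded so the example stays self-contained, and all names are made up.

```python
# Minimal sketch of the idea behind TidyBot's placement step: generalize a few
# user examples into rules that map a detected object to a receptacle. All
# objects, categories and receptacles below are illustrative assumptions.
user_examples = {
    "banana peel": "compost bin",
    "soda can": "recycling bin",
    "t-shirt": "laundry basket",
}

# Generalized rules (the kind of summary a language model might produce from the examples)
category_rules = {
    "food waste": "compost bin",
    "recyclable container": "recycling bin",
    "clothing": "laundry basket",
}

def categorize(obj: str) -> str:
    """Stand-in for the vision/language categorization of a detected object."""
    lookup = {"apple core": "food waste", "water bottle": "recyclable container",
              "sock": "clothing"}
    return lookup.get(obj, "unknown")

def target_receptacle(obj: str) -> str:
    return category_rules.get(categorize(obj), "ask the user")

for obj in ("apple core", "sock", "remote control"):
    print(f"{obj} -> {target_receptacle(obj)}")
```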

Deep sea mining robots

Shifting our focus to underwater environments, researchers are addressing the efficiency hurdles faced in deep-sea mining through innovative path planning for autonomous robotic mining vehicles. With deep-sea manganese nodules holding significant potential, these robotic vehicles are essential for their collection. By refining path planning methods, the researchers aim to improve the efficiency of these vehicles in traversing challenging underwater terrains while avoiding obstacles. This development could lead to more effective and responsible resource extraction from the ocean floor, contributing to the sustainable utilization of valuable mineral resources.

Diagram depicting the operational framework of the deep-sea mining system. Source.

Advanced soft robots with dexterity and flexibility

In regards to the field of robotic motion, recently researchers from Shanghai Jiao Tong University have developed small-scale soft robots with remarkable dexterity, enabling immediate and reversible changes in motion direction and shape reconfiguration. These robots, powered by an active dielectric elastomer artificial muscle and a unique chiral-lattice foot design, can change direction during fast movement with a single voltage input. The chiral-lattice foot generates various locomotion behaviors, including forward, backward, and circular motion, by adjusting voltage frequencies. Additionally, combining this structural design with shape memory materials allows the robots to perform complex tasks like navigating narrow tunnels or forming specific trajectories. This innovation opens the door to next-generation autonomous soft robots capable of versatile locomotion.

The soft robot achieves circular motion in either right or left directions by positioning the lattice foot towards the respective sides. Source.

Robotic dogs utilized to comfort patients

Turning our focus to robot use in the healthcare field, Stanford students, along with researchers and doctors, have partnered with AI and robotics industry leaders to showcase new robotic dogs designed to interact with pediatric patients at Lucile Packard Children’s Hospital. Patients at the hospital had the opportunity to engage with the playful robots, demonstrating the potential benefits of these mechanical pets for children’s well-being during their hospital stays. The robots, called Pupper, were developed by undergraduate engineering students and operated using handheld controllers. The goal of the demonstration was to study the interaction between the robots and pediatric patients, exploring ways to enhance the clinical experience and reduce anxiety.

A patient playing with the robotic dog. Source.

Robotic innovations could help with depression

Along the same lines as improving well-being, a recent pilot study has explored the potential benefits of using robotics in transcranial magnetic stimulation (TMS) for treating depression. Researchers led by Hyunsoo Shin developed a custom TMS robot designed to improve the accuracy of TMS coil placement on the brain, a critical aspect of effective treatment. By employing the robotic system, they reduced preparation time by 53% and significantly minimized errors in coil positioning. The study found comparable therapeutic effects on depression severity and regional cerebral blood flow (rCBF) between the robotic and manual TMS methods, shedding light on the potential of robotic assistance in enhancing the precision and efficiency of TMS treatments.

Configuration of the robotic repetitive transcranial magnetic stimulation (rTMS) within the treatment facility, and robotic positioning device for automated coil placement. Source.

Advanced robotic eye research

Finally, in the world of human-robot enhancement, a study conducted by researchers from various institutions has explored the potential of using robot eyes as predictive cues in human-robot interaction (HRI). The study aimed to understand whether and how the design of predictive robot eyes could enhance interactions between humans and robots. Four different types of eye designs were tested, including arrows, human eyes, and two anthropomorphic robot eye designs. The results indicated that abstract anthropomorphic robot eyes, which mimic certain aspects of human-like attention, were most effective at directing participants’ attention and triggering reflexive shifts. These findings suggest that incorporating abstract anthropomorphic eyes into robot design could improve the predictability of robot movements and enhance HRI.

The four types of stimuli. The first row showcases the human (left) and arrow (right) stimuli. The second row displays the abstract anthropomorphic robot eyes. Photograph of the questionnaire’s subject, the cooperative robot Sawyer. Source.

The steady stream of progress across these diverse domains underscores how adaptable robotics technology is and points to new ways it can be incorporated across industries. This gradual advancement reflects persistent effort and hints at what these strides might mean for the future.

Sources:

  1. Chen, H. L., Hendrikx, B., Torta, E., Bruyninckx, H., & van de Molengraft, R. (2023, July 10). Behavior adaptation for mobile robots via semantic map compositions of constraint-based controllers. Frontiers.
  2. Princeton Engineering – Engineers clean up with TidyBot. (n.d.). Princeton Engineering. Retrieved August 30, 2023.
  3. Xie, Y., Liu, C., Chen, X., Liu, G., Leng, D., Pan, W., & Shao, S. (2023, July 12). Research on path planning of autonomous manganese nodule mining vehicle based on lifting mining system. Frontiers.
  4. Wang, D., Zhao, B., Li, X., Dong, L., Zhang, M., Zou, J., & Gu, G. (2023). Dexterous electrical-driven soft robots with reconfigurable chiral-lattice foot design. Nature Communications, 14(1), 5067.
  5. Stanford University. (2023, August 1). Robo-dogs unleash joy at Stanford hospital. Stanford Report.
  6. Shin, H., Jeong, H., Ryu, W., Lee, G., Lee, J., Kim, D., Song, I.-U., Chung, Y.-A., & Lee, S. (2023). Robotic transcranial magnetic stimulation in the treatment of depression: a pilot study. Scientific Reports, 13(1), 14074.
  7. Onnasch, L., Schweidler, P., & Schmidt, H. (2023, July 3). The potential of robot eyes as predictive cues in HRI-an eye-tracking study. Frontiers.
]]>
MIT engineers use kirigami to make ultrastrong, lightweight structures https://robohub.org/mit-engineers-use-kirigami-to-make-ultrastrong-lightweight-structures/ Fri, 01 Sep 2023 08:17:00 +0000 https://news.mit.edu/2023/using-kirigami-ultrastrong-lightweight-structures-0822

MIT researchers used kirigami, the art of Japanese paper cutting and folding, to develop ultrastrong, lightweight materials that have tunable mechanical properties, like stiffness and flexibility. These materials could be used in airplanes, automobiles, or spacecraft. Image: Courtesy of the researchers

By Adam Zewe | MIT News

Cellular solids are materials composed of many cells that have been packed together, such as a honeycomb. The shape of those cells largely determines the material’s mechanical properties, including its stiffness or strength. Bones, for instance, are filled with a natural material that enables them to be lightweight, but stiff and strong.

Inspired by bones and other cellular solids found in nature, humans have used the same concept to develop architected materials. By changing the geometry of the unit cells that make up these materials, researchers can customize the material’s mechanical, thermal, or acoustic properties. Architected materials are used in many applications, from shock-absorbing packing foam to heat-regulating radiators.

Using kirigami, the ancient Japanese art of folding and cutting paper, MIT researchers have now manufactured a type of high-performance architected material known as a plate lattice, on a much larger scale than scientists have previously been able to achieve by additive fabrication. This technique allows them to create these structures from metal or other materials with custom shapes and specifically tailored mechanical properties. 

“This material is like steel cork. It is lighter than cork, but with high strength and high stiffness,” says Professor Neil Gershenfeld, who leads the Center for Bits and Atoms (CBA) at MIT and is senior author of a new paper on this approach.

The researchers developed a modular construction process in which many smaller components are formed, folded, and assembled into 3D shapes. Using this method, they fabricated ultralight and ultrastrong structures and robots that, under a specified load, can morph and hold their shape.

Because these structures are lightweight but strong, stiff, and relatively easy to mass-produce at larger scales, they could be especially useful in architectural, airplane, automotive, or aerospace components.

Joining Gershenfeld on the paper are co-lead authors Alfonso Parra Rubio, a research assistant in the CBA, and Klara Mundilova, an MIT electrical engineering and computer science graduate student; along with David Preiss, a graduate student in the CBA; and Erik D. Demaine, an MIT professor of computer science. The research will be presented at ASME’s Computers and Information in Engineering Conference.

The researchers actuate a corrugated structure by tensioning steel wires across the compliant surfaces and then connecting them to a system of pulleys and motors, enabling the structure to bend in either direction. Image: Courtesy of the researchers

Fabricating by folding

Architected materials, like lattices, are often used as cores for a type of composite material known as a sandwich structure. To envision a sandwich structure, think of an airplane wing, where a series of intersecting, diagonal beams form a lattice core that is sandwiched between a top and bottom panel. This truss lattice has high stiffness and strength, yet is very lightweight.

Plate lattices are cellular structures made from three-dimensional intersections of plates, rather than beams. These high-performance structures are even stronger and stiffer than truss lattices, but their complex shape makes them challenging to fabricate using common techniques like 3D printing, especially for large-scale engineering applications.

The MIT researchers overcame these manufacturing challenges using kirigami, a technique for making 3D shapes by folding and cutting paper that traces its history to Japanese artists in the 7th century.

Kirigami has been used to produce plate lattices from partially folded zigzag creases. But to make a sandwich structure, one must attach flat plates to the top and bottom of this corrugated core onto the narrow points formed by the zigzag creases. This often requires strong adhesives or welding techniques that can make assembly slow, costly, and challenging to scale.

The MIT researchers modified a common origami crease pattern, known as a Miura-ori pattern, so the sharp points of the corrugated structure are transformed into facets. The facets, like those on a diamond, provide flat surfaces to which the plates can be attached more easily, with bolts or rivets.

The modified Miura-ori pattern turns the sharp points of the corrugated structure into flat facets to which plates can be bolted or riveted. Image: Courtesy of the researchers

“Plate lattices outperform beam lattices in strength and stiffness while maintaining the same weight and internal structure,” says Parra Rubio. “Reaching the Hashin-Shtrikman (H-S) upper bound for theoretical stiffness and strength has been demonstrated through nanoscale production using two-photon lithography. Plate lattice construction has been so difficult that there has been little research on the macro scale. We think folding is a path to easier utilization of this type of plate structure made from metals.”

Customizable properties

Moreover, the way the researchers design, fold, and cut the pattern enables them to tune certain mechanical properties, such as stiffness, strength, and flexural modulus (the tendency of a material to resist bending). They encode this information, as well as the 3D shape, into a creasing map that is used to create these kirigami corrugations.

For instance, based on the way the folds are designed, some cells can be shaped so they hold their shape when compressed while others can be modified so they bend. In this way, the researchers can precisely control how different areas of the structure will deform when compressed.

Because the flexibility of the structure can be controlled, these corrugations could be used in robots or other dynamic applications with parts that move, twist, and bend.
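To make the idea of a creasing map more concrete, here is a small, hypothetical sketch of how such a map might be represented in code: a grid of unit cells, each carrying a mountain/valley assignment, a target fold angle, and a flag for whether the cell should hold its shape or yield under load. The field names and structure are illustrative assumptions, not the researchers’ actual encoding.

```python
from dataclasses import dataclass
from enum import Enum

class Fold(Enum):
    MOUNTAIN = 1
    VALLEY = -1

@dataclass
class Cell:
    fold: Fold             # mountain or valley assignment of the cell's main crease
    fold_angle_deg: float  # target dihedral angle after folding
    rigid: bool            # True: cell should hold its shape; False: cell may bend

def make_creasing_map(rows, cols, fold_angle_deg=60.0, compliant_cells=()):
    """Build a simple creasing map: a rows x cols grid of cells with an
    alternating mountain/valley (checkerboard-style) pattern, marking the
    listed (row, col) cells as compliant so they deform under compression."""
    compliant = set(compliant_cells)
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            fold = Fold.MOUNTAIN if (r + c) % 2 == 0 else Fold.VALLEY
            row.append(Cell(fold, fold_angle_deg, rigid=(r, c) not in compliant))
        grid.append(row)
    return grid

# Example: a 4x6 corrugation where the middle two columns are allowed to bend,
# giving the structure a controlled hinge region.
creasing_map = make_creasing_map(4, 6, compliant_cells=[(r, c) for r in range(4) for c in (2, 3)])
```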

To craft larger structures like robots, the researchers introduced a modular assembly process. They mass produce smaller crease patterns and assemble them into ultralight and ultrastrong 3D structures. Smaller structures have fewer creases, which simplifies the manufacturing process.

Using the adapted Miura-ori pattern, the researchers create a crease pattern that will yield their desired shape and structural properties. Then they use a unique machine, a Zund cutting table, to score a flat metal panel that they then fold into the 3D shape.

“To make things like cars and airplanes, a huge investment goes into tooling. This manufacturing process is without tooling, like 3D printing. But unlike 3D printing, our process can set the limit for record material properties,” Gershenfeld says.

Using their method, they produced aluminum structures with a compression strength of more than 62 kilonewtons, but a weight of only 90 kilograms per square meter. (Cork weighs about 100 kilograms per square meter.) Their structures were so strong they could withstand three times as much force as a typical aluminum corrugation.

The aluminum structures achieved a compression strength of more than 62 kilonewtons at a weight of only 90 kilograms per square meter. Image: Courtesy of the researchers

The versatile technique could be used for many materials, such as steel and composites, making it well-suited for the production of lightweight, shock-absorbing components for airplanes, automobiles, or spacecraft.

However, the researchers found that their method can be difficult to model. So, in the future, they plan to develop user-friendly CAD design tools for these kirigami plate lattice structures. In addition, they want to explore methods to reduce the computational costs of simulating a design that yields desired properties. 

“Kirigami corrugations hold exciting potential for architectural construction,” says James Coleman MArch ’14, SM ’14, co-founder of the design for fabrication and installation firm SumPoint, and former vice president for innovation and R&D at Zahner, who was not involved with this work. “In my experience producing complex architectural projects, current methods for constructing large-scale curved and doubly curved elements are material intensive and wasteful, and thus deemed impractical for most projects. While the authors’ technology offers novel solutions to the aerospace and automotive industries, I believe their cell-based method can also significantly impact the built environment. The ability to fabricate various plate lattice geometries with specific properties could enable higher performing and more expressive buildings with less material. Goodbye heavy steel and concrete structures, hello lightweight lattices!”

Parra Rubio, Mundilova and other MIT graduate students also used this technique to create three large-scale, folded artworks from aluminum composite that are on display at the MIT Media Lab. Despite the fact that each artwork is several meters in length, the structures only took a few hours to fabricate.

“At the end of the day, the artistic piece is only possible because of the math and engineering contributions we are showing in our papers. But we don’t want to ignore the aesthetic power of our work,” Parra Rubio says.

This work was funded, in part, by the Center for Bits and Atoms Research Consortia, an AAUW International Fellowship, and a GWI Fay Weber Grant.

]]>
New dual-arm robot achieves bimanual tasks by learning from simulation https://robohub.org/new-dual-arm-robot-achieves-bimanual-tasks-by-learning-from-simulation/ Tue, 29 Aug 2023 08:30:00 +0000 https://www.bristol.ac.uk/news/2023/august/dual-arm-robot.html

Dual arm robot holding crisp. Image: Yijiong Lin

The new Bi-Touch system, designed by scientists at the University of Bristol and based at the Bristol Robotics Laboratory, allows robots to carry out manual tasks by sensing what to do from a digital helper.

The findings, published in IEEE Robotics and Automation Letters, show how an AI agent interprets its environment through tactile and proprioceptive feedback and then controls the robots’ behaviours, enabling precise sensing, gentle interaction, and effective object manipulation to accomplish robotic tasks.

This development could revolutionise industries such as fruit picking and domestic service, and could eventually help recreate touch in artificial limbs.

Lead author Yijiong Lin from the Faculty of Engineering, explained: “With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch. And more importantly, we can directly apply these agents from the virtual world to the real world without further training.

“The tactile bimanual agent can solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way.”

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, this topic is less explored than single-arm settings, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces. The team were able to develop a tactile dual-arm robotic system using recent advances in AI and robotic tactile sensing.

The researchers built a virtual world (simulation) containing two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism to encourage the robot agents to learn the bimanual tasks, and developed a real-world tactile dual-arm robot system to which they could directly apply the trained agents.
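The press release does not give the reward functions themselves, but a hedged sketch can show the general shape a tactile bimanual-lift reward might take: penalise the object’s distance from a goal pose while keeping the contact forces on both tactile tips near a gentle target value, and raise the goal in stages once the current one is reached. All names, weights, and thresholds below are illustrative assumptions, not the Bi-Touch implementation.

```python
import numpy as np

def bimanual_lift_reward(object_pos, goal_pos, left_contact_force, right_contact_force,
                         w_pose=1.0, w_contact=0.1, force_target=0.5):
    """Hypothetical shaped reward for a tactile bimanual lift:
    - pose term: negative distance between the object and the goal position;
    - contact term: keep both fingertip forces close to a small target value,
      discouraging both dropping (zero force) and crushing (high force)."""
    pose_term = -w_pose * float(np.linalg.norm(object_pos - goal_pos))
    contact_term = -w_contact * (abs(left_contact_force - force_target)
                                 + abs(right_contact_force - force_target))
    return pose_term + contact_term

def update_goal(object_pos, goal_pos, final_height, step=0.02, tol=0.01):
    """Hypothetical goal-update mechanism: once the object is within `tol`
    of the current goal, raise the goal by `step` until the final lift
    height is reached, so the agent learns the lift gradually."""
    if np.linalg.norm(object_pos - goal_pos) < tol and goal_pos[2] < final_height:
        goal_pos = goal_pos + np.array([0.0, 0.0, step])
    return goal_pos
```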

The robot learns bimanual skills through Deep Reinforcement Learning (Deep-RL), one of the most advanced techniques in the field of robot learning. It is designed to teach robots to do things by letting them learn from trial and error, akin to training a dog with rewards and punishments.

For robotic manipulation, the robot learns to make decisions by attempting various behaviours to achieve designated tasks, for example, lifting up objects without dropping or breaking them. When it succeeds, it gets a reward, and when it fails, it learns what not to do. Over time, it figures out the best ways to grab things using these rewards and punishments. The AI agent is visually blind, relying only on proprioceptive feedback (a body’s ability to sense movement, action and location) and on tactile feedback.
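In other words, the policy receives no camera input at all. A minimal sketch of what such an observation vector could look like, assuming joint angles for both arms plus a low-dimensional feature vector extracted from each tactile sensor image (the names and dimensions are illustrative assumptions):

```python
import numpy as np

def build_observation(left_joint_pos, right_joint_pos,
                      left_tactile_features, right_tactile_features, goal_pos):
    """Concatenate proprioceptive and tactile signals into one policy input.
    No visual input is included: the agent is deliberately 'blind'."""
    return np.concatenate([
        left_joint_pos,          # e.g. 6 joint angles of the left arm
        right_joint_pos,         # e.g. 6 joint angles of the right arm
        left_tactile_features,   # e.g. features from the left tactile image
        right_tactile_features,  # e.g. features from the right tactile image
        goal_pos,                # current lift goal (see the goal-update sketch above)
    ])
```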

With this approach, they were able to get the dual-arm robot to safely lift items as fragile as a single Pringle crisp.

Co-author Professor Nathan Lepora added: “Our Bi-Touch system showcases a promising approach with affordable software and hardware for learning bimanual behaviours with touch in simulation, which can be directly applied to the real world. Our developed tactile dual-arm robot simulation allows further research on a range of different tasks, as the code will be open-source, which is ideal for developing other downstream tasks.”

Yijiong concluded: “Our Bi-Touch system allows a tactile dual-arm robot to learn solely from simulation, and to achieve various manipulation tasks in a gentle way in the real world.

“And now we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch.”

]]>
AI helps robots manipulate objects with their whole bodies https://robohub.org/ai-helps-robots-manipulate-objects-with-their-whole-bodies/ Sun, 27 Aug 2023 10:49:00 +0000 https://news.mit.edu/2023/ai-technique-robots-manipulate-objects-whole-bodies-0824

MIT researchers developed an AI technique that enables a robot to develop complex plans for manipulating an object using its entire hand, not just the fingertips. This model can generate effective plans in about a minute using a standard laptop. Here, a robot attempts to rotate a bucket 180 degrees. Image: Courtesy of the researchers

By Adam Zewe | MIT News

Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box. 

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.

Now MIT researchers found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.

While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.      

“Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans,” says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique.

Joining Suh on the paper are co-lead author Tao Pang PhD ’23, a roboticist at Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research appears this week in IEEE Transactions on Robotics.

Learning about learning

Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.

It has been used effectively for contact-rich manipulation planning, where the robot seeks to learn the best way to move an object in a specified manner.

In these figures, a simulated robot performs three contact-rich manipulation tasks: in-hand manipulation of a ball, picking up a plate, and manipulating a pen into a specific orientation. Image: Courtesy of the researchers

But because there may be billions of potential contact points that a robot must reason about when determining how to use its fingers, hands, arms, and body to interact with an object, this trial-and-error approach requires a great deal of computation.

“Reinforcement learning may need to go through millions of years in simulation time to actually be able to learn a policy,” Suh adds.

On the other hand, if researchers specifically design a physics-based model using their knowledge of the system and the task they want the robot to accomplish, that model incorporates structure about this world that makes it more efficient.

Yet physics-based approaches aren’t as effective as reinforcement learning when it comes to contact-rich manipulation planning — Suh and Pang wondered why.

They conducted a detailed analysis and found that a technique known as smoothing enables reinforcement learning to perform so well.

Many of the decisions a robot could make when determining how to manipulate an object aren’t important in the grand scheme of things. For instance, each infinitesimal adjustment of one finger, whether or not it results in contact with the object, doesn’t matter very much.  Smoothing averages away many of those unimportant, intermediate decisions, leaving a few important ones.

Reinforcement learning performs smoothing implicitly by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar type of smoothing, enabling it to focus on core robot-object interactions and predict long-term behavior. They showed that this approach could be just as effective as reinforcement learning at generating complex plans.
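The randomized-smoothing idea can be written down compactly: instead of working with a discontinuous, contact-dependent objective f(x) directly, one works with its local average under random perturbations, f̃(x) = E[f(x + w)], which is smoother and yields a usable gradient estimate from the same samples. The sketch below is a generic Monte Carlo illustration of that principle under Gaussian perturbations, not the researchers’ actual implementation.

```python
import numpy as np

def smoothed_value_and_grad(f, x, sigma=0.05, n_samples=256, rng=None):
    """Monte Carlo randomized smoothing of a (possibly non-smooth) objective f.

    Estimates f_smooth(x) = E_w[f(x + w)] with w ~ N(0, sigma^2 I), and its
    gradient via the standard Gaussian-smoothing estimator
        grad ~= E_w[(f(x + w) - f(x)) * w] / sigma^2.
    Averaging over perturbations washes out the many near-irrelevant contact
    decisions, leaving a smooth signal that a simple optimizer can follow."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    noise = rng.normal(scale=sigma, size=(n_samples, x.size))
    values = np.array([f(x + w) for w in noise])
    baseline = f(x)
    value = values.mean()
    grad = ((values - baseline)[:, None] * noise).mean(axis=0) / sigma**2
    return value, grad

# Toy usage: a discontinuous "contact" cost that a plain gradient method could
# not differentiate, smoothed into something optimizable.
contact_cost = lambda q: float(np.abs(q).sum() > 0.1) + float(np.square(q).sum())
val, grad = smoothed_value_and_grad(contact_cost, np.array([0.3, -0.2]))
```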

“If you know a bit more about your problem, you can design more efficient algorithms,” Pang says.

A winning combination

Even though smoothing greatly simplifies the decisions, searching through the remaining decisions can still be a difficult problem. So, the researchers combined their model with an algorithm that can rapidly and efficiently search through all possible decisions the robot could make.

With this combination, the computation time was cut down to about a minute on a standard laptop.

They first tested their approach in simulations where robotic hands were given tasks like moving a pen to a desired configuration, opening a door, or picking up a plate. In each instance, their model-based approach achieved the same performance as reinforcement learning, but in a fraction of the time. They saw similar results when they tested their model in hardware on real robotic arms.

“The same ideas that enable whole-body manipulation also work for planning with dexterous, human-like hands. Previously, most researchers said that reinforcement learning was the only approach that scaled to dexterous hands, but Terry and Tao showed that by taking this key idea of (randomized) smoothing from reinforcement learning, they can make more traditional planning methods work extremely well, too,” Tedrake says.

However, the model they developed relies on a simpler approximation of the real world, so it cannot handle very dynamic motions, such as objects falling. While effective for slower manipulation tasks, their approach cannot create a plan that would enable a robot to toss a can into a trash bin, for instance. In the future, the researchers plan to enhance their technique so it could tackle these highly dynamic motions.

“If you study your models carefully and really understand the problem you are trying to solve, there are definitely some gains you can achieve. There are benefits to doing things that are beyond the black box,” Suh says.

This work is funded, in part, by Amazon, MIT Lincoln Laboratory, the National Science Foundation, and the Ocado Group.

]]>
SVR Guide to Robotics Research and Education 2023 https://robohub.org/svr-guide-to-robotics-research-and-education-2023/ Tue, 22 Aug 2023 09:42:52 +0000 https://robohub.org/?p=208027

In the last decade we have seen more robotics innovation becoming real products and companies than in the entire history of robotics.

Furthermore, the greater Silicon Valley and San Francisco Bay Area is at the center of this ‘Cambrian Explosion in Robotics’, as Dr Gill Pratt, CEO of the Toyota Research Institute, described it. In fact, two of the very first robots were developed right here.

In 1969 at Stanford, Victor Scheinman designed the first electrically driven, computer-controlled robot arm. After successful pilots and interest from General Motors, Unimation acquired the concept and released the PUMA, or Programmable Universal Machine for Assembly. Unimation was eventually acquired by Staubli, and the PUMA became one of the most successful industrial robots of all time.

Shakey, developed at SRI International from 1966 to 1972, was the first mobile robot able to perceive and reason about its surroundings; Life magazine went so far as to call it the world’s first electronic person. Shakey pioneered many advances in computer vision, path planning, and control systems that are still in use today.

Pioneers like these have been at the heart of Silicon Valley Robotics, the regional robotics ecosystem and industry association, but we have also seen enormous growth in new robotics companies and startups in the last decade.

And all of them are hiring.

This volume serves as a guide for students who are interested in studying any aspect of robotics. Robotics jobs range from service technician, electrical or mechanical engineer, and control systems or computer science roles to interaction or experience designer, human factors, and industrial design.

All these skills are in great demand in robotics companies around the world, and people with experience in robotics are in great demand everywhere. Robotics is a complex multidisciplinary field, which provides opportunities for you to develop problem solving skills and a holistic approach.

The robotics industry also needs people with the skills to grow businesses: not just robotics expertise, but product and project management, human resources, sales, marketing, and operations.

Get involved in robotics – the industry of the 21st century.

The guide

]]>