Online hands-on science communication training – sign up here! | https://robohub.org/online-hands-on-science-communication-training-sign-up-here/ | Wed, 13 Nov 2024

On Friday 22 November, IEEE Robotics and Automation Society will be hosting an online science communication training session for robotics and AI researchers. The tutorial will introduce you to science communication and help you create your own story through hands-on activities.

Date: 22 November 2024
Time: 10:00 – 13:00 EST (07:00 – 10:00 PST, 15:00 – 18:00 GMT, 16:00 – 19:00 CET)
Location: Online – worldwide
Registration
Website

Science communication is essential. It helps demystify robotics and AI for a broad range of people including policy makers, business leaders, and the public. As a researcher, mastering this skill can not only enhance your communication abilities but also expand your network and increase the visibility and impact of your work.

In this three-hour session, leading science communicators in robotics and AI will teach you how to clearly and concisely explain your research to non-specialists. You’ll learn how to avoid hype, how to find suitable images and videos to illustrate your work, and where to start with social media. We’ll hear from a leading robotics journalist on how to deal with media and how to get your story out to a wider audience.

This is a hands-on session with exercises for you to take part in throughout the course. Therefore, please come prepared with an idea about a piece of research you’d like to communicate about.

Agenda

Part 1: How to communicate your work to a broader audience

  • The importance of science communication
  • How to produce a short summary of your research for communication via social media channels
  • How to expand your outline to write a complete blog post
  • How to find and use suitable images
  • How to avoid hype when communicating your research
  • Unconventional ways of doing science communication

Part 2: How to make videos about your robots

  • The value of video
  • Tips on making a video

Part 3: Working with media

  • Why bother talking to media anyway?
  • How media works and what it’s good and bad at
  • How to pitch media a story
  • How to work with your press office

Speakers:
Sabine Hauert, Professor of Swarm Engineering, Executive Trustee AIhub / Robohub
Lucy Smith, Senior Managing Editor AIhub / Robohub
Laura Bridgeman, Audience Development Manager IEEE Spectrum
Evan Ackerman, Senior Editor IEEE Spectrum

Sign up here.

#RoboCup2024 – daily digest: 21 July | https://robohub.org/robocup2024-daily-digest-21-july/ | Sun, 21 Jul 2024

A break in play during a Small Size League match.

Today, 21 July, saw the competitions draw to a close in a thrilling finale. In the third and final of our round-up articles, we provide a flavour of the action from this last day. If you missed them, you can find our first two digests here: 19 July | 20 July.

My first port of call this morning was the Standard Platform League, where Dr Timothy Wiley and Tom Ellis from Team RedbackBots, RMIT University, Melbourne, Australia, demonstrated an exciting advancement that is unique to their team. They have developed an augmented reality (AR) system with the aim of enhancing the understanding and explainability of the on-field action.

The RedbackBots travelling team for 2024 (L-to-R: Murray Owens, Sam Griffiths, Tom Ellis, Dr Timothy Wiley, Mark Field, Jasper Avice Demay). Photo credit: Dr Timothy Wiley.

Timothy, the academic leader of the team, explained: “What our students proposed at the end of last year’s competition, to make a contribution to the league, was to develop an augmented reality (AR) visualization of what the league calls the team communication monitor. This is a piece of software that gets displayed on the TV screens to the audience and the referee, and it shows you where the robots think they are, information about the game, and where the ball is. We set out to make an AR system of this because we think it’s so much better to view it overlaid on the field. What the AR lets us do is project all of this information live on the field as the robots are moving.”

The team has been demonstrating the system to the league at the event, with very positive feedback. In fact, one of the teams found an error in their software during a game whilst trying out the AR system. Tom said that they’ve received a lot of ideas and suggestions from the other teams for further developments. This is one of the first (if not the first) AR systems to be trialled across the competition, and the first time such a system has been used in the Standard Platform League. I was lucky enough to get a demo from Tom, and it definitely added a new level to the viewing experience. It will be very interesting to see how the system evolves.

Mark Field setting up the MetaQuest3 to use the augmented reality system. Photo credit: Dr Timothy Wiley.

From the main soccer arena I headed to the RoboCupJunior zone, where Rui Baptista, an Executive Committee member, gave me a tour of the arenas and introduced me to some of the teams that have been using machine learning models to assist their robots. RoboCupJunior is a competition for school children, and is split into three leagues: Soccer, Rescue and OnStage.

I first caught up with four teams from the Rescue league. Robots identify “victims” within re-created disaster scenarios, varying in complexity from line-following on a flat surface to negotiating paths through obstacles on uneven terrain. There are three different strands to the league: 1) Rescue Line, where robots follow a black line which leads them to a victim, 2) Rescue Maze, where robots need to investigate a maze and identify victims, 3) Rescue Simulation, which is a simulated version of the maze competition.

Team Skollska Knijgia, taking part in the Rescue Line, used a YOLO v8 neural network to detect victims in the evacuation zone. They trained the network themselves on about 5000 images. Also competing in the Rescue Line event was Team Overengeniering2, who likewise used YOLO v8 neural networks, in this case for two elements of their system. Their first model detects victims in the evacuation zone, as well as the walls. Their second model is used during line following, and allows the robot to detect when the black line (used for the majority of the task) changes to a silver line, which indicates the entrance to the evacuation zone.
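As a rough illustration of the pipeline the teams describe, and not their actual code, fine-tuning and running a YOLO v8 detector with the ultralytics Python package looks something like the sketch below; the dataset file, image name and training settings are placeholders.

    # Hedged sketch of a YOLO v8 victim-detection pipeline using the
    # ultralytics package; dataset paths and settings are placeholders,
    # not the teams' actual configuration.
    from ultralytics import YOLO

    # Fine-tune a small pretrained checkpoint on a custom dataset of a
    # few thousand labelled images (victims, walls, and so on).
    model = YOLO("yolov8n.pt")
    model.train(data="rescue_dataset.yaml", epochs=50, imgsz=640)

    # At run time, pass a camera frame through the trained model and
    # read back the detected boxes and their class labels.
    results = model("evacuation_zone_frame.jpg")
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        print(f"Detected {label} with confidence {float(box.conf):.2f}")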

Left: Team Skollska Knijgia. Right: Team Overengeniering2.

Team Tanorobo! were taking part in the maze competition. They also used a machine learning model for victim detection, training it on 3000 photos for each type of victim (these are denoted by different letters in the maze). They also took photos of walls and obstacles, to avoid misclassification. Team New Aje were taking part in the simulation contest. They used a graphical user interface to train their machine learning model and to debug their navigation algorithms. They have three different algorithms for navigation, with varying computational cost, which they can switch between depending on where they are in the maze and how complex that part is.

Left: Team Tanorobo! Right: Team New Aje.

I met two of the teams who had recently presented in the OnStage event. Team Medic’s performance was based on a medical scenario and included two machine learning elements: voice recognition, for communication with the “patient” robots, and image recognition, to classify X-rays. Team Jam Session’s robot reads American sign language symbols and uses them to play a piano. They used the MediaPipe detection algorithm to find different points on the hand, and random forest classifiers to determine which symbol was being displayed.
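Team Jam Session’s exact implementation isn’t reproduced here, but the general pattern they describe, extracting hand landmarks with MediaPipe and classifying them with a scikit-learn random forest, can be sketched as follows; the training data and sign labels are placeholders.

    # Hedged sketch of the landmark-plus-classifier pattern described
    # above: MediaPipe finds hand keypoints, a random forest maps them
    # to a sign label. The training data and labels are placeholders.
    import cv2
    import mediapipe as mp
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def hand_features(image_bgr, hands):
        """Return a flat vector of 21 (x, y, z) hand landmarks, or None."""
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if not result.multi_hand_landmarks:
            return None
        landmarks = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()

    # In practice X_train (n_samples x 63) and y_train (sign labels) would
    # be built by running hand_features over a labelled photo collection.
    X_train = np.random.rand(200, 63)              # placeholder features
    y_train = np.random.choice(list("ABC"), 200)   # placeholder labels
    classifier = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    image = cv2.imread("sign_photo.jpg")           # placeholder image path
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        features = hand_features(image, hands) if image is not None else None
    if features is not None:
        print("Predicted sign:", classifier.predict([features])[0])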

Left: Team Medic Bot. Right: Team Jam Session.

Next stop was the humanoid league where the final match was in progress. The arena was packed to the rafters with crowds eager to see the action.
Standing room only to see the Adult Size Humanoids.

The finals continued with the Middle Size League, with the home team Tech United Eindhoven beating BigHeroX by a convincing 6-1 scoreline. You can watch the livestream of the final day’s action here.

The grand finale featured the winners of the Middle Size League (Tech United Eindhoven) against five RoboCup trustees. The humans ran out 5-2 winners, their superior passing and movement too much for Tech United.

#RoboCup2024 – daily digest: 20 July | https://robohub.org/robocup2024-daily-digest-20-july/ | Sat, 20 Jul 2024

The Standard Platform Soccer League in action.

This is the second of our daily digests from RoboCup2024 in Eindhoven, The Netherlands. If you missed the first digest, which gives some background to RoboCup, you can find it here.

Competitions continued across all the leagues today, with participants vying for a place in Sunday’s finals.

The RoboCup@Work league focusses on robots in work-related scenarios, utilizing ideas and concepts from other RoboCup competitions to tackle open research challenges in industrial and service robotics.

I arrived at the arena in time to catch the advanced navigation test. Robots have to autonomously navigate, picking up and placing objects at different work stations. In this advanced test, caution tape is added to the arena floor, which the robots should avoid travelling over. There is also a complex placing element where teams have to put an object that they’ve collected into a slot – get the orientation or placement of the object slightly wrong and it won’t fall into the slot.

The RoboCup@Work arena just before competition start.

Eight teams are taking part in the league this year. Executive Committee member Asad Norouzi said that there are plans to introduce a sub-league which would provide an entry point for new teams or juniors to get into the league proper.

I caught up with Harrison Burns, Mitchell Torok and Jasper Arnold from Team MiRobot. They are based at the University of New South Wales and are attending RoboCup for the first time.

Team MiRobot from UNSW.

The team actually only started six months ago, so final preparations have been a bit stressful. However, the experience has been great fun, and the competition has gone well so far. Like most teams, they’ve had to make many refinements as the competition has progressed, leading to some late nights.

One notable feature of the team’s robot is the bespoke, in-house-designed grasping mechanism on the end of the arm. The team note that “it has good flexible jaws, so when it grabs round objects it actually pulls the object directly into it. Because it uses a linear motion, compared to a lot of other rotating jaws, it has a lot better reliability for picking up objects”.

Here is some footage from the task, featuring Team b-it-bots and Team Singapore.

In the Middle Size Soccer League (MSL), teams of five fully autonomous robots play with a regular-size FIFA ball. Teams are free to design their own hardware, but all sensors have to be on-board, and the robots are subject to a maximum size and a weight limit of 40 kg. The research focus is on mechatronics design, control and multi-agent cooperation at plan and perception levels. Nine teams are competing this year.

I spoke to António Ribeiro, who is a member of the technical committee and part of Team LAR@MSL from the University of Minho, Portugal. The team started in 1998, but António and most of his colleagues on the current team have only been involved in the MSL since September 2022. The robots have evolved as the competition has progressed, and further improvements are in progress. Refinements so far have included communication, the detection system, and the control system. They are pleased with the improvements from the previous RoboCup. “Last year we had a lot of hardware issues, but this year the hardware seems pretty stable. We also changed our coding architecture and it is now much easier and faster for us to develop code because we can all work on the code at the same time on different modules”.

António cited versatility and cost-effective solutions as strengths of the team. “Our robot is actually very cheap compared to other teams. We use a lot of old chassis, and our solutions always go to the lowest cost possible. Some teams have multiple thousand dollar robots, but, for example, our vision system is around $70-80. It works pretty well – we need to improve the way we handle it, but it seems stable”.

Team LAR@MSL

The RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. A set of benchmark tests is used to evaluate the robots’ abilities and performance in a realistic non-standardized home environment setting. These tests include helping to prepare breakfast, clearing the table, and storing groceries.

I arrived in time to watch the “stickler for the rules” challenge, where robots have to navigate different rooms and make sure that the people inside (“guests” at a party) are sticking to four rules: 1) there is one forbidden room – if a guest is in there, the robot must alert them and ask them to follow it into another room, 2) everyone must have a drink in their hand – if not, the robot directs them to a shelf with drinks, 3) no shoes to be worn in the house, and 4) there should be no rubbish left on the floor.

After watching an attempt from the LAR@Home robot, Tiago from the team told me a bit about it. “The goal is to develop a robot capable of multiple general-purpose tasks in home and healthcare environments.” With the exception of the robotic arm, all of the hardware was built by the team. The robot has two RGBD cameras, two LIDARs, a tray (where the robot can store items that it needs to carry), and two emergency stop buttons that deactivate all moving parts. Four omnidirectional wheels allow the robot to move in any direction at any time. The wheels have independent suspension systems, which guarantee that they can all be on the ground at all times, even if there are bumps and cables on the venue floor. There is a tablet that acts as a visual interface, and a microphone and speakers enable communication between humans and the robot, which is all done via speaking and listening.
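The team didn’t go into how the omnidirectional base is driven, but, as a hedged aside, for a mecanum-style four-wheel base (one common way of building such a platform, and an assumption here rather than a detail confirmed by the team) the standard inverse kinematics that turn a desired body velocity into wheel speeds look like this:

    # Hedged illustration of four-wheel omnidirectional (mecanum-style)
    # inverse kinematics. The wheel arrangement and dimensions are
    # assumptions for illustration, not the LAR@Home robot's actual design.

    def wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.2):
        """Map a body velocity (vx forward, vy sideways, wz rotation, SI units)
        to angular speeds (rad/s) of the front-left, front-right, rear-left
        and rear-right wheels, for wheel radius r and half-distances lx, ly
        between the wheel axes."""
        k = lx + ly
        fl = (vx - vy - k * wz) / r
        fr = (vx + vy + k * wz) / r
        rl = (vx + vy - k * wz) / r
        rr = (vx - vy + k * wz) / r
        return fl, fr, rl, rr

    # Example: drive purely sideways at 0.3 m/s, which ordinary wheels cannot do.
    print(wheel_speeds(vx=0.0, vy=0.3, wz=0.0))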

Tiago told me that the team have talked to a lot of healthcare practitioners to find out the main problems faced by elderly people, and this inspired one of their robot’s features. “They said that the two main injury sources are from when people are trying to sit down or stand up, and when they are trying to pick something up from the floor. We developed a torso that can pick objects from the floor one metre away from the robot”.

The LAR@Home team.


You can keep up with the latest news direct from RoboCup here.

Click here to see all of our content pertaining to RoboCup.

#RoboCup2024 – daily digest: 19 July | https://robohub.org/robocup2024-daily-digest-19-july/ | Fri, 19 Jul 2024

The main soccer arena.

RoboCup is an international scientific initiative with the goal of advancing the state of the art in intelligent robots. As part of this initiative, a series of competitions and events are held throughout the year. The main showcase event is an international affair, with teams travelling from far and wide to put their machines through their paces.

This year, RoboCup is being held in three arenas in the Genneper Parken, Eindhoven, The Netherlands. The organisers are expecting over 2,000 participants, from 45 different countries, with around 300 teams signed up to take part in the various competitions.

Although RoboCup started out as a football (or soccer) playing competition, other leagues have since been introduced, focussing on robots in industrial, rescue, and home settings. There is even a dedicated league for young roboticists – RoboCupJunior – where participants can take part in either football, rescue, or artistic events.

I am lucky enough to be able to attend this year, and, for the next three days, I’ll be bringing you a daily digest of some of the exciting happenings from Eindhoven.

Today, 19 July, sees the competition in full swing. The main soccer arena, boasting multiple pitches, hosts a number of the different leagues which form RoboCupSoccer.

Some of the pitches in the main soccer arena.

My first port of call was the Standard Platform League, where the round 5 champions cup match between SPQR Team and rUNSWift was taking place. SPQR ran out winners and advanced to round 6. In this league, all teams compete with identical robots (currently the humanoid NAO by Aldebaran). The robots operate fully autonomously, meaning that there is no external control from either humans or computers.

Standard Platform League: round 5 champions cup match between SPQR Team and rUNSWift.

The Humanoid AdultSize league is arguably the most challenging of the leagues, with many constraints placed on the robots to make them as human-like as possible. For example, they must have roughly human-like body proportions, they need to walk on two legs, and they are only allowed to use human-like sensors (up to two cameras to sense the environment). In this AdultSize competition, two robots from each team compete, and the team members walk behind the robots to catch them in case of a fall. Such a mishap could prove costly in terms of potential hardware damage.

Action from the Humanoid AdultSize League.

The RoboCup Rescue Robot League sees teams developing robotic systems with the goal of enabling emergency responders to perform extremely hazardous tasks from safer stand-off distances. During the competition, teams compete in a round-robin, putting their robots through their paces on a number of different challenges. The leading teams following this initial phase progress to the finals on Sunday. The tasks include navigating in complex environments, opening doors, and sensing. Teams may run the machines completely autonomously, or with some assistive control. More points are awarded for completely autonomous operation.

RoboCup Rescue arena from above.

You can keep up with more RoboCup2024 news here.

Congratulations to the #ICRA2024 best paper winners | https://robohub.org/congratulations-to-the-icra2024-best-paper-winners/ | Mon, 20 May 2024

The 2024 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various categories have been announced. The recipients were revealed during an award luncheon at the conference, which took place from 13-17 May in Yokohama, Japan.


IEEE ICRA Best Paper Award in Automation

Winner

TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester

Finalists

  • A Movable Microfluidic Chip with Gap Effect for Manipulation of Oocytes, Shuzhang Liang, Satoshi Amaya, Hirotaka Sugiura, Hao Mo, Yuguo Dai, and Fumihito Arai
  • Under Pressure: Learning-Based Analog Gauge Reading in the Wild, Maurits Reitsma, Julian Keller, Kenneth Blomqvist, and Roland Siegwart
  • Efficient Composite Learning Robot Control Under Partial Interval Excitation, Tian Shi, Weibing Li, Haoyong Yu, and Yongping Pan
  • MORALS: Analysis of High-Dimensional Robot Controllers via Topological Tools in a Latent Space, Ewerton Vieira, Aravind Sivaramakrishnan, Sumanth Tangirala, Edgar Granados, Konstantin Mischaikow, and Kostas E. Bekris

IEEE ICRA Best Paper Award in Cognitive Robotics

Winner

VLFM: Vision-Language Frontier Maps for Semantic Navigation, Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher

Finalists

  • NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Resilient Legged Local Navigation: Learning to Traverse with Compromised Perception End-to-End, Chong Zhang, Jin Jin, Jonas Frey, Nikita Rudin, Matias Mattamala, Cesar Cadena Lerma, and Marco Hutter
  • Learning Continuous Control with Geometric Regularity from Robot Intrinsic Symmetry, Shengchao Yan, Baohe Zhang, Yuan Zhang, Joschka Boedecker, and Wolfram Burgard
  • Learning Vision-Based Bipedal Locomotion for Challenging Terrain, Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart Jaap Van Marum, Jeremy Dao, Chanho Kim, and Alan Fern

IEEE ICRA Best Paper Award in Robot Manipulation

Winner

SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention, Isabel Leal, Krzysztof Choromanski, Deepali Jain, Avinava Dubey, Jacob Varley, Michael S. Ryoo, Yao Lu, Frederick Liu, Vikas Sindhwani, Tamas Sarlos, Kenneth Oslund, Karol Hausman, Quan Vuong, and Kanishka Rao

Finalists

  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Marin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans, Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, and Shubham Tulsiani
  • Hearing Touch: Audio-Visual Pretraining for Contact-Rich Manipulation, Jared Mejia, Victoria Dean, Tess Hellebrekers, and Abhinav Gupta
  • DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces, Won Kyung Do, Ankush Ankush Dhawan, Mathilda Kitzmann, and Monroe Kennedy
  • Constrained Bimanual Planning with Analytic Inverse Kinematics, Thomas Cohn, Seiji Shaw, Max Simchowitz, and Russ Tedrake

IEEE ICRA Best Paper Award on Human-Robot Interaction

Winner

CoFRIDA: Self-Supervised Fine-Tuning for Human-Robot Co-Painting, Peter Schaldenbrand, Gaurav Parmar, Jun-Yan Zhu, James Mccann, and Jean Oh

Finalists

  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments, Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, and Rainer Stiefelhagen
  • Robot-Assisted Navigation for Visually Impaired through Adaptive Impedance and Path Planning, Pietro Balatti, Idil Ozdamar, Doganay Sirintuna, Luca Fortini, Mattia Leonori, Juan M. Gandarias, and Arash Ajoudani
  • Incremental Learning of Full-Pose Via-Point Movement Primitives on Riemannian Manifolds, Tilman Daab, Noémie Jaquier, Christian R. G. Dreher, Andre Meixner, Franziska Krebs, and Tamim Asfour
  • Supernumerary Robotic Limbs to Support Post-Fall Recoveries for Astronauts, Erik Ballesteros, Sang-Yoep Lee, Kalind Carpenter, and Harry Asada

IEEE ICRA Best Paper Award in Medical Robotics

Winner

Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons

Finalists

  • Intraoperatively Iterative Hough Transform Based In-plane Hybrid Control of Arterial Robotic Ultrasound for Magnetic Catheterization, Zhengyang Li, Magejiang Yeerbulati, and Qingsong Xu
  • Efficient Model Learning and Adaptive Tracking Control of Magnetic Micro-Robots for Non-Contact Manipulation, Yongyi Jia, Shu Miao, Junjian Zhou, Niandong Jiao, Lianqing Liu, and Xiang Li
  • Colibri5: Real-Time Monocular 5-DoF Trocar Pose Tracking for Robot-Assisted Vitreoretinal Surgery, Shervin Dehghani, Michael Sommersperger, Mahdi Saleh, Alireza Alikhani, Benjamin Busam, Peter Gehlbach, Ioan Iulian Iordachita, Nassir Navab, and M. Ali Nasseri
  • Hybrid Volitional Control of a Robotic Transtibial Prosthesis using a Phase Variable Impedance Controller, Ryan Posh, Jonathan Allen Tittle, David Kelly, James Schmiedeler, and Patrick M. Wensing
  • Design and Implementation of a Robotized Hand-held Dissector for Endoscopic Pulmonary Endarterectomy, Runfeng Zhu, Xilong Hou, Wei Huang, Lei Du, Zhong Wu, Hongbin Liu, Henry Chu, and Qing Xiang Zhao

IEEE ICRA Best Paper Award on Mechanisms and Design

Winner

Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu

Finalists

  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Lissajous Curve-Based Vibrational Orbit Control of a Flexible Vibrational Actuator with a Structural Anisotropy, Yuto Miyazaki and Mitsuru Higashimori
  • Dynamic Modeling of Wing-Assisted Inclined Running with a Morphing Multi-Modal Robot, Eric Sihite, Alireza Ramezani, and Gharib Morteza

IEEE ICRA Best Paper Award on Multi-Robot Systems

Winner

Do We Run Large-Scale Multi-Robot Systems on the Edge? More Evidence for Two-Phase Performance in System Size Scaling, Jonas Kuckling, Robin Luckey, Viktor Avrutin, Andrew Vardy, Andreagiovanni Reina, and Heiko Hamann

Finalists

  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Learning for Dynamic Subteaming and Voluntary Waiting in Heterogeneous Multi-Robot Collaborative Scheduling, Williard Joshua Jose and Hao Zhang
  • Asynchronous Distributed Smoothing and Mapping via On-Manifold Consensus ADMM, Daniel Mcgann, Kyle Lassak, and Michael Kaess
  • Uncertainty-Bounded Active Monitoring of Unknown Dynamic Targets in Road-Networks with Minimum Fleet, Shuaikang Wang, Yiannis Kantaros, and Meng Guo

IEEE ICRA Best Paper Award in Service Robotics

Winner

Learning to Walk in Confined Spaces Using 3D Representation, Takahiro Miki, Joonho Lee, Lorenz Wellhausen, and Marco Hutter

Finalists

  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots, Federico Magistri, Rodrigo Marcuzzi, Elias Ariel Marks, Matteo Sodano, Jens Behley, and Cyrill Stachniss
  • CoPAL: Corrective Planning of Robot Actions with Large Language Models, Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, and Michael Gienger
  • CalliRewrite: Recovering Handwriting Behaviors from Calligraphy Images without Supervision, Yuxuan Luo, Zekun Wu, and Zhouhui Lian

IEEE ICRA Best Paper Award in Robot Vision

Winner

NGEL-SLAM: Neural Implicit Representation-based Global Consistent Low-Latency SLAM System, Yunxuan Mao, Xuan Yu, Kai Wang, Yue Wang, Rong Xiong, and Yiyi Liao

Finalists

  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • Deep Evidential Uncertainty Estimation for Semantic Segmentation under Out-Of-Distribution Obstacles, Siddharth Ancha, Philip Osteen, and Nicholas Roy
  • SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud Tracking, Yu Lin, Zhiheng Li, Yubo Cui, and Zheng Fang
  • Ultrafast Square-Root Filter-based VINS, Yuxiang Peng, Chuchu Chen, and Guoquan Huang
  • Universal Visual Decomposer: Long-Horizon Manipulation Made Easy, Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Jason Ma, and Luca Weihs

IEEE ICRA Best Paper Award on Unmanned Aerial Vehicles

Winner

Time-Optimal Gate-Traversing Planner for Autonomous Drone Racing, Chao Qin, Maxime Simon Joseph Michet, Jingxiang Chen, and Hugh H.-T. Liu

Finalists

  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao
  • Co-Design Optimisation of Morphing Topology and Control of Winged Drones, Fabio Bergonti, Gabriele Nava, Valentin Wüest, Antonello Paolino, Giuseppe L’Erario, Daniele Pucci, and Dario Floreano
  • FC-Planner: A Skeleton-guided Planning Framework for Fast Aerial Coverage of Complex 3D Scenes, Chen Feng, Haojia Li, Mingjie Zhang, Xinyi Chen, Boyu Zhou, and Shaojie Shen
  • Sequential Trajectory Optimization for Externally-Actuated Modular Manipulators with Joint Locking, Jaeu Choe, Jeongseob Lee, Hyunsoo Yang, Hai-Nguyen (Hann) Nguyen, and Dongjun Lee
  • Spatial Assisted Human-Drone Collaborative Navigation and Interaction through Immersive Mixed Reality, Luca Morando and Giuseppe Loianno

IEEE ICRA Best Student Paper Award

Winner

Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

IEEE ICRA Best Conference Paper Award

Winners

  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

What’s coming up at #ICRA2024? | https://robohub.org/whats-coming-up-at-icra2024/ | Fri, 10 May 2024

The 2024 IEEE International Conference on Robotics and Automation (ICRA) will take place from 13-17 May, in Yokohama, Japan. The event will feature plenary and keynote talks, technical sessions, posters, workshops and tutorials.

Plenary speakers

There are three plenary talks at the conference this year:

  • Yoky Matsuoka – How to Turn a Roboticist into a Corporate Explorer
  • Sami Haddadin – The Great Robot Accelerator: Collective Learning of Optimal Embodied AI
  • Sunil K Agrawal – Rehabilitation Robotics: How to Improve Daily Functions in People with Impairments?

Keynote talks

There will be 15 keynote talks, given by:

  • Lianqing Liu – Biosyncretic sensing, actuation and intelligence for robotics
  • Dawn M. Tilbury – Digital Twins for Manufacturing Automation
  • Claudio Pacchierotti – Beyond Force Feedback: Cutaneous Haptics in Human-centered Robotics
  • Yu Sun – Medical Robotics for Cell Surgery – Science and Applications
  • Yasuhisa Hirata – Adaptable AI-enabled Robots to Create a Vibrant Society – Moonshot R&D Projects in Japan
  • Calin Belta – Formal Methods for Safety-Critical Control
  • Manuel Catalano – Robots in the Wild: From Research Labs to the Real World
  • Harold Soh – Building Guidance Bridges with Generative Models for Robot Learning and Control
  • Lorenzo Sabattini – Unleashing the power of many: decentralized control of multi-robot systems
  • Myunghee Kim – Human-wearable robot co-adaptation
  • Yoko Yamanishi – Emergent Functions of Electrically-induced Bubbles and Intra-cellular-Cybernetic Avatar
  • Kensuke Harada – Robotic manipulation aiming for industrial applications
  • Iolanda Leite – The Quest for Social Robot Autonomy
  • Rong Xiong – Integration of Robotics and AI: Changes and Challenges
  • Mariana Medina-Sánchez – Tiny Robots, Big Impact: Transforming Gynecological Care

Tutorials

The tutorials will be held on Monday 13 May and Friday 17 May.

Workshops

The workshops will also be held on Monday 13 May and Friday 17 May. There are 73 to choose from this year.

You can see the programme overview here, with a detailed programme available here.

AI holidays 2023 | https://robohub.org/ai-holidays-2023/ | Sun, 31 Dec 2023

Thanks to those who sent and suggested AI and robotics-themed holiday videos, images, and stories. Here’s a sample to get you into the spirit this season.


Josh Miller used ChatGPT to create a series of images depicting the 12 days of Christmas. Prompt: “I want to make 12 images in an early 1980s era to depict the twelve days of Christmas”.

See the pictures in higher definition here.


“Santa’s Secret: Robots Saved Christmas! | CPS XMAS video 2023” from Chair of Cyber-Physical Systems.


“AI rewriting classic Christmas stories” from GingerPale.


Using PaLM2 to generate haikus.

Prompt: “Write me a haiku about an alternative xmas”.
No gifts, no grand feast,
Just laughter and joy, warmth spread,
Kindness fills the air.

Prompt: “Write a haiku about the winter holiday season”.
Snow blankets the land,
Hearts aglow with love’s warmth,
Joyous cheer surrounds.


“Put a Bow on It | Happy Holidays” from Boston Dynamics.


“Christmas Magic: Animated Backgrounds for an Enchanting Atmosphere” from AI-ArtificialIntelligence.


“XMAS – Episode 3: AI – Christmas music for the HAKOM Christmas party” from HAKOM Time Series.


“Christmas tree in python notebook with Turtle” from Learn Upwards.


“FZI Living Lab Christmas Robotics 2023 – Robotic Reindeer Rescue” from FZI channel.


A letter to Santa generated by PaLM2. Prompt: “Write me a letter to Santa Claus. I don’t want any presents, just inspiration in my machine learning research for 2024.”

Dear Santa Claus,

As the year comes to a close, I find myself reflecting on the remarkable advancements in machine learning that have transformed our world. From self-driving cars to personalized medicine, these innovations have opened up new possibilities and enriched our lives in ways we could have only imagined a few decades ago.

I am writing to you today not to ask for material gifts, but rather for something more precious: inspiration to continue exploring the frontiers of machine learning in the year to come. As a researcher in this field, I am eager to contribute to the development of algorithms and techniques that can further enhance our understanding of the world around us and create solutions to pressing global challenges.

I am particularly fascinated by the potential of machine learning to address the climate crisis. By harnessing the power of data and computation, we can develop more efficient renewable energy sources, optimize transportation systems, and mitigate the impact of extreme weather events. Additionally, I believe that machine learning can play a crucial role in improving education, healthcare, and environmental sustainability.

I know that you have a special ability to see the world in a unique and insightful way. I hope that you can share some of your wisdom with me and guide me towards promising research directions. I am eager to embark on new challenges and make a positive impact on the world through machine learning.

Thank you for considering my request, Santa Claus. I look forward to your guidance and inspiration as I pursue my research endeavors in the year ahead.

With warmest regards,

[Your Name]

AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design | https://robohub.org/aaai-fall-symposium-patricia-alves-oliveira-on-human-robot-interaction-design/ | Tue, 31 Oct 2023

An illustration of electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0.

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium, was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia dreamed of becoming a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc before heading to industry as a designer on the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object), designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of helping children explore their creativity during play. A machine learning algorithm learns the child’s pattern of play and adapts the robot’s behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer, in which she always has in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled with that, the process is governed by core design tenets, such as the customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman | https://robohub.org/code-to-joy-why-everyone-should-learn-a-little-programming-interview-with-michael-littman/ | Thu, 26 Oct 2023

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

Could you start by telling us a bit about the book, and who the intended audience is?

The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have. I think it can be very intimidating for a lot of people, but I don’t think it needs to be. I think that the foundation is there for everybody and it’s just a matter of tapping into that and building on top of it. What I’m hoping, and what I’m seeing happening, is that machine learning and AI is helping to meet people part way. The machines are getting better at listening as we try to get better at telling them what to do.

What made you decide to write the book, what was the inspiration behind it?

I’ve taught large introductory computer science classes and I feel like there’s an important message in there about how a deeper knowledge of computing can be very empowering, and I wanted to bring that to a larger audience.

Could you talk a bit about the structure of the book?

The meat of the book talks about the fundamental components that make up programs, or, in other words, that make up the way that we tell computers what to do. Each chapter covers a different one of those topics – loops, variables, conditionals, for example. Within each chapter I talk about the ways in which this concept is already familiar to people, the ways that it shows up in regular life. I point to existing pieces of software or websites where you can make use of that one particular concept to tell computers what to do. Each chapter ends with an introduction to some concepts from machine learning that can help create that particular programming construct. For example, in the chapter on conditionals, I talk about the ways that we use the word “if” in regular life all the time. Weddings, for example, are very conditionally structured, with statements like “if anyone has anything to say, speak now or forever hold your peace”. That’s kind of an “if-then” statement. In terms of tools to play with, I talk about interactive fiction. Partway between video games and novels is this notion that you can make a story that adapts itself while it’s being read. What makes that interesting is this notion of conditionals – the reader can make a choice and that will cause a branch. There are really wonderful tools for being able to play with this idea online, so you don’t have to be a full-fledged programmer to make use of conditionals. The machine learning concept introduced there is decision trees, which is an older form of machine learning where you give a system a bunch of examples and then it outputs a little flowchart for decision making.
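(As an editorial aside, not an example from the book: the “examples in, flowchart out” behaviour Michael describes can be reproduced in a few lines of scikit-learn. The toy weather data below is invented purely for illustration.)

    # Tiny, hedged illustration of learning a decision tree from examples
    # and printing the resulting if-then "flowchart". The data is invented.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each example: [is_raining (0/1), temperature in C]; label: play outside?
    X = [[0, 25], [0, 10], [1, 22], [1, 5], [0, 30], [1, 15]]
    y = ["yes", "no", "no", "no", "yes", "no"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # export_text renders the learned rules as nested if-then branches.
    print(export_text(tree, feature_names=["is_raining", "temperature"]))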

Do you touch on generative AI in the book?

The book was already in production by the time ChatGPT came out, but I was ahead of the curve, and I did have a section specifically about GPT-3 (pre-ChatGPT) which talks about what it is, how machine learning creates it, and how it itself can be helpful in making programs. So, you see it from both directions. You get the notion that this tool actually helps people tell machines what to do, and also the way that humanity created this tool in the first place using machine learning.

Did you learn anything while you were writing the book that was particularly interesting or surprising?

Researching the examples for each chapter caused me to dig into a whole bunch of topics. This notion of interactive fiction, and that there’s tools for creating interactive fiction, I found pretty interesting. When researching another chapter, I found an example from a Jewish prayer book that was just so shocking to me. So, Jewish prayer books (and I don’t know if this is true in other belief systems as well, but I’m mostly familiar with Judaism), contain things you’re supposed to read, but they have little conditional markings on them sometimes. For example, one might say “don’t read this if it’s a Saturday”, or “don’t read this if it’s a full moon”, or “don’t read if it’s a full moon on a Saturday”. I found one passage that actually had 14 different conditions that you had to check to decide whether or not it was appropriate to read this particular passage. That was surprising to me – I had no idea that people were expected to do so much complex computation during a worship activity.

Why is it important that everybody learns a little programming?

It’s really important to keep in mind the idea that at the end of the day what AI is doing is making it easier for us to tell machines what to do, and we should share that increased capability with a broad population. It shouldn’t just be the machine learning engineers who get to tell computers what to do more easily. We should find ways of making this easier for everybody.

Because computers are here to help, but it’s a two-way street. We need to be willing to learn to express what we want in a way that can be carried out accurately and automatically. If we don’t make that effort, then other parties, companies often, will step in and do it for us. At that point, the machines are working to serve someone else’s interest instead of our own. I think it’s become absolutely essential that we restore a healthy relationship with these machines before we lose any more of our autonomy.

Any final thoughts or takeaways that we should bear in mind?

I think there’s a message here for computer science researchers, as well. When we tell other people what to do, we tend to combine a description or a rule, something that’s sort of program-like, with examples, something that’s more data-like. We just intermingle them when we talk to each other. At one point when I was writing the book, I had a dishwasher that was acting up and I wanted to understand why. I read through its manual, and I was struck by how often it was the case that in telling people what to do with the dishwasher, the authors would consistently mix together a high-level description of what they are telling you to do with some particular, vivid examples: a rule for what to load into the top rack, and a list of items that fit that rule. That seems to be the way that people want to both convey and receive information. What’s crazy to me is that we don’t program computers that way. We either use something that’s strictly programming, all rules, no examples, or we use machine learning, where it’s all examples, no rules. I think the reason that people communicate this way with each other is because those two different mechanisms have complementary strengths and weaknesses and when you combine the two together, you maximize the chance of being accurately understood. And that’s the goal when we’re telling machines what to do. I want the AI community to be thinking about how we can combine what we’ve learned about machine learning with something more programming-like to make a much more powerful way of telling machines what to do. I don’t think this is a solved problem yet, and that’s something that I really hope that people in the community think about.


Code to Joy: Why Everyone Should Learn a Little Programming is available to buy now.

Michael Littman

Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation.

Interview with Marek Šuppa: insights into RoboCupJunior | https://robohub.org/interview-with-marek-suppa-insights-into-robocupjunior/ | Mon, 16 Oct 2023

A RoboCupJunior soccer match in action.

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in-person in Thailand last year after two years of a pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students and at the same time it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying to both make it more compelling and captivating for the students and at the same time ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from respective countries form a so-called “SuperTeam” and compete against another “SuperTeam”, as if each SuperTeam were a single team. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams and they compete on a field that is six times larger than the “standard” fields that are used for the individual games. While in the individual matches each team can play with two robots at most (resulting in a 2v2 game), in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots that play on the SuperTeam field during a SuperTeam match. The setup is very similar to the Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: it was a lot of fun for both the participants and the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another and, additionally, the referees did not have a way of communicating with the robots. The result was that not only was there little coordination among robots of the same SuperTeam, but also, whenever the game needed to be stopped, the referees had to physically run after the robots on the field to catch them, for instance to do a kickoff after a goal was scored. Although hilarious, it’s far from how we would imagine the SuperTeam games to look.

The RoboCupJunior Soccer Standard Communication Modules aim to address both of these gaps. The module itself is a small device that is attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices themselves also support direct message exchange between robots on a single SuperTeam, meaning the teams do not have to invest in figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, meaning not only that everyone can build their own Standard Communication Module if they’d like but also that the community can participate in its development, which makes it an interesting addition to RoboCupJunior Soccer.
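
For a flavour of what such a common platform makes possible, here is a purely illustrative Python sketch – with hypothetical command and message names and an in-memory stand-in for the Bluetooth link, not the actual open-source firmware or its API – of a referee broadcasting commands to every robot on the field while teammates exchange messages:

# Illustrative sketch only: an in-memory stand-in for the Bluetooth link provided by the
# real Standard Communication Modules. Command and message names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Robot:
    """A robot that reacts to referee commands and to messages from its own SuperTeam."""
    name: str
    team: str
    running: bool = False
    inbox: list = field(default_factory=list)

    def on_referee_command(self, command: str) -> None:
        # "STOP" halts play instantly (no more chasing robots across the field), "KICKOFF" restarts it.
        if command == "STOP":
            self.running = False
        elif command == "KICKOFF":
            self.running = True

    def receive(self, sender: str, payload: dict) -> None:
        self.inbox.append((sender, payload))

class Pitch:
    """Routes referee broadcasts to all robots and team messages to teammates only."""
    def __init__(self, robots):
        self.robots = robots

    def referee_broadcast(self, command: str) -> None:
        for robot in self.robots:
            robot.on_referee_command(command)

    def team_message(self, sender: Robot, payload: dict) -> None:
        for robot in self.robots:
            if robot.team == sender.team and robot is not sender:
                robot.receive(sender.name, payload)

robots = [Robot(f"robot{i}", team="blue" if i < 5 else "red") for i in range(10)]
pitch = Pitch(robots)
pitch.referee_broadcast("KICKOFF")                            # all 10 robots start playing
pitch.team_message(robots[0], {"ball_seen_at": (1.2, -0.4)})  # share the ball position with teammates
pitch.referee_broadcast("STOP")                               # every robot halts on the whistle
print(robots[1].inbox)    # [('robot0', {'ball_seen_at': (1.2, -0.4)})]
print(robots[0].running)  # False after the STOP broadcast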

RoboCupJunior Soccer teams getting ready for the competition.

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff” problem described above. Although we’ve done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it in a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves did work quite well and for some of us, who happened to have experience with “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also identified potential areas for future improvement. The modules themselves do not have a power source of their own and were powered by the robots themselves. We didn’t think this would be a problem but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when the robot decides to aggressively accelerate – which in turn means some of the modules disconnect when the voltage drops sharply. However, it ended up being a nice lesson for everyone involved, one that we can certainly learn from when we design the next iterations.


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we started to observe in recent years which surprised us organisers, to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us, the organisers, work in areas related to robotics, computer science, and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought that it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or difficult to set up for the high school students that comprise the majority of RoboCupJunior students.

Well, to our great surprise, some of the more advanced teams have started to utilise methods and technologies that are very close to the current state of the art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing and in Soccer to detect the ball, the goals, and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to do all the steps necessary for a successful deployment of this technology: gather a dataset, finetune the deep-learning model and deploy it on their robots – all of which is far from trivial and is very close to how these technologies get used in both research and industry.

Although we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect that in the future we will see it become much more prevalent, especially as the technology and the tooling around it becomes more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.
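
As a rough sketch of that gather–finetune–deploy workflow – assuming, for illustration only, the Ultralytics YOLO Python package as the off-the-shelf implementation; the dataset file soccer_ball.yaml and the image names are hypothetical, and teams’ actual pipelines will differ:

# Sketch of the gather -> finetune -> deploy loop, using the Ultralytics YOLO package
# as one possible off-the-shelf implementation. "soccer_ball.yaml" (class names and
# image paths) and "camera_frame.jpg" are hypothetical placeholders.
from ultralytics import YOLO

# 1. Start from a small pretrained checkpoint and finetune it on the team's own images.
model = YOLO("yolov8n.pt")
model.train(data="soccer_ball.yaml", epochs=50, imgsz=640)

# 2. Run inference on a frame from the robot's camera.
results = model("camera_frame.jpg")
for box in results[0].boxes:
    print(box.cls, box.xyxy)  # detected class id and bounding-box corners

# 3. Export to a lightweight format for deployment on the robot's onboard computer.
model.export(format="onnx")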

Action from RoboCupJunior Soccer 2023.

How can people get involved in RCJ (either as a participant or an organiser)?

A very good question!

The best place to start would be the RoboCupJunior website where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCup Junior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa

Marek Šuppa

Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he is not smart enough to do all the programming by himself. Since then, he’s been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as the Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to provide others with a chance to have a similar experience by organising the RoboCupJunior competition as part of the Executive Committee.

]]>
High-tech microscope with ML software for detecting malaria in returning travellers https://robohub.org/high-tech-microscope-with-ml-software-for-detecting-malaria-in-returning-travellers/ Thu, 14 Sep 2023 05:29:24 +0000 https://robohub.org/?p=208220

By Deborah Pirchner

Malaria is an infectious disease claiming more than half a million lives each year. Because traditional diagnosis takes expertise and the workload is high, an international team of researchers investigated if diagnosis using a new system combining an automatic scanning microscope and AI is feasible in clinical settings. They found that the system identified malaria parasites almost as accurately as experts staffing microscopes used in standard diagnostic procedures. This may help reduce the burden on microscopists and increase the feasible patient load.

Each year, more than 200 million people fall sick with malaria and more than half a million of these infections lead to death. The World Health Organization recommends parasite-based diagnosis before starting treatment for the disease caused by Plasmodium parasites. There are various diagnostic methods, including conventional light microscopy, rapid diagnostic tests and PCR.

The standard for malaria diagnosis, however, remains manual light microscopy, during which a specialist examines blood films with a microscope to confirm the presence of malaria parasites. Yet, the accuracy of the results depends critically on the skills of the microscopist and can be hampered by fatigue caused by excessive workloads of the professionals doing the testing.

Now, writing in Frontiers in Malaria, an international team of researchers has assessed whether a fully automated system, combining AI detection software and an automated microscope, can diagnose malaria with clinically useful accuracy.

“At an 88% diagnostic accuracy rate relative to microscopists, the AI system identified malaria parasites almost, though not quite, as well as experts,” said Dr Roxanne Rees-Channer, a researcher at The Hospital for Tropical Diseases at UCLH in the UK, where the study was performed. “This level of performance in a clinical setting is a major achievement for AI algorithms targeting malaria. It indicates that the system can indeed be a clinically useful tool for malaria diagnosis in appropriate settings.”

AI delivers accurate diagnosis

The researchers analysed more than 1,200 blood samples from travellers who had returned to the UK from malaria-endemic countries. The study tested the accuracy of the AI and automated microscope system in a true clinical setting under ideal conditions.

They evaluated samples using both manual light microscopy and the AI-microscope system. By hand, 113 samples were diagnosed as malaria parasite positive, whereas the AI system correctly identified 99 of these as positive, which corresponds to an 88% accuracy rate.
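
To make the arithmetic behind that figure explicit (a quick check using only the counts reported above), the rate is simply the share of microscopy-positive samples that the automated system also flagged:

# Back-of-the-envelope check using only the counts reported above.
positives_by_microscopy = 113  # samples diagnosed positive by manual light microscopy
positives_found_by_ai = 99     # of those, samples the AI system also flagged as positive

agreement = positives_found_by_ai / positives_by_microscopy
print(f"{agreement:.1%}")      # 87.6%, reported as an accuracy rate of roughly 88%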

“AI for medicine often posts rosy preliminary results on internal datasets, but then falls flat in real clinical settings. This study independently assessed whether the AI system could succeed in a true clinical use case,” said Rees-Channer, who is also the lead author of the study.

Automated vs manual

The fully automated malaria diagnostic system the researchers put to the test includes both hardware and software. An automated microscopy platform scans blood films, and malaria detection algorithms process the images to detect parasites and quantify how many are present.

Automated malaria diagnosis has several potential benefits, the scientists pointed out. “Even expert microscopists can become fatigued and make mistakes, especially under a heavy workload,” Rees-Channer explained. “Automated diagnosis of malaria using AI could reduce this burden for microscopists and thus increase the feasible patient load.” Furthermore, these systems deliver reproducible results and can be widely deployed, the scientists wrote.

Despite the 88% accuracy rate, the automated system also falsely identified 122 samples as positive, which can lead to patients receiving unnecessary anti-malarial drugs. “The AI software is still not as accurate as an expert microscopist. This study represents a promising datapoint rather than a decisive proof of fitness,” Rees-Channer concluded.

Read the research in full

Evaluation of an automated microscope using machine learning for the detection of malaria in travelers returned to the UK, Roxanne R. Rees-Channer, Christine M. Bachman, Lynn Grignard, Michelle L. Gatton, Stephen Burkot, Matthew P. Horning, Charles B. Delahunt, Liming Hu, Courosh Mehanian, Clay M. Thompson, Katherine Woods, Paul Lansdell, Sonal Shah, Peter L. Chiodini, Frontiers in Malaria (2023).

]]>
[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics https://robohub.org/a-list-of-resources-articles-and-opinion-pieces-relating-to-large-language-models-robotics/ Wed, 23 Aug 2023 09:51:24 +0000 https://robohub.org/?p=207140 Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.

We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)

What LLMs are and how they work

Journal, conference, arXiv, and other articles

Newspaper, magazine, University website, and blogpost articles

Reports

Podcasts and video discussions

Focus on LLMs and education

Relating to art and other creative processes

Pertaining to robotics

Misinformation, fake news and the impact on journalism

Regulation and policy

]]>
Interview with Roberto Figueiredo: the RoboCup experience https://robohub.org/interview-with-roberto-figueiredo-the-robocup-experience/ Fri, 04 Aug 2023 10:54:26 +0000 https://robohub.org/?p=207916
Roberto Figueiredo is a master’s student at the University of Aveiro. He is a member of the Bold Hearts RoboCup team which competes in the Humanoid KidSize soccer league. He is currently the local representative for the Junior Rescue Simulation. We spoke to Roberto about his RoboCup journey, from the junior to the major leagues, and his experience of RoboCup events.

When was your first RoboCup event and which competition did you take part in?

I started in 2016 in the Junior leagues with my high school and I took part in the rescue simulation competition (although I originally joined the on-stage competition). This first event actually happened in Portugal, and it was similar to a workshop. We qualified to go to the world cup in rescue simulation, in Leipzig, Germany, and we ended up in second place. That was really nice, and it was my first contact with RoboCup, and with robotics generally. I’d been working with electronics in the past, but simulation gave me a bit of an introduction to the more theoretical aspects of robotics, and to AI in general. Rescue simulation makes you think of ways to make the robots independent and not manually controlled by humans.

Roberto’s first RoboCup in 2016, Leipzig, pictured with the Singapore team celebrating after the finals.

Could you tell us about the subsequent RoboCup events that you took part in?

In 2017 we qualified to go to Nagoya, Japan, which was not just an amazing RoboCup, but an amazing trip. That’s another good thing about robotics, you get to meet a lot of new people in new countries. We did quite well in this competition as well, I think we reached 5th place.

After that we went to European RoboCup Junior in Italy. The following year was my last RoboCup as a junior, which was in Sydney. That was also an interesting event and I got to chat a bit more with the majors and understand how their teams worked. By this point, I had gained more experience, and I felt ready to get involved with a major league RoboCup team.

There is a big gap between the junior and major leagues. When I joined my team (the Bold Hearts), most of the team were PhDs and I was just a second year bachelor’s student so it was quite hard to pick up all the knowledge. However, if you are persistent enough and you are interested in, and passionate about, robotics you’ll get the hang of it and you’ll learn by trial and error.

EuroRoboCup 2022 in Portugal. Roberto (kneeling in photo) was part of the organising committee.

When was your first competition with the team in the major league?

My first competition was actually last year, in Thailand. We didn’t perform as we would have liked to; however, there is much more to RoboCup than just the competition – it is now more of a scientific and knowledge-sharing event, and it’s unique. Just this year, in Bordeaux, we had a problem with our robots. Every time we disconnected the ethernet cable, the robot just stopped playing, and we couldn’t figure out what was happening. I asked another team that was using the same software – they had figured out the problem before and they told us how to solve it. I don’t think you’ll see that in other competitions. Every team has a joint objective, which is making science progress, making friendships, and making other teams better by sharing their knowledge. That’s really unique.

How did you join the Bold Hearts team?

I decided to do my master’s in the UK (at the University of Hertfordshire), to experience a different country and a different style of education. When I joined, I knew there was a team so I was already looking forward to joining. After a couple of years of work, we finally got to go to a competition as a team. It’s been an amazing time and a huge learning experience.

What is your role on the team?

In our team, everyone does a bit of everything. We still have a lot of problems to solve – on both the hardware and software side. All of us currently are computer scientists so it’s a bit more of a struggle to work on the hardware side. So, I do a bit of everything, both AI and non-AI related problems. For example, I’ve done some 3d modelling for the robots, and I’m currently working on the balancing problem. We all work together on the problems which is amazing because you get to see a bit of everything and learn from everyone. Robotics is a very multidisciplinary field. You get to learn about all kinds of topics: mechanical engineering, electrical engineering, machine learning, coding in general.

The Bold Hearts’ qualification video for this year’s RoboCup competition

Could you tell us about this year’s competition (which took place in Bordeaux)?

This year we were a lot more prepared than last year, when we’d just come back from COVID, and all of our experienced members had recently left the team, due to finishing their PhDs and starting work. Creating a successful robot team is a huge integration problem. There are so many pieces that need to go together and work perfectly for the robots to function, and if one fails it looks like your system isn’t doing anything. We got walking working perfectly this year, we had vision working well too, and we had a stable decision tree, and we were able to listen to the controller (which is like a referee and passes on information about fouls, game start and stops etc.). However, we had some bugs in the decision tree that made everything fall apart and we spent a lot of time trying to debug it. This happens to a lot of teams. However, you can still appreciate the work and progress of what they have done.

RoboCup 2023 in Bordeaux. Roberto (left) with Bold Hearts teammates.

What are the immediate plans for the team?

We are now thinking about joining the simulation competition, which is part of our league. It takes place in the winter season and we’re planning on joining to work on our software. The transition between simulation and hardware is quite hard. You need a good simulation base to be able to transfer directly the knowledge to the robot. We’re working on having a very good simulation so we can transfer, at least more easily, the knowledge learnt in simulation to the robots.

RoboCup is moving more towards AI and learning, which we can see in the 3d simulation. The robots learn a lot of the motion through reinforcement learning, for example. In the physical leagues it’s not as easy as we have to transfer that to the real world, where there is play in the joints, there’s backlash, there’s play in the 3d parts – there are a lot of variables that are not taken into account in simulations.

How has being part of RoboCup inspired your studies and research?

Every time I go to RoboCup I come out thinking about what I’m going to do next. I couldn’t be more inspired. It’s a really intense field but I love it. It makes you want to work really hard and it makes you passionate about science. I did my bachelor’s project related to RoboCup, I joined a master’s course on robotics, I keep asking my Professors if they want to start a team back in Portugal. I’m going to do my master’s thesis on robotics, on humanoids. I think humanoids are a very complex and interesting challenge. There is no one single solution.

About Roberto

Roberto Figueiredo

Roberto Figueiredo is a Portuguese, AI-focused computer scientist with a bachelor’s degree from the University of Hertfordshire. He is currently pursuing a master’s in Robotics and Intelligent Systems at the University of Aveiro, and is passionate about advancing his expertise in robotics. He has long been very enthusiastic about robots and AI, having participated in RoboCup since 2016, starting in the Rescue Simulation league. He has since become the local representative for the Rescue League in Portugal and joined a Major team, Bold Hearts, in the Kid Size league, one of the most challenging in RoboCup Humanoid Soccer.

]]>
#RoboCup2023 in tweets – part 2 https://robohub.org/robocup2023-in-tweets-part-2/ Sun, 16 Jul 2023 09:31:52 +0000 https://robohub.org/?p=207790

As this year’s RoboCup draws to a close, we take a look back at some of the highlights from the second half of the conference. Over the course of the weekend, the event focussed on the latter stages of the competitions, with the winners in all the different leagues decided. If you missed our round-up of the first half of RoboCup, you can catch up here.

]]>
#RoboCup2023 in tweets – part 1 https://robohub.org/robocup2023-in-tweets-part-1/ Sat, 15 Jul 2023 09:31:45 +0000 https://robohub.org/?p=207787

This year’s RoboCup kicked off on 4 July and will run until 10 July. Taking place in Bordeaux, the event will see around 2500 participants from 45 different countries take part in competitions, training sessions, and a symposium. Find out what attendees have been up to in preparation for, and in the first half of, the event.

]]>
What’s coming up at #RoboCup2023? https://robohub.org/whats-coming-up-at-robocup2023/ Mon, 03 Jul 2023 11:47:30 +0000 https://robohub.org/?p=207677 robocup2023 logo
This year, RoboCup will be held in Bordeaux, from 4-10 July. The event will see around 2500 participants from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.

The leagues and their competitions

The league competitions will take place on 6-9 July. You can find out more about the different leagues at these links:

Symposium

The RoboCup symposium will take place on 10 July. The programme can be found here.

There will be three keynote talks:

  • Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
  • Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
  • Laurence Devillers, Socio-affective robots: ethical issues

Find out more at the event website.

]]>
CLAIRE and euRobotics: all questions answered on humanoid robotics https://robohub.org/claire-and-eurobotics-all-questions-answered-on-humanoid-robotics/ Tue, 20 Dec 2022 11:42:18 +0000 https://robohub.org/?p=206188

On 9 December, CLAIRE and euRobotics jointly hosted an All Questions Answered (AQuA) event. This one-hour session focussed on humanoid robotics, and participants could ask questions regarding the current and future state of AI, robotics and human augmentation in Europe.

The questions were fielded by an expert panel, comprising:

  • Rainer Bischoff, euRobotics
  • Wolfram Burgard, Professor of Robotics and AI, University of Technology Nuremberg
  • Francesco Ferro, CEO, PAL Robotics
  • Holger Hoos, Chair of the Board of Directors, CLAIRE

The session was recorded and you can watch in full below:

]]>
Estimating manipulation intentions to ease teleoperation https://robohub.org/estimating-manipulation-intentions-to-ease-teleoperation/ Tue, 06 Dec 2022 11:54:47 +0000 https://robohub.org/?p=206080

Teleoperation is one of the longest-standing application fields in robotics. While full autonomy is still work in progress, the possibility to remotely operate a robot has already opened scenarios where humans can act in risky environments without endangering their own safety, such as when defusing explosives or decommissioning nuclear waste. It also allows one to be present and act even at great distance: underwater, in space, or inside a patient miles away from the surgeon. These are all critical applications, where skilled and qualified operators control the robot after receiving specific training to learn to use the system safely.

Teleoperation for everyone?

The recent pandemic has made the need for immersive telepresence and remote action even more apparent, also for non-expert users: not only could teleoperated robots take vitals or bring drugs to infectious patients, but we could also assist our elderly living far away with chores like moving heavy stuff or cooking. Furthermore, numerous physical jobs could be executed from home.

The recent ANA-Xprize finals have shown how far teleoperation can go (see this impressive video of the winning team), but in such situations both the perceptual and control load lie entirely on the operator. This can be quite taxing on a cognitive level: both perception and action are mediated, by cameras and robotic arms respectively, reducing the user’s situation awareness and natural eye-hand coordination. While robot sensing capabilities and actuators have undergone relevant technological progress, the interface with the user still lacks intuitive solutions facilitating the operator’s job (Rea & Seo, 2022).

Human and robot joining forces

Shared control has gained popularity in recent years, as an approach championing human-machine cooperation: low-level motor control is carried out by the robot, while the human is focused on high-level action planning. To achieve such a blend, the robotic system still needs a timely way to infer the operator’s intention, so as to consequently assist with the execution. Usually, motor intentions are inferred by tracking arm movements or motion control commands (if the robot is operated by means of a joystick), but especially during object manipulation the hand closely follows information gathered by the gaze. In the last decades, increasing evidence in eye-hand coordination studies has shown that gaze reliably anticipates the hand movement target (Hayhoe et al., 2012), providing an early cue about human intention.

Gaze and motion features to estimate intentions

In a contribution presented at IROS 2022 last month (Belardinelli et al., 2022), we introduced an intention estimation model that relies on both gaze and motion features. We collected pick-and-place sequences in a virtual environment, where participants could operate two robotic grippers to grasp objects on a cluttered table. Motion controllers were used to track arm motions and to grasp objects by button press. Eye movements were tracked by the eye-tracker embedded in the virtual reality headset.

Gaze features were computed by defining a Gaussian distribution centered at the gaze position and taking for each object the likelihood of it being the target of visual attention, which was given by the probability mass of that distribution accumulated over the object’s bounding box. For the motion features, the hand pose and velocity were used to estimate the hand’s current trajectory, which was compared to an estimated optimal trajectory to each object. The normalized similarity between the two trajectories defined the likelihood of each object being the target of the current movement.


Figure 1: Gaze features (top) and motion features (bottom) used for intention estimation. In both videos the object highlighted in green is the most likely target of visual attention and of hand movement, respectively.
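
As a simplified, self-contained illustration of the gaze feature (a 2D toy version with made-up object positions, not the code used in the study), one can integrate a Gaussian centred at the gaze point over each object’s bounding box:

# Simplified 2D sketch of the gaze feature: an isotropic Gaussian centred at the gaze
# point, integrated over each object's axis-aligned bounding box. Numbers are made up.
from scipy.stats import norm

def gaze_likelihood(gaze, box, sigma=0.05):
    """Probability mass of a Gaussian centred at `gaze` falling inside `box`.

    gaze: (x, y) gaze position; box: (xmin, ymin, xmax, ymax); sigma: gaze noise.
    For an axis-aligned box and an isotropic Gaussian the integral factorises per axis.
    """
    gx, gy = gaze
    xmin, ymin, xmax, ymax = box
    px = norm.cdf(xmax, loc=gx, scale=sigma) - norm.cdf(xmin, loc=gx, scale=sigma)
    py = norm.cdf(ymax, loc=gy, scale=sigma) - norm.cdf(ymin, loc=gy, scale=sigma)
    return px * py

objects = {
    "red_cube":  (0.10, 0.20, 0.18, 0.28),
    "blue_ball": (0.40, 0.22, 0.48, 0.30),
}
gaze = (0.41, 0.25)  # the user is looking roughly at the blue ball

scores = {name: gaze_likelihood(gaze, box) for name, box in objects.items()}
print(scores)  # the blue ball scores orders of magnitude higher than the red cube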

These features, along with the binary grasping state, were used to train two Gaussian Hidden Markov Models, one on pick and one on place sequences. For 12 different intentions (picking of 6 different objects and placing at 6 different locations) the general accuracy (F1 score) was above 80%, even for occluded objects. Importantly, for both actions, a prediction with over 90% accuracy was already available 0.5 seconds before the end of the movement for at least 70% of the observations. This would allow for an assisting plan to be instantiated and executed by the robot.
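
A minimal sketch of this classification scheme – using the hmmlearn package and random placeholder data in place of the recorded gaze, motion and grasp features, so not the study’s actual models – trains one Gaussian HMM per action and labels a new observation sequence with whichever model scores it highest:

# Minimal sketch of intention classification with one Gaussian HMM per action (hmmlearn).
# The feature sequences below are random placeholders for the gaze/motion/grasp features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_features = 3  # e.g. gaze likelihood, trajectory similarity, grasping state

def make_sequences(n_seq, length, offset):
    """Generate placeholder observation sequences for one action class."""
    return [rng.normal(loc=offset, size=(length, n_features)) for _ in range(n_seq)]

pick_seqs = make_sequences(20, 30, offset=0.0)
place_seqs = make_sequences(20, 30, offset=1.0)

def fit_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM on the concatenated sequences of a single action class."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

pick_hmm, place_hmm = fit_hmm(pick_seqs), fit_hmm(place_seqs)

# Classify a new (possibly partial) observation sequence by comparing log-likelihoods.
new_seq = rng.normal(loc=1.0, size=(15, n_features))
scores = {"pick": pick_hmm.score(new_seq), "place": place_hmm.score(new_seq)}
print(max(scores, key=scores.get))  # "place" for this placeholder sequence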

We also conducted an ablation study to determine the contribution of different feature combinations. While the models with gaze, motion, and grasping features performed better in the cross validation, the improvement with respect to only gaze and grasping state was minimal. Even when checking obstacles nearby at first, in fact, the gaze was already on the target before the hand trajectory became sufficiently discriminative.

We also ascertained that our models could generalize from one hand to the other (when fed the corresponding hand motion features), hence the same models could be used to concurrently estimate each hand’s intention. By feeding each hand’s prediction to a simple rule-based framework, basic bimanual intentions could also be recognized. So, for example, reaching for an object with the left hand while the right hand is going to place the same object on the left hand is considered a bimanual handover.

Figure 2: Online intention estimation: the red frame denotes the current right-hand intention prediction, the green frame the left-hand prediction. Above the scene, the bimanual intention is shown in capital letters.
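
A toy illustration of such a rule-based combination (hypothetical intention labels and rules, not the framework used in the study):

# Toy illustration of combining per-hand intention predictions into a bimanual label.
# The intention labels and rules are hypothetical.
def bimanual_intention(left, right):
    """Map the two single-hand intention labels to a coarse bimanual label."""
    if left == "pick(object_A)" and right == "place(object_A, at=left_hand)":
        return "HANDOVER"       # the right hand hands object_A over to the left hand
    if left.startswith("pick(") and right.startswith("pick("):
        return "DUAL_PICK"      # both hands reach for (possibly different) objects
    return "INDEPENDENT"        # otherwise treat the two hands separately

print(bimanual_intention("pick(object_A)", "place(object_A, at=left_hand)"))  # HANDOVER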

Such an intention estimation model could help an operator to execute such manipulations without focusing on selecting the parameters for the exact motor execution of the pick and place, something we don’t usually do consciously in natural eye-hand coordination, since we have automated such cognitive processes. For example, once a grasping intention is estimated with enough confidence, the robot could autonomously select the best grasp and grasping position and execute the grasp, relieving the operator of carefully monitoring a grasp without tactile feedback and possibly with inaccurate depth estimation.

Further, even if in our setup motion features were not decisive for early intention prediction, they might play a larger role in more complex settings and when extending the spectrum of bimanual manipulations.

Combined with suitable shared control policies and feedback visualizations, such systems could also enable untrained operators to control robotic manipulators transparently and effectively for longer times, reducing the overall mental workload of remote operation.

References

Belardinelli, A., Kondapally, A. R., Ruiken, D., Tanneberg, D., & Watabe, T. (2022). Intention estimation from gaze and motion features for human-robot shared-control object manipulation. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.

Hayhoe, M. M., McKinney, T., Chajka, K., & Pelz, J. B. (2012). Predictive eye movements in natural vision. Experimental brain research, 217(1), 125-136.

Rea, D. J., & Seo, S. H. (2022). Still Not Solved: A Call for Renewed Focus on User-Centered Teleoperation Interfaces. Frontiers in Robotics and AI, 9.

]]>
Bipedal robot achieves Guinness World Record in 100 metres https://robohub.org/bipedal-robot-achieves-guinness-world-record-in-100-metres/ Mon, 03 Oct 2022 09:30:50 +0000 https://robohub.org/?p=205675 Cassie the robot sets 100-metre record, photo by Kegan Sims.

By Steve Lundeberg

Cassie the robot, invented at the Oregon State University College of Engineering and produced by OSU spinout company Agility Robotics, has established a Guinness World Record for the fastest 100 metres by a bipedal robot.

Cassie clocked the historic time of 24.73 seconds at OSU’s Whyte Track and Field Center, starting from a standing position and returning to that position after the sprint, with no falls.

The 100-metre record builds on earlier achievements by the robot, including traversing five kilometres in 2021 in just over 53 minutes. Cassie, the first bipedal robot to use machine learning to control a running gait on outdoor terrain, completed the 5K on Oregon State’s campus untethered and on a single battery charge.

Cassie was developed under the direction of Oregon State robotics professor Jonathan Hurst. The robot has knees that bend like an ostrich’s and operates with no cameras or external sensors, essentially as if blind.

Since Cassie’s introduction in 2017, in collaboration with artificial intelligence professor Alan Fern, OSU students have been exploring machine learning options in Oregon State’s Dynamic Robotics and AI Lab.

“We have been building the understanding to achieve this world record over the past several years, running a 5K and also going up and down stairs,” said graduate student Devin Crowley, who led the Guinness effort. “Machine learning approaches have long been used for pattern recognition, such as image recognition, but generating control behaviors for robots is new and different.”

The Dynamic Robotics and AI Lab melds physics with AI approaches more commonly used with data and simulation to generate novel results in robot control, Fern said. Students and researchers come from a range of backgrounds including mechanical engineering, robotics and computer science.

“Cassie has been a platform for pioneering research in robot learning for locomotion,” Crowley said. “Completing a 5K was about reliability and endurance, which left open the question of, how fast can Cassie run? That led the research team to shift its focus to speed.”

Cassie was trained for the equivalent of a full year in a simulation environment, compressed to a week through a computing technique known as parallelization – multiple processes and calculations happening at the same time, allowing Cassie to go through a range of training experiences simultaneously.
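
To illustrate what parallelization means in this context – a generic sketch with a toy Gymnasium environment, not the Dynamic Robotics and AI Lab’s actual training setup – several simulated environments can be stepped in lockstep so that each iteration of the training loop gathers many times more experience:

# Generic illustration of collecting experience from many simulated environments at once.
# Uses Gymnasium's vectorised API with a toy task; Cassie's real training setup differs.
import gymnasium as gym
import numpy as np

n_envs = 16  # each step advances 16 simulations, multiplying experience per unit of wall-clock time
envs = gym.vector.SyncVectorEnv([lambda: gym.make("Pendulum-v1") for _ in range(n_envs)])

obs, info = envs.reset(seed=0)
total_sim_steps = 0
for _ in range(100):
    # A trained policy would map observations to actions; random actions stand in here.
    actions = np.stack([envs.single_action_space.sample() for _ in range(n_envs)])
    obs, rewards, terminated, truncated, info = envs.step(actions)
    total_sim_steps += n_envs

print(total_sim_steps)  # 1600 simulated steps gathered from only 100 loop iterations
envs.close()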

“Cassie can perform a spectrum of different gaits but as we specialized it for speed we began to wonder, which gaits are most efficient at each speed?” Crowley said. “This led to Cassie’s first optimized running gait and resulted in behavior that was strikingly similar to human biomechanics.”

The remaining challenge, a “deceptively difficult” one, was to get Cassie to reliably start from a free-standing position, run, and then return to the free-standing position without falling.

“Starting and stopping in a standing position are more difficult than the running part, similar to how taking off and landing are harder than actually flying a plane,” Fern said. “This 100-metre result was achieved by a deep collaboration between mechanical hardware design and advanced artificial intelligence for the control of that hardware.”

Hurst, chief technology officer at Agility Robotics and a robotics professor at Oregon State, said: “This may be the first bipedal robot to learn to run, but it won’t be the last. I believe control approaches like this are going to be a huge part of the future of robotics. The exciting part of this race is the potential. Using learned policies for robot control is a very new field, and this 100-metre dash is showing better performance than other control methods. I think progress is going to accelerate from here.”

]]>
RoboCup humanoid league: Interview with Jasper Güldenstein https://robohub.org/robocup-humanoid-league-interview-with-jasper-guldenstein/ Tue, 20 Sep 2022 07:22:53 +0000 https://robohub.org/?p=205558

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event returned to an in-person format for 2022, taking place from 13-17 July in Bangkok. RoboCup comprises a number of leagues, with perhaps the most well-known being the soccer leagues.

In the Humanoid League, autonomous robots with a human-inspired body plan and senses play soccer against each other. We spoke to Jasper Güldenstein, a member of the technical committee, about the competition at RoboCup 2022, and also about the Humanoid League Virtual Season. As a biweekly virtual competition taking place between the physical RoboCup events it provides teams with an opportunity to test their ideas and keep in touch throughout the year.

Could you give us an overview of the Humanoid League competition at RoboCup this year?

This year we had the first in-person event after a three-year break. It was really good to have the majority of the teams coming back together and working on their robots again. Although a lot of teams came with fewer participants than they usually do, we still managed to have 12 teams in the kid size league and three teams in the adult size. Unfortunately, some teams could not participate due to travel restrictions, but we hope to see them back next year.

Humanoid league finalists with their robots

What was the format for the competition?

At the beginning we had a drop-in round, which is where we play games with one robot from each team participating in a joint team. So, we have eight robots playing from eight different teams. That is useful for finding out which teams are likely to perform the best in the competition. Next, the results from the drop-in games were used as seeding for the round robin phase of the regular competition, with the strongest teams separated into different groups. After the round robin, we have a knockout competition. The seeding method means that we can hopefully avoid the situation where very good teams get kicked out early. We saw that the most interesting games were towards the end of the competition when the teams performed really well.

Have you noticed improvements since the last physical competition?

I’d say definitely that one big thing that has improved for a lot of teams is the localisation. A lot of teams are more easily able to localise themselves on the field, and they don’t run off randomly. They are more certain that they are in the correct position.

Furthermore, I think the kicking has improved. The robots kick the ball much further than they used to. People have been tuning their kicking motions to increase the distance.

In terms of computer vision, this has definitely improved quite a bit. Something we did differently this time, which was inspired by what we did in the virtual season, is that we had a set of six different balls, all from previous FIFA competitions. For each game a ball was drawn randomly, so the teams couldn’t really prepare for all the balls. Although they were visually quite different, the teams didn’t really have any problems detecting the ball. We’ve seen, in general, that computer vision approaches have improved and these improvements have been transferred to the RoboCup competition. I think that almost all teams are using a neural network to detect the ball. This is a change from three, four, five years ago, where many teams used hand-tuned classical computer vision algorithms.

To talk a bit more about ball detection, it will be interesting to see what happens if we move to an environment with natural and/or varying light conditions. This year we were in a convention hall with uniform lighting. I believe next year, in Bordeaux, there is going to be some form of natural light coming in, and perhaps even fields that are outside. It’s still at the planning stage but we are looking forward to that. It will be a challenge and I strongly believe that the participants will find approaches to make their vision approach robust against these varying conditions.

The setup and testing for the Humanoid League competition at RoboCup 2022, Bangkok.

Thinking about the transfer from the simulation to the real world, are there any specific elements that lend themselves well to being transferred?

In terms of computer vision, we had a bit of transfer. In the virtual season we concentrated a lot on changing the lighting conditions and having varied backgrounds, to be able to emulate the real world a bit better. I think a few teams used their vision approaches from the virtual season in the real world.

However, I think the strongest part is behaviour. Teams were able to test their strategies in the virtual competition and adapt every other week. For example, CIT Brains, which won the virtual season and the physical competition, made quite a few changes to their strategy and they had robust systems running. Their strategy worked really well, and in the final they managed to score several goals and win against the previous world champions (Rhoban Football Club).

How did the competition go for your team (Hamburg Bit-Bots)?

We actually had quite a few hardware problems, especially on the mechanics side. The motors are wearing out: they warped due to wear and flexed more than we expected. This meant we had difficulties walking stably. And if you can’t walk in a stable manner, that defeats the purpose of everything else. It’s a really integrated system – if one component breaks, you are out of luck, as you are very restricted in what you can change during the competition and you don’t have much spare equipment with you.

However, what was good for us was that we had a lot of software up and running, and a lot of it had been tested really well during the virtual league. We had to try and find a way round the walking problem algorithmically, to try to find walking parameters that were more stable. We also switched from [robot operating system] ROS 1 to ROS 2, which brought with it many challenges. We actually did a write-up on the issues we faced, with some tricks and tips.

Will there be a new virtual league season this year?

Yes, we’ve discussed this in our technical committee and we plan on doing it again. The last event was successful and the teams enjoyed it. We plan on making some changes, such as logging the games to extract some interesting metrics and doing some analysis on those.

Another thing we want to do is domain randomisation, making the environment a bit more varied. This means that the approaches have to be more robust. The hope is that, when they are more robust, they can be transferred more easily to the real world. We were thinking about making the terrain slightly uneven. Another approach could be to modify the models of the robots such that the joints emulate a bit of wear, so the simulated actuators might be randomly a bit weaker or stronger, and teams have to find robust approaches to deal with that.

We won’t do everything at the beginning. We’ll move through the season and talk to the teams and form some organising groups to develop the simulation further, to run the games and to organise the competition itself. We are always happy to have input and we always talk to the teams to see what they think. It’s a competition but it’s something we build together.
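
As a tiny, generic illustration of the domain randomisation idea mentioned above (with illustrative parameter names and ranges, not the league’s actual simulation setup), the randomised properties can simply be re-sampled at the start of every simulated game:

# Generic sketch of domain randomisation: resample terrain and actuator properties for
# each simulated game so policies must cope with a range of conditions. Values are illustrative.
import random

def randomise_world():
    """Draw one random configuration of the simulated world."""
    return {
        # Slightly uneven terrain: per-game bump height in metres.
        "terrain_bump_height": random.uniform(0.0, 0.01),
        # Emulated joint wear: each actuator is randomly a bit weaker or stronger.
        "torque_scale": {joint: random.uniform(0.85, 1.15)
                         for joint in ("left_knee", "right_knee", "left_ankle", "right_ankle")},
        # Small random lag between commanded and applied joint positions, in seconds.
        "actuation_delay": random.uniform(0.0, 0.02),
    }

for game in range(3):
    world = randomise_world()
    print(f"game {game}: {world}")
    # run_simulated_game(world)  # a team's controller would then be evaluated under `world`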

Humanoid robots at RoboCup 2022, Bangkok.

Could you tell us about the satellite event that took place at RoboCup?

This was a discussion about how to get teams more interested in participating and how to bridge the junior leagues and the major leagues.

We know that some people who participated in RoboCup Junior specifically selected a University that has a RoboCup team so that they could join that team. It would be awesome for more people to do this, and for more juniors to know what the major league is about.

To bridge the gap between the junior and major leagues we don’t want to introduce another league, but we want some form of events where the two groups can meet, and where the juniors can show off how well they are doing. It would be good to have more interaction between the leagues, although we haven’t decided on the exact format yet.

About Jasper

Jasper

Jasper Güldenstein is a PhD student at the University of Hamburg. His research focuses on humanoid navigation. He has been participating in RoboCup as a member of the Hamburg Bit-Bots since 2016, where his focus is developing not only the software of the team but also the electronics of the robot platform. In his recent master’s thesis he evaluated using reinforcement learning to perform path planning and execution for humanoid robots.

]]>
#IJCAI invited talk: engineering social and collaborative agents with Ana Paiva https://robohub.org/ijcai-invited-talk-engineering-social-and-collaborative-agents-with-ana-paiva/ Sun, 04 Sep 2022 08:53:33 +0000 https://robohub.org/?p=205440 Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July in Vienna. In this post, we summarise the presentation by Ana Paiva, University of Lisbon and INESC-ID. The title of her talk was “Engineering sociality and collaboration in AI systems”.

Robots are widely used in industrial settings, but what happens when they enter our everyday world, and, specifically, social situations? Ana believes that social robots, chatbots and social agents have the potential to change the way we interact with technology. She envisages a hybrid society where humans and AI systems work in tandem. However, for this to be realised we need to carefully consider how such robots will interact with us socially and collaboratively. In essence, our world is social, so when machines enter they need to have some capabilities to interact with this social world.

Ana took us through the theory of what it means to be social. There are three aspects to this:

  1. Social understanding: the capacity to perceive others, exhibit theory of mind and respond appropriately.
  2. Intrapersonal competencies: the capability to communicate socially, establish relationships and adapt to others.
  3. Social responsibility: the capability to take actions towards the social environment, follow norms and adopt morally appropriate actions.

Screenshot from Ana’s talk.

Ana wants to go from this notion of social intelligence to what is called artificial social intelligence, which can be defined as: “the capability to perceive and understand social signals, manage and participate in social interactions, act appropriately in social settings, establish social relations, adapt to others, and exhibit social responsibility.”

As an engineer, she likes to build things, and, on seeing the definition above, wonders how she can pass from said definition to a model that will allow her to build social machines. This means looking at social perception, social modelling and decision making, and social acting. A lot of Ana’s work revolves around design, study and development for achieving this kind of architecture.

Ana gave us a flavour of some of the projects that she and her groups have carried out with regards to trying to engineer sociality and collaboration in robots and other agents.

One of these projects was called “Teach me how to write”, and it centres on using robots to improve the handwriting abilities of children. In this project the team wanted to create a robot that kids could teach to write. Through teaching the robot it was hypothesised that they would, in turn, improve their own skills.

The first step was to create and train a robot that could learn how to write. They used learning from demonstration to train a robotic arm to draw characters. The team realised that if they wanted to teach the kids to write, the robot had to learn and improve, and it had to make mistakes in order to be able to improve. They studied the taxonomy of handwriting mistakes that are made by children, so that they could put those mistakes into the system, and so that the robot could learn from the kids how to fix the mistakes.

You can see the system architecture in the figure below; it includes the handwriting task element and the social behaviours. To add these social behaviours, they used a toolkit developed in Ana’s lab, called FAtiMA. This toolkit can be integrated into a framework and is an affective agent architecture for creating autonomous characters that can evoke empathic responses.

Screenshot from Ana’s talk. System architecture.

When it came to using and evaluating the effectiveness of the robot, they couldn’t actually put the robot arm in the classroom as it was too big, unwieldy and dangerous. Therefore, they had to use a Nao robot, which moved its arms as if it were writing, but it didn’t actually write.

Taking part in the study were 24 Portuguese-speaking children, and they participated in four sessions over the course of a few weeks. They assigned the robot two contrasting competencies: “learning” (where the robot improved over the course of the sessions) and “non-learning” (where the robot’s abilities remained constant). They measured the kids’ writing ability and improvement, and they used questionnaires to find out what the children thought about the friendliness of the robot, and their own teaching abilities.

They found that the children who worked with the learning robot significantly improved their own abilities. They also found that the robot’s poor writing abilities did not affect the children’s fondness for it.

You can find out more about this project, and others, on Ana’s website.

]]>
Using reinforcement learning for control of direct ink writing https://robohub.org/using-reinforcement-learning-for-control-of-direct-ink-writing/ Sat, 20 Aug 2022 10:02:09 +0000 https://robohub.org/?p=205292 Closed-loop printing enhanced by machine learning. © Michal Piovarči/ISTA

Using fluids for 3D printing may seem paradoxical at first glance, but not all fluids are watery. Many useful materials are more viscous, from inks to hydrogels, and thus qualify for printing. Yet their potential has been relatively unexplored due to the limited control over their behaviour. Now, researchers of the Bickel group at the Institute of Science and Technology Austria (ISTA) are employing machine learning in virtual environments to achieve better results in real-world experiments.

3D printing is on the rise. Many people are familiar with the characteristic plastic structures. However, attention has also turned to different printing materials, such as inks, viscous pastes and hydrogels, which could potentially be used to 3D-print biomaterials and even food. But printing such fluids is challenging. Exact control over them requires painstaking trial-and-error experiments, because they typically tend to deform and spread after application.

A team of researchers, including Michal Piovarči and Bernd Bickel, are tackling these challenges. In their laboratories at the Institute of Science and Technology Austria (ISTA), they are using reinforcement learning – a type of machine learning – to improve the printing technique of viscous materials. The results were presented at the SIGGRAPH conference, the annual meeting of simulation and visual computing researchers.

A critical component of manufacturing is identifying the parameters that consistently produce high-quality structures. Certainly, an assumption is implicit here: the relationship between parameters and outcome is predictable. However, real processes always exhibit some variability due to the nature of the materials used. In printing with viscous materials, this variability is even more pronounced, because they take significant time to settle after deposition. The question is: how can we understand, and deal with, the complex dynamics?

“Instead of printing thousands of samples, which is not only expensive, but rather tedious, we put our expertise in computer simulations to action,” responds Piovarči, lead-author of the study. While computer graphics often trade physical accuracy for faster simulation, here, the team came up with a simulated environment that mirrors the physical processes with accuracy. “We modelled the ink’s current and short-horizon future states based on fluid physics. The efficiency of our model allowed us to simulate hundreds of prints simultaneously, more often than we could ever have done in the experiment. We used the dataset for reinforcement learning and gained the knowledge of how to control the ink and other materials.”

Learning in virtual environments how to control the ink. © Michal Piovarči/ISTA

The machine learning algorithm established various policies, including one to control the movement of the ink-dispensing nozzle at a corner such that no unwanted blobs occur. The printing apparatus would not follow the baseline of the desired shape anymore, but rather take a slightly altered path which eventually yields better results. To verify that these rules can handle various materials, they trained three models using liquids of different viscosity. They tested their method with experiments using inks of various thicknesses.
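
The closed-loop idea can be sketched schematically – a toy example in which a simple proportional correction stands in for the learned policy, not ISTA’s simulator or code: at every control step the controller observes how far the deposited material lags behind the target path and adjusts the nozzle motion accordingly:

# Schematic toy example of closed-loop printing control: observe the deviation from the
# target path, let a policy adjust the nozzle motion, repeat. The "policy" here is a
# simple proportional correction standing in for the learned controller.
def observe(nozzle_pos, target_pos):
    """State fed to the policy: current deviation from the target path."""
    return (target_pos[0] - nozzle_pos[0], target_pos[1] - nozzle_pos[1])

def policy(error, gain=0.5):
    """Stand-in for the learned controller: a velocity correction from the observed error."""
    return (gain * error[0], gain * error[1])

# Target path: a simple corner, where viscous ink tends to accumulate into blobs.
target_path = [(x * 0.1, 0.0) for x in range(10)] + [(1.0, y * 0.1) for y in range(10)]

nozzle = (0.0, 0.0)
for target in target_path:
    error = observe(nozzle, target)
    dx, dy = policy(error)
    nozzle = (nozzle[0] + dx, nozzle[1] + dy)  # the corrected path rounds the corner slightly

print(f"final nozzle position: ({nozzle[0]:.2f}, {nozzle[1]:.2f})")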

The team opted for closed-loop forms instead of simple lines or writing, because “closed loops represent the standard case for 3D printing and that is our target application,” explains Piovarči. Although the single-layer printing in this project is sufficient for the use cases in printed electronics, he wants to add another dimension. “Naturally, three dimensional objects are our goal, such that one day we can print optical designs, food or functional mechanisms. I find it fascinating that we as computer graphics community can be the major driving force in machine learning for 3D printing.”

Read the research in full

Closed-Loop Control of Direct Ink Writing via Reinforcement Learning
Michal Piovarči, Michael Foshey, Jie Xu, Timmothy Erps, Vahid Babaei, Piotr Didyk, Szymon Rusinkiewicz, Wojciech Matusik, Bernd Bickel

]]>
International conference on intelligent robots and systems (IROS) https://robohub.org/international-conference-on-intelligent-robots-and-systems-iros/ Fri, 23 Oct 2020 15:10:35 +0000 https://robohub.org/international-conference-on-intelligent-robots-and-systems-iros/

This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020.

IROS conferences have traditionally had a theme and this year is no different with the emphasis being on “consumer robotics and our future”. You can sign up here.

Plenaries

IROS will feature three plenary talks. The speakers and topics are as follows:

  • Danica Kragic Robotics and artificial intelligence impacts on the fashion industry
  • Cynthia Breazeal Living with social robots: from research to commercialization and back
  • Yukie Nagai Cognitive development in humans and robots: new insights into intelligence

Keynote speakers

There are also nine keynote talks covering a number of topics.

  • Frank Dellaert Air, sea, and space robots
  • Anya Petrovskaya Driverless vehicles and field robots
  • Ashish Deshpande Rehabilitation robotics
  • Jonathan Hurst Humanoids
  • I-Ming Chen Food handling robotics
  • Steve LaValle Perception, action and control
  • Nikolaus Correll Grasping, haptics and end-effectors
  • Andrea Thomaz Human-robot interaction
  • Sarah Bergbreiter Design, micro and bio-inspired robotics

Technical talks

The technical talks have been divided into 12 topic areas.

  • Air, sea, and space robots
  • Driverless vehicles and field robots
  • Medical, cellular, micro and nano robots
  • Humanoids, exoskeletons, and rehab robots
  • Localization, mapping and navigation
  • Dynamics, control and learning
  • Design, mechanisms, actuators, soft and bio-inspired robots
  • Perception, action, and cognition
  • Grasping, haptics and end-effectors
  • Human-robot interaction, teleoperation, and virtual reality
  • Swarms and multi-robots
  • Industry 4.0

Each talk will feature a digest slide, a pre-recorded video presentation, and the paper’s PDF. These will be available from 25 October, so keep an eye on the website.

Workshops

There are a whopping 35 workshops to choose from. These offer on-demand content as well as live sessions (dates vary, so visit the webpages below for more information about each workshop).

  1. 3rd workshop on proximity perception in robotics: towards multi-modal cognition, Stefan Escaida Navarro*, Stephan Mühlbacher-Karrer, Hubert Zangl, Keisuke Koyama, Björn Hein, Ulrike Thomas, Hosam Alagi, Yitao Ding, Christian Stetco
  2. Bringing geometric methods to robot learning, optimization and control, Noémie Jaquier*, Leonel Rozo, Søren Hauberg, Hans-Peter Schröcker, Suvrit Sra
  3. 12th IROS20 workshop on planning, perception and navigation for intelligent vehicles, Philippe Martinet*, Christian Laugier, Marcelo H Ang Jr, Denis Fernando Wolf
  4. Robot-assisted training for primary care: how can robots help train doctors in medical examinations?, Thrishantha Nanayakkara*, Florence Ching Ying Leong, Thilina Dulantha Lalitharatne, Liang He, Fumiya Iida, Luca Scimeca, Simon Hauser, Josie Hughes, Perla Maiolino
  5. Workshop on animal-robot interaction, Cesare Stefanini and Donato Romano*
  6. Ergonomic human-robot collaboration: opportunities and challenges, Wansoo Kim*, Luka Peternel, Arash Ajoudani, Eiichi Yoshida
  7. New advances in soft robots control, Concepción A. Monje*, Egidio Falotico, Santiago Martínez de la Casa
  8. Autonomous system in medicine: current challenges in design, modeling, perception, control and applications, Hang Su, Yue Chen*, Jing GUO, Angela Faragasso, Haoyong Yu, Elena De Momi
  9. MIT MiniCheetah workshop, Sangbae Kim*, Patrick M. Wensing, Inhyeok Kim
  10. Workshop on humanitarian robotics, Garrett Clayton*, Raj Madhavan, Lino Marques
  11. Robotics-inspired biology, Nick Gravish*, Kaushik Jayaram, Chen Li, Glenna Clifton, Floris van Breugel
  12. Robots building robots. Digital manufacturing and human-centered automation for building consumer robots, Paolo Dario*, George Q. Huang, Peter Luh, MengChu Zhou
  13. Cognitive robotic surgery, Michael C. Yip, Florian Richter*, Danail Stoyanov, Francisco Vasconcelos, Fanny Ficuciello, Emmanuel B Vander Poorten, Peter Kazanzides, Blake Hannaford, Gregory Scott Fischer
  14. Application-driven soft robotic systems: Translational challenges, Sara Adela Abad Guaman, Lukas Lindenroth, Perla Maiolino, Agostino Stilli*, Kaspar Althoefer, Hongbin Liu, Arianna Menciassi, Thrishantha Nanayakkara, Jamie Paik, Helge Arne Wurdemann
  15. Reliable deployment of machine learning for long-term autonomy, Feras Dayoub*, Tomáš Krajník, Niko Sünderhauf, Ayoung Kim
  16. Robotic in-situ servicing, assembly, and manufacturing, Craig Carignan*, Joshua Vander Hook, Chakravarthini Saaj, Renaud Detry, Giacomo Marani
  17. Benchmarking progress in autonomous driving, Liam Paull*, Andrea Censi, Jacopo Tani, Matthew Walter, Felipe Codevilla, Sahika Genc, Sunil Mallya, Bhairav Mehta
  18. ROMADO: RObotic MAnipulation of Deformable Objects, Miguel Aranda*, Juan Antonio Corrales Ramon, Pablo Gil, Gonzalo Lopez-Nicolas, Helder Araujo, Youcef Mezouar
  19. Perception, learning, and control for autonomous agile vehicles, Giuseppe Loianno*, Davide Scaramuzza, Sertac Karaman
  20. Planetary exploration robots: challenges and opportunities, Hendrik Kolvenbach*, William Reid, Kazuya Yoshida, Richard Volpe
  21. Application-oriented modelling and control of soft robots, Thomas George Thuruthel*, Cosimo Della Santina, Seyedmohammadhadi Sadati, Federico Renda, Cecilia Laschi
  22. State of the art in robotic leg prostheses: where we are and where we want to be, Tommaso Lenzi*, Robert D. Gregg, Elliott Rouse, Joost Geeroms
  23. Workshop on perception, planning and mobility in forestry robotics (WPPMFR 2020), João Filipe Ferreira* and David Portugal
  24. Why robots fail to grasp? – failure ca(u)ses in robot grasping and manipulation, Joao Bimbo*, Dimitrios Kanoulas, Giulia Vezzani, Kensuke Harada
  25. Trends and advances in machine learning and automated reasoning for intelligent robots and systems, Abdelghani Chibani, Craig Schlenoff, Yacine Amirat*, Shiqi Zhang, Jong-Hwan Kim, Ferhat Attal
  26. Learning impedance modulation for physical interaction: insights from humans and advances in robotics, Giuseppe Averta*, Franco Angelini, Meghan Huber, Jongwoo Lee, Manolo Garabini
  27. New horizons of robot learning – from industrial challenges to future capabilities, Kim Daniel Listmann* and Elmar Rueckert
  28. Robots for health and elderly care (RoboHEC), Leon Bodenhagen*, Oskar Palinko, Francois Michaud, Adriana Tapus, Julie Robillard
  29. Wearable SuperLimbs: design, communication, and control, Harry Asada*
  30. Human Movement Understanding for Intelligent Robots and Systems, Emel Demircan*, Taizo Yoshikawa, Philippe Fraisse, Tadej Petric
  31. Construction and architecture robotics, Darwin Lau*, Yunhui Liu, Tobias Bruckmann, Thomas Bock, Stéphane Caro
  32. Mechanisms and design: from inception to realization, Hao Su*, Matei Ciocarlie, Kyu-Jin Cho, Darwin Lau, Claudio Semini, Damiano Zanotto
  33. Bringing constraint-based robot programming to real-world applications, Wilm Decré*, Herman Bruyninckx, Gianni Borghesan, Erwin Aertbelien, Lars Tingelstad, Darwin G. Caldwell, Enrico Mingo, Abderrahmane Kheddar, Pierre Gergondet
  34. Managing deformation: a step towards higher robot autonomy, Jihong Zhu*, Andrea Cherubini, Claire Dune, David Navarro-Alarcon
  35. Social AI for human-robot interaction of human-care service robots, Ho Seok Ahn*, Hyungpil Moon, Minsu Jang, Jongsuk Choi

Robot challenges

Another element of the conference that sounds interesting is the robot challenges. There are three of these, and you should be able to watch the competitions in action next week.

  1. Open cloud robot table organization challenge (OCRTOC). This competition focusses on table organisation tasks: participants need to organise the objects in the scene according to a target configuration. This competition will be broadcast on 25-27 October.
  2. 8th F1Tenth autonomous Grand Prix @ IROS 2020. This competition takes the form of a virtual race with standardised vehicles and hardware. The qualifying phase is a time trial, while the Grand Prix phase pits the virtual competitors against each other on the same track. The race will be broadcast on 27 October.
  3. Robotic grasping and manipulation competition. There are two sections to this competition. In the first, the robot has to make five cups of iced Matcha green tea; the second involves disassembly and assembly using a NIST Task Board.
]]>
#ICRA2020 workshops on robotics and learning https://robohub.org/icra2020-workshops-on-robotics-and-learning/ Wed, 15 Jul 2020 22:11:57 +0000 https://robohub.org/icra2020-workshops-on-robotics-and-learning/

This year the International Conference on Robotics and Automation (ICRA) is being run as a virtual event. One interesting feature is that the conference has been extended to run from 31 May to 31 August. A number of workshops were held on the opening day, and here we focus on two of them: “Learning of manual skills in humans and robots” and “Emerging learning and algorithmic methods for data association in robotics”.

Learning of manual skills in humans and robots

This workshop was organised by Aude Billard (EPFL) and Dagmar Sternad (Northeastern University). It brought together researchers from human motor control and from robotics to answer questions such as: How do humans achieve manual dexterity? What kind of practice schedules can shape these skills? Can some of these strategies be transferred to robots? To what extent is robot manual skill limited by the hardware, and what can and cannot be learned?

The third session of the workshop focussed on “Learning skills”, and you can watch the two talks and the discussion below:

Jeannette Bohg – Learning to scaffold the development of robotic manipulation skills

Dagmar Sternad – Learning and control in skilled interactions with objects: A task-dynamic approach

Discussion with Jeannette Bohg and Dagmar Sternad

Emerging learning and algorithmic methods for data association in robotics

This workshop covered emerging algorithmic methods based on optimization and graph-theoretic techniques, learning and end-to-end solutions based on deep neural networks, and the relationships between these techniques.

You can watch the workshop in full here:

Below is the programme, with the times indicating the position of each talk in the YouTube video:
11:00 Ayoung Kim – Learning motion and place descriptor from LiDARs for long-term navigation
34:11 Xiaowei Zhou – Learning correspondences for 3D reconstruction and pose estimation
51:30 Florian Bernard – Higher-order projected power iterations for scalable multi-matching
1:11:24 Cesar Cadena – High level understanding in the data association problem
1:34:55 Spotlight talk 1: Daniele Cattaneo – CMRNet++: map and camera agnostic monocular visual localization in LiDAR maps
1:50:45 Nicholas Roy – The role of semantics in perception
2:11:12 Kostas Daniilidis – Learning representations for matching
2:33:26 Jonathan How – Consistent multi-view data association
2:51:40 John Leonard – A research agenda for robust semantic SLAM
3:17:58 Luca Carlone – Towards certifiably robust spatial perception
3:39:36 Roberto Tron – Fast, consistent distributed matching for robotics applications
3:59:22 Randal Beard – Tracking moving objects from a moving camera in 3d environments
4:18:49 Nikolay Atanasov – A unifying view of geometry, semantics, and data association in SLAM
4:39:03 Spotlight talk 2: Nathaniel Glaser – Enhancing multi-robot perception via learned data association

]]>
The Age of A.I. YouTube series https://robohub.org/the-age-of-a-i-youtube-series/ Mon, 06 Apr 2020 21:53:00 +0000 https://robohub.org/the-age-of-a-i-youtube-series/

The YouTube Originals series “The Age of A.I.” was released in December 2019. If you haven’t already seen it, now could be a good time to catch up – with much of the world in enforced or voluntary isolation, many of us will be stuck at home with hours to fill. Sit back and marvel at the many incredible, and often heart-warming, applications of AI.

Episode 1 features researchers Mark Sagar (University of Auckland) and Gil Weinberg (Georgia Tech). Mark works with will.i.am on a digital avatar and presents BabyX – a virtual animated baby that learns and reacts like a human baby. Gil makes music using AI and has teamed up with other researchers at Georgia Tech to create smart prosthetics for amputees, combining ultrasound signals and machine learning.

Watch the other episodes:
Episode 2: Healed through A.I.

Episode 3: Using A.I. to build a better human

Episode 4: Love, art and stories: decoded

Episode 5: The ‘Space Architects’ of Mars

Episode 6: Will a robot take my job?

Episode 7: Saving the world one algorithm at a time

Episode 8: How A.I. is searching for Aliens

You can access the full series here.

]]>