Daniel Carrillo-Zapata – Robohub https://robohub.org

Interview with Dautzenberg Roman: #IROS2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp. https://robohub.org/interview-with-dautzenberg-roman-iros2023-best-paper-award-on-mobile-manipulation-sponsored-by-omron-sinic-x-corp/ Sun, 19 Nov 2023 09:00:38 +0000 https://robohub.org/?p=208787

Congratulations to Roman Dautzenberg and his team of researchers, who won the IROS 2023 Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp. for their paper “A perching and tilting aerial robot for precise and versatile power tool work on vertical walls”. Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

Our paper presents an aerial robot (think “drone”) that can exert large forces in the horizontal direction, i.e. onto walls. This is a difficult task, as UAVs usually rely on thrust vectoring to apply horizontal forces and thus can only apply small forces before losing control authority. By perching onto walls, our system no longer needs its propulsion to remain at a desired site. Instead, we use the propellers to achieve large reaction forces in any direction, including onto walls! Additionally, perching allows extreme precision, as the tool can be moved and re-adjusted, and the system is unaffected by external disturbances such as gusts of wind.

Could you tell us about the implications of your research and why it is an interesting area for study?

Precision, force exertion and mobility are three (of many) criteria on which robots – and those that develop them – make trade-offs. Our research shows that the system we designed can exert large forces precisely with only minimal compromises on mobility. This widens the horizon of conceivable tasks for aerial robots and serves as the next link in automating the chain of tasks needed to perform many procedures on construction sites, or in remote, complex or hazardous environments.

Could you explain your methodology?

The main aim of our paper is to characterize the behavior and performance of the system and to compare it to other aerial robots. To achieve this, we investigated the perching and tool-positioning accuracy, and compared the applicable reaction forces with those of other systems.

Further, the paper shows the power consumption and rotational velocities of the propellers for the various phases of a typical operation, as well as how certain mechanisms of the aerial robot are configured. This allows for a deeper understanding of the characteristics of the aerial robot.

What were your main findings?

Most notably, we show the perching precision to be within ±10 cm of a desired location over 30 consecutive attempts, and tool positioning to have mm-level accuracy even in a “worst-case” scenario. Power consumption while perched on typical concrete is extremely low, and the system is capable of performing various tasks (drilling, screwing), including in quasi-realistic outdoor scenarios.

What further work are you planning in this area?

Going forward, enhancing the capabilities will be a priority. This relates both to the types of surface manipulations that can be performed and to the surfaces onto which the system can perch.


About the author

Roman Dautzenberg is currently a Master's student at ETH Zürich and team leader at AITHON. AITHON is a research project that is transforming into a start-up for aerial construction robotics. The core team of eight engineers works under the guidance of the Autonomous Systems Lab at ETH Zürich and is located at the Innovation Park Switzerland in Dübendorf.

#IROS2023 awards finalists and winners + IROS on Demand free for one year https://robohub.org/iros2023-awards-finalists-and-winners-iros-on-demand-free-for-one-year/ Thu, 12 Oct 2023 11:09:17 +0000 https://robohub.org/?p=208437

Credits: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)

Did you have the chance to attend the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) in Detroit? Here we bring you the papers that received an award this year in case you missed them. And good news: you can read all the papers because IROS on Demand is open to the public and freely available for one year from Oct 9th. Congratulations to all the winners and finalists!

IROS 2023 Best Overall and Best Student Paper

Winner of the IROS 2023 Best Paper

  • Autonomous Power Line Inspection with Drones via Perception-Aware MPC, by Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo Carrio, Davide Scaramuzza.

Winner of the IROS 2023 Best Student Paper

  • Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent, by Shihao Cheng, Curt A. Laubscher, Robert D. Gregg.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.
  • Towards Legged Locomotion on Steep Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Decentralized Swarm Trajectory Generation for LiDAR-based Aerial Tracking in Cluttered Environments, by Longji Yin, Fangcheng Zhu, Yunfan Ren, Fanze Kong, Fu Zhang.
  • Open-Vocabulary Affordance Detection in 3D Point Clouds, by Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le, Anh Nguyen.
  • Discovering Symbolic Adaptation Algorithms from Scratch, by Stephen Kelly, Daniel Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Boddeti, Jie Tan, Esteban Real.
  • Parallel cell array patterning and target cell lysis on an optoelectronic micro-well device, by Chunyuan Gan, Hongyi Xiong, Jiawei Zhao, Ao Wang, Chutian Wang, Shuzhang Liang, Jiaying Zhang, Lin Feng.
  • FATROP: A Fast Constrained Optimal Control Problem Solver for Robot Trajectory Optimization and Control, by Lander Vanroye, Ajay Suresha Sathya, Joris De Schutter, Wilm Decré.
  • GelSight Svelte: A Human Finger-Shaped Single-Camera Tactile Robot Finger with Large Sensing Coverage and Proprioceptive Sensing, by Jialiang Zhao, Edward Adelson.
  • Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model, by Fouad Makiyeh, Francois Chaumette, Maud Marchal, Alexandre Krupa.

IROS Best Paper Award on Agri-Robotics sponsored by YANMAR

Winner

  • Visual, Spatial, Geometric-Preserved Place Recognition for Cross-View and Cross-Modal Collaborative Perception, by Peng Gao, Jing Liang, Yu Shen, Sanghyun Son, Ming C. Lin.

Finalists

  • Online Self-Supervised Thermal Water Segmentation for Aerial Vehicles, by Connor Lee, Jonathan Gustafsson Frennert, Lu Gan, Matthew Anderson, Soon-Jo Chung.
  • Relative Roughness Measurement based Real-time Speed Planning for Autonomous Vehicles on Rugged Road, by Liang Wang, Tianwei Niu, Shuai Wang, Shoukun Wang, Junzheng Wang.

IROS Best Application Paper Award sponsored by ICROS

Winner

  • Autonomous Robotic Drilling System for Mice Cranial Window Creation: An Evaluation with an Egg Model, by Enduo Zhao, Murilo Marques Marinho, Kanako Harada.

Finalists

  • Visuo-Tactile Sensor Enabled Pneumatic Device Towards Compliant Oropharyngeal Swab Sampling, by Shoujie Li, Mingshan He, Wenbo Ding, Linqi Ye, Xueqian Wang, Junbo Tan, Jinqiu Yuan, Xiao-Ping Zhang.
  • Improving Amputee Endurance over Activities of Daily Living with a Robotic Knee-Ankle Prosthesis: A Case Study, by Kevin Best, Curt A. Laubscher, Ross Cortino, Shihao Cheng, Robert D. Gregg.
  • Dynamic hand proprioception via a wearable glove with fabric sensors, by Lily Behnke, Lina Sanchez-Botero, William Johnson, Anjali Agrawala, Rebecca Kramer-Bottiglio.
  • Active Capsule System for Multiple Therapeutic Patch Delivery: Preclinical Evaluation, by Jihun Lee, Manh Cuong Hoang, Jayoung Kim, Eunho Choe, Hyeonwoo Kee, Seungun Yang, Jongoh Park, Sukho Park.

IROS Best Entertainment and Amusement Paper Award sponsored by JTCF

Winner

  • DoubleBee: A Hybrid Aerial-Ground Robot with Two Active Wheels, by Muqing Cao, Xinhang Xu, Shenghai Yuan, Kun Cao, Kangcheng Liu, Lihua Xie.

Finalists

  • Polynomial-based Online Planning for Autonomous Drone Racing in Dynamic Environments, by Qianhao Wang, Dong Wang, Chao Xu, Alan Gao, Fei Gao.
  • Bistable Tensegrity Robot with Jumping Repeatability based on Rigid Plate-shaped Compressors, by Kento Shimura, Noriyasu Iwamoto, Takuya Umedachi.

IROS Best Industrial Robotics Research for Applications sponsored by Mujin Inc.

Winner

  • Toward Closed-loop Additive Manufacturing: Paradigm Shift in Fabrication, Inspection, and Repair, by Manpreet Singh, Fujun Ruan, Albert Xu, Yuchen Wu, Archit Rungta, Luyuan Wang, Kevin Song, Howie Choset, Lu Li.

Finalists

  • Learning Contact-Based State Estimation for Assembly Tasks, by Johannes Pankert, Marco Hutter.
  • Bagging by Learning to Singulate Layers Using Interactive Perception, by Lawrence Yunliang Chen, Baiyu Shi, Roy Lin, Daniel Seita, Ayah Ahmad, Richard Cheng, Thomas Kollar, David Held, Ken Goldberg.
  • Exploiting the Kinematic Redundancy of a Backdrivable Parallel Manipulator for Sensing During Physical Human-Robot Interaction, by Arda Yigit, Tan-Sy Nguyen, Clement Gosselin.

IROS Best Paper Award on Cognitive Robotics sponsored by KROS

Winner

  • Extracting Dynamic Navigation Goal from Natural Language Dialogue, by Lanjun Liang, Ganghui Bian, Huailin Zhao, Yanzhi Dong, Huaping Liu.

Finalists

  • EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera, by Jinkai Li, Jianxin Yang, Yuxuan Liu, Zhen Li, Guang-Zhong Yang, Yao Guo.
  • Team Coordination on Graphs with State-Dependent Edge Cost, by Sara Oughourli, Manshi Limbu, Zechen Hu, Xuan Wang, Xuesu Xiao, Daigo Shishika.
  • Is Weakly-supervised Action Segmentation Ready For Human-Robot Interaction? No, Let’s Improve It With Action-union Learning, by Fan Yang, Shigeyuki Odashima, Shochi Masui, Shan Jiang.
  • Exploiting Spatio-temporal Human-object Relations using Graph Neural Networks for Human Action Recognition and 3D Motion Forecasting, by Dimitrios Lagamtzis, Fabian Schmidt, Jan Reinke Seyler, Thao Dang, Steffen Schober.

IROS Best Paper Award on Mobile Manipulation sponsored by OMRON Sinic X Corp.

Winner

  • A perching and tilting aerial robot for precise and versatile power tool work on vertical walls, by Roman Dautzenberg, Timo Küster, Timon Mathis, Yann Roth, Curdin Steinauer, Gabriel Käppeli, Julian Santen, Alina Arranhado, Friederike Biffar, Till Kötter, Christian Lanegger, Mike Allenspach, Roland Siegwart, Rik Bähnemann.

Finalists

  • Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing, by Luca Lach, Niklas Wilhelm Funk, Robert Haschke, Séverin Lemaignan, Helge Joachim Ritter, Jan Peters, Georgia Chalvatzaki.
  • Efficient Object Manipulation Planning with Monte Carlo Tree Search, by Huaijiang Zhu, Avadesh Meduri, Ludovic Righetti.
  • Sequential Manipulation Planning for Over-actuated UAMs, by Yao Su, Jiarui Li, Ziyuan Jiao, Meng Wang, Chi Chu, Hang Li, Yixin Zhu, Hangxin Liu.
  • On the Design of Region-Avoiding Metrics for Collision-Safe Motion Generation on Riemannian Manifolds, by Holger Klein, Noémie Jaquier, Andre Meixner, Tamim Asfour.

IROS Best RoboCup Paper Award sponsored by RoboCup Federation

Winner

  • Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance, by Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao.

Finalists

  • Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation, by Fabian Clemens Weigend, Shubham Sonawani, Drolet Michael, Heni Ben Amor.
  • Effectively Rearranging Heterogeneous Objects on Cluttered Tabletops, by Kai Gao, Justin Yu, Tanay Sandeep Punjabi, Jingjin Yu.
  • Prioritized Planning for Target-Oriented Manipulation via Hierarchical Stacking Relationship Prediction, by Zewen Wu, Jian Tang, Xingyu Chen, Chengzhong Ma, Xuguang Lan, Nanning Zheng.

IROS Best Paper Award on Robot Mechanisms and Design sponsored by ROBOTIS

Winner

  • Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV, by Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu, Fangcheng Zhu, Ximin Lyu, Fu Zhang.

Finalists

  • Hybrid Tendon and Ball Chain Continuum Robots for Enhanced Dexterity in Medical Interventions, by Giovanni Pittiglio, Margherita Mencattelli, Abdulhamit Donder, Yash Chitalia, Pierre Dupont.
  • c^2: Co-design of Robots via Concurrent-Network Coupling Online and Offline Reinforcement Learning, by Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, Rong Xiong.
  • Collision-Free Reconfiguration Planning for Variable Topology Trusses using a Linking Invariant, by Alexander Spinos, Mark Yim.
  • eViper: A Scalable Platform for Untethered Modular Soft Robots, by Hsin Cheng, Zhiwu Zheng, Prakhar Kumar, Wali Afridi, Ben Kim, Sigurd Wagner, Naveen Verma, James Sturm, Minjie Chen.

IROS Best Paper Award on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi sponsored by IRSI

Winner

  • mCLARI: A Shape-Morphing Insect-Scale Robot Capable of Omnidirectional Terrain-Adaptive Locomotion, by Heiko Dieter Kabutz, Alexander Hedrick, William Parker McDonnell, Kaushik Jayaram.

Finalists

  • Towards Legged Locomotion on Steep, Planetary Terrain, by Giorgio Valsecchi, Cedric Weibel, Hendrik Kolvenbach, Marco Hutter.
  • Global Localization in Unstructured Environments using Semantic Object Maps Built from Various Viewpoints, by Jacqueline Ankenbauer, Parker C. Lusk, Jonathan How.
  • EELS: Towards Autonomous Mobility in Extreme Environments with a Novel Large-Scale Screw Driven Snake Robot, by Rohan Thakker, Michael Paton, Marlin Polo Strub, Michael Swan, Guglielmo Daddi, Rob Royce, Matthew Gildner, Tiago Vaquero, Phillipe Tosi, Marcel Veismann, Peter Gavrilov, Eloise Marteau, Joseph Bowkett, Daniel Loret de Mola Lemus, Yashwanth Kumar Nakka, Benjamin Hockman, Andrew Orekhov, Tristan Hasseler, Carl Leake, Benjamin Nuernberger, Pedro F. Proença, William Reid, William Talbot, Nikola Georgiev, Torkom Pailevanian, Avak Archanian, Eric Ambrose, Jay Jasper, Rachel Etheredge, Christiahn Roman, Daniel S Levine, Kyohei Otsu, Hovhannes Melikyan, Richard Rieber, Kalind Carpenter, Jeremy Nash, Abhinandan Jain, Lori Shiraishi, Ali-akbar Agha-mohammadi, Matthew Travers, Howie Choset, Joel Burdick, Masahiro Ono.
  • Multi-IMU Proprioceptive Odometry for Legged Robots, by Shuo Yang, Zixin Zhang, Benjamin Bokser, Zachary Manchester.
#IROS2023: A glimpse into the next generation of robotics https://robohub.org/iros2023-a-glimpse-into-the-next-generation-of-robotics/ Sun, 01 Oct 2023 08:30:36 +0000 https://robohub.org/?p=208353

Credits: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023).

The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) kicks off today at Huntington Place in Detroit, Michigan. This year’s theme, “The Next Generation of Robotics,” is a call to young and senior researchers alike to create a forum where the past, present, and future of robotics converge.

The program of IROS 2023 is a blend of theoretical insights and practical demonstrations, designed to foster a culture of innovation and collaboration. Among the highlights are the plenary and keynote talks by eminent personalities in the field of robotics.

Plenaries and keynotes

On the plenary front, Marcie O’Malley from Rice University will delve into the realm of robots that teach and learn with a human touch. Yuto Nakanishi of GITAI, Japan, will share his insights on the challenges of developing space robots for building a moonbase. Matt Johnson-Roberson from Carnegie Mellon University will explore the shared history and convergent future of AI and Robotics.

The keynote sessions are equally thought-provoking. On Monday, October 2nd, Sven Behnke from the University of Bonn, Germany, will discuss the transition from intuitive immersive telepresence systems to conscious service robots, while Michelle Johnson from the University of Pennsylvania, USA, will talk about the journey towards more inclusive rehabilitation robots. Rebecca Kramer-Bottiglio from Yale University, USA, will also share insights on shape-shifting soft robots that adapt to changing tasks and environments.

On Tuesday, October 3rd, Kostas Alexis from the Norwegian University of Science and Technology, Norway, will share experiences from the DARPA Subterranean Challenge focusing on resilient robotic autonomy. Serena Ivaldi from Inria, France, will discuss the transition from humanoids to exoskeletons, aiming at assisting and collaborating with humans. Mario Santillo from Ford Motor Company, USA, will provide a glimpse into the future of manufacturing automation.

The series continues on Wednesday, October 4th, with Moritz Bächer (Switzerland) and Morgan Pope (USA) from Disney Research discussing the design and control of expressive robotic characters. Tetsuya Ogata from Waseda University/AIST, Japan, will delve into deep predictive learning in robotics, optimizing models for adaptive perception and action. Lastly, Teresa Vidal-Calleja from the University of Technology Sydney, Australia, will talk about empowering robots with continuous space and time representations.

Competitions

The competitions segment of IROS 2023 will be a space for innovation and creativity. The Functional Fashion competition invites teams to design and demonstrate robotic clothing that is as aesthetically pleasing as it is functional. The F1/10 Autonomous Racing challenges participants to build a 1:10 scaled autonomous race car and compete in minimizing lap time while avoiding crashes. The Soft Robotics Balloon Robots competition encourages the creation of locomoting and swimming soft robots using balloons as a substrate, exploring rapid design and deployment of soft robotic structures.

Technical programme & demonstrations

The technical sessions and workshops/tutorials at IROS 2023 are designed to foster a rich exchange of ideas among attendees. These sessions will feature presentations on cutting-edge research and innovative projects from across the globe, providing a platform for researchers to share their findings, receive feedback, and engage in meaningful discussions. In addition, the demonstrations segment will bring theories to life as participants showcase their working prototypes and models, offering a tangible glimpse into the advancements in robotics.

Participate in IROS 2023 remotely with Ohmni telepresence robots

If you are unable to attend the conference in person, OhmniLabs has provided three of their Ohmni telepresence robots to facilitate virtual participation. The telepresence robots will be active from October 2-4, 9:00 a.m. to 6:00 p.m. EDT. You can secure a time slot in advance using this link.

The telepresence robots will allow you to:

  • Explore the exhibit hall and speak with exhibitors
  • Interact with authors and other attendees during interactive poster sessions
  • Attend a plenary or keynote presentation

You can check in real-time here to see if any of the robots are available throughout the day.

Keep an eye on our blog over the following days for updates and results from the best paper awards. And enjoy IROS 2023!

How drones are used during earthquakes https://robohub.org/how-drones-are-used-during-earthquakes/ Wed, 13 Sep 2023 05:49:32 +0000 https://robohub.org/?p=208208

In the realm of disaster response, technology plays a pivotal role in aiding communities during challenging times. In this exploration, we turn our attention to drones and their application in earthquake response, especially how they are being used following the recent Morocco earthquake. This concise video offers valuable insights into the practical uses of drones and the considerations surrounding their deployment during earthquake-related crises.

Interview with Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation” https://robohub.org/interview-with-jean-pierre-sleiman-author-of-the-paper-versatile-multicontact-planning-and-control-for-legged-loco-manipulation/ Wed, 06 Sep 2023 10:46:12 +0000 https://robohub.org/?p=208161

Picture from paper “Versatile multicontact planning and control for legged loco-manipulation“. © American Association for the Advancement of Science

We had the chance to interview Jean Pierre Sleiman, author of the paper “Versatile multicontact planning and control for legged loco-manipulation”, recently published in Science Robotics.

What is the topic of the research in your paper?
The research topic focuses on developing a model-based planning and control architecture that enables legged mobile manipulators to tackle diverse loco-manipulation problems (i.e., manipulation problems inherently involving a locomotion element). Our study specifically targeted tasks that would require multiple contact interactions to be solved, rather than pick-and-place applications. To ensure our approach is not limited to simulation environments, we applied it to solve real-world tasks with a legged system consisting of the quadrupedal platform ANYmal equipped with DynaArm, a custom-built 6-DoF robotic arm.

Could you tell us about the implications of your research and why it is an interesting area for study?
The research was driven by the desire to make such robots, namely legged mobile manipulators, capable of solving a variety of real-world tasks, such as traversing doors, opening/closing dishwashers, manipulating valves in an industrial setting, and so forth. A standard approach would have been to tackle each task individually and independently by dedicating a substantial amount of engineering effort to handcraft the desired behaviors:

This is typically achieved through the use of hard-coded state-machines in which the designer specifies a sequence of sub-goals (e.g., grasp the door handle, open the door to a desired angle, hold the door with one of the feet, move the arm to the other side of the door, pass through the door while closing it, etc.). Alternatively, a human expert may demonstrate how to solve the task by teleoperating the robot, recording its motion, and having the robot learn to mimic the recorded behavior.

However, this process is very slow, tedious, and prone to engineering design errors. To avoid this burden for every new task, the research opted for a more structured approach in the form of a single planner that can automatically discover the necessary behaviors for a wide range of loco-manipulation tasks, without requiring any detailed guidance for any of them.

Could you explain your methodology?
The key insight underlying our methodology was that all of the loco-manipulation tasks that we aimed to solve can be modeled as Task and Motion Planning (TAMP) problems. TAMP is a well-established framework that has been primarily used to solve sequential manipulation problems where the robot already possesses a set of primitive skills (e.g., pick object, place object, move to object, throw object, etc.), but still has to properly integrate them to solve more complex long-horizon tasks.

This perspective enabled us to devise a single bi-level optimization formulation that can encompass all our tasks, and exploit domain-specific knowledge, rather than task-specific knowledge. By combining this with the well-established strengths of different planning techniques (trajectory optimization, informed graph search, and sampling-based planning), we were able to achieve an effective search strategy that solves the optimization problem.

The main technical novelty in our work lies in the Offline Multi-Contact Planning Module, depicted in Module B of Figure 1 in the paper. Its overall setup can be summarized as follows: Starting from a user-defined set of robot end-effectors (e.g., front left foot, front right foot, gripper, etc.) and object affordances (these describe where the robot can interact with the object), a discrete state that captures the combination of all contact pairings is introduced. Given a start and goal state (e.g., the robot should end up behind the door), the multi-contact planner then solves a single-query problem by incrementally growing a tree via a bi-level search over feasible contact modes jointly with continuous robot-object trajectories. The resulting plan is enhanced with a single long-horizon trajectory optimization over the discovered contact sequence.
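
To make the structure of that search more concrete, here is a minimal, illustrative Python sketch of a bi-level contact-mode tree search. It is not the authors' implementation: the names (`feasible_modes`, `optimize_segment`, `goal_test`) and the naive node selection are assumptions standing in for the paper's contact-mode enumeration, inner trajectory optimization, and task goal.

```python
# Illustrative sketch only -- not the code from the paper.
import itertools
import random

def feasible_modes(mode, end_effectors, affordances):
    """Enumerate contact modes reachable by changing a single contact pairing.
    A mode maps each end-effector to an affordance, or to None if it is free."""
    for ee, aff in itertools.product(end_effectors, affordances + [None]):
        if mode.get(ee) != aff:
            new_mode = dict(mode)
            new_mode[ee] = aff
            yield new_mode

def plan_multi_contact(start_mode, goal_test, end_effectors, affordances,
                       optimize_segment, max_iters=1000, seed=0):
    """Grow a tree of (contact mode, trajectory segment) nodes until goal_test
    accepts a node. optimize_segment(parent_node, new_mode) stands in for the
    inner trajectory optimization; it returns a trajectory or None if infeasible."""
    rng = random.Random(seed)
    tree = [{"mode": start_mode, "traj": None, "parent": None}]
    for _ in range(max_iters):
        node = rng.choice(tree)                          # naive node selection
        for mode in feasible_modes(node["mode"], end_effectors, affordances):
            traj = optimize_segment(node, mode)          # inner, continuous level
            if traj is None:
                continue                                 # this contact switch is infeasible
            child = {"mode": mode, "traj": traj, "parent": node}
            tree.append(child)
            if goal_test(child):                         # e.g. "torso ends up behind the door"
                return child                             # follow parent links to read off the plan
    return None
```

The sketch is only meant to show the two levels at play: a discrete outer search over which end-effector is in contact with which affordance, and a continuous inner problem that validates each contact switch with a robot-object trajectory.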

What were your main findings?
We found that our planning framework was able to rapidly discover complex multi-contact plans for diverse loco-manipulation tasks, despite having provided it with minimal guidance. For example, for the door-traversal scenario, we specify the door affordances (i.e., the handle, back surface, and front surface), and only provide a sparse objective by simply asking the robot to end up behind the door. Additionally, we found that the generated behaviors are physically consistent and can be reliably executed with a real legged mobile manipulator.

What further work are you planning in this area?
We see the presented framework as a stepping stone toward developing a fully autonomous loco-manipulation pipeline. However, we see some limitations that we aim to address in future work. These limitations are primarily connected to the task-execution phase, where tracking behaviors generated on the basis of pre-modeled environments is only viable under the assumption of a reasonably accurate description, which is not always straightforward to define.

Robustness to modeling mismatches can be greatly improved by complementing our planner with data-driven techniques, such as deep reinforcement learning (DRL). So one interesting direction for future work would be to guide the training of a robust DRL policy using reliable expert demonstrations that can be rapidly generated by our loco-manipulation planner to solve a set of challenging tasks with minimal reward-engineering.

About the author

Jean-Pierre Sleiman received the B.E. degree in mechanical engineering from the American University of Beirut (AUB), Lebanon, in 2016, and the M.S. degree in automation and control from Politecnico Di Milano, Italy, in 2018. He is currently a Ph.D. candidate at the Robotic Systems Lab (RSL), ETH Zurich, Switzerland. His current research interests include optimization-based planning and control for legged mobile manipulation.

[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics https://robohub.org/a-list-of-resources-articles-and-opinion-pieces-relating-to-large-language-models-robotics/ Wed, 23 Aug 2023 09:51:24 +0000 https://robohub.org/?p=207140

Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.

We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)

What LLMs are and how they work

Journal, conference, arXiv, and other articles

Newspaper, magazine, University website, and blogpost articles

Reports

Podcasts and video discussions

Focus on LLMs and education

Relating to art and other creative processes

Pertaining to robotics

Misinformation, fake news and the impact on journalism

Regulation and policy

#ICRA2023 daily video digest https://robohub.org/icra2023-daily-video-digest/ Sun, 11 Jun 2023 08:48:35 +0000 https://robohub.org/?p=207500

If you missed the 2023 edition of the IEEE International Conference on Robotics and Automation in London, here we bring you the video digests that were made on each of the main days of the conference. Enjoy!

Tuesday 30th May

Wednesday 31st May

Thursday 1st June

Full highlights

#ICRA2023 awards finalists and winners https://robohub.org/icra2023-awards-finalists-and-winners/ Mon, 05 Jun 2023 11:32:59 +0000 https://robohub.org/?p=207467

In this post we bring you all the paper awards finalists and winners presented during the 2023 edition of the IEEE International Conference on Robotics and Automation (ICRA). Congratulations to the winners and finalists!

ICRA 2023 Outstanding Paper

ICRA 2023 Outstanding Automation Paper

ICRA 2023 Outstanding Student Paper

ICRA 2023 Outstanding Deployed Systems Paper

ICRA 2023 Outstanding Dynamics and Control Paper

ICRA 2023 Outstanding Healthcare and Medical Robotics Paper

ICRA 2023 Outstanding Locomotion Paper

ICRA 2023 Outstanding Manipulation Paper

ICRA 2023 Outstanding Mechanisms and Design Paper

ICRA 2023 Outstanding Multi-Robot Systems Paper

ICRA 2023 Outstanding Navigation Paper

  • IMODE: Real-Time Incremental Monocular Dense Mapping Using Neural Field, by Matsuki, Hidenobu; Sucar, Edgar; Laidlow, Tristan; Wada, Kentaro; Scona, Raluca; Davison, Andrew J.
  • SmartRainNet: Uncertainty Estimation for Laser Measurement in Rain, by Zhang, Chen; Huang, Zefan; Tung, Beatrix; Ang Jr, Marcelo H; Rus, Daniela. (WINNER)
  • Online Whole-Body Motion Planning for Quadrotor Using Multi-Resolution Search, by Ren, Yunfan; Liang, Siqi; Zhu, Fangcheng; Lu, Guozheng; Zhang, Fu.

ICRA 2023 Outstanding Physical Human-Robot Interaction Paper

ICRA 2023 Outstanding Planning Paper

ICRA 2023 Outstanding Robot Learning Paper

ICRA 2023 Outstanding Sensors and Perception Paper

Interview with Hae-Won Park, Seungwoo Hong and Yong Um about MARVEL, a robot that can climb on various inclined steel surfaces https://robohub.org/interview-with-hae-won-park-seungwoo-hong-and-yong-um-about-marvel-a-robot-that-can-climb-on-various-inclined-steel-surfaces/ Sun, 15 Jan 2023 09:48:13 +0000 https://robohub.org/?p=206343

Prof. Hae-Won Park (left), Ph.D. Student Yong Um (centre), Ph.D. Student Seungwoo Hong (right). Credits: KAIST

We had the chance to interview Hae-Won Park, Seungwoo Hong and Yong Um, authors of the paper “Agile and versatile climbing on ferromagnetic surfaces with a quadrupedal robot”, recently published in Science Robotics.

What is the topic of the research in your paper?
The main topic of our work is that the robot we have developed can move agilely, not only on flat ground but also on vertical walls and ceilings made of ferromagnetic materials. It also has the ability to perform dexterous maneuvers such as crossing gaps, overcoming obstacles, and transitioning at corners.

Could you tell us about the implications of your research and why it is an interesting area for study?
Such agile and dexterous locomotion capabilities will expand the robot's operational workspace and allow it to reach places that are difficult or dangerous for human operators to access directly. Examples include inspection and welding operations in heavy industries such as shipbuilding, steel bridges, and storage tanks.

Could you explain your methodology? What were your main findings?
Our magnet foot can switch its on/off state in a short period of time (5 ms) and in an energy-efficient way, thanks to the novel geometry of the EPM (electro-permanent magnet). At the same time, the magnet foot can provide large holding forces in both the shear and normal directions thanks to the MRE (magnetorheological elastomer) footpad. Also, our actuators provide balanced speed/torque characteristics, high-bandwidth torque control capability, and the ability to mediate high impulsive forces. To control vertical and inverted locomotion as well as various versatile motions, we utilized a control framework (model predictive control) that generates reliable and robust reaction forces to track desired body motions in 3D space while preventing slippage or tipping over. We found that all the elements mentioned earlier are imperative to performing dynamic maneuvers against gravity.
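
As a rough illustration of the slippage constraint such a controller must respect (our own simplified sketch, not the formulation from the paper; the friction coefficient and holding force below are assumed placeholder values), magnetic adhesion effectively enlarges the friction cone at each foot:

```python
# Illustrative check only -- not from the paper. With magnetic adhesion F_adh pulling
# the foot onto the surface, the usable normal load is (f_n + F_adh), so the contact
# can resist shear up to mu * (f_n + F_adh) and tolerate pulling loads down to -F_adh.
import numpy as np

MU = 0.5        # assumed friction coefficient of the footpad on steel
F_ADH = 120.0   # assumed magnetic holding force per foot [N]

def contact_force_ok(f, mu=MU, f_adh=F_ADH):
    """f = [f_tangential_x, f_tangential_y, f_normal] in the foot contact frame."""
    f_t = np.hypot(f[0], f[1])                 # shear force magnitude
    f_n = f[2]                                 # normal force (positive = pushing)
    pull_off_ok = f_n >= -f_adh                # adhesion bounds how hard we may pull
    no_slip = f_t <= mu * (f_n + f_adh)        # adhesion-enlarged friction cone
    return pull_off_ok and no_slip

# Example: pushing with 30 N while resisting 60 N of shear would slip without
# adhesion (60 > 0.5 * 30) but holds with it (60 <= 0.5 * (30 + 120)).
print(contact_force_ok(np.array([60.0, 0.0, 30.0])))   # True
```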

What further work are you planning in this area?
So far, the robot is able to move on smooth surfaces with moderate curvature. To enable the robot to move on irregularly shaped surfaces, we are working on compliantly integrating multiple miniaturized EPMs with MRE footpads to increase the effective contact area and provide robust adhesion. A vision system with high-level navigation algorithms will also be included to enable the robot to move autonomously in the near future.

About the authors

Hae-Won Park received the B.S. and M.S. degrees from Yonsei University, Seoul, South Korea, in 2005 and 2007, respectively, and the Ph.D. degree from the University of Michigan, Ann Arbor, MI, USA, in 2012, all in mechanical engineering. He is an Associate Professor of mechanical engineering at the Korea Advanced Institute of Science and Technology, Daejeon, South Korea. His research interests include the intersection of control, dynamics, and mechanical design of robotic systems, with special emphasis on legged locomotion robots. Dr. Park is the recipient of the 2018 National Science Foundation (NSF) CAREER Award, one of the NSF's most prestigious awards in support of early-career faculty.

Seungwoo Hong received the B.S. degree from Shanghai Jiao Tong University, Shanghai, China, in July 2014, and the M.S. degree from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in August 2017, both in mechanical engineering. He is currently a Ph.D. candidate with the Department of Mechanical Engineering, KAIST, Daejeon, Korea. His current research interests include model-based optimization, motion planning and control of legged robotic systems.

Yong Um received the B.S. degree in mechanical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2020. He is currently working toward the Ph.D. degree in mechanical engineering at the Korea Advanced Institute of Science and Technology. His research interests include mechanical system and magnetic device design for legged robots.


Science Magazine robot videos 2022 (+ breakthrough of the year) https://robohub.org/science-magazine-robot-videos-2022-breakthrough-of-the-year/ Tue, 03 Jan 2023 12:01:48 +0000 https://robohub.org/?p=206272

Image generated by DALL-E 2 using the prompt “a hyperrealistic image of a robot watching robot videos on a laptop”

Did you manage to watch all the holiday robot videos of 2022? If you did but are still hungry for more, I have a few more videos from Science Magazine featuring robotics research that were released during the past year. Enjoy!

Extra: breakthrough of the year

Robot Talk Podcast – November & December episodes (+ bonus winter treats) https://robohub.org/robot-talk-podcast-november-december-episodes-bonus-winter-treats/ Fri, 30 Dec 2022 09:30:54 +0000 https://robohub.org/?p=206245

Episode 24 – Gopal Ramchurn

Claire chatted to Gopal Ramchurn from the University of Southampton about artificial intelligence, autonomous systems and renewable energy.

Sarvapali (Gopal) Ramchurn is a Professor of Artificial Intelligence, Turing Fellow, and Fellow of the Institution of Engineering and Technology. He is the Director of the UKRI Trustworthy Autonomous Systems hub and Co-Director of the Shell-Southampton Centre for Maritime Futures. He is also a Co-CEO of Empati Ltd, an AI startup working on decentralised green hydrogen technologies. His research is about the design of Responsible Artificial Intelligence for socio-technical applications including energy systems and disaster management.

Episode 25 – Ferdinando Rodriguez y Baena

Claire chatted to Ferdinando Rodriguez y Baena from Imperial College London about medical robotics, robotic surgery, and translational research.

Ferdinando Rodriguez y Baena is Professor of Medical Robotics in the Department of Mechanical Engineering at Imperial College, where he leads the Mechatronics in Medicine Laboratory and the Applied Mechanics Division. He has been the Engineering Co-Director of the Hamlyn Centre, which is part of the Institute of Global Health Innovation, since July 2020. He is a founding member and great advocate of the Imperial College Robotics Forum, now the first point of contact for roboticists at Imperial College.

Episode 26 – Séverin Lemaignan

Claire chatted to Séverin Lemaignan from PAL Robotics all about social robots, behaviour, and robot-assisted human-human interactions.

Séverin Lemaignan is Senior Scientist at Barcelona-based PAL Robotics. He leads the Social Intelligence team, in charge of designing and developing the socio-cognitive capabilities of robots like PAL TIAGo and PAL ARI. He obtained his PhD in Cognitive Robotics in 2012 from the CNRS/LAAS and the Technical University of Munich, and worked at Bristol Robotics Lab as Associate Professor in Social Robotics, before moving to industry. His research primarily concerns socio-cognitive human-robot interaction, child-robot interaction and human-in-the-loop machine learning for social robots.

Episode 27 – Simon Wanstall

Claire chatted to Simon Wanstall from the Edinburgh Centre for Robotics all about soft robotics, robotic prostheses, and taking inspiration from nature.

Simon Wanstall is a PhD student at the Edinburgh Centre for Robotics, working on advancements in soft robotic prosthetics. His research interests include soft robotics, bioinspired design and healthcare devices. Simon’s current project is to develop soft sensors so that robotic prostheses can feel the world around them. In order to develop his skills in this area, Simon is also undertaking an industrial placement with Touchlab, a robotics company specialising in sensors.

Episode 28 – Amanda Prorok

Claire chatted to Amanda Prorok from the University of Cambridge all about self-driving cars, industrial robots, and multi-robot systems.

Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College. She is interested in finding practical methods for hard coordination problems that arise in multi-robot and multi-agent systems.

Episode 29 – Sina Sareh

Claire chatted to Sina Sareh from the Royal College of Art all about industrial inspection, soft robotics, and robotic grippers.

Sina Sareh is the Academic Leader in Robotics at the Royal College of Art. He is currently a Reader (Associate Professor) in Robotics and Design Intelligence at the RCA and an EPSRC Fellow. His research develops technological solutions to problems of human safety, access and performance involved in a range of industrial operations. Dr Sareh holds a PhD from the University of Bristol, 2012, and served as an impact assessor of Sub-panel 12: Engineering in the assessment phase of the Research Excellence Framework (REF) 2021.

Episode 30 – Ana Cavalcanti

Claire chatted to Ana Cavalcanti from the University of York all about software development, testing and verification, and autonomous mobile robots.

Ana Cavalcanti is a Royal Academy of Engineering Chair in Emerging Technologies. She is the leader of the RoboStar centre of excellence on Software Engineering for Robotics. The RoboStar approach to model-based Software Engineering complements current practice of design and verification of robotic systems, covering simulation, testing, and proof. It is practical, supported by tools, and yet mathematically rigorous.

Bonus winter treats

What is your favourite fictional robot?

What is your advice for a robotics career?

What is your favourite machine or tool?

Could you be friends with a robot?

A day in the life

Holiday robot videos 2022 updated (+ how robots prepare an Amazon warehouse for Christmas) https://robohub.org/holiday-robot-videos-2022-how-robots-prepare-an-amazon-warehouse-for-christmas/ Thu, 29 Dec 2022 09:00:21 +0000 https://robohub.org/?p=206216

Image generated by OpenAI’s DALL-E 2 with prompt “a robot surrounded by humans, Santa Claus and a Christmas tree at Christmas, digital art”.

Happy holidays everyone! And many thanks to all those that sent us their holiday videos. Here are some robot videos of this year to get you into the spirit of the season. We wish you the very best for these holidays and the year 2023 :)

And here are some very special season greetings from robots!

Recent submissions

https://www.youtube.com/shorts/OH-vK5VujMU

Extra: How robots prepare an Amazon warehouse for Christmas


Did we miss your video? You can send it to daniel.carrillozapata@robohub.org and we’ll include it in this list.

2nd call for robot holiday videos 2022 (with first submissions!) https://robohub.org/2nd-call-for-robot-holiday-videos-2022-with-first-submissions/ Thu, 15 Dec 2022 10:43:42 +0000 https://robohub.org/?p=206137

That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

Drop your submissions down our chimney at daniel.carrillozapata@robohub.org and share the spirit of the season.

Here are our first two submissions of the roundup:

Call for robot holiday videos 2022 https://robohub.org/call-for-robot-holiday-videos-2022/ Fri, 02 Dec 2022 11:12:05 +0000 https://robohub.org/?p=206016

That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

Drop your submissions down our chimney at daniel.carrillozapata@robohub.org and share the spirit of the season.

For inspiration, here are our lists from previous years.

#IROS2022 best paper awards https://robohub.org/iros2022-best-paper-awards/ Mon, 14 Nov 2022 09:00:14 +0000 https://robohub.org/?p=205912

Did you have the chance to attend the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) in Kyoto? Here we bring you the papers that received an award this year in case you missed them. Congratulations to all the winners and finalists!

Best Paper Award on Cognitive Robotics

Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation. Payam Jome Yazdian, Mo Chen, and Angelica Lim.

Best RoboCup Paper Award

RCareWorld: A Human-centric Simulation World for Caregiving Robots. Ruolin Ye, Wenqiang Xu, Haoyuan Fu, Rajat Kumar Jenamani, Vy Nguyen, Cewu Lu, Katherine Dimitropoulou, and Tapomayukh Bhattacharjee.

SpeedFolding: Learning Efficient Bimanual Folding of Garments. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg.

Best Paper Award on Robot Mechanisms and Design

Aerial Grasping and the Velocity Sufficiency Region. Tony G. Chen, Kenneth Hoffmann, JunEn Low, Keiko Nagami, David Lentink, and Mark Cutkosky.

Best Entertainment and Amusement Paper Award

Robot Learning to Paint from Demonstrations. Younghyo Park, Seunghun Jeon, and Taeyoon Lee.

Best Paper Award on Safety, Security, and Rescue Robotics

Power-based Safety Layer for Aerial Vehicles in Physical Interaction using Lyapunov Exponents. Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, and Roland Siegwart.

Best Paper Award on Agri-Robotics

Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture. Claus Smitt, Michael Allan Halstead, Alireza Ahmadi, and Christopher Steven McCool.

Best Paper Award on Mobile Manipulation

Robot Learning of Mobile Manipulation with Reachability Behavior Priors. Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki.

Best Application Paper Award

Soft Tissue Characterisation Using a Novel Robotic Medical Percussion Device with Acoustic Analysis and Neural Networks. Pilar Zhang Qiu, Yongxuan Tan, Oliver Thompson, Bennet Cobley, and Thrishantha Nanayakkara.

Best Paper Award for Industrial Robotics Research for Applications

Absolute Position Detection in 7-Phase Sensorless Electric Stepper Motor. Vincent Groenhuis, Gijs Rolff, Koen Bosman, Leon Abelmann, and Stefano Stramigioli.

ABB Best Student Paper Award

FAR Planner: Fast, Attemptable Route Planner using Dynamic Visibility Update. Fan Yang, Chao Cao, Hongbiao Zhu, Jean Oh, and Ji Zhang.

Best Paper Award

SpeedFolding: Learning Efficient Bimanual Folding of Garments. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg.

Robot Talk Podcast – October episodes https://robohub.org/robot-talk-podcast-october-episodes/ Sat, 12 Nov 2022 09:41:53 +0000 https://robohub.org/?p=205938

Episode 20 – Paul Dominick Baniqued

Claire talked to Dr Paul Dominick Baniqued from The University of Manchester all about brain-computer interface technology and rehabilitation robotics.

Paul Dominick Baniqued received his PhD in robotics and immersive technologies at the University of Leeds. His research tackled the integration of a brain-computer interface with virtual reality and hand exoskeletons for motor rehabilitation and skills learning. He is currently working as a postdoc researcher on cyber-physical systems and digital twins at the Robotics for Extreme Environments Group at the University of Manchester.

Episode 21 – Sean Katagiri

Claire chatted to Sean Katagiri from The National Robotarium all about underwater robots, offshore energy, and other industrial applications of robotics.

Sean Katagiri is a robotics engineer who has the pleasure of being surrounded by and working with robots for a living. His experience in robotics mainly comes from the subsea domain, but has also worked with wheeled and legged ground robots as well. Sean is very excited to have recently started his role at The National Robotarium, whose goal is to bring ideas from academia and turn them into real world solutions.

Episode 22 – Iveta Eimontaite

Claire talked to Dr Iveta Eimontaite from Cranfield University about psychology, human-robot interaction, and industrial robots.


Iveta Eimontaite studied Cognitive Neuroscience at the University of York and completed her PhD in Cognitive Psychology at Hull University. Prior to joining Cranfield University, Iveta held research positions at Bristol Robotics Laboratory and Sheffield Robotics. Her work mainly focuses on behavioural and cognitive aspects of Human-Technology Interaction, with particular interest in user needs and requirements for the successful integration of technology within the workplace/social environments.

Episode 23 – Mickey Li

Claire talked to Mickey Li from the University of Bristol about aerial robotics, building inspection and multi-robot teams.

Mickey Li is a Robotics and Autonomous systems PhD researcher at the Bristol Robotics Laboratory and the University of Bristol. His research focuses on optimal multi-UAV path planning for building inspection, in particular how guarantees can be provided despite vehicle failures. Most recently he has been developing a portable development and deployment infrastructure for multi-UAV experimentation for the BRL Flight Arena inspired by advances in cloud computing.

Robots come out of the research lab https://robohub.org/robots-come-out-of-the-research-lab/ Wed, 02 Nov 2022 14:33:52 +0000 https://robohub.org/?p=205875

This year’s Swiss Robotics Day – an annual event run by the EPFL-led National Centre of Competence in Research (NCCR) Robotics – will be held at the Beaulieu convention center in Lausanne. For the first time, this annual event will take place over two days: the first day, on 4 November, will be reserved for industry professionals, while the second, on 5 November, will be open to the public.

Visitors at this year’s Swiss Robotics Day are in for a glimpse of some exciting new technology: a robotic exoskeleton that enables paralyzed patients to ski, a device the width of a strand of hair that can be guided through a human vein, a four-legged robot that can walk over obstacles, an artificial skin that can diagnose early-stage Parkinson’s, a swarm of flying drones, and more.

The event, now in its seventh year, was created by NCCR Robotics in 2015. It has expanded into a leading conference for the Swiss robotics industry, bringing together university researchers, businesses and citizens from across the country. For Swiss robotics experts, the event provides a chance to meet with peers, share ideas, explore new business opportunities and look for promising new hires. That’s what they’ll do on Friday, 4 November – the day reserved for industry professionals.

On Saturday, 5 November, the doors will open to the general public. Visitors of all ages can discover the latest inventions coming out of Swiss R&D labs and fabricated by local companies – including some startups. The event will feature talks and panel discussions on topics such as ethics in robotics, space robotics, robotics in art and how artificial intelligence can be used to promote sustainable development – all issues that will shape the future of the industry. PhD students will provide a snapshot of where robotics research stands today, while school-age children can sign up for robot-building workshops. Teachers can take part in workshops given by the Roteco robot teaching community and see how robotics technology can support learning in the classroom.

In the convention center’s 5,000 m² exhibit hall, some 70 booths will be set up with all sorts of robot demonstrations, complete with an area for flying drones. Technology developed as part of the Cybathlon international competition will be on display; this competition was introduced by NCCR Robotics in 2016 to encourage research on assistance systems for people with disabilities. Silke Pan will give a dance performance with a robotic exoskeleton, choreographed by Antoine Le Moal of the Béjart ballet company. Talks will be given in French and English. Entrance is free of charge but registration is required.

Laying the foundation for future success

The 2022 Swiss Robotics Day will mark the end of NCCR Robotics, capping 12 years of cutting-edge research. The center was funded by the Swiss National Science Foundation and has sponsored R&D at over 30 labs in seven Swiss institutions: EPFL, ETH Zurich, the University of Zurich, IDSIA-SUPSI-USI in Lugano, the University of Bern, EMPA and the University of Basel. NCCR Robotics has given rise to 16 spin-offs in high-impact fields like portable robots, drones, search-and-rescue systems and education. Together the spin-offs have raised over CHF 100 million in funding and some of them, like Flyability and ANYbotics, have grown into established businesses creating hundreds of high-tech jobs. The center has also rolled out several educational and community initiatives to further the teaching of robotics in Switzerland.

After the center closes, some of its activities – especially those related to technology transfer – will be carried out by the Innovation Booster Robotics program sponsored by Innosuisse and housed at EPFL. This program, initially funded for three years, is designed to promote robotics in universities and the business world.

A day for industry professionals only

The first day of the event, 4 November, is intended for robotics-industry businesses, investors, researchers, students and journalists. It will kick off with a talk by Robin Murphy, a world-renowned expert in rescue robotics and a professor at Texas A&M University; she will be followed by Auke Ijspeert from EPFL’s Biorobotics Laboratory, Elena García Armada from the Center for Automation and Robotics in Spain, Raffaello D’Andrea (a pioneer in robotics-based inventory management) from ETH Zurich, Thierry Golliard from Swiss Post and Adrien Briod, the co-founder of Flyability.

In the afternoon, a panel discussion will explore how robots and artificial intelligence are changing the workplace. Experts will include Dario Floreano from NCCR Robotics and EPFL, Rafael Lalive from the University of Lausanne, Alisa Rupenyan-Vasileva from ETH Zurich, Agnès Petit Markowski from Mobbot and Pierre Dillenbourg from EPFL. Event participants will also have a chance to network that afternoon. The day will conclude with an awards ceremony to designate Switzerland’s best Master’s thesis on robotics. The booths and robot demonstrations will take place on both days of the event.

A virtual glimpse of NCCR Robotics research

NCCR Robotics develops a new generation of robots that can work side by side with humans: fighting disabilities, facing emergencies and transforming education. Check out the videos below to see them in more detail.

Cooperative cargo transportation by a swarm of molecular machines https://robohub.org/cooperative-cargo-transportation-by-a-swarm-of-molecular-machines/ Sat, 23 Jul 2022 15:36:20 +0000 https://robohub.org/?p=205098

Dr. Akira Kakugo and his team from Hokkaido University in Japan sent us a video presentation of his recent paper ‘Cooperative cargo transportation by a swarm of molecular machines’, published in Science Robotics.

‘Despite the advancements in macro-scale robots, the development of a large number of small-size robots is still challenging, which is crucial to enhance the scalability of swarm robots,’ says Dr. Kakugo. In the paper, the researchers showed that it is possible to collectively transport molecular cargo using a swarm of artificial molecular robots (engineered systems with biological/molecular sensors, processors and actuators) that respond to light.

The one-wheel Cubli https://robohub.org/the-one-wheel-cubli/ Thu, 30 Jun 2022 07:34:39 +0000 https://robohub.org/?p=204912

Researchers Matthias Hofer, Michael Muehlebach and Raffaello D’Andrea have developed the one-wheel Cubli, a three-dimensional pendulum system that can balance on its pivot using a single reaction wheel. How is it possible to stabilize the two tilt angles of the system with only a single reaction wheel?

The key is to design the system such that the inertia in one direction is higher than in the other direction by attaching two masses far away from the center. As a consequence, the system moves faster in the direction with the lower inertia and slower in the direction with the higher inertia. The controller can leverage this property and stabilize both directions simultaneously.
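
A rough linearized sketch (our own simplification with assumed symbols, not the authors' model) makes this intuition concrete. Writing the two tilt angles as $\theta_1$ and $\theta_2$, the dynamics near the upright equilibrium look roughly like

$$J_1\,\ddot{\theta}_1 \approx m g \ell\,\theta_1 + b_1\,\tau, \qquad J_2\,\ddot{\theta}_2 \approx m g \ell\,\theta_2 + b_2\,\tau,$$

where $\tau$ is the single reaction-wheel torque, $b_1$ and $b_2$ are its fixed projections onto the two tilt axes, and $J_1 \neq J_2$ because of the attached masses. With equal inertias the two unstable modes would be identical and a single input could not tell them apart; with different inertias they diverge at different rates ($\sqrt{m g \ell / J_1}$ versus $\sqrt{m g \ell / J_2}$), so one well-timed torque can stabilize both directions simultaneously.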

This work was carried out at the Institute for Dynamic Systems and Control, ETH Zurich, Switzerland.

Almost a decade has passed since the first Cubli

The Cubli robot started with a simple idea: Can we build a 15cm sided cube that can jump up, balance on its corner, and walk across our desk using off-the-shelf motors, batteries, and electronic components? The educational article Cubli – A cube that can jump up, balance, and walk across your desk shows all the design principles and prototypes that led to the development of the robot.

Cubli, from ETH Zurich.

Coffee with a Researcher (#ICRA2022) https://robohub.org/coffee-with-a-researcher-icra2022/ Mon, 20 Jun 2022 10:46:19 +0000 https://robohub.org/?p=204827

As part of her role as one of the IEEE ICRA 2022 Science Communication Awardees, Avie Ravendran sat down virtually with a few researchers from academia and industry attending the conference. Curious about what they had to say? Read their quotes below!

“I really believe that learned methods, especially imitation and transfer learning, will enable scalable robot applications in human and unstructured environments. We’re on the cusp of seeing robot agents dynamically adapt and solve real-world problems”

– Nicholas Nadeau, CTO, Halodi Robotics

“On one hand I think that the interplay of perception and control is quite exciting, in terms of the common underlying principles, while on the other, it’s both cool and inspiring to see more robots getting out of the lab”

– Matías Mattamala, PhD Student, Oxford Dynamic Robot Systems, Oxford Robotics Institute

“I believe that incorporating priors regarding the existing scene geometry and the temporal consistency that’s present in the context of mobile robotics, can be used to guide the learning of more robust representations”

– Kavisha Vidanapathirana, QUT & CSIRO Robotics

“At the moment, I am aiming to find out what researchers need in order to take care of their motivation and wellbeing”

– Daniel Carrillo-Zapata, Founder, Scientific Agitation

“We have an immense amount of unsupervised knowledge and we’re always updating our priors. Taking advantage of large-scale unsupervised pretraining and having a lifelong learning system seems like a significant step in the right direction”

– Nitish Dashora, Researcher, Berkeley AI Research & Redwood Center for Theoretical Neuroscience

“When objects are in clutter, with various objects lying on top of one another, the robot needs to interactively and autonomously rearrange the scene in order to retrieve the pose of the target object with minimal number of actions to achieve overall efficiency. I work on pose estimation algorithms to process dense visual data as well as sparse tactile data”

– Prajval Kumar, BMW & University of Glasgow

“Thinking of why the robots or even the structures behave the way they do, and framing and answering questions in that line satisfies my curiosity as a researcher”

– Tung Ta, Postdoctoral Researcher, The University of Tokyo

“I sometimes hear that legged locomotion is a solved problem, but I disagree. I think that the standards of performance have just been raised and collectively we can now tackle more dynamic, efficient and reliable gaits”

– Kevin Green, PhD Candidate, Oregon State University

“My goal in robotics research is to bring down the cost and improve the capabilities of marine research platforms by introducing modularity and underactuation into the field. We’re working on understanding how to bring our collective swimming technology into flowing environments now”

– Gedaliah Knizhnik, PhD Candidate, GRASP Laboratory & the Modular Robotics Laboratory, University of Pennsylvania

“I am interested in how we can develop the algorithms and representations needed to enable long-term autonomous robot navigation without human intervention, such as in the case of an autonomous underwater robot persistently mapping a marine ecosystem for an extended period of time. There are lots of challenges like how can we build a compact representation of the world, ideally grounded in human-understandable semantics? How can we deal gracefully with outliers in perception that inevitably occur in the lifelong setting? and also how can we scale robot state estimation methods in time and space while bounding memory and computation requirements?”

– Kevin Doherty, Computer Science and AI Lab, MIT & Woods Hole Oceanographic Institution

“How can robots learn to interact with and reason about themselves and the world without an intuitive feel for either? Communication is at the heart of biological and robotic systems. Inspired by control theory, information theory, and neuroscience, early work in artificial intelligence (AI) and robotics focused on a class of dynamical system known as feedback systems. These systems are characterized by recurrent mechanisms or feedback loops that govern, regulate, or ‘steer’ the behaviour of the system toward desirable stable states in the presence of disturbance in diverse environments. Feedback between sensation, prediction, decision, action, and back is a critical component of sensorimotor learning needed to realize robust intelligent robotic systems in the wild, a grand challenge of the field. Existing robots are fundamentally numb to the world, limiting their ability to sense themselves and their environment. This problem will only increase as robots grow in complexity, dexterity, and maneuverability, guided by biomimicry. Feedback control systems such as the proportional integral derivative (PID), reinforcement learning (RL), and model predictive control (MPC) are now common in robotics, as is (optimal, Bayesian) Kálmán filtering of point-based IMU-GPS signals. Lacking are the distributed multi-modal, high-dimensional sensations needed to realize general intelligent behaviour, executing complex action sequences through high-level abstractions built up from an intuitive feel or understanding of physics. While the central nervous system and biological neural networks are quantum parallel distributed processing (PDP) engines, most digital artificial neural networks are fully decoupled from sensors and provide only a passive image of the world. We are working to change that by coupling parallel distributed sensing and data processing through a neural paradigm. This involves innovations in hardware, software, and datasets. At Nervosys, we aim to make this dream a reality by building the first nervous system and platform for general robotic intelligence.”

– Adam Erickson, Founder, Nervosys

After a few years apart, IEEE #ICRA2022 reunited the robotics community again https://robohub.org/after-a-few-years-apart-ieee-icra2022-reunited-the-robotics-community-again/ Fri, 03 Jun 2022 08:07:10 +0000 https://robohub.org/?p=204713

Welcome Reception. Credits: Wise Owl Multimedia

The 39th edition of the IEEE International Conference on Robotics and Automation (ICRA) took place at the Pennsylvania Convention Center in Philadelphia (USA) and online May 23-27. ICRA 2022 brought together the world’s top researchers and companies to share ideas and advances in the fields of robotics and automation.

Nearly 8,000 participants from academia and industry, including 4700 in person, from a total of 97 countries, joined the largest conference in robotics. Indeed, these figures reflect the evolution of the field in the last 34 years, with the last ICRA in Philadelphia (1988) only welcoming around 300 participants and a few exhibitors.

“We were thrilled to see the robotics community respond so positively to the first in-person ICRA conference since the pandemic started,” ICRA 2022 General Co-Chair George J. Pappas (University of Pennsylvania) commented. “In addition to breaking numerous records, such as in-person attendance and number of countries represented, there was an incredible amount of energy and interaction among attendees, exhibitors, competitors, and other aspects of the technical program. By all accounts, ICRA 2022 has been a major success.”

Not only did ICRA 2022 have an impact on the robotics community worldwide, but also locally in the city of Philadelphia and the state of Pennsylvania. As ICRA 2022 General Co-Chair Vijay Kumar (University of Pennsylvania) stated, “ICRA 2022 showcased Philadelphia area robotics research and development activities in academia and industry, and exposed high school and university students to exciting STEM opportunities and cutting edge research in robotics.”

The main topic of ICRA 2022 was ‘Future of Work’, with dedicated discussion sessions such as the Future of Work Forum, which featured panelists Jeff Burnstein (A3), Erik Brynjolfsson (Stanford), Moshe Vardi (Rice University), Michael Lotito (Littler), Bernd Liepert (EuRobotics), Cecilia Laschi (NUS), and Ioana Marinescu (University of Pennsylvania). The main take-home message of the session, chaired by Henrik Christensen (University of California, San Diego), was that employment and robot sales are correlated. “When companies hire people, they also invest in new technology. An important aspect is how we train the workforce, and how we re-train to continue to have access to a competitive workforce,” as Henrik Christensen summarized. “The panel agreed that there is a need for a comprehensive view of how we provide training, invest in technology, and provide appropriate policies to continue to grow the economy,” he finished. Another five Forum sessions were organized, covering the topics of Industry, US National Robotics Programs, Investor/Venture Capital, Start-ups, and Autonomous Driving.

Participants in one of the sessions. Credits: Wise Owl Multimedia

Plenary and Keynote sessions were an important part of the conference, with 12 top researchers from the robotics community being invited to share their reflections on topics such as Robots, Ethics, and Society (Ayanna Howard, Ohio State University, USA); Human-Machine Partnerships and Work of the Future (Julie Shah, MIT, USA); or the Embodied Intelligence Aporia (Antonio Bicchi, University of Pisa, Italy), among others. Many research contributions were also presented during ICRA 2022 by the robotics community during the 1,500 Technical presentations and 56 Workshop & Tutorial sessions, especially by students, who represented 46% of all participants. A total of 13 Awards were given to researchers to honor their outstanding contributions in topics such as Computer Vision and Pattern Recognition, Robot Grasping, Collaborative Transportation, Legged & Aerial Robot Subterranean Exploration, Monocopters, Human-Robot Interaction for Surgery, Manipulation, Track-leg & Wheel-leg Ground Robotics, Biomimetic Tactile Sensors, Visual Odometry, and Trajectory Optimization.

Researchers and companies were able to show their robots in action in the 10 Competitions that were organized, including robots understanding where they are and moving or flying around obstacles, battling robots, robots setting dinner tables, and even examining the ethics of robots doing tasks for humans. A wide range of international teams competed in challenges like the ‘10th F1TENTH Grand Prix’ (winning place went to University of Pennsylvania students), where competitors had to build a 1:10 scaled autonomous race car, and write software for it to race head-to-head with the rest of the cars, minimizing lap time without crashing. Other competitions like the ‘FIRST LEGO League Challenge’ featured younger students, where local teams of 4th-8th graders had to design, build, and program a LEGO robot to complete autonomous missions. “Competitions provide an excellent way to validate research. Robotics research results often show great progress, but it is hard to compare different methods unless they are on the same playing field literally,” ICRA 2022 Chair of Competitions Mark Yim (University of Pennsylvania) stated. “There are also the outreach components. Competitions provide an exciting mechanism to show students and the general population what is involved in robotics,” Mark added.

Competitors of the ‘10th F1TENTH Grand Prix’ Competition. Credits: Daniel Carrillo-Zapata

On the industry side of the conference, a robot exhibition hall was prepared with nearly 100 Exhibitors offering robot displays, Tech Huddles, and demos from companies like Dyson, Motional, Built Robotics, Exyn, Ghost Robotics, NVIDIA, Technology Innovation Institute, Boston Dynamics, Pal-Robotics, KUKA Robotics, Amazon Science, Toyota Research Institute, or Tesla, among others. As ICRA 2022 Accessibility Chair Andra Keay (Silicon Valley Robotics), who also Chaired the Startup Forum, commented, “It was incredibly exciting to see such a strong participation from industry at ICRA 2022. Companies were able to showcase their technologies, recruit talented candidates, network with their peers, and discover the latest advances in robotics. Startups also were able to showcase to investors and potential customers. Tech Huddles were a great addition to the program because they gave students and faculty more opportunities to network with industry.”

Exhibitors’ robots round-up. Credits: Wise Owl Multimedia

As not everyone could attend ICRA 2022 in person, the Organizing Committee, the IEEE Robotics and Automation Society and OhmniLabs teamed up to offer access to OhmniLabs’ telepresence robots that were on site. Three OhmniBots were in the main exhibition hall (with all the other robots) from opening to closing from Tuesday to Thursday, with time slots aligning with Poster Sessions, networking breaks and Expo Hall hours.

ICRA 2022 continued exploring innovative ways to generate impact on robotics and automation through its partnership with the RAD Lab and several Philadelphia-based art galleries to offer a central space for art in its program. Building on the previous ICRA robotic art programs, this year’s installment explored aesthetic and creative influences on robot motion through interactive, expressive, and meditative robotic art installations. When RAD Lab artist-in-residence Kate Ladenheim was interviewed by David Garzón Ramos (IRIDIA), one of the four ICRA 2022 Science Communication Awardees, about the intersection of robotics and art, she answered, “Roboticists and artists both engage in really intensive creative processes in order to make their work come to life. If a roboticist is trying to make something with a high level of intellectual and conceptual rigor, with an output that is not necessarily commercial in nature but is one that is meant to enrich the lives of people around it, then I think it crosses the realm from a project into a project that is artistic.”

This year’s edition of ICRA was a record-breaking gathering of the top roboticists and stakeholders from academia and industry, having a huge impact on the local and international community. Preparations are already in place for the 40th edition of the conference. ICRA 2023 will take place at ExCeL London from 29 May to 2 June 2023. Kaspar Althoefer (Queen Mary University of London), General Chair of ICRA 2023, stated, “We are looking forward to building on the wonderful success of ICRA 2022 in Philadelphia with the ICRA 2023 conference. The theme of the congress is ‘Embracing the Future. Making Robots for Humans’, and we are planning a wide range of workshops, sessions, networking events and tours to fit in with this theme. We are looking forward to welcoming everyone to London next year!”

ICRA 2023 will take place in London. Credits: Wise Owl Multimedia

We would like to thank ICRA 2022 Partners, which have also supported us in record numbers this year, as well as all participants, organization members and volunteers. Thank you for coming together again in person and online after a few years apart to imagine a new future where humans and robots work together!

The IEEE International Conference on Robotics and Automation (ICRA) kicks off with the largest in-person participation and number of represented countries ever https://robohub.org/the-ieee-international-conference-on-robotics-and-automation-icra-kicks-off-with-the-largest-in-person-participation-and-number-of-represented-countries-ever/ Tue, 24 May 2022 16:49:56 +0000 https://robohub.org/?p=204553

Photo credit: Wise Owl Multimedia

The IEEE International Conference on Robotics and Automation (ICRA), taking place simultaneously at the Pennsylvania Convention Center in Philadelphia and virtually, has just kicked off. ICRA 2022 brings together the world’s top researchers and most important companies to share ideas and advances in the fields of robotics and automation.

This is the first time the ICRA community is reunited after the pandemic, resulting in record breaking attendance with over 7,000 registrations and 95 countries represented. As the ICRA 2022 Co-Chair Vijay Kumar (University of Pennsylvania, USA) states, “we could not be happier to host the largest robotics conference in the world in Philadelphia, and the beginning of the re-emergence from the pandemic after a three year hiatus. We are back!”

Many important developments in robotics and automation have historically been first presented at ICRA, and in its 39th year, ICRA 2022 promises to take this trend one step further. As the practical and socio-economic impact of our field continues to expand, robotics and automation are increasingly taking center stage in our lives and will play an important role in the Future of Work and Society, the theme of this year’s conference. Indeed, the Future of Work Forum Session being held on Thursday May 26th will specifically address the impact of automation on the future of employment, featuring panelists Jeff Burnstein (A3), Erik Brynjolfsson (Stanford), Moshe Vardi (Rice University), Michael Lotito (Littler), Bernd Liepert (EuRobotics), Cecilia Laschi (NUS), and Ioana Marinescu (University of Pennsylvania). Five more Forums will be happening from Tuesday to Thursday, including an Industry Forum on Tuesday May 24th and a Startup Forum on Wednesday May 25th.

The conference will also feature Plenaries and Keynotes by world-renowned roboticists on topics such as Robot Ethics, Legged Robots for Industrial and Search & Rescue Applications, Robot Farming, Autonomous Logistics or Smart Sensing, as well as 1500 Technical Talks on the state-of-the-art in robotics. A total of 39 researchers have been nominated for the 13 awards that ICRA 2022 is giving on Thursday, for their outstanding research contributions in categories such as Automation, Coordination, Interaction, Learning, Locomotion, Manipulation, Navigation or Planning, among others. As the ICRA 2022 Program Chair Hadas Kress-Gazit (Cornell University, USA) comments, “we are so excited to see the latest and greatest in robotics research and meet old and new friends!”

Furthermore, a robot exhibition hall has been prepared with nearly 100 confirmed Exhibitors offering robot displays and demos from companies like Dyson, Motional, Built Robotics, NVIDIA, Technology Innovation Institute, Boston Dynamics, Pal-Robotics, KUKA Robotics, Toyota Research Institute, Tesla or Waymo, among others.

Competitions are also a big part of ICRA 2022. A total of 10 exciting Competitions will be taking place from Monday May 23rd to Friday May 27th, on the following topics: Autonomous Ground & Aerial Robots Navigation, Localization and Mapping, Robot Manipulation, Autonomous Racing, Roboethics, and LEGO League for 12th grade students.

To complete the program of the largest worldwide robotics conference, there will also be several Industry Tech Huddles led by industry experts, Technical Tours to the Singh Center for Nanotechnology, Penn Medicine and the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory, and several Networking Events.

ICRA 2022 is also proud to partner with the RAD Lab and several Philadelphia-based art galleries to offer a central space for art in its program. Building on the previous ICRA robotic art programs, this year’s installment explores aesthetic and creative influences on robot motion through interactive, expressive, and meditative robotic art installations. The exhibition and the associated workshop will provide new perspectives on imagining new technology futures.

“This is a very special conference for the majority of ICRA attendees,” ICRA 2022 General Co-Chair George J. Pappas (University of Pennsylvania, USA) comments. “The reason? 64% of all registrants and 56% of all in-person registrants are attending ICRA for the very first time! Given this influx of fantastic talent to our field, the future of ICRA is as bright as it has ever been.”

Not everyone can attend ICRA in person. That’s why the ICRA Organizing Committee, the IEEE Robotics and Automation Society and OhmniLabs have teamed up to offer access to OhmniLabs’ telepresence robots that will be on site. Three OhmniBots will be in the main exhibition hall (with all the other robots) from opening to closing on Tuesday May 24th, Wednesday May 25th and Thursday May 26th, with time slots aligning with Poster Sessions, networking breaks and Expo Hall hours.

No matter where you are, we hope you enjoy the conference either in person or virtually!

We would like to thank ICRA 2022 Partners, which have also supported us in record numbers this year.

Interview with Axel Krieger and Justin Opfermann: autonomous robotic laparoscopic surgery for intestinal anastomosis https://robohub.org/interview-with-axel-krieger-and-justin-opfermann-autonomous-robotic-laparoscopic-surgery-for-intestinal-anastomosis/ Sun, 06 Mar 2022 10:30:09 +0000 https://robohub.org/?p=203710

Axel Krieger is the Head of the Intelligent Medical Robotic Systems and Equipment (IMERSE) Lab at Johns Hopkins University, where Justin Opfermann is pursuing his PhD degree. Together with H. Saeidi, M. Kam, S. Wei, S. Leonard, M. H. Hsieh and J. U. Kang, they recently published the paper ‘Autonomous robotic laparoscopic surgery for intestinal anastomosis’ in Science Robotics. Below, Axel and Justin tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

Our research is focused on the design and evaluation of medical robots for autonomous soft tissue surgeries. In particular, this paper describes a surgical robot and workflow to perform autonomous anastomosis of the small bowel. The robot’s performance is evaluated in synthetic tissue against expert surgeons, followed by pig studies to demonstrate the preclinical feasibility of the system and approach.

Could you tell us about the implications of your research and why it is an interesting area for study?

Anastomosis is an essential step to the reconstructive phase of surgery and is performed over a million times each year in the United States alone. Surgical outcomes for patients are highly dependent on the surgeon’s skill, as even a single missed stitch can lead to anastomotic leak and infection in the patient. In laparoscopic surgeries these challenges are even more difficult due to space constraints, tissue motion, and deformations. Robotic anastomosis is one way to ensure that surgical tasks that require high precision and repeatability can be performed with more accuracy and precision in every patient independent of surgeon skill. Already there are autonomous surgical robots for hard tissue surgeries such as bone drilling for hip and knee implants. The Smart Tissue Autonomous Robot (STAR) takes the autonomous robotic skill one step further by performing surgical tasks on soft tissues. This enables a robot to work with a human to complete more complicated surgical tasks where preoperative planning is not possible. We hypothesize that this will result in a democratized surgical approach to patient care with more predictable and consistent patient outcomes.

Could you explain your methodology?

Until this paper, autonomous laparoscopic surgery was not possible in soft tissue due to the unpredictable motions of the tissue and limitations on the size of surgical tools. Performing autonomous surgery required the development of novel suturing tools, imaging systems, and robotic controls to visualize a surgical scene, generate an optimized surgical plan, and then execute that surgical plan with the highest precision. Combining all of these features into a single system is challenging. To accomplish these goals we integrated a robotic suturing tool that simplifies wristed suturing motions to the press of a button, developed a three-dimensional endoscopic imaging system based on structured light that was small enough for laparoscopic surgery, and implemented a conditional autonomy control scheme that enables autonomous laparoscopic anastomosis. We evaluated the system against expert surgeons performing end-to-end anastomosis using either laparoscopic or da Vinci tele-operative techniques on synthetic small bowel, across metrics such as consistency of suture spacing and suture bite, stitch hesitancy, and overall surgical time. These experiments were followed by preclinical feasibility tests in porcine small bowel. Limited necropsy was performed after one week to evaluate the quality of the anastomosis and immune response.

What were your main findings?

Comparison studies in synthetic tissues indicated that sutures placed by the STAR system had more consistent spacing and bite depth than those applied by surgeons using either a manual laparoscopic technique or robotic assistance with the da Vinci surgical system. The improved precision afforded by the autonomous approach led to a higher quality anastomosis for the STAR system, which was qualitatively verified by laminar four-dimensional MRI flow fields across the anastomosis. The STAR system completed the anastomosis with a first-stitch success rate of 83%, which was better than surgeons in either group. Following the ex-vivo tests, STAR performed laparoscopic small bowel anastomosis in four pigs. All animals survived the procedure and had an average weight gain over the one-week survival period. STAR’s anastomoses had similar burst strength, lumen area reduction, and healing as manually sewn samples, indicating the feasibility of autonomous soft tissue surgeries.

What further work are you planning in this area?

Our group is researching marker-less strategies to track tissue position and motion, and to plan surgical tasks without the need for fiducial markers on tissues. The ability to three-dimensionally reconstruct the surgical field on a computer and plan surgical tasks without the need for artificial landmarks would simplify autonomous surgical planning and enable collaborative surgery between an autonomous robot and a human. Using machine learning and neural networks, we have demonstrated the robot’s ability to identify tissue edges and track natural landmarks. We are planning to implement fail-safe techniques and hope to perform first-in-human studies in the next few years.


About the interviewees

Axel Krieger (PhD), an Assistant Professor in mechanical engineering, focuses on the development of novel tools, image guidance, and robot-control techniques for medical robotics. He is a member of the Laboratory for Computational Sensing and Robotics. He is also the Head of the Intelligent Medical Robotic Systems and Equipment (IMERSE) Lab at Johns Hopkins University.

Justin Opfermann (MS) is a PhD robotics student in the Department of Mechanical Engineering at Johns Hopkins University. Justin has ten years of experience in the design of autonomous robots and tools for laparoscopic surgery, and is also affiliated with the Laboratory for Computational Sensing and Robotics. Before joining JHU, Justin was a Project Manager and Senior Research and Design Engineer at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital.

Tamim Asfour’s Keynote talk – Learning humanoid manipulation from humans https://robohub.org/tamim-asfours-keynote-talk-learning-humanoid-manipulation-from-humans/ Tue, 25 Jan 2022 12:56:43 +0000 https://robohub.org/?p=203369

Through manipulation, a robotic system can physically change the state of the world. This is intertwined with intelligence, the ability of such a system to detect and adapt to change. In his talk, Tamim Asfour gives an overview of his lab’s developments in manipulation for robotic systems, achieved by learning manipulation task models from human observations, and the challenges and open questions associated with this work.

Bio: Tamim Asfour is a full Professor at the Institute for Anthropomatics and Robotics, where he holds the chair of Humanoid Robotics Systems and is head of the High Performance Humanoid Technologies Lab (H2T). His current research interest is high-performance 24/7 humanoid robotics. Specifically, his research focuses on engineering humanoid robot systems, integrating the mechano-informatics of such systems with the capabilities of predicting, acting and learning from human demonstration and sensorimotor experience. He is the developer and leader of the development team of the ARMAR humanoid robot family.

Science Magazine robot videos 2021 https://robohub.org/science-magazine-robot-videos-2021/ Fri, 14 Jan 2022 10:47:32 +0000 https://robohub.org/?p=202945

Did you manage to watch all the holiday robot videos of 2021? If you did but are still hungry for more, I have prepared this compilation of Science Magazine videos featuring robotics research, released during the past year. Enjoy!

Interview with Huy Ha and Shuran Song: CoRL 2021 best system paper award winners https://robohub.org/interview-with-huy-ha-and-shuran-song-corl-2021-best-system-paper-award-winners/ Sun, 12 Dec 2021 10:59:03 +0000 https://robohub.org/?p=202675

Congratulations to Huy Ha and Shuran Song who have won the CoRL 2021 best system paper award!

Their work, FlingBot: the unreasonable effectiveness of dynamic manipulations for cloth unfolding, was highly praised by the judging committee. “To me, this paper constitutes the most impressive account of both simulated and real-world cloth manipulation to date,” commented one of the reviewers.

Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

In my most recent publication with my advisor, Professor Shuran Song, we studied the task of cloth unfolding. The goal of the task is to manipulate a cloth from a crumpled initial state to an unfolded state, which is equivalent to maximizing the coverage of the cloth on the workspace.

Could you tell us about the implications of your research and why it is an interesting area for study?

Historically, most robotic manipulation research topics, such as grasp planning, are concerned with rigid objects, which have only 6 degrees of freedom since their geometry does not change. This allows one to apply the typical state estimation – task & motion planning pipeline in robotics. In contrast, deformable objects could bend and stretch in arbitrary directions, leading to infinite degrees of freedom. It’s unclear what the state of the cloth should even be. In addition, deformable objects such as clothes could experience severe self occlusion – given a crumpled piece of cloth, it’s difficult to identify whether it’s a shirt, jacket, or pair of pants. Therefore, cloth unfolding is a typical first step of cloth manipulation pipelines, since it reveals key features of the cloth for downstream perception and manipulation.

Despite the abundance of sophisticated methods for cloth unfolding over the years, they typically only address the easy case (where the cloth already starts off mostly unfolded) or take upwards of a hundred steps for challenging cases. These prior works all use single-arm quasi-static actions, such as pick and place, which are slow and limited by the physical reach range of the system.

Could you explain your methodology?

In our daily lives, humans typically use both hands to manipulate cloths, and with as few as one or two high-velocity flings, we can unfold an initially crumpled cloth. Based on this observation, our key idea is simple: use dual-arm dynamic actions for cloth unfolding.

FlingBot is a self-supervised framework for cloth unfolding which uses a pick, stretch, and fling primitive for a dual-arm setup from visual observations. There are three key components to our approach. First is the decision to use a high velocity dynamic action. By relying on cloths’ mass combined with a high-velocity throw to do most of its work, a dynamic flinging policy can unfold cloths much more efficiently than a quasi-static policy. Second is a dual-arm grasp parameterization which makes satisfying collision safety constraints easy. By treating a dual-arm grasp not as two points but as a line with a rotation and length, we can directly constrain the rotation and length of the line to ensure arms do not cross over each other and do not try to grasp too close to each other. Third is our choice of using Spatial Action Maps, which learns translational, rotational, and scale equivariant value maps, and allows for sample efficient learning.
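To make the grasp parameterization concrete, here is a minimal, hypothetical sketch (not the authors' code): a grasp line defined by a center, a rotation and a length is clamped into a collision-safe range and then converted into the two per-arm pick points. All limits and values below are made up for illustration.

from dataclasses import dataclass
import math

@dataclass
class LineGrasp:
    cx: float      # line center in the workspace (meters)
    cy: float
    theta: float   # rotation of the line (radians)
    length: float  # distance between the two grippers (meters)

# Hypothetical safety limits: keep the grippers apart and the arms uncrossed.
MIN_LENGTH, MAX_LENGTH = 0.10, 0.70
MAX_THETA = math.pi / 2

def clamp(grasp: LineGrasp) -> LineGrasp:
    """Project a candidate grasp into the collision-safe parameter box."""
    theta = max(-MAX_THETA, min(MAX_THETA, grasp.theta))
    length = max(MIN_LENGTH, min(MAX_LENGTH, grasp.length))
    return LineGrasp(grasp.cx, grasp.cy, theta, length)

def grasp_points(grasp: LineGrasp):
    """Convert the line parameterization into the two per-arm pick points."""
    dx = 0.5 * grasp.length * math.cos(grasp.theta)
    dy = 0.5 * grasp.length * math.sin(grasp.theta)
    return (grasp.cx - dx, grasp.cy - dy), (grasp.cx + dx, grasp.cy + dy)

# A candidate that is too short and over-rotated is pulled back into range
# before the two pick points are computed.
left, right = grasp_points(clamp(LineGrasp(0.0, 0.30, theta=2.0, length=0.05)))
print(left, right)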

What were your main findings?

We found that dynamic actions have three desirable properties over quasi-static actions for the task of cloth unfolding. First, they are efficient – FlingBot achieves over 80% coverage within 3 actions on novel cloths. Second, they are generalizable – trained on only square cloths, FlingBot also generalizes to T-shirts. Third, they expand the system’s effective reach range – even when FlingBot can’t fully lift or stretch a cloth larger than the system’s physical reach range, it’s able to use high velocity flings to unfold the cloth.

After training and evaluating our model in simulation, we deployed and finetuned our model on a real world dual-arm system, which achieves above 80% coverage for all cloth categories. Meanwhile, the quasi-static pick & place baseline was only able to achieve around 40% coverage.

What further work are you planning in this area?

Although we motivated cloth unfolding as a precursor for downstream modules such as cloth state estimation, unfolding could also benefit from state estimation. For instance, if the system is confident it has identified the shoulders of the shirt in its state estimation, the unfolding policy could directly grasp the shoulders and unfold the shirt in one step. Based on this observation, we are currently working on a cloth unfolding and state estimation approach which can learn in a self-supervised manner in the real world.


About the authors

Huy Ha is a Ph.D. student in Computer Science at Columbia University. He is advised by Professor Shuran Song and is a member of the Columbia Artificial Intelligence and Robotics (CAIR) lab.

Shuran Song is an assistant professor in the computer science department at Columbia University, where she directs the Columbia Artificial Intelligence and Robotics (CAIR) Lab. Her research focuses on computer vision and robotics. She’s interested in developing algorithms that enable intelligent systems to learn from their interactions with the physical world, and autonomously acquire the perception and manipulation skills necessary to execute complex tasks and assist people.


Find out more

  • Read the paper on arXiv.
  • The videos of the real-world experiments and code are available here, as is a video of the authors’ presentation at CoRL.
  • Read more about the winning and shortlisted papers for the CoRL awards here.
Top tweets from the Conference on Robot Learning #CoRL2021 https://robohub.org/top-tweets-from-the-conference-on-robot-learning-corl-2021/ Fri, 19 Nov 2021 14:01:59 +0000 https://robohub.org/?p=202466

The Conference on Robot Learning (CoRL) is an annual international conference specialised in the intersection of robotics and machine learning. The fifth edition took place last week in London and virtually around the globe. Apart from the novelty of being a hybrid conference, this year the focus was put on openness. OpenReview was used for the peer review process, meaning that the reviewers’ comments and replies from the authors are public, for anyone to see. The research community suggests that open review could encourage mutual trust, respect, and openness to criticism, enable constructive and efficient quality assurance, increase transparency and accountability, facilitate wider and more inclusive discussion, give reviewers recognition and make reviews citable [1]. You can access all CoRL 2021 papers and their corresponding reviews here. In addition, you may want to listen to all presentations, available on the conference YouTube channel.

In this post we bring you a glimpse of the conference through the most popular tweets written last week: cool robot demos, short and sweet explanations of papers, and award finalists to look out for ahead of next year’s edition in New Zealand. Enjoy!

Robots, robots, robots!

Papers and presentations

Awards

References

We are delighted to announce the launch of Scicomm – a joint science communication project from Robohub and AIhub https://robohub.org/we-are-delighted-to-announce-the-launch-of-scicomm-a-joint-science-communication-project-from-robohub-and-aihub/ Tue, 02 Nov 2021 10:49:21 +0000 https://robohub.org/?p=202348

Scicomm.io is a science communication project which aims to empower people to share stories about their robotics and AI work. The project is a joint effort from Robohub and AIhub, both of which are educational platforms dedicated to connecting the robotics and AI communities to the rest of the world.

This project focuses on training the next generation of communicators in robotics and AI to build a strong connection with the outside world, by providing effective communication tools.

People working in the field are developing an enormous array of systems and technologies. However, due to a relative lack of high quality, impartial information in the mainstream media, the general public receives a lot of hyped news, which ends up causing fear and/or unrealistic expectations surrounding these technologies.

Scicomm.io has been created to facilitate the connection between the robotics and AI world and the rest of the world by teaching how to establish truthful, honest and hype-free communication, one that brings benefit to both sides.

Scicomm bytes

With our series of bite-sized videos you can quickly learn about science communication for robotics and AI. Find out why science communication is important, how to talk to the media, and about some of the different ways in which you can communicate your work. We have also produced guides with tips for turning your research into a blog post and for avoiding hype when promoting your research.

Training

Training the next generation of science communicators is an important mission for scicomm.io (and indeed Robohub and AIhub). As part of scicomm.io, we run training courses to empower researchers to communicate about their work. When done well, stories about AI and robotics can help increase the visibility and impact of the work, lead to new connections, and even raise funds. However, most researchers don’t engage in science communication due to a lack of the skills, time, and reach that would make the effort worthwhile.

With our workshops we aim to overcome these barriers and make communicating robotics and AI ‘easy’. This is done through short training sessions with experts, and hands-on practical exercises to help students begin their science communication journey with confidence.

A virtual scicomm workshop in action.

During the workshops, participants will hear why science communication matters, learn the basic techniques of science communication, build a story around their own research, and find out how to connect with journalists and other communicators. We’ll also discuss different science communication media, how to use social media, how to prepare blog posts, videos and press releases, how to avoid hype, and how to communicate work to a general audience.

For more information about our workshops, contact the team by email.

Find out more about the scicomm.io project here.

IEEE 17th International Conference on Automation Science and Engineering paper awards (with videos) https://robohub.org/ieee-17th-international-conference-on-automation-science-and-engineering-paper-awards-with-videos/ Wed, 27 Oct 2021 08:30:32 +0000 https://robohub.org/?p=202293

The IEEE International Conference on Automation Science and Engineering (CASE) is the flagship automation conference of the IEEE Robotics and Automation Society and constitutes the primary forum for cross-industry and multidisciplinary research in automation. Its goal is to provide broad coverage and dissemination of foundational research in automation among researchers, academics, and practitioners. Here we bring you the online presentations by the finalists of the four awards given at the conference. Congratulations to all the finalists and winners!

Best student paper award

Winner

  • Designing a User-Centred and Data-Driven Controller for Pushrim-Activated Power-Assisted Wheels: A Case Study
    Mahsa Khalili, H.F. Machiel Van der Loos and Jaimie Borisoff

Finalists

  • Including Sparse Production Knowledge into Variational Autoencoders to Increase Anomaly Detection Reliability
    Tom Hammerbacher, Markus Lange-Hegermann, Gorden Platz

  • Synthesis and Implementation of Distributed Supervisory Controllers with Communication Delays
    Lars Moormann, Reinier Hendrik Jacob Schouten, Joanna Maria Van de Mortel-Fronczak, Wan Fokkink, Jacobus E. Rooda

  • Optimal Planning of Internet Data Centers Decarbonized by Hydrogen-Water-Based Energy Systems
    Jinhui Liu, Zhanbo Xu, Jiang Wu, Kun Liu, Xunhang Sun, Xiaohong Guan

  • Deep Reinforcement Learning for Prefab Assembly Planning in Robot-Based Prefabricated Construction
    Zhu Aiyu, Gangyan Xu, Pieter Pauwels, Bauke de Vries, Meng Fang

  • Singularity-Aware Motion Planning for Multi-Axis Additive Manufacturing
    Charlie C.L. Wang, Tianyu Zhang, Xiangjia Chen, Guoxin Fang, Yingjun Tian

Best conference paper award

Winner

  • Extended Fabrication-Aware Convolution Learning Framework for Predicting 3D Shape Deformation in Additive Manufacturing
    Yuanxiang Wang, Cesar Ruiz, Qiang Huang

Finalists

  • Probabilistic Movement Primitive Control Via Control Barrier Functions
    Mohammadreza Davoodi, Asif Iqbal, Joe Cloud, William Beksi, Nicholas Gans

  • Efficient Optimization-Based Falsification of Cyber-Physical Systems with Multiple Conjunctive Requirements
    Logan Mathesen, Giulia Pedrielli, Georgios Fainekos

Best application paper award

Winner

  • A Seamless Workflow for Design and Fabrication of Multimaterial Pneumatic Soft Actuators
    Lawrence Smith, Travis Hainsworth, Zachary Jordan, Xavier Bell, Robert MacCurdy

Finalists

  • Dynamic Multi-Goal Motion Planning with Range Constraints for Autonomous Underwater Vehicles Following Surface Vehicles
    James McMahon, Erion Plaku

  • OpenUAV Cloud Testbed: a Collaborative Design Studio for Field Robotics
    Harish Anand, Stephen A. Rees, Zhiang Chen, Ashwin Jose Poruthukaran, Sarah Bearman, Lakshmi Gana Prasad Antervedi, Jnaneshwar Das

Best healthcare automation paper award

Winner

  • Hospital Beds Planning and Admission Control Policies for COVID-19 Pandemic: A Hybrid Computer Simulation Approach
    Yiruo Lu, Yongpei Guan, Xiang Zhong, Jennifer Fishe, Thanh Hogan

Finalists

  • Rollout-Based Gantry Call-Back Control for Proton Therapy Systems
    Feifan Wang, Yu-Li Huang, Feng Ju

  • Progress in Development of an Automated Mosquito Salivary Gland Extractor: A Step Forward to Malaria Vaccine Mass Production
    Wanze Li, Zhuoqun Zhang, Zhuohong He, Parth Vora, Alan Lai, Balazs Vagvolgyi, Simon Leonard, Anna Goodridge, Ioan Iulian Iordachita, Stephen L. Hoffman, Sumana Chakravarty, B Kim Lee Sim, Russell H. Taylor
Robotics Today latest talks – Raia Hadsell (DeepMind), Koushil Sreenath (UC Berkeley) and Antonio Bicchi (Istituto Italiano di Tecnologia) https://robohub.org/robotics-today-latest-talks-raia-hadsell-deepmind-koushil-sreenath-uc-berkeley-and-antonio-bicchi-istituto-italiano-di-tecnologia/ Thu, 21 Oct 2021 10:55:29 +0000 https://robohub.org/?p=202193

Robotics Today has held three more online talks since we published the one from Amanda Prorok (Learning to Communicate in Multi-Agent Systems). In this post we bring you the latest talks that Robotics Today (currently on hiatus) uploaded to their YouTube channel: Raia Hadsell from DeepMind talking about ‘Scalable Robot Learning in Rich Environments’, Koushil Sreenath from UC Berkeley talking about ‘Safety-Critical Control for Dynamic Robots’, and Antonio Bicchi from the Istituto Italiano di Tecnologia talking about ‘Planning and Learning Interaction with Variable Impedance’.

Raia Hadsell (DeepMind) – Scalable Robot Learning in Rich Environments

Abstract: As modern machine learning methods push towards breakthroughs in controlling physical systems, games and simple physical simulations are often used as the main benchmark domains. As the field matures, it is important to develop more sophisticated learning systems with the aim of solving more complex real-world tasks, but problems like catastrophic forgetting and data efficiency remain critical, particularly for robotic domains. This talk will cover some of the challenges that exist for learning from interactions in more complex, constrained, and real-world settings, and some promising new approaches that have emerged.

Bio: Raia Hadsell is the Director of Robotics at DeepMind. Dr. Hadsell joined DeepMind in 2014 to pursue new solutions for artificial general intelligence. Her research focuses on the challenge of continual learning for AI agents and robots, and she has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting. Dr. Hadsell is on the executive boards of ICLR (International Conference on Learning Representations), WiML (Women in Machine Learning), and CoRL (Conference on Robot Learning). She is a fellow of the European Lab on Learning Systems (ELLIS), a founding organizer of NAISys (Neuroscience for AI Systems), and serves as a CIFAR advisor.

Koushil Sreenath (UC Berkeley) – Safety-Critical Control for Dynamic Robots: A Model-based and Data-driven Approach

Abstract: Model-based controllers can be designed to provide guarantees on stability and safety for dynamical systems. In this talk, I will show how we can address the challenges of stability through control Lyapunov functions (CLFs), input and state constraints through CLF-based quadratic programs, and safety-critical constraints through control barrier functions (CBFs). However, the performance of model-based controllers is dependent on having a precise model of the system. Model uncertainty could lead not only to poor performance but could also destabilize the system as well as violate safety constraints. I will present recent results on using model-based control along with data-driven methods to address stability and safety for systems with uncertain dynamics. In particular, I will show how reinforcement learning as well as Gaussian process regression can be used along with CLF and CBF-based control to address the adverse effects of model uncertainty.
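As a toy illustration of the safety-filter idea mentioned in the abstract, and not code from the talk: for the scalar system x' = u with safe set h(x) = x - x_min >= 0, the CBF quadratic program "minimize (u - u_des)^2 subject to u >= -alpha h(x)" has a closed-form solution. The sketch below uses made-up parameter values.

def cbf_safety_filter(x, u_des, x_min=0.0, alpha=2.0):
    """Minimally modify a desired input so the state never crosses x_min.
    For x' = u the CBF condition dh/dt >= -alpha*h reduces to u >= -alpha*h,
    so the one-dimensional QP solution is a simple clamp."""
    h = x - x_min                  # barrier value: positive means safe
    return max(u_des, -alpha * h)  # closed-form solution of the 1-D QP

# A nominal command that pushes toward the boundary is overridden near it.
x, dt = 1.0, 0.01
for _ in range(300):
    x += dt * cbf_safety_filter(x, u_des=-1.0)
print(round(x, 3))                 # stays above x_min instead of crossing it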

Bio: Koushil Sreenath is an Assistant Professor of Mechanical Engineering at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and an M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor, MI, in 2011. He was a Postdoctoral Scholar at the GRASP Lab at University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel, CNN, ESPN, FOX, and CBS. His work on dynamic aerial manipulation was featured on the IEEE Spectrum, New Scientist, and Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He received the NSF CAREER, Hellman Fellow, Best Paper Award at the Robotics: Science and Systems (RSS), and the Google Faculty Research Award in Robotics.

Antonio Bicchi (Istituto Italiano di Tecnologia) – Planning and Learning Interaction with Variable Impedance

Abstract: In animals and in humans, the mechanical impedance of their limbs changes not only with the task, but also during different phases of the execution of a task. Part of this variability is intentionally controlled, by either co-activating muscles or by changing the arm posture, or both. In robots, impedance can be varied by varying controller gains, stiffness of hardware parts, and arm postures. The choice of impedance profiles to be applied can be planned off-line, or varied in real time based on feedback from the environmental interaction. Planning and control of variable impedance can use insight from human observations, from mathematical optimization methods, or from learning. In this talk I will review the basics of human and robot variable impedance, and discuss how this impacts applications ranging from industrial and service robotics to prosthetics and rehabilitation.
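As a small illustrative sketch of varying impedance by varying controller gains (not material from the talk; all names and numbers are invented), a Cartesian impedance law F = K (x_des - x) + D (xd_des - xd) can simply switch its stiffness and damping matrices between task phases:

import numpy as np

def impedance_wrench(x, xd, x_des, xd_des, K, D):
    """Virtual spring-damper force: F = K (x_des - x) + D (xd_des - xd)."""
    return K @ (x_des - x) + D @ (xd_des - xd)

# Hypothetical phase-dependent gains: stiff while moving in free space,
# compliant once the end effector is in contact with the environment.
GAINS = {
    "free_space": (np.diag([800.0, 800.0, 800.0]), np.diag([60.0, 60.0, 60.0])),
    "in_contact": (np.diag([150.0, 150.0, 150.0]), np.diag([25.0, 25.0, 25.0])),
}

x = np.array([0.40, 0.00, 0.30])       # current end-effector position (m)
xd = np.zeros(3)                       # current velocity
x_des = np.array([0.40, 0.00, 0.25])   # desired position
xd_des = np.zeros(3)

for phase, (K, D) in GAINS.items():
    F = impedance_wrench(x, xd, x_des, xd_des, K, D)
    print(phase, F)   # the same position error yields a gentler force in contact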

Bio: Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab of Cambridge before becoming Professor in Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genoa. Since 2013 he is Adjunct Professor at Arizona State University, Tempe, AZ. He has coordinated many international projects, including four grants from the European Research Council (ERC). He served the research community in several ways, including by launching the WorldHaptics conference and the IEEE Robotics and Automation Letters. He is currently the President of the Italian Institute of Robotics and Intelligent Machines. He has authored over 500 scientific papers cited more than 25,000 times. He supervised over 60 doctoral students and more than 20 postdocs, most of whom are now professors in universities and international research centers, or have launched their own spin-off companies. His students have received prestigious awards, including three first prizes and two nominations for the best theses in Europe on robotics and haptics. He is a Fellow of IEEE since 2005. In 2018 he received the prestigious IEEE Saridis Leadership Award.

Robohub gets a fresh look https://robohub.org/robohub-gets-a-fresh-look/ Sun, 17 Oct 2021 12:25:11 +0000 https://robohub.org/?p=202124

If you visited Robohub this week, you may have spotted a big change: how this blog looks now! On Tuesday (coinciding, by chance, with Ada Lovelace Day and our ‘50 women in robotics that you need to know about’), Robohub’s look got a major modernisation by our technical director Ioannis K. Erripis and his team.

There are many improvements and new features, but the biggest update, apart from the code, is the design, which is cleaner and simpler looking (especially in the single post view).

Ioannis K. Erripis

This fresher look has recently been tested on our sister project, AIhub.org. As Ioannis says, it offers a cleaner, simpler and more readable way to access the content from the robotics community that we post on this blog. You will also notice that the main page now displays a mosaic of content which is more accessible than before, making it easier to scroll through our site.

Your feedback matters to us: if you find any issue with the new design or have any comment/idea for improvement, please let us know! You can reach us at editors[at]robohub.org.

And there is even more exciting news coming up together with this fresh look! We will soon release a brand new project that we have been developing in the background during this year. Stay tuned!

Online events to look out for on Ada Lovelace Day 2021 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2021/ Sun, 10 Oct 2021 11:54:59 +0000 https://robohub.org/?p=201873

On the 12th of October, the world will celebrate Ada Lovelace Day to honor the achievements of women in science, technology, engineering and maths (STEM). After a successful worldwide online celebration of Ada Lovelace Day last year, this year’s celebration returns with a stronger commitment to online inclusion. Finding Ada (the main network supporting Ada Lovelace Day) will host three free webinars that you can enjoy in the comfort of your own home. There will also be loads of events happening around the world, so you have a wide range of content to celebrate Ada Lovelace Day 2021!

Engineering – Solving Problems for Real People

Engineering is the science of problem solving, and we have some pretty big problems in front of us. So how are engineers tackling the COVID-19 pandemic and climate change? And how do they stay focused on the impact of their engineering solutions on people and communities?

In partnership with STEM Wana Trust, we invite you to join Renée Young, associate mechanical engineer at Beca, Victoria Clark, senior environmental engineer at Beca, Natasha Mudaliar, operations manager at Reliance Reinforcing, and Sujata Roy, system planning engineer at Transpower, for a fascinating conversation about the challenges and opportunities of engineering.

13:00 NZST, 12 Oct: Perfect for people in New Zealand, Australia, and the Americas. (Note for American audiences: This panel will be on Monday for you.)

Register here, and find out about the speakers here.

Fusing Tech & Art in Games

The Technical Artist is a new kind of role in the games industry, but the possibilities for those who create and merge art and technology are endless. So what is tech art? And how are tech artists pushing the boundaries and creating new experiences for players?

Ada Lovelace Day and Ukie’s #RaiseTheGame invite you to join tech artist Kristrun Fridriksdottir, Jodie Azhar, technical art director at Silver Rain Games, Emma Roseburgh from Avalanche Studios, and Laurène Hurlin from Pixel Toys for our tech art webinar.

13:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, Middle East, India, for early birds in the Americas and night owls in AsiaPacific.

Register here, and find out about the speakers here.

The Science of Hypersleep

Hypersleep is a common theme in science fiction, but what does science have to say about putting humans into suspended animation? What can we learn from hibernating animals? What’s the difference between hibernation and sleep? What health impacts would extended hypersleep have?

Ada Lovelace Day and the Arthur C. Clarke Award invite you to join science fiction author Anne Charnock, Prof Gina Poe, an expert on the relationship between sleep and memory, Dr Anusha Shankar, who studies torpor in hummingbirds, and Prof Kelly Drew, who studies hibernation in squirrels, for a discussion of whether hypersleep in humans is possible.

19:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, and the Americas.

Register here, and find out about the speakers here.

Other worldwide events

Apart from the three webinars above, many other organisations will hold their own events to celebrate the day. From a 24-hour global edit-a-thon (The Pankhurst Centre) to a digital theatre play (STEM on Stage) to an online machine learning breakfast (Square Women Engineers + Allies Australia), plus several talks and panel discussions like this one on how you can change the world with the help of physics (Founders4Schools), or this other one on inspiring women and girls in STEAM (Engine Shed), you have plenty of options to choose from.

For a full overview of international events, check out this website.

We also hope that you enjoy reading our annual list of women in robotics that you need to know that will be released on the day. Happy Ada Lovelace Day 2021!

Real Roboticist focus series #6: Dennis Hong (Making People Happy) https://robohub.org/real-roboticist-focus-series-6-dennis-hong-making-people-happy/ Wed, 29 Sep 2021 09:03:02 +0000 https://robohub.org/real-roboticist-focus-series-6-dennis-hong-making-people-happy/

In this final video of our focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist, you’ll meet Dennis Hong speaking about the robots he and his team have created (locomotion and new ways of moving; an autonomous car for the visually impaired; disaster relief robots), Star Wars and cooking. All in all, ingredients from different worlds that Dennis is using to benefit society.

Dennis Hong is a Professor and the Founding Director of RoMeLa (Robotics & Mechanisms Laboratory) of the Mechanical & Aerospace Engineering Department at UCLA. If you’d like to find out more about how Star Wars influenced his professional career in robotics, how his experience taking a cooking assistant robot to MasterChef USA inspired a multi-million research project, and all the robots he is creating, check out his video below!

Synergies between automation and robotics https://robohub.org/sinergies-between-automation-and-robotics/ Sat, 25 Sep 2021 09:37:38 +0000 https://robohub.org/sinergies-between-automation-and-robotics/

In this IEEE ICRA 2021 Plenary Panel aimed at the younger generation of roboticists and automation experts, panelists Seth Hutchinson, Maria Pia Fanti, Peter B. Luh, Pieter Abbeel, Kaneko Harada, Michael Y. Wang, Kevin Lynch, Chinwe Ekenna, Animesh Garg and Frank Park, under the moderation of Ken Goldberg, discussed how to close the gap between the two disciplines, which have many topics in common. The panel was organised by the Ad Hoc Committee to Explore Synergies in Automation and Robotics (CESAR).

As the IEEE Robotics and Automation Society (IEEE RAS) explains, “robotics and automation have always been siblings. They are similar in many ways and have substantial overlap in topics and research communities, but there are also differences – many RAS members view them as disjoint and consider themselves purely in robotics or purely in automation. This committee’s goal is to reconsider these perceptions and think about ways we can bring these communities closer.”

‘Black in Robotics’ special series https://robohub.org/iros2020-black-in-robotics-special-series/ Mon, 13 Sep 2021 14:37:29 +0000 https://robohub.org/iros2020-black-in-robotics-special-series/

Apart from the IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist that we have been featuring in recent weeks, another series of three videos was produced together with Black in Robotics, with the support of the Toyota Research Institute. In this series, black roboticists give personal examples of why diversity matters in robotics, showcase their research and explain what made them build a career in robotics.

Here’s a list of all the speakers and organisations who took part in the videos:

  • Ariel Anders – Roboticist at Robust.AI
  • Allison Okamura – Professor of Mechanical Engineering at Stanford University
  • Alivia Blount – Data Scientist
  • Anthony Jules – Co-founder and COO at Robust.AI
  • Andra Keay – Robotics Industry Futurist, Managing Director of Silicon Valley Robotics and Core Team Member of Robohub
  • Carlotta A. Berry – Professor of Electrical and Computer Engineering at Rose-Hulman Institute of Technology
  • Donna Auguste – Entrepreneur and Data Scientist
  • Clinton Enwerem – Robotics Trainee from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Quentin Sanders – Postdoctoral Research Fellow at North Carolina State University
  • George Okoroafor – Robotics Research Engineer from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Tatiana Jean-Louis – Amazon & Robotics Geek
  • Patrick Musau – Graduate Research Assistant at Vanderbilt University
  • Melanie Moses – Professor of Computer Science at the University of New Mexico
Plenary and Keynote talks focus series #6: Jonathan Hurst & Andrea Thomaz https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-6-jonathan-hurst-andrea-thomaz/ Wed, 08 Sep 2021 10:56:15 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-6-jonathan-hurst-andrea-thomaz/

This week you’ll be able to listen to the talks of Jonathan Hurst (Professor of Robotics at Oregon State University, and Chief Technology Officer at Agility Robotics) and Andrea Thomaz (Associate Professor of Robotics at the University of Texas at Austin, and CEO of Diligent Robotics) as part of this series that brings you the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). Jonathan’s talk is on the topic of humanoids, while Andrea’s is about human-robot interaction.

Prof. Jonathan Hurst – Design Contact Dynamics in Advance

Bio: Jonathan W. Hurst is Chief Technology Officer and co-founder of Agility Robotics, and Professor and co-founder of the Oregon State University Robotics Institute. He holds a B.S. in mechanical engineering and an M.S. and Ph.D. in robotics, all from Carnegie Mellon University. His university research focuses on understanding the fundamental science and engineering best practices for robotic legged locomotion and physical interaction. Agility Robotics is bringing this new robotic mobility to market, solving problems for customers, working towards a day when robots can go where people go, generate greater productivity across the economy, and improve quality of life for all.

Prof. Andrea Thomaz – Human + Robot Teams: From Theory to Practice

Bio: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics and a renowned social robotics expert. Her accolades include being recognized by the National Academy of Science as a Kavli Fellow, the US President’s Council of Advisors on Science and Tech, MIT Technology Review on its Next Generation of 35 Innovators Under 35 list, Popular Science on its Brilliant 10 list, TEDx as a featured keynote speaker on social robotics and Texas Monthly on its Most Powerful Texans of 2018 list.

Andrea’s robots have been featured in the New York Times and on the covers of MIT Technology Review and Popular Science. Her passion for social robotics began during her work at the MIT Media Lab, where she focused on using AI to develop machines that address everyday human needs. Andrea co-founded Diligent Robotics to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab).

Andrea is published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning in order to build social robots and other machines that are intuitive for everyday people to teach.

Andrea received an NSF CAREER award in 2010 and an Office of Naval Research Young Investigator Award in 2008. In addition, Diligent Robotics’ robot Moxi has been featured on NBC Nightly News and, most recently, in National Geographic’s “The robot revolution has arrived”.

Real Roboticist focus series #5: Michelle Johnson (Robots That Matter) https://robohub.org/iros2020-real-roboticist-focus-series-5-michelle-johnson-robots-that-matter/ Sun, 05 Sep 2021 08:03:15 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-5-michelle-johnson-robots-that-matter/

We’re reaching the end of this focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. This week you’ll meet Michelle Johnson, Associate Professor of Physical Medicine and Rehabilitation at the University of Pennsylvania.

Michelle is also the Director of the Rehabilitation Robotics Lab at the University of Pennsylvania, whose aim is to use rehabilitation robotics and neuroscience to investigate brain plasticity and motor function after non-traumatic brain injuries, for example in stroke survivors or persons diagnosed with cerebral palsy. If you’d like to know more about her professional journey, her work with affordable robots for low/middle income countries and her next frontier in robotics, among many more things, check out her video below!

Plenary and Keynote talks focus series #5: Nikolaus Correll & Cynthia Breazeal https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-5-nikolaus-correll-cynthia-breazeal/ Wed, 01 Sep 2021 10:40:45 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-5-nikolaus-correll-cynthia-breazeal/

As part of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems), this week we bring you Nikolaus Correll (Associate Professor at the University of Colorado at Boulder) and Cynthia Breazeal (Professor of Media Arts and Sciences at MIT). Nikolaus’ talk is on the topic of robot manipulation, while Cynthia’s talk is about the topic of social robots.

Prof. Nikolaus Correll – Robots Getting a Grip on General Manipulation

Bio: Nikolaus Correll is an Associate Professor at the University of Colorado at Boulder. He obtained his MS in Electrical Engineering from ETH Zürich and his PhD in Computer Science from EPF Lausanne in 2007. From 2007-2009 he was a post-doc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). Nikolaus is the recipient of a NSF CAREER award, a NASA Early Career Faculty Fellowship and a Provost Faculty achievement award. In 2016, he founded Robotic Materials Inc. to commercialize robotic manipulation technology.

Prof. Cynthia Breazeal – Living with Social Robots: from Research to Commercialization and Back

Abstract: Social robots are designed to interact with people in an interpersonal way, engaging and supporting collaborative social and emotive behavior for beneficial outcomes. We develop adaptive algorithmic capabilities and deploy multitudes of cloud-connected robots in schools, homes, and other living facilities to support long-term interpersonal engagement and personalization of specific interventions. We examine the impact of the robot’s social embodiment, emotive and relational attributes, and personalization capabilities on sustaining people’s engagement, improving learning, impacting behavior, and shaping attitudes to help people achieve long-term goals. I will also highlight challenges and opportunities in commercializing social robot technologies for impact at scale. In a time where citizens are beginning to live with intelligent machines on a daily basis, we have the opportunity to explore, develop, study, and assess humanistic design principles to support and promote human flourishing at all ages and stages.

Bio: Cynthia Breazeal is a Professor at the MIT Media Lab, where she founded and directs the Personal Robots Group. She is also Associate Director of the Media Lab in charge of new strategic initiatives, and she is spearheading MIT’s K-12 education initiative on AI in collaboration with the Media Lab, Open Learning and the Schwarzman College of Computing. She is recognized as a pioneer in the field of social robotics and human-robot interaction and is an AAAI Fellow. She is a recipient of awards from the National Academy of Engineering as well as the National Design Awards. She has received Technology Review’s TR100/35 Award and the George R. Stibitz Computer & Communications Pioneer Award. She has also been recognized as an award-winning entrepreneur, designer and innovator by CES, Fast Company, Entrepreneur Magazine, Forbes, and Core 77, to name a few. Her robots have been recognized among TIME magazine’s Best Inventions in 2008 and in 2017, when her award-winning Jibo robot was featured on the cover. She received her doctorate from MIT in Electrical Engineering and Computer Science in 2000.

Real Roboticist focus series #4: Peter Corke (Learning) https://robohub.org/iros2020-real-roboticist-focus-series-4-peter-corke-learning/ Sun, 29 Aug 2021 16:02:42 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-4-peter-corke-learning/

In this fourth release of our series dedicated to IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist, we bring you Peter Corke. He is a Distinguished Professor of Robotic Vision at Queensland University of Technology, Director of the QUT Centre for Robotics, and Director of the ARC Centre of Excellence for Robotic Vision.

If you’ve ever studied a robotics or computer vision course, you might have read a classic book: Peter Corke’s Robotics, Vision and Control. Peter has also released several open-source robotics resources and free courses, all available on his website. If you’d like to hear more about his career in robotics and education, his main challenges and what he learnt from them, and his advice for current robotics students, check out his video below. Have fun!

2021 IEEE RAS Publications best paper awards https://robohub.org/2021-ieee-ras-publications-best-paper-awards/ Fri, 27 Aug 2021 14:21:11 +0000 https://robohub.org/2021-ieee-ras-publications-best-paper-awards/

The IEEE Robotics and Automation Society has recently released the list of winners of their best paper awards. Below you can see the list and access the publications. Congratulations to all winners!

IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award

IEEE Transactions on Automation Science and Engineering Best Paper Award

IEEE Transactions on Automation Science and Engineering Best New Application Paper Award

IEEE Robotics and Automation Letters Best Paper Award

IEEE Robotics and Automation Magazine Best Paper Award

IEEE Transactions on Haptics Best Paper Award

IEEE Transactions on Haptics Best Application Paper Award

Plenary and Keynote talks focus series #4: Steve LaValle & Sarah Bergbreiter https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-4-steve-lavalle-sarah-bergbreiter/ Wed, 25 Aug 2021 17:19:03 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-4-steve-lavalle-sarah-bergbreiter/

In this new release of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) you’ll meet Steve LaValle (University of Oulu) talking about the area of perception, action and control, and Sarah Bergbreiter (Carnegie Mellon University) talking about bio-inspired microrobotics.

Prof. Steve LaValle – Rapidly exploring Random Topics

Bio: Steve LaValle is Professor of Computer Science and Engineering, in Particular Robotics and Virtual Reality, at the University of Oulu. From 2001 to 2018, he was a professor in the Department of Computer Science at the University of Illinois. He has also held positions at Stanford University and Iowa State University. His research interests include robotics, virtual and augmented reality, sensing, planning algorithms, computational geometry, and control theory. In research, he is mostly known for his introduction of the Rapidly exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. In industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017 he was Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, Ltd. He has authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality.

Prof. Sarah Bergbreiter – Microsystems-inspired Robotics

Bio: Sarah Bergbreiter joined the Department of Mechanical Engineering at Carnegie Mellon University in the fall of 2018 after spending ten years at the University of Maryland, College Park. She received her B.S.E. degree in electrical engineering from Princeton University in 1999. After a short introduction to the challenges of sensor networks at a small startup company, she received the M.S. and Ph.D. degrees from the University of California, Berkeley in 2004 and 2007 with a focus on microrobotics. Prof. Bergbreiter received the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) Award in 2013 for her research bridging microsystems and robotics. She has received several Best Paper awards at conferences like ICRA, IROS, and Hilton Head Workshop. She currently serves as vice-chair of DARPA’s Microsystems Exploratory Council and as an associate editor for IEEE Transactions on Robotics.

Real Roboticist focus series #3: Radhika Nagpal (Enjoying the Ride) https://robohub.org/iros2020-real-roboticist-focus-series-3-radhika-nagpal-enjoying-the-ride/ Sun, 22 Aug 2021 17:37:49 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-3-radhika-nagpal-enjoying-the-ride/

Today we continue with our series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. This time you’ll meet Radhika Nagpal, who is the Fred Kavli Professor of Computer Science at the Wyss Institute for Biologically Inspired Engineering at Harvard University.

Did you know Radhika directed the research that led to the creation of the Kilobots, the first open-source, low-cost robots specifically designed for large-scale experiments with hundreds or thousands of robots? You can watch this example or this other one if you’re curious. If you’d like to know more about Radhika and her achievements, challenges and what she would tell her younger self, below is the whole interview. Enjoy!

Plenary and Keynote talks focus series #3: Anya Petrovskaya & I-Ming Chen https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-3-anya-petrovskaya-i-ming-chen/ Wed, 18 Aug 2021 16:31:08 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-3-anya-petrovskaya-i-ming-chen/

In this new release of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) you’ll meet Dr. Anya Petrovskaya (Stanford University), talking about driverless vehicles and field robots, and Prof. I-Ming Chen (Nanyang Technological University), whose talk is about food handling robots.

Dr. Anya Petrovskaya – A Journey through Autonomy

Topical Area: Driverless Vehicles and Field Robots

Bio: Dr. Anna Petrovskaya is a scientist and entrepreneur with decades of experience in the field of AI, autonomy, and 3D computer vision. Most recently, Anna built a 3D mapping startup that was acquired by Mobileye/Intel, where she became Head of LiDAR AI. She completed her Doctorate degree in Computer Science at Stanford University in 2011, where she focused on Artificial Intelligence and Robotics. In 2012, her thesis was named among the winners of the IEEE Intelligent Transportation Systems Society Best PhD Thesis Award. Anna was part of the core team that built the Stanford autonomous car Junior, which was a precursor to the Waymo/Google autonomous car. She has served as an Associate Editor for the International Conference on Robotics and Automation (ICRA) since 2011. Based on her expertise, Anna has been invited to co-author chapters for the Handbook of Intelligent Vehicles and the 2nd edition of the Handbook of Robotics.

Prof. I-Ming Chen – Automation of Food Handling: From Item-Picking to Food-Picking

Topical Area: Food Handling Robotics

Bio: I-Ming Chen received the B.S. degree from National Taiwan University in 1986, and M.S. and Ph.D. degrees from the California Institute of Technology, Pasadena, CA in 1989 and 1994 respectively. He is currently Professor in the School of Mechanical and Aerospace Engineering of Nanyang Technological University (NTU) in Singapore, and Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics. He was Director of the Robotics Research Centre at NTU from 2013 to 2017, and is also a member of the Robotics Task Force 2014 under the National Research Foundation, which is responsible for Singapore’s strategic R&D plan in future robotics. Professor Chen is a Fellow of the Singapore Academy of Engineering, Fellow of IEEE and Fellow of ASME, and was General Chairman of the 2017 IEEE International Conference on Robotics and Automation (ICRA 2017) in Singapore. His research interests are in logistics and construction robots, wearable devices, human-robot interaction and industrial automation. He is also CEO of Transforma Robotics Pte Ltd, developing robots for the construction industry, and CTO of Hand Plus Robotics Pte Ltd, developing robotics and AI solutions for the logistics and manufacturing industries.

Real Roboticist focus series #2: Ruzena Bajcsy (Foundations) https://robohub.org/iros2020-real-roboticist-focus-series-2-ruzena-bajczy-foundations/ Sun, 08 Aug 2021 14:55:08 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-2-ruzena-bajczy-foundations/

Last Sunday we started another series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. In this episode you’ll meet Ruzena Bajcsy, Professor Emerita of Electrical Engineering and Computer Science at the University of California, Berkeley. She is also the founding Director of CITRIS (the Center for Information Technology Research in the Interest of Society).

In her talk, she explains her path from being an electrical engineer to becoming a researcher with Emeritus honours and over 50 years of experience in robotics, artificial intelligence and the foundations of how humans interact with our environment. Are you curious about the tips she has to share and her own prediction of the future of robotics? Don’t miss it!

Plenary and Keynote talks focus series #2: Frank Dellaert & Ashish Deshpande https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-2-frank-dellaert-ashish-deshpande/ Wed, 04 Aug 2021 09:40:16 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-2-frank-dellaert-ashish-deshpande/

Last Wednesday we started this series of posts showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). This is a great opportunity to stay up to date with the latest robotics & AI research from top roboticists in the world. This week we’re bringing you Prof. Frank Dellaert (Georgia Institute of Technology; Google AI) and Prof. Ashish Deshpande (The University of Texas).

Prof. Frank Dellaert – Perception in Aerial, Marine & Space Robotics: a Biased Outlook

Bio: Frank Dellaert is a Professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at Google AI. While on leave from Georgia Tech in 2016-2018, he served as Technical Project Lead at Facebook’s Building 8 hardware division. Before that he was also Chief Scientist at Skydio, a startup founded by MIT grads to create intuitive interfaces for micro-aerial vehicles. His research is in the overlap between robotics and computer vision, and he is particularly interested in graphical model techniques to solve large-scale problems in mapping, 3D reconstruction, and increasingly model-predictive control. The GTSAM toolbox embodies many of the ideas his research group has worked on in the past few years and is available at https://gtsam.org.

Prof. Ashish Deshpande – Harmony Exoskeleton: A Journey from Robotics Lab to Stroke

Bio: Ashish D. Deshpande is passionate about helping stroke patients recover from their disabilities, and he believes robots could serve as important tools in the recovery process. He is a faculty member in Mechanical Engineering at The University of Texas at Austin, where he directs the ReNeu Robotics Lab. His work focuses on the study of the human system and the design of robotic systems toward the goals of accelerating recovery after a neurological injury (e.g. stroke and spinal cord injury), improving the quality of life of those living with disabilities (e.g. amputation), and enhancing the lives and productivity of workers, soldiers and astronauts. Specifically, his group has designed two novel exoskeletons for delivering engaging and subject-specific training for neuro-recovery of upper-body movements after stroke and spinal cord injury. Dr. Deshpande is a co-founder of Harmonic Bionics, whose mission is to improve rehabilitation outcomes for stroke patients.

Real Roboticist focus series #1: Davide Scaramuzza (Drones & Magic) https://robohub.org/iros2020-real-roboticist-focus-series-1-davide-scaramuzza-drones-magic/ Sun, 01 Aug 2021 18:39:58 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-1-davide-scaramuzza-drones-magic/

Are you curious about the people behind the robots? The series ‘Real Roboticist’, produced by the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), shows the people at the forefront of robotics research from a more personal perspective. How did they become roboticists? What made them proud and what challenges did they face? What advice would they give to their younger self? What does a typical day look like? And where do they see the future of robotics? In case you missed it during the On-Demand conference, no worries! IEEE has recently made their original series public, and every Sunday we’ll bring you an interview with a real roboticist for you to get inspired.

This week it’s the turn of Davide Scaramuzza, Professor and Director of the Robotics and Perception Group at the University of Zürich. In his talk, Davide explains his journey from Electronics Engineering to leading a top robotics vision research group developing a promising technology: event cameras. He’ll also speak about the challenges he faced along the way, and even how he combines robotics research with another of his passions, magic. Curious about where the magic happens? Davide also takes you around his research lab during the interview. Let the magic happen!

Plenary and Keynote talks focus series #1: Yukie Nagai & Danica Kragic https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-1-yukie-nagai-danica-kragic/ Wed, 28 Jul 2021 16:03:01 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-1-yukie-nagai-danica-kragic/

Would you like to stay up to date with the latest robotics & AI research from top roboticists? The IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) recently released their Plenary and Keynote talks in the IEEE RAS YouTube channel. We’re starting a new focus series with all their talks. This week, we’re featuring Professor Yukie Nagai (University of Tokyo), talking about cognitive development in humans and robots, and Professor Danica Kragic (KTH Royal Institute of Technology), talking about the impact of robotics and AI in the fashion industry.

Prof. Yukie Nagai – Cognitive Development in Humans and Robots: New Insights into Intelligence

Abstract: Computational modeling of cognitive development has the potential to uncover the underlying mechanism of human intelligence as well as to design intelligent robots. We have been investigating whether a unified theory accounts for cognitive development and what computational framework embodies such a theory. This talk introduces a neuroscientific theory called predictive coding and shows how robots as well as humans acquire cognitive abilities using predictive processing neural networks. A key idea is that the brain works as a predictive machine; that is, the brain tries to minimize prediction errors by updating the internal model and/or by acting on the environment. Our robot experiments demonstrate that the process of minimizing prediction errors leads to continuous development from non-social to social cognitive abilities. Internal models acquired through their own sensorimotor experiences enable robots to interact with others by inferring their internal state. Our experiments inducing atypicality in predictive processing also explains why and how developmental disorders appear in social cognition. I discuss new insights into human and robot intelligence obtained from these studies.
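
As a rough illustration of the predictive-processing idea described in the abstract, here is a minimal sketch (not taken from the talk; the hidden state, learning rate, action gain and noise model are all illustrative assumptions) of an agent that reduces prediction error either by updating its internal estimate or by acting on the environment:

import numpy as np

rng = np.random.default_rng(0)
world_state = 2.0        # hidden cause in the environment (made up for the demo)
belief = 0.0             # the agent's internal estimate of that cause
learning_rate = 0.2      # how quickly the internal model is updated
action_gain = 0.05       # how strongly the agent acts to change its own input

for step in range(50):
    observation = world_state + rng.normal(scale=0.1)  # noisy sensory input
    prediction = belief                                 # the model predicts its input
    error = observation - prediction                    # prediction error

    # Perception: update the internal model to reduce the error.
    belief += learning_rate * error

    # Action: nudge the environment so that future input matches the prediction.
    world_state -= action_gain * error

print(f"final belief: {belief:.2f}, final world state: {world_state:.2f}")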

Bio: Yukie Nagai is a Project Professor at the International Research Center for Neurointelligence, the University of Tokyo. She received her Ph.D. in Engineering from Osaka University in 2004 and worked at the National Institute of Information and Communications Technology, Bielefeld University, and Osaka University. Since 2019, she has led the Cognitive Developmental Robotics Lab at the University of Tokyo. Her research interests include cognitive developmental robotics, computational neuroscience, and assistive technologies for developmental disorders. Her research achievements have been widely reported in the media as novel techniques to understand and support human development. She also serves as the research director of JST CREST Cognitive Mirroring.

Prof. Danica Kragic – Robotics and Artificial Intelligence Impacts on the Fashion Industry

Abstract: This talk will give an overview of how robotics and artificial intelligence can impact the fashion industry. What can we do to make the fashion industry more sustainable, and which parts of this industry are the most difficult to automate? Concrete examples of research problems in terms of perception, manipulation of deformable materials and planning will be discussed in this context.

Bio: Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received an MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995 and a PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences, Royal Swedish Academy of Engineering Sciences and Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. Her research is in the area of robotics, computer vision and machine learning. She has received ERC Starting and Advanced Grants. Her research is supported by the EU, Knut and Alice Wallenberg Foundation, Swedish Foundation for Strategic Research and Swedish Research Council. She is an IEEE Fellow.

Bristol Robotics Lab Virtual Conference 2021 (with videos) https://robohub.org/bristol-robotics-lab-virtual-conference-2021-with-videos/ Fri, 09 Jul 2021 10:21:47 +0000 https://robohub.org/bristol-robotics-lab-virtual-conference-2021-with-videos/

The Bristol Robotics Laboratory (BRL) hosted their first virtual conference last Wednesday, the 30th of June. With over 50 talks, the conference was a gathering of top robotics researchers, business leaders and PhD/post-doctoral students showcasing cutting-edge research. In their four dedicated tracks, speakers covered a wide range of topics such as unmanned aerial vehicles, soft robotics, assistive technologies, human-robot interaction, robot safety & ethics, or swarm robotics, among others. Moreover, there were two panels discussing the future of robotics, and smart automation & startups.

In case you missed the conference, or you would like to re-watch it, BRL has made all the talks available through their dedicated YouTube channel. For your convenience, you’ll find all the videos in the playlist below. To see the details of each track or talk, you can access their programme navigator. Speakers’ email addresses are also available for you to contact them directly.

Would you like to go even deeper? You can now discover where this research takes place through their virtual walkthrough. Enjoy!

Play RoC-Ex: Robotic Cave Explorer and unearth the truth about what happened in cave system #0393 https://robohub.org/play-roc-ex-robotic-cave-explorer-and-unearth-the-truth-about-what-happened-in-cave-system-0393/ Sat, 03 Jul 2021 16:47:03 +0000 https://robohub.org/play-roc-ex-robotic-cave-explorer-and-unearth-the-truth-about-what-happened-in-cave-system-0393/

Dive into the experience of piloting a robotic scout through what appears to be an ancient cave system leading down to the centre of the Earth. With the help of advanced sensors, guide your robot explorer along dark tunnels and caverns, avoiding obstacles, collecting relics of aeons past and, hopefully, discover what happened to its predecessor.

Mickey Li, Julian Hird, G. Miles, Valentina Lo Gatto, Alex Smith and WeiJie Wong (most from the FARSCOPE CDT programme of the University of Bristol and the University of the West of England) have created this educational game as part of the UKRAS festival of Robotics 2021. The game has been designed to teach you about how sensors work, how they are used in reality and perhaps give a glimpse into the mind of the robot. With luck, this game can show how exciting it can be to work in robotics.

As the authors suggest, you can check out the DARPA Subterranean Challenge (video) (website) (not affiliated) for an example of the real life version of this game.

The game is hosted on this website. The site is secured and no data is collected when playing the game (unless you decide to fill in the anonymous feedback form). The source code for the game is available open source on GitHub. Enjoy!

Robot Lab Live at the UK Festival of Robotics 2021 #RobotFest https://robohub.org/robot-lab-live-at-the-uk-festival-of-robotics-2021-robotfest/ Mon, 21 Jun 2021 09:41:03 +0000 https://robohub.org/robot-lab-live-at-the-uk-festival-of-robotics-2021-robotfest/

For five years, the EPSRC UK Robotics and Autonomous Systems (UK-RAS) Network has been holding the UK Robotics Week. This year’s edition kicked off on the 19th of June as the UK Festival of Robotics, with the aim of boosting public engagement in robotics and intelligent systems. The festival features online events, special competitions, and interactive activities for robot enthusiasts of all ages. Among them, we recommend the Robot Lab Live session that will take place online on Wednesday the 23rd of June, 4pm – 6pm (BST).

Robot Lab Live is a virtual robotics showcase featuring 16 of the UK’s top robotics research groups. Each team will show off their cutting-edge robots and autonomous systems simultaneously to live audiences on YouTube. You can flick between different demos running during the two-hour livestream, ask questions and interact with the research teams in the chat. Here’s the link to watch the livestream.

Apart from Robot Lab Live, there are other interactive (and online!) events that we find of particular interest:

  • Mosaix with Swarm Robot Tiles (Tuesday the 22nd of June, 4pm – 6pm BST): In this event, you will be able to remotely control your own Tile at the Bristol Robotics Laboratory to create collective art with other users. Tiles are small, 4-inch screens-on-wheels that users can draw on, colour, and move. ‘Mosaix’ emerges from the interactions between swarms of robot ‘Tiles’.
  • Tech Tag (Thursday the 24th of June, 5pm – 7pm BST): Control one of our robots at Harwell campus in Oxford as they play a high-tech version of the schoolyard classic – tag. Visit this website to join one of the four robot teams (blue, purple, red or yellow) and vote for where your robot should go next to avoid being tagged. If you’re it – try to catch one of the other robots as quickly as you can! With live commentary from science communicator and presenter, Sam Langford.
  • CSI Robot (Friday the 25th of June, 3pm – 4pm BST): Would you like to try being an accident investigator, finding out the cause of incidents involving humans and social robots? Then join us for this fun, interactive session!

To find out about more events, please visit this website.

Robot Swarms in the Real World workshop at IEEE ICRA 2021 https://robohub.org/robot-swarms-in-the-real-world-workshop-at-ieee-icra-2021/ Thu, 17 Jun 2021 11:19:28 +0000 https://robohub.org/robot-swarms-in-the-real-world-workshop-at-ieee-icra-2021/

Siddharth Mayya (University of Pennsylvania), Gennaro Notomista (CNRS Rennes), Roderich Gross (The University of Sheffield) and Vijay Kumar (University of Pennsylvania) were the organisers of this IEEE ICRA 2021 workshop aiming to identify and accelerate developments that help swarm robotics technology transition into the real world. Here we bring you the recordings of the session in case you missed it or would like to re-watch.

As the organisers describe, “in swarm robotics systems, coordinated behaviors emerge via local interactions among the robots as well as between robots and the environment. From Kilobots to Intel Aeros, the last decade has seen a rapid increase in the number of physically instantiated robot swarms. Such deployments can be broadly classified into two categories: in-laboratory swarms designed primarily as research aids, and industry-led efforts, especially in the entertainment and automated warehousing domains. In both of these categories, researchers have accumulated a vast amount of domain-specific knowledge, for example, regarding physical robot design, algorithm and software architecture design, human-swarm interfacing, and the practicalities of deployment.” The workshop brought together swarm roboticists from academia to industry to share their latest developments—from theory to real-world deployment. Enjoy the playlist with all the recordings below!

IEEE ICRA 2021 Awards (with videos and papers) https://robohub.org/ieee-icra-2021-awards-with-videos-and-papers/ Tue, 15 Jun 2021 09:27:09 +0000 https://robohub.org/ieee-icra-2021-awards-with-videos-and-papers/

Did you have the chance to attend the 2021 International Conference on Robotics and Automation (ICRA 2021)? Here we bring you the papers that received an award this year in case you missed them. Congratulations to all the winners and finalists!

Best Paper Award in Automation

Automated Fabrication of the High-Fidelity Cellular Micro-Scaffold through Proportion-Corrective Control of the Photocuring Process. Xin Li, Huaping Wang, Qing Shi, JiaXin Liu, Zhanhua Xin, Xinyi Dong, Qiang Huang and Toshio Fukuda

“An essential and challenging use case solved and evaluated convincingly. This work brings to light the artisanal field that can gain a lot in terms of safety and worker’s health preservation through the use of collaborative robots. Simulation is used to design advanced control architectures, including virtual walls around the cutting-tool as well as adaptive damping that would account for the operator know-how and level of expertise.”

Access the paper here.

Best Paper Award in Cognitive Robotics

How to Select and Use Tools?: Active Perception of Target Objects Using Multimodal Deep Learning. Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori and Shigeki Sugano

“Robots benefit from being able to select and use appropriate tools. This paper contributes to the advancement of robotics by focusing on tool-object-action relations. The proposed deep neural network model generates motions for tool selection and use. Results demonstrated for a relatively complex ingredient handling task have broader applications in robotics. The approach that relies on active perception and multimodal information fusion is an impactful contribution to cognitive robotics.”

Access the paper here or here.

Best Paper Award on Human-Robot Interaction (HRI)

Reactive Human-To-Robot Handovers of Arbitrary Objects. Wei Yang, Chris Paxton, Arsalan Mousavian, Yu-Wei Chao, Maya Cakmak and Dieter Fox

“This paper presents a method combining realtime motion planning and grasp selection for object handover task from a human to a robot, with effective evaluation on a user study on 26 diverse household objects. The incremental contribution has been made for human robot interaction. Be great if the cost function of best grasp selection somehow involves robotic manipulation metric, eg., form closure.”

Access the paper here.

Best Paper Award on Mechanisms and Design

Soft Hybrid Aerial Vehicle Via Bistable Mechanism. Xuan Li, Jessica McWilliams, Minchen Li, Cynthia Sung and Chenfanfu Jiang

“This paper presents a novel morphing hybrid aerial vehicle with folding wings that exhibits both a quadrotor and a fixed wing mode without requiring any extra actuation by leveraging the motion of a bistable mechanism at the center of the aircraft. A topology optimization method is developed to optimize the bistable mechanism and the folding wing. This work is an important contribution to design of hybrid aerial vehicles.”

Access the paper here.

Best Paper Award in Medical Robotics

Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery. Yonghao Long, Jie Ying Wu, Bo Lu, Yueming Jin, Mathias Unberath, Yunhui Liu, Pheng Ann Heng and Qi Dou

“This paper presents a novel online multi-modal graph learning method to dynamically integrate complementary information in video and kinematics data from robotic systems, to achieve accurate surgical gesture recognition. The proposed method is validated on collected in-house dVRK datasets, shedding light on the general efficacy of their approach.”

Access the paper here.

Best Paper Award on Multi-Robot Systems

Optimal Sequential Stochastic Deployment of Multiple Passenger Robots. Chris (Yu Hsuan) Lee, Graeme Best and Geoffrey Hollinger

“The paper presents rigorous results (well validated experimentally) and visionary ideas: the innovative idea of marsupial robots is very promising for the multi-robot systems community.”

Access the paper here.

Best Paper Award in Robot Manipulation

StRETcH: A Soft to Resistive Elastic Tactile Hand. Carolyn Matl, Josephine Koe and Ruzena Bajcsy

“The committee was particularly impressed by the high level of novelty in this work with unique applications for tactile manipulation of soft objects. Both the paper and presentation provided a clear description of the problem solved, methods and contribution suitable for the general ICRA audience. Significant experimental validations made for a compelling record of the contribution.”

Access the paper here.

Best Paper Award in Robotic Vision

Interval-Based Visual-LiDAR Sensor Fusion. Raphael Voges and Bernardo Wagner

“The paper proposes to use interval analysis to propagate the error from the input sources to the fused information in a straightforward way. To show the applicability of our approach, the paper uses the fused information for dead reckoning. An evaluation using real data shows that the proposed approach localizes the robot in a guaranteed way.”

Access the paper here.
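
For readers unfamiliar with the idea behind this awarded paper, here is a generic, hedged illustration (not taken from the paper; the sensor names, bounds and helper functions are invented) of how interval arithmetic propagates bounded errors so that the fused result is a guaranteed enclosure:

def interval_add(a, b):
    # Sum of two intervals: worst-case bounds add.
    return (a[0] + b[0], a[1] + b[1])

def interval_intersect(a, b):
    # Two guaranteed enclosures of the same quantity must overlap.
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent measurements")
    return (lo, hi)

# Two sensors observe the same displacement with known (invented) error bounds.
camera_interval = (0.9, 1.3)   # metres, e.g. visual odometry bounds
lidar_interval = (1.0, 1.2)    # metres, e.g. LiDAR scan-matching bounds

fused = interval_intersect(camera_interval, lidar_interval)   # still guaranteed
position = interval_add((5.0, 5.0), fused)                    # dead reckoning from x = 5 m
print("fused displacement:", fused, "position enclosure:", position)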

Best Paper Award in Service Robotics

Compact Flat Fabric Pneumatic Artificial Muscle (ffPAM) for Soft Wearable Robotic Devices. Woojong Kim, Hyunkyu Park and Jung Kim

“This paper presents design and evaluation of a novel flat fabric pneumatic artificial muscle with embedded sensing. Experimental results clearly demonstrate that the innovative ffPAM is durable, compact, and has great potential to advance broader application of wearable service robots.”

Access the paper here.

Best Paper Award on Unmanned Aerial Vehicles

Aerial Manipulator Pushing a Movable Structure Using a DOB-Based Robust Controller. Dongjae Lee, Hoseong Seo, Inkyu Jang, Seung Jae Lee and H. Jin Kim

“This paper provides a robust control approach that maintains UAV stability through manipulator contact forces during pushing. It contributes control design along with convincing experimental validation on manipulated objects of unknown size and dynamics. The approach provides practical utility for unmanned aerial manipulation with contact forces.”

Access the paper here.

Best Student Paper Award

Unsupervised Learning of Lidar Features for Use in a Probabilistic Trajectory Estimator. David Juny Yoon, Haowei Zhang, Mona Gridseth, Hugues Thomas and Timothy Barfoot

“The paper presents an unsupervised parameter learning approach in the context of Gaussian variational inference. The approach is innovative and sound. It has been well evaluated using open benchmark datasets. The paper has a broad impact on autonomous navigation.”

Access the paper here.

Best Conference Paper Award

Extrinsic Contact Sensing with Relative-Motion Tracking from Distributed Tactile Measurements. Daolin Ma, Siyuan Dong, Alberto Rodriguez

“The paper makes a notable contribution to the important and re-emerging field of tactile perception by solving the problem of contact localization between an unknown object held by an imprecise grasp and the unknown environment with which it is in contact. This paper represents an excellent theory-to-practice exercise as the novel proposal of using extrinsic tactile array data to infer contact is verified with a new tactile sensor and real robotic manipulation in a simplified, but realistic environment. The authors also provide a robust and honest discussion of results, both positive and negative, for reader evaluation.”

Access the paper here.

Best Video Demonstration Award

Merihan Alhafnawi at the Robot Swarms in the Real World workshop.

Access the video here.

Robohub and AIhub’s free workshop trial on sci-comm of robotics and AI https://robohub.org/robohub-and-aihubs-free-workshop-trial-on-sci-comm-of-robotics-and-ai/ Wed, 21 Apr 2021 15:35:45 +0000 https://robohub.org/robohub-and-aihubs-free-workshop-trial-on-sci-comm-of-robotics-and-ai/

A robot in a field. Image credit: wata1219 on flickr (CC BY-NC-ND 2.0)

Would you like to learn how to tell your robotics/AI story to the public? Robohub and AIhub are testing a new workshop to train you as the next generation of communicators. You will learn to quickly create your story and shape it to any format, from short tweets to blog posts and beyond. In addition, you will learn how to communicate about robotics/AI in a realistic way (avoiding the hype), and will receive tips from top communicators, science journalists and early career researchers. If you’d like to be one of our beta testers, join this free workshop to experience how much impact science communication can have on your professional journey!

The workshop is taking place on Friday the 30th of April, 10am-12.30pm (UK time) via Zoom. Please, sign up by sending an email to daniel.carrillozapata@robohub.org.

Building a 7 axis robot from scratch https://robohub.org/building-a-7-axis-robot-from-scratch/ Sun, 11 Apr 2021 08:50:53 +0000 https://robohub.org/building-a-7-axis-robot-from-scratch/

Do you fancy making yourself an industrial robot to enjoy at home? Jeremy Fielding, a passionate fan of mechanical engineering, did. So he built one. The good news is that he’s preparing a series of videos to teach you the whole process from scratch. How much power do you need to run 7 motors at one time? If you lose power, how do you prevent the arm from collapsing on you or dropping the load? How do you keep the cost down? He’s recorded over 100 hours of video, and he’s planning to teach you how he used the servo motors, how you can use them for your own projects, and how he designed his 7-axis articulated robot.

Jeremy’s aim (website, YouTube, Twitter, Instagram) is simple: draw people to engineering with amazing projects, inspire them with ideas, then teach them how to do it. And for this video series, he’s also looking for your collaboration. So if you’ve got experience and knowledge in building this type of robot and you’d like to share it, maybe you’ll end up being part of the series!

We’d like to thank Black in Robotics for making it possible for us to discover Jeremy. Here’s the video Jeremy has released to introduce his project:

Check out Jeremy’s YouTube channel to discover many more instructional videos. You can also support his work on Patreon.

Talking Robotics’ seminars of January – April 2021 (with videos and even a musical summary!) https://robohub.org/talking-robotics-seminars-of-january-april-2021-with-videos-and-even-a-musical-summary/ Fri, 09 Apr 2021 07:59:59 +0000 https://robohub.org/talking-robotics-seminars-of-january-april-2021-with-videos-and-even-a-musical-summary/

Talking Robotics is a series of virtual seminars about Robotics and its interaction with other relevant fields, such as Artificial Intelligence, Machine Learning, Design Research and Human-Robot Interaction, among others. The seminars aim to promote reflections, dialogues, and a place to network. In this compilation of seminars, we bring you 7 talks (and a half?) from current roboticists for your enjoyment.

Filipa Correia “Group Intelligence on Social Robots”

Filipa Correia received an M.Sc. in Computer Science from the University of Lisbon, Portugal, in 2015. She is currently a junior researcher at GAIPSLab and is pursuing a Ph.D. on Human-Robot Interaction at the University of Lisbon, Portugal.

Her PhD thesis focuses on the challenges of creating social robots that are capable of sustaining cohesive alliances in team settings with humans. Moreover, it contributes computational mechanisms that allow the robotic teammate to autonomously express group-based emotions or to gaze at human teammates in multi-party settings.

For more information about the speaker and related papers, please see this website.

Ross Mead “Bringing Robots To Life”

Dr. Ross Mead is the Founder and CEO of Semio. Ross received his PhD and MS in Computer Science from the University of Southern California in 2015, and his BS in Computer Science from Southern Illinois University Edwardsville in 2007.

In this talk, Dr. Ross Mead discussed the technological advances paving the way for the personal robotics revolution, and the robotics hardware companies leading the charge along that path. He also introduced the software innovations in development at Semio for bringing robots to life.

For more information about the speaker and related papers, please see this website.

Kim Baraka “Humans and Robots Teaching and Learning through Social Interaction”

Kim Baraka is currently a postdoctoral fellow at the Socially Intelligent Machines Lab at the University of Texas at Austin. He holds a dual Ph.D. in Robotics from Carnegie Mellon University and Instituto Superior Técnico (Portugal), an M.S. in Robotics from Carnegie Mellon, and a Bachelor in Electrical and Computer Engineering from the American University of Beirut.

In the first part of the talk, focusing on robots teaching humans, Kim discussed algorithmic solutions that enable socially assistive robots to teach both children and therapists in a personalized way. In the second part of the talk, focusing on humans teaching robots, Kim discussed some preliminary efforts towards developing ways in which robots can learn tasks from human teachers in richer and more natural ways.

For more information about the speaker and related papers, please see this website.

Glenda Hannibal “Trust in HRI: Probing Vulnerability as an Active Precondition”

Glenda has previously worked in the Department of Sociology at the University of Vienna and as an expert for the HUMAINT project at the European Commission. Glenda holds a BA and MA in Philosophy from Aarhus University and is currently a PhD student in the Trust Robots Doctoral College and Human-Computer Interaction group at TU Wien.

In this talk, Glenda presented her research on vulnerability as a precondition of trust in HRI. In the first part, she argued that while the most commonly cited definitions of trust used in HRI recognize vulnerability as an essential element of trust, it is also often considered somewhat problematic. In the second part of her talk, she presented the results of two empirical studies she has undertaken to explore trust in HRI in relation to vulnerability. Finally, she reflected on a few ethical aspects related to this theme to close the talk.

For more information about the speaker and related papers, please see this website.

Carl Mueller “Robot Learning from Demonstration Driven Constrained Skill Learning & Motion Planning”

Carl Mueller is a Ph.D. student of computer science at the University of Colorado – Boulder, advised by Professor Bradley Hayes within the Collaborative Artificial Intelligence and Robotics Laboratory. He graduated from the University of California – Santa Barbara with a degree in Biopsychology and, after a circuitous route through the pharmaceutical industry, ended up in tech, founding his own company building intelligent chat agents for business analytics.

The major theme of his research is enabling human users to communicate additional information to the robot learning system through ‘concept constraints’. Concept constraints are abstract behavioral restrictions grounded as geometric and kinodynamic planning predicates that prohibit or limit the behavior of the robot, resulting in more robust, generalizable, and safe skill execution. In this talk, Carl discussed how conceptual constraints are integrated into existing LfD methods, how unique interfaces can further enhance the communication of such constraints, and how the grounding of these constraints requires constrained motion planning techniques.
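
To make the notion of a grounded constraint more concrete, here is a minimal, hypothetical sketch (not Carl’s implementation; the constraint classes, limits and waypoints are invented) of geometric predicates used to check a demonstrated trajectory before a learned skill is executed:

from dataclasses import dataclass

@dataclass
class HeightConstraint:
    min_z: float                      # e.g. "keep the cup above the table"

    def holds(self, position):
        return position[2] >= self.min_z

@dataclass
class RegionConstraint:
    x_range: tuple                    # e.g. "stay over the tray"
    y_range: tuple

    def holds(self, position):
        x, y, _ = position
        return (self.x_range[0] <= x <= self.x_range[1]
                and self.y_range[0] <= y <= self.y_range[1])

def trajectory_satisfies(waypoints, constraints):
    # A skill execution is acceptable only if every waypoint meets every constraint.
    return all(c.holds(p) for p in waypoints for c in constraints)

demo = [(0.40, 0.10, 0.25), (0.45, 0.12, 0.22), (0.50, 0.15, 0.18)]
constraints = [HeightConstraint(min_z=0.20), RegionConstraint((0.3, 0.6), (0.0, 0.3))]
print(trajectory_satisfies(demo, constraints))   # False: the last waypoint dips too low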

For more information about the speaker and related papers, please see this website.

Daniel Rakita “Methods and Applications for Generating Accurate and Feasible Robot-arm Motions in Real-time”

Daniel Rakita is a Ph.D. student of computer science at the University of Wisconsin-Madison advised by Michael Gleicher and Bilge Mutlu. He received a Bachelors of Music Performance from the Indiana University Jacobs School of Music in 2012.

In this talk, he overviewed technical methods they have developed that attempt to achieve feasible, accurate, and time-sensitive robot-arm motions. In particular, he detailed their inverse kinematics solver called RelaxedIK that utilizes both non-linear optimization and machine learning to achieve a smooth, feasible, and accurate end-effector to joint-space mapping on-the-fly. He highlighted numerous ways they have applied their technical methods to real-world-inspired problems, such as mapping human-arm-motion to robot-arm-motion in real-time to afford effective shared-control interfaces and automatically moving a camera-in-hand robot in a remote setting to optimize a viewpoint for a teleoperator.
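
As a rough, illustrative sketch of the underlying idea (this is not the actual RelaxedIK code; the two-link arm, weights and solver choice are assumptions), inverse kinematics can be posed as a weighted optimization that trades end-effector accuracy against joint-space smoothness:

import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8                      # link lengths of a planar two-link arm (invented)

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def relaxed_objective(q, goal, q_prev, w_goal=1.0, w_smooth=0.3):
    goal_term = np.sum((forward_kinematics(q) - goal) ** 2)   # end-effector accuracy
    smooth_term = np.sum((q - q_prev) ** 2)                   # joint-space smoothness
    return w_goal * goal_term + w_smooth * smooth_term

q = np.array([0.3, 0.5])               # current joint configuration
for goal in [np.array([1.2, 0.6]), np.array([1.1, 0.8]), np.array([0.9, 1.0])]:
    result = minimize(relaxed_objective, q, args=(goal, q))
    q = result.x                       # next configuration along the motion
    print(np.round(goal, 2), "->", np.round(forward_kinematics(q), 3))

In this toy setup, raising the (assumed) smoothness weight favours continuity of the arm motion at the cost of some end-effector accuracy.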

For more information about the speaker and related papers, please see this website.

Barbara Bruno “Culture-Aware Robotics”

Barbara Bruno is a post-doc researcher at the École Polytechnique Fédérale de Lausanne (EPFL), in Lausanne, Switzerland, in the CHILI lab. Barbara received the M.Sc. and the Ph.D. in Robotics from the University of Genoa in 2011 and 2015, respectively. She is part of the NCCR Robotics organisation and currently involved in the EU ITN ANIMATAS.

In this talk, she explored how existing quantitative and qualitative methods for the assessment of culture, and cultural differences, can be combined with knowledge representation and reasoning tools such as ontologies and fuzzy controllers to endow robots with the capability of taking cultural factors into account in a range of tasks going from low-level motion planning to high-level dialogue management and user adaptation.

For more information about the speaker and related papers, please see this website.

Extra: When you forget to hit the record button but you value the speaker so much that you compose a musical summary of his talk…

Because as they say, we are human after all!

Nils Hagberg “A call for a Human Approach to Technology”

Nils Hagberg is the Product Owner at Furhat Robotics. He is a computer linguist with almost ten years of industry experience in human-machine interaction and conversational technology.

In this talk, he gave a few examples of human-centric business case designs that he’s come across earlier in his career and at Furhat Robotics – the social robotics company that has set out to make technology more human. His hope for this talk is that it will put your own work in a larger context and nudge you towards a path that will ensure humans will be allowed to remain human.

For more information about the speaker and related papers, please see this website.

Field Robotics: A new, high-quality, online and open-access journal https://robohub.org/field-robotics-a-new-high-quality-online-and-open-access-journal/ Sun, 07 Feb 2021 09:00:20 +0000 https://robohub.org/field-robotics-a-new-high-quality-online-and-open-access-journal/

A robot in a field. Image credit: wata1219 on flickr (CC BY-NC-ND 2.0)

It has been almost half a year since the mass resignation of the editors and editorial board of the Journal of Field Robotics. In a new turn of events, Peter Corke has recently relaunched Field Robotics as an online open-access journal with the old editorial board. Field Robotics deals with the fundamentals of robotics in unstructured and dynamic environments. Papers are now being accepted at their website.

The story of the mass resignation was reported by Silicon Valley Robotics on 26 August in their post Is it farewell to the Journal of Field Robotics?, which we reproduce below.

Original post from Silicon Valley Robotics

2020 is proving to be a watershed year. First COVID-19 has forced the cancellation (eg. ROSCon 2020, Hannover Messe) or the complete redesign of almost every major robotics conference. Now it seems that the science publishing community is also undergoing a sea change, as yesterday’s mass resignation of the editors and editorial board of the Journal of Field Robotics suggests. Stay posted for updates about the future of a field robotics research journal.

August 25, 2020

Dear Colleagues,

We would like to inform you about an upcoming major transition for the Journal of Field Robotics.

After 15 years of service, John Wiley and Sons, the publisher has decided not to renew the contract of the Editor in Chief (Sanjiv Singh) and the Managing Editor (Sanae Minick) and hence our term will expire at the end of 2020.

This comes after two years of discussions between new Wiley representatives and the Editorial Board have failed to converge to a common set of principles and procedures by which the journal should operate. The Editorial Board has unanimously decided to resign.

We do want to assure the authors who have papers under review that we see it as our responsibility to bring these documents to resolution during our term. We will continue to process new submissions until the end of the year. More about the future at the end of this note.

While the issue at the heart of our disagreement with Wiley is about academic independence, it should be noted that there is a structural issue here. Scholarly publishing is broadly in flux at the moment in the search for a sustainable model. Currently, academics are not paid to create and review articles. In fact, they often have to pay fees to publish, and, readers have to pay to access the work through a pay per view system, or, through subscriptions.

Plan S, an international consortium, makes the dilemma clear:

“Monetising the access to new and existing research results is profoundly at odds with the ethos of science (Merton, 1973)… In the 21st century, science publishers should provide a service to help researchers disseminate their results. They may be paid fair value for the services they are providing, but no science should be locked behind paywalls!”

While this moment calls for creativity and collaboration with the scholarly community to find new models, Wiley is intent on making broad changes to the way that the Journal of Field Robotics is operated, guided mostly by an economic calculation to increase revenue and decrease costs. To do this, they have unilaterally decided to change the terms of the contract that has been constant since the JFR was started in 2005. Wiley has faced a similar case (European Law Journal) with a similar effect – the entire editorial board resigned in January 2020.

Wiley insists that the new contract is covered under a confidentiality agreement that not even the Editorial Board can examine. What we can say is that the net effect of Wiley’s demands would make the Editors contractors to the publisher rather than having them respond to the board. We see this as a breach of academic autonomy.

In resigning, the Editorial Board of the Journal of Field Robotics reaffirms its commitment to dissemination and discussion of research. In the near future we will announce a new forum for research in Field Robotics that will maintain the academic integrity of our editorial process while also ensuring open dissemination of your research.

The Editorial Board of the Journal of Field Robotics

  • Simon Lacroix, LAAS
  • David Wettergreen, CMU
  • Cédric Pradalier, GeorgiaTech Lorraine
  • Tim Barfoot, University of Toronto
  • Roland Siegwart, ETH
  • Giuseppe Loianno, NYU
  • Henrik I Christensen, UC San Diego
  • Marco Hutter, ETH
  • Kazuya Yoshida, Tohoku University
  • Aarne Halme, Aalto University
  • Hanumant Singh, Northeastern University
  • Matthew Berkemeier, Continental
  • Anibal Ollero, University of Seville
  • Jonathan Roberts, QUT
  • Hajime Asama, University of Tokyo
  • Satoshi Tadokoro, Tohoku University
  • Raja Chatila, Sorbonne University
  • Peter Corke, QUT
  • Matt Spenko, Illinois Institute of Technology
  • Larry Matthies, NASA JPL
  • Salah Sukkarieh, University of Sydney
  • Stefan Williams, University of Sydney
  • Sanjiv Singh, CMU
Robotics trends at #CES2021 https://robohub.org/robotics-trends-at-ces2021/ Sat, 16 Jan 2021 13:39:39 +0000 https://robohub.org/robotics-trends-at-ces2021/

Even massive events like the 54th edition of the Consumer Electronics Show (CES) have gone virtual due to the current pandemic. Since 1967, the Consumer Technology Association (CTA), which is the North American trade association for the consumer technology industry, has been organising the fair, and this year was not going to be any different—well, except they had to take the almost 300,000 m² of CES 2020 to the cloud. In this post, I mainly put the focus on current and future hardware/robotics trends presented at CES 2021 (because we all love to make predictions, even during uncertain times).

“Innovation accelerates and bunches up during economic downturns only to be unleashed as the economy begins to recover, ushering in powerful waves of technological change”—Christopher Freeman, British Economist. With this quote, I start the first session on ‘my show’ of CES 2021, ‘Tech trends to watch’ by CTA (see their slides here). There are not-that-surprising trends such as Artificial Intelligence/Machine Learning or services like cloud computing, video streaming or remote learning, but let’s kick off with hardware/robotics. Steve Koenig (Vice President of Research for CTA) highlighted this recent study from Gartner that predicts Robotic Process Automation will become a $2 billion global industry in 2021, and will continue growing at double-digit rates through 2024.

https://www.youtube.com/watch?v=kOVLuEee1JU
Dual Arm Robot System (DARS) from ITRI

Lesley Rohrbaugh (Director of Research for CTA) also presented some other hardware/robotics trends for 2021, including digital health wearables that go beyond your wrist (for example, the Oura ring can measure your body temperature or respiratory rate and generate a health score based on the data collected during the day, making it a potential tool for detecting early COVID-19 symptoms) and robot triage helpers that provide support during high influxes of patients at hospitals and reduce the exposure of hospital workers. We all know this past year has been marked by COVID-19. In a more focused session on ‘robots to the rescue’ chaired by Eugene Demaitre (Senior Editor at The Robot Report), he stated that “this year, the definition of what a frontline worker is has changed”. In fact, the field of rescue robots has expanded with the current pandemic: nowadays they are applied not only to assist in disaster situations, but also in areas such as autonomous delivery of goods, automatic cleaning for indoor sanitising (such as ADIBOT) and even cooking (such as the kitchen robot by Moley)—as Lesley showed in her session.

ADIBOT from UBTECH

Kitchen robot by Moley

“Delivery is actually the largest unautomated industry in the world”, Ahti Heinla (CEO of Starship Technologies) said. “From the consumers’ perspective, delivery today kind of works. You pay a couple bucks and you get what you want, or maybe you don’t pay these couple of bucks at all. It appears to be free. Well, guess what? It isn’t free, and while it might be working for the consumer, it isn’t always working for the company doing the delivery. They are looking for solutions.”

Delivery robot by Starship Technologies

What’s more, there’s a shortage of drivers to cope with the exponential growth of deliveries, Kathy Winter (Vice President at the IoT group at Intel) mentioned. But automation comes with a price, and the management of autonomous delivery fleets isn’t straightforward. In relation to drone delivery, James Burgess (CEO of Wing) said that “data is one of the key elements here. There’s so much to keep track of, both individual robots or airplanes as you have, but also the environment, the weather, the traffic, the other systems that are moving through.” One of the biggest challenges is to actually develop the platform that manages the fleets. But also, “you need to build hardware, build software, think about regulations, think about safety, think about the consumer adoption and value proposition. You also need to build an app”, Ahti said. When it comes to regulations (whether traffic regulations for autonomous vehicles or standards for safety on sidewalks), the technology is far ahead, Kathy said. In her opinion, we need a common standard that certifies the safety of autonomous ground/aerial vehicles to avoid having different safety levels depending on the vehicle.

Delivery drone by Wing

As in recent years at CES, vehicle tech takes up a huge part of the whole event. With the rise of 5G connectivity (expected to really kick off during 2021), not only are self-driving cars in the trends conversation, but also connectivity via Cellular V2X (Vehicle to Everything communication). This is especially remarkable, as the area of smart cities is also a current trend in development under the umbrella of the Internet of Things.

Keynote at CES 2021 by Hans Vestberg (CEO of Verizon)

As shown above, research into consumer habits and future technology trends is a huge part of the work that CTA does. Some preexisting tech has skyrocketed as a result of the current health crisis, as CTA’s latest research reveals. Indeed, the tech industry grew by 5.5% to $442 billion during 2020 in the United States.

5-year tech trends forecast by CTA

While hardware-related tech—which represents three quarters of the industry retail value—had flat growth, services grew by a substantial 31%. It is not surprising that the five key overall trends spanning hardware, software and services that CTA found are: 1) remote learning (educational robots, AR/VR, STEM products…), 2) digital entertainment (e.g. audio/video platforms or gaming), 3) smart homes (tech to improve energy efficiency, air/water quality, etc.), 4) online shopping (as exemplified above, autonomous grocery delivery is becoming a thing), and 5) personal vehicles & travel tech (did I mention autonomous vehicles already?). When it comes to their hardware forecast, their prediction for the short run points to smart home technology (with home robots being a very popular choice) and digital health, an area worth $845 million, with a great opportunity for health-monitoring devices (e.g. wearables).

Handy bot by Samsung

Dallara IL-15 racecar, the autonomous car designed for the Indy Autonomous Challenge in October 2021

There was also room at CES 2021 for diversity, equity and inclusion (DEI). In the session chaired by Tiffany Moore (Senior Vice President, Political and Industry Affairs at CTA), invited speakers Dawn Jones (Acting Chief Diversity and Inclusion Officer & Global Director of Social Impact communications, policy and strategy at Intel) and Monica Poindexter (Head of Employee Relations and Inclusion & Diversity at Lyft) commented on the findings from the reports on diversity, inclusion and racial equity launched recently by Intel and Lyft. As stressed by both Dawn and Monica, retention and progression of employees from underrepresented groups is key for successful DEI in the long run. In fact, it can take at least one or two years before the outcomes of DEI policies start to show up, Monica pointed out. Another crucial aspect both speakers shared is the support from the C-suite and middle managers: they all have to believe in the same DEI goals across the organisation. Active listening and the implementation of mechanisms for bottom-up feedback, where employees can anonymously express their opinion and raise their concerns, have also helped both companies improve their DEI. However, the two reports show there are still DEI barriers to break down (e.g. women held no more than 21.3% of senior, director or executive positions at Intel in 2020, and Lyft saw an overall loss of 1.9% of its Black/African American employees last year). That is why the work done by organisations such as Women in Robotics and Blacks in Robotics is vital to improve DEI inside companies. Still, there is a lot of work to be done.

Let’s hope #CES2022 returns to Las Vegas (because this will mean the pandemic is over). See you there!

#CYBATHLON2020GlobalEdition winners of the powered leg prosthesis race (with interview) https://robohub.org/cybathlon2020globaledition-winners-of-the-powered-leg-prosthesis-race-with-interview/ Sun, 10 Jan 2021 12:17:06 +0000 https://robohub.org/cybathlon2020globaledition-winners-of-the-powered-leg-prosthesis-race-with-interview/

Winning team Circleg

Finishing this series of CYBATHLON 2020 winners, today we feature the victory of the startup Circleg from Switzerland. We also had the chance to interview them (see the end of this post).

In this race, pilots from five teams, each wearing a leg prosthesis, had to complete a circuit using any kind of active or passive prosthesis. In the CYBATHLON organisers’ own words, “passive prostheses are primarily for cosmetic purposes and have few functional characteristics. So-called active leg prostheses can be controlled accurately thanks to innovative technologies. After a leg amputation, motorised prostheses allow users to do things like climb stairs more easily and walk up and down sloped surfaces successfully.” The challenge for the pilots was to complete the following tasks:

(1) Balancing cups and plates while sitting down and standing up to test leg strength in a confined space.

(2) Overcoming hurdles while carrying apples on two plates from one end to the other to test their bending ability and movement control of the knee joints.

(3) Transporting two buckets on a beam to test their ability to balance while moving forwards and backwards.

(4) Transporting balls and boxes to the other side of the stairs with only one foot on each step to test their ability to bend the knee joint, the motor power on the stairs, the precision of steps, their stability, all with limited vision.

(5) Crossing a tilted path in both directions while carrying plates with apples to test the bending ability and angle control of the knee and ankle joint.

(6) Balancing a plate with apples while ascending and descending a ramp to test the ability to bend knee and ankle joints, and their bending stability and motor power at the ramps.

Powered leg prosthesis race tasks

As with other disciplines, the top three races were very tight. Team Circleg with pilot Andre Frei completed the circuit in 2m 43s, giving them the gold medal. The Swiss silver medalist team, NeuroLegs with pilot Stefan Poth, took only five extra seconds. The bronze medal went to the Polish team Contur 2000 with pilot Adrian Bak, finishing the race in 2m 57s. Here’s a summary of the races of the top 4 finalists:

You can see the results from the rest of the teams in this discipline here, or watch the recorded livestreams of both days on their website.

Interview with Simon Oschwald – Co-founder of Circleg

We had the pleasure of interviewing Simon Oschwald, one of the co-founders of the startup Circleg. Simon studied Industrial and Product Design at the Zurich University of the Arts.

Simon Oschwald (left) and Fabian Engel (right), co-founders of Circleg


D. C. Z.: What does it mean for your team to have won in your CYBATHLON category?

S.O.: To win the prosthetic leg race at the Cybathlon exceeded all our expectations! Participating in the Cybathlon was a huge milestone for our team and a great opportunity to present the Circleg and our vision on this global stage. It was important for us to be able to show that the Circleg, with its functionality, can support amputees for the various challenges they face in everyday life. Our pilot Andre has now definitely proven this by winning the Cybathlon race! The performance of the Circleg at the Cybathlon 2020 is also a confirmation that we are on the right track with the development to ultimately achieve our vision of Freedom of Mobility for everyone. We are thrilled!

D. C. Z.: And what does it mean for people with disabilities?

S.O.: I hope that many amputees worldwide will see the performance of the Circleg at the Cybathlon as a sign that even with limited financial means it is possible to achieve freedom of mobility. Together we can realize with Circleg a holistic and sustainable prosthetic care for the majority of amputees worldwide. See it as a promise from our side that we will give everything to turn this vision into reality!

D. C. Z.: What are still your challenges?

S.O.: There are quite a few: the transformation of Circleg Zero into a mass product, the development of the local production chain and, finally, the implementation of our business model. A sustainable prosthetic supply does not only consist of a functioning product, but also requires locally functioning production, high-quality support, repair and service facilities and appropriate financing mechanisms. With the Circleg we address all these issues and our interdisciplinary team is extremely motivated to tackle these challenges!

Science Magazine robot videos 2020 https://robohub.org/science-magazine-robot-videos-2020/ Wed, 06 Jan 2021 15:50:19 +0000 https://robohub.org/science-magazine-robot-videos-2020/

Did you manage to watch all the holiday robot videos of 2020? If you did but are still hungry for more, I have prepared this compilation of Science Magazine videos featuring robotics research, all released during the last year. Enjoy!

These ‘beetlebots’ keep flying, even after crashing into poles

What if the folding wings of beetles could help robots navigate narrow places by not being affected by crashes? You can read a bit more here, and see the research article here.

Magnetic spray transforms inanimate objects into mini-robots

Researchers developed an iron-based spray that sticks to surfaces like origami paper or cotton thread, and turns objects into tiny robots that could be maneuvered inside our bodies for future biomedical applications. You can read a bit more here, and see the research article here.

Speedy drones count Antarctic penguin colonies in record time

Reducing the amount of time that it takes to count penguins in Antarctica is crucial when you have to survive its extreme weather conditions. Researchers developed a new algorithm for multiple drones that cut the time from two days to three hours. You can read the story here.

Mosquito-inspired drone dodges obstacles, thanks to air-pressure sensors

By taking inspiration from the way some mosquitoes use changes in air flow to detect close objects, researchers created a sensor that can be fitted into flying robots to avoid crashes even when objects can’t be seen in the dark. You can read a bit more here, and see the research article here.

How NASA’s new rover will search for signs of ancient life on Mars

On 18 February 2021, a NASA rover launched last summer will land on Mars to help researchers understand the planet’s climatic history. You can read the story here.

These sweaty robots cool themselves faster than humans

Cooling systems are important for robots in the same way they are for us. Indeed, researchers were inspired by the human body’s best cooling system: sweat. You can read a bit more here, and see the research article here.

Swarm of drones flies through heavy forest—while staying in formation

Maintaining connectivity while avoiding crashes during outdoor navigation is a difficult challenge for robots flying through forests. Researchers found a way to ease this task. You can read a bit more here, and see the research article here.

James Bruton focus series #3: Virtual Reality combat with a real robot https://robohub.org/james-bruton-focus-series-3-virtual-reality-combat-with-a-real-robot/ Sat, 26 Dec 2020 09:12:44 +0000 https://robohub.org/james-bruton-focus-series-3-virtual-reality-combat-with-a-real-robot/

It’s Saturday, it’s time for another post in the James Bruton focus series, and it’s Boxing Day in the UK and most of the Commonwealth countries. Even if this holiday has nothing to do with boxing, I didn’t want to miss the opportunity to take it literally and bring you a project in which James teamed up with final year degree students in Computer Games Technology at Portsmouth University to build a robot that fights a human in a Virtual Reality (VR) game.

For this project, the students Michael (Coding & VR Hardware), Stephen (Character Design & Animation), George (Environment Art) and Boyan (Character Design & Animation) designed a VR combat game in which you fight another character. James’ addition was to design a real robot that fights the player, so that when they get hit in the game, they also get hit in real life by the robot. The robot and the player’s costume are tracked using Vive trackers so the VR system knows where to position each of them in the 3D virtual environment. You can see some artwork and more details about the project here and here. Without further ado, here’s James’ video:

Happy holidays!

#CYBATHLON2020GlobalEdition winners of the functional electrical stimulation bike race (with interview) https://robohub.org/cybathlon2020globaledition-winners-of-the-functional-electrical-stimulation-bike-race-with-interview/ Sat, 19 Dec 2020 09:16:04 +0000 https://robohub.org/cybathlon2020globaledition-winners-of-the-functional-electrical-stimulation-bike-race-with-interview/

Winning team pilot Sander Koomen

Continuing this series of CYBATHLON 2020 winners, today we feature the victory of PULSE Racing from VU University Amsterdam. We also had the chance to interview them (see the end of this post).

In this race, pilots with paraplegia from nine teams competed against each other using a recumbent bicycle that they could pedal with the help of functional electrical stimulation (FES) of their leg muscles. As the organizers of CYBATHLON describe, “FES is a technique that allows paralysed muscles to move again. By placing electrodes on the skin or implanting them, currents are applied to the muscles, making them contract. Thus, a person whose nerves from the brain to the leg muscles are disconnected due to a spinal cord injury (SCI) can use an intelligent control device to initiate a movement, e.g. stepping on a bike pedal. New types of electrodes and an exact control of the currents make it possible to maximise the pedal force with each rotation while avoiding early muscle fatigue.” Pilots had 8 minutes to complete 1200m on their static recumbent bike.

A recumbent bike

Functional electrical stimulation bike race illustration

Five out of the nine teams finished the 1200m race. Therefore, the podium was decided based on finishing time, with pilot Sander Koomen from the winning team PULSE Racing (The Netherlands) completing the distance in 2 minutes and 40 seconds (that’s an average speed of 27km/h). The silver medal went to team ImperialBerkel (UK & The Netherlands) with pilot Johnny Beer, who made a time of 2 minutes and 59 seconds. Finally, team Cleveland (US) with pilot Mark Steven Muhn won the bronze medal with a finishing time of 3 minutes and 13 seconds. You can watch a summary of the top 4 races in the video below.

You can also see the results from the rest of the teams in this discipline here, or watch the recorded livestreams of both days on their website.

Interview with PULSE Racing

Team PULSE Racing


D. C. Z.: What does it mean for your team to have won in your CYBATHLON category?

P.R.: We saw this as a good reflection of where we stand as a team. The result was unexpected, because sometimes it is hard to see results during training. By winning the Cybathlon, our uncertainties about our developments vanished. The gold medal emphasizes our strength and motivation as a student team. Our dream, winning the Cybathlon on the first attempt, came true. We are thankful that the Cybathlon gave us the opportunity to participate, which helped us to gain publicity.

D. C. Z.: And what does it mean for people with disabilities?

P.R.: It means that by joining our team, people with a disability are able to keep developing their bodies and minds, according to a training scheme. This improves mental well-being and health. Besides that, they can still be part of society and become more confident.

D. C. Z.: What are still your challenges?

P.R.: In the coming years we want to focus more on the mechanics of the bike, to see if we can make some improvements. And it would be great if this form of exercise is accessible for more people with a spinal cord injury. For that it is important that more people are aware of the possibilities of Functional Electrostimulation (FES).

You can follow team updates on Instagram (#pulse.racingnl) and Facebook/LinkedIn (#pulseracing).

James Bruton focus series #2: Barcode scanner guitar synths https://robohub.org/james-bruton-focus-series-2-barcode-scanner-guitar-synths/ Sat, 12 Dec 2020 10:09:35 +0000 https://robohub.org/james-bruton-focus-series-2-barcode-scanner-guitar-synths/

James Bruton playing his barcode synths

As on every other Saturday, I’m bringing you another cool open-source project from James Bruton. Today, how about becoming an experimental musician with your own barcode scanner synthesizer?

I introduced James Bruton in the first post of this focus series on him, where I showed you the Boston Dynamics-inspired open robot dog projects that consolidated him as one of the top makers on YouTube. As a sort of musician myself, I was drawn to the barcode synth project I’ve picked for this second post from among the countless videos he’s got on his channel.

To be more specific, the barcode synth consists of two projects. A bit more than a year ago, James showed how to build a four-neck guitar synth with the frets (the place where you put your fingers to play a note on the guitar) being barcodes instead of strings. To play this guitar, you only need a barcode reader connected to an Arduino that converts the data read from the barcodes into a number that represents a MIDI note – which is a digital representation of a musical note based on the MIDI standard. You can then plug the Arduino into a synth or your computer (if you love virtual instruments as much as I do!) to transform the MIDI output into actual sound. Extra features of this guitar included pitch bending and octave shifting buttons. You can access the open-source code of this type of guitar here, and enjoy James’ explanation (and performance) in the following video:
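To make the signal chain a bit more concrete, here is a minimal, hypothetical Arduino-style sketch of the same idea (this is not James’ actual code; his open-source repository linked above is the real reference): read the number decoded by a serial barcode scanner, fold it into the 0-127 MIDI note range, and send a note-on/note-off pair over a serial MIDI output. The use of Serial1 for the scanner, the 300 ms note length and the baud rates are illustrative assumptions only.

// Hypothetical sketch of the barcode-to-MIDI idea (not James' code).
// Assumes the scanner is wired to Serial1 (available on e.g. a Mega or Teensy)
// and that a MIDI interface or computer listens on Serial at the 31250 baud MIDI rate.

const byte MIDI_CHANNEL = 0;     // MIDI channel 1
byte currentNote = 0;            // last note switched on
unsigned long noteOffAt = 0;     // time (in ms) at which to silence it

void sendMidi(byte command, byte note, byte velocity) {
  Serial.write(command | MIDI_CHANNEL);
  Serial.write(note);
  Serial.write(velocity);
}

void setup() {
  Serial.begin(31250);   // MIDI out
  Serial1.begin(9600);   // barcode scanner in
}

void loop() {
  // Most scanners send the decoded digits as ASCII followed by a newline.
  if (Serial1.available()) {
    long code = Serial1.parseInt();
    if (code > 0) {
      currentNote = code % 128;            // fold the barcode number into 0-127
      sendMidi(0x90, currentNote, 100);    // note on, velocity 100
      noteOffAt = millis() + 300;          // hold the note for 300 ms
    }
  }
  if (noteOffAt != 0 && millis() > noteOffAt) {
    sendMidi(0x80, currentNote, 0);        // note off
    noteOffAt = 0;
  }
}

From there, a hardware synth or a software instrument on the computer turns the note-on messages into sound, and extras like the pitch bending and octave shifting buttons would simply send additional MIDI messages.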

A couple of months ago, James made an improved version of the previous guitar synth. Instead of using the number given by the barcode, for this improved synth he hacked the barcode reader to interpret the barcodes as images so that the output is the raw square wave that it sees. With the help of a Teensy microcontroller to do the digital signal processing and a Raspberry Pi to display barcode images on a screen fitted to a 3D-printed guitar, he could produce a richer range of sounds compared to the previous version. If you want to build your own barcode synth, check out the open-source files and his video (you’ll be impressed to find out what a zebra sounds like!):

Make tech, make music, and stay tuned!

#CYBATHLON2020GlobalEdition winners of the powered wheelchair race (with interview + story of pilot) https://robohub.org/cybathlon2020globaledition-winners-of-the-powered-wheelchair-race-with-interview-story-of-pilot/ Mon, 07 Dec 2020 12:00:48 +0000 https://robohub.org/cybathlon2020globaledition-winners-of-the-powered-wheelchair-race-with-interview-story-of-pilot/

Winning team: HSR Enhanced with pilot Florian Hauser

Continuing this series of CYBATHLON 2020 winners, today we feature the victory of the HSR Enhanced team from the Eastern Switzerland University of Applied Sciences (OST). In addition, we interviewed this year’s team leader, Christian Bermes.

In this race, pilots with a severe walking disability from seven teams competed against each other in a motorized wheelchair. As the organizers of CYBATHLON describe, “motorized wheelchairs can make everyday life much easier for people with a walking disability. The important thing is that they can overcome obstacles such as ramps and yet are not too large to drive under a normal table. Motorized wheelchairs that are controlled by joystick, tongue control, touchpad or other technologies are eligible for this race and are characterized by innovative approaches to overcome obstacles such as stairs”. The challenge for the pilots was to complete the following tasks:

(1) Driving up to a table until half of the thighs were covered without displacing the table to test the size and seat height of the wheelchair.

(2) Driving through furniture without displacing it to test the size of the wheelchair and precise maneuverability.

(3) Crossing uneven terrain to test the grip of the wheels, ground clearance and power.

(4) Ascending and descending stairs, and bringing the wheelchair to a standstill while descending to test the ability to climb and descend stairs in a controlled manner, and power.

(5) Driving across a tilted path with different surfaces to test the drifting and tipping stability, and power.

(6) Driving a ramp up and down, opening and closing the door in the middle using an externally powered technical support (e.g. a robotic arm) to test precise maneuverability and control of technical support in a confined space.

Powered wheelchair race tasks

This year, team HSR Enhanced with pilot Florian Hauser was unbeatable again, as in the 2016 edition. The silver medal went to team Caterwil from Russia with pilot Iurii Larin. The third finalist was team Fortississimo from Japan with pilot Hiroshi Nojima. Here’s a summary of the races of the top 4 finalists:

You can see the results from the rest of the teams in this discipline here, or watch the recorded livestreams of both days on their website.

Interview with Christian Bermes – (ex) Team leader of HSR Enhanced team

We had the pleasure of interviewing Christian Bermes, team leader of the HSR Enhanced team in both the 2016 and 2020 editions. After CYBATHLON 2020, he handed over the team lead role, as he moved from OST to become Professor for Mobile Robotics at the University of Applied Sciences of the Grisons.

Christian Bermes – (ex) Team leader of HSR Enhanced team


D. C. Z.: What does it mean for your team to have won in your CYBATHLON category?

C.B.: It is a huge confirmation that our first win in 2016 was not just a coincidence, but again the result of human-centered innovation together with our pilot Florian Hauser, meticulous engineering, proper prior planning, hard training and of course a next-level pilot performance. Needless to say that there was a certain amount of luck involved, too – and hard work puts you where luck finds you.

D. C. Z.: And what does it mean for people with disabilities?

C.B.: I find it hard to answer this in general terms. Our wheelchair as it is right now will not enter a mass market, however some of its modules could, if we found partners for industrialization. I just think it’s great that people with disabilities are the figureheads and heroes of CYBATHLON. They have prepared themselves in the most professional way, have unmatched control over their machines and are simply impressive.

D. C. Z.: What are still your challenges?

C.B.: Right now we enjoy the weightlessness of the win. Soon after, CYBATHLON will publish the new race obstacles for 2024 and I am 100% sure that there will be many technical challenges right away. Moreover, budget has to be secured, sponsors must be found, the team must be sworn in, plus many things more. The goal for 2024 is clear – win another title. I have full confidence in the new lead crew and their team, they will outperform everything we have seen from HSR enhanced until this day. And with Florian Hauser as pilot, we will see lightning speed on the race day.

The story of pilot Florian Hauser

The organizers released a series of videos telling the personal story of some of the competing pilots. The pilot of the HSR Enhanced team, Florian Hauser, “has been a tetraplegic since a motorcycle accident in 2014. However, this does not prevent him from riding fast. Not on a bike anymore, but in his wheelchair,” as the organizers describe. Apart from being the winner of his discipline in this 2020 edition, Florian also won CYBATHLON 2016 and the CYBATHLON wheelchair Series in Japan.

#CYBATHLON2020GlobalEdition winners of the powered exoskeleton race (with interview) https://robohub.org/cybathlon2020globaledition-winners-interview-powered-exoskeleton-race/ Fri, 04 Dec 2020 17:00:23 +0000 https://robohub.org/cybathlon2020globaledition-winners-interview-powered-exoskeleton-race/

Winning team: Angel Robotics with pilot Byeong-Uk Kim

The latest edition of CYBATHLON took place on 13-14 November 2020. This competition, created by ETH Zurich and run as a non-profit project, aims to advance the research and development of assistive technology by involving developers, people with disabilities, and the general public. We had the chance to interview the winning team of the powered exoskeleton race, Angel Robotics from South Korea.

In this race, pilots with complete thoracic or lumbar spinal cord injury from nine teams competed using an exoskeleton. This wearable, powered support enables them to walk and master other everyday tasks. Indeed, the motivation behind this race is that “the use of exoskeletons is still rare, they are currently mainly used for physiotherapy in hospitals and rehabilitation centers. Exoskeletons dramatically increase the mobility of people with paraplegia, which consequently improves their overall physical and psychological health and therefore might represent a welcome addition to a wheelchair”, as the organizers of CYBATHLON state. This race involved:

(1) Sitting down & standing up from a sofa, and stacking cups while standing next to a table to test the range of motion and strength in the knee and hip joints, and stability.

(2) Slaloming around furniture without displacing it to test precision of steps and agility.

(3) Crossing uneven terrain to test precision of steps and adaptation of step lengths and widths.

(4) Climbing and descending stairs to test range of motion and strength in the knee and hip joints, and step precision.

(5) Walking across a tilted path to test the lateral range of motion in hip and foot joints, and stability.

(6) Climbing a ramp, opening and closing the door in the middle of the ramp, and descending the ramp to test the range of motion in foot, knee and hip joints, stability and maneuvering in confined spaces.

Powered exoskeleton race tasks. Credit: CYBATHLON

The top three teams were the company Angel Robotics (1) from South Korea with pilot Byeong-Uk Kim, TWIICE from EPFL research group REHAssist with pilot Silke Pan, and Angel Robotics (2) with pilot Lee Joo-Hyun. Remarkably, the three of them achieved the highest score – 100 points. With this impressive result, the podium was decided based on finishing time. If you can’t wait to watch how tight the races were, you can enjoy them in the summary video below.

You can see the results from the rest of the teams in this discipline here, or watch the recorded livestreams of both days on their website.

Interview with Kyoungchul Kong – Team leader of Angel Robotics team

We had the pleasure of interviewing Kyoungchul Kong, team leader and CEO of Angel Robotics (1&2). He is also an Associate Professor at KAIST (Korea Advanced Institute of Science and Technology).

Kyoungchul Kong – Team leader of Angel Robotics team


D. C. Z.: What does it mean for your team to have won in your CYBATHLON category?

K.K.: In WalkON Suit, the powered exoskeleton of Angel Robotics, there have been various dramatic technical advances. Since the first Cybathlon in 2016, the walking speed has become as fast as people without disabilities. The most important feature of WalkON Suit is its balance; as the center of mass is placed on the area of feet while standing straight, the wearer can stand without much effort for a long time. These superior functionalities of WalkON Suit could be proved by winning the Gold and Bronze medals at Cybathlon 2020.

D. C. Z.: And what does it mean for people with disabilities?

K.K.: While winning the Gold medal is glorious, winning two medals is especially meaningful. The physical conditions of the two pilots (i.e., the Gold medalist and the Bronze medalist) of Team Angel Robotics were extremely different. One was a male with very strong upper body, while the other was a female with much less muscles. Such different people could be successfully assisted by WalkON Suit, which means that the powered exoskeleton is not a technology optimized for a single user, but able to be utilized by many people with different body conditions.

D. C. Z.: What are still your challenges?

K.K.: In order to bring the WalkON Suit into the real life of people who need this technology, it has to be much improved in terms of wearability, price, and weight. The user should be able to wear the robot without anyone else’s help. It should be light enough to handle while sitting on a wheelchair. The price is another critical issue considering practical conditions. With these restrictions, the functionalities and performance of the robot must not be deteriorated. These are the challenges we are much trying to get over.

James Bruton focus series #1: openDog, Mini Robot Dog & openDog V2 https://robohub.org/james-bruton-focus-series-1-opendog-mini-robot-dog-opendog-v2/ Sat, 28 Nov 2020 10:35:28 +0000 https://robohub.org/james-bruton-focus-series-1-opendog-mini-robot-dog-opendog-v2/

James Bruton with openDog V2

What if you could ride your own giant LEGO electric skateboard, make a synthesizer that you can play with a barcode reader, or build a strong robot dog based on the Boston Dynamics dog robot? Today sees the start of a new series of videos that focuses on James Bruton’s open source robot projects.

James Bruton is a former toy designer, current YouTube maker and general robotics, electrical and mechanical engineer. He has a reputation for building robot dogs and Iron Man-inspired cosplays. He uses 3D printing, CNC and sometimes welding to build all sorts of robotics-related creations. Highlights include building Mark Rober’s auto-strike bowling ball and working with Colin Furze to build a life-sized Iron Man Hulkbuster for an official eBay and Marvel promo. He also built a life-sized Bumblebee Transformer for Paramount to promote the release of the Bumblebee movie.

I discovered James’ impressive work in this episode of Ricardo Tellez’s ROS Developers Podcast on The Construct, which I highly recommend. Whether you enjoy getting your hands dirty with CAD files, 3D-printed parts, arduinos, motors and code, or you like learning about the full research & development (R&D) process of a robotics project, you will have loads of hours of fun following this series.

Today I bring you one of James’ coolest and most successful open source projects: openDog and its different versions. In James’ own words, “if you want your very own four-legged friend to play fetch with and go on long walks then this is the perfect project for you.” You can access all the CAD files and code here. And without further ado, here’s the full YouTube playlist of the first version of openDog:

James also released another series of videos developing an affordable version of openDog: Mini Robot Dog. This robot is half the size of openDog, and its mechanical components and 3D-printed parts are much cheaper than the former robot’s, without sacrificing compliance. You can see the full development in the playlist below, and access the open source files of version 1 and version 2.

Based on the insight gained through the R&D of openDog, Mini Robot Dog and these test dogs, James built the ultimate robot dog: openDog V2. For this improved version of openDog, he used brushless motors which can be back-driven to increase compliance. And by adding an Inertial Measurement Unit (IMU), he improved the balance of the robot. CAD files and code are available here. If you want to find out whether the robot is able to walk, check out the openDog V2 video series:

If you like James Bruton’s projects, you can check out his website for more resources, updates and support options. See you in the next post of our focus series!

#IROS2020 BiR-IROS: Black in Robotics https://robohub.org/iros2020-bir-iros-black-in-robotics/ Thu, 19 Nov 2020 12:27:08 +0000 https://robohub.org/iros2020-bir-iros-black-in-robotics/

The 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) has teamed up with Black in Robotics (website, Twitter) to release a new special series named BiR-IROS: Black in Robotics with the support of Toyota Research Institute. This series consists of three short but powerful videos of roboticists giving personal examples of why diversity matters in robotics, showcasing their research and explaining what got them into robotics.

BiR-IROS: Black in Robotics is available for free through the OnDemand platform until 25 November (located under Technical Talks or at this link). Here’s a list of all the speakers and organisations who took part in the videos:

  • Ariel Anders – Roboticist at Robust.AI
  • Allison Okamura – Professor of Mechanical Engineering at Stanford University
  • Alivia Blount – Data Scientist
  • Anthony Jules – Co-founder and COO at Robust.AI
  • Andra Keay – Robotics Industry Futurist, Managing Director of Silicon Valley Robotics and Core Team Member of Robohub
  • Carlotta A. Berry – Professor of Electrical and Computer Engineering at Rose-Hulman Institute of Technology
  • Donna Auguste – Entrepreneur and Data Scientist
  • Clinton Enwerem – Robotics Trainee from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Quentin Sanders – Postdoctoral Research Fellow at North Carolina State University
  • George Okoroafor – Robotics Research Engineer from the Robotics & Artificial Intelligence Nigeria (RAIN) team
  • Tatiana Jean-Louis – Amazon & Robotics Geek
  • Patrick Musau – Graduate Research Assistant at Vanderbilt University
  • Melanie Moses – Professor of Computer Science at the University of New Mexico
#IROS2020 Original Series: Real Roboticist https://robohub.org/iros2020-original-series-real-roboticist/ Mon, 16 Nov 2020 15:50:37 +0000 https://robohub.org/iros2020-original-series-real-roboticist/
Are you curious about the people behind the robots? The 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) features a new Original Series called Real Roboticist hosted by Sabine Hauert, President of Robohub and faculty at the University of Bristol.

The show looks at the people at the forefront of robotics research. How did they become roboticists? What made them proud and what challenges did they face? What advice would they give to their younger self? What does a typical day look like? And where do they see the future of robotics? If you want to find out, watch the series for free on the IROS On-Demand platform until 25 November (located under Technical Talks or at this link).

The series features the following roboticists:

  • Michelle Johnson (Associate Professor of Physical Medicine and Rehabilitation at the University of Pennsylvania): Robots that Matter
  • Davide Scaramuzza (Professor and Director of the Robotics and Perception Group at the University of Zürich): Drones & Magic
  • Dennis Hong (Professor of Mechanical and Aerospace Engineering at the University of California Los Angeles): Making People Happy
  • Ruzena Bajcsy (Professor Emerita of Electrical Engineering and Computer Sciences at the University of California Berkeley): Foundations
  • Peter Corke (Distinguished Professor of Robotic Vision at the Queensland University of Technology): Learning
  • Radhika Nagpal (Fred Kavli Professor of Computer Science at Harvard University): Enjoying the Ride
CYBATHLON 2020 Global Edition: A competition to break down barriers between the public, people with disabilities and technology developers https://robohub.org/cybathlon-2020-global-edition-a-competition-to-break-down-barriers-between-the-public-people-with-disabilities-and-technology-developers/ Wed, 11 Nov 2020 17:49:27 +0000 https://robohub.org/cybathlon-2020-global-edition-a-competition-to-break-down-barriers-between-the-public-people-with-disabilities-and-technology-developers/

Involving potential users of a particular technology in the research and development (R&D) process is a very powerful way to maximise success when such technology is deployed in the real world. In addition, this can speed up the R&D process because the researchers’ perspective on the problem is combined with that of end-users. The non-profit project CYBATHLON was created by ETH Zurich as a way to advance R&D of assistive technology through competitions that involve developers, people with disabilities, and the general public.

Competitor getting up from sofa

Over 50 teams from all over the world will compete against each other in the Cybathlon 2020 Global Edition. (Credit: Alessandro Della Bella / ETH Zürich)

On 13 and 14 November, the CYBATHLON 2020 edition is taking place. The event will be live-streamed, and it is completely open to the public. You can access it through their website. Here’s the full programme for the two days:

Friday, 13 November 2020

4pm CET (3pm UTC): Brain-Computer Interface Race

The power of thoughts

  • Welcome to CYBATHLON 2020 Global Edition!
  • Kick-off by the head of competition, Lukas Jaeger
  • Races of all teams
  • Team stories and insights
  • Analysis by the BCI expert Nicole Wenderoth of ETH Zurich
  • Special guest: Joël Mesot, President of ETH Zurich
  • The top 4: Who will win?

5pm CET (4pm UTC): Powered Arm Prosthesis Race

Grasping and feeling

  • Races of all teams
  • Team stories and insights
  • Guest: Roger Gassert, researcher on assistive technologies at ETH Zurich
  • Analysis by the arm prosthesis expert Michel Fornasier
  • The top 4: Who will win?

6pm CET (5pm UTC): Functional Electrical Stimulation Bike Race

Power to the muscles

  • Races of all teams
  • Team stories and insights
  • Guest: Robert Riener, initiator of CYBATHLON
  • Analysis by Claudio Perret, expert in functional electrical stimulation
  • The top 4: Who will win?

7pm CET (6pm UTC): Inside CYBATHLON – Stories, recap and outlook

Insights of the protagonists and organisers – the journey of CYBATHLON

  • Robert Riener, initiator of CYBATHLON
  • Roger Gassert, researcher on assistive technologies at ETH Zurich
  • Florian Hauser, powered wheelchair pilot of team HSR enhanced
  • Roland Sigrist, director of CYBATHLON

The medical checks

  • Who can compete? Insights of the medical examiners Zina-Mary Manjaly and Jirí Dvořák

Focus: Inclusion

Recap and outlook

Saturday, 14 November 2020

1pm CET (12pm UTC): Powered Wheelchair Race

Overcoming stairs and ramps

  • Races of all teams
  • Insights of the head of competition, Lukas Jaeger
  • Team stories and insights
  • Guest: Roger Gassert, researcher on assistive technologies at ETH Zurich
  • Analysis by scientist Sue Bertschy
  • The top 4: Who will win?

2pm CET (1pm UTC): Powered Leg Prosthesis Race

Watch your step

  • Races of all teams
  • Team stories and insights
  • Guest: Robert Riener, initiator of CYBATHLON
  • Analysis by expert Lukas Christen, parathlete and coach
  • The top 4: Who will win?

3pm CET (2pm UTC): Powered Exoskeleton Race

Walking in robotic suits

  • Races of all teams
  • Guest: Roger Gassert, researcher on assistive technologies at ETH Zurich
  • Analysis by the exoskeleton developers Jaime Duarte and Kai Schmidt
  • The top 4: Who will win?

4pm CET (3pm UTC): Inside CYBATHLON – Stories, recap and outlook

Insights of the protagonists and organisers – the future of CYBATHLON and social inclusion

  • Silke Pan, Powered Exoskeleton Pilot of team TWIICE
  • Robert Riener, Initiator of CYBATHLON

New systems

  • Maria Fossati, powered arm prosthesis pilot of team SoftHand Pro
  • Max Erick Busse-Grawitz, expert on mechatronics
  • Roger Gassert, researcher on assistive technologies at ETH Zurich

Recap and outlook – the CYBATHLON @school and the next CYBATHLON

  • Special guest: Sarah Springman, rector of ETH Zurich
  • Robert Riener, initiator of CYBATHLON
  • Roland Sigrist, director of CYBATHLON

The next CYBATHLON

Online events to look out for on Ada Lovelace Day 2020 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2020/ Mon, 12 Oct 2020 14:57:51 +0000 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2020/

Tomorrow the world celebrates Ada Lovelace Day to honor the achievements of women in science, technology, engineering and maths. We’ve specially chosen a couple of online events featuring amazing women in robotics and technology. You can enjoy their talks in the comfort of your own home.

Ada Lovelace Day 2020: The Near Future (panel discussion)

Organized by Ada Lovelace Day, this panel session will be joined by Dr Beth Singler (Junior Research Fellow in Artificial Intelligence at the University of Cambridge), Prof Praminda Caleb-Solly (Professor of Assistive Robotics and Intelligent Health Technologies at the University of the West of England), Dr Anat Caspi (director of the Taskar Center for Accessible Technology, University of Washington) and Dr Chanuki Seresinhe (visiting data science researcher at The Alan Turing Institute). The event will take place at 4pm (UTC). You can register here.


Ada Lovelace Day 2020 Celebration of Women in Robotics

Hosted by UC CITRIS CPAR and Silicon Valley Robotics, this event will be joined by Dr Ayanna Howard (Chair of Interactive Computing at Georgia Tech), Dr Carlotta Berry (Professor of Electrical and Computer Engineering at Rose-Hulman Institute of Technology), Angelique Taylor (PhD Candidate at the Healthcare Robotics Lab in UCSD and Facebook Research Intern), Dr Ariel Anders (First Technical Hire at Robust.ai) and Jasmine Lawrence (Product Manager at X, the Moonshot Factory). This event will take place at 1am (UTC). You can register here.


Tomorrow we will also publish our 2020 list of women in robotics you need to know about. Stay tuned!
