#RoboCup2024 – daily digest: 21 July
https://robohub.org/robocup2024-daily-digest-21-july/ | Sun, 21 Jul 2024

A break in play during a Small Size League match.

Today, 21 July, saw the competitions draw to a close in a thrilling finale. In the third and final of our round-up articles, we provide a flavour of the action from this last day. If you missed them, you can find our first two digests here: 19 July | 20 July.

My first port of call this morning was the Standard Platform League, where Dr Timothy Wiley and Tom Ellis from Team RedbackBots, RMIT University, Melbourne, Australia, demonstrated an exciting advancement that is unique to their team. They have developed an augmented reality (AR) system with the aim of enhancing the understanding and explainability of the on-field action.

The RedbackBots travelling team for 2024 (L-to-R: Murray Owens, Sam Griffiths, Tom Ellis, Dr Timothy Wiley, Mark Field, Jasper Avice Demay). Photo credit: Dr Timothy Wiley.

Timothy, the academic leader of the team, explained: “What our students proposed at the end of last year’s competition, to make a contribution to the league, was to develop an augmented reality (AR) visualization of what the league calls the team communication monitor. This is a piece of software that gets displayed on the TV screens to the audience and the referee, and it shows you where the robots think they are, information about the game, and where the ball is. We set out to make an AR system of this because we think it’s so much better to view it overlaid on the field. What the AR lets us do is project all of this information live on the field as the robots are moving.”

The team has been demonstrating the system to the league at the event, with very positive feedback. In fact, one of the teams found an error in their software during a game whilst trying out the AR system. Tom said that they’ve received a lot of ideas and suggestions from the other teams for further developments. This is one of the first (if not the first) AR systems to be trialled across the competition, and the first time one has been used in the Standard Platform League. I was lucky enough to get a demo from Tom and it definitely added a new level to the viewing experience. It will be very interesting to see how the system evolves.

Mark Field setting up the MetaQuest3 to use the augmented reality system. Photo credit: Dr Timothy Wiley.

From the main soccer area I headed to the RoboCupJunior zone, where Rui Baptista, an Executive Committee member, gave me a tour of the arenas and introduced me to some of the teams that have been using machine learning models to assist their robots. RoboCupJunior is a competition for school children, and is split into three leagues: Soccer, Rescue and OnStage.

I first caught up with four teams from the Rescue league. Robots identify “victims” within re-created disaster scenarios, varying in complexity from line-following on a flat surface to negotiating paths through obstacles on uneven terrain. There are three different strands to the league: 1) Rescue Line, where robots follow a black line which leads them to a victim, 2) Rescue Maze, where robots need to investigate a maze and identify victims, 3) Rescue Simulation, which is a simulated version of the maze competition.

Team Skollska Knijgia, taking part in the Rescue Line, used a YOLO v8 neural network to detect victims in the evacuation zone. They trained the network themselves with about 5000 images. Also competing in the Rescue Line event were Team Overengeniering2. They also used YOLO v8 neural networks, in this case for two elements of their system. They used the first model to detect victims in the evacuation zone and to detect the walls. Their second model is utilized during line following, and allows the robot to detect when the black line (used for the majority of the task) changes to a silver line, which indicates the entrance of the evacuation zone.

Left: Team Skollska Knijgia. Right: Team Overengeniering2.
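Both teams’ detectors follow the same broad recipe: start from a pretrained YOLO checkpoint, fine-tune it on a few thousand labelled images, then run it on camera frames. Here is a minimal sketch of that workflow using the ultralytics Python package; the file names and dataset layout are hypothetical placeholders, not the teams’ actual code.

```python
# Sketch of fine-tuning an off-the-shelf YOLOv8 detector on a custom
# dataset of labelled images (e.g. "victims" in an evacuation zone).
# File names here are hypothetical placeholders.
from ultralytics import YOLO

# Start from a small pretrained checkpoint rather than training from scratch.
model = YOLO("yolov8n.pt")

# victims.yaml would point at the team's train/val image folders and class names.
model.train(data="victims.yaml", epochs=50, imgsz=640)

# At match time, run the fine-tuned model on a camera frame.
results = model("frame.jpg")
for box in results[0].boxes:
    # class index, confidence score, and bounding-box corners
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```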

Team Tanorobo! were taking part in the maze competition. They also used a machine learning model for victim detection, training on 3000 photos for each type of victim (these are denoted by different letters in the maze). They also took photos of walls and obstacles, to avoid misclassification. Team New Aje were taking part in the simulation contest. They used a graphical user interface to train their machine learning model, and to debug their navigation algorithms. They have three different algorithms for navigation, with varying computational cost, which they can switch between depending on where they are in the maze and how complex that part of it is.

Left: Team Tanorobo! Right: Team New Aje.

I met two of the teams who had recently presented in the OnStage event. Team Medic’s performance was based on a medical scenario and included two machine learning elements: the first was voice recognition, for communication with the “patient” robots, and the second was image recognition to classify x-rays. Team Jam Session’s robot reads American sign language symbols and uses them to play a piano. They used the MediaPipe detection algorithm to find different points on the hand, and random forest classifiers to determine which symbol was being displayed.

Left: Team Medic Bot. Right: Team Jam Session.
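A rough sketch of how such a pipeline fits together, assuming the standard MediaPipe Hands API and a scikit-learn random forest; the data files and training set are hypothetical placeholders, not the team’s actual code.

```python
# Sketch of the hand-sign pipeline: MediaPipe extracts 21 hand landmarks,
# and a random forest maps the flattened landmark coordinates to a letter.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row of 21 (x, y, z) landmarks per example.
X_train = np.load("landmarks.npy")   # shape (n_samples, 63)
y_train = np.load("labels.npy")      # letters, e.g. "A", "B", ...
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Detect a hand in a single image and classify the sign it shows.
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
image = cv2.cvtColor(cv2.imread("sign.jpg"), cv2.COLOR_BGR2RGB)
result = hands.process(image)
if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark
    features = np.array([[p.x, p.y, p.z] for p in lm]).flatten().reshape(1, -1)
    print("Predicted sign:", clf.predict(features)[0])
```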

Next stop was the humanoid league where the final match was in progress. The arena was packed to the rafters with crowds eager to see the action.
Standing room only to see the Adult Size Humanoids.

The finals continued with the Middle Size League, with the home team Tech United Eindhoven beating BigHeroX by a convincing 6-1 scoreline. You can watch the livestream of the final day’s action here.

The grand finale featured the winners of the Middle Size League (Tech United Eindhoven) against five RoboCup trustees. The humans ran out 5-2 winners, their superior passing and movement too much for Tech United.

#RoboCup2024 – daily digest: 20 July
https://robohub.org/robocup2024-daily-digest-20-july/ | Sat, 20 Jul 2024

The Standard Platform Soccer League in action.

This is the second of our daily digests from RoboCup2024 in Eindhoven, The Netherlands. If you missed the first digest, which gives some background to RoboCup, you can find it here.

Competitions continued across all the leagues today, with participants vying for a place in Sunday’s finals.

The RoboCup@Work league focusses on robots in work-related scenarios, utilizing ideas and concepts from other RoboCup competitions to tackle open research challenges in industrial and service robotics.

I arrived at the arena in time to catch the advanced navigation test. Robots have to autonomously navigate, picking up and placing objects at different work stations. In this advanced test, caution tape is added to the arena floor, which the robots should avoid travelling over. There is also a complex placing element where teams have to put an object that they’ve collected into a slot – get the orientation or placement of the object slightly wrong and it won’t fall into the slot.

The RoboCup@Work arena just before competition start.

Eight teams are taking part in the league this year. Executive Committee member Asad Norouzi said that there are plans to introduce a sub-league which would provide an entry point for new teams or juniors to get into the league proper.

I caught up with Harrison Burns, Mitchell Torok and Jasper Arnold from Team MiRobot. They are based at the University of New South Wales and are attending RoboCup for the first time.

Team MiRobot from UNSW.

The team actually only started six months ago, so final preparations have been a bit stressful. However, the experience has been great fun, and the competition has gone well so far. Like most teams, they’ve had to make many refinements as the competition has progressed, leading to some late nights.

One notable feature of the team’s robot is the bespoke, in-house-designed grasping mechanism on the end of the arm. The team note that “it has good flexible jaws, so when it grabs round objects it actually pulls the object directly into it. Because it uses a linear motion, compared to a lot of other rotating jaws, it has a lot better reliability for picking up objects”.

Here is some footage from the task, featuring Team bi-t-bots and Team Singapore.

In the Middle Size Soccer league (MSL), teams of five fully autonomous robots play with a regular size FIFA ball. Teams are free to design their own hardware, but all sensors have to be on-board, and robots must stay within a maximum size and a weight limit of 40 kg. The research focus is on mechatronics design, control and multi-agent cooperation at plan and perception levels. Nine teams are competing this year.

I spoke to António Ribeiro, who is a member of the technical committee and part of Team LAR@MSL from the University of Minho, Portugal. The team started in 1998, but António and most of his colleagues on the current team have only been involved in the MSL since September 2022. The robots have evolved as the competition has progressed, and further improvements are in progress. Refinements so far have included communication, the detection system, and the control system. They are pleased with the improvements from the previous RoboCup. “Last year we had a lot of hardware issues, but this year the hardware seems pretty stable. We also changed our coding architecture and it is now much easier and faster for us to develop code because we can all work on the code at the same time on different modules”.

António cited versatility and cost-effective solutions as strengths of the team. “Our robot is actually very cheap compared to other teams. We use a lot of old chassis, and our solutions always go to the lowest cost possible. Some teams have robots costing many thousands of dollars, but, for example, our vision system is around $70-80. It works pretty well – we need to improve the way we handle it, but it seems stable”.

Team LAR@MSL

The RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. A set of benchmark tests is used to evaluate the robots’ abilities and performance in a realistic non-standardized home environment setting. These tests include helping to prepare breakfast, clearing the table, and storing groceries.

I arrived in time to watch the “stickler for the rules” challenge, where robots have to navigate different rooms and make sure that the people inside (“guests” at a party) are sticking to four rules: 1) there is one forbidden room – if a guest is in there, the robot must alert them and ask them to follow it into another room, 2) everyone must have a drink in their hand – if not, the robot directs them to a shelf with drinks, 3) no shoes to be worn in the house, 4) there should be no rubbish left on the floor.

After watching an attempt from the LAR@Home robot, Tiago from the team told me a bit about the robot. “The goal is to develop a robot capable of multiple general-purpose tasks in home and healthcare environments.” With the exception of the robotic arm, all of the hardware was built by the team. The robot has two RGBD cameras, two LIDARs, a tray (where the robot can store items that it needs to carry), and two emergency stop buttons that deactivate all moving parts. Four omnidirectional wheels allow the robot to move in any direction at any time. The wheels have independent suspension systems, which guarantees that they can all be on the ground at all times, even if there are bumps and cables on the venue floor. There is a tablet that acts as a visual interface, and a microphone and speakers to enable communication between humans and the robot, which is all done via speaking and listening.

Tiago told me that the team have talked to a lot of healthcare practitioners to find out the main problems faced by elderly people, and this inspired one of their robot’s features. “They said that the two main injury sources are from when people are trying to sit down or stand up, and when they are trying to pick something up from the floor. We developed a torso that can pick objects from the floor one metre away from the robot”.

The LAR@Home team.


You can keep up with the latest news direct from RoboCup here.

Click here to see all of our content pertaining to RoboCup.

#RoboCup2024 – daily digest: 19 July
https://robohub.org/robocup2024-daily-digest-19-july/ | Fri, 19 Jul 2024

The main soccer arena.

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. As part of this initiative, a series of competitions and events are held throughout the year. The main showcase event is an international affair, with teams travelling from far and wide to put their machines through their paces.

This year, RoboCup is being held in three arenas in the Genneper Parken, Eindhoven, The Netherlands. The organisers are expecting over 2,000 participants, from 45 different countries, with around 300 teams signed up to take part in the various competitions.

Although RoboCup started out as a football (or soccer) playing competition, other leagues have since been introduced, focussing on robots in industrial, rescue, and home settings. There is even a dedicated league for young roboticists – RoboCupJunior – where participants can take part in either football, rescue, or artistic events.

I am lucky enough to be able to attend this year, and, for the next three days, I’ll be bringing you a daily digest of some of the exciting happenings from Eindhoven.

Today, 19 July, sees the competition in full swing. The main soccer arena, boasting multiple pitches, hosts a number of the different leagues which form RoboCupSoccer.

Some of the pitches in the main soccer arena.

My first port of call was the Standard Platform League, where the round 5 champions cup match between SPQR Team and rUNSWift was taking place. SPQR ran out winners and advanced to round 6. In this league, all teams compete with identical robots (currently the humanoid NAO by Aldebaran). The robots operate fully autonomously, meaning that there is no external control from either humans or computers.

Standard Platform League: round 5 champions cup match between SPQR Team and rUNSWift.

The Humanoid AdultSize league is arguably the most challenging of the leagues, with many constraints placed on the robots to make them as human-like as possible. For example, they must have roughly human-like body proportions, they need to walk on two legs, and they are only allowed to use human-like sensors (up to two cameras to sense the environment). In this AdultSize competition, two robots from each team compete, and the team members walk behind the robots to catch them in case of a fall. Such a mishap could prove costly in terms of potential hardware damage.

Action from the Humanoid AdultSize League.

The RoboCup Rescue Robot League sees teams developing robotic systems with the goal of enabling emergency responders to perform extremely hazardous tasks from safer stand-off distances. During the competition, teams compete in a round-robin, putting their robots through their paces on a number of different challenges. The leading teams following this initial phase progress to the finals on Sunday. The tasks include navigating in complex environments, opening doors, and sensing. Teams may run the machines completely autonomously, or with some assistive control. More points are awarded for completely autonomous operation.

RoboCup Rescue arena from above.

You can keep up with more RoboCup2024 news here.

AI holidays 2023
https://robohub.org/ai-holidays-2023/ | Sun, 31 Dec 2023

Thanks to those that sent and suggested AI and robotics-themed holiday videos, images, and stories. Here’s a sample to get you into the spirit this season.


Josh Miller used ChatGPT to create the 12 days of Christmas. Prompt: “I want to make 12 images in an early 1980s era to depict the twelve days of Christmas”.

See the pictures in higher definition here.


“Santa’s Secret: Robots Saved Christmas! | CPS XMAS video 2023” from Chair of Cyber-Physical Systems.


“AI rewriting classic Christmas stories” from GingerPale.


Using PaLM2 to generate haikus.

Prompt: “Write me a haiku about an alternative xmas”.
No gifts, no grand feast,
Just laughter and joy, warmth spread,
Kindness fills the air.

Prompt: “Write a haiku about the winter holiday season”.
Snow blankets the land,
Hearts aglow with love’s warmth,
Joyous cheer surrounds.


“Put a Bow on It | Happy Holidays” from Boston Dynamics.


“Christmas Magic: Animated Backgrounds for an Enchanting Atmosphere” from AI-ArtificialIntelligence.


“XMAS – Episode 3: AI – Christmas music for the HAKOM Christmas party” from HAKOM Time Series.


“Christmas tree in python notebook with Turtle” from Learn Upwards.


“FZI Living Lab Christmas Robotics 2023 – Robotic Reindeer Rescue” from FZI channel.


A letter to Santa generated by PaLM2. Prompt: “Write me a letter to Santa Claus. I don’t want any presents, just inspiration in my machine learning research for 2024.”

Dear Santa Claus,

As the year comes to a close, I find myself reflecting on the remarkable advancements in machine learning that have transformed our world. From self-driving cars to personalized medicine, these innovations have opened up new possibilities and enriched our lives in ways we could have only imagined a few decades ago.

I am writing to you today not to ask for material gifts, but rather for something more precious: inspiration to continue exploring the frontiers of machine learning in the year to come. As a researcher in this field, I am eager to contribute to the development of algorithms and techniques that can further enhance our understanding of the world around us and create solutions to pressing global challenges.

I am particularly fascinated by the potential of machine learning to address the climate crisis. By harnessing the power of data and computation, we can develop more efficient renewable energy sources, optimize transportation systems, and mitigate the impact of extreme weather events. Additionally, I believe that machine learning can play a crucial role in improving education, healthcare, and environmental sustainability.

I know that you have a special ability to see the world in a unique and insightful way. I hope that you can share some of your wisdom with me and guide me towards promising research directions. I am eager to embark on new challenges and make a positive impact on the world through machine learning.

Thank you for considering my request, Santa Claus. I look forward to your guidance and inspiration as I pursue my research endeavors in the year ahead.

With warmest regards,

[Your Name]

AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design
https://robohub.org/aaai-fall-symposium-patricia-alves-oliveira-on-human-robot-interaction-design/ | Tue, 31 Oct 2023

An illustration containing electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licensed under CC-BY 4.0.

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium, was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia’s dream was to become a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc, before heading to industry as a designer in the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object) that she designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of trying to help children explore their creativity during play. A machine learning algorithm learns the pattern of play that the child has and adapts the robot behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer where she always has in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled with that, the process is governed by core design tenets, such as customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman
https://robohub.org/code-to-joy-why-everyone-should-learn-a-little-programming-interview-with-michael-littman/ | Thu, 26 Oct 2023

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

Could you start by telling us a bit about the book, and who the intended audience is?

The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have. I think it can be very intimidating for a lot of people, but I don’t think it needs to be. I think that the foundation is there for everybody and it’s just a matter of tapping into that and building on top of it. What I’m hoping, and what I’m seeing happening, is that machine learning and AI is helping to meet people part way. The machines are getting better at listening as we try to get better at telling them what to do.

What made you decide to write the book, what was the inspiration behind it?

I’ve taught large introductory computer science classes and I feel like there’s an important message in there about how a deeper knowledge of computing can be very empowering, and I wanted to bring that to a larger audience.

Could you talk a bit about the structure of the book?

The meat of the book talks about the fundamental components that make up programs, or, in other words, that make up the way that we tell computers what to do. Each chapter covers a different one of those topics – loops, variables, conditionals, for example. Within each chapter I talk about the ways in which this concept is already familiar to people, the ways that it shows up in regular life. I point to existing pieces of software or websites where you can make use of that one particular concept to tell computers what to do. Each chapter ends with an introduction to some concepts from machine learning that can help create that particular programming construct. For example, in the chapter on conditionals, I talk about the ways that we use the word “if” in regular life all the time. Weddings, for example, are very conditionally structured, with statements like “if anyone has anything to say, speak now or forever hold your peace”. That’s kind of an “if-then” statement. In terms of tools to play with, I talk about interactive fiction. Partway between video games and novels is this notion that you can make a story that adapts itself while it’s being read. What makes that interesting is this notion of conditionals – the reader can make a choice and that will cause a branch. There are really wonderful tools for being able to play with this idea online, so you don’t have to be a full-fledged programmer to make use of conditionals. The machine learning concept introduced there is decision trees, which is an older form of machine learning where you give a system a bunch of examples and then it outputs a little flowchart for decision making.
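To make the flowchart idea concrete, here is a toy illustration of the technique (our sketch, not an example from the book): scikit-learn learns a tree from a handful of labelled examples and prints it back as if-then rules.

```python
# A decision tree learns if-then rules from labelled examples and can
# print them back as a small flowchart. The examples are a made-up toy:
# deciding whether to carry an umbrella from two yes/no observations.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1, 0], [1, 1], [0, 0], [0, 1]]  # (cloudy, windy) as 0/1 flags
y = ["umbrella", "umbrella", "no umbrella", "no umbrella"]

tree = DecisionTreeClassifier().fit(X, y)

# export_text prints the learned flowchart, e.g. branching first on "cloudy".
print(export_text(tree, feature_names=["cloudy", "windy"]))
```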

Do you touch on generative AI in the book?

The book was already in production by the time ChatGPT came out, but I was ahead of the curve, and I did have a section specifically about GPT-3 (pre-ChatGPT) which talks about what it is, how machine learning creates it, and how it itself can be helpful in making programs. So, you see it from both directions. You get the notion that this tool actually helps people tell machines what to do, and also the way that humanity created this tool in the first place using machine learning.

Did you learn anything while you were writing the book that was particularly interesting or surprising?

Researching the examples for each chapter caused me to dig into a whole bunch of topics. This notion of interactive fiction, and that there’s tools for creating interactive fiction, I found pretty interesting. When researching another chapter, I found an example from a Jewish prayer book that was just so shocking to me. So, Jewish prayer books (and I don’t know if this is true in other belief systems as well, but I’m mostly familiar with Judaism), contain things you’re supposed to read, but they have little conditional markings on them sometimes. For example, one might say “don’t read this if it’s a Saturday”, or “don’t read this if it’s a full moon”, or “don’t read if it’s a full moon on a Saturday”. I found one passage that actually had 14 different conditions that you had to check to decide whether or not it was appropriate to read this particular passage. That was surprising to me – I had no idea that people were expected to do so much complex computation during a worship activity.

Why is it important that everybody learns a little programming?

It’s really important to keep in mind the idea that at the end of the day what AI is doing is making it easier for us to tell machines what to do, and we should share that increased capability with a broad population. It shouldn’t just be the machine learning engineers who get to tell computers what to do more easily. We should find ways of making this easier for everybody.

Because computers are here to help, but it’s a two-way street. We need to be willing to learn to express what we want in a way that can be carried out accurately and automatically. If we don’t make that effort, then other parties, companies often, will step in and do it for us. At that point, the machines are working to serve someone else’s interest instead of our own. I think it’s become absolutely essential that we restore a healthy relationship with these machines before we lose any more of our autonomy.

Any final thoughts or takeaways that we should bear in mind?

I think there’s a message here for computer science researchers, as well. When we tell other people what to do, we tend to combine a description or a rule, something that’s sort of program-like, with examples, something that’s more data-like. We just intermingle them when we talk to each other. At one point when I was writing the book, I had a dishwasher that was acting up and I wanted to understand why. I read through its manual, and I was struck by how often it was the case that in telling people what to do with the dishwasher, the authors would consistently mix together a high-level description of what they are telling you to do with some particular, vivid examples: a rule for what to load into the top rack, and a list of items that fit that rule. That seems to be the way that people want to both convey and receive information. What’s crazy to me is that we don’t program computers that way. We either use something that’s strictly programming, all rules, no examples, or we use machine learning, where it’s all examples, no rules. I think the reason that people communicate this way with each other is because those two different mechanisms have complementary strengths and weaknesses and when you combine the two together, you maximize the chance of being accurately understood. And that’s the goal when we’re telling machines what to do. I want the AI community to be thinking about how we can combine what we’ve learned about machine learning with something more programming-like to make a much more powerful way of telling machines what to do. I don’t think this is a solved problem yet, and that’s something that I really hope that people in the community think about.


Code to Joy: Why Everyone Should Learn a Little Programming is available to buy now.

Michael Littman

Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation.

Interview with Marek Šuppa: insights into RoboCupJunior
https://robohub.org/interview-with-marek-suppa-insights-into-robocupjunior/ | Mon, 16 Oct 2023

A RoboCupJunior soccer match in action.

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in-person in Thailand last year after two years of a pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students and at the same time it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying to both make it more compelling and captivating for the students and at the same time ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from respective countries form a so-called “SuperTeam” and compete against another “SuperTeam” as if these were distinct teams. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams and they compete on a field that is six times larger than the “standard” fields that are used for the individual games. While in the individual matches each team can play with two robots at most (resulting in a 2v2 game) in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots that play on the SuperTeam field during a SuperTeam match. The setup is very similar to the Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: it was a lot of fun for both the participants and the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another, and the referees did not have a way of communicating with the robots. The result was that not only was there little coordination among robots of the same SuperTeam, but also, whenever the game needed to be stopped, the referees had to physically run after the robots on the field to catch them, and do a kickoff after a goal was scored. Although hilarious, it’s far from how we would imagine the SuperTeam games to look.

The RoboCupJunior Soccer Standard Communication Modules aim to address both gaps. The module itself is a small device that is attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices themselves also support direct message exchange between robots on a single SuperTeam, meaning the teams do not have to invest in figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, meaning not only that everyone can build their own Standard Communication Module if they’d like, but also that the community can participate in its development, which makes it an interesting addition to RoboCupJunior Soccer.

RoboCupJunior Soccer teams getting ready for the competition.
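The module’s actual firmware and message format live in its open-source repository; purely to illustrate the referee-to-robot idea described above, a command dispatch might look something like the following sketch, in which every name and the message layout are invented.

```python
# Illustrative sketch of the referee-command idea behind the Standard
# Communication Modules: a referee broadcasts short commands and each
# robot's module dispatches them to a handler. The command names and
# message layout here are invented, not the module's real protocol.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class RefereeCommand:
    kind: str                   # e.g. "STOP", "KICKOFF"
    team: Optional[str] = None  # addressee SuperTeam, or None for broadcast

class RobotModule:
    """Stands in for the on-robot device receiving referee commands."""

    def __init__(self, team: str):
        self.team = team
        self.handlers: Dict[str, Callable[[], None]] = {}

    def on(self, kind: str, handler: Callable[[], None]) -> None:
        self.handlers[kind] = handler

    def receive(self, cmd: RefereeCommand) -> None:
        # React to broadcasts and to commands addressed to our SuperTeam.
        if cmd.team in (None, self.team) and cmd.kind in self.handlers:
            self.handlers[cmd.kind]()

robot = RobotModule(team="Blue")
robot.on("STOP", lambda: print("halting motors"))
robot.receive(RefereeCommand(kind="STOP"))  # all robots stop on the whistle
```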

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff”. Although we’ve done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it in a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves did work quite well and for some of us, who happened to have experience with “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also found potential areas for improvement in the future. The modules do not have a power source of their own and were powered by the robots. We didn’t think this would be a problem, but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when the robot decides to aggressively accelerate – which in turn means some of the modules disconnect when the voltage drops significantly. However, it ended up being a nice lesson for everyone involved, one that we can certainly learn from when we design the next iterations.


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we started to observe in recent years which surprised us organisers, to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us, the organisers, work in areas related to robotics, computer science, and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought that it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or difficult to set up for the high school students that comprise the majority of RoboCupJunior students.

Well, to our great surprise, some of the more advanced teams have started to utilise methods and technologies that are very close to the current state-of-the-art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing, and in Soccer to detect the ball, the goals, and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to do all the steps necessary for a successful deployment of this technology: gather a dataset, finetune the deep-learning model and deploy it on their robots – all of which is far from trivial and is very close to how these technologies get used in both research and industry.

Although we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect that in the future we will see it become much more prevalent, especially as the technology and the tooling around it becomes more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.

Action from RoboCupJunior Soccer 2023.

How can people get involved in RoboCupJunior (either as a participant or an organiser)?

A very good question!

The best place to start would be the RoboCupJunior website where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCup Junior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa

Marek Šuppa

Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he is not smart enough to do all the programming by himself. Since then, he’s been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as the Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to provide others with a chance to have a similar experience by organising the RoboCupJunior competition as part of the Executive Committee.

Interview with Roberto Figueiredo: the RoboCup experience
https://robohub.org/interview-with-roberto-figueiredo-the-robocup-experience/ | Fri, 04 Aug 2023

Roberto Figueiredo is a master’s student at the University of Aveiro. He is a member of the Bold Hearts RoboCup team which competes in the Humanoid KidSize soccer league. He is currently the local representative for the Junior Rescue Simulation. We spoke to Roberto about his RoboCup journey, from the junior to the major leagues, and his experience of RoboCup events.

When was your first RoboCup event and which competition did you take part in?

I started in 2016 in the Junior leagues with my high school and I took part in the rescue simulation competition (although I originally joined the on-stage competition). This first event actually happened in Portugal, and it was similar to a workshop. We qualified to go to the world cup in rescue simulation, in Leipzig, Germany, and we ended up in second place. That was really nice, and it was my first contact with RoboCup, and with robotics generally. I’d been working with electronics in the past, but simulation gave me a bit of an introduction to the more theoretical aspects of robotics, and to AI in general. Rescue simulation makes you think of ways to make the robots independent and not manually controlled by humans.

Roberto’s first RoboCup in 2016, Leipzig, pictured with the Singapore team celebrating after the finals.

Could you tell us about the subsequent RoboCup events that you took part in?

In 2017 we qualified to go to Nagoya, Japan, which was not just an amazing RoboCup, but an amazing trip. That’s another good thing about robotics, you get to meet a lot of new people in new countries. We did quite well in this competition as well, I think we reached 5th place.

After that we went to European RoboCup Junior in Italy. The following year was my last RoboCup as a junior, which was in Sydney. That was also an interesting event and I got to chat a bit more with the majors and understand how their teams worked. By this point, I had gained more experience, and I felt ready to get involved with a major league RoboCup team.

There is a big gap between the junior and major leagues. When I joined my team (the Bold Hearts), most of the team were PhDs and I was just a second year bachelor’s student so it was quite hard to pick up all the knowledge. However, if you are persistent enough and you are interested in, and passionate about, robotics you’ll get the hang of it and you’ll learn by trial and error.

EuroRoboCup 2022 in Portugal. Roberto (kneeling in photo) was part of the organising committee.

When was your first competition with the team in the major league?

My first competition was actually last year, in Thailand. We didn’t perform as well as we would have liked; however, there is much more to RoboCup than just the competition – it is now more of a scientific and knowledge-sharing event; it’s unique. Just this year, in Bordeaux, we had a problem with our robots. Every time we disconnected the ethernet cable, the robot just stopped playing, and we couldn’t figure out what was happening. I asked another team that was using the same software – they had figured out the problem before and they told us how to solve it. I don’t think you’ll see that in other competitions. Every team has a joint objective, which is making science progress, making friendships, and making other teams better by sharing their knowledge. That’s really unique.

How did you join the Bold Hearts team?

I decided to do my master’s in the UK (at the University of Hertfordshire), to experience a different country and a different style of education. When I joined, I knew there was a team so I was already looking forward to joining. After a couple of years of work, we finally got to go to a competition as a team. It’s been an amazing time and a huge learning experience.

What is your role on the team?

In our team, everyone does a bit of everything. We still have a lot of problems to solve – on both the hardware and software side. All of us currently are computer scientists so it’s a bit more of a struggle to work on the hardware side. So, I do a bit of everything, both AI and non-AI related problems. For example, I’ve done some 3d modelling for the robots, and I’m currently working on the balancing problem. We all work together on the problems which is amazing because you get to see a bit of everything and learn from everyone. Robotics is a very multidisciplinary field. You get to learn about all kinds of topics: mechanical engineering, electrical engineering, machine learning, coding in general.

The Bold Hearts’ qualification video for this year’s RoboCup competition

Could you tell us about this year’s competition (which took place in Bordeaux)?

This year we were a lot more prepared than last year, when we’d just come back from COVID, and all of our experienced members had recently left the team, due to finishing their PhDs and starting work. Creating a successful robot team is a huge integration problem. There are so many pieces that need to go together and work perfectly for the robots to function, and if one fails it looks like your system isn’t doing anything. We got walking working perfectly this year, we had vision working well too, and we had a stable decision tree, and we were able to listen to the controller (which is like a referee and passes on information about fouls, game start and stops etc.). However, we had some bugs in the decision tree that made everything fall apart and we spent a lot of time trying to debug it. This happens to a lot of teams. However, you can still appreciate the work and progress of what they have done.

RoboCup 2023 in Bordeaux. Roberto (left) with Bold Hearts teammates.

What are the immediate plans for the team?

We are now thinking about joining the simulation competition, which is part of our league. It takes place in the winter season and we’re planning on joining to work on our software. The transition between simulation and hardware is quite hard. You need a good simulation base to be able to transfer directly the knowledge to the robot. We’re working on having a very good simulation so we can transfer, at least more easily, the knowledge learnt in simulation to the robots.

RoboCup is moving more towards AI and learning, which we can see in the 3d simulation. The robots learn a lot of the motion through reinforcement learning, for example. In the physical leagues it’s not as easy as we have to transfer that to the real world, where there is play in the joints, there’s backlash, there’s play in the 3d parts – there are a lot of variables that are not taken into account in simulations.

How has being part of RoboCup inspired your studies and research?

Every time I go to RoboCup I come out thinking about what I’m going to do next. I couldn’t be more inspired. It’s a really intense field but I love it. It makes you want to work really hard and it makes you passionate about science. I did my bachelor’s project related to RoboCup, I joined a master’s course on robotics, I keep asking my Professors if they want to start a team back in Portugal. I’m going to do my master’s thesis on robotics, on humanoids. I think humanoids are a very complex and interesting challenge. There is no one single solution.

About Roberto

Roberto Figueiredo

Roberto Figueiredo is a Portuguese, AI-focused computer scientist with a bachelor’s degree from the University of Hertfordshire. He is currently pursuing a master’s in Robotics and Intelligent Systems at the University of Aveiro, and is passionate about advancing his expertise in robotics. He has long been very enthusiastic about robots and AI, having participated in RoboCup since 2016, starting in the Rescue Simulation league. He has since become local representative for the Rescue League in Portugal and joined a major league team, Bold Hearts, in the Kid Size league, one of the most challenging in RoboCup Humanoid Soccer.

#RoboCup2023 in tweets – part 2
https://robohub.org/robocup2023-in-tweets-part-2/ | Sun, 16 Jul 2023

As this year’s RoboCup draws to a close, we take a look back at some of the highlights from the second half of the conference. Over the course of the weekend, the event focussed on the latter stages of the competitions, with the winners in all the different leagues decided. If you missed our round-up of the first half of RoboCup, you can catch up here.

#RoboCup2023 in tweets – part 1
https://robohub.org/robocup2023-in-tweets-part-1/ | Sat, 15 Jul 2023

This year’s RoboCup kicked off on 4 July and will run until 10 July. Taking place in Bordeaux, the event will see around 2500 participants, from 45 different countries take part in competitions, training sessions, and a symposium. Find out what attendees have been up to in preparation for, and in the first half of, the event.

What’s coming up at #RoboCup2023?
https://robohub.org/whats-coming-up-at-robocup2023/ | Mon, 03 Jul 2023

This year, RoboCup will be held in Bordeaux, from 4-10 July. The event will see around 2500 participants, from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.

The leagues and their competitions

The league competitions will take place on 6-9 July. You can find out more about the different leagues at these links:

Symposium

The RoboCup symposium will take place on 10 July. The programme can be found here.

There will be three keynote talks:

  • Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
  • Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
  • Laurence Devillers, Socio-affective robots: ethical issues

Find out more at the event website.

RoboCup humanoid league: Interview with Jasper Güldenstein
https://robohub.org/robocup-humanoid-league-interview-with-jasper-guldenstein/ | Tue, 20 Sep 2022

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event returned to an in-person format for 2022, taking place from 13-17 July in Bangkok. RoboCup comprises a number of leagues, with perhaps the most well-known being the soccer leagues.

In the Humanoid League, autonomous robots with a human-inspired body plan and senses play soccer against each other. We spoke to Jasper Güldenstein, a member of the technical committee, about the competition at RoboCup 2022, and also about the Humanoid League Virtual Season – a biweekly virtual competition, held between the physical RoboCup events, that gives teams an opportunity to test their ideas and keep in touch throughout the year.

Could you give us an overview of the Humanoid League competition at RoboCup this year?

This year we had the first in-person event after a three year break. It was really good to have the majority of the teams coming back together and working on their robots again. Although a lot of teams came with fewer participants than they usually do, we still managed to have 12 teams in the kid size league and three teams in the adult size. Unfortunately, some teams could not participate due to travel restrictions, but we hope to see them back next year.

Humanoid league finalists with their robots.

What was the format for the competition?

At the beginning we had a drop-in round, which is where we play games with one robot from each team participating in a joint team. So, we have eight robots playing from eight different teams. That is useful for finding out which teams are likely to perform the best in the competition. Next, the results from the drop-in games were used as seeding for the round robin phase of the regular competition, with the strongest teams separated into different groups. After the round robin, we have a knockout competition. The seeding method means that we can hopefully avoid the situation where very good teams get kicked out early. We saw that the most interesting games were towards the end of the competition when the teams performed really well.

Have you noticed improvements since the last physical competition?

I’d say definitely that one big thing that has improved for a lot of teams is the localisation. A lot of teams are more easily able to localise themselves on the field, and they don’t run off randomly. They are more certain that they are in the correct position.

Furthermore, I think the kicking has improved. The robots kick the ball much further than they used to. People have been tuning their kicking motions to increase the distance.

In terms of computer vision, this has definitely improved quite a bit. Something we did differently this time, which was inspired by what we did in the virtual season, is that we had a set of six different balls, all from previous FIFA competitions. For each game a ball was drawn randomly, so the teams couldn’t really prepare for all the balls. Although they were visually quite different, the teams didn’t really have any problems detecting the ball. We’ve seen, in general, that computer vision approaches have improved and these improvements have been transferred to the RoboCup competition. I think that almost all teams are using a neural network to detect the ball. This is a change from three, four, five years ago, where many teams used hand-tuned classical computer vision algorithms.
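To give a flavour of what a learned ball detector looks like, here is a minimal sketch using the open-source ultralytics YOLO package. The weights file, dataset config and confidence threshold are hypothetical placeholders rather than any team's actual setup.

```python
# Minimal sketch of neural-network ball detection, assuming the
# ultralytics package (pip install ultralytics). The dataset config
# "soccer_ball.yaml" and the input frame are hypothetical placeholders.
from ultralytics import YOLO

# Fine-tune a small pretrained model on labelled ball images.
model = YOLO("yolov8n.pt")
model.train(data="soccer_ball.yaml", epochs=50, imgsz=640)

# At game time, run the detector on a camera frame.
results = model.predict("camera_frame.jpg", conf=0.5)
for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"ball candidate at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```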

To talk a bit more about ball detection, it will be interesting to see what happens if we move to an environment with natural and/or varying light conditions. This year we were in a convention hall with uniform lighting. I believe next year, in Bordeaux, there is going to be some form of natural light coming in, and perhaps even fields that are outside. It’s still at the planning stage but we are looking forward to that. It will be a challenge and I strongly believe that the participants will find approaches to make their vision approach robust against these varying conditions.

The setup and testing for the Humanoid League competition at RoboCup 2022, Bangkok.

Thinking about the transfer from the simulation to the real world, are there any specific elements that lend themselves well to being transferred?

In terms of computer vision, we had a bit of transfer. In the virtual season we concentrated a lot on changing the lighting conditions and having varied backgrounds, to be able to emulate the real world a bit better. I think a few teams used their vision approaches from the virtual season in the real world.

However, I think the strongest part is behaviour. Teams were able to test their strategies in the virtual competition and adapt every other week. For example, CIT Brains, which won the virtual season and the physical competition, made quite a few changes to their strategy and they had robust systems running. Their strategy worked really well, and in the final they managed to score several goals and win against the previous world champions (Rhoban Football Club).

How did the competition go for your team (Hamburg Bit-Bots)?

We actually had quite a few hardware problems, especially on the mechanics side. The motors are wearing out; they warped and flexed more than we expected. This meant we had difficulty walking stably. And if you can't walk in a stable manner, that defeats the purpose of everything else. It's a really integrated system – if one component breaks, you are out of luck, because you are very restricted in what you can change during the competition and you don't have much spare equipment with you.

However, what was good for us was that we had a lot of software up and running, and a lot of it had been tested really well during the virtual league. We had to work around the walking problem algorithmically, searching for walking parameters that were more stable. We also switched from [robot operating system] ROS 1 to ROS 2, which brought with it many challenges. We actually did a write-up on the issues we faced, with some tricks and tips.

Will there be a new virtual league season this year?

Yes, we’ve discussed this in our technical committee and we plan on doing it again. The last event was successful and the teams enjoyed it. We plan on making some changes, such as logging the games to extract some interesting metrics and doing some analysis on those.

Another thing we want to do is domain randomisation, making the environment a bit more varied. This means that the approaches have to be more robust. The hope is that, when they are more robust, they can be transferred more easily to the real world. We were thinking about making the terrain slightly uneven. Another approach could be to modify the models of the robots such that the joints emulate a bit of wear, so the simulated actuators might be randomly a bit weaker or stronger, and teams have to find robust approaches to deal with that.
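As a rough illustration of the kind of domain randomisation Jasper describes, the sketch below samples per-episode physics parameters. All ranges and the simulator hook are invented for illustration; they are not the league's actual settings.

```python
# Minimal sketch of per-episode domain randomisation. All ranges and
# the simulator hook are invented for illustration.
import random

def sample_episode_params(num_joints=20):
    return {
        # Scale each joint's strength to emulate a bit of actuator wear.
        "joint_strength_scale": [random.uniform(0.8, 1.2)
                                 for _ in range(num_joints)],
        # Slightly uneven terrain: height noise amplitude in metres.
        "terrain_height_noise": random.uniform(0.0, 0.01),
        # Varied lighting, as already used for the vision challenges.
        "light_intensity": random.uniform(0.5, 1.5),
    }

for episode in range(3):
    params = sample_episode_params()
    # run_simulated_game(params)  # hypothetical simulator hook
    print(f"episode {episode}: terrain noise "
          f"{params['terrain_height_noise']:.4f} m, "
          f"light {params['light_intensity']:.2f}")
```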

We won’t do everything at the beginning. We’ll move through the season and talk to the teams and form some organising groups to develop the simulation further, to run the games and to organise the competition itself. We are always happy to have input and we always talk to the teams to see what they think. It’s a competition but it’s something we build together.

Humanoid robots at RoboCup 2022, Bangkok.

Could you tell us about the satellite event that took place at RoboCup?

This was a discussion about how to get teams more interested in participating and how to bridge the junior leagues and the major leagues.

We know that some people who participated in RoboCup Junior specifically selected a university that has a RoboCup team so that they could join that team. It would be awesome for more people to do this, and for more juniors to know what the major league is about.

To bridge the gap between the junior and major leagues we don’t want to introduce another league, but we want some form of events where the two groups can meet, and where the juniors can show off how well they are doing. It would be good to have more interaction between the leagues, although we haven’t decided on the exact format yet.

About Jasper


Jasper Güldenstein is a PhD student at the University of Hamburg. His research focuses on humanoid navigation. He has been participating in RoboCup as a member of the Hamburg Bit-Bots since 2016, where he works not only on the team's software but also on the electronics of the robot platform. In his recent master's thesis he evaluated the use of reinforcement learning for path planning and execution for humanoid robots.

#IJCAI invited talk: engineering social and collaborative agents with Ana Paiva https://robohub.org/ijcai-invited-talk-engineering-social-and-collaborative-agents-with-ana-paiva/ Sun, 04 Sep 2022 08:53:33 +0000 https://robohub.org/?p=205440
An illustration containing electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July in Vienna. In this post, we summarise the presentation by Ana Paiva, University of Lisbon and INESC-ID. The title of her talk was “Engineering sociality and collaboration in AI systems”.

Robots are widely used in industrial settings, but what happens when they enter our everyday world, and, specifically, social situations? Ana believes that social robots, chatbots and social agents have the potential to change the way we interact with technology. She envisages a hybrid society where humans and AI systems work in tandem. However, for this to be realised, we need to carefully consider how such robots will interact with us socially and collaboratively. In essence, our world is social, so when machines enter it they need the capabilities to interact with that social world.

Ana took us through the theory of what it means to be social. There are three aspects to this:

  1. Social understanding: the capacity to perceive others, exhibit theory of mind and respond appropriately.
  2. Interpersonal competencies: the capability to communicate socially, establish relationships and adapt to others.
  3. Social responsibility: the capability to take actions towards the social environment, follow norms and adopt morally appropriate actions.

Screenshot from Ana’s talk.

Ana wants to go from this notion of social intelligence to what is called artificial social intelligence, which can be defined as: “the capability to perceive and understand social signals, manage and participate in social interactions, act appropriately in social settings, establish social relations, adapt to others, and exhibit social responsibility.”

As an engineer, she likes to build things, and she asked how to get from this definition to a model that will allow her to build social machines. This means looking at social perception, social modelling and decision making, and social acting. A lot of Ana’s work revolves around the design, study and development needed to achieve this kind of architecture.

Ana gave us a flavour of some of the projects that she and her groups have carried out with regards to trying to engineer sociality and collaboration in robots and other agents.

One of these projects was called “Teach me how to write”, and it centres on using robots to improve the handwriting abilities of children. In this project the team wanted to create a robot that kids could teach to write. Through teaching the robot it was hypothesised that they would, in turn, improve their own skills.

The first step was to create and train a robot that could learn how to write. They used learning from demonstration to train a robotic arm to draw characters. The team realised that if the kids were to teach the robot, it had to make mistakes and visibly improve. They therefore studied the taxonomy of handwriting mistakes made by children, built those mistakes into the system, and let the robot learn from the kids how to fix them.
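As a loose illustration of this learning-from-demonstration idea, the sketch below averages several demonstrated strokes into a model trajectory and then perturbs it to emulate one class of mistake. The data and the mistake model are invented; the project's actual method was more sophisticated.

```python
# Minimal sketch: learn a stroke from demonstrations, then perturb it
# to emulate a child-like mistake. Data and mistake model are invented.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
circle = np.column_stack([np.cos(t), np.sin(t)])
# Hypothetical data: five noisy demonstrations of a circular stroke.
demos = [circle + rng.normal(0, 0.01, circle.shape) for _ in range(5)]

# "Learning": average the time-aligned demonstrations.
mean_trajectory = np.mean(demos, axis=0)

def make_mistake(traj, severity=0.3):
    """Emulate one mistake class from a taxonomy (a distorted shape)
    by shrinking the stroke and adding a slow wobble."""
    wobble = severity * 0.1 * np.sin(np.linspace(0, 4 * np.pi, len(traj)))
    return traj * (1 - severity / 2) + np.column_stack([wobble, wobble])

flawed_trajectory = make_mistake(mean_trajectory)
```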

You can see the system architecture in the figure below; it includes the handwriting task element and social behaviours. To add these social behaviours they used FAtiMA, a toolkit developed in Ana’s lab. FAtiMA is an affective agent architecture for creating autonomous characters that can evoke empathic responses, and it can be integrated into a larger framework.

System architecture. Screenshot from Ana’s talk.

When it came to using and evaluating the robot, they couldn’t put the robot arm in the classroom as it was too big, unwieldy and dangerous. Therefore, they used a Nao robot, which moved its arms as if it were writing, but didn’t actually write.

Taking part in the study were 24 Portuguese-speaking children, and they participated in four sessions over the course of a few weeks. They assigned the robot two contrasting competencies: “learning” (where the robot improved over the course of the sessions) and “non-learning” (where the robot’s abilities remained constant). They measured the kids’ writing ability and improvement, and they used questionnaires to find out what the children thought about the friendliness of the robot, and their own teaching abilities.

They found that the children who worked with the learning robot significantly improved their own abilities. They also found that the robot’s poor writing abilities did not affect the children’s fondness for it.

You can find out more about this project, and others, on Ana’s website.

RoboCup2022 underway – where to find the livestream action https://robohub.org/robocup2022-underway-where-to-find-the-livestream-action/ Thu, 14 Jul 2022 13:59:14 +0000 https://robohub.org/?p=205042

RoboCup 2022 kicked off yesterday, and there have already been lots of competitions within the various leagues. Many of these are livestreamed to YouTube, and the recordings are available for anyone to watch.

Below are the links to the livestream (and recorded) channels for the leagues that have them.

In addition to these channels, there are also some stand-alone recordings.

Here are some highlights from the humanoid league drop-in tournament:

This video features a match between the HTWK-Robots and rUNSWift.

Find out more about RoboCup 2022 here.

Radhika Nagpal at #NeurIPS2021: the collective intelligence of army ants https://robohub.org/radhika-nagpal-at-neurips2021-the-collective-intelligence-of-army-ants/ Tue, 01 Feb 2022 12:34:46 +0000 https://robohub.org/?p=203423

The 35th conference on Neural Information Processing Systems (NeurIPS2021) featured eight invited talks. In this post, we give a flavour of the final presentation.

The collective intelligence of army ants, and the robots they inspire

Radhika Nagpal

Radhika’s research focusses on collective intelligence, with the overarching goal being to understand how large groups of individuals, with local interaction rules, can cooperate to achieve globally complex behaviour. These are fascinating systems. Each individual is minuscule compared to the massive phenomena that the group creates, and, with a limited view of the actions of the rest of the swarm, they achieve striking coordination.

Looking at collective intelligence from an algorithmic point of view, the phenomenon emerges from many individuals interacting using simple rules. When run by these large, decentralised groups, these simple rules result in highly intelligent behaviour.

The subject of Radhika’s talk was army ants, a species which spectacularly demonstrate collective intelligence. Without any leader, millions of ants work together to self-assemble nests and build bridge structures using their own bodies.

One particular aspect of study concerned self-assembly of such bridges. Radhika’s research team, which comprised three roboticists and two biologists, found that the bridges the ants created adapt to traffic flow and terrain. The ants also disassembled a bridge when the flow of ants had stopped and it wasn’t needed any more.

The team proposed the following simple hypothesis to explain this behaviour using local rules: if an ant is walking along, and experiences congestion (i.e. another ant steps on it), then it becomes stationary and turns into a bridge, allowing other ants to walk over it. Then, if no ants are walking on it any more, it can get up and leave.
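This hypothesis is simple enough to capture in a few lines. The sketch below is a toy agent model of the two local rules; the traffic model is a hypothetical stand-in, not the team's actual simulation.

```python
# Toy model of the two local rules in the hypothesis. The traffic
# probabilities are invented; real ants sense contact, not a coin flip.
import random

class Ant:
    def __init__(self):
        self.is_bridge = False

    def update(self, being_stepped_on):
        if not self.is_bridge and being_stepped_on:
            # Rule 1: congestion -> freeze and become part of the bridge.
            self.is_bridge = True
        elif self.is_bridge and not being_stepped_on:
            # Rule 2: no traffic on top -> get up and rejoin the flow.
            self.is_bridge = False

ants = [Ant() for _ in range(100)]
for step in range(50):
    traffic = 0.6 if step < 25 else 0.0  # traffic stops halfway through
    for ant in ants:
        ant.update(being_stepped_on=random.random() < traffic)
    if step in (0, 24, 49):
        frozen = sum(a.is_bridge for a in ants)
        print(f"step {step}: {frozen} ants frozen into the bridge")
```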

These observations, and this hypothesis, led the team to consider two research questions:

  • Could they build a robot swarm with soft robots that can self-assemble amorphous structures, just like the ant bridges?
  • Could they formulate rules which allowed these robots to self-assemble temporary and adaptive bridge structures?

There were two motivations for these questions. Firstly, the goal of moving closer to realising robot swarms that can solve problems in a particular environment. Secondly, the use of a synthetic system to better understand the collective intelligence of army ants.

Screenshot from Radhika’s talk.

Radhika showed a demonstration of the soft robot designed by her group. It has two feet and a soft body, and moves by flipping – one foot remains attached, while the other detaches from the surface and flips to attach in a different place. This allows movement in any orientation. Upon detaching, a foot searches through space to find somewhere to attach. By using grippers on the feet that can hook onto textured surfaces, and having a stretchable Velcro skin, the robots can climb over each other, like the ants. The robot pulses and uses a vibration sensor to detect whether it is in contact with another robot. A video demonstration of two robots interacting showed that the team has successfully created a system that can enact the simple rules of the hypothesis outlined above.

In order to investigate the high-level properties of army ant bridges, which would require a vast number of robots, the team created a simulation. Modelling the simulated ants to have the same characteristics as their physical robots, they were able to replicate the high-level properties of real army ant bridges using their hypothesised rules.


You can read the round-ups of the other NeurIPS invited talks at these links:
#NeurIPS2021 invited talks round-up: part one – Duolingo, the banality of scale and estimating the mean
#NeurIPS2021 invited talks round-up: part two – benign overfitting, optimal transport, and human and machine intelligence

Maria Gini wins the 2022 ACM/SIGAI Autonomous Agents Research Award https://robohub.org/maria-gini-wins-the-2022-acm-sigai-autonomous-agents-research-award/ Tue, 18 Jan 2022 09:28:50 +0000 https://robohub.org/?p=202975

Congratulations to Professor Maria Gini on winning the ACM/SIGAI Autonomous Agents Research Award for 2022! This prestigious prize recognises years of research and leadership in the field of robotics and multi-agent systems.

Maria Gini is Professor of Computer Science and Engineering at the University of Minnesota, and has been at the forefront of the field of robotics and multi-agent systems for many years, consistently bringing AI into robotics.

Her work includes the development of:

  • novel algorithms to connect the logical and geometric aspects of robot motion and learning,
  • novel robot programming languages to bridge the gap between high-level programming languages and programming by guidance,
  • novel economic-based multi-agent task planning and execution algorithms.

Her work has spanned both the design of novel algorithms and practical applications. These applications have been utilized in settings as varied as warehouses and hospitals, with uses such as surveillance, exploration, and search and rescue.

Maria has been an active member and leader of the agents community since its inception. She has been a consistent mentor and role model, deeply committed to bringing diversity to the fields of AI, robotics, and computing. She is also the former President of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).

Maria will be giving an invited talk at AAMAS 2022. More details on this will be available soon on the conference website.

Interview with Tao Chen, Jie Xu and Pulkit Agrawal: CoRL 2021 best paper award winners https://robohub.org/interview-with-tao-chen-jie-xu-and-pulkit-agrawal-corl-2021-best-paper-award-winners/ Wed, 24 Nov 2021 13:39:08 +0000 https://robohub.org/?p=202555

Congratulations to Tao Chen, Jie Xu and Pulkit Agrawal who have won the CoRL 2021 best paper award!

Their work, A system for general in-hand object re-orientation, was highly praised by the judging committee who commented that “the sheer scope and variation across objects tested with this method, and the range of different policy architectures and approaches tested makes this paper extremely thorough in its analysis of this reorientation task”.

Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

We present a system for reorienting novel objects using an anthropomorphic robotic hand with any configuration, with the hand facing both upwards and downwards. We demonstrate the capability of reorienting over 2000 geometrically different objects in both cases. The learned controller can also reorient novel unseen objects.

Could you tell us about the implications of your research and why it is an interesting area for study?

Our learned skill (in-hand object reorientation) can enable fast pick-and-place of objects in desired orientations and locations. For example, in logistics and manufacturing, it is a common requirement to pack objects into slots for kitting. Currently, this is usually achieved via a two-stage process involving re-grasping. Our system will be able to achieve it in one step, which can substantially improve packing speed and boost manufacturing efficiency.

Another application is enabling robots to operate a wider variety of tools. The most common end-effector in industrial robots is a parallel-jaw gripper, partially due to its simplicity in control. However, such an end-effector is physically unable to handle many tools we see in our daily life. For example, even using pliers is difficult for such a gripper as it cannot dexterously move one handle back and forth. Our system will allow a multi-fingered hand to dexterously manipulate such tools, which opens up a new area for robotics applications.

Could you explain your methodology?

We use a model-free reinforcement learning algorithm to train the controller for reorienting objects. In-hand object reorientation is a challenging contact-rich task. It requires a tremendous amount of training. To speed up the learning process, we first train the policy with privileged state information such as object velocities. Using the privileged state information drastically improves the learning speed. Other than this, we also found that providing a good initialization on the hand and object pose is critical for training the controller to reorient objects when the hand faces downward. In addition, we develop a technique to facilitate the training by building a curriculum on gravitational acceleration. We call this technique “gravity curriculum”.
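As a rough sketch of what a gravity curriculum might look like, the snippet below linearly anneals gravity from an easy value to the real one over training. The schedule, values and simulator hook are assumptions for illustration, not the authors' actual settings.

```python
# Minimal sketch of a "gravity curriculum": start with weak gravity
# (the object barely falls from the hand) and anneal to the real value.
# The schedule and simulator hook are hypothetical.

def gravity_for_step(step, total_steps, g_start=-1.0, g_end=-9.81):
    """Linearly interpolate gravity from an easy to the real value."""
    frac = min(step / total_steps, 1.0)
    return g_start + frac * (g_end - g_start)

for step in range(0, 1_000_001, 250_000):
    g = gravity_for_step(step, total_steps=1_000_000)
    # sim.set_gravity((0.0, 0.0, g))  # hypothetical simulator call
    print(f"step {step:>9}: gravity = {g:.2f} m/s^2")
```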

With these techniques, we are able to train a controller that can reorient many objects even with a downward-facing hand. However, a practical concern of the learned controller is that it makes use of privileged state information, which can be nontrivial to get in the real world. For example, it is hard to measure the object’s velocity in the real world. To ensure that we can deploy a controller reliably in the real world, we use teacher-student training. We use the controller trained with the privileged state information as the teacher. Then we train a second controller (student) that does not rely on any privileged state information and hence has the potential to be deployed reliably in the real world. This student controller is trained to imitate the teacher controller using imitation learning. The training of the student controller becomes a supervised learning problem and is therefore sample-efficient. At deployment time, we only need the student controller.
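To make the teacher-student step concrete, here is a minimal PyTorch sketch in which the student imitates a teacher that sees privileged state. The network sizes, observation dimensions and random data are invented placeholders; the actual system's architectures and training pipeline differ.

```python
# Minimal sketch of teacher-student distillation, assuming PyTorch.
# Dimensions and data are invented; in the real system the teacher is
# the RL-trained policy and observations come from simulator rollouts.
import torch
import torch.nn as nn

PRIV_DIM, OBS_DIM, ACT_DIM = 32, 64, 20

teacher = nn.Sequential(nn.Linear(OBS_DIM + PRIV_DIM, 256),
                        nn.ReLU(), nn.Linear(256, ACT_DIM))
student = nn.Sequential(nn.Linear(OBS_DIM, 256),
                        nn.ReLU(), nn.Linear(256, ACT_DIM))
optim = torch.optim.Adam(student.parameters(), lr=3e-4)

for _ in range(1000):
    obs = torch.randn(128, OBS_DIM)    # deployable observations
    priv = torch.randn(128, PRIV_DIM)  # privileged, e.g. object velocity
    with torch.no_grad():
        target_action = teacher(torch.cat([obs, priv], dim=-1))
    # The student sees only deployable observations.
    loss = nn.functional.mse_loss(student(obs), target_action)
    optim.zero_grad()
    loss.backward()
    optim.step()
# At deployment, only `student` is needed.
```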

What were your main findings?

We developed a general system that can be used to train controllers that can reorient objects with either the robotic hand facing upward or downward. The same system can also be used to train controllers that use external support such as a supporting surface for object re-orientation. Such controllers learned in our system are robust and can also reorient unseen novel objects. We also identified several techniques that are important for training a controller to reorient objects with a downward-facing hand.

A priori one might believe that it is important for the robot to know about object shape in order to manipulate new shapes. Surprisingly, we find that the robot can manipulate new objects without knowing their shape. It suggests that robust control strategies mitigate the need for complex perceptual processing. In other words, we might need much simpler perceptual processing strategies than previously thought for complex manipulation tasks.

What further work are you planning in this area?

Our immediate next step is to achieve such manipulation skills on a real robotic hand. To achieve this, we will need to tackle many challenges. We will investigate overcoming the sim-to-real gap such that the simulation results can be transferred to the real world. We also plan to design new robotic hand hardware through collaboration such that the entire robotic system can be dexterous and low-cost.


About the authors

Tao Chen is a Ph.D. student in the Improbable AI Lab at MIT CSAIL, advised by Professor Pulkit Agrawal. His research interests revolve around the intersection of robot learning, manipulation, locomotion, and navigation. More recently, he has been focusing on dexterous manipulation. His research papers have been published in top AI and robotics conferences. He received his master’s degree, advised by Professor Abhinav Gupta, from the Robotics Institute at CMU, and his bachelor’s degree from Shanghai Jiao Tong University.

Jie Xu is a Ph.D. student at MIT CSAIL, advised by Professor Wojciech Matusik in the Computational Design and Fabrication Group (CDFG). He obtained a bachelor’s degree from the Department of Computer Science and Technology at Tsinghua University with honours in 2016. During his undergraduate period, he worked with Professor Shi-Min Hu in the Tsinghua Graphics & Geometric Computing Group. His research mainly focuses on the intersection of Robotics, Simulation, and Machine Learning. Specifically, he is interested in the following topics: robotics control, reinforcement learning, differentiable physics-based simulation, robotics control and design co-optimization, and sim-to-real.

Dr Pulkit Agrawal is the Steven and Renee Finn Chair Professor in the Department of Electrical Engineering and Computer Science at MIT. He earned his Ph.D. from UC Berkeley and co-founded SafelyYou Inc. His research interests span robotics, deep learning, computer vision and reinforcement learning. Pulkit completed his bachelor’s at IIT Kanpur and was awarded the Director’s Gold Medal. He is a recipient of the Sony Faculty Research Award, Salesforce Research Award, Amazon Machine Learning Research Award, Signatures Fellow Award, Fulbright Science and Technology Award, Goldman Sachs Global Leadership Award, OPJEMS, and Sridhar Memorial Prize, among others.


Find out more

  • Read the paper on arXiv.
  • The videos of the learned policies are available here, as is a video of the authors’ presentation at CoRL.
  • Read more about the winning and shortlisted papers for the CoRL awards here.
International conference on intelligent robots and systems (IROS) https://robohub.org/international-conference-on-intelligent-robots-and-systems-iros/ Fri, 23 Oct 2020 15:10:35 +0000 https://robohub.org/international-conference-on-intelligent-robots-and-systems-iros/

This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020.

IROS conferences have traditionally had a theme, and this year is no different, with the emphasis on “consumer robotics and our future”. You can sign up here.

Plenaries

IROS will feature three plenary talks. The speakers and topics are as follows:

  • Danica Kragic, Robotics and artificial intelligence impacts on the fashion industry
  • Cynthia Breazeal, Living with social robots: from research to commercialization and back
  • Yukie Nagai, Cognitive development in humans and robots: new insights into intelligence

Keynote speakers

There are also nine keynote talks covering a number of topics.

  • Frank Dellaert, Air, sea, and space robots
  • Anya Petrovskaya, Driverless vehicles and field robots
  • Ashish Deshpande, Rehabilitation robotics
  • Jonathan Hurst, Humanoids
  • I-Ming Chen, Food handling robotics
  • Steve LaValle, Perception, action and control
  • Nikolaus Correll, Grasping, haptics and end-effectors
  • Andrea Thomaz, Human-robot interaction
  • Sarah Bergbreiter, Design, micro and bio-inspired robotics

Technical talks

The technical talks have been divided into 12 topic areas.

  • Air, sea, and space robots
  • Driverless vehicles and field robots
  • Medical, cellular, micro and nano robots
  • Humanoids, exoskeletons, and rehab robots
  • Localization, mapping and navigation
  • Dynamics, control and learning
  • Design, mechanisms, actuators, soft and bio-inspired robots
  • Perception, action, and cognition
  • Grasping, haptics and end-effectors
  • Human-robot interaction, teleoperation, and virtual reality
  • Swarms and multi-robots
  • Industry 4.0

Each talk will feature its digest slide, pre-recorded video presentation, and the paper’s PDF. These will be available from 25 October, so keep an eye on the website.

Workshops

There are a whopping 35 workshops to choose from. These have on-demand content and also live sessions (dates vary so visit the webpages below for more information about each specific workshop).

  1. 3rd workshop on proximity perception in robotics: towards multi-modal cognition, Stefan Escaida Navarro*, Stephan Mühlbacher-Karrer, Hubert Zangl, Keisuke Koyama, Björn Hein, Ulrike Thomas, Hosam Alagi, Yitao Ding, Christian Stetco
  2. Bringing geometric methods to robot learning, optimization and control, Noémie Jaquier*, Leonel Rozo, Søren Hauberg, Hans-Peter Schröcker, Suvrit Sra
  3. 12th IROS20 workshop on planning, perception and navigation for intelligent vehicles, Philippe Martinet*, Christian Laugier, Marcelo H Ang Jr, Denis Fernando Wolf
  4. Robot-assisted training for primary care: how can robots help train doctors in medical examinations?, Thrishantha Nanayakkara*, Florence Ching Ying Leong, Thilina Dulantha Lalitharatne, Liang He, Fumiya Iida, Luca Scimeca, Simon Hauser, Josie Hughes, Perla Maiolino
  5. Workshop on animal-robot interaction, Cesare Stefanini and Donato Romano*
  6. Ergonomic human-robot collaboration: opportunities and challenges, Wansoo Kim*, Luka Peternel, Arash Ajoudani, Eiichi Yoshida
  7. New advances in soft robots control, Concepción A. Monje*, Egidio Falotico, Santiago Martínez de la Casa
  8. Autonomous system in medicine: current challenges in design, modeling, perception, control and applications, Hang Su, Yue Chen*, Jing GUO, Angela Faragasso, Haoyong Yu, Elena De Momi
  9. MIT MiniCheetah workshop, Sangbae Kim*, Patrick M. Wensing, Inhyeok Kim
  10. Workshop on humanitarian robotics, Garrett Clayton*, Raj Madhavan, Lino Marques
  11. Robotics-inspired biology, Nick Gravish*, Kaushik Jayaram, Chen Li, Glenna Clifton, Floris van Breugel
  12. Robots building robots. Digital manufacturing and human-centered automation for building consumer robots, Paolo Dario*, George Q. Huang, Peter Luh, MengChu Zhou
  13. Cognitive robotic surgery, Michael C. Yip, Florian Richter*, Danail Stoyanov, Francisco Vasconcelos, Fanny Ficuciello, Emmanuel B Vander Poorten, Peter Kazanzides, Blake Hannaford, Gregory Scott Fischer
  14. Application-driven soft robotic systems: Translational challenges, Sara Adela Abad Guaman, Lukas Lindenroth, Perla Maiolino, Agostino Stilli*, Kaspar Althoefer, Hongbin Liu, Arianna Menciassi, Thrishantha Nanayakkara, Jamie Paik, Helge Arne Wurdemann
  15. Reliable deployment of machine learning for long-term autonomy, Feras Dayoub*, Tomáš Krajník, Niko Sünderhauf, Ayoung Kim
  16. Robotic in-situ servicing, assembly, and manufacturing, Craig Carignan*, Joshua Vander Hook, Chakravarthini Saaj, Renaud Detry, Giacomo Marani
  17. Benchmarking progress in autonomous driving, Liam Paull*, Andrea Censi, Jacopo Tani, Matthew Walter, Felipe Codevilla, Sahika Genc, Sunil Mallya, Bhairav Mehta
  18. ROMADO: RObotic MAnipulation of Deformable Objects, Miguel Aranda*, Juan Antonio Corrales Ramon, Pablo Gil, Gonzalo Lopez-Nicolas, Helder Araujo, Youcef Mezouar
  19. Perception, learning, and control for autonomous agile vehicles, Giuseppe Loianno*, Davide Scaramuzza, Sertac Karaman
  20. Planetary exploration robots: challenges and opportunities, Hendrik Kolvenbach*, William Reid, Kazuya Yoshida, Richard Volpe
  21. Application-oriented modelling and control of soft robots, Thomas George Thuruthel*, Cosimo Della Santina, Seyedmohammadhadi Sadati, Federico Renda, Cecilia Laschi
  22. State of the art in robotic leg prostheses: where we are and where we want to be, Tommaso Lenzi*, Robert D. Gregg, Elliott Rouse, Joost Geeroms
  23. Workshop on perception, planning and mobility in forestry robotics (WPPMFR 2020), João Filipe Ferreira* and David Portugal
  24. Why robots fail to grasp? – failure ca(u)ses in robot grasping and manipulation, Joao Bimbo*, Dimitrios Kanoulas, Giulia Vezzani, Kensuke Harada
  25. Trends and advances in machine learning and automated reasoning for intelligent robots and systems, Abdelghani Chibani, Craig Schlenoff, Yacine Amirat*, Shiqi Zhang, Jong-Hwan Kim, Ferhat Attal
  26. Learning impedance modulation for physical interaction: insights from humans and advances in robotics, Giuseppe Averta*, Franco Angelini, Meghan Huber, Jongwoo Lee, Manolo Garabini
  27. New horizons of robot learning – from industrial challenges to future capabilities, Kim Daniel Listmann* and Elmar Rueckert
  28. Robots for health and elderly care (RoboHEC), Leon Bodenhagen*, Oskar Palinko, Francois Michaud, Adriana Tapus, Julie Robillard
  29. Wearable SuperLimbs: design, communication, and control, Harry Asada*
  30. Human Movement Understanding for Intelligent Robots and Systems, Emel Demircan*, Taizo Yoshikawa, Philippe Fraisse, Tadej Petric
  31. Construction and architecture robotics, Darwin Lau*, Yunhui Liu, Tobias Bruckmann, Thomas Bock, Stéphane Caro
  32. Mechanisms and design: from inception to realization, Hao Su*, Matei Ciocarlie, Kyu-Jin Cho, Darwin Lau, Claudio Semini, Damiano Zanotto
  33. Bringing constraint-based robot programming to real-world applications, Wilm Decré*, Herman Bruyninckx, Gianni Borghesan, Erwin Aertbelien, Lars Tingelstad, Darwin G. Caldwell, Enrico Mingo, Abderrahmane Kheddar, Pierre Gergondet
  34. Managing deformation: a step towards higher robot autonomy, Jihong Zhu*, Andrea Cherubini, Claire Dune, David Navarro-Alarcon
  35. Social AI for human-robot interaction of human-care service robots, Ho Seok Ahn*, Hyungpil Moon, Minsu Jang, Jongsuk Choi

Robot challenges

Another element of the conference that sounds interesting is the robot challenges. There are three of these and you should be able to watch the competitions in action next week.

  1. Open cloud robot table organization challenge (OCRTOC). This competition focusses on table organisation tasks. Participants will need to organise the objects in the scene according to a target configuration. This competition will be broadcast on 25-27 October.
  2. 8th F1Tenth autonomous Grand Prix @ IROS 2020. This competition will take the form of a virtual race with standardised vehicles and hardware. The qualifying phase is a timed trial. The Grand Prix phase pits virtual competitors against each other on the same track. The race will be broadcast on 27 October.
  3. Robotic grasping and manipulation competition. There are two sections to this competition. In the first the robot has to make five cups of iced Matcha green tea. The second involves disassembly and assembly using a NIST Task Board.
#ICRA2020 workshops on robotics and learning https://robohub.org/icra2020-workshops-on-robotics-and-learning/ Wed, 15 Jul 2020 22:11:57 +0000 https://robohub.org/icra2020-workshops-on-robotics-and-learning/

This year the International Conference on Robotics and Automation (ICRA) is being run as a virtual event. One interesting feature of this conference is that it has been extended to run from 31 May to 31 August. A number of workshops were held on the opening day and here we focus on two of them: “Learning of manual skills in humans and robots” and “Emerging learning and algorithmic methods for data association in robotics”.

Learning of manual skills in humans and robots

This workshop was organised by Aude Billard, EPFL, and Dagmar Sternad, Northeastern University. It brought together researchers from human motor control and from robotics to answer questions such as: How do humans achieve manual dexterity? What kind of practice schedules can shape these skills? Can some of these strategies be transferred to robots? To what extent is robot manual skill limited by the hardware, and what can and cannot be learned?

The third session of the workshop focussed on “Learning skills” and you can watch the two talks and the discussions below:

Jeannette Bohg – Learning to scaffold the development of robotic manipulation skills

Dagmar Sternad – Learning and control in skilled interactions with objects: A task-dynamic approach

Discussion with Jeannette Bohg and Dagmar Sternad

Emerging learning and algorithmic methods for data association in robotics

This workshop covered emerging algorithmic methods based on optimization and graph-theoretic techniques, learning and end-to-end solutions based on deep neural networks, and the relationships between these techniques.

You can watch the workshop in full here:

Below is the programme with the times indicating the position of that talk in the YouTube video:
11:00 Ayoung Kim – Learning motion and place descriptor from LiDARs for long-term navigation
34:11 Xiaowei Zhou – Learning correspondences for 3D reconstruction and pose estimation
51:30 Florian Bernard – Higher-order projected power iterations for scalable multi-matching
1:11:24 Cesar Cadena – High level understanding in the data association problem
1:34:55 Spotlight talk 1: Daniele Cattaneo – CMRNet++: map and camera agnostic monocular visual localization in LiDAR maps
1:50:45 Nicholas Roy – The role of semantics in perception
2:11:12 Kostas Daniilidis – Learning representations for matching
2:33:26 Jonathan How – Consistent multi-view data association
2:51:40 John Leonard – A research agenda for robust semantic SLAM
3:17:58 Luca Carlone – Towards certifiably robust spatial perception
3:39:36 Roberto Tron – Fast, consistent distributed matching for robotics applications
3:59:22 Randal Beard – Tracking moving objects from a moving camera in 3d environments
4:18:49 Nikolay Atanasov – A unifying view of geometry, semantics, and data association in SLAM
4:39:03 Spotlight talk 2: Nathaniel Glaser – Enhancing multi-robot perception via learned data association

The Age of A.I. YouTube series https://robohub.org/the-age-of-a-i-youtube-series/ Mon, 06 Apr 2020 21:53:00 +0000 https://robohub.org/the-age-of-a-i-youtube-series/

The YouTube originals series “The Age of A.I.” was released in December 2019. If you haven’t already seen it, now could be a good time to catch up – with much of the world in enforced or voluntary isolation, many of us are stuck at home with hours to fill. Sit back and marvel at the many incredible, and often heart-warming, applications of AI.

Episode 1 features researchers Mark Sagar (University of Auckland) and Gil Weinberg (Georgia Tech). Mark works with will.i.am on a digital avatar and presents BabyX – a virtual animated baby that learns and reacts like a human baby. Gil makes music using AI and has teamed up with other researchers at Georgia Tech to create smart prosthetics for amputees, combining ultrasound signals and machine learning.

Watch the other episodes:
Episode 2: Healed through A.I.

Episode 3: Using A.I. to build a better human

Episode 4: Love, art and stories: decoded

Episode 5: The ‘Space Architects’ of Mars

Episode 6: Will a robot take my job?

Episode 7: Saving the world one algorithm at a time

Episode 8: How A.I. is searching for Aliens

You can access the full series here.
