[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics https://robohub.org/a-list-of-resources-articles-and-opinion-pieces-relating-to-large-language-models-robotics/ Wed, 23 Aug 2023 09:51:24 +0000 https://robohub.org/?p=207140
Image credit: Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.

We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)

What LLMs are and how they work

Journal, conference, arXiv, and other articles

Newspaper, magazine, University website, and blogpost articles

Reports

Podcasts and video discussions

Focus on LLMs and education

Relating to art and other creative processes

Pertaining to robotics

Misinformation, fake news and the impact on journalism

Regulation and policy

Building a Tablebot https://robohub.org/building-a-tablebot/ Tue, 23 May 2023 11:05:42 +0000 https://robohub.org/?p=207389 There was a shortage of entries in the tablebot competition shortly before the registration window closed for RoboGames 2023. To make sure the contest would be held, I entered a robot. Then I had to build one.

What’s a tablebot?

A tablebot lives on the table. There are three “phases” to the competition:

  • Phase I: Build a robot that goes from one end of a table to the other and back.
  • Phase II: Have the robot push a block off the ledge of the table.
  • Phase III: Have the robot push the block into a shoebox mounted at the end of the table.

There is also an unofficial Phase IV – which is to fall off the table and survive. I did not attempt this phase.

The majority of tablebots are quite simple – a couple of sonar or IR sensors and they kind of wander around the tabletop in hopes of completing the different phases. My tablebot is decidedly different – and it paid off as the robot won the gold medal at RoboGames 2023.

Robot build

The entire robot is built of 3D printed parts and random things I had on hand.

I’ve had one of those $99 LD-06 lidars sitting around for a while, and decided this was a great project to use it on. I used a Dynamixel AX-12 servo to tilt the laser so I can find the table, the cube, or the goal.

All of the code runs on an STM32, on my custom Etherbotix board which was designed for my Maxwell robot a number of years ago. The robot uses differential drive with some 30:1 12V gear motors, which were purchased from Lynxmotion in 2008 and used in various fire fighting robots over the years.

A set of small digital Sharp IR sensors are used as cliff sensors. These can be moved up or down to calibrate for different table surfaces using a pair of adjustment screws. While the sensors are very accurate and stop the robot, they don’t see far enough ahead when going at full speed, and so I also use the laser to detect when the table edge is approaching.

Phase 1 Software

Phase 1 is pretty straightforward – and mostly based on dead reckoning odometry (a rough sketch of the speed-selection logic follows the list):

  • The laser is angled downwards looking for the table. This is done by projecting the scan to 3D points and filtering out anything not in front of the robot at roughly table height. When the table disappears (the number of points drops too low), we reduce our maximum speed to something that is safe for the cliff sensors to detect.
  • While the laser sensors look for the end of the table, the robot drives forward, and a simple feedback loop keeps the robot centered on the table using odometry.
  • When the cliff sensors eventually trigger, the robot stops, backs up 15 centimeters, and then turns 180 degrees – all using dead reckoning odometry.
  • The maximum speed is then reset and we take off to the other end of the table with the same behavior.
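To make this concrete, below is a minimal Python sketch of the speed-selection step. The thresholds, the point-list format, and the constant names are invented for illustration (the real firmware is C running on the STM32), but the logic mirrors the description above: project the scan to 3D, count the points that look like tabletop directly ahead, and slow down when that count drops.

# Illustrative sketch only (the actual robot runs C on an STM32 with different names/values).
FAST_SPEED = 0.5         # m/s while the laser still sees plenty of tabletop ahead
SLOW_SPEED = 0.1         # m/s so the cliff sensors have time to stop the robot
TABLE_HEIGHT_TOL = 0.02  # m, how far a point may deviate from the tabletop plane
MIN_TABLE_POINTS = 20    # fewer points than this means the table edge is near

def pick_speed(points_3d):
    """points_3d: list of (x, y, z) tuples in the robot frame, z measured from the tabletop."""
    table_points = [
        (x, y, z) for (x, y, z) in points_3d
        if x > 0.0 and abs(z) < TABLE_HEIGHT_TOL  # in front of the robot, at table height
    ]
    return FAST_SPEED if len(table_points) >= MIN_TABLE_POINTS else SLOW_SPEED

# Example: plenty of tabletop visible ahead, so the robot keeps full speed.
scan = [(0.3 + 0.01 * i, 0.0, 0.001) for i in range(50)]
print(pick_speed(scan))  # 0.5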

Phase 2 Software

The movements of Phase 2 are basically the same as Phase 1 – we drive forward, staying centered with odometry. The speed is a bit lower than in Phase 1 because the laser is also looking for the block (a sketch of the block detection step follows the list):

  • The laser scan is projected to 3D, and we filter out any points that are part of the table based on height. These remaining points are then clustered and the clusters are analyzed for size.
  • If a cluster is a good candidate for the block, the robot turns towards the block (using, you guessed it, dead reckoning from odometry).
  • The robot then drives towards the block using a simple control loop to keep the heading.
  • Once the robot arrives at the block, it drives straight until a cliff sensor trips.
  • At that point, the robot stops the wheel on the side of the tripped cliff sensor and drives the other wheel very slowly forward so that we align the front of the robot with the edge of the table – ensuring the block has been pushed off the table.
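Here is a similarly hypothetical Python sketch of the block-detection step: remove the points that belong to the tabletop, then group what remains into clusters by distance and keep only the clusters large enough to be the block. The thresholds and data format are assumptions, not the actual firmware values.

# Illustrative sketch only (placeholder thresholds, not the real firmware).
import math

TABLE_HEIGHT_TOL = 0.02  # m; points within this band are treated as tabletop
CLUSTER_GAP = 0.05       # m; points farther apart than this start a new cluster
MIN_BLOCK_POINTS = 5     # clusters smaller than this are treated as noise

def find_block_candidates(points_3d):
    """points_3d: (x, y, z) tuples ordered by scan angle, z measured from the tabletop."""
    above_table = [p for p in points_3d if p[2] > TABLE_HEIGHT_TOL]
    clusters, current = [], []
    for p in above_table:
        if current and math.dist(p[:2], current[-1][:2]) > CLUSTER_GAP:
            clusters.append(current)   # gap detected: close the current cluster
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= MIN_BLOCK_POINTS]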

Phase 3 Software

The final phase is the most complex, but not by much. As with the earlier phases, the robot moves down the table finding the block:

  • Unlike in Phase 2, the robot actually approaches a pose just behind the block.
  • Once that pose has been reached, the robot tilts the laser back to level and finds the goal.
  • The robot then turns towards the goal in the same way it first turned towards the block.
  • The robot then approaches the goal using the same simple control loop, and in the process ends up pushing the block to the goal.

All of the software for my Tablebot is available on GitHub.

RoboGames video

Jim Dinunzio, a member of the Homebrew Robotics Club, took a video during the actual competition at RoboGames, so you can see the winning set of runs:

Visualization

To make development easier, I also wrote a Python GUI that renders the table, the robot odometry trail, the laser data, and detected goals and cubes.

Fun with math

Along the way I actually ran into a bug in the ARM CMSIS DSP library. I used the arm_sin_cos_f32() function to compute my odometry:

arm_sin_cos_f32(system_state.pose_th * 57.2958f, &sin_th, &cos_th);
system_state.pose_x += cos_th * d;
system_state.pose_y += sin_th * d;
system_state.pose_th = angle_wrap(system_state.pose_th + dth);

This function takes the angle (in degrees!) and returns the sine and cosine of the angle using a lookup table and some interesting interpolation. With the visualization of the robot path, I noticed the robot odometry would occasionally jump to the side and backwards – which made no sense.

Further investigation showed that for very small negative angles, arm_sin_cos_f32 returned huge values. I dug deeper into the code and found that there are several different versions out there:

  • The version from my older STM32 library had this particular issue at very small negative numbers. The same bug was still present in the official CMSIS-DSP repository on ARM's GitHub account.
  • The version in the current STM32 library had a fix for this spot – but that fix then broke the function for an entire quadrant!

The issue turned out to be quite simple:

  • The code uses a 512 element lookup table.
  • For a given angle, it has to interpolate between the previous and next entry in the table.
  • If your angle fell between the 511th entry and the next one (which should be the 0th entry due to wraparound), the code instead used whatever random value sat in the next memory slot to interpolate with. At one point, this resulted in sin(-1/512) returning outrageous values like 30. A sketch of this failure mode follows below.
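The sketch below reconstructs that failure mode in Python. The real code is C and reads whatever happens to sit in memory past the end of the table; here a deliberately bogus extra value stands in for that garbage.

# Rough Python reconstruction of the wraparound bug (not the actual CMSIS-DSP source).
import math

N = 512
table = [math.sin(2.0 * math.pi * i / N) for i in range(N)]
garbage_after_table = 12345.0  # stand-in for whatever random memory follows the table in C

def sin_lookup_buggy(angle_rad):
    idx = (angle_rad / (2.0 * math.pi)) * N % N   # fractional table index in [0, N)
    i, frac = int(idx), (angle_rad / (2.0 * math.pi)) * N % N - int(idx)
    nxt = table[i + 1] if i + 1 < N else garbage_after_table  # no wraparound!
    return table[i] + frac * (nxt - table[i])

def sin_lookup_fixed(angle_rad):
    idx = (angle_rad / (2.0 * math.pi)) * N % N
    i, frac = int(idx), idx - int(idx)
    nxt = table[(i + 1) % N]  # wrap back to entry 0 between the 511th and 0th entries
    return table[i] + frac * (nxt - table[i])

angle = -0.5 * 2.0 * math.pi / N          # a very small negative angle
print(sin_lookup_buggy(angle))            # a huge nonsense value driven by the garbage entry
print(sin_lookup_fixed(angle))            # approximately sin(angle), a tiny negative number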

With that bug fixed, odometry worked flawlessly. As it turned out, the same function (and the same bug) existed in some brushless motor control code at work.

RoboGames wrap up

It is awesome that RoboGames is back! This little robot won’t be making another appearance, but I am starting to work on a RoboMagellan robot for next year.

CLAIRE and euRobotics: all questions answered on humanoid robotics https://robohub.org/claire-and-eurobotics-all-questions-answered-on-humanoid-robotics/ Tue, 20 Dec 2022 11:42:18 +0000 https://robohub.org/?p=206188

On 9 December, CLAIRE and euRobotics jointly hosted an All Questions Answered (AQuA) event. This one-hour session focussed on humanoid robotics, and participants could ask questions regarding the current and future state of AI, robotics and human augmentation in Europe.

The questions were fielded by an expert panel, comprising:

  • Rainer Bischoff, euRobotics
  • Wolfram Burgard, Professor of Robotics and AI, University of Technology Nuremberg
  • Francesco Ferro, CEO, PAL Robotics
  • Holger Hoos, Chair of the Board of Directors, CLAIRE

The session was recorded and you can watch in full below:

The original “I, Robot” had a Frankenstein complex https://robohub.org/the-original-i-robot-had-a-frankenstein-complex/ Wed, 09 Nov 2022 09:32:38 +0000 https://robohub.org/?p=205933 Eando Binder’s Adam Link scifi series predates Isaac Asimov’s more famous robots, posing issues in trust, control, and intellectual property.

Read more about these challenges in my Science Robotics article here.

And yes, there’s a John Wick they-killed-my-dog scene in there too.

Snippet for the article with some expansion:

In 1939, Eando Binder began a short story cycle about a robot named Adam Link. The first story in Binder’s series was titled “I, Robot.” That clever phrase would be recycled by Isaac Asimov’s publisher (against Asimov’s wishes) for his famous short story cycle, started in 1940, about the Three Laws of Robotics. But the Binder series had another influence on Asimov: the stories explicitly related Adam’s poor treatment to how humans reacted to the Creature in Frankenstein. (After the police killed his dog - did I mention John Wick? - and put him in jail, Adam conveniently finds a copy of Mary Shelley’s Frankenstein, and the penny drops on why everyone is so mean to him…) In response, Asimov coined the term “the Frankenstein Complex” in his stories[1], with his characters stating that the Three Laws of Robotics gave humans the confidence in robots to overcome this neurosis.

Note that the Frankenstein Complex is different from the Uncanny Valley. In the Uncanny Valley, the robot is creepy because it almost, but not quite, looks and moves like a human or animal; in the Frankenstein Complex, people believe that intelligent robots, regardless of what they look like, will rise up against their creators.

Whether humans really have a Frankenstein Complex is a source of endless debate. In a seminal paper, Frederic Kaplan presented the baseline assessment of cultural differences and the role of popular media in trust of robots that everyone still uses[2]. Humanoid robotics researchers have even developed a formal measure of a user’s perception of the Frankenstein Complex[3], so that group of HRI researchers believes the Frankenstein Complex is a real phenomenon. But Binder’s Adam Link story cycle is also worth reexamining because it foresaw two additional challenges for robots and society that Asimov and other early writers did not: what is the appropriate form of control, and can a robot own intellectual property?

You can get the Adam Link stories from the web as individual stories published in the online back issues of Amazing Stories, but it is probably easier to get the story collection here. Binder later produced a fix-up novel in which he organized the stories into a chronology and added segues between the stories.

If you’d like to learn more, the references below are a good starting point.

References

[1] Frankenstein Monster, Encyclopedia of Science Fiction, https://sf-encyclopedia.com/entry/frankenstein_monster, accessed July 28, 2022

[2] F. Kaplan, “Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots,” International Journal of Humanoid Robotics, 1–16 (2004)

[3] Syrdal, D.S., Nomura, T., Dautenhahn, K. (2013). The Frankenstein Syndrome Questionnaire – Results from a Quantitative Cross-Cultural Survey. In: Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U. (eds) Social Robotics. ICSR 2013. Lecture Notes in Computer Science(), vol 8239. Springer, Cham. https://doi.org/10.1007/978-3-319-02675-6_27

Integrated Task and Motion Planning (TAMP) in robotics https://robohub.org/integrated-task-and-motion-planning-tamp-in-robotics/ Fri, 16 Sep 2022 16:18:54 +0000 https://robohub.org/?p=205525

In the previous post, we introduced task planning in robotics. This field broadly involves a set of planning techniques that are domain-independent: That is, we can design a domain which describes the world at some (typically high) level of abstraction, using some modeling language like PDDL. However, the planning algorithms themselves can be applied to any domain that can be modeled in that language, and critically, to solve any valid problem specification within that domain.

The key takeaway of the last post was that task planning is ultimately search. These search problems are often challenging and grow exponentially with the size of the problem, so it is no surprise that task planning is often symbolic: There are relatively few possible actions to choose from, with a relatively small set of finite parameters. Otherwise, search is prohibitively expensive even in the face of clever algorithms and heuristics.

Bridging the gap between this abstract planning space and the real world, which we deeply care about in robotics, is hard. In our example of a mobile robot navigating a household environment, the abstraction might assume the following: given that we know two rooms are connected, a plan that takes our robot from room A to room B is guaranteed to execute successfully. Of course, this is not necessarily true. We might come up with a perfectly valid abstract plan, but then fail to generate a dynamically feasible motion plan through a narrow hallway, or fail to execute a perfectly valid motion plan on our real robot hardware.

This is where Task and Motion Planning (TAMP) comes in. What if our planner spends more effort deliberating about the more concrete aspects of a plan before executing it? This presents a key tradeoff: more up-front computation, but a lower risk of failing at runtime. In this post we will explore a few things that differentiate TAMP from “plain” task planning, and dive into some detailed examples with the pyrobosim and PDDLStream software tools.

Some motivating examples

Before we formalize TAMP further, let’s consider some tricky examples you might encounter with purely symbolic planning in robotics applications.

In this first example, our goal is to pick up an apple. In purely symbolic planning where all actions have the same cost, there is no difference in navigating to table0 and table1, both of which have apples. However, you will notice that table0 is in an unreachable location. Furthermore, if we decide to embed navigation actions with a heuristic cost such as straight-line distance from the starting point, our heuristic below will prefer table0 because it’s closer to the robot’s starting position than table1.

It wouldn’t be until we try to refine our plan — for example, using a motion planner to search for a valid path to table0 in the unreachable room — that we would fail, have to update our planning domain to somehow flag that main_room and unreachable are disconnected, and then replan with this new information.

Pathological task planning example for goal-directed navigation.
Both table0 and table1 can lead to solving the goal specification of holding an apple, but table0 is completely unreachable.

In this second example, we want to place a banana on a desk. As with the previous example, we could choose to place this object on either desk0 or desk1. However, in the absence of more information — and especially if we continue to treat nearby locations as lower cost — we may plan to place banana0 on desk0 and fail to execute at runtime because of the other obstacles.

Here, some alternative solutions would include placing banana0 on desk1, or moving one of the other objects (water0 or water1) out of the way to enable banana0 to fit on desk0. Either way, we need some notion of collision checking to enable our planner to eliminate the seemingly optimal, but in practice infeasible, plan of simply placing the object on desk0.

Pathological task planning example for object manipulation.
Placing banana0 on either desk0 or desk1 will satisfy the goal, but desk0 has other objects that would lead to collisions. So, banana0 must either be placed on desk1, or the objects need to be rearranged and/or moved elsewhere to allow banana0 to fit on desk0.

In both cases, what we’re missing from our purely symbolic task planner is the ability to consider the feasibility of abstract actions before spitting out a plan and hoping for the best. Specifically for embodied agents such as robots, which move in the physical world, symbolic plans need to be made concrete through motion planning. As seen in these two examples, what if our task planner also required the existence of a specific path to move between two locations, or a specific pose for placing objects in a cluttered space?

What is Task and Motion Planning?

In our examples, the core issue is that if our task planning domain is too abstract, a seemingly valid plan is likely to fail down the line when we call a completely decoupled motion planner to try to execute some portion of that plan. So, task and motion planning is exactly as its name states — jointly thinking about tasks and motion in a single planner. As Garrett et al. put it in their 2020 survey paper, “TAMP actually lies between discrete “high-level” task planning and continuous “low-level” motion planning”.

However, there is no free lunch. When considering all the fine details up front in deliberative planning, search becomes expensive very quickly. In symbolic planning, an action may have a discrete, finite list of possible goals (let’s say somewhere around 5-10), so it may be reasonable to exhaustively search over these and find the one parameter that is optimal according to our model. When we start thinking about detailed motion plans that have a continuous parameter space spanning infinite possible solutions, this becomes intractable. So, several approaches to TAMP will apply sampling-based techniques to make planning work in continuous action spaces.

Another way to ensure TAMP is practical is to leverage hierarchy. One popular technique for breaking down symbolic planning into manageable pieces is Hierarchical Task Networks (HTNs). In these 2012 lecture slides, Nilufer Onder mentions “It would be a waste of time to construct plans from individual operators. Using the built-in hierarchy helps escape from exponential explosion.” An example of hierarchical planning is shown in the diagram below. Using this diagram, you can explore the benefits of hierarchy; for example, this planner would never have to even consider how to open a door if the abstract plan didn’t require going down the hallway.

An example of hierarchical planning for a robot, where high-level, or abstract, plans for a robot could be refined into lower-level, or concrete, actions.
Source: Automated Planning and Acting (2016)

Hierarchical planning is great in that it helps prune infeasible plans before spending time producing detailed, low-level plans. However, in this space the mythical downward refinement property is often cited. To directly quote the 1991 paper by Bacchus and Yang, this property states that “given that a concrete-level solution exists, every abstract solution can be refined to a concrete-level solution without backtracking across abstract levels”. This is not always (and I would argue rarely) achievable in robotics, so backtracking in hierarchical planning is largely unavoidable.

To this end, another strategy behind TAMP has to do with commitment in sampling parameters during search. In the literature, you will see many equivalent words thrown around, but I find the main distinction is between the following strategies:

  • Early-commitment (or binding) strategies will sample action parameters from continuous space before search, effectively converting the problem to a purely discrete task planning problem.
  • Least-commitment (or optimistic) strategies will instead come up with a purely symbolic plan skeleton. If that skeleton is feasible, then the necessary parameter placeholders are filled by sampling.

Flowcharts representing two extreme varieties of sampling-based TAMP.
*H-CSP = hybrid constraint satisfaction problem
Source: Garrett et al. (2020), Integrated Task and Motion Planning

Both strategies have advantages and disadvantages, and in practice modern TAMP methods will combine them in some way that works for the types of planning domains and problems being considered. Also, note that in the diagram above both strategies have a loop back to the beginning when a solution is not found; so backtracking remains an unavoidable part of planning.

One key paper that demonstrated the balance of symbolic search and sampling was Sampling-based Motion and Symbolic Action Planner (SMAP) by Plaku and Hager in 2010. Around the same time, in 2011, Leslie Kaelbling and Tomás Lozano-Pérez presented Hierarchical Planning in the Now (HPN), which combined hierarchy and sampling-based techniques for TAMP. However, the authors themselves admitted the sampling part left something to be desired. There is a great quote in this paper which foreshadows some of the other work that would come out of their lab:

“Because our domains are infinite, we cannot consider all instantiations of the operations. Our current implementation of suggesters only considers a small number of possible instantiations of the operations. We could recover the relatively weak properties of probabilistic completeness by having the suggesters be generators of an infinite stream of samples, and managing the search as a non-deterministic program over those streams.”

– Leslie Pack Kaelbling and Tomás Lozano-Pérez (2011), Hierarchical Planning in the Now.

Directly following this quote is the work their student Caelan Garrett took on — first in the creation of STRIPStream in 2017 and then PDDLStream in 2018. The astute reader will have noticed that PDDLStream is the actual software used in these blog posts, so take this “literature review” with this bias in mind, and keep reading if you want to learn more about TAMP with this specific tool.

If you want to know more about TAMP in general, recent survey papers, such as Garrett et al.'s Integrated Task and Motion Planning, are a useful starting point.

Mobile robot example, revisited

To illustrate the benefits of integrated TAMP, we’ll continue the same mobile robotics example from the previous post. In this problem,

  • The robot’s goal is to place the apple on the table.
  • Navigation now requires coming up with a goal pose (which is a continuous parameter), as well as the actual path from start to goal. For this example, we are using a Rapidly-exploring Random Tree (RRT), but you could swap in any other path-finding algorithm.
  • Placing an object now requires sampling a valid pose that is inside the placement surface polygon and does not collide with other objects on that surface.

As you read the following list explaining this problem, make sure you scroll through the slideshow below to get a visual representation.

STEP 1: Looking at the state of the world, you can see how a purely symbolic task planner would output a relatively simple plan: pick the apple, move to the table, and place the apple on the table. In the context of TAMP, this now represents a plan skeleton with several parameters that are yet to be filled — specifically,

  • ?pt is the pose of the robot when navigating to the table
  • ?path is the actual output of our motion planner to get to ?pt
  • ?pa-1 is the new pose of the apple when placed on the table (which follows from its initial pose ?pa-0)


STEP 2: To make the problem a little simpler, we made it such that every location has a discrete, finite set of possible navigation locations corresponding to the edges of its polygon. So looking at the table location, you see there are 4 possible navigation poses pt-T, pt-B, pt-L, and pt-R corresponding to the top, bottom, left, and right sides, respectively. Since this set of locations is relatively small, we can sample these parameters up front (or eagerly) at the start of search.

STEP 3: Our move action can now have different instantiations for the goal pose ?pt that are enumerated during search. This is in contrast with the ?path argument, which must be sampled by calling our RRT planner. We don’t want to do this eagerly because the space of paths is continuous, so we prefer to defer sampling of this parameter. If our action has a cost associated with the length of a path, we could imagine that the lowest-cost action would be to navigate to the left side of the table (pt-L), and some randomly sampled path (path42) may describe how we get there.

STEP 4: Next comes the place action, which now must include a valid collision-free pose for the apple on the table. Because of how we set up our problem, our robot cannot find a valid placement pose when approaching from the left side of the table. So, we must backtrack.

STEP 5: After backtracking, we need to find an alternative navigation pose for the table (?pt). Given our environment, the only other feasible location is the bottom side of the table (pt-B), as the walls block the robot from the top and right sides and it would be impossible to find a valid path with our RRT. However, when the robot is at the bottom side of the table, it can also sample a valid placement pose! In our example, the placeholder ?pa-1 is therefore satisfied with some randomly sampled pose pa29.

STEP 6: … And there you have it! A valid plan that defines a sequence of symbolic actions (pick, move, place) along with the necessary navigation pose, path to that pose, and placement location for the apple. It’s not optimal, but it is probabilistically complete!

(1/6) By being optimistic about all the continuous parameters related to motion, we can reach a potential goal state with relative ease.

(2/6) Since the navigation poses around the desk and the table are finite, we can sample them eagerly; that is, we enumerate all options up front in planning.

(3/6) Once we commit to a navigation pose around the table, we can continue filling in our plan by sampling a feasible trajectory from the robot’s current pose to the target pose at the table.

(4/6) Next, we need to sample a placement pose for the apple. Suppose in this case we fail to sample a collision-free solution based on the robot’s current location.

(5/6) This means we need to backtrack and consider a different navigation pose, thereby a different motion plan to this new pose.

(6/6) From this new pose, even though the trajectory is longer and therefore higher-cost, we can sample a valid placement pose for the apple and finally complete our task and motion plan.

Now, suppose we change our environment such that we can only approach the table from the left side, so there is no way to directly find a valid placement pose for the apple. Using the same planner, we should eventually converge on a task and motion plan that rearranges the objects in the world — that is, one that requires moving one of the other objects on the table to make room for the apple.
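The overall refinement loop from the six steps above can be summarized in a short sketch. Everything here is illustrative: the sampler and planner are passed in as plain callables rather than any real PDDLStream or pyrobosim API. Still, it shows the eager enumeration of navigation poses, the deferred path and placement sampling, and the backtracking when placement fails.

# Sketch of the sample-and-backtrack refinement described in steps 1-6 (not a real API).
def refine_plan_skeleton(nav_poses, plan_path, sample_placement, max_attempts=10):
    """Bind the skeleton's continuous parameters (?pt, ?path, ?pa-1), backtracking as needed.

    nav_poses:        finite list of candidate navigation poses (sampled eagerly)
    plan_path:        callable(nav_pose) -> path or None (e.g., an RRT query)
    sample_placement: callable(nav_pose) -> collision-free placement pose or None
    """
    for nav_pose in nav_poses:
        path = plan_path(nav_pose)
        if path is None:
            continue                      # this side of the table is unreachable
        for _ in range(max_attempts):
            place_pose = sample_placement(nav_pose)
            if place_pose is not None:    # every placeholder is bound: plan is grounded
                return [("pick", "apple"), ("move", nav_pose, path), ("place", "apple", place_pose)]
        # no valid placement from this pose: backtrack and try another navigation pose
    return None

# Toy usage mirroring the example: placement fails from the left side, succeeds from the bottom.
plan = refine_plan_skeleton(
    nav_poses=["pt-L", "pt-B"],
    plan_path=lambda p: f"path_to_{p}",
    sample_placement=lambda p: None if p == "pt-L" else (0.4, 0.1, 0.0),
)
print(plan)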

Implementing TAMP with PDDLStream

We will now revisit our pathological examples from the beginning of this post. To do this, we will use PDDLStream for planning and pyrobosim as a simple simulation platform. For quick background on PDDLStream, you may refer to this video.

The key idea behind PDDLStream is that it extends PDDL with a notion of streams (remember the earlier quote from the Hierarchical Planning in the Now paper?). Streams are generic, user-defined Python functions that sample continuous parameters; a valid sample certifies that stream and provides any necessary predicates that (usually) act as preconditions for actions. Also, PDDLStream has an adaptive technique that balances exploration (searching for discrete task plans) vs. exploitation (sampling to fill in continuous parameters).

Goal-directed navigation

We can use PDDLStream to augment our move action such that it includes metric details about the world. As we saw in our illustrative example, we now must factor in the start and goal pose of our robot, as well as a concrete path between those poses.

As additional preconditions for this action, we must ensure that:

  • The navigation pose is valid given the target location (NavPose)
  • There must be a valid path from the start to goal pose (Motion)

Additionally, we are now able to use more realistic costs for our action by calculating the actual length of the path produced by the RRT! In the separate file describing the streams for this action, the s-navpose stream certifies the NavPose predicate and the s-motion stream certifies the Motion predicate.

The Python implementations for these functions would then look something like the sketch below. Notice that the get_nav_poses function returns a finite set of poses, so the output is a simple Python list. On the other hand, sample_motion can continuously spit out paths from our RRT, and is implemented as a generator:
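A rough sketch of what these two functions could look like is shown below. This is not the actual pyrobosim code (the geometry and helper data are invented), but it highlights the contrast the text describes: a finite stream that returns a plain list versus a continuous stream implemented as a generator.

# Illustrative sketch, not the actual pyrobosim/PDDLStream implementation.
import random

def get_nav_poses(location):
    """Finite stream: the four poses at the edges of a location's footprint, as a plain list."""
    cx, cy, offset = location["x"], location["y"], location["radius"]
    return [(cx, cy + offset, -1.57), (cx, cy - offset, 1.57),   # top, bottom
            (cx - offset, cy, 0.0), (cx + offset, cy, 3.14)]     # left, right

def sample_motion(start_pose, goal_pose):
    """Continuous stream: lazily yield candidate paths (here, fake 'RRT' waypoint lists)."""
    while True:
        midpoint = ((start_pose[0] + goal_pose[0]) / 2 + random.uniform(-0.5, 0.5),
                    (start_pose[1] + goal_pose[1]) / 2 + random.uniform(-0.5, 0.5))
        yield [start_pose[:2], midpoint, goal_pose[:2]]

table = {"x": 2.0, "y": 1.0, "radius": 0.6}
poses = get_nav_poses(table)                     # enumerated eagerly, all at once
paths = sample_motion((0.0, 0.0, 0.0), poses[0])
print(next(paths))                               # one candidate path, drawn on demand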

Putting this new domain and streams together, we can solve our first pathological example from the introduction. In the plan below, the robot will compute a path to the farther away, but reachable room to pick up an apple and satisfy the goal.

Object manipulation

Similarly, we can extend our place action to now include the actual poses of objects in the world. Specifically,

  • The ?placepose argument defines the target pose of the object.
  • The Placeable predicate is certified by a s-place stream.
  • The IsCollisionFree predicate is actually a derived predicate that checks individual collisions between the target object and all other objects at that location.
  • Each individual collision check is determined by the CollisionFree predicate, which is certified by a t-collision-free stream.

The Python implementation for sampling placement poses and checking for collisions may look like the sketch below. Here, sample_place_pose is our generator for placement poses, whereas test_collision_free is a simple Boolean (true/false) check.
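Again, the sketch below is only an approximation of those two functions (the actual pyrobosim code differs): a rejection-sampling generator for poses inside the surface, and a Boolean collision check against the other objects, modeled here as discs for simplicity.

# Illustrative sketch, not the actual pyrobosim implementation.
import random

def sample_place_pose(surface, obj_radius):
    """Generator: yield random poses inside the placement surface (here, an axis-aligned box)."""
    xmin, ymin, xmax, ymax = surface["bounds"]
    while True:
        yield (random.uniform(xmin + obj_radius, xmax - obj_radius),
               random.uniform(ymin + obj_radius, ymax - obj_radius))

def test_collision_free(pose, obj_radius, other_objects):
    """True if a disc of obj_radius at pose overlaps none of the other objects (also discs)."""
    for other in other_objects:
        dx, dy = pose[0] - other["pose"][0], pose[1] - other["pose"][1]
        if (dx * dx + dy * dy) ** 0.5 < obj_radius + other["radius"]:
            return False
    return True

desk = {"bounds": (0.0, 0.0, 1.0, 0.5)}
clutter = [{"pose": (0.2, 0.25), "radius": 0.1}, {"pose": (0.8, 0.25), "radius": 0.1}]
sampler = sample_place_pose(desk, obj_radius=0.05)
print(next(p for p in sampler if test_collision_free(p, 0.05, clutter)))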

By expanding our domain to reason about the feasibility of object placement, we can similarly solve the second pathological example from the introduction. In the first video, we have an alternative location desk1 where the robot can place the banana and satisfy the goal.

In the second video, we remove the alternative desk1. The same task and motion planner then produces a solution that involves picking up one of the objects on desk0 to make room to later place the banana there.

You can imagine extending this to a more realistic system — that is, one that is not a point robot and has an actual manipulator — that similarly checks the feasibility of a motion plan for picking and placing objects. While it wasn’t the main focus of the work, our Active Learning of Abstract Plan Feasibility work did exactly this with PDDLStream. Specifically, we used RRTs to sample configuration-space paths for a Franka Emika Panda arm and did collision checking using a surrogate model in PyBullet!

Conclusion

In this post we introduced the general concept of task and motion planning (TAMP). In theory, it’s great to deliberate more — that is, really think more about the feasibility of plans down to the metric level — but with that comes more planning complexity. However, this can pay off in that it reduces the risk of failing in the middle of executing a plan and having to stop and replan.

We introduced 3 general principles that can make TAMP work in practice:

  • Hierarchy, to determine the feasibility of abstract plans before planning at a lower level of refinement.
  • Continuous parameter spaces, and techniques like sampling to make this tractable.
  • Least-commitment strategies, to come up with symbolic plan skeletons before spending time with expensive sampling of parameters.

We then dug into PDDLStream as one tool for TAMP, which doesn’t do much in the way of hierarchy, but certainly tackles continuous parameter spaces and least-commitment strategies for parameter binding. We went through a few examples using pyrobosim, but you can access the full set of examples in the pyrobosim documentation for TAMP.

The PDDLStream repository has many more examples that you can check out. And, of course, there are many other task and motion planners out there that focus on different things — such as hierarchy without continuous parameters, or factoring in other common objectives such as temporal aspects and resource consumption.

Hope you have enjoyed these posts! If the tools shown here give you any cool ideas, I would love to hear about them, so feel free to reach out.


You can read the original article at Roboticseabass.com.

Task Planning in robotics https://robohub.org/task-planning-in-robotics/ Wed, 31 Aug 2022 10:34:49 +0000 https://robohub.org/?p=205378

Suppose we have a robot in a simple world like the one below. Let’s consider commanding our robot to perform a task such as “take the apple from the shelf and put it on the table”.

Simple task planning example world: A robot can move between a finite set of locations, and can pick and place objects at those locations.

I would argue we humans have pretty good intuition for how a robot could achieve this task. We could describe what the robot should do by breaking the solution down into individual actions. For example:

  • Move to the shelf
  • Pick up the apple
  • Move back to the table
  • Place the apple

This is fine for our simple example — and in fact, is also fine for many real-world robotics applications — but what if the task gets more complicated? Suppose we add more objects and locations to our simple world, and start describing increasingly complex goals like “make sure all the apples are on the table and everything else is in the garbage bin”? Furthermore, what if we want to achieve this in some optimal manner like minimizing time, or number of total actions? Our reasoning quickly hits its limit as the problem grows, and perhaps we want to consider letting a computer do the work.

This is what we often refer to as task planning: autonomously reasoning about the state of the world using an internal model and coming up with a sequence of actions, or a plan, to achieve a goal.

In my opinion, task planning is not as popular as other areas in robotics you might encounter because… quite frankly, we’re just not that good at robotics yet! What I mean is that robots are complicated, and many commercially available systems are automating simple, repetitive tasks that don’t require this level of high-level planning — it would be overkill! There are many lower-level problems to solve in robotics such as standing up good hardware, robust sensing and actuation, motion planning, and keeping costs down. While task planning skews towards more academic settings for this reason, I eagerly await the not-so-distant future where robots are even more capable and the entire industry is thinking more about planning over longer horizons and increasingly sophisticated tasks.

In this post, I will introduce the basic pieces behind task planning for robotics. I will focus on Planning Domain Definition Language (PDDL) and take you through some basic examples both conceptually and using the pyrobosim tool.

Defining a planning domain

Task planning requires a model of the world and how an autonomous agent can interact with the world. This is usually described using state and actions. Given a particular state of the world, our robot can take some action to transition to another state.

Over the years, there have been several languages used to define domains for task planning. The first task planning language was the STanford Research Institute Problem Solver (STRIPS) in 1971, made popular by the Shakey project.

Since then, several related languages for planning have come up, but one of the most popular today is Planning Domain Definition Language (PDDL). The first PDDL paper was published in 1998, though there have been several enhancements and variations tacked on over the years. To briefly describe PDDL, it’s hard to beat the original paper.

“PDDL is intended to express the “physics” of a domain, that is, what predicates there are, what actions are possible, what the structure of compound actions is, and what the effects of actions are.”

– GHALLAB ET AL. (1998), PDDL – THE PLANNING DOMAIN DEFINITION LANGUAGE

The point of languages like PDDL is that they can describe an entire space of possible problems where a robot can take the same actions, but in different environments and with different goals. As such, the fundamental pieces of task planning with PDDL are a task-agnostic domain and a task-specific problem.

Using our robot example, we can define:

  • Domain: The task-agnostic part
    • Predicates: (Robot ?r), (Object ?o), (Location ?loc), (At ?r ?loc), (Holding ?r ?o), etc.
    • Actions: move(?r ?loc1 ?loc2), pick(?r ?o ?loc), place(?r ?o ?loc)
  • Problem: The task-specific part
    • Objects: (Robot robot), (Location shelf), (Location table), (Object apple)
    • Initial state: (HandEmpty robot), (At robot table), (At apple shelf)
    • Goal specification: (At apple table)

Since domains describe how the world works, predicates and actions have associated parameters, which are denoted by ?, whereas a specific object does not have any special characters to describe it. Some examples to make this concrete:

  • (Location ?loc) is a unary predicate, meaning it has one parameter. In our example, shelf and table are specific location instances, so we say that (Location table) and (Location shelf) are part of our initial state and do not change over time.
  • (At ?r ?loc) is a binary predicate, meaning it has two parameters. In our example, the robot may begin at the table, so we say that (At robot table) is part of our initial state, though it may be negated as the robot moves to another location.

PDDL is an action language, meaning that the bulk of a domain is defined as actions (sometimes referred to as operators) that interact with our predicates above. Specifically, an action contains:

  • Parameters: Allow the same type of action to be executed for different combinations of objects that may exist in our world.
  • Preconditions: Predicates which must be true at a given state to allow taking the action from that state.
  • Effects: Changes in state that occur as a result of executing the action.

For our robot, we could define a move action as follows:

(:action move
  :parameters (?r ?loc1 ?loc2)
  :precondition (and (Robot ?r)
                     (Location ?loc1)
                     (Location ?loc2)
                     (At ?r ?loc1))
  :effect (and (At ?r ?loc2)
               (not (At ?r ?loc1)))
)

 

 

In our description of move, we have

  • Three parameters: ?r, ?loc1, and ?loc2.
  • Unary predicates in the preconditions that narrow down the domain of parameters that make an action feasible — ?r must be a robot and ?loc1 and ?loc2 must be locations, otherwise the action does not make sense.
  • Another precondition that is state-dependent: (At ?r ?loc1). This means that to perform a move action starting at some location ?loc1, the robot must already be in that location.

Note that in some cases, PDDL descriptions allow for typing, which lets you define the domain of actions inline with the parameters rather than explicitly including them as unary predicates — for example, :parameters (?r - Robot ?loc1 - Location ?loc2 - Location) … but this is just syntactic sugar.

Similarly, the effects of the action can add new predicates or negate existing ones (in STRIPS these were specified as separate addition and deletion lists). In our example, after performing a move action, the robot is no longer at its previous location ?loc1 and instead is at its intended new location ?loc2.

A similar concept can be applied to other actions, for example pick and place. If you take some time to process the PDDL snippets below, you will hopefully get the gist that our robot can manipulate an object only if it is at the same location as that object, and it is currently not holding something else.

(:action pick
  :parameters (?r ?o ?loc)
  :precondition (and (Robot ?r)
                     (Obj ?o)
                     (Location ?loc)
                     (HandEmpty ?r)
                     (At ?r ?loc)
                     (At ?o ?loc))
  :effect (and (Holding ?r ?o)
               (not (HandEmpty ?r))
               (not (At ?o ?loc)))
)

(:action place
  :parameters (?r ?o ?loc)
  :precondition (and (Robot ?r)
                     (Obj ?o)
                     (Location ?loc)
                     (At ?r ?loc)
                     (not (HandEmpty ?r))
                     (Holding ?r ?o))
  :effect (and (HandEmpty ?r)
               (At ?o ?loc)
               (not (Holding ?r ?o)))
)

So given a PDDL domain, we can now come up with a plan, or sequence of actions, to solve various types of problems within that domain … but how is this done in practice?

Task planning is search

There is good reason for all this overhead in defining a planning domain and a good language to express it in: At the end of the day, task planning is solved using search algorithms, and much of the literature is about solving complex problems as efficiently as possible. As task planning problems scale up, computational costs increase at an alarming rate — you will often see PSPACE-Complete and NP-Complete thrown around in the literature, which should make planning people run for the hills.

State-space search

One way to implement task planning is using our model to perform state-space search. Given our problem statement, this could either be:

  • Forward search: Start with the initial state and expand a graph until a goal state is reached.
  • Backward search: Start with the goal state(s) and work backwards until the initial state is reached.

Using our example, let’s see how forward state-space search would work given the goal specification (At apple table):

(1/6) In our simple world, the robot starts at the table. The only valid action from here is to move to the shelf.

(2/6) After moving to the shelf, the robot could either pick the apple or move back to the table. Since moving back to the table leads us to a state we have already seen, we can ignore it.

(3/6) After picking the apple, we could move back to the table or place the apple back on the shelf. Again, placing the apple would lead us to an already visited state.

(4/6) Once the robot is at the table while holding the apple, we can place the apple on the table or move back to the shelf.

(5/6) At this point, if we place the apple on the table we reach a goal state! The sequence of actions leading to the state is our resulting plan.

(6/6) As the number of variables increases, the search problem (and time to find a solution) grows exponentially.

Deciding which states to expand during search could be purely based on a predetermined traversal strategy, using standard approaches like breadth-first search (BFS), depth-first search (DFS), and the like. Whether we decide to add costs to actions or treat them all as unit cost (that is, an optimal plan simply minimizes the total number of actions), we could instead decide to use greedy or hill-climbing algorithms to expand the next state based on minimum cost. And finally, regardless of which algorithm we use, we probably want to keep track of states we have already previously visited and prune our search graph to prevent infinite cycles and expanding unnecessary actions.

In motion planning, we often use heuristics during search; one common example is the use of A* with the straight-line distance to a goal as an admissible heuristic. But what is a good heuristic in the context of task planning? How would you define the distance to a goal state without a handy metric like distance? Indeed, a great portion of the literature focuses on just this. Methods like Heuristic Search Planning (HSP) and Fast-Forward (FF) seek to define heuristics by solving relaxed versions of the problem, which includes removing preconditions or negative effects of actions. The de facto standard for state-space search today is a variant of these methods named Fast Downward, whose heuristic relies on building a causal graph to decompose the problem hierarchically — effectively taking the computational hit up front to transform the problem into a handful of approximate but smaller problems, an approach that proves itself worthwhile in practice.
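To ground the idea that task planning is search, here is a small, self-contained Python example that runs forward search over the pick/move/place world from this post, using a naive "is the goal satisfied yet?" heuristic. It is nothing like HSP, FF, or Fast Downward; it is just an illustration of expanding (state, action) transitions until a goal state is reached.

# Toy forward state-space search for the apple/table/shelf example (illustrative only).
import heapq

LOCATIONS = ["table", "shelf"]

def successors(state):
    robot_at, apple_at, holding = state
    result = [(f"move {robot_at} {loc}", (loc, apple_at, holding))
              for loc in LOCATIONS if loc != robot_at]
    if not holding and apple_at == robot_at:
        result.append((f"pick apple {robot_at}", (robot_at, "robot", True)))
    if holding:
        result.append((f"place apple {robot_at}", (robot_at, robot_at, False)))
    return result

def heuristic(state, goal_apple_at):
    """0 if the goal (At apple goal_apple_at) already holds, else 1 -- deliberately naive."""
    return 0 if state[1] == goal_apple_at and not state[2] else 1

def plan(start, goal_apple_at):
    frontier = [(heuristic(start, goal_apple_at), 0, start, [])]
    visited = {start}
    while frontier:
        _, cost, state, actions = heapq.heappop(frontier)
        if heuristic(state, goal_apple_at) == 0:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (cost + 1 + heuristic(nxt, goal_apple_at),
                                          cost + 1, nxt, actions + [action]))
    return None

# Robot starts at the table, apple on the shelf; goal: (At apple table).
print(plan(("table", "shelf", False), "table"))
# ['move table shelf', 'pick apple shelf', 'move shelf table', 'place apple table']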

Below is an example using pyrobosim and the PDDLStream planning framework, which specifies a more complex goal that uses the predicates we described above. Compared to our example in this post, the video below has a few subtle differences in that there is a separate class of objects to represent the rooms that the robot can navigate, and locations can have multiple placement areas (for example, the counter has separate left and right areas).

  • (At robot bedroom)
  • (At apple0 table0_tabletop)
  • (At banana0 counter0_left)
  • (Holding robot water0)


pyrobosim example showing a plan that sequences navigation, pick, and place actions.

Alternative search methods

While forward state-space search is one of the most common ways to plan, it is not the only one. There are many alternative search methods that build other data structures to avoid the computational complexity of expanding a full state-transition model as above; Graphplan and partial-order planning are two methods you will commonly see.

In general, these search methods tend to outperform state-space search in tasks where there are several actions that can be performed in any order relative to each other because they depend on, and affect, mutually exclusive parts of the world. One of the canonical simple examples is the “dinner date example” which is used to demonstrate Graphplan. While these slides describe the problem in more detail, the idea is that a person is preparing a dinner date where the end goal is that dinner and a present be ready while the house is clean — or (dinner present ¬garb). By thinking about the problem in this planning graph structure, we can gain the following insights:

  1. Planning seeks to find actions that can be executed independently because they are mutually exclusive (mutex). This is where the notion of partial ordering comes in, and why there are computational gains compared to explicit state-space search: Instead of separately searching through alternate paths where one action occurs before the other, here we simply say that the actions are mutex and we can figure out which to execute first after planning is complete.
  2. This problem cannot be solved at one level because cooking requires clean hands (cleanH) and wrapping the present requires quiet, whereas the two methods of taking out the garbage (carry and dolly) negate these predicates, respectively. So in the solution below, we must expand two levels of the planning graph and then backtrack to get a concrete plan. We first execute cook to remove the requirement on cleanH, and then perform the other two actions in any order — it does not matter.
  3. There is an alternative solution where at the first level we execute the wrap action, and at the second level we can execute cook and dolly (in either order) to achieve the goal. You can imagine this solution would be favored if we additionally required our dinner date hero to have clean hands before starting their date — gross!

Planning graph for the dinner date problem [Source].
Straight lines are preconditions and effects, lines with boxes are no-operation (no-op) links, and curved lines are mutual exclusion (mutex) links.
One solution to the problem is highlighted in blue by backtracking from the goal state.

To bring this all full-circle, you will sometimes find state-space search implementations that use methods like Graphplan on a relaxed version of the problem to estimate the cost to goal as a heuristic. If you want to dig into the details, I would recommend A Tutorial on Planning Graph-Based Reachability Heuristics, by Daniel Bryce and Subbarao Kambhampati.

Towards more complex tasks

Let’s get back to our mobile robot example. What if we want to describe more complex action preconditions and/or goal specifications? After all, we established at the start of this post that simple tasks probably don’t need the giant hammer that is a task planner. For example, instead of requesting that a specific object be placed at a specific location, we could move towards something more general like “all apples should be on the table”.

New example world containing objects, apple0 and apple1, which belong to the same object type (apple).

To explore this, let’s add predicates denoting objects belonging to categories. For example, (Type ?t) can define a specific type and (Is ?o ?t) to denote that an object belongs to such a type. Concretely, we could say that (Type apple), (Is apple0 apple), and (Is apple1 apple).

We can then add a new derived predicate (Has ?loc ?entity) to accomplish this. Derived predicates are just syntactic sugar — in this case, they let us succinctly define an entire space of possible states from our library of existing predicates, expressing more elaborate ideas in less text. For example:

  • (Has table apple0) is true if the object apple0 is at the location table.
  • (Has table apple) is true if at least one object of type apple is at the location table.

If we choose the goal state (Has table apple) for our example, a solution could involve placing either apple0 or apple1 on the table. One implementation of this derived predicate could look as follows, which makes use of the exists keyword in PDDL.

(:derived (Has ?loc ?entity)
  (or
    ; CASE 1: Entity is a specific object instance, e.g.,
    ; (Has table apple0)
    (and (Location ?loc)
         (Obj ?entity)
         (At ?entity ?loc)
    )
    ; CASE 2: Entity is a specific object category, e.g.,
    ; (Has table apple)
    (exists (?o)
      (and (Obj ?o)
           (Location ?loc)
           (Type ?entity)
           (Is ?o ?entity)
           (At ?o ?loc)
      )
    )
  )
)

Another example would be a HasAll derived predicate, which is true if and only if a particular location contains every object of a specific category. In other words, if you planned for (HasAll table apple), you would need both apple0 and apple1 to be relocated. This could look as follows, this time making use of the imply keyword in PDDL.

(:derived (HasAll ?loc ?objtype)
  (forall (?o)
    (imply
      (and (Obj ?o)
           (Type ?objtype)
           (Is ?o ?objtype))
      (and (Location ?loc)
           (At ?o ?loc))
    )
  )
)

You could imagine also using derived predicates as preconditions for actions, to similarly add abstraction in other parts of the planning pipeline. Imagine a robot that can take out your trash and recycling. A derived predicate could be useful to allow dumping a container into recycling only if all the objects inside the container are made of recyclable material; if there is any non-recyclable object, then the robot can either pick out the violating object(s) or simply toss the container in the trash.

Below is an example from pyrobosim where we command our robot to achieve the following goal, which now uses derived predicates to express locations and objects either as specific instance, or through their types:

  • (Has desk0_desktop banana0)
  • (Has counter apple1)
  • (HasNone bathroom banana)
  • (HasAll table water)


pyrobosim example showing task planning with derived predicates.

Conclusion

This post barely scratched the surface of task planning, but also introduced several resources where you can find out more. One central resource I would recommend is the Automated Planning and Acting textbook by Malik Ghallab, Dana Nau, and Paolo Traverso.

Some key takeaways are:

  1. Task planning requires a good model and modeling language to let us describe a task-agnostic planning framework that can be applied to several tasks.
  2. Task planning is search, where the output is a plan, or a sequence of actions.
  3. State-space search relies on clever heuristics to work in practice, but there are alternative partial-order methods that aim to make search tractable in a different way.
  4. PDDL is one of the most common languages used in task planning, where a single task-agnostic domain is used to solve several problems, each comprising a set of objects, an initial state, and a goal specification.
  5. PDDL is not the only way to formalize task planning; there are several alternative formulations in the literature worth exploring.

After reading this post, the following should now make a little more sense: Some of the more recent task planning systems geared towards robotics, such as the ROS2 Planning System (PlanSys2) and PDDLStream, leverage some of these planners mentioned above. Specifically, these use Fast Downward and POPF.

One key pitfall of task planning — at least in how we’ve seen it so far — is that we are operating at a level of abstraction that is not necessarily ideal for real-world tasks. Let’s consider these hypothetical scenarios:

  • If a plan involves navigating between two rooms, but the rooms are not connected or the robot is too large to pass through, how do we know this before we’re actually executing the plan?
  • If a plan involves placing 3 objects on a table, but the objects physically will not fit on that table, how do we account for this? And what if there is some other combination of objects that does fit and does satisfy our goal?

To address these scenarios, one option is to introduce more expert knowledge into our domain as new predicates to use in action preconditions. Another is to consider a richer space of action parameters that reasons about metric details (e.g., specific poses or paths to a goal) more than a purely symbolic domain would. In both cases, we are thinking more about action feasibility in task planning. This leads us to the field commonly known as task and motion planning (TAMP), which will be the focus of the next post — so stay tuned!

In the meantime, if you want to explore the mobile robot examples in this post, check out my pyrobosim repository and take a look at the task and motion planning documentation.


You can read the original article at Roboticseabass.com.

Building a python toolbox for robot behavior https://robohub.org/building-a-python-toolbox-for-robot-behavior/ Sun, 07 Aug 2022 09:00:26 +0000 https://robohub.org/?p=205116

If you’ve been subject to my posts on Twitter or LinkedIn, you may have noticed that I’ve done no writing in the last 6 months. Besides the whole… full-time job thing … this is also because at the start of the year I decided to focus on a larger coding project.

At my previous job, I stood up a system for task and motion planning (TAMP) using the Toyota Human Support Robot (HSR). You can learn more in my 2020 recap post. While I’m certainly able to talk about that work, the code itself was closed in two different ways:

  1. Research collaborations with Toyota Research Institute (TRI) pertaining to the HSR are in a closed community, with the exception of some publicly available repositories built around the RoboCup@Home Domestic Standard Platform League (DSPL).
  2. The code not specific to the robot itself was contained in a private repository in my former group’s organization, and furthermore is embedded in a massive monorepo.

Rewind to 2020: The original simulation tool (left) and a generated Gazebo world with a Toyota HSR (right).

So I thought: there are some generic utilities here that could be useful to the community. What would it take to strip the home service robotics simulation tools out of that setting and make them available as a standalone package? Also, how could I squeeze in improvements and learn interesting things along the way?

This post describes how these utilities became pyrobosim: A ROS2 enabled 2D mobile robot simulator for behavior prototyping.

What is pyrobosim?

At its core, pyrobosim is a simple robot behavior simulator tailored for household environments, but useful to other applications with similar assumptions: moving, picking, and placing objects in a 2.5D* world.

* For those unfamiliar, 2.5D typically describes a 2D environment with limited access to a third dimension. In the case of pyrobosim, this means all navigation happens in a 2D plane, but manipulation tasks occur at a specific height above the ground plane.

The intended workflow is:

  1. Use pyrobosim to build a world and prototype your behavior
  2. Generate a Gazebo world and run with a higher-fidelity robot model
  3. Run on the real robot!

Pyrobosim allows you to define worlds made up of entities. These are:

  • Robots: Programmable agents that can act on the world to change its state.
  • Rooms: Polygonal regions that the robot can navigate, connected by Hallways.
  • Locations: Polygonal regions that the robot cannot drive into, but may contain manipulable objects. Locations contain one or more Object Spawns. This allows having multiple object spawns in a single entity (for example, a left and right countertop).
  • Objects: The things that the robot can move to change the state of the world.

Main entity types shown in a pyrobosim world.

Given a static set of rooms, hallways, and locations, a robot in the world can then take actions to change the state of the world. The 3 main actions implemented are:

  • Pick: Remove an object from a location and hold it.
  • Place: Put a held object at a specific location and pose within that location.
  • Navigate: Plan and execute a path to move the robot from one pose to another.

As this is mainly a mobile robot simulator, the focus is more on navigation than on manipulation features. Picking and placing are idealized (which is why we can get away with a 2.5D world representation), but the idea is that the path planners and path followers can be swapped out to test different navigation capabilities.
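The "swapped out" idea can be pictured as a pair of small interfaces. This is only a sketch of the design pattern; the class names and methods below are invented for illustration and are not pyrobosim's actual hierarchy.

from abc import ABC, abstractmethod

class PathPlanner(ABC):
    """Anything that can turn a start and goal pose into a path (list of poses)."""
    @abstractmethod
    def plan(self, start, goal):
        ...

class PathFollower(ABC):
    """Anything that can step a robot along a previously planned path."""
    @abstractmethod
    def follow(self, robot, path):
        ...

class StraightLinePlanner(PathPlanner):
    def plan(self, start, goal):
        # Trivial planner: ignore obstacles and connect start to goal directly.
        return [start, goal]

# A robot object could then hold any planner/follower pair and be none the wiser,
# e.g. robot.path_planner = StraightLinePlanner()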

Another long-term vision for this tool is that the set of actions itself can be expanded. Some random ideas include moving furniture, opening and closing doors, or gaining information in partially observable worlds (for example, an explicit “scan” action).

Independently of the list of possible actions and their parameters, these actions can then be sequenced into a plan. This plan can be manually specified (“go to A”, “pick up B”, etc.) or the output of a higher-level task planner which takes in a task specification and outputs a plan that satisfies the specification.
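As a rough picture of what "a plan is a sequence of actions" might look like in code (all names here, including robot.execute_action, are invented for the example and are not pyrobosim's API):

from dataclasses import dataclass, field

@dataclass
class Action:
    """A single symbolic action with its parameters, e.g. navigate/pick/place."""
    name: str
    params: dict = field(default_factory=dict)

@dataclass
class Plan:
    """An ordered sequence of actions, executed one at a time."""
    actions: list

    def execute(self, robot):
        for action in self.actions:
            # Dispatch to the robot's matching capability; stop if anything fails.
            ok = robot.execute_action(action)
            if not ok:
                return False
        return True

# A manually specified plan, equivalent to "go to the table, pick the apple, ...":
plan = Plan(actions=[
    Action("navigate", {"target": "table"}),
    Action("pick", {"object": "apple"}),
    Action("navigate", {"target": "counter"}),
    Action("place", {"object": "apple"}),
])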

Execution of a sample action sequence in pyrobosim.

In summary: pyrobosim is a software tool where you can move an idealized point robot around a world, pick and place objects, and test task and motion planners before moving into higher-fidelity settings — whether it’s other simulators or a real robot.

What’s new?

Taking this code out of its original resting spot was far from a copy-paste exercise. While sifting through the code, I made a few improvements and design changes with modularity in mind: ROS vs. no ROS, GUI vs. no GUI, world vs. robot capabilities, and so forth. I also added new features with the selfish agenda of learning things I wanted to try… which is the point of a fun personal side project, right?

Let’s dive into a few key thrusts that made up this initial release of pyrobosim.

1. User experience

The original tool was closely tied to a single Matplotlib figure window that had to be open, and in general there were lots of shortcuts just to get the thing to work. In this redesign, I tried to more cleanly separate the modeling from the visualization, and the properties of the world itself from the properties of the robotic agent and the actions it can take in the world.

I also wanted to make the GUI itself a bit nicer. After some quick searching, I found this post that showed how to put a Matplotlib canvas in a PyQt5 GUI, so that's what I went for. For now, I started by adding a few buttons and edit boxes that allow interaction with the world. You can write down (or generate) a location name, see how the current path planner and follower work, and pick and place objects when arriving at specific locations.
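For reference, the general embedding pattern looks something like the bare-bones sketch below. This is not the actual pyrobosim GUI code, just the standard Matplotlib-in-PyQt5 recipe with a single button wired up.

import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton, QVBoxLayout, QWidget
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure

class WorldWindow(QMainWindow):
    """Minimal PyQt5 window with an embedded Matplotlib canvas and one button."""
    def __init__(self):
        super().__init__()
        self.canvas = FigureCanvas(Figure(figsize=(5, 4)))
        self.ax = self.canvas.figure.add_subplot(111)

        button = QPushButton("Replot")
        button.clicked.connect(self.replot)

        layout = QVBoxLayout()
        layout.addWidget(self.canvas)
        layout.addWidget(button)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

    def replot(self):
        # Redraw something simple; a simulator would draw the world here instead.
        self.ax.clear()
        self.ax.plot([0, 1, 2], [0, 1, 0])
        self.canvas.draw()

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = WorldWindow()
    window.show()
    sys.exit(app.exec_())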

In tinkering with this new GUI, I found a lot of bugs with the original code which resulted in good fundamental changes in the modeling framework. Or, to make it sound fancier, the GUI provided a great platform for interactive testing.

The last thing I did in terms of usability was provide users the option of creating worlds without even touching the Python API. Since the libraries of possible locations and objects were already defined in YAML, I threw in the ability to author the world itself in YAML as well. So, in theory, you could take one of the canned demo scripts and swap out the paths to 3 files (locations, objects, and world) to have a completely different example ready to go.
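To give a flavor of the idea, here is a tiny sketch of loading such a description with PyYAML. The schema (a top-level rooms list with name and footprint keys) is entirely hypothetical and is not pyrobosim's actual world format.

import yaml  # requires PyYAML

def load_world_description(world_path):
    """Load a world description from a YAML file.

    The keys used below are hypothetical; they only illustrate the idea of
    authoring worlds without touching the Python API.
    """
    with open(world_path, "r") as f:
        data = yaml.safe_load(f)
    for room in data.get("rooms", []):
        print(f"Room '{room['name']}' with footprint {room['footprint']}")
    return data

# Swapping out this path (plus the location and object library files) would
# yield a completely different example world:
# world = load_world_description("worlds/my_house.yaml")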

pyrobosim GUI with snippets of the world YAML file behind it.

2. Generalizing motion planning

In the original tool, navigation was kept as simple as possible since I was focused on real robot experiments. All I needed in the simulated world was a representative cost function for planning that would approximate how far a robot would have to travel from point A to point B.

This resulted in building up a roadmap of (known and manually specified) navigation poses around locations and at the center of rooms and hallways. Once you have this graph representation of the world, you can use a standard shortest-path search algorithm like A* to find a path between any two points in space.
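To make the roadmap-plus-search idea concrete, here is a compact A* sketch over a hand-specified roadmap. This is my own minimal example with Euclidean edge costs, not pyrobosim's implementation.

import heapq
import math

def astar(graph, start, goal):
    """A* over a roadmap: nodes are (x, y) tuples, graph maps node -> neighbors.

    Edge cost and heuristic are both Euclidean distance, so the heuristic is
    admissible and the returned path is the shortest one in the graph.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    frontier = [(0.0, start)]
    came_from = {start: None}
    cost_so_far = {start: 0.0}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return list(reversed(path))
        for neighbor in graph[current]:
            new_cost = cost_so_far[current] + dist(current, neighbor)
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost + dist(neighbor, goal), neighbor))
                came_from[neighbor] = current
    return None  # No path found

# Tiny roadmap: room center -> hallway -> another room center.
roadmap = {
    (0.0, 0.0): [(1.0, 0.0)],
    (1.0, 0.0): [(0.0, 0.0), (1.0, 1.0)],
    (1.0, 1.0): [(1.0, 0.0)],
}
print(astar(roadmap, (0.0, 0.0), (1.0, 1.0)))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]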

This time around, I wanted a little more generality. The design has now evolved to include two popular categories of motion planners.

  • Single-query planners: Plan once from the current state of the robot to a specific goal pose. An example is the ubiquitous Rapidly-expanding Random Tree (RRT), sketched just after this list. Since each robot plans from its current state, single-query planners are considered to be properties of an individual robot in pyrobosim.
  • Multi-query planners: Build a representation for planning which can be reused for different start/goal configurations given the world does not change. The original hard-coded roadmap fits this bill, as well as the sampling-based Probabilistic Roadmap (PRM). Since multiple robots could reuse these planners by connecting start and goal poses to an existing graph, multi-query planners are considered properties of the world itself in pyrobosim.
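Below is a bare-bones RRT sketch for a 2D point robot, my own illustration rather than pyrobosim's planner. The collision checker and sampler are passed in as functions, and the goal-biasing probability, step size, and tolerance are arbitrary example values.

import math
import random

def rrt(start, goal, is_collision_free, sample_fn, max_iters=1000, step=0.5, goal_tol=0.5):
    """Grow a tree from start and return a path to goal if one is found."""
    nodes = [start]
    parents = {start: None}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(max_iters):
        sample = goal if random.random() < 0.1 else sample_fn()  # goal biasing
        nearest = min(nodes, key=lambda n: dist(n, sample))
        d = dist(nearest, sample)
        if d < 1e-9:
            continue
        # Step from the nearest node toward the sample by a fixed distance.
        t = min(1.0, step / d)
        new = (nearest[0] + t * (sample[0] - nearest[0]),
               nearest[1] + t * (sample[1] - nearest[1]))
        if not is_collision_free(nearest, new):
            continue
        nodes.append(new)
        parents[new] = nearest
        if dist(new, goal) < goal_tol and is_collision_free(new, goal):
            parents[goal] = new
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
    return None

# Example in an empty 10 x 10 m world (no obstacles):
path = rrt((1.0, 1.0), (9.0, 9.0),
           is_collision_free=lambda a, b: True,
           sample_fn=lambda: (random.uniform(0, 10), random.uniform(0, 10)))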

I also wanted to consider path following algorithms in the future. For now, the piping is there for robots to swap out different path followers, but the only implementation is a “straight line executor”. This assumes the robot is a point that can move in ideal straight-line trajectories. Later on, I would like to consider nonholonomic constraints and enable dynamically feasible planning, as well as true path following which sets the velocity of the robot within some limits rather than teleporting the robot to ideally follow a given path.
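In spirit, a straight-line executor can be as simple as the sketch below (an illustration of the idea, not the actual implementation): discretize the segment and step the robot through the waypoints, teleporting rather than commanding velocities.

import math

def straight_line_path(start, goal, step=0.1):
    """Discretize the straight segment from start to goal into evenly spaced waypoints."""
    d = math.hypot(goal[0] - start[0], goal[1] - start[1])
    n = max(1, int(math.ceil(d / step)))
    return [(start[0] + (goal[0] - start[0]) * i / n,
             start[1] + (goal[1] - start[1]) * i / n) for i in range(n + 1)]

def execute(robot_pose, path):
    """'Follow' the path by teleporting through each waypoint in order."""
    for waypoint in path:
        robot_pose = waypoint  # a real follower would command velocities instead
    return robot_pose

print(execute((0.0, 0.0), straight_line_path((0.0, 0.0), (1.0, 0.0))))  # (1.0, 0.0)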

In general, there are lots of opportunities to add more of the low-level robot dynamics to pyrobosim, whereas right now the focus is largely on the higher-level behavior side. Something like the MATLAB based Mobile Robotics Simulation Toolbox, which I worked on in a former job, has more of this in place, so it’s certainly possible!

Sample path planners in pyrobosim.
Hard-coded roadmap (upper left), Probabilistic Roadmap (PRM) (upper right).
Rapidly-expanding Random Tree (RRT) (lower left), Bidirectional RRT* (lower right).

3. Plugging into the latest ecosystem

This was probably the most selfish and unnecessary update to the tools. I wanted to play with ROS2, so I made this into a ROS2 package. Simple as that. However, I throttled back on the selfishness enough to ensure that everything could also be run standalone. In other words, I don’t want to require anyone to use ROS if they don’t want to.

The ROS approach does provide a few benefits, though:

  • Distributed execution: Running the world model, GUI, motion planners, etc. in one process is not great, and in fact I ran into a lot of snags with multithreading before I introduced ROS into the mix and could split pieces into separate nodes.
  • Multi-language interaction: ROS in general is nice because you can have, for example, Python nodes interacting with C++ nodes “for free”. I am especially excited for this to lead to collaborations with interesting robotics tools out in the wild.

The other thing that came with this was the Gazebo world exporting, which was already available in the former code. However, there is now a newer Ignition Gazebo and I wanted to try that as well. After discovering that polyline geometries (a key feature I relied on) were not supported in Ignition, I complained just loudly enough on Twitter that the lead developer of Gazebo personally let me know when she merged that PR! I was so excited that I installed the latest version of Ignition from source shortly after, and with a few tweaks to the model generation we now support both Gazebo classic and Ignition.

pyrobosim test world exported to Gazebo classic (top) and Ignition Gazebo (bottom).

4. Software quality

Some other things I’ve been wanting to try for a while relate to good software development practices. I’m happy that in bringing up pyrobosim, I’ve so far been able to set up a basic Continuous Integration / Continuous Development (CI/CD) pipeline and official documentation!

For CI/CD, I decided to try out GitHub Actions because they are tightly integrated with GitHub — and critically, compute is free for public repositories! I had past experience setting up Jenkins (see my previous post), and I have to say that GitHub Actions was much easier for this “hobbyist” workflow since I didn’t have to figure out where and how to host the CI server itself.

Documentation was another thing I was deliberate about in this redesign. I was always impressed when I went into some open-source package and found professional-looking documentation with examples, tutorials, and a full API reference. So I looked around and converged on Sphinx which generates the HTML documentation, and comes with an autodoc module that can automatically convert Python docstrings to an API reference. I then used ReadTheDocs which hosts the documentation online (again, for free) and automatically rebuilds it when you push to your GitHub repository. The final outcome was this pyrobosim documentation page.
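The autodoc piece works off of docstrings. As an example (with an invented function signature, not pyrobosim's actual API), a docstring written like this gets rendered straight into the HTML API reference:

def place_object(obj, location, pose=None):
    """Place a held object at a location.

    Sphinx's autodoc picks up docstrings like this one and turns them into the
    API reference, so keeping them consistent pays off.

    :param obj: Name of the object currently held by the robot.
    :param location: Name of the target location (e.g., a countertop).
    :param pose: Optional explicit pose within the location; sampled if None.
    :return: True if the placement succeeded, False otherwise.
    """
    ...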

The result is very satisfying, though I must admit that my unit tests are… lacking at the moment. However, it should be super easy to add new tests into the existing CI/CD pipeline now that all the infrastructure is in place! And so, the technical debt continues building up.

pyrobosim GitHub repo with pretty status badges (left) and automated checks in a pull request (right).

Conclusion / Next steps

This has been an introduction to pyrobosim — both its design philosophy, and the key feature sets I worked on to take the code out of its original form and into a standalone package (hopefully?) worthy of public usage. For more information, take a look at the GitHub repository and the official documentation.

Here is my short list of future ideas, which is in no way complete:

  1. Improving the existing tools: Adding more unit tests, examples, documentation, and generally anything that makes pyrobosim a better experience for developers and users alike.
  2. Building up the navigation stack: I am particularly interested in dynamically feasible planners for nonholonomic vehicles. There are lots of great tools out there to pull from, such as Peter Corke’s Robotics Toolbox for Python and Atsushi Sakai’s PythonRobotics.
  3. Adding a behavior layer: Right now, a plan consists of a simple sequence of actions. It's not very reactive or modular. This is where abstractions such as finite-state machines and behavior trees would be great to bring in (see the sketch after this list).
  4. Expanding to multi-agent and/or partially-observable systems: Two interesting directions that would require major feature development.
  5. Collaborating with the community!
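As a taste of what that behavior layer could look like, here is a tiny behavior-tree-style sequence node (a toy illustration, not a proposed pyrobosim design):

class Sequence:
    """Minimal behavior-tree-style sequence node: run children until one fails."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if not child():  # each child is a callable returning success/failure
                return False
        return True

# The same pick-and-place idea as before, but now it can be nested, retried, or
# combined with fallback nodes to make behavior more reactive.
tree = Sequence([
    lambda: print("navigate to table") or True,
    lambda: print("pick apple") or True,
    lambda: print("place apple") or True,
])
tree.tick()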

It would be fantastic to work with some of you on pyrobosim. Whether you have feedback on the design itself, specific bug reports, or the ability to develop new examples or features, I would appreciate any form of input. If you end up using pyrobosim for your work, I would be thrilled to add your project to the list of usage examples!

Finally: I am currently in the process of setting up task and motion planning with pyrobosim. Stay tuned for that follow-on post, which will have lots of cool examples.


You can read the original article at Roboticseabass.com.

]]>
UBR-1 on ROS2 Humble https://robohub.org/ubr-1-on-ros2-humble/ Thu, 04 Aug 2022 11:06:23 +0000 https://robohub.org/?p=205157 It has been a while since I’ve posted to the blog, but lately I’ve actually been working on the UBR-1 again after a somewhat long hiatus. In case you missed the earlier posts in this series:

ROS2 Humble

The latest ROS2 release came out just a few weeks ago. ROS2 Humble targets Ubuntu 22.04 and is also a long term support (LTS) release, meaning that both the underlying Ubuntu operating system and the ROS2 release get a full 5 years of support.

Since installing operating systems on robots is often a pain, I only use the LTS releases and so I had to migrate from the previous LTS, ROS2 Foxy (on Ubuntu 20.04). Overall, there aren’t many changes to the low-level ROS2 APIs as things are getting more stable and mature. For some higher level packages, such as MoveIt2 and Navigation2, the story is a bit different.

Visualization

One of the nice things about the ROS2 Foxy release was that it targeted the same operating system as the final ROS1 release, Noetic. This allowed users to have both ROS1 and ROS2 installed side-by-side. If you’re still developing in ROS1, that means you probably don’t want to upgrade all your computers quite yet. While my robot now runs Ubuntu 22.04, my desktop is still running 18.04.

Therefore, I had to find a way to visualize ROS2 data on a computer that did not have the latest ROS2 installed. Initially I tried the Foxglove Studio, but didn’t have any luck with things actually connecting using the native ROS2 interface (the rosbridge-based interface did work). Foxglove is certainly interesting, but so far it’s not really an RVIZ replacement – they appear to be more focused on offline data visualization.

I then moved on to running rviz2 inside a docker environment – which works well when using the rocker tool:

sudo apt-get install python3-rocker
sudo rocker --net=host --x11 osrf/ros:humble-desktop rviz2

If you are using an NVIDIA card, you’ll need to add --nvidia along with --x11.

In order to properly visualize and interact with my UBR-1 robot, I needed to add the ubr1_description package to my workspace in order to get the meshes and also my rviz configurations. To accomplish this, I needed to create my own docker image. I largely based it off the underlying ROS docker images:

ARG WORKSPACE=/opt/workspace

FROM osrf/ros:humble-desktop

# install build tools
RUN apt-get update && apt-get install -q -y --no-install-recommends \
python3-colcon-common-extensions \
git-core \
&& rm -rf /var/lib/apt/lists/*

# get ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE/src
RUN git clone https://github.com/mikeferguson/ubr_reloaded.git \
&& touch ubr_reloaded/ubr1_bringup/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_calibration/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_gazebo/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_moveit/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_navigation/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_msgs/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_teleop/COLCON_IGNORE

# install dependencies
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& apt-get update && rosdep install -q -y \
--from-paths src \
--ignore-src \
&& rm -rf /var/lib/apt/lists/*

# build ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& colcon build

# setup entrypoint
COPY ./ros_entrypoint.sh /

ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]

The image derives from humble-desktop and then adds the build tools and clones my repository. I then ignore the majority of packages, install dependencies and then build the workspace. The ros_entrypoint.sh script handles sourcing the workspace configuration.

#!/bin/bash
set -e

# setup ros2 environment
source "/opt/workspace/install/setup.bash"
exec "$@"

I could then create the docker image and run rviz inside it:

docker build -t ubr:main .
sudo rocker --net=host --x11 ubr:main rviz2

The full source of these docker configs is in the docker folder of my ubr_reloaded repository. NOTE: The updated code in the repository also adds a late-breaking change to use CycloneDDS as I’ve had numerous connectivity issues with FastDDS that I have not been able to debug.

Visualization on MacOSX

I also frequently want to be able to interact with my robot from my Macbook. While I previously installed ROS2 Foxy on my Intel-based Macbook, the situation has changed quite a bit now, with MacOSX being downgraded to Tier 3 support and the new Apple M1 silicon (and Apple's various other locking mechanisms) making it harder and harder to set up ROS2 directly on the Macbook.

As with the Linux desktop, I tried out Foxglove – however it is a bit limited on Mac. The MacOSX environment does not allow opening the required ports, so the direct ROS2 topic streaming does not work and you have to use rosbridge. I found I was able to visualize certain topics, but that switching between topics frequently broke.

At this point, I was about to give up, until I noticed that Ubuntu 22.04 arm64 is a Tier 1 platform for ROS2 Humble. I proceeded to install the arm64 version of Ubuntu inside Parallels (Note: I was cheap and initially tried to use the VMWare technology preview, but was unable to get the installer to even boot). There are a few tricks here as there is no arm64 desktop installer, so you have to install the server edition and then upgrade it to a desktop. There is a detailed description of this workflow on askubuntu.com. Installing ros-humble-desktop from arm64 Debians was perfectly easy.

rviz2 runs relatively quickly inside the Parallels VM, but overall it was not quite as quick or stable as using rocker on Ubuntu. However, it is really nice to be able to do some ROS2 development when traveling with only my Macbook.

Migration Notes

Note: each of the links in this section is to a commit or PR that implements the discussed changes.

In the core ROS API, there are only a handful of changes – and most of them are actually simply fixing potential bugs. The logging macros have been updated for security purposes and require c-strings like the old ROS1 macros did. Additionally the macros are now better at detecting invalid substitution strings. Ament has also gotten better at detecting missing dependencies. The updates I made to robot_controllers show just how many bugs were caught by this more strict checking.

image_pipeline has had some minor updates since Foxy, mainly to improve consistency between plugins and so I needed to update some topic remappings.

Navigation has the most updates. amcl model type names have been changed since the models are now plugins. The API of costmap layers has changed significantly, and so a number of updates were required just to get the system started. I then made a more detailed pass through the documentation and found a few more issues and improvements with my config, especially around the behavior tree configuration.

I also decided to do a proper port of graceful_controller to ROS2, starting from the latest ROS1 code since a number of improvements have happened in the past year since I had originally ported to ROS2.

Next steps

There are still a number of new features to explore with Navigation2, but my immediate focus is going to shift towards getting MoveIt2 setup on the robot, since I can’t easily swap between ROS1 and ROS2 anymore after upgrading the operating system.

]]>
Communicating innovation: What can we do better? https://robohub.org/communicating-innovation-what-can-we-do-better/ Wed, 15 Jun 2022 10:51:17 +0000 https://robohub.org/?p=204799 Have you ever seen a bridge collapse on TV? We only care about inspection and maintenance if it does not work. However, robotics are changing the inspection & maintenance landscape, and significant societal and political implications follow the magnitude of innovation. To pay due attention to these developments, the communication on innovation in robotics for inspection & maintenance has to play a role in informing and influencing the target groups for the successful adoption of robotics technologies.

The question of what role communications play in forming the perception of innovative technology was discussed in the workshop “Communicating innovation: What can we do better?” on May 25th, 2022. Experts explained how innovation uptake should be supported by effective communication of innovations: explaining the benefits, tackling the risks and fears of the audiences, and taking innovation closer to the general public.

Innovation & communication experts Marta Palau Franco and Juan Antonio Pavón Losada presented these topics in depth. Also, the recipe for successful innovation communication in robotics in inspection and maintenance was presented by Carlos Matilla Codesal, CEO of FuVeX.

Marta Palau Franco, project officer at euRobotics working on digital innovation, presented how social context and perception can affect the way we communicate innovative technologies. For example, mentioning the word “drone” in 2014 and in 2022 may not bring the same images to a person's mind. This is due to changes in context and perception around the technology over the years, going from a more military-surveillance use to others such as logistics (e.g. Amazon shipping), entertainment (e.g. drone lights, filming, drone races, etc.), inspection & maintenance, agriculture, etc.

This example emphasizes that intentional engagement with the audience requires a good understanding of the different communication contexts (i.e. social, physical, cultural) and their perception or future expectations of the technology. How can we change someone‘s perception or expectations? We can do that by changing their current references or their interpretation of their prior references.

The people’s perception of innovation might not necessarily correlate with the technological level of innovation. Marta introduced how good and bad communication can create different impressions on the technology. She presented 4 scenarios depending on the level of technological innovation and perception of being innovative (by people) (see the picture below).

The Balanced scenario is where the company's technological innovation is high and the company is perceived as such. When there is a high level of technological innovation, but the company is not perceived as innovative, we are in the scenario of the Bad communicator. Companies that find themselves in this group should review their communications and marketing strategy and plan. The scenario of the Illusionists presents a company with low technological innovation, but with strong communication and marketing skills. The company manages to raise the audience's expectations, but risks losing the audience's trust if the technological innovation is not delivered.

To escape the trap of a bad communicator, whose message does not come across to the audience, Marta offered some advice:

  • Be aware of your communication context(s).
  • Do you know what people’s perception of your company and technology is? If the perception is negative, try to change it.
  • Communicate strategically at different stages of technology development. Be flexible and adapt your communication strategy/plan when necessary. If you stay silent for a prolonged period, you lose attention, and others will fill that gap.
  • Listen to feedback provided by your audience (e.g., through surveys) and value other perspectives. Burst your bubble, and let in external input.
  • Look for professional marketing and communications advice when needed. Take it seriously.
  • By communicating about your innovative technology, you are changing the social and cultural context, contributing to smoother (and safer) adoption of your technology.
  • Failure is part of the learning process.

Advice on building trust with stakeholders and communicating in a crisis was offered by Juan Antonio Pavón Losada, a Project Manager Assistant at PRISMA Lab in the Department of Electrical Engineering and Information Technology at the University of Naples Federico II. Juan works on the internal and external design and management of European projects' communication and dissemination.

Juan presented what symptoms define inefficient communication. Firstly, the communication misses the point if it is not consistent and relies on minimal efforts, i.e., “checking the boxes”. In addition, the lack of a defined value proposition (communicating features and activities instead of value) and strategic positioning for the different audiences drives the communication astray from its goal.

How can we solve these mistakes and create efficient communication for inspection and maintenance? Juan elaborated on the key concepts that are especially relevant in the field of inspection & maintenance. First, the goal of communication is to build relationships, not to explain things. Trust, reliability and a good reputation are the key goals. In the worst-case scenario, even when an accident happens, if the relationship with stakeholders is firm it will endure the crisis; the trust will absorb the impact of the issue. Sufficient attention to communication also includes preparation for a crisis: identifying who is eligible to speak, which channel would be best to communicate through, what the outputs should be (a press release? a press conference?), and what the messages and their possible consequences would be. Such preparation helps you react quicker and mitigate the crisis.

But, to build trust you need time. That means that the communication about your innovative solution has to be consistent and start early in the process. Apart from this, Juan proposed other tips for effective communication of innovation:

  • To be an engaged listener, know the domain and stakeholders, and learn the domain-specific language.
  • Express yourself clearly & constantly. Communication is like any relationship – don't expect people to be there to hear you when you have time. Make some time and show up more than once every 6 months.
  • Connect your content to the events happening in the world simultaneously.
  • Get communication professionals to do the work and develop high-quality material.
  • Engage in new collaborations & break the bubble of your organization to receive new inputs.
  • Make intentional language choices and find a balance between technical language and superficial claims without evidence (see the picture below).

Even though robotics in inspection & maintenance seems to be a rather technical and hardly engaging field for the general audience, Juan encouraged the audience to pay substantial attention to communication in their projects. That attention converts into an allocated budget, planning effort, strategic positioning, and consistency. However, the main idea for communication in this field revolves around TRUST, which is built among stakeholders, whether they are tech providers, projects, institutions, developers, politicians, citizens, etc.

FuVeX is a practical example of successful communication of innovative robotics solutions in inspection & maintenance. FuVeX is a start-up whose mission is to replace manned helicopters with long-range drones in power line inspection and other aerial data capture operations. Carlos Matilla Codesal, CEO and co-founder of FuVeX, highlighted the importance of the intent behind the communication – every word must serve to reach the goal, whether it is to raise awareness, attract investment or find clients. In addition, instead of talking about technical details, which might be the most exhilarating thing on earth for developers and engineers, the communication must revolve around the problem you solve. In robotics, this problem is not obviously and easily relatable to the general public (compared to, for example, problems in medicine). However, the responsibility to identify this value proposition and story is not on the audience but on the one delivering the message. Vanity is not a strategy. Do not fall into the trap of doing things because they feel great but are not valuable for the company. If you are communicating to an audience that is neither your customer nor your investor, but it feels great at the end of the day, you are wasting resources on things that matter only to your ego.

The next workshop “Policy issues in Robotics for Infrastructure & Maintenance” will be held during the European Robotics Forum in Rotterdam. If you happen to participate in the Forum, you are welcome to join the workshop on June 29th, at 9.50am (CEST). The experts will discuss how AI regulation affects the domain of Robotics for Infrastructure and Maintenance, what direction the policy is moving to and what are the issues in the pan-European legislation.

]]>
How to make sure regulation helps and not hinders Inspection & Maintenance robotics? https://robohub.org/how-to-make-sure-regulation-helps-and-not-hinders-inspection-maintenance-robotics/ Fri, 10 Jun 2022 14:43:28 +0000 https://robohub.org/?p=204501

One of the essential factors for widespread robotics adoption, especially in the inspection and maintenance area, is the regulatory landscape. Regulatory and legal issues should be addressed to establish effective legal frameworks for robotics deployment. Common goals of boosting the widespread adoption of robotics can only be achieved by creating networks between the robotics community and regulators.

On the 23rd of March, Maarit Sandelin, Peter Voorhans and Dr Carlos Cuevas Garcia were invited by Robotics4EU and the RIMA network to discuss how cooperation among regulators and the robotics community can be fostered, and what the most pressing legal challenges are for the inspection & maintenance application area of robotics.

Maarit Sandelin and Peter Voorhans from the Robotic Innovation Department in SPRINT Robotics opened the workshop with the question of why robotics is important in inspection and maintenance. The speakers highlighted three main aspects: safety, efficiency and costs. Firstly, robotic solutions reduce fatalities and the risk of accidents in environments involving heights, confined spaces or underwater work. Secondly, the preparation work required for human inspection and maintenance (shutting down the facilities, clearing and cleaning the spaces, air sampling, getting the permits) is not required for inspection and maintenance done by a robot. The bureaucracy – applying and waiting for permits – is reduced as well.

However, the integration of robots faces barriers in two main dimensions: differences in cross-border standards and acceptance of robotics by inspectors. Speaking of regulatory challenges, Peter Voorhans identified the main problems:

  • The regulatory framework for acceptance in robotics is disastrous at the global level
  • Robotic inspections are not always allowed based on regulations or interpretation of the regulation
  • A different interpretation of regulations causes issues for service and technical providers

To move further with the integration of robots into inspection and maintenance, Europe-wide acceptance and legislation of robots are needed. First, the acceptance of robotics (for example, remote visual techniques) by notified bodies would be a big step forward. Also, the training of inspectors should include robotics training, so that inspectors understand the advantages and consequences of integrating robotics and can themselves advocate for its uptake.

Different legislation and regulations across borders mean that in each country, inspection has to be performed by local certified inspectors. For example, a Dutch company is performing an in-service inspection in France. Due to differences in legislation, a certified inspector from the Netherlands is not allowed to perform the remote visual inspection in France. A local notified body needs to be involved.

Leaving aside the national & cross-border legislation issues, Peter Voorhans drew attention to company-level policies. As an example, the internal policies of DOW, a chemical and plastics manufacturer, state that people will not be allowed in confined spaces starting from 2025. This leadership position gave a strong incentive to introduce robotics and convince inspectors to use them. The internal programme ensured the recognition and celebration of robotic use cases and best practices, ensuring higher levels of robotics acceptance overall.

Dr Carlos Cuevas Garcia, a postdoctoral researcher at the Innovation, Society and Public Policy Research Group at the Munich Center for Technology in Society (MCTS), Technical University Munich, shared his experience in following EU-funded projects for the uptake of robotics in I&M. Dr Garcia has evaluated the policy goals and results, following the cycle of the projects as policy instruments.

From the sociology of technology perspective, robotics in I&M sits at a unique intersection of innovation and maintenance. Innovation is done by heroic people and entrepreneurs; it is celebrated and covered in the news. Maintenance is done by invisible people and is usually overlooked. However, projects such as RIMA bring the two dimensions together. As innovation aims at improving maintenance, what can innovation learn from maintenance? How can maintenance improve innovation?

Speaking of the policy role in this intersection, Dr Garcia has presented the innovation policy landscape from the instrument’s perspective.

In order to improve this landscape, he identified two ways forward:

  1. Examine the effects of individual policy instruments on the field of I&M robotics
  2. Examine the dynamics of different instruments together, and how they enable (and constrain!) the continuity of projects

The examination could be implemented by considering the vulnerabilities in the policy instruments. Drawing from his experience observing and analysing EU-funded projects as instruments to achieve policy goals, Carlos identified several vulnerabilities:

  • The confusion between the role of the “public end-user” and the role of the subcontractor. In the case of the observed projects, the subcontractors’ input was not formally involved, even though the maintenance is actually done by the subcontractor.
  • The interest of the “public end-user” and the subcontractor in purchasing or continuing to fund the solution's technologies after the project was not sufficient
    • The “public end-user” didn’t want to directly fund a technology considered risky for workers’ jobs.
  • The particular policy instrument observed (Public end user-Driven Technological Innovation) was too rigid to respond to the complexities of the situation, yet too weak to provide further directions.

Speaking of ways to improve the policy process, Carlos noted that besides technical progress (for example, advancing from technology readiness level 2 to 5), instruments should consider other metrics of success, e.g.:

  • How well do roboticists’ teams and maintainers work together?
  • How do robots empower maintainers?
  • How does the team co-create a vision of the whole inspection process (service logistics, transporting, unloading, fixing robots, etc.)?

Dr Carlos concluded by suggesting a couple of policy recommendations:

  • We must explore the learning trajectories of different types of stakeholders involved in sequences of I&M robotics projects;
  • We have to learn how to provide maintenance to innovation networks and repair innovation policy instruments by better identifying their contradictions, fragilities and vulnerabilities;
  • This requires close and durable engagement between I&M experts, roboticists, project coordinators, policymakers, regulators, and sociologists of technology.

Finally, the session concluded with a panel discussion that revisited the themes of the previous presentations and engaged the audience. As a final conclusion, the experts suggested beginning with industry-led insights to change the paradigm of the policy framework on a larger scale.

]]>
A newcomer’s guide to #ICRA2022: Tutorials https://robohub.org/a-millennials-guide-to-icra-2022_tutorials/ Mon, 02 May 2022 10:09:43 +0000 https://robohub.org/?p=204227

I believe that one of the best ways to get the training you need for the robotics job market is to attend tutorials at conferences like ICRA. Unlike workshops, where you might listen to work-in-progress talks, workshop paper presentations and panel discussions, tutorials are exactly what they sound like. They aim to give you hands-on learning sessions on technical tools/skills with specific learning objectives.

As such, most tutorials would expect you to come prepared to actively participate and follow along. For instance, the “Tools for Robotic Reinforcement Learning” tutorial expects you to come knowing how to code in python and have basic knowledge of reinforcement learning because you’ll be expected to use those skills/knowledge in the hands-on sessions.

There are seven tutorials this year.

Those interested in the intersection of machine learning and robotics (yes, yes, robotics and AI don’t refer to the same things, even if many people think they are the same) might find the following tutorials interesting.

For those who are more into robots that navigate around our environment, the NavAbility Tutorial Workshop on Non-Gaussian SLAM and Computation is a series of hands-on tutorials that would be highly interesting for you. Be prepared to come with your own laptop to go from getting to know non-Gaussian SLAM to solving your own SLAM problems.

Meanwhile, roboticists who are more hardware and design-oriented might find the Jamming in Robotics: From Fundamental Building Blocks to Robotic Applications tutorial useful. By “jamming”, they don't mean musicians coming together to create cool music (I only found this out through the tutorial website). It refers to the way robots can grab items without needing traditional, fingered grippers.

The Tutorial on Koopman Operator and Lifting Linearization: Emerging Theory and Applications of Exact Global Linearization would be interesting for anyone interested in the mathy/control/theory side of robotics. Koopman operators have been the buzz in the robotics community recently, and the tutorial is sure to give you an in-depth look at what the buzz is all about.

Lastly, the How to write an R-article and benchmark your results tutorial is one to watch for. It will tell you all about publishing reproducibility-friendly articles, and emphasize the usefulness of doing research in reproducible ways.

]]>
MimicEducationalRobots teach robotics for the future https://robohub.org/mimiceducationalrobots-teach-robotics-for-the-future/ Sun, 24 Apr 2022 09:30:28 +0000 https://robohub.org/?p=204195

The robotics industry is changing. The days of industrial robot arms working behind enclosures and performing pre-programmed, identical tasks are coming to an end. Robots that can interact with each other and with other equipment are becoming standard, and robots are expanding into more aspects of our lives. My name is Brett Pipitone, and I am the founder, CEO, and sole employee of mimicEducationalRobots. I believe that robots will soon become an inescapable part of modern life, and I seek to prepare today's students to work with these emerging technologies.

The mimicEducationalRobots product line consists of a family of three robots. The largest and most sophisticated is mimicArm. The adorable tinyBot is small and capable. The newest robot, bitsyBot (currently on Kickstarter), is perfect for those taking their first steps into the robotics world. Despite having different features, all three robots are designed to communicate with each other and use special sensors to make decisions about their environment. Interfaces are simple but powerful, allowing users to learn quickly and without frustration.

mimicEducationalRobots believes that every student will encounter robots in their everyday life, no matter their career path. Learning robotics allows students to become comfortable and familiar with technology that is rapidly becoming commonplace in day-to-day life. Through their combinations of features, the mimicEducationalRobots products introduce technology that's still in its infancy, such as human/robot interaction and cooperative robotics, at a level students can understand. This is why every mimicEducationalRobots robot starts with manual control, allowing students to get a feel for their robot and what it can and can't do. Once they've mastered manual control, programming is a smaller leap. The mimicEducationalRobots programming software simplifies this transition by reflecting the same motions the students have been making manually with simple commands like “robotMove” and “robotGrab”.

For more complex programs, mimicEducationalRobots believes that their robots should mimic industry as closely as possible. This means doing as much as possible with the simplest possible sensor. Things start small, with a great big tempting pushable button called, incidentally, “greatBigButton”. This is the students’ first introduction to human interaction as they program their robot to react to a button press. From there things get much more exciting without getting very much more complicated. For example, an array of non-contact IR thermometers called Grid-EYE allows mimicArm to detect faces using nothing but body heat. A simple IR proximity sensor allows tinyBot or bitsyBot to react when offered a block before the block touches any part of the robot. There’s even a cable that allows robots to communicate with each other and react to what the other is doing. These simple capabilities allow students to create a wide range of robotic behaviors.

mimicEducationalRobots is a homegrown business designed and built by an engineer and dad passionate about teaching people of all ages about robotics. I created the robots' brains using a bare circuit board, a template, some solder paste, and tweezers. Every component is added by hand, and the board is soldered together with a toaster oven I modified. Once cooled, the boards are programmed using a modified Arduino UNO R3, one of the best technological tools for beginners.

Other physical robot parts are designed using 3D modeling software and made either on a 3D printer or a CNC router. I have two 3D printers in my basement running 24 hours a day, producing at least 20 robot kits a week. The CNC router requires a great deal more supervision but is capable of turning out four sets of beautiful neon plastic parts every 30 minutes.

mimicEducationalRobots is a new kind of company, producing a new kind of product, for a new kind of consumer. Its products demonstrate just how fundamentally technology, and in particular open source technology, has changed our world. I hope students learning on mimicArm, tinyBot, or bitsyBot will help create the next life-changing technological leap.

To learn more about the family of mimicEducationalRobots visit www.mimicRobots.com

]]>
Boosting innovations and maximising societal impact. Role of Digital Innovation Hubs in Inspection & Maintenance robotics https://robohub.org/boosting-innovations-and-maximising-societal-impact-role-of-digital-innovation-hubs-in-inspection-maintenance-robotics/ Thu, 17 Mar 2022 11:02:20 +0000 https://robohub.org/?p=203813

By Jovita Tautkevičiūtė

Robotics4EU is a 3-year-long EU-funded project which advocates for the wider adoption of AI-based robots in 4 sectors: healthcare, inspection and maintenance of infrastructure, agri-food, and agile production. One of the ways in which Robotics4EU raises awareness about non-technological aspects of robotics is by delivering a series of workshops involving the research community, industry representatives and citizens.

The workshop “Boosting innovations and maximising societal impact. Role of Digital Innovation Hubs (DIHs) in I&M Robotics”, which took place on the 23rd of February, 2022, analysed the role and contribution of Digital Innovation Hubs (DIHs) to the widespread adoption of robotics in society. How can they enhance the implementation of robotics by SMEs and startups in their daily operations? How can they help to close the knowledge gap on non-technological issues of robotics in Inspection & Maintenance (I&M)?

These questions were analysed by five experts: Ebert van Vonderen, Ladislav Vargovcik, Maria Roca, Roi Rodriguez de Bernardo and Christophe Leroux, during the presentations and panel discussions of the workshop.

Digital Innovation Hubs

What is a Digital Innovation Hub (DIH) and what role does it play? Ebert van Vonderen & Ladislav Vargovcik from DIH Robotics Hub Košice explained what a DIH is and how it works with various stakeholders: universities, the DIH network, and SMEs. In general, a DIH serves as a bridge between research organizations and industry – technology and service providers, end-users and SMEs. In the case of DIH Robotics Hub Košice, the DIH was established by the technical faculties of Košice University and the Prototyping and Innovation Center.

Structure of DIH Robotics Hub Košice

A DIH serves as a one-stop-shop for SMEs to bring their questions to. DIHs need to ensure that SMEs have easy access to technology innovation and are provided with relevant services and training for technology adoption. Thanks to the efforts of the RIMA network, DIHs are spread over Europe and interact on I&M possibilities and competencies, sharing the available opportunities across Europe.

Why are Robotics DIHs important for SMEs? They tackle many challenges that arise for companies, aiming to boost innovation and to enhance and contribute to robotics implementation. Some of the challenges mentioned by Ebert van Vonderen were:

  • High technology levels needed, complex tasks in I&M
  • High Investments needed
  • Competition, profitability, ROI (profitability is in many cases a problem)
  • Ecosystem maturity, differences across Europe
  • Resistances from communities
  • Creation of sustainable solutions

An example was presented of DIH Robotics Hub Košice's cooperation with INETEC, a high-tech company that specialises in robotics, instruments and software. Magic Lancer II, a robot for nuclear inspection, was developed under this cooperation.

Screenshot of the Magic Lancer II presentation

How to enhance the uptake of technology?

Christophe Leroux, representing the RIMA (Robotics for Inspection & Maintenance) Network, explained that the RIMA Network connects and inspires key stakeholders in I&M robotics and aims to accelerate innovation and the uptake of robotics among these stakeholders. Its main purpose is to help digitalise European industry. Even though the market is estimated to have huge potential for innovation (450 bn EUR/year), the bottleneck is the adoption of robotics by the market – there is no real connection between industry and research organisations.

The RIMA Network directs its efforts to establishing a network of DIHs (currently, it connects 13 DIHs) focussing on robotics in I&M, funding SMEs to support experimentation, setting up courses to facilitate the uptake of technologies and develop new skills, and informing people of funding opportunities for business development.

The RIMA network supports SMEs by organising open calls. 15 innovative solutions were supported, with the aim of having a socio-economic impact in the EU through new products, services, businesses and jobs, along with a reduction in costs and risks. The RIMA Network provides the following services to SMEs and research organisations:

  • Advising (on finances, technology)
  • Proof of Concept
  • Support for tech transfer
  • Support for testing
  • Access to incubators
  • Access to value chain actors
  • Training and coaching
  • The market study, providing information to network partners

RIMA Network Objectives

ICT innovation for manufacturing SMEs

Maria Roca, I4MS Project Manager, presented I4MS (ICT Innovation for Manufacturing SMEs). It is an initiative promoted by the European Commission to foster the digital innovation of manufacturing SMEs in Europe in order to boost their competitiveness in the digital era. The I4MS goal is to contribute to the adaptation of European SMEs to the current digital transformation challenges through funding & mentorship, training, and access to physical and virtual technology platforms. I4MS connects a community of 1700 members, 42% of them SMEs. According to Maria Roca, the organisation has identified that geographical coverage of Europe is uneven and Eastern European countries need to be better represented in the network. Also, there is a gap in SMEs' understanding of technology advantages and in the network's ability to consider SMEs' interests.

Screenshot from Maria’s Roca presentation on I4MS and Open Call Calendar

Roi Rodriguez de Bernardo, Program Manager at FundingBox, presented the AI4EU platform and funding opportunities for SMEs. AI4EU is a collaborative H2020 project that aims to:

  • mobilize the entire European AI community to make AI promises real for the European Society and Economy, and
  • create a leading collaborative AI European Platform to nurture economic growth.

AI4EU aims to be a catalyst between research and industry. The platform presents available funding opportunities and various tools (research, education, ethics, services).

Speaking of the funding opportunities, Roi set the scene by capturing Europe's position in the global AI market. The AI business market is emerging worldwide, with an estimated CAGR of 40.2% from 2021 to 2028. The growing focus on AI is also evident in the EU strategy. The AI strategy set by the EC offers 1bn EUR of funding per year for the period 2021-2028, expected to leverage 20bn EUR in investments. The funding is focused on SMEs adopting AI, with some cases of funding for startups or technology providers developing AI solutions that could be adopted by SMEs.

Screenshot from Roi Rodriguez de Bernardo presentation presenting Funding opportunities and open calls

Panel discussion

The discussion started with the question of whether the EU can be a powerhouse of robotics in I&M and lead the way in the race with the USA and China while also adhering to our values. It was noted that record-high investment in the industry in 2021 shows that it is possible. Even though the USA and China did not take into account the ethical aspects the EU holds, they are starting to develop regulation as well. It seems that the path set by the EU is now being followed by other big markets.

Considering which ethical or socio-economic aspects are of most importance in the field, Maria Roca identified the management of data for SMEs as a bottleneck. The difficulty lies in the hindrances to sharing data with other companies. Further, in the case of AI, ethical aspects are evaluated in all EU-funded projects. From the perspective of the services DIHs provide, RIMA's representative Christophe Leroux explained that ethical and safety aspects are taken into account in facilitated experimentation. In practice, an ethical assessment is conducted for each experiment, involving experts in the field of ethics. On a daily basis, RIMA's work includes mentorship on ethical issues and guidelines.

Ways of enlarging the networks were also discussed. Maria Roca identified training as a main tool to attract members and showcase what they can achieve through collaboration and participation in their open calls. Networking, as a core business of DIHs, also covers the organisation of local events with experts. Collaboration with social scientists was also emphasised as crucial in the field of AI. Annelli Rose, representative of the Robotics4EU project, noted that interdisciplinary collaboration is in line with the EU priorities, where such change is foreseen to have a breeding ground.

Zooming in on the topic of robotics in I&M, the social aspect of adoption was analysed. Christophe Leroux explained that robotics adoption to support certain operations is attractive because there is not much fear of losing jobs. Robots are seen as supporting workers, for example when deployed in explosive, dangerous and hazardous environments. Robots are not taking away jobs, but taking away the danger. Christophe Leroux also emphasised that there is growth in the domain of AI robotics in I&M and that the main issues are related to trust and safety.

Upcoming workshop

The upcoming workshop, “How to make sure regulation helps and not hinders I&M robotics? Policy issues in Robotics for Inspection & Maintenance”, will take place on the 23rd of March 2022, 11-14 CET. We will discuss how cooperation among regulators and the robotics community can be fostered and what the most pressing legal challenges are for the I&M application area of robotics. We will be investigating how to ensure the accessibility of objective information and enhance the capacity of regulators to comprehend the technical aspects, risks and opportunities of robotics in I&M. We hope to leave the workshop with industry insights and specific areas for improvement for the future, which as a project we will continue to explore in our activities. Register here.

]]>
Robot centenary – 100 years since ‘robot’ made its debut https://robohub.org/robot-centenary-100-years-since-robot-made-its-debut/ Sat, 12 Mar 2022 10:34:27 +0000 https://robohub.org/?p=203776

Robotics remained at the leading edge of technology development in 2021, yet it was one hundred years earlier in 1921 that the word robot (in its modern sense) made its public debut. Czech author Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) imagined a world in which humanoids called ‘roboti’ were created in a factory. Karel’s brother, the artist and writer Josef Čapek had first coined the term robot before Karel adopted it for this theatrical vision.

The Slavic root of the word is even older, and its first known appearance in English dates back nearly two hundred years. During the time of the Habsburgs and the Austro-Hungarian Empire, robot referred to a form of forced labour similar to slavery. In Čapek's play, the roboti were being manufactured as serfs to serve human needs.

Nowadays we would see Čapek’s creations more as androids rather than robots in the modern sense. However, the outcome of the play, with the robots rebelling against the humans and taking over the world, has since become a trope of science fiction which persists to this day. And yet the intended message in Čapek’s play wasn’t about the inherent risks of robots but of the dehumanising dangers of rampant mechanisation. We know that popular culture has taken a different reading from the play, of course, with robots on screen and in print more likely to be cast as the bad guy, although there are some notable exceptions, too. It’s interesting to wonder how different the mainstream image of robots might be today if Čapek’s play had placed his roboti and their owners in a more mutually benevolent relationship.

Čapek’s robots first appeared on the screen in a BBC television production of R.U.R. in 1938. Three years later the word ‘robotics’ was first used as a term for the field of research and technology that develops and manufactures robots. Like ‘robot’ in its modern sense, ‘robotics’ also has its origins in the creative imagination, thanks to a man who was born in the same year that the Čapek brothers were bringing the modern ‘robot’ into the world. Science fiction writer Isaac Azimov famously coined his three laws of robotics, which continue to resonate in discussions about the ethical use of robots, in 1941. Incidentally, Azimov wasn’t a fan of the play itself but his laws were designed precisely to prevent the kind of tragedy imagined in Čapek’s play.

]]>
Hands on ground robot & drone design series part I: mechanical & wheels https://robohub.org/hands-on-ground-robot-drone-design-series-part-i-mechanical-wheels/ Tue, 01 Mar 2022 09:57:58 +0000 https://robohub.org/?p=203673

Source: https://www.subt-explorer.com/post/fall-2020-update

This is a new series looking at the detailed design of various robots. To start, we will be looking at the design of two different robots that were used for the DARPA Subterranean Challenge. Both of these robots were designed for operating in complex subterranean environments, including caves, mines and urban environments. Both robots presented are from the Carnegie Mellon University Explorer team. While I am writing these posts, this was a team effort that required many people to be successful. (If anyone on Team Explorer is reading this, thank you for everything, you are all awesome.)

These posts are skipping the system requirements step of the design process. See here for more details on defining system requirements.

Team Explorer R1 Ground Robot and DS Drone [Source]

R3 Ground robot (UGV)

For the SubT challenge, three ground vehicles of a similar design were developed. The ground robots were known by the moniker R#, where # is the order in which we built them. The primary differences between the three versions are:

R1 – Static chassis, so the chassis has minimal ground compliance when driving over obstacles and uneven surfaces. R1 was initially supposed to have a differencing mechanism for compliance; however, due to time constraints it was left out of this first version. R1 is pictured above.

R2 – Has the differencing mechanism and was designed as initially planned.

R3 – Is almost identical to R2, but smaller. This robot was built for navigating smaller areas and also to be able to climb up and down steps. It also uses different motors for driving the wheels.

DS drone

Team Explorer’s original drone designs were called D1, D2, etc. This let a combination of UGV + drone go by joint designations such as R2D2. Early on, the team switched to a smaller drone design that was referred to as DS1, DS2, etc., where DS is short for Drone Small.

The drone design posts are split into two sections: the first is about the actual drone platform, and the second is about the payload that sat on top of the drone.

Mechanical & wheels

Robot size decision

After we have the list of system requirements, we start with the design of the mechanical structure of the robot. In this case we decided that a wheeled robot would be best. We wanted to have the largest wheels possible to help climb over obstacles; however, we also needed to keep our sensors at the top of the vehicle, above the wheels, and be able to fit through 1 x 1 meter openings. These requirements set the maximum size of the robot as well as the maximum size of the wheels.

The final dimensions of the first two vehicles (R1 and R2) were around (L x W x H) 1.2 x 0.8 x 0.8 meters (3.9 x 2.6 x 2.6 ft). The third smaller vehicle was around 1 x 0.6 m (3.2 x 1.9 ft) and designed to fit through 0.7×0.7 m openings.

Steering approach

Early on we also needed to determine the method of driving. Do we want wheels or tracks? Do we want to steer with Ackermann steering, rocker-bogie, skid steer, etc.?

See here for more details on steering selection.

We chose to use a skid-steer, four-wheel drive approach for the simplicity of control and the ability to turn in place (point turns). At the start of the competition we were not focused on stair climbing, which might otherwise have changed some of our design decisions.

Suspension

The next step was to determine the suspension type. A suspension is needed so that all four of the wheels make contact with the ground. If the robot had a static fixed frame only three of the wheels might make contact with the ground when on uneven surfaces. This would reduce our stability and traction.

We decided early on that we wanted a passive suspension for the simplicity of not having active components. With a passive suspension we were looking at different types of body averaging. We roughly had two choices: front-pivot or side-to-side differencing.

Left image shows a front-pivot approach. Right image shows a side-to-side differencing method.

We chose the front-pivot method, but decided to make the pivot roughly centered in the vehicle. This allowed us to put all of the electronics in the front and the batteries in the rear. We felt the front-pivot method would be better for climbing up stairs and for climbing over obstacles on roughly level terrain. Also importantly, this approach made it easier to carry a drone on the ground vehicle.

Chassis design

At this point we started designing the chassis. This was an important step so that we could estimate the total weight in order to spec the drive-train. Ideas for the chassis ranged from building with 80/20, to building an aluminum frame and populating it with components, to a solid welded chassis. We chose a welded steel tube chassis for its strength. We needed a robot that could survive anything we did to it. This proved to be a wise decision when the robot crashed or fell off cliffs. The downside of the steel was increased mass.

For the pivot we found a large crossed roller bearing that we were able to use to attach the two steel boxes together. The large bore in the middle was useful for passing wires/cables through for batteries, motors, etc…

Part of the chassis design was also determining where all of the components should mount. Having the batteries (green boxes in image above) in the rear helps us climb obstacles. Other goals were to keep the ground clearance as high as possible while keeping the center of gravity (CG) as low as possible. Since those are competing goals, part of the design process was to develop a happy medium.
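
As a rough illustration of why the CG height matters (this is not Team Explorer's analysis; the dimensions below are made up for the example), the static side-slope tip-over angle can be estimated from the track width and CG height:

// tipover.cpp - crude static stability check; illustrative numbers only.
#include <cmath>
#include <cstdio>

int main() {
  const double pi = 3.14159265358979;
  const double track_width_m = 0.7;  // assumed distance between left/right wheel contact lines
  const double cg_height_m = 0.35;   // assumed CG height above the ground

  // The vehicle starts to tip sideways when the CG projects outside the
  // wheel contact line: theta = atan((track / 2) / cg_height).
  const double tip_angle_deg =
      std::atan((track_width_m / 2.0) / cg_height_m) * 180.0 / pi;

  std::printf("static side-slope tip-over angle: %.1f deg\n", tip_angle_deg);
  return 0;
}

Raising the ground clearance typically raises the CG too, which shrinks this angle; that is the trade-off being balanced here.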

In order to maintain modularity for service, each wheel module had the motor controller, motor, gearbox, and bearing block as a single unit that could be swapped between robots if there were any issues. This also allowed most of the wiring to be part of that block. The only cables that needed to be connected from the robot to each of the modules were power, CAN communications, and the emergency stop line, all of which were connectorized.

For the electronics on R1 and R2 we built an electronics box that was separate from the robot and could be removed as needed. On R3 we built the electronics into the robot itself. This modular approach was very useful when we had to do some welding on the chassis post-build for modifications. The downside of the modular approach was that working in the electronics box was more difficult than in the open R3. Also, the time for fabricating and wiring the R1/R2 electronics boxes was considerably more than for the open R3 electronics. We also had several failures during testing related to the connectors on the electronics boxes.

Wheel design

We debated a lot about what type of wheel to use; ultimately we used motorcycle wheels due to the simplicity of obtaining and mounting them. The wheel diameter we desired also lined up very well with motorcycle wheels. To get better traction and a better ability to climb over obstacles, we liked the wider tires.

R1 and R2 had a wheel diameter of 0.55m, R3 had a wheel diameter of 0.38m. This gave R1 and R2 a ground clearance of 0.2m, and R3 a ground clearance of 0.12m.

The wheel hubs ended up being a different story. We found solid metal rims that we had to machine large amounts of metal out of in order to balance the strength and the weight.

The R1 and R2 robots were around 180 kg (400 lb)*, while the wheels were rated for a significantly heavier vehicle. As such, we put a small amount of pressure in the tires to keep them from coming off, but we tried to keep the pressure low to increase the ground compliance of the wheels. This method added only a very small amount of compliance; we tried removing some of the rubber from the sidewalls, but were not able to find a happy medium between limiting wheel deformation during point turns and increasing ground compliance.

We were also concerned about how the motorcycle tires would do when point turning and whether we would rip the tires from the rims. To counter this we installed a beadlock system in each of the wheels. The beadlock was a curved segment installed in multiple places to sandwich the tire to the rim. We never had a tire separate from the rim, so our approach definitely worked; however, it was a pain to install.

*R3 was around 90 kg (200 lbs). We tried using different wheels and tracks to get R3 to climb stairs well. However that story is for another post…

The black rims were solid metal that we machined wedge-shaped pockets into in order to reduce their weight. The three metal posts in those wedges are the beadlock tensioning bolts. You can also see the castle nut and pin that hold the wheel to the axle. This image is from R2; you can see the gap between the front and rear sections of the robot where the pivot is.

Drive-train selection

Now that we had a mass estimate and system requirements for speed and obstacle clearance, we could start to spec the drive-train. The other piece of information that we needed, and had to discuss with the electrical team, was the voltage of the battery. Different bus voltages greatly affect the motors available for a given speed and torque. We decided on a 51.2 V nominal bus voltage. This presented a problem, since it was very hard to find the speeds/torques we wanted at that voltage. We ended up selecting a 400 W (1/2 HP) motor from Oriental Motor paired with a 100:1 parallel-shaft gearbox, which allows us to drive at a maximum speed of 2.5 m/s.

The part numbers of the motors and gearbox on R1 and R2 were BLVM640N-GFS + GFS6G100FR.

The part numbers of the motors and gearbox on the smaller R3 were Maxon EC 90 Flat + GP81A.
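
For readers following along, here is a back-of-the-envelope version of that sizing exercise (a sketch only, with illustrative values and none of the team's actual margins, gearbox efficiencies or duty-cycle considerations):

// drivetrain_sizing.cpp - rough drive-train sizing sketch; illustrative only.
#include <cmath>
#include <cstdio>

int main() {
  const double pi = 3.14159265358979;
  const double mass_kg = 180.0;      // approximate R1/R2 mass from this post
  const double wheel_dia_m = 0.55;   // R1/R2 wheel diameter from this post
  const double top_speed_mps = 2.5;  // target top speed
  const double slope_deg = 30.0;     // assumed worst-case slope (an assumption)
  const double g = 9.81;

  // Wheel speed required to hit the target ground speed.
  const double wheel_rpm = top_speed_mps / (pi * wheel_dia_m) * 60.0;

  // Torque per wheel to push the robot up the slope, split across 4 driven
  // wheels and ignoring rolling resistance and obstacle-climbing forces.
  const double torque_per_wheel_nm =
      mass_kg * g * std::sin(slope_deg * pi / 180.0) * (wheel_dia_m / 2.0) / 4.0;

  std::printf("wheel speed: %.1f rpm, torque per wheel: %.1f Nm\n",
              wheel_rpm, torque_per_wheel_nm);
  return 0;
}

Numbers like these are then checked against the motor + gearbox datasheet at the chosen bus voltage.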

Next steps

Now that we know the mechanics of the robot we can start building it. In the next post we will start looking at the electronics and motor controls. While the nature of the blog makes it seem that this design is a serial process, in reality lots of things happen in parallel. While the mechanical team is designing the chassis, the electrical team is finding the electrical components needed so that the mechanical designer knows what needs to be mounted.

It is also important to work with the electrical team to figure out wire routing while the chassis is being developed.


Note from the editor: This post has been merged from the posts “Hands On Ground Robot & Drone Design Series” and “Mechanical & Wheels – Hands On Ground Robot Design“.

]]>
Robots: friends and enemies? Social impact of robotics in inspection and maintenance https://robohub.org/robots-friends-and-enemies-social-impact-of-robotics-in-inspection-and-maintenance/ Thu, 10 Feb 2022 15:23:01 +0000 https://robohub.org/?p=203524

Industrial robotics

By Anastasiia Nestrogaeva (Junior Consultant at the International Projects Team, Civitta)

Robotics4EU is a 3-year-long EU-funded project which advocates for a wider adoption of AI-based robots in 4 sectors: healthcare, inspection and maintenance of infrastructure, agri-food, and agile production. Thus, Robotics4EU raises awareness about non-technological aspects of robotics through delivering a series of workshops to involve the research community, industry representatives and citizens.

The workshop “Robots: Friends and Enemies? Social impact of Robotics in Inspection and Maintenance”, which took place on the 26th of January 2022, tackled the problem of interactions between robots and humans. How do we evaluate the real impact of robotics on our society? How do we decide whether robots are hazardous and may complicate human lives? To what extent does society accept rapidly progressing robotic technologies? All of these issues were brought up during the fruitful discussion at the workshop. Around 30 people attended, with 65% of them belonging to the research community and 10% being industry representatives. About 25% comprised the general public.

How to assess trustworthy AI?

The workshop started with a presentation of the project and a brainwriting session to better involve participants in the discussion; then four speakers shared their expertise with the participants. The first speaker was Roberto Zicari, who is an affiliated professor at the Arcada University of Applied Sciences in Helsinki and an adjunct professor at the Seoul National University. Roberto also leads a team of international experts who defined an assessment process for Trustworthy AI, called Z-Inspection®. The speaker listed the four fundamental principles which underpin the EU Framework: respect for human autonomy, prevention of harm, fairness, and explicability. There are also seven main EU requirements for trustworthy AI: accountability; societal and environmental well-being; diversity, non-discrimination and fairness; transparency; privacy and data governance; technical robustness and safety; and human agency and oversight.

Despite the existing requirements, the EU Framework does not take into account the evolving nature of the technologies and is not contextualised according to various domain peculiarities. That is why, as Roberto stressed, Z-Inspection uses a holistic approach and created an orchestration process which helps teams of experts assess the ethical, legal, technical and domain-specific implications of the use of AI products or services. This process can be employed during the whole AI lifecycle and uses socio-technical scenarios to identify issues. The team of experts then maps these onto trustworthy AI (the ethical categories established by the EU’s Guidelines for Trustworthy AI), executes, and resolves (gives recommendations to stakeholders).

Embracing our future digital colleagues

The second speech was delivered by Maarit Sandelin, who works as the European Network Manager at SPRINT Robotics, focusing on robotic innovation. Maarit explained that SPRINT Robotics is an industry-driven initiative that promotes the development, availability and application of Inspection & Maintenance robotics around the world. It has four focus areas: safety improvement, cost avoidance and reduction, environmental performance improvement, and general performance improvement.

Maarit stressed that COVID-19 has definitely been increasing business interest in robotic solutions. A total of 88% of businesses worldwide plan on adopting robotic automation into their infrastructure to increase efficiency and safety. There will also be a 12% increase in shipments of robots worldwide, and collaborative robots (cobots) will constitute 34% of all robot sales by 2025. The speaker noted that many dangerous inspection and maintenance activities can be done by robots, and that efficiency gains are achieved mainly through the preparation stage. However, the main barriers to integrating robotic solutions into operations are regulatory. In some countries, the use of robotics is not allowed by legislation or its interpretations, which, in turn, causes issues for service and technical providers. Maarit strongly believes that policy changes are needed to contribute to the adoption of robots.

There are also other barriers to widespread adoption. Companies would need to reduce their workforce, modify the location where they operate, or even modify the whole value chain. The skills gaps in local labour markets are also seen as a potential risk which would require proper training. Talking about the displacement of jobs, Maarit stated that it will continue and that 85 million jobs will be displaced by 2025, but 97 million new jobs may appear due to a new division of work between humans and machines. Maarit said there is a need to show employees that robotics is not black and white, and that reskilling may help to save jobs.

BugWright2 Project

The second half of the workshop featured the BugWright2 project, which works on autonomous robotic inspection and maintenance of ship hulls and storage tanks. Alberto Ortiz Rodrigez, who works as a Professor at the University of the Balearic Islands, gave information about the goals and main activities of the project. The project has a five-step approach: multi-robot task allocation; mission planning; autonomous inspection; data post-processing and actionable data generation; and maintenance, which includes hull cleaning, augmented reality and defect marking. Alberto showed a few examples of robots which perform different tasks. Lastly, the speaker stressed the importance of the project, listing four potential impacts. First, it demonstrates how automated multi-robot technologies can be deployed on a large-scale industrial problem. Second, such technologies lead to cost efficiency: fuel savings, lower service costs, and no immobilisation costs. Third, it brings a positive environmental impact, as it makes ships and tanks safer and contributes to lower fuel consumption and a smaller need for antifouling. Lastly, these autonomous inspections are regulated by the International Maritime Organization (a UN agency).

Human-Robot-Collaboration: perspectives from work and organisational psychology

The last speech was delivered by Thomas Ellwart, who works as a Professor at the University of Trier. He touched on the topic of interactions between robots and humans. Bringing up the main topic of the workshop, “Are robots friends or enemies?”, the speaker noted that there is a need to define criteria for the terms “friends” and “enemies” or, in other words, criteria for functional / dysfunctional human-robot collaboration (HRC). For psychologists, the most important thing to evaluate in HRC is the very specific tasks of the employees, for example on a ship. Thomas is convinced that there is a need to check abstract models of robotic solutions against the on-site user perspective. Talking about criteria for functional HRC, it should:

  • Facilitate the proper execution of the tasks
  • Protect health and increase safety
  • Promote individual well-being
  • Develop skills and human abilities
  • Avoid under- or overload demands, isolated work, task hindrances, etc.

However, high robot autonomy also causes some dysfunctional effects, such as the exclusion of humans for safety reasons, reduced possibilities to apply and train skills, responsibility to react in case of failures or disturbances, and low-quality residual tasks. In this way, humans are excluded from task performance, but they are still held responsible when the system malfunctions. High autonomy also creates a high interdependence between human tasks and robots: if a robot breaks, a task is in danger, which may delay its accomplishment. Thomas also stressed that acceptance is a double-edged matter: overtrust is as dangerous as mistrust.

To reflect all the criteria, the psychologists look at specific tasks, asking what level of autonomy the robot has and what subtasks it performs. Is it making decisions or just monitoring and implementing actions? Thus, from the user perspective, there are critical factors which should be taken into account:

  • Task (low effectiveness, high costs, hindrances)
  • Technology (reliability, accuracy, maintenance needs)
  • Human (e.g. trust, control, cognitive load: how many robots can be overseen by a human)
  • Organisation (e.g. different stakeholders, roles)

As one of the aims of Robotics4EU is to develop the Responsible Robotics Assessment Model, the main coordinator of the project, Anneli Roose, talked about the social readiness of robots. In order to determine this, it is important to figure out to what extent a robot meets a) ethical values, b) socio-economic needs, c) data issues, d) education and management issues, and e) legal concerns. The model does not cover formal legal or regulatory compliance requirements. The foundation of the Robotics4EU Assessment Model is the Societal Readiness Levels, while the tools used are surveys, interviews, debates, workshops, and stakeholder forums involving the robotics research community, policy makers, and citizens.

Panel discussion

Lastly, the workshop finished with the panel discussion which brought up very controversial topics.

How do you design a self-assessment model or self-assessment tool in a way that it will still be valid in 2-3 years? Is that possible, or even relevant?

Thomas Ellwart stressed that this is quite a difficult question, as it requires involving the users and the market. Another matter is whether the automated system will be sustainable, adaptable to the changing environment and reliable over time, which is really a question for the developers and engineers. Roberto Zicari noted that the Maturity Assessment Model should be domain specific, as a robot assisting in healthcare is very different from a robot replacing mine workers. Thus, it is quite difficult to transfer knowledge obtained in the healthcare sector to Inspection and Maintenance. He stressed that it would be very interesting to create an interdisciplinary working group focused on several domains and try to assess societal readiness levels.

Are the regulatory barriers significant? Are these barriers in the EU regulations and/or national regulations?

Thomas Ellwart stated that there are two difficult matters, safety and data protection, which come together when we talk about the regulation of AI. Roberto Zicari mentioned that the EU has proposed the first AI Regulation Act, aimed at regulating any AI-based systems and solutions and based on a risk-assessment approach. This proposed act looks at AI mainly as software, in contrast to the classical definition. However, which exact composition of hardware and AI poses a high risk, and which a low risk? Roberto stressed that there is a need for discussions among researchers, the legal community and industry experts. Some participants also noted that the regulatory framework would need to be adapted to cultural differences around the world.

Upcoming workshop

The upcoming workshop, “Boosting Innovations and Maximising Societal Impact. Role of Digital Innovation Hubs (DIHs) in Robotics for Inspection & Maintenance”, will take place on the 23rd of February 2022, 11:00-14:00 CET. We will look at digital innovation hubs as a way to connect institutions and SMEs, helping to close the knowledge gap on non-technological issues of robotics in Inspection & Maintenance. We will focus on opportunities which bring SMEs and tech startups together and on how they can potentially boost their innovations in the respective area. The workshop will feature representatives from digital innovation hubs, as well as industry experts.

]]>
Tamim Asfour’s Keynote talk – Learning humanoid manipulation from humans https://robohub.org/tamim-asfours-keynote-talk-learning-humanoid-manipulation-from-humans/ Tue, 25 Jan 2022 12:56:43 +0000 https://robohub.org/?p=203369

Through manipulation, a robotic system can physically change the state of the world. This is intertwined with intelligence, which is the ability whereby such a system can detect and adapt to change. In his talk, Tamim Asfour gives an overview of his lab’s developments in manipulation for robotic systems, achieved by learning manipulation task models from human observations, and of the challenges and open questions associated with this.

Bio: Tamim Asfour is full Professor at the Institute for Anthropomatics and Robotics, where he holds the chair of Humanoid Robotics Systems and is head of the High Performance Humanoid Technologies Lab (H2T). His current research interest is high performance 24/7 humanoid robotics. Specifically, his research focuses on engineering humanoid robot systems that integrate the mechano-informatics of such systems with the capabilities of predicting, acting and learning from human demonstration and sensorimotor experience. He is the developer and leader of the development team of the ARMAR humanoid robot family.

]]>
Pietro Valdastri’s Plenary Talk – Medical capsule robots: a Fantastic Voyage https://robohub.org/pietro-valdastris-plenary-talk-medical-capsule-robots-a-fantastic-voyage/ Fri, 10 Dec 2021 12:38:33 +0000 https://robohub.org/?p=202668

At the beginning of the new millennium, wireless capsule endoscopy was introduced as a minimally invasive method of inspecting the digestive tract. The possibility of collecting images deep inside the human body just by swallowing a “pill” revolutionized the field of gastrointestinal endoscopy and sparked a brand-new field of research in robotics: medical capsule robots. These are self-contained robots that leverage extreme miniaturization to access and operate in environments that are out of reach for larger devices. In medicine, capsule robots can enter the human body through natural orifices or small incisions, and detect and cure life-threatening diseases in a non-invasive manner. This talk provides a perspective on how this field has evolved in the last ten years. We explore what was accomplished, what has failed, and what the lessons learned were. We also discuss enabling technologies, intelligent control, possible levels of computer assistance, and highlight future challenges in this ongoing Fantastic Voyage.

Bio: Pietro Valdastri (Senior Member, IEEE) received his master’s degree (Hons.) from the University of Pisa in 2002, and his Ph.D. degree in biomedical engineering from Scuola Superiore Sant’Anna in 2006. He is Professor and Chair of Robotics and Autonomous Systems at the University of Leeds. His research interests include robotic surgery, robotic endoscopy, design of magnetic mechanisms, and medical capsule robots. He is a recipient of the Wolfson Research Merit Award from the Royal Society.

]]>
Exploring ROS2 with a wheeled robot – #4 – Obstacle avoidance https://robohub.org/exploring-ros2-with-a-wheeled-robot-4-obstacle-avoidance/ Mon, 06 Dec 2021 10:28:30 +0000 https://www.theconstructsim.com/?p=26498

By Marco Arruda

In this post you’ll learn how to program a robot to avoid obstacles using ROS2 and C++. By the end of the post, the Dolly robot moves autonomously in a scene with many obstacles, simulated using Gazebo 11.

You’ll learn:

  • How to publish AND subscribe to topics in the same ROS2 Node
  • How to avoid obstacles
  • How to implement your own algorithm in ROS2 and C++

1 – Setup environment – Launch simulation

Before anything else, make sure you have the rosject from the previous post; you can copy it from here.

Launch the simulation in one webshell and, in a different tab, check out the topics we have available. You must get something similar to the image below:
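
If the screenshot does not render in this version of the post, listing the topics from the second shell is enough to follow along; with the Dolly simulation from the first post of the series running, you should see /dolly/laser_scan and /dolly/cmd_vel among them:

ros2 topic list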

2 – Create the node

In order to have our obstacle avoidance algorithm, let’s create a new executable in the file ~/ros2_ws/src/my_package/src/obstacle_avoidance.cpp (the src/ subfolder matches the path used in CMakeLists.txt below):

#include "geometry_msgs/msg/twist.hpp"    // Twist
#include "rclcpp/rclcpp.hpp"              // ROS Core Libraries
#include "sensor_msgs/msg/laser_scan.hpp" // Laser Scan

using std::placeholders::_1;

class ObstacleAvoidance : public rclcpp::Node {
public:
  ObstacleAvoidance() : Node("ObstacleAvoidance") {

    auto default_qos = rclcpp::QoS(rclcpp::SystemDefaultsQoS());
    subscription_ = this->create_subscription<sensor_msgs::msg::LaserScan>(
        "laser_scan", default_qos,
        std::bind(&ObstacleAvoidance::topic_callback, this, _1));
    publisher_ =
        this->create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
  }

private:
  void topic_callback(const sensor_msgs::msg::LaserScan::SharedPtr _msg) {
    // 200 readings, from right to left, from -57 to 57 degrees
    // find the minimum range, then calculate a new velocity command
    float min = 10;
    for (int i = 0; i < 200; i++) {
      float current = _msg->ranges[i];
      if (current < min) {
        min = current;
      }
    }
    auto message = this->calculateVelMsg(min);
    publisher_->publish(message);
  }
  geometry_msgs::msg::Twist calculateVelMsg(float distance) {
    auto msg = geometry_msgs::msg::Twist();
    // logic
    RCLCPP_INFO(this->get_logger(), "Distance is: '%f'", distance);
    if (distance < 1) {
      // turn around
      msg.linear.x = 0;
      msg.angular.z = 0.3;
    } else {
      // go straight ahead
      msg.linear.x = 0.3;
      msg.angular.z = 0;
    }
    return msg;
  }
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr publisher_;
  rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr subscription_;
};

int main(int argc, char *argv[]) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ObstacleAvoidance>());
  rclcpp::shutdown();
  return 0;
}

In the main function we have:

  • Initialize node rclcpp::init
  • Keep it running rclcpp::spin

Inside the class constructor:

  • Subscribe to the laser scan messages: subscription_
  • Publish to the robot diff drive: publisher_

The obstacle avoidance intelligence goes inside the method calculateVelMsg. This is where decisions are made based on the laser readings. Notice that it depends purely on the minimum distance read from the message.

If you want to customize it, for example to consider only the readings in front of the robot, or even to check whether it is better to turn left or right, this is the place to work on! Remember to adjust the parameters as well, because as written, only the minimum value reaches this method. One possible variant is sketched below.
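
As one possible starting point (this sketch is not part of the original post), a variant of topic_callback that skips calculateVelMsg, looks only at a central front sector and turns towards whichever side has more free space could look roughly like this. It is a drop-in replacement inside the ObstacleAvoidance class above, the 200-reading / ±57° layout is taken from the comment in the code, and it additionally needs #include <algorithm> for std::min:

  // Sketch: sector-based variant of topic_callback (replaces the version above).
  void topic_callback(const sensor_msgs::msg::LaserScan::SharedPtr _msg) {
    float min_right = 10, min_front = 10, min_left = 10;
    for (int i = 0; i < 200; i++) {
      float r = _msg->ranges[i];
      if (i < 80) {                        // right part of the scan
        min_right = std::min(min_right, r);
      } else if (i < 120) {                // central front sector
        min_front = std::min(min_front, r);
      } else {                             // left part of the scan
        min_left = std::min(min_left, r);
      }
    }
    auto msg = geometry_msgs::msg::Twist();
    if (min_front < 1.0) {
      msg.linear.x = 0;
      msg.angular.z = (min_left > min_right) ? 0.3 : -0.3;  // turn towards the open side
    } else {
      msg.linear.x = 0.3;
      msg.angular.z = 0;
    }
    publisher_->publish(msg);
  }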

3 – Compile the node

This executable depends on both geometry_msgs and sensor_msgs, which we added in the two previous posts of this series. Make sure you have them at the beginning of the ~/ros2_ws/src/my_package/CMakeLists.txt file:

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(sensor_msgs REQUIRED)

And finally, add the executable and install it:

# obstacle avoidance
add_executable(obstacle_avoidance src/obstacle_avoidance.cpp)
ament_target_dependencies(obstacle_avoidance rclcpp sensor_msgs geometry_msgs)

...

install(TARGETS
  reading_laser
  moving_robot
  obstacle_avoidance
  DESTINATION lib/${PROJECT_NAME}/
)

Compile the package:
colcon build --symlink-install --packages-select my_package

4 – Run the node

In order to run, use the following command:
ros2 run my_package obstacle_avoidance

It will not work for this robot! Why is that? We are subscribing and publishing to generic topics: cmd_vel and laser_scan.

We need a launch file to remap these topics, so let’s create one at ~/ros2_ws/src/my_package/launch/obstacle_avoidance.launch.py:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():

    obstacle_avoidance = Node(
        package='my_package',
        executable='obstacle_avoidance',
        output='screen',
        remappings=[
            ('laser_scan', '/dolly/laser_scan'),
            ('cmd_vel', '/dolly/cmd_vel'),
        ]
    )

    return LaunchDescription([obstacle_avoidance])

Recompile the package, source the workspace once more and launch it:
colcon build --symlink-install --packages-select my_package
source ~/ros2_ws/install/setup.bash
ros2 launch my_package obstacle_avoidance.launch.py

Related courses & extra links:

The post Exploring ROS2 with a wheeled robot – #4 – Obstacle Avoidance appeared first on The Construct.

]]>
Exploring ROS2 using wheeled Robot – #3 – Moving the robot https://robohub.org/exploring-ros2-using-wheeled-robot-3-moving-the-robot/ Tue, 30 Nov 2021 16:21:19 +0000 https://www.theconstructsim.com/?p=26493 By Marco Arruda

In this post you’ll learn how to publish to a ROS2 topic using ROS2 C++. By the end of the video, we are moving the Dolly robot, simulated using Gazebo 11.

You’ll learn:

  • How to create a node with ROS2 and C++
  • How to publish to a topic with ROS2 and C++

1 – Setup environment – Launch simulation

Before anything else, make sure you have the rosject from the previous post; you can copy it from here.

Launch the simulation in one webshell and, in a different tab, check out the topics we have available. You must get something similar to the image below:

2 – Create a topic publisher

Create a new file to contain the publisher node at ~/ros2_ws/src/my_package/src/moving_robot.cpp and paste the following content:

#include <chrono>
#include <functional>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "geometry_msgs/msg/twist.hpp"

using namespace std::chrono_literals;

/* This example creates a subclass of Node and uses std::bind() to register a
 * member function as a callback from the timer. */

class MovingRobot : public rclcpp::Node {
public:
  MovingRobot() : Node("moving_robot"), count_(0) {
    publisher_ =
        this->create_publisher<geometry_msgs::msg::Twist>("/dolly/cmd_vel", 10);
    timer_ = this->create_wall_timer(
        500ms, std::bind(&MovingRobot::timer_callback, this));
  }

private:
  void timer_callback() {
    auto message = geometry_msgs::msg::Twist();
    message.linear.x = 0.5;
    message.angular.z = 0.3;
    RCLCPP_INFO(this->get_logger(), "Publishing: '%f.2' and %f.2",
                message.linear.x, message.angular.z);
    publisher_->publish(message);
  }
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr publisher_;
  size_t count_;
};

int main(int argc, char *argv[]) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MovingRobot>());
  rclcpp::shutdown();
  return 0;
}

As with the subscriber, we create a class that inherits from Node. A publisher_ is set up, along with a callback; this time it is not a callback that receives messages, but a timer_callback called at the frequency defined by the timer_ variable. This callback is used to publish messages to the robot.

The create_publisher method needs two arguments:

  • topic name
  • QoS (Quality of Service) – This is the policy for how messages are queued and delivered. You can configure different profiles or use the defaults provided. Here we are just setting up a queue depth of 10, which keeps the last 10 messages sent to the topic (see the short equivalent form below).
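
For reference (this equivalent form is not shown in the original post), the plain 10 passed to create_publisher is shorthand for a keep-last-10 history; written out with an explicit QoS object it would look roughly like:

    publisher_ = this->create_publisher<geometry_msgs::msg::Twist>(
        "/dolly/cmd_vel", rclcpp::QoS(rclcpp::KeepLast(10)));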

The message published must be created using the class imported:

auto message = geometry_msgs::msg::Twist();

This ensures the callback methods on the subscriber side will always recognize the message. The message is then sent out using the publisher’s publish method.

3 – Compile and run the node

In order to compile we need to adjust some things in the ~/ros2_ws/src/my_package/CMakeLists.txt. So add the following to the file:

  • Add the geometry_msgs dependency
  • Append the executable moving_robot
  • Add install instruction for moving_robot
find_package(geometry_msgs REQUIRED)
...
# moving robot
add_executable(moving_robot src/moving_robot.cpp)
ament_target_dependencies(moving_robot rclcpp geometry_msgs)
...
install(TARGETS
  moving_robot
  reading_laser
  DESTINATION lib/${PROJECT_NAME}/
)

We can compile and run the node like below:

colcon build --symlink-install --packages-select my_package
source ~/ros2_ws/install/setup.bash
ros2 run my_package moving_robot
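
Once the node is running alongside the simulation, a quick sanity check (not part of the original post) is to echo the command topic from another shell; you should see Twist messages arriving at roughly 2 Hz, matching the 500 ms timer:

ros2 topic echo /dolly/cmd_vel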

Related courses & extra links:

The post Exploring ROS2 using wheeled Robot – #3 – Moving the Robot appeared first on The Construct.

]]>
Exploring ROS2 with wheeled robot – #2 – How to subscribe to ROS2 laser scan topic https://robohub.org/exploring-ros2-with-wheeled-robot-2-how-to-subscribe-to-ros2-laser-scan-topic/ Thu, 11 Nov 2021 16:39:13 +0000 https://www.theconstructsim.com/?p=26483 By Marco Arruda

This is the second chapter of the series “Exploring ROS2 with a wheeled robot”. In this episode, you’ll learn how to subscribe to a ROS2 topic using ROS2 C++.

You’ll learn:

  • How to create a node with ROS2 and C++
  • How to subscribe to a topic with ROS2 and C++
  • How to launch a ROS2 node using a launch file

1 – Setup environment – Launch simulation

Before anything else, make sure you have the rosject from the previous post; you can copy it from here.

Launch the simulation in one webshell and, in a different tab, check out the topics we have available. You must get something similar to the image below:

2 – Create a ROS2 node

Our goal is to read the laser data, so create a new file called reading_laser.cpp:

touch ~/ros2_ws/src/my_package/src/reading_laser.cpp

And paste the content below:

#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/laser_scan.hpp"

using std::placeholders::_1;

class ReadingLaser : public rclcpp::Node {

public:
  ReadingLaser() : Node("reading_laser") {

    auto default_qos = rclcpp::QoS(rclcpp::SystemDefaultsQoS());

    subscription_ = this->create_subscription<sensor_msgs::msg::LaserScan>(
        "laser_scan", default_qos,
        std::bind(&ReadingLaser::topic_callback, this, _1));
  }

private:
  void topic_callback(const sensor_msgs::msg::LaserScan::SharedPtr _msg) {
    RCLCPP_INFO(this->get_logger(), "I heard: '%f' '%f'", _msg->ranges[0],
                _msg->ranges[100]);
  }
  rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr subscription_;
};

int main(int argc, char *argv[]) {
  rclcpp::init(argc, argv);
  auto node = std::make_shared<ReadingLaser>();
  RCLCPP_INFO(node->get_logger(), "Hello my friends");
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}

We are creating a new class ReadingLaser that represents the node (it inherits rclcpp::Node). The most important parts of that class are the subscriber attribute and the callback method. In the main function we initialize the node and keep it alive (spin) while its ROS connection is valid.

The subscription constructor expects a QoS (Quality of Service) profile, which defines how the middleware handles message delivery. You can find more information about it in the reference attached, but in this post we are just using the default QoS provided. Keep in mind the following parameters:

  • topic name
  • callback method

The callback method needs to be bound, which means it will not be executed at the subscription declaration, but only when the callback is invoked. So we pass a reference to the method and set up the this pointer of the current object to be used for the callback; after all, the method itself is a generic implementation belonging to the class.
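
As a side note (not in the original post), rclcpp also accepts a lambda in place of std::bind, so the same subscription could be written roughly as:

    subscription_ = this->create_subscription<sensor_msgs::msg::LaserScan>(
        "laser_scan", default_qos,
        [this](const sensor_msgs::msg::LaserScan::SharedPtr msg) {
          // same body as topic_callback above
          RCLCPP_INFO(this->get_logger(), "I heard: '%f' '%f'",
                      msg->ranges[0], msg->ranges[100]);
        });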

3 – Compile and run

In order to compile the cpp file, we must add some instructions to the ~/ros2_ws/src/my_package/CMakeLists.txt:

  • Look for find dependencies and include the sensor_msgs library
  • Just before the install instruction add the executable and target its dependencies
  • Append another install instruction for the new executable we’ve just created
# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(sensor_msgs REQUIRED)
...

...
add_executable(reading_laser src/reading_laser.cpp)
ament_target_dependencies(reading_laser rclcpp sensor_msgs)
...

...
install(TARGETS
  reading_laser
  DESTINATION lib/${PROJECT_NAME}/
)

Compile it:

colcon build --symlink-install --packages-select my_package

4 – Run the node and remap the topic

In order to run the executable created, you can use:

ros2 run my_package reading_laser

The laser values won’t show up yet, though. That’s because we have a hard-coded topic name, laser_scan. No problem at all: we can remap topics using launch files. Create a new launch file at ~/ros2_ws/src/my_package/launch/reading_laser.launch.py:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():

    reading_laser = Node(
        package='my_package',
        executable='reading_laser',
        output='screen',
        remappings=[
            ('laser_scan', '/dolly/laser_scan')
        ]
    )

    return LaunchDescription([
        reading_laser
    ])

In this launch file there is a Node instance that takes the executable as an argument, and its remappings attribute is set up in order to remap laser_scan to /dolly/laser_scan.
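
For a quick one-off test, the same remapping can also be passed directly on the command line instead of writing a launch file (an alternative not covered in the original post):

ros2 run my_package reading_laser --ros-args -r laser_scan:=/dolly/laser_scan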

Run the same node using the launch file this time:

ros2 launch my_package reading_laser.launch.py

Add some obstacles to the world and the result must be similar to:

Related courses & extra links:

The post Exploring ROS2 with wheeled robot – #2 – How to subscribe to ROS2 laser scan topic appeared first on The Construct.

]]>
Exploring ROS2 with wheeled robot – #1 – Launch ROS2 Simulation https://robohub.org/exploring-ros2-with-wheeled-robot-1-launch-ros2-simulation/ Fri, 05 Nov 2021 13:11:11 +0000 https://www.theconstructsim.com/?p=26461 By Marco Arruda

This is the first chapter of the series “Exploring ROS2 with a wheeled robot”. In this episode, we setup our first ROS2 simulation using Gazebo 11. From cloning, compiling and creating a package + launch file to start the simulation!

You’ll learn:

  • How to Launch a simulation using ROS2
  • How to Compile ROS2 packages
  • How to Create launch files with ROS2

1 – Start the environment

In this series we are using ROS2 Foxy. Go to this page, create a new rosject selecting the ROS2 Foxy distro, and run it.

2 – Clone and compile the simulation

The first step is to clone the dolly robot package. Open a web shell and execute the following:

cd ~/ros2_ws/src/
git clone https://github.com/chapulina/dolly.git

Source the ROS 2 installation folder and compile the workspace:

source /opt/ros/foxy/setup.bash
cd ~/ros2_ws
colcon build --symlink-install --packages-ignore dolly_ignition

Notice we are ignoring the Ignition-related package; that’s because we will work only with the Gazebo simulator.

3 – Create a new package and launch file

In order to launch the simulation, we will create the launch file from scratch. It goes like this:

cd ~/ros2_ws/src
ros2 pkg create my_package --build-type ament_cmake --dependencies rclcpp

After that, you must have the new folder my_package in your workspace. Create a new folder to contain launch files and the new launch file as well:

mkdir -p ~/ros2_ws/src/my_package/launch
touch ~/ros2_ws/src/my_package/launch/dolly.launch.py

Copy and paste the following to the new launch file:

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():

    pkg_gazebo_ros = get_package_share_directory('gazebo_ros')
    pkg_dolly_gazebo = get_package_share_directory('dolly_gazebo')

    gazebo = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_gazebo_ros, 'launch', 'gazebo.launch.py')
        )
    )

    return LaunchDescription([
        DeclareLaunchArgument(
            'world',
            default_value=[
                os.path.join(pkg_dolly_gazebo, 'worlds', 'dolly_empty.world'), ''
            ],
            description='SDF world file',
        ),
        gazebo
    ])

Notice that a launch file returns a LaunchDescription that contains nodes or other launch files.

In this case, we have just included another launch file gazebo.launch.py and changed one of its arguments, the one that stands for the world name: world.

The robot, in that case, is included in the world file, so there is no need to have an extra spawn node, for example.

Then make sure the end of the file ~/ros2_ws/src/my_package/CMakeLists.txt contains the following install instruction, so that the new launch file is installed into the ROS 2 environment (ament_package() must remain the last call and should not be duplicated):

install(DIRECTORY
	launch
	DESTINATION share/${PROJECT_NAME}/
)
ament_package()

4 – Compile and launch the simulation

Use the command below to compile only the created package:

cd ~/ros2_ws/
colcon build --symlink-install --packages-select my_package
source ~/ros2_ws/install/setup.bash
ros2 launch my_package dolly.launch.py
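
Since world was declared as a launch argument, you should in principle be able to point it at a different SDF file from the command line (the path below is only a placeholder):

ros2 launch my_package dolly.launch.py world:=/path/to/another.world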

5 – Conclusion

This is how you can launch a simulation in ROS2. It is important to notice that:

  • We are using a pre-made simulation: world + robot
  • This is how a launch file is created: A python script
  • In ROS2, you still have the same freedom of including other files or running executables inside a custom launch file (see the sketch just after this list)
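
For example, a hypothetical extension of dolly.launch.py could start an extra node of your own next to Gazebo (my_node is a placeholder executable, not something created in this series):

# Hypothetical addition inside generate_launch_description() in dolly.launch.py
from launch_ros.actions import Node

extra_node = Node(
    package='my_package',
    executable='my_node',   # placeholder executable name
    output='screen',
)

# ...and include it in the returned description, e.g.:
# return LaunchDescription([<world argument>, gazebo, extra_node])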

Related courses & extra links:

The post Exploring ROS2 with wheeled robot – #1 – Launch ROS2 Simulation appeared first on The Construct.

]]>
We are delighted to announce the launch of Scicomm – a joint science communication project from Robohub and AIhub https://robohub.org/we-are-delighted-to-announce-the-launch-of-scicomm-a-joint-science-communication-project-from-robohub-and-aihub/ Tue, 02 Nov 2021 10:49:21 +0000 https://robohub.org/?p=202348

Scicomm.io is a science communication project which aims to empower people to share stories about their robotics and AI work. The project is a joint effort from Robohub and AIhub, both of which are educational platforms dedicated to connecting the robotics and AI communities to the rest of the world.

This project focuses on training the next generation of communicators in robotics and AI to build a strong connection with the outside world, by providing effective communication tools.

People working in the field are developing an enormous array of systems and technologies. However, due to a relative lack of high quality, impartial information in the mainstream media, the general public receives a lot of hyped news, which ends up causing fear and/or unrealistic expectations surrounding these technologies.

Scicomm.io has been created to facilitate the connection between the robotics and AI world and the rest of the world by teaching how to establish truthful, honest and hype-free communication, one that brings benefit to both sides.

Scicomm bytes

With our series of bite-sized videos you can quickly learn about science communication for robotics and AI. Find out why science communication is important, how to talk to the media, and about some of the different ways in which you can communicate your work. We have also produced guides with tips for turning your research into a blog post and for avoiding hype when promoting your research.

Training

Training the next generation of science communicators is an important mission for scicomm.io (and indeed Robohub and AIhub). As part of scicomm.io, we run training courses to empower researchers to communicate about their work. When done well, stories about AI and robotics can help increase the visibility and impact of the work, lead to new connections, and even raise funds. However, most researchers don’t engage in science communication, due to a lack of skills, time, and reach that makes the effort worthwhile.

With our workshops we aim to overcome these barriers and make communicating robotics and AI ‘easy’. This is done through short training sessions with experts, and hands-on practical exercises to help students begin their science communication journey with confidence.

A virtual scicomm workshop in action.

During the workshops, participants will hear why science communication matters, learn the basic techniques of science communication, build a story around their own research, and find out how to connect with journalists and other communicators. We’ll also discuss different science communication media, how to use social media, how to prepare blog posts, videos and press releases, how to avoid hype, and how to communicate work to a general audience.

For more information about our workshops, contact the team by email.

Find out more about the scicomm.io project here.

]]>
Robotics Today latest talks – Raia Hadsell (DeepMind), Koushil Sreenath (UC Berkeley) and Antonio Bicchi (Istituto Italiano di Tecnologia) https://robohub.org/robotics-today-latest-talks-raia-hadsell-deepmind-koushil-sreenath-uc-berkeley-and-antonio-bicchi-istituto-italiano-di-tecnologia/ Thu, 21 Oct 2021 10:55:29 +0000 https://robohub.org/?p=202193

Robotics Today has held three more online talks since we published the one from Amanda Prorok (Learning to Communicate in Multi-Agent Systems). In this post we bring you the latest talks that Robotics Today (currently on hiatus) uploaded to their YouTube channel: Raia Hadsell from DeepMind talking about ‘Scalable Robot Learning in Rich Environments’, Koushil Sreenath from UC Berkeley talking about ‘Safety-Critical Control for Dynamic Robots’, and Antonio Bicchi from the Istituto Italiano di Tecnologia talking about ‘Planning and Learning Interaction with Variable Impedance’.

Raia Hadsell (DeepMind) – Scalable Robot Learning in Rich Environments

Abstract: As modern machine learning methods push towards breakthroughs in controlling physical systems, games and simple physical simulations are often used as the main benchmark domains. As the field matures, it is important to develop more sophisticated learning systems with the aim of solving more complex real-world tasks, but problems like catastrophic forgetting and data efficiency remain critical, particularly for robotic domains. This talk will cover some of the challenges that exist for learning from interactions in more complex, constrained, and real-world settings, and some promising new approaches that have emerged.

Bio: Raia Hadsell is the Director of Robotics at DeepMind. Dr. Hadsell joined DeepMind in 2014 to pursue new solutions for artificial general intelligence. Her research focuses on the challenge of continual learning for AI agents and robots, and she has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting. Dr. Hadsell is on the executive boards of ICLR (International Conference on Learning Representations), WiML (Women in Machine Learning), and CoRL (Conference on Robot Learning). She is a fellow of the European Lab on Learning Systems (ELLIS), a founding organizer of NAISys (Neuroscience for AI Systems), and serves as a CIFAR advisor.

Koushil Sreenath (UC Berkeley) – Safety-Critical Control for Dynamic Robots: A Model-based and Data-driven Approach

Abstract: Model-based controllers can be designed to provide guarantees on stability and safety for dynamical systems. In this talk, I will show how we can address the challenges of stability through control Lyapunov functions (CLFs), input and state constraints through CLF-based quadratic programs, and safety-critical constraints through control barrier functions (CBFs). However, the performance of model-based controllers is dependent on having a precise model of the system. Model uncertainty could lead not only to poor performance but could also destabilize the system as well as violate safety constraints. I will present recent results on using model-based control along with data-driven methods to address stability and safety for systems with uncertain dynamics. In particular, I will show how reinforcement learning as well as Gaussian process regression can be used along with CLF and CBF-based control to address the adverse effects of model uncertainty.

Bio: Koushil Sreenath is an Assistant Professor of Mechanical Engineering, at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and a M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor, MI, in 2011. He was a Postdoctoral Scholar at the GRASP Lab at University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel, CNN, ESPN, FOX, and CBS. His work on dynamic aerial manipulation was featured on the IEEE Spectrum, New Scientist, and Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He received the NSF CAREER, Hellman Fellow, Best Paper Award at the Robotics: Science and Systems (RSS), and the Google Faculty Research Award in Robotics.

Antonio Bicchi (Istituto Italiano di Tecnologia) – Planning and Learning Interaction with Variable Impedance

Abstract: In animals and in humans, the mechanical impedance of their limbs changes not only depending on the task, but also during different phases of the task’s execution. Part of this variability is intentionally controlled, by either co-activating muscles or by changing the arm posture, or both. In robots, impedance can be varied by varying controller gains, stiffness of hardware parts, and arm postures. The choice of impedance profiles to be applied can be planned off-line, or varied in real time based on feedback from the environmental interaction. Planning and control of variable impedance can use insight from human observations, from mathematical optimization methods, or from learning. In this talk I will review the basics of human and robot variable impedance, and discuss how this impacts applications ranging from industrial and service robotics to prosthetics and rehabilitation.

Bio: Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab of Cambridge before becoming Professor in Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genoa. Since 2013 he is Adjunct Professor at Arizona State University, Tempe, AZ. He has coordinated many international projects, including four grants from the European Research Council (ERC). He served the research community in several ways, including by launching the WorldHaptics conference and the IEEE Robotics and Automation Letters. He is currently the President of the Italian Institute of Robotics and Intelligent Machines. He has authored over 500 scientific papers cited more than 25,000 times. He supervised over 60 doctoral students and more than 20 postdocs, most of whom are now professors in universities and international research centers, or have launched their own spin-off companies. His students have received prestigious awards, including three first prizes and two nominations for the best theses in Europe on robotics and haptics. He is a Fellow of IEEE since 2005. In 2018 he received the prestigious IEEE Saridis Leadership Award.

]]>
Online events to look out for on Ada Lovelace Day 2021 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2021/ Sun, 10 Oct 2021 11:54:59 +0000 https://robohub.org/?p=201873

On the 12th of October, the world will celebrate Ada Lovelace Day to honor the achievements of women in science, technology, engineering and maths (STEM). After a successful worldwide online celebration of Ada Lovelace Day last year, this year’s celebration returns with a stronger commitment to online inclusion. Finding Ada (the main network supporting Ada Lovelace Day) will host three free webinars that you can enjoy from the comfort of your own home. There will also be loads of events happening around the world, so you have a wide range of content to celebrate Ada Lovelace Day 2021!

Engineering – Solving Problems for Real People

Engineering is the science of problem solving, and we have some pretty big problems in front of us. So how are engineers tackling the COVID-19 pandemic and climate change? And how do they stay focused on the impact of their engineering solutions on people and communities?

In partnership with STEM Wana Trust, we invite you to join Renée Young, associate mechanical engineer at Beca, Victoria Clark, senior environmental engineer at Beca, Natasha Mudaliar, operations manager at Reliance Reinforcing, and Sujata Roy, system planning engineer at Transpower, for a fascinating conversation about the challenges and opportunities of engineering.

13:00 NZST, 12 Oct: Perfect for people in New Zealand, Australia, and the Americas. (Note for American audiences: This panel will be on Monday for you.)

Register here, and find out about the speakers here.

Fusing Tech & Art in Games

The Technical Artist is a new kind of role in the games industry, but the possibilities for those who create and merge art and technology is endless. So what is tech art? And how are tech artists pushing the boundaries and creating new experiences for players?

Ada Lovelace Day and Ukie’s #RaiseTheGame invite you to join tech artist Kristrun Fridriksdottir, Jodie Azhar, technical art director at Silver Rain Games, Emma Roseburgh from Avalanche Studios, and Laurène Hurlin from Pixel Toys for our tech art webinar.

13:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, Middle East, India, for early birds in the Americas and night owls in AsiaPacific.

Register here, and find out about the speakers here.

The Science of Hypersleep

Hypersleep is a common theme in science fiction, but what does science have to say about putting humans into suspended animation? What can we learn from hibernating animals? What’s the difference between hibernation and sleep? What health impacts would extended hypersleep have?

Ada Lovelace Day and the Arthur C. Clarke Award invite you to join science fiction author Anne Charnock, Prof Gina Poe, an expert on the relationship between sleep and memory, Dr Anusha Shankar, who studies torpor in hummingbirds, and Prof Kelly Drew, who studies hibernation in squirrels, for a discussion of whether hypersleep in humans is possible.

19:00 BST, 12 Oct: Perfect for people in the UK, Europe, Africa, and the Americas.

Register here, and find out about the speakers here.

Other worldwide events

Apart from the three webinars above, many other organisations will hold their own events to celebrate the day. From a 24-hour global edit-a-thon (The Pankhurst Centre) to a digital theatre play (STEM on Stage) to an online machine learning breakfast (Square Women Engineers + Allies Australia), plus several talks and panel discussions like this one on how you can change the world with the help of physics (Founders4Schools), or this other one on inspiring women and girls in STEAM (Engine Shed), you have plenty of options to choose from.

For a full overview of international events, check out this website.

We also hope that you enjoy reading our annual list of women in robotics you need to know about, which will be released on the day. Happy Ada Lovelace Day 2021!

]]>
Robert Wood’s Plenary Talk: Soft robotics for delicate and dexterous manipulation https://robohub.org/robert-woods-plenary-talk-soft-robotics-for-delicate-and-dexterous-manipulation/ Sun, 03 Oct 2021 10:55:34 +0000 https://robohub.org/robert-woods-plenary-talk-soft-robotics-for-delicate-and-dexterous-manipulation/

Robotic grasping and manipulation has historically been dominated by rigid grippers, force/form closure constraints, and extensive grasp trajectory planning. The advent of soft robotics offers new avenues to diverge from this paradigm by using strategic compliance to passively conform to grasped objects in the absence of active control, and with minimal chance of damage to the object or surrounding environment. However, while the reduced emphasis on sensing, planning, and control complexity simplifies grasping and manipulation tasks, precision and dexterity are often lost.

This talk will discuss efforts to increase the robustness of soft grasping and the dexterity of soft robotic manipulators, with particular emphasis on grasping tasks that are challenging for more traditional robot hands. This includes compliant objects, thin flexible sheets, and delicate organisms. Examples will be drawn from manipulation of everyday objects and field studies of deep sea sampling using soft end effectors.

Bio: Robert Wood is the Charles River Professor of Engineering and Applied Sciences in the Harvard John A. Paulson School of Engineering and Applied Sciences and a National Geographic Explorer. Prof. Wood completed his M.S. and Ph.D. degrees in the Dept. of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His current research interests include new micro- and meso-scale manufacturing techniques, bioinspired microrobots, biomedical microrobots, control of sensor-limited and computation-limited systems, active soft materials, wearable robots, and soft grasping and manipulation. He is the winner of multiple awards for his work, including the DARPA Young Faculty Award, NSF CAREER Award, ONR Young Investigator Award, Air Force Young Investigator Award, Technology Review’s TR35, and multiple best paper awards. In 2010 Wood received the Presidential Early Career Award for Scientists and Engineers from President Obama for his work in microrobotics. In 2012 he was selected for the Alan T. Waterman Award, the National Science Foundation’s most prestigious early career award. In 2014 he was named one of National Geographic’s “Emerging Explorers”, and in 2018 he was an inaugural recipient of the Max Planck-Humboldt Medal. Wood’s group is also dedicated to STEM education by using novel robots to motivate young students to pursue careers in science and engineering.

]]>
Real Roboticist focus series #6: Dennis Hong (Making People Happy) https://robohub.org/real-roboticist-focus-series-6-dennis-hong-making-people-happy/ Wed, 29 Sep 2021 09:03:02 +0000 https://robohub.org/real-roboticist-focus-series-6-dennis-hong-making-people-happy/

In this final video of our focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist, you’ll meet Dennis Hong speaking about the robots he and his team have created (locomotion and new ways of moving; an autonomous car for the visually impaired; disaster relief robots), Star Wars and cooking. All in all, ingredients from different worlds that Dennis is using to benefit society.

Dennis Hong is a Professor and the Founding Director of RoMeLa (Robotics & Mechanisms Laboratory) of the Mechanical & Aerospace Engineering Department at UCLA. If you’d like to find out more about how Star Wars influenced his professional career in robotics, how his experience taking a cooking assistant robot to MasterChef USA inspired a multi-million research project, and all the robots he is creating, check out his video below!

]]>
Synergies between automation and robotics https://robohub.org/sinergies-between-automation-and-robotics/ Sat, 25 Sep 2021 09:37:38 +0000 https://robohub.org/sinergies-between-automation-and-robotics/

In this IEEE ICRA 2021 Plenary Panel aimed at the younger generation of roboticists and automation experts, panelists Seth Hutchinson, Maria Pia Fanti, Peter B. Luh, Pieter Abbeel, Kaneko Harada, Michael Y. Wang, Kevin Lynch, Chinwe Ekenna, Animesh Garg and Frank Park, under the moderation of Ken Goldberg, discussed how to close the gap between the two disciplines, which have many topics in common. The panel was organised by the Ad Hoc Committee to Explore Synergies in Automation and Robotics (CESAR).

As the IEEE Robotics and Automation Society (IEEE RAS) explains, “robotics and automation have always been siblings. They are similar in many ways and have substantial overlap in topics and research communities, but there are also differences–many RAS members view them as disjoint and consider themselves purely in robotics or purely in automation. This committee’s goal is to reconsider these perceptions and think about ways we can bring these communities closer.”

]]>
Plenary and Keynote talks focus series #6: Jonathan Hurst & Andrea Thomaz https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-6-jonathan-hurst-andrea-thomaz/ Wed, 08 Sep 2021 10:56:15 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-6-jonathan-hurst-andrea-thomaz/

This week you’ll be able to listen to the talks of Jonathan Hurst (Professor of Robotics at Oregon State University, and Chief Technology Officer at Agility Robotics) and Andrea Thomaz (Associate Professor of Robotics at the University of Texas at Austin, and CEO of Diligent Robotics) as part of this series that brings you the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). Jonathan’s talk is on the topic of humanoids, while Andrea’s is about human-robot interaction.

Prof. Jonathan Hurst – Design Contact Dynamics in Advance

Bio: Jonathan W. Hurst is Chief Technology Officer and co-founder of Agility Robotics, and Professor and co-founder of the Oregon State University Robotics Institute. He holds a B.S. in mechanical engineering and an M.S. and Ph.D. in robotics, all from Carnegie Mellon University. His university research focuses on understanding the fundamental science and engineering best practices for robotic legged locomotion and physical interaction. Agility Robotics is bringing this new robotic mobility to market, solving problems for customers, working towards a day when robots can go where people go, generate greater productivity across the economy, and improve quality of life for all.

Prof. Andrea Thomaz – Human + Robot Teams: From Theory to Practice

Bio: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics and a renowned social robotics expert. Her accolades include being recognized by the National Academy of Sciences as a Kavli Fellow, the US President’s Council of Advisors on Science and Technology, MIT Technology Review on its Next Generation of 35 Innovators Under 35 list, Popular Science on its Brilliant 10 list, TEDx as a featured keynote speaker on social robotics, and Texas Monthly on its Most Powerful Texans of 2018 list.

Andrea’s robots have been featured in the New York Times and on the covers of MIT Technology Review and Popular Science. Her passion for social robotics began during her work at the MIT Media Lab, where she focused on using AI to develop machines that address everyday human needs. Andrea co-founded Diligent Robotics to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab).

Andrea is published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning in order to build social robots and other machines that are intuitive for everyday people to teach.

Andrea received an NSF CAREER award in 2010 and an Office of Naval Research Young Investigator Award in 2008. In addition, Diligent Robotics’ robot Moxi has been featured on NBC Nightly News and, most recently, in National Geographic’s “The robot revolution has arrived”.

]]>
Real Roboticist focus series #5: Michelle Johnson (Robots That Matter) https://robohub.org/iros2020-real-roboticist-focus-series-5-michelle-johnson-robots-that-matter/ Sun, 05 Sep 2021 08:03:15 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-5-michelle-johnson-robots-that-matter/

We’re reaching the end of this focus series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. This week you’ll meet Michelle Johnson, Associate Professor of Physical Medicine and Rehabilitation at the University of Pennsylvania.

Michelle is also the Director of the Rehabilitation Robotics Lab at the University of Pennsylvania, whose aim is to use rehabilitation robotics and neuroscience to investigate brain plasticity and motor function after non-traumatic brain injuries, for example in stroke survivors or persons diagnosed with cerebral palsy. If you’d like to know more about her professional journey, her work with affordable robots for low/middle income countries and her next frontier in robotics, among many more things, check out her video below!

]]>
Plenary and Keynote talks focus series #5: Nikolaus Correll & Cynthia Breazeal https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-5-nikolaus-correll-cynthia-breazeal/ Wed, 01 Sep 2021 10:40:45 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-5-nikolaus-correll-cynthia-breazeal/

As part of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems), this week we bring you Nikolaus Correll (Associate Professor at the University of Colorado at Boulder) and Cynthia Breazeal (Professor of Media Arts and Sciences at MIT). Nikolaus’ talk is on the topic of robot manipulation, while Cynthia’s talk is about the topic of social robots.

Prof. Nikolaus Correll – Robots Getting a Grip on General Manipulation

Bio: Nikolaus Correll is an Associate Professor at the University of Colorado at Boulder. He obtained his MS in Electrical Engineering from ETH Zürich and his PhD in Computer Science from EPF Lausanne in 2007. From 2007-2009 he was a post-doc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). Nikolaus is the recipient of a NSF CAREER award, a NASA Early Career Faculty Fellowship and a Provost Faculty achievement award. In 2016, he founded Robotic Materials Inc. to commercialize robotic manipulation technology.

Prof. Cynthia Breazeal – Living with Social Robots: from Research to Commercialization and Back

Abstract: Social robots are designed to interact with people in an interpersonal way, engaging and supporting collaborative social and emotive behavior for beneficial outcomes. We develop adaptive algorithmic capabilities and deploy multitudes of cloud-connected robots in schools, homes, and other living facilities to support long-term interpersonal engagement and personalization of specific interventions. We examine the impact of the robot’s social embodiment, emotive and relational attributes, and personalization capabilities on sustaining people’s engagement, improving learning, impacting behavior, and shaping attitudes to help people achieve long-term goals. I will also highlight challenges and opportunities in commercializing social robot technologies for impact at scale. In a time where citizens are beginning to live with intelligent machines on a daily basis, we have the opportunity to explore, develop, study, and assess humanistic design principles to support and promote human flourishing at all ages and stages.

Bio: Cynthia Breazeal is a Professor at the MIT Media Lab where she founded and directs the Personal Robots Group. She is also Associate Director of the Media Lab in charge of new strategic initiatives, and she is spearheading MIT’s K-12 education initiative on AI in collaboration with the Media Lab, Open Learning and the Schwarzman College of Computing. She is recognized as a pioneer in the field of social robotics and human-robot interaction and is an AAAI Fellow. She is a recipient of awards from the National Academy of Engineering as well as the National Design Awards. She has received Technology Review’s TR100/35 Award and the George R. Stibitz Computer & Communications Pioneer Award. She has also been recognized as an award-winning entrepreneur, designer and innovator by CES, Fast Company, Entrepreneur Magazine, Forbes, and Core 77, to name a few. Her robots have been recognized by TIME magazine’s Best Inventions in 2008 and in 2017, when her award-winning Jibo robot was featured on the cover. She received her doctorate from MIT in Electrical Engineering and Computer Science in 2000.

]]>
Real Roboticist focus series #4: Peter Corke (Learning) https://robohub.org/iros2020-real-roboticist-focus-series-4-peter-corke-learning/ Sun, 29 Aug 2021 16:02:42 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-4-peter-corke-learning/

In this fourth release of our series dedicated to IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist, we bring you Peter Corke. He is a Distinguished Professor of Robotic Vision at Queensland University of Technology, Director of the QUT Centre for Robotics, and Director of the ARC Centre of Excellence for Robotic Vision.

If you’ve ever studied a robotics or computer vision course, you might have read a classic book: Peter Corke’s Robotics, Vision and Control. Moreover, Peter has also released several open-source robotics resources and free courses, all available at his website. If you’d like to hear more about his career in robotics and education, his main challenges and what he learnt from them, and what’s his advice for current robotics students, check out his video below. Have fun!

]]>
Plenary and Keynote talks focus series #4: Steve LaValle & Sarah Bergbreiter https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-4-steve-lavalle-sarah-bergbreiter/ Wed, 25 Aug 2021 17:19:03 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-4-steve-lavalle-sarah-bergbreiter/

In this new release of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) you’ll meet Steve LaValle (University of Oulu) talking about the area of perception, action and control, and Sarah Bergbreiter (Carnegie Mellon University) talking about bio-inspired microrobotics.

Prof. Steve LaValle – Rapidly exploring Random Topics

Bio: Steve LaValle is Professor of Computer Science and Engineering, in Particular Robotics and Virtual Reality, at the University of Oulu. From 2001 to 2018, he was a professor in the Department of Computer Science at the University of Illinois. He has also held positions at Stanford University and Iowa State University. His research interests include robotics, virtual and augmented reality, sensing, planning algorithms, computational geometry, and control theory. In research, he is mostly known for his introduction of the Rapidly exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. In industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017 he was Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, Ltd. He has authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality.

Prof. Sarah Bergbreiter – Microsystems-inspired Robotics

Bio: Sarah Bergbreiter joined the Department of Mechanical Engineering at Carnegie Mellon University in the fall of 2018 after spending ten years at the University of Maryland, College Park. She received her B.S.E. degree in electrical engineering from Princeton University in 1999. After a short introduction to the challenges of sensor networks at a small startup company, she received the M.S. and Ph.D. degrees from the University of California, Berkeley in 2004 and 2007 with a focus on microrobotics. Prof. Bergbreiter received the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) Award in 2013 for her research bridging microsystems and robotics. She has received several Best Paper awards at conferences like ICRA, IROS, and Hilton Head Workshop. She currently serves as vice-chair of DARPA’s Microsystems Exploratory Council and as an associate editor for IEEE Transactions on Robotics.

]]>
Real Roboticist focus series #3: Radhika Nagpal (Enjoying the Ride) https://robohub.org/iros2020-real-roboticist-focus-series-3-radhika-nagpal-enjoying-the-ride/ Sun, 22 Aug 2021 17:37:49 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-3-radhika-nagpal-enjoying-the-ride/

Today we continue with our series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. This time you’ll meet Radhika Nagpal, who is a Fred Kavli Professor of Computer Science at the Wyss Institute for Biologically Inspired Engineering from Harvard University.

Did you know Radhika directed the research that led to the creation of the Kilobots, the first open-source, low-cost robots that were specifically designed for large scale experiments with hundreds and thousands of them? You can watch this example or this other one if you’re curious. If you’d like to know more about Radhika and her achievements, challenges and what she would tell her younger self, below is the whole interview. Enjoy!

]]>
Plenary and Keynote talks focus series #3: Anya Petrovskaya & I-Ming Chen https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-3-anya-petrovskaya-i-ming-chen/ Wed, 18 Aug 2021 16:31:08 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-3-anya-petrovskaya-i-ming-chen/

In this new release of our series showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) you’ll meet Dr. Anya Petrovskaya (Stanford University), talking about driverless vehicles and field robots, and Prof. I-Ming Chen (Nanyang Technological University), whose talk is about food handling robots.

Dr. Anya Petrovskaya – A Journey through Autonomy

Topical Area: Driverless Vehicles and Field Robots

Bio: Dr. Anna Petrovskaya is a scientist and entrepreneur with decades of experience in the field of AI, autonomy, and 3D computer vision. Most recently, Anna built a 3D mapping startup that was acquired by Mobileye/Intel, where she became Head of LiDAR AI. She completed her Doctorate degree in Computer Science at Stanford University in 2011, where she focused on Artificial Intelligence and Robotics. In 2012, her thesis was named among the winners of the IEEE Intelligent Transportation Systems Society Best PhD Thesis Award. Anna was part of the core team that built the Stanford autonomous car Junior, which was a precursor to the Waymo/Google autonomous car. She has served as an Associate Editor for the International Conference on Robotics and Automation (ICRA) since 2011. Based on her expertise, Anna has been invited to co-author chapters for the Handbook of Intelligent Vehicles and the 2nd edition of the Handbook of Robotics.

Prof. I-Ming Chen – Automation of Food Handling: From Item-Picking to Food-Picking

Topical Area: Food Handling Robotics

Bio: I-Ming Chen received the B.S. degree from National Taiwan University in 1986, and M.S. and Ph.D. degrees from the California Institute of Technology, Pasadena, CA in 1989 and 1994 respectively. He is currently Professor in the School of Mechanical and Aerospace Engineering of Nanyang Technological University (NTU) in Singapore, and Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics. He was Director of the Robotics Research Centre at NTU from 2013 to 2017, and is also a member of the Robotics Task Force 2014 under the National Research Foundation, which is responsible for Singapore’s strategic R&D plan in future robotics. Professor Chen is a Fellow of the Singapore Academy of Engineering, Fellow of IEEE and Fellow of ASME, and was General Chairman of the 2017 IEEE International Conference on Robotics and Automation (ICRA 2017) in Singapore. His research interests are in logistics and construction robots, wearable devices, human-robot interaction and industrial automation. He is also CEO of Transforma Robotics Pte Ltd, developing robots for the construction industry, and CTO of Hand Plus Robotics Pte Ltd, developing robotics and AI solutions for the logistics and manufacturing industry.

]]>
Introduction to behavior trees https://robohub.org/introduction-to-behavior-trees/ Tue, 17 Aug 2021 10:58:33 +0000 https://robohub.org/introduction-to-behavior-trees/

Abstraction in programming has evolved our use of computers from basic arithmetic operations to representing complex real-world phenomena using models. More specific to robotics, abstraction has moved us from low-level actuator control and basic sensing to reasoning about higher-level concepts such as behaviors, as I define in my Anatomy of a Robotic System post.

In autonomous systems, we have seen an entire host of abstractions beyond “plain” programming for behavior modeling and execution. Some common ones you may find in the literature include teleo-reactive programs, Petri nets, finite-state machines (FSMs), and behavior trees (BTs). In my experience, FSMs and BTs are the two abstractions you see most often today.

In this post, I will introduce behavior trees with all their terminology, contrast them with finite-state machines, share some examples and software libraries, and as always leave you with some resources if you want to learn more.

What is a behavior tree?

As we introduced above, there are several abstractions to help design complex behaviors for an autonomous agent. Generally, these consist of a finite set of entities that map to particular behaviors or operating modes within our system, e.g., “move forward”, “close gripper”, “blink the warning lights”, “go to the charging station”. Each model class has some set of rules that describe when an agent should execute each of these behaviors, and more importantly how the agent should switch between them.

Behavior trees (BTs) are one such abstraction, which I will define by the following characteristics:

  1. Behavior trees are trees (duh): They start at a root node and are designed to be traversed in a specific order until a terminal state is reached (success or failure).
  2. Leaf nodes are executable behaviors: Each leaf will do something, whether it’s a simple check or a complex action, and will output a status (success, failure, or running). In other words, leaf nodes are where you connect a BT to the lower-level code for your specific application.
  3. Internal nodes control tree traversal: The internal (non-leaf) nodes of the tree will accept the resulting status of their children and apply their own rules to dictate which node should be expanded next.

Behavior trees actually began in the videogame industry to define behaviors for non-player characters (NPCs): Both Unreal Engine and Unity (two major forces in this space) have dedicated tools for authoring BTs. This is no surprise; a big advantage of BTs is that they are easy to compose and modify, even at runtime. However, this sacrifices the ease of designing reactive behaviors (for example, mode switches) compared to some of the other abstractions, as you will see later in this post.

Since then, BTs have also made it into the robotics domain as robots have become increasingly capable of doing more than simple repetitive tasks. Easily the best resource here is the textbook “Behavior Trees in Robotics and AI: An Introduction” by Michele Colledanchise and Petter Ögren. In fact, if you really want to learn the material you should stop reading this post and go directly to the book … but please stick around?

Behavior tree Editor in Unreal Engine. Source: Unreal Engine documentation

Behavior tree terminology

Let’s dig into the terminology in behavior trees. While the language is not standard across the literature and various software libraries, I will largely follow the definitions in Behavior Trees in Robotics and AI.

At a glance, these are the types of nodes that make up behavior trees and how they are represented graphically:

Overview of behavior tree nodes.

Behavior trees execute in discrete update steps known as ticks. When a BT is ticked, usually at some specified rate, its child nodes recursively tick based on how the tree is constructed. After a node ticks, it returns a status to its parent, which can be Success, Failure, or Running.

Execution nodes, which are leaves of the BT, can either be Action or Condition nodes. The only difference is that condition nodes can only return Success or Failure within a single tick, whereas action nodes can span multiple ticks and can return Running until they reach a terminal state. Generally, condition nodes represent simple checks (e.g., “is the gripper open?”) while action nodes represent complex actions (e.g., “open the door”).

Control nodes are internal nodes and define how to traverse the BT given the status of their children. Importantly, children of control nodes can be execution nodes or control nodes themselves. Sequence, Fallback, and Parallel nodes can have any number of children, but differ in how they process said children. Decorator nodes necessarily have one child, and modify its behavior with some custom defined policy.

Check out the images below to see how the different control nodes work.

Sequence nodes execute children in order until one child returns Failure or all children return Success.

Fallback nodes execute children in order until one of them returns Success or all children return Failure. These nodes are key in designing recovery behaviors for your autonomous agents.

Parallel nodes will execute all their children in “parallel”. This is in quotes because it’s not true parallelism; at each tick, each child node will individually tick in order. Parallel nodes return Success when at least M child nodes (between 1 and N) have succeeded, and Failure when all child nodes have failed.

Decorator nodes modify a single child node with a custom policy. A decorator has its own set of rules for changing the status of the “decorated node”. For example, an “Invert” decorator will change Success to Failure, and vice-versa. While decorators can add flexibility to your behavior tree arsenal, you should stick to standard control nodes and common decorators as much as possible so others can easily understand your design.
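
To make these rules concrete, here is a minimal, library-free Python sketch of the tick logic for Sequence and Fallback nodes. All class and variable names are mine and purely illustrative; real BT libraries add memory, decorators, blackboards, and many other details on top of this core idea.

```python
# Minimal, library-free sketch of behavior tree tick semantics (illustrative only).
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Action:
    """Leaf node wrapping a callable that returns a Status when ticked."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self) -> Status:
        return self.fn()


class Sequence:
    """Ticks children in order; fails on the first Failure, succeeds if all succeed."""
    def __init__(self, children):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status  # propagate Failure or Running immediately
        return Status.SUCCESS


class Fallback:
    """Ticks children in order; succeeds on the first Success, fails if all fail."""
    def __init__(self, children):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status  # propagate Success or Running immediately
        return Status.FAILURE


# Usage: a condition check followed by a (possibly long-running) action.
gripper_open = Action(lambda: Status.SUCCESS)  # condition: resolves in one tick
open_door = Action(lambda: Status.RUNNING)     # action: spans multiple ticks
root = Sequence([gripper_open, open_door])
print(root.tick())                             # Status.RUNNING
```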

Robot example: searching for objects

The best way to understand all the terms and graphics in the previous section is through an example. Suppose we have a mobile robot that must search for specific objects in a home environment. Assume the robot knows all the search locations beforehand; in other words, it already has a world model to operate in.

Our mobile robot example. A simulated TurtleBot3 must move in a known map to find blocks of a given color.

Let’s start simple. If there is only one location (we’ll call it A), then the BT is a simple sequence of the necessary actions: Go to the location and then look for the object.

Our first behavior tree. Bask in the simplicity while you can…

We’ve chosen to represent navigation as an action node, as it may take some time for the robot to move (returning Running in the process). On the other hand, we represent vision as a condition node, assuming the robot can detect the object from a single image once it arrives at its destination. I admit, this is totally contrived for the purpose of showing one of each execution node.

One very common design principle you should know is defined in the book as explicit success conditions. In simpler terms, you should almost always check before you act. For example, if you’re already at a specific location, why not check if you’re already there before starting a navigation action?

Explicit success conditions use a Fallback node with a condition preceding an action. The guarded action will only execute if the success condition is not met — in this example, if the robot is not at location A.

Our robot likely operates in an environment with multiple locations, and the idea is to look in all possible locations until we find the object of interest. This can be done by introducing a root-level Fallback node and repeating the above behavior for each location in some specified order.

We can also use Fallback nodes to define reactive behaviors; that is, if one behavior does not work, try the next one, and so on.

Finally, suppose that instead of looking for a single object, we want to consider several objects — let’s say apples and oranges. This use case of composing conditions can be done with Parallel nodes as shown below.

  • If we accept either an apple or an orange (“OR” condition), then we succeed if one node returns Success.
  • If we require both an apple and an orange (“AND” condition), then we succeed if both nodes return Success.
  • If we care about the order of objects, e.g., you must find an apple before finding an orange, then this could be done with a Sequence node instead.

Parallel nodes allow multiple actions and/or conditions to be considered within a single tick.
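
For a concrete illustration of that “at least M of N children” policy, here is a small, library-free sketch. The function name is mine, and note that libraries differ in exactly when a Parallel node reports Failure; this version follows the rule described above (Failure only once all children have failed).

```python
# Sketch of a Parallel node's success policy given its children's statuses this tick.
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


def parallel_status(child_statuses, success_threshold):
    """Succeed when at least `success_threshold` (M of N) children have succeeded;
    fail when all children have failed; otherwise keep running."""
    successes = sum(s == Status.SUCCESS for s in child_statuses)
    failures = sum(s == Status.FAILURE for s in child_statuses)
    if successes >= success_threshold:
        return Status.SUCCESS
    if failures == len(child_statuses):
        return Status.FAILURE
    return Status.RUNNING


# "OR" over FoundApple / FoundOrange: threshold 1. "AND": threshold 2.
print(parallel_status([Status.SUCCESS, Status.FAILURE], success_threshold=1))  # SUCCESS
print(parallel_status([Status.FAILURE, Status.FAILURE], success_threshold=2))  # FAILURE
print(parallel_status([Status.SUCCESS, Status.RUNNING], success_threshold=2))  # RUNNING
```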

Of course, you can also compose actions in parallel — for example, turning in place until a person is detected for 5 consecutive ticks. While my example is hopefully simple enough to get the basics across, I highly recommend looking at the literature for more complex examples that really show off the power of BTs.

Robot example revisited: decorators and blackboards

I don’t know about you, but looking at the BT above leaves me somewhat uneasy. It’s just the same behavior copied and pasted multiple times underneath a Fallback node. What if you had 20 different locations, and the behavior at each location involved more than just two simplified execution nodes? Things could quickly get messy.

In most software libraries geared for BTs you can define these execution nodes as parametric behaviors that share resources (for example, the same ROS action client for navigation, or object detector for vision). Similarly, you can write code to build complex trees automatically and compose them from a ready-made library of subtrees. So the issue isn’t so much efficiency, but readability.

There is an alternative implementation for this BT, which can extend to many other applications. Here’s the basic idea:

  • Introduce decorators: Instead of duplicating the same subtree for each location, have a single subtree and decorate it with a policy that repeats the behavior until successful.
  • Update the target location at each iteration: Suppose you now have a “queue” of target locations to visit, so at each iteration of the subtree you pop an element from that queue. If the queue eventually ends up empty, then our BT fails.

In most BTs, we often need some notion of shared data like the location queue we’re discussing. This is where the concept of a blackboard comes in: you’ll find blackboard constructs in most BT libraries out there, and all they really are is a common storage area where individual behaviors can read or write data.

Our example BT could now be refactored as follows. We introduce a “GetLoc” action that pops a location from our queue of known locations and writes it to the blackboard as some parameter target_location. If the queue is empty, this returns Failure; otherwise it returns Success. Then, downstream nodes that deal with navigation can use this target_location parameter, which changes every time the subtree repeats.

The addition of a blackboard and a “Repeat” decorator can greatly simplify our tree even if the underlying behavior is the same.
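
Here is a small, library-free sketch of that pattern, with a plain dictionary standing in for the blackboard and a Python while loop standing in for the “Repeat” decorator. All names (location_queue, target_location, and so on) are illustrative, not any particular library’s API.

```python
# Sketch of the blackboard idea: shared storage that individual behaviors read/write.
from collections import deque
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2


blackboard = {"location_queue": deque(["A", "B", "C"])}


def get_next_location() -> Status:
    """'GetLoc' action: pop the next search location and publish it on the blackboard."""
    queue = blackboard["location_queue"]
    if not queue:
        return Status.FAILURE  # queue exhausted, so the whole subtree (and BT) fails
    blackboard["target_location"] = queue.popleft()
    return Status.SUCCESS


def go_to_target_location() -> Status:
    """Downstream navigation behavior reads the same blackboard entry."""
    print(f"Navigating to location {blackboard['target_location']}")
    return Status.SUCCESS


# The while loop plays the role of the "Repeat" decorator over the subtree.
while get_next_location() == Status.SUCCESS:
    go_to_target_location()
```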

You can use blackboards for many other tasks. Here’s another extension of our example: Suppose that after finding an object, the robot should say which object it detected, if any. So, the “FoundApple” and “FoundOrange” conditions could write to a located_objects parameter in the blackboard and a subsequent “Speak” action would read it accordingly. A similar solution could be applied, for instance, if the robot needs to pick up the detected object and has different manipulation policies depending on the type of object.

Fun fact: This section actually came from a real discussion with Davide Faconti, in which… he essentially schooled me. It brings me great joy to turn my humiliation into an educational experience for you all.

Behavior tree software libraries

Let’s talk about how to program behavior trees! There are quite a few libraries dedicated to BTs, but my two highlights in the robotics space are py_trees and BehaviorTree.CPP.

py_trees is a Python library created by Daniel Stonier.

  • Because it uses an interpreted language like Python, the interface is very flexible and you can basically do what you want… which has its pros and cons. I personally think this is a good choice if you plan on automatically modifying behavior trees at run time.
  • It is being actively developed and with every release you will find new features. However, many of the new developments — not just additional decorators and policy options, but the visualization and logging tools — are already full-steam-ahead with ROS 2. So if you’re still using ROS 1 you will find yourself missing a lot of new things. Check out the PyTrees-ROS Ecosystem page for more details.
  • Some of the terminology and design paradigms are a little bit different from the Behavior Trees in Robotics book. For example, instead of Fallback nodes this library uses Selector nodes, and these behave slightly differently.

Our navigation example using the py_trees library and rqt_py_trees for visualization.
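
To give a flavor of authoring behaviors in py_trees, here is a minimal sketch, written against a recent py_trees 2.x release (where composites take an explicit memory flag); on older versions the constructor arguments differ slightly. The behavior classes and their trivial update bodies are mine, standing in for real navigation and detection code.

```python
# Minimal py_trees sketch (assumes a recent py_trees 2.x release).
import py_trees


class GoToLocation(py_trees.behaviour.Behaviour):
    def update(self):
        # A real implementation would send a navigation goal and return RUNNING
        # until the robot arrives; here we pretend it succeeds immediately.
        return py_trees.common.Status.SUCCESS


class FoundObject(py_trees.behaviour.Behaviour):
    def update(self):
        # A real implementation would run an object detector on the latest image.
        return py_trees.common.Status.FAILURE


root = py_trees.composites.Sequence(name="SearchLocationA", memory=True)
root.add_children([GoToLocation(name="GoToA"), FoundObject(name="FoundObject")])

tree = py_trees.trees.BehaviourTree(root)
tree.tick()
print(py_trees.display.unicode_tree(root))
```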

BehaviorTree.CPP is a C++ library developed by Davide Faconti and Michele Colledanchise (yes, one of the book authors). It should therefore be no surprise that this library follows the book notation much more faithfully.

  • This library is quickly gaining traction as the behavior tree library of the ROS developers’ ecosystem, because C++ is similarly the language of production quality development for robotics. In fact, the official ROS 2 navigation stack uses this library in its BT Navigator feature.
  • It heavily relies on an XML based workflow, meaning that the recommended way to author a BT is through XML files. In your code, you register node types with user-defined classes (which can inherit from a rich library of existing classes), and your BT is automatically synthesized!
  • It is paired with a great tool named Groot which is not only a visualizer, but a graphical interface for editing behavior trees. The XML design principle basically means that you can draw a BT and export it as an XML file that plugs into your code.
  • This all works wonderfully if you know the structure of your BT beforehand, but leaves a little to be desired if you plan to modify your trees at runtime. Granted, you can also achieve this using the programmatic approach rather than XML, but this workflow is not documented/recommended, and doesn’t yet play well with the visualization tools.

Our navigation example using the BehaviorTree.CPP library and Groot for visualization.

So how should you choose between these two libraries? They’re both mature, contain a rich set of tools, and integrate well with the ROS ecosystem. It ultimately boils down to whether you want to use C++ or Python for your development. In my example GitHub repo I tried them both out, so you can decide for yourself!

Behavior trees vs. finite-state machines

In my time at MathWorks, I was immersed in designing state machines for robotic behavior using Stateflow — in fact, I even did a YouTube livestream on this topic. However, robotics folks often asked me if there were similar tools for modeling behavior trees, which I had never heard of at the time. Fast forward to my first day at CSAIL, my colleague at the time (Daehyung Park) showed me one of his repositories and I finally saw my first behavior tree. It wasn’t long until I was working with them in my project as a layer between planning and execution, which I describe in my 2020 recap blog post.

As someone who has given a lot of thought to “how is a BT different from a FSM?”, I wanted to reaffirm that they both have their strengths and weaknesses, and the best thing you can do is learn when a problem is better suited for one or the other (or both).

The Behavior Trees in Robotics and AI book expands on these thoughts with far more rigor, but here is my attempt to summarize the key ideas:

  • In theory, it is possible to express anything as a BT, FSM, one of the other abstractions, or as plain code. However, each model has its own advantages and disadvantages in its intent to aid design at a larger scale.
  • Specific to BTs vs. FSMs, there is a tradeoff between modularity and reactivity. Generally, BTs are easier to compose and modify while FSMs have their strength in designing reactive behaviors.

Let’s use another robotics example to go deeper into these comparisons. Suppose we have a picking task where a robot must move to an object, grab it by closing its gripper, and then move back to its home position. A side-by-side BT and FSM comparison can be found below. For a simple design like this, both implementations are relatively clean and easy to follow.

Behavior tree (left) and finite-state machine (right) for our robot picking example.

Now, what happens if we want to modify this behavior? Say we first want to check whether the pre-grasp position is valid, and correct if necessary before closing the gripper. With a BT, we can directly insert a subtree along our desired sequence of actions, whereas with a FSM we must rewire multiple transitions. This is what we mean when we claim BTs are great for modularity.

Modifications to our BT (left) and FSM (right) if we want to add a pre-grasp correction behavior.

On the other hand, there is the issue of reactivity. Suppose our robot is running on a finite power source, so if the battery is low it must return to the charging station before returning to its task. You can implement something like this with BTs, but a fully reactive behavior (that is, the battery state causes the robot to go charge no matter where it is) is easier to implement with a FSM… even if it looks a bit messy.

On the note of “messy”, behavior tree zealots tend to invoke “spaghetti state machines” as a reason why you should never use FSMs. I believe that is not a fair comparison. The notion of a hierarchical finite-state machine (HFSM) has been around for a long time and helps avoid this issue if you follow good design practices, as you can see below. However, it is true that managing transitions in a HFSM is still more difficult than adding or removing subtrees in a BT.

There have been specific constructs defined to make BTs more reactive for exactly these applications. For example, there is the notion of a “Reactive Sequence” that can still tick previous children in a sequence even after they have returned Success. In our example, this would allow us to terminate a subtree with Failure if the battery levels are low at any point during that action sequence, which may be what we want.

Adding a battery check and charging action to a BT is easy, but note that this check is not reactive — it only occurs at the start of the sequence. Implementing more reactivity would complicate the design of the BT, but is doable with constructs like Reactive Sequences.
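
To make the Reactive Sequence idea concrete, here is a library-free sketch. Unlike a sequence with memory, it re-ticks every child from the first one on each tick, so a guard such as a battery check is re-evaluated even while a later action is still Running. All names here are illustrative.

```python
# Sketch of a "Reactive Sequence" tick: every child is re-ticked from the start.
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


def reactive_sequence_tick(children):
    """Tick child callables in order from the first child on every tick,
    propagating the first non-Success status."""
    for child in children:
        status = child()
        if status != Status.SUCCESS:
            return status
    return Status.SUCCESS


def battery_ok() -> Status:
    return Status.FAILURE  # pretend the battery just dropped below the threshold


def pick_object() -> Status:
    return Status.RUNNING  # a long-running manipulation action


# Even though pick_object was Running on previous ticks, the failed battery check
# aborts the sequence this tick, letting a higher-level Fallback switch to charging.
print(reactive_sequence_tick([battery_ok, pick_object]))  # Status.FAILURE
```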

FSMs can allow this reactivity by allowing the definition of transitions between any two states.

Hierarchical FSMs can clean up the diagram. In this case, we define a superstate named “Nominal”, thus defining two clear operating modes between normal operation and charging.

Because of this modularity / reactivity tradeoff, I like to think that FSMs are good at managing higher-level operating modes (such as normal operation vs. charging), and BTs are good at building complex sequences of behaviors that are excellent at handling recoveries from failure. So, if this design were up to me, it might be a hybrid that looks something like this:

Best of both worlds: High-level mode switches are handled by a FSM and mode-specific behaviors are managed with BTs.

Conclusion

Thanks for reading through this introductory post, and I look forward to your comments, questions, and suggestions. If you want to try the code examples, check out my example GitHub repository.

To learn more about behavior trees, here are some good resources that I’ve relied on over the past year and a bit.

See you next time!


You can read the original article at Roboticseabass.com.

]]>
Real Roboticist focus series #2: Ruzena Bajcsy (Foundations) https://robohub.org/iros2020-real-roboticist-focus-series-2-ruzena-bajczy-foundations/ Sun, 08 Aug 2021 14:55:08 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-2-ruzena-bajczy-foundations/

Last Sunday we started another series on IEEE/RSJ IROS 2020 (International Conference on Intelligent Robots and Systems) original series Real Roboticist. In this episode you’ll meet Ruzena Bajcsy, Professor Emerita of Electrical Engineering and Computer Science at the University of California, Berkeley. She is also the founding Director of CITRIS (the Center for Information Technology Research in the Interest of Society).

In her talk, she explains her path from being an electrical engineer to becoming a researcher with Emeritus honours, and with over 50 years of experience in robotics, artificial intelligence and the foundations of how humans interact with our environment. Are you curious about the tips she’s got to share and her own prediction of the future of robotics? Don’t miss it out!

]]>
Plenary and Keynote talks focus series #2: Frank Dellaert & Ashish Deshpande https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-2-frank-dellaert-ashish-deshpande/ Wed, 04 Aug 2021 09:40:16 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-2-frank-dellaert-ashish-deshpande/

Last Wednesday we started this series of posts showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). This is a great opportunity to stay up to date with the latest robotics & AI research from top roboticists in the world. This week we’re bringing you Prof. Frank Dellaert (Georgia Institute of Technology; Google AI) and Prof. Ashish Deshpande (The University of Texas).

Prof. Frank Dellaert – Perception in Aerial, Marine & Space Robotics: a Biased Outlook

Bio: Frank Dellaert is a Professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at Google AI. While on leave from Georgia Tech in 2016-2018, he served as Technical Project Lead at Facebook’s Building 8 hardware division. Before that he was also Chief Scientist at Skydio, a startup founded by MIT grads to create intuitive interfaces for micro-aerial vehicles. His research is in the overlap between robotics and computer vision, and he is particularly interested in graphical model techniques to solve large-scale problems in mapping, 3D reconstruction, and increasingly model-predictive control. The GTSAM toolbox embodies many of the ideas his research group has worked on in the past few years and is available at https://gtsam.org.

Prof. Ashish Deshpande – Harmony Exoskeleton: A Journey from Robotics Lab to Stroke

Bio: Ashish D. Deshpande is passionate about helping stroke patients recover from their disabilities, and he believes robots could serve as important tools in the recovery process. He is a faculty member in Mechanical Engineering at The University of Texas at Austin, where he directs the ReNeu Robotics Lab. His work focuses on the study of the human system and the design of robotic systems toward the goals of accelerating recovery after a neurological injury (e.g. stroke and spinal cord injury), improving the quality of life of those living with disabilities (e.g. amputation), and enhancing the lives and productivity of workers, soldiers and astronauts. Specifically, his group has designed two novel exoskeletons for delivering engaging and subject-specific training for neuro-recovery of upper-body movements after stroke and spinal cord injury. Dr. Deshpande is a co-founder of Harmonic Bionics, whose mission is to improve rehabilitation outcomes for stroke patients.

]]>
Real Roboticist focus series #1: Davide Scaramuzza (Drones & Magic) https://robohub.org/iros2020-real-roboticist-focus-series-1-davide-scaramuzza-drones-magic/ Sun, 01 Aug 2021 18:39:58 +0000 https://robohub.org/iros2020-real-roboticist-focus-series-1-davide-scaramuzza-drones-magic/

Are you curious about the people behind the robots? The series ‘Real Roboticist’, produced by the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), shows the people at the forefront of robotics research from a more personal perspective. How did they become roboticists? What made them proud and what challenges did they face? What advice would they give to their younger self? What does a typical day look like? And where do they see the future of robotics? In case you missed it during the On-Demand conference, no worries! IEEE has recently made their original series public, and every Sunday we’ll bring you an interview with a real roboticist for you to get inspired.

This week is the turn of Davide Scaramuzza, Professor and Director of the Robotics and Perception Group at the University of Zürich. In his talk, Davide explains his journey from Electronics Engineering to leading a top robotics vision research group developing a promising technology: event cameras. He’ll also speak about the challenges he faced along the way, and even how he combines the robotics research with another of his passions, magic. Curious about where the magic happens? Davide also takes you around his research lab during the interview. Let the magic happen!

]]>
Plenary and Keynote talks focus series #1: Yukie Nagai & Danica Kragic https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-1-yukie-nagai-danica-kragic/ Wed, 28 Jul 2021 16:03:01 +0000 https://robohub.org/iros2020-plenary-and-keynote-talks-focus-series-1-yukie-nagai-danica-kragic/

Would you like to stay up to date with the latest robotics & AI research from top roboticists? The IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) recently released their Plenary and Keynote talks in the IEEE RAS YouTube channel. We’re starting a new focus series with all their talks. This week, we’re featuring Professor Yukie Nagai (University of Tokyo), talking about cognitive development in humans and robots, and Professor Danica Kragic (KTH Royal Institute of Technology), talking about the impact of robotics and AI in the fashion industry.

Prof. Yukie Nagai – Cognitive Development in Humans and Robots: New Insights into Intelligence

Abstract: Computational modeling of cognitive development has the potential to uncover the underlying mechanism of human intelligence as well as to design intelligent robots. We have been investigating whether a unified theory accounts for cognitive development and what computational framework embodies such a theory. This talk introduces a neuroscientific theory called predictive coding and shows how robots as well as humans acquire cognitive abilities using predictive processing neural networks. A key idea is that the brain works as a predictive machine; that is, the brain tries to minimize prediction errors by updating the internal model and/or by acting on the environment. Our robot experiments demonstrate that the process of minimizing prediction errors leads to continuous development from non-social to social cognitive abilities. Internal models acquired through their own sensorimotor experiences enable robots to interact with others by inferring their internal state. Our experiments inducing atypicality in predictive processing also explains why and how developmental disorders appear in social cognition. I discuss new insights into human and robot intelligence obtained from these studies.

Bio: Yukie Nagai is a Project Professor at the International Research Center for Neurointelligence, the University of Tokyo. She received her Ph.D. in Engineering from Osaka University in 2004 and worked at the National Institute of Information and Communications Technology, Bielefeld University, and Osaka University. Since 2019, she leads Cognitive Developmental Robotics Lab at the University of Tokyo. Her research interests include cognitive developmental robotics, computational neuroscience, and assistive technologies for developmental disorders. Her research achievements have been widely reported in the media as novel techniques to understand and support human development. She also serves as the research director of JST CREST Cognitive Mirroring.

Prof. Danica Kragic – Robotics and Artificial Intelligence Impacts on the Fashion Industry

Abstract: This talk will overview how robotics and artificial intelligence can impact fashion industry. What can we do to make fashion industry more sustainable and what are the most difficult parts in this industry to automate? Concrete examples of research problems in terms of perception, manipulation of deformable materials and planning will be discussed in this context.

Bio: Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received her MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia in 1995 and her PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences, the Royal Swedish Academy of Engineering Sciences and the Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. Her research is in the area of robotics, computer vision and machine learning. She has received ERC Starting and Advanced Grants. Her research is supported by the EU, the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research and the Swedish Research Council. She is an IEEE Fellow.

]]>
Play RoC-Ex: Robotic Cave Explorer and unearth the truth about what happened in cave system #0393 https://robohub.org/play-roc-ex-robotic-cave-explorer-and-unearth-the-truth-about-what-happened-in-cave-system-0393/ Sat, 03 Jul 2021 16:47:03 +0000 https://robohub.org/play-roc-ex-robotic-cave-explorer-and-unearth-the-truth-about-what-happened-in-cave-system-0393/

Dive into the experience of piloting a robotic scout through what appears to be an ancient cave system leading down to the centre of the Earth. With the help of advanced sensors, guide your robot explorer along dark tunnels and caverns, avoiding obstacles, collecting relics of aeons past and, hopefully, discover what happened to its predecessor.

Mickey Li, Julian Hird, G. Miles, Valentina Lo Gatto, Alex Smith and WeiJie Wong (most from the FARSCOPE CDT programme of the University of Bristol and the University of the West of England) have created this educational game as part of the UKRAS festival of Robotics 2021. The game has been designed to teach you about how sensors work, how they are used in reality and perhaps give a glimpse into the mind of the robot. With luck, this game can show how exciting it can be to work in robotics.

As the authors suggest, you can check out the DARPA Subterranean Challenge (video) (website) (not affiliated) for an example of the real life version of this game.

The game is hosted on this website. The site is secured and no data is collected when playing the game (apart from if you decide to fill in the anonymous feedback form). The source code for the game is available open source here on Github. Enjoy!

]]>
Robot Swarms in the Real World workshop at IEEE ICRA 2021 https://robohub.org/robot-swarms-in-the-real-world-workshop-at-ieee-icra-2021/ Thu, 17 Jun 2021 11:19:28 +0000 https://robohub.org/robot-swarms-in-the-real-world-workshop-at-ieee-icra-2021/

Siddharth Mayya (University of Pennsylvania), Gennaro Notomista (CNRS Rennes), Roderich Gross (The University of Sheffield) and Vijay Kumar (University of Pennsylvania) were the organisers of this IEEE ICRA 2021 workshop aiming to identify and accelerate developments that help swarm robotics technology transition into the real world. Here we bring you the recordings of the session in case you missed it or would like to re-watch.

As the organisers describe, “in swarm robotics systems, coordinated behaviors emerge via local interactions among the robots as well as between robots and the environment. From Kilobots to Intel Aeros, the last decade has seen a rapid increase in the number of physically instantiated robot swarms. Such deployments can be broadly classified into two categories: in-laboratory swarms designed primarily as research aids, and industry-led efforts, especially in the entertainment and automated warehousing domains. In both of these categories, researchers have accumulated a vast amount of domain-specific knowledge, for example, regarding physical robot design, algorithm and software architecture design, human-swarm interfacing, and the practicalities of deployment.” The workshop brought together swarm roboticists from academia to industry to share their latest developments—from theory to real-world deployment. Enjoy the playlist with all the recordings below!

]]>
Anatomy of a robotic system https://robohub.org/anatomy-of-a-robotic-system/ Fri, 21 May 2021 14:03:06 +0000 https://robohub.org/anatomy-of-a-robotic-system/

Robotics is both an exciting and intimidating field because it takes so many different capabilities and disciplines to spin up a robot. This goes all the way from mechatronic design and hardware engineering to some of the more philosophical parts of software engineering.

Those who know me also know I’m a total “high-level guy”. I’m fascinated by how complex a modern robotic system is and how all the pieces come together to make something that’s truly impressive. I don’t have particularly deep knowledge in any specific area of robotics, but I appreciate the work of countless brilliant people that give folks like me the ability to piece things together.

In this post, I will go through what it takes to put together a robot that is highly capable by today’s standards. This is very much an opinion article that tries to slice robotic skills along various dimensions. I found it challenging to find a single “correct” taxonomy, so I look forward to hearing from you about anything you might disagree with or have done differently.

What defines a robot?

Roughly speaking, a robot is a machine with sensors, actuators, and some computing device that has been programmed to perform some level of autonomous behavior. Below is the essential “getting started” diagram that anyone in AI should recognize.

Diagram of typical artificially intelligent agent, such as a robot. Source: Artificial Intelligence: A Modern Approach (Russell, Norvig)

This opens up the ever-present “robot vs. machine” debate that I don’t want to spend too much time on, but my personal distinction between a robot and a machine includes a couple of specifications.

  1. A robot has at least one closed feedback loop between sensing and actuation that does not require human input. This discounts things like a remote-controlled car, or a machine that constantly executes a repetitive motion but will never recover if you nudge it slightly — think, for example, Theo Jansen’s kinetic sculptures.
  2. A robot is an embodied agent that operates in the physical world. This discounts things like chatbots, or even smart speakers which — while awesome displays of artificial intelligence — I wouldn’t consider them to be robots… yet.
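
To make the first specification concrete, here is a minimal sketch of such a closed sensing-actuation loop in Python. The sensor and motor functions are invented placeholders, not any particular robot's API; the point is simply that the machine senses, decides and acts with no human in the loop.

# Minimal sense-think-act loop: the closed feedback loop that, by the
# definition above, separates a robot from a remote-controlled machine.
# read_distance_sensor() and set_wheel_speeds() are invented placeholders
# for whatever hardware interface a real robot would provide.
import time

def read_distance_sensor():
    return 1.0    # placeholder: distance to the nearest obstacle, in metres

def set_wheel_speeds(left, right):
    pass          # placeholder: send speed commands to the motor drivers

def control_loop():
    while True:
        distance = read_distance_sensor()    # sense
        if distance < 0.3:                   # decide, with no human input
            set_wheel_speeds(-0.2, 0.2)      # act: turn away from the obstacle
        else:
            set_wheel_speeds(0.5, 0.5)       # act: drive forward
        time.sleep(0.05)                     # run at roughly 20 Hz

# control_loop()    # on a real robot this would run indefinitely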

The big picture: Automatic, Autonomous, and Aware

Robots are not created equal. Not to say that simpler robots don’t have their purpose — knowing how simple or complex (technology- and budget-wise) to build a robot given the problem at hand is a true engineering challenge. To say it a different way: overengineering is not always necessary.

I will now categorize the capabilities of a robot into three bins: Automatic, Autonomous, and Aware. This roughly corresponds to how low- or high-level a robot’s skills have been developed, but it’s not an exact mapping as you will see.

Don’t worry too much about the details in the table as we will cover them throughout the post.

Automatic: The robot is controllable and it can execute motion commands provided by a human in a constrained environment. Think, for example, of industrial robots in an assembly line. You don’t need to program very complicated intelligence for a robot to put together a piece of an airplane from a fixed set of instructions. What you need is fast, reliable, and sustained operation. The motion trajectories of these robots are often trained and calibrated by a technician for a very specific task, and the surroundings are tailored for robots and humans to work with minimal or no conflict.

Autonomous: The robot can now execute tasks in an uncertain environment with minimal human supervision. One of the most pervasive examples of this is self-driving cars (which I absolutely label as robots). Today’s autonomous vehicles can detect and avoid other cars and pedestrians, perform lane-change maneuvers, and navigate the rules of traffic with fairly high success rates despite the negative media coverage they may get for not being 100% perfect.

Aware: This gets a little bit into the “sci-fi” camp of robots, but rest assured that research is bringing this closer to reality day by day. You could tag a robot as aware when it can establish two-way communication with humans. An aware robot, unlike the previous two categories, is not only a tool that receives commands and makes mundane tasks easier, but one that is perhaps more collaborative. In theory, aware robots can understand the world at a higher level of abstraction than their more primitive counterparts, and process humans’ intentions from verbal or nonverbal cues, still under the general goal of solving a real-life problem.

A good example is a robot that can help humans assemble furniture. It operates in the same physical and task space as us humans, so it should adapt to where we choose to position ourselves or what part of the furniture we are working on without getting in the way. It can listen to our commands or requests, learn from our demonstrations, and tell us what it sees or what it might do next in language we understand so we can also take an active part in ensuring the robot is being used effectively.

  • Self-awareness and control: How much does the robot know about itself?
  • Spatial awareness: How much does the robot know about the environment and its own relationship to the environment?
  • Cognition and expression: How capable is the robot of reasoning about the state of the world and expressing its beliefs and intentions to humans or other autonomous agents?

Above you can see a perpendicular axis of categorizing robot awareness. Each highlighted area represents the subset of skills it may require for a robot to achieve awareness on that particular dimension. Notice how forming abstractions — i.e., semantic understanding and reasoning — is crucial for all forms of awareness. That is no coincidence.

The most important takeaway as we move from automatic to aware is the increasing ability for robots to operate “in the wild”. Whereas an industrial robot may be designed to perform repetitive tasks with superhuman speed and strength, a home service robot will often sacrifice this kind of task-specific performance for the more generalized skills needed for human interaction and for navigating uncertain and/or unknown environments.

A Deeper Dive Into Robot Skills

Spinning up a robot requires a combination of skills at different levels of abstraction. These skills are all important aspects of a robot’s software stack and require vastly different areas of expertise. This goes back to the central point of this blog post: it’s not easy to build a highly capable robot, and it’s certainly not easy to do it alone!

Functional Skills

This denotes the low-level foundational skills of a robot. Without a robust set of functional skills, you would have a hard time getting your robot to be successful at anything else higher up the skill chain.

Being at the lowest level of abstraction, functional skills are closely tied to direct interaction with the actuators and sensors of the robot. Indeed, these skills can be discussed along the acting and sensing modalities.

Acting

  • Controls is all about ensuring the robot can reliably execute physical commands — whether this is a wheeled robot, flying robot, manipulator, legged robot, soft robot (… you get the point …) it needs to respond to inputs in a predictable way if it is to interact with the world. It’s a simple concept, but a challenging task that ranges from controlling electrical current/fluid pressure/etc. to multi-actuator coordination towards executing a full motion trajectory.
  • Speech synthesis acts on the physical world in a different way — this is more on the human-robot interaction (HRI) side of things. You can think of these capabilities as the robot’s ability to express its state, beliefs, and intentions in a way that is interpretable by humans. Imagine speech synthesis as more than speaking in a monotone robotic voice, but maybe simulating emotion or emphasis to help get information across to the human. This is not limited to speech. Many social robots will also employ visual cues like facial expressions, lights and colors, or movements — for example, look at MIT’s Leonardo.

Sensing

  • Controls requires some level of proprioceptive (self) sensing. To regulate the state of a robot, we need to employ sensors like encoders, inertial measurement units, and so on.
  • Perception, on the other hand, deals with exteroceptive (environmental) sensing. This mainly deals with line-of-sight sensors like sonar, radar, and lidar, as well as cameras. Perception algorithms require significant processing to make sense of a bunch of noisy pixels and/or distance/range readings. The act of abstracting this data to recognize and locate objects, track them over space and time, and use them for higher-level planning is what makes it exciting and challenging. Finally, back on the social robotics topic, vision can also enable robots to infer the state of humans for nonverbal communication.
  • Speech recognition is another form of exteroceptive sensing. Getting from raw audio to accurate enough text that the robot can process is not trivial, despite how easy smart assistants like Google Assistant, Siri, and Alexa have made it look. This field of work is officially known as automatic speech recognition (ASR).
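
As a small illustration of how controls and proprioceptive sensing close the loop, here is a sketch of a textbook PID velocity controller driven by an encoder measurement. The gains, the fake plant response and the 1.0 rad/s setpoint are all made up for the example; a real robot would read its encoder and command the motor driver instead.

# A textbook PID controller regulating wheel speed from encoder feedback.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.2, kd=0.01)
dt = 0.01                  # 100 Hz control loop
measured_speed = 0.0       # would come from an encoder on a real robot
for _ in range(5):
    command = pid.update(setpoint=1.0, measurement=measured_speed, dt=dt)
    # the command would go to the motor driver; here we fake the response
    measured_speed += 0.5 * command * dt
    print(round(measured_speed, 4))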

Behavioral Skills

Behavioral skills are a step above the more “raw” sensor-to-actuator processing loops that we explored in the Functional Skills section. Creating a solid set of behavioral skills simplifies our interactions with robots both as users and programmers.

At the functional level, we have perhaps demonstrated capabilities for the robot to respond to very concrete, mathematical goals. For example,

  • Robot arm: “Move the elbow joint to 45 degrees and the shoulder joint to 90 degrees in less than 2.5 seconds with less than 10% overshoot. Then, apply a force of 2 N on the gripper.”
  • Autonomous car: “Speed up to 40 miles per hour without exceeding acceleration limit of 0.1 g and turn the steering wheel to attain a turning radius of 10 m.”

At the behavioral level, commands may take the form of:

    • Robot arm: “Grab the door handle.”
    • Autonomous car: “Turn left at the next intersection while following the rules of traffic and passenger ride comfort limits.”

Abstracting away these motion planning and navigation tasks requires a combination of models of the robot and/or world, and of course, our set of functional skills like perception and control.

  • Motion planning seeks to coordinate multiple actuators in a robot to execute higher-level tasks. Instead of moving individual joints to setpoints, we now employ kinematic and dynamic models of our robot to operate in the task space — for example, the pose of a manipulator’s end effector or the traffic lane a car occupies in a large highway. Additionally, to go from a start to a goal configuration requires path planning and a trajectory that dictates how to execute the planned path over time. I like this set of slides as a quick intro to motion planning.
  • Navigation seeks to build a representation of the environment (mapping) and knowledge of the robot’s state within the environment (localization) to enable the robot to operate in the world. This representation could be in the form of simple primitives like polygonal walls and obstacles, an occupancy grid, a high-definition map of highways, etc.
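
As a toy illustration of the mapping and planning ideas above, the sketch below builds a small occupancy grid and runs a breadth-first search from a start cell to a goal cell. Real navigation stacks use far richer map representations and planners (A*, lattice planners, and so on), so treat this purely as a conceptual example.

# Toy occupancy grid (0 = free, 1 = occupied) and a breadth-first search
# from a start cell to a goal cell -- a bare-bones stand-in for the
# mapping and planning ideas described above.
from collections import deque

grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    if goal not in came_from:
        return None    # no collision-free path exists
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

print(plan_path(grid, start=(0, 0), goal=(4, 4)))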

If this hasn’t yet come across, behavioral skills and functional skills definitely do not work in isolation. Motion planning in a space with obstacles requires perception and navigation. Navigating in a world requires controls and motion planning.

On the language side, Natural Language Processing (NLP) is what takes us from raw text input — whether it came from speech or directly typed in — to something more actionable for the robot. For instance, if a robot is given the command “bring me a snack from the kitchen”, the NLP engine needs to interpret this at the appropriate level to perform the task. Putting it all together,

  1. Going to the kitchen is a navigation problem that likely requires a map of the house.
  2. Locating the snack and getting its 3D position relative to the robot is a perception problem.
  3. Picking up the snack without knocking other things over is a motion planning problem.
  4. Returning to wherever the human was when the command was issued is again a navigation problem. Perhaps someone closed a door along the way, or left something in the way, so the robot may have to replan based on these changes to the environment.
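
A minimal sketch of that decomposition in code is shown below. The skill functions are hypothetical stubs standing in for the navigation, perception and motion planning skills described earlier; a real system would call into its own stack at each step.

# Hypothetical skill stubs standing in for the behavioral skills above;
# a real robot would call its navigation, perception and manipulation
# stacks instead of printing.
def navigate_to(location):
    print("navigating to " + location)
    return True

def locate_object(name):
    print("searching for " + name)
    return (1.2, 0.4, 0.9)    # pretend 3D position relative to the robot

def pick_up(name, position):
    print("picking up " + name + " at " + str(position))
    return True

def fetch(object_name, source, destination):
    # "bring me a snack from the kitchen", decomposed into skill calls
    if not navigate_to(source):                # 1. navigation
        return False
    position = locate_object(object_name)      # 2. perception
    if position is None:
        return False
    if not pick_up(object_name, position):     # 3. motion planning
        return False
    return navigate_to(destination)            # 4. navigation (may need to replan)

fetch("snack", source="kitchen", destination="living room")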

Abstract Skills

Simply put, abstract skills are the bridge between humans and robot behaviors. All the skills in the row above perform some transformation that lets humans more easily express their commands to robots, and similarly lets robots more easily express themselves to humans.

Task and Behavior Planning operates on the key principles of abstraction and composition. A command like “get me a snack from the kitchen” can be broken down into a set of fundamental behaviors (navigation, motion planning, perception, etc.) that can be parameterized and thus generalized to other types of commands such as “put the bottle of water in the garbage”. Having a common language like this makes it useful for programmers to add capabilities to robots, and for users to leverage their robot companions to solve a wider set of problems. Modeling tools such as finite-state machines and behavior trees have been integral in implementing such modular systems.
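
Purely as an illustration of this kind of modeling (and not any particular framework), here is the same fetch task from the previous sketch expressed as a tiny finite-state machine, where states are parameterized behaviors and transitions react to abstract events such as "arrived" or "path_blocked":

# Minimal finite-state machine: each state names a parameterized behavior
# and transitions are triggered by abstract events reported by lower-level
# skills. All names below are invented for the example.
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions    # {(state, event): next_state}

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

fetch_task = StateMachine(
    initial="navigate_to(kitchen)",
    transitions={
        ("navigate_to(kitchen)", "arrived"): "locate(snack)",
        ("locate(snack)", "found"): "pick_up(snack)",
        ("pick_up(snack)", "grasped"): "navigate_to(person)",
        ("navigate_to(person)", "path_blocked"): "navigate_to(person)",  # replan the route
        ("navigate_to(person)", "arrived"): "done",
    },
)

for event in ["arrived", "found", "grasped", "arrived"]:
    print(fetch_task.handle(event))    # locate(snack), pick_up(snack), navigate_to(person), done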

Semantic Understanding and Reasoning is bringing abstract knowledge to a robot’s internal model of the world. For example, in navigation we saw that a map can be represented as “occupied vs. unoccupied”. In reality, there are more semantics that can enrich the task space of the robot besides “move here, avoid there”. Does the environment have separate rooms? Is some of the “occupied space” movable if needed? Are there elements in the world where objects can be stored and retrieved? Where are certain objects typically found such that the robot can perform a more targeted search? This recent paper from Luca Carlone’s group is a neat exploration into maps with rich semantic information, and a huge portal of future work that could build on this.
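
As a toy example of the kind of semantic enrichment being described (and not the representation used in the cited paper), a map might annotate each room with the objects typically found there, so that a search can be targeted rather than exhaustive:

# Toy semantic map: on top of "occupied vs. unoccupied", each room is
# annotated with the objects typically found there.
semantic_map = {
    "kitchen":     {"objects": ["mug", "kettle", "snack"], "door": True},
    "living_room": {"objects": ["sofa", "tv_remote"],      "door": False},
    "office":      {"objects": ["laptop", "mug"],          "door": True},
}

def rooms_to_search(object_name):
    # return the rooms where this object is typically found
    return [room for room, info in semantic_map.items()
            if object_name in info["objects"]]

print(rooms_to_search("mug"))      # ['kitchen', 'office']
print(rooms_to_search("snack"))    # ['kitchen']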

Natural Language Understanding and Dialog is effectively two-way communication of semantic understanding between humans and robots. After all, the point of abstracting away our world model was so we humans could work with it more easily. Here are examples of both directions of communication:

  • Robot-to-human: If a robot failed to execute a plan or understand a command, can it actually tell the human why it failed? Maybe a door was locked on the way to the goal, or the robot did not know what a certain word meant and it can ask you to define it.
  • Human-to-robot: The goal here is to share knowledge with the robot to enrich its semantic understanding of the world. Some examples might be teaching new skills (“if you ever see a mug, grab it by the handle”) or reducing uncertainty about the world (“the last time I saw my mug it was on the coffee table”).

This is all a bit of a pipe dream — Can a robot really be programmed to work with humans at such a high level of interaction? It’s not easy, but research tries to tackle problems like these every day. I believe a good measure of effective human-robot interaction is whether the human and robot jointly learn not to run into the same problem over and over, thus improving the user experience.

Conclusion

Thanks for reading, even if you just skimmed through the pictures. I hope this was a useful guide to navigating robotic systems that was somewhat worth its length. As I mentioned earlier, no categorization is perfect and I invite you to share your thoughts.

Going back to one of my specifications: A robot has at least one closed feedback loop between sensing and actuation that does not require human input. Let’s try to put our taxonomy into an example set of feedback loops below. Note that I’ve condensed the natural language pipeline into the “HRI” arrows connecting to the human.

Example of a hierarchical robotic system architecture. Icons made by Freepik from www.flaticon.com

Because no post today can evade machine learning, I also want to take some time to make readers aware of machine learning’s massive role in modern robotics.

  • Processing vision and audio data has been an active area of research for decades, but the rise of neural networks as function approximators for machine learning (i.e. “deep learning”) has made recent perception and ASR systems more capable than ever.
  • Additionally, learning has demonstrated its use at higher levels of abstraction. Text processing with neural networks has moved the needle on natural language processing and understanding. Similarly, neural networks have enabled end-to-end systems that can learn to produce motion, task, and/or behavior plans from complex observation sources like images and range sensors.

The truth is, our human knowledge is being outperformed by machine learning for processing such high-dimensional data. Always remember that machine learning shouldn’t be a crutch due to its biggest pitfall: We can’t (yet) explain why learned systems behave the way they do, which means we can’t (yet) formulate the kinds of guarantees that we can with traditional methods. The hope is to eventually refine our collective scientific knowledge so we’re not relying on data-driven black-box approaches. After all, knowing how and why robots learn will only make them more capable in the future.

On that note, you have earned yourself a break from reading. Until next time!


You can read the original article at Roboticseabass.com.

]]>
How to build a robotics startup: getting some money to start https://robohub.org/how-to-build-a-robotics-startup-getting-some-money-to-start/ Tue, 18 May 2021 14:16:24 +0000 https://robohub.org/how-to-build-a-robotics-startup-getting-some-money-to-start/

This episode is about learning the options you have for getting some money to start your startup and what you are expected to achieve with that money.

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup where we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is now 5 years old and we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally arrived at the point where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video

The post 94. How to build a robotics startup: getting some money to start appeared first on The Construct.

]]>
Robohub and AIhub’s free workshop trial on sci-comm of robotics and AI https://robohub.org/robohub-and-aihubs-free-workshop-trial-on-sci-comm-of-robotics-and-ai/ Wed, 21 Apr 2021 15:35:45 +0000 https://robohub.org/robohub-and-aihubs-free-workshop-trial-on-sci-comm-of-robotics-and-ai/ A robot in a field

Image credit: wata1219 on flickr (CC BY-NC-ND 2.0)

Would you like to learn how to tell your robotics/AI story to the public? Robohub and AIhub are testing a new workshop to train you as the next generation of communicators. You will learn to quickly create your story and shape it to any format, from short tweets to blog posts and beyond. In addition, you will learn how to communicate about robotics/AI in a realistic way (avoiding the hype), and will receive tips from top communicators, science journalists and early career researchers. If you feel like being one of our beta testers, join this free workshop to experience how much impact science communication can have on your professional journey!

The workshop is taking place on Friday the 30th of April, 10am-12.30pm (UK time) via Zoom. Please sign up by sending an email to daniel.carrillozapata@robohub.org.

]]>
Amanda Prorok’s talk – Learning to Communicate in Multi-Agent Systems (with video) https://robohub.org/amanda-proroks-talk-learning-to-communicate-in-multi-agent-systems-with-video/ Sat, 17 Apr 2021 09:30:37 +0000 https://robohub.org/amanda-proroks-talk-learning-to-communicate-in-multi-agent-systems-with-video/

In this technical talk, Amanda Prorok, Assistant Professor in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College, discusses her team’s latest research on what, how and when information needs to be shared among agents that aim to solve cooperative tasks.

Abstract

Effective communication is key to successful multi-agent coordination. Yet it is far from obvious what, how and when information needs to be shared among agents that aim to solve cooperative tasks. In this talk, I discuss our recent work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. In my first case-study, I show how we use GNNs to find a decentralized solution to the multi-agent path finding problem, which is known to be NP-hard. I demonstrate how our policy is able to achieve near-optimal performance, at a fraction of the real-time computational cost. Secondly, I show how GNN-based reinforcement learning can be leveraged to learn inter-agent communication policies. In this case-study, I demonstrate how non-shared optimization objectives can lead to adversarial communication strategies. Finally, I address the challenge of learning robust communication policies, enabling a multi-agent system to maintain high performance in the presence of anonymous non-cooperative agents that communicate faulty, misleading or manipulative information.

Biography

Amanda Prorok is an Assistant Professor in the Department of Computer Science and Technology, at Cambridge University, UK, and a Fellow of Pembroke College. Her mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Amanda Prorok has been honored by an ERC Starting Grant, an Amazon Research Award, an EPSRC New Investigator Award, an Isaac Newton Trust Early Career Award, and the Asea Brown Boveri (ABB) Award for the best thesis at EPFL in Computer Science. Further awards include Best Paper at DARS 2018, Finalist for Best Multi-Robot Systems Paper at ICRA 2017, Best Paper at BICT 2015, and MIT Rising Stars 2015. She serves as Associate Editor for IEEE Robotics and Automation Letters (R-AL), and Associate Editor for Autonomous Robots (AURO). Prior to joining Cambridge, Amanda Prorok was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA. She completed her PhD at EPFL, Switzerland.

Featuring Guest Panelist(s): Stephanie Gil, Joey Durham


The next technical talk will be delivered by Koushil Sreenath from UC Berkeley, and it will take place on April 23 at 3pm EDT. Keep up to date on this website.

]]>
Building a 7 axis robot from scratch https://robohub.org/building-a-7-axis-robot-from-scratch/ Sun, 11 Apr 2021 08:50:53 +0000 https://robohub.org/building-a-7-axis-robot-from-scratch/

Do you fancy making yourself an industrial robot to enjoy at home? Jeremy Fielding, a passionate fan of mechanical engineering, did. So he built one. The good news is he’s preparing a series of videos to teach you the whole process from scratch. How much power do you need to run 7 motors at one time? If you lose power, how do you prevent the arm from collapsing on you or dropping the load? How do you keep the cost down? He’s recorded over 100 hours of video, and he’s planning to teach you how he used the servo motors, how you can use them for your projects and how he designed his 7-axis articulated robot.

Jeremy’s aim (website, YouTube, Twitter, Instagram) is simple: draw people to engineering with amazing projects, inspire them with ideas, then teach them how to do it. And for this video series, he’s also looking for your collaboration. So if you’ve got experience and knowledge of building this type of robot and you’d like to share it, maybe you’ll end up being part of the series!

We’d like to thank Black in Robotics for making it possible for us to discover Jeremy. Here’s the video Jeremy has released to introduce his project:

Check out Jeremy’s YouTube channel to discover many more instructional videos. You can also support his work on Patreon.

]]>
Talking Robotics’ seminars of January – April 2021 (with videos and even a musical summary!) https://robohub.org/talking-robotics-seminars-of-january-april-2021-with-videos-and-even-a-musical-summary/ Fri, 09 Apr 2021 07:59:59 +0000 https://robohub.org/talking-robotics-seminars-of-january-april-2021-with-videos-and-even-a-musical-summary/

Talking Robotics is a series of virtual seminars about Robotics and its interaction with other relevant fields, such as Artificial Intelligence, Machine Learning, Design Research and Human-Robot Interaction, among others. The seminars aim to promote reflection, dialogue, and a place to network. In this compilation, we bring you 7 talks (and a half?) from current roboticists for your enjoyment.

Filipa Correia “Group Intelligence on Social Robots”

Filipa Correia received an M.Sc. in Computer Science from the University of Lisbon, Portugal, in 2015. She is currently a junior researcher at GAIPSLab and is pursuing a Ph.D. on Human-Robot Interaction at the University of Lisbon, Portugal.

Her PhD thesis is focused on the challenges of creating social robots that are capable of sustaining cohesive alliances in team settings with humans. Moreover, it contributes computational mechanisms for the robotic teammate to autonomously express group-based emotions or to gaze at human teammates in multi-party settings.

For more information about the speaker and related papers, please see this website.

Ross Mead “Bringing Robots To Life”

Dr. Ross Mead is the Founder and CEO of Semio. Ross received his PhD and MS in Computer Science from the University of Southern California in 2015, and his BS in Computer Science from Southern Illinois University Edwardsville in 2007.

In this talk, Dr. Ross Mead discussed the technological advances paving the way for the personal robotics revolution, and the robotics hardware companies leading the charge along that path. He also introduced the software innovations in development at Semio for bringing robots to life.

For more information about the speaker and related papers, please see this website.

Kim Baraka “Humans and Robots Teaching and Learning through Social Interaction”

Kim Baraka is currently a postdoctoral fellow at the Socially Intelligent Machines Lab at the University of Texas at Austin. He holds a dual Ph.D. in Robotics from Carnegie Mellon University and Instituto Superior Técnico (Portugal), an M.S. in Robotics from Carnegie Mellon, and a Bachelor in Electrical and Computer Engineering from the American University of Beirut.

In the first part of the talk, focusing on robots teaching humans, Kim discussed algorithmic solutions that enable socially assistive robots to teach both children and therapists in a personalized way. In the second part of the talk, focusing on humans teaching robots, Kim discussed some preliminary efforts towards developing ways in which robots can learn tasks from human teachers in richer and more natural ways.

For more information about the speaker and related papers, please see this website.

Glenda Hannibal “Trust in HRI: Probing Vulnerability as an Active Precondition”

Glenda has previously worked in the Department of Sociology at the University of Vienna and as an expert for the HUMAINT project at the European Commission. Glenda holds a BA and MA in Philosophy from Aarhus University and is currently a PhD student in the Trust Robots Doctoral College and Human-Computer Interaction group at TU Wien.

In this talk, Glenda presented her research on vulnerability as a precondition of trust in HRI. In the first part, she argued that while the most commonly cited definitions of trust used in HRI recognize vulnerability as an essential element of trust, it is also often considered somewhat problematic too. In the second part of her talk, she presented the results of two empirical studies she has undertaken to explore trust in HRI in relation to vulnerability. Finally, she reflected on few ethical aspects related to this theme to end this talk.

For more information about the speaker and related papers, please see this website.

Carl Mueller “Robot Learning from Demonstration Driven Constrained Skill Learning & Motion Planning”

Carl Mueller is a Ph.D. student of computer science at the University of Colorado – Boulder, advised by Professor Bradley Hayes within the Collaborative Artificial Intelligence and Robotics Laboratory. He graduated from the University of California – Santa Barbara with a degree in Biopsychology and, after a circuitous route through the pharmaceutical industry, ended up in tech, founding his own company building intelligent chat agents for business analytics.

The major theme of his research is the enablement of human users to communicate additional information to the robot learning system through ‘concept constraints’. Concept Constraints are abstract behavioral restrictions grounded as geometric and kinodynamical planning predicates that prohibit or limit the behavior of the robot resulting in more robust, generalizable, and safe skill execution. In this talk, Carl discussed how conceptual constraints are integrated into existing LfD methods, how unique interfaces can further enhance the communication of such constraints, and how the grounding of these constraints requires constrained motion planning techniques.

For more information about the speaker and related papers, please see this website.

Daniel Rakita “Methods and Applications for Generating Accurate and Feasible Robot-arm Motions in Real-time”

Daniel Rakita is a Ph.D. student of computer science at the University of Wisconsin-Madison advised by Michael Gleicher and Bilge Mutlu. He received a Bachelors of Music Performance from the Indiana University Jacobs School of Music in 2012.

In this talk, he overviewed technical methods they have developed that attempt to achieve feasible, accurate, and time-sensitive robot-arm motions. In particular, he detailed their inverse kinematics solver called RelaxedIK that utilizes both non-linear optimization and machine learning to achieve a smooth, feasible, and accurate end-effector to joint-space mapping on-the-fly. He highlighted numerous ways they have applied their technical methods to real-world-inspired problems, such as mapping human-arm-motion to robot-arm-motion in real-time to afford effective shared-control interfaces and automatically moving a camera-in-hand robot in a remote setting to optimize a viewpoint for a teleoperator.

For more information about the speaker and related papers, please see this website.

Barbara Bruno “Culture-Aware Robotics”

Barbara Bruno is a post-doc researcher at the École Polytechnique Fédérale de Lausanne (EPFL), in Lausanne, Switzerland, in the CHILI lab. Barbara received the M.Sc. and the Ph.D. in Robotics from the University of Genoa in 2011 and 2015, respectively. She is part of the NCCR Robotics organisation and currently involved in the EU ITN ANIMATAS.

In this talk, she explored how existing quantitative and qualitative methods for the assessment of culture, and cultural differences, can be combined with knowledge representation and reasoning tools such as ontologies and fuzzy controllers to endow robots with the capability of taking cultural factors into account in a range of tasks going from low-level motion planning to high-level dialogue management and user adaptation.

For more information about the speaker and related papers, please see this website.

Extra: When you forget to hit the record button but you value the speaker so much that you compose a musical summary of his talk…

Because as they say, we are human after all!

Nils Hagberg “A call for a Human Approach to Technology”

Nils Hagberg is the Product Owner at Furhat Robotics. He is a computer linguist with almost ten years of industry experience in human-machine interaction and conversational technology.

In this talk, he gave a few examples of human-centric business case designs that he’s come across earlier in his career and at Furhat Robotics – the social robotics company that has set out to make technology more human. His hope for this talk is that it will put your own work in a larger context and nudge you towards a path that will ensure humans will be allowed to remain human.

For more information about the speaker and related papers, please see this website.

]]>
iRobot Education expands its free coding platform with social-emotional learning, multi-language support https://robohub.org/irobot-education-expands-its-free-coding-platform-with-social-emotional-learning-multi-language-support/ Thu, 08 Apr 2021 09:41:44 +0000 https://robohub.org/irobot-education-expands-its-free-coding-platform-with-social-emotional-learning-multi-language-support/

iRobot Corp. unveiled new coding resources through iRobot Education that promote more inclusive, equitable access to STEM education and support social-emotional development. iRobot also updated its iRobot Coding App with the introduction of Python coding support and a new 3D Root coding robot simulator environment that is ideal for hybrid and remote learning landscapes.

The updates coincide with the annual National Robotics Week, a time when kids, parents and teachers across the nation tap into the excitement of robotics for STEM learning.

Supporting Social and Emotional Learning
The events of the past year changed the traditional learning environment with students, families and educators adapting to hybrid and remote classrooms. Conversations on the critical importance of diversity, equity and inclusion have also taken on increased importance in the classroom. To address this, iRobot Education has introduced social and emotional learning (SEL) lessons to its Learning Library that tie SEL competencies, like peer interaction and responsible decision-making, into coding and STEM curricula. These SEL lessons, such as The Kind Playground, Seeing the Whole Picture and Navigating Conversations, provide educators with new resources that help students build emotional intelligence and become responsible global citizens, through a STEM lens.

Language translations for iRobot Coding App
More students can now enjoy the free iRobot Coding App with the introduction of Spanish, French, German, Czech and Japanese language support. iRobot’s mobile and web coding app offers three progressively challenging levels of coding language that advance users from graphical coding to hybrid coding, followed by full-text coding. Globally, users can now translate graphical and hybrid block coding levels into their preferred language, helping beginners and experts alike hone their language and computational thinking skills.

Introducing Python coding language support
One of the most popular coding languages, Python is now available to iRobot Coding App users in level 3, full-text coding. This new functionality provides an avenue to gain more complex coding experience in a coding language that is currently used in both academic and professional capacities worldwide, preparing the next generation of students for STEM curriculums and careers.

New Root Coding Robot 3D Simulator
Ready to code in 3D? The iRobot Coding App is uniquely designed to help kids learn coding at home and in school, with a virtual Root coding robot available for free within the app. iRobot updated the virtual Root SimBot with a fun and interactive 3D experience, allowing students to control their programmable Root® coding robot right on the screen.

“The COVID-19 pandemic has had, and continues to have, a tangible impact on students who’ve been learning in remote environments, which is why we identified solutions to nurture and grow SEL skills in students,” said Colin Angle, chairman and CEO of iRobot. “The expansion of these new iRobot Education resources, which are free to anyone, will hopefully facilitate greater inclusivity and accessibility for those who want to grow their coding experience and pursue STEM careers.”

In celebration of National Robotics Week, iRobot Education will also release weekly coding challenges focused on learning how to use code to communicate. Each challenge can be completed online in the iRobot Coding App with the Root SimBot or in-person with a Root coding robot. The weekly challenges build upon each other and include guided questions to facilitate discussions about the coding process, invite reflections, and celebrate new learning.

For more information on iRobot Education, Root coding robots and the iRobot Coding App, visit: https://edu.irobot.com/.

Information about National Robotics Week can be found at www.nationalroboticsweek.org.

]]>
How to build a robotics startup: getting the team right https://robohub.org/how-to-build-a-robotics-startup-getting-the-team-right/ Tue, 06 Apr 2021 10:39:03 +0000 https://robohub.org/how-to-build-a-robotics-startup-getting-the-team-right/

This episode is about understanding why you can’t build your startup alone, and some criteria to properly select your co-founders.

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup where we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is now 5 years old and we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally arrived at the point where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video

The post 91. How to build a robotics startup: getting the team right appeared first on The Construct.

]]>
Back to Robot Coding part 3: testing the EBB https://robohub.org/back-to-robot-coding-part-3-testing-the-ebb/ Tue, 23 Mar 2021 15:43:00 +0000 https://robohub.org/back-to-robot-coding-part-3-testing-the-ebb/

In part 2 a few weeks ago I outlined a Python implementation of the ethical black box. I described the key data structure – a dictionary which serves as both specification for the type of robot, and the data structure used to deliver live data to the EBB. I also mentioned the other key robot-specific code:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

Having reached this point I needed a robot – and a way of communicating with it – so that I could both write getRobotData(spec)  and test the EBB. But how to do this? I’m working from home during lockdown, and my e-puck robots are all in the lab. Then I remembered that the excellent robot simulator V-REP (now called CoppeliaSim) has a pretty good e-puck model and some nice demo scenes. V-REP also offers multiple ways of communicating between simulated robots and external programs (see here). One of them – TCP/IP sockets – appeals to me as I’ve written sockets code many times, for both real-world and research applications. Then a stroke of luck: I found that a team at Ensta-Bretagne had written a simple demo which does more or less what I need – just not for the e-puck. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.

So, what’s going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. The robot has two behaviours: line following and wall following. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time botT, then the 3 line following sensors lfSe, followed by the 8 infra red proximity sensors irSe. The final two fields show the joint (i.e. wheel) angles jntA, in degrees, then the motor commands jntD. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infra red sensor values change, resulting in updated motor commands.

Here is the code – which is custom written both for this robot and the means of communicating with it – for requesting data from the robot.


# imports needed by this function; server_address_port - the (host, port)
# tuple for the simulated robot - is defined elsewhere in the program
import socket
import struct
import math

# Get data from the robot and store it in spec[]
# while returning one of the following result codes
ROBOT_DATA_OK = 0
CANNOT_CONNECT = 1
SOCKET_ERROR = 2
BAD_DATA = 3
def getRobotData(spec):
    # This function connects, via TCP/IP, to an ePuck robot running in V-REP

    # create a TCP/IP socket and connect it to the simulated robot
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect(server_address_port)
    except:
        return CANNOT_CONNECT
    sock.settimeout(0.1) # set connection timeout

    # pack a dummy packet that will provoke data in response
    #   this is, in effect, a 'ping' to ask for a data record
    strSend = struct.pack('fff',1.0,1.0,1.0)
    sock.sendall(strSend) # and send it to V-REP
    # wait for data back from V-REP
    #   expect a packet with 1 time, 2 joints, 2 motors, 3 line sensors, 8 irSensors
    #   all floats because V-REP
    #   total packet size = 16 x 4 = 64 bytes
    data = b''
    nch_rx = 64 # expect this many bytes from V-REP
    try:
        while len(data) < nch_rx:
            data += sock.recv(nch_rx)
    except:
        sock.close()
        return SOCKET_ERROR
    # unpack the received data
    if len(data) == nch_rx:
        # V-REP packs and unpacks in floats only so...
        vrx = struct.unpack('ffffffffffffffff',data)
        # now move data from vrx[] into spec[], while rounding the floats
        spec["botTime"] = [ round(vrx[0],2) ]
        spec["jntDemands"] = [ round(vrx[1],2), round(vrx[2],2) ]
        spec["jntAngles"] = [ round(vrx[3]*180.0/math.pi,2),
                              round(vrx[4]*180.0/math.pi,2) ]
        spec["lfSensors"] = [ round(vrx[5],2), round(vrx[6],2), round(vrx[7],2) ]
        for i in range(8):
            spec["irSensors"][i] = round(vrx[8+i],3)
        result = ROBOT_DATA_OK
    else:
        result = BAD_DATA
    sock.close()
    return result

The structure of this function is very simple: first create a socket and connect it to the robot, then make a dummy packet and send it to V-REP to request EBB data from the robot. Then, when a data packet arrives, unpack it into spec. The most complex part of the code is the data wrangling.

Would a real EBB collect data in this way? Well if the EBB is embedded in the robot then probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by – for instance – allowing the EBB code to access a shared memory space which contains the robot’s sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so getRobotData() would look very much like the example here.
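
For completeness, here is a rough sketch of what the robot-side end of such a TCP/IP link could look like in Python. This is not the code used in the demo (in the demo that side lives inside the V-REP simulation and is not shown here); it simply assumes the same protocol as getRobotData() above, replying to the 12-byte ping with 16 floats, and fills the record with made-up placeholder values.

# Hypothetical robot-side responder for the same protocol as getRobotData():
# wait for the 3-float 'ping', then reply with 16 floats (64 bytes).
# The values below are placeholders -- a real robot (or simulator script)
# would fill them from its own clock, sensors and motor commands.
import socket
import struct

HOST, PORT = "0.0.0.0", 20231   # example address only

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(12)                      # the 3-float 'ping' from the EBB client
            record = [
                1.23,                          # robot time
                0.5, 0.5,                      # joint demands (motor commands)
                1.57, 1.57,                    # joint angles, in radians
                0.1, 0.9, 0.1,                 # 3 line-following sensors
            ] + [0.05] * 8                     # 8 infra-red sensors
            conn.sendall(struct.pack('16f', *record))   # 16 floats = 64 bytes

# serve_once()   # would run on the robot (or simulator) side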

]]>
Chad Jenkins’ talk – That Ain’t Right: AI Mistakes and Black Lives (with video) https://robohub.org/chad-jenkins-talk-that-aint-right-ai-mistakes-and-black-lives-with-video/ Tue, 16 Mar 2021 15:41:28 +0000 https://robohub.org/chad-jenkins-talk-that-aint-right-ai-mistakes-and-black-lives-with-video/

In this technical talk, Chad Jenkins from the University of Michigan posed the following question: “who will pay the cost for the likely mistakes and potential misuse of AI systems?” As he states, “we are increasingly seeing how AI is having a pervasive impact on our lives, both for good and for bad. So, how do we ensure equal opportunity in science and technology?”

Abstract

It would be great to talk about the many compelling ideas, innovations, and new questions emerging in robotics research. I am fascinated by the ongoing NeRF Explosion, prospects for declarative robot programming by demonstration, and potential for a reemergence of probabilistic generative inference. However, there is a larger issue facing our intellectual enterprise: who will pay the cost for the likely mistakes and potential misuse of AI systems? My nation is poised to invest billions of dollars to remain the leader in artificial intelligence as well as quantum computing. This investment is critically needed to reinvigorate the science that will shape our future. In order to get the most from this investment, we have to create an environment that will produce innovations that are not just technical advancements but will also benefit and uplift everybody in our society. We are increasingly seeing how AI is having a pervasive impact on our lives, both for good and for bad. So, how do we ensure equal opportunity in science and technology? It starts with how we invest in scientific research. Currently, when we make investments, we only think about technological advancement. Equal opportunity is a non-priority and, at best, a secondary consideration. The fix is simple really — and something we can do almost immediately: we must start enforcing existing civil rights statutes for how government funds are distributed in support of scientific advancement. This will mostly affect universities, as the springwell that generates the intellectual foundation and workforce for other organizations that are leading the way in artificial intelligence. This talk will explore the causes of systemic inequality in AI, the impact of this inequity within the field of AI and across society today, and offer thoughts for the next wave of AI inference systems for robotics that could provide introspectability and accountability. Ideas explored build upon the BlackInComputing.org open letter and “Before we put $100 billion into AI…” opinion. Equal opportunity for anyone requires equal opportunity for everyone.

Biography

Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

Featuring Guest Panelists: Sarah Brown, Hadas Kress-Gazit, Aisha Walcott


The next technical talk will be delivered by Raia Hadsell from DeepMind, and it will take place on March 26 at 3pm EST. Keep up to date on this website.

]]>
Adam Bry and Hayk Martiros’s talk – Skydio Autonomy: Research in Robust Visual Navigation and Real-Time 3D Reconstruction (with video) https://robohub.org/adam-bry-and-hayk-martiross-talk-skydio-autonomy-research-in-robust-visual-navigation-and-real-time-3d-reconstruction-with-video/ Tue, 09 Mar 2021 09:20:55 +0000 https://robohub.org/adam-bry-and-hayk-martiross-talk-skydio-autonomy-research-in-robust-visual-navigation-and-real-time-3d-reconstruction-with-video/

In the last online technical talk, Adam Bry and Hayk Martiros from Skydio explained how their company tackles real-world issues when it comes to drone flying.

Abstract

Skydio is the leading US drone company and the world leader in autonomous flight. Our drones are used for everything from capturing amazing video, to inspecting bridges, to tracking progress on construction sites. At the core of our products is a vision-based autonomy system with seven years of development at Skydio, drawing on decades of academic research. This system pushes the state of the art in deep learning, geometric computer vision, motion planning, and control with a particular focus on real-world robustness. Drones encounter extreme visual scenarios not typically considered by academia nor encountered by cars, ground robots, or AR applications. They are commonly flown in scenes with few or no semantic priors and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, camera smudges, and fog. These challenges are daunting for classical vision – because photometric signals are simply not consistent – and for learning-based methods – because there is no ground truth for direct supervision of deep networks. In this talk we’ll take a detailed look at these issues and the algorithms we’ve developed to tackle them. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

Biography

Adam is co-founder and CEO at Skydio. He has two decades of experience with small UAS, starting as a national champion R/C airplane aerobatics pilot. As a grad student at MIT, he did award winning research that pioneered autonomous flight for drones, transferring much of what he learned as an R/C pilot into software that enables drones to fly themselves. Adam co-founded Google’s drone delivery project. He currently serves on the FAA’s Drone Advisory Committee. He holds a BS in Mechanical Engineering from Olin College and an SM in Aero/Astro from MIT. He has co-authored numerous technical papers and patents, and was also recognized on MIT’s TR35 list for young innovators.

Hayk was the first engineering hire at Skydio and he leads the autonomy team. He is an experienced roboticist who develops robust approaches to computer vision, deep learning, nonlinear optimization, and motion planning to bring intelligent robots into the mainstream. His team’s state of the art work in UAV visual localization, obstacle avoidance, and navigation of complex scenarios is at the core of every Skydio drone. He also has an interest in systems architecture and symbolic computation. His previous works include novel hexapedal robots, collaboration between robot arms, micro-robot factories, solar panel farms, and self-balancing motorcycles. Hayk is a graduate of Stanford University and Princeton University.

Featuring Guest Panelists: Davide Scaramuzza and Margaritha Chli


The next technical talk is happening this Friday the 12th of March at 3pm EST. Join Chad Jenkins from the University of Michigan in his talk ‘That Ain’t Right: AI Mistakes and Black Lives’ using this link.

]]>
How to build a robotics startup: the product idea https://robohub.org/how-to-build-a-robotics-startup-the-product-idea/ Tue, 23 Feb 2021 13:03:46 +0000 https://robohub.org/how-to-build-a-robotics-startup-the-product-idea/

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup where we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is now 5 years old and we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally arrived at the point where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

This episode is about deciding the product your startup will produce.

Related links

Subscribe to the podcast using any of the following methods

Or watch the video

The post 89. How to build a robotics startup: the product idea appeared first on The Construct.

]]>
Back to Robot Coding part 2: the ethical black box https://robohub.org/back-to-robot-coding-part-2-the-ethical-black-box/ Sat, 20 Feb 2021 08:32:00 +0000 https://robohub.org/back-to-robot-coding-part-2-the-ethical-black-box/

In the last few days I started some serious coding. The first for 20 years, in fact, when I built the software for the BRL LinuxBots. (The coding I did six months ago doesn’t really count as I was only writing or modifying small fragments of Python).

My coding project is to start building an ethical black box (EBB), or to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually the EBB is very simple, it is a data logger – the robot equivalent of an aircraft Flight Data Recorder, or an automotive Event Data Recorder. Nearly five years ago I made the case, with Marina Jirotka, that all robots (and AIs) should be fitted with an EBB as standard. Our argument is very simple: without an EBB, it will be more or less impossible to investigate robot accidents, or near-misses, and in a recent paper on Robot Accident Investigation we argue that with the increasing use of social robots accidents are inevitable and will need to be investigated.

Developing and demonstrating the EBB is a foundational part of our 5-year EPSRC funded project RoboTIPS, so it’s great to be doing some hands-on practical research. Something I’ve not done for a while.

Here is a block diagram showing the EBB and its relationship with a robot controller.

Box diagram of sensor, embedded artificial intelligence and actuation data being logged by the ethical black box

As shown here the data flows from the robot controller to the EBB are strictly one way. The EBB cannot and must not interfere with the operation of the robot. Coding an EBB for a particular robot would be straightforward, but I have set myself a tougher goal: a generic EBB module (i.e. library of functions) that would – with some inevitable customisation – apply to any robot. And I set myself the additional challenge of coding in Python, making use of skills learned from the excellent online Codecademy Python 2 course.

There are two elements of the EBB that must be customised for a particular robot. The first is the data structure used to fetch and save the sensor, actuator and decision data in the diagram above. Here is an example from my first stab at an EBB framework, using the Python dictionary structure:

# This dictionary structure serves as both
# 1 specification of the type of robot, and each data field that
#   will be logged for this robot, &
# 2 the data structure we use to deliver live data to the EBB

# for this model let us create a minimal spec for an ePuck robot
epuckSpec = {
    # the first field *always* identifies the type of robot plus
    # version and serial nos
    "robot" : ["ePuck", "v1", "SN123456"],
    # the remaining fields are data we will log,
    # starting with the motors
    # ..of which the ePuck has just 2: left and right
    "motors" : [0,0],
    # then 8 infra red sensors
    "irSensors" : [0,0,0,0,0,0,0,0],
    # ..note the ePuck has more sensors: accelerometer, camera etc,
    # but this will do for now
    # ePuck battery level
    "batteryLevel" : [0],
    # then 1 decision code – i.e. what the robot is doing now
    # what these codes mean will be specific to both the robot
    # and the application
    "decisionCode" : [0]
    }

Whether a dictionary is the best way of doing this I’m not 100% sure, being new to Python (any thoughts from experienced Pythonistas welcome).

The idea is that all robot EBBs will need to define a data structure like this. All must contain the first field “robot”, which names the robot’s type, its version number and serial number. Then the following fields must use keywords from a standard menu, as needed. As shown in this example each keyword is followed by a list of placeholder values – in which the number of values in the list reflects the specification of the actual robot. The ePuck robot, for instance, has 2 motors and 8 infra-red sensors.

The final field in the data structure is "decisionCode". The values stored in this field would be both robot- and application-specific; for the ePuck robot these might be 1 = 'stop', 2 = 'turn left', 3 = 'turn right' and so on. We could add another value for a parameter, so the robot might decide, for instance, to turn left 40 degrees, giving "decisionCode" : [2,40]. We could also add a 'reason' field, which would save the high-level reason for the decision, as in "decisionCode" : [2,40,"avoid obstacle right"], noting that the reason field could be a string as shown here, or a numeric code.

As I hope I have shown here, the design of this data structure and its fields is at the heart of the EBB.
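
Purely as an illustration of that convention (a hypothetical helper, not part of the EBB library itself), a spec could be sanity-checked like this:

# Hypothetical helper, just to illustrate the convention described above
def isValidSpec(spec):
    # every spec must contain the "robot" field: type, version and serial no
    if "robot" not in spec or len(spec["robot"]) != 3:
        return False
    # every other field must be a list of placeholder values
    return all(isinstance(values, list) for values in spec.values())

# e.g. isValidSpec(epuckSpec) returns True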

The second element of the EBB library that must be written for the particular robot and application is the function which fetches data from the robot:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

How this function is implemented will vary hugely between robots and robot applications. For our Linux-enhanced ePucks with WiFi connections this is likely to be via a TCP/IP client-server setup, with the server running on the robot and sending data following a request from the client call getRobotData(epuckSpec). For simpler setups, in which the EBB module is folded into the robot controller, accessing the required data within getRobotData() should be very straightforward.
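
To make this concrete, here is a minimal sketch of what a TCP/IP client version of getRobotData() might look like. It assumes the robot-side server replies to a plain "GET" request with one line of JSON whose keys match the spec; the host address, port and wire format below are illustrative assumptions only, not the actual RoboTIPS implementation.

# A minimal sketch, not the real implementation: assumes the robot runs a
# server that answers a "GET" request with one line of JSON whose keys
# match the spec (host, port and wire format are illustrative only)
import json
import socket

ROBOT_HOST = "192.168.1.20"   # hypothetical address of the ePuck's server
ROBOT_PORT = 5000             # hypothetical port

def getRobotData(spec):
    sock = socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=1.0)
    try:
        sock.sendall(b"GET\n")
        reply = sock.makefile().readline()
    finally:
        sock.close()
    live = json.loads(reply)
    # copy across only the fields declared in the spec
    for key in spec:
        if key != "robot" and key in live:
            spec[key] = live[key]
    return spec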

The generic part of the EBB module will define the class EBB, with methods for both initialising the EBB and saving a new data record to the EBB. I will cover that in another blog post.
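
As a rough sketch only (to be superseded by the full version in that post), the class might look something like this, assuming a simple in-memory circular log that is flushed to a JSON file:

# A rough sketch, not the final design: a fixed-size in-memory log,
# flushed to a JSON file on request
import json
import time

class EBB:
    def __init__(self, spec, maxRecords=1000, filename="ebb_log.json"):
        self.spec = spec                # robot type, version, serial no and fields
        self.maxRecords = maxRecords    # cap on the number of stored records
        self.filename = filename
        self.records = []

    def saveRecord(self, data):
        # timestamp each record and append a copy of the current data
        record = {"time" : time.time()}
        record.update(data)
        self.records.append(record)
        # like a flight recorder, discard the oldest record when full
        if len(self.records) > self.maxRecords:
            self.records.pop(0)

    def flush(self):
        # write the whole log to disk as JSON
        with open(self.filename, "w") as f:
            json.dump(self.records, f)

In use, the robot's control loop would then call something like ebb.saveRecord(getRobotData(epuckSpec)) once per cycle.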

Before closing, let me add that it is our intention to publish the specification of the EBB, together with the model EBB code, as open source once it has been fully tested.

Any comments or feedback would be much appreciated.


Link to the original post here.

]]>
RoMi-H: Bringing robot traffic control to healthcare https://robohub.org/romi-h-bringing-robot-traffic-control-to-healthcare/ Mon, 15 Feb 2021 08:51:27 +0000 https://robohub.org/romi-h-bringing-robot-traffic-control-to-healthcare/

Imagine for a moment that a road is used only for a single car and driver. Everything is smooth and wonderful. Then you wake up from that utopian dream and remember that our road networks have multiple cars of varying sizes, from different manufacturers, each with a driver with unique behaviors behind the wheel. We quickly realize that traffic conventions and rules are in place to avoid complete and utter chaos. We believe that, with the increasing number of robotic use cases in the public domain, a similar set of conventions needs to be established, and we propose that RoMi-H, an open-source robot and infrastructure framework that simplifies cross-fleet robot collaboration, is the way to achieve this coming reality.

Even before the onset of COVID-19, the number of robots and automation technologies introduced and tested in the healthcare industry has been skyrocketing. Service robots perform an ever-increasing and diverse set of tasks, taking on smaller and more sensitive deliveries in some cases and relying more heavily on shared infrastructure such as elevators (lifts), doors, and passageways. No single robotics or automation provider can supply the breadth of solutions required in a modern healthcare facility, and no facility can afford to operate siloed systems requiring dedicated infrastructure and operating unique user interfaces. Therein lies the challenge.

GIF showing RoMi-H moving around a hospital

RoMi-H allows robots from different vendors to interact with each other as well as with physical assets in the hospital, such as elevators. The robots can even avoid gurneys and people.

First announced in July 2018, Robotic Middleware for Healthcare (RoMi-H) is a unique open-source system built on ROS 2 and simulated using the Gazebo simulator. It allows for uniform communication and monitoring across robot platforms, sensors, and enterprise information systems. A brief explanation of this initiative can be found in WIRED Magazine: As Robots Fill the Workplace, They Must Learn to Get Along. In tomorrow’s reality, interoperability must be front and center for every developer, manufacturer, systems integrator, and end-user.

As we have written before:

We need food-delivery robots from one vendor to communicate with drug-delivery robots from another vendor. We need a unified approach to command and control for all the robots in a facility. We need a reliable way to develop and test multi-vendor systems in software simulation prior to deployment. And for it to succeed we need this critical interoperability infrastructure to be open source.

Under the leadership of Singapore's Centre for Healthcare Assistive and Robotics Technology (CHART) and with collaborators such as IHiS, Hope Technik, GovTech and other solution providers, Open Robotics has been working since 2018 to develop an open-source software solution. Its goal is to realize the potential of a vendor-agnostic and interoperable communication system for heterogeneous robots, sensors, and information systems in the healthcare space. We are encouraging contributions to the open-source codebase to accelerate the development of a robust and sustainable system.

RViz shows each robot's planned path as well as keep-out areas for other robots with RoMi-H


In order to understand the underlying mechanics of RoMi-H, we encourage you to take a look at Programming Multiple Robots with ROS 2. It is being continuously updated and will provide you with a thorough explanation of ROS 2 — upon which RoMi-H is built — and the core Robotics Middleware Framework (RMF) that serves to power RoMi-H. The book also features a tutorial on how one might build a web application that can interface with RoMi-H to create useful applications for robot operators or user-facing tools for the robotics industry.

RoMi-H is able to apply the same software across the different robotic systems while ROS 2 manages the communication and data routing from machine to machine, allowing for real-time, dependable and high-performance data exchange via a publish-subscribe pattern. Publishers group their messages into different classes and subscribers receive information from the classes of messages they have indicated an interest in. This allows RMF to provide a common platform for integrating heterogeneous robotic systems.
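
To give a flavour of that publish-subscribe pattern, here is a minimal ROS 2 (rclpy) sketch with one talker and one listener. The node names, topic and message content below are placeholders chosen for illustration, not actual RoMi-H or RMF interfaces.

# A minimal publish-subscribe sketch in ROS 2 (rclpy); the node names,
# topic and message content are placeholders, not real RoMi-H interfaces
import rclpy
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from std_msgs.msg import String

class FleetStatusTalker(Node):
    def __init__(self):
        super().__init__('fleet_status_talker')
        # publishers group their messages under a named topic
        self.pub = self.create_publisher(String, 'fleet_status', 10)
        self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'robot_1: delivering to ward 3'
        self.pub.publish(msg)

class FleetStatusListener(Node):
    def __init__(self):
        super().__init__('fleet_status_listener')
        # subscribers receive only the topics they declare an interest in
        self.create_subscription(String, 'fleet_status', self.on_status, 10)

    def on_status(self, msg):
        self.get_logger().info('heard: %s' % msg.data)

def main():
    rclpy.init()
    talker, listener = FleetStatusTalker(), FleetStatusListener()
    executor = SingleThreadedExecutor()
    executor.add_node(talker)
    executor.add_node(listener)
    try:
        executor.spin()
    finally:
        rclpy.shutdown()

if __name__ == '__main__':
    main()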

We see the RoMi-H project as a significant step, encouraging an open and integrated approach to robotics development and digitising healthcare. We are looking forward to receiving feedback and contributions from interested parties.

RMF can take a simple map and translate it into a Gazebo simulation. The entire system is powered by ROS 2.


Learn More

A public webinar that introduced RMF and featured a live demonstration took place on 18 August 2020 in the CHART Lab. The presentations and recordings can be viewed here. At the same time, if you are interested in finding out more and viewing the source code, do check out the ROS 2 book and the following repositories:

ROS 2 Book:

Github Repositories:

We would like to acknowledge the Singapore government for their vision and support to start this ambitious research and development project, “Development of Standardised Robotics Middleware Framework – RMF detailed design and common services, large-scale virtual test farm infrastructure, and simulation modelling”. The project is supported by the Ministry of Health (MOH) and the National Robotics Program (NRP).

Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the NR2PO, MOH or other parties.

]]>
James Bruton focus series #3: Virtual Reality combat with a real robot https://robohub.org/james-bruton-focus-series-3-virtual-reality-combat-with-a-real-robot/ Sat, 26 Dec 2020 09:12:44 +0000 https://robohub.org/james-bruton-focus-series-3-virtual-reality-combat-with-a-real-robot/

It's Saturday, it's time for another post in the James Bruton focus series, and it's Boxing Day in the UK and most Commonwealth countries. Even though this holiday has nothing to do with boxing, I didn't want to miss the opportunity to take it literally and bring you a project in which James teamed up with final-year Computer Games Technology students at Portsmouth University to build a robot that fights a human in a Virtual Reality (VR) game.

For this project, the students Michael (Coding & VR Hardware), Stephen (Character Design & Animation), George (Environment Art) and Boyan (Character Design & Animation) designed a VR combat game in which you fight another character. James’ addition was to design a real robot that fights the player, so that when they get hit in the game, they also get hit in real life by the robot. The robot and the player’s costume are tracked using Vive trackers so the VR system knows where to position each of them in the 3D virtual environment. You can see some artwork and more details about the project here and here. Without further ado, here’s James’ video:

Happy holidays!

]]>
Carlotta Berry’s talk – Robotics Education to Robotics Research (with video) https://robohub.org/carlotta-berrys-talk-robotics-education-to-robotics-research-with-video/ Tue, 22 Dec 2020 08:47:06 +0000 https://robohub.org/carlotta-berrys-talk-robotics-education-to-robotics-research-with-video/

A few days ago, Robotics Today hosted an online seminar with Professor Carlotta Berry from the Rose-Hulman Institute of Technology. In her talk, Carlotta presented the multidisciplinary benefits of robotics in engineering education. It is worth highlighting that Carlotta Berry is one of the 30 women in robotics you need to know about in 2020.

Abstract

This presentation summarizes the multidisciplinary benefits of robotics in engineering education. I will describe how it is used at a primarily undergraduate institution to encourage robotics education and research. There will be a review of how robotics is used in several courses to illustrate engineering design concepts as well as controls, artificial intelligence, human-robot interaction, and software development. This will be a multimedia presentation of student projects in freshman design, mobile robotics, independent research and graduate theses.

Biography

Carlotta A. Berry is a Professor in the Department of Electrical and Computer Engineering at Rose-Hulman Institute of Technology. She has a bachelor’s degree in mathematics from Spelman College, bachelor’s degree in electrical engineering from Georgia Institute of Technology, master’s in electrical engineering from Wayne State University, and PhD from Vanderbilt University. She is one of a team of faculty in ECE, ME and CSSE at Rose-Hulman to create and direct the first multidisciplinary minor in robotics. She is the Co-Director of the NSF S-STEM Rose Building Undergraduate Diversity (ROSE-BUD) Program and advisor for the National Society of Black Engineers. She was previously the President of the Technical Editor Board for the ASEE Computers in Education Journal. Dr. Berry has been selected as one of 30 Women in Robotics You Need to Know About 2020 by robohub.org, Reinvented Magazine Interview of the Year Award on Purpose and Passion, Women and Hi Tech Leading Light Award You Inspire Me and Insight Into Diversity Inspiring Women in STEM. She has taught undergraduate courses in Human-Robot Interaction, Mobile Robotics, circuits, controls, signals and system, freshman and senior design. Her research interests are in robotics education, interface design, human-robot interaction, and increasing underrepresented populations in STEM fields. She has a special passion for diversifying the engineering profession by encouraging more women and underrepresented minorities to pursue undergraduate and graduate degrees. She feels that the profession should reflect the world that we live in in order to solve the unique problems that we face.

You can also view past seminars on the Robotics Today YouTube Channel.

]]>
Nikolas Martelaro’s talk – Remote user research for human-robot interaction – with video https://robohub.org/nikolas-martelaros-talk-on-11-december-remote-user-research-for-human-robot-interaction/ Tue, 15 Dec 2020 09:05:08 +0000 https://robohub.org/nikolas-martelaros-talk-on-11-december-remote-user-research-for-human-robot-interaction/ On Friday the 11th of December, Nikolas Martelaro (Assistant Professor at Carnegie Mellon’s Human-Computer Interaction Institute) gave an online seminar on ways robot design teams can do remote user research now (in these COVID-19 times) and in the future. If you missed it, you can now watch the recorded livestream.

About the speaker

Nikolas Martelaro

Nikolas Martelaro is an Assistant Professor at Carnegie Mellon's Human-Computer Interaction Institute. Martelaro's lab focuses on augmenting designers' capabilities through the use of new technology and design methods. Martelaro's interest in developing new ways to support designers stems from his interest in creating interactive and intelligent products. Martelaro blends a background in product design methods, interaction design, human-robot interaction, and mechatronic engineering to build tools and methods that allow designers to understand people better and to create more human-centered products. Before moving to the HCII, Nikolas Martelaro was a researcher in the Digital Experiences group at the Accenture Technology Labs. Martelaro graduated with a Ph.D. in Mechanical Engineering from Stanford's Center for Design Research, where he was co-advised by Larry Leifer and Wendy Ju.

Abstract

COVID-19 has led to decreases in in-person user research. While designers who work on software can shift to using all-digital remote research methods, people who work on hardware systems, including robots, are left with limited options. In this talk, I will discuss some ways that robot design teams can do remote user research now and in the future. I will draw on past work in human-computer interaction as well as my own work in creating systems to allow remote design teams to conduct remote observation and interaction prototyping. While things can be challenging for the user research team today, I believe that with some creative new methods, we can expand our abilities to do user research for robotics from anywhere in the world.

Papers covered during the talk

]]>
James Bruton focus series #2: Barcode scanner guitar synths https://robohub.org/james-bruton-focus-series-2-barcode-scanner-guitar-synths/ Sat, 12 Dec 2020 10:09:35 +0000 https://robohub.org/james-bruton-focus-series-2-barcode-scanner-guitar-synths/

James Bruton playing his barcode synths

Like every other Saturday, I'm bringing you another cool open-source project from James Bruton. Today, how about becoming an experimental musician with your own barcode scanner synthesizer?

I introduced James Bruton in the first post of this focus series on him, where I showed you the Boston Dynamics-inspired open robot dog projects that consolidated him as one of the top makers on YouTube. Being a sort of musician myself, I was drawn to the barcode synth project for this second post, from among the countless videos he's got on his channel.

To be more specific, the barcode synth consists of two projects. A bit more than a year ago, James showed how to build a four-neck guitar synth with the frets (the place where you put your fingers to play a note on the guitar) being barcodes instead of strings. To play this guitar, you only need a barcode reader connected to an Arduino that converts the data read from the barcodes into a number that represents a MIDI note – which is a digital representation of a musical note based on the MIDI standard. You can then plug the Arduino into a synth or your computer (if you love virtual instruments as much as I do!) to transform the MIDI output into actual sound. Extra features of this guitar included pitch bending and octave shifting buttons. You can access the open-source code of this type of guitar here, and enjoy James’ explanation (and performance) in the following video:

A couple of months ago, James made an improved version of the previous guitar synth. Instead of using the number given by the barcode, for this improved synth he hacked the barcode reader to interpret the barcodes as images so that the output is the raw square wave that it sees. With the help of a Teensy microcontroller to do the digital signal processing and a Raspberry Pi to display barcode images on a screen fitted to a 3D-printed guitar, he could produce a richer range of sounds compared to the previous version. If you want to build your own barcode synth, check out the open-source files and his video (you'll be impressed to find out what a zebra sounds like!):

Make tech, make music, and stay tuned!

]]>
James Bruton focus series #1: openDog, Mini Robot Dog & openDog V2 https://robohub.org/james-bruton-focus-series-1-opendog-mini-robot-dog-opendog-v2/ Sat, 28 Nov 2020 10:35:28 +0000 https://robohub.org/james-bruton-focus-series-1-opendog-mini-robot-dog-opendog-v2/

James Bruton with openDog V2

What if you could ride your own giant LEGO electric skateboard, make a synthesizer that you can play with a barcode reader, or build a strong robot dog based on the Boston Dynamics dog robot? Today sees the start of a new series of videos that focuses on James Bruton’s open source robot projects.

James Bruton is a former toy designer, current YouTube maker and general robotics, electrical and mechanical engineer. He has a reputation for building robot dogs and Iron Man-inspired cosplays. He uses 3D printing, CNC and sometimes welding to build all sorts of robotics-related creations. Highlights include building Mark Rober's auto-strike bowling ball and working with Colin Furze to build a life-sized Iron Man Hulkbuster for an official eBay and Marvel promo. He also built a life-sized Bumblebee Transformer for Paramount to promote the release of the Bumblebee movie.

I discovered James’ impressive work in this episode of Ricardo Tellez’s ROS Developers Podcast on The Construct, which I highly recommend. Whether you enjoy getting your hands dirty with CAD files, 3D-printed parts, arduinos, motors and code, or you like learning about the full research & development (R&D) process of a robotics project, you will have loads of hours of fun following this series.

Today I brought one of James’ coolest and most successful open source projects: openDog and its different versions. In James’ own words, “if you want your very own four-legged friend to play fetch with and go on long walks then this is the perfect project for you.” You can access all the CAD files and code here. And without further ado, here’s the full YouTube playlist of the first version of openDog:

James also released another series of videos developing an affordable version of openDog: Mini Robot Dog. This robot is half the size of openDog, and its mechanical components and 3D-printed parts are much cheaper than the former robot's, without sacrificing compliance. You can see the full development in the playlist below, and access the open source files of version 1 and version 2.

Based on the insight gained through the R&D of openDog, Mini Robot Dog and these test dogs, James built the ultimate robot dog: openDog V2. For this improved version of openDog, he used brushless motors which can be back-driven to increase compliance. By adding an Inertial Measurement Unit, he improved the balance of the robot. CAD files and code are available here. If you want to find out whether the robot is able to walk, check out the openDog V2 video series:

If you like James Bruton’s project, you can check out his website for more resources, updates and support options. See you in the next post of our focus series!

]]>
Davide Scaramuzza’s seminar – Autonomous, agile micro drones: Perception, learning, and control (with video) https://robohub.org/davide-scaramuzzas-seminar-on-13-november-autonomous-agile-micro-drones-perception-learning-and-control/ Tue, 24 Nov 2020 08:03:49 +0000 https://robohub.org/davide-scaramuzzas-seminar-on-13-november-autonomous-agile-micro-drones-perception-learning-and-control/

A few days ago, Robotics Today hosted an online seminar with Professor Davide Scaramuzza from the University of Zurich. The seminar was recorded, so you can watch it now in case you missed it.

“Robotics Today – A series of technical talks” is a virtual robotics seminar series. The goal of the series is to bring the robotics community together during these challenging times. The seminars are open to the public. The format of the seminar consists of a technical talk live captioned and streamed via Web and Twitter, followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students that will moderate audience questions.

Abstract

Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. High speed is particularly important: since drone battery life is usually limited to 20-30 minutes, drones need to fly faster to cover longer distances. However, to do so, they need faster sensors and algorithms. Human pilots take years to learn the skills to navigate drones. What does it take to make drones navigate as good or even better than human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in terms of perception, planning, learning, and control. In this talk, I will show how the combination of both model-based and machine learning methods united with the power of new, low-latency sensors, such as event cameras, can allow drones to achieve unprecedented speed and robustness by relying solely on onboard computing.

Biography

Davide Scaramuzza (Italian) is a Professor of Robotics and Perception at both departments of Informatics (University of Zurich) and Neuroinformatics (joint between the University of Zurich and ETH Zurich), where he directs the Robotics and Perception Group. His research lies at the intersection of robotics, computer vision, and machine learning, using standard cameras and event cameras, and aims to enable autonomous, agile navigation of micro drones in search and rescue applications. After a Ph.D. at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis), from 2009 to 2012, he led the European project sFly, which introduced the PX4 autopilot and pioneered visual-SLAM-based autonomous navigation of micro drones in GPS-denied environments. From 2015 to 2018, he was part of the DARPA FLA program (Fast Lightweight Autonomy) to research autonomous, agile navigation of micro drones in GPS-denied environments. In 2018, his team won the IROS 2018 Autonomous Drone Race, and in 2019 it ranked second in the AlphaPilot Drone Racing world championship. For his research contributions to autonomous, vision-based, drone navigation and event cameras, he won prestigious awards, such as a European Research Council (ERC) Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, an SNSF-ERC Starting Grant, a Google Research Award, the KUKA Innovation Award, two Qualcomm Innovation Fellowships, the European Young Research Award, the Misha Mahowald Neuromorphic Engineering Award, and several paper awards. He co-authored the book “Introduction to Autonomous Mobile Robots” (published by MIT Press; 10,000 copies sold) and more than 100 papers on robotics and perception published in top-ranked journals (Science Robotics, TRO, T-PAMI, IJCV, IJRR) and conferences (RSS, ICRA, CVPR, ICCV, CORL, NeurIPS). He has served as a consultant for the United Nations’ International Atomic Energy Agency’s Fukushima Action Plan on Nuclear Safety and several drones and computer-vision companies, to which he has also transferred research results. In 2015, he cofounded Zurich-Eye, today Facebook Zurich, which developed the visual-inertial SLAM system running in Oculus Quest VR headsets. He was also the strategic advisor of Dacuda, today Magic Leap Zurich. In 2020, he cofounded SUIND, which develops camera-based safety solutions for commercial drones. Many aspects of his research have been prominently featured in wider media, such as The New York Times, BBC News, Discovery Channel, La Repubblica, Neue Zurcher Zeitung, and also in technology-focused media, such as IEEE Spectrum, MIT Technology Review, Tech Crunch, Wired, The Verge.

You can also view past seminars on the Robotics Today YouTube Channel.

]]>
Natalia Calvo’s talk – How children build a trust model of a social robot in the first encounter? – with video https://robohub.org/natalia-calvos-talk-on-13-november-how-children-build-a-trust-model-of-a-social-robot-in-the-first-encounter/ Tue, 24 Nov 2020 07:56:46 +0000 https://robohub.org/natalia-calvos-talk-on-13-november-how-children-build-a-trust-model-of-a-social-robot-in-the-first-encounter/ On Friday the 13th of November, Talking Robotics hosted an online talk with PhD student Natalia Calvo from Uppsala University in Sweden. Now you can watch the recorded seminar.

Natalia Calvo with Pepper robot

Talking Robotics is a series of virtual seminars about Robotics and its interaction with other relevant fields, such as Artificial Intelligence, Machine Learning, Design Research, and Human-Robot Interaction, among others. The aim is to promote reflections, dialogues, and a place to network. Talking Robotics happens virtually and bi-weekly, i.e., every other week, allocating 30 min for the presentation and 30 min for Q&A and networking. Sessions have a roundtable format where everyone is welcome to share ideas. Recordings and materials are shared on this website.

Abstract

The talk discussed several approaches from the literature to assessing children's trust towards robots. Calvo argues that the perceived first impressions of a social robot's likability and competence are predictors of children's judgments of trust in social robots.

Biography

Natalia Calvo is a Ph.D. student at Uppsala University in Sweden. She obtained her master’s degree in Robotics Engineering from the University of Genoa in Italy, and her bachelor’s degree in Mechatronics Engineering from the Nueva Granada Military University in Colombia. She is part of the EU ITN ANIMATAS project. Her research focuses on modelling trust in child-robot educational interactions. Natalia is interested in implementing machine learning models for the understanding of children’s perception of trust in robots. You can read more details about the speaker on this website.

]]>
Online events to look out for on Ada Lovelace Day 2020 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2020/ Mon, 12 Oct 2020 14:57:51 +0000 https://robohub.org/online-events-to-look-out-for-on-ada-lovelace-day-2020/ Tomorrow the world celebrates Ada Lovelace Day to honor the achievements of women in science, technology, engineering and maths. We’ve specially chosen a couple of online events featuring amazing women in robotics and technology. You can enjoy their talks in the comfort of your own home.

Ada Lovelace Day: The Near Future (panel discussion)

Ada Lovelace Day 2020: The Near Future (panel discussion)

Organized by Ada Lovelace Day, this panel session will be joined by Dr Beth Singler (Junior Research Fellow in Artificial Intelligence at the University of Cambridge), Prof Praminda Caleb-Solly (Professor of Assistive Robotics and Intelligent Health Technologies at the University of the West of England), Dr Anat Caspi (director of the Taskar Center for Accessible Technology, University of Washington) and Dr Chanuki Seresinhe (visiting data science researcher at The Alan Turing Institute). The event will take place at 4pm (UTC). You can register here.


Ada Lovelace Day 2020 Celebration of Women in Robotics


Hosted by UC CITRIS CPAR and Silicon Valley Robotics, this event will be joined by Dr Ayanna Howard (Chair of Interactive Computing, Georgia Tech), Dr Carlotta Berry (Professor of Electrical and Computer Engineering, Rose-Hulman Institute of Technology), Angelique Taylor (PhD Candidate at the Healthcare Robotics Lab in UCSD and Facebook Research Intern), Dr Ariel Anders (First Technical Hire at Robust.ai) and Jasmine Lawrence (Product Manager at X, the Moonshot Factory). It will take place at 1am (UTC). You can register here.


Tomorrow we will also publish our 2020 list of women in robotics you need to know about. Stay tuned!

]]>
RSS 2020 – all the papers and videos! https://robohub.org/rss-2020-all-the-papers-and-videos/ Sat, 18 Jul 2020 21:11:07 +0000 https://robohub.org/rss-2020-all-the-papers-and-videos/

RSS 2020 was held virtually this year, from the RSS Pioneers Workshop on July 11 to the Paper Awards and Farewell on July 16. Many talks are now available online, including 103 accepted papers, each presented as an online Spotlight Talk on the RSS YouTube channel, and of course the plenaries and much of the workshop content as well. We've tried to link here to all of the goodness from RSS 2020.

The RSS Keynote on July 15 was delivered by Josh Tenenbaum, Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, CSAIL. It was titled "It's all in your head: Intuitive physics, planning, and problem-solving in brains, minds and machines".

Abstract: I will overview what we know about the human mind’s internal models of the physical world, including how these models arise over evolution and developmental learning, how they are implemented in neural circuitry, and how they are used to support planning and rapid trial-and-error problem-solving in tool use and other physical reasoning tasks. I will also discuss prospects for building more human-like physical common sense in robots and other AI systems.

RSS 2020 introduces the new RSS Test of Time Award, given to the highest-impact papers published at RSS (and potentially journal versions thereof) at least ten years ago. Impact may mean that a paper changed how we think about problems or about robotic design, that it brought fully new problems to the attention of the community, or that it pioneered a new approach to robotic design or problem solving. With this award, RSS generally wants to foster the discussion of the long-term development of our field. The award is an opportunity to reflect on and discuss the past, which is essential to make progress in the future. The awardee's keynote is therefore complemented with a Test of Time Panel session devoted to this important discussion.

This year's Test of Time Award goes to a pair of papers for pioneering an information smoothing approach to the SLAM problem via square root factorization, its interpretation as a graphical model, and the widely-used GTSAM free software repository.

Abstract: Many estimation, planning and optimal control problems in robotics have an optimization problem at their core. In most of these optimization problems, the objective function is composed of many different factors or terms that are local in nature, i.e., they only depend on a small subset of the variables. 10 years ago the Square Root SAM papers identified factor graphs as a particularly insightful way of modeling this locality structure. Since then we have realized that factor graphs can represent a wide variety of problems across robotics, expose opportunities to improve computational performance, and are beneficial in designing and thinking about how to model a problem, even aside from performance considerations. Many of these principles have been embodied in our evolving open source package GTSAM, which puts factor graphs front and central, and which has been used with great success in a number of state of the art robotics applications. We will also discuss where factor graphs, in our opinion, can break in

The RSS 2020 Plenary Sessions highlighted Early Career Awards for researchers Byron Boots, Luca Carlone and Jeannette Bohg. Byron Boots is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University.

Title: Perspectives on Machine Learning for Robotics

Abstract: Recent advances in machine learning are leading to new tools for designing intelligent robots: functions relied on to govern a robot's behavior can be learned from a robot's interaction with its environment rather than hand-designed by an engineer. Many machine learning methods assume little prior knowledge and are extremely flexible: they can model almost anything! But this flexibility comes at a cost. The same algorithms are often notoriously data-hungry and computationally expensive, two problems that can be debilitating for robotics. In this talk I'll discuss how machine learning can be combined with prior knowledge to build effective solutions to robotics problems. I'll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I'll focus on the use of simulation and expert advice to augment learning. I'll discuss how imperfect models can be leveraged to rapidly update simple control policies and how imitation can accelerate reinforcement learning. I will also show how we have applied some of these ideas to an autonomous off-road racing task that requires impressive sensing, speed, and agility to complete.

Title: The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding

Abstract: Robot perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception.

This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems.

The second effort targets high-level understanding. While humans are able to quickly grasp both geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, that extend the traditional notions of mapping and SLAM, and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments over an extended period of time.

Title: A Tale of Success and Failure in Robotics Grasping and Manipulation

Abstract: In 2007, I was a naïve grad student and started to work on vision-based robotic grasping. I had no prior background in manipulation, kinematics, dynamics or control. Yet, I dove into the field by re-implementing and improving a learning-based method. While making some contributions, the proposed method also had many limitations partly due to the way the problem was framed. Looking back at the entire journey until today, I find that I have learned the most about robotic grasping and manipulation from observing failures and limitations of existing approaches – including my own. In this talk, I want to highlight how these failures and limitations have shaped my view on what may be some of the underlying principles of autonomous robotic manipulation. I will emphasise three points. First, perception and prediction will always be noisy, partial and sometimes just plain wrong. Therefore, one focus of my research is on methods that support decision-making under uncertainty due to noisy sensing, inaccurate models and hard-to-predict dynamics. To this end, I will present a robotic system that demonstrates the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. I will also talk about work that funnels uncertainty by enabling robots to exploit contact constraints during manipulation.

Second, a robot has many more sensors than just cameras and they all provide complementary information. Therefore, one focus of my research is on methods that can exploit multimodal information such as vision and touch for contact-rich manipulation. It is non-trivial to manually design a manipulation controller that combines modalities with very different characteristics. I will present work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. And third, choosing the right robot action representation has a large influence on the success of a manipulation policy, controller or planner. While I believed for many years that inferring contact points for robotic grasping was futile, I will present work that convinced me otherwise. Specifically, this work uses contact points as an abstraction that can be re-used by a diverse set of robot hands.

Inclusion@RSS is excited to host a panel “On the Future of Robotics” to discuss how we can have an inclusive robotics community and its impact on the future of the field. Moderator: Matt Johnson-Roberson (University of Michigan) with Panelists: Tom Williams (Colorado School of Mines), Eduard Fosch-Villaronga (Leiden University), Lydia Tapia (University of New Mexico), Chris Macnab (University of Calgary), Adam Poulsen (Charles Sturt University), Chad Jenkins (University of Michigan), Kendall Queen (University of Pennsylvania), Naveen Kuppuswamy (Toyota Research Institute).

The RSS community is committed to increasing the participation of groups traditionally underrepresented in robotics (including but not limited to: women, LGBTQ+, underrepresented minorities, and people with disabilities), especially people early in their studies and career. Such efforts are crucial for increasing research capacity, creativity, and broadening the impact of robotics research.

The RSS Pioneers Workshop for senior Ph.D. students and postdocs was modelled on the highly successful HRI Pioneers Workshop and took place on Saturday, July 11. The goal of RSS Pioneers is to bring together a cohort of the world's top early career researchers to foster creativity and collaborations surrounding challenges in all areas of robotics, as well as to help young researchers navigate their next career stages. The workshop included a mix of research and career talks from senior scholars in the field from both academia and industry, research presentations from attendees and networking activities, with a poster session where Pioneers got a chance to externally showcase their research.

Content from the workshops on July 12 and 13 may be available through the individual workshop websites.

RSS 2020 Accepted Workshops

Sunday, July 12

WS1-2 – Reacting to contact: Enabling transparent interactions through intelligent sensing and actuation. Organizers: Ankit Bhatia, Aaron M. Johnson, Matthew T. Mason. [Session]
WS1-3 – Certifiable Robot Perception: from Global Optimization to Safer Robots. Organizers: Luca Carlone, Tat-Jun Chin, Anders Eriksson, Heng Yang. [Session]
WS1-4 – Advancing the State of Machine Learning for Manufacturing Robotics. Organizers: Elena Messina, Holly Yanco, Megan Zimmerman, Craig Schlenoff, Dragos Margineantu. [Session]
WS1-5 – Advances and Challenges in Imitation Learning for Robotics. Organizers: Scott Niekum, Akanksha Saran, Yuchen Cui, Nick Walker, Andreea Bobu, Ajay Mandlekar, Danfei Xu. [Session]
WS1-6 – 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics. Organizers: Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian. [Session]
WS1-7 – ROS Carpentry Workshop. Organizers: Katherine Scott, Mabel Zhang, Camilo Buscaron, Steve Macenski. N/A
WS1-8 – Perception and Control for Fast and Agile Super-Vehicles II. Organizers: Varun Murali, Phillip Foehn, Davide Scaramuzza, Sertac Karaman. [Session]
WS1-9 – Robotics Retrospectives. Organizers: Jeannette Bohg, Franziska Meier, Arunkumar Byravan, Akshara Rai. [Session]
WS1-10 – Heterogeneous Multi-Robot Task Allocation and Coordination. Organizers: Harish Ravichandar, Ragesh Ramachandran, Sonia Chernova, Seth Hutchinson, Gaurav Sukhatme, Vijay Kumar. [Session]
WS1-11 – Learning (in) Task and Motion Planning. Organizers: Danny Driess, Neil T. Dantam, Lydia E. Kavraki, Marc Toussaint. [Session]
WS1-12 – Performing Arts Robots & Technologies, Integrated (PARTI). Organizers: Naomi Fitter, Heather Knight, Amy LaViers. [Session]
WS1-13 – Robots in the Wild: Challenges in Deploying Robust Autonomy for Robotic Exploration. Organizers: Hannah Kerner, Amy Tabb, Jnaneshwar Das, Pratap Tokekar, Masahiro Ono. [Session]
WS1-14 – Emergent Behaviors in Human-Robot Systems. Organizers: Erdem Bıyık, Minae Kwon, Dylan Losey, Noah Goodman, Stefanos Nikolaidis, Dorsa Sadigh. [Session]

Monday, July 13

WS2-1 – Interaction and Decision-Making in Autonomous Driving. Organizers: Rowan McAllister, Litin Sun, Igor Gilitschenski, Daniela Rus. [Session]
WS2-2 – 2nd RSS Workshop on Robust Autonomy: Tools for Safety in Real-World Uncertain Environments. Organizers: Andrea Bajcsy, Ransalu Senanayake, Somil Bansal, Sylvia Herbert, David Fridovich-Keil, Jaime Fernández Fisac. [Session]
WS2-3 – AI & Its Alternatives in Assistive & Collaborative Robotics. Organizers: Deepak Gopinath, Aleksandra Kalinowska, Mahdieh Nejati, Katarina Popovic, Brenna Argall, Todd Murphey. [Session]
WS2-4 – Benchmarking Tools for Evaluating Robotic Assembly of Small Parts. Organizers: Adam Norton, Holly Yanco, Joseph Falco, Kenneth Kimble. [Session]
WS2-5 – Good Citizens of Robotics Research. Organizers: Mustafa Mukadam, Nima Fazeli, Niko Sünderhauf. [Session]
WS2-6 – Structured Approaches to Robot Learning for Improved Generalization. Organizers: Arunkumar Byravan, Markus Wulfmeier, Franziska Meier, Mustafa Mukadam, Nicolas Heess, Angela Schoellig, Dieter Fox. [Session]
WS2-7 – Explainable and Trustworthy Robot Decision Making for Scientific Data Collection. Organizers: Nisar Ahmed, P. Michael Furlong, Geoff Hollinger, Seth McCammon. [Session]
WS2-8 – Closing the Academia to Real-World Gap in Service Robotics. Organizers: Guilherme Maeda, Nick Walker, Petar Kormushev, Maru Cabrera. [Session]
WS2-9 – Visuotactile Sensors for Robust Manipulation: From Perception to Control. Organizers: Alex Alspach, Naveen Kuppuswamy, Avinash Uttamchandani, Filipe Veiga, Wenzhen Yuan. [Session]
WS2-10 – Self-Supervised Robot Learning. Organizers: Abhinav Valada, Anelia Angelova, Joschka Boedecker, Oier Mees, Wolfram Burgard. [Session]
WS2-11 – Power On and Go Robots: ‘Out-of-the-Box’ Systems for Real-World Applications. Organizers: Jonathan Kelly, Stephan Weiss, Robuffo Giordana, Valentin Peretroukhin. [Session]
WS2-12 – Workshop on Visual Learning and Reasoning for Robotic Manipulation. Organizers: Kuan Fang, David Held, Yuke Zhu, Dinesh Jayaraman, Animesh Garg, Lin Sun, Yu Xiang, Greg Dudek. [Session]
WS2-13 – Action Representations for Learning in Continuous Control. Organizers: Tamim Asfour, Miroslav Bogdanovic, Jeannette Bohg, Animesh Garg, Roberto Martín-Martín, Ludovic Righetti. [Session]

RSS 2020 Accepted Papers

Paper ID Title Authors Virtual Session Link
1 Planning and Execution using Inaccurate Models with Provable Guarantees Anirudh Vemula (Carnegie Mellon University)*; Yash Oza (CMU); J. Bagnell (Aurora Innovation); Maxim Likhachev (CMU) Virtual Session #1
2 Swoosh! Rattle! Thump! – Actions that Sound Dhiraj Gandhi (Carnegie Mellon University)*; Abhinav Gupta (Carnegie Mellon University); Lerrel Pinto (NYU/Berkeley) Virtual Session #1
3 Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image Danny Driess (Machine Learning and Robotics Lab, University of Stuttgart)*; Jung-Su Ha (); Marc Toussaint () Virtual Session #1
4 Elaborating on Learned Demonstrations with Temporal Logic Specifications Craig Innes (University of Edinburgh)*; Subramanian Ramamoorthy (University of Edinburgh) Virtual Session #1
5 Non-revisiting Coverage Task with Minimal Discontinuities for Non-redundant Manipulators Tong Yang (Zhejiang University)*; Jaime Valls Miro (University of Technology Sydney); Yue Wang (Zhejiang University); Rong Xiong (Zhejiang University) Virtual Session #1
6 LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices Radu Alexandru Rosu (University of Bonn)*; Peer Schütt (University of Bonn); Jan Quenzel (University of Bonn); Sven Behnke (University of Bonn) Virtual Session #1
7 A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty Valentin Peretroukhin (University of Toronto)*; Matthew Giamou (University of Toronto); W. Nicholas Greene (MIT); David Rosen (MIT Laboratory for Information and Decision Systems); Jonathan Kelly (University of Toronto); Nicholas Roy (MIT) Virtual Session #1
8 Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication Brian Reily (Colorado School of Mines)*; Christopher Reardon (ARL); Hao Zhang (Colorado School of Mines) Virtual Session #1
9 OverlapNet: Loop Closing for LiDAR-based SLAM Xieyuanli Chen (Photogrammetry & Robotics Lab, University of Bonn)*; Thomas Läbe (Institute for Geodesy and Geoinformation, University of Bonn); Andres Milioto (University of Bonn); Timo Röhling (Fraunhofer FKIE); Olga Vysotska (Autonomous Intelligent Driving GmbH); Alexandre Haag (AID); Jens Behley (University of Bonn); Cyrill Stachniss (University of Bonn) Virtual Session #1
10 The Dark Side of Embodiment – Teaming Up With Robots VS Disembodied Agents Filipa Correia (INESC-ID & University of Lisbon)*; Samuel Gomes (IST/INESC-ID); Samuel Mascarenhas (INESC-ID); Francisco S. Melo (IST/INESC-ID); Ana Paiva (INESC-ID U of Lisbon) Virtual Session #1
11 Shared Autonomy with Learned Latent Actions Hong Jun Jeon (Stanford University)*; Dylan Losey (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #1
12 Regularized Graph Matching for Correspondence Identification under Uncertainty in Collaborative Perception Peng Gao (Colorado school of mines)*; Rui Guo (Toyota Motor North America); Hongsheng Lu (Toyota Motor North America); Hao Zhang (Colorado School of Mines) Virtual Session #1
13 Frequency Modulation of Body Waves to Improve Performance of Limbless Robots Baxi Zhong (Goergia Tech)*; Tianyu Wang (Carnegie Mellon University); Jennifer Rieser (Georgia Institute of Technology); Abdul Kaba (Morehouse College); Howie Choset (Carnegie Melon University); Daniel Goldman (Georgia Institute of Technology) Virtual Session #1
14 Self-Reconfiguration in Two-Dimensions via Active Subtraction with Modular Robots Matthew Hall (The University of Sheffield)*; Anil Ozdemir (The University of Sheffield); Roderich Gross (The University of Sheffield) Virtual Session #1
15 Singularity Maps of Space Robots and their Application to Gradient-based Trajectory Planning Davide Calzolari (Technical University of Munich (TUM), German Aerospace Center (DLR))*; Roberto Lampariello (German Aerospace Center); Alessandro Massimo Giordano (Deutches Zentrum für Luft und Raumfahrt) Virtual Session #1
16 Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications Roma Patel (Brown University)*; Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #1
17 Fast Uniform Dispersion of a Crash-prone Swarm Michael Amir (Technion – Israel Institute of Technology)*; Freddy Bruckstein (Technion) Virtual Session #1
18 Simultaneous Enhancement and Super-Resolution of Underwater Imagery for Improved Visual Perception Md Jahidul Islam (University of Minnesota Twin Cities)*; Peigen Luo (University of Minnesota-Twin Cities); Junaed Sattar (University of Minnesota) Virtual Session #1
19 Collision Probabilities for Continuous-Time Systems Without Sampling Kristoffer Frey (MIT)*; Ted Steiner (Charles Stark Draper Laboratory, Inc.); Jonathan How (MIT) Virtual Session #1
20 Event-Driven Visual-Tactile Sensing and Learning for Robots Tasbolat Taunyazov (National University of Singapore); Weicong Sng (National University of Singapore); Brian Lim (National University of Singapore); Hian Hian See (National University of Singapore); Jethro Kuan (National University of Singapore); Abdul Fatir Ansari (National University of Singapore); Benjamin Tee (National University of Singapore); Harold Soh (National University Singapore)* Virtual Session #1
21 Resilient Distributed Diffusion for Multi-Robot Systems Using Centerpoint JIANI LI (Vanderbilt University)*; Waseem Abbas (Vanderbilt University); Mudassir Shabbir (Information Technology University); Xenofon Koutsoukos (Vanderbilt University) Virtual Session #1
22 Pixel-Wise Motion Deblurring of Thermal Videos Manikandasriram Srinivasan Ramanagopal (University of Michigan)*; Zixu Zhang (University of Michigan); Ram Vasudevan (University of Michigan); Matthew Johnson Roberson (University of Michigan) Virtual Session #1
23 Controlling Contact-Rich Manipulation Under Partial Observability Florian Wirnshofer (Siemens AG)*; Philipp Sebastian Schmitt (Siemens AG); Georg von Wichert (Siemens AG); Wolfram Burgard (University of Freiburg) Virtual Session #1
24 AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos Laura Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley) Virtual Session #1
25 Provably Constant-time Planning and Re-planning for Real-time Grasping Objects off a Conveyor Belt Fahad Islam (Carnegie Mellon University)*; Oren Salzman (Technion); Aditya Agarwal (CMU); Likhachev Maxim (Carnegie Mellon University) Virtual Session #1
26 Online IMU Intrinsic Calibration: Is It Necessary? Yulin Yang (University of Delaware)*; Patrick Geneva (University of Delaware); Xingxing Zuo (Zhejiang University); Guoquan Huang (University of Delaware) Virtual Session #1
27 A Berry Picking Robot With A Hybrid Soft-Rigid Arm: Design and Task Space Control Naveen Kumar Uppalapati (University of Illinois at Urbana Champaign)*; Benjamin Walt ( University of Illinois at Urbana Champaign); Aaron Havens (University of Illinois Urbana Champaign); Armeen Mahdian (University of Illinois at Urbana Champaign); Girish Chowdhary (University of Illinois at Urbana Champaign); Girish Krishnan (University of Illinois at Urbana Champaign) Virtual Session #1
28 Iterative Repair of Social Robot Programs from Implicit User Feedback via Bayesian Inference Michael Jae-Yoon Chung (University of Washington)*; Maya Cakmak (University of Washington) Virtual Session #1
29 Cable Manipulation with a Tactile-Reactive Gripper Siyuan Dong (MIT); Shaoxiong Wang (MIT); Yu She (MIT)*; Neha Sunil (Massachusetts Institute of Technology); Alberto Rodriguez (MIT); Edward Adelson (MIT, USA) Virtual Session #1
30 Automated Synthesis of Modular Manipulators’ Structure and Control for Continuous Tasks around Obstacles Thais Campos de Almeida (Cornell University)*; Samhita Marri (Cornell University); Hadas Kress-Gazit (Cornell) Virtual Session #1
31 Learning Memory-Based Control for Human-Scale Bipedal Locomotion Jonah Siekmann (Oregon State University)*; Srikar Valluri (Oregon State University); Jeremy Dao (Oregon State University); Francis Bermillo (Oregon State University); Helei Duan (Oregon State University); Alan Fern (Oregon State University); Jonathan Hurst (Oregon State University) Virtual Session #1
32 Multi-Fidelity Black-Box Optimization for Time-Optimal Quadrotor Maneuvers Gilhyun Ryou (Massachusetts Institute of Technology)*; Ezra Tal (Massachusetts Institute of Technology); Sertac Karaman (Massachusetts Institute of Technology) Virtual Session #1
33 Manipulation Trajectory Optimization with Online Grasp Synthesis and Selection Lirui Wang (University of Washington)*; Yu Xiang (NVIDIA); Dieter Fox (NVIDIA Research / University of Washington) Virtual Session #1
34 VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation Ryan Hoque (UC Berkeley)*; Daniel Seita (University of California, Berkeley); Ashwin Balakrishna (UC Berkeley); Aditya Ganapathi (University of California, Berkeley); Ajay Tanwani (UC Berkeley); Nawid Jamali (Honda Research Institute); Katsu Yamane (Honda Research Institute); Soshi Iba (Honda Research Institute); Ken Goldberg (UC Berkeley) Virtual Session #1
35 Spatial Action Maps for Mobile Manipulation Jimmy Wu (Princeton University)*; Xingyuan Sun (Princeton University); Andy Zeng (Google); Shuran Song (Columbia University); Johnny Lee (Google); Szymon Rusinkiewicz (Princeton University); Thomas Funkhouser (Princeton University) Virtual Session #2
36 Generalized Tsallis Entropy Reinforcement Learning and Its Application to Soft Mobile Robots Kyungjae Lee (Seoul National University)*; Sungyub Kim (KAIST); Sungbin Lim (UNIST); Sungjoon Choi (Disney Research); Mineui Hong (Seoul National University); Jaein Kim (Seoul National University); Yong-Lae Park (Seoul National University); Songhwai Oh (Seoul National University) Virtual Session #2
37 Learning Labeled Robot Affordance Models Using Simulations and Crowdsourcing Adam Allevato (UT Austin)*; Elaine Short (Tufts University); Mitch Pryor (UT Austin); Andrea Thomaz (UT Austin) Virtual Session #2
38 Towards Embodied Scene Description Sinan Tan (Tsinghua University); Huaping Liu (Tsinghua University)*; Di Guo (Tsinghua University); Xinyu Zhang (Tsinghua University); Fuchun Sun (Tsinghua University) Virtual Session #2
39 Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving Zhangjie Cao (Stanford University); Erdem Biyik (Stanford University)*; Woodrow Wang (Stanford University); Allan Raventos (Toyota Research Institute); Adrien Gaidon (Toyota Research Institute); Guy Rosman (Toyota Research Institute); Dorsa Sadigh (Stanford) Virtual Session #2
40 Deep Drone Acrobatics Elia Kaufmann (ETH / University of Zurich)*; Antonio Loquercio (ETH / University of Zurich); Rene Ranftl (Intel Labs); Matthias Müller (Intel Labs); Vladlen Koltun (Intel Labs); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #2
41 Active Preference-Based Gaussian Process Regression for Reward Learning Erdem Biyik (Stanford University)*; Nicolas Huynh (École Polytechnique); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #2
42 A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play Shray Bansal (Georgia Institute of Technology)*; Jin Xu (Georgia Institute of Technology); Ayanna Howard (Georgia Institute of Technology); Charles Isbell (Georgia Institute of Technology) Virtual Session #2
43 Data-driven modeling of a flapping bat robot with a single flexible wing surface Jonathan Hoff (University of Illinois at Urbana-Champaign)*; Seth Hutchinson (Georgia Tech) Virtual Session #2
44 Safe Motion Planning for Autonomous Driving using an Adversarial Road Model Alex Liniger (ETH Zurich)*; Luc Van Gool (ETH Zurich) Virtual Session #2
45 A Motion Taxonomy for Manipulation Embedding David Paulius (University of South Florida)*; Nicholas Eales (University of South Florida); Yu Sun (University of South Florida) Virtual Session #2
46 Aerial Manipulation Using Hybrid Force and Position NMPC Applied to Aerial Writing Dimos Tzoumanikas (Imperial College London)*; Felix Graule (ETH Zurich); Qingyue Yan (Imperial College London); Dhruv Shah (Berkeley Artificial Intelligence Research); Marija Popovic (Imperial College London); Stefan Leutenegger (Imperial College London) Virtual Session #2
47 A Global Quasi-Dynamic Model for Contact-Trajectory Optimization in Manipulation Bernardo Aceituno-Cabezas (MIT)*; Alberto Rodriguez (MIT) Virtual Session #2
48 Vision-Based Goal-Conditioned Policies for Underwater Navigation in the Presence of Obstacles Travis Manderson (McGill University)*; Juan Camilo Gamboa Higuera (McGill University); Stefan Wapnick (McGill University); Jean-François Tremblay (McGill University); Florian Shkurti (University of Toronto); David Meger (McGill University); Gregory Dudek (McGill University) Virtual Session #2
49 Spatio-Temporal Stochastic Optimization: Theory and Applications to Optimal Control and Co-Design Ethan Evans (Georgia Institute of Technology)*; Andrew Kendall (Georgia Institute of Technology); Georgios Boutselis (Georgia Institute of Technology ); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #2
50 Kernel Taylor-Based Value Function Approximation for Continuous-State Markov Decision Processes Junhong Xu (INDIANA UNIVERSITY)*; Kai Yin (Vrbo, Expedia Group); Lantao Liu (Indiana University, Intelligent Systems Engineering) Virtual Session #2
51 HMPO: Human Motion Prediction in Occluded Environments for Safe Motion Planning Jaesung Park (University of North Carolina at Chapel Hill)*; Dinesh Manocha (University of Maryland at College Park) Virtual Session #2
52 Motion Planning for Variable Topology Truss Modular Robot Chao Liu (University of Pennsylvania)*; Sencheng Yu (University of Pennsylvania); Mark Yim (University of Pennsylvania) Virtual Session #2
53 Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning Archit Sharma (Google)*; Michael Ahn (Google); Sergey Levine (Google); Vikash Kumar (Google); Karol Hausman (Google Brain); Shixiang Gu (Google Brain) Virtual Session #2
54 Compositional Transfer in Hierarchical Reinforcement Learning Markus Wulfmeier (DeepMind)*; Abbas Abdolmaleki (Google DeepMind); Roland Hafner (Google DeepMind); Jost Tobias Springenberg (DeepMind); Michael Neunert (Google DeepMind); Noah Siegel (DeepMind); Tim Hertweck (DeepMind); Thomas Lampe (DeepMind); Nicolas Heess (DeepMind); Martin Riedmiller (DeepMind) Virtual Session #2
55 Learning from Interventions: Human-robot interaction as both explicit and implicit feedback Jonathan Spencer (Princeton University)*; Sanjiban Choudhury (University of Washington); Matt Barnes (University of Washington); Matthew Schmittle (University of Washington); Mung Chiang (Princeton University); Peter Ramadge (Princeton); Siddhartha Srinivasa (University of Washington) Virtual Session #2
56 Fourier movement primitives: an approach for learning rhythmic robot skills from demonstrations Thibaut Kulak (Idiap Research Institute)*; Joao Silverio (Idiap Research Institute); Sylvain Calinon (Idiap Research Institute) Virtual Session #2
57 Self-Supervised Localisation between Range Sensors and Overhead Imagery Tim Tang (University of Oxford)*; Daniele De Martini (University of Oxford); Shangzhe Wu (University of Oxford); Paul Newman (University of Oxford) Virtual Session #2
58 Probabilistic Swarm Guidance Subject to Graph Temporal Logic Specifications Franck Djeumou (University of Texas at Austin)*; Zhe Xu (University of Texas at Austin); Ufuk Topcu (University of Texas at Austin) Virtual Session #2
59 In-Situ Learning from a Domain Expert for Real World Socially Assistive Robot Deployment Katie Winkle (Bristol Robotics Laboratory)*; Severin Lemaignan (); Praminda Caleb-Solly (); Paul Bremner (); Ailie Turton (University of the West of England); Ute Leonards () Virtual Session #2
60 MRFMap: Online Probabilistic 3D Mapping using Forward Ray Sensor Models Kumar Shaurya Shankar (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #2
61 GTI: Learning to Generalize across Long-Horizon Tasks from Human Demonstrations Ajay Mandlekar (Stanford University); Danfei Xu (Stanford University)*; Roberto Martín-Martín (Stanford University); Silvio Savarese (Stanford University); Li Fei-Fei (Stanford University) Virtual Session #2
62 Agbots 2.0: Weeding Denser Fields with Fewer Robots Wyatt McAllister (University of Illinois)*; Joshua Whitman (University of Illinois); Allan Axelrod (University of Illinois); Joshua Varghese (University of Illinois); Girish Chowdhary (University of Illinois at Urbana Champaign); Adam Davis (University of Illinois) Virtual Session #2
63 Optimally Guarding Perimeters and Regions with Mobile Range Sensors Siwei Feng (Rutgers University)*; Jingjin Yu (Rutgers Univ.) Virtual Session #2
64 Learning Agile Robotic Locomotion Skills by Imitating Animals Xue Bin Peng (UC Berkeley)*; Erwin Coumans (Google); Tingnan Zhang (Google); Tsang-Wei Lee (Google Brain); Jie Tan (Google); Sergey Levine (UC Berkeley) Virtual Session #2
65 Learning to Manipulate Deformable Objects without Demonstrations Yilin Wu (UC Berkeley); Wilson Yan (UC Berkeley)*; Thanard Kurutach (UC Berkeley); Lerrel Pinto (); Pieter Abbeel (UC Berkeley) Virtual Session #2
66 Deep Differentiable Grasp Planner for High-DOF Grippers Min Liu (National University of Defense Technology)*; Zherong Pan (University of North Carolina at Chapel Hill); Kai Xu (National University of Defense Technology); Kanishka Ganguly (University of Maryland at College Park); Dinesh Manocha (University of North Carolina at Chapel Hill) Virtual Session #2
67 Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation Ahalya Prabhakar (Northwestern University)*; Ian Abraham (Northwestern University); Annalisa Taylor (Northwestern University); Millicent Schlafly (Northwestern University); Katarina Popovic (Northwestern University); Giovani Diniz (Raytheon); Brendan Teich (Raytheon); Borislava Simidchieva (Raytheon); Shane Clark (Raytheon); Todd Murphey (Northwestern Univ.) Virtual Session #2
68 Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints Shushman Choudhury (Stanford University)*; Jayesh Gupta (Stanford University); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford); Jeannette Bohg (Stanford) Virtual Session #2
69 Latent Belief Space Motion Planning under Cost, Dynamics, and Intent Uncertainty Dicong Qiu (iSee); Yibiao Zhao (iSee); Chris Baker (iSee)* Virtual Session #2
70 Learning of Sub-optimal Gait Controllers for Magnetic Walking Soft Millirobots Utku Culha (Max-Planck Institute for Intelligent Systems); Sinan Ozgun Demir (Max Planck Institute for Intelligent Systems); Sebastian Trimpe (Max Planck Institute for Intelligent Systems); Metin Sitti (Carnegie Mellon University)* Virtual Session #3
71 Nonparametric Motion Retargeting for Humanoid Robots on Shared Latent Space Sungjoon Choi (Disney Research)*; Matthew Pan (Disney Research); Joohyung Kim (University of Illinois Urbana-Champaign) Virtual Session #3
72 Residual Policy Learning for Shared Autonomy Charles Schaff (Toyota Technological Institute at Chicago)*; Matthew Walter (Toyota Technological Institute at Chicago) Virtual Session #3
73 Efficient Parametric Multi-Fidelity Surface Mapping Aditya Dhawale (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #3
74 Towards neuromorphic control: A spiking neural network based PID controller for UAV Rasmus Stagsted (University of Southern Denmark); Antonio Vitale (ETH Zurich); Jonas Binz (ETH Zurich); Alpha Renner (Institute of Neuroinformatics, University of Zurich and ETH Zurich); Leon Bonde Larsen (University of Southern Denmark); Yulia Sandamirskaya (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland)* Virtual Session #3
75 Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping Cristian Bodnar (University of Cambridge)*; Adrian Li (X); Karol Hausman (Google Brain); Peter Pastor (X); Mrinal Kalakrishnan (X) Virtual Session #3
76 Scaling data-driven robotics with reward sketching and batch reinforcement learning Serkan Cabi (DeepMind)*; Sergio Gómez Colmenarejo (DeepMind); Alexander Novikov (DeepMind); Ksenia Konyushova (DeepMind); Scott Reed (DeepMind); Rae Jeong (DeepMind); Konrad Zolna (DeepMind); Yusuf Aytar (DeepMind); David Budden (DeepMind); Mel Vecerik (Deepmind); Oleg Sushkov (DeepMind); David Barker (DeepMind); Jonathan Scholz (DeepMind); Misha Denil (DeepMind); Nando de Freitas (DeepMind); Ziyu Wang (Google Research, Brain Team) Virtual Session #3
77 MPTC – Modular Passive Tracking Controller for stack of tasks based control frameworks Johannes Englsberger (German Aerospace Center (DLR))*; Alexander Dietrich (DLR); George Mesesan (German Aerospace Center (DLR)); Gianluca Garofalo (German Aerospace Center (DLR)); Christian Ott (DLR); Alin Albu-Schaeffer (Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR)) Virtual Session #3
78 NH-TTC: A gradient-based framework for generalized anticipatory collision avoidance Bobby Davis (University of Minnesota Twin Cities)*; Ioannis Karamouzas (Clemson University); Stephen Guy (University of Minnesota Twin Cities) Virtual Session #3
79 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans Antoni Rosinol (MIT)*; Arjun Gupta (MIT); Marcus Abate (MIT); Jingnan Shi (MIT); Luca Carlone (Massachusetts Institute of Technology) Virtual Session #3
80 Robot Object Retrieval with Contextual Natural Language Queries Thao Nguyen (Brown University)*; Nakul Gopalan (Georgia Tech); Roma Patel (Brown University); Matthew Corsaro (Brown University); Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #3
81 AlphaPilot: Autonomous Drone Racing Philipp Foehn (ETH / University of Zurich)*; Dario Brescianini (University of Zurich); Elia Kaufmann (ETH / University of Zurich); Titus Cieslewski (University of Zurich & ETH Zurich); Mathias Gehrig (University of Zurich); Manasi Muglikar (University of Zurich); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #3
82 Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations Lin Shao (Stanford University)*; Toki Migimatsu (Stanford University); Qiang Zhang (Shanghai Jiao Tong University); Kaiyuan Yang (Stanford University); Jeannette Bohg (Stanford) Virtual Session #3
83 A Variable Rolling SLIP Model for a Conceptual Leg Shape to Increase Robustness of Uncertain Velocity on Unknown Terrain Adar Gaathon (Technion – Israel Institute of Technology)*; Amir Degani (Technion – Israel Institute of Technology) Virtual Session #3
84 Interpreting and Predicting Tactile Signals via a Physics-Based and Data-Driven Framework Yashraj Narang (NVIDIA)*; Karl Van Wyk (NVIDIA); Arsalan Mousavian (NVIDIA); Dieter Fox (NVIDIA) Virtual Session #3
85 Learning Active Task-Oriented Exploration Policies for Bridging the Sim-to-Real Gap Jacky Liang (Carnegie Mellon University)*; Saumya Saxena (Carnegie Mellon University); Oliver Kroemer (Carnegie Mellon University) Virtual Session #3
86 Manipulation with Shared Grasping Yifan Hou (Carnegie Mellon University)*; Zhenzhong Jia (SUSTech); Matthew Mason (Carnegie Mellon University) Virtual Session #3
87 Deep Learning Tubes for Tube MPC David Fan (Georgia Institute of Technology )*; Ali Agha (Jet Propulsion Laboratory); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #3
88 Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions Jason Choi (UC Berkeley); Fernando Castañeda (UC Berkeley); Claire Tomlin (UC Berkeley); Koushil Sreenath (Berkeley)* Virtual Session #3
89 Fast Risk Assessment for Autonomous Vehicles Using Learned Models of Agent Futures Allen Wang (MIT)*; Xin Huang (MIT); Ashkan Jasour (MIT); Brian Williams (Massachusetts Institute of Technology) Virtual Session #3
90 Online Domain Adaptation for Occupancy Mapping Anthony Tompkins (The University of Sydney)*; Ransalu Senanayake (Stanford University); Fabio Ramos (NVIDIA, The University of Sydney) Virtual Session #3
91 ALGAMES: A Fast Solver for Constrained Dynamic Games Simon Le Cleac’h (Stanford University)*; Mac Schwager (Stanford, USA); Zachary Manchester (Stanford) Virtual Session #3
92 Scalable and Probabilistically Complete Planning for Robotic Spatial Extrusion Caelan Garrett (MIT)*; Yijiang Huang (MIT Department of Architecture); Tomas Lozano-Perez (MIT); Caitlin Mueller (MIT Department of Architecture) Virtual Session #3
93 The RUTH Gripper: Systematic Object-Invariant Prehensile In-Hand Manipulation via Reconfigurable Underactuation Qiujie Lu (Imperial College London)*; Nicholas Baron (Imperial College London); Angus Clark (Imperial College London); Nicolas Rojas (Imperial College London) Virtual Session #3
94 Heterogeneous Graph Attention Networks for Scalable Multi-Robot Scheduling with Temporospatial Constraints Zheyuan Wang (Georgia Institute of Technology)*; Matthew Gombolay (Georgia Institute of Technology) Virtual Session #3
95 Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks Guangyao Shi (University of Maryland)*; Pratap Tokekar (University of Maryland); Lifeng Zhou (Virginia Tech) Virtual Session #3
96 Eyes-Closed Safety Kernels: Safety of Autonomous Systems Under Loss of Observability Forrest Laine (UC Berkeley)*; Chih-Yuan Chiu (UC Berkeley); Claire Tomlin (UC Berkeley) Virtual Session #3
97 Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations Glen Chou (University of Michigan)*; Necmiye Ozay (University of Michigan); Dmitry Berenson (U Michigan) Virtual Session #3
98 Nonlinear Model Predictive Control of Robotic Systems with Control Lyapunov Functions Ruben Grandia (ETH Zurich)*; Andrew Taylor (Caltech); Andrew Singletary (Caltech); Marco Hutter (ETHZ); Aaron Ames (Caltech) Virtual Session #3
99 Learning to Slide Unknown Objects with Differentiable Physics Simulations Changkyu Song (Rutgers University); Abdeslam Boularias (Rutgers University)* Virtual Session #3
100 Reachable Sets for Safe, Real-Time Manipulator Trajectory Design Patrick Holmes (University of Michigan); Shreyas Kousik (University of Michigan)*; Bohao Zhang (University of Michigan); Daphna Raz (University of Michigan); Corina Barbalata (Louisiana State University); Matthew Johnson Roberson (University of Michigan); Ram Vasudevan (University of Michigan) Virtual Session #3
101 Learning Task-Driven Control Policies via Information Bottlenecks Vincent Pacelli (Princeton University)*; Anirudha Majumdar (Princeton) Virtual Session #3
102 Simultaneously Learning Transferable Symbols and Language Groundings from Perceptual Data for Instruction Following Nakul Gopalan (Georgia Tech)*; Eric Rosen (Brown University); Stefanie Tellex (Brown University); George Konidaris (Brown) Virtual Session #3
103 A social robot mediator to foster collaboration and inclusion among children Sarah Gillet (Royal Institute of Technology)*; Wouter van den Bos (University of Amsterdam); Iolanda Leite (KTH) Virtual Session #3

The RSS Foundation is the governing body behind the Robotics: Science and Systems (RSS) conference. The foundation was started and is run by volunteers from the robotics community who believe that an open, high-quality, single-track conference is an important component of an active and growing scientific discipline.

]]>
Opportunities in DARPA SubT Challenge https://robohub.org/opportunities-in-darpa-subt-challenge/ Tue, 14 Jul 2020 20:13:57 +0000 https://robohub.org/opportunities-in-darpa-subt-challenge/
Opportunities Still Available to Participate in the DARPA Subterranean (SubT) Challenge: Cave Circuit 2020 and Final Event 2021. Join us for an introduction to the DARPA Subterranean Challenge with Program Manager Timothy Chung on Monday, July 20 at 12pm PDT: https://www.eventbrite.com/e/opportunities-with-darpa-subt-challenge-tickets-113037393888

About this Event

The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground.

The SubT Challenge allows teams to demonstrate new approaches for robotic systems to rapidly map, navigate, and search complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.

The SubT Challenge is organized into two Competitions (Systems and Virtual), each with two tracks (DARPA-funded and self-funded).

The Cave Circuit, the final of three Circuit events, is planned for later this year. The Final Event, planned for summer 2021, will put both Systems and Virtual teams to the test with courses that incorporate diverse elements from all three environments. Teams will compete for up to $2 million in the Systems Final Event and up to $1.5 million in the Virtual Final Event, with additional prizes.

Learn more about the opportunities to participate as either a Virtual or Systems team: https://www.subtchallenge.com/

Dr. Timothy Chung – Program Manager

Dr. Timothy Chung joined DARPA’s Tactical Technology Office as a program manager in February 2016. He serves as the Program Manager for the OFFensive Swarm-Enabled Tactics Program and the DARPA Subterranean (SubT) Challenge. His interests include autonomous/unmanned air vehicles, collaborative autonomy for unmanned swarm system capabilities, distributed perception, distributed decision-making, and counter unmanned system technologies.

Prior to joining DARPA, Dr. Chung served as an Assistant Professor at the Naval Postgraduate School and Director of the Advanced Robotic Systems Engineering Laboratory (ARSENL). His academic interests included modeling, analysis, and systems engineering of operational settings involving unmanned systems, combining collaborative autonomy development efforts with an extensive live-fly field experimentation program for swarm and counter-swarm unmanned system tactics and associated technologies.

Dr. Chung holds a Bachelor of Science in Mechanical and Aerospace Engineering from Cornell University. He also earned Master of Science and Doctor of Philosophy degrees in Mechanical Engineering from the California Institute of Technology.

Learn more about DARPA here: www.darpa.mil

]]>
Open Problems for Robots in Surgery and Healthcare https://robohub.org/open-problems-for-robots-in-surgery-and-healthcare/ Fri, 15 May 2020 19:46:14 +0000 https://robohub.org/open-problems-for-robots-in-surgery-and-healthcare/ * Please register at:
https://robotsinsurgeryandhealthcare.eventbrite.com

The COVID-19 pandemic is increasing global demand for robots that can assist in surgery and healthcare. This symposium focuses on recent advances and open problems in robot-assisted tele-surgery and tele-medicine, and on the need for new research and development. The online format will encourage active dialogue among faculty, students, professionals, and entrepreneurs.

Featuring:
Gary Guthart, CEO, Intuitive Surgical
Robin Murphy, Texas A&M
Pablo Garcia Kilroy, VP Research, Verb Surgical
Allison Okamura, Professor, Stanford
David Noonan, Director of Research, Auris Surgical
Jaydev Desai, Director, Georgia Tech Center for Medical Robotics
Nicole Kernbaum, Principal Engineer, Seismic Powered Clothing
Monroe Kennedy III, Professor, Stanford

Presented by the University of California Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute “People and Robots” Initiative, SRI International, and Silicon Valley Robotics.

Schedule:

   *  09:30-10:00: Conversation with Robin Murphy, Texas A&M and Director of Robotics for Infectious Diseases, and Andra Keay, Director of Silicon Valley Robotics
   *  10:00-10:30: Conversation with Gary Guthart, CEO Intuitive Surgical, and Ken Goldberg, Director of CITRIS People and Robots Initiative
   *  10:30-11:00: Conversation with Pablo Garcia Kilroy, VP Research Verb Surgical, and Tom Low, Director of Robotics at SRI International
   *  11:00-11:15: Coffee Break
   *  11:15-11:45: Conversation with David Noonan, Director of Research, Auris Surgical, and Nicole Kernbaum
   *  11:45-12:45: Keynote by Jaydev Desai, Director, Georgia Tech Center for Medical Robotics
   *  12:45-13:15: Conversation with Allison Okamura, Stanford, and Monroe Kennedy III, Stanford

]]>
From SLAM to Spatial AI https://robohub.org/from-slam-to-spatial-ai/ Tue, 12 May 2020 21:49:15 +0000 https://robohub.org/from-slam-to-spatial-ai/ You can watch this seminar here at 1PM EST (10AM PST) on May 15th.

 Andrew Davison (Imperial College London)


Abstract: To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic 'Spatial AI' perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors which form full systems, and I will cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.

Biography: Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. His long-term research focus is on SLAM (Simultaneous Localisation and Mapping) and its evolution towards general 'Spatial AI': computer vision algorithms which enable robots and other artificial devices to map, localise within and ultimately understand and interact with the 3D spaces around them. With his research group and collaborators he has consistently developed and demonstrated breakthrough systems, including MonoSLAM, KinectFusion, SLAM++ and CodeSLAM, and recent prizes include Best Paper at ECCV 2016 and Best Paper Honourable Mention at CVPR 2018. He has also had strong involvement in taking this technology into real applications, in particular through his work with Dyson on the design of the visual mapping system inside the Dyson 360 Eye robot vacuum cleaner and as co-founder of applied SLAM start-up SLAMcore. He was elected Fellow of the Royal Academy of Engineering in 2017.

Robotics Today Seminars

“Robotics Today – A series of technical talks” is a virtual robotics seminar series. The goal of the series is to bring the robotics community together during these challenging times. The seminars are scheduled on Fridays at 1PM EDT (10AM PDT) and are open to the public. The format of the seminar consists of a technical talk live captioned and streamed via Web and Twitter (@RoboticsSeminar), followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students that will moderate audience questions.

Stay up to date with upcoming seminars with the Robotics Today Google Calendar (or download the .ics file) and view past seminars on the Robotics Today Youtube Channel. And follow us on Twitter!

Upcoming Seminars

Seminars will be broadcast at 1PM EDT (10AM PDT) here.


22 May 2020: Leslie Kaelbling (MIT)


29 May 2020: Allison Okamura (Stanford)


12 June 2020: Anca Dragan (UC Berkeley)

Past Seminars

We’ll post links to the recorded seminars soon!


]]>
Intel RealSense 3D Camera for robotics & SLAM (with code) https://robohub.org/intel-realsense-3d-camera-for-robotics-slam-with-code/ Thu, 12 Sep 2019 13:00:33 +0000 https://robohub.org/intel-realsense-3d-camera-for-robotics-slam-with-code/

The Intel® RealSense™ D400 Depth Cameras. Credit: Intel Corporation

The Intel RealSense cameras have been gaining in popularity for the past few years for use as a 3D camera and for visual odometry. I had the chance to hear a presentation from Daniel Piro about using the Intel RealSense cameras generally and for SLAM (Simultaneous Localization and Mapping). The following post is based on his talk.

Comparing depth and color RGB images

Depth Camera (D400 series)

Depth information is important since it gives us what we need to understand shapes, sizes, and distance. It lets us (or a robot) know how far away items are, so we can avoid running into things and plan a path around obstacles in the image field of view. Traditionally this information has come from RADAR or LIDAR; however, in some applications we can also get it from cameras, where depth is often computed by using two cameras for stereo vision.

The Intel RealSense Depth camera (D400 series) uses stereoscopic depth sensing to determine the range to an item. So essentially it has two cameras and can do triangulation from them for stereo. This sensor uses two infrared cameras for the stereo and also has an RGB camera onboard. So you can get four data products from the sensor: an RGB image, a depth image, a left infrared image, and a right infrared image. Think of each image frame as a 3D snapshot of the environment, where each color (RGB) pixel also has a range value (depth) to the item in the image. The farther items are from the camera, the greater the range/depth error will be.
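If you want all four data products in code, here is a minimal sketch using the pyrealsense2 library covered later in the Software section. The stream resolutions and frame rates are assumptions; check what your specific camera model supports.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)       # depth image
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)      # RGB image
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)  # left IR imager
config.enable_stream(rs.stream.infrared, 2, 640, 480, rs.format.y8, 30)  # right IR imager

pipeline.start(config)
frames = pipeline.wait_for_frames()
left_ir = frames.get_infrared_frame(1)    # the index matches the stream index above
right_ir = frames.get_infrared_frame(2)
pipeline.stop()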

Applications In Computer Vision

The D400 cameras have an infrared projector for getting better features on surfaces with minimal texture in the infrared camera for computing the stereo reconstruction. This projector can be turned on and off if you want. Disabling the projector is often useful for tracking applications (since the projected dots don’t move with the items being tracked).

RealSense D400 Camera Versions

One thing to be aware of is that the infrared images are rectified in the camera (to make the images look the same in a common plane); however, the RGB camera image is not rectified. This means that if you want the depth and RGB images to line up well, you need to manually rectify the RGB image.

Pro Tip 1: The driver has a UV map to help map from the depth pixel to the RGB image to help account for the difference in image sizes. This lets you match the depth to RGB image data points better.
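The SDK also ships an align processing block that re-projects the depth frame into the color camera's frame, which is often the simplest way to get matching depth and RGB pixels. A minimal pyrealsense2 sketch is below; note this is an alternative to the UV-map approach mentioned above, not the same mechanism.

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)          # re-project depth into the color camera's frame
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth = np.asanyarray(aligned.get_depth_frame().get_data())
color = np.asanyarray(aligned.get_color_frame().get_data())
# depth[v, u] now corresponds to the same scene point as color[v, u]
pipeline.stop()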

Pro Tip 2: If using the D435i (the IMU version), use the timestamps from the images and IMU to synchronize the two sensors. If you use system time (from your computer) there will be more error (partially due to weird USB timing).

Pro Tip 3: The cameras have Depth Presets. These are profiles that let you optimize various settings for various conditions, such as high density, high accuracy, etc.

Pro Tip 4: Make sure your exposure is set correctly. If you are using auto exposure, try changing the Mean Intensity Set Point (the setting is not where exposure is; it is under AE control, which is not the most obvious place).
If you want to use manual exposure, you can play with the Exposure setpoint and Gain constant. Start with the exposure setpoint, then adjust the gain.
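Here is a minimal sketch of setting a depth preset (Pro Tip 3) and manual exposure/gain (Pro Tip 4) from code with pyrealsense2, assuming a D400-series depth sensor is attached. The preset name string and the exposure/gain values are assumptions to adjust for your camera and firmware.

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())                  # default streams are fine for this demo
depth_sensor = profile.get_device().first_depth_sensor()

# Pro Tip 3: pick a depth preset by name (the exact name strings depend on firmware)
if depth_sensor.supports(rs.option.visual_preset):
    preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
    for i in range(int(preset_range.max) + 1):
        name = depth_sensor.get_option_value_description(rs.option.visual_preset, i)
        if name == "High Accuracy":                    # assumed name; print them to check yours
            depth_sensor.set_option(rs.option.visual_preset, i)

# Pro Tip 4: switch to manual exposure, then tune exposure (microseconds) and gain
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)      # starting point; tune for your scene
depth_sensor.set_option(rs.option.gain, 16)

pipeline.stop()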

You might also want to see this whitepaper for more methods of tuning the cameras for better performance.

Visual Odometry & SLAM (T265)

Visual odometry is the generic term for figuring out how far you have moved using a camera. This is as opposed to “standard” odometry using things such as wheel encoders, or inertial odometry with an IMU.

These are not the only ways to get odometry. Having multiple sources of odometry is nice, since you can take advantage of each type and fuse them together (such as with a Kalman filter).
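As a toy illustration of the fusion idea (not the filter any particular package uses), here is a one-dimensional, variance-weighted update. The variances and measurements are made-up numbers; real systems estimate full 3D/6D state, but the weighting idea is the same.

# Toy 1-D illustration of fusing two odometry sources with a Kalman-style update.
def fuse(estimate, var_est, measurement, var_meas):
    """Blend a prior estimate with a new measurement, weighted by their variances."""
    k = var_est / (var_est + var_meas)           # Kalman gain
    fused = estimate + k * (measurement - estimate)
    fused_var = (1.0 - k) * var_est
    return fused, fused_var

x, var_x = 0.0, 1.0                              # initial position estimate (meters)
x, var_x = fuse(x, var_x, 0.52, 0.04)            # wheel-encoder odometry (noisier, drifts)
x, var_x = fuse(x, var_x, 0.47, 0.01)            # visual odometry (trusted more here)
print(x, var_x)                                  # fused estimate sits between the two sources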

The RealSense T265 is a tracking camera that is designed to be better suited for visual odometry and SLAM (it has a wider field of view and does not use infrared light). It can do SLAM onboard as well as loop closure. However, this camera is not able to return RGB images (since it does not have an RGB camera onboard), and the depth it returns is not as good as the D400 series (and can be a little trickier to get).

Using both a RealSense D435i sensor and a RealSense T265 sensor can provide both the maps and the better quality visual odometry for developing a full SLAM system: the D435i is used for the mapping, and the T265 for the tracking.

Software

RealSense Putting Everything Together to Build a Full System

Intel provides the RealSense SDK 2.0 library for using the RealSense cameras. It is open source and works on Mac, Windows, Linux, and Android. There are also ROS and OpenCV wrappers.

Click here for the developers page with the SDK.

The SDK (software development kit) includes a viewer that lets you view images, record images, change settings, or update the firmware.

Pro Tip 5: Spend some time with the viewer looking at the camera and depth images when designing your system so you can compare various mounting angles, heights, etc.. for the cameras.

The RealSense SDK (software development kit) has a few filters that can run on your computer to try and improve the returned depth map. You can play with turning these on and off in the viewer. Some of these include:

  1. Decimation Filter
  2. Spatial Edge-Preserving Filter
  3. Temporal Filter
  4. Holes Filling Filter
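
Here is a sketch of chaining these post-processing blocks on each depth frame with pyrealsense2. The filter class names exist in recent SDK versions; defaults are used, and each block also exposes tuning options not shown here.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

decimation = rs.decimation_filter()        # 1. downsample the depth image
spatial = rs.spatial_filter()              # 2. edge-preserving spatial smoothing
temporal = rs.temporal_filter()            # 3. smooth using previous frames
hole_filling = rs.hole_filling_filter()    # 4. fill small holes in the depth map

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
for f in (decimation, spatial, temporal, hole_filling):
    depth = f.process(depth)               # each filter returns a new (filtered) frame
pipeline.stop()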

Within ROS there is a realsense-ros package that provides a wrapper for working with the cameras in ROS and lets you view images and other data in RVIZ.

ROS RealSense Occupancy Map package is available as an experimental feature in a separate branch of the RealSense git repo. This uses both the D400 and T265 cameras for creating the map.

For SLAM with just the D435i sensor, see here.

Pro Tip 6: You can use multiple T265 sensors for better accuracy. For example, if one sensor is pointed forward and another backwards, you can use the confidence values from each sensor to feed into a filter.

I know this is starting to sound like a sales pitch…

Pro Tip 7: If you have multiple cameras, you can connect and query for a serial number to know which camera is which.
Also if you remove the little connector at the top of the camera you can wire and chain multiple cameras together to synchronize them. (This should work in the SDK 2.0)
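A short sketch of the serial-number part with pyrealsense2 is below (hardware sync over the sync connector is not shown). It lists the connected devices and then binds a pipeline to one specific camera.

import pyrealsense2 as rs

# List every connected RealSense device by its serial number.
ctx = rs.context()
serials = [dev.get_info(rs.camera_info.serial_number) for dev in ctx.query_devices()]
print(serials)

if serials:
    config = rs.config()
    config.enable_device(serials[0])       # bind this pipeline to one camera by serial
    pipeline = rs.pipeline()
    pipeline.start(config)
    # ... grab frames as in the snippets below ...
    pipeline.stop()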

See this whitepaper for working with multiple camera configurations. https://simplecore.intel.com/realsensehub/wp-content/uploads/sites/63/Multiple_Camera_WhitePaper04.pdf

Pro Tip 8: Infinite depth points (such as points too close or too far from the sensor) have a depth value of 0. This is good to know for filtering.

Here are two code snippet’s for using the cameras provided by Daniel, to share with you. The first one is for basic displaying of images using python. The second code snippet is using OpenCV to also detect blobs, also using python.

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2


cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:

        
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        
        scaled_depth=cv2.convertScaleAbs(depth_image, alpha=0.08)
        depth_colormap = cv2.applyColorMap(scaled_depth, cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.imshow('RealSense', images)


        k = cv2.waitKey(1) & 0xFF
        if k == 27:
            break

finally:

    # Stop streaming
    pipeline.stop()

The second snippet, which adds blob detection on a thresholded depth image, follows:

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2

def nothing(args):
    pass

def detectBlobs(mask):
        # Set up the SimpleBlobdetector with default parameters.
    params = cv2.SimpleBlobDetector_Params()
         
    # Change thresholds
    params.minThreshold = 1
    params.maxThreshold = 255
    
    # Filter by Area.
    params.filterByArea = True
    params.maxArea = 4000
    params.minArea = 300
    
    # Filter by Circularity
    params.filterByCircularity = True
    params.minCircularity = 0.1
    
    # Filter by Convexity
    params.filterByConvexity = True
    params.minConvexity = 0.5
    
    # Filter by Inertia
    params.filterByInertia = True
    params.minInertiaRatio = 0.1

         
    detector = cv2.SimpleBlobDetector_create(params)
 
    # Detect blobs.
    reversemask= mask
    keypoints = detector.detect(reversemask)
    im_with_keypoints = cv2.drawKeypoints(mask, keypoints, np.array([]),
            (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    return im_with_keypoints

def thresholdDepth(depth):
    depth[depth == 0] = 255  # set all invalid depth pixels (value 0) to the far limit
    threshold_value = cv2.getTrackbarPos('Threshold', 'Truncated Depth')
    # Zero out anything farther than the threshold (keep only close pixels)
    ret, truncated_depth = cv2.threshold(depth, threshold_value, 255, cv2.THRESH_BINARY_INV)
    return truncated_depth

cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
cv2.namedWindow('Truncated Depth', cv2.WINDOW_AUTOSIZE)
cv2.createTrackbar('Threshold','Truncated Depth',30,255,nothing)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:

        
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        scaled_depth=cv2.convertScaleAbs(depth_image, alpha=0.08)
        depth_colormap = cv2.applyColorMap(scaled_depth, cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.imshow('RealSense', images)
        truncated_depth=thresholdDepth(scaled_depth)
        truncated_depth=detectBlobs(truncated_depth)
        cv2.imshow('Truncated Depth', truncated_depth)

        k = cv2.waitKey(1) & 0xFF
        if k == 27:
            break

finally:

    # Stop streaming
    pipeline.stop()

I hope you found this informative and can make use of the Pro Tips.

Thank you to Daniel for presenting this information and allowing me to share it. This content is based on his talk. Daniel has also provided the full slide set that can be accessed by clicking here.

Disclaimer: I have not received any funding or free items from Intel.

Liked this article? Take a second to support me on Patreon! This post appeared first on Robots For Roboticists.

]]>
Using hydraulics for robots: Introduction https://robohub.org/using-hydraulics-for-robots-introduction/ Sun, 12 May 2019 21:41:56 +0000 https://robohub.org/using-hydraulics-for-robots-introduction/

From the Reservoir the fluid goes to the Pump, where there are three connections: 1. Accumulator (top), 2. Relief Valve (bottom), and 3. Control Valve. The Control Valve goes to the Cylinder, which returns through a filter and then back to the Reservoir.

Hydraulics are sometimes looked at as an alternative to electric motors.

Some of the primary reasons for this include:

  • Linear motion
  • Very high torque applications
  • Small package for a given torque
  • Large number of motors that can share the reservoir/pump can increase volume efficiency
  • You can add dampening for shock absorption

However there are also some downsides to using hydraulics including:

  • More parts are required (however they can be separated from the robot in some applications)
  • Less precise control (unless you use a proportional valve)
  • Hydraulic fluid (mess, leaks, mess, and more mess)

Hydraulic systems use an incompressible liquid (as opposed to pneumatics, which use a compressible gas) to transfer force from one place to another. Since the hydraulic system is a closed system (ignore relief valves for now), a force applied at one end of the system is transferred to another part of that system. By changing the piston areas at different points in the system you can change the forces in different parts of the system (remember Pascal's Law from high school?).
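As a back-of-the-envelope illustration of Pascal's Law (all numbers are made up): pressure is the same at both pistons in the closed circuit, so the output force scales with the ratio of piston areas.

import math

# Pascal's law: F_out = F_in * (A_out / A_in). Example numbers are illustrative only.
d_in, d_out = 0.02, 0.10                 # piston diameters in meters (2 cm and 10 cm)
a_in = math.pi * (d_in / 2) ** 2
a_out = math.pi * (d_out / 2) ** 2

f_in = 100.0                             # newtons applied at the small piston
pressure = f_in / a_in                   # Pa, identical at both pistons
f_out = pressure * a_out                 # = f_in * (a_out / a_in) = 2500 N here
# Trade-off: the small piston must travel 25x farther to move the same fluid volume.
print(pressure, f_out)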

So here are some of the basic components used (or needed) to develop a hydraulic system.

Pump

The pump is the heart of your hydraulic system. The pump controls the flow and pressure of the hydraulic fluid in your system that is used for moving the actuators.

The size and speed of the pump determine the flow rate, and the load at the actuator determines the pressure. For those familiar with electric motors, the pressure in the system is like the voltage, and the flow rate is like the electrical current.
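To make the analogy concrete: hydraulic power is pressure times flow rate, just as electrical power is voltage times current. The numbers below are illustrative only.

# Hydraulic power = pressure * flow rate (analogous to P = V * I in electronics).
pressure_pa = 10e6            # 10 MPa (~1450 psi), an illustrative working pressure
flow_m3_per_s = 1.0e-4        # 0.1 liters per second
power_w = pressure_pa * flow_m3_per_s
print(power_w)                # 1000 W delivered by the fluid (before losses)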

Pump Motor

We know what the pump is, but you need a way to “power” the pump so that it can pump the hydraulic fluid. Generally the way you power the pump is by connecting it to an electric motor or gas/diesel engine.

Hydraulic Fluid

Continuing the analogy where the pump is the heart, the hydraulic fluid is the blood of the system. The fluid is what is used to transfer the pressure from the pump to the motor.

Hydraulic Hoses (and fittings to connect things)

These are the arteries and veins of the system that allows for the transfer of hydraulic fluid.

Hydraulic Actuators – Motor/Cylinder

Cylinder [Source]
Motor [Source]

The actuator is generally the reason we are designing this hydraulic system. The motor is essentially the same as the pump; however instead of going from a mechanical input to generating the pressure, the motor converts the pressure into mechanical motion.

Actuators can come in the form of linear motion (referred to as a hydraulic cylinder) or rotary motion motors.

For cylinders, you generally apply a force and the cylinder end extends; if you release the force, the cylinder gets pushed back in (think of a car lift). This is the classic and most common use of hydraulics.

For rotary motors there are generally 3 connections on the motor.

  • A – Hydraulic fluid input/output line
  • B – Hydraulic fluid input/output line
  • Drain – Hydraulic fluid output line (generally only on motors, not cylinders)

Depending on the motor you can either only use A as the fluid input and B as the fluid output and the motor only spins in one direction. Or some motors can spin in either direction based on if A or B is used as the input or output of the hydraulic fluid.

The drain line is used so that when the system is turned off the fluid has a way to get out of the motor (to deal with internal leakage and to avoid blowing out seals). In some motors the drain line is connected to one of the A or B lines. Also, there are sometimes multiple drain lines so that you can route the hydraulic hoses from different locations.

Note: While the pump and motor are basically the same component, you usually cannot switch their roles, due to how each is designed to handle pressure and because pumps are usually not backdrivable.

There are some actuators that are designed to be leakless and hold the fluid and pressure (using valves) so that the force from the actuator is held even without the pump. For example these are used in things like automobile carrying trucks that need to stack cars for transport.

Reservoir

This is essentially a bucket that holds the fluid. Reservoirs are usually a little fancier than that, with over-pressure relief valves, lids, filters, etc.

The reservoir is also often a place where the hydraulic fluid can cool down if it is getting hot within the system. As the fluid gets hotter it can get thinner which can result in increased wear of your motor and pump.

Filter

Keeps your hydraulic fluid clean before it goes back to the reservoir. Kind of like a person's kidneys.

Valves (and Solenoids)

Valve (metal) with Solenoid (black) attached on top [Source]

Valves are things that open and close to allow the control of fluid. They can be controlled by hand (i.e. manually), or more often by some other means.

One common method is to use a solenoid which is a device that when you apply a voltage can be used to open a valve. Some solenoids are latching, which means you quickly apply a voltage and it opens the valves, and then you apply a voltage again (usually switching polarity) to close the valve.

There are many types of valves, I will detail a few below.

Check Valves (One Way Valve)

These are a type of valve that can be inline to allow the flow of hydraulic fluid in only one direction.

Relief Valve

These are a type of valve that automatically opens (and lets fluid out) when the pressure gets too high. This is a safety feature so you don't damage other components and/or cause an explosion.

Pilot Valve

These are another special class of valve that can use a small pressure to control a much larger pressure valve.

Pressure & Flow-rate Sensors/Gauges 

You need to have sensors (with a gauge or computer output) to measure the pressure and/or flow rate so you know how the system is operating and whether it is operating as you expect.

Accumulator

The accumulator is essentially just a tank that holds fluid under pressure that has its own pressure source. This is used to help smooth out the pressure and take any sudden loads from the motor by having this pressure reserve. This is almost like how capacitors are used in electrical power circuits.

The pressure source in the accumulator is often a weight, springs, or a gas.

There will often be a check valve to make sure the fluid in the accumulator does not go back to the pump.


I am not an expert on hydraulic systems. But I hope this quick introduction helps people. Liked it? Take a second to support me on Patreon! This post appeared first on Robots For Roboticists.

]]>
Rapid prototyping of the strider robot – smoother, stronger, faster https://robohub.org/rapid-prototyping-of-the-strider-robot-smoother-stronger-faster/ Wed, 29 Aug 2018 20:16:24 +0000 https://robohub.org/rapid-prototyping-of-the-strider-robot-smoother-stronger-faster/ This summer we used our Strider optimizer coupled with rapid prototyping in LEGO to refine its 10-bar linkage.

In a tiny fraction of the time it took us to refine TrotBot’s linkage in our garage, we explored numerous variations of Strider’s linkage and their trade-offs, and were able to significantly improve its performance in these categories:

Gait Efficiency. To be energy efficient, walking gaits should be smooth and not cause the robot to rise and fall – think how much more tiring it is to do lunges than it is to simply walk. The feet should also move at constant speed – try walking while changing how quickly your feet sweep across the ground and your leg muscles will quickly complain! Strider Ver 3 was optimized for both of these features, and it’s the only mechanism that we’ve tested which can walk with a 1:1 gear ratio without the LEGO motors stalling:

Rugged Terrain. Like with TrotBot, we wanted Strider to be able to walk on rugged terrain, otherwise why not just use wheels? But, mechanical walkers are blind and dumb, and they can’t lift their feet to avoid tripping on obstacles. So we designed Strider’s foot-path to mimic how you would walk blindfolded on a path crisscrossed by roots: lift your feet high and keep them high until stepping back down to the ground, like this:

Here’s a rugged terrain test of an 8-leg build:

Strength. Walkers stress frames more than wheeled vehicles, so we increased the frame’s rigidity with more triangles. Also, the leverage of long legs can put a tremendous amount of force on the leg’s joints, especially when turning them tank-style. We strengthened Strider’s leg joints by sandwiching them between beams like in the image below, which also reduces how quickly friction wears down the lips of the frictionless pins.

Here’s a test of Strider’s strength with a 10 pound load:

We made these changes while keeping Strider simple – it’s still our easiest walker to build. We also (hopefully) made our building instructions more clear. We learned how bad we are at creating building instructions from Ben’s experience teaching his walking robots class last spring. In the past when users would email us about something confusing in the instructions, we’d simply add more pictures to clarify things. This backfired in Ben’s class, since more build pictures resulted in the students getting more confused, and caused them to skim thru the pictures more quickly, and to make more mistakes. As we learned, less is more!

We also separated the leg and frame instructions to make it easier to build different versions of Strider, and we designed the frame to make it easy to swap between the battery box/IR controller and the EV3 brick. You can find the instructions here. Here’s an example build with the EV3 brick mounted underneath – but it would probably work with the brick mounted on top as well.

For the bold, we also posted a half dozen new variations of the Strider linkage for you to try building here, along with an online optimizer for the even bolder to create their own versions of the Strider linkage. We’ve also optimized Klann’s and Strandbeest’s linkages for LEGO, and you can find their online optimizers here. (jeez, I guess we’ve been busy)

Here’s a final teaser to encourage you (or your students) to take on the challenging task of replacing wheels with mechanical legs:

]]>
Underwater robot photography and videography https://robohub.org/underwater-robot-photography-and-videography/ Fri, 29 Dec 2017 23:29:00 +0000 http://robohub.org/underwater-robot-photography-and-videography/
I had somebody ask me questions this week about underwater photography and videography with robots (well, now it is a few weeks ago…). I am not an expert at underwater robotics; however, as a SCUBA diver I have some experience that is applicable to robotics.

Underwater Considerations

There are some challenges with underwater photography and videography that are less of an issue above the water. Some of them include:

1) Water reflects some of the light that hits the surface and absorbs the light that travels through it. This causes certain colors to not be visible at certain depths. If you need to see those colors, you often need to bring strong lights to restore the visibility of the wavelengths that were absorbed. Reds tend to disappear first; blue is the primary color seen as camera depth increases. A trick that people often try is to use filters on the camera lens to make certain colors more visible.

If you are using lights then you can get the true color of the target. Sometimes if you are taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general you want to get close to the target to minimize the light absorbed by the water.

Visible colors at given depths underwater. [Image Source]

For shallow water work you can often adjust the white balance to partially compensate for the missing colors. White balance goes a long way for video and compressed images (such as .jpg). Onboard white balance adjustments are not as important for photographs stored in a raw image format, since you can deal with it in post processing. Having a white or grey card in the camera field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
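Here is a minimal OpenCV sketch of the grey-card idea: sample a region known to be neutral grey and scale each channel so that region becomes neutral. The file name and ROI coordinates are placeholders for wherever the card sits in your robot's field of view.

import cv2
import numpy as np

def white_balance_from_card(image_bgr, roi):
    """Scale each color channel so the grey-card region becomes neutral grey.

    roi is (x, y, w, h) in pixels -- placeholder values for wherever the
    card appears in your camera's field of view.
    """
    x, y, w, h = roi
    card = image_bgr[y:y + h, x:x + w].astype(np.float32)
    means = card.reshape(-1, 3).mean(axis=0)          # per-channel mean on the card
    gains = means.mean() / np.maximum(means, 1e-6)    # push each channel toward grey
    balanced = image_bgr.astype(np.float32) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

frame = cv2.imread("underwater_frame.png")            # hypothetical file name
corrected = white_balance_from_card(frame, roi=(20, 20, 60, 60))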

Cold temperate water tends to look green (such as in a freshwater quarry), I think from plankton, algae, etc. Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get farther from land (I think based on how light reflects off the bottom of the water). Using artificial light sources (such as strobes) can minimize those color casts in your imagery.

Auto focus generally works fine underwater. However if you are in the dark you might need to keep a focus light turned on to help the autofocus work, and then a separate strobe flash for taking the image. Some systems turn the focus light off when the images are being taken. This is generally not needed for video as the lights are continuously turned on.

2) Objects underwater appear closer and larger than they really are. A rule of thumb is that the objects will appear 25% larger and/or closer.

3) Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (like the camera is out of focus). A rule of thumb is that your target should be no farther from the camera than 1/4 of your total visibility distance.

The measure of the visibility is called turbidity. You can get turbidity sensors that might let you do something smart (I need to think about this more).

To minimize the backscatter from turbidity there is not a “one size fits all” solution. The key to minimizing backscatter is to control how light strikes the particles. For example if you are using two lights (angled at the left and right of the target), the edge of each cone of light should meet at the target. This way the water between the camera and the target is not illuminated. For wide-angle lenses you often want the light to be behind the camera (out of its plane) and to the sides at 45° angles to the target. With macro lenses you usually want the lights close to the lens.

“If you have a wide-angle lens you probably will use a domed port to protect the camera from water and get the full field of view of the camera.
The dome however can cause distortion in the corners. Here is an interesting article on flat vs dome ports.”

Another tip is to increase the exposure time (such as 1/50th of a second) to allow more natural light in, and use less strobe light to reduce the effect from backscatter.

4) Being underwater usually means you need to seal the camera from water, salts, (and maybe sharks). Make sure the enclosure and seals can withstand the pressure from the depth the robot will be at. Also remember to clean (and lubricate) the O rings in the housing.

“Pro Tip:Here are some common reasons for O ring seals leaking:
a. Old or damaged O rings. Remember O rings don’t last forever and need to be changed.
b. Using the wrong O ring
c. Hair, lint, or dirt getting on the O ring
d. Using no lubricant on the O ring
e. Using too much lubricant on the O rings. (Remember on most systems the lubricant is for small imperfections in the O ring and to help slide the O rings in and out of position.)”

5) On land it is often easy to hold a steady position. Underwater it is harder to hold the camera stable with minimal motion. If the camera is moving a faster shutter speed might be needed to avoid motion blur. This also means that less light is entering the camera, which is the downside of having the faster shutter speed.

When (not if) your camera floods

When your enclosure floods while underwater (or a water sensor alert is triggered):

a. Shut the camera power off as soon as you can.
b. Check if water is actually in the camera. Sometimes humidity can trigger moisture sensors. If it is humidity, you can add desiccant packets in the camera housing.
c. If there is water, try to take the camera apart as much as you reasonably can and let it dry. After drying you can try to turn the camera on and hope that it works. If it works then you are lucky, however remember there can be residual corrosion that causes the camera to fail in the future. Water damage can happen instantaneously or over time.
d. Verify that the enclosure/seals are good before sending the camera back in to the water. It is often good to do a leak test in a sink or pool before going into larger bodies of water.
e. The above items are a standard response to a flooded camera. You should read the owner’s manual of your camera and follow those instructions. (This should be obvious, I am not sure why I am writing this).


Do you have other advice for using cameras underwater and/or attached to a robot? Leave them in the comment section below.


I want to thank John Anderson for some advice for writing this post. Any mistakes that may be in the article are mine and not his.

The main image is from divephotoguide.com. They have a lot of information on underwater cameras, lens, lights and more.

This post appeared first on Robots For Roboticists.

]]>
Battery safety and fire handling https://robohub.org/battery-safety-and-fire-handling/ Tue, 14 Nov 2017 18:00:28 +0000 http://robohub.org/battery-safety-and-fire-handling/

Lithium battery safety is an important issue as there are more and more reports of fires and explosions. Fires have been reported in everything from cell phones to airplanes to robots.

If you don’t know why we need to discuss this, or even if you do know, watch this clip or click here.

I am not a fire expert. This post is based on things I have heard and some basic research. Contact your local fire department for advice specific to your situation. I had very little success contacting my local fire department about this, hopefully you will have more luck.

Preventing Problems

1. Use a proper charger for your battery type and voltage. This will help prevent overcharging. In many cases lithium-ion batteries catch fire when the chargers keep dumping charge into the batteries after the maximum voltage has been reached.

2. Use a battery management system (BMS) when building battery packs with multiple cells. A BMS will monitor the voltage of each cell and halt charging when any cell reaches the maximum voltage. Cheap BMSs will stop all charging when any cell reaches that maximum voltage. Fancier/better BMSs can individually charge each cell to help keep the battery pack balanced. A balanced pack is good since each cell will be at a similar voltage for optimal battery pack performance. The fancy BMSs can also often detect if a single cell is reading wrong. There have been cases of a BMS working properly but a single cell going bad, which confused the BMS and led to a fire/explosion. (A minimal sketch of this cell-monitoring logic appears after this list.)

3. Only charge batteries in designated areas. A designated area should be non combustible. For example cement, sand, cinder block and metal boxes are not uncommon to use for charging areas. For smaller cells you can purchase fire containment bags designed to put the charging battery in.
A LiPo/lithium-ion battery charging bag.

In addition the area where you charge the batteries should have good ventilation.

I have heard that on the Boeing Dreamliner, part of the fix for their batteries catching fire was to make sure that the metal enclosure holding the batteries could withstand the heat of a battery fire, and that in the event of a fire the fumes would vent outside the aircraft rather than into the cabin.


Dreamliner battery pack before and after fire. [SOURCE]

4. Avoid short circuiting the batteries. A short circuit can cause thermal runaway, which will also cause a fire or explosion. When I say avoid short circuiting the battery, you are probably thinking of just touching the positive and negative leads together. While that is one example, you need to think of other causes as well. For example, puncturing a cell (such as with a drill bit or a screwdriver) or compressing the cells can cause a short circuit with a resulting thermal runaway.

5. Don’t leave batteries unattended when charging, so that somebody is available in case of a problem. However, as you saw in the video above, you might want to keep your distance from the battery in case there is a catastrophic event with flames shooting out of the pack.

6. Store batteries within the specs of the battery. Usually that means room temperature and out of direct sunlight (to avoid overheating).

7. Train personnel in handling batteries, charging batteries, and what to do in the event of a fire. Having people trained in what to do is important so that they stay safe; for example, without training people might not realize how bad the fumes are. Also make sure people know where the fire pull stations and extinguishers are.

Handling Fires

1. There are two primary issues with a lithium fire: the fire itself and the gases released. This means that even if you think you can safely extinguish the fire, you need to keep the fumes in mind and possibly move away from the fire.

2a. Lithium batteries, which are usually small non-rechargeable batteries (such as in a watch), in theory require a class D fire extinguisher. However, most people do not have one available, so for the most part you need to just let the fire burn itself out (it is good that these batteries are usually small). You can use a standard class ABC fire extinguisher to prevent the spread of the fire. Avoid using water on the lithium battery itself, since lithium and water can react violently.

2b. Lithium-ion batteries (including LiFePO4), which are used on many robots, are often larger and rechargeable. These batteries contain very little actual lithium metal, so you can use water or a class ABC fire extinguisher. Do not use a class D extinguisher with these batteries.

With both of these types of fires, there is a good chance that you will not be able to extinguish the fire. If you can safely be in the area, your primary goal is to let the battery burn in a controlled and safe manner. If possible, get the battery outside and onto a surface that is not combustible. As a reminder, lithium-ion fires are very hot and flames can shoot out from various places unexpectedly; be careful and only do what you can do safely. If the battery has multiple cells, it is not uncommon for each cell to catch fire separately, so you might see the flames die down, then shortly after another cell catches fire, and then another, as the cells cascade.


A quick reminder about how to use a fire extinguisher (PASS): first you Pull the pin, then you Aim at the base of the fire, then you Squeeze the handle, followed by Sweeping back and forth at the base of the fire. [SOURCE]

3. In many cases the batteries are in an enclosure, so if you spray the robot with an extinguisher you will not even reach the batteries. In this case your priority is your own safety (from fire and fumes), followed by preventing the fire from spreading. To prevent the fire from spreading, make sure all combustible material is away from the robot, and if possible get the battery pack outside.

In firefighting school a common question is: Who is the most important person? To which the response is, me!

4. If the battery is charging, try to unplug the charger from the wall. Again, only do this if you can do it safely.


I hope you found the above useful. I am not an expert on lithium batteries or fire safety. Consult with your local experts and fire authorities. I am writing this post due to the lack of information in the robotics community about battery safety.

As Wired put it, “you know what they say: With great power comes great responsibility.”


Thank you Jeff (I think he said I should call him Merlin) for some help with this topic.

This post appeared first on Robots For Roboticists.

]]>
National Robot Safety Conference 2017 https://robohub.org/national-robot-safety-conference-2017/ Thu, 12 Oct 2017 22:11:33 +0000 http://robohub.org/national-robot-safety-conference-2017/

I had the opportunity to attend the first day of the three-day National Robot Safety Conference for Industrial Robots in Pittsburgh, PA (USA). While I mostly cover technical content on this site, I felt this was an important conference to attend, since safety and safety standards are becoming more and more important in robot system design. This conference focused specifically on industrial robots, which means the standards discussed were not directly related to self-driving cars, personal robotics, or space robots (you still don’t want to crash into a Martian and start an inter-galactic war).

In this post I will go into a bit of detail on the presentations from the first day. Part of the reason I wanted to attend the first day was to hear the overview and introductory talks that formed a base for the rest of the sessions.

The day started out with some Standards Bingo. Lucky for us, the conference organizers provided a list of standards terms, abbreviations, codes, and titles (see link below). For somebody like me, who does not work with industrial robot safety standards every day, it can get confusing very fast when people start rattling off standard numbers.

Quick, what are ISO 10218-1:2011 and IEC 60204-1:2016? For those who do not know (me included), those are Safety requirements for industrial robots — Part 1: Robots, and Safety of machinery – Electrical equipment — Part 1: General requirements, respectively.

Click here for a post with a guide to relevant safety standards, Abbreviations, Codes & Titles.

The next talk was from Carla Silver of Merck & Company Inc. She discussed what safety team members need to remember to be successful, and introduced Carla’s Top Five List:

  1. Do not assume you know everything about the safety of a piece of equipment!
  2. Do not assume that the Equipment Vendor has provided all the information or understands the hazards of the equipment.
  3. Do not assume that the vendor has built and installed the equipment to meet all safety regulations.
  4. Be a “part of the process” – make sure to involve the entire team (including health and safety people).
  5. Continuous Education

I think those 5 items are a good list for life in general.

The prior talk set the stage for why safety can be tricky and the amount of work it takes to stay up to date.

Robot integrator is a certification (and a way to make money) from the Robotic Industries Association (RIA) that helps ensure people are trained to fill the safety role while integrating and designing new robot systems.

According to Bob Doyle, the RIA Director of Communications, RIA certified robot integrators must understand current industry safety standards and undergo an on-site audit in order to get certified. Every two years they need to recertify; part of the recertification is having an RIA auditor perform a site visit, and when recertifying the integrators are expected to know the current standards. I was happy to hear about the two-year recertification, given how much robotics technology changes over two years.

A bit unrelated, but A3 is the umbrella association for the Robotic Industries Association (RIA) as well as Advancing Vision & Imaging (AIA) and the Motion Control & Motor Association (MCMA). Bob mentioned that the AIA and MCMA certifications are standalone from the RIA Certified Integrator program; however, both are growing as a way to train industrial engineers for those applications. Both the AIA and MCMA certifications are vendor agnostic for the technology used. There are currently several hundred people with the AIA certification; the MCMA certification was just released earlier this year and has several dozen people certified. Bob said that several companies now require at least one team member on a project to have the above certifications.

The next talk really started to get into the details about robot system integrators and best practices, in particular risk assessments. The risk assessment is a relatively new part of the integration process, but it has a strong focus in the current program. Risk assessments are important due to the number of potential safety hazards and the different types of interactions a user might have with the robot system. The risk assessment helps guide the design as well as how users should interact with the robot. The responsibility to perform this risk assessment lies with the robot integrator, not directly with the manufacturer or end-user. (A rough sketch of the hazard-scoring idea follows.)
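The actual RIA methodology is more involved than this, but the basic idea of scoring each hazard and letting the score drive mitigation can be sketched roughly as below. The scales and thresholds are illustrative only; they are not taken from R15.06 or any RIA document.

from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    severity: int     # 1 (minor) to 4 (fatal); illustrative scale only
    likelihood: int   # 1 (rare) to 4 (frequent); illustrative scale only

    @property
    def risk(self):
        return self.severity * self.likelihood

hazards = [
    Hazard("Operator reaches into the cell during a cycle", severity=4, likelihood=2),
    Hazard("Pinch point at the conveyor transfer", severity=2, likelihood=3),
]

for h in sorted(hazards, key=lambda h: h.risk, reverse=True):
    action = "reduce risk before start-up" if h.risk >= 8 else "document and monitor"
    print(f"{h.description}: risk score {h.risk} -> {action}")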

One thing I heard that surprised me was that many integrators do not share the risk assessment with the end-user, since it is considered proprietary to that integrator. However, one participant said that you can often get them to discuss it in a meeting or over the phone; they just will not hand over the documents.

After a small coffee break we moved on to discussing some of the standards in detail: in particular R15.06, the industrial robot safety standard; the proposed R15.08 standard for industrial mobile robot safety; and R15.606, the collaborative robot safety standard. Here are a few notes that I took:

Types of Standards

  • A – Basic concepts — Ex. guidance to assess risk
  • B – Generic safety standards — Ex. safety distances, interlocks, etc.
  • C – Machine specific — Ex. from the vendor for a particular robot.

Type C standards overrule type A & B standards.

Parts of a Standard

  • Normative – These are required and often use the language of “shall”
  • Informative – These are recommendations or advice and use the language of “should” or “can”. Notes in standards are considered informative.

Key Terms for Safety Standards

  • Industrial Robot – Robot manipulator of at least 3 DOF and its controller
  • Robot System – The industrial robot with its end effector, work piece and peripheral equipment (such as a conveyor).
  • Robot Cell – The robot system with its safeguarded space, including the physical barriers.

Case study that was presented of a 3 robot system in a single cell, and how it was designed to meet safety standards.

R15.06 is all about “keeping people safe by keeping them away from the robot system”. This obviously does not work for mobile robots that move around people and collaborative robots. For that the proposed R15.08 standard for mobile robots and the R15.606 standard for collaborative robots are needed.

R15.08, which is expected to be ratified as a standard in 2019, covers things like mobile robots, manipulators on mobile robots, and manipulators working while the mobile base is also moving. Among other things, the current draft says that if an obstacle is detected, the primary response is for the robot to stop; however, dynamic replanning will be allowed. (A toy illustration of that behavior is sketched below.)
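As a toy illustration of that "stop first, then optionally replan" behavior, here is a sketch. obstacle_detected(), plan_new_path(), and the robot object's stop()/follow() methods are hypothetical placeholders, not part of the standard or of any real API.

def obstacle_detected():
    """Placeholder: return True when the robot's sensors see an obstacle."""
    raise NotImplementedError("hook this up to your robot's perception stack")

def plan_new_path():
    """Placeholder: return a new path around the obstacle, or None if stuck."""
    raise NotImplementedError("hook this up to your planner")

def control_step(robot):
    """One control cycle: stop first on detection, then optionally replan."""
    if obstacle_detected():
        robot.stop()                  # primary response: stop
        new_path = plan_new_path()    # dynamic replanning is allowed
        if new_path is not None:
            robot.follow(new_path)
    # otherwise keep executing the current path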

For R15.606 they are trying to get rid of the term collaborative robot (a robot designed for direct interaction with a human) and instead think about systems in terms of their application. For example:

…a robotic application where an operator may interact directly with a robot system without relying on perimeter safeguards for protection in pre-determined, low risk tasks…

collaborative robots

After all the talk about standards we spent a bit of time looking at various case studies that were very illuminating for designing industrial robotic systems, and some of the problems that can occur.

One unrelated but funny thing, given that this was a safety conference: a person sitting near the back of the room pulled a roll of packing tape out of their backpack to tape down their laptop power cable where it ran across the floor.

I hope you found this interesting. This was the 29th annual national robot safety meeting (really, I did not realize we had been using robots in industry for that long). If you want to find out more about safety and how it affects your work and robots make sure to attend next year.


I would like to thank RIA for giving me a media pass to attend this event.

]]>
Robotbenchmark lets you program simulated robots from your browser https://robohub.org/robotbenchmark-lets-you-program-simulated-robots-from-your-browser/ Tue, 22 Aug 2017 15:56:31 +0000 http://robohub.org/robotbenchmark-lets-you-program-simulated-robots-from-your-browser/

Cyberbotics Ltd. is launching https://robotbenchmark.net to allow everyone to program simulated robots online for free.

Robotbenchmark offers a series of robot programming challenges that address various topics across a wide range of difficulty levels, from middle school to PhD. Users don’t need to install any software on their computer; cloud-based 3D robotics simulations run in a web page. They can learn programming by writing Python code to control robot behavior. The performance achieved by users is recorded and displayed online, so that they can challenge their friends and show off their robot programming skills on social networks. Everything is designed to be extremely easy to use, runs on any computer and any web browser, and is totally free of charge. (A minimal sketch of what such a controller looks like follows.)
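For readers curious what the Python side typically looks like, here is a minimal Webots-style controller sketch. The device name is a placeholder, and the exact API calls (for example getDevice versus the older getMotor) depend on the Webots version and the specific benchmark, so check the challenge documentation.

from controller import Robot   # Webots' Python controller API

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# "left wheel motor" is a placeholder; use the device names your benchmark defines.
motor = robot.getDevice("left wheel motor")
motor.setPosition(float("inf"))   # switch the motor to velocity control
motor.setVelocity(2.0)            # radians per second

while robot.step(timestep) != -1:
    pass                          # per-step sensing and control logic goes here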

This project is funded by Cyberbotics Ltd. and the Human Brain Project.

About Cyberbotics Ltd.: Cyberbotics is a Swiss company, a spin-off from the École Polytechnique Fédérale de Lausanne, specialized in the development of robotics simulation software. It has been developing and selling the Webots software for more than 19 years. Webots is reference software for robotics simulation, used in more than 1200 companies and universities across the world. Cyberbotics is also involved in industrial and research projects, such as the Human Brain Project.

About the Human Brain Project: The Human Brain Project is a large ten-year scientific research project that aims to build a collaborative ICT-based scientific research infrastructure to allow researchers across the globe to advance knowledge in the fields of neuroscience, computing, neurorobotics, and brain-related medicine. The Project, which started on 1 October 2013, is a European Commission Future and Emerging Technologies Flagship. Based in Geneva, Switzerland, it is coordinated by the École Polytechnique Fédérale de Lausanne and is largely funded by the European Union.

]]>