Ricardo Téllez – Robohub https://robohub.org
Flowstate: Intrinsic’s app to simplify the creation of robotics applications https://robohub.org/flowstate-intrinsics-app-to-simplify-the-creation-of-robotics-applications/ Sun, 18 Jun 2023 08:30:04 +0000 https://www.theconstructsim.com/?p=37325

Copyright by Intrinsic.

Finally, Intrinsic (a spin-off of Google X) has revealed the product they have been working on with the help of the Open Source Robotics Corporation team (among others): Flowstate!

What is Flowstate?

Introducing Intrinsic Flowstate | Intrinsic (image copyright by Intrinsic)

Flowstate is a web-based software designed to simplify the creation of software applications for industrial robots. The application provides a user-friendly desktop environment where blocks can be combined to define the desired behavior of an industrial robot for specific tasks.

Good points

  • Flowstate offers a range of features, including simulation testing, debugging tools, and seamless deployment to real robots.
  • It is based on ROS, so we should be able to use our favorite framework, and all the existing ROS software, to program with it, including Gazebo simulations.
  • It has a behavior-tree-based system to graphically control the flow of the program, which simplifies creating programs to just moving blocks around. It is also possible to switch to an expert mode and edit the code directly.
  • It has a library of already existing robot models and hardware ready to be added, but you can also add your own.
  • Additionally, the application provides pre-built AI skills that can be utilized as modules to achieve complex AI results without the need for manual coding.
  • One limitation (which I actually consider a good point) is that the tool is intended for industrial robots, not for service robots in general. This is good because it provides a focus for the product, especially for this initial release.

Flowstate | Intrinsic (image copyright by Intrinsic)

Based on the official post and the keynote released on Monday, May 15, 2023 (available here), this is the information we have gathered so far. However, we currently lack a comprehensive understanding of how the software works, its complete feature set, and any potential limitations. To gain more insights, we will have to wait until July of this year; I hope to be among the lucky participants selected for the private beta (the open call for the beta is still available here).

Unclear points

Even though I find Intrinsic’s proposal interesting, I have identified three potential concerns regarding it:

  1. Interoperability across different hardware and software platforms poses a challenge. The recruitment of the full OSRC team by Intrinsic appears to address this issue, given that ROS is currently the closest system in the market to achieve such interoperability. However, widespread adoption of ROS by industrial robot manufacturers is still limited, with only a few companies embracing it.

    Ensuring hardware interoperability requires robot manufacturers to adopt a common framework, which is currently a distant reality. What we ROS developers aim for right now is to have somebody build the ROS drivers for the robotic arm we want to use (for example, the manufacturer of the robot, or the ROS-Industrial team). However, manufacturers generally hesitate to develop ROS drivers because of potential business limitations and their interest in customer lock-in. Unless a platform dedicates substantial resources to developing and maintaining drivers for the supported robots, the challenge of hardware interoperability cannot be solved by a platform alone (in fact, that is one of the goals ROS-Industrial is trying to achieve).

    Google has the potential to unite hardware companies towards this goal; as Wendy Tan White, the CEO of Intrinsic, mentioned, “This is an ecosystem effort.” However, it is crucial for the industrial community to perceive tangible benefits and value in supporting this initiative beyond merely helping others build their businesses. The specific benefits that the ecosystem stands to gain by supporting it remain unclear.

  Flowstate | Intrinsic (image copyright by Intrinsic)

  2. The availability of pre-made AI skills for robots is a complex task. Consider the widely used skills in ROS, such as navigation or arm path planning, exemplified by Nav2 and MoveIt, which offer excellent functionality. However, integrating these skills into new robots is not as simple as plug-and-play. In fact, dedicated courses exist to teach users how to effectively utilize the different components of navigation within a robot. This highlights the challenges associated with implementing such skills for robots in general. Thus, it is reasonable to anticipate similar difficulties in developing pre-made skills within Flowstate.
  3. A final point that is not clear to me (because it was not addressed in the presentation) is how the company is going to do business with Flowstate. This is a very important point for every robotics developer, because we don’t want to be locked into proprietary systems. We understand that companies must have a business, but we want to understand clearly what that business is, so we can decide whether it is convenient for us, both in the short and the long run. For instance, RoboMaker from Amazon did not gain much traction because it forced developers to pay for the cloud while running RoboMaker, when they could do the same thing (with less fancy tooling) on their own local computers for free.

Conclusion

Overall, while Flowstate looks promising, further information and hands-on experience are required to assess its effectiveness and address potential challenges.

I have applied to the restricted beta. I hope to be selected so I can have first-hand experience and report on it.

Please make sure to read the original post by Wendy Tan White and watch the keynote presentation; both can be found on Intrinsic’s website.

Flowstate | Intrinsic (image copyright by Intrinsic)

ROS Awards 2022 results https://robohub.org/ros-awards-2022-results/ Sat, 02 Jul 2022 09:30:17 +0000 https://www.theconstructsim.com/?p=29736

The ROS Awards are the Oscars of the ROS world! The intention of these awards is to express recognition for contributions to the ROS community and the development of the ROS-based robot industry, and to help those contributions gain awareness.

Conditions

  • Selection of the winners is made by anonymous online voting over a period of 2 weeks
  • Anybody in the ROS community can vote through the voting-enabled website
  • Organizers of the awards provide an initial list of 10 possible projects for each category, but the list can be extended at any time by anybody during the voting period
  • Since the Awards are organized by The Construct, none of its products or developers can be voted for
  • Winners are announced at the ROS Developers Day yearly conference
  • New in the 2022 edition: winners of previous editions cannot win again, so that the focus is not concentrated on the same projects all the time. Remember, with these awards we want to help spread all ROS projects!

Voting

  1. Every person can only vote once in each category
  2. You cannot change your answers once you have submitted your vote
  3. Voting is closed 3 days before the conference, and a list of the finalists for each category is announced in the same week
  4. Voters cannot exploit flaws in the system to influence the voting. Any detected attempt to trick the system will disqualify the votes involved. You can, though, promote your favorite among your networks so others vote for it.

Measures have been taken to prevent as much as possible batch voting from a single person.

Categories

Best ROS Software

The Best ROS Software category comprises any software that runs with ROS. It can be a package published in the ROS.org repository or simply a piece of software that uses the ROS libraries. Open source and closed source are both valid.

Finalists

  1. Ignition Gazebo, by Open Robotics
  2. Groot Behavior Tree, by Davide Faconti
  3. Webots, by Cyberbotics
  4. SMACC2, by Brett Aldrich
  5. ros2_control, by several ROS developers
  6. PlotJuggler, by Davide Faconti

Winner: Webots, by Cyberbotics

Learn more about the winner in this video:

Best ROS-Based Robot

The Best ROS-Based Robot category includes any robot that runs ROS inside it. They can be robotics products, robots for research, or robots for education. In all cases, they must be running ROS inside.

Finalists

  1. Panda robot arm, by Franka Emika
  2. TIAGo, by Pal Robotics
  3. UR robot arm, by Universal Robots
  4. Turtlebot 4, by Clearpath
  5. Nanosaur, by Raffaello Bonghi
  6. Leo Rover, by Leo Rover

Winner: Nanosaur, by Raffaello Bonghi

Learn more about the winner in this video:

Best ROS Developer

Developers are the ones that create all the ROS software that we love. The Best ROS Developer category allows you to vote for any developer who has contributed to ROS development in one sense or another.

Finalists

  1. Francisco Martín
  2. Davide Faconti
  3. Raffaello Bonghi
  4. Brett Aldrich
  5. Victor Mayoral Vilches
  6. Pradheep Krishna

Winner: Francisco Martín

Learn more about the winner in this video:

Insights from the 2022 Edition

  1. This year, the third time we have organised the awards, we increased the total number of votes by 500%, so we can say that the winners are a good representation of the feelings of the community.
  2. Still, this year the winners of previous editions received many votes. Fortunately, we applied the new rule that previous winners cannot win again, in order to give other ROS projects space in the community’s focus and hence help to create a rich ROS ecosystem.

Conclusions

The ROS Awards started in 2020 with a first edition in which the winners were some of the best-known projects in the ROS world. In this third edition, we have massively increased the number of votes compared with the previous edition. We expect this award will continue to contribute to the spread of good ROS projects.

See you again at ROS Awards 2023!

How to build a robotics startup: getting some money to start https://robohub.org/how-to-build-a-robotics-startup-getting-some-money-to-start/ Tue, 18 May 2021 14:16:24 +0000 https://robohub.org/how-to-build-a-robotics-startup-getting-some-money-to-start/

This episode is about the options you have to get some money to start your startup, and what you are expected to achieve with that money.

In this podcast series we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct startup, a robotics startup at which we deliver the best learning experience to become a ROS Developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old, and we are a team of 10 people working from around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide a teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of that robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video


How to build a robotics startup: getting the team right https://robohub.org/how-to-build-a-robotics-startup-getting-the-team-right/ Tue, 06 Apr 2021 10:39:03 +0000 https://robohub.org/how-to-build-a-robotics-startup-getting-the-team-right/

This episode is about understanding why you can’t build your startup alone, and some criteria to properly select your co-founders.

In this podcast series we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct startup, a robotics startup at which we deliver the best learning experience to become a ROS Developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old, and we are a team of 10 people working from around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide a teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of that robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video


How to build a robotics startup: the product idea https://robohub.org/how-to-build-a-robotics-startup-the-product-idea/ Tue, 23 Feb 2021 13:03:46 +0000 https://robohub.org/how-to-build-a-robotics-startup-the-product-idea/

In this podcast series we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I’m Ricardo Tellez, CEO and co-founder of The Construct startup, a robotics startup at which we deliver the best learning experience to become a ROS Developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old, and we are a team of 10 people working from around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide a teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I’m going to teach you how to build your own startup. We are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of that robotics startup.

This episode is about deciding the product your startup will produce.

Related links

Subscribe to the podcast using any of the following methods

Or watch the video


Teaching robotics with ROS https://robohub.org/teaching-robotics-with-ros/ Sat, 24 Nov 2018 23:29:33 +0000 http://www.theconstructsim.com/?p=11523

A couple of months ago I interviewed Joel Esposito about the state of robotics education for the ROS Developers Podcast #21. On that podcast, Joel talks about his research on how robotics is taught around the world. He identifies a set of common robotics subjects that need to be explained in order for students to learn about robotics, and a list of resources that people are using to teach them. But most importantly, he points out the importance of having students practice with robots what they learn.

From my point of view, robotics is about doing, not about reading. A robotics class cannot be about learning how to compute the Jacobian matrix to solve the inverse kinematics. Computing the Jacobian has nothing to do with robotics; it is just a mathematical tool that we use to solve a robotics problem. That is why a robotics class should not be focused on how to compute the Jacobian but on how to move the end effector to a given location.

  • I want to move this robotic arm’s end effector to that point, how do I do it?
  • Well, you need to represent the current configuration of the arm in a matrix!
  • Ah, ok, how do I do that?
  • You need to use a matrix to encode its current state. Let’s do it. Take that simulated robot arm and create a ROS node that subscribes to /joint_states and captures the joint values. From those values, build a matrix and store them in it. The node has to be able to update the matrix at any time step, so if I move the arm, the matrix must change. (A minimal sketch of such a node appears right after this dialogue.)
  • […]
  • Done! So what do I have to do now if I want to move the robot gripper close to this bottle?
  • You need to compute the Jacobian. Let’s do it! Create a program that lets you manually introduce a desired position for the end effector. Then, based on that data, compute the Jacobian.
  • How do I compute the Jacobian?
  • I love that you ask me that!
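
Below is a minimal sketch of the first exercise in that dialogue, assuming ROS 1 and rospy; the node and class names are only illustrative:

```python
#!/usr/bin/env python
# Exercise sketch: keep an up-to-date vector of joint values read from
# /joint_states, so it changes whenever the (simulated) arm moves.
import numpy as np
import rospy
from sensor_msgs.msg import JointState


class ArmStateTracker(object):
    def __init__(self):
        self.q = None  # latest joint configuration as a column vector

    def joint_states_callback(self, msg):
        # msg.position holds the joint angles published by the simulation
        self.q = np.array(msg.position).reshape(-1, 1)


if __name__ == '__main__':
    rospy.init_node('arm_state_tracker')
    tracker = ArmStateTracker()
    rospy.Subscriber('/joint_states', JointState, tracker.joint_states_callback)
    rospy.spin()
```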

I understand that learning and studying theory is a big part of robotics, but not the biggest. The biggest should be practicing and doing with robots. For that, I propose to use ROS as the base system, the enabler, that allows practical practice (does that exist?!?!) while learning robotics.

The proposal

Teaching subjects as complex as inverse kinematics or robot navigation should not be (just) a theoretical matter. Practice should be embedded into the teaching of those complex robotics subjects.

I propose to teach ROS at the same time that we teach robotics, and use the former during the whole robotics semester as a tool to build and implement the robotics subject we are teaching. The idea is that we use ROS to allow the students to actually practice what they are learning. For instance, if we are talking about the different algorithms for obstacle avoidance, we also provide a simulated robot and make the student create a ROS program that actually implements the algorithm for that robot. By following this approach, the learning of the student is not only theoretical but also includes the practice of what is being taught.

The advantage of using ROS in this procedure is that ROS already provides a lot of material that can provide a working infrastructure, where you as a teacher, can concentrate on teaching the student how to create the small part that is required for the specific subject you are teaching.

We have so many tools at present that were not available just 5 years ago… Let’s make use of them to increase the quality and quantity of student learning! Instead of using Powerpoint slides, let’s use interactive Python notebooks. Instead of using a single robot for the whole class, let’s provide a robot simulation to each student. Instead of providing an equation on the screen, let’s provide students with its implementation in a ROS node for them to modify.

Students learn robotics with ROS

Not a new method

What I preach is what I practice. This is the method that I am using in my Robot Navigation class at the University of LaSalle in Barcelona. In this class, I teach the basics of how to make a robot autonomously move from one point in space to another while avoiding obstacles. You know, SLAM, particle filters, Dynamic Window Approaches and the like. As part of the class, students must learn ROS and use it to implement some of the theoretical concepts I explain to them, like, for example, how to compute odometry based on the values provided by the encoders.
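
To make that exercise concrete, the dead-reckoning update the students implement boils down to something like the following sketch for a differential-drive robot (variable names and the wheel_base parameter are illustrative):

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    # One dead-reckoning step for a differential-drive robot.
    # d_left and d_right are the distances travelled by each wheel since the
    # last update (obtained from the encoder ticks); wheel_base is the
    # distance between the two wheels.
    d_center = (d_left + d_right) / 2.0        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```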

However, I’m not the only one using this method to teach robotics. For example, professor Ross Knepper from Cornell University explained to me on Using ROS to teach the foundations of robotics for the ROS Developers podcast #23 how he teaches ROS and robotics in parallel. He even goes further, forbidding students from using much existing ROS software, like the ROS navigation stack or MoveIt! His point is very good: he wants the students to actually learn how to do it, not just how to use something that somebody else has done (like the navigation stack). But he uses the ROS infrastructure anyway.

How to teach robotics with ROS

The method would involve the following steps:

  1. You start with a ROS class that explains the most basic topics of ROS, just what is required to start working
  2. Then you start explaining the robotics subject of your choice, making the students implement what you are teaching in a ROS program, using robots. I advise using simulated robots.
  3. Whenever a new ROS concept is required to continue the implementation of some algorithm, dedicate a class to that concept.
  4. Continue with the next robotics subject
  5. Add a real robot project where students must apply all that they have learned in a single project with a real ROS-based robot
  6. Add an exam

1. Explain the basic ROS concepts

For this step, you should describe the basic subjects of ROS that would allow a student to create ROS programs. Those subjects include the following:

  • ROS packages
  • Launching ROS nodes
  • Basic ROS line commands
  • ROS topic subscribers
  • ROS topic publishers
  • Rviz for visualization and debugging

It is not necessary to dedicate too much time to those subjects, just the time necessary for the students to know about ROS. Deeper knowledge and understanding of ROS will be gained by practicing throughout the different classes of the semester. While they are trying to create the software that implements the theoretical concepts, they will have to practice those ROS concepts, and ingrain them deeper into their brains.

Very important: explain those concepts by using simulated robots and making the students apply the concepts to the simulated robot. For example, make the students read from a topic to get the distance to the closest obstacle. Or make them write to a topic to make the robot move around. Do not just explain what a topic is and provide code on a slide. Make them actually connect to a producer of data, that is, the simulated robot.
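
A minimal sketch of those two exercises could look like the following, assuming ROS 1 / rospy and the usual /scan and /cmd_vel topics of a simulated mobile robot (the names may differ in your simulation):

```python
#!/usr/bin/env python
# Read the distance to the closest obstacle from the laser topic and
# command the robot to move slowly forward through /cmd_vel.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


def scan_callback(msg):
    closest = min(msg.ranges)
    rospy.loginfo("Closest obstacle at %.2f m", closest)


rospy.init_node('first_steps')
rospy.Subscriber('/scan', LaserScan, scan_callback)
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 0.2  # move forward slowly
    cmd_pub.publish(cmd)
    rate.sleep()
```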

I would also recommend that at this step you skip teaching how to create custom messages, or what services or action clients are. That is too much for such an introductory class, and it is very unlikely you are going to need it in your next robotics class.

2. Explain and implement robotics subject

You can now proceed to teach your robotics subject. I recommend dividing the class time in two:

  1. First part will be to explain the theory of the robotics subject
  2. Second part to actually implement that theory. Create a program that actually does what you explained.

It may happen that in order to implement the theoretical part, the students need a lot of pre-made code that supports the specific point you are teaching. That means you have to prepare your class even harder and provide the students with all that support code. The good news is that, by using ROS, it is very likely that you will find that code already written by someone. Hence, find or develop that code, and provide it as a ROS package to your students. More about where to find the code and how to provide it to your students below.

An interesting point to include here is what professor Knepper indicates: he creates some exercises with deliberate errors, so the students learn to recognize situations where the system is not performing correctly and, more importantly, develop the skills necessary to figure out how to solve those errors. Take that point also into consideration.

Teaching Domain Randomization Reinforcement Learning with ROS robots

3. Add new ROS concept

At some point in time, you may need to create a custom ROS message for the robotics subject you are teaching. Or you may need to use a ROS service because you need to provide a service for face recognition. Whatever you need to explain about ROS, now is the best moment. Use that necessity to explain the concept. What I mean is that explaining a concept like ROS action servers when the student has no idea what this could be used for is a bad idea. The optimal moment to explain that new ROS concept is when it is so clear for everyone that the concept is needed.

I have found that explaining complex concepts like ROS services when the students don’t need them makes it very difficult for them to understand, even if you provide good examples of usage. They cannot feel the pain of not using those concepts in their programs. Only when they are in a situation that actually requires the concept are they going to feel that pain, and the knowledge is going to be integrated into their heads.

4. Continue with the next robotics subject

Keep pushing this way. Move to the next subject. Provide a different robot simulation. Request the students implement the concept. And keep going this way until you finish the whole course.

5. Real robots project

Testing with simulators is good because it provides an experience close to real life. However, the real test is when you use a real robot. Usually, universities do not have the budget for a robot for each student; that is why we have been promoting the use of simulations so much. However, some real robot experience is necessary, and most universities can afford to have a few robots for the whole class. That is the point where the robotics project comes into action.

Define a robotics project that encapsulates all the knowledge of the whole course. Make the students create groups that will test their results on the real robot. This approach has many advantages: students will have to summarize all the lessons into a single application. They will have to make it work in a real robot. Additionally, they will have to work in teams.

In order to provide this step to the students you will need to prepare the following:

  • A simulation of the environment where the real robot will be executed. This allows the students to practice most of the code in the simulator, in a faster way. It also allows you to have some rest (otherwise the students may require your presence all the time at the real robots lab).
  • You cannot escape providing some hours per week for the students to practice with the real robot. So schedule that.

Finally, decide a day when you will evaluate the project for each team. That is going to be called demo day. On demo day, each group has to show how their code is able to make the robot perform as it is supposed to.

For example, at LaSalle, we use two Turtlebots for 15 students. The students must form teams of two people in order to do the project. Then, on demo day, their robot has to be able to serve coffee in a real coffee shop in Barcelona (thank you Costa Coffee for your support!). All the students go to the coffee shop with their programs, and we bring the two robots. Then, while one team is demonstrating on one robot, the other is preparing.

Costa Coffee Gazebo ROS simulation

 

In case you need ROS-certified robots, let me recommend the online shop of my friends: the ROS Components shop. That is where I buy robots or parts when I need them. No commission for me for recommending it! I just think they are great.

6. Evaluation

One very important point in the whole process is how to evaluate the students. In my opinion, the evaluation must be continuous; otherwise the students just do nothing until the day before the exam. Then, everyone complains.

At LaSalle, I do an exam of one hour every month, covering the topics I have taught during that month, and the previous months too.

In our case, the exams are completely practical. The students have to make the robot perform something related to the subjects taught. For example, they have to make the robot create a map of the environment using an extended Kalman filter. I may provide the implementation of the extended Kalman filter for direct use, but the student must know how to use it: how to capture the proper data from the robot, how to provide that data to the filter, and how to use the output to build the map.
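
For reference, the filter interface handed to the students looks roughly like this sketch (a generic EKF predict/update pair in NumPy; the motion model f, the observation model h and their Jacobians F and H are exactly what the students must supply from the robot data):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    # Propagate the state with the motion model f and its Jacobian F,
    # inflating the covariance with the process noise Q.
    x = f(x)
    P = np.dot(F, np.dot(P, F.T)) + Q
    return x, P

def ekf_update(x, P, z, h, H, R):
    # Fuse a measurement z using the observation model h, its Jacobian H
    # and the measurement noise R.
    y = z - h(x)                                  # innovation
    S = np.dot(H, np.dot(P, H.T)) + R             # innovation covariance
    K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))  # Kalman gain
    x = x + np.dot(K, y)
    P = np.dot(np.eye(len(x)) - np.dot(K, H), P)
    return x, P
```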

As an example, here you can find a ROSject containing the last exam I gave to the students about dead reckoning navigation. The ROSject contains everything they needed to do the exam, including instructions, scores, and robot simulations. That leads us to the next point.

How to provide the material to the students

If I have convinced you on using a practical method to teach robotics using ROS, at this point you must be concerned about two points:

  1. How am I going to create all that material?
  2. How can I provide my students with a running environment where to execute that bunch of simulations and ROS code?

Very good concerns.

Preparing the material is a lot of work. And more importantly, there is a lot of risk. When you prepare such material, there is a large probability that what you prepared does not work on the students’ computers. That is why I propose to use an online platform to prepare all that material, and to share the material inside the same platform, so you can be 100% sure that it will work no matter who is going to use it.

I propose to use the ROS Development Studio. That is a platform that we have developed at our company, The Construct, for the easy creation, testing and distribution of ROS code.

The ROS Development Studio (or ROSDS for short) allows you to create the material using a web browser on any type of computer. It already provides the simulations you may need (though you can also add your own). It also provides a Python notebook structure that you can fill with the material for the students, and it allows you to include any code that your students may require in the form of ROS packages.

But the most interesting point of the ROSDS is that it allows you to share all your material by sending a simple web link to the students. That is what we call a ROSject. A ROSject is a complete ROS project that includes simulations, notebooks and code in a single web link. Whenever you provide this link to anybody, they will get a copy of the whole content, and they will be able to execute it in the exact same conditions as you created the ROSject.

This sharing feature makes it very easy also for the students to share with you their program for evaluation, in case of exams, or to help students when they are stuck while studying. Since the ROSject contains all the material, it will be very easy for you to correct the exam in the conditions the student created the program, without requiring you to copy files or set up your computer environment. Just tell the students to share with you the ROSject link.

We use the ROSjects at LaSalle to provide the different lessons and exercises. For example, we have a ROSject with the simulation of the coffee shop where the students will do the project evaluation. Also, we use the ROSject to create the exams (as you can see in the previous section).

Summarizing: ROSjects allow you to encapsulate your lessons in a complete unit that includes the explanations, with the simulations and with the code. And all that for free.

Still, you will have to create your own ROSjects for your classes. This is something that will become less necessary in the near future, as more and more universities create their ROSjects and publish them online for anybody to use.

However, if you do not want to wait until they are created, I can recommend you our online academy the Robot Ignite Academy, where we provide already made ROS-based courses about deep learning, manipulation, robot perception, and more. The only drawback is that the academy has a certain cost per student. However, it is highly recommended because it simplifies your preparation. We use the Robot Ignite Academy at LaSalle, but many other Universities use it around the world, like Clarkson University (USA), University of Michigan (USA), Chukyo University (Japan), University of Sydney (Australia), University of Luxembourg (Luxembourg), Université de Reims (France) or University of Alicante (Spain).

Some examples of rosjects. ARIAC and Cartpole created by The Construct, Autorace created by ROBOTIS

Additional topics you may need to cover

Finally, I would like to make a point about a problem that I have identified when using this method.

I find that most of the people who come to our Robot Navigation class have zero knowledge of programming in either Python or C++, nor any knowledge of how to use the Linux shell. After interviewing several robotics teachers around the world, I found that this is also the case in other countries.

If that is your case, first I would recommend that you reduce the scope of learning programming to the use of Python. Do not go for C++; it is too complex to start teaching together with robotics. Also, Python is very powerful and easy to learn.

I usually create an initial class about Python and the Linux shell. I give this class and immediately, the next week, I give the students an exam about Python and the Linux shell. The purpose of the exam is to stress to the students the importance of mastering those programming skills. They are the base of everything else.

Here you can find a free online course for learning Python, and here, another one for learning Linux shell.

An additional problem that you may find is that the students have trouble understanding English. Most of the ROS documentation is in English. The way to communicate in the ROS community is in English (whether we like it or not). You may feel tempted to create the course notes in your mother language, and only provide documentary resources in your own language. I suggest not doing it. Push your students to learn English. The community needs a common language, and at present it is English.

Webinar about this subject

If you want to know more and discuss about the subject, let me suggest you come to the webinar we are doing on the 29th November about this matter.

You can check more details here about the How To Teach Autonomous Mobile Robotics webinar and how to subscribe to attend.

Conclusion

I started this post talking about the findings of Joel Esposito. If you listen to the podcast interview, you will see that his final conclusion is actually NOT to use ROS to teach robotics. I’m sure there are other teachers with the same opinion. Other teachers, like professor Knepper, advocate for the opposite. Those are points of view, like mine in this article. I recommend you listen to the podcast interview with Joel so you can understand why he suggests that.

It is up to you to decide. Now you have several opinions here. Which one is best for you? Please leave your answer in the comments so we can start a debate about it.

Simulations are the key to intelligent robots https://robohub.org/simulations-are-the-key-to-intelligent-robots/ Mon, 01 Oct 2018 11:03:37 +0000 https://robohub.org/simulations-are-the-key-to-intelligent-robots/

I read an article entitled Games Hold the Key to Teaching Artificial Intelligent Systems, by Danny Vena, in which the author states that computer games like Minecraft, Civilization, and Grand Theft Auto have been used to train intelligent systems to perform better in visual learning, understand language, and collaborate with humans. The author concludes that games are going to be a key element in the field of artificial intelligence in the near future. And he is almost right.

In my opinion, the article only touches the surface of artificial intelligence by talking about games. Games have been a good starting point for the generation of intelligent systems that outperform humans, but going deeper into the realm of robots that are useful in human environments will require something more complex than games. And I’m talking about simulations.

Using games to bootstrap intelligence

The idea of beating humans at games has been present in artificial intelligence since its birth. Initially, researchers created programs to beat humans at Tic Tac Toe and Chess (like, for example, IBM’s Deep Blue). However, the intelligence for those games was programmed from scratch by human minds. There were people writing the code that decided which move should be the next one. That manual approach to generating intelligence reached a limit: intelligence is so complex that we realized it may be too difficult to manually write a program that emulates it.

Then, a new idea was born: what if we create a system that learns by itself? In that case, the engineers will only have to program the learning structures and set the proper environment to allow intelligence to bootstrap by itself.

AlphaGo beats Lee Sedol (photo credit: intheshortestrun)

The results of that idea are programs that learn to play games better than anyone in the world, even if nobody explains to the program how to play in the first place. For example, Google’s DeepMind created the AlphaGo Zero program using that approach. The program was able to beat the best Go players in the world. The company used the same approach to create programs that learnt to play Atari games, starting from zero knowledge. Recently, OpenAI used this approach for their bot program that beats pro players of the Dota 2 game. By the way, if you want to reproduce the results on the Atari games, OpenAI released the OpenAI Gym, containing all the code to start training your system with Atari games and to compare your performance against other people’s.
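
As an illustration, a minimal Gym training loop looks like the following (assuming the gym package with the Atari extras installed; the random action is a placeholder for whatever learning agent you plug in):

```python
import gym

env = gym.make("Pong-v0")  # any Atari environment works the same way
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # a learning agent would choose here
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```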

What I took from those results is that the idea of making an intelligent system generate intelligence by itself is a good approach, and that the algorithms used for teaching can be used for making robots learn about their space (I’m not so optimistic about the way to encode the knowledge and to set the learning environment and stages, but that is another discussion).

From games to simulations

OpenAI wanted to go further. Instead of using games to generate programs that can play a game, they applied the same idea to make a robot do something useful: learn to manipulate a cube on its hand. In this case, instead of using a game, they used a simulation of the robot. The simulation was used to emulate the robot and its environment as if it were a real one. Then, they allowed the algorithm to control the simulated robot and make the robot learn about the task to solve by using domain randomization. After many trials, the simulated robot was able to manipulate the block in the expected way. But that was not all! At the end of the article, the authors successfully transferred the learned control program of the simulated robot to a real robot, which performed in a way similar to the simulated one. Except it was real.

Simulated Hand OpenAI Manipulation Experiment (image credit: OpenAI)

Real Hand OpenAI Manipulation Experiment (photo credit: OpenAI)

A similar approach was applied by OpenAI to a Fetch robot trained to grasp a spam box off of a table filled with different objects. The robot was trained in simulation and it was successfully transferred to the real robot.

OpenAI teaches Fetch robot to grasp from table using simulations (photo credit: OpenAI)

We are getting close to the holy grail in robotics, a complete self-learning system!

Training robots

However, in their experiments, engineers from OpenAI discovered that training for robots is a lot more complex than training algorithms for games. While in games the intelligent system has a very limited set of actions and perceptions available, robots face a huge and continuous spectrum in both domains, actions and perceptions. We can say that the options are infinite.

That increase in the number of options diminishes the usefulness of the algorithms used for reinforcement learning (RL). Usually, the way to deal with the problem is with some artificial tricks, like discarding some of the information completely or discretizing the data values artificially, reducing the options to only a few.
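
A typical example of that discretization trick is mapping a continuous sensor reading into a handful of buckets, something like this sketch:

```python
import numpy as np

def discretize(value, low, high, num_bins):
    # Map a continuous reading into one of num_bins coarse options.
    value = np.clip(value, low, high)
    return int((value - low) / (high - low) * (num_bins - 1))

# e.g. a laser range between 0 and 10 m reduced to 5 discrete states
state = discretize(3.7, low=0.0, high=10.0, num_bins=5)
```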

OpenAI engineers found that even if the robots were trained in simulations, their approach could not scale to more complex tasks.

Massive data vs. complex learning algorithms

As Andrew Ng indicated, and as an engineer from OpenAI personally indicated to me based on his results, massive data with simple learning algorithms wins over complicated algorithms with a small amount of data. This means that it is not a good idea to try to focus on getting more complex learning algorithms. Instead, the best approach for reaching intelligent robotics would be to use simple learning algorithms trained with massive amounts of data (which makes sense if we observe our own brain: a massive amount of neural networks trained over many years).

Google has always known that. Hence, in order to obtain massive amounts of data to train their robots, Google created a real-life system with real robots training all day long in a large space. Even though it is a clever idea, we can all see that this is not practical in any sense for any kind of robot and application (broken robots, training limited to real time, a limited number of robots, a limited number of environments, and so on…).

Google robots training for grasping

That leads us to the same solution again: to use simulations. By using simulations, we can put any robot in any situation and train them there. Also, we can have virtually an infinite number of them training in parallel, and generate massive amounts of data in record time.

Even though that approach looks very clear right now, it was not three years ago, when we created our company, The Construct, around robot simulations in the cloud. I remember exhibiting at the Innorobo 2015 exhibition and finding, after extensive interviews with all the other exhibitors, that only two of them were using simulations for their work. Furthermore, roboticists considered simulations to be something nasty to be avoided at all cost, since nothing can compare with the real robot (check here for a post I wrote about it at the time).

Thankfully, the situation has changed since then. Now, using simulations for training real robots is starting to become the way.

Transferring to real robots

We all know that it is one thing to get a solution with the simulation and another for that solution to work on the real robot. Having something done by the robot in the simulation doesn’t imply that it will work the same way on the real robot. Why is that?

Well, there is something called the reality gap. We can define the reality gap as the difference between the simulation of a situation and the real-life situation. Since it is impossible for us to simulate something to a perfect degree, there will always be differences between simulation and reality. If the difference is big enough, it may happen that the results obtained in the simulator are not relevant at all. That is, you have a big reality gap, and what applies in the simulation does not apply to the real world.

That problem of the reality gap is one of the main arguments used to discard simulators for robotics. And in my opinion, the path to follow is not to discard the simulators and find something else, but instead to find solutions to cross that reality gap. As for solutions, I believe we have two options:

1. Create more accurate simulators. That is on its way. We can see efforts in this direction. Some simulators concentrate on better physics (Mujoco); others on a closer look at reality (Unreal or Unity-based simulators, like Carla or AirSim). We can expect that as computer power continues to increase, and cloud systems become more accessible, the accuracy of simulations is going to keep increasing in both senses, physics and looks.

2. Build better ways to cross the reality gap. In his original work, Noise and the reality gap, Jakobi (the person who identified the problem of the reality gap) indicated that one of the first solutions is to make the simulation independent of the reality gap. His idea was to introduce noise in those variables that are not relevant to the task. The modern version of that noise introduction is the concept of domain randomization, as described in the paper Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.

Domain randomization basically consists of performing the training of the robot in a simulated environment where its non-relevant-to-the-task features are changed randomly, like the colors of the elements, the light conditions, the relative position to other elements, etc.

The goal is to make the trained algorithm be unaffected by those elements in the scene that provide no information to the task at hand, but which may confuse it (because the algorithm doesn’t know which parts are the relevant ones to the task). I can see domain randomization as a way to tell the algorithm where to focus its attention, in terms of the huge flow of data that it is receiving.
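
In code, domain randomization is conceptually as simple as re-sampling those task-irrelevant properties before every training episode. A sketch, with a purely hypothetical simulation API:

```python
import random

def randomize_episode(sim):
    # Randomize task-irrelevant properties of the simulated scene:
    # lighting, colours and the pose of distractor objects.
    sim.set_light_intensity(random.uniform(0.3, 1.5))
    sim.set_object_color('table', [random.random() for _ in range(3)])
    sim.set_object_pose('distractor',
                        x=random.uniform(-0.5, 0.5),
                        y=random.uniform(-0.5, 0.5))

# for episode in range(num_episodes):
#     randomize_episode(sim)
#     run_training_episode(sim)
```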

Domain randomization applied by OpenAI to train a Fetch robot in simulation (photo credit: OpenAI)

In more recent work, the OpenAI team has released a very interesting paper that improves domain randomization. They introduce the concept of dynamics randomization. In this case, it is not the environment that changes in the simulation, but the properties of the robot (like its mass, the distance between the grippers, etc.). The paper is Sim-to-real transfer of robotic control with dynamics randomization. That is the approach that OpenAI engineers took to successfully train the manipulation robot described above.

Some software along these lines

What follows is a list of software that allows the training of robots in simulations. I’m not including plain robotics simulators (like Gazebo, Webots, and V-Rep) because they are just that, robot simulators in the general sense. The software listed here goes one step beyond that and provides a more complete solution for doing the training in simulations. Of course, I have discarded the system used by OpenAI (which is Mujoco) because it requires building your own development environment.

Carla

Carla is an open source simulator for self-driving cars based on Unreal Engine. It has recently included a ROS bridge.

Carla simulator (photo credit: Carla)

Microsoft Airsim

Microsoft Airsim drones simulator follows a similar approach to Carla, but for drones. Recently, they updated the simulator to also include self-driving cars.

Airsim (photo credit: Microsoft)

Nvidia Isaac

Nvidia Isaac aims to be a complete solution for training robots on simulations and then transferring to real robots. There is still nothing available, but they are working on it.

Isaac (photo credit: Nvidia)

ROS Development Studio

The ROS Development Studio is the development environment that our company created, and it has been conceived from the beginning to simulate and train any ROS-based robot, requiring nothing to be installed on your computer (it is cloud-based). Simulations for the robots are already provided with all the ROS controllers up and running, as well as the machine learning tools. It includes a system of Gym cloud computers for the parallel training of robots on a virtually unlimited number of computers.

ROS Development Studio showing an industrial environment

Here is a video showing a simple example of training two cartpoles in parallel using the Gym computers inside the ROS Development Studio:

 

(Readers, if you know other software like this that I can add, let me know.)

Conclusion

Making all those deep neural networks learn in a training simulation is the way to go, and as we may see in the future, this is just the tip of the iceberg. My personal opinion is that intelligence is yet more embodied than current AI approaches admit: you cannot have intelligence without a body. Hence, I believe that the use of simulated embodiments will be even higher in the future. We’ll see.

How to start with self-driving cars using ROS https://robohub.org/how-to-start-with-self-driving-cars-using-ros/ Tue, 24 Oct 2017 15:01:05 +0000 http://robohub.org/how-to-start-with-self-driving-cars-using-ros/

Self-driving cars are inevitable.

In recent years, self-driving cars have become a priority for automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo are investing in autonomous driving research. Also, many new companies have appeared in the autonomous cars industry: Drive.ai, Cruise, nuTonomy, Waymo to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has created a large demand for autonomous car engineers. Among the skills required, knowing how to program with ROS is becoming an important one. You just have to visit the robotics-worldwide list to see the large number of job offers for working/researching in autonomous cars that demand knowledge of ROS.

Why ROS is interesting for autonomous cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the required tools to easily access sensor data, process that data, and generate an appropriate response for the motors and other actuators of the robot. The whole ROS system has been designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).

Due to these characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle can be considered as just another type of robot, so the same types of programs can be used to control them. ROS is interesting because:

1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that are able to build a map, localize the robot using lidars or GPS, plan paths along maps, avoid obstacles, process point clouds or camera data to extract information, etc. Many algorithms designed for the navigation of wheeled robots are almost directly applicable to autonomous cars. Since those algorithms are already available in ROS, self-driving cars can just make use of them off-the-shelf (see the short example after this list).

2. Visualization tools are already available. ROS has created a suite of graphical tools that allow the easy recording and visualization of data captured by the sensors, and representation of the status of the vehicle in a comprehensive manner. Also, it provides a simple way to create additional visualizations required for particular needs. This is tremendously useful when developing the control software and trying to debug the code.

3. It is relatively simple to start an autonomous car project with ROS onboard. You can start right now with a simple wheeled robot equipped with a pair of wheels, a camera, a laser scanner, and the ROS navigation stack. You’re set up in a few hours. That could serve as a basis to understand how the whole thing works. Then you can move to more professional setups, like for example buying a car that is already prepared for autonomous car experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).
Self-driving car companies have identified those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch or nuTonomy.
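
As an illustration of that off-the-shelf reuse, sending a goal pose to the standard ROS navigation stack (move_base) through its action interface takes only a few lines (ROS 1 / rospy; the goal pose itself is illustrative):

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 10.0   # drive 10 m forward in the map frame
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
```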

Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, is in charge of handling all coordination between the different parts of the ROS application. If the component fails, then the whole ROS system goes down. This implies that it does not matter how well your ROS application has been constructed. If roscore dies, your application dies.

2. ROS is not secure. The current version of ROS does not implement any security mechanism to prevent third parties from getting into the ROS network and reading the communication between nodes. This implies that anybody with access to the car’s network can get to the ROS messaging and hijack the car’s behavior.

All those drawbacks are expected to be solved in the newest version of ROS, ROS 2. Open Robotics, the creators of ROS, have recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving cars engineer, based on the ROS framework.

Our low-cost solution to become a self-driving car engineer

Step 1
First thing you need is to learn ROS. ROS is quite a complex framework to learn and requires dedication and effort. Watch the following video for a list of the 5 best methods to learn ROS. Learning basic ROS will help you understand how to create programs with that framework, and how to reuse programs made by others.

Step 2
Next, you need to get familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will provide you the knowledge of basic concepts in navigation like mapping, path planning or sensor fusion. There is no better way to learn this than taking the ROS Navigation in 5 days course developed by Robot Ignite Academy (disclaimer – this is provided by my company The Construct).

Step 3
The third step would be to learn the basic ROS applications for autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using a GPS, how to generate an algorithm for obstacle detection based on the sensor data, how to interface ROS with the CAN bus protocol used in all the cars in the industry…

The following video tutorial is ideal to start learning ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation by using an autonomous car simulation. The video is available for free, but if you want to get the most out of it, we recommend doing the exercises at the same time by enrolling in the Robot Ignite Academy.

Step 4
After the basic ROS for Autonomous Cars course, you should learn more advanced subjects like obstacle and traffic signal identification, road following, as well as coordination of vehicles at crossroads. For that purpose, our recommendation would be to use the Duckietown project at MIT. The project provides complete instructions to physically build a small-scale town, with lanes, traffic lights and traffic signs, to practice algorithms in the real world (even if at a small scale). It also provides instructions to build the autonomous cars that should populate the town. The cars are based on differential drives with a single camera as sensor. That is why they achieve a very low cost (around $100 per car).

Image by Duckietown project

Due to the low cost, and to the good experience it offers for testing on real hardware, the Duckietown project is ideal to start practicing autonomous car concepts like vision-based line following, detecting other cars, and traffic signal-based behavior. Still, if your budget is below that cost, you can use a Gazebo simulation of the Duckietown and still practice most of the content.

Step 5
Then, if you really want to go pro, you need to practice with real-life data. For that purpose we propose that you install and learn from the Autoware project. This project provides real data obtained from real cars on real streets, by means of ROS bags. ROS bags are logs containing data captured from sensors, which can be used in ROS programs as if the programs were connected to the real car. By using those bags, you will be able to test algorithms as if you had an autonomous car to practice with (the only limitation is that the data is always the same and restricted to the situation that happened when it was recorded).

Image by the Autoware project

The Autoware project is an impressively large project that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is a bit complex and huge, but definitely worth studying for a deeper understanding of ROS with autonomous vehicles. I recommend you watch the Autoware ROSCON2017 presentation for an overview of the system (it will be available in October 2017).
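To give an idea of how you would consume those bags programmatically, here is a minimal sketch using the rosbag Python API. The bag file name and the topic are placeholders (the actual names depend on the bags distributed by Autoware), and you can always use rosbag play to replay a bag into a live ROS graph instead.

# Minimal sketch of inspecting a recorded bag with the rosbag Python API.
# The file name and topic below are placeholders.
import rosbag

bag = rosbag.Bag('sample_drive.bag')
for topic, msg, t in bag.read_messages(topics=['/points_raw']):
    print(t.to_sec(), topic, type(msg).__name__)
bag.close()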

Step 6
The final step would be to start implementing your own ROS algorithms for autonomous cars and testing them in different, realistic situations. The previous steps provided you with real-life data, but the bags were limited to the situations where they were recorded. Now it is time to test your algorithms in different situations. You can use already existing algorithms in a mix of all the steps above, but at some point you will see that all those implementations lack some things required for your goals. You will have to start developing your own algorithms, and you will need lots of tests. For this purpose, one of the best options is to use a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics has released a simulation of cars for the Gazebo 8 simulator.

Image by Open Robotics

That ROS-based simulation contains a Prius car model, together with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. By using that simulation, you will be able to put the car in as many different situations as you want, check whether your algorithm works in those situations, and repeat as many times as needed until it does.
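As an illustration of the kind of algorithm you would test there, here is a sketch of a trivial obstacle-stop behaviour. The topic names and the use of geometry_msgs/Twist are assumptions made for the sake of the example; the Prius demo exposes its own control and sensor topics, so adapt the names accordingly.

#!/usr/bin/env python
# Sketch of a trivial obstacle-stop behaviour for a simulated car.
# Topic names and message types are assumptions; adapt them to your simulation.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

cmd_pub = None

def scan_callback(scan):
    cmd = Twist()
    # Drive forward only if nothing is closer than 5 m in the scan
    if min(scan.ranges) > 5.0:
        cmd.linear.x = 2.0
    cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('obstacle_stop')
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()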

Conclusion

Autonomous driving is an exciting subject with demand for experienced engineers increasing year after year. ROS is one of the best options to quickly jump into the subject. So learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented here a full path to learn ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is not an excuse anymore. Go for it!

The need for robotics standards https://robohub.org/the-need-for-robotics-standards/ Tue, 29 Aug 2017 21:01:33 +0000 http://robohub.org/the-need-for-robotics-standards/


Last week I was talking to a lead engineer of a Singapore company that is building a benchmarking system for robot solutions. Having seen my presentation at ROSCON2016 about robot benchmarking, he asked me how I would benchmark solutions that are non-ROS compatible. I said that I wouldn't dedicate time to benchmarking solutions that are not ROS-based. Instead, I suggested I would use the time to polish the ROS-based benchmarking and suggest that vendors adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary to improve any field. By having a benchmark, different solutions to a single problem can be compared, and hence a direction for improvement can be traced. Currently, robotics lacks such a benchmarking system.

I strongly believe that to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the programming standard. That is the approach taken by devices that can be plugged into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (that runs Windows) to communicate with the product. Once this computer-to-product communication is made, you can create programs that compare the same type of devices from different manufacturers for performance, quality, noise, whatever your benchmark is trying to compare.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system that relies on the Windows standard.
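To make the idea concrete, here is a toy sketch (all names and values hypothetical, not tied to any real driver) of how a single benchmark can compare devices from different vendors, as long as they expose the same programming interface:

# Toy illustration: one benchmark, many devices, one common interface.
# Every class, reading and timing here is hypothetical.
import time

class RangeSensor(object):
    """Common interface every vendor driver must implement."""
    def read_distance(self):
        raise NotImplementedError

class VendorASensor(RangeSensor):
    def read_distance(self):
        return 1.02  # stub: a real driver would talk to vendor A hardware

class VendorBSensor(RangeSensor):
    def read_distance(self):
        return 0.98  # stub: a real driver would talk to vendor B hardware

def benchmark(sensor, samples=100):
    start = time.time()
    readings = [sensor.read_distance() for _ in range(samples)]
    elapsed = time.time() - start
    return sum(readings) / samples, elapsed

for sensor in (VendorASensor(), VendorBSensor()):
    mean, elapsed = benchmark(sensor)
    print(sensor.__class__.__name__, mean, elapsed)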

Software development for robots also needs standards

Standards are not only required for comparing solutions but also for speeding up robotics development. By having a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. Actually, given the middleware structure, developers can disassociate themselves enough from the hardware that they can spend almost 100% of their time in the software realm, while still developing code for robots.

We need the same type of standard for robotics. We need a kind of operating system that allows us to compare different robotics solutions. We need the Windows of the PCs, the Android of the phones, the CAN of the buses…


A few standard proposals and a winner

But you already know that. I’m not the first one to state this. Actually, many people have already tried to create such a standard. Some examples include Player, ROS, YARP, OROCOS, Urbi, MIRA or JdE Robot, to name a few.

Personally, I don't care which standard is used. It could be ROS, it could be YARP, or it could be some other that has not been created yet. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots.

No other middleware for robotics has had such a large adoption. Some data about it:

                                                     ROS        YARP        OROCOS
Number of Google pages:                              243,000    37,000      42,000
Citations of the paper describing the middleware:    3,638      463         563
Alexa ranking:                                       14,118     1,505,000   668,293

Note 1: Only showing the current big three players.

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available.

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check yourself.

This is not only the feeling that we, roboticists, have. The numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question is then: why has ROS emerged on top of all the other possible contenders? None of them is worse than ROS in terms of features. Actually, you can find features in all the other middlewares that outperform ROS. If that is so, why or how has ROS achieved the status of becoming the standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.


Here is a video where Leila Takayama, an early developer of ROS, explains when she realized that the key to having ROS used worldwide would be to provide tools that simplify the reuse of ROS code. None of the other projects have such a set of clear and structured tutorials. In addition, few of the other middlewares provide debugging tools for their packages. The lack of these two essential aspects is preventing new people from using those middlewares (even if I understand the developers of OROCOS and YARP for not providing them… who wants to write tutorials or build debugging tools… nobody!)

 

Additionally, it is not only about tutorials and debugging tools. The ROS creators also provide a good system for managing packages. The result is that developers worldwide can use each other's packages in a (relatively) easy way. This created an explosion in the number of ROS packages available, providing off-the-shelf almost anything for your brand new ROSified robot.

Now, the impressive rate at which contributions to the ROS ecosystem are made makes it almost unstoppable in terms of growth.


What about companies?

At the beginning, ROS was mostly used by students at universities. However, as ROS becomes more mature and the number of packages increases, companies are realizing that adopting ROS is also good for them, because they will be able to use code developed by others. On top of that, it will be easier for them to hire new engineers who already know the middleware (otherwise they would need to teach newcomers their own middleware).

As a result, many companies have jumped onto the ROS train, developing their products from scratch to be ROS compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robots, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot or Clearpath, to name a few of the sponsors of the next ROSCON. By creating ROS-compatible products, they decreased their development time by several orders of magnitude.

To take things further, two Spanish companies have revolutionised the standardization of robotics products using the ROS middleware. On one side, Robotnik has created the ROS Components shop, a shop where anyone can buy ROS-compatible devices, from mobile bases to sensors and actuators. On the other side, Erle Robotics (now Acutronic Robotics) is in the process of developing Hardware ROS. H-ROS is a standardized software and hardware infrastructure to easily create reusable and reconfigurable robot hardware parts. ROS is enabling hardware standardization too, but this time driven by companies, not research! That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not make their industrial robots ROS-enabled from the start, they are adopting ROS Industrial, a flavour of ROS which allows them to ROSify their industrial robots and re-use all the software created for manipulators in the ROS ecosystem.

But are all companies jumping onto the ROS train? Not all of them!

Some companies like Jibo, Aldebaran or Google still do not rely on ROS for their robot programming. Some of them rely on their own middleware created before the existence of ROS (that is the case of Aldebaran). Others, though, are creating their own middleware from scratch. Their reasons: they do not believe ROS is good, they have already created a middleware, or they do not want their products to depend on the middleware of others. Those companies have very fair reasons to go their own way. However, will that make them competitive? (If we have to judge from the history of mobile phones and VCRs, the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. It looks like ROS is becoming the standard, but many things can change. It is unlikely that another middleware takes the current title from ROS, but it may happen. There could be a new player that wipes ROS from the map (maybe Google will release its middleware to the public, like they did with Android, and take the sector by storm?).

Still, ROS has its problems, like a lack of security or the instability of some important packages. Even if the OSRF group is working hard to build a better ROS system (for instance, ROS 2 is in beta phase with many fundamental improvements), some hard work is still required for some basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). Actually, it was thanks to this standardization that two Barcelona students were able to create an autonomous robot product for coffee shops in only three months, with zero knowledge of robotics (see the Barista robot).

This is the future, and it is good. In this future, thanks to standards, almost anyone will be able to build, design and program their own robotics product, similar to how PC stores are building computers today.

So my advice, as I said to the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

 

Teaching ROS quickly to students https://robohub.org/teaching-ros-quickly-to-students/ Fri, 09 Jun 2017 09:00:03 +0000 http://robohub.org/teaching-ros-quickly-to-students/


Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at the same time, at a very fast pace. His students, all of them in the Master of Computer Science at the University of Weingarten, use only a web browser. They connect to a web page containing the lessons, a ROS development environment and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach how to program with ROS quickly and to many students. This is what Robot Ignite Academy is made for.

“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.

Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.


Student’s point of view

Students bring their own laptops to the class and connect to the online platform. From that moment, their laptop becomes a ROS development machine, ready to develop programs for simulations of many real robots.

The Academy provides the text, the videos and the examples that the student has to follow. Then, the student creates her own ROS program and makes the robot perform a specific action. The student develops the ROS programs as if she were on a typical ROS development computer.

The main advantage is that students can use a Windows, Linux or Mac machine to learn ROS. They don't even have to install ROS on their computers. The only prerequisite for the laptop is to have a browser. So students do not have to deal with all the installation problems that frustrate them (and the teachers!), especially when they are starting.

After class, students can continue with their learning at home, at the library, or even at the beach if there is wifi available! All their code, learning material and simulations are stored online, so they can access them from anywhere, anytime, using any computer.


Teacher’s point of view

The advantage of using the platform is not only for the students but also for the teachers. Teachers do not have to create the material and maintain it. They do not have to prepare the simulations or work on multiple different computers. They don’t even have to prepare the exams!! (which are already provided by the platform).

So what are the teachers for?

By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students.

This new method of teaching ROS is spreading quickly among the universities and high schools that want to provide the latest and most practical teaching to their students. The method, developed by Robot Ignite Academy, combines a new way of teaching based on practice with an online learning platform. Those two points combined make the teaching of ROS a smooth experience and can make the students' knowledge base skyrocket.

As user Walace Rosa indicates in his video comment about Robot Ignite Academy:

It is a game changer [in] teaching ROS!

The method is becoming very popular in robotics circles too, and many teachers are using it for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.

Additionally, the academy provides a free online certification exam with different levels of certification. Many universities are using this exam to certify that their students did learn the material, since the exam is quite demanding.


Some examples of past events

  •  1 week ROS course in Barcelona for SMART-E project team members. This is a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to be up to speed with ROS fast. From the 8th to the 12th of May 2017.
  •  1 day ROS course for the Col·legi d’Enginyers de Barcelona. The 17th of May 2017.
  •  3 months course for University of La Salle in Barcelona within the Master on Automatics, Domotics and Robotics. From 10th of May to 29th of June 2017.
  •  1 weekend ROS course for teenagers in Bilbao, Spain. The 20th and 21st of May 2017.
  •  We can also organize a special event like these for you and your team.

Helpful ROS videos

Machine Learning with OpenAI Gym on ROS Development Studio https://robohub.org/machine-learning-with-openai-gym-on-ros-development-studio/ Mon, 08 May 2017 08:15:37 +0000 http://robohub.org/machine-learning-with-openai-gym-on-ros-development-studio/


Imagine how easy it would be to learn skating if only it didn't hurt every time you fall. Unfortunately, we humans don't have that option. Robots, however, can now "learn" their skills on a simulation platform without being afraid of crashing into a wall. Yes, "it learns"! This is possible with the reinforcement learning algorithms provided by OpenAI Gym and the ROS Development Studio.

You can now train your robot to navigate through an environment filled with obstacles based only on its sensor inputs, with the help of OpenAI Gym. In April 2016, OpenAI introduced "Gym", a platform for developing and comparing reinforcement learning algorithms. Reinforcement learning is an area of machine learning that allows an intelligent agent (for example, a robot) to learn the best behaviors in an environment by trial and error. The agent takes actions in an environment so as to maximize its rewards. We have deployed the gym_gazebo package from Erle Robotics in the ROS Development Studio. It enables users to test their reinforcement learning algorithms for their robots in Gazebo.


How to Train your Robot

Robot training to navigate through an environment with obstacles

In this example we will see how a Turtlebot is able to learn to navigate through an environment without hitting an obstacle. The Turtlebot will use a reinforcement learning method known as Q-learning.

There are four environments already available for the user to test their simulations with. These environments can be launched using the respective launch files:

  • GazeboCircuitTurtlebotLidar_v0.launch
  • GazeboCircuit2TurtlebotLidar_v0.launch
  • GazeboRoundTurtlebotLidar_v0.launch
  • GazeboMazeTurtlebotLidar_v0.launch

The four environments already available are: Circuit, Circuit2, Round and Maze.

We recommend that you try out the existing environments before developing your own environments for training the robot. An environment is where the robot's possible actions and rewards are defined. For example, in the available environments, there are three possible actions for the Turtlebot robot:

– Forward ( with a reward of 5 points)
– Left ( with a reward of 1 point)
– Right ( with a reward of 1 point)

If it collides with the walls, the training episode ends (with a penalty of 200 points). The Turtlebot has to learn to navigate through the environment based on the rewards obtained in different episodes. This can be achieved using the Q-learning algorithm. Let us see how it can be done using the ROS Development Studio.
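For reference, this is the core of the Q-learning update the robot relies on, written as a minimal tabular sketch. This is not the exact code of the gym_gazebo scripts, just the idea: the state would be a discretised laser reading, the actions are forward/left/right, and the rewards follow the scheme above.

# Minimal tabular Q-learning sketch (illustrative, not the gym_gazebo code).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.2, 0.9, 0.1   # learning rate, discount, exploration
actions = [0, 1, 2]                     # 0 = forward, 1 = left, 2 = right
q_table = defaultdict(float)            # Q[(state, action)] -> value

def choose_action(state):
    # Epsilon-greedy: explore sometimes, otherwise pick the best known action
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    # Classic Q-learning update rule
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])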


Using the ROS Development Studio for training the robot

First, we have to set the path in the jupyter-notebook as given below:

import sys
sys.path.append("/usr/local/lib/python2.7/dist-packages/")
sys.path.append("/home/ubuntu/gym-gazebo")
sys.path.append("/home/user/catkin_ws/src/gym_construct/src")
%matplotlib inline

The python scripts in the folder gym_construct/src/ help us simulate the reinforcement learning techniques for a Turtlebot. Currently, the number of episodes has been set to 20. Feel free to increase the number of episodes in the python scripts (usually up to 5000) to actually train the robot to navigate the environment completely.

Run only the script corresponding to the environment:

## Circuit-2 Environment --> Q-learning
%run /home/user/catkin_ws/src/gym_construct/src/circuit2_turtlebot_lidar_qlearn.py

The robot undergoes several training episodes, and each of these episodes is rewarded based on the number of steps taken by the robot before hitting an obstacle. We will be able to see that the robot incrementally increases its rewards over time compared to its previous episodes. With a very large number of episodes, the robot will learn to navigate the environment without hitting the obstacles.


Plotting the learning curve of the robot

The machine learning algorithm generates the output files in the output directory specified in the python script file. In order to plot the curve, we run the display_plot.py. But before that, don’t forget to restart the kernel and set the path once again.


As the number of episodes increases, we will see that the robot's mean reward also increases. The user can choose their own robot, environments, actions and rewards for testing their reinforcement learning algorithms in OpenAI Gym and RDS. Watch the video below for more on this.
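If you prefer to plot the curve yourself instead of using display_plot.py, a minimal matplotlib sketch would look like the one below. The output file name and its format are assumptions here; adapt the loading code to the files actually written by the training script to your output directory.

# Minimal sketch of plotting the reward per episode with matplotlib.
# The file name and format are assumptions.
import matplotlib.pyplot as plt

rewards = []
with open('episode_rewards.txt') as f:   # hypothetical output file
    for line in f:
        rewards.append(float(line.strip()))

plt.plot(rewards)
plt.xlabel('Episode')
plt.ylabel('Cumulative reward')
plt.title('Turtlebot Q-learning progress')
plt.show()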


Video describing the procedure to run OpenAI Gym on RDS

I hope you were able to follow the tutorial.  So, that’s all folks. It’s now up to you to develop and test reinforcement learning algorithms in OpenAI Gym and RDS.

Have fun training your robot!

Developing ROS programs for the Sphero robot https://robohub.org/developing-ros-programs-for-the-sphero-robot/ Mon, 13 Mar 2017 15:30:40 +0000 http://robohub.org/developing-ros-programs-for-the-sphero-robot/

You probably know the Sphero robot. It is a small robot with the shape of a ball. In case you have one, you should know that it is possible to control it using ROS, by installing on your computer the Sphero ROS packages developed by Melonee Wise and connecting to the robot using your computer's bluetooth.

Now, you can use the ROS Development Studio to create ROS control programs for that robot, testing as you go by using the integrated simulation.

The ROS Development Studio (RDS) provides off-the-shelf a simulation of Sphero with a maze environment. The simulation provides the same interface as the ROS module created by Melonee, so you can develop and test your programs in the simulated environment and, once they work properly, transfer them to the real robot.

We created the simulation to teach ROS to the students of the Robot Ignite Academy. They have to learn enough ROS to make the Sphero get out of the maze using odometry and the IMU.


Using the simulation

To use the Sphero simulation on RDS go to rds.theconstructsim.com and sign in. If you select the Public simulations, you will quickly identify the Sphero simulation.

Press the red Play button. A new screen will appear giving you details about the simulation and asking you which launch file you want to launch. The main.launch selected by default is the correct one, so just press Run.

After a few seconds the simulation will appear together with the development environment for creating the programs for Sphero and testing them.

On the left-hand side you have a notebook containing information about the robot and how to program it with ROS. This notebook contains just some examples, but it can be completed and/or modified at will. As you can see, it is an iPython notebook and follows its standard, so it is up to you to modify it, add new information, or anything else. Remember that any change you make to the notebook will be saved in a simulation in your private area of RDS, so you can come back later and launch it with your modifications.

You should know that the code included in the notebook is directly executable by selecting the cell of the code (a single click on it) and pressing the small play button at the top of the notebook. Once you press that button, the code will be executed and will start controlling the simulated Sphero robot for a few time steps (remember to have the simulation running, with its Play button activated, to see the robot move).

On the center area, you can see the IDE. It is the development environment for developing the code. You can browse there all the packages related to the simulation or any other packages that you may create.

On the right-hand side, you can see the simulation and, beneath it, the shell. The simulation shows the Sphero robot as well as the maze environment. In the shell, you can issue commands on the computer that runs the simulation of the robot. For instance, you can use the shell to launch the keyboard controller and move the Sphero around. Try typing the following:

  • $ roslaunch sphero_gazebo keyboard_teleop.launch

Now you should be able to move the robot around the maze by pressing some keys of the keyboard (instructions provided on the screen).
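Instead of the keyboard, you can also move the Sphero from your own node. Here is a minimal sketch that publishes velocity commands and prints the odometry. The /cmd_vel and /odom topic names are assumptions based on the usual Sphero ROS package conventions; check the actual topics with rostopic list.

#!/usr/bin/env python
# Sketch: move the simulated Sphero from code and watch its odometry.
# Topic names are assumptions; verify them with "rostopic list".
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("Sphero at x=%.2f y=%.2f", p.x, p.y)

if __name__ == '__main__':
    rospy.init_node('sphero_mover')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/odom', Odometry, odom_callback)
    rate = rospy.Rate(10)
    cmd = Twist()
    cmd.linear.x = 0.2   # gentle forward motion
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()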

You can also launch Rviz there, and then watch the robot, its frames and any other additional information you may want about the robot. Type the following:

  • $ rosrun rviz rviz

Then press the red screen icon located at the bottom-left of the screen (named the graphical tools). A new tab should appear, showing Rviz loading. After a while, you can configure Rviz to show the information you desire.

There are many ways you can configure the screen to provide more focus to what interests you the most.

To end this post, I would like to indicate that you can download the simulation to your computer at any time, by right-clicking on the directories and selecting Download. You can also clone The Construct simulations repository to download it (among other simulations available).


If you liked this tutorial, you may also enjoy these:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Adjust how you learn and quickly pick up robotics programming https://robohub.org/adjust-how-you-learn-and-quickly-pick-up-robotics-programming/ Thu, 06 Oct 2016 09:30:14 +0000 http://robohub.org/adjust-how-you-learn-and-quickly-pick-up-robotics-programming/

Credit: Ricardo Tellez

“The biggest obstacle preventing robotics going mainstream is not having good programmers able to program robots,” says Brian Gerkey, CEO of the Open Source Robotics Foundation.

Learning to program robots is difficult.

I mean, real robots.

I am not talking about learning to program Lego and the like. I'm talking about making a Reem robot move around a shopping mall helping people to carry their bags, or a Fetch robot cooperate in a warehouse with humans, helping them fetch packages.

Yes, programming of these robots is difficult.

In order to simplify the programming of those robots, frameworks have been created:

  1. ROS
  2. OROCOS
  3. YARP
  4. URBI
  5. etc…

Those are the real thing. They are frameworks that are installed in real robots and used to make them work. I’m not talking about using Matlab for programming your algorithm in a robot. Matlab is not used in real life, only in academia for non-replicable results. Here we are talking about the real thing.

But don't get me wrong. Even if those frameworks are quite complex, anybody can learn to program robots. The question is: how long does it take?

Why does it take so long to learn how to program robots?

We have identified a series of problems that prevent quick learning of robotics.

You don’t have access to robots to play with

There are many different robots available and they are really expensive. Acquiring, maintaining, and explaining the hardware are real problems that limit class size and the learning experience of the students.

As Nikolaus Correll, Rowan Wing and David Coleman indicate in their paper, A One-Year Introductory Robotics Curriculum for Computer Science Upperclassmen:

Only a subset of the course material of a robotics curriculum can be assessed conceptually or mathematically. Demonstrating a thorough understanding of an algorithm requires implementation. Unlike purely computational algorithms, which can be assessed automatically using test-input, authentic assessment scenarios are difficult to create for a robotics class.

Hence, students can hardly practice what they are learning.

We strongly believe in the CONSTRUCTIONISM approach. Defined by Seymour Papert around 1980, the constructionist approach advocates learning by making. That is, you are not learning when somebody is teaching you; you are learning when you practice.

If you don’t have a robot to practise with, you cannot learn properly and at full speed.

Concepts are complex to understand

The concepts of learning how to make a robot work are complex (how to make a robot grasp, how to build a navigation program, how to make it reason, etc.), which makes the learning curve very steep.

How many of you out there use ROS? Ok, how would you explain ROS to others who know nothing about it? Difficult, isn’t it?

The most meaningful unit in robotics for a newcomer is making the robot do something. Given the student's program to make the robot do something, the robot either does it or not. This is very easy to understand: you push this button and the robot does that (or not, if you have a failure in your program). The feedback on your performance is clear and direct.

Getting feedback from the robot performing an action is not the same as getting it from the compiler telling you that you forgot to include something in the CMakeLists.

The longer it takes to associate the errors with the result on the robot, the more difficult it is for the student to close the loop of understanding, and hence to learn why something does or does not work that way. This is the concept of delayed reward: the longer the reward is delayed, the more difficult it is to understand the cause-effect relation, and the longer it will take to learn the associated concept.

The challenge is then to provide such feedback very quickly, even if there are many steps and concepts involved in a robot action. Hence, teaching materials cannot start by indicating how to install everything, how to compile, or how to create and download new packages.

Nobody needs to know the definition of ROS to start using it. It is not a good idea to start teaching ROS by defining what a package is, or what a workspace is, then how catkin works, then what is the ROS_MASTER_URI, etc.

All of those are important, but not at this early stage. Going at that pace will take ages to make the robot do something, and for the student to understand.


Documentation made by teachers for teachers

We found that documentation, tutorials and classes are made by people who already know robotics, for people who already understand robotics. I call these types of tutorials reference tutorials. These tutorials basically start by describing concepts one by one, following a hierarchy of well-defined definitions made by people who already understand the subject. If you ask the reason for a concept, or what you need it for, they will tell you that you will understand later (but later is too late!).

Generally speaking, robotics is sometimes taught as if it were for the instructor. It is very structured, with a clear concept hierarchy. It is a problem that the documentation of robot programming frameworks works as a reference tutorial for the engineers who wrote it. Of course, you can learn from it, but it is a lot harder and much slower (I learnt ROS that way).

I do believe we do not learn this way. One thing is the way we structure information once we know the whole space, and another is the way we structure it while we are learning. Very different things.

The solution we propose

The method we propose is to make the students set aside as many concepts as possible and concentrate on doing things with the robots. Complex concepts will be taught later, when the student has the base. This approach is based on how we learn, and not on how we order our thoughts once we have them.

We propose a solution to speed the learning that provides the following steps:

1. In media res. That is the trick Star Wars used to make the movie more appealing (one that the new Star Trek movies have copied successfully, I think).

This means: start in the middle, not at the edges. The idea is to put the student in the middle of the knowledge to learn and let them move outwards, discarding a lot of things along the way.

2. Learning by doing. Not by reading or by typing, by working on a robot.

This is the successful case of Code Academy. There you can learn to code by doing and quickly observing the result of what you have done.

We must have the same type of environment for robotics learning. Do, then observe on the robot the result of your program.

3. A step by step guide. Showing only what is required for the next action of the robot, and removing out of the equation what is not required/useful at that specific learning moment (even if important for the success of the step).

For example, you must have a catkin_ws in order to launch a ROS package, but we do not believe that we must explain to a new student how to create a catkin_ws. Instead, we provide it already created and ready to be used, and once the student becomes used to it, then we show them how to create their own.


Implementation

At The Construct, we have integrated all those ideas into a platform that allows students to quickly learn (in 5 days) any concept of robotics programming. I mean, for real robots!

Our platform is called Robot Ignite Academy, and we like to call it the Code Academy for Robotics. We integrated step by step explanations with a robot simulation and program development environment, everything inside a web browser (hence you can learn using any type of device).


At Ignite Academy, students will learn complex robotics concepts by:

  1. Using simulations of the real robots. Many. Examples include Turtlebot-2, Husky, Sphero, Wam, quadcopters… even BB-8! Students use a simulation of the robots as if they had the robot.
  2. Use a step-by-step lesson process that engages the student in doing something with the robot after every step, in order to check that they are understanding the explanation.
  3. Use the description not only to explain, but also to do. Include interactions in the explanation. For example, generate graphics dynamically in the explanation based on what the robot is doing at that moment.
  4. Forget about certain basic concepts (for example, what ROS is… who cares? You don't need to know what ROS is in order to use it and make a robot work). We believe that once you have mastered the real thing, getting those concepts will be easy. But if you put them on the pile of things you have to learn together with the others, it makes the whole thing difficult to swallow.

All that without having to install anything in your computer, and with simulations at the core of the learning system, not only as something additional.

In our system, the simulation is at the center. We provide different simulated robots that the student has to practice with. They use simulation as if they had the real robot, with the difference that they don’t have to care about building it. It is just provided and ready to be used.

Conclusion

We are providing the Ignite Academy platform online, for anyone. Students can use it for their personal learning. Teachers can use it for teaching during the class. Companies can use it to upgrade their workers.

What is also important to state is that you don’t have to care about installing anything. Connect using a web browser from any type of computer and start learning.

NASA Robonaut 2 simulation: Placing an ISS panel in The Construct https://robohub.org/nasa-robonaut-2-simulation-placing-an-iss-panel-in-the-construct/ Fri, 24 Jun 2016 16:35:10 +0000 http://robohub.org/nasa-robonaut-2-simulation-placing-an-iss-panel-in-the-construct/

Image: The Construct

By Miguel Angel

One of the most wanted robot simulations is a robot that can be used for anything, and Robonaut is one of those. NASA kindly released this simulation for public use, and we thought here at The Construct that we could use it to make an even more user-friendly version. We created a test to demonstrate the possibilities that The Construct has to offer for zero-gravity space simulations. We also envisioned this as a proof of concept of how The Construct could be used as a platform for competitions such as RoboCup, especially for space exploration themes.

To launch this simulation you simply have to select the Robonaut2 simulation and select the following launch: $r2_robocup_at_space_1.launch

And there you have it! All the technical hassle was solved by The Construct and you have a ready-to-use Robonaut. This launch will spawn the torso version of Robonaut, with all its controllers, in a zero-gravity space. It will also spawn a taskboard full of functional buttons and switches, without the third panel, which will be spawned separately. Once it's done, you should get something like this:


Image: The Construct

To check that Robonaut is functioning correctly, it should have its arms in an L shape, ready to start moving, as in the picture above. Once everything is running, please check that the taskboard is publishing. The taskboard publishes on a topic the state of the switches and buttons in the panel. In this early version, it only tracks the three lower switches and the security switch on the top right-hand corner of the first panel. To check that all the taskboard systems are working, this is the correct output of the following command in the Webshell:

$rostopic info /taskboard/TaskboardPanelA

Type: gazebo_taskboard/TaskboardPanelA
Publishers: * /taskboard_state_tracker
Subscribers: * /taskboard_systems_actuator

You should see the publisher of the topic and a subscriber that controls when to activate the magnet that holds the panel in position once placed.
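If you want to monitor the taskboard from your own code rather than with rostopic, a minimal subscriber sketch could look like this. The import follows the standard ROS convention for the message type shown above (gazebo_taskboard/TaskboardPanelA); since the message fields are not listed here, the callback simply prints the whole message.

#!/usr/bin/env python
# Sketch: watch the taskboard state from a ROS node.
import rospy
from gazebo_taskboard.msg import TaskboardPanelA

def panel_callback(msg):
    rospy.loginfo("Taskboard state:\n%s", msg)

if __name__ == '__main__':
    rospy.init_node('taskboard_watcher')
    rospy.Subscriber('/taskboard/TaskboardPanelA', TaskboardPanelA, panel_callback)
    rospy.spin()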

Taskboard and panel (images: The Construct)

Once you are ready just launch the grab and place panel 3 launch:


This will reset the power switch and the panel to their starting positions.
Once done, Robonaut will start moving to grab the panel.


Once grabbed, it will place it just in front of the desired spot.


Then it will activate the magnet, by first opening the security lid and then activating the power switches. Once activated, the panel will receive a force that will hopefully place it inside the taskboard.


This launch can be executed as many times as you like, so there is no need to relaunch the simulation every time. There are other features that you might want to explore, like the "model_destination_validator.py", which you can use to evaluate the correct placing of objects based on their position relative to previously placed markers. Very useful for unsupervised learning and competitions.

Here is a full-length demo:

https://www.youtube.com/watch?v=QABa4pbk4qs

Come on, try it! We would love to see intelligent algorithms and systems that make Robonaut work like a charm!

Create a ROS sensor plugin for Gazebo https://robohub.org/create-a-ros-sensor-plugin-for-gazebo/ Thu, 07 Jan 2016 21:35:34 +0000 http://robohub.org/create-a-ros-sensor-plugin-for-gazebo/

There are magnificent tutorials about how to create plugins for Gazebo on the GazeboSim webpage. There are even some tutorials about how to create plugins for Gazebo + ROS, which show that there are several types of plugins (world, model, sensor, system, visual), and indicate how to create a world-type plugin. But recently I needed to create a plugin for a light detector and couldn't find a concrete example. Here's a how-to post showing you how I did it.

How to: light sensor plugin in Gazebo

Following the indications provided in the Gazebo answers forum, I decided to build a very simple light detector sensor based on a camera. Instead of using a raytracing algorithm from the lights, the idea is to use a camera to capture an image, calculate the illuminance of that image, and then publish the illuminance value on a ROS topic.

Since the plugin is meant to be used with ROS, the whole plugin should be compilable using ROS environment.

Creating a ROS package for the plugin

First we need to create the package in our catkin workspace that will allow us to compile the plugin without a problem.

cd ~/catkin_ws/src
catkin_create_pkg gazebo_light_sensor_plugin gazebo_ros roscpp

Creating the plugin code

For this purpose, since we are using a camera to capture the light, we are going to create a plugin class that inherits from CameraPlugin. The code that follows was created taking as a guideline the code of the official gazebo ROS camera plugin.

Create a file called light_sensor_plugin.h inside the include directory of your package, including the following code:

#ifndef GAZEBO_ROS_LIGHT_SENSOR_HH
#define GAZEBO_ROS_LIGHT_SENSOR_HH

#include <string>

// library for processing camera data for gazebo / ros conversions
#include <gazebo/plugins/CameraPlugin.hh>

#include <gazebo_plugins/gazebo_ros_camera_utils.h>

namespace gazebo
{
class GazeboRosLight : public CameraPlugin, GazeboRosCameraUtils
{
/// \brief Constructor
/// \param parent The parent entity, must be a Model or a Sensor
public: GazeboRosLight();

/// \brief Destructor
public: ~GazeboRosLight();

/// \brief Load the plugin
/// \param _sdf The SDF root element
public: void Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf);

/// \brief Update the controller
protected: virtual void OnNewFrame(const unsigned char *_image,
unsigned int _width, unsigned int _height,
unsigned int _depth, const std::string &_format);

ros::NodeHandle _nh;
ros::Publisher _sensorPublisher;

double _fov;
double _range;
};
}
#endif

As you can see, the code includes a node handle to connect to the roscore. It also defines a publisher that will publish messages containing the illuminance value. Two parameters have been defined: fov (field of view) and range. At present, only fov is used, to indicate the number of pixels around the center of the image that will be taken into account when calculating the illuminance.

Then create a file named light_sensor_plugin.cpp containing the following code in the src directory of your package:

#include <gazebo/common/Plugin.hh>
#include <ros/ros.h>
#include "/home/rtellez/iri-lab/iri_ws/src/gazebo_light_sensor_plugin/include/light_sensor_plugin.h"

#include "gazebo_plugins/gazebo_ros_camera.h"

#include <string>

#include <gazebo/sensors/Sensor.hh>
#include <gazebo/sensors/CameraSensor.hh>
#include <gazebo/sensors/SensorTypes.hh>

#include <sensor_msgs/Illuminance.h>

namespace gazebo
{
 // Register this plugin with the simulator
 GZ_REGISTER_SENSOR_PLUGIN(GazeboRosLight)

 ////////////////////////////////////////////////////////////////////////////////
 // Constructor
 GazeboRosLight::GazeboRosLight():
 _nh("light_sensor_plugin"),
 _fov(6),
 _range(10)
 {
 _sensorPublisher = _nh.advertise<sensor_msgs::Illuminance>("lightSensor", 1);
 }

 ////////////////////////////////////////////////////////////////////////////////
 // Destructor
 GazeboRosLight::~GazeboRosLight()
 {
 ROS_DEBUG_STREAM_NAMED("camera","Unloaded");
 }

 void GazeboRosLight::Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf)
 {
 // Make sure the ROS node for Gazebo has already been initialized
 if (!ros::isInitialized())
 {
 ROS_FATAL_STREAM("A ROS node for Gazebo has not been initialized, unable to load plugin. "
 << "Load the Gazebo system plugin 'libgazebo_ros_api_plugin.so' in the gazebo_ros package)");
 return;
 }

 CameraPlugin::Load(_parent, _sdf);
 // copying from CameraPlugin into GazeboRosCameraUtils
 this->parentSensor_ = this->parentSensor;
 this->width_ = this->width;
 this->height_ = this->height;
 this->depth_ = this->depth;
 this->format_ = this->format;
 this->camera_ = this->camera;

 GazeboRosCameraUtils::Load(_parent, _sdf);
 }

 ////////////////////////////////////////////////////////////////////////////////
 // Update the controller
 void GazeboRosLight::OnNewFrame(const unsigned char *_image,
 unsigned int _width, unsigned int _height, unsigned int _depth,
 const std::string &_format)
 {
 static int seq=0;

 this->sensor_update_time_ = this->parentSensor_->GetLastUpdateTime();

 if (!this->parentSensor->IsActive())
 {
 if ((*this->image_connect_count_) > 0)
 // do this first so there's chance for sensor to run once after activated
 this->parentSensor->SetActive(true);
 }
 else
 {
 if ((*this->image_connect_count_) > 0)
 {
 common::Time cur_time = this->world_->GetSimTime();
 if (cur_time - this->last_update_time_ >= this->update_period_)
 {
 this->PutCameraData(_image);
 this->PublishCameraInfo();
 this->last_update_time_ = cur_time;

 sensor_msgs::Illuminance msg;
 msg.header.stamp = ros::Time::now();
 msg.header.frame_id = "";
 msg.header.seq = seq;

 int startingPix = _width * ( (int)(_height/2) - (int)( _fov/2)) - (int)(_fov/2);

 double illum = 0;
 for (int i=0; i<_fov ; ++i)
 {
 int index = startingPix + i*_width;
 for (int j=0; j<_fov ; ++j)
 illum += _image[index+j];
 }

 msg.illuminance = illum/(_fov*_fov);
 msg.variance = 0.0;

 _sensorPublisher.publish(msg);

 seq++;
 }
 }
 }
 }
}

That is the code that calculates, in a very basic form, the illuminance.

Create a CMakeLists.txt

Copy the following code in your CMakeLists.txt

cmake_minimum_required(VERSION 2.8.3)
project(gazebo_light_sensor_plugin)

# Load catkin and all dependencies required for this package
find_package(catkin REQUIRED COMPONENTS
roscpp
gazebo_ros
)

# Depend on system install of Gazebo
find_package(gazebo REQUIRED)

link_directories(${GAZEBO_LIBRARY_DIRS})
include_directories(include ${Boost_INCLUDE_DIR} ${catkin_INCLUDE_DIRS} ${GAZEBO_INCLUDE_DIRS})

add_library(${PROJECT_NAME} src/light_sensor_plugin.cpp)
target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} CameraPlugin)

catkin_package(
CATKIN_DEPENDS
roscpp
gazebo_ros
)

Update the package.xml

Now you need to include the following line in your package.xml, between the tags <export></export>

<gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" />

Now you are ready to compile the plugin. Compilation should generate the library containing the plugin inside your building directory.

Create a world file

Below is an example of a world file that includes the plugin. Save this code in a file named light.world inside the directory worlds of your package. This world file simply loads the camera with its plugin, so it might be a bit ugly, but it will be good enough for your tests. Feel free to add more elements and models in the world file.

<?xml version="1.0" ?>
<sdf version="1.4">
 <world name="default">
 <include>
 <uri>model://ground_plane</uri>
 </include>

 <include>
 <uri>model://sun</uri>
 </include>

 <!-- reference to your plugin -->
 <model name='camera'>
 <pose>0 -1 0.05 0 -0 0</pose>
 <link name='link'>
 <inertial>
 <mass>0.1</mass>
 <inertia>
 <ixx>1</ixx>
 <ixy>0</ixy>
 <ixz>0</ixz>
 <iyy>1</iyy>
 <iyz>0</iyz>
 <izz>1</izz>
 </inertia>
 </inertial>
 <collision name='collision'>
 <geometry>
 <box>
 <size>0.1 0.1 0.1</size>
 </box>
 </geometry>
 <max_contacts>10</max_contacts>
 <surface>
 <contact>
 <ode/>
 </contact>
 <bounce/>
 <friction>
 <ode/>
 </friction>
 </surface>
 </collision>
 <visual name='visual'>
 <geometry>
 <box>
 <size>0.1 0.1 0.1</size>
 </box>
 </geometry>
 </visual>
 <sensor name='camera' type='camera'>
 <camera name='__default__'>
 <horizontal_fov>1.047</horizontal_fov>
 <image>
 <width>320</width>
 <height>240</height>
 </image>
 <clip>
 <near>0.1</near>
 <far>100</far>
 </clip>
 </camera>
 <plugin name="gazebo_light_sensor_plugin" filename="libgazebo_light_sensor_plugin.so">
 <cameraName>camera</cameraName>
 <alwaysOn>true</alwaysOn>
 <updateRate>10</updateRate>
 <imageTopicName>rgb/image_raw</imageTopicName>
 <depthImageTopicName>depth/image_raw</depthImageTopicName>
 <pointCloudTopicName>depth/points</pointCloudTopicName>
 <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
 <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
 <frameName>camera_depth_optical_frame</frameName>
 <baseline>0.1</baseline>
 <distortion_k1>0.0</distortion_k1>
 <distortion_k2>0.0</distortion_k2>
 <distortion_k3>0.0</distortion_k3>
 <distortion_t1>0.0</distortion_t1>
 <distortion_t2>0.0</distortion_t2>
 <pointCloudCutoff>0.4</pointCloudCutoff>
 <robotNamespace>/</robotNamespace>
 </plugin>
 </sensor>
 <self_collide>0</self_collide>
 <kinematic>0</kinematic>
 <gravity>1</gravity>
 </link>
 </model>
 </world>
</sdf>

Create a launch file

Now comes the final step: to create a launch that will upload everything for you. Save the following code as main.launch inside the launch directory of your package.

<launch>
 <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
 <include file="$(find gazebo_ros)/launch/empty_world.launch">
 <arg name="verbose" value="true"/>
 <arg name="world_name" value="$(find gazebo_light_sensor_plugin)/worlds/light.world"/>
 <!-- more default parameters can be changed here -->
 </include>
 </launch>

Ready to run!

Now launch the world. Be sure that a roscore is running on your machine, and that the GAZEBO_PLUGIN_PATH environment variable includes the path to the new plugin.

Now execute the following command:

roslaunch gazebo_light_sensor_plugin main.launch

You can see what the camera is observing by running the following command:

rosrun image_view image_view image:=/camera/rgb/image_raw

You can also see the illuminance value by watching the published topic:

rostopic echo /gazebo_light_sensor_plugin/lightSensor
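And if you want to consume the value from your own node instead of echoing it, a minimal subscriber sketch (using the same topic echoed above) would be:

#!/usr/bin/env python
# Sketch: read the illuminance published by the plugin from a ROS node.
import rospy
from sensor_msgs.msg import Illuminance

def light_callback(msg):
    rospy.loginfo("Illuminance: %.2f", msg.illuminance)

if __name__ == '__main__':
    rospy.init_node('light_listener')
    rospy.Subscriber('/gazebo_light_sensor_plugin/lightSensor', Illuminance, light_callback)
    rospy.spin()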

 


Conclusion

Now you have a plugin for your Gazebo simulations that can measure (very roughly) the light detected. You can use it in your local copy of Gazebo or even inside The Construct.

7 reasons why robotics companies should be using simulators https://robohub.org/why-robotics-companies-must-use-simulators/ Mon, 28 Dec 2015 12:44:59 +0000 http://robohub.org/why-robotics-companies-must-use-simulators/

During my long experience at Pal Robotics developing humanoid robots, I learned just how much a good simulator can help speed up the development and maintenance processes of robot building. To my surprise, however, I recently discovered that lots of robotics companies don’t use a simulator for this purpose. In this post I’ll share with you the key reasons why you should be using a simulator for your robot development.

1. Test ideas prior to building

Simulations are the best place to start when developing a new robot. By using a simulator to develop your robot, you can quickly identify whether your idea is feasible, with little expense. You can also easily test and discover the physical constraints that your robot must face in order to accomplish its goal.

Simulators allow you to easily and quickly test many different ideas for the same robotic problem, and then decide which one to build based on actual data.

2. Parallelise development

Once your robot has been defined and tested in the simulator, you can start its physical construction. The good thing with simulators is that they allow you to keep doing tests even if your robot is not built yet.

In other words, while your mechanical or electrical departments are building the actual robot, your software department can start developing software and testing it on the simulated robot. Current simulators are close enough to reality that you can develop your software for the simulated robot, make sure it works there, and then transfer the software to the real robot once it has been constructed. Chances are that your software will work with the real robot with only minor modifications.

You do not have to wait for your robot to be built before developing its software.

3. Test in different environments

Your robot software could have many bugs that only appear once you take it outside your lab. By using a simulator, you can test your robot in many different environments.


4. Debug in simulation

Your robot software should be debugged first in the simulator. In fact, you should never allow a software engineer to test software on the real robot if the software does not work on the simulator. It won’t work!

Not only does debugging in the simulator save you a lot of time (since testing and correcting basic errors on the real robot is very time consuming), but it also prevents your untested software from breaking your robot!

5. Develop modifications and improvements

With your simulation in place, you can think about modifications for your robot and test them. What is the next feature that your clients are requesting? You can easily test whether new features are possible, to what extent, and what they would require you to modify and add. Just use the simulation to see if they are possible or not.

6. Functionality testing suite

Once your robot is built and the software is working properly inside it, you will have to maintain it. Correcting bugs, adding new features, adapting to changes in the frameworks you use … all this requires that you update the software that is already running on your robot.

Chances are that some of these changes will break some of the functionality that already works on the robot. To minimize the damage, you must create a suite of functionality tests.

Functionality tests are tests that execute on a simulation of your robot and check whether the robot is still able to function in the simulation as expected. You can run these tests at night. The suite must download the latest code from your repository, compile it, and execute it on the simulated robot.

For each functionality, the robot must be placed in a specific situation in the simulation that allows you to test that functionality. A reward function must check whether the robot was able to accomplish the functionality or not. In the case that it cannot, the suite should send an email to the person in charge of that functionality, in order to make that person aware of the regression.
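As an illustration, a very small nightly runner along those lines could look like the sketch below. Every command, test name, path and email address here is hypothetical; adapt it to your repository, build system, test launch files and mail setup.

#!/usr/bin/env python
# Sketch of a nightly functionality-test runner. All commands, test names
# and addresses are hypothetical placeholders.
import subprocess
import smtplib
from email.mime.text import MIMEText

def run(cmd):
    return subprocess.call(cmd, shell=True) == 0

def notify(owner, functionality):
    msg = MIMEText("Functionality '%s' failed in last night's simulation run." % functionality)
    msg['Subject'] = 'Nightly functionality test failure'
    msg['From'] = 'ci@example.com'
    msg['To'] = owner
    smtplib.SMTP('localhost').sendmail(msg['From'], [owner], msg.as_string())

tests = {  # functionality -> (test command, person in charge)
    'navigation': ('rostest my_robot_tests navigation.test', 'nav_dev@example.com'),
    'grasping':   ('rostest my_robot_tests grasping.test', 'arm_dev@example.com'),
}

if run('git pull') and run('catkin_make'):
    for functionality, (cmd, owner) in tests.items():
        if not run(cmd):
            notify(owner, functionality)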

[Image: REEM robot using both arms]

7. Make your robot popular

Whether your company targets a mainstream audience or a specialized one, you will want your robot to be used. By creating and freely providing a simulation of your robot, you are giving your audience a chance to experiment and find uses for your robot without being tied to an expensive purchase. You can also let people publish tools and ideas for your robot on the Internet, which further extends your reach.

Conclusion

Simulators are a powerful tool for robotics companies: they can speed up the development process and prevent costly errors.

In a nutshell, follow these steps next time you are developing a robot:

  1. Do some drawings and design the robot you want to build based on the task you are trying to solve.
  2. Implement those designs in the simulator and perform some tests.
  3. Modify the robot based on the simulator results.
  4. Start building the real robot. At the same time use the simulator to develop its software.
  5. Test your software on the real robot. If errors are found, go back to the simulator to correct them and retest; return to the real robot only once the errors are fixed.
  6. If customers require modifications or improvements, use the simulator to test whether they are (theoretically) possible.
  7. Once your robot is built (including software), build a functionality test suite that runs functionality tests on the simulation every night, to prevent changes in the software from breaking the robot’s functions.
  8. Finally, provide a model of your robot (including controllers) for free and help users make use of it!

Robot Race to Hawaii Contest https://robohub.org/the-robot-race-to-hawaii-contest/ Tue, 24 Nov 2015 22:16:16 +0000 http://robohub.org/the-robot-race-to-hawaii-contest/

The Robot Race to Hawaii is a robotics contest where participants program a humanoid Nao robot to run a 10-meter race in the shortest time possible. The whole competition is run in a Webots simulation hosted on The Construct platform. Participants need only a computer equipped with a web browser, are not required to download any applications, and there is no cost to enter.

The competition is simple, making it a good entry point and hands-on exercise for newcomers to robotics. The Construct provides the simulated environment with the robot already in it, as well as a sample controller that participants have the option to use as a starting point for building the walking controller. Because the challenge is hosted in The Construct and stored in the cloud, contestants participate via their web browser and do not need to worry about downloading and installing software.

Teachers, you can use this contest as an assignment for your students, as participants can expect to learn:

  • How to code in Python. The controllers of the robots must be written in Python, which is one of the best languages for introducing someone to programming, and especially to robotics. A good place to start learning Python is Codecademy.
  • How a robotics simulator works. The simulator used for the contest is Webots. Participants must learn how this simulator works in order to understand how all the parts interact (robot simulation, controller, computer, etc.). Fortunately, there are nice tutorials to learn about it here.
  • How to build models of objects for robot simulators. Throughout the competition, participants will be asked to create simulated models of simple objects, like a chair, a glass or a blackboard, once per week. This will give them a deeper understanding of what goes into a simulation. The same Webots tutorial linked above contains information about how to create a model.
  • The basics of biped walking. Even though the contest provides a sample controller that allows the robot to walk, participants may be interested in applying more complex techniques for robot walking. Learning how to make a biped walk can help create faster controllers (a minimal Webots controller sketch in Python follows this list). You can learn about robot walking here or read the many scientific papers available on the web.
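
For readers who want a feel for what a Webots controller looks like, here is a minimal sketch in Python. This is not the contest's sample controller: the joint names and the naive oscillation are assumptions made purely for illustration, and the exact API calls depend on your Webots version (newer releases use getDevice(), older ones getMotor()).

```python
# Minimal Webots controller sketch (not the contest's sample controller).
# Joint names are assumptions; check the Webots docs for your version.
import math
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

left_hip = robot.getDevice("LHipPitch")    # assumed Nao joint name
right_hip = robot.getDevice("RHipPitch")   # assumed Nao joint name

t = 0.0
while robot.step(timestep) != -1:
    t += timestep / 1000.0
    # Very naive oscillation of the hip joints in opposite phase.
    # A real gait also needs knee/ankle coordination and balance control.
    left_hip.setPosition(0.25 * math.sin(math.pi * t))
    right_hip.setPosition(-0.25 * math.sin(math.pi * t))
```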

More complex topics, such as writing and compiling C++ controllers, using ROS as a robotics framework, or programming more advanced robot skills like opening doors or grasping objects, have been deliberately left out of this challenge in order to make it suitable for people with minimal knowledge of robotics. STEM high school students and first-year robotics students should find the competition accessible.

Participation is open to anyone, however, so expect some pros to enter.

By the way, did I mention that the winner of the contest will get a trip to Hawaii?!?! Register here: contest.theconstructsim.com

Good luck!

Echoes of the DRC at Humanoids 2015 https://robohub.org/echoes-of-the-drc-at-humanoids-2015/ Fri, 13 Nov 2015 13:25:14 +0000 http://robohub.org/echoes-of-the-drc-at-humanoids-2015/

Last week, the Humanoids 2015 conference was held in Seoul, Korea. It was the first event to focus on humanoid robots since the DARPA Robotics Challenge (DRC) earlier this year and, in fact, the host country is home to the DRC winners, Team KAIST. Hence, it was no surprise that the event was very DRC oriented and even featured a mini DARPA challenge.

Humanoids is a small conference compared to IROS or ICRA, and I really appreciate that because it gives you the chance to interact more closely with the people who actually work in the humanoid field: fewer people, fewer concurrent sessions and more interaction.

The conference lasted three days, with the first dedicated to workshops and the others to the main conference itself.


I attended the workshop “Benchmarking bipedal functions of humanoid robots: towards a unified framework” and gave a talk about how to use cloud simulations for benchmarking in a general and unbiased way. Very interesting papers were presented at that workshop, especially on how to build a general framework for benchmarking walking in humanoids (by partners in the Koroibot project).

The main conference was divided into plenary talks, paper presentations and interactive sessions. There were guided visits to the laboratories of KIST (the Korean Institute for Science and Technology) and a full demo of the Hubo robot doing all the DRC tests.


I loved the two plenary talks, especially the first one, given by Aude Billard from EPFL. She presented a beautiful line of research towards robots that can do better than humans (super-human competences). She showed results from a robot that, using dynamical systems, was able to catch a racket thrown at it. It was impressive to see the robot catch rackets and even balls, something a human is not able to do.

The second plenary talk was given by Russ Tedrake, a DRC participant. He outlined several problems the teams faced when trying to control a robot during the challenge, and described his quest to exploit the structure of the governing equations of model-based control in order to create more robust controllers for robot dynamics. I really enjoyed his talk, even if I did not understand it very much.

Paper presentations were limited to ten minutes per publication and I recommend you check the whole list of presented papers here.

The Technical Tours were visits to some of the KIST laboratories, such as the Center for Intelligent Robotics and the Healthcare Robotics Research Group. The first lab is dedicated to the use of social robots to enhance the cognitive abilities of the elderly; it focuses on small humanoids that engage the elderly in games that exercise their cognitive skills. The second lab is more concerned with robots for surgery.


Now, the most impressive activity was the DRC demo. It was amazing to see how the KAIST team performed a complete demo from beginning to end without any trouble. The robot was able to get into the car, drive, get out, open a door, close the valve, operate the drill, walk and finally climb the stairs. And they did this on three consecutive days. Based on my experience at several RoboCup competitions, I know how difficult it is to make all of this work for a demo, so I cannot say anything but: bravo KAIST!

Evaluating the conference from my point of view, I feel that it is very (perhaps too) focused on control for humanoids. The vast majority of papers are dedicated to the specific control problems that humanoids face and that make them so much more complex than wheeled robots. Personally, I noticed a lack of papers concerned with cognitive issues, where the fact that the robot is a humanoid might be exploited to create more complex behaviours.

There could also have been a panel of experts of some sort to analyse the results of the DARPA Robotics Challenge and how a second round of the contest would improve humanoid technology (or not).

Perhaps next year!

The Construct: Robot simulations in the cloud https://robohub.org/the-construct-robot-simulations-in-the-cloud/ Tue, 20 Oct 2015 16:05:58 +0000 http://robohub.org/the-construct-robot-simulations-in-the-cloud/

The Construct is a Barcelona-based startup created to simplify the simulation of robots. The goal is to allow anybody to simulate complex robots and environments with minimal knowledge, without having to install or maintain anything on their computer and without having to build the simulations from scratch.

With The Construct, users run their robot simulations in the cloud, using a web browser, so they don’t need to install, configure or store files on their computer. Any WebGL-enabled browser will suffice.

Once logged in, users must select which simulator to run. Currently available simulators include Gazebo, Webots and the DRC simulator, and more are in the pipeline. All simulators are fully integrated with ROS (though using it is not mandatory).


Users will also find different versions of the same simulator available. This means that they can run a Gazebo 4.0 with ROS Indigo under Ubuntu 14.04, or a Gazebo 1.9 with ROS Hydro under Ubuntu 12.04. It all depends on their needs.

An interesting feature of The Construct is that it allows sharing of simulations: users can work on the same simulation at the same time, each from their own point of view. This opens up a lot of possibilities, such as collaboration between workmates, worldwide robotics classes in real time, or hosting a competition where all the participants share the same interface and resources.


The Construct running on an iPad2 (Nao robot by Aldebaran on Webots)

Since simulations are created and accessed through a web browser, users can utilize any type of computer or device – a Linux or Windows PC, a Mac, a tablet or a smartphone.

The Construct running on an iPhone 4S (WAM arm by Barrett on Gazebo)

As the cloud provides high CPU power, it is possible to simulate very large environments, such as a city, or very complex robots.

The simulation of a city, which includes the sea and an island.

Users can launch their already existing simulations in The Construct without a problem. It is fully compatible with desktop simulators, and allows switching from desktop to cloud and vice versa at any time without losing any simulation features.

The Construct was an official tool at the 2015 IEEE-RAS Summer School on Experimental Methodology, Performance Evaluation and Benchmarking in Robotics, and its developers are currently working with the University of Pisa and the Center Piaggio to build an off-the-shelf grasping simulation system using a soft robotic hand.

Recently, the company participated in the Robot Launch 2015 startup competition, winning the Reader’s Pick award and the Siemens Industrial Award. This latter award makes The Construct a Frontier Partner of Siemens.

You can see a demo about how to launch your already created simulations in The Construct by visiting this post.

You can also sign up for a free account with The Construct and start running your simulations in the cloud. Just visit us here.
