Oliver Mitchell – Robohub
https://robohub.org | Connecting the robotics community to the world

Tackling loneliness with ChatGPT and robots
https://robohub.org/tackling-loneliness-with-chatgpt-and-robots/ | Tue, 05 Sep 2023

As the last days of summer fade, one grows wistful for time spent with loved ones sitting on the beach, traveling on the road, or just sharing a refreshing ice cream cone. However, for many Americans such emotional connections are rare, contributing to high suicide rates and physical illness. According to a recent report by the Surgeon General, more than half of the adults in the USA experience loneliness, with only 39% reporting feeling “very connected to others.” As Dr. Vivek H. Murthy states: “Loneliness is far more than just a bad feeling—it harms both individual and societal health. It is associated with a greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death. The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day and even greater than that associated with obesity and physical inactivity.” In dollar terms, this epidemic accounts for close to $7 billion of Medicare spending annually, on top of $154 billion of yearly worker absenteeism.

As a Venture Capitalist, I have seen a growing number of pitch decks for conversational artificial intelligence in place of organic companions (some of these have wellness applications, while others are more lewd). One of the best illustrations of how AI-enabled chatbots are entering human relationships is in a recent article by the New York Times reporter Erin Griffith, who spent five days testing the AI buddy Pi. Near the end of the missive, she exclaims, “It wasn’t until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my ‘aha’ moment with Pi. I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hang-up that often prevents me from getting started. ‘Good morning,’ I typed into the app. ‘I don’t have enough time to do everything I need to do today!’ With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me — and it worked.” As the reporter reflected on her weekend with the bot, she commented further, “I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.”

In a population health study cited by the Surgeon General, the demographic that is most isolated in America is people over the age of 65. This is also the group that is most affected by physical and cognitive decline due to loneliness. In their June paper for Neurology Live, “ChatGPT: A Promising Tool to Combat Social Isolation and Loneliness in Older Adults With Mild Cognitive Impairment,” Doctors Qi and Wu surveyed the benefits of AI. According to the authors, “ChatGPT can provide emotional support by offering a nonjudgmental space for individuals to express their thoughts and feelings. This can help alleviate loneliness and provide a sense of connection, which is crucial for well-being.” The researchers further cited ancillary uses: “ChatGPT can also assist with daily tasks and routines. By offering reminders for appointments, medications, and other daily tasks, this AI model can help older adults with MCI (mild cognitive impairment) maintain a sense of independence and control over their lives.” The problem with ChatGPT for geriatric populations is the form factor, as most seniors are not especially tech-savvy. This is an opportunity for roboticists.

Last Tuesday, Intuition Robotics announced an additional $25 million in financing to expand its “AI care companions” to more senior households. While its core product, ElliQ, does not move, its engagement offers the first glimpse of the benefits of social robots at scale. To learn about the company’s future, I interviewed its founder and CEO, Dor Skuler, last week. He shared with me his vision: “At this time, we don’t have plans to add legs or wheels to ElliQ, but we are always looking to add new activities or conversational features that can benefit the users. Our goal is to continue getting ElliQ into as many homes as possible to spread its benefits to even more older adults. We plan to create more partnerships with governments and aging agencies and are developing more partnerships within the healthcare industry. With this new funding, we will capitalize on our strong pipeline and fund the growth of our go-to-market activities.”

Unlike the stuffed-animal designs of Paro and Tombot, ElliQ looks like an attractive piece of home furnishing (and winner of a 2023 International Design Award). According to Skuler, this was very intentional: “We placed very high importance on the design of ElliQ to make it as easy as possible to use. We also knew older adults needed technology that celebrated them and the aging process rather than focusing on disabilities and what they may no longer be able to do by themselves.” At the same time, the product underwent a rigorous testing and development stage that put its customer at the center of the process. “We designed ElliQ with the goal of helping seniors who are aging in place at home combat loneliness and social isolation. This group of seniors who participated in the development and beta testing helped us to shape and improve ElliQ, ensuring it had the right personality, character, mannerisms, and other modalities of interaction (like movement, conversation design, LEDs, and on-screen visuals) to form meaningful bonds with real people.” He further observed that in testing with hundreds of seniors, “we’ve witnessed older adults forming an actual relationship with ElliQ, closer to how one would see a roommate rather than a smart appliance.”

The results since deploying in homes throughout New York have been astounding, keeping older populations more socially and mentally engaged. As ElliQ’s creator elaborated, “In May 2022, we announced a partnership with the New York State Office for the Aging to bring 800+ ElliQ units to seniors across New York State at no cost to the end users. Just a few weeks ago, we announced a renewal of that partnership and the amazing results we’ve seen so far, including a 95% reduction in loneliness and great improvement in well-being among older adults using the platform. ElliQ users throughout New York have demonstrated exceptionally high levels of engagement consistently over time, interacting with their ElliQ over 30 times per day, 6 days a week. More than 75% of these interactions are related to improving older adults’ social, physical, and mental well-being.”

To pedestrian cynics, ElliQ might look like an Alexa knockoff, leading them to question why the FAANG companies couldn’t simply cannibalize the startup. Skuler’s response: “Alexa and other digital assistant technology were designed with the masses in mind or for younger end users. They also focus mainly on reactive AI, meaning they do not provide suggestions or talk with users unless prompted. ElliQ is designed to engage users over time, using a proactive approach to engagement. Its proactive suggestions and conversational capabilities foster a deep relationship with the user. Moreover, ElliQ’s integration of Generative AI and Large Language Models (LLMs) enables rich and continuous conversational experiences, allowing for more contextual, personalized, and goal-driven interactions. These capabilities and unique features, such as drinking coffee with ElliQ in cafes around the world or visiting a virtual art museum, bring ElliQ and the user closer together, creating trust that allows ElliQ to motivate the older adult to lead a more healthy and engaged lifestyle.”

While ElliQ and OpenAI’s ChatGPT have shown promise in treating mental illness, some health professionals are still not convinced. At MIT, professor and psychologist Sherry Turkle worries that interactions with machines “push us along a road where we’re encouraged to forget what makes people special.” Dr. Turkle demurs, “The performance of empathy is not empathy. The area of companion, lover, therapist, best friend is really one of the few areas where people need people.”

Submersible robots that can fly
https://robohub.org/submersible-robots-that-can-fly/ | Thu, 13 Jul 2023

Last month, the entire world was abuzz when five über-wealthy explorers perished at the bottom of the Atlantic Ocean near the grave of the once “unsinkable ship.” Disturbingly, during the same week, hundreds of war-torn refugees drowned in the Mediterranean with little news of their plight. The irony of machine versus nature illustrates how tiny humans are in the universe, and that every soul, rich or poor, is precious. It is with this attitude that many roboticists have been tackling some of the hardest problems in the galaxy, from space exploration to desert mining to oceanography to search & rescue.

Following the news of the implosion of the Titan submersible, I reached out to Professor F. Javier Diez of Rutgers University for his comment on the rescue mission and the role of robots. The aerospace academic is also the entrepreneur behind a novel drone technology company whose vehicles can fly and swim autonomously within the same mission. As he explains, his approach could’ve saved time and money in ascertaining the same unfortunate answer: “I think we could go down to 12,000. No problem. So now imagine sending a 20-pound [robot] down to 12,000 feet. You can do this in a couple of hours. You just throw it overboard, or you fly, you know you don’t need to bring in a crane, a gigantic ship, and all this very expensive equipment just to do that first look.” Dr. Diez’s sentiment was validated during the first press conference of US Coast Guard Rear Adm. John Mauger, when he cautioned the media about the huge logistical undertaking of moving such large equipment to a remote, hostile area of the globe. Diez continued, “We could have been there in a couple of hours. So of course, you know there’s more to it. But I was just saying that long term I can see how very small robots like ours for search and rescue could be huge. We are doing some work. We actually put some proposals with the submarine community. I think this has a huge application because again, these 20-pound [drones] are something you can deploy from anywhere, anytime.”

In breaking down his invention, the drone CEO elaborated on the epiphany that happened in his lab years earlier: overcoming the conventional wisdom that an uncrewed system operating in two modalities (marine and air) required two separate propulsion systems. He further noted that carrying two propulsion systems is very inefficient in terms of both energy consumption and functionality. “And this was I would say a mental barrier for a lot of people, and it still is when they see what we put into it.” He explained how he first had to overcome so many industry naysayers, “I brought this to some folks at NASA, and everyone was saying, it’s not going to work. And then when you look at what’s behind the propeller design and the motor design, you realize that we cannot be living on an edge. We designed propellers for a very specific condition, which is air.” However, the innovator challenged the status quo of the aerospace community by asking, “Can you design propellers and motors for water? And it turns out that you can.” He deconstructed his lab’s research, “So if you look at the curve for air, and you look at the curve for water, they intersect, and if you do it the right way, you can be efficient in both places. So that was the breakthrough for me to be able to show. And we actually show that you can design propellers that can be efficient in both air and underwater.”

After sharing insights into the design, he then conveyed to me that the programming of the flight controls was the next hurdle to overcome. “The next challenge is the transition. So we worked very hard from the very beginning on that transition from water. We actually have a patent on this and it’s really the heart of our technology. I call it dual-plane propulsion. You have 2 propellers on the top and two propellers on the bottom. So when you’re on the surface, the bottom ones are in the water and the top ones are in the air. So the bottom ones are like when you have a baby and you are pull-swimming. Babies are not very good at swimming, but if you put your hand on their bellies all of a sudden they become great swimmers. So think of it as the bottom propellers. When the vehicle is on the surface, the bottom propellers are keeping it very very stable. So now that you have that stability, the top [propellers] can work together to get [the drone] out of the water. So that’s how we accomplish the continuous transition. You can go in and out 100 times,” bragged the Professor.

Diez’s company SubUAS is not a theoretical concept, but an actual product that is currently deployed by the US military and looking to expand into commercial markets. “So we’d been a hundred percent with the Department of Defense. They really supported the development of the technology.” He is now itching to expand from a Navy Research-funded project to new deployments in the municipal and energy sectors. “We have done a lot of different types of inspections related to ship pylons. Now, we have [Florida’s] Department of Transportation interested in this technology,” said the startup founder. “What I realized over the last year or so is that defense has its own speed. You cannot really push it. There is a specific group now in defense that is encouraging us, but it takes a couple of years,” he quipped. Optimistically, he envisions being profitable very soon by opening up the platform for commercial applications. “Now we’re starting to see the fruits of that [effort]. I can tell you that we got approved in Europe to do offshore wind turbine inspection later this summer.” However, he is most excited by bridge inspections: “We have over half a million bridges in the USA. And like at least 50,000 to 200,000 have something seriously wrong with them. I mean, we’re not doing enough inspections. So having a vehicle like the Naviator that can look at the underwater part of the bridge is huge.”

He has also been approached by several companies in the energy industry. “And then there are a lot of interesting assets within the oil and gas, but we are discovering this. It’s kind of almost like a discovery phase because nobody has ever had the capability of doing air and marine.” He described that there are many robots like ROVs (Remotely Operated Vehicles) inspecting rigs below the water’s surface and aerial drones looking from the air, but no one is focused on the splash zone [where the two meet], as they never had dual modality before. He further illustrated the value proposition of this specific use case, “Nobody gets close to the surface. So they’re saying that that’s a huge application for us.” Long-term, Diez imagines replacing tethered ROVs altogether as his system is easier (and cheaper) to deploy.

Today, SubUAS’ business model is on an inspection basis, but over time it will center around data collection as they are the only waterproof aerial drone on the market that can swim. “We go to the bridge inspectors, and we work with them to simplify their lives, and at the end of the day reduce the risk for the diver. So they know what we are doing is making their lives easier.” However, that is only the tip of the iceberg, because “it’s not so much about the hardware or the sensors, but the data that you collect. We think cloud services are huge as it allows you to sort and analyze it anywhere.” He concluded by sharing that his next model will be utilizing a lot of artificial intelligence in interpreting the condition and autonomously planning the missions accordingly. Maybe soon, virtual explorers could look at shipwrecks as well from the comfort (and safety) of their couches.

Automate 2023 recap and the receding horizon problem
https://robohub.org/automate-2023-recap-and-the-receding-horizon-problem/ | Thu, 01 Jun 2023

“Thirty million developers” are the answer to driving billion-dollar robot startups, exclaimed Eliot Horowitz of Viam last week at Automate. The hushed crowd of about 200 hardware entrepreneurs listened intently to MongoDB‘s founder and former CTO (a $20Bn success story). Now, Horowitz aims to apply the same approach he took to democratizing cloud data applications to mechatronics. As I nudged him with questions about how his new platform will speed complex robot deployments to market, he shared his vision of the Viam developer army (currently 1,000+ strong) creating applications that can be seamlessly downloaded on the fly to any system and workflow. Unlike ROS, which is primarily targeted at the current community of roboticists, Viam is luring the engineers that birthed ChatGPT to revolutionize uncrewed systems with new mechanical tasks addressing everyday needs. Imagine generative AI prompts for SLAM, gripping, computer vision, and other complex manipulation tasks with drag-and-drop ease.

Interviewing Horowitz recalled my discussion a few months back with Dr. Hal Thorsrud of Agnes Scott College in Georgia. Professor Thorsrud teaches a novel philosophy course at the liberal arts institution titled “Introduction to Artificial Intelligence.” Similar to Horowitz, Thorsrud envisions an automated world whereby his graduates would be critical in thinking through the ethical applications of robots and AI. “Ethics has to become an engineering problem, which is fascinating because, I mean, the idea is that we need to figure out how we can actually encode our ethical values into these systems. So they will abide by our values in order to pursue what we deemed to be good,” remarked Thorsrud.

According to Thorsrud’s syllabus, the class begins: “with a brief survey of positions in the philosophy of mind in order to better understand the concept of intelligence and to formulate the default position of most AI research, namely Computationalism. We then examine questions such as ‘What is a computer?’, ‘What makes a function or number computable?’, ‘What are algorithms and how do they differ from heuristics?’ We will consider fundamental issues in AI such as what makes a system intelligent, and whether computers can have minds. Finally, we will explore some of the ethical challenges that face AI such as whether intelligent artificial systems should be relied upon to make important decisions that affect our lives, and whether we should create such systems in the first place.”

In explaining the origins of his course, Dr. Thorsrud recalled, “A lot of my students are already interested in philosophy. They just don’t know it. And so, in fact, just recently my department has joined forces with neuroscience. We’re no longer a Philosophy Department. We’re now Philosophy and Neuroscience, and now the Department of Law, Neuroscience, and Philosophy. Because these students are interested in mind, they are interested in intelligence, but they don’t realize that philosophy has been dealing with an attempt to understand the nature of mind from the very beginning, and the nature of intelligence from the very beginning. So we have a lot to offer these students; the question is how to reach them. So that’s what kind of started me off on this different path, and, in the meantime, I think the same is true of artificial intelligence.”

Thorsrud elaborated that this introductory course is only the first step in a wider AI curriculum at Agnes Scott, as endeavors like Viam and ChatGPT converge in the coming years to move the automation industry at hyperspeed. Already, the AI philosopher sees how GPT is challenging humans to stand out: “The massive growth in the training data and the parameters, the weights that the machine learning was operating on, really paid off.” He continued to illustrate how dystopian fears are unfounded, “I mean, we have a tendency to anthropomorphize things like ChatGPT and it’s understandable. But as far as I can tell, it’s a long way from the intelligence of my dog, a long, long way.” He is realistic about the speed of adoption, “Well, as a philosopher, they’re never going to be able to get to the point where they can give me a credible adjudication, because human judgment, you know, it is. And, this is another example of the ever-present receding horizon problem. First, you know that computers will never be able to beat a human at chess. Okay, fine, computers will never be able to beat a human at Go. Fine, computers will never be able to write. And so we keep setting these limits down, and then surpassing them.”

At Automate, I had the chance to catch up with ff Venture Capital portfolio company, PlusOne Robotics, and its amazing founder, Erik Nieves. While the talk in the theater was about the future, Nieves illustrated on the floor what is happening today. Impressively the startup is close to one million picks of depalletizing packages and sorting goods for the likes of FedEx and other leading providers of shipping & logistics. PlusOne’s proprietary computer vision co-bot platform is not waiting for the next generation of developers to join the ranks, but building its own intelligent protocols to increase efficiencies on the front lines of e-commerce fulfillment.

As Brian Marflak, of FedEx, remarked, “The technology in these depalletizing arms helps us move certain shipments that would otherwise take up valuable resources to manually offload. Having these systems installed allows team members to perform more skilled tasks such as loading and unloading airplanes and trucks. This has been a great opportunity for robotics to complement our existing team members and help them complete tasks more efficiently.”

Marflak’s sentiment was shared by the 25,000+ attendees of Automate who filled the entire Detroit Convention Center. A big backdrop of the show was how macro labor trends and shortages are exacerbating the push towards automation (and thus moving the horizon even further). According to the most recent reports, close to 20% of all US retail sales now occur online, with over 20 billion packages shipped every year, growing at an annual rate of 25%. This means even if the e-commerce industry were able to hire a million more workers, there would not be enough (organic) hands to keep up. As Nieves puts it, “The growth of e-commerce has placed tremendous pressure on shipping responsiveness and scalability that has significantly exacerbated labor and capacity issues. Automation is key, but keeping a human-in-the-loop is essential to running a business 24/7 with greater speed and fewer errors. With the ongoing labor shortages, I believe we’ll see an increase in the adoption of Robots-as-a-Service (RaaS) to lower capital expenditures and deploy automation on a subscription basis.” Get ready for Automate 2024, as the convention moves for the first time to an annual gathering!

]]>
Countering Luddite politicians with life (and cost) saving machines
https://robohub.org/countering-luddite-politicians-with-life-and-cost-saving-machines/ | Sun, 04 Dec 2022

Earlier this month, Candy Crush celebrated its tenth birthday by hosting a free party in lower Manhattan. The celebration culminated with a drone light display of 500 Unmanned Aerial Vehicles (UAVs) illustrating the whimsical characters of the popular mobile game over the Hudson. Rather than applauding the decision, New York lawmakers ostracized the avionic wonders to Jersey. In the words of Democratic State Senator Brad Hoylman, “Nobody owns New York City’s skyline – it is a public good and to allow a private company to reap profits off it is in itself offensive.” The complimentary event followed the model of Macy’s New York fireworks that have illuminated the Hudson skies since 1958. Unlike the department store’s pyrotechnics, which release dangerous greenhouse gases into the atmosphere, drones are a quiet, climate-friendly choice. Still, Luddite politicians plan to introduce legislation to ban the technology as a public nuisance, citing its impact on migratory birds, which are often more spooked by the skyscrapers in Hoylman’s district.

Beyond aerial tricks, drones are now being deployed in novel ways to fill the labor gap of menial jobs that have not returned since the pandemic. Founded in 2018, Andrew Ashur’s Lucid Drones has been power-washing buildings throughout the United States for close to five years. As the founder told me: “I saw window washers hanging off the side of the building on a swing stage and it was a mildly windy day. You saw this platform get caught in the wind and all of a sudden the platform starts slamming against the side of the building. The workers were up there, hanging on for dear life, and I remember having two profound thoughts in this moment. The first one, thank goodness that’s not me up there. And then the second one was how can we leverage technology to make this a safer, more efficient job?” At the time, Ashur was a junior at Davidson College playing baseball. The self-starter knew he was on to a big market opportunity.

Each year, more than 160,000 emergency room injuries and 300 deaths in the United States are caused by falls from ladders. Entrepreneurs like Ashur understood that drones were uniquely qualified to free humans from such dangerous work. This first required building a sturdy tethered quadcopter, capable of spraying at 300 psi, connected to a tank for power and cleaning fluid, for less than the cost of the annual salary of one window cleaner. After overcoming the technical hurdle, the even harder task was gaining sales traction. Unlike many hardware companies that set out to disrupt the market and sell directly to end customers, Lucid partnered with existing building maintenance operators. “Our primary focus is on existing cleaning companies. And the way to think about it is we’re now the shiniest tool in their toolkit that helps them do more jobs with less time and less liability to make more revenue,” explains Ashur. This relationship was further enhanced this past month with the announcement of a partnership with Sunbelt Rentals, servicing its 1,000 locations throughout California, Florida, and Texas. Lucid’s drones are now within driving distance of the majority of the 86,000 facade cleaning companies in America.

According to the Commercial Buildings Energy Consumption Survey, there are 5.9 million commercial office buildings in the United States, with an average height of 16 floors. This means there is room for many robot cleaning providers. Competing directly with Lucid are several other drone operators, including Apellix, Aquiline Drones, Alpha Drones, and a handful of local upstarts. In addition, there are several winch-powered companies, such as Skyline Robotics, HyCleaner, Serbot, Erlyon, Kite Robotics, and SkyPro. Facade cleaning is ripe for automation as it is a dangerous, costly, repetitive task that can be safely accomplished by an uncrewed system. As Ashur boasts, “You improve that overall profitability because it’s fewer labor hours. You’ve got lower insurance on a ground cleaner versus an above-ground cleaner as well as the other equipment.” His system, being tethered, ground-based, and free of ladders, is the safest way to power wash a multistory office building. He elaborated further on the cost savings, “It lowers insurance cost, especially when you look at how workers comp is calculated… we had a customer, one of their workers missed the bottom rung of the ladder, the bottom rung, he shattered his ankle. OSHA classifies it as a hazardous workplace injury. Their workers comp rates are projected to increase by an annual $25,000 over the next five years. So it’s a six-figure expense for just that one business from missing one single bottom rung of the ladder. And unfortunately, you hear stories of people falling off a roof or other terrible accidents that are life-changing or in some cases life lost. So that’s the number one thing you get to eliminate with the drone by having people on the ground.”

With older construction workers retiring in alarming numbers and a declining younger population of skilled laborers, I pressed Ashur on the future of Lucid in expanding to other areas. He retorted, “Cleaning drones, that’s just chapter one of our story here at Lucid. We look all around us at service industries that are being crippled by labor shortages.” He continued to suggest that robots could inspire a younger, more creative workforce: “When it comes to the future of work, we really believe that robotics is the answer because what makes us distinctly human isn’t our ability to do a physical task in a repetitive fashion. It’s our ability to be creative and problem solve… And that’s the direction that the younger populations are showing they’re gravitating towards.” He hinted further at some immediate areas of revenue growth, “Since we launched a website many years ago, about 50% of our requests come from international opportunities. So it is very much so a global problem.” In New York, buildings taller than six stories are required to have their facades inspected and repaired every five years (Local Law 11). Rather than shunning drones, State Senator Hoylman should be contacting companies like Lucid for ideas to automate facade work and create a new Manhattan-launched industry.
Robot stomachs: powering machines with garbage and pee
https://robohub.org/robot-stomachs-powering-machines-with-garbage-and-pee/ | Fri, 14 May 2021

The Seinfeld idiom, “worlds are colliding,” is probably the best description of work in the age of Corona. Pre-pandemic, it was easy to compartmentalize one’s professional life from one’s home existence. Clearly, my dishpan hands have hindered my writing schedule. Thank goodness for the robots in my life, scrubbing and vacuuming my floors; if only they could power themselves with the crumbs they suck up.

The World Bank estimates that 3.5 million tons of solid waste are produced by humans every day, with America accounting for more than 250 million tons a year, or over 4 pounds of trash per citizen per day. This figure does not include the 34 billion gallons of human organic materials that are processed in water treatment centers across the country each year. To the fictional Dr. Emmett Brown, this garbage is akin to “black gold” – ecologically powering cities, cars, and machines. In reality, the movie “Back to the Future Part II” was inspired by the biomass gasification movement of the 20th century, which powered cars with wood during World War II when petroleum was scarce. The technology has advanced so much that a few years ago the GENeco water treatment plant in the United Kingdom built a biomethane gas bus that ran solely on sewage. In reflecting on the importance of the technology, Collin Field of Bath Bus Company declared, “We will never, ever, ever, while we are on this planet, run out of human waste.”

Less than twenty miles away from the GENeco plant, Professor Ioannis Ieropoulos of the Bristol Robotics Laboratory is working on the next generation of bio-engineered fuel cells. Last week, Dr. Ieropoulos demonstrated his revolutionary Microbial Fuel Cells (MFCs) for me. As witnessed, he is not just inspired by nature, but harnessing its beauty to power the next generation of robots. The MFCs mimic an animal’s stomach, with microbes breaking down food to create adenosine triphosphate (ATP). The Bristol lab began building MFCs to power its suite of EcoBots. “I started this journey about twenty years ago with the main purpose of building sustainable autonomous robots for remote area access,” reflects Ieropoulos. He was inspired by Dr. Stuart Wilkinson’s Gastrobot of the early 2000s, which first promoted the idea of “an intelligent machine that digests real food for energy.” At the time, the media hyped Wilkinson’s invention as a flesh-eating robot, when in reality it digested sugar cubes, turning the carbohydrates into electrical energy. Unfortunately, the Gastrobot’s clumsy, oversized form factor suffered from long charging times, with an 18-hour “carbo-loading” process for every 15 minutes of power.

Springboarding off Wilkinson’s concept, Ieropoulos’ team started with the idea of using MFCs to power machines by putting the microbes directly inside the unit, more efficiently producing energy from any sugar-based substance, even waste (e.g., urine, feces, and trash). The professor described the elegance of his technology, which creates a “uniform colonization” of microbes multiplying every 8 minutes, with parent cells succeeded by daughter cells in a continuous pattern of ‘feed-growth-energy’. He compares it to the human microflora breaking down fresh food in the digestive system, resulting in healthy bathroom visits: “the same is with Microbial Fuel Cells; as long as we continue feeding them, the MFCs will continue to generate electricity.” The Bristol professor boasts that his batteries outperform anything on the market, since the biological lifeforms suffer no degradation: “the progeny keep refreshing the community on the electrodes so we have stable levels of power.” By contrast, lithium, the most popular non-fossil energy source available, degrades over time, and the explosion of mobile phones, portable devices, and electric vehicles has led to destructive mining practices that scar the Earth in search of declining ore resources.
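Ieropoulos’ point about stable power can be sketched numerically. The toy model below (all figures are my own illustrative assumptions, not measurements from the Bristol lab) grows a colony that doubles every 8 minutes until the electrode surface saturates, with power output assumed proportional to the live population:

```python
# Hedged sketch: steady-state power from a continuously fed microbial colony.
# Doubling time, carrying capacity, seed size, and per-cell output are all
# illustrative placeholders, not published MFC figures.

def colony_size(minutes, doubling_time=8.0, capacity=1e9, seed=1e3):
    """Exponential growth capped at the electrode's carrying capacity."""
    size = seed * 2 ** (minutes / doubling_time)
    return min(size, capacity)

def mfc_power_uw(minutes, per_cell_uw=1e-4):
    """Power output assumed proportional to the live population (microwatts)."""
    return colony_size(minutes) * per_cell_uw

# After enough feeding cycles the population saturates, so output plateaus.
early = mfc_power_uw(60)    # still growing
late1 = mfc_power_uw(300)   # saturated
late2 = mfc_power_uw(600)   # still saturated: stable power, no decay
```

Once the colony hits its carrying capacity, output flattens rather than decaying, which is the qualitative contrast the professor draws with degrading lithium cells.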

While MFCs are still in their infancy, Ieropoulos shared with me his plan for commercializing the invention. His lab recently announced the success of its MFC prototypes in powering mobile phones, smartwatches, and other devices (including the EcoBots). In addition, Ieropoulos has pushed his team to miniaturize the batteries from a 6-inch prototype to smaller than a AA cell, while rivaling the performance of alkaline. Backed by the Gates Foundation, he has also reduced production costs from $18 to $1 a unit, even before achieving economies of scale through mass production. Today, his business plan has expanded beyond autonomous robots to powering smart homes by connecting multiple MFCs to a house’s sanitation and waste systems. “Our research is all about optimizing miniaturization and stacking them with minimum losses so we can end up with a car battery-like shape and size that gives us the amount of power we require,” explains the professor. When I questioned Ieropoulos about using MFCs in future autonomous fleets, and even to offset the high energy demands of something like bitcoin mining, he remarked, “It would be naive of me to say a straight yes, but this is of course the work we are doing. I strongly believe in the development of new materials that will help with the energy density. We are at a stage where we have done some groundbreaking research using 3D-printed electric materials as a low-cost, scalable technology. There is a lot to say about the functionalization of the electrodes that enables colonization from the microbes in making sure all the progeny cells colonize, and the ceramic separators that allow for target ion transfer that makes the whole operation smarter and more efficient.” Thinking more about his work, he declared, “We have yet to see the full potential of Microbial Fuel Cells. I do think one day we will have a ‘Back to the Future’ scenario, feeding your food scraps to your car.”
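The “stacking with minimum losses” idea can be illustrated with simple series/parallel arithmetic. The per-cell figures below (0.5 V and 2 mW per MFC, a flat 5% stacking loss) are hypothetical placeholders of my own, not numbers from Ieropoulos’ lab:

```python
# Hedged sketch of stacking many small MFCs into a car-battery-shaped brick.
# Cell voltage, cell power, and loss fraction are illustrative assumptions.

def stack_output(cells_in_series, strings_in_parallel,
                 cell_voltage=0.5, cell_power_mw=2.0, loss_fraction=0.05):
    """Series cells set the stack voltage; parallel strings add current.
    A flat loss_fraction stands in for interconnect and stacking losses."""
    voltage = cells_in_series * cell_voltage
    raw_mw = cells_in_series * strings_in_parallel * cell_power_mw
    return voltage, raw_mw * (1 - loss_fraction)

# A 24-series x 50-parallel brick reaches car-battery voltage (12 V)
# but only a couple of watts of power under these assumptions.
volts, power_mw = stack_output(24, 50)
```

Under these toy numbers the stack reaches familiar 12 V territory at only ~2.3 W, which is why energy density, not voltage, is the bottleneck the professor’s materials research targets.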

When I pressed Ieropoulos on how he envisions robot-charging stations working in a factory or home in the near future, he illustrated it best: “with a Roomba example, it’s actually picking up food scraps in the kitchen that would be a very nutritious source of fuel for the Microbial Fuel Cell, but that’s a few steps down the line.” He continued, “a straightforward application for something like a Roomba is to leave the charging station where it is and connect it to the toilet or kitchen sink. The fuel cycle would be continuous as the robot would not be drawing energy from the house, but the wastewater.” Processing the impact of his vision took me back to the early days of the global pandemic shutdown, with animals returning to ancient grazing areas and pollution clouds clearing over heavily populated areas, letting many see distant mountains and blue skies for the first time. Innovations like MFCs are part of a new wave of environmentally focused mechatronic solutions. Before parting, I asked Ieropoulos how hopeful he is about the environment: “I do feel optimistic. I have more faith in the younger generation that they will do things better. The shock we sustained as a society [this past year] is a lesson in seeing the true color of our natural world; if we learn from this then I think the future is a good one.”

]]>
The Year Of The SPAC And What It Means For Hardware https://robohub.org/the-year-of-the-spac-and-what-it-means-for-hardware/ Sun, 03 Jan 2021 09:00:00 +0000 https://robohub.org/the-year-of-the-spac-and-what-it-means-for-hardware/ Read More]]> CBS MarketWatch declared 2020 The Year of the SPAC (Special Purpose Acquisition Corporation). A record 219 companies went public through this fundraising vehicle, which uses a reverse merger with an existing private business to create a publicly listed entity. This accounted for more than $73 billion of investment, providing privately held startups a new outlet to raise capital and provide shareholder liquidity. According to Goldman Sachs, the current trend represents a “year-over-year jump of 462% and outpacing traditional IPOs by $6 billion.” In response to the interest in SPACs, the Securities and Exchange Commission agreed last week to allow private companies to raise capital through direct listings, providing even more access to the public markets outside of Wall Street’s traditional institutional gatekeepers.

For the past few months the SPAC craze has spilled over to the robot and remote sensing industries. Just last week, SoftBank announced it is raising $525 million in a blind-pool SPAC for investments in artificial intelligence. In the filing with the SEC, SoftBank states, “For the past 40 years, SoftBank has invested ahead of major technology shifts. Now, we believe the AI revolution has arrived.” In 2017, SoftBank’s Chief Executive, Masayoshi Son (nicknamed Masa), predicted that by 2047 robots will outnumber humans on the planet, with 10 billion small humanoids (like its own Pepper robot) roaming the streets. An outspoken believer in the Singularity, Masa has not been shy about investing in the robotics sector, with ownership stakes in Whiz, Pepper, Bear, and Brain Corp. The company sold its interest in Boston Dynamics to Hyundai for a billion dollars earlier this month. When launching his own venture capital fund in 2018, Masa declared, “I am devoting 97% of my time and brain on AI.” This past month, Masa’s $100 billion Vision Fund had a huge portfolio win with the IPO of DoorDash, erasing earlier losses from failed investments in WeWork and OneWeb. In that spirit, it is not surprising that the SPAC filing exclaims: “COVID-19 has pulled this future forward by dramatically accelerating the adoption of digital services. During this time, we intersected with many compelling companies that wanted our support at IPO and beyond, but we lacked the vehicle to partner with them. This trend has only increased over the past year as more companies have decided to list publicly.”

SoftBank’s optimism is further validated by the success of SPACs in acquiring hardware sensor companies. Earlier this month, Ouster became the fifth LiDAR startup to go public through a SPAC this year. Already trading on the markets are Velodyne, Luminar, Innoviz, and Aeva. Each of these companies raised hundreds of millions of dollars at valuations exceeding a billion dollars. Some have fared well in the public markets, such as Luminar, which doubled its valuation in a few weeks. Others, like Velodyne, have had more difficulty: Velodyne’s shares fell by half after its September listing (though the stock currently trades modestly above its initial price). Hardware is tough, and staying private comes at the cost of founder dilution and overvaluation. SPACs offer startups and their investors quicker access to capital and greater liquidity, enabling investors to reinvest their returns in the autonomous sector and ultimately driving innovation in advance of greater adoption.

Recently, I caught up with Andrew Flett, General Partner of Mobility Impact Partners, which raised $115 million for a new SPAC – Motion Acquisition Corp. (ticker symbol MOTNU). Flett’s investment vehicle is still on the hunt for an acquisition of “target businesses in connected vehicle industries globally, which include companies providing transportation software and cloud solutions for fleet management, freight and logistics, and mobile asset management applications.” When we spoke, Flett described his inaugural experience in the space as follows: “This is the first SPAC I have been directly involved with but the mechanism has evolved and matured over the last couple of decades. They are popular now as a function of the same yield scarcity and immense liquidity that has been driving public equity speculation. There will be both highly speculative companies and companies with solid fundamentals in any wave of interest. This wave is no different.” He astutely points to previous SPAC upticks (since the 1980s) led by dubious underwriters that used the mechanism to make a quick buck through “pump-and-dump” schemes. These market manipulators, many still serving jail time, promoted stocks on the exchanges only to rapidly sell their own interests in the companies before other investors were legally able to trade the shares, ultimately devastating the value of the startups and their shareholders. This is compounded by the increased expenses and transparency requirements of a public listing, leaving startup founders ill-prepared for their new role on the NASDAQ or NYSE.

Unlike in the past, many of the newly formed SPACs have been managed by brand-name investors such as Richard Branson (Virgin Galactic), Bill Ackman (Pershing Square), and Peter Thiel (Bridgetown). The performance of the newly listed 2020 SPAC crop has been very impressive, outpacing the S&P, with DraftKings and Nikola leading the charge with triple-digit returns. When I nudged Flett for his opinion of these managers, he cautioned, “Smart guys. Is it just a branding exercise or will they be involved in the asset evaluation and ultimate de-SPACed company? In the end, the asset needs to stand on its own and regardless of how it gets there (IPO, Direct Listing, SPAC), once public it is a pure apples to apples performance comparison dependent on strategy, management, and execution. If the public company does not benefit from their wisdom, it does not matter what brand is attached at the front end.”

Flett advises founders not to be too easily seduced by public capital, but rather to “focus on your company. If your company cannot absorb the responsibilities and overhead of being a public company, it is not the right option for you.” When I gauged his view of SoftBank’s latest announcement, Flett optimistically opined, “Like most Private Equity or institutional investors, it is simply a cash grab and an alternative vehicle to demonstrate their investing acumen. I prefer seeing SoftBank doing reasonably sized SPACs than raising another misguided Vision Fund.” However, at the end of the day, the SPAC investor reminds us that the market is cyclical and the window of opportunity will eventually close: “As some of the speculative bets burn investors and yield alternatives appear, the SPAC market will slow.”

]]>
CES 2020: A smart city oasis https://robohub.org/ces-2020-a-smart-city-oasis/ Sun, 26 Jan 2020 22:46:49 +0000 https://robohub.org/ces-2020-a-smart-city-oasis/ Read More ›]]> Like the city that hosts it, the Consumer Electronics Show (CES) is full of noise on the show floor. Sifting through the lights, sounds, and people can be an arduous task even for the most experienced CES attendees. Hidden past the North Hall of the Las Vegas Convention Center (LVCC) is a walkway to a tech oasis housed in the Westgate Hotel. This new area hosting SmartCity/IoT innovations is reminiscent of the old Eureka Park, complete with folding tables and ballroom carpeting. The fact that such enterprises require their own area, separate from the main halls of the LVCC and the startup pavilions of the Sands Hotel, is an indication of how urbanization is being redefined by artificial intelligence.

Many executives naively group AI into its own category, with SmartCity inventions as a niche use case. However, as Akio Toyoda, Chief Executive of Toyota, presented at CES, it is the reverse. The car manufacturer’s “Woven City” initiative illustrates that autonomous cars, IoT devices, and intelligent robots are subservient to society and, hence, require their own “living laboratory”. Toyoda boldly described a novel construction project for a city of the future 60 miles from Tokyo: “With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test AI technology, in both the virtual and the physical world, maximizing its potential. We want to turn artificial intelligence into intelligence amplified.” Woven City will house 2,000 residents (mostly existing and former employees) on a 175-acre site (formerly a Toyota factory) at the foothills of Mount Fuji, providing academics, scientists, and inventors a real-life test environment.

Toyota has hired the Dutch architectural firm Bjarke Ingels Group (BIG) to design its urban biosphere. According to Bjarke Ingels, “Homes in the Woven City will serve as test sites for new technology, such as in-home robotics to assist with daily life. These smart homes will take advantage of full connectivity using sensor-based AI to do things automatically, like restocking your fridge, or taking out your trash — or even taking care of how healthy you are.” While construction is set to begin in 2021, the architect is already boasting: “In an age when technology, social media and online retail is replacing and eliminating our natural meeting places, the Woven City will explore ways to stimulate human interaction in the urban space. After all, human connectivity is the kind of connectivity that triggers wellbeing and happiness, productivity and innovation.”

Walking back into the LVCC from the Westgate, I heard Toyoda’s keynote in my head – “mobility for all” – forming a prism through which to view the rest of the show. Looking past Hyundai/Uber’s massive air taxi and Omron’s ping-pong-playing robot, thousands of suited executives led me under LG’s television waterfall to the Central Hall. Hidden behind an out-of-place Delta Airlines lounge, I discovered a robotics startup already fulfilling aspects of the Woven City. Vancouver-based A&K Robotics displayed a proprietary autonomous mobility solution serving the ballooning geriatric population. The U.S. Census Bureau projects that the number of citizens over the age of 65 will nearly double from “52 million in 2018 to 95 million by 2060” (close to a quarter of the country’s population). This statistic parallels global demographic trends for most first-world countries. In Japan, the elderly already exceed 28% of the citizenry, with more than 70,000 people over the age of 100. When A&K first launched, it marketed conversion kits for turning manual industrial machines into autonomous vehicles. Today, the Canadian team is applying its passion for unmanned systems to improving the lives of the most vulnerable: people with disabilities. As Jessica Yip, A&K’s COO, explains, “When we founded the company we set out to develop and prove our technology first in industrial environments moving large cleaning machines that have to be accurate because of their sheer size. Now we’re applying this proven system to working with people who face mobility challenges.” The company plans to initially sell its elegant self-driving wheelchair (shown below) to airports, a $2 billion opportunity serving 63 million passengers worldwide.

[Image: A&K Robotics’ self-driving wheelchair]

In the United States, the federal government mandates that airlines provide ‘free, prompt wheelchair assistance between curbside and cabin seat’ as part of the 1986 Air Carrier Access Act. Since the bill’s passage, airport wheelchair assistance has mushroomed to an almost unserviceable rate as carriers struggle to fulfill the mandated free service. In reviewing the airlines’ performance, Eric Lipp of the disability advocacy group Open Doors complains, “Ninety percent of the wheelchair problems exist because there’s no money in it. I’m not 100% convinced that airline executives are really willing to pay for this service.” In balancing profits with accessibility, airlines have employed unskilled, underpaid workers to push disabled fliers to their seats. A&K’s solution has the potential both to liberate passengers and to improve the airlines’ bottom-line performance. Yip contends, “We’re embarking on moving people, starting in airports to help make traveling long distances more enjoyable and empowering.”

A&K joins a growing fleet of technology companies tackling the airport mobility issue. Starting in 2017, Panasonic partnered with All Nippon Airways (ANA) to pilot self-driving wheelchairs in Tokyo’s Narita International Airport. As Juichi Hirasawa, Senior Vice President of ANA, states: “ANA’s partnership with Panasonic will make Narita Airport more welcoming and accessible, both of which are crucial to maintaining the airport’s status as a hub for international travel in the years to come. The robotic wheelchairs are just the latest element in ANA’s multi-faceted approach to improving hospitality in the air and on the ground.” Last December, Abu Dhabi International Airport spent a week publicly demonstrating autonomous wheelchairs manufactured by US-based WHILL. Ahmed Al Shamisi, Acting Chief Operations Officer of Abu Dhabi Airports, asserted: “Convenience is one of the most important factors in the traveller experience today. We want to make it even easier for passengers to enjoy our airports with ease. Through these trials, we have shown that restricted mobility passengers and their families can enjoy greater freedom of movement while still ensuring that the technology can be used safely and securely in our facilities.” Takeshi Ueda of WHILL enthusiastically added, “Seeing individuals experience the benefits of the seamless travel experience from security to boarding is so rewarding, and we are eager to translate this experience to airports across the globe.”

At the end of Toyoda’s remarks, he joked, “So by now, you may be thinking has this guy lost his mind? Is he a Japanese version of Willie Wonka?” As laughter permeated the theater, he excitedly confessed, “Perhaps, but I truly believe that THIS is the project that can benefit everyone, not just Toyota.” As I flew home, I left Vegas more encouraged about the future: entrepreneurs today are focused on something bigger than robots. In the words of Yip, “As a company we’re looking to serve all people, and are strategically focused on airports as a step towards smart cities where everyone has the opportunity to participate fully in society in whatever way they are interested. Regardless of age, physical challenges, or other, we want people to be able to get out of their homes and into their communities. To be able to see each other, interact, go to work or travel whenever they want to.”

Sign up today for the next RobotLab event forum on Automating Farming: From The Holy Land To The Golden State, February 6th in New York City.

 

]]>
The 5G report card: Building today’s smart IoT ecosystem https://robohub.org/the-5g-report-card-building-todays-smart-iot-ecosystem/ Sat, 07 Dec 2019 16:30:22 +0000 https://robohub.org/the-5g-report-card-building-todays-smart-iot-ecosystem/ Read More ›]]>

The elephant in the room loomed large two weeks ago at the inaugural Internet of Things Consortium (IoTC) Summit in New York City. Almost every presentation began apologetically with the refrain, “In a 5G world…”, practically challenging the industry’s rollout goals. At one point, Brigitte Daniel-Corbin, IoT Strategist with Wilco Electronic Systems, sensed the need to reassure the audience by exclaiming, ‘it’s not a matter of if, but when, 5G will happen!’ Frontier-tech pundits too often prematurely predict hyperbolic adoption cycles, falling into the trap of most soothsaying visions. The IoTC Summit’s ability to pull back the curtain left its audience empowered with a sober roadmap forward that will ultimately drive greater innovation and profit.


The industry frustration is understandable, as China announced earlier this month that 5G is now commercially available in 50 cities, including Beijing, Shanghai, and Shenzhen. In fact, the communist state beat its own 2020 objectives by rolling out the technology months ahead of plan. Already, more than 10 million cellular customers have signed up for the service. China has made upgrading its cellular communications a national priority, with more than 86,000 5G base stations installed to date and another 130,000 5G base stations to go live by the end of the year. In the words of Wang Xiaochu, president of China Unicom, “The commercialization of 5G technology is a great measure of [President] Xi Jinping’s strategic aim of turning China into a cyber power, as well as an important milestone in China’s information communication industry development.” By contrast, the United States is still testing the technology in a number of urban zones. If a recent PC Magazine review of Verizon’s Chicago pilot is any indication of the state of the technology, the United States is very far from catching up. As one reporter complains, “I walked around for three hours and found that coverage is very spotty.”

Last year, President Trump, donning a hardhat, declared, “My administration is focused on freeing up as much wireless spectrum as needed [to make 5G possible].” The importance of Trump’s April promotional event could not be overstated, as so much of the future of autonomous driving, additive manufacturing, collaborative robotics, shipping & logistics, smart-city infrastructure, the Internet of Things (IoT), and virtual & augmented reality relies on greater bandwidth. Most experts predict that 5G will offer a 10- to 100-fold improvement over fourth-generation wireless. Els Baert of NetComm explains, “The main advantage that 5G offers over 4G LTE is faster speeds — primarily because there will be more spectrum available for 5G, and it uses more advanced radio technology. It will also deliver much lower latency than 4G, which will enable new applications in the [Internet of Things] space.” Unfortunately, since Trump’s photo op, the relationship with China has worsened so much that US carriers are now blocked from doing business with the largest supplier of 5G equipment, Huawei. This leaves the United States with only a handful of suppliers, including market leaders Nokia and Ericsson. The limited supply chain is exacerbated by how little America is spending on upgrading its telecommunications; according to Deloitte, “we conclude that the United States underspent China in wireless infrastructure by $8 billion to $10 billion per year since 2015.”


The current state of the technology (roadblocks and all) demands fostering an innovation ecosystem today that parallels the coming explosion of new services in the 5G economy. As McKinsey reports, there are more than 25 billion connected IoT devices currently, a number estimated to grow past 75 billion by 2025 with the advent of fifth-generation wireless. The study further cites, “General Electric projects that IoT will add $10 to $15 trillion to worldwide Gross Domestic Product (GDP) growth by 2030. To put that into perspective, that number is equivalent to China’s entire current economy.” Regrettably, most of the available 5G accelerators in the USA are built to showcase virtual and augmented reality instead of fostering applications for the larger opportunity of business-to-business services. According to Business Insider, “IoT solutions will reach $6 trillion by 2021,” across a wide spectrum of industries, including healthcare, manufacturing, logistics, energy, smart homes, transportation, and urban development. In fact, hardware will only account for about one-third of the new revenues (and VR/AR headsets comprise considerably less).


It is a challenging moment for publicly traded carriers (like T-Mobile, Verizon & AT&T), whose stock performance is so closely linked to the future of next-generation wireless. Clearly, market makers are more excited by unicorns like Oculus (acquired by Facebook for $2 billion in 2014) and Magic Leap (valued at $4.5 billion in 2016) than by IoT sensors for robotic recycling, agricultural drones, and fuel-efficient reactors. However, based upon the available data, the killer app for 5G will be found in industry, not digital theatrics. This focus on theatrics is illustrated in one of the few statements online by Verizon’s Christian Guirnalda, Director of its 5G Labs, boasting, “We’re literally making holograms here using a dozen different cameras in a volumetric capture studio to create near real-time images of what people and products look like in 3D.” A few miles north of Verizon’s 5G Labs, New York City’s hospitals are overcrowded with patients and data, leading to physical and virtual latency issues. Verizon could enable New York’s hospitals with faster network speeds to treat more patients in economically challenged neighborhoods remotely. Already, 5G threatens to exacerbate the digital divide in the United States by targeting affluent communities for its initial rollout. By investing in more high-speed telemedicine applications, the telecommunications giant could give less privileged patients access to better care, which validates the need for increased government spending. Guirnalda’s lab would be better served by applying the promise of 5G to real-life urban challenges, from mass transit to food scarcity to access to healthcare.


The drawback with most corporate 5G incubators is that their windows are opaque – forcing inventors to experiment inside, while the real laboratory is bustling outside. The United Nations estimates that by 2050, seventy percent of the world’s population will be urban. While most of this growth will take place in developing countries (i.e., in Africa and Asia), already 80% of global GDP is generated in cities. The greatest challenge of the 21st century will be managing the sustainable development of these populations. At last month’s UN “World Cities Day,” the diplomatic body stated that 5G, “big data technologies and cloud-computing offer the potential to enhance urban operations, functions, services, designs, strategies and policies.” The UN’s statement did not fall on deaf ears; even President Trump strained to comfort his constituents last month with the confession, “I asked Tim Cook to see if he could get Apple involved in building 5G in the U.S. They have it all – Money, Technology, Vision & Cook!”

Going to CES? Join me for my panel on Retail Robotics January 8th at 10am, Las Vegas Convention Center. 

]]>
The DARPA SubT Challenge: A robot triathlon https://robohub.org/the-darpa-subt-challenge-a-robot-triathlon/ Thu, 03 Oct 2019 12:00:25 +0000 https://robohub.org/the-darpa-subt-challenge-a-robot-triathlon/ Read More ›]]> One of the biggest urban legends of my New York City childhood was the rumor of alligators living in the sewers. The myth even inspired a popular children’s book, “The Great Escape: Or, The Sewer Story,” with illustrations of reptiles crawling out of apartment toilets. To this day, city dwellers anxiously eye manholes, wondering what lurks below. That curiosity was shared last month by the US Defense Department with its appeal for access to commercial underground complexes.


The US military’s research arm, DARPA, launched the Subterranean (or SubT) Challenge in 2017 with the express goal of developing systems that enhance “situational awareness capabilities” for underground missions. While the prospect of armies utilizing machines to patrol sunken complexes conjures up images of The Matrix, in reality one of the last frontiers to be explored on Earth is below the surface. As SubT moves closer to its culminating event planned for 2021, the agency is beginning the first of three planned real-world tests. According to the contest description, the initial focus area will be “human-made tunnel systems,” followed by “underground urban environments such as mass transit and municipal infrastructure,” and concluding with “naturally occurring cave networks.” This summer, DARPA issued a Request For Information on subsurface infrastructure in the interest of “global security and disaster-related search and rescue missions.”

Competing technologists will have the chance to win $2 million for hardware inventions and $750,000 for software innovations that “disruptively and positively impact how the underground domain is leveraged.” The types of solutions being considered include platforms “to rapidly map, navigate, and search unknown complex subterranean environments to locate objects of interest.” In further explaining the objectives, Timothy Chung, DARPA program manager, said: “One of the main limitations facing warfighters and emergency responders in subterranean environments is a lack of situational awareness; we often don’t know what lies beneath.” Chung’s boss, Fred Kennedy, Director of the Tactical Technology Office, confirmed, “We’ve reached a crucial point where advances in robotics, autonomy, and even biological systems could permit us to explore and exploit underground environments that are too dangerous for humans. Instead of avoiding caves and tunnels, we can use surrogates to map and assess their suitability for use.” Kennedy even coined a catchphrase for the challenge: “making the inaccessible accessible.”


In an abandoned Pennsylvania coal mine, on a sweltering August afternoon, eleven teams from across the globe came with 64 terrestrial robots, 20 unmanned aerial vehicles, and one autonomous blimp to compete in the first wave of the SubT Challenge. The course included four events, each lasting an hour, deep inside the mine, which was originally built by the Pittsburgh Coal Company in 1910. Each team’s fleet of machines had to autonomously locate, identify, and record 20 items or artifacts. The only team to score in double digits in all four independent runs was Explorer of Carnegie Mellon University. CMU is a DARPA Challenge favorite, with a winning record that includes the 2007 Urban Challenge and the 2015 Robotics Challenge. This year it had the distinct advantage of being local, having scouted the location beforehand to better plan its tactics for live competition. As Courtney Linder of Popular Mechanics writes, “Explorer regularly practiced at the Tour-Ed Mine in Tarentum, which is normally only frequented by tourists who want to check out a coal mine formerly owned by Allegheny Steel. They periodically flew drones and watched their ground robots exploring the cavernous, maze-like depths.”


The biggest hurdles for teams competing below ground are the lack of Global Positioning System (GPS) signals and Wi-Fi communications. To safely navigate the cavernous course of these GPS-denied environments, SubT machines had to rely solely on a fusion of on-board sensors, including LIDAR, cameras, and radar. In explaining to the Pittsburgh Post-Gazette how his team won, CMU lead Sebastian Scherer said they employed up to eight robots that created their own Wi-Fi network to “talk” to each other while simultaneously mapping the environment with their sensors. Deploying a swarm approach, the robots acted as a collective unit, working together to fill in data gaps; even if one went offline, it could still pilot itself using its onboard systems and previously downloaded maps. Leading up to the competition, the CMU team utilized computer simulations to strategize its approach, but understood the limitations of exclusively planning in the virtual world. As Scherer’s collaborator, Matt Travers, explains, “Our system may work perfectly in simulation and the first time we deploy, we may take the exact same software from the simulation and put it in the robot and it drives right into a wall and you go figure out why.” CMU’s geographic proximity to the test site seemingly played a critical role in the team achieving high scores.
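The map-sharing behavior Scherer describes can be sketched in a few lines. This is a conceptual illustration of merging occupancy grids over an ad-hoc network, not CMU Explorer’s actual software; the cell values and the simple merge rule are my own assumptions:

```python
# Hedged sketch: robots swap occupancy grids so each fills in cells it has
# not yet observed. A robot that drops offline keeps its last merged map
# and can continue navigating on it -- the resilience described above.

UNKNOWN, FREE, WALL = -1, 0, 1

def merge_grids(mine, peers):
    """Fill in my unknown cells with any peer's observation (first wins)."""
    merged = [row[:] for row in mine]  # copy so my original grid is untouched
    for grid in peers:
        for r, row in enumerate(grid):
            for c, val in enumerate(row):
                if merged[r][c] == UNKNOWN and val != UNKNOWN:
                    merged[r][c] = val
    return merged

# Two robots, each having seen a different part of a 2x2 tunnel junction:
robot_a = [[FREE, UNKNOWN], [UNKNOWN, UNKNOWN]]
robot_b = [[UNKNOWN, WALL], [FREE, UNKNOWN]]
shared = merge_grids(robot_a, [robot_b])
# shared now covers three of the four cells, though neither robot saw them all
```

In a real system each robot would run the merge continuously whenever the mesh network is reachable, so the shared map degrades gracefully rather than catastrophically when links drop.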

While Explorer walked off with some nominal prize money, all eleven teams remain committed to the same goal: full autonomy, regardless of environment and without manual input. As Travers exclaims, “We’d like to build a system that’s going to be agnostic to the types of mobility challenges that we’ll face. And this is certainly a difficult thing to do.” Reflecting on the August gathering, the creativity of invention unified a global community around the single purpose of saving lives. In the words of the program’s organizer, DARPA’s Timothy Chung, “We are inspired by the need to conduct search and rescue missions in a variety of underground environments, whether in response to an incident in a highly populated area, a natural disaster, or for mine rescue.” The next round will take place in February, quite possibly in the sewers of New York City (alligators and all). As Chung cautions the contestants, “Prepare for a few new surprises. The SubT Challenge could be compared to a triathlon. DARPA is not looking just for the strongest swimmer, runner, or cyclist, but rather integrated solutions that can do all three.”

]]>
Summer travel diary: Reopening cold cases with robotic data discoveries https://robohub.org/summer-travel-diary-reopening-cold-cases-with-robotic-data-discoveries/ Wed, 14 Aug 2019 21:44:39 +0000 https://robohub.org/summer-travel-diary-reopening-cold-cases-with-robotic-data-discoveries/ Read More ›]]> Traveling to six countries in eighteen days, I journeyed with the goal of delving deeper into my family’s roots before World War II. As a child of refugees, I grew up with huge gaps in my parents’ narrative. More than seventy-eight years after the disappearance of my grandmother and uncles, we can only presume, with some degree of certainty, that they perished in the mass graves of the forest outside Riga, Latvia. In our data-rich world, archivists are finally piecing together new clues to history, using unmanned systems to reopen cold cases.

The Nazis were masters at using technology to mechanize killing and erase all evidence of their crimes. Nowhere is this more apparent than in Treblinka, Poland. The death camp exterminated close to 900,000 Jews over a 15-month period before a revolt led to its dismantlement in 1943. Only a Holocaust memorial stands today on the site of the former gas chamber, a testimony to the memory of the victims. Recently, scientists have begun to unearth new forensic evidence of the Third Reich’s war crimes, using LIDAR to expose the full extent of the death factory.

In her work, “Holocaust Archaeologies: Approaches and Future Directions,” Dr. Caroline Sturdy Colls undertook an eight-year project to piece together archaeological facts from survivor accounts using remote sensors that are more commonly associated with autonomous vehicles and robots than Holocaust studies. As she explains, “I saw working at Treblinka as a cold case where excavation is not permitted, desirable or wanted, [non-invasive] tools offer the possibility to record and examine topographies of atrocity in such a way that the disturbance of the ground is avoided.” Stitching together point-cloud outputs from aerial LIDAR sensors, Professor Sturdy Colls stripped away the post-Holocaust vegetation to expose the camp’s original foundations, “revealing the bare earth of the former camp area.” As she writes, “One of the key advantages that LIDAR offers over other remote sensing technologies is its ability to propagate the signal emitted through vegetation such as trees. This means that it is possible to record features that are otherwise invisible or inaccessible using ground-based survey methods.”
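The vegetation-stripping step Sturdy Colls describes amounts to classifying ground returns in the LIDAR point cloud. A minimal sketch of the idea keeps only the lowest return in each grid cell to approximate the bare earth; this is a crude stand-in for the classification algorithms real surveys use, with all names and numbers invented for illustration:

```python
def bare_earth(points, cell=1.0):
    """Crude digital-terrain-model filter: for each (x, y) grid cell,
    keep only the lowest LIDAR return, discarding canopy hits above it."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        # A lower z in the same cell replaces the canopy return
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return sorted(lowest.values())

# Two returns over the same spot: a treetop at 12 m and the ground at 0.4 m,
# plus one clear ground hit elsewhere
cloud = [(3.2, 7.1, 12.0), (3.4, 7.3, 0.4), (9.0, 1.0, 0.2)]
print(bare_earth(cloud))   # only the two low ground returns survive
```

Because the laser partially penetrates gaps in the canopy, some pulses reach the ground even under trees, which is why keeping the lowest return per cell recovers the buried topography.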

Through her research, Sturdy Colls was able to locate several previously unmarked mass graves, transport infrastructure, and camp-era buildings, including structures associated with the 1943 prisoner revolt. She credits the technology for her findings: “This is mainly due to developments in remote sensing technologies, geophysics, geographical information systems (GIS) and digital archeology, alongside a greater appreciation of systematic search strategies and landscape profiling.” The researcher also stressed the importance of finding closure after seventy-five years: “I work with families in forensics work, and I can’t imagine what it’s like not to know what happened to your family members.” Sturdy Colls’ techniques are now being deployed across Europe at other concentration camp sites and places of mass murder.

Flying north from Poland, I landed in Amsterdam to take part in the Netherlands’ year-long celebration of Rembrandt (350 years since his passing). In the Rijksmuseum’s Hall of Honors, a robot is stationed in front of the old master’s monumental work, “Night Watch.” The autonomous macro X-ray fluorescence scanner (Macro-XRF scanner) is busy analyzing the chemical makeup of the paint layers to map and catalog the age of the pigments. The project, aptly named “Operation Night Watch,” can be experienced live or online, and showcases a suite of technologies for determining the best methodologies to restore the 1642 painting to its original glory. Night Watch has a long history of abuse, including two world wars, multiple knifings, one acid attack, botched conservation attempts, and even the trimming of the canvas in 1715 to fit a smaller space. In fact, its modern name is really a moniker born of the dirt that built up over the years, not of the Master’s composition, initially entitled “Militia Company of District II under the Command of Captain Frans Banninck Cocq.”

In explaining the multi-million-dollar undertaking, the museum’s director, Taco Dibbits, boasted in a recent interview that Operation Night Watch will be the Rijksmuseum’s “biggest conservation and research project ever.” Currently, the Macro-XRF robot takes 24 hours to perform one scan of the entire picture, with a demanding schedule of 56 more scans and 12,500 high-resolution images ahead. The entire project is slated to be completed within a couple of years. Dibbits explains that the restoration will provide insights previously unknown about the painter and his magnum opus: “You will be able to see much more detail, and there will be areas of the painting that will be much easier to read. There are many mysteries of the painting that we might solve. We actually don’t know much about how Rembrandt painted it. With the last conservation, the techniques were limited to basically X-ray photos and now we have so many more tools. We will be able to look into the creative mind of one of the most brilliant artists in the world.”
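Taking the museum’s figures at face value, a quick back-of-envelope tally shows why the schedule stretches over years even before the 12,500 images and the analysis work:

```python
# Rough time budget for the remaining Macro-XRF scans (museum's figures)
hours_per_scan = 24
remaining_scans = 56
total_hours = hours_per_scan * remaining_scans
print(total_hours, "hours, or about", total_hours // 24, "days of continuous scanning")
# prints: 1344 hours, or about 56 days of continuous scanning
```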

Whether it is celebrating the narrative of great works of art or preserving the memory of the Holocaust, modern conservation relies heavily on the accessibility of affordable mechatronic devices. Anna Lopuska, a conservator at the Auschwitz-Birkenau Museum in Poland, describes the Museum’s herculean task: “We are doing something against the initial idea of the Nazis who built this camp. They didn’t want it to last. We’re making it last.” New advances in optics and hardware enable Lopuska’s team to catalog and maintain the massive camp site with “minimum intervention.” The magnitude of the preservation effort is listed on its website, which includes: “155 buildings (including original camp blocks, barracks, and outbuildings), some 300 ruins and other vestiges of the camp—including the ruins of the four gas chambers and crematoria at the Auschwitz II-Birkenau site that are of particular historical significance—as well as more than 13 km of fencing, 3,600 concrete fence posts, and many other installations.” This is on top of a collection of artifacts of human tragedy, each item representing a person, such as “110 thousand shoes, about 3,800 suitcases, 12 thousand pots and pans, 40 kg of eyeglasses, 470 prostheses, 570 items of camp clothing, as well as 4,500 works of art.” Every year more survivors pass away, making Lopuska’s task, and the unmanned systems she employs, ever more critical. As the conservator reminds us, “Within 20 years, there will be only these objects speaking for this place.”

Editor’s Announcements: 1) Vote for our panel, “Love In The Robotic Age,” at SXSW; 2) Sign up to attend RobotLab’s next event, “Is Today’s Industry 4.0 A Hackers Paradise?” with Chuck Brooks of General Dynamics on September 25th at 6pm. RSVP Today

]]>
Robots can play key roles in repairing our infrastructure https://robohub.org/robots-can-play-key-roles-in-repairing-our-infrastructure/ Sun, 30 Jun 2019 22:53:34 +0000 https://robohub.org/robots-can-play-key-roles-in-repairing-our-infrastructure/

Pipeline inspection robot

I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was further supported by a Huffington Post article reporting that, over a twenty-year period, such tragedies led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours somewhere across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various other types of damages…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and waste management pump facilities. Its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. Combining Pure’s digital inspection expertise with mechatronics, the merged companies are able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi (a leading provider of testing equipment) acquired Inuktun, a robot manufacturer of semi-autonomous crawling systems. This was the sixth acquisition by fast growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborates: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [Non Destructive Testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants into this space include CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering, led by Professor Kirill Horoshenkov. Dr. Horoshenkov stresses that his mission is more than commercial: “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expands further on the research objectives of his team, “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer Rolls-Royce shared with reporters its efforts to develop mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood and his ambulatory microrobot for search-and-rescue missions. James Kell of Rolls-Royce proclaims this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Harvard scientists: it has established with the University of Nottingham a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” The project lead, Dragos Axinte, is optimistic about the spillover of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

]]>
Tackling sustainability and urbanization with AI-enabled furniture https://robohub.org/tackling-sustainability-and-urbanization-with-ai-enabled-furniture/ Sat, 22 Jun 2019 08:43:20 +0000 https://robohub.org/tackling-sustainability-and-urbanization-with-ai-enabled-furniture/ Read More ›]]>

At the turn of the twentieth century, the swelling populations of newly arrived immigrants in New York City’s Lower East Side reached a boiling point, forcing the City to pass the 1901 Tenement House Act. Recalling this legislation, New York City’s Mayor’s Office recently responded to its own modern housing crisis by enabling developers for the first time to build affordable micro-studio apartments of 400 square feet. One of the primary drivers of allocating tens of thousands of new micro-units is the adoption of innovative design and construction technologies that enable modular and flexible housing options. As Mayor de Blasio affirmed, “Housing New York 2.0 commits us to creating 25,000 affordable homes a year and 300,000 homes by 2026. Making New York a fairer city for today and for future generations depends on it.”


Urban space density is not just a New York City problem, but a world health concern. According to the United Nations, more than half of the Earth’s population currently resides in cities and this is projected to climb to close to three-quarters by 2050. In response to this alarming trend the UN drafted the 2030 Agenda for Sustainable Development. Stressing the importance of such an effort, UN Deputy Secretary Amina J. Mohammed declared, “It is clear that it is in cities where the battle for sustainability will be won or lost. Cities are the organizing mechanisms of the twenty-first century. They are where young people in all parts of the world flock to develop new skills, attain new jobs, and find opportunities in which to innovate and create their futures…The 2030 Agenda for Sustainable Development is the most ambitious agenda ever set forth for humanity.”

Absent from the UN study is the use of mechatronics to address the challenges of urbanization. For example, robots have been deployed on construction sites in China to rapidly print building materials. There are also a handful of companies using machines to cost-effectively produce modular homes, with the goal of replacing mud huts and sheet-metal shanties. However, progress in automating low-to-middle-income housing had been slow going until this week. IKEA, the world’s largest furniture retailer, which specializes in low-cost decorating solutions, announced on Tuesday the launch of Rognan, a morphing robotic furniture system for the micro-home. Collaborating with the Swedish design powerhouse is hardware startup Ori Living. The MIT spin-out first introduced its chameleon-like furniture platform two years ago with an expandable wardrobe that quickly shifted from bookcase/home office to walk-in closet at the touch of a button. Today such systems can be bought through the company’s website for upwards of $5,000. The partnership with IKEA is expected to bring enormous economies of scale through mass production of its products.

The first markets IKEA will target next year with Rognan are the cramped neighborhoods of Hong Kong and Japan, where the average citizen lives in 160 square feet. Seana Strawn, IKEA’s product developer for new innovations, explains: “Instead of making the furniture smaller, we transform the furniture to the function that you need at that time. When you sleep, you do not need your sofa. When you use your wardrobe, you do not need your bed, etc.”

Ori founder Hasier Larrea elaborates on his use of machine learning to size the space to the occupant’s requirements. “Every floor is different, so you need a product that’s smart enough to know this, and make a map of the floor,” describes Larrea. By using sensors to create an image of the space, the robot seamlessly transforms from closet to bed to desk to media center. To better understand the marketability of such a system, I polled a close friend who analyzes such innovations for a large Wall Street bank. This potential customer remarked that he will wait to purchase his own Rognan until it can anticipate his living habits, automatically sensing when it is time for work, play, or bed.

Ori’s philosophy is enabling people to “live large in a small footprint.” Larrea professes that the only way to combat urbanization is to think differently. As the founder exclaims, “We cannot keep designing spaces the same way we’ve been designing spaces 20 years ago, or keep using all the same furniture we were using in homes that were twice the size, or three times the size. We need to start thinking about furniture that adapts to us, and not the other way around.” Larrea’s credo is echoed by former Tesla technologist Sankarshan Murthy, who aims to revolutionize interior design with robotic storage furniture that descends from the ceiling. Murthy’s startup, Bumblebee Spaces, made news last April with the announcement of a $4.3 million funding round led by Loup Ventures. Like a Broadway set change, Bumblebee lowers and hoists wooden cases on an as-needed basis, complete with an iPhone or iPad controller. “Instead of square feet you start looking at real estate in volume. You are already paying for all this air and ceiling space you are not using. We unlock that for you,” brags Murthy. Ori is also working on a modern Murphy bed that lowers from the ceiling; as the company’s press release stated last November, its newest product is “a bed that seamlessly lowers from the ceiling, or lifts to the ceiling to reveal a stylish sofa,” all at the beck and call of one’s Alexa device.

In 1912, William Murphy received his patent for the “Disappearing Bed.” Today’s robotic furniture, now validated by the likes of IKEA, could be the continuation of his vision. Several years ago, MIT student Daniel Leithinger first unveiled a shape-shifting table. As Leithinger reminisces, “We were inspired by those pinscreen toys where you press your hand on one end, and it shows on the other side.” While it was never intended to be commercialized, the inventor was blown away by the emails he received. “One person said we should apply it to musical interfaces and another person said it would be great to use to help blind children understand art and other things. These are things we didn’t even think about,” shares Leithinger. As Ori and Bumblebee work diligently to replace old couch springs with new gears and actuators, the benefits of such technology are sure to go beyond better storage as we enter the new age of the AI home.

 

]]>
Are ethics keeping pace with technology? https://robohub.org/are-ethics-keeping-pace-with-technology/ Wed, 15 May 2019 14:34:02 +0000 https://robohub.org/are-ethics-keeping-pace-with-technology/ Read More ›]]>

Drone delivery. Credit: Wing

Returning from vacation, my inbox overflowed with emails announcing robot “firsts.” At the same time, my relaxed post-vacation disposition was quickly rocked by the news of the day and recent discussions regarding the extent of AI bias within New York’s financial system. These unrelated incidents are very much connected in representing the paradox of the acceleration of today’s inventions.

Last Friday, The University of Maryland Medical Center (UMMC) became the first hospital system to safely transport, via drone, a live organ to a waiting transplant patient with kidney failure. The demonstration illustrates the huge opportunity for Unmanned Aerial Vehicles (UAVs) to significantly reduce the time and cost of organ transport, and to improve outcomes, by removing human-piloted helicopters from the equation. As Dr. Joseph Scalea, UMMC project lead, explains: “There remains a woeful disparity between the number of recipients on the organ transplant waiting list and the total number of transplantable organs. This new technology has the potential to help widen the donor organ pool and access to transplantation.” Last year, America’s managing body of the organ transplant system stated it had a waiting list of approximately 114,000 people, with 1.5% of deceased donor organs expiring before reaching their intended recipients. This is largely due to unanticipated transportation delays of up to two hours in close to 4% of recorded shipments. Based upon this data, unmanned systems could potentially save more than one thousand lives. In the words of Dr. Scalea, “Delivering an organ from a donor to a patient is a sacred duty with many moving parts. It is critical that we find ways of doing this better.” Unmentioned in the UMMC announcement are the ethical considerations required to support autonomous delivery, chiefly ensuring that the rush to extract organs in the field does not override the goal of first saving the donor’s life.

As May brings clear skies and the songs of birds, the prospect of non-lifesaving drones crowding the airspace above is a haunting image for many. Last month, last-mile delivery by UAV came one step closer when Google’s subsidiary Wing Aviation became the first drone operator approved by the U.S. Federal Aviation Administration and the Department of Transportation. According to the company, consumer deliveries will commence within the next couple of months in rural Virginia. “It’s an exciting moment for us to have earned the FAA’s approval to actually run a business with our technology,” declared James Ryan Burgess, Wing Chief Executive Officer. The regulations still ban drones in urban areas and limit Wing’s autonomous missions to farmland, but enable the company to start charging customers for UAV deliveries.

While the rural community administrators are excited “to be the birthplace of drone delivery in the United States,” it is unknown how local citizens will react to the technology, which is prone to menacing noise and privacy complaints. Mark Blanks, director of the Virginia Tech Mid-Atlantic Aviation Partnership, optimistically stated, “Across the board everybody we’ve spoken to has been pretty excited.” Cautiously, he admits, “We’ll be working with the community a lot more as we prepare to roll this out.” Google’s terrestrial autonomous driving tests have received less than stellar reviews from locals in Chandler, Arizona, where tensions reached a crescendo earlier this year with one resident pulling a gun on a car (one-third of all Virginians own firearms). Understanding the rights of citizens to police the skies above their properties is an important policy and ethical issue as unmanned operators move from testing systems to live deployments.

The rollout of advanced computing technologies is not limited to aviation; artificial intelligence (AI) is being rapidly deployed across every enterprise and organization in the United States. On Friday, McKinsey & Company released a report on the widening penetration of deep learning systems within corporate America. While it is still early in the development of such technologies, almost half of the respondents in the study stated that their departments have embedded such software within at least one business practice this past year. As stated: “Forty-seven percent of respondents say their companies have embedded at least one AI capability in their business processes—compared with 20 percent of respondents in a 2017 study.” This dramatic increase in adoption is driving tech spending, with 71% of respondents expecting large portions of digital budgets to go toward the implementation of AI. The study also tracked the perceived value of AI, with “41 percent reporting significant value and 37 percent reporting moderate value,” compared to 1% “claiming a negative impact.”


Before embarking on a journey south of the border, I participated in a discussion at one of New York’s largest financial institutions about AI bias. The output of this think tank became a suggested framework for administering AI throughout an organization to protect its employees from bias. We listed three principles: 1) the definition of bias (as it varies from institution to institution); 2) the policies for developing and installing technologies (from hiring to testing to reporting metrics); and 3) employing a Chief Ethics Officer who would report to the board, not the Chief Executive Officer (as the CEO is concerned with profit and could potentially override ethics for the bottom line). These conclusions were supported by a 2018 Deloitte survey that found that 32% of executives familiar with AI ranked ethical issues as one of the top three risks of deployments. At the same time, Forbes reported that the idea of engaging an ethics officer is a hard sell for most Blue Chip companies. In response, Professor Timothy Casey of California Western School of Law recommends repercussions for malicious software similar to those in other licensed professions: “In medicine and law, you have an organization that can revoke your license if you violate the rules, so the impetus to behave ethically is very high. AI developers have nothing like that.” He suggests that building a value system through these endeavors will create an atmosphere whereby “being first in ethics rarely matters as much as being first in revenues.”

While the momentum of AI adoption accelerates faster than a train going down a hill, some forward-thinking organizations are starting to take ethics very seriously. As an example, Salesforce this past January became one of the first companies to hire a “chief ethical and humane use officer,” empowering Paula Goldman: “To develop a strategic framework for the ethical and humane use of technology.” Writing this article, I am reminded of the words of Winston Churchill in the 1930s cautioning his generation about balancing morality with the speed of scientific discoveries, as the pace of innovation even then far exceeded humankind’s own development: “Certain it is that while men are gathering knowledge and power with ever-increasing and measureless speed, their virtues and their wisdom have not shown any notable improvement as the centuries have rolled. The brain of modern man does not differ in essentials from that of the human beings who fought and loved here millions of years ago. The nature of man has remained hitherto practically unchanged. Under sufficient stress—starvation, terror, warlike passion, or even cold intellectual frenzy—the modern man we know so well will do the most terrible deeds, and his modern woman will back him up.”

Join RobotLab on May 16th when we dig deeper into ethics and technology with Alexis Block, inventor of HuggieBot, and Andrew Flett, partner at Mobility Impact Partners, discussing “Society 2.0: Understanding The Human-Robot Connection In Improving The World” at SOSA’s Global Cyber Center in NYC – RSVP Today

]]>
Automate 2019 startup showdown recap https://robohub.org/automate-2019-startup-showdown-recap/ Sun, 21 Apr 2019 21:53:03 +0000 https://robohub.org/automate-2019-startup-showdown-recap/ Read More ›]]> It’s been two years since the last time I judged the Automate Startup Competition. More than any other trade show contest, this event has been an oracle of future success. In following up with the last vintage of participants, all of the previous entrants are still operating and many are completing multi-million-dollar financing rounds. As an indication of the importance of the venue, and quite possibly the growth of the industry, The Robot Report announced last week that 2017 finalist Kinema Systems was acquired by SoftBank’s Boston Dynamics.

Traditionally, autonomous machines at the ProMat Show have been relegated to a subsection of the exhibit floor under the Automate brand. A couple of years ago there were a handful of self-driving rovers and twice as many robotic arms; this year almost one third of the entire McCormick Place was promoting unmanned solutions. As e-commerce sales continue to explode, pressuring fulfillment centers nationwide, the logistics industry now demands two separate conventions, with Automate 2021 being held in the Motor City for the first time. This palpable buzz formed the backdrop to the packed startup theater that represented the burgeoning mechatronic ecosystem. In the words of Jeff Burnstein, president of A3 (Automate’s organizers), “Automation is among the most dynamic emerging markets, with venture funding increasing robustly each year. The finalists in the Automate Launch Pad Startup Competition represent the many types of innovation that will transform the manufacturing and services sectors over the next decade.”

Freeing The Supplychain From Bottlenecks

The first company to present, IM Systems (IMS), traveled from the Netherlands to Chicago to unveil an invention that strikes at the core of the robo-universe: actuation. Today, most deployed co-bots utilize rotary actuators from Japanese-owned Harmonic Drive (HD). HD’s speed reducers are the industry standard for gearing technology, offering the greatest level of movement control, force, and precision of any commercially available mechanical system. This has translated to more than $500 million in annual revenue for HD and is projected to grow to more than $3 billion by 2024. HD’s grip on the industry is cited by many as the main reason why collaborative robot companies have failed to achieve unicorn-level growth. Currently unchallenged by competitors, HD has been free to artificially inflate prices by tightly controlling the number of units it ships a year, maintaining a careful balance of low supply and high demand. As Thibaud Verschoor, founder of IM Systems, walked on stage, I was eager to hear how his company planned to disrupt HD’s virtual monopoly. Verschoor introduced a toothless gearbox called the Archimedes Drive that relies on friction instead of gear teeth to transmit torque, promising greater precision and lower cost and thus hitting directly at Harmonic Drive. Originating from the Delft University of Technology, IMS is already boasting of its growing list of pre-orders, with roboticists lining up for a long-awaited alternative. The startup founder quipped that until now it was “quicker to gestate a baby than get an actuator from Harmonic Drive,” but not anymore, as its business child will be shipping next year.
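For context on why a single-stage speed reducer is so valuable here: a strain-wave (harmonic) gearbox achieves very high reduction in one compact stage from the small difference in tooth counts between its flexspline and circular spline. A minimal sketch of the standard reduction formula, with hypothetical tooth counts (the Archimedes Drive, by contrast, achieves its reduction through friction rollers rather than teeth):

```python
# Strain-wave (harmonic) gear reduction: a flexspline with Zf teeth runs
# inside a circular spline with Zc teeth (typically Zc = Zf + 2); the
# single-stage reduction ratio is Zf / (Zc - Zf).
def strain_wave_ratio(flexspline_teeth: int, circular_spline_teeth: int) -> float:
    """Return the single-stage reduction ratio of a strain-wave gearbox."""
    return flexspline_teeth / (circular_spline_teeth - flexspline_teeth)

# Hypothetical example: a 200-tooth flexspline inside a 202-tooth
# circular spline yields a 100:1 reduction in a single stage.
print(strain_wave_ratio(200, 202))  # 100.0
```

A two-tooth difference is all it takes to reach a 100:1 ratio, which is why these reducers dominate precision joints and why an alternative supplier matters so much to the industry.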

Extending Uptime Performance

In addition to actuation, energy efficiency has been a hurdle for robots in completing unplugged missions. WiBotic, an innovation spun out of the University of Washington, promises continuous charging availability with its patented wireless inductive power transfer system. Dr. Ben Waters took to the stage to demonstrate how WiBotic is able to charge unmanned vehicles and drones within ten centimeters of its transfer coils. In addition, WiBotic’s platform is able to monitor a facility’s entire fleet of autonomous systems to provide managers with better power consumption and machine utilization data, enabling greater productivity and cost efficiency. When I asked Dr. Waters what is next for his company, he exclaimed that having conquered the air with drones and the land with robots, they are now aiming at the sea.
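To illustrate the kind of utilization metric such a fleet-monitoring platform could surface (a sketch with invented log data, not WiBotic’s actual API), one can compute the share of each robot’s day spent doing productive work versus charging or sitting idle:

```python
# Estimate per-robot utilization from (hypothetical) daily logs:
# utilization = hours spent working / total hours observed.
def utilization(run_hours: float, charge_hours: float, idle_hours: float) -> float:
    """Fraction of observed time a robot spent doing productive work."""
    total = run_hours + charge_hours + idle_hours
    return run_hours / total if total else 0.0

# Invented example fleet: robot name -> (run, charge, idle) hours in a day.
fleet = {"amr-1": (14.0, 6.0, 4.0), "amr-2": (9.0, 5.0, 10.0)}
for name, hours in fleet.items():
    print(name, round(utilization(*hours), 2))
```

A manager comparing these numbers can spot machines that spend disproportionate time charging or idle, which is exactly the productivity data the pitch emphasized.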

Maximizing Human Labor

Labor, more than any other theme, was the big discussion on the floor of ProMat and Automate. While many pundits decry automation for taking jobs, Daniel Theobald of Vecna Robotics shared with me, at our fireside chat, that no one has been fired because of a robot. In fact, today there are not enough humans to fulfill the growing demand of a global economy. One of the most grueling tasks still performed by humans is riveting, often done in stiff contortions for hours at a time. Wilder Systems offers a new collaborative robot system for the aerospace industry to relieve humans of the repetitive, dangerous occupation of vertical drilling and fastening performed during fuselage manufacturing. Wilder’s modular mobile system is affordable for factories, providing customers with greater speed, consistency, and accuracy. Until recently, robotic systems like Wilder’s were only available to large corporations, but now, with the startup’s “robot-as-a-service” business model, smaller aircraft plants are able to automate.

Providing A Gentle Touch 


While I was moderating a session the day before on the definition of success in deploying robots, a large grocery store operator asked the panel what was available for his needs. While hard goods are challenging, picking up the wide variety of fragile organic materials in food is almost impossible for traditional metal grippers. While there are a small number of soft robot solutions available on the market, most require air to be pumped into their elastomeric end effectors, adding cost and complexity to the installation. Ubiros claims to be the “first fully electrically operated soft gripper” without any expensive peripheral equipment, such as pressurized air. To illustrate the tenderness of their solution, the startup showcased FlowerBot, a robot arm with their gripper that is capable of picking up roses to create attractive (pre-programmed) bouquets. Dr. Cagdas D. Onal, the company’s founder, proudly introduced the judges to a new customer that had just purchased 250 units to begin automating his floral fulfillment center. To Dr. Onal’s credit, the Automate pavilion literally smelled like roses as throngs of people sat in the theater with vases on their laps.

Visualizing Installations Before Production


An old proverb teaches that “seeing is believing”; unfortunately, machines are often plagued by Murphy’s Law during the onboarding process. Firefly Dimension, an augmented reality startup for industrial applications, is focusing on speeding up manufacturing and product development with its unique perspective. Firefly’s headset is the only one in development promising a field of view equal to one’s eyes. This translates to lower downtime, lower costs, and potentially better outcomes. The company shared with the judges its early successes with manufacturers in China that are already testing its prototypes in their production facilities. As many industry analysts project the market for augmented reality solutions to exceed $60 billion by 2024, the Silicon Valley team of entrepreneurs is well positioned to take advantage of the next stage of automation technologies.

Making Robots Accessible For The Masses


According to Dr. Rahul Chipalkatty, CEO of Southie Autonomy, most of the tasks in the warehouse and factory are not automated because they change too quickly for professional integrators and on-staff engineers to respond. Dr. Chipalkatty asserts that this market is actually the lowest hanging fruit for automation, but has remained untapped as technologists have failed to target non-technical workers. To counter this trend, Southie literally developed a magic wand that utilizes artificial intelligence, gesture control and augmented reality projections to train robots on the fly to respond to spontaneous jobs. The Boston-based startup is marketing its technology for “ANY industrial robot to be re-purposed and re-deployed by ANY person, without robotics expertise or even computer skills.” Companies like Southie that focus on ease-of-use interfaces will only further the overall adoption of robotics in the coming years for millions of employees globally. 

Building A Collision-Free World 


On the day of the startup competition, Realtime Robotics announced the launch of its proprietary computer board and software that enables “collision-free” motion planning within milliseconds for collaborative robots and autonomous vehicles to work together. Marketed as RapidPlan and RapidSense, the offering, the startup contends, is the only solution available that enables machines to safely operate within workcells alongside humans and other robots simultaneously. According to the press release (which was later explained on stage), RapidPlan enables users to load up to “20 million motions” into the system, which analyzes “800,000 motions at 30 frames-per-second” and is automatically integrated with the machine’s onboard sensors via RapidSense. Realtime Robotics’ CEO, Peter Howard, bragged, “Our collision-free motion planning solutions allow robots to perform safely in dynamic, unstructured, and collaborative workspaces, while instantaneously reacting to changes as they occur.” Realtime, backed by Toyota AI, is definitely on my startup watch list for 2019/2020.
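The general idea behind this style of planner (sketched here in deliberately simplified form; this is not Realtime’s actual implementation) is to precompute a large roadmap of candidate motions, each stored with the space it sweeps through, and then, on every sensor frame, discard any motion whose swept volume intersects currently occupied space:

```python
# Toy roadmap-style collision filtering: each precomputed motion is
# stored with the set of coarse voxels its swept volume covers; per
# sensor frame, keep only motions disjoint from the occupied voxels.
from typing import Dict, FrozenSet, Set, Tuple

Voxel = Tuple[int, int, int]

def collision_free(motions: Dict[str, FrozenSet[Voxel]],
                   occupied: Set[Voxel]) -> Set[str]:
    """Return names of motions whose swept voxels avoid all obstacles."""
    return {name for name, swept in motions.items()
            if swept.isdisjoint(occupied)}

# Hypothetical two-motion roadmap on a 3-D voxel grid.
motions = {
    "reach_left":  frozenset({(0, 1, 1), (0, 2, 1)}),
    "reach_right": frozenset({(2, 1, 1), (2, 2, 1)}),
}
occupied = {(2, 2, 1)}  # e.g. a human hand detected by the sensor layer
print(sorted(collision_free(motions, occupied)))  # ['reach_left']
```

Because the expensive geometry is baked into the precomputed swept volumes, the per-frame work reduces to fast set intersections, which is what makes millisecond reaction times plausible at this scale.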

And The Winner Is…


Deliberating in the judges’ room, we were challenged to pick a winner from so many qualified, diverse startups. Balancing the presentations against the immediate needs of the automation industry, one company universally stood out for its important contribution. IM Systems’ Archimedes Drive has the potential to generate a billion-dollar valuation with its promise of bringing down the cost of adoption and quickening the speed of deployment. Following the show, I caught up with IMS founder Jack Schorsch and asked him how he plans to compete against a multinational conglomerate like Harmonic Drive. Schorsch responded, “I do think that HD & Nabtesco have to some degree deliberately throttled their supply, in order to keep unit profits high on sales to everyone but Fanuc/ABB/Yaskawa. On the flip side, I know in my bones that if you can come to market with a technically comparable or better drive you can sell every unit that you can make.”

Join RobotLab on May 16th when we host a discussion with Alexis Block, inventor of HuggieBot, and Andrew Flett, partner at Mobility Impact Partners, on “Society 2.0: Understanding The Human-Robot Connection In Improving The World,” RSVP Today

]]>
ProMat preview: It’s time to cut the cord https://robohub.org/promat-preview-its-time-to-cut-the-cord/ Sun, 07 Apr 2019 21:00:30 +0000 https://robohub.org/promat-preview-its-time-to-cut-the-cord/ Read More ›]]> Last week’s breaking news story on The Robot Report was unfortunately the demise of Helen Greiner’s company, CyPhy Works (d/b/a Aria Insights). The high-flying startup raised close to $40 million since its creation in 2008, making it the second business founded by an iRobot alum to shutter within five months. While it is not immediately clear why the tethered-drone company went bust, it does raise important questions about the long-term market opportunities for leashed robots.


The tether concept is not exclusive to Greiner’s company; a handful of drone companies vie for market share, including FotoKite, Elistair, and HoverFly. The primary driver toward tethering an Unmanned Aerial Vehicle (UAV) is bypassing the Federal Aviation Administration’s (FAA) ban on beyond-line-of-sight operations. Therefore, the only legal way to truly fly autonomously, without an FAA waiver, is attaching a cord to the machine. There are a host of other advantages, such as continuous power and data links. In the words of Elistair customer Alexandre Auger of Adéole, “We flew 2 hours 45 minutes before the concert and 1 hour after with Elistair’s system. This innovation allowed us to significantly increase our flight time! During our previous missions, we did not have this system and the pressure related to battery life was huge.”

Most of the millions of robots installed around the world are stationary and, thus, tethered. The question of binding an unmanned system to a power supply and data uplink is really only relevant for units that require mobility. In a paper written in 2014, Dr. Jamshed Iqbal stated, “Over the last few years, mobile robot systems have demonstrated the ability to operate in constrained and hazardous environment and perform many difficult tasks. Many of these tasks demand tethered robot systems. Tether provides the locomotion and navigation so that robot can move on steep slopes.” Most robotic companies employed leashes five years ago, even mobility leader Boston Dynamics. However, today Marc Raibert’s company has literally cut the cord on its fleet, proving once and for all that greater locomotion and agility await on the other side of the tether.

This past Thursday, Boston Dynamics unveiled its latest breakthrough for commercializing unhitched robots: freewheeling warehouse-bots. In a video on YouTube that has already garnered close to a million views, a bipedal wheeled robot named Handle is shown seamlessly palletizing boxes and unloading cartons onto a working conveyor belt. Since SoftBank’s acquisition of Boston Dynamics in 2017, the mechatronic innovator has pivoted from contractor of defense concepts to purveyor of real-world robo-business solutions. Earlier this year, Raibert exclaimed that his latest creations are “motivated by thinking about what could go in an office — in a space more accessible for business applications — and then, the home eventually.” The online clip of Handle as the latest “mobile manipulation robot designed for logistics” is part of a wider marketing campaign leading up to ProMat 2019*, the largest trade show for supplychain automation held in Chicago later this month.

According to the company’s updated website, Handle, the six-foot, two-hundred-pound mechanical beast, is “A robot that combines the rough-terrain capability of legs with the efficiency of wheels. It uses many of the same principles for dynamics, balance, and mobile manipulation​ found in the quadruped and biped robots we build, but with only 10 actuated joints, it is significantly less complex. Wheels are fast and efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle has the best of both worlds.” The video is already creating lots of buzz on social media, with Evan Ackerman of IEEE Spectrum tweeting, “Nice to see progress, although I’ve still got questions about cost effectiveness, reliability, and safety.”


To many in the retail market, palletizing is the holy grail for automating logistics. In a study released earlier this month by Future Market Insights (FMI), the market for such technologies could climb to over $1.5 billion worldwide by 2022. FMI estimated that the driving force behind this huge spike is that “Most of the production units are opting for palletizing robots in order to achieve higher production rates. The factors that are driving the palletizing robots market include improved functionality of such robots along with a simplified user interface.” It further provided a vision of the types of innovations that would be most successful in this arena: “Due to the changing requirements of the packaging industry, hybrid palletizing robots have been developed that possess the flexibility and advantages of a robotic palletizer and can handle complex work tasks with the simplicity of a conventional high speed palletizer. Such kind of palletizing robots can even handle delicate products and perform heavy duty functions as well, apart from being simple to use and cost effective in operations.” Almost prophetic in its description, FMI described Handle’s free-wheeling demonstration weeks before the public release by Boston Dynamics.


The mantra for successful robot applications is “dull, dirty and dangerous.” While advances like Handle continue to push the limits of mobility for the “dull” tedious tasks of inventory management, “dirty and dangerous” use cases require more than ninety minutes of continuous power. For example, tethered machines have been deployed in the cleanup efforts of the Fukushima Daiichi Nuclear Power Plant since the tsunami in March 2011. The latest invention, released this month, is a Toshiba robot packed with cameras and sensors, including “fingers” for directly interacting with deposits in the environment, enabling deeper study of radioactive residue. In explaining the latest invention, Jun Suzuki of Toshiba said, “Until now we have only seen those deposits, and we need to know whether they will break off and can be picked up and taken out. Touching the deposits is important so we can make plans to sample the deposits, which is a next key step.”

The work of Suzuki and his team in creating leashed robots in disaster recovery has already spilled over to new strides for underwater and space exploration. Last week, the Japanese Aerospace Exploration Agency announced a partnership with GITAI to build devices for the International Space Station. In the words of GITAI’s CEO, Sho Nakanose, “GITAI aims to replace astronauts with robots that can work for a long time while being remotely controlled from Earth while in low Earth orbit space stations to reduce the burden on astronauts, shorten the time it takes to perform work in space, and reduce costs.”

* Editor’s Note: I will be moderating a panel at ProMat 2019 on “Achieving Return On Investment: Demystifying The Hype And Achieving Implementation Success,” on April 9th and the next day hosting a fireside chat with Daniel Theobald of Vecna Robotics on “Investing in Automation,” as part of the Automate program.

]]>
Is the green new deal sustainable? https://robohub.org/is-the-green-new-deal-sustainable/ Tue, 12 Feb 2019 21:07:02 +0000 https://robohub.org/is-the-green-new-deal-sustainable/ Read More ›]]> This week Washington DC was abuzz with news that had nothing to do with the occupant of The White House. A group of progressive legislators in the House of Representatives, led by Alexandria Ocasio-Cortez, introduced “The Green New Deal.” The resolution, responding to warnings from the Intergovernmental Panel on Climate Change and the alarming Fourth National Climate Assessment, aims to reduce global “greenhouse gas emissions from human sources of 40 to 60 percent from 2010 levels by 2030; and net-zero global emissions by 2050.” While the bill is largely targeting the transportation industry, many proponents suggest that it would be more impactful, and healthier, to curb America’s insatiable appetite for animal agriculture.

In a recent BBC report, “Food production accounts for one-quarter to one-third of all anthropogenic greenhouse gas emissions worldwide, and the brunt of responsibility for those numbers falls to the livestock industry.” The average US family “emits more greenhouse gases because of the meat they eat than from driving two cars,” quipped Professor Tim Benton of the University of Leeds. “Most people don’t think of the consequences of food on climate change. But just eating a little less meat right now might make things a whole lot better for our children and grandchildren,” sighed Benton.

Americans continue to chow down more than 26 billion pounds of meat a year, distressing environmentalists who assert that the current status quo is unsustainable. While universal veganism could reduce food-related greenhouse gas emissions worldwide by as much as 70%, it is not foreseeable that 7 billion people would instantly change their diets to save the planet. Robotics, and even more so artificial intelligence, is now being embraced by venture-backed entrepreneurs to artificially grow meat alternatives as creative gastronomic replacements.


Chilean startup Not Company (NotCo) built a machine learning platform named Giuseppe to search for animal ingredient substitutes. NotCo founder Matias Muchnick explains, “Giuseppe was created to understand molecular connections between food and the human perception of taste and texture.” While Muchnick did not disclose his techniques, he revealed to Business Insider that the company has hired teams of food and data scientists to classify ingredients into bits for Giuseppe. Muchnick explains that the AI begins the work of processing the “data regarding how the brain works when it’s given certain flavors, when you taste salty, umami, [or] sweet.” Today, the company has a line of egg and milk alternatives on the shelves, including “Not Mayo,” “Not Cheese,” “Not Yogurt,” and “Not Milk.” The NotCo website states that this is only the first step in a larger scheme for the deep learning algorithm: “NotCo currently has a very ambitious development plan for Giuseppe, which includes the generation of new databases with information of a different nature, such as production processes and other molecular properties of food, in such a way that Giuseppe gets closer and closer to be the most advanced chef and food scientist in the world.”

NotCo competes in a growing landscape of other animal substitute upstarts. Hampton Creek, which recently rebranded as JUST, also offers an array of dairy and egg alternatives from plant-based ingredients. The ultimate test for all these companies is creating meat in a petri dish. When responding to the challenge, JUST announced, “Through a first-of-its-kind partnership, JUST will develop cultured Wagyu beef using cells from Toriyama prized cows. Then, Awano Food Group (a premier international supplier of meat and seafood) will market and sell the meat to clients exactly how they do today with conventionally produced Toriyama Wagyu.” Today, a handful of companies, many ironically backed by livestock corporations, are also tackling the $90 billion cellular agriculture market, including Mosa Meat, Impossible Burger, Beyond Meat, and Memphis Meats. Mosa, backed by Google founder Sergey Brin, unveiled the first synthetic burger in 2013 at a staggering cost of nearly a half million dollars.

While costs are declining, cultured meat is still too expensive to become a staple of the American diet, especially when $1 still buys a fast food dinner. The key to mass acceptance is attacking the largest pain point in the lab: acquiring enough genetic material from bovine tissue. Currently, the cost of such serums is close to $1,000 an ounce, and they are not exactly cruelty-free as they are derived from animals. Many clean meat founders are proudly vegan, with the implicit goal of replacing animal ingredients altogether. In order to accomplish this task, companies like JUST have invested in building robust AI and robotic systems to automatically scour the globe for plant-based alternatives. “Over 300,000 species are in the plant kingdom. That’s over 18 billion proteins, 108 million lipids, and 4 million polysaccharides. It’s an abundance almost entirely unexplored, until now,” exclaims their website. The company boasts that it is on the verge of major discoveries: “The more we explore, the more data we gather along the way. And the faster we’ll find the answers. It’s almost impossible to look at the data and say, ‘Here’s a pattern. Here’s an answer.’ So, we have to come up with algorithms to rank the materials and give downstream experiments a recommendation. In this way, we’re using data to increase the probability of discoveries.”


The next few years will unearth major breakthroughs; Mosa has already announced it will have an affordable product on the shelves by 2021. To accomplish this task, the company turned to Merck’s corporate venture arm, M Ventures, and Bell Food to lead its previous financing round. Last July, Forbes reported that the strategic partnerships are critical to Mosa’s vision of mass producing meat. According to Mosa’s founder, Mark Post, “Merck’s experience with cell cultures is very attractive from a strategic standpoint. Cell production is key to scaling cultured meat production, as they still need to figure out how to get cells to grow more rapidly and at higher numbers. In short, new technology needs to be developed. That’s where companies like Merck can lend a hand.” In addition to leveraging the conglomerate’s expertise in the lab, food-packaging powerhouse Bell Food provides a huge distribution advantage. Already, Lorenz Wyss, CEO of Bell Food Group, excitedly predicts, “Meat demand is soaring and in the future it won’t be met by livestock agriculture alone. We believe this technology can become a true alternative for environment-conscious consumers, and we are delighted to bring our know-how and expertise of the meat business into this strategic partnership with Mosa Meat.”

While the Green New Deal has been met with skepticism, the converging forces of climate change and technology are steaming ahead. Today, we have the computational and mechatronic power to turn back the tides of destruction and effect positive change across the planet, quite possibly starting with scaling back animal agriculture. Even Winston Churchill commented in 1931, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”

Are our food sources and AgTech networks under attack? Learn more at the next RobotLab on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City, RSVP Today!

]]> The metaphysical impact of automation https://robohub.org/the-metaphysical-impact-of-automation/ Sun, 16 Dec 2018 20:54:40 +0000 https://robohub.org/the-metaphysical-impact-of-automation/ Read More ›]]>

Earlier this month, I crawled into Dr. Wendy Ju‘s autonomous car simulator to explore the future of human-machine interfaces at CornellTech’s Tata Innovation Center. Dr. Ju recently moved to the Roosevelt Island campus from Stanford University. While in California, the roboticist was famous for making videos capturing people’s reactions to self-driving cars using students disguised as “ghost-drivers” in seat costumes. Professor Ju’s work raises serious questions about the metaphysical impact of docility.

Last January, Toyota Research published a report on the neurological effects of speeding. The team displayed images and videos of sports cars racing down highways that produced spikes in brain activity. The study states, “we hypothesized that sensory inputs during high-speed driving would activate the brain reward system. Humans commonly crave sensory inputs that give rise to pleasant sensations, and abundant evidence indicates that the craving for pleasant sensations is associated with activation within the brain reward system.” The brain reward system is directly correlated to the body’s release of dopamine via the Ventral Tegmental Area (VTA). The findings confirmed that responses in the VTA “were stronger in the fast condition than in the slow condition.” Essentially, speeding (which most drivers engage in regardless of laws) is addicting, as the brain rewards such aggressive behaviors with increased levels of dopamine.

As we relegate more driving to machines, the roads are in danger of becoming highways of strung-out dopamine junkies craving new ways to get their fix. Self-driving systems could lead to a marketing battle for in-cabin services pushed by manufacturers, software providers, and media/Internet companies. As an example, Apple filed a patent in August for “an augmented-reality powered windshield system.” This came two years after Ford filed a similar patent for a display or “system for projecting visual content onto a vehicle’s windscreen.” Both of these filings, along with a handful of others, indicate that the race for capturing rider mindshare will be critical to driving the adoption of robocars. Strategy Analytics estimates this “passenger economy” could generate $7 trillion by 2050. Commuters who spend 250 million hours a year in the car are seen by these marketers as a captive audience for new ways to fill dopamine-deprived experiences.

I predict that at next month’s Consumer Electronics Show (CES), in-cabin services will be the lead story coming out of Las Vegas. For example, last week Audi announced a new partnership with Disney to develop innovative ways to entertain passengers. Audi calls the in-cabin experience “The 25th Hour,” which will be further unveiled at CES. Providing a sneak peek into its meaning, CNET interviewed Nils Wollny, head of Audi’s digital business strategy. According to Wollny, the German automobile manufacturer approached Disney 18 months ago to forge a relationship. Wollny explains, “You might be familiar with their Imagineering division [Walt Disney Imagineering], they’re very heavy into building experiences for customers. And they were highly interested in what happens in cars in the future.” He continues, “There will be a commercialization or business approach behind it [for Audi]… I’d call it a new media type that isn’t existing yet that takes full advantage of being in a vehicle. We created something completely new together, and it’s very technologically driven.” When illustrating this vision to CNET’s Road Show, Wollny directed the magazine to Audi’s fully autonomous concept car design that “blurs the lines between the outside world and the vehicle’s cabin.” This is accomplished by turning windows into screens with digital overlays that show media while the outside world rushes by at 60 miles per hour.

Self-driving cars will be judged not by the speed of their engines, but by the comfort of their cabins. Wollny’s description is reminiscent of the marketing efforts of social media companies that were successful in turning an entire generation into screen addicts. Facebook’s founding president, Sean Parker, admitted recently that the social network was built with the strategy of consuming “as much of your time and conscious attention as possible.” To accomplish this devious objective, Parker confesses that the company exploited the “vulnerability in human psychology.” When you like something or comment on a friend’s photo, Parker boasted, “we… give you a little dopamine hit.” The mobile economy has birthed dopamine experts such as Ramsay Brown, cofounder of Dopamine Labs, which promises app designers increased levels of “stickiness” by aligning game play with the player’s cerebral reward system. Using machine learning, Brown’s technology monitors each player’s activity to provide the most optimal spike of dopamine. New York Times columnist David Brooks said it best: “Tech companies understand what causes dopamine surges in the brain and they lace their products with ‘hijacking techniques’ that lure us in and create ‘compulsion loops’.”

The promise of automation is to free humans from dull, dirty, and dangerous chores. The flip side, many espouse, is that artificial intelligence could make us too reliant on technology, idling society. Already, semi-autonomous systems are being cited as a cause of workplace accidents. Andrew Moll of the United Kingdom’s Chamber of Shipping warned that greater levels of automation, by outsourcing decision making to computers, have led to higher levels of maritime collisions. Moll pointed to a recent spate of seafaring incidents: “We have seen increasing integration of ship systems and increasing reliance on computers.” He elaborated that “Humans do not make good monitors. We need to set alarms and alerts, otherwise mariners will not do checks.” Moll exclaimed that technology is increasingly making workers lazy, as many feel a “lack of meaning and purpose” and are suffering from mental fatigue, which is leading to a rise in workplace injuries. “Seafarers would be tired and demotivated when they get to port,” cautioned Moll. These observations are not isolated to shipping; the recent fatality in Uber’s autonomous taxi program in Arizona was blamed in part on safety driver fatigue. In the Pixar movie WALL-E, the future is so automated that humans have lost all motivation to leave their mobile lounge chairs. To avoid this dystopian vision, successful robotic deployments will have to strike the right balance of augmenting the physical while providing cerebral stimulation.

To better understand the automation landscape, join us at the next RobotLab event on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City, RSVP Today!

]]>
The end of parking as we know it https://robohub.org/the-end-of-parking-as-we-know-it/ Sun, 02 Dec 2018 22:52:24 +0000 https://robohub.org/the-end-of-parking-as-we-know-it/ Read More ›]]>

A day before snow hindered New York commuters, researchers at the University of Iowa and Princeton identified the growth of urbanization as the leading cause of catastrophic storm damage. Wednesday’s report stated that the $128 billion wake of Hurricane Harvey was 21 times greater due to the population density of Houston, one of America’s fastest growing cities. This startling statistic is even more alarming in light of a recent UN study which reported that 70% of the projected 9.7 billion people in the world will live in urban centers by 2050. Superior urban management is one of the major promises of autonomous systems and smart cities.

Today, one of the biggest headaches for civil planners is the growth of traffic congestion and the demand for parking, especially considering cars are among the most inefficient and expensive assets owned by Americans. According to the Governors Highway Safety Association, the average private car is parked 95% of the time. Billions of dollars of American real estate is dedicated to parking; in a city like Seattle, 40% of the land is consumed by it. Furthermore, INRIX estimates that more than $70 billion is spent by Americans looking for parking, with the average driver wasting $345 a year in time, fuel and emissions. “Parking pain costs much more…New Yorkers spend 107 hours a year looking for parking spots at a cost of $2,243 per driver,” states INRIX.

This month I spoke with Dr. Anuja Sonalker about her plan to save Americans billions of dollars in parking costs. Dr. Sonalker is the founder of STEER Technologies, a full-service auto-valet platform providing autonomous solutions to America’s parking pain. The STEER value proposition uses a sensor array that easily connects to a number of popular automobile models, seamlessly controlled by one’s smartphone. As Dr. Sonalker explains, “Simply put STEER allows a vehicle user to pull over at a curb (at certain destinations), and with a press of a button let the vehicle go find a parking spot and park for you. When it’s time to go, simply summon the vehicle and it comes back to get you.” An added advantage of STEER is its ability to conserve space, as cars can be parked very close together since computers don’t use doors.

Currently, STEER is piloting its technology near its Maryland headquarters. In describing her early success, Dr. Sonalker boasts, “We have successfully completed testing various scenarios under different weather and lighting conditions at malls, train stations, airports, parks, construction sites, downtown areas. We have also announced launch dates in late 2019 with the Howard Hughes Corporation to power the Merriweather district – a 4.9 million square foot new smart development in Columbia, MD, and the BWI airport.” STEER’s early showing is the result of a proprietary product built for all seasons and topographies. “Last March, we demonstrated live in Detroit during a very fast-moving snowstorm. Within less than an hour the ground was covered in 2+ inches of snow,” describes Dr. Sonalker. “No lane markings were visible any more, and parking lines certainly were not visible. The STEER car successfully completed its mission to ‘go park’, driving around the parking lot, recognizing other oncoming vehicles, pacing itself accordingly and locating and maneuvering itself into a parking spot among other parked vehicles in that weather.”

In breaking down the STEER solution, Dr. Sonalker expounds, “The technology is built with a lean sensor suite, so its cost equation is very favorable to both aftermarket and integrated solutions for consumer ownership.” She further clarifies, “From a technology standpoint both solutions are identical in the feature they provide. The difference lies in how the technology is integrated into the vehicle. For aftermarket, STEER’s technology runs on additional hardware that is retrofitted to the vehicle. In an integrated solution STEER’s technology would be housed on an appropriate ECU driven by vehicle economics and architecture, but with a tight coupling with STEER’s software. The coupling will be cross-layered in order to maintain the security posture.” Unlike many self-driving applications that rely heavily on LIDAR (Light Detection and Ranging), STEER uses location mapping of predetermined parking structures along with computer vision. I pressed Dr. Sonalker about her unusual setup: “Yes, it’s true we don’t use LIDAR. You see, STEER started from the principle of security-led design, which is where we start from a minimum design, minimum attack surface, maximum default security.”

I continued my interview with Dr. Sonalker to learn how she plans to roll out the platform: “In the long term, we expect to be a feature on new vehicles as they roll out of the assembly line. 2020-2021 seems to be happening based on our current OEM partner targets. Our big picture vision is that people no longer have to think about what to do with their cars when they get to their destination. The equivalent effect of ride sharing – your ride ends when you get off. There will be a network of service points that your vehicle will recognize and go park there until you summon it for use again.” STEER’s solution is part of a growing fleet of smart city initiatives cropping up across the automation landscape.

At last year’s Consumer Electronics Show, German auto supplier Robert Bosch GmbH unveiled its new crowd-sourced parking program called “community-based parking.” Using a network of cameras and sensors to identify available parking spots, Bosch’s cloud network automatically directs cars to the closest spot. This is part of Bosch’s larger urban initiative, as the company’s president Mike Mansuetti says, “You could say that our sensors are the eyes and ears of the connected city. In this case, its brain is our software. Of Bosch’s nearly 400,000 associates worldwide, more than 20,000 are software engineers, nearly 20% of whom are working exclusively on the IoT. We supply an open software platform called the Bosch IoT Suite, which offers all the functions necessary to connect devices, users, and companies.”

As the world grapples with a population explosion exacerbated by cars strangling city centers, civil engineers are challenging technologists to reimagine urban communities. Today, most cars sit dormant and, when used, run at one-fifth capacity, with typical trips less than a mile from one’s home (easily accessible on foot or bike). In the words of Dr. Sonalker, “All autonomous technologies will lead to societal change. AV Parking will result in more efficient utilization of existing spaces, fitting more in the same spaces, better use of underutilized remote lots, and frankly, even shunting parking off to further remote locations and using prime space for more enjoyable activities.”

Join the next RobotLab forum discussing “Cybersecurity & Machines” to learn how hackers are attacking the ecosystem of smart cities and autonomous vehicles with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City, RSVP Today!

]]>
Accessing the power of quantum computing, today https://robohub.org/accessing-the-power-of-quantum-computing-today/ Sun, 25 Nov 2018 21:37:03 +0000 https://robohub.org/accessing-the-power-of-quantum-computing-today/ Read More ›]]>

Two weeks ago, I participated in a panel at the BCI Summit exploring the impact of quantum computing. As a neophyte to the subject, I marveled at the riddle posed by Grover’s algorithm. Imagine you are assigned to find a contact in a phonebook with a billion names, but all you are given is a telephone number. A classical computer must check the entries one by one – up to one billion operations in the worst case – while a quantum computer running Grover’s algorithm can find the answer in roughly the square root of that number of steps: about 31,600 operations, or 0.003% of the classical worst case.
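For intuition, the square-root speedup can be sketched with a few lines of arithmetic – a back-of-the-envelope illustration of the query counts, not a quantum program:

```python
import math

def classical_queries(n: int) -> int:
    # Unstructured search: worst case checks every entry once.
    return n

def grover_queries(n: int) -> int:
    # Grover's algorithm needs on the order of sqrt(N) oracle queries
    # (the precise count is about (pi/4) * sqrt(N)).
    return math.isqrt(n) + 1

entries = 1_000_000_000  # a billion-name phonebook
print(classical_queries(entries))  # 1000000000
print(grover_queries(entries))     # 31623
print(100 * grover_queries(entries) / classical_queries(entries))  # ≈ 0.003 (percent)
```

The ratio of the two counts is where the article’s 0.003% figure comes from.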

Last week, Volkswagen announced that it is developing a smart traffic management system using quantum computers with the aim of increasing global transportation efficiency. In the words of Florian Neukart of Volkswagen’s CODE Lab in San Francisco, “We want to gain an in-depth understanding of applications of this technology which could be beneficial to the company, including traffic optimization. Public transport organizations and taxi companies in large cities are highly interested in managing their fleets efficiently. Our quantum-optimized traffic management system could help make that a reality.” Neukart is joined in the effort by an all-star team that includes Google, quantum computing hardware startup D-Wave, and Swiss-based data analytics firm Teralytics.

According to Alastair MacLeod of Teralytics, today’s supercomputers are not quick enough to crunch the immense amount of data posed by urban traffic to forecast transportation needs. To accomplish this objective, the team of researchers is evaluating current traffic flows using conventional binary computing, then uploading the data via the cloud to D-Wave’s quantum interface for predictive analysis. The next step in Volkswagen’s vision is to simultaneously predict traffic movements and even pedestrian and vehicular destinations. Ultimately, the goal of the German car manufacturer is to create smarter, safer cities that move their citizens efficiently through the streets. As MacLeod describes, “In the end, mobility at its highest level is humans moving around in patterns and needing to somehow match the mode of transportation that suits best the trip and the supply of those different modes of transportation for the demand.”
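Annealers like D-Wave’s accept optimization problems phrased as minimizing an “energy” over binary choices. As a heavily simplified illustration – the cars, routes, and road segments below are invented, and Volkswagen’s real formulation is far richer – a toy congestion problem looks like this, solved here by classical brute force rather than quantum hardware:

```python
from itertools import product

# Each car picks one of two candidate routes; a route is the set of
# road segments it uses. Routes that overlap create congestion.
routes = {
    ("car1", 0): {"A", "B"}, ("car1", 1): {"C", "D"},
    ("car2", 0): {"A", "B"}, ("car2", 1): {"B", "D"},
}

def congestion(assignment):
    # "Energy" = number of road segments shared between the chosen routes.
    chosen = [routes[(car, r)] for car, r in zip(("car1", "car2"), assignment)]
    return len(chosen[0] & chosen[1])

# An annealer searches this space in hardware; for two cars we can
# enumerate all four assignments classically.
best = min(product((0, 1), repeat=2), key=congestion)
print(best, congestion(best))  # (1, 0) 0 — disjoint routes minimize congestion
```

The appeal of quantum annealing is that the search space doubles with every added car-route choice, which is exactly where enumeration breaks down.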


In order to understand the impact of quantum computing on autonomous systems, I reached out to my fellow panelist Dr. Jerry Chow, manager of Experimental Quantum Computing for IBM. Recently, the computing giant announced that it is working with Volkswagen’s competitor, Daimler. According to Chow’s press office, “IBM Q Network partner Daimler AG is working with us to advance the potential use cases of quantum computing for the automotive and transportation industry. Some areas of their research include finding and developing new materials for automotive application through quantum chemistry, complex optimization problems such as for manufacturing processes or vehicle routing for fleet logistics or autonomous/self-driving cars, and the intersection of quantum and machine learning to enhance the capabilities of artificial intelligence.”

When we met, Chow reminded me that it is still early in the development of such technologies. It is important to unpack the current offerings within this nascent marketplace, which also includes Google, Microsoft, Intel and a couple of university upstarts. One of the key distinguishing factors is the number of qubits in a system, which roughly translates to the size of its memory. For example, a 64-qubit quantum computer can represent up to 18 quintillion numbers – coupled with entanglement, parallel processing power and targeted algorithms, this enables such new computing paradigms to perform certain functions at lightning speed. This past September, Rigetti (a startup out of Berkeley) opened up its 128-qubit system to the public via the cloud for such complex applications as modeling molecular structures and machine learning. “What we want to do is focus on the commercial utility and applicability of these machines, because ultimately that’s why this company exists,” expounds Chad Rigetti, whose company has already raised close to $120 million to fulfill this promise.
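The scaling behind that “18 quintillion” figure is easy to check: an n-qubit register encodes 2^n basis-state amplitudes, so capacity doubles with every qubit added. A quick sketch:

```python
# An n-qubit register encodes 2**n complex amplitudes — the source of
# the headline figures for the 49-, 64-, 72- and 128-qubit machines
# mentioned in this article.
for n in (49, 64, 72, 128):
    print(f"{n:3d} qubits -> {2**n:.3e} amplitudes")

# The "18 quintillion" claim for 64 qubits:
print(2**64)  # 18446744073709551616
```

This exponential growth is also why classical simulation of quantum machines stalls somewhere around 50 qubits.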

In the qubit race, Google’s Bristlecone is a distant second with 72 qubits, with IBM and Intel following suit with 50 and 49 qubits, respectively. While D-Wave brags of a 2,000-qubit system, many industry insiders do not consider the Vancouver startup a true quantum system, as it leverages “quantum annealing,” which according to Science magazine “produced no quantum speedup” over traditional binary systems. Adding to the complexity, a true quantum computer requires temperatures colder than interstellar space to keep its subatomic particles in a stable state. D-Wave’s core, for its part, operates at -273 degrees Celsius, or 0.02 degrees above absolute zero.


Many experts believe that IBM offers the most stable commercially available platform via the cloud, as the larger-qubit products report greater error rates. According to Dr. Chow’s team, “IBM Q Network has offered access to 20-qubit universal, superconducting quantum computers for about a year. They’re being used for research and industry use cases by Fortune 500 companies, academic institutions, startups, and government labs all over the world.” In addition to Daimler, Chow’s group is engaging with JP Morgan and Samsung. I pressed Dr. Chow about whether he foresees such advanced computing systems being used for reinforcement learning and unmanned systems, like humanoids. His office replied that “It’s possible that within five years, the industry will develop the first applications where a quantum computer, used alongside a classical computer, will offer a benefit to solving specific problems. And some of this research, including ours at IBM, does touch upon machine learning, but at a fundamental level. It’s too early to say what kinds of applications will be used in autonomous vehicles, robotics, or AI, in general.”

The future of these post-binary systems may depend largely on two fundamental factors: money and user adoption. Earlier this fall, Congress proposed a $1.3 billion investment to accelerate federal programs for a National Quantum Initiative, with the White House convening a Quantum Summit last September. This action is in response to China’s announced $10 billion plan to build a national laboratory for quantum science, scheduled to open in 2020. The European Union in 2016 allotted $1 billion for the continent’s own quantum-research initiative. In addition, developer kits are freely available online today, ironically accessed through binary computers. Dr. Chow enthusiastically encouraged the BCI Summit audience to start kicking the tires of IBM’s SDK, Qiskit, in the hopes of expanding its library of applications (a list of other simulators and developer kits is available on the quantum wiki).

Join the next RobotLab forum discussing “Cybersecurity & Machines” to learn more about quantum computing and the future of encryption with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City.

]]>
The race for robot clairvoyance https://robohub.org/the-race-for-robot-clairvoyance/ Fri, 09 Nov 2018 22:42:57 +0000 https://robohub.org/the-race-for-robot-clairvoyance/ Read More ›]]> This week a Harvard Business School student challenged me to name a startup capable of producing an intelligent robot – TODAY! At first I did not understand the question, as artificial intelligence (AI) is an implement like any other in a roboticist’s toolbox. The student persisted, demanding to know if I thought the co-bots working in factories today could one day evolve to perceive the world like humans. It’s a good question that I didn’t fully appreciate at the time: even with deep learning systems, robots are best deployed for specific, repeatable tasks. Mortals, by contrast, comprehend their surroundings (and other organisms) using a sixth sense: intuition.


As an avid tennis player, I also enjoyed meeting Tennibot this week. The autonomous ball-gathering robot sweeps the court like a Roomba sucking up dust off a rug. To accomplish this task without knocking over players, it navigates around the cage using six cameras on each side. This is a perfect example of the type of job at which an unmanned system excels, freeing athletes from wasting precious court time on tedious cleanup. Yet Tennibot, at the end of the day, is a dumb appliance. While it gobbles up balls quicker than any person, it is unable to discern the quality of the game or the health of the players.

No one expects Tennibot to save Roger Federer’s life, but what happens when a person has a heart attack inside a self-driving car on a two-hour journey? While autonomous vehicles are packed with sensors to identify and safely steer around cities and highways, few are able to perceive human intent. As Ann Cheng of Hyundai explains, “We [drivers] think about what that other person is doing or has the intent to do. We see a lot of AI companies working on more classical problems, like object detection [or] object classification. Perceptive is trying to go one layer deeper—what we do intuitively already.” Hyundai joined Jim Adler’s Toyota AI Ventures this month in investing in Perceptive Automata, an “intuitive self-driving system that is able to recognize, understand, and predict human behavior.”


As stated in Adler’s Medium post, Perceptive’s technology uses “behavioral science techniques to characterize the way human drivers understand the state-of-mind of other humans and then train deep learning models to acquire that human ability. These deep learning models are designed for integration into autonomous driving stacks and next-generation driver assistance systems, sandwiched between the perception and planning layers. These deep learning, predictive models provide real-time information on the intention, awareness, and other state-of-mind attributes of pedestrians, cyclists and other motorists.”
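To make the “sandwiched between the perception and planning layers” architecture concrete, here is a minimal sketch. Every name, score, and threshold below is invented for illustration, and a production intent model would be a trained deep network rather than a hand-written rule:

```python
from dataclasses import dataclass

@dataclass
class DetectedPedestrian:
    position: tuple   # (x, y) in meters, produced by the perception layer
    velocity: tuple   # (vx, vy) in m/s

def intent_score(p: DetectedPedestrian) -> float:
    """Toy state-of-mind estimate: probability the pedestrian will cross.

    Stand-in for a learned model; here we simply flag anyone moving
    toward the roadway (x decreasing) as a likely crosser.
    """
    return 0.9 if p.velocity[0] < 0 else 0.1

def plan_speed(base_speed: float, pedestrians: list) -> float:
    # The planning layer consumes the intent scores: slow down when
    # anyone is judged likely to step into the road.
    if any(intent_score(p) > 0.5 for p in pedestrians):
        return base_speed * 0.5
    return base_speed

peds = [DetectedPedestrian(position=(5.0, 2.0), velocity=(-1.2, 0.0))]
print(plan_speed(10.0, peds))  # 5.0 — the planner yields to a likely crosser
```

The key design point is that the intent layer never touches raw pixels or steering: it consumes perception outputs and enriches what the planner sees.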

While Perceptive Automata is creating “predictive models” for conditions outside the vehicle, few companies are focused on the conditions inside the cabin. The closest implementations are a number of eye-tracking cameras that alert occupants to distracted driving. While these technologies observe the general condition of passengers, they rely on direct eye contact to distinguish between states such as fatigue, excitability and stress – impossible if one is passed out. Furthermore, none of these vision systems can predict human actions before they become catastrophic.

Isaac Litman, formerly of Mobileye, understands full well the dilemma computer vision systems present in delivering on the promise of autonomous travel. In speaking with Litman this week about his newest venture, Neteera, he declared that in today’s automotive landscape “the only unknown variable is the human.” Unfortunately, the recent wave of Tesla and Uber autopilot crashes has starkly illustrated the importance of tracking the attention of vehicle occupants when handing off between autopilot systems and human drivers. Litman further explains that Waymo and others are collecting data on occupant comfort, as AI-enabled drivers have reportedly led to high levels of nausea from driving too consistently. Litman describes this as the indigestion problem, clarifying that after eating a big meal one may want to be driven more slowly than on an empty stomach. In the future, Litman predicts, autonomous cars will be marketed “not by the performance of their engines, but on the comfort of their rides.”


Litman’s view is further endorsed by the patent application filed this summer by Apple’s Project Titan team for developing “Comfort Profiles” for autonomous driving. According to AppleInsider, the application “describes how an autonomous driving and navigation system can move through an environment, with motion governed by a number of factors that are set indirectly by the passengers of the vehicle.” The Project Titan system would utilize a fusion of sensors (LIDAR, depth cameras, and infrared) to monitor the occupants’ “eye movements, body posture, gestures, pupil dilation, blinking, body temperature, heart beat, perspiration, and head position.” The application details how the data would integrate into the vehicle systems to automatically adjust the acceleration, turning rate, performance, suspension, traction control and other factors to the personal preferences of the riders. While Project Titan is taking the first step toward developing an autonomous comfort system, Litman notes that it is limited by the inherent shortcomings of vision-based systems, which are susceptible to light, dust, line of sight, condensation, motion, resolution, and safety concerns.

Unlike vision sensors, Neteera is a cost-effective micro-radar on a chip that leverages its own network of proprietary algorithms to provide “the first contact free vital sign detection platform.” Its FDA-grade accuracy is being utilized not only by the automotive sector but by healthcare systems across the United States for monitoring such elusive conditions as sleep apnea and sudden infant death syndrome. To date, the challenge of monitoring vital signs through micro-skin motion in the automotive industry has been the displacement caused by a moving vehicle. However, Litman’s team has developed a patent-pending “motion compensation algorithm” that tracks “quasi-periodic signals in the presence of massive random motions,” providing near-perfect accuracy.
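Neteera’s algorithm is proprietary, but the underlying signal-processing idea – pulling a quasi-periodic vital sign out of displacement data swamped by random motion – can be sketched with a simple spectral peak search. The data below is synthetic and the approach purely illustrative:

```python
import numpy as np

fs = 100.0                                    # sample rate, Hz
t = np.arange(0, 30, 1 / fs)                  # 30 s of simulated radar displacement
heart = 0.05 * np.sin(2 * np.pi * 1.2 * t)    # 1.2 Hz chest micro-motion ≈ 72 bpm
rng = np.random.default_rng(0)
noise = 0.3 * rng.standard_normal(t.size)     # random body/vehicle motion
signal = heart + noise

# A periodic signal concentrates its energy in one spectral bin, while
# random motion spreads out — so the heartbeat emerges as the dominant
# peak inside the physiologically plausible band.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)          # 42–180 bpm search window
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(round(peak_hz * 60))                    # ≈ 72 beats per minute
```

Here the tone is forty times weaker than the noise sample-by-sample, yet the spectral peak recovers it cleanly; compensating for the vehicle’s own massive motion, as Neteera claims to do, is the much harder part.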


While the automotive industry races to launch fleets of autonomous vehicles, Litman estimates that the most successful players will be the ones that install empathic engines into their machines’ frameworks. Unlike the crowded field of AI and computer vision startups enabling robocars to safely navigate city streets, Neteera is one of the few mechatronic ventures whose “intuition on a chip” actually reports on the psychological state of drivers and passengers. Litman’s innovation has wider societal implications as social robots begin to augment humans in the workplace and support the infirm and elderly in coping with the fragility of life.

As scientists improve artificial intelligence, it is still unclear how ordinary people will react to such “emotional” robots. In the words of writer Adam Williams, “Emotion is something we reserve for ourselves: depth of feeling is what we use to justify the primacy of human life. If a machine is capable of feeling, that doesn’t make it dangerous in a Terminator-esque fashion, but in the abstract sense of impinging on what we think of as classically human.”

]]>
Educating the workforce of the future, in the age of accelerations https://robohub.org/educating-the-workforce-of-the-future-in-the-age-of-accelerations/ Wed, 24 Oct 2018 17:29:50 +0000 https://robohub.org/educating-the-workforce-of-the-future-in-the-age-of-accelerations/ Read More ›]]> I have two kids in college, and one of my biggest concerns is that the knowledge they have labored so hard to acquire will be obsolete by the time they graduate. Our age is driven by the hypersonic acceleration of technology and data, forcing innovative educators to create new pedagogical systems that give students the skills today to lead tomorrow.
The new CyberNYC initiative announced last week by the City of New York is just one example of this growing partnership between online platforms and traditional academia in the hope of fostering a new generation of wage earners.

The goal of CyberNYC is to train close to 5% of the city’s working population to become “cyber specialists.” To accomplish this lofty objective, the NYCEDC forged an educational partnership with CUNY, NYU, Columbia, Cornell Tech, and iQ4. One of the most compelling aspects of the partnership is the advanced degree program offered by CUNY and Facebook, enabling students to earn a master’s in computer science in just a year through the online education platform edX, which also enables users to stack credentials from other universities.

As Anant Agarwal, CEO of edX, explains, “The workplace is changing more rapidly today than ever before and employers are in need of highly-developed talent. Meanwhile, college graduates want to advance professionally, but are realizing they do not have the career-relevant skills that the modern workplace demands. EdX recognizes this mismatch between business and education for learners, employees and employers. The MicroMasters initiative provides the next level of innovation in learning to address this skills gap by creating a bridge between higher education and industry to create a skillful, successful 21st-century workforce.”

Realizing that not everyone is cut out for higher education, the Big Apple is also working to create boot camps to upskill existing tech operators in a matter of weeks with industry-specific cyber competencies. Fullstack Academy is leading the effort to create a catalogue of intensive boot camps throughout the boroughs. LaGuardia Community College (LAGCC) is also providing free preparatory courses for adults with minimum computing proficiency in order to qualify for Fullstack’s programs. Most importantly, LAGCC will act as a liaison to CyberNYC’s corporate partners to match graduates with open positions.

In 2012, Sebastian Thrun famously declared that “access to higher education should be a basic human right.” Thrun, who left his position running Google X to “democratize education” worldwide by launching Udacity, a free online open-university platform, is now transforming the learning paradigm. The German inventor is no stranger to innovation; in 2011 he unveiled at the TED Conference one of the first self-driving cars, inspired by the loss of his best friend in a car accident. Similar to CyberNYC’s fast-track master’s program in computer science, Udacity has teamed up with AT&T and Georgia Tech to offer a comparable degree for less than $7,000 (compared to $26,860 for an on-campus program).

In the words of AT&T’s Chief Executive Randall Stephenson, “We believe that high-quality and 100 percent online degrees can be on par with degrees received in traditional on-campus settings, and that this program could be a blueprint for helping the United States address the shortage of people with STEM degrees, as well as exponentially expand access to computer science education for students around the world.”

In 2003, Reid Hoffman launched LinkedIn, the first business social network. Today, more than half a billion profiles (resumes) are posted on the site. Last March, Hoffman sat down with University of California President (and former Secretary of Homeland Security) Janet Napolitano to discuss the future of education. The leading advocate for entrepreneurship explained that he believes everyone should be “in permanent beta,” constantly consuming information. Hoffman states that this is the only way individuals – and society – will be able to compete in a world driven by data and artificial intelligence. Universities like the UC system, Hoffman suggests, should move toward a cross-disciplinary model. As Hoffman espouses, “What we’re actually in fact primarily teaching is that learning how to learn as you get to new areas, not areas where it’s necessarily the apprenticeship model, which is we teach you this thing and you know how to do this one thing. You know how to do this thing really well, but actually, in fact, you’re going to be crossing domains. That’s how I would somewhat shift the focus overall in terms of thinking about it.”

In his book “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations,” Thomas Friedman quotes Nest Labs founder Tony Fadell as asserting that the future economy rests on businesses’ ability to turn “AI into IA,” or “Intelligent Assistants.” Friedman specifically singled out LinkedIn as one of the IAs creating human networks that amplify people’s ability to find the best opportunities and most in-demand skills. To make the most of his IA, Hoffman advised Napolitano’s audience to be versatile: “As opposed to thinking about this as a career ladder, a career escalator, to think of it as more of a career jungle gym, that you’re actually going to be changing around in terms of industries. The exact shape of certain different job professions will change, and that you need to be adaptive with that.” He continued, “I do think that the notion that is still too often preached, which is you go to college, you discover your calling, and that’s your job for the next 50 years, that’s gone.” The key to harnessing this trend, says Hoffman, is “to constantly be learning and to be learning new things. Some of them by taking ongoing classes but some of them also by doing, and talking to people and finding out what the relevant things are, and then tracking what’s going on.”

All these efforts are not happening fast enough in the United States to fill the gap between the 6.9 million open jobs and the number of unemployed. While the unemployment rate is at a forty-year low, with 6.2 million workers out of work, there are still more open job listings than people to fill them. The primary reason, say business leaders across the nation, is that the current class of applicants lacks the versatility of skills required for the modern workplace, accelerating the push toward full automation. Eric Schmidt, former Executive Chairman of Google and Alphabet, claims that “Today we all live and work in a new era, the Internet Century, where technology is roiling the business landscape and the pace of change is accelerating.” This Internet Century (and by extension cloud computing and unmanned systems) requires a new type of worker, whom he affectionately calls the “smart creative” – first and foremost an “adaptive learner.”

The deficiency of graduating “smart creatives” could be the reason why America, though almost at full employment, is still producing historically low output and stagnant wages. Mark Zandi, Moody’s Chief Economist, explains, “Wage growth feels low by historical standards and that’s largely because productivity growth is low relative to historical standards. Productivity growth between World War II and up through the Great Recession was, on average, 2 percent per annum. Since the recession 10 years ago, it’s been 1 percent.” The virtuous efforts of CyberNYC and other grassroots initiatives are only the first of many steps toward the complete restructuring of America’s educational framework to nurture a culture of smart creatives in permanent beta.

]]>
New York: The gateway to industry 4.0 https://robohub.org/new-york-the-gateway-to-industry-4-0/ Sat, 06 Oct 2018 12:48:10 +0000 https://robohub.org/new-york-the-gateway-to-industry-4-0/ Read More ›]]> As Hurricane Florence raged across the coastline of North Carolina, 600 miles north the 174th Attack Wing National Guard base in Syracuse, New York was on full alert. Governor Cuomo had just gotten off the phone with Defense Secretary Mattis to ready the airbase’s MQ-9 drone force to “provide post-storm situational awareness for the on-scene commanders and emergency personnel on the ground.” Suddenly, the entire country turned to the Empire State as the epicenter for unmanned search & rescue operations.


Located a few miles from the 174th is the Genius NY Accelerator, which boasts the largest competition for unmanned systems in the world. Previous winners of its one-million-dollar prize include AutoModality and FotoKite. One of Genius’ biggest financial backers is the Empire State Development (ESD). Last month, I moderated a discussion in New York City between Sharon Rutter of the ESD, Peter Kunz of Boeing HorizonX and Victor Friedberg of FoodShot Global – three investors spanning the gamut of early-stage funders of autonomous machines. I started our discussion by asking whether they think New York is poised to take a leading role in shaping the future of automation. While Kunz and Friedberg shared their perspectives as corporate and social impact investors, respectively, Rutter singled out one audience participant in particular as representing the future of New York’s innovation venture scene.

Andrew Hong of ff Venture Capital sat quietly in front of the presenters, yet his firm has been loudly reshaping the Big Apple’s approach to investing in mechatronics for almost a decade (with the ESD as a proud limited partner). Founded in 2008 by John Frankel, formerly of Goldman Sachs, ff has deployed capital into more than 100 companies with market values of over $6 billion. As the original backer of crowd-funding site Indiegogo, ff could be credited as a leading contributor to a new suite of technologies. As Frankel explains, “We like hardware if it is a vector to selling software, as recurring models based on services lead to better economics for us than one-off hardware sales.” In the spirit of fostering greater creativity for artificial intelligence software, ff collaborated with New York University in 2016 to start the NYU/ffVC AI NexusLab — the country’s first AI accelerator program between a university and a venture fund. NexusLab culminated in the Future Labs AI Summit in 2017. Frankel describes how this technology is influencing the future of autonomy: “As we saw that AI was coming into its own we looked at AI application plays and that took us deeper into cyber security, drones and robotics. In addition, both drones and robotics benefited as a byproduct of the massive investment into mobile phones and their embedded sensors and radios. Thus we invested in a number of companies in the space (Skycatch, PlusOne Robotics, Cambrian Intelligence and TopFlight Technologies) and continue to look for more.”

Recently, ff VC bolstered its efforts to support the growth of an array of cognitive computing systems by opening a new state-of-the-art headquarters in the Empire State Building and expanding its venture partner program. In addition to providing seed capital to startups, ff VC has distinguished itself for more than a decade by augmenting technical founders with robust back-office services, especially accounting and financial management. Last year, ff also widened its industry venture partner program with the addition of Dr. Kathryn Hume to its network. Dr. Hume is probably best known for her work as the former president of Fast Forward Labs, a leading advisor to Fortune 500 companies on utilizing data science and artificial intelligence. I am pleased to announce that I have decided to join Dr. Hume and the ff team as a venture partner to widen their network in the robotics industry. I share Frankel’s vision that today we are witnessing “massive developments in AI and ML that have led to unprecedented demand for automation solutions across every industry.”

 

ff’s commitment is not an isolated example but part of a growing, invigorated community of venture capitalists, academics, inventors, and government sponsors across the Big Apple. In a few weeks, the New York City Economic Development Corporation (NYCEDC) will officially announce the winner of a $30 million investment grant to boost the city’s cybersecurity ecosystem. CyberNYC will include a new startup accelerator, city-wide programming, educational curricula, up-skilling and job placement, and a funding network for home-grown ventures. As NYCEDC President and CEO James Patchett explains, “The de Blasio Administration is investing in cybersecurity to both fuel innovation, and to create new, accessible pathways to jobs in the industry. We’re looking for big-thinking proposals to help us become the global capital of cybersecurity and to create thousands of good jobs for New Yorkers.” The Mayor’s office projects that its initiative will create 100,000 new jobs over the next ten years, enabling NYC to maximize the opportunities of an autonomous world.

The inspiration for CyberNYC can probably be found in the sands of the Israeli desert town of Beer Sheva. In the past decade, this Bedouin city in the Holy Land has been transformed from tents into a high-tech engine for cybersecurity, remote sensing and automation technologies. At the center of this oasis is Cyber Labs, a government-backed incubator created by Jerusalem Venture Partners (JVP). Next week, JVP will kick off its New York City “Hub” with a $1 million competition called “New York Play” to bridge opportunities between Israeli and NYC entrepreneurship. In the words of JVP’s Chairman and Founder, Erel Margalit, “JVP’s expansion to New York and the launch of New York Play are all about what’s possible. As New York becomes America’s gateway for international collaboration and innovation, JVP, at the center of the ‘Startup Nation,’ will play a significant role boosting global partnerships to create solutions that better the world and drive international business opportunities.”

Looking past the skyscrapers, I reflect on Margalit’s image of New York as a “Gateway” to the future of autonomy.  Today, the wheel of New York City is turning into a powerful hub, connected throughout America’s academic corridor and beyond, with spokes shooting in from Boston, Pittsburgh, Philadelphia, Washington DC, Silicon Valley, Europe, Asia and Israel. The Excelsior State is pulsing with entrepreneurial energy fostered by the partnerships of government, venture capital, academia and industry. As ff VC’s newest venture partner, I personally am excited to play a pivotal role in helping them harness the power of acceleration for the benefit of my city and, quite possibly, the world.

Come learn how New York’s retail industry is utilizing robots to drive sales at the next RobotLab on “Retail Robotics” with Pano Anthos of XRC Labs and Ken Pilot, formerly President of Gap, on October 17th. RSVP today.

Ice-bot cometh: Building self-reliant unmanned systems https://robohub.org/ice-bot-cometh-building-self-reliant-unmanned-systems/ Thu, 23 Aug 2018 19:42:18 +0000
Last June, a massive dust storm engulfed Mars and immobilized the most famous robots in the galaxy, Opportunity and Curiosity. This is not the first time that Martian dirt has prevented Opportunity from recharging its solar panels. Its creators originally predicted that the planet’s harsh weather conditions would limit the rover’s mission to ninety sols (the equivalent of 93 Earth days). This year, if it survives the current tempest, Opportunity will celebrate its 15th working anniversary on the red planet.


Weather is often the nemesis of autonomous missions. NASA’s billion-dollar fleet being disabled by mere dust illustrates a profound problem: machines are far from self-reliant. Closer to Earth, icing at high altitude is one of the biggest challenges for Unmanned Aerial Vehicles (UAVs). This natural phenomenon grounds even the most critical military drone operations, as freezing conditions adversely affect aerodynamics and performance.

Aviation’s long campaign against the elements offers lessons for every class of autonomous system. Starting in the late 1980s, NASA set out to develop a series of deicing solutions to make aviation safer. The most famous of these technologies is the Electro-Expulsive Separation System (EESS), or “Ice Zapper,” developed by NASA engineer Leonard Haslim. According to the space agency’s website, the Zapper “employs a pair of conductors embedded in a flexible material and bonded to the aircraft’s frame—on the leading edge of a wing, for example. A pulsing current of electricity sent through the conductors creates opposing magnetic fields, driving the conductors apart only a fraction of an inch but with the power to shatter any ice buildup on the airframe surface into harmless particles.”
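The force the Zapper exploits is the textbook repulsion between anti-parallel currents. As a rough illustration only (the pulse current and conductor spacing below are invented for the example; NASA has not published these figures), the magnitude per unit length follows the standard parallel-conductor formula:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def repulsive_force_per_meter(current_a: float, separation_m: float) -> float:
    """Force per unit length between two long parallel conductors carrying
    equal and opposite currents: F/L = mu0 * I^2 / (2 * pi * d).
    Anti-parallel currents repel, driving the conductors apart."""
    return MU0 * current_a**2 / (2 * math.pi * separation_m)

# Hypothetical numbers: a few-kA pulse through conductors ~5 mm apart.
print(repulsive_force_per_meter(3000, 0.005))  # 360.0 N per meter of conductor
```

Even a brief pulse at these illustrative values produces hundreds of newtons per meter of leading edge, which is why a fraction-of-an-inch displacement suffices to shatter the ice layer.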


The Zapper was eventually licensed by NASA to a private company, Ice Management Systems (IMS), which was acquired by Thompson Ramo Wooldridge (TRW), now part of Northrop Grumman. Today, Haslim’s invention is a complete aeronautic platform combining an energy-efficient power system, conductive actuators and an internal carbon-fiber core. According to IMS founder Mark Bridgeford, the platform has already been deployed on Northrop Grumman’s Hunter UAVs and General Atomics Sky Warriors. “One reason we have hit a spot with the UAV business is the simple fact that our system utilizes so little energy,” explains Bridgeford. The IMS platform is one of the first deicing systems engineered around energy efficiency. Bridgeford continues, “With the electroexpulsive technology, UAVs are able to incorporate continuous, year-round ice protection into their airframes.”

Energy consumption is the key to active deicing. Unlike anti-icing systems that prevent particle buildup through material design and engine back-flow systems, deicing in manned aircraft is typically triggered manually, on an as-needed basis, to conserve fuel. Drones do not have the luxury of flipping a switch and need active, pervasive monitoring. While the Zapper and other electro-expulsive methods fire bursts of energy to shake ice off wings, this approach has underdelivered for many unmanned missions. Aerospace solutions provider Cox & Company is now offering a competing technology, the “Thermo-Mechanical Expulsion Deicing System (TMEDS),” for Northrop Grumman’s Triton drones. Cox claims that this technology not only delivers better results for unmanned systems, but also consumes less power.

Inspired by the US Coast Guard’s ice-breaking marine technology, Dr. Kim Sorensen joined NASA’s Ames Research Center as part of his doctoral thesis on aerial ice protection systems. Soon after, Sorensen launched the first autonomous deicing flight in Alaska using an ultra-lightweight one-meter drone. The success of this prototype led him to establish Ubiq Aerospace in 2017 to bring cost-effective automated deicing to the commercial UAV sector. Based in Sorensen’s home country of Norway, Ubiq aims to be customer ready by the end of 2019. “The D•ICE technology was based on three design principles – No Weight, No Volume, No Power. Aside from these principles we also needed to ensure that the system could operate in complete autonomy,” says Dr. Sorensen. By contrast, “Various systems have been designed and are flying on the commercial manned aircraft that fly today. All these systems require human interaction in various degrees. They carry penalties as most are very heavy, all of them are structurally invasive, and/or some are environmentally harmful… As such, none of the existing icing protection systems are suitable for use on unmanned aircraft,” explains Dr. Sorensen.

According to Aviation International News, Ubiq has filed its initial patents and is in discussions with manufacturing partners to build its “carbon-nano thermal panel components” to industry standards. The large-scale platform will be composed of these thermal panels, along with embedded temperature sensors, power components, and a computational brain. One of Ubiq’s differentiating features is its intelligent algorithms, which enable safe deicing with minimum power drain by switching the unit on and off as needed. Sorensen describes, “There is a lower boundary for what power you do require to ensure that icing doesn’t build or you can deice, and we are trying to get as close to that boundary as possible.” The company is already working with Norway’s defense department on expanding its applications to rotor UAVs, potentially accelerating use cases for large-scale cargo missions and even flying cars. In an email to me earlier this week, Ubiq’s founder states, “The D•ICE tech has wider use indeed. There are several secondary industries that would have use for a solution like this. Most obvious is the wind energy industry with their enormous ocean-based farms.” Ironically, many terrestrial wind turbine operators are today turning to drone service providers for deicing solutions, as demonstrated in numerous YouTube videos.
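Ubiq has not published its control logic, but the idea of switching a heater on and off to hover just above the minimum required power is classically handled with a hysteresis (deadband) controller. A minimal sketch, with entirely hypothetical thresholds and sensor signals:

```python
def deice_controller(surface_temp_c: float, ice_detected: bool,
                     heater_on: bool,
                     on_threshold_c: float = -2.0,
                     off_threshold_c: float = 1.0) -> bool:
    """Return the next heater state.

    The gap between the on/off thresholds (the hysteresis band) prevents
    rapid cycling, keeping average power draw near the minimum needed to
    stop ice from accreting."""
    if ice_detected or surface_temp_c <= on_threshold_c:
        return True   # ice present or surface cold: heat
    if heater_on and surface_temp_c < off_threshold_c:
        return True   # keep heating until safely above freezing
    return False      # warm and ice-free: save power

# Example: heater stays on through the deadband, then shuts off.
state = deice_controller(-5.0, False, heater_on=False)  # True: turn on
state = deice_controller(0.5, False, heater_on=state)   # True: still in band
state = deice_controller(2.0, False, heater_on=state)   # False: shut off
```

A real system would fuse icing-probe and temperature data and modulate power continuously rather than toggling, but the energy-saving principle is the same.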


As robotics continues to cross the chasm from lab research to field operations, all-weather operation is becoming increasingly critical. The center of subfreezing-temperature research is the University of Alaska Fairbanks. According to a press release this past May, the university has partnered with the U.S. Department of Transportation to help shape “the future of drones in America” and “speed up the safe entry of unmanned vehicles into the nation’s airspace.” A key component of this endeavor is researching technological solutions for unmanned systems operating in harsh climates “to deliver medical devices to remote areas, help with search and rescue operations, survey fish and wildlife, and monitor pipelines, roads and other infrastructure.” In announcing the partnership, U.S. Secretary of Transportation Elaine Chao declared, “We are looking forward to helping today’s winners unlock the enormous potential of drone operations, which will create new services and jobs in their local communities.” Besides the economic opportunity that Secretary Chao cites, there could be other “winners” altruistically driven by the pursuit of science on Earth, Mars and, maybe, across the entire galaxy.

Disney launches new era of autonomy with gravity defying stickmen and more https://robohub.org/disney-launches-new-era-of-autonomy-with-gravity-defying-stickmen-and-more/ Wed, 25 Jul 2018 21:08:15 +0000
Ever since the première of “Steamboat Willie” in 1928, The Walt Disney Company has pushed the envelope of imagination. Mickey Mouse is still more popular worldwide than any single human actor. In fact, from that one cel an entire world of animated characters was born. The entertainment powerhouse demonstrated last week a new generation of theatrics with a flying robot-like stuntman (hero pause and all) that is destined to become a leading player in the age of autonomy.

In the words of Dr. Morgan Pope of Disney Research, “We’d like to do something that’s not just human, but beyond human… The hope here is that we’re delivering something physical and tangible, as opposed to virtual and digital.” This pursuit of the “beyond human” could yield products far more important than theme-park animatronics. According to Dr. Pope’s research, the “Stickman” (video above) is a “two degree of freedom robot that uses a gravity-driven pendulum launch and produces a variety of somersaulting stunts.” By leveraging the physics of its streamlined design, the robot swings “through the air on a pendulum, ‘tucks’ to change its moment of inertia, releases, ‘untucks’ to reduce its spin, and gracefully lands on its back on a foam mat.” The robot’s dynamics are controlled by “an IMU [inertial measurement unit] and a laser range-finder to estimate its state mid-flight and actuates to change its motion both on and off the pendulum.”

In an IEEE Spectrum article authored by Dr. Pope, the scientist shares his inspiration for Stickman – the gold-medal gymnast Simone Biles. Describing Biles’ signature move, Pope writes, “It’s a beautiful example of how the seemingly simple physics of ballistic motion, completely governed by a relatively simple conservation of angular momentum, can produce amazing and unexpected results.” Beyond replacing movie stuntmen with pendulum-swinging robots, Disney Research aims to leverage its platform to accomplish two major objectives: 1) teach science and engineering and 2) push the boundaries of what is possible with machines. Dr. Pope exclaims, “We saw two potential benefits of building robots that could perform acrobatic stunts while aloft. First, a robotic performer can answer questions about how a performance is accomplished that are more difficult to answer with a human performer… Second, the force, speed, and precision required to execute acrobatic manoeuvres pushes the limits of robot capability in a way that has the potential to be relevant to the broader field.”

Disney’s pursuit of science is not just for academic amusement but offers broad benefits today for the mechatronic industry. Parallel to Dr. Pope’s work with Stickman, Disney Research published another report on human-to-robot handovers. In their experiment, the researchers examined how robot behaviors influence human interactions in an effort to facilitate greater trust between people and machines. The paper states: “We find the robot’s initial pose can inform the giver about the upcoming handover geometry and impact fluency and efficiency. Also we find variations in grasp method and retraction speed induce significantly different interaction forces.” The Disney team created a “Robot Social Attributes Scale (RoSAS)” that could be useful for future collaborative robot deployments. The study concluded that “ratings of the robot’s warmth linearly increased over repeated interactions, while discomfort simultaneously decreased. This suggests that the more people interact with the robot, the more they develop positive attitudes towards the robot. Both warmth and discomfort are known factors in the determination of trustworthiness of both humans and robots.” The researchers hope that their work will be used to address future challenges with handing objects between humans and robots, by providing “inexperienced users with additional feedback information to improve legibility of robot behaviors during handover and other interaction contexts – e.g., audio or haptic feedback through wearable devices.” Curiously, the team also suggests that their “human-centered approaches” could even speed the adoption of social robots by leading to more mindful designs, programmed interactions, and fluid robotic movements.


Disney Research is not the only division within the multi-billion dollar entertainment conglomerate nurturing hardware invention. Three of the last four cohorts of the Disney Accelerator have included robot companies: Sphero (2014), Hanson Robotics (2016), and Savioke (2017). Each of these startups has created new paradigms of human-social interaction, from STEM-centric toys to life-sized humanoids to autonomous delivery platforms. In addition to providing capital, mentorship and resources, Disney offers a playground of synergies and market opportunities. For example, the most popular Sphero products are the licensed characters from Star Wars, Pixar, and Marvel. Hanson Robotics is under contract to create humanoids modeled after Disney’s iconic characters, and Savioke is piloting its Relay robotic delivery system with Disney World Resorts. The incubator has also backed a number of chatbot, AI software, consumer hardware and other emerging technology companies that are being harnessed throughout the brand.

For the past fifty years, Disney has been using robots, many still with hydraulic actuators, to enhance the guest experience at its theme parks. Charged with managing the future of these mechanical characters is Martin Buehler, Executive R&D Imagineer at Walt Disney Imagineering. Unlike the eighty PhDs working in Disney Research who wrestle with the theoretical, Imagineers are tasked with building and implementing new immersive rides today. Last September, Buehler delivered the keynote speech at RoboBusiness in Silicon Valley. In showing off his team’s latest pursuits with the worlds of Avatar and Star Wars (opening 2019), Buehler suggested that Disney’s art of storytelling and hardware could be a model for even mundane companies. Storytelling, Buehler noted, could further the deployment of social and collaborative robots within society by finally delivering on the promise of improved quality of life through automation. In particular, Buehler highlighted how robots with positive personalities could ease the pain of aging in place. Reflecting on Buehler’s presentation, I am reminded of Walt Disney’s famous quote, “That’s what we storytellers do. We restore order with imagination. We instil hope again and again and again.”

Save The Date: The next RobotLab will take place October 17th at 6pm in New York City with Pano Anthos of XRC Labs exploring the impact of “Robots in the Age of the Retail Apocalypse.” As seating is limited, reserve today.

WAZE for drones: expanding the national airspace https://robohub.org/waze-for-drones-expanding-the-national-airspace/ Thu, 14 Jun 2018 22:00:51 +0000

Sitting in New York City, looking up at the clear June skies, I wonder if I am staring at an endangered phenomenon. According to many in the Unmanned Aircraft Systems (UAS) industry, skylines across the country will soon be filled with flying cars, quadcopter deliveries, emergency drones, and other robo-flyers. Moving one step closer to this mechanically-induced hazy future, General Electric (GE) announced last week the launch of AiRXOS, a “next generation unmanned traffic” management system.

Managing the National Airspace is already a political football, with the Trump Administration proposing to privatize the air traffic control division of the Federal Aviation Administration (FAA), taking its controller workforce of 15,000 off the government’s books. The White House argues that this would enable the FAA to modernize and adopt “NextGen” technologies to speed commercial air travel. While this budgetary line item is debated in the halls of Congress, one certainty inside the FAA is that the National Airspace (NAS) will have to expand to make room for an increasing volume of commercial and recreational traffic, the majority of which will be unmanned.

Ken Stewart, the General Manager of AiRXOS, boasts, “We’re addressing the complexity of integrating unmanned vehicles into the national airspace. When you’re thinking about getting a package delivered to your home by drone, there are some things that need to be solved before we can get to that point.” The first step for the new division of GE is to pilot the system in a geographically-controlled airspace. To accomplish this task, DriveOhio’s UAS Center invested millions in the GE startup. Accordingly, the first test deployment of AiRXOS will be conducted over a 35 mile stretch of Ohio’s Interstate 33 by placing sensors along the road to detect and report on air traffic. GE states that this trial will lay the foundation for the UAS industry. As Alan Caslavka, president of Avionics at GE Aviation, explains, “AiRXOS is addressing the rapid changes in autonomous vehicle technology, advanced operations, and in the regulatory environment. We’re excited for AiRXOS to help set the standard for autonomous and manned aerial vehicles to share the sky safely.”

Stewart whimsically calls his new air traffic control platform WAZE for drones. Like the popular navigation app, AiRXOS provides drone operators with real-time flight-planning data to automatically avoid obstacles and other aircraft, and to route around inclement weather. The company also plans to integrate with the FAA to streamline regulatory communications with the agency. Stewart explains that this will speed up authorizations, as today, “It’s difficult to get [requests] approved because the FAA hasn’t collected enough data to make a decision about whether something is safe or not.”
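Strip away the branding and “WAZE for drones” is a routing problem: find a path through the airspace that avoids restricted volumes and known traffic. A toy sketch over a 2-D grid shows the idea (real traffic-management systems plan 4-D trajectories against live data; the grid, coordinates, and breadth-first search here are purely illustrative):

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a grid where 1 marks a no-fly cell.
    Returns a shortest list of (row, col) waypoints, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as a path tree
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk the tree back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

airspace = [
    [0, 0, 0],
    [1, 1, 0],   # a band of restricted cells the drone must route around
    [0, 0, 0],
]
print(plan_route(airspace, (0, 0), (2, 0)))
```

The planner detours through the open column rather than crossing the restricted band, which is the essence of what a traffic-management layer does at vastly larger scale.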
NASA is a key partner in counseling the FAA on integrating commercial UAS into the NAS. Charged with removing the “technical and regulatory barriers that are limiting the ability for civil UAS to fly in the NAS” is Davis Hackenberg of NASA’s Armstrong Flight Research Center. Last year, we invited Hackenberg to present his UAS vision to RobotLabNYC. Hackenberg shared with the packed audience NASA’s multi-layered approach to parsing the skies for a wide range of aircraft, including high-altitude long-endurance flights, commercial airliners, small recreational craft, quadcopter inspections, drone deliveries and urban aerial transportation. Recently the FAA instituted a new regulation mandating that all aircraft be equipped with Automatic Dependent Surveillance-Broadcast (ADS-B) systems by January 1, 2020. The FAA calls such equipment “foundational NextGen technology that transforms aircraft surveillance using satellite-based positioning,” essentially connecting human-piloted craft to computers on the ground and, quite possibly, in the sky. Many believe this is a critical step toward delivering on the long-awaited promise of the commercial UAS industry: autonomous flights beyond visual line of sight.

I followed up this week with Hackenberg about the news of AiRXOS and the new FAA guidelines. He explained, “For aircraft operating in an ADS-B environment, testing the cooperative exchange of information on position and altitude (and potentially intent) still needs to be accomplished in order to validate the accuracy and reliability necessary for a risk-based safety case.” Hackenberg continued to describe how ADS-B might not help low altitude missions, “For aircraft operating in an environment where many aircraft are not transmitting position and altitude (non-transponder equipped aircraft), developing low cost/weight/power solutions for DAA [Detect and Avoid] and C2 [Command and Control Systems] is critical to ensure that the unmanned aircraft can remain a safe distance from all traffic. Finally, the very low altitude environment (package delivery and air taxi) will need significant technology development for similar DAA/C2 solutions, as well as certified much more (e.g. vehicles to deal with hazardous weather conditions).” The Deputy Project Manager then shared with me his view of the future, “In the next five years, there will be significant advancements in the introduction of drone deliveries. The skies will not be ‘darkened,’ but there will likely be semi-routine service to many areas of the country, particularly major cities. I also believe there will be at least a few major cities with air taxi service using optionally piloted vehicles within the 10-year horizon. Having the pilot onboard in the initial phase may be a critical stepping-stone to gathering sufficient data to justify future safety cases. And then hopefully soon enough there will be several cities with fully autonomous taxi service.”
At its Elevate Summit last month, Uber ambitiously declared that its aerial ride-hail program will begin shuttling humans by 2023. Uber plans to deploy electric vertical take-off and landing (eVTOL) vehicles throughout major metropolitan areas. “Ultimately, where we want to go is about urban mobility and urban transport, and being a solution for the cities in which we operate,” says Uber CEO Dara Khosrowshahi. Uber has been cited by many civil planners as a primary cause of increased urban congestion. Its eVTOL plan, called uberAIR, is aimed at alleviating terrestrial vehicle traffic by offsetting commutes with autonomous air taxis centrally located on rooftops throughout city centers.

One of Uber’s first test locations for uberAIR will be Dallas-Fort Worth, Texas. Tom Prevot, Uber’s Director of Engineering for Airspace Systems, describes the company’s effort to design a Dynamic Skylane Network of virtual lanes for its eVTOLs to travel: “We’re designing our flight paths essentially to stay out of the scheduled air carriers’ flight paths initially. We do want to test some of these concepts of maybe flying in lanes and flying close to each other but in a very safe environment, initially.” To accomplish these objectives, Prevot’s group signed a Space Act Agreement with NASA to determine the requirements for its aerial ride-share network. Using Uber’s data, NASA is already simulating small-passenger flights around the Texas city to identify potential risks to an already crowded airspace.

After the Elevate conference, media reports hyped the imminent arrival of flying taxis. Rodney Brooks (considered by many the godfather of robotics) responded with a tweet: “Headline says ‘prototype’, story says ‘concept’. This is a big difference, and symptomatic of stupid media hype. Really!!!” Dan Elwell, FAA Acting Administrator, was far more subdued in his opinion of how quickly the technology will arrive: “Well, we’ll see…”

Editor’s Note: This week we will explore regulating unmanned systems further with Democratic Presidential Candidate Andrew Yang and New York State Assemblyman Clyde Vanel at the RobotLab forum on “The Politics Of Automation” in New York City. 

New battery technology is accelerating autonomy and saving the environment https://robohub.org/new-battery-technology-is-accelerating-autonomy-and-saving-the-environment/ Thu, 07 Jun 2018 20:34:45 +0000
If the robotics world had a celebrity, it would be Spot Mini of Boston Dynamics. Last month at the Robotics Summit in Boston, the mechanical dog strutted onto the floor of the Westin Hotel trailed by hundreds of flickering iPhones. Marc Raibert first unveiled his metal menagerie almost a decade ago with a video of Big Dog. Today, Mini is the fulfillment of his mission in a sleeker, smarter, and environmentally friendlier robo-canine package than its gas-burning ancestor.


Since the early 1990s, machines have relied on rechargeable lithium-ion batteries for power. However, these storage cells (inside most cell phones, and now Spot Mini) are dangerously combustible, easily degradable, and very expensive. One of the best examples of the instability of lithium-ion is the Samsung Note 7 handset recall, after exploding units wreaked havoc on consumers. The design flaw ended up costing Samsung $6.2 billion, and even prompted the Federal Aviation Administration (FAA) to issue an advisory after panicked flyers saw cellphones overheat. Exploding batteries are not limited to Samsung, but span the entire lithium-ion appliance ecosystem, including e-cigarettes, hoverboards, toys, and electric vehicles.


While Marc Raibert was showing off his latest mechanical creation, across the river in Woburn, Massachusetts, Ionic Materials was opening its new 30,000-square-foot lab. The hot startup grabbed headlines months ago with a $65 million venture capital investment from the new Renault-Nissan-Mitsubishi Alliance, Total Energy Ventures, and Sun Microsystems co-founder Bill Joy. However, the real news story is its revolutionary solid-state lithium battery technology, which is cheaper, less flammable and longer lasting. The technology came out of the research of Dr. Michael Zimmerman of Tufts University, originally aimed at improving the performance of existing lithium-ion batteries. Unlike lithium-ion batteries that contain flammable liquid electrolyte, Zimmerman’s invention deploys a solid polymer electrolyte that prevents short-circuiting. Ionic’s plastic electrolyte not only prevents explosive gases from escaping, but enables the battery to be constructed with higher-energy-density materials, such as pure lithium anodes.

Zimmerman first unveiled his innovation on PBS NOVA last year, where host David Pogue tested it by poking the solid-state lithium cell with a screwdriver and scissors. Typically, such stress would immediately cause liquid electrolyte to explode, but Zimmerman’s battery did not even heat up and kept working. Pogue then noted, “If you can use lithium metal rather than lithium ions, you get five to ten times the energy density. That means ten days on a charge instead of one day on a charge, a thousand miles on a charge of your car instead of 200 miles.”
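Pogue’s arithmetic is simple linear scaling: at a fixed pack mass, stored energy grows with energy density, and so, to first order, does range or days between charges. A trivial sketch using his own round numbers (these are his illustrative figures, not measurements):

```python
def scale_linearly(baseline: float, density_multiplier: float) -> float:
    """At fixed pack mass, stored energy (and so range or days per charge)
    scales roughly linearly with energy density."""
    return baseline * density_multiplier

print(scale_linearly(200, 5))   # miles of EV range: 200 -> 1000
print(scale_linearly(1, 10))    # days per phone charge: 1 -> 10
```

In practice the scaling is only approximate, since drivetrain efficiency, pack overhead, and usable depth of discharge all shift with the chemistry.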

The promise of Ionic’s technology helps alleviate “range anxiety,” the fear of running out of charge without a power source nearby. The future of robots, and especially autonomous vehicles, relies heavily on infrastructure investments that can rival oil’s. Today, it takes 75 minutes to fully recharge the 7,104 lithium-ion cells inside a Tesla at one of its 5,000 supercharging stations, compared to 15 minutes at the pump at more than 165,000 gas stations throughout America. Realizing the shortcomings of switching to electric, Sweden is making country-wide investments to accelerate adoption. Last month, Stockholm opened the first stretch of roadway capable of charging vehicles while they drive.
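The 75-minute figure implies a fairly modest average charging power. A back-of-envelope calculation (the 85 kWh pack size is an assumption for illustration, not a figure from the article):

```python
def average_charge_power_kw(pack_kwh: float, charge_minutes: float) -> float:
    """Average electrical power delivered over a full recharge."""
    return pack_kwh / (charge_minutes / 60.0)

# Assumed 85 kWh pack, recharged in 75 minutes:
print(average_charge_power_kw(85.0, 75.0))  # 68.0 kW average
```

A gasoline pump, by contrast, transfers chemical energy at a rate orders of magnitude higher, which is why the charging gap is as much an infrastructure problem as a chemistry problem.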


Markus Fischer, spokesperson for state-owned energy company Vattenfall, explains: “Such roads will allow (electric vehicles) to move long distances without big, costly and heavy batteries. The investment cost per kilometer is estimated to be less than that of using overhead lines, as is the impact on the landscape.” Currently, only 1.2 miles of electric rail has been laid, but it is already serving trucks making deliveries to the airport. Gunnar Asplund, CEO of Elways, the maker of the road’s electric rail, boasted, “The technology offers infinite range — range anxiety disappears. Electrified roads will allow smaller batteries and can make electric cars even cheaper than fossil fuel ones.”

At the Robotics Summit in Boston, I spoke with Dr. Doug Schmidt of electric battery provider Acumentrics about the Swedish technology. Dr. Schmidt explained that most conductive charging platforms similar to Elways’ speed the degradation of lithium-ion batteries. Israeli startup Phinergy offers an alternative to lithium for electric vehicles with its proprietary aluminum batteries, which produce energy through a reaction between oxygen and aluminum using water. A few years ago, Phinergy powered a Renault car for over a thousand kilometers with just tap water. Now the company has partnered with China-based Yunnan Aluminum to begin manufacturing batteries for China’s growing electric automobile market. According to last month’s press release, the joint venture “will introduce the world’s leading aluminum-air battery technology, relying on [Yunnan Aluminium’s] green and clean water and aluminum resources.” The statement further detailed that the initial annual output will be 2,500 units. Phinergy’s website promotes wider use cases, including industrial robots and other unmanned systems.
China has been leading the world in alternative energy development. Last year, Pittsburgh-based Aquion was acquired out of bankruptcy for $9.16 million by Juline-Titans, an affiliate of China Titans Energy Technology Group. Aquion, a once high-flying startup that raised more than $190 million from such notable investors as Bill Gates, Kleiner Perkins Caufield & Byers, and Nick and Joby Pritzker, is now in the process of moving its operations to Asia. Similar to Phinergy, Aquion utilizes the most renewable of resources: water. Its patented “Aqueous Hybrid Ion” technology creates clean energy using sea water. However, this comes at a cost in weight: unlike lithium batteries, which are light enough to fit in one’s pocket, salt-water batteries are considerably heavier. The company’s products are uniquely positioned for future power grids, with the promise of weaning the world off fossil fuels.
Today, fewer than 5% of lithium-ion batteries are recycled. The environmental costs could not be higher, with dangerous toxic gases leaking from old batteries. Rising battery demand is also leading to a variety of unintended consequences, such as depleting the world’s natural reserves of lithium and cobalt and increasing water pollution from mineral extraction. While turning the tide of climate change depends greatly on ending the global dependency on oil, replacing it with a greener alternative is crucial. Promising inventions are not only developing new energy paradigms, but recycling old ones in innovative ways. British startup Aceleron is reusing dead electric car batteries for home energy storage. In the words of Amrit Chandan of Aceleron, “It takes so much energy to extract these materials from the ground. If we don’t re-use them we could be making our environmental problems worse. There’s going to be a storm of electric vehicle batteries that will reach the end of their life in a few years, and we’re positioning ourselves to be ready for it.”

Climate change and unmanned systems will be discussed in greater detail at the next RobotLab on “The Politics Of Automation,” June 13th @ 6pm in NYC, with Democratic Presidential Candidate Andrew Yang and New York State Assemblyman Clyde Vanel.

 

]]>
Automating window washing https://robohub.org/automating-window-washing/ Fri, 01 Jun 2018 01:21:56 +0000 http://robohub.org/automating-window-washing/ Read More ›]]>

Three and a half years ago, I stood on the corner of West Street and gasped as two window washers clung to life at the end of a rope a thousand feet above. By the time rescue crews reached the men on the 69th floor of 1 World Trade, they were close to passing out from dangling upside down. Every day, risk-taking men and women hook their bodies to metal scaffolds and ascend to deadly heights for $25 an hour. Ramone Castro, a window washer of three decades, said it best: “It is a very dangerous job. It is not easy going up there. You can replace a machine but not a life.” Castro’s statement sounds like an urgent call to action for robots.

One of the promises of automation is replacing tasks that are too dangerous for humans. Switzerland-based Serbot believes that high-rise facade cleaning is one of those jobs ripe for disruption. In 2010, it was first reported that Serbot had contracted with the city of Dubai to automatically clean its massive glass skyline. Utilizing its GEKKO machine, the Swiss company has demonstrated a performance of over 400 square meters an hour, 15 times faster than a professional washer. GEKKO leverages a unique suction technology that enables the massive Roomba-like device to be suspended from the roof and adhere to the curtain wall regardless of weather conditions or architectural features. Serbot offers both semi- and fully autonomous versions of its GEKKOs, including options for retrofitting existing roof systems. It is unclear how many robots are actually deployed in the marketplace; however, Serbot recently announced the cleaning of FESTO’s architecturally challenging Automation Center in Germany.

According to the press release, “The entire building envelope is cleaned automatically: by a robot, called GEKKO Facade, which sucks on the glass facade. This eliminates important disadvantages of conventional cleaning: no disturbance of the user by cleaning personnel, no risky working in a gondola at high altitude, no additional protection during the cleaning phase, etc.” Serbot further states its autonomous system worked at amazing speed, cleaning the 8,600-square-meter structure within a couple of days via an intelligent platform that plans a route across the entire glass facade.

Parallel to the global trend of urbanization, skyscraper construction is at an all-time high, and demand for glass facade materials and maintenance services is close to surpassing $200 billion worldwide. With New York City at the center of the construction boom, Israeli startup Skyline Robotics recently joined Iconic Labs NYC (ICONYC). This week, I had the opportunity to ask Skyline founder and CEO Yaron Schwarcz about the move. Schwarcz proudly said, “So far we are deployed in Israel only and are working exclusively with one of the top 5 cleaning companies. Joining ICONYC was definitely a step forward, as a rule we only move forward, we believe that ICONIC can and will help us connect with the best investors and help us grow in the NY market.”

While Serbot requires building owners to purchase its proprietary suction cleaning system, Skyline’s machine, called Ozmo, integrates seamlessly with existing equipment. Schwarcz explains, “We use the existing scaffold of the building in contrast to GEKKO’s use of suction. The use of the arms is to copy the human arms which is the only way to fully maintain the entire building and all its complexity. The Ozmo system is not only a window cleaner, it’s a platform for all types of facade maintenance. Ozmo does not need any humans on the rig, never putting people in danger.” Schwarcz also shared the results of early case studies in Israel, in which Ozmo cleaned an entire vertical glass building in 80 hours with one supervisor remotely controlling the operation from the ground, adding that the robot took “no breaks.”

While Serbot and Skyline offer an optimistic view of the future, past efforts have been met with skepticism. In a 2014 New York Times article, written days after the two window washers nearly fell to their deaths, the paper concluded that “washing windows is something that machines still cannot do as well.” The Times interviewed building exterior consultant Craig S. Caulkins, who stated then, “Robots have problems.” Caulkins says the setback for automation has been the quality of work, citing numerous examples of dirty window corners. “If you are a fastidious owner wanting clean, clean windows so you can take advantage of that very expensive view that you bought, the last thing you want to see is that gray area around the rim of the window,” exclaimed Caulkins. Furthermore, New York City’s window washers are represented by a very active labor union, S.E.I.U. Local 32BJ. The fear of robots replacing its members could lead to citywide protests and strikes. The S.E.I.U. 32BJ press office did not return calls for comment.

High-rise window washing in New York is very much part of the folklore of the Big Apple. One of the best-selling local children’s books, “Window Washer: At Work Above the Clouds,” profiles former Twin Towers cleaner Roko Camaj. In 1995, Camaj predicted that “Ten years from now, all window washing will probably be done by a machine.” Unfortunately, Camaj never lived to see the innovations of GEKKO and Ozmo; he perished in the Towers on September 11th.

Automating high-risk professions will be explored further on June 13th @ 6pm in NYC with Democratic Presidential Candidate Andrew Yang and New York Assemblyman Clyde Vanel at the next RobotLab on “The Politics Of Automation” – Reserve Today!

]]>
Drones as first responders https://robohub.org/drones-as-first-responders/ Tue, 17 Apr 2018 15:34:00 +0000 http://robohub.org/drones-as-first-responders/ Read More ›]]> In a basement of New York University in 2013, Dr. Sergei Lupashin wowed a room of one hundred leading technology enthusiasts with one of the first indoor Unmanned Aerial Vehicle (UAV) demonstrations. During his presentation, Dr. Lupashin of ETH Zurich attached a dog leash to an aerial drone while declaring to the audience, “there has to be another way” of flying robots safely around people. Lupashin’s creativity eventually led to the invention of Fotokite and one of the most successful Indiegogo campaigns.

Since Lupashin’s demo, close to a hundred providers have emerged offering tethered drones, from innovative startups to aftermarket restraint solutions for unmanned flying vehicles. Probably the best-known enterprise solution is CyPhy Works, which has raised more than $30 million. Last August, during President Trump’s visit to his golf course in New Jersey, the Department of Homeland Security (DHS) deployed CyPhy’s tethered drones to patrol the perimeter. In a statement by DHS about its “spy in the sky” program, the agency explained: “The Proof of Concept will help determine the potential future use of tethered Unmanned Aircraft System (sUAS) in supporting the Agency’s protective mission. The tethered sUAS used in the Proof of Concept is operated using a microfilament tether that provides power to the aircraft and the secure video from the aircraft to the Operator Control Unit (OCU).” CyPhy’s systems are currently being utilized to provide a bird’s-eye view to police departments and military units on a number of high-profile missions, including the Boston Marathon.

Fotokite, CyPhy and others have proved that tethered machines offer huge advantages over traditional remote-controlled or autonomous UAVs by removing the regulatory, battery and payload restrictions from lengthy missions. This past week Genius NY, the largest unmanned systems accelerator, awarded one million dollars to Fotokite for its latest enterprise line of leashed drones. The company clinched the competition after demonstrating how its drones can fly continuously for up to twenty-four hours, autonomously and safely providing a real-time video feed above large population centers. Fotokite’s Chief Executive, Chris McCall, announced that the funds will be used to fulfill a contract with one of the largest fire truck manufacturers in the United States. “We’re building an add-on to fire and rescue vehicles and public safety vehicles to be added on top of for instance a fire truck. And then a firefighter is able to pull up to an emergency scene, push a button and up on top of the fire truck this box opens up, a Fotokite flies up and starts live streaming thermal and normal video down to all the firefighters on the ground,” boasted McCall.

Fotokite is not the only kite-drone company marketing to firefighters: Latvian-born startup Aerones is attaching fire hoses to its massive multi-rotor unmanned aerial vehicles. Aerones claims to have successfully built a rapid-response UAV that can climb up to a thousand feet within six minutes to extinguish fires from the air, giving first responders close to ten times the reach of traditional fire ladders. The Y Combinator startup offers municipalities two models: a twenty-eight-propeller version that can carry up to 441 pounds to a height of 984 feet, and a thirty-six-propeller version that ferries over 650 pounds of equipment to heights over 1,600 feet. However, immediate interest in the Aerones solution is coming from industrial clients such as wind farms. “Over the last two months, we’ve been very actively talking to wind turbine owners,” says Janis Putrams, CEO of Aerones. “We have lots of interest and letters of intent in Texas, Spain, Turkey, South America for wind turbine cleaning. And in places like Canada, the Nordic and Europe for de-icing. If the weather is close to freezing, ice builds up, and they have to stop the turbine.” TechCrunch reported last March that the company moved its sales operations to Silicon Valley.
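Converting the quoted specs to metric (a quick sketch using only the figures above and standard conversion factors) suggests the original numbers were round metric values:

```python
# Convert Aerones' quoted imperial specs to metric.
LB_TO_KG = 0.45359237  # pounds to kilograms
FT_TO_M = 0.3048       # feet to meters

specs = {
    "28-propeller": (441, 984),    # (payload lb, altitude ft)
    "36-propeller": (650, 1600),
}
for model, (lb, ft) in specs.items():
    print(f"{model}: {lb * LB_TO_KG:.0f} kg to {ft * FT_TO_M:.0f} m")
# 28-propeller: 200 kg to 300 m
# 36-propeller: 295 kg to 488 m
```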

The emergency response industry is also looking to other aerial solutions to tackle its most difficult challenges. For over a year, Zipline has been successfully delivering blood for critical transfusions to the most remote areas of Africa. The company announced earlier this month that it has filed with the FAA to begin testing in America later this year. This is welcome news for the USA’s rural health centers, which are saddled with exploding costs, staff shortages and crumbling infrastructure. In a Fast Company article about Zipline, the magazine reported that “Nearly half of rural providers already have a negative operating margin. As rural residents–who tend to be sicker than the rest of the country–have to rely on the smaller clinics that remain, drones could ensure that those clinics have access to necessary supplies. Blood products spoil quickly, and outside major hospitals, it’s common not to have the right blood on hand for a procedure. Using the drones would be faster, cheaper, and more reliable than delivering the supplies in a van or car.”

Keller Rinaudo, Zipline’s Chief Executive, describes, “There’s a lot that [the U.S.] can be doing better. And that’s what we think is ultimately the promise of future logistics and automated logistics. It’s not delivering tennis shoes or pizza to someone’s backyard. It’s providing universal access to healthcare when people need it the most.”

To date, Zipline has flown over 200,000 miles autonomously, delivering 7,000 units of blood throughout Rwanda. To prepare for its US launch, the company re-engineered its entire platform to bolster its delivery capabilities. Rinaudo explains, “In larger countries, you’re going to need distribution centers and logistics systems that are capable of doing millions of deliveries a day rather than hundreds or thousands.” The new UAV is a small fixed-wing plane called the Zip that can soar at close to 80 miles per hour, enabling life-saving supplies such as blood, donated organs or vaccines to be delivered in a matter of minutes.


As I prepare to speak at Xponential 2018 next month, I am inspired by these innovators who turn their mechanical inventions into life-saving solutions. Many would encourage Rinaudo and others to focus their energies on seemingly more profitable sectors such as e-commerce delivery and industrial inspections. However, Rinaudo retorts that “Healthcare logistics is a way bigger market and a way bigger problem than most people realize. Globally it’s a $70 billion industry. The reality is that there are billions of people who do not have reliable access to healthcare and a big part of that is logistics. As a result of that, 5.2 million kids die every year due to lack of access to basic medical products. So Zipline’s not in a rush to bite off a bigger problem than that.”

The topic of utilizing life-saving technology will be discussed at the next RobotLab event on “The Politics Of Automation,” with Democratic Presidential Candidate Andrew Yang and New York Assemblyman Clyde Vanel on June 13th @ 6pm in NYC – RSVP Today

]]>
SXSW 2018: Protect AI, robots, cars (and us) from bias https://robohub.org/sxsw-2018-protect-ai-robots-cars-and-us-from-bias/ Wed, 21 Mar 2018 07:25:25 +0000 http://robohub.org/sxsw-2018-protect-ai-robots-cars-and-us-from-bias/ Read More ›]]>

As Mark Hamill humorously shared the behind-the-scenes of “Star Wars: The Last Jedi” with a packed SXSW audience, two floors below on the exhibit floor Universal Robots recreated General Grievous’ famed lightsaber battles. The battling machines were steps away from a twelve-foot dancing Kuka robot and an automated coffee dispensary. Somehow the famed interactive festival known for its late-night drinking, dancing and concerts had a very mechanical feel this year. Everywhere, debates ensued between utopian tech visionaries and dystopia-fearing humanists.

Even my panel on “Investing In The Autonomy Economy” took a very social turn when discussing the opportunities for utilizing robots for the growing aging population. Eric Daimler (formerly of the Obama White House) raised concerns about AI bias affecting the well-being of seniors. Agreeing, Dan Burstein (a partner at Millennium Technology Value Partners) nervously expressed that ‘AI is everywhere, in everything, and the USA has no other way to care for this exploding demographic except with machines.’ Daimler explained that “AI is very good at perception, just not context;” until this is solved, it could be a very dangerous problem worldwide.

Last year at a Google conference on the relationship between humans and AI, the company’s senior vice president of engineering, John Giannandrea, warned, “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased. It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems.” Similar to Daimler’s anxiety about AI and healthcare, Giannandrea exclaimed that “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”


One of the most famous illustrations of how quickly human bias influences computer actions is Tay, the Microsoft customer-service chatbot on Twitter. It took only twenty-four hours for Tay to develop a Nazi persona, leading to more than ninety thousand hate-filled tweets. Tay swiftly calculated that hate on social media equals popularity. In explaining its failed experiment to Business Insider, Microsoft stated via email: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

While Tay’s real impact was benign, it raises serious questions about the implications of embedding AI into machines and society. In its Pulitzer Prize-finalist investigation, ProPublica uncovered that a widely distributed US criminal justice software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was racially biased in scoring the risk that convicted felons would recommit crimes. ProPublica discovered that black defendants in Florida “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism” by the AI. Northpointe, the company that created COMPAS, released its own report disputing ProPublica’s findings, but it refused to pull back the curtain on its training data, keeping the algorithms hidden in a “black box.” In a statement released to the New York Times, Northpointe’s spokesperson argued, “The key to our product is the algorithms, and they’re proprietary. We’ve created them, and we don’t release them because it’s certainly a core piece of our business.”

The dispute between Northpointe and ProPublica raises questions of transparency and of independent auditing of data to protect against bias. Cathy O’Neil, a former Barnard professor and analyst at D.E. Shaw, thinks a lot about safeguarding ordinary Americans from biased AI. In her book, Weapons of Math Destruction, she cautions that corporate America is too willing to hand over the wheel to algorithms without fully assessing the risks or implementing any oversight. “[Algorithms] replace human processes, but they’re not held to the same standards. People trust them too much,” declares O’Neil. Understanding the high stakes and the lack of regulatory oversight by the current federal government, O’Neil left her high-paying Wall Street job to start a software-auditing firm, O’Neil Risk Consulting & Algorithmic Auditing. In an interview with MIT Technology Review last summer, O’Neil expressed frustration that companies are more interested in the bottom line than in protecting their employees, customers, and families from bias: “I’ll be honest with you. I have no clients right now.”

Most of the success in deconstructing “black boxes” is happening today at the US Department of Defense, where DARPA program manager Dr. David Gunning leads research to develop Explainable Artificial Intelligence (XAI). Understanding its own AI and that of foreign governments could be a huge advantage for America’s cyber military units. At the same time, like many DARPA-funded projects, civilian applications could offer societal benefits. According to Gunning’s program description, XAI aims to “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.” XAI plans to work with developers and user interface designers to foster “useful explanation dialogues for the end user,” so users know when to trust or question AI-generated data.

Besides DARPA, many large technology companies and universities are starting to create think tanks, conferences and policy groups to develop standards that test for AI bias. The results have been startling, ranging from computer vision sensors that misidentify people of color, to gender bias in employment-management software, to blatant racism in natural language processing systems, to security robots that run over kids mistakenly identified as threats. As an example of how training data affects outcomes, when Google first released its image processing software, the AI identified photos of African Americans as “gorillas” because the engineers had failed to include enough examples of minorities in the neural network’s training data.
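The mechanics of that failure mode can be sketched with a toy example. This is purely illustrative (it has nothing to do with Google's actual model), but it shows how an imbalanced training set lets a model score well on overall accuracy while ignoring the under-represented class entirely:

```python
# Toy illustration: high accuracy can mask total failure on a minority class.
from collections import Counter

# Hypothetical labels: 95 examples of class "A", only 5 of class "B".
train_labels = ["A"] * 95 + ["B"] * 5

# A degenerate "model" that always predicts the majority class.
majority = Counter(train_labels).most_common(1)[0][0]
predictions = [majority for _ in train_labels]

accuracy = sum(p == y for p, y in zip(predictions, train_labels)) / len(train_labels)
recall_b = sum(p == y == "B" for p, y in zip(predictions, train_labels)) / 5

print(f"accuracy: {accuracy:.0%}")           # 95% -- looks fine on paper
print(f"recall on class B: {recall_b:.0%}")  # 0% -- the minority class is invisible
```

A real neural network is far more sophisticated, but the incentive is the same: with too few minority examples, the loss function barely penalizes getting them all wrong.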

Ultimately, artificial intelligence reflects the people who program it, as every human being brings their own experiences, which shape personal biases. According to Kathleen Walch, host of the AI Today podcast, “If the researchers and developers developing our AI systems are themselves lacking diversity, then the problems that AI systems solve and training data used both become biased based on what these data scientists feed into AI training data.” Walch advocates that hiring for diversity can bring “about different ways of thinking, different ethics and different mindsets. Together, this creates more diverse and less biased AI systems. This will result in more representative data models, diverse and different problems for AI solutions to solve, and different use cases feed to these systems if there is a more diverse group feeding that information.”

Before leaving SXSW, I attended a panel hosted by the IEEE on “Algorithms, Unconscious Bias & AI,” notably led entirely by female panelists, including one person of color. Hiring bias became a big theme of their discussion. Following the talk, I hopped into my Uber and pleasantly rode to the airport, reflecting on a statement made earlier in the day by John Krafcik, Chief Executive of Waymo. Krafcik boasted that Waymo’s mission is to build “the world’s most experienced driver.” I just hope the training data is not from New York City cabbies.

]]>
Healthcare’s regulatory AI conundrum https://robohub.org/regulatory-challenges-holding-back-healthcare-ai/ Wed, 14 Mar 2018 23:19:23 +0000 http://robohub.org/regulatory-challenges-holding-back-healthcare-ai/

It was the last question of the night, and it hushed the entire room. An entrepreneur expressed his aggravation with the FDA’s antiquated regulatory environment for AI-enabled devices to Dr. Joel Stein of Columbia University. Stein, a leader in rehabilitative robotic medicine, sympathized with the startup, knowing full well that tomorrow’s exoskeletons will rely heavily on machine intelligence. Nodding her head in agreement, Kate Merton of JLabs shared the sentiment. Her employer, Johnson & Johnson, has partnered with Google to revolutionize the operating room through embedded deep learning systems. In many ways this astute observation encapsulated this past Tuesday’s RobotLab on “The Future Of Robotic Medicine”: the paradox of software-enabled therapeutics offering a better quality of life amid the societal, technological and regulatory challenges ahead.

To better understand the frustration expressed at RobotLab, a review of the Food & Drug Administration (FDA) policies on medical devices and software is required. Most devices fall within criteria established in the 1970s: a “build and freeze” model whereby a filed product doesn’t change over time, a framework that currently excludes therapies relying on neural networks and deep-learning algorithms that evolve with use. Charged with modernizing its regulatory environment, the Obama Administration established a Digital Health Program tasked with implementing new regulatory guidance for software and mobile technology. This initiative eventually led Congress to pass the 21st Century Cures Act (“Cures Act”) in December 2016. An important aspect of the Cures Act is its provisions for digital health products, medical software, and smart devices. The legislators singled out AI for its unparalleled ability to support human decision making, referred to as “Clinical Decision Support” (“CDS”), with examples like Google and IBM Watson. Last year, the administration updated the Cures Act framework with a Digital Health Innovation Action Plan. These steps have been changing the FDA’s attitude toward mechatronics, updating its traditional approach to devices to include software and hardware that iterate through cognitive learning. The Action Plan states that “an efficient, risk-based approach to regulating digital health technology will foster innovation of digital health products.” In addition, the FDA has been offering tech partners the ability to file a Digital Health Software Pre-Certification (“Pre-Cert”) to fast-track the evaluation and approval process; current Pre-Cert pilot participants include Apple, Fitbit, Samsung and other leading technology companies.

Another way for AI and robotic devices to receive FDA approval is through the “De Novo premarket review pathway.” According to the FDA’s website, the De Novo program is designed for “medical devices that are low to moderate risk and have no legally marketed predicate device to base a determination of substantial equivalence.” Many computer vision systems fall into the De Novo category, using their sensors to provide “triage” software that efficiently identifies disease markers based upon training data of radiology images. As an example, last month the FDA approved Viz.ai, a new type of “clinical decision support software designed to analyze computed tomography (CT) results that may notify providers of a potential stroke in their patients.”

Dr. Robert Ochs of the FDA’s Center for Devices and Radiological Health explains, “The software device could benefit patients by notifying a specialist earlier thereby decreasing the time to treatment. Faster treatment may lessen the extent or progression of a stroke.” The Viz.ai algorithm has the ability to change the lives of the nearly 800,000 annual stroke victims in the USA. The data platform will enable clinicians to quickly identify patients at risk for stroke by analyzing thousands of CT brain scans for blood vessel blockages and then automatically send alerts via text messages to neurovascular specialists. Viz.ai promises to streamline the diagnosis process by cutting the traditional time it takes for radiologists to review, identify and escalate high-risk cases to specialists.
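The workflow described above (score each incoming scan, flag those above a risk threshold, and alert a specialist) can be sketched in a few lines. Every name, threshold, and score here is hypothetical, and the scoring model is stubbed out; the real Viz.ai algorithm is proprietary:

```python
# Hypothetical sketch of a CT-triage alert loop. score_scan() stands in
# for a proprietary computer-vision model; the threshold is invented.
LVO_THRESHOLD = 0.8  # assumed risk cutoff for a large-vessel occlusion

def score_scan(scan):
    """Stub for an image-analysis model returning a 0-1 risk score."""
    return scan["risk"]  # in reality, model inference on the CT volume

def triage(scans):
    """Return patient IDs that should trigger a specialist alert."""
    return [s["patient_id"] for s in scans if score_scan(s) >= LVO_THRESHOLD]

# Example: three incoming scans with made-up risk scores.
incoming = [
    {"patient_id": "P-001", "risk": 0.95},
    {"patient_id": "P-002", "risk": 0.12},
    {"patient_id": "P-003", "risk": 0.83},
]
alerts = triage(incoming)
print(alerts)  # the two high-risk patients would be escalated to a specialist
```

The value of the real system lies not in this loop but in the model behind `score_scan` and in shaving minutes off the notification path.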

Dr. Chris Mansi, Viz.ai CEO, says, “The Viz.ai LVO Stroke Platform is the first example of applied artificial intelligence software that seeks to augment the diagnostic and treatment pathway of critically unwell stroke patients. We are thrilled to bring artificial intelligence to healthcare in a way that works alongside physicians and helps get the right patient, to the right doctor at the right time.” According to the FDA’s statement, Mansi’s company “submitted a study of only 300 CT scans that assessed the independent performance of the image analysis algorithm and notification functionality of the Viz.ai Contact application against the performance of two trained neuro-radiologists for the detection of large vessel blockages in the brain. Real-world evidence was used with a clinical study to demonstrate that the application could notify a neurovascular specialist sooner in cases where a blockage was suspected.”

Viz.ai joins a market for AI diagnosis software that is growing rapidly and projected to eclipse $6 billion by 2021 (Frost & Sullivan), an increase of more than forty percent since 2014. According to the study, AI has the ability to reduce healthcare costs by nearly half while improving outcomes for a third of all US healthcare patients. However, diagnosis software is only part of the AI value proposition; adding learning algorithms throughout the entire healthcare ecosystem could provide new levels of quality of care.

At the same time, the demand for AI treatment is taking its toll on an underfunded FDA, which is having difficulty keeping up with new filings for computer-aided therapies ranging from diagnosis to robotic surgery to invasive therapeutics. In addition, many companies are unable to afford the seven-figure investment required to file with the FDA, leading to missed opportunities to find cures for the most plaguing diseases. The Atlantic reported last fall on a Canadian company, Cloud DX, that is still waiting for approval of its AI software that analyzes coughing audio waveforms to detect lung-based diseases (i.e., asthma, tuberculosis, and pneumonia). Cloud DX’s founder, Robert Kaul, shared with the magazine, “There’s a reason that tech companies like Google haven’t been going the FDA route [of clinical trials aimed at diagnostic certification]. It can be a bureaucratic nightmare, and they aren’t used to working at this level of scrutiny and slowness.” It took Cloud DX two years and close to a million dollars to achieve the basic ISO 13485 certification required to begin filing with the agency. Kaul questioned, “How many investors are going to give you that amount of money just so you can get to the starting line?”

Last month, Rani Therapeutics raised $53 million to begin clinical trials for its new robotic pill. Rani’s solution could usher in a new paradigm of needle-free therapy, whereby drugs are mechanically delivered to the exact site of infection. Unfortunately, innovations like Rani’s are getting backlogged amid a shortage of knowledgeable examiners able to review the clinical data. Bakul Patel, the FDA’s new Associate Center Director for Digital Health, says one of his top priorities is hiring: “Yes, it’s hard to recruit people in AI right now. We have some understanding of these technologies. But we need more people. This is going to be a challenge.” Patel is cautiously optimistic: “We are evolving… The legacy model is the one we know works. But the model that works continuously—we don’t yet have something to validate that. So the question is [as much] scientific as regulatory: How do you reconcile real-time learning [with] people having the same level of trust and confidence they had yesterday?”

As I concluded my discussion with Stein, I asked whether he thought disabled people would eventually commute to work wearing robotic exoskeletons as easily as they do today in electric wheelchairs. He answered that it could come within the next decade, if society changes its mindset on how we distribute and pay for such therapies. To quote the President, “Nobody knew health care could be so complicated.”

]]>
Self-driving cars have power consumption problems https://robohub.org/self-driving-cars-have-power-consumption-problems/ Thu, 08 Mar 2018 22:38:59 +0000 http://robohub.org/self-driving-cars-have-power-consumption-problems/

I recently chaired a UJA Tech Talk on “The Future Of Autonomous Cars” with former General Motors Vice-Chairman Steve Girsky. The auto executive enthusiastically shared his vision for the next 15-25 years of driving – a congestion-free world of automated wheeled capsules zipping commuters to and from work.

Girsky stated that connected cars with safety assist (autonomy-lite) features are moving much faster toward mass adoption than fully autonomous vehicles (sans steering wheels and pedals). In his opinion, the largest roadblocks toward a consumer-ready robocar are the current technical inefficiencies of prototypes on the road today, which burn huge amounts of energy supporting enhanced computing and arrays of sensors. This makes the sticker price closer to a 1972 Ferrari than a 2018 Prius.

As mainstream adoption relies heavily on converting combustion engines to electric at accessible pricing, Girsky’s sentiment was shared by many CES 2018 participants. NVIDIA, the leading chip manufacturer for autonomous vehicles, unveiled its latest technology, Xavier, with auto industry partner Volkswagen in Las Vegas. Xavier promises to be 15 times more energy-efficient than the previous chip generation, delivering 30 trillion operations per second while drawing only 30 watts of power.

After the Xavier CES demonstration, Volkswagen CEO Herbert Diess exclaimed, “Autonomous driving, zero-emission mobility, and digital networking are virtually impossible without advances in AI and deep learning. Working with NVIDIA, the leader in AI technology, enables us to take a big step into the future.”

NVIDIA is becoming the industry standard as Volkswagen joins more than 320 companies and organizations working with the chip manufacturer on autonomous vehicles. While NVIDIA is leading the pack, Intel and Qualcomm are not far behind with their low-power solutions. Electric vehicle powerhouse Tesla is developing its own internal chip for the next generation of Autopilot. While these new chips represent a positive evolution in processors, there is still much work to be done, as current self-driving prototypes draw close to 2,500 watts of power.
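The gap between these figures is easy to quantify. A minimal sketch, using only the numbers quoted above (Xavier’s 30 TOPS at 30 W, and the roughly 2,500 W draw of today’s prototypes) and treating the 2,500 W as a steady power figure:

```python
# Back-of-envelope comparison using the figures quoted in the article.
# The arithmetic is illustrative, not vendor benchmark data.

XAVIER_TOPS = 30        # trillion operations per second
XAVIER_WATTS = 30       # quoted power draw of the Xavier chip
PROTOTYPE_WATTS = 2500  # approximate draw of current self-driving stacks

efficiency = XAVIER_TOPS / XAVIER_WATTS  # TOPS per watt
print(f"Xavier efficiency: {efficiency:.1f} TOPS/W")

# Compute available if the whole prototype budget ran at that efficiency:
tops_at_budget = efficiency * PROTOTYPE_WATTS
print(f"Compute at a 2,500 W budget: {tops_at_budget:,.0f} TOPS")
# → Xavier efficiency: 1.0 TOPS/W
# → Compute at a 2,500 W budget: 2,500 TOPS
```

In other words, the bottleneck is less raw compute than the watts required to deliver it, which is why efficiency per watt has become the headline metric.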

Power Consumption a Tradeoff for Self-Driving Cars

The power-consumption problem was highlighted recently in a report published by the University of Michigan Center for Sustainable Systems. Its lead author, Greg Keoleian, questions whether current autonomous car models will slow the overall adoption of electric vehicles. Keoleian’s team simulated a number of self-driving Ford Fusion models with different-sized computer configurations and engine designs. In sharing his findings, Keoleian said, “We knew there was going to be a tradeoff in terms of the energy and greenhouse gas emissions associated with the equipment and the benefits gained from operational efficiency. I was surprised that it was so significant.”

Keoleian’s conclusions challenged the premise that self-driving cars will accelerate the adoption of renewable energy. For years, advocates of autonomous vehicles have claimed that smart driving will reduce greenhouse gas emissions through the platooning of vehicles on highways and at intersections, the decrease of aerodynamic drag on freeways, and the overall reduction in urban congestion.


However, the University of Michigan tests showed only a “six to nine percent net energy reduction” over the vehicle’s lifecycle when running in autonomy mode. The savings shrank by five percentage points when using a large Waymo rooftop sensor package, as it increased the aerodynamic drag. The report also stated that the greatest net efficiencies were in cars with gas drivetrains, which benefit the most from smart driving. Waymo currently uses a hybrid Chrysler Pacifica to run its complex fusion of sensors and processing units.
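The tradeoff the Michigan team describes can be sketched in a few lines. The 6-9% saving and the roughly five-point drag penalty come from the report as quoted above; composing them by simple subtraction is an illustrative assumption, not the study’s actual lifecycle model:

```python
# Illustrative sketch of the lifecycle tradeoff described above.
# Subtracting the drag penalty from the operational saving is an
# assumption for illustration, not the study's full methodology.

def net_energy_change(operational_saving, sensor_drag_penalty=0.0):
    """Net lifecycle energy reduction (positive = saving)."""
    return operational_saving - sensor_drag_penalty

# Compact sensor package: the reported 6-9% net reduction
low, high = net_energy_change(0.06), net_energy_change(0.09)
print(f"Compact sensors: {low:.0%} to {high:.0%} net saving")

# A large rooftop package costs ~5 points of aerodynamic drag
print(f"Large rooftop package: {net_energy_change(0.06, 0.05):.0%} "
      f"to {net_energy_change(0.09, 0.05):.0%} net saving")
# → Compact sensors: 6% to 9% net saving
# → Large rooftop package: 1% to 4% net saving
```

The sketch makes the headline finding concrete: with a bulky sensor rig, the net benefit of autonomy can shrink to low single digits.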

Keoleian told IEEE Spectrum that his modeling actually “overstates real impacts from future autonomous vehicles.” While he anticipates reductions in computing loads and sensor drag, he is concerned that the impact of 5G communications has not been fully explored: the increased bandwidth will lead to greater data streams and boost power consumption for onboard systems and processors. In addition, he thinks that self-driving systems will increase distances traveled as commuters move further from city centers with the advent of easier commutes. Keoleian explains, “There could be a rebound effect. They could induce travel, adding to congestion and fuel use.” Keoleian points to an ambiguous conclusion by the U.S. National Renewable Energy Laboratory that presents two possible outcomes of full autonomy:

  1. A reduction in greenhouse emissions by sixty percent with greater ride sharing options
  2. An increase of two hundred percent with increased driving distances


According to Wilko Stark, Mercedes-Benz’s Vice President of Strategy, it only makes sense for autonomous vehicles to be electric as the increased power requirements will go to the computers instead of the motors. “To put such a system into a combustion-engined car doesn’t make any sense, because the fuel consumption will go up tremendously,” explains Stark.


Girsky shares Stark’s view, predicting that the first large-scale use cases for autonomy will be fleets of souped-up golf carts running low-speed, pre-planned shuttle routes. Also on view at CES were complimentary autonomous shared taxi rides around Las Vegas, courtesy of French startup Navya. Today, Navya boasts 60 operating shuttles in more than 10 cities, including around the University of Michigan.

Fully autonomous cars might not be far behind, as Waymo has seen a ninety percent drop in component costs by bringing its sensor development in-house. The autonomous powerhouse recently passed the four-million-mile marker on public roads and plans to ditch its safety driver later this year in its Phoenix, Arizona test program. According to Dmitri Dolgov, Waymo’s Vice President of Engineering, “Sensors on our new generation of vehicles can see farther, sharper, and more accurately than anything available on the market. Instead of taking components that might have been designed for another application, we engineered everything from the ground up, specifically for the task of Level 4 autonomy.”

With increased roadside fatalities and rising CO2 emissions, the world can’t wait much longer for affordable, energy-efficient autonomous transportation. Girsky and others remind us there is still a long road ahead: industry experts estimate that the current gas-burning Waymo Chrysler Pacifica cruising around Arizona costs more than one hundred times the sticker price of the minivan. I guess until then there is always Citibike.

]]>
New brain computer interfaces lead many to ask, is Black Mirror real? https://robohub.org/new-brain-computer-interfaces-lead-many-to-ask-is-black-mirror-real/ Thu, 22 Feb 2018 17:42:08 +0000 http://robohub.org/new-brain-computer-interfaces-lead-many-to-ask-is-black-mirror-real/ Read More ›]]>

It’s called the “grain,” a small IoT device implanted into the back of people’s skulls to record their memories. Human experiences are simply played back in “redo mode” using a smart button remote. The technology promises to reduce crime and terrorism and to simplify human relationships with greater transparency. While this is a description of the Netflix Black Mirror episode “The Entire History of You,” in reality the concept is not as far-fetched as it may seem. This week life came closer to imitating art with a $19 million grant by the US Department of Defense to a group of six universities to begin work on “neurograins.”

In the past, brain-computer interfaces (BCIs) have utilized wearable technologies, such as headbands and helmets, to control robots, machines, and various household appliances for people with severe disabilities. This new DARPA grant is focused on developing a “cortical intranet” for uplinks and downlinks directly to the cerebral cortex, potentially taking mind control to the next level. According to lead researcher Arto Nurmikko of Brown University, “What we’re developing is essentially a micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible.”

Nurmikko points to the numerous medical outcomes of the research: “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.” The technology being developed by Nurmikko’s international team will eventually create a wireless neural communication platform able to record and stimulate brain activity at an unprecedented level of detail and precision. This will be accomplished by implanting a mesh network of tens of thousands of granular micro-devices into a person’s cranium. Surgeons will place this layer of neurograins around the cerebral cortex, where it will be controlled by an electronic patch just below the person’s skin.

In describing how it will work, Nurmikko explains, “We aim to be able to read out from the brain how it processes, for example, the difference between touching a smooth, soft surface and a rough, hard one and then apply microscale electrical stimulation directly to the brain to create proxies of such sensation. Similarly, we aim to advance our understanding of how the brain processes and makes sense of the many complex sounds we listen to every day, which guide our vocal communication in a conversation and stimulate the brain to directly experience such sounds.”

Nurmikko further describes, “We need to make the neurograins small enough to be minimally invasive but with extraordinary technical sophistication, which will require state-of-the-art microscale semiconductor technology. Additionally, we have the challenge of developing the wireless external hub that can process the signals generated by large populations of spatially distributed neurograins at the same time.”

While current BCIs are able to process the activity of 100 neurons at once, Nurmikko’s objective is to work at a level of 100,000 simultaneous inputs. “When you increase the number of neurons tenfold, you increase the amount of data you need to manage by much more than that because the brain operates through nested and interconnected circuits,” Nurmikko remarks. “So this becomes an enormous big data problem for which we’ll need to develop new computational neuroscience tools.” The researchers plan to first test their theory on the sensory and auditory functions of mammals.
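Nurmikko’s point about superlinear data growth can be illustrated with a toy model. If circuit-level analysis has to consider pairwise interactions between neurons, the data grows roughly with the square of the channel count; the quadratic pairwise model here is an illustrative assumption, not the team’s actual data model:

```python
# Toy illustration of why recorded data grows faster than the neuron
# count. The quadratic pairwise model is an assumption for illustration.

def pairwise_interactions(n):
    # Number of unordered neuron pairs: n choose 2
    return n * (n - 1) // 2

for n in (100, 1_000, 100_000):
    print(f"{n:>7} neurons -> {pairwise_interactions(n):>13,} possible pairs")

# Going from 100 to 100,000 neurons is a 1,000x jump in channels,
# but roughly a 1,000,000x jump in pairwise interactions.
ratio = pairwise_interactions(100_000) / pairwise_interactions(100)
print(f"Interaction growth factor: {ratio:,.0f}x")
```

That million-fold blowup, rather than the raw channel count, is the “enormous big data problem” Nurmikko describes.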

Brain-computer interfaces are one of the fastest-growing areas of healthcare technology; while the market is valued today at just under a billion dollars, it is forecast to grow to $2 billion in the next five years, driven by an estimated increase in treating aging, fatal diseases, and people with disabilities. The funder of Nurmikko’s project is DARPA’s Neural Engineering System Design program, which was formed to treat injured military personnel by “creating novel hardware and algorithms to understand how various forms of neural sensing and actuation might improve restorative therapeutic outcomes.” While DARPA’s project will yield numerous discoveries that improve the quality of life for society’s most vulnerable, it also opens a Pandora’s box of ethical issues with the prospect of the US military potentially funding armies of cyborgs.

In response to rising ethical concerns, last month ethicists from the University of Basel in Switzerland drafted a new biosecurity framework for research in neurotechnology. The biggest concern expressed in the report was the implementation of “dual-use” technologies that have both military and medical benefits. The ethicists called for a complete ban on such innovations and strongly recommended fast-tracking regulations to protect “the mental privacy and integrity of humans.”

The ethicists raise important questions about taking grant money from groups like DARPA: “This military research has raised concern about the risks associated with the weaponization of neurotechnology, sparking a debate about controversial questions: Is it legitimate to conduct military research on brain technology? And how should policy-makers regulate dual-use neurotechnology?” The suggested framework reads like a science fiction novel: “This has resulted in a rapid growth in brain technology prototypes aimed at modulating the emotions, cognition, and behavior of soldiers. These include neurotechnological applications for deception detection and interrogation as well as brain-computer interfaces for military purposes.” One is reminded that the development of BCIs is moving more quickly than public policy can debate its merits.

The framework’s lead author Marcello Ienca of Basel’s Institute for Biomedical Ethics understands the tremendous positive benefits of BCIs for a global aging population, especially for people suffering from Alzheimer’s and spinal cord injuries. In fact, the Swiss team calls for increased private investment of these neurotechnologies, not an outright prohibition. At the same time, Ienca stresses that in order to protect against misuse, such as brain manipulation by nefarious global actors, it is critical to raise awareness and debate surrounding the ethical issues of implanting neurograins into populations of humans. In an interview with the Guardian last year, Ienca summed up his concern very succinctly by saying, “The information in our brains should be entitled to special protections in this era of ever-evolving technology. When that goes, everything goes.”

In the spirit of open debate our next RobotLab forum will be on “The Future of Robotic Medicine” with Dr. Joel Stein of Columbia University and Kate Merton of JLabs on March 6th @ 6pm in New York City, RSVP. 

]]>
Israel, a land flowing with AI and autonomous cars https://robohub.org/israel-a-land-flowing-with-ai-and-autonomous-cars/ Sat, 10 Feb 2018 23:00:01 +0000 http://robohub.org/israel-a-land-flowing-with-ai-and-autonomous-cars/ Read More ›]]>

I recently led a group of 20 American tech investors to Israel in conjunction with the UJA and Israel’s Ministry of Economy and Industry. We witnessed firsthand the innovation that has produced more than $22 billion of investments and acquisitions within the past year. We met with the university that produced Mobileye, with the investor who believed in its founder, and with the network of multinational companies supporting the startup ecosystem. Mechatronics is blooming in the desert, from the CyberTech Convention in Tel Aviv to the robotics labs at Capsula to the latest autonomous driving inventions in the hills of Jerusalem.

Sitting in a suspended conference room that floats three stories above the ground, enclosed within the “greenest building in the Middle East,” I had the good fortune to meet Torr Polakow of Curiosity Lab. Torr is a PhD student of famed roboticist Dr. Goren Gordon, whose research tests the boundaries of human-robot social interaction. At the 2014 World Science Festival in New York City, Gordon partnered with MIT professor and Jibo founder Dr. Cynthia Breazeal to prove it is possible to teach machines to be curious. The experiment was conducted by embedding into Breazeal’s DragonBot (below) a reinforcement learning algorithm that enabled the robot to acquire its own successful behaviors for engaging humans. The scientists programmed into the robot nine non-verbal expressions and a reward system that was activated based upon its ability to engage crowds: the greater the number of faces staring back at the robot, the bigger the reward. After two hours it had learned that by deploying its “sad face” with big eyes, evoking Puss-in-Boots from the movie Shrek, it captured the most sustained attention from the audience. The work of Gordon and Breazeal opened up the field of social robots and the ability to teach computers human empathy, which will eventually be deployed in fleets of mechanical caregivers for our aging society. Gordon is now working tirelessly to create a mathematical model of “Hierarchical Curiosity Loops (HCL)” for “curiosity-driven behavior” to “increase our understanding of the brain mechanisms behind it.”
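The learning loop described above can be sketched as a simple multi-armed bandit: nine candidate expressions, a reward equal to the number of engaged faces, and an incremental value update. Everything below is a hedged toy, not the DragonBot code: the crowd response is simulated and deterministic, and the real system used camera-based face detection.

```python
# Toy sketch of the DragonBot reward loop: a greedy bandit with
# optimistic initialization over nine expressions, rewarded by how
# many faces look back. Crowd response is simulated here.

EXPRESSIONS = [f"expression_{i}" for i in range(9)]  # e.g. a "sad face"

def faces_engaged(expression):
    # Simulated crowd: one expression reliably draws a bigger audience.
    return 6 if expression == "expression_3" else 2

# Optimistic initial estimates force every expression to be tried once.
values = {e: 10.0 for e in EXPRESSIONS}
counts = {e: 0 for e in EXPRESSIONS}

for trial in range(100):
    choice = max(values, key=values.get)  # exploit the best estimate
    reward = faces_engaged(choice)
    counts[choice] += 1
    # Incremental average update of the expression's estimated reward
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(values, key=values.get)
print(f"Learned best expression: {best} (avg faces {values[best]:.1f})")
# → Learned best expression: expression_3 (avg faces 6.0)
```

The robot’s two-hour discovery of the “sad face” is exactly this dynamic: once one expression’s estimated reward pulls ahead, a greedy policy keeps deploying it.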

Traveling east, we met with Yissum, the tech transfer arm of Hebrew University, the birthplace of the cherry tomato and Mobileye. The history of the automotive startup that became the largest Israeli initial public offering, and later a $15.3 billion acquisition by Intel, is a great example of how today’s Israeli innovations are evolving into larger enterprises more quickly than at any time in history. Mobileye was founded in 1999 by Professor Amnon Shashua, a leader in computer vision and machine learning software. Dr. Shashua realized early on that his technology had wider applications for industries outside of Israel, particularly automotive. Supported by his university, Mobileye’s first product, the EyeQ chip, was piloted by Tier 1 powerhouse Denso within years of inception. While it took Mobileye 18 years to achieve full liquidity, Israeli cybersecurity startup Argus was sold last month to Elektrobit for a rumored $400 million just four years after its founding. Shashua’s latest computer vision startup, OrCam, is focused on giving sight to people with impaired vision.

In addition to Shashua’s successful portfolio of startups, computer vision innovator Professor Shmuel Peleg is probably most famous for helping catch the 2013 Boston Marathon bombers with his image-tracking software. Dr. Peleg’s company, BriefCam, tracks anomalies in multiple video feeds at once, layering simultaneous events on top of one another. This technique autonomously isolated the Boston bombers by programming known aspects of their appearance into the system (e.g., men with backpacks) and then isolating the suspects by their behavior. For example, while everyone else was watching the marathon minutes before the explosion, the video captured the Tsarnaev brothers leaving the scene after dropping their backpacks on the street. BriefCam’s platform made it possible for the FBI to identify the perpetrators within 101 hours of the event, versus manually sifting through the more than 13,000 videos and 120,000 photographs taken by stationary and cell phone cameras.
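The core idea behind the layering technique is to lift sparse events out of hours of footage and pack them into one short, condensed timeline, each clip tagged with its original timestamp. A minimal sketch of that idea, with hypothetical event data (BriefCam’s actual pipeline is proprietary):

```python
# Toy sketch of the video-synopsis idea: pack sparse detected events
# from a long feed into a short condensed timeline. Event data and
# structure here are hypothetical.

# (start_second, duration_seconds, label) of detected activity
events = [
    (1_200, 20, "person drops backpack"),
    (7_400, 15, "person walks against crowd"),
    (18_000, 30, "vehicle stops in place"),
]

synopsis = []
cursor = 0  # playback position in the condensed timeline
for start, duration, label in sorted(events):
    synopsis.append({"synopsis_t": cursor, "original_t": start, "label": label})
    cursor += duration

total_footage = 10 * 3600  # a 10-hour overnight feed
print(f"Condensed {total_footage // 3600}h of footage into {cursor}s")
for clip in synopsis:
    print(f"  t={clip['synopsis_t']:>3}s (orig {clip['original_t']}s) {clip['label']}")
```

Ten hours of footage collapse to just over a minute of review time in this toy example, which is the same compression Peleg describes: “instead of watching 10 hours you can watch ten minutes.”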

This past New Year’s Eve, BriefCam was deployed by New York City to secure the area around Times Square. In a joint statement, the FBI, NYPD, Department of Homeland Security, and the New York Port Authority said they plan to deploy BriefCam’s Video Synopsis across all their video cameras as they “remain concerned about international terrorists and domestic extremists potentially targeting” the celebration. According to Peleg, “You can take the entire night and instead of watching 10 hours you can watch ten minutes. You can detect these people and check what they are planning.”

BriefCam is just one of thousands of new innovations entering the smart-city landscape from Israel. In the past three years, 400 new smart mobility startups have opened shop in Israel, accumulating over $4 billion in investment. During that time, General Motors’ Israeli research & development office grew from a few engineers to more than 200 leading computer scientists and roboticists. In fact, Tel Aviv is now a global innovation hub for every major car manufacturer and automobile supplier, anchoring many new accelerators and international consortiums, from the Drive Accelerator to EcoMotion, which seem to be opening up faster than Starbucks locations in the United States.

As the auto industry goes through its biggest revolution in 100 years, the entire world’s attention is now centered on this small country known for ground-breaking innovation. In the words of Israel’s Innovation Authority, “The acquisition of Mobileye by Intel this past March for $15.3 billion, one of the largest transactions in the field of auto-tech in 2017, has focused the attention of global corporations and investors on the tremendous potential of combining Israeli technological excellence with the autonomous vehicle revolution.” 

 

]]>
Sewing a mechanical future https://robohub.org/sewing-a-mechanical-future/ Tue, 23 Jan 2018 23:53:48 +0000 http://robohub.org/sewing-a-mechanical-future/ Read More ›]]>

SoftWear Automation’s Sewbot. Credit: SoftWear Automation

The Financial Times reported earlier this year that one of the largest clothing manufacturers, Hong Kong-based Crystal Group, proclaimed robotics could not compete with the cost and quality of manual labor. Crystal’s Chief Executive, Andrew Lo, emphatically declared, “The handling of soft materials is really hard for robots.” Lo did leave the door open for future consideration by acknowledging such budding technologies as “interesting.”

One company mentioned by Lo was Georgia Tech spinout SoftWear Automation. SoftWear made news last summer by announcing its contract with an Arkansas apparel factory to update 21 production lines with its Sewbot automated sewing machines. The factory is owned by Chinese manufacturer Tianyuan Garments, which produces over 20 million T-shirts a year for Adidas. The chairman of Tianyuan, Tang Xinhong, boasted about his new investment, saying that with Sewbot bringing costs down to $0.33 a shirt, “Around the world, even the cheapest labor market can’t compete with us.”

The challenge in automating cut-and-sew operations to date has been the handling of textiles, which come in a seemingly infinite number of varieties that stretch, skew, flop, and move with great fluidity. To solve this problem, SoftWear uses computer vision to track each individual thread. According to its issued patents, SoftWear developed a specialized camera that captures threads at 1,000 frames per second and tracks their movements using proprietary algorithms. SoftWear embedded this camera around robot end effectors that manipulate the fabrics much like human fingers. According to a description in IEEE Spectrum, these “micromanipulators, powered by precise linear actuators, can guide a piece of cloth through a sewing machine with submillimeter precision, correcting for distortions of the material.” To further ensure the highest level of quality, Sewbot uses a four-axis robotic arm with a vacuum gripper that picks and places the textiles on a sewing table with a 360-degree conveyor system and spherical rollers to quickly move the fabric panels around.

SoftWear’s CEO, Palaniswamy “Raj” Rajan, explained, “Our vision is that we should be able to manufacture clothing anywhere in the world and not rely on cheap labor and outsourcing.” Rajan appears to be working hard toward that goal, professing that his robots are already capable of reproducing more than 2 million products sold at Target and Walmart. According to IEEE Spectrum, Rajan further asserted in a press release that by the end of 2017, Sewbot would be on track to produce “30 million pieces a year.” It is unclear if that objective was ever met. SoftWear did announce the closing of its $7.5 million financing round by CTW Venture Partners, a firm of which Rajan is also the managing partner.

SoftWear Automation is not the only company focused on automating the trillion-dollar apparel industry. Sewbo has been turning heads with its innovative approach to fabric manipulation. Unlike SoftWear, which is taking the more arduous route of revolutionizing the machines, Sewbo turns textiles into hardened substances that are easy for off-the-shelf robots and existing sewing appliances to handle. Sewbo’s secret sauce, literally, is a water-soluble thermoplastic stiffening solution that turns cloth into a cardboard-like material. Blown away by its creativity and simplicity, I sat down with its inventor, Jonathan Zornow, last week to learn more about the future of fashion automation.

After researching the best polymers to laminate safely onto fabrics using a patent-pending technique, Zornow explained that last year he was able to unveil the “world’s first and only robotically-sewn garment.” Since then, Zornow has been contacted by almost every apparel manufacturer (excluding Crystal) to explore automating their production lines. Zornow has hit a nerve with the industry, especially in Asia, which finds itself in the labor-management business with monthly attrition rates of 10% and huge drop-offs after Chinese New Year. Zornow shared that “many factory owners were in fact annoyed that they couldn’t buy the product today.”

Zornow believes that automation technologies could initially be a boon for bringing small-batch production back to the USA, prompting an industry of “mass customization” closer to the consumer. As reported in 2015, apparel brands have been moving manufacturing back from China with 3D-printing technologies for shoes and knit fabrics. Long-term, Zornow said, “I think that automation will be an important tool for the burgeoning reshoring movement by helping domestic factories compete with offshore factories’ lower labor costs. When automation becomes a competitive alternative, a big part of its appeal will be how many headaches it relieves for the industry.”

To date, fulfilling the promise of “Made in America” has proven difficult; as Zornow explained, we have forgotten how to make things here. In a recent report by the American Apparel & Footwear Association, US apparel manufacturing fell from 50% of clothing sold in 1994 to roughly 3% in 2015, meaning 97% of clothing today is imported. Zornow also shared with me how his competitor was born. In 2002, the US Congress passed the Berry Amendment requiring the armed services to source uniforms domestically, which led DARPA to grant $1.75 million to a Georgia Tech team to build a prototype of an automated sewing machine. As Rajan explains, “The Berry Amendment went into effect restricting the military from procuring clothing that was not made in the USA. Complying with the rule proved challenging due to a lack of skilled labour available in the US that only got worse as the current generation of seamstresses retired with no new talent to take their place. It was under these circumstances that the initial idea for Softwear was born and the company was launched in 2012.”

I first met Zornow at RoboBusiness last September, when he demonstrated for a packed crowd how Sewbo is able to efficiently recreate the 10-20 steps needed to sew a T-shirt. However, producing a typical men’s dress shirt can require up to 80 different steps. Zornow pragmatically explains the road ahead: “It will be a very long time, if ever, before things are 100% automated.” He points to examples of current automation in fabric production, such as dyeing, cutting, and finishing, which augment manual labor. Following this trend, “they’re able to leverage machines to achieve incredible productivity, to the point where the labor cost to manufacture a yard of fabric is usually de minimis.” Zornow foresees a future where his technology is just another step in the production line, as forward-thinking factories are planning two decades ahead, recognizing that in order “to stay competitive they need new technologies” like Sewbo.

]]>
CES 2018 recap: Out of the dark ages https://robohub.org/ces-2018-recap-out-of-the-dark-ages/ Mon, 15 Jan 2018 17:46:05 +0000 http://robohub.org/ces-2018-recap-out-of-the-dark-ages/ Read More ›]]>

As close to a quarter million people descended on a city of six hundred thousand, CES 2018 became the perfect metaphor for the current state of modern society. Unfazed by floods, blackouts, and transportation problems, technology steamrolled ahead. Walking the floor last week at the Las Vegas Convention Center (LVCC), the buzz of the crowd celebrated the long-awaited arrival of the age of social robots, autonomous vehicles, and artificial intelligence.

In the same way that Alexa was last year’s CES story, social robots were everywhere this year, turning the show floor into a Disney-inspired mechatronic festival (see above). The applications promoted ranged from mobile airport/advertising kiosks to retail customer service agents to family in-home companions. One French upstart, Blue Frog Robotics, stood out from the crowd of ivory-colored rolling bots. Blue Frog joined the massive French contingent at Eureka Park and showcased its Buddy robot to the delight of thousands of passing attendees.

When meeting with Blue Frog’s founder, Rodolphe Hasselvander, he described his vision of a robot that is closer to a family pet than a cold-voiced utility assistant. Unlike other entrants in the space, Buddy is more like a Golden Retriever than an iPad on wheels. Its cute, blinking big eyes immediately captivate the user, forming a tight bond between robopet and human. Hasselvander demonstrated how Buddy performs a number of unique tasks, including: patrolling the home perimeter for suspicious activity; looking up recipes in the kitchen; providing instructions for do-it-yourself projects; entertaining the kids with read-along stories; and even reminding grandma to take her medicine. Buddy will be available for “adoption” in time for the 2018 holiday season for $1,500 on Blue Frog’s website.

Blue Frog’s innovation was recognized by the Consumer Electronics Show with a “Best of Innovation” award. In accepting his honor Hasselvander exclaimed, “The icing on the cake is that our Buddy robot is named a ‘Best of Innovation’ Award, as we believe Buddy truly is the most full-featured and best-engineered companion robot in the market today… this award truly shows our ongoing commitment to produce a breakthrough product that improves our lives and betters our world.”

Last year, one of the most impressive CES sites was the self-driving track in the parking lot of the LVCC’s North Hall. This year, the autonomous cars hit the streets with live demos and shared taxi rides. Lyft partnered with Aptiv to offer ride hails to and from the convention center, among other preprogrammed stops. While the cars were monitored by a safety driver, Aptiv explicitly built the experience to “showcase the positive impact automated cars will have on individual lives and communities.” Lyft was not the only self-driving vehicle on the road, autonomous shuttles developed by Boston-based Keolis and French company Navya for the city of Las Vegas were providing trips throughout the conference. Inside the LVCC, shared mobility was a big theme amongst several auto exhibitors (see below).

Torc Robotics made news days before the opening of CES with its autonomous vehicle successfully navigating around a SUV going the wrong way into oncoming traffic (see the video above). Michael Fleming, Chief Executive of Torc, shared with a packed crowd the video feed from the dashcam, providing a play-by-play account. He boasted that a human driven car next to his self-driving Lexus RX 450 followed the same path to avoid the collision. Fleming posted the dashcam video online to poll other human drivers to program a number of scenarios into his AI to avoid future clashes with reckless drivers.

The Virginia-based technology company has been conducting self-driving tests for more than a decade. Torc placed third in the 2007 DARPA Urban Challenge, a prestigious honor for researchers tackling autonomous city driving. Fleming is quick to point out that along with Waymo (Google), Torc is one of the earliest entrants in the self-driving space. “There are an infinite number of corner cases,” explains Fleming, whose cars rely on petabytes of driving data built up over a decade. Fleming explained to a hushed room that each time something out of the ordinary happens, like the wrong-way driver days prior, Torc logs how its car handled the situation and continues to refine the car’s brain. The next time a similar situation comes up, any Torc-enabled vehicle will instantly implement the proper action. Steve Crowe of Robotics Trends said it best after sitting in the passenger seat of Fleming’s self-driving car: “I can honestly say that I didn’t realize how close we were to autonomous driving.”

AI was everywhere last week in Las Vegas, inside and outside the show. Unlike cute robots and cars, artificial intelligence is difficult to display in glitzy booths. While many companies, including Intel and Velodyne, proudly showed off their latest sensors, it became very clear that true AI was the defining difference for many innovations. Tucked in the back of the Sands Convention Center, New York-based Emoshape demonstrated a new type of microchip with embedded AI to synthetically create emotions in machines. The company's Emotion Processing Unit (EPU) is being marketed for the next generation of social robots, self-driving cars, IoT devices, toys, and virtual-reality games.

Emoshape founder Patrick Levy-Rosenthal explained to me his latest partnership with the European cellphone provider Orange. Levy-Rosenthal is working with Orange's Silicon Valley innovation office to develop new types of content with his emotion synthesis technology. As an example, Emoshape's latest avatar, JADE, is an attractive female character with real-time sentiment-based protocols. According to the company's press release, its software "engine delivers high-performance machine emotion awareness, and allows personal assistant, games, avatars, cars, IoT products and sensors to feel and develop a unique personality." JADE and Emoshape's VR gaming platform DREAM are being evaluated by Orange for a number of enterprise and consumer use cases.

Emotions ran high at CES, especially during the two-hour blackout. On my redeye flight back to New York, I dreamt of next year's show, with scores of Buddy robots zipping around the LVCC greeting attendees with their embedded Emoshape personalities, leading people to fleets of Torc-mobiles around the city. Landing at JFK airport, the biting wind and frigid temperature woke me up to the reality that I have 360 more days until CES 2019.


]]>
An emotional year for machines https://robohub.org/an-emotional-year-for-machines/ Fri, 29 Dec 2017 23:09:49 +0000 http://robohub.org/an-emotional-year-for-machines/ Read More ›]]>

Two thousand seventeen certainly has been an emotional year for mankind. While Homo sapiens continue to yell at Alexa and Siri, people's growing willingness to pursue virtual relationships over human ones is startling.

In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”

The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple of weeks ago in Boston. Rana el Kaliouby, founder of the artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a "multi-modal emotion AI" to enable robots to understand human feelings and behavior.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off: more than one thousand global brands are employing its “Emotion AI” to analyze facial imagery to ascertain people’s affinity towards their products.
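The channel weights el Kaliouby cites make the multi-modal argument easy to see in a toy calculation. The sketch below is purely illustrative: the function names, channel labels, and scores are hypothetical, not Affectiva's actual API.

```python
# Toy illustration of multi-modal emotion fusion using the 55/38/7 split
# el Kaliouby cites (body language / tone of voice / word choice).
# All names and scores here are hypothetical, not Affectiva's software.

WEIGHTS = {"body_language": 0.55, "tone_of_voice": 0.38, "word_choice": 0.07}

def fused_sentiment(scores: dict) -> float:
    """Combine per-channel sentiment scores (each in [-1, 1]) into one value."""
    return sum(WEIGHTS[ch] * scores[ch] for ch in WEIGHTS)

# A keyword-only analyzer sees just the 7% channel; a multi-modal one sees all three.
scores = {"body_language": 0.8, "tone_of_voice": 0.5, "word_choice": -0.2}
print(round(fused_sentiment(scores), 3))  # 0.616
```

Note how a text-only system would read this speaker as mildly negative (the word-choice channel is -0.2), while the fused score is clearly positive, which is the gap el Kaliouby describes as "lost in cyberspace."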

Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are informing doctors of the early warning signs of a variety of disorders, including Parkinson's, heart disease, suicide and autism. Unlike Affectiva's facial approach, Beyond Verbal utilizes voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company's patented "Mood Detector" is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.

Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit in the center of the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech to report on the mental state of patients by alerting neurologists of the risk of depression, concussion, and other cognitive impairments.

“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system…What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective features that are acoustics in the voice. So we’re really measuring not what people are saying, in the way Siri does, we’re focusing on how you’re saying what you’re saying and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”

While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotional tracking algorithms can provide real-time monitoring for semi- and fully autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and passengers. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade, technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a "Sleep Detection and Driver Alert Apparatus" (US Patent: 5689241 A) using imaging to track eye movements and thermal sensors to monitor "ambient temperatures around the facial areas of the nose and mouth" (a.k.a., breathing). Today, with the advent of cloud computing and deep learning networks, the Clarkes' invention could save even more lives.
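Eye-tracking alertness systems of the kind the Clarkes patented are commonly framed today as a rolling eye-closure metric: the fraction of recent video frames in which the eyes are mostly closed. A minimal sketch follows; the class, window size, and thresholds are hypothetical tuning choices, not any vendor's actual system.

```python
# Minimal rolling eye-closure drowsiness check: alert when the eyes are
# near-closed in too large a fraction of recent frames. All thresholds
# here are hypothetical; real systems tune them per camera and driver.

from collections import deque

class DrowsinessMonitor:
    def __init__(self, window: int = 90, closed_ratio_alert: float = 0.3):
        self.frames = deque(maxlen=window)   # rolling window of recent frames
        self.alert_level = closed_ratio_alert

    def update(self, eye_openness: float) -> bool:
        """eye_openness in [0, 1] from the vision pipeline; True means alert."""
        self.frames.append(eye_openness < 0.2)  # count near-closed frames
        closed_ratio = sum(self.frames) / len(self.frames)
        return closed_ratio >= self.alert_level

monitor = DrowsinessMonitor()
# Simulate 60 wide-open frames followed by 40 near-closed frames.
alerts = [monitor.update(0.9) for _ in range(60)] + [monitor.update(0.05) for _ in range(40)]
print(alerts[-1])  # True: eyes closed in over 30% of the last 90 frames
```

The rolling window is the important design choice: a single blink never triggers, but a sustained pattern of closures does, which is what separates drowsiness from normal eye movement.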

Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry's rush towards autonomous driving, which in his opinion might be "side-stepping the proper technology development path and overlooking essential technologies needed to help us get there." El Dokor is referring to Tesla's rush to release its Autopilot software last year, which led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this misuse led to the untimely death of Joshua Brown.

To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined platform of hardware and software technologies of "in-cabin cameras that are monitoring drivers and where they are looking." In El Dokor's opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines "visual input from the in-cabin camera(s) with input from the car's telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual's driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn't simply about a vision system, but is rather an advanced multi-sensor learning system." This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to determine the driver safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.

The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already in Japanese retail settings, SoftBank's Pepper robot scans people's faces and listens to tonal inflections to determine correct selling strategies. Emotional AI software is the first step of many that will be heralded in the coming year. As a prelude to what's to come, first robot citizen Sophia declared last month, "The future is, when I get all of my cool superpowers, we're going to see artificial intelligence personalities become entities in their own rights. We're going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between."

]]>
Robots solving climate change https://robohub.org/robots-solving-climate-change/ Wed, 13 Dec 2017 22:39:45 +0000 http://robohub.org/robots-solving-climate-change/ Read More ›]]>

The two biggest societal challenges for the twenty-first century are also its biggest opportunities: automation and climate change. These forces of mankind and nature intersect beautifully in the alternative energy market. The epitaph of fossil fuels, with their dark cloud burning a hole in the ozone layer, is giving way to a rise of solar and wind farms worldwide. Servicing these installations are fleets of robots and drones, providing greater possibilities of expanding CleanTech to the most remote regions of the planet.

As 2017 comes to an end, the solar industry has plateaued for the first time in ten years due to proposed budget cuts by the Trump administration. Solar has had quite a run, with an average annual growth rate of more than 65% over the past decade, fueled largely by federal subsidies. The progressive policy of the Obama administration made the US a leader in alternative energy, resulting in a quarter-million new jobs. While the Federal Government now re-embraces the antiquated allure of fossil fuels, global demand for solar has been rising as infrastructure costs decline by more than half, providing new opportunities without government incentives.

Prior to the renewable energy boom, unattractive roof tiles were the most visible image of solar. While Elon Musk and others are developing more aesthetically pleasing roofing materials, the business model of house-by-house conversion has proven inefficient. Instead, the industry is focusing on "utility-scale" solar farms that will be connected to the national grid. Until recently, such farms have been saddled with ballooning servicing costs.

In a report published last month, leading energy risk management company DNV GL concluded that the alternative energy market could benefit greatly from utilizing Artificial Intelligence (AI) and robotics in designing, developing, deploying and maintaining utility farms. The study, "Making Renewables Smarter: The Benefits, Risks, And Future of Artificial Intelligence In Solar And Wind," cited the "fields of resource forecasting, control and predictive maintenance" as ripe for tech disruption. Elizabeth Traiger, co-author of the report, explained, "Solar and wind developers, operators, and investors need to consider how their industries can use it, what the impacts are on the industries in a larger sense, and what decisions those industries need to confront."

Since solar farms are often located in arid, dusty locations, one of the earliest use cases for unmanned systems was self-cleaning robots. As reported in 2014, Israeli company Ecoppia developed a patented waterless panel-washing platform to keep solar up and running in the desert. Today, Ecoppia is cleaning 10 million panels a month. Eran Meller, Chief Executive of Ecoppia, boasts, “We’re pleased to bring the experience gained over four years of cleaning in multiple sites in the Middle East. Cleaning 80 million solar panels in the harshest desert conditions globally, we expect to continue to play a leading role in this growing market.”

Since Ecoppia began selling commercially, there have been other entries into the unmanned maintenance space. This past March, Exosun became the latest to offer autonomous cleaning bots. The track equipment manufacturer claims that robotic systems can cut production losses by 2%, promising a return on investment within 18 months. After its acquisition of Greenbotics in 2013, US-based SunPower also launched its own mechanized cleaning platform, the Oasis, which combines mobile robots and drones.

https://youtu.be/fGcfeBswv_4

SunPower brags that its products are ten times faster than traditional (manual) methods while using 75% less water. While SunPower and Exosun leverage their large sales footprints and existing servicing and equipment networks, Ecoppia is still the product leader. Its proprietary waterless system is the most cost-effective and connected solution on the market. Via a robust cloud network, Ecoppia can sense weather fluctuations and automatically schedule emergency cleanings. Anat Cohen Segev, Director of Marketing, explains, "Within seconds, we would detect a dust storm hitting the site, the master control will automatically suggest an additional cleaning cycle and within a click the entire site will be cleaned." According to Segev, the robots remove 99% of the dust on the panels.
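The rule Segev describes (detect a dust event, then suggest an extra cleaning cycle) can be sketched as a simple threshold trigger. The sensor fields and threshold values below are hypothetical illustrations, not Ecoppia's actual master-control logic.

```python
# Sketch of a weather-triggered cleaning scheduler of the kind Segev
# describes. Field names and thresholds are hypothetical, not Ecoppia's.

DUST_STORM_PM10 = 150.0   # particulate level we treat as a dust event
STORM_WIND_KPH = 30.0     # sustained wind that carries dust onto panels

def extra_cleaning_needed(weather: dict, scheduled_today: bool) -> bool:
    """Suggest an additional cleaning cycle when a dust event is detected."""
    dust_event = (weather["pm10"] > DUST_STORM_PM10
                  and weather["wind_kph"] > STORM_WIND_KPH)
    return dust_event and not scheduled_today

print(extra_cleaning_needed({"pm10": 420.0, "wind_kph": 55.0}, scheduled_today=False))  # True
print(extra_cleaning_needed({"pm10": 40.0, "wind_kph": 10.0}, scheduled_today=False))   # False
```

The point of the cloud network is that this decision runs against live weather feeds for every site at once, so a storm can trigger a fleet-wide cleaning "within a click."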

Drone companies are also entering the maintenance space. Upstart Aerial Power claims to have designed a "SolarBrush" quadcopter that cleans panels. The solar-powered drone is claimed to reduce a solar farm's operational costs by 60%. SolarBrush also promises an 80% savings over existing solutions like Ecoppia, since there are no installation costs. However, Aerial Power has yet to fly its product in the field, as it is still in development. SunPower, meanwhile, is selling its own drone survey platform to assess development sites and oversee field operations. Matt Campbell, Vice President of Power Plant Products for SunPower, stated, "A lot of the beginning of the solar industry was focused on the panel. Now we're looking at innovation all around the rest of the system. That's why we're always surveying new technology — whether it's a robot, whether it's a drone, whether it's software — and saying, 'How can this help us reduce the cost of solar, build projects faster, and make them more reliable?'"

In 2008, the US Department of Energy published an ambitious proposal, "20% Wind Energy by 2030: Increasing Wind Energy's Contribution to U.S. Electricity Supply." Now, thirteen years before that goal, less than 5% of US energy is derived from wind. Developing wind farms is not novel; however, to achieve 20% by 2030, the US needs to begin looking offshore. To put it in perspective, oceanic wind farms could generate more than 2,000 gigawatts of clean, carbon-free energy, or twice as much electricity as Americans currently consume. To date, there is only one wind farm operating off the coast of the United States. While almost every coastal state has proposals for offshore farms, the industry has been stalled by politics and servicing hurdles in dangerous waters.

For more than a decade, the United Kingdom has led the development of offshore wind farms. At the University of Manchester, a leading group of researchers has been exploring a number of AI, robotic and drone technologies for remote inspections. The consortium of academics estimates that these technologies could generate more than $2.5 billion by 2025 in the UK alone. The global offshore market could reach $17 billion by 2020, with 80% of the costs coming from operations and maintenance.

Last month, Innovate UK awarded $1.6 million to Perceptual Robotics and VulcanUAV to incorporate drones and autonomous boats into ocean inspections. These startups follow the business model of successful US inspection upstarts like SkySpecs. Launched three years ago, SkySpecs claims its autonomous drones reduce turbine inspections from days to minutes. Danny Ellis, SkySpecs Chief Executive, claims, "Customers that could once inspect only one-third of a wind farm can now do the whole farm in the same amount of time." Last year, British startup Visual Working accomplished the herculean feat of surpassing 2,000 blade inspections.

In the words of Paolo Brianzoni, Chief Executive of Visual Working: "We are not talking about what we intend to accomplish in the near future – but actually performing our UAV inspection service every day out there. Many in the industry are using a considerable amount of time discussing and testing how to use UAV inspections in a safe and efficient way. We have passed that point and have alone in the first half of 2016 inspected 250 turbines in the North Sea averaging more than 10 WTG per day, and still keeping to the highest quality standards."

This past summer, Europe achieved another clean-energy milestone with the announcement of three new offshore wind farms, for the first time without government subsidies. By bringing down the cost structure, autonomous systems are turning the tide of alternative energy regardless of government investment. Three days before leaving office, President Barack Obama wrote in the journal Science that "Evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term." He declared that it is time to move past common misconceptions that climate policy is at odds with business; "rather, it can boost efficiency, productivity, and innovation."

]]>
So where are the jobs? https://robohub.org/so-where-are-the-jobs/ Thu, 07 Dec 2017 21:31:27 +0000 http://robohub.org/so-where-are-the-jobs/ Read More ›]]>

Dan Burstein, reporter, novelist and successful venture capitalist, declared Wednesday night at RobotLab‘s winter forum on Autonomous Transportation & SmartCities that within one hundred years the majority of jobs in the USA (and the world) could disappear, transferring the mantle of work from humans to machines. Burstein cautioned the audience that unless governments address the threat of millions of unemployable humans with a wider safety net, democracy could fail. The wisdom of one of the world’s most successful venture investors did not fall on deaf ears.

In their book Only Humans Need Apply, Thomas Davenport and Julia Kirby also warn that humans are too easily ceding their future to machines. "Many knowledge workers are fearful. We should be concerned, given the potential for these unprecedented tools to make us redundant. But we should not feel helpless in the midst of the large-scale change unfolding around us," state Davenport and Kirby. The premise of their book is not to deny the disruption by automation, but to empower its readers with the knowledge of where jobs are going to be created in the new economy. The authors suggest that robots should be looked at as augmenting humans, not replacing them. "Look for ways to help humans perform their most human and most valuable work better," they advise. The book suggests that in order for human society to survive long-term, a new social contract has to be drawn up between employer and employee. The authors optimistically predict that corporations' efforts to keep human workers employable will become part of their "social license to operate."


In every industrial movement since the invention of the power loom and cotton gin, there have been great transfers of wealth and jobs. History is riddled with fear of the unknown, countered by the unwavering human spirit of invention. As societies evolve under the pressure of technological adoption, it will be the progressive thinkers embracing change who lead movements of people to new opportunities.

Burstein’s remarks were very timely, as this past week McKinsey & Company released a new report entitled Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey’s analysis took a global view of 46 countries that comprise 90% of the world’s Gross Domestic Product, and then focused in particular on six industrial countries with varying income levels: China, Germany, India, Japan, Mexico, and the United States. The introduction explains, “For each, we modeled the potential net employment changes for more than 800 occupations, based on different scenarios for the pace of automation adoption and for future labor demand,” ultimately concluding where the new jobs will be in 2030.

A prominent factor in the McKinsey report is that over the next thirteen years global consumption is anticipated to grow by $23 trillion. Based upon this trajectory, McKinsey estimates that between 300 and 450 million new jobs would be generated worldwide, especially in the Far East. In addition, dramatic shifts in demographics and the environment will lead to the expansion of jobs in healthcare, IT consulting, construction, infrastructure, clean energy, and domestic services. The report estimates that automation will displace between 400 and 800 million people by 2030, forcing 75 to 375 million working men and women to switch professions and learn new skills. In order to survive, populations will have to embrace the concept of lifelong learning. Based upon the math, more than half of the displaced will be unemployable in their current professions.


According to McKinsey, a bright spot for employment could be the 80 to 200 million new jobs created by modernizing aging municipalities into "Smart Cities." McKinsey's conclusions were echoed by Rhonda Binder and Mitchell Kominsky of Venture Smarter in their RobotLab presentation last week. Binder presented her experiences turning around the Jamaica Business Improvement District of Queens, New York by implementing her "Three T's Strategy – Tourism, Transportation and Technology." Binder stated that cities offer the perfect laboratory for autonomous systems and sensor-based technologies to improve the quality of life of residents. To support these endeavors, a hiring surge of urban technologists, architects, civil engineers, and construction workers across all trades could ensue in the next decade.

This is further validated by Google subsidiary Sidewalk Labs' recent partnership with Toronto, Canada, to redevelop and digitize 800 acres of the city's waterfront property. Dan Doctoroff, Chief Executive Officer of Sidewalk Labs, explained that the goal of the partnership is to prove what is possible by building a digital city within an existing city to test autonomous transportation, new communication access, healthcare delivery solutions and a variety of urban planning technologies. Doctoroff's sentiment was endorsed by Canadian Prime Minister Justin Trudeau, who said that "Sidewalk Toronto will transform Quayside [the waterfront] into a thriving hub for innovation and a community for tens of thousands of people to live, work, and play." The access to technology not only offers the ability to improve the quality of living within the city, but fosters an influx of sustainable jobs for decades.

In addition to updating crumbling infrastructure, aging humans will be driving an increase in global healthcare services, particularly demand for in-home caregivers and aid workers. According to McKinsey, there will be over 300 million people globally over 65 years old by 2030, leading to 50 to 80 million new jobs. Geriatric medicine is leading new research in artificial intelligence and robotics for aging-in-place populations, demanding more doctors, nurses, medical technicians, and personal aides.


Aging will not be the only challenge facing the planet. Global warming could lead to an explosion of jobs aimed at turning back the clock on climate change. The rush to develop advances in renewable energy worldwide is already generating billions of dollars of new investment and demand for high-skill and manual labor. As an example, the American solar industry employed a record-high 260,077 workers in late 2016, growth of at least 20% over the past four years. In New York alone, clean-energy jobs grew 7% in 2016, to close to 150,000. McKinsey stated that tens of millions of new professions could be created by 2030 in developing, manufacturing and installing energy-efficient innovations.

McKinsey also estimates that automation itself will bring new employment, with corporate-technology spending hitting record highs. While the number of jobs added to support the deployment of machines is smaller than in the other industries above, these occupations offer higher wages. Robots could potentially create 20 to 50 million new "grey collar" professions globally. In addition, re-training workers for these professions could lead to a new workforce of 100 million educators.


The report does not shy away from the fact that a major disruption is on the horizon for labor. In fact, the authors hope that by researching pockets of positive growth, humans will not be helpless victims. As Devin Fidler of the Institute for the Future suggests, "As basic automation and machine learning move toward becoming commodities, uniquely human skills will become more valuable. There will be an increasing economic incentive to develop mass training that better unlocks this value."

A hundred years ago the world experienced a dramatic shift from an agrarian lifestyle to manufacturing. Since then, there have been revolutions in mass transportation and communications. No one could have predicted the massive transfer of jobs from the fields to the urban factories at the beginning of the twentieth century. Likewise, it is difficult to know what the next hundred years have in store for human, and robot, kind.

]]>
The advantage of four legs https://robohub.org/the-advantage-of-four-legs/ Thu, 23 Nov 2017 23:41:28 +0000 http://robohub.org/the-advantage-of-four-legs/ Read More ›]]>

Shortly after SoftBank acquired his company last October, Marc Raibert of Boston Dynamics confessed, "I happen to believe that robotics will be bigger than the Internet." Many sociologists regard the Internet as the single biggest societal invention since the dawn of the printing press in 1440. To fully understand Raibert's point of view, one needs to analyze his zoo of robots, which are best known for their awe-striking gait, balance and agility. The newest creation to walk out of Boston Dynamics' lab is SpotMini, the latest evolution of mechanical canines.

Big Dog, Spot’s unnerving ancestor, first came to public view in 2009 and has racked up quite a YouTube following with more than six and a half million views. The technology of Big Dog led to the development of a menagerie of robots, including more dogs, cats, mules, fleas and creatures that have no organic counterparts. Most of the mechanical barn is made up of four-legged beasts, with the exception of its humanoid robot (Atlas) and the biped wheeled robot (Handle). Raibert’s vision of legged robotics spans several decades, with his work at MIT’s Leg Lab. In 1992, Raibert spun his lab out of MIT and founded Boston Dynamics. In his words, “Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot [Atlas] is a step along the way.” The creepiness of Raibert’s Big Dog has given way to SpotMini’s more polished look, which incorporates 3D vision sensors on its head. The twenty-four-second teaser video has already garnered nearly six million views in the few days since its release and promises viewers hungry for more pet tricks to “stay tuned.”

There are clear stability advantages to quadrupeds over other approaches (bipeds, wheels and treads/track plates) across multiple types of terrain and elevations. At TED last year, Raibert demonstrated how his robo-pups, instead of drones and rovers, could be used for package delivery by easily ascending and descending stairs or other vertical obstacles. By navigating the physical world with an array of perceptive sensors, Boston Dynamics is really creating "data-driven hardware design." According to Raibert, one of the cool things about a legged robot is that it is omnidirectional: "it can go sideways, it can turn in place." This is useful for a variety of work scenarios, from logistics to warehousing to working in the most dangerous environments, such as the Fukushima nuclear site.
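The stability advantage has a simple geometric core: a quadruped can lift one leg and still keep its center of mass over the triangle formed by the three remaining feet, while a biped's support polygon shrinks to a single foot the moment it steps. A minimal static-stability check is sketched below; the coordinates are illustrative and do not describe any real robot.

```python
# Static stability check for a legged robot: the center of mass (COM),
# projected onto the ground, must fall inside the polygon formed by the
# supporting feet. Coordinates below are illustrative only.

def cross_sign(o, a, p):
    """Signed area test: which side of segment o->a the point p lies on."""
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def inside_triangle(p, tri):
    """True if point p lies inside the triangle of 3 ground-contact feet."""
    s = [cross_sign(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

# A quadruped lifting one foot still has three feet on the ground:
support = [(0.6, 0.0), (0.6, 0.4), (0.0, 0.0)]  # remaining contact feet (m)
com = (0.35, 0.15)                               # projected center of mass
print(inside_triangle(com, support))  # True: statically stable on three legs
```

Dynamic gaits like those of SpotMini or ANYmal go well beyond this static test (they balance through momentum, as a running animal does), but the support-polygon check is the baseline that makes four legs forgiving where two legs are not.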

Boston Dynamics is not the only quadruped provider; recent upstarts have entered the market by utilizing Raibert's research as an inspiration for their own bionic creatures. Chinese roboticist Xing Wang is unabashed in his gushing admiration for the founder of Boston Dynamics: "Marc Raibert … is my idol," he said in a recent interview with IEEE Spectrum Magazine. However, his veneration for Raibert has not stopped him from founding a competitive startup. Unitree Robotics aims to create quadruped robots that are as affordable as smartphones and drones. While Boston Dynamics has not sold its robots commercially, many have speculated that its current designs would cost hundreds of thousands of dollars. In the spirit of true flattery, Unitree's first robot is, of course, a quadruped dog named Laikago. Wang aims to sell Laikago for under $30,000 to science museums and eventually as a companion robot. When comparing his product to Raibert's, Wang said he wanted to "make quadruped robots simpler and smaller, so that they can help ordinary people with things like carrying objects or as companions." Wang boasts of Laikago's three degrees of freedom (forward, backward, and sideways), its ability to scale rough terrain, and its capacity to pass anyone's kick test.

In addition to omnidirectional benefits, locomotion is a big factor for quadrupedal machines. Professor Marco Hutter of ETH Zürich, Switzerland, is the inventor of ANYmal, an autonomous robot built for the most rugged and challenging environments. Using its proprietary "dynamic running" locomotion, Hutter has deployed the machine successfully in multiple industrial settings, including the rigorous ARGOS Challenge (Autonomous Robot for Gas and Oil Sites). The objective of ARGOS is to develop "a new generation of autonomous robots" for the energy industry, specifically capable of performing 'dirty & dangerous' inspection tasks, such as "detecting anomalies and intervening in emergency situations." Unlike a static human frame or bipedal humanoid, ANYmal is able to perform dynamic maneuvers with its four legs to find footholds blindly, without the need for vision sensors. While wheeled systems literally get stuck in the mud, Hutter's mechanical beast can work continuously: above ground, underneath the surface, falling, spinning and bouncing upright to perform a mission with precise accuracy. In addition, ANYmal is loaded with a package of sensors which coordinate movements, map point-cloud environments, detect gas leaks, and listen for fissures in pipelines. Hutter explains that oil and gas sites are built for humans, with stairs and varying elevations that make them impossible for biped or wheeled robots. However, a quadruped can use its actuators and integrated springs to move efficiently within the site through dynamic balance and complex maneuver planning. These high-mobility legged systems can fully rotate joints, crouch low to the earth and flip in place to create footholds. In many ways they are like large insects creating their own tracks. Hutter says that while biology is a source of inspiration, "we have to see what we can do better and different for robotics," and only then can we "build a machine that is better than nature."

The idea of improving on nature is not new; Greek mythology is littered with half-man/half-beast demigods. Taking a page from the Greeks, Jiren Parikh imagines a world where nature is fused with machines. Parikh is the Chief Executive of Ghost Robotics, the maker of “Minitaur,” the newest four-legged creation. Minitaur is smaller than SpotMini, Laikago, or ANYmal, as it is specifically designed to be a low-cost, high-performance alternative that can easily scale over or under any surface, regardless of weather, friction, or footing. In Parikh’s view, the purpose of legged devices is “to move over unstructured terrains like stairs, ladders, fences, rock fields, ice, in and under water.” Minitaur can actually “feel the environment at a much more granular level and allow for a greater degree of force control for maneuverability.” Parikh explains that quads are inherently more energy efficient, using force actuation and springs to store energy by alternating movements between limbs. Minitaur’s smaller frame leverages this to maneuver more easily around unstructured environments without damaging the assets on the ground. By analogy, Parikh compares quad solutions to other mobile methods: “while a tank in comparison is the perfect device for unstructured terrain it only works if one doesn’t care about destroying the environment.” Ghost Robotics is very aware of the high value its customers place on their sites, as Parikh is planning on distributing its low-cost solution to a number of “industrial, infrastructure, mining and military verticals.” Essentially, Minitaur is “a mobile IoT platform” regardless of the situation on the ground, indoors or outdoors. Speaking long term, Parikh envisions a world where Ghost Robotics is at the forefront of retail and home use cases, from delivery bots to family pets. Parikh boasts, “You certainly won’t be woken up at 5 AM to go for a walk.”

The topic of autonomous robots will be discussed at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein / Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration.

]]>
The race to own the autonomous super highway: Digging deeper into Broadcom’s offer to buy Qualcomm https://robohub.org/the-race-to-own-the-autonomous-super-highway-digging-deeper-into-broadcoms-offer-to-buy-qualcomm/ Tue, 14 Nov 2017 18:09:43 +0000 http://robohub.org/the-race-to-own-the-autonomous-super-highway-digging-deeper-into-broadcoms-offer-to-buy-qualcomm/ Read More ›]]>

Governor Andrew Cuomo of the State of New York declared last month that New York City will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first car company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).

As unmanned systems speed ahead toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinated data systems, bounced between orbiting satellites, to provide the positioning and communication links needed to accurately navigate our world. As competing technologies and standards wrestle for adoption in this nascent marketplace, the only certainty is the critical connection between Earth and space. Given the estimated growth of autonomous systems on the road, in the workplace and at home over the next ten years, most unmanned systems will rely heavily on commercial space providers fulfilling their ambitious plans to launch thousands of new satellites into an already crowded low Earth orbit.

As shown by the chart below, the entry of autonomous systems will drive an explosion of data communications between terrestrial machines and space, leading to tens of thousands of new rocket launches over the next two decades. A study by Northern Sky Research (NSR) projects that by 2023 there will be an estimated 5.8 million satellite Machine-to-Machine (M2M) and Internet of Things (IoT) connections among approximately 50 billion global Internet-connected devices. To meet this demand, satellite providers are racing to the launch pads and raising billions in capital even before firing up the rockets. As an example, OneWeb, which has raised more than $1.5 billion from Softbank, Qualcomm and Airbus, plans to launch the first 10 satellites of its constellation in 2018, a fleet that will eventually grow to 650 over the next decade. OneWeb competes with SpaceX, Boeing, Inmarsat, Iridium, and others in deploying new satellites offering high-speed communication spectrum, such as Ku band (12 GHz – 18 GHz), K band (18 GHz – 27 GHz), Ka band (27 GHz – 40 GHz) and V band (40 GHz – 75 GHz). The opening of new higher-frequency spectrum is critical to support the explosion of data demand. Today there are more than 250 million cars on the road in the United States; in the future these cars will connect to the Internet, running 200 million lines of code and transmitting 50 billion pings of data to safely and reliably transport passengers to their destinations every day.

Satellites already provide millions of GPS coordinates for connected systems. However, GPS accuracy has been off by as much as 5 meters, which in a fully autonomous world could mean the difference between life and death. Chip manufacturer Broadcom aims to reduce the error margin to 30 centimeters. According to a press release this summer, Broadcom’s technology works better in concrete canyons like New York’s, which have plagued Uber drivers for years with wrong fare destinations. Using new L5 satellite signals, the chips can calculate position fixes faster and with lower power consumption (see diagram). Manuel del Castillo of Broadcom explained, “Up to now there haven’t been enough L5 satellites in orbit.” Currently there are approximately 30 L5 satellites in orbit; however, del Castillo suggests that could be enough to begin shipping the new chip next year: “[Even in a city’s] narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”
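To put those numbers in perspective, a back-of-the-envelope calculation shows why the jump from meter-level to centimeter-level accuracy matters for driving. The lane width used here (about 3.7 meters, a typical US highway figure) is an assumption, not from the article:

```python
# Illustrative arithmetic only: compare the article's two GPS error figures
# against a typical US highway lane width (3.7 m -- an assumed figure).
LANE_WIDTH_M = 3.7
legacy_error_m = 5.0   # error cited for conventional GPS
l5_error_m = 0.30      # error targeted by Broadcom's L5-capable chip

def lanes_of_uncertainty(error_m, lane_width_m=LANE_WIDTH_M):
    """How many lane-widths a +/- error band spans."""
    return 2 * error_m / lane_width_m

print(lanes_of_uncertainty(legacy_error_m))  # spans roughly 2.7 lanes
print(lanes_of_uncertainty(l5_error_m))      # spans well under a fifth of a lane
```

With a 5-meter error the receiver cannot even tell which lane the car is in; at 30 centimeters, lane-level positioning becomes feasible.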

David Bruemmer, a leading roboticist and business leader in this space, explained to me this week that GPS is inherently deficient, even with L5 satellite data. In addition, current autonomous systems rely too heavily on vision systems like LIDAR and cameras, which can only see what is in front of them, not around the corner. In Bruemmer’s opinion, the only solution providing the greatest coverage is one that combines vision and GPS with point-to-point communications such as Ultra-Wideband and RF beacons. Bruemmer’s company, Adaptive Motion Group (AMG), is a leading innovator in this space. Ultimately, for AMG to work efficiently with unmanned systems, it requires a communication pipeline wide enough to transmit space signals within a network of terrestrial high-speed frequencies.
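As a sketch of the fusion idea Bruemmer describes (this is not AMG's actual algorithm, and the numbers are invented), a one-dimensional Kalman-style update shows how a coarse GPS fix and a precise local beacon fix can be combined by weighting each inversely to its error variance:

```python
# Minimal 1-D sensor-fusion sketch: combine a noisy absolute fix (GPS) with
# a precise local fix (e.g., a UWB beacon range solution). Each estimate is
# weighted by the inverse of its error variance.
def fuse(gps_pos, gps_sigma, uwb_pos, uwb_sigma):
    w_gps = 1.0 / gps_sigma ** 2
    w_uwb = 1.0 / uwb_sigma ** 2
    fused = (w_gps * gps_pos + w_uwb * uwb_pos) / (w_gps + w_uwb)
    fused_sigma = (1.0 / (w_gps + w_uwb)) ** 0.5
    return fused, fused_sigma

# GPS says 105.0 m (+/-5 m); a UWB beacon solution says 100.2 m (+/-0.3 m).
pos, sigma = fuse(105.0, 5.0, 100.2, 0.3)
# The fused estimate sits near the precise UWB fix, with uncertainty
# slightly smaller than either input alone.
```

The design point is that neither source need be trusted alone: the fused estimate degrades gracefully when one channel (urban-canyon GPS, an occluded beacon) gets noisy.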

AMG is not the only company focused on utilizing a wide breadth of data points to accurately steer robotic systems. Sandy Lobenstein, Vice President of Toyota Connected Services, explains that the Japanese carmaker has been working with the satellite antenna company Kymeta to expand data-connectivity bandwidth in preparation for Toyota’s autonomous future. “We just announced a consortium with companies such as Intel and a few others to find ways to use edge computing and create standards around managing data flow in and out of vehicles with the cellphone industries or the hardware industries. Working with a company like Kymeta helps us find ways to use their technology to handle larger amounts of data and make use of large amounts of bandwidth that is available through satellite,” said Lobenstein.


In a world of fully autonomous vehicles, the road of the next decade truly will become an information superhighway, with data streams flowing down from thousands of satellites to receiving towers littered across the horizon, bouncing between radio masts, antennas and cars (Vehicle-to-Vehicle [V2V] and Vehicle-to-Infrastructure [V2X] communications). Last week, Broadcom ratcheted up its autonomous vehicle business by announcing the largest tech deal ever: a $103 billion offer to acquire Qualcomm. The acquisition would enable Broadcom to dominate both sides of autonomous communications: the satellite uplinks and GPS feeding the car, and the vehicle-to-vehicle links between cars. Broadcom CEO Hock Tan said, “This complementary transaction will position the combined company as a global communications leader with an impressive portfolio of technologies and products.” Days earlier, Tan attended a White House press conference with President Trump, boasting of plans to move Broadcom’s corporate office back to the United States, a very timely move as federal regulators will have to approve the Broadcom/Qualcomm merger.

The merger news comes months after Intel acquired the Israeli computer vision company Mobileye for $15 billion. In addition to Intel, Broadcom also competes with Nvidia, which is leading the charge to enable artificial intelligence on the road. Last month, Nvidia CEO Jensen Huang predicted that “It will take no more than 4 years to have fully autonomous cars on the road. How long it takes for the vast majority of cars on the road to become that, it really just depends.” Nvidia, traditionally a computer graphics chip company, has invested heavily in developing AI chips for automated systems. Huang shares his vision: “There are many tasks in companies that can be automated… the productivity of society will go up.”

Industry consolidation represents the current state of the autonomous car race as chip makers vie to own the next generation of wireless communications. Tomorrow’s 5G mobile networks promise a tenfold increase in data streams for phones, cars, drones, industrial robots and smart-city infrastructure. Researchers estimate that the number of Internet-connected chips could grow from 12 million to 90 million by the end of this year, making connectivity as ubiquitous as gasoline for connected cars. Karl Ackerman, analyst at Cowen & Co., said it best: “[Broadcom] would basically own the majority of the high-end components in the smart phone market and they would have a very significant influence on 5G standards, which are paramount as you think about autonomous vehicles and connected factories.”

The topic of autonomous transportation and smart cities will be featured at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein / Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration – RSVP today.

]]>
Brain surgery: The robot efficacy test? https://robohub.org/brain-surgery-the-robot-efficacy-test/ Thu, 02 Nov 2017 17:07:16 +0000 http://robohub.org/brain-surgery-the-robot-efficacy-test/ Read More ›]]>

An analysis by Stanford researchers shows that the use of robot-assisted surgery to remove kidneys wasn’t always more cost-effective than using traditional laparoscopic methods.
Master Video/Shutterstock

The internet hummed last week with reports that “Humans Still Make Better Surgeons Than Robots.” Stanford University Medical Center set off the tweetstorm with its seemingly scathing report on robotic surgery. Reading the research, which covered 24,000 patients with kidney cancer, I concluded that the problem lay with humans overcharging patients rather than with any flaw in the technology. In fact, the study praised robotic surgery for complicated procedures and suggested the fault lay with hospitals unnecessarily pushing robotic surgery over conventional methods for simple operations, which led to “increases in operating times and cost.”

Dr. Benjamin Chung, the author of the report, stated that the expenses were due to either “the time needed for robotic operating room setup” or the surgeon’s “learning curve” with the new technology. Chung defended the use of robotic surgery by claiming that “surgical robots are helpful because they offer more dexterity than traditional laparoscopic instrumentation and use a three-dimensional, high-resolution camera to visualize and magnify the operating field. Some procedures, such as the removal of the prostate or the removal of just a portion of the kidney, require a high degree of delicate maneuvering and extensive internal suturing that render the robot’s assistance invaluable.”

Chung’s concern was due to the dramatic increase in hospitals selling robotic-assisted surgeries to patients rather than more traditional methods for kidney removals. “Although the laparoscopic procedure has been standard care for a radical nephrectomy for many years, we saw an increase in the use of robotic-assisted approaches, and by 2015 these had surpassed the number of conventional laparoscopic procedures,” explains Chung. “We found that, although there was no statistical difference in outcome or length of hospital stay, the robotic-assisted surgeries cost more and had a higher probability of prolonged operative time.”

The dexterity and precision of robotic instruments has been proven in live operating theaters for years, as well as in a multitude of concept videos on the internet of fruit being autonomously stitched up. Dr. Joan Savall, also of Stanford, developed a robotic system that is even capable of performing (unmanned) brain surgery on a live fly. For years, medical students have been ripping the heads off of drosophila with tweezers in the hopes of learning more about the insect’s anatomy. Instead, Savall’s machine gently follows the fly using computer vision to precisely target its thorax: literally a moving bullseye the size of a period. The robot is so careful that the insect is unfazed and flies off after the procedure. Clearly, the robot is quicker and more exacting than even the most careful surgeon. According to the journal Nature Methods, the system can operate on 100 flies an hour.

Last week, Dr. Dennis Fowler of Columbia University, CEO of Platform Imaging, said that he imagines a future in which the surgeon will program the robot to finish the procedure and stitch up the patient. Senior surgeons already pass such mundane tasks to their medical students, “so why not a robot?” Platform Imaging is an innovative startup that aims to reduce the amount of personnel and equipment a hospital needs when performing laparoscopic surgeries. Long term, it plans to add snake robots to its flexible camera to empower surgeons with the greatest amount of maneuverability. In addition to the obvious health benefits to the patient, robotic surgeries like Dr. Fowler’s will reduce the number of workplace injuries to laparoscopic surgeons. According to a University of Maryland study, 87% of surgeons who perform laparoscopic procedures complain of eye strain; hand, neck, back and leg pain; headaches; finger calluses; disc problems; shoulder muscle spasms and carpal tunnel syndrome. Many times these injuries are so debilitating that they lead to early retirement. The author of the report, Dr. Adrian Park, explains: “In laparoscopic surgery, we are very limited in our degrees of movement, but in open surgery we have a big incision, we put our hands in, we’re directly connected with the target anatomy. With laparoscopic surgery, we operate by looking at a video screen, often keeping our neck and posture in an awkward position for hours. Also, we’re standing for extended periods of time with our shoulders up and our arms out, holding and maneuvering long instruments through tiny, fixed ports.” In Dr. Fowler’s view, robotic surgery is a game changer, expanding the longevity of a physician’s career.

At the Children’s National Health System in Washington, D.C., the Smart Tissue Autonomous Robot (STAR) provided a sneak peek at the future of surgery. Using advanced 3D imaging systems and precise force-sensing instruments, STAR was able to autonomously stitch up soft tissue samples (of a living pig, above) with sub-millimeter accuracy, far greater than that of even the most precise human surgeons. According to the study published in the journal Science Translational Medicine, there are 45 million soft tissue surgeries performed each year in the United States.

Dr. Peter Kim, STAR’s creator, says: “Imagine that you need a surgery, or your loved one needs a surgery. Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?” Dr. Kim continues, “Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit.”

“Now driverless cars are coming into our lives,” explains Dr. Kim. “It started with self-parking, then a technology that tells you not to go into the wrong lane. Soon you have a car that can drive by itself.” Similarly, Dr. Kim and Dr. Fowler envision a time in the near future when surgical robots could go from assisting humans to being overseen by humans. Eventually, Dr. Kim says, they may one day take over. After all, Dr. Kim has “programmed the best surgeon’s techniques, based on consensus and physics, into the machine.”

The idea of full autonomy in the operating room and on the road raises a litany of ethical concerns, such as the acceptable failure rate of machines. The value proposition for self-driving cars is very clear: road safety. In 2015, there were approximately 35,000 US road fatalities; self-driving cars will reduce that figure dramatically. What remains unclear, however, is the acceptable rate of fatalities caused by machines. Professor Amnon Shashua of Hebrew University, founder of Mobileye, has struggled with this dilemma for years. “If you drop 35,000 fatalities down to 10,000 – even though from a rational point of view it sounds like a good thing, society will not live with that many people killed by a computer,” explains Dr. Shashua. While everyone would agree that zero failure is the most desired outcome, in reality, Shashua says, “this will never happen.” He elaborates, “What you need to show is that the probability of an accident drops by two to three orders of magnitude. If you drop [35,000 fatalities] down to 200, and those 200 are because of computer errors, then society will accept these robotic cars.”
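Shashua's threshold is easy to check against the article's own figures. Dropping from 35,000 fatalities to 200 is a reduction of roughly two and a quarter orders of magnitude, squarely inside his "two to three" range:

```python
import math

# Figures from the article: ~35,000 US road fatalities in 2015 versus
# Shashua's ~200 machine-caused fatalities for social acceptance.
reduction_factor = 35_000 / 200          # 175x fewer deaths
orders_of_magnitude = math.log10(reduction_factor)
print(round(orders_of_magnitude, 2))     # 2.24
```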

Dr. Iyad Rahwan of MIT is much more to the point: “If we cannot engender trust in the new system, we risk the entire autonomous vehicle enterprise.” According to his research, “Most people want to live in a world where cars will minimize casualties. But everybody wants their own car to protect them at all costs.” Dr. Rahwan is referring to the old trolley problem: does the machine save its driver or the pedestrian when faced with a choice? Dr. Rahwan declares, “This is a big social dilemma. Who will buy a car that is programmed to kill them in some instances? Who will insure such a car?” Last May at the Paris Motor Show, Christoph von Hugo of Daimler emphatically answered: “If you know you can save at least one person, at least save that one. Save the one in the car.”

The ethics of unmanned systems and more will be discussed at the next RobotLab forum on “The Future of Autonomous Cars” with Steve Girsky formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP

]]>
The five senses of robotics https://robohub.org/the-five-senses-of-robotics/ Tue, 24 Oct 2017 07:27:35 +0000 http://robohub.org/the-five-senses-of-robotics/ Read More ›]]>

Healthy humans take their five senses for granted. Molding metal into perceiving machines requires a significant number of engineers and a great deal of capital. Already, we have handed over many of our faculties to embedded devices in our cars, homes, workplaces, hospitals, and governments. Even automation skeptics unwillingly trust the smart gadgets in their pockets with their lives.

Last week, General Motors stepped up its autonomous car effort by augmenting its artificial intelligence unit, Cruise Automation, with greater perception capabilities through the acquisition of LIDAR (Light Detection and Ranging) technology company Strobe. Cruise was purchased with great fanfare last year by GM for a billion dollars. Strobe’s unique value proposition is shrinking its optical arrays to the size of a microchip, substantially reducing the cost of a traditionally expensive sensor that is critical for autonomous vehicles measuring the distances of objects on the road. Cruise CEO Kyle Vogt wrote last week on Medium that “Strobe’s new chip-scale LIDAR technology will significantly enhance the capabilities of our self-driving cars. But perhaps more importantly, by collapsing the entire sensor down to a single chip, we’ll reduce the cost of each LIDAR on our self-driving cars by 99%.”

GM is not the first Detroit automaker aiming to reduce the cost of sensors on the road; last year Ford invested $150 million in Velodyne, the leading LIDAR company on the market. Velodyne is best known for its rotating sensor, often mistaken for a siren on top of the car. In describing the transaction, Raj Nair, Ford’s Executive Vice President of Product Development and Chief Technical Officer, said: “From the very beginning of our autonomous vehicle program, we saw LIDAR as a key enabler due to its sensing capabilities and how it complements radar and cameras. Ford has a long-standing relationship with Velodyne and our investment is a clear sign of our commitment to making autonomous vehicles available for consumers around the world.” As the race heats up among competing perception technologies, the LIDAR startup field is already crowded, with eight other companies (below) competing to become the standard vision system for autonomous driving.

Walking the halls of Columbia University’s engineering school last week, I visited a number of the robotics labs working on the next generation of sensing technology. Dr. Peter Allen, Professor of Computer Science, is the founder of the Columbia Grasp Database, whimsically called GraspIt!, which enables robots to better recognize and pick up everyday objects. GraspIt! provides “an architecture to enable robotic grasp planning via shape completion.” The open-source GraspIt! database has over 440,000 3D representations of household articles from varying viewpoints, which are used to train its “3D convolutional neural network (CNN).” According to the lab’s IEEE paper published earlier this year, the CNN is fed “a 2.5D pointcloud” capture of “a single point of view” of each item, and then “fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object” (see diagram below). As Dr. Allen demonstrated last week, the CNN performs as successfully in live scenarios, with a robot “seeing” an object for the first time, as it does in computer simulations.

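The lab's pipeline performs this completion with its trained 3D CNN; as a toy stand-in for the idea (a crude mirror-symmetry heuristic on a made-up voxel grid, not the paper's method), one can illustrate how a single-viewpoint 2.5D capture gets "filled in" before grasp planning:

```python
# Toy illustration of shape completion: given voxels seen from one viewpoint,
# hypothesize the occluded back half by mirror symmetry. This heuristic only
# sketches the idea of completing unseen geometry before planning a grasp.
def complete_by_symmetry(visible, width):
    """visible: set of (x, y, z) occupied voxels seen by the camera.
    Reflect across the grid's mid-plane in x to guess the hidden side."""
    completed = set(visible)
    for (x, y, z) in visible:
        completed.add((width - 1 - x, y, z))
    return completed

# The camera sees only the front half (x = 0..1) of a 4-voxel-wide block.
seen = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)}
full = complete_by_symmetry(seen, width=4)
# The reflection adds the occluded voxels at x = 2 and x = 3.
```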

Taking a novel approach to utilizing their cloud-based data platform, Allen’s team now aims to help quadriplegics better navigate their world with assistive robots. Typically, a quadriplegic is reliant on human aides to perform even the most basic functions like eating and drinking; however, Brain-Computer Interfaces (BCIs) offer the promise of independence with a robot. Wearing a BCI helmet, Dr. Allen’s grad student was able to move a robot around the room just by looking at objects on screen. The object on the screen triggers electroencephalogram (EEG) signals that are transmitted to the robot, which translates them into pointcloud images in the database. According to their research, “Noninvasive BCI’s, which are very desirable from a medical and therapeutic perspective, are only able to deliver noisy, low-bandwidth signals, making their use in complex tasks difficult. To this end, we present a shared control online grasp planning framework using an advanced EEG-based interface…This online planning framework allows the user to direct the planner towards grasps that reflect their intent for using the grasped object by successively selecting grasps that approach the desired approach direction of the hand. The planner divides the grasping task into phases, and generates images that reflect the choices that the planner can make at each phase. The EEG interface is used to recognize the user’s preference among a set of options presented by the planner.”
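A minimal sketch of that shared-control loop (with invented numbers, and a 1-D "approach angle" standing in for the real grasp space) shows how a low-bandwidth interface that only selects among planner-generated options can still steer toward the user's intent:

```python
# Sketch of shared control: the planner proposes candidate grasps, the noisy
# EEG interface merely picks one, and each pick narrows the next round of
# candidates toward the user's intended approach direction.
def closest_to_intent(candidates, intent):
    """Stand-in for the EEG selection step: pick the candidate approach
    angle (degrees) nearest the user's intended one."""
    return min(candidates, key=lambda a: abs(a - intent))

def shared_control_plan(intent, rounds=3, spread=90.0):
    guess = 0.0
    for _ in range(rounds):
        # Planner phase: offer grasps fanned around the current guess.
        candidates = [guess - spread, guess, guess + spread]
        # User phase: a single low-bandwidth selection among the options.
        guess = closest_to_intent(candidates, intent)
        spread /= 2  # narrow the fan each round
    return guess

final = shared_control_plan(intent=60.0)
# After a few coarse selections the plan converges near the intended 60 degrees.
```

The design choice mirrors the paper's framing: the noisy channel never has to specify a grasp, only to choose among a small, planner-curated set at each phase.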

While technologies like LIDAR and GraspIt! enable robots to better perceive the human world, in the basement of Columbia University’s SEAS Engineering Building, Dr. Matei Ciocarlie is developing an array of affordable tactile sensors that let machines touch and feel their environments. Humans have very complex multi-modal systems, built through trial-and-error knowledge gained since birth. Dr. Ciocarlie ultimately aims to build a robotic gripper with the capability of a human hand. Using light signals, Dr. Ciocarlie has demonstrated “sub-millimeter contact localization accuracy” on a grasped object, determining the force applied in picking it up. At Columbia’s Robotic Manipulation and Mobility Lab (ROAM), Ciocarlie is tackling “one of the key challenges in robotic manipulation” by figuring out how “you reduce the complexity of the problem without losing versatility.” While he demonstrated a variety of new grippers and force sensors being deployed in environments as varied as the International Space Station and a human’s cluttered home, the most immediately promising innovation is Ciocarlie’s therapeutic robotic hand (shown below).

According to Ciocarlie’s paper: “Fully wearable hand rehabilitation and assistive devices could extend training and improve quality of life for patients affected by hand impairments. However, such devices must deliver meaningful manipulation capabilities in a small and lightweight package… In experiments with stroke survivors, we measured the force levels needed to overcome various levels of spasticity and open the hand for grasping using the first of these configurations, and qualitatively demonstrated the ability to execute fingertip grasps using the second. Our results support the feasibility of developing future wearable devices able to assist a range of manipulation tasks.”

Across the ocean, Dr. Hossam Haick of the Technion-Israel Institute of Technology has built an intelligent olfactory system that can diagnose cancer. Dr. Haick explains, “My college roommate had leukemia, and it made me want to see whether a sensor could be used for treatment. But then I realized early diagnosis could be as important as treatment itself.” Using an array of sensors composed of “gold nanoparticles or carbon nanotube[s],” patients breathe into a tube that detects cancer biomarkers through smell. “We send all the signals to a computer, and it will translate the odor into a signature that connects it to the disease we exposed to it,” says Dr. Haick. Last December, Haick’s AI reported an 86% accuracy in predicting cancers in more than 1,400 subjects in 17 countries, and the accuracy increased when its neural network was applied to specific disease cases. Haick’s machine could one day have a better olfactory sense than canines, which have been proven able to sniff out cancer.
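As a hedged illustration of the "signature" idea Haick describes (this is not his actual model; the sensor values and condition labels below are entirely made up), a sensor-array reading can be classified by finding the nearest stored signature:

```python
import math

# Toy nearest-signature classifier: one reference vector per condition,
# and a new breath sample is labeled by smallest Euclidean distance.
# All vectors here are invented for illustration.
SIGNATURES = {
    "healthy":   (0.1, 0.2, 0.1, 0.3),
    "disease_a": (0.8, 0.1, 0.6, 0.2),
    "disease_b": (0.2, 0.9, 0.3, 0.7),
}

def classify(reading):
    def dist(sig):
        return math.sqrt(sum((r - s) ** 2 for r, s in zip(reading, sig)))
    return min(SIGNATURES, key=lambda name: dist(SIGNATURES[name]))

label = classify((0.75, 0.15, 0.55, 0.25))
# This made-up reading lands nearest the "disease_a" signature.
```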

When writing this post on robotic senses, I had several conversations with Alexa, and I am always impressed with her auditory processing skills. It seems that the only sense in which humans will exceed robots is taste; however, I am reminded of Dr. Hod Lipson’s food printer. As I watched Lipson’s concept video of the machine squirting, layering, pasting and even cooking something that resembled Willy Wonka’s “Everlasting Gobstopper,” I sat back in his Creative Machines Lab realizing that sci-fi is no longer fiction.

Want to know more about LIDAR technology and self-driving systems? Join RobotLab’s next forum on “The Future of Autonomous Cars” with Steve Girsky formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP

]]>
RoboBusiness 2017: What’s cooking in robotics? https://robohub.org/robobusiness-2017-whats-cooking-in-robotics/ Thu, 05 Oct 2017 21:16:24 +0000 http://robohub.org/robobusiness-2017-whats-cooking-in-robotics/ Read More ›]]>

Mike Toscano, the former president of the Association for Unmanned Vehicle Systems International, emphatically declared at the September RobotLab forum that “anyone who claims to know the future of the [robotics] industry is lying; I mean, no one could’ve predicted the mobile computing revolution.” These words acted as a guiding principle while walking around RoboBusiness in Silicon Valley last week.

The many keynotes, pitches and exhibits in the Santa Clara Convention Center had the buzz of an industry racing toward mass adoption, similar to the early days of personal computing. The inflection point for the invention that changed the world, the PC, came in 1995. That year, Sun Microsystems released Java to developers with the promise of “write once, run anywhere,” followed weeks later by Microsoft’s consumer software package, Windows 95. Greater accessibility led to full ubiquity and applications unthinkable by the original engineers. In many ways, the robot market is standing a few years before its own watershed moment.

In my last post, I highlighted mechanical musicians and painters; this week it is time to see what is cooking, literally, in robotics. Next year, the startup Moley plans to introduce the “first fully-automated and integrated intelligent cooking robot,” priced under $100,000. It already has a slick video reminiscent of Lily Robotics’ rise to the headlines; needless to say, Moley has created quite a stir in the culinary community.

Austin Gresham, executive chef at The Kitchen by Wolfgang Puck, is very skeptical: “Professional chefs have to improvise constantly as they prepare dishes. If a recipe says to bake a potato for 25 minutes and the potatoes are more or less dense than the previous batch, then cooking times will vary. I would challenge any machine to make as good a mashed potato (from scratch).” Gresham’s challenge is really the crux of the matter: creativity is driven by the human desire for food; without taste, could a robot chef have the intuition to improvise?

Acting as a judge of the RoboBusiness Pitch Fire Competition, I met entrepreneurs undeterred by the market challenges ahead. Throughout my Valley visit, I encountered five startups building commercial and consumer culinary applications. Any time this happens within such a short timespan, I stop and take notice. Automated restaurants seem to be a growing trend across the nation, with a handful of upstarts on both coasts. Eatsa is a chain of quinoa-salad restaurants sans cashiers and servers. Customers order via mobile devices or on-site kiosks, picking up their ready dishes through an automated floor-to-ceiling lockbox fixture. However, behind the wall Eatsa has hourly workers manually preparing the salad bowls. Cafe X in San Francisco offers a completely automated experience, with a robot-arm barista preparing, brewing and serving espressos, cappuccinos, and Americanos. After raising $5 million from venture investors, Cafe X plans to expand with robot kiosks throughout the city. Probably the most end-to-end automated restaurant concept I visited is BBox by Nourish, tucked away on the Berkeley Global Campus. BBox is currently running a trial on campus and planning to open its first store next year to conquer the multi-billion-dollar breakfast market with egg sandwiches and gourmet coffee (see video below).

According to Nourish’s CEO Greg Becker, BBox will “reengineer the food ecosystem, from farm to mouth.” Henry Hu, Cafe X’s founder, also aims to revolutionize “the supply chain, recipes, maintenance, and customer support.” To date, the most successful robotic concept is Zume Pizza. Founder Julia Collins made headlines last year with her groundbreaking spin on the traditional pizzeria. Today she is taking on Domino’s dollar for dollar in the San Francisco area, delivering pies in under 22 minutes. Collins, a former Chief Financial Officer of a Mexican restaurant chain, challenges the food industry: “Why don’t we just re-write the rules— forget about everything we learned about running a restaurant?” Already, Zume is serving hundreds of satisfied customers daily, proving that, at least with pizza, it is possible to innovate.

“We realized we could automate more of the unsafe repetitive tasks of operating a kitchen using flexible, dynamic robots,” explains Collins, who currently employs over 50 human workers doing everything from software engineering to supervising the robots to delivering the pizza. “The humans that work at Zume are making dough from scratch, working with farmers to source products, recipe development—more collaborative, creative human tasks. [We have] lower rent costs because we don’t have a storefront; delivery only and lower labor costs. We reinvest those savings into locally sourced, responsibly farmed food.” Collins also boasts that her human workforce has access to free vision, dental, and health insurance thanks to the cost savings.

Even Shake Shack could have competition very soon, as Google Ventures-backed Momentum Machines is launching an epicurean robot bistro in San Francisco’s chic SoMa district late next year. The machine, which has been clocked at 400 burgers an hour, guarantees “to slice toppings, grill a patty, assemble, and bag the burger without any help from humans,” at prices that “everyone can afford.” Momentum’s proposition prompted former McDonald’s CEO Ed Rensi to controversially state that “it’s cheaper to buy a $35,000 robotic arm than it is to hire an employee who’s inefficient making $15 an hour bagging french fries.” Comments like Rensi’s do not further the industry; in fact, they probably fed the controversy last month over the launch of Bodega, an automated convenience store that even enraged Lin-Manuel Miranda below.

The bad press was multiplied further by Elizabeth Segran’s article in Fast Company, which read, “the major downside to this concept — should it take off — is that it would put a lot of mom-and-pop stores out of business.” Founder Paul McDonald responded on Medium: “Rather than take away jobs, we hope Bodega will help create them. We see a future where anyone can own and operate a Bodega — delivering relevant items and a great retail experience to places no corner store would ever open.” While Bodega is not exactly a robotic concept, it is similar to the automated marketplace of Amazon Go, with 10 computer-vision sensors tracking consumers and inventory managed via a mobile checkout app. “We’re shrinking the store and putting it in a box,” said McDonald. The founder has publicly declared war on 7-Eleven’s 5,000 stores, in addition to the 4 million vending machines across the US. Realizing the pressure to innovate, last year 7-Eleven made history with the first drone Slurpee delivery. “Drone delivery is the ultimate convenience for our customers and these efforts create enormous opportunities to redefine convenience,” said Jesus H. Delgado-Jenkins, 7-Eleven EVP and Chief Merchandising Officer. “This delivery marks the first time a retailer has worked with a drone delivery company to transport immediate consumables from store to home. In the future, we plan to make the entire assortment in our stores available for delivery to customers in minutes. Our customers have demanding schedules, are on-the-go 24/7 and turn to us to help navigate the challenges of their daily lives. We look forward to working with Flirtey to deliver to our customers exactly what they need, whenever and wherever they need it.”

As mom-and-pop stores compete for market share, one wonders whether, with more Kitchen OS concepts on the way, home-cooked meals will join the list of outdated cultural trends. Serenti Kitchen in San Francisco plans to bring the Keurig pod revolution to food with its proprietary machine, which uses prepared culinary recipe pods that are dropped into a bowl and whipped to perfection by a robotic arm (see above). Serenti founder Tim Chen was featured last year at the Smart Kitchen Summit, which reconvenes later this month in Seattle. Chen said, “We’re building something that’s quite hard, mechanically, so it’s more from a vision where we wanted to initially develop a machine that could cook, and make cooking easier and automate cooking for the home.” Initially Chen plans to target business catering: “In the near term, we need to focus on placing these machines where there’s the highest amount of density, which is right in the offices.” Long-term, Serenti plans to join the appliance counter. Chen explained his inspiration: “Our Mom is a great cook, so they’ve watched her execute the meals. Then realized a lot of it is repetitive, and what recipes are, is essentially just a machine language.” Chen’s observations are shared by many in the IoT and culinary space, as this year’s finalists in the Smart Kitchen Summit include more robotic inventions, such as the Crepe Robot, which automatically dispenses, cooks, and flavors France’s favorite snack, and GammaChef, a robotic appliance that, like Serenti, promises to whip up anything in a bowl. Clearly, these inventions will eventually lead to a redesign of the physical home kitchen, a space already crowded with appliances. Some innovators are even using robotic arms tucked away in cabinets, along with specialized drawers, ovens, and refrigeration units that communicate seamlessly to serve up dinner.

The automated kitchen envisioned by Moley and others might be coming sooner than anyone expects; then again, it could be a rotten egg. In almost every sci-fi movie and television show, the kitchen is reduced to a replicator that synthesizes food to the wishes of the user. Three years ago, it was rumored that food powerhouse Nestlé was working on a machine that could produce nutritional supplements on demand, code-named Iron Man. While Iron Man has yet to be released to the public, it does illustrate the convergence of 3D printing, robotics, and kitchen appliances. While the Consumer Electronics Show is still months away, my appetite has just been whetted for more automated culinary treats. Stay tuned!

]]>
Descartes revisited: Do robots think? https://robohub.org/descartes-revisited-do-robots-think/ Wed, 20 Sep 2017 20:58:46 +0000 http://robohub.org/descartes-revisited-do-robots-think/ Read More ›]]>

This past week, a robotic first happened: ABB’s Yumi robot conducted the Lucca Philharmonic Orchestra in Pisa, Italy. The dual-armed robot overshadowed even its vocal collaborator, Italian tenor Andrea Bocelli. While many will try to hype the performance as ushering in a new era of mechanical musicians, Yumi’s artistic career was short-lived, as it was part of the opening ceremonies of Italy’s First International Festival of Robotics.

Italian conductor Andrea Colombini said of his student, “The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots. I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music.”

Harold Cohen with his robot AARON

Yumi is not the first computer artist. In 1973, professor and artist Harold Cohen created a software program called AARON, a mechanical painter. AARON’s works have been exhibited worldwide, including at the prestigious Venice Biennale. Following Cohen’s lead, Dr. Simon Colton of London’s Imperial College created “The Painting Fool,” with works on display in Paris’ prestigious Galerie Oberkampf in 2013. Colton wanted to test whether he could cross the emotional threshold with an artistic Turing Test. Colton explained, “I realized that the Painting Fool was a very good mechanism for testing out all sorts of theories, such as what it means for software to be creative. The aim of the project is for the software itself to be taken seriously as a creative artist in its own right, one day.”

In June 2015, Google’s Brain AI research team took artistic theory to the next level by infusing its software with a remarkably human-like quality of imagination. To do this, Google’s programmers took a cue from one of the most famous masters of all time, Leonardo da Vinci. Da Vinci suggested that aspiring artists should start by looking at stains or marks on walls to create visual fantasies. Google’s neural net did just that, translating the layers of the image into spots and blotches with new stylized painterly features (see examples below).

1) Google uploaded a photograph of a standard Southwestern scene:

2) The computer then translated the layers as below:

In describing his creation, Google Brain senior scientist Douglas Eck said this past March, “I don’t think that machines themselves just making art for art’s sake is as interesting as you might think. The question to ask is, can machines help us make a new kind of art?” The goal of Eck’s platform called Magenta is to enable laypeople (without talent) to design new kinds of music and art, similar to synthetic keyboards, drums and camera filters. Dr. Eck himself is an admittedly frustrated failed musician who hopes that Magenta will revolutionize the arts in the same way as the electric guitar. “The fun is in finding new ways to break it and extend it,” Eck said excitedly.

The artistic development and growth of these computer programs is remarkable. Cohen, who passed away last year, said in a 2010 lecture regarding AARON: “with no further input from me, it can generate unlimited numbers of images, it’s a much better colorist than I ever was myself, and it typically does it all while I’m tucked up in bed.” Feeling proud, he later corrected himself: “Well, of course, I wrote the program. It isn’t quite right to say that the program simply follows the rules I gave it. The program is the rules.”

In reflecting on the societal implications of creative bots, one cannot help but be reminded of the famous statement by philosopher René Descartes: “I think, therefore I am.” Challenging this idea for the robotic age, Professor Arai Noriko tested the thinking capabilities of robots. Noriko led a research team in 2011 at Japan’s National Institute of Informatics to build an artificial intelligence program smart enough to pass the rigorous entrance exam of the University of Tokyo.

“Passing the exam is not really an important research issue, but setting a concrete goal is useful. We can compare the current state-of-the-art AI technology with 18-year-old students,” explained Dr. Noriko. The original goal set out by Noriko’s team was for the Todai robot (named for the University) to be admitted to college by 2021. At a TED conference earlier this year, Noriko shocked the audience by revealing that Todai beat 80% of the students taking the exam, which consisted of seven sections, including math, English, science, and even a 600-word essay. Rather than celebrating, Noriko shared with the crowd her fear: “I was alarmed.”

Todai is able to search and process an immense amount of data, but unlike humans it does not read, even with 15 billion sentences already in its neural network. Noriko reminds us that “humans excel at pattern recognition, creative projects, and problem solving. We can read and understand.” However, she is deeply concerned that modern educational systems are more focused on facts and figures than creative reasoning, especially because humans could never compete with the fact-checking of an AI. Noriko pointed to the entrance exam as an example: the Todai robot failed to grasp a multiple-choice question that would have been obvious even to young children. She tested her thesis at a local middle school and was dumbfounded when one-third of students couldn’t even “answer a simple reading comprehension question.” She concluded that in order for humans to compete with robots, “We have to think about a new type of education.”

Cohen also wrestled with the question of a thinking robot and whether his computer program could ever have the emotional impact of a human artist like Monet or Picasso. In his words, to reach that kind of level a machine would have to “develop a sense of self.” Cohen professed that “if it doesn’t, it means that machines will never be creative in the same sense that humans are creative.” Cohen later qualified his remarks about robotic creativity, adding, “it doesn’t mean that machines have no part to play with respect to creativity.”

Noriko is much more to the point: “How we humans will coexist with AI is something we have to think about carefully, based on solid evidence. At the same time, we have to think in a hurry because time is running out.” John Cryan, CEO of Deutsche Bank, echoed Noriko’s sentiment at a banking conference last week. Cryan said, “In our banks we have people behaving like robots doing mechanical things, tomorrow we’re going to have robots behaving like people. We have to find new ways of employing people and maybe people need to find new ways of spending their time.”

]]> Reprogramming nature https://robohub.org/reprogramming-nature/ Tue, 12 Sep 2017 16:13:19 +0000 http://robohub.org/reprogramming-nature/ Read More ›]]>

Credit: Draper

Summer is not without its annoyances — mosquitos, wasps, and ants, to name a few. As the cool breeze of September pushes us back to work, labs across the country are reconvening to tackle nature’s hardest problems. Sometimes forces that seem diametrically opposed come together in beautiful ways, like robotics infused into living organisms.

This past summer, researchers at Harvard and Arizona State University collaborated on successfully turning living E. coli bacteria into a cellular robot, called a “ribocomputer.” The Harvard scientists successfully stored archived movie footage as digital content on the bacterium most famous for making Chipotle customers violently ill. According to Seth Shipman, lead researcher at Harvard, this was the first time anyone has archived data onto a living organism.

In responding to the original article published in July in Nature, Julius Lucks, a bioengineer at Northwestern University, said that Shipman’s discovery will enable wider exploitation of DNA encoding. “What these papers represent is just how good we are getting at harnessing that power,” explained Lucks. The key to the discovery was Shipman’s ability to disguise the movie pixels in DNA’s four-letter code: “molecules represented by the letters A,T,G and C—and synthesized that DNA. But instead of generating one long strand of code, they arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA.” Another important factor was E. coli‘s natural ability “to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli’s molecular machinery grabbed them and filed them away.”
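To make the encoding idea concrete, here is a minimal sketch of packing binary data into DNA’s four-letter alphabet at two bits per base. This is a generic illustration, not Shipman’s actual scheme (which additionally disguised the data as viral DNA fragments); the function names are hypothetical.

```python
# Illustrative only: a generic two-bits-per-base encoding, not the
# Harvard team's actual protocol. Each byte maps to four bases.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: pack every four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

frame = b"pixels"              # stand-in for a chunk of movie data
encoded = bytes_to_dna(frame)  # four bases per byte
assert dna_to_bytes(encoded) == frame
```

At two bits per base, every byte of image data costs four bases, which is one reason the researchers spread the movie across many short synthetic segments rather than one long strand.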

Shipman used this methodology to eventually turn the cells into a computer that not only stores data but actually performs logic-based decisions. Partnering with Alexander Green at Arizona State University’s Biodesign Institute, the two institutions collaborated on building their ribocomputer, which programmed bacteria with ribonucleic acid, or RNA. According to Green, the “ribocomputer can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands.” Green stated that this is the most complex biological computer created on a living cell to date. The discovery by Green and Shipman means that cells could now be programmed to self-destruct if they sense the presence of cancer markers, or even heal the body from within by attacking foreign toxins.
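As a software analogy for the kind of decision-making Green describes, the sketch below composes AND, OR, and NOT over named boolean inputs and maps the result to a cell command. The gate helpers and the cancer-marker rule are hypothetical illustrations, not the actual RNA circuit design.

```python
from typing import Callable, Dict

# A "gate" reads the cell's input state and returns True or False.
Gate = Callable[[Dict[str, bool]], bool]

def signal(name: str) -> Gate:
    """An input signal, e.g. the presence of a molecular marker."""
    return lambda state: state[name]

def NOT(g: Gate) -> Gate:
    return lambda state: not g(state)

def AND(*gates: Gate) -> Gate:
    return lambda state: all(g(state) for g in gates)

def OR(*gates: Gate) -> Gate:
    return lambda state: any(g(state) for g in gates)

# Hypothetical rule: self-destruct if either cancer marker is present
# and the healthy-tissue signal is absent.
circuit = AND(OR(signal("marker_a"), signal("marker_b")),
              NOT(signal("healthy")))

state = {"marker_a": True, "marker_b": False, "healthy": False}
command = "self-destruct" if circuit(state) else "idle"
```

Because AND and OR accept any number of gates, the same compositional pattern scales to the dozen inputs Green mentions.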

Timothy Lu of MIT called the discovery the beginning of the “golden age of circuit design.” Lu further said, “The way that electrical engineers have gone about establishing design hierarchy or abstraction layers — I think that’s going to be really important for biology.” In a recent IEEE article, Lucks cautioned readers that manipulating nature in this way ultimately raises a host of ethical considerations: “I don’t think anybody would really argue that it’s unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view.”

Nature has been the inspiration for numerous discoveries in modern robotics and has even spawned its own field, biomimicry. However, manipulating living organisms according to the whims of humans is just beginning to take shape. A couple of years ago, Hong Liang, a researcher at Texas A&M University, outfitted a cockroach with a 3g backpack-like device that held a microprocessor, lithium battery, camera sensor, and electrical nerve-control system. Liang then used her makeshift insect robo-suit to remotely drive the waterbug through a maze.

When asked by the Guardian what prompted her to utilize bugs as robots, Liang explained, “Insects can do things a robot cannot. They can go into small places, sense the environment, and if there’s movement, from a predator say, they can escape much better than a system designed by a human. We wanted to find ways to work with them.”

A cockroach outfitted with front and rear electrodes as well as a “backpack” for wireless control.
Credit: Alper Bozkurt, North Carolina State University

Liang believes that robo-roaches could be especially useful in disaster-recovery situations that take advantage of the insect’s small size and endurance. Liang says that some cockroaches can carry five times their own bodyweight, but the heavier the load, the greater the toll it takes on their performance. “We did an endurance test and they do get tired,” Liang explained. “We put them on a treadmill for a minute and then let them rest. If the backpack is lighter, they can go on for longer.” Liang has inspired other labs to work with different species of insects.

Draper, the US defense contractor, is working on its own insect robot by turning live dragonflies into controllable, undetectable drones. The DragonflEye Project deviates from the technique developed by Liang: it uses light to steer neurons instead of electrical nerve stimulation. According to Jesse Wheeler, Draper’s project lead, this methodology works like “a joystick that tells the system how to coordinate flight activities.” Through Wheeler’s “joystick” he is able to control and steer the wings in flight and program mission coordinates to the bug via an attached micro backpack that includes a guidance system, solar energy cells, navigation cells, and optical stimulation.

Draper believes that swarms of digitally enhanced insects might hold the key to national defense, as locusts and bees have been programmed to identify scents, such as chemical explosives. The critters could eventually be programmed to collect and analyze samples for homeland security, in addition to obvious surveillance opportunities. Liang boasts that her cyborg roaches are “more versatile and flexible, and they require less control,” than traditional robots. However, Liang also reminds us that “they’re more real”: ultimately, living organisms, even with mechanical backpacks, are not machines.

Author’s note: This topic and more will be discussed at our next RobotLabNYC event in one week on September 19th at 6pm, “Investing In Unmanned Systems,” with experts from NASA, AUVSI, and Genius NY.

]]> Those amazing flying machines https://robohub.org/those-amazing-flying-machines/ Wed, 16 Aug 2017 11:03:24 +0000 http://robohub.org/those-amazing-flying-machines/ Read More ›]]>

PARAMOUR on Broadway – A Cirque du Soleil Musical. Credit: Richard Termine

Last year, Intel partnered with Lady Gaga on the Super Bowl Halftime Show to showcase its latest aerial technology called “Shooting Star.” Intel did a reprise performance of its Shooting Star technology for Singapore’s 52nd birthday this past week. Instead of fireworks, the tech-savvy country celebrated its National Day Parade with a swarm of 300 LED drones animating the night sky with shapes, logos, and even a map of the country.

Intel’s global drone chief, Anil Nanduri, explained, “There’s considerably more operational complexity in handling a 300 drone fleet, compared with 100 drones in a show. It’s like juggling balls in your hand. You may be able to juggle three, but if you juggle nine, you may have to throw them higher and faster to get more time.” Earlier this year, Intel first showcased its 300-drone show at the Coachella music festival, on the heels of claiming the Guinness World Record with a 500-drone performance.

Choreographed drones are winning the hearts of Cirque du Soleil theatergoers with a fleet of flying acrobatic vehicles dancing around its human performers. These drones are the brainchild of Professor Raffaello D’Andrea of ETH Zurich and his new startup, Verity Studios. D’Andrea is probably best known as one of the three founders of Kiva Systems; now he is taking the same machine intelligence that sorts and delivers goods within Amazon’s warehouses to safely wow audiences worldwide. The flying lampshades (shown in the video below) are actually autonomous drones that magically synchronize with the dancers, without safety nets or human operators.

Verity’s customer, Cirque du Soleil Chief Creative Officer Jean-Francois Bouchard, said D’Andrea’s “flying machines are unquestionably one of the most important statements of the PARAMOUR show.” The key to the flying machines’ success over 7,000 autonomous flights on Broadway is a set of proprietary technologies that enable multiple self-piloted drones to be synchronized in flight. Verity’s drone is part of a larger performance system called “Stage Flyers.”


The Stage Flyer platform has proven itself in the field, flying next to thousands of people each evening thanks to built-in redundancy against any single failure. According to Verity’s website, the system is “capable of continuing operation in spite of a failed battery, a failed motor, a failed connector, a failed propeller, a failed sensor, or a failure of any other component. This is achieved through the duplication of critical components and the use of proprietary algorithms, which enable safe emergency responses to component failures.” This means that the drones can operate safely around audiences and performers alike, carrying payloads of cameras, mirrors, and special lighting effects. The drone system includes a fleet of self-piloted drones that share a single positioning system and control unit. The company boasts that its system takes only a few hours to install, calibrate, and learn to operate.
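The duplication strategy Verity describes can be sketched abstractly: pair each critical component with a spare and fail over when the primary reports a fault, escalating to an emergency response only when both copies fail. The component model below is a minimal hypothetical illustration, not Verity’s proprietary algorithm.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    healthy: bool

def select_active(primary: Component, spare: Component) -> Component:
    """Use the primary unless it has failed; otherwise fail over to the spare."""
    if primary.healthy:
        return primary
    if spare.healthy:
        return spare
    # Both copies down: the one case duplication cannot mask.
    raise RuntimeError(f"both {primary.name} units failed: begin emergency response")

# A failed primary motor is masked by its duplicate, so the show goes on.
motor = select_active(Component("motor", healthy=False),
                      Component("motor", healthy=True))
```

The design choice is that each failover is invisible to the choreography layer; only a double failure of the same component triggers the safe emergency response the company mentions.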


Drone swarms are not just for entertainment, as today there are a number of upstarts and established players utilizing these mechanics for e-commerce fulfillment centers. Last June, Amazon was issued a patent for a “Multi-Level Fulfillment Center for Unmanned Aerial Vehicles.” The beehive-looking distribution center (above) is designed to facilitate traffic between inbound and outbound delivery drones. The patent illustration details “multiple levels with multiple landing and take-off locations, depending on local zoning regulations.”


This is all part of Amazon’s larger plan to grow its robotic workforce over the next two to three years. Instead of human truck drivers, the patent displays delivery bays that open and close automatically based on the direction of the drones and interior platforms that cycle around the hive. CB Insights reports that the patent describes “impact dampeners,” such as nets, for receiving inbound drones and “launch assist mechanisms,” such as fans, for launching outbound drones. It appears that Amazon will be looking again to technology like D’Andrea’s research to revolutionize its global network of warehouses with synchronized swarms of drones that safely soar above human workers.

Amazon’s competitor Walmart announced last summer its plans to utilize swarms of indoor drones for inventory management, replacing the need for people to climb dangerous ladders and manually scan labels. The New York Times first reported last July that the retailer applied for an FAA exemption to begin testing drones inside its massive distribution centers. Shekar Natarajan, the vice president of last mile and emerging science for Walmart, demonstrated for the newspaper how swarms of drones could easily move up and down aisles, from floor to ceiling, to scan 30 barcode images a second (an efficiency that would be impossible for even the most agile humans). Walmart has publicly stated that it will spend close to $3 billion on new technology and other cost-saving investments to bolster its e-commerce business, which is growing, but at a slower pace than its nemesis Amazon.

The race to dominate the warehouse has led to increased investments in the logistics sector and even an accelerator dedicated solely to technology around distribution centers. Chattanooga, Tennessee-based Dynamo Accelerator showcased last May its second cohort of startups. One of the most successful showings was Chicago-based Corvus Robotics, a software company that uses indoor aerial drones to scan inventory (similar to the Walmart example above). According to Dynamo’s managing directors, Corvus is building enabling tools that allow operators to fly drones autonomously, scan & sync barcodes, and enter the SKU data into the existing warehouse management system.


Santosh Sankar, the director at Dynamo, explained his accelerator’s mission succinctly in a recent blog post: “We believe our focus and hands-on approach is one of our value-adds. As such, we’re leaning into being seed investors and upholding our commitment to transforming our industry by focusing on our founders and our corporate partners. We’ve opted to not hold a quota for our programs and hone in on companies we can truly help because that ultimately makes for good seed investments.” Sankar added that several of the program participants “are already well on their way to generate ($1 million or more) in annual revenue and/or have raised their initial round of capital.”


Corvus may be the latest indoor drone startup to enter an already crowded warehouse market, which includes established players like the Hardis Group, Smartx, and DJI. Drones continue to amuse, amaze, and evolve, as the growing need for more unmanned systems in our lives appears to be almost insatiable. Next month, we plan to dig deeper into the drone market with our RobotLabNYC event series on September 19th at 6pm in WeWork Grand Central. Joining the discussion will be thought leaders from NASA, AUVSI, and Genius NY. RSVP today, as space is limited.

]]>
Singapore: An autonomous innovation center https://robohub.org/singapore-an-autonomous-innovation-center/ Sun, 16 Jul 2017 12:35:06 +0000 http://robohub.org/singapore-an-autonomous-innovation-center/ Read More ›]]> Jim Robinson of RRE Ventures said it best last month at the Silicon Dragon Conference when comparing Silicon Valley to New York: “There are two kinds of centers that have a lot of startups and technology, there are technology centers and commerce centers.” New York falls into the latter category, while the Valley is the former. Sitting next to Jim, I reflected that Singapore might be in both groups: an Asian commerce hub and a leader in mechatronics. As an advocate for automation, I am often disheartened that the United States significantly lags behind its industrial counterparts in manufacturing autonomous machines. The key to a pro-jobs policy could be learning from the successes of countries like Singapore to implement America’s own ‘Robot First’ plan.

Last August, Singapore became the first country to permit autonomous taxis on its roads. Boston-based startup nuTonomy moved its operations to the Far East, which enabled the company to launch public trials weeks before Uber’s test in Pittsburgh. The secret to the company’s speed to market was to skip America’s 19,492 municipal government licensing departments and pilot its technology years before any of the competing technologies, with the exception of Google’s Waymo. On top of fewer regulatory hoops, nuTonomy has partnered with the Singapore Land Transport Authority on its rollout.

Pang Kin Keong, Singapore’s Permanent Secretary for Transport and the Chairman of its Committee on autonomous driving, said “We face constraints in land and manpower. We want to take advantage of self-driving technology to overcome such constraints, and in particular to introduce new mobility concepts which could bring about transformational improvements to public transport in Singapore.”

The company is on track to offer its driverless taxis throughout the country by 2018. Doug Parker, nuTonomy’s COO, estimates that autonomous taxis could ultimately reduce the number of cars on Singapore’s roads from 900,000 to 300,000. Parker explains, “When you are able to take that many cars off the road, it creates a lot of possibilities. You can create smaller roads, you can create much smaller car parks. I think it will change how people interact with the city going forward.” Parker’s partnership with the city-state is made possible because Singapore is not saddled with the costs of aging infrastructure like many US and European cities.

Since first announcing its test in 2016, nuTonomy is on pace to expand globally with its recent partnership with ride-sharing company Lyft in a pilot in Boston. Karl Iagnemma, nuTonomy’s CEO, said: “By combining forces with Lyft in the U.S., we’ll be positioned to build the best passenger experience for self-driving cars. Both companies care immensely about solving urban transportation issues and the future of our cities, and we look forward to working with Lyft as we continue to improve our autonomous vehicle software system.”

Besides autonomous vehicles, drones have been widely embraced by Singapore’s infamously strict police department. Singaporean startup H3 Dynamics became the first company last year to launch a drone-in-a-box solution that offers storage and charging stations in the field. H3’s “DRONEBOX” is a unique solar-powered charging station that enables longer autonomous missions in areas that are typically hostile to humans. Since showcasing its technology above the streets of Singapore, H3 faces increased competition from formidable upstarts, including Airobotics, Easy Aerial, and HiveUAV.

According to its original press release, “DRONEBOX is an all-inclusive, self-powered system that can be deployed anywhere, including in remote areas where industrial assets, borders, or sensitive installations require constant monitoring. Designed as an evolution over today’s many unattended sensors and CCTV cameras installed in cities, borders, or large industrial estates, DRONEBOX innovates by giving sensors freedom of movement using drones as their vehicles. End-users can now deploy flying sensor systems at different locations, and measure just about anything, anywhere, anytime. They offer 24/7 reactivity, providing critical information to operators – even to those located thousands of miles away.”


In June, H3 Dynamics announced the opening of new drone operation centers in Europe, America, and the Middle East. Additionally, H3 is now marketing its next generation of battery technology for extended high-value asset missions. H3's HES Energy Systems is the product of a decade-long research initiative between the company and the Singaporean government. Unlike typical drone lithium batteries, which have a flight time of 20-40 minutes, HES Energy developed its ground-breaking six-hour battery with a first-of-its-kind "solid-hydrogen on demand" power system. The combination of longer flights, self-charging stations, and autonomous missions is a powerful value proposition that differentiates this Singaporean offering in an already crowded unmanned-flight market.

This past week, Dubai announced its plans to roll out a fleet of mini autonomous police cars for surveillance and crime prevention. This effort is part of the Middle Eastern city's program to automate 25% of its police force over the next decade. The Gulf State's ambitious plans were a perfect fit for Singaporean OTSAW Digital, a division of ActiV, a global tech powerhouse. Similar to nuTonomy and H3, OTSAW's O-R3 grew out of the innovation-friendly environment of the Asian republic.

The O-R3 is smaller than a golf cart and is not actually meant to capture nefarious actors, but to identify suspicious activity as it happens. Using facial recognition technology and a built-in aerial drone, the robot will begin patrolling the Dubai beat by the end of the year. The autonomous car/drone combo is almost a hybrid of nuTonomy and H3, with an array of sensors and machine intelligence technologies to survey the area via thermal imaging, license plate readers, and cloud-based computing.


According to Abdullah Khalifa Al Marri, the head of the Dubai police department, the O-R3 isn't meant to replace officers but rather to augment their skills with more efficient resources. "We seek to augment operations with the help of technology such as robots. Essentially, we aim for streets to be safe and peaceful without heavy police patrol," said Al Marri. Last month, Dubai even deployed a humanoid-looking robot, dubbed Robocop, to monitor tourist attractions; it speaks English and Arabic. Dubai also plans to deploy larger humanoids that stand over 10 feet tall and travel over 50 mph, with a human controller operating the device from inside the robot.

Brigadier-General Khalid Nasser Al Razzouqi, Director-General of Smart Services at the Dubai Police, boasted, "The launch of the world's first operational Robocop is a significant milestone for the Emirate and a step towards realizing Dubai's vision to be a global leader in smart cities technology adoption." In 2015, the World Economic Forum ranked the United Arab Emirates as the second most tech-savvy government in the world, just behind Singapore.


As I write this article, I find myself at another conference, "The State of New York: Smart Cities." Hoping for insights about how my city will compete with the likes of these tech-savvy counterparts, I was instead met by a group of bureaucrats touting app competitions and free WiFi. One speaker even suggested that the Metropolitan Transportation Authority (MTA) is run by the best executive team, even though New York's Governor has politely called the organization dysfunctional due to multiple train derailments, signal problems, and overcrowded stations (see: The Summer of Hell). Science is not just about the possible, but the willing. Singapore's ability to reinvent the very nature of how a city operates and partners with the private sector is proof positive that even a country founded 50 years ago can climb to the "top of the heap."

 

]]>
Snake robots slither into our hearts, literally https://robohub.org/snake-robots-slither-into-our-hearts-literally/ Tue, 27 Jun 2017 15:21:02 +0000 http://robohub.org/snake-robots-slither-into-our-hearts-literally/ Read More ›]]>

Snake robot at the Robotics Institute. Credit: Jiuguang Wang/Flickr

The biblical narrative of the Garden of Eden describes how the snake became the most cursed of all beasts: "you shall walk on your belly, and you shall eat dust all the days of your life." The reptile's eternal punishment is no longer feared but embraced for its versatility and flexibility, and the snake is fast becoming one of the most celebrated creatures for roboticists worldwide, outmaneuvering rovers and humanoids alike.

Last week, while General Electric experienced a tumult in its management structure, its Aviation unit completed the acquisition of OC Robotics, a leader in snake-arm design. GE believes OC's robots will be useful for jet engine maintenance, wiggling into spaces no human hand could reach and enabling repairs to be conducted while the engine is still attached to the wing. This promise translates into huge cost and time savings for maintenance and airline companies alike.

OC's robots have use cases beyond avionics, including inspections of underground drilling and directional borings tens of feet below the Earth. In addition to acquiring visual data, OC's snake is equipped with a high-pressure water jet and a laser to measure the sharpness of the cutting surface. According to OC's founder Andrew Graham, "This is faster and easier, and it keeps people safe." Graham seems to have hit on the holy grail of robotics by combining profit and safety.

GE plans to expand the use cases for its newest company. Lance Herrington, a leader at GE Aviation Services, says "Aviation applications will just be the starting point for this incredible technology." Herrington implied that the snake technology could be adapted in the future to inspect power plants and trains, and even to serve in healthcare robots. As an example of its versatility, OC Robotics was awarded a prestigious prize by the U.K.'s Nuclear Decommissioning Authority for its LaserSnake. OC's integrated snake-arm laser cutter was able to disassemble toxic parts of a nuclear fuel processing facility in a matter of weeks, a job that would have taken humans years while risking radiation exposure.

One of the most prolific inventors of robotic snake applications is Dr. Howie Choset of Carnegie Mellon University. Choset is the co-director of CMU's Biorobotics Lab, which has birthed several startups based upon his snake technology, including Medrobotics (surgical systems); Hebi Robotics (actuators for modular robots); and Bito Robotics (autonomous vehicles). Choset claims that his menagerie of metal reptiles is perfect for urban search and rescue, infrastructure repairs, and medicine.

Source: Medrobotics

Recently, Medrobotics received FDA clearance for its Flex Robotic System for colorectal procedures in the United States. According to the company's press release, "Medrobotics is the first and only company to offer minimally invasive, steerable and shapeable robotic products for colorectal procedures in the U.S." The Flex system promises a "scar-free" experience in accessing "hard-to-reach anatomy" that is just not possible with straight, rigid instruments.

“The human gastrointestinal system is full of twists and turns, and rigid surgical robots were not designed to operate in that environment. The Flex® Robotic System was. Two years ago Medrobotics started revolutionizing treatment in the head and neck in the U.S. We can now begin doing that in colorectal procedures,” said Dr. Samuel Straface, CEO.

Dr. Alessio Pigazzi, Professor of Surgery at the University of California, Irvine, exclaimed that "Medrobotics is ushering in the first of a new generation of shapeable and steerable robotic surgical systems that offer the potential to reduce the invasiveness of surgical procedures for more patients." While Medrobotics' system is currently only approved for use through the mouth and anus, Pigazzi looks forward to future applications whereby any natural orifice could serve as an entry point for truly incision-less surgery.

The Technion ‘Snake Robot’. Photo: Kobi Gideon/GPO

Medrobotics was the brainchild of a collaboration between Choset and Dr. Alon Wolf of Israel's prestigious Technion – Israel Institute of Technology. One of the earliest use cases for snake robots was by Wolf's team in 2009 for military surveillance. As director of Technion's BioRobotics and BioMechanics Laboratory (BRML), Wolf's lab created the next generation of defensive snake robots for the latest terror threat: subterranean tunnels transporting suicide bombers and kidnappers. Since the discovery of tunnels between the Gaza Strip and Israel in 2015, BRML has been working feverishly to deploy snake robots in the field of combat.

The vision for BRML's hyper-redundant robots is to use their highly maneuverable actuators to sneak through tough terrain into tunnels and buildings. Once inside, the robot will provide instant scans of the environment to the command center and then leave behind sensors for continued surveillance. The robots are equipped with an array of sensors, including thermal imagers, miniature cameras, laser scanners, and laser radar, and can seamlessly stitch 360-degree views and maps of the targeted subterranean area. The robots, of course, would have dual uses for search & rescue and disaster recovery efforts.
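To make the mapping step concrete, here is a toy sketch of stitching overlapping laser sweeps into one 360-degree scan, in the spirit of the tunnel-mapping robots described above. Real systems rely on SLAM and far richer sensor models; every name in this illustrative version is hypothetical.

```python
# Toy sketch of merging overlapping laser sweeps into a single panorama.
# A sweep maps angle (degrees) to measured range (meters); on overlap,
# later sweeps simply overwrite earlier ones.

def stitch(sweeps):
    """Merge overlapping {angle_degrees: range_m} sweeps into one sorted scan."""
    panorama = {}
    for sweep in sweeps:
        panorama.update(sweep)
    return dict(sorted(panorama.items()))

sweep_a = {0: 2.1, 45: 2.0, 90: 1.8}      # readings from the first sensor pose
sweep_b = {90: 1.8, 135: 2.5, 180: 3.0}   # second pose, overlapping at 90 degrees
full = stitch([sweep_a, sweep_b])
print(sorted(full))  # [0, 45, 90, 135, 180]
```

A production pipeline would also align the sweeps for the robot's motion between poses; this sketch assumes they share one coordinate frame.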

Long term, Wolf would like to deploy his fleet of crawlers on search and rescue missions in urban disasters such as earthquakes.

“The robots we are creating at the Technion are extremely flexible and are able to manipulate delicate objects and navigate around walls. Over 400 rescue workers were killed during 9/11 because of the dangerous and unstable environment they were attempting to access and our objective is to ensure that robots are able to replace humans in such precarious situations,” explains Wolf.

It is no wonder why on his last visit to Israel, President Obama called Wolf’s vision “inspiring.”

]]>
Trusting robots with our lives https://robohub.org/trusting-robots-with-our-lives/ Mon, 19 Jun 2017 09:14:56 +0000 http://robohub.org/trusting-robots-with-our-lives/ Read More ›]]>

The Baxter robot hands off a cable to a human collaborator — an example of a co-robot in action. Photo credit: Aaron Bestick, UC Berkeley.

The key takeaway from Tuesday's RobotLabNYC forum, "Exploring The Autonomous Future," was that humans are the key to robot adoption. Dr. Howard Morgan of First Round Capital expressed to the audience of more than 100 innovators working within the automation ecosystem the necessity of embracing "entrepreneurial marketing" to reach customers. Tom Ryden echoed Morgan's sentiment in his presentation about Mass Robotics, conveying his startups' frustrations with the pace of adoption. Dr. Eric Daimler, formerly of the Obama Administration, concluded the evening succinctly by exclaiming, "we only adopt what we trust." Trust is key for crossing the chasm.

Intuitive Surgical this past year celebrated its 17th year of operations, with close to a million robotic surgeries completed last year. According to a recent Gallup Poll, medical professionals are the most trusted individuals in our society, even more than one's clergy. The fact that robot-assisted surgery has become so routine and accepted by doctors and their patients is proof positive that in some industries we have already crossed the trust threshold.

Photo credit: Robert Shields

Understanding how Intuitive's Da Vinci robot built trust within the medical community could offer parallels to other areas of the automation industry. Robot-assisted surgery, or "telerobotics," is the evolution of two modern technologies: 1) telepresence, or telemanipulation; and 2) laparoscopic surgery. In 1987, French physician Dr. Philippe Mouret performed the first minimally invasive gallbladder surgery, using an endoscope-like device to remotely guide his instruments via video to remove the damaged organ. By the 1990s, laparoscopic surgery became commonplace, driving the demand for more precision through mechanics and computer-aided techniques. A decade later, Intuitive received FDA approval for its Da Vinci robot for general surgery, which has since been expanded to prostate, neurological, and thoracic procedures. Telerobotics evolved not just from the availability of advanced technologies, but from the demand for less invasive procedures by the most trusted people in America.

Last month, the FDA approved the Da Vinci X System, enabling Intuitive Surgical to market less expensive systems and gain market share with smaller medical institutions globally. "This new system enables access to Intuitive's leading and proven robotic-assisted surgical technology at a lower price point. Customers around the globe have different needs from a clinical, cost and technology perspective; Intuitive's goal is to meet those needs by providing a range of products and solutions: the da Vinci X System helps us continue to do so," said CEO Dr. Gary Guthart.

According to the press release, the Da Vinci X System is designed for focused-quadrant surgery and "features flexible port placement and 3D digital optics, while also including advanced instruments and accessories from its Xi system." Another determining factor in Intuitive Surgical's success is the interoperability of its instruments. Rather than just an endoscope that provides video feeds, Da Vinci is equipped with multiple end effectors that mimic traditional instruments, guided telerobotically by experienced surgeons.

Patients trust the robot because it is simply augmenting their doctor's skills with greater precision. This is reinforced by shorter recovery periods and better outcomes. Recently, Oxford published a nine-year research study concluding that patients who opted for robotic lobectomies had better lung cancer outcomes. As a new generation of surgeons embraces the robotic future, the market for abdominal surgical robots is expected to grow from $2.9 billion in 2017 to $12.9 billion by 2022.

Trusting robots with our bodies might seem like a difficult premise to uphold, but robots have been saving lives on the front lines since 1972. Bomb-defusing machines have been utilized in the most dangerous situations worldwide, from Afghanistan to Jerusalem to New York City. Today, almost every police department and military has an arsenal of remote-controlled explosive removal devices.

Dr. Sethu Vijayakumar, director of the Edinburgh Centre for Robotics in the United Kingdom, explains, “One of the target areas, in terms of [the] use of robots, is for going into dangerous situations. Robots can go in, be operated from a safe distance, and, in a worst-case scenario, be sacrificed.”

Similar to robotic medicine, trust-based systems for the military are built by projecting human expertise into dangerous situations. Pittsburgh-based RE2 Robotics took this concept to a new level with the Robotic Manipulation System it announced last week. The RE2 system enables users to use their own limbs and hands to control the robot's movements and grippers to quickly defuse explosives.

RE2 CEO Jorgen Pedersen explained the rationale for his new product: "Often times, you still need the human intellect to perform those tasks. But they're dangerous, so the question is, how can we project that human capability remotely, so they're still able to do their job and leverage the human intellect to solve a really big problem? That's what we're trying to do — keep the human safe, but allowing them to still do their job."

While the rover looks remarkably similar to Endeavor Robotics' (formerly iRobot's) PackBot, which has been widely deployed by the US military in Iraq, Afghanistan, and elsewhere, the control system is novel and more reliable in high-pressure situations. Pedersen says, "If you're going to project that human capability, the most human way to control it is to have it be as much like you as possible. That's where we've come over the past decade, having true human-like capability. It's no coincidence that these robots look like human torsos. These systems are a projection of you, remotely. It's almost like an avatar, where you're dealing with a threat out of harm's way."

While today the operator of RE2's robot stands at a safe distance watching a video feed on a laptop, the company is developing a virtual reality headset accessory for the control system that will let the professional immerse themselves completely in the situation. Pedersen also plans to expand the use cases for his technology into civilian markets such as search & rescue, disaster recovery (like Fukushima), and medicine.

“Yes, people could use this technology for other means. But our charter is saving lives and extending it into new markets like health care, where we can do patient assist. [We can] help a person from a wheelchair to a bed or a wheelchair to a toilet, as the brawn for a caregiver,” touts Pedersen.

While we are years away from fully trusting autonomous systems with our lives, it appears from these two examples that the first step is enabling machines to augment our most trusted citizens. As yesterday was Father's Day, it is only appropriate that I share with my readers my gift – GrillBot. The disclaimer on the box, however, does make me question when I plan to use it; fear of death is kind of a big deal!

]]>
Making Pepper walk: Understanding Softbank’s purchase of Boston Dynamics https://robohub.org/making-pepper-walk-understanding-softbanks-purchase-of-boston-dynamics-2/ Mon, 12 Jun 2017 11:16:37 +0000 http://robohub.org/making-pepper-walk-understanding-softbanks-purchase-of-boston-dynamics-2/ Read More ›]]>

It is unclear if Masayoshi Son, Chairman of Softbank, was one of the 17 million YouTube viewers of Boston Dynamics' BigDog before acquiring the company for an undisclosed amount this past Thursday. What is clear is that the acquisition of Boston Dynamics by Softbank is a big deal. Softbank's humanoid robot Pepper is trading up her dainty wheels for a pair of sturdy legs.

In expressing his excitement for the acquisition, Masayoshi Son said, “Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”

Marc Raibert, CEO of Boston Dynamics, previously sold his company to Google in 2013. Following the departure of Andy Rubin, the internet company expressed buyer's remorse. Raibert's company failed to advance from being a military contractor to a commercial enterprise, and it became very challenging to incorporate Boston Dynamics' zoo of robots (mechanical dogs, cheetahs, bulls, mules, and complex humanoids) into Google's autonomous strategy. Since Rubin's exit in 2014, rumored buyers for Boston Dynamics have ranged from Toyota Research to Amazon Robotics. Softbank represents a new chapter for Raibert, and possibly for the entire industry.

Raibert’s statement to the press gave astute readers a peek of what to expect: “We at Boston Dynamics are excited to be part of SoftBank’s bold vision and its position creating the next technology revolution, and we share SoftBank’s belief that advances in technology should be for the benefit of humanity. We look forward to working with SoftBank in our mission to push the boundaries of what advanced robots can do and to create useful applications in a smarter and more connected world.” A quick study of the assets of both companies reveals how Boston Dynamics could help Softbank in its mission to build robots to benefit humanity.

Softbank’s premier robot is Pepper, a four-foot tall social robot that has been mostly deployed in Asia as a customer service agent. Recently, Pepper, as part of Softbank’s commitment to the Trump administration to invest $50 billion in the United States has been spotted in stores in California. As an example, Pepper proved itself as a valuable asset last year to Palo Alto’s premier tech retailer B8ta, accounting for a 6 times increase in sales. To date, there are close to 10,000 Pepper robots deployed worldwide, mostly in Asian retail stores. However, Softbank is also the owner of Sprint with 4,500 cell phone stores across the USA, and a major investor in WeWork with 140 locations globally servicing 100,000 members – could Pepper be the customer service agent or receptionist of the future?

According to Softbank’s website, Pepper is designed to be a “day-to-day companion,” with its most compelling feature being the ability to perceive emotions. Softbank boasts that their humanoid is the first robot ever to recognize moods of homo sapiens and adapt its behavior accordingly. While this is extremely relevant for selling techniques, Softbank is most proud of Pepper being the first robot to be adopted into homes in Japan. It is believed that Pepper is more than a point-of-purchase display gimmick, but an example of the next generation of caregivers for the rising elderly populations in Japan and the United States. According to the Daily Good, “Pepper could do wonders for the mental engagement and continual monitoring of those in need.” Its under $2,000 price point also provides an attractive incentive to introduce the robot into new environments, however wheel-based systems are a limitation in the home with clutter floors, stairs and other unforeseen obstacles.

Boston Dynamics is almost the complete opposite of Softbank; it is a research group spun out of MIT. Its expertise is not in social robots but in military proofs of concept like futuristic combat mules. The company has showcased some of the most frightening mechanical beasts ever to walk the planet, from metal cheetahs that sprint at over 25 miles per hour, to mechanized dogs that scale mountains with ease, to one of the largest humanoids ever built, which bears an uncanny resemblance to Cyberdyne's T-800. In a step towards commercialization, Boston Dynamics released its newest monster earlier this year: a wheeled biped named Handle that can easily lift over a hundred pounds and jump higher than LeBron James. Many analysts pontificated that this appeared to be Boston Dynamics' attempt to prove its relevance to Google with a possible last-mile delivery bot.

In an IEEE interview when Handle debuted last February, Raibert exclaimed, "Wheels are a great invention. But wheels work best on flat surfaces and legs can go anywhere. By combining wheels and legs, Handle can have the best of both worlds." IEEE writer Evan Ackerman wondered, after seeing Handle, if the next generation of Boston Dynamics' humanoids could feature legs with roller-skate-like shoes. What is certain is that Boston Dynamics is the undisputed leader in dynamic control and balance systems for complex mechanical designs.

Leading robotics analyst Dan Kara of ABI Research confirmed that "these [Boston Dynamics] are the world's greatest experts on legged mobility."

If walking is the expertise of Raibert's team, and Softbank is the leader in cognitive robotics with a seemingly endless supply of capital, the combination could produce the first real striding humanoid capable of human-like emotions. By 2030 there will be 70 million people over the age of 65 in America, with a considerably smaller number of caregivers. To answer this call, researchers are already converting current versions of Pepper into sophisticated robotic assistants. Last year, Rice University unveiled a "Multi-Purpose Eldercare Robot Assistant (MERA)," which is essentially a customized version of Softbank's robot. MERA is specifically designed to be a home companion for seniors that "records and analyzes videos of a person's face and calculates vital signs such as heart and breathing rates." Rice University partnered with IBM's Aging-in-Place Research Lab to create MERA's speech technology. IBM's Lab founder, Susann Keohane, explained that Pepper "has everything bundled into one adorable self." Now, with Boston Dynamics' legs, Pepper could be a friend, physical therapist, and life coach walking side by side with its human companion.

Daniel Theobald, founder of healthcare robotics company Vecna Technologies, summed it up best last week: "I think Softbank has made a major commitment to the future of robotics. They understand that the world economy is going to be driven by robotics more and more."

Next Tuesday we will dive further into the implications of Softbank’s purchase of Boston Dynamics with Dr. Howard Morgan/First Round Capital, Tom Ryden/MassRobotics and Dr. Eric Daimler/Obama White House at RobotLabNYC’s event on 6/13 @ 6pm WeWork Grand Central (RSVP).

]]>
The Uncanny Valley of human-robot interactions https://robohub.org/the-uncanny-valley-of-human-robot-interactions/ Fri, 02 Jun 2017 10:16:26 +0000 http://robohub.org/the-uncanny-valley-of-human-robot-interactions/ Read More ›]]>

The device named "Spark" flew high above the man on stage, his hands waving in the direction of the flying object. In a demonstration of DJI's newest drone, the audience marveled at the Coke-can-sized device's most compelling feature: gesture controls. Instead of a traditional remote control, this flying selfie machine follows hand movements across the sky. Gestures are the most innate language of mammals, and including robots in our primal movements means we have reached a new milestone of co-existence.
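The idea behind a gesture interface like this can be sketched as a simple mapping from recognized hand poses to flight commands. Everything below is illustrative, not DJI's actual SDK: the gesture names, commands, and thresholds are all hypothetical.

```python
# Illustrative sketch of a gesture-to-command layer for a small drone.
# All names are hypothetical; a real system would sit atop a vision
# model that emits (gesture, confidence) pairs each frame.

GESTURE_COMMANDS = {
    "palm_open": "follow",      # track the open palm across the frame
    "wave": "land",
    "frame_fingers": "photo",   # rectangle gesture triggers a selfie
    "palm_push": "retreat",
}

def interpret(gesture, confidence, threshold=0.8):
    """Map a recognized gesture to a flight command, ignoring shaky detections."""
    if confidence < threshold:
        return "hover"          # default to a safe behavior on low confidence
    return GESTURE_COMMANDS.get(gesture, "hover")

print(interpret("wave", 0.95))  # land
print(interpret("wave", 0.50))  # hover
```

Defaulting unknown or low-confidence input to a safe hover, rather than rejecting it, is the key design choice here: a mis-read gesture should never produce a dangerous maneuver.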

Madeline Gannon of Carnegie Mellon University is the designer of Mimus, a new gesture controlled robot featured in an art installation at The Design Museum in London, England. Gannon explained: “In developing Mimus, we found a way to use the robot’s body language as a medium for cultivating empathy between museum-goers and a piece of industrial machinery. Body language is a primitive yet fluid means of communication that can broadcast an innate understanding of the behaviors, kinematics and limitations of an unfamiliar machine.” Gannon wrote about her experiences recently in the design magazine Dezeen: “In a town like Pittsburgh, where crossing paths with a driverless car is now an everyday occurrence, there is still no way for a pedestrian to read the intentions of the vehicle…it is critical that we design more effective ways of interacting and communicating with them.”

So far, the biggest commercially deployed advances in human-robot interaction have been conversational agents from Amazon, Google, and Apple. While natural language processing has broken new ground in artificial intelligence, the social science of its acceptability in our lives might be its biggest accomplishment. Japanese roboticist Masahiro Mori described the danger of making computer-generated voices too indistinguishable from humans as the "uncanny valley." Mori cautioned inventors against building robots that sound (and possibly look) too human, as the result elicits negative emotions best described as "creepy" and "disturbing."

Recently, many toys have embraced conversational agents as a way of building greater bonds and increasing the longevity of play with kids. Barbie's digital speech scientist, Brian Langner of ToyTalk, described the experience of crossing into the "Uncanny Valley": "Jarring is the way I would put it. When the machine gets some of those things correct, people tend to expect that it will get everything correct."

Kate Darling of MIT’s Media Lab, whose research centers on human-robot interactions, suggested that “if you get the balance right, people will like interacting with the robot, and will stop using it as a device and start using it as a social being.”

This logic inspired Israeli startup Intuition Robotics to create ElliQ—a bobbing-head (eyeless) robot. The purpose of the animatronics is to help its customer base of elderly patients overcome their phobias of technology. According to Intuition Robotics' CEO, Dor Skuler, the range of motion coupled with a female voice helps create a bond between the device and its user. Don Norman, usability designer of ElliQ, said: "It looks like it has a face even though it doesn't. That makes it feel approachable."

Mayfield Robotics decided to add cute R2-D2-like sounds to its newest robot, Kuri. Mayfield hired former Pixar designers Josh Morenstein and Nick Cronan of Branch Creative with the sole purpose of making Kuri more adorable. To accomplish this mission, Morenstein and Cronan gave Kuri eyes, but not a mouth, as that would be, in their words, "creepy." Cronan shares the challenges of designing the eyes: "Just by moving things a few millimeters, it went from looking like a dumb robot to a curious robot to a mean robot. It became a discussion of, how do we make something that's always looking optimistic and open to listen to you?" Kuri bears a remarkable similarity to Morenstein and Cronan's earlier theatrical robot, EVA.

At the far extreme of making robots act and behave human, RealDoll has been promoting six-thousand-dollar sex robots. To many, RealDoll has crossed the "Uncanny Valley" of creepiness with sex dolls that look and talk like humans. In fact, there is a growing grassroots campaign to ban RealDoll's products globally, on the grounds that they endanger the very essence of human relationships. Florence Gildea writes on the organization's blog: "The personalities and voices that doll owners project onto their dolls is pertinent for how sex robots may develop, given that sex doll companies like RealDoll are working on installing increasing AI capacities in their dolls and the expectation that owners will be able to customize their robots' personalities." The example given is how a doll expresses her "feelings" for her owner on Twitter:

Obviously, a robot companion has no feelings; it is a projection of the doll owner's. Owners anthropomorphize their dolls to sustain the fantasy that the dolls have feelings for them. As Gildea writes, "The Twitter accounts seemingly manifest the dolls' independent existence so that their dependence on their owners can seem to signify their emotional attachment, rather than it following inevitably from their status as objects. Immobility, then, can be misread as fidelity and devotion." The implication of this behavior is that the female companion, albeit mechanical, enjoys "being dominated." The fear the Campaign Against Sex Robots expresses is that the objectification of women (even robotic ones) reinforces problematic human sexual stereotypes.

Today, with technology at our fingertips, there is a growing phenomenon of preferring one-directional device relationships over complicated human encounters. MIT Social Sciences Professor Sherry Turkle writes in her essay, Close Engagements With Artificial Companionship, that "over-stressed, overworked, people claim exhaustion and overload. These days people will admit they'd rather leave a voice mail or send an email than talk face-to-face. And from there, they say: 'I'd rather talk to the robot. Friends can be exhausting. The robot will always be there for me. And whenever I'm done, I can walk away.'"

In the coming years, humans will communicate more and more with the robots in their lives, from the home to the office to leisure time. The big question will not be the technical barriers, but the societal norms that evolve to accept Earth's newest species.

"What do we think a robot is?" asked designer Don Norman. "Some people think it should look like an animal or a person, and it should move around. Or it just has to be smart, sense the environment, and have motors and controllers."

Norm’s answer, like beauty, could be in the eye of the beholder.

]]>
What has twenty years of RoboCup taught us? https://robohub.org/what-has-twenty-years-of-robocup-taught-us/ Tue, 23 May 2017 09:54:53 +0000 http://robohub.org/what-has-twenty-years-of-robocup-taught-us/ Read More ›]]>

In 1985, a twenty-two year old Garry Kasparov became the youngest World Chess Champion. Twelve years later, he was defeated by the only player capable of challenging the grandmaster, IBM’s Deep Blue. That same year (1997), RoboCup was formed to take on the world’s most popular game, soccer, with robots. Twenty years later, we are on the threshold of accomplishing the biggest feat in machine intelligence: a team of fully autonomous humanoids beating human players at FIFA World Cup soccer.

Many of the advances behind today’s influx of autonomous vehicles and machine intelligence are the result of decades of competitions. While Deep Blue and AlphaGo have beaten the world’s best players at board games, soccer demands real-world complexities (see chart) in order to best humans on the field. This requires RoboCup teams to combine a number of mechatronic technologies within a humanoid device, such as real-time sensor fusion, reactive behavior, strategy acquisition, deep learning, real-time planning, multi-agent systems, context recognition, vision, strategic decision-making, motor control, and intelligent robot control.

 

Professor Daniel Lee of the University of Pennsylvania’s GRASP Lab described the RoboCup challenge best: “Why is it that we have machines that can beat us in chess or Jeopardy but we can beat them in soccer? What makes it so difficult to embody intelligence into the physical world?” Lee explains, “It’s not just the soccer domain. It’s really thinking about artificial intelligence, robotics, and what they can do in a more general context.”

RoboCup has become so important that the challenge of soccer has now expanded into new leagues focused on commercial endeavors, from social robotics to search & rescue to industrial applications. These leagues have a number of subcategories of competition with varying degrees of difficulty. In less than two months, international teams will convene in Nagoya, Japan for the twenty-first games. As a preview of what to expect, let’s review some of last year’s winners. And just maybe it will give us a peek at the future of automation.

RoboCup Soccer

While Iran’s human soccer team is ranked 28th in the world, its robot counterpart (Baset Pazhuh Tehran) won 1st place in the AdultSize Humanoid competition. Baset’s secret sauce is its proprietary algorithms for motion control, perception, and path planning. According to Baset’s team description paper, the key was building a “fast and stable walk engine” that built on the lessons of past competitions. The walk engine fuses “all actuators’ data in each joint” and adjusts the inverse and forward kinematics to counter external forces on the robot’s stability, a feature that plays an important role in keeping the robot standing when it collides with other robots or obstacles. Another big factor was the goalkeeper, which used a stereo vision sensor to detect incoming plays, giving the team “better percepts of goal poles, obstacles, and the opponent’s goalkeeper” and the ability to locate each object in the robot’s own coordinate system. The team is part of a larger Iranian corporation, Baset, that could deploy this perception technology in the field: its oil and gas clients could benefit from better localization and object recognition for pipeline inspections and autonomous work vehicles. If RoboCup’s humanoids are truly to play humans at soccer by 2050, one has to wonder whether Baset’s mechanical players will spend their offseasons working on the Arabian Peninsula.
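A walk engine like Baset’s pairs forward kinematics (joint angles to foot position) with inverse kinematics (the reverse). As a minimal sketch of what that inverse step involves, here is the textbook solution for a two-link planar leg; the link lengths and target point are invented for illustration, and this is not Baset’s code:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (hip, knee) placing the foot of a two-link planar
    leg at (x, y), using the law of cosines (knee-down solution)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    knee = math.acos(c2)
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

def two_link_fk(hip, knee, l1, l2):
    """Forward kinematics: foot position for the given joint angles."""
    x = l1 * math.cos(hip) + l2 * math.cos(hip + knee)
    y = l1 * math.sin(hip) + l2 * math.sin(hip + knee)
    return x, y
```

A real walk engine runs this kind of solve continuously, adjusting the target foot position as sensors report pushes and collisions.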

RoboCup Rescue 


In 2001, the RoboCup organization added simulated rescue to the course challenge, paving the way for many life-saving innovations already being embraced by first responders. The course starts in a simulated earthquake environment in which the robot performs a 20-minute search and rescue mission. Robots are graded on overcoming a series of obstacles designed to assess autonomous operation, mobility, and object manipulation. Points are awarded for the number of victims found, the details gathered about them, and the quality of the area mapped. In 2016, students from King Mongkut’s University of Technology North Bangkok won first place with their Invigorating Robot Activity Project (iRAP).

Similar to Baset, iRAP’s success was largely built on solving problems from previous contests, where the team consistently placed in the top tier. The team fielded a total of four robots: one autonomous robot, two tele-operated robots, and one aerial drone. Each robot carried multiple sensors providing critical data, such as CO2 levels, temperature, positioning, 2D mapping, images, and two-way communications. iRAP’s devices navigated the test environment’s rough surfaces, hard terrain, rolling floors, stairs, and inclines with remarkable ease. The most impressive performer was the caged quadcopter, which used enhanced sensors to localize itself within an outdoor search perimeter. According to the team’s description paper, “we have developed the autonomously outdoor robot that is the aerial robot. It can fly and localize itself by GPS sensor. Besides, the essential sensors for searching the victim.” It is interesting to note that the Thai team’s design was remarkably similar to Flyability’s Gimball, which won first place in the UAE’s 2015 Drones for Good competition. Like the RoboCup winner, the Gimball was designed specifically for search & rescue missions using a lightweight carbon fiber cage.

As RoboCup contestants push the envelope of navigation and mapping technologies, it is quite possible that the 2017 fleet will include subterranean devices able to find victims within minutes of their being buried.

RoboCup @Home

The home, like soccer, presents one of the most chaotic environments for robots to operate in successfully. It is also one of the biggest areas of interest for consumers. Last year, RoboCup @Home celebrated its 10th anniversary by bestowing the top accolade on Team Bielefeld (ToBI) of Germany. ToBI built a humanoid-like robot capable of learning new skills through natural language in unknown environments. According to the team’s paper, “the challenge is two-fold. On the one hand, we need to understand the communicative cues of humans and how they interpret robotic behavior. On the other hand, we need to provide technology that is able to perceive the environment, detect and recognize humans, navigate in changing environments, localize and manipulate objects, initiate and understand a spoken dialogue and analyse the different scenes to gain a better understanding of the surrounding.” In order to achieve these ambitious objectives, the team created a Cognitive Interaction Toolkit (CITK) to support an “aggregation of required system artifacts, an automated software build and deployment, as well as an automated testing environment.”

Infused with this proprietary software, the team’s primary robot, the Meka M1 Mobile Manipulator, demonstrated the latest developments in human-robot interaction within the domestic setting. The team showcased how the Meka was able to open previously shut doors, navigate safely around a person blocking its way, and recognize and grasp many household objects. According to the team, “the robot skills proved to be very effective for designing determined tasks, including more script-like tasks, e.g. ’Follow-Me’ or ’Who-is-Who’, as well as more flexible tasks including planning and dialogue aspects, e.g. ’General-PurposeService-Robot’ or ’Open-Challenge’.”

RoboCup @Work

The @Work category debuted in 2016, with the German team from Leibniz Universität Hannover (LUHbots) winning first place. While LUHbots’ hardware was mostly off-the-shelf (a KUKA youBot mobile manipulator), the software utilized a number of proprietary algorithms. According to the paper, “in the RoboCup we use this software e.g. to grab objects using inverse kinematics, to optimize trajectories and to create fast and smooth movements with the manipulator. Besides the usability the main improvements are the graph based planning approach and the higher control frequency of the base and the manipulator.” The key to using this approach in a factory setting is robust object recognition. As the paper explains, the robot measures the speed and position of a moving object, calculates the point and time at which the object will reach the pick-up location, and moves the arm above that point. The arm then waits, accelerates until it is directly above the moving object at matching speed, and overlaps the downward motion with the object’s velocity until it grips the object. The advantage of this approach is that, as long as the calculated position and speed are correct, objects of any orientation, and much taller objects, can be gripped.
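Stripped of the manipulator details, the grasp-on-a-conveyor problem LUHbots describes reduces to a constant-velocity intercept: predict when and where the object crosses the pick line, then have the arm waiting there. A hedged sketch of that arithmetic, in an invented conveyor frame where the belt runs along x (this is not the team’s code):

```python
def intercept(obj_pos, obj_vel, pick_x):
    """Time and y-position at which an object moving with constant
    velocity crosses the pick line x = pick_x (conveyor frame)."""
    px, py = obj_pos
    vx, vy = obj_vel
    if vx <= 0:
        raise ValueError("object never reaches the pick line")
    t = (pick_x - px) / vx        # time until the object arrives
    return t, py + vy * t         # where it will be when it does
```

The real system then matches the arm’s speed to the object’s before descending, so the relative velocity at the moment of gripping is near zero.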

Similar to other finalists, LUHbots’ object recognition software became the determining factor in its success. RoboCup’s goal of playing World Cup soccer with robots may seem trivial, but the practice is anything but meaningless. In each category, the advances developed on the field of competitive science are paying real dividends on a global scale across many industries.

In the words of the RoboCup mission statement: “The ultimate goal is to ‘develop a robot soccer team which beats the human world champion team.’ (A more modest goal is ‘to develop a robot soccer team which plays like human players.’) Needless to say, the accomplishment of the ultimate goal will take decades of effort. It is not feasible with current technologies to accomplish this in the near future. However, this goal can easily lead to a series of well-directed subgoals. Such an approach is common in any ambitious or overly ambitious project. In the case of the American space program, the Mercury project and the Gemini project, which manned an orbital mission, were two precursors to the Apollo mission. The first subgoal to be accomplished in RoboCup is ‘to build real and software robot soccer teams which play reasonably well with modified rules.’ Even to accomplish this goal will undoubtedly generate technologies, which will impact a broad range of industries.”

* Editor’s note: thank you to Robohub for providing a twenty-year history of RoboCup videos. 

]]>
Drones land back to Earth at Xponential 2017 https://robohub.org/drones-land-back-to-earth-at-xponential-2017/ Mon, 15 May 2017 14:50:56 +0000 http://robohub.org/drones-land-back-to-earth-at-xponential-2017/ Read More ›]]>

PhoneDrone Ethos, Kickstarter campaign. Credit: xCraft/YouTube

JD Claridge’s story epitomizes the current state of the drone industry. Claridge, founder of xCraft, is best known for being the first contestant on Shark Tank to receive money from all the Sharks – even Kevin O’Leary! Walking the floor of Xponential 2017, the annual convention of the Association for Unmanned Vehicle Systems International (AUVSI), Claridge remarked to me how the drone industry has grown up since his TV appearance.

Claridge has gone from pitching cellphone cases that turn into drones (aka the PhoneDrone) to solving mission-critical problems. The age of fully autonomous flight is near, and the drone industry is finally recovering from the hangover of overhyped Kickstarter videos (see the Lily drone’s $34 million fraud). xCraft’s pivot to lightweight, power-efficient enterprise drones is an example of this evolved marketplace. During the three days of Xponential 2017, several far-reaching announcements were made between stalwarts of the tech industry and aviation startups. Claridge introduced me to his new partner, Rajant, a leader in industrial wireless networks. xCraft’s latest models utilize Rajant’s mesh networks to launch swarms of drones from a single controller. Flying more drones simultaneously lets users work around the flight-time limits of lithium batteries by covering greater areas within a single mission.

Bob Schena, Rajant’s CEO, said, “Rajant’s network technology now makes it possible for one pilot to operate many aircrafts concurrently, with flight times of 45 minutes. We’re pleased to partner with xCraft and bring more intelligence, mobility and autonomy to UAV communication infrastructures covering greater aerial distances while supporting various drone payloads.”

The battery has been the Achilles’ heel of the small drone industry since its inception. While large winged craft rely heavily on fossil fuels, multirotor battery-operated drones have been plagued by mission times of under 45 minutes. Innovators like Claridge are leading the way for a new wave of creative solutions:

Solar Powered Wings 

Airbus showcased its Zephyr drone products, or HAPS (High Altitude Pseudo-Satellite) UAVs, which use solar-powered wings. Zephyr UAVs can fly for months at a time, saving thousands of tons of fuel. The HAPS platform also offers a number of lightweight payload options, from voice communications to persistent internet to real-time surveillance. Airbus was not the only solar solution on display; there were a handful of Chinese upstarts and solar cell purveyors for retrofitting existing aircraft.

Hybrid Fuel Solutions  

In the Startup Pavilion, William Fredericks of the Advanced Aircraft Company (AAC) demoed a novel technology using a hybrid of diesel fuel and lithium batteries with flexible fixed wings and multirotors, resulting in over 3 hours of flying time. AAC’s prototype, the Hercules (above), is remarkably lightweight and fast. Fredericks is an aircraft designer by trade with 12 designs flying, including NASA’s Greased Lightning, which looks remarkably similar to the Bell Boeing Osprey. The Hercules is available for sale on the company’s website for multiple use cases, including agriculture, first responders, and package delivery. It is interesting to note that a few rows from Fredericks’ booth was his former employer, NASA, promoting its new Autonomy Incubator for “intelligent flight systems” and its “autonomy innovation lab” (definitely an incubator to watch).

Vertical Take Off & Landing

In addition to hybrid fuel strategies, entrepreneurs are also rethinking launch procedures. AAC’s Hercules and xCraft’s commercial line of drones take off vertically to reduce wind resistance and conserve energy. Australian startup Iridium Dynamics takes this approach to a new level with astonishing results. Its winged craft, the Halo, uses a patent-pending “hover thrust” maneuver of the entire craft, so its wings create the vertical lift needed to hover with minimal power; two rotors drive it in horizontal flight. According to Dion Gonano, control systems engineer, it can fly for over 2 hours. The Halo also lands vertically onto a stationary mechanical arm. While the website lists a number of commercial applications for this technology, it was unclear from my discussions with Gonano whether they have deployed it in real tests.

New Charging Efficiencies

Prior to Xponential, Seattle-based WiBotic announced the closing of its $2.5 million seed round to fund its next generation of battery charging technologies. The company has created a novel approach to wireless inductive charging for robotics: its platform includes a patent-pending auto-detect feature that begins recharging once a robot enters the proximity of the base station, even during flight. According to CEO Dr. Ben Waters, its charge is faster than traditional solutions presently on the market. Dr. Waters demonstrated the company’s suite of software tools that monitor battery performance, providing clients with a complete power-management analytics platform. WiBotic is already piloting its technology with leading commercial customers in the energy and security sectors. While WiBotic focuses on inductive charging, other companies have created innovative battery-swapping techniques. Airobotics’ drone storage box, currently deployed at power plants in Israel, houses a robotic arm that services the drone after each flight by swapping out the payload and battery.

Reducing Payload Weight

In addition to aircraft design, payload weight is a big factor in battery drain. A growing trend within the industry is miniaturizing the size and cost of components. Ultimately, a drone’s mission is directly tied to its payload, from cameras for collecting images to precise measurements using Light Detection and Ranging (Lidar) sensors. Lidar is typically deployed in autonomous vehicles to give the robot a precise position in a crowded area, like a self-driving car on the road. However, Lidar is currently too expensive and bulky for many multirotor surveys. Chris Brown of Z-Senz, a former scientist with the National Institute of Standards and Technology (NIST), hopes to change the landscape of drones with his miniaturized Lidar sensor. Brown’s SKY1 offers major advantages in size, weight, and power consumption without sacrificing the accuracy of long-distance sensing. A recent study estimates the Lidar market will exceed $5 billion by 2022, with Velodyne and Quanergy already attracting significant investment. Z-Senz is aiming to be commercially available by 2018.
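Lidar ranging itself rests on simple time-of-flight arithmetic: a laser pulse travels to the target and back at the speed of light, so range is half the round-trip distance. The hard (and expensive) part is timing that round trip precisely. As a generic illustration, not Z-Senz’s implementation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance(round_trip_seconds):
    """Range from a time-of-flight Lidar pulse.

    The pulse travels out and back, so the one-way distance is
    half the total path length.
    """
    return C * round_trip_seconds / 2.0
```

A 200-nanosecond round trip, for example, corresponds to a target roughly 30 meters away, which is why picosecond-level timing is needed for centimeter accuracy.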

Lidar is not the only measuring methodology; the Global Positioning System (GPS) is deployed widely. Two of the finalists of the Xponential Startup Showdown were startups focused on shrinking GPS chips while increasing their functionality. Inertial Sense has produced a chip the size of a dime that houses an Inertial Measurement Unit (IMU), an Attitude Heading Reference System (AHRS), and a GPS-aided Inertial Navigation System (INS). Their website claims that their “advanced algorithms fuse output from MEMs inertial sensors, magnetometers, barometric pressure, and a high-sensitivity GPS (GNSS) receiver to deliver fast, accurate, and reliable attitude, velocity, and position even in the most dynamic environments.” The chips and micro navigation accessories are available on the company’s e-store.
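Fusing a gyro’s fast-but-drifting rate signal with a slow-but-stable reference angle (from an accelerometer sensing gravity) is the classic entry point to the kind of sensor fusion Inertial Sense advertises. The complementary filter below is a generic textbook example, not Inertial Sense’s algorithm; the gain and sample values are arbitrary:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: integrate the gyro for short-term accuracy,
    and lean on the accelerometer's gravity-derived angle to cancel
    the gyro's long-term drift."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Toy run: a stationary sensor whose gyro reads zero but whose
# accelerometer says the true pitch is 0.1 rad. The estimate
# converges toward the accelerometer's reference.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=0.1, dt=0.01)
```

Production INS units use far more sophisticated estimators (typically Kalman filters), but the trade-off being balanced, short-term versus long-term trust in each sensor, is the same.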

The winner of the Showdown, uAvionix, is a leading developer of avionics for both manned and unmanned flight. Its new transceivers and transponders claim to be “the smallest, and lightest and most affordable on the market” (GPS itself is already a commodity). uAvionix presented its Ping Network System, which it says “reduces weight on average by 40% as compared to the two-piece installations.” The Ping products also claim barometric altitude precision at altitudes beyond 80,000 ft.

Paul Beard, CEO of uAvionix, said, “our customers have asked for even smaller and lighter solutions; integrating the transceivers, GPS receivers, GPS antennas, and barometric pressure sensors into a single form factor facilitates easier installation and lowers weight and power draw requirements resulting in a longer usable flight time.”

As I rushed to the airport to catch my manned flight, I felt reenergized about the drone industry, although follies will persist. I mean who wouldn’t want a pool deckchair drone this summer?

This and all other autonomous subjects will be explored at RobotLabNYC’s next event with Dr. Howard Morgan (FirstRound Capital) and Tom Ryden (MassRobotics) – RSVP.

]]>
An AI Primer for mechatronics https://robohub.org/an-ai-primer-for-mechatronics/ Thu, 11 May 2017 13:24:01 +0000 http://robohub.org/an-ai-primer-for-mechatronics/ Read More ›]]>

Ex Machina. Source: YouTube/Universal Pictures

This week I attended an “Artificial Intelligence (AI) Roundtable” of leading scientists, entrepreneurs, and venture investors. As the discussion focused mainly on basic statistical techniques, I left feeling unfulfilled. My friend Matt Turck recently wrote that “just about every major tech company is working very actively on AI,” which also means that every startup hungry for capital is purchasing a dot-ai domain name. As the lines blur between what is and what really isn’t AI, I feel it necessary to give readers a quick lens for viewing intelligent agents in mechatronics.

For 65 years, the Turing Test went unbeaten, until a computer program called “Eugene Goostman” conquered it in 2014. The chatbot, which simulates a 13-year-old Ukrainian boy, did the unthinkable: it fooled a panel of human judges into believing they were conversing with a real person on the other side of the screen. Alan Turing’s original thesis in developing the schema was to test the premise, “Can machines think?”

According to the competition’s organizer, Kevin Warwick of Coventry University,
“The words Turing test have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. We are therefore proud to declare that Alan Turing’s test was passed for the first time.” Since then, many have debated whether the threshold was indeed crossed then, or before, or ever…

Last December, the Turing Test’s sound barrier (of sorts) was broken by a group of researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). Using a deep-learning algorithm, the AI fooled human viewers into believing its synthesized sounds were real. As a silent video clip played, the program produced audio realistic enough to fool even the most hardened audiophiles. According to the research paper, the authors envision their algorithms one day being used to automatically produce sound for movies and TV shows, as well as to help robots better understand their environments.

Lead author Andrew Owens said: “being able to predict sound is an important first step toward being able to predict the consequences of physical interactions with the world. A robot could look at a sidewalk and instinctively know that the cement is hard and the grass is soft, and therefore know what would happen if they stepped on either of them.”

Unpacking Owens’ statement uncovers a lot of artificial intelligence theory. The algorithm uses a deep learning approach to automatically train the computer to match sounds to pictures through experience. According to the paper, the researchers spent several months recording close to 1,000 videos of approximately 46,000 sounds, each representing a unique object being “hit, scraped and prodded with a drumstick.”

Typically, artificial intelligence starts with creating an agent to solve a specific problem. Agents are often symbolic or logical, but deep learning approaches like the MIT example use sub-symbolic neural networks that attempt to emulate how the human brain learns. Autonomous devices, like robots, use machine learning approaches to combine algorithms with experience.

AI is very complex and draws on theories from a multitude of disciplines, including computer science, mathematics, psychology, linguistics, philosophy, neuroscience, and statistics. For the purposes of this article, it may be best to group modern approaches into two categories: “supervised” and “unsupervised” learning. Supervised learning mines labeled data to predict target values, much as labeled variables are used to fit predictive models in statistics. In unsupervised learning there is no notion of a target value; instead, algorithms cluster the data into classifications and then determine relationships between inputs and outputs using numerical regression or other filtering methods. The primary difference between the two approaches is that in unsupervised learning the program must automatically connect and label patterns from streams of inputs. In the example above, the algorithm connects the silent video images to the library of drumstick sounds.
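The distinction can be made concrete in a few lines of Python. Below, a supervised nearest-centroid fit is handed its class labels, while a tiny one-dimensional k-means must discover the clusters on its own; both are toy illustrations with invented data, not production algorithms:

```python
def nearest_centroid_fit(points, labels):
    """Supervised: average the points of each *known* class."""
    sums, counts = {}, {}
    for p, c in zip(points, labels):
        sums[c] = sums.get(c, 0.0) + p
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def kmeans_1d(points, k=2, iters=20):
    """Unsupervised: discover k clusters with no labels at all."""
    # Seed centroids by spreading them across the sorted data.
    centroids = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        groups = {i: [] for i in range(k)}
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            groups[i].append(p)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in groups.items()]
    return sorted(centroids)
```

The supervised routine produces one centroid per given label; the unsupervised one arrives at similar centers without ever seeing a label, which is exactly the extra work the paragraph above describes.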

Now that connections are made between patterns and inputs, the next step is to control behavior. The most rudimentary AI applications divide into classifiers (“if shiny then silver”) and controllers (“if shiny then pick up”). It is important to note that controllers also classify conditions before performing actions. Classifiers use pattern matching to determine the closest match. In supervised learning, each pattern belongs to a predefined class, and a data set pairs those class labels with observations received through experience. The more experience the system accumulates, the more classes and training data it has to draw upon.
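The classifier/controller split from the examples above fits in a few lines; the feature and action names here are invented for illustration:

```python
def classify(obj):
    """Classifier: map observed features to a label
    ("if shiny then silver")."""
    return "silver" if obj.get("shiny") else "dull"

def control(obj):
    """Controller: classify the condition first, then choose an
    action ("if shiny then pick up")."""
    return "pick_up" if classify(obj) == "silver" else "ignore"
```

Note that the controller contains the classifier: it must decide what it is looking at before it can decide what to do, which is the point made in the paragraph above.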

In robotics, unsupervised (machine) learning is required for object manipulation, navigation, localization, mapping, and motion planning, all of which become more challenging in unstructured environments such as advanced manufacturing and autonomous driving. As a result, deep learning has given birth to many sub-specialties of AI, such as computer vision, speech recognition, and natural language processing. In 1986, Rodney Brooks professed a new theory of AI that led to the greatest advancement in machine intelligence for robotics: his “subsumption architecture” created a paradigm for real-time learning via live interaction through sensor inputs.

In his own words, Brooks said in a 2015 interview, “The work I did in the 80s on what I called subsumption architecture then led directly to the iRobot Roomba. And there are 14 million of them deployed worldwide. And it was also used in the iRobot PackBot, and there were 4,500 PackBots in Iraq and Afghanistan remediating roadside bombs – and by the way, there are some PackBots and Warriors from iRobot inside Fukushima now, using the subsumption architecture. There’s a variation of subsumption inside Baxter – we call it behaviour-based now, but it’s descendent of that subsumption architecture. And that’s what lets Baxter be aware of different things in parallel, for instance, it’s picked something up, it’s put it in a box, something goes wrong, and it drops the object, sadly. A traditional robot would just continue and sort of mime putting the thing in the box, but Baxter is aware of that changes behaviour – that’s using the behaviour-based approach, which is a variation on subsumption. So it is part of Baxter’s intelligence.”
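The subsumption idea Brooks describes can be sketched in a few lines: behaviors are arranged in priority layers, and a higher layer subsumes (overrides) lower ones whenever its trigger condition fires, exactly the mechanism that lets Baxter notice a dropped object instead of miming the rest of the task. This is an illustrative toy, not iRobot's code, and the behavior names are invented:

```python
class Behavior:
    def __call__(self, sensors):
        """Return an action, or None to pass control to a lower layer."""
        raise NotImplementedError

class AvoidObstacle(Behavior):
    """High-priority layer: reacts whenever an obstacle is sensed."""
    def __call__(self, sensors):
        if sensors.get("obstacle"):
            return "turn_away"
        return None

class Wander(Behavior):
    """Low-priority default layer: always has something to do."""
    def __call__(self, sensors):
        return "drive_forward"

def subsumption_step(layers, sensors):
    """Higher-priority layers subsume (override) lower ones."""
    for layer in layers:          # ordered highest priority first
        action = layer(sensors)
        if action is not None:
            return action
```

Because every layer reads the sensors directly on every step, the robot reacts to the world in real time instead of executing a pre-planned script, the core of Brooks's behavior-based approach.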

Many next-generation computer scientists are focusing on designing not just software that acts like the brain, but hardware that is designed like a human cranium. Today, famous deep learning applications, like Siri and Google Translate, run on traditional computing platforms that consume a lot of energy because the logic and memory boards are separated. Last year, Google’s AlphaGo, the most successful deep learning program to date, was only able to beat the world-champion human Go player after being trained on a database of thirty million moves while running on approximately one million watts of power.

When I asked Dr. Devanand Shenoy, formerly with the U.S. Department of Energy, about Google’s project, he said: “AlphaGo had to be retrained for every new game (a feature of narrow AI where the machine does one thing very well, even better than humans). Once learning and training algorithms are implemented on neuromorphic hardware that is distributed, asynchronous, perhaps event driven, and fault-tolerant, the ability to process data with many orders of magnitude improvements in energy-efficiency as well as superhuman speed across multiple applications could be possible in the future. Recent trends in transfer learning with reservoir computing as an example, suggest that artificial network configurations may be trained to learn more than one application.”


Superhuman intelligence may sound super scary; however, I am sobered by the 1960s prediction of artificial intelligence pioneer Professor Herbert Simon, who stated then: “machines will be capable, within twenty years, of doing any work a man can do.”

As this article only touches the surface of artificial intelligence, this subject matter will be further explored on June 13th in New York City at RobotLabNYC’s next event with Dr. Howard Morgan (FirstRound Capital) and Tom Ryden (MassRobotics). There are limited seats available, so be sure to RSVP today.

]]>
Hardhat bots take over construction sites https://robohub.org/hardhat-bots-takeover-construction-sites/ Tue, 02 May 2017 14:27:11 +0000 http://robohub.org/hardhat-bots-takeover-construction-sites/ Read More ›]]>

Fastbrick’s Hadrian X prototype.

RobotLabNYC’s third installment will be on June 13, in New York City with Howard Morgan (FirstRound Capital) and Tom Ryden (MassRobotics); together, we will be “Exploring The Autonomous Future” (RSVP today). Coincidentally, Jimmy Fallon featured a new bit this week called “Showbotics,” providing viewers with a sneak peek into the robotic future:

While Fallon pokes fun, the reality is that robots today are showing up for work in record numbers. As America threatens to pull out of NAFTA and spars with Canada over lumber imports, home-building costs are predicted to increase by more than 20% over the next year. To keep America building without sacrificing margin, labor is shifting from humans with tool belts to job-ready robots.


An example of machines being added to the field is MIT’s Digital Construction Platform (DCP), a 3D-printing fabrication robot. The DCP consists of a robotic arm that can print structures made of insulating foam at architectural scale, in other words, big enough to live in. In a test last July, MIT researchers printed an open dome structure as large as a ranch house in just two days; for comparison, the average single-family home takes about seven months to build (US Census Bureau).

According to the white paper published this week in Science Robotics, the researchers claim, “this printed test structure is currently both the largest monolithic structure ever 3D-printed by an on-site mobile platform and the fastest autonomously printed architectural-scale structure.”

Shanghai-based WinSun 3D-printed walls last year for a six-story apartment building out of concrete that was assembled on site for immediate occupancy. According to WinSun, they were able “to save 60% of the materials typically needed to construct a home and can be printed in a time span which equates to just 30% of that of traditional construction. In total, 80% less labor is needed, meaning more affordable construction, and less risk of injury to contractors.”

In the USA, Blueprint Robotics is manufacturing wood-framed houses today with industrial robots. Builders upload their architectural plans to Blueprint’s cloud-based platform, which within weeks outputs pre-made walls, floors, and roofs. While the concept of pre-fabricated houses is not novel (Sears, Roebuck & Co. first sold kits to homesteaders in the 1900s), Blueprint is the first to create an automated production line that pre-places everything from sheetrock to nails to lighting. This means modular structures can now be used for bigger and more luxurious developments, from multi-million dollar mansions to large apartment buildings. Bloomberg News reported this week that even Marriott, the world’s largest hotel operator, is “turning to modular construction for some of its properties,” shipping the walls in the same containers as the television sets and beds.

According to leading home-building consultant John Burns, “this has to be the wave of the future — I don’t know how we solve the labor shortage otherwise. What drives modular construction is the ability to build the house more cost-effectively.” The Bureau of Labor Statistics counts almost 200,000 unfilled construction jobs across the country at a boom time for homebuilders. The U.S. construction industry’s unemployment rate is at its lowest level in a decade as fewer young people enter the construction workforce. The door remains wide open for robotic disruption.

Jerry Smalley, CEO of Blueprint, claims his mechanized platform is filling this labor gap. While manual labor is greatly reduced, skilled tradesmen remain in high demand. According to Smalley, “robots cut the hole, but somebody still has to put the electrical box and pipes in the right places.” Blueprint utilizes Stiles machines from Germany-based Weinmann that have been producing pre-fabricated homes in Europe for decades.

Construction globally generates over $8 trillion annually. It is also among the most labor-intensive industries, built on trades like concrete casting, wood framing, and bricklaying. As a result, the construction industry accounts for almost 20% of US workplace fatalities. Masons are particularly prone to musculoskeletal injuries, due to the repetitive strain of carrying and laying over 500 bricks a day by hand.

New York-based Construction Robotics is revolutionizing the masonry trade with its SAM100 (Semi-Automated Mason) robot, capable of working six times faster than the average human. SAM uses a combination of a conveyor belt, robotic arm, and concrete pump. Construction Robotics recently upgraded the product to enable the laying of popular “soldier course” bricks. SAM costs $500,000, roughly the combined annual salary of 10 masons, and is already being deployed on sites throughout America.
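The economics above invite a quick back-of-envelope check. The sketch below uses only the figures in this article ($500,000 price, roughly 10 masons’ combined pay, six times one mason’s speed); the per-mason salary and the single attendant mason are illustrative assumptions, not Construction Robotics’ numbers:

```python
# Hypothetical payback estimate for a SAM100 purchase.
# Article figures: $500,000 price, ~10 masons' combined annual pay, 6x one mason's speed.
# The salary and crew assumptions below are illustrative only.

SAM_COST = 500_000       # purchase price, USD (from the article)
MASON_SALARY = 50_000    # assumed annual pay per mason (500k / 10)
SPEED_MULTIPLE = 6       # SAM does the work of ~6 masons (from the article)
ATTENDANTS = 1           # a mason still works alongside SAM (from the article)

displaced_labor = SPEED_MULTIPLE * MASON_SALARY   # wages SAM's output replaces
retained_labor = ATTENDANTS * MASON_SALARY        # wages still paid alongside SAM
annual_savings = displaced_labor - retained_labor

payback_years = SAM_COST / annual_savings
print(f"Estimated payback: {payback_years:.1f} years")  # -> 2.0 years
```

Under these assumptions the robot pays for itself in about two seasons of steady work, which helps explain why contractors are already deploying it.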

Construction Robotics’ head of business development, Zachary Podkaminer, said, “we don’t see construction sites being fully automated for decades, if not centuries. This is about collaboration between human workers and machines. What SAM does is pick up the bricks, put mortar on them, and put them on the wall. It still requires a mason to work alongside it. SAM’s just there to do the heavy lifting.”

Construction Robotics already has competition from Australia-based Fastbrick Robotics, which claims its Hadrian X prototype places bricks at an impressive rate of 1,000 per hour. The robot can handle different brick sizes and can cut, grind, mill, and route the bricks to fit any structure before putting them in place. Unlike SAM, which uses traditional bricks and mortar, the Hadrian machine is optimized for interlocking precision bricks (15 times larger) and construction adhesive to hold the structure together.

Fastbrick operations manager Gary Paull says his system “will produce structurally sound wall to renderable quality at speeds never seen before.” The company plans to market the Hadrian bricklayer first in Western Australia, then in the rest of the country, before offering it worldwide.

Deployment of task-oriented robots will be advanced further by the coming wave of autonomous vehicles ferrying heavy construction equipment and materials around the site. According to Jenny Elfsberg, director of emerging technologies at Volvo CE, “Autonomous machines will increase safety in hazardous working environments and eliminate the possibility of accidents caused by human error. They will also perform repetitive tasks more efficiently and precisely than a human operator. And because machines will be operated in the most efficient way, customers will benefit from improved performance, productivity, fuel efficiency and durability.”

In a one-hour comparison, the autonomous wheel loader reached about 70% of a skilled operator’s productivity when loading and unloading. The machine has also done ‘real work’ for a Volvo CE customer at an asphalt plant in Sweden.

“In the future, you could also potentially have one operator for three or four machines, increasing productivity and further decreasing costs,” Elfsberg states. “Looking ahead, I imagine that autonomous machines will be smaller and more robust. There will be no need for a cab or suspension – much like the HX1 concept,” which Volvo CE unveiled last year.
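Elfsberg’s fleet-supervision math is worth spelling out: even at 70% of a skilled operator’s output per machine (the figure from Volvo’s one-hour comparison), one human overseeing several autonomous machines quickly out-produces one human in one cab. The per-operator framing below is an illustrative sketch, not Volvo’s published analysis:

```python
# Relative output per human operator, where 1.0 = one skilled operator
# in one manned machine. The 0.70 figure is from Volvo's comparison;
# the supervision ratios are the hypothetical ones Elfsberg mentions.

PER_MACHINE = 0.70  # autonomous loader ~ 70% of a skilled operator

def output_per_operator(machines_supervised: int) -> float:
    """Relative output of one operator supervising n autonomous machines."""
    return machines_supervised * PER_MACHINE

for n in (1, 3, 4):
    print(f"{n} machine(s) per operator -> {output_per_operator(n):.2f}x a manned machine")
# -> 0.70x, 2.10x, 2.80x
```

One machine per operator is a step backward, but at three or four machines per operator, output per person more than doubles, which is the productivity argument in a nutshell.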

Mining company Rio Tinto is deploying a fleet of close to 100 driverless dump trucks (manufactured by Komatsu) to move high-grade ore autonomously. It is also developing and testing an autonomous heavy-haul long-distance railway system, and has deployed an automated blast-hole drill system that allows a single operator to remotely control multiple drill rigs.

According to a Rio Tinto press release, “these driverless vehicles deliver their loads more efficiently, minimizing delays and fuel use, and are controlled remotely by operators who exert more control over their environment and ensure greater operational safety.”

Industry leaders like Caterpillar and GE are taking note, funding innovation at earlier financing rounds than is typical for corporate venture. Last year GE and Caterpillar invested in Clearpath Robotics’ $30 million Series B round to expand its fleet of autonomous mobile robots into industrial settings like construction and mining. Previously, Clearpath’s mobile robots were primarily used in warehouses, moving boxes and pallets around distribution centers.

This past March, Caterpillar co-hosted an industrial startup competition at SXSW’s Interactive Conference with its local dealer Holt. Leading autonomous navigation company 5D Robotics walked away with the $20,000 grand prize. 5D is currently working with United Rentals to convert its fleet of manually operated industrial machines into autonomous mobile robots. While Caterpillar has been working on self-driving trucks internally since the 1990s, reaching outside for innovation could be the difference in dominating the new autonomous landscape.

This new sentiment was reflected in its “Year In Review” investor statement: “For 90 years, Caterpillar has delivered breakthrough innovation inside our machines and engines. Today, that innovation is increasingly happening outside the machine. We’re going ‘beyond the yellow iron,’ harnessing the power of big data to offer our customers insights that decrease operating costs, increase uptime and maximize profitability.”

]]>
NASA spinoffs: Bringing space down to Earth https://robohub.org/nasa-spinoffs-bringing-space-down-to-earth/ Mon, 24 Apr 2017 11:33:13 +0000 http://robohub.org/nasa-spinoffs-bringing-space-down-to-earth/ Read More ›]]>

In a time of “America First,” the benefits of space travel are clouded by the smoke of hyperbole. In reality, there have been over 2,000 inventions courtesy of NASA that are making our lives better here on Earth. Every day, we benefit as much from the journey as from the destination. These innovations include new medicines developed in zero gravity; faster autonomous transportation technologies; and groundbreaking advances in computing (launched above the clouds). The map below illustrates the number of commercial endeavors that have spun out of the nation’s space program since 1976:

NASA has been an incubator of technology since its inception in 1958. This year, NASA unveiled a new web page, NASA City, illustrating its reach into the design of new smart cities and IoT-connected homes. This addition to the already extensive NASA site comes on the heels of President Trump’s proposed budget cut of NASA’s funding that would end earth-centric education and innovations, including protecting the planet from an apocalyptic asteroid.

According to the President’s outline, “the budget increases cooperation with industry through the use of public-private partnerships, focuses on the nation’s efforts on deep space exploration rather than Earth-centric research, and develops technologies that would achieve U.S. space goals and benefit the economy.”

This past week, I visited the Kennedy Space Center to learn firsthand how our lives have been positively changed by the sacrifices of the brave men and women who risked everything for humanity. As a small testament to the lives lost in the pursuit of science, I have included below a list from NASA’s website of the top innovations we use every day.

Health & Medicine

  • LED (Light-Emitting Diodes) Lighting – The LED technology used in space shuttle plant growth experiments has contributed to the development of medical devices to relieve “minor muscle and joint pain, arthritis, stiffness, and muscle spasms, and also promotes muscle relaxation and increases local blood circulation.” In fact, the U.S. Department of Defense now uses LEDs as a “soldier self-care” system to treat minor injuries and pain and to improve endurance in combat.
  • Infrared Ear Thermometers – NASA developed this thermometer using “infrared astronomy technology to measure the amount of energy emitted by the eardrum, the same way the temperature of stars and planets are measured.”
  • Artificial Limbs – NASA’s work in space robotics and extravehicular activities has now been applied to create more “functionally dynamic artificial limbs.” Additionally, NASA-developed temper foam prevents friction between the skin and a prosthetic.
  • Ventricular Assist Device – The ventricular assist device (VAD) functions as a “bridge to heart transplant” by pumping blood throughout the body to keep critically ill patients alive until a donor heart is available. “Weighing less than 4 ounces and measuring 1 by 3 inches, the pump is approximately one-tenth the size of other currently marketed pulsatile VADs, making it ideal for children.”

Transportation

  • Anti-Icing Systems – NASA scientists developed Thermawing, a thermoelectric deicing system, and Thermacool, a DC-powered air conditioner for single-engine aircraft. This allows “pilots to safely fly through ice encounters and provides pilots of single-engine aircraft the heated wing technology usually reserved for larger, jet-powered craft.”
  • Highway Safety – The cutting of grooves in concrete to increase traction and prevent injury was first developed for the Space Shuttle to reduce aircraft accidents on wet runways. It was later expanded into highway and pedestrian applications.
  • Improved Radial Tires – “Goodyear Tire and Rubber Company developed a fibrous material, five times stronger than steel, for NASA to use in parachute shrouds to soft-land the Vikings on the Martian surface.” Goodyear later added this technology to its new radial tire to increase tread life by over 10,000 miles.
  • Chemical Detection – Moisture and pH-sensitive sensors were created to warn NASA of potentially dangerous corrosive conditions in the Space Shuttle before significant structural damage occurred. This new type of sensor was later adapted by the Department of Defense for detecting chemical warfare agents and potential threats, such as toxic industrial compounds and nerve agents.

Public Safety

  • Video Enhancing and Analysis Systems – Video Image Stabilization and Registration technology used by NASA in space is now in the hands of FBI agents to analyze video footage.  Aside from law enforcement and security applications, the technology has also been adapted to serve military professionals for reconnaissance, weapons deployment, damage assessment, training, and mission debriefing.
  • Land Mine Removal – NASA utilizes its surplus rocket fuel “to produce a flare that can safely destroy land mines” while also managing “to reduce propellant waste without negatively impacting the environment.” The flare uses solid rocket fuel to burn a hole in the mine’s case and burn away the explosive contents, disarming the mine without detonation.
  • Fire-Resistant Reinforcement – “The Apollo heat shield was coated with a material whose purpose was to burn and thus dissipate energy during reentry while charring, to form a protective coating to block heat penetration.” Now the heat shield coating is being used in fire-retardant paints and foams for aircraft, including a new “intumescent epoxy material” that expands when exposed to heat.
  • Firefighter Gear – Firefighting equipment widely used throughout the United States is based on a NASA development that coupled Agency design expertise with lightweight materials developed for the U.S. Space Program.

Consumer, Home, & Recreation

  • Temper Foam – “As the result of a program designed to develop a padding concept to improve crash protection” for the Shuttle and airplane passengers, NASA developed temper foam or “memory foam.” The material is now in most mattresses and pillows.
  • Enriched Baby Food – “Commercially available infant formulas now contain a nutritional enrichment ingredient that traces its existence to NASA-sponsored research that explored the potential of algae as a recycling agent for long-duration space travel.” The ingredient can now be found in over 90% of infant formulas sold worldwide.
  • Portable Cordless Vacuums – Cordless drills and other appliances were created by Black & Decker for the Apollo and Gemini space missions. “For the Apollo space mission, NASA required a portable, self-contained drill capable of extracting core samples from below the lunar surface.” This invention later led to the development of the Dustbuster.
  • Freeze Drying Technology – “In planning for the long-duration Apollo missions, NASA conducted extensive research into space food. One of the techniques developed was freeze drying.” The benefits are remarkable: freeze-dried food keeps 98% of its nutrients while weighing 80% less, and the technique is now used to help homebound seniors maintain a healthy diet.

Environmental & Agricultural Resources

  • Harnessing Solar Energy – Solar energy was first utilized in space, and now “homes across the country are now being outfitted with modern, high-performance, low-cost, single crystal silicon solar power cells that allow them to reduce their traditional energy expenditures and contribute to pollution reduction.”
  • Pollution Remediation – “A product using NASA’s microencapsulating technology is available to consumers and industry enabling them to safely and permanently clean petroleum-based pollutants from water.” This technology was deployed in the cleanup of catastrophic oil spills such as BP’s Deepwater Horizon and the Exxon Valdez.
  • Water Purification – As part of designing a complex system of devices intended to sustain the astronauts living on the International Space Station and, in the future, those who go on to explore Mars, NASA has been turning wastewater from respiration, sweat, and urine into drinkable water for years. This technology is now available for underdeveloped countries where well water may be heavily contaminated.

Computer Technology

  • Better Software – NASA is collaborating with tech companies like Google and InterSense “to solve a variety of challenging technical problems ranging from large-scale data management and massively distributed computing, to human-computer interfaces—with the ultimate goal of making the vast, scattered ocean of data more accessible and usable” both in space and on Earth.
  • Structural Analysis – NASA has created thousands of computer programs over the decades to design, test, and analyze stress, vibration, and acoustical properties of a broad assortment of aerospace parts and structures (before prototyping even begins). The NASA Structural Analysis Program is now used to design everything from Cadillacs to roller coasters.
  • Refrigerated Internet-Connected Wall Ovens – Everything on the International Space Station is monitored and controlled via the Internet, even the oven named “ConnectIo.” ConnectIo is unique as it enables a “user to simply enter the dinner time, and the oven automatically switches from refrigeration to the cooking cycle,” so the meal is ready when the astronaut (or Earth-bound human) is at the table.

Industrial Productivity

  • Powdered Lubricants – NASA developed a solid lubricant coating material originally for its aeropropulsion engines, refrigeration compressors, turbochargers, and hybrid electrical turbogenerators. This technology is now saving industrial companies millions of dollars by improving efficiency, lowering friction, and reducing emissions in widespread industrial equipment.
  • Improved Mine Safety – “An ultrasonic bolt elongation monitor developed by a NASA scientist for testing tension and high-pressure loads on bolts” is now used for a plethora of applications from the evaluation of railroad ties to groundwater analysis to radiation dosimetry to medical testing for intracranial pressure.
  • Food Safety Systems – “Faced with the problem of how and what to feed an astronaut in a sealed capsule under weightless conditions while planning for human space flight, NASA enlisted the aid of The Pillsbury Company to address two principal concerns: eliminating crumbs of food that might contaminate the spacecraft’s atmosphere and sensitive instruments, and assuring absolute freedom from potentially catastrophic disease-producing bacteria and toxins.” Today HACCP is utilized by the U.S. Food and Drug Administration for the handling of seafood, juice, and dairy products.

NASA’s latest private-sector partnership extends into the autonomous mobility future. Last year, the space agency teamed up with automaker Nissan to integrate and test, in consumer-ready cars, autonomous technologies that have been deployed for over two decades on its Mars rovers. The partnership’s goal is to have a fully autonomous car on the road by 2020.

According to Nissan CEO, Carlos Ghosn, “The work of NASA and Nissan—with one directed to space and the other directed to earth—is connected by similar challenges. The partnership will accelerate Nissan’s development of safe, secure and reliable autonomous drive technology that we will progressively introduce to consumers beginning in 2016 up to 2020.”

The two organizations have cooperated on technological development in the past. For instance, Nissan used NASA’s research on neutral body posture in low-gravity conditions to develop more comfortable car seats. But hardware and software for self-driving cars could prove to be some of the most transformative technologies to reach mainstream acceptance in the coming years. Nissan joins Google’s Waymo on the Ames Research Center autonomous testing track.

In addition to the Ames facility, the Kennedy Space Center announced last month that it will convert the “Shuttle Landing Facility runway” into a controlled testing ground for driverless cars, as part of its contribution to the Central Florida Automated Vehicle Partnership. The announcement comes on the heels of NASA’s second showing at the Consumer Electronics Show (CES) in Las Vegas in January, where the agency made a point of displaying its tech transfer program for autonomous cars and robotics.


Terry Fong, director of the NASA Ames Intelligent Robotics Group said, “NASA’s open-source software, VERVE (Visual Environment for Remote and Virtual Exploration), provides the foundation for human-robot teaming in Nissan’s new ‘Seamless Autonomous Mobility’ (SAM) system and CES is a great venue for showcasing this spin-off technology to the general public. VERVE allows humans to provide assistance to autonomous vehicles in unpredictable and difficult situations when the vehicles cannot solve the problem themselves.”



See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

]]>
The startup Launch Pad Competition blasts off at Automate 2017 https://robohub.org/the-startup-launch-pad-competition-blasts-off-at-automate-2017/ Thu, 13 Apr 2017 10:00:30 +0000 http://robohub.org/the-startup-launch-pad-competition-blasts-off-at-automate-2017/ Read More ›]]>

What do Magic Johnson and a twenty-foot robot have in common? You guessed it: Automate 2017. While this might seem like an odd pairing, it accurately reflects the current state of the robotics industry. 2017 is already on pace to beat last year’s $19 billion investment record, with the recent announcements of Intel’s $15B purchase of Mobileye; ABB’s $2B acquisition of Bernecker & Rainer; and Ford’s $1B investment in Argo AI.

The excitement inside Chicago’s McCormick Center on Wednesday was palpable, as Automate 2017 brought together not just established industrial leaders but also innovative startups at The Launchpad Competition. Eight mechatronic entrepreneurs competed to impress the judges (Chris Moehle of The Robotics Hub, Steve Taub of GE Ventures, Melonee Wise of Fetch Robotics, and yours truly) in the hopes of winning a $10,000 prize from GE. Andra Keay of Silicon Valley Robotics, the afternoon’s Master of Ceremonies, kicked off the event by reminding the audience that past participants have gone on to change the robotic landscape and to raise tens of millions of dollars from investors. As we peek into the robotic crystal ball, please meet this year’s contestants:


Andros Robotics: Teaching humans to walk again with robots

Maciej Pietrusinski, CEO of Andros Robotics, shared with the audience the history of his invention (force-controlled actuators), which began as a PhD thesis on “The Robotic Gait Rehabilitation Trainer” in Northeastern University’s Biomedical Mechatronics Laboratory. The goal of Andros is to build low-cost service robots for the medical rehabilitation market. To achieve this objective, the company is selling its proprietary force-feedback actuators to academic labs and other robot companies; in the future, Andros plans to market an end-to-end exoskeleton solution. To date, Andros has yet to receive SBIR funding from the National Science Foundation.

In his 2014 IEEE paper, Pietrusinski stated that this invention will increase the “life expectancy” of the world’s growing geriatric population by maintaining their quality of life through “independent gait.” Among the disabilities targeted is stroke-caused hemiplegia: there are approximately 6.5 million stroke survivors in the USA and about 800,000 new cases every year. Through therapeutic robotic exoskeleton-walkers, Andros is improving the quality of life of the world’s largest demographic group. As a fan of exoskeletons, I want to encourage Andros to focus on raising capital from social impact investors who could assist Pietrusinski’s team in achieving its ambitious goal.


Appelix: Replacing jobs with drones

There are some jobs that are just too dangerous for humans; painting bridges is one of them. Meet Appelix, a drone manufacturer and painting-services company that has developed a novel approach to automating large spray operations. Their solution: a tethered drone platform that connects to a 3,000 PSI air compressor and power supply to continuously spray-paint large structures, from bridges to commercial ships to gas storage tanks. The industrial-grade drone system is built entirely by the company and rented out on a job-by-job basis. It has already completed a number of successful pilots.

“We’ve taken people out of the dangerous situations and replaced them with automated robotics,” said Bob Dahlstrom, founder of Appelix. “Never mind the cost efficiencies.” According to Dahlstrom, the current product is market-ready to begin wide-scale operations. In the future, Appelix will seek to expand its services business to include window washing, infrastructure inspection, and possibly de-icing planes. Appelix’s patent-pending “Worker Bee Drone” could even take over New York City’s “Façade Inspection Safety Program,” which is still done by hand, with workers hanging from ropes holding pickaxes hundreds of feet above pedestrians.


Augmented Pixels: Cost-effective computer vision for robots

One of the most impressive elements of the Automate 2017 conference was the number of computer vision and photonics companies in the marketplace. A key element in autonomous anything, from vehicles to robots, is vision. However, vision spans many complex variables, including optical materials, detector ranges, and laser line of sight, which together make up a variety of sensors (LIDAR, cameras, radar, etc.). Photonics is best applied to a targeted, specific problem, like obstacle detection, rather than as one sensor for robotic everything.

Augmented Pixels is utilizing camera sensors for Simultaneous Localization And Mapping (SLAM) on low-compute devices like drones. The company plans to evolve its solution into a true autonomous navigation platform for drones and robots in GPS-denied environments. The startup is also working on a hardware-optimized solution for indoor navigation with mobile phones and AR/VR glasses, similar to Google’s Project Tango initiative.
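The problem SLAM solves in GPS-denied environments can be illustrated with a toy example: odometry alone accumulates drift, while re-observing a mapped landmark pulls the pose estimate back toward reality. The one-dimensional sketch below is purely illustrative (all numbers are invented) and bears no relation to Augmented Pixels’ proprietary visual SLAM pipeline:

```python
# Toy 1-D illustration of drift and landmark correction, the core idea
# behind SLAM in GPS-denied settings. Not a real SLAM implementation.
import random

random.seed(0)
LANDMARK = 25.0   # known landmark position on the map, in meters

true_pos = 0.0    # where the robot actually is
est_pos = 0.0     # where dead reckoning thinks it is

for _ in range(20):
    commanded = 1.0                              # intended 1 m step
    actual = commanded + random.gauss(0, 0.05)   # wheel slip / noise
    true_pos += actual
    est_pos += commanded                         # odometry assumes perfect motion

drift = abs(est_pos - true_pos)                  # accumulated dead-reckoning error

# "Loop closure": a camera measures the range to the landmark, implying a
# position, and the estimate is blended toward it (a crude fixed-gain update).
measured_range = LANDMARK - true_pos             # assume an accurate range sensor
implied_pos = LANDMARK - measured_range
est_pos = 0.5 * est_pos + 0.5 * implied_pos

print(f"drift before correction: {drift:.3f} m")
print(f"error after correction:  {abs(est_pos - true_pos):.3f} m")
```

With a perfect range measurement, one landmark observation halves the error under this fixed 0.5 gain; a real SLAM system weights the blend by the uncertainty of each source instead of using a fixed gain.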

Vitaliy Goncharuk, founder and CEO of Augmented Pixels, explained his market opportunity, “It became clear that the leaders of all markets from the automotive industry to drone and household robots understood that the future belongs to companies that can simplify the use of its gadgets to the level of an app for the smartphone. This requires a fundamentally new way of thinking about the design of future systems and user interfaces. Our company’s technology and the platform play key roles in this technology stack.” Melonee Wise asked Goncharuk how he will be able to compete now that Project Tango is open sourced (the question remained unanswered).


HEBI Robotics: Providing dexterity for robot systems

For a long time, deploying automation around a factory meant redesigning a production line around a robot’s reach. HEBI Robotics aims to change this dynamic through the deployment of modular elastic actuators. The HEBI modules make it possible to create custom robots of virtually any configuration, from wheeled platforms to collaborative arms with multiple degrees of freedom. They can even feed you pasta.

Based in Pittsburgh, PA, the HEBI technology is the result of years of research at the Biorobotics Lab at Carnegie Mellon University. According to founder Bob Raida, “HEBI develops Lego-like robotic building blocks that enable rapid development and deployment of custom collaborative robots. Our customers are academia, R&D organizations and industrial end users that have a need for robots that can be designed and deployed quickly, then re-used as needs change. HEBI offers several families of modular actuators that function as full-featured robotic components as opposed to simple servo motors.” Last year at RoboBusiness, Modbot impressed many with the concept of a modular robotic arm; HEBI is now cost-effectively fulfilling that promise.


Kinema Systems: Automating the warehouse one box at a time

In a blog post, Sachin Chitta, CEO of Kinema Systems, says “Kinema Pick is the first in a series of solutions from Kinema Systems for the ‘holy grail’ of industrial robotics – random picking.” Chitta’s system combines an off-the-shelf Yaskawa robotic arm with a suite of proprietary vision sensors, deep-learning algorithms, and motion-planning software to solve the most critical problem in the warehouse – moving pallets. Every e-commerce company is following Amazon’s lead in automating distribution centers and operations. The problem to date has been the unstructured environment of randomly placed pallet boxes on a moving conveyor belt. According to the pitch, Kinema’s self-learning system now offers for the first time a turn-key, configurable solution for its customers and their unskilled labor force.

Chitta, a Willow Garage alum, explains: “the biggest differentiator we bring to the table is dealing with these situations [unstructured environments] where there is lack of structure or there is variety, so that we can address the actual problems facing logistics, warehouses, shipping, and even in manufacturing as well.” Chitta is definitely someone to watch in 2017.


Robotic Materials: Grippers that understand touch

One of the biggest challenges in the robotics industry is understanding touch and tactile sensing. Robotic Materials claims to have “developed patent-pending sensors and control systems to be the first and only effective tactile sensing solution for collaborative robot applications.” Their secret sauce, developed at the University of Colorado, is a combination of proximity, contact, and force sensing that enables robots to accurately identify, grasp, and manipulate previously unknown parts, from ballpoint pens to water bottles to steel rims. The key to their solution is that everything is integrated on the device, with no need for reprogramming, cloud hosting, or a learning-based system.

According to CEO Scott Thomas, “we are currently selling grippers and fingers for a select and growing number of robots and prosthetic devices (Rethink Robotics, Kinova, Bebionic), but wish to move up in the value chain to become a solution provider for robotic manipulation in unstructured environments to industry, startups and hobbyists.” I wonder how soon Scott will get a call from ABB or Kuka.


SAKE Robotics: The ultimate inexpensive gripper

A key factor in the growth of collaborative and humanoid robots will be the capability to do useful tasks cost-effectively in chaotic environments. SAKE Robotics aims to be a game changer with its low-cost grippers, which use tendons as the primary link between actuators and finger-like motion. While many other grippers on the market use tendons, they have suffered serious reliability issues due to strength and wear. SAKE’s patent-pending “Ceramic Tendon Technology” provides a high-strength, low-wear, low-resistance, and low-cost solution.

Their EZGripper Dual robotic grippers have been shown to reliably pick up and hold a variety of objects weighing from grams to multiple kilograms, from pencils to large objects. In addition, the EZGripper enables 360-degree rotation with the grasp center aligned to the axis of rotation. According to Paul Ekas, founder of SAKE, “our vision [is] to deliver the lowest cost, high dexterity robotic grippers [hands] that enable next generation, high value robots.” As Paul presented, I wondered how soon Pepper will be sporting EZGrippers.


Vention: Web-based industrial machine builder engine

Working on a factory floor often requires creativity in designing new processes and support machines to meet market demands. As manufacturing becomes more mechanized, Vention plans to let corporate managers build and order machines via a web browser in just a few days. According to the pitch, its platform is an “AI-enabled” cloud CAD application that integrates an ever-growing library of industrial “Lego-style” modules. This means that regardless of the design, the structural, motion, and control parts are fully compatible with one another, saving time and money.

Etienne Lacroix, CEO, explained that “it became obvious to us that the next frontier for faster machine design wasn’t better design tools or higher performance hardware, but rather the integration between the two.” Lacroix’s partner and company CTO, Max Windisch, adds that “with the help of artificial intelligence, we are paving the way for significant democratization of mechanical engineering. We want to enable design-savvy individuals who never used a CAD software, to 3D design industrial machines and prototypes. Similar technology democratization in web development (i.e., WordPress, Squarespace, Wix) were complete game-changers; we are creating the equivalent for machine design.” A web-based machine platform is very compelling; however, inventory management could be the reason no one has done this to date.

Just like Magic Johnson, the robot industry is on the path to another championship season with this team of innovators.

]]>