Monday, December 30, 2013

Can Robots Play a Role in Improving Lives of Autistic Individuals?

Lately I am noticing a lot of interest from the robotics community in developing robots to help autistic individuals.  Some of these efforts are based on technology push, i.e., people have developed a cool new robot and they would like to see if autistic individuals can benefit from using it.  Some efforts are genuinely targeted at understanding the needs of the autistic individuals and developing solutions to help them.

This post shares my thoughts on this topic based on our family’s experiences in raising an autistic daughter. Let me begin by setting the context. Autism is a neurodevelopmental disorder that affects brain development. Representative symptoms associated with autism include difficulty with social interaction, limited verbal and non-verbal communication, and repetitive behaviors. Individuals with autism face many challenges in their daily lives.

The intensity of symptoms associated with autism can vary from mild to very severe, and there is considerable variation in symptoms. Experts use the term autism spectrum disorder (ASD) to refer to autism and related disorders. It is often said that no two autistic individuals are alike. According to recent statistics, one out of every eighty-eight children born in the U.S. is diagnosed with ASD. Unfortunately, there is no known medical cure for autism, making it a pressing social problem.

Individuals with ASD struggle every day to live in a world designed for neuro-typical individuals. Most autistic individuals are hypersensitive and often experience sensory overload. Sounds, smells, and sights that might appear normal to most people can overpower the senses of autistic individuals. They use stimulatory repetitive behaviors to compensate for the sensory overload. Many autistic individuals struggle with language. They have a basic understanding of vocabulary and grammar, but advanced language concepts are often foreign to them. Many autistic individuals are good at picking up body language cues from their peers and can sense the disapproval and rejection of their behaviors by their neuro-typical peers. However, most autistic individuals are unable to control their behaviors; their own bodies and brains betray them every day.

Here is how the world might appear to an autistic teen going through daily life. Imagine that you are in a 10th grade science class. The heating system in the classroom is making a really loud, annoying thumping noise. This is crippling your ability to think. You try to cover your ears and start humming to drown out that excruciating sound. Your science teacher is delivering the lecture in a “foreign language.” You understand the basics, but you are unable to follow the advanced vocabulary being used in class. You are extremely frustrated, and the stress is making it impossible for you to sit in your seat, so you are constantly fidgeting. You notice disapproving looks from your peers, who find you weird and annoying. You feel humiliated and unwelcome in the class. You would like to fit in, but you are unable to control the movements of your own body. The teacher has just announced that the next class will have a quiz. Quizzes make you really anxious, and now you can feel a knot forming in your stomach. Nausea has kicked in, and the simple task of walking from the science classroom to the English classroom appears to be a Herculean task.

Unfortunately, parents are often helpless and unable to eliminate the pain and suffering of their ASD children. Providing care for autistic individuals can be emotionally and physically exhausting. Most parents try very hard to improve the lives of their children. They also worry non-stop about what will happen to their ASD sons and daughters as they themselves grow old and become unable to care for them. There is no good answer. This can be a tiring, frustrating, and heart-breaking experience. But this adversity also showcases the resiliency of the human spirit. You meet so many individuals who do not give up and continue to fight incredibly hard to put one more smile on the faces of their loved ones and to make the world a fairer place by demanding universal accessibility.
  
Given this background, the question is - can robots play a role in improving the lives of autistic individuals?  We will have to approach this question very carefully as learning to interact with humans is a key to the survival of autistic individuals in the neuro-typical world. Robots should not try to reduce the human involvement in the lives of autistic individuals. However, robots can be useful in one of the following situations:
  1. Increasing human interaction would be detrimental to the intended outcome.
     
  2. The use of robots can significantly improve the quality of life for autistic individuals.
     
  3. Humans with the right expertise are not available to meet the needs of autistic individuals.
Here are my preliminary thoughts on potential applications of robots based on the situations described above.
  • Overcoming Positive Interaction Deficit: The human brain is wired to seek positive social interaction. Many autistic individuals also crave positive social interaction. However, it is very hard for them to interact with neuro-typical individuals, and this can be quite frustrating for them. A lack of adequate positive social interaction can lead to severe depression. In my opinion, there is no good way to overcome the positive interaction deficit faced by autistic individuals simply by increasing human interaction. Human interaction is extremely important, but an increase in the amount of human interaction does not mean that autistic individuals will perceive that increase in a positive light. In fact, many autistic individuals prefer to interact with animals instead of humans because animals are non-judgmental and reciprocate affection unconditionally. However, many autistic individuals are unable to take care of pets. I believe that robots can be designed to entertain, stimulate positive interaction, and uplift the moods of autistic individuals. Such robots must be carefully designed to ensure that they fill the positive interaction deficit and do not try to replace the need for interacting with humans.
     
  • Improving Safety and Independence: Many autistic individuals lack basic notions of safety. This significantly worries caregivers and interferes with the freedom and personal space of autistic individuals. I believe that robotics-based technologies can be developed and adopted to enhance the safety and independence of autistic individuals. These technologies can be used for safety monitoring (e.g., verifying that the kitchen stove is switched off after use or that medicine was taken on time), assisting with household chores (e.g., cleaning), and navigating complex surroundings (e.g., finding a store in a mall). There are many interesting technologies being developed for assisted living facilities that might find use in the homes of autistic individuals. These technologies are not likely to look like a typical robot, but the form should not matter.
     
  • Improving Training and Education: We have to find a way to create meaningful employment opportunities for autistic individuals. Not doing so will create a significant financial strain on the rest of society. Many autistic individuals have natural talents in areas such as computers, music, and mathematics. These talents should be nurtured and harnessed. Learning to function well in society will require developing appropriate social interaction skills such as making eye contact, reacting appropriately to facial expressions and body language, and making small talk. Currently, autistic individuals get very limited opportunities to practice and hone these skills. Robots can be designed to enable autistic individuals to practice these skills for extended periods of time. It is very difficult to put effective special education teachers in all classrooms with autistic children. Telepresence robots might be able to expand the geographic reach of superstar special education teachers and contribute to the training of autistic individuals.
I believe robots can play a useful role in improving the lives of autistic individuals, but we should take extreme care to ensure that robots do not displace humans from the lives of ASD individuals. Ultimately, human contact and interaction will be vital for ASD individuals to function well in society.

Tuesday, December 10, 2013

When can we buy robots to help us with household chores?

Recently Google has been in the news for its buying spree of robotics companies. Many people are excited about this and believe that this will greatly accelerate robotics technology development and hopefully make robots ubiquitous in our lives.

This post is mainly focused on robots for homes. In a simplistic sense, the home robot market can be divided into the following three categories: (1) robots helping with dull and tedious household chores, (2) robots taking on new roles in homes (e.g., education, entertainment, companionship), and (3) robots in assisted living communities. Each of these markets has different underlying economics. In this post, I will focus on the first category. Specifically, I am interested in the following question: When can we buy robots to help us with household chores?

Before answering this question, let me quickly summarize the societal implications of home robots that can help with household chores:
  • Home robots might save many marriages by reducing fights over household chores. I am sure that divorce lawyers will hate home robots!
  • Teens will have a love-hate relationship with robots. Parents won’t need to nag teens to do chores. But teens won’t be able to make money by doing chores. Cash-deprived teens will need to cut down on the money they spend on music and movies. This might be bad for certain pop stars. So watch out, Justin Bieber: home robots won’t be good for your album sales.
  • Home robots will dethrone pet animals from being the stars of viral YouTube videos. They might even spawn new reality shows on TV to depict new relationship dynamics at homes as new home robots join the family.  
  • Call centers with human workers will be needed to bail out robots in distress. Hopefully, this will create new jobs for humans. Perhaps auto clubs like AAA can start new robot clubs to assist robot owners.
  • People won’t have any sense of privacy inside their homes. Robots will be able to monitor your every move, so be careful of what you do at home. I am sure NSA folks will be very happy with the advent of home robots. They will finally have the ability to know what time you take showers. They already know everything else!   
  • Occasionally, home robots will be involved in accidents. I can already see insurance companies salivating at this new opportunity and designing new products. Hopefully, your robot can answer the phone when telemarketers from insurance companies call you to sell new robot insurance policies.
Here is a partial list of chores performed inside homes that can benefit from robot assistants:
  • Help with laundry 
  • Load and unload dishwashers 
  • Do preparatory work for cooking meals (e.g., chopping vegetables)
  • Clean kitchen
  • Clean toilets and bathtubs
  • Pick up objects (e.g., toys, newspaper, clothes, sneakers)  from the floor and move them to the right location 
  • Unload groceries from the car parked in garage
  • Assemble furniture
  • Help with moving heavy objects
  • Answer phone when telemarketers call at the dinner time
The above list does not include tasks for which the robot will have to venture outside the home. For example, I did not include lawn maintenance and snow removal. A robot working outside the home will need to deal with a wide variety of weather conditions and safety issues. This is a much harder problem to solve.

I also deliberately did not include pet sitting and babysitting in the above list. Some people believe that robots should be able to do these. I think this would be a good idea for TV shows, but a terrible idea in real life.

A home robot will require significant in-home installation, regular monitoring, and servicing to keep it operational. You may need to robot-proof your home to make sure that the robot does not damage your home and your home does not damage your robot.

I think we should look at a leasing or renting model instead of a buying model. In this model, people will rent the robot from a company for a monthly fee. The company will take care of the installation, monitoring, and service. If the robot gets stuck, it should be able to contact a call center, and hopefully someone there will be able to teleoperate it out of the jam.

So how much are people willing to pay in monthly fees for robots at home? Based on my preliminary estimate, people will be willing to pay $200 to $500 per month to rent a robotic assistant for the home. I believe that people will be willing to pay $5 to $10 per hour of labor saved. So if a robot can save 40 hours of tedious chores per month, then people will be willing to pay $200 to $400 per month for the robot.
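The arithmetic behind this estimate is simple enough to sketch. The hourly rates and hours saved below are the rough assumptions stated above, not market data:

```python
# Rough monthly willingness-to-pay for a home robot, using the
# assumptions from the text: $5-$10 per hour of labor saved,
# and roughly 40 hours of chores saved per month.

def monthly_willingness_to_pay(hours_saved, rate_low=5.0, rate_high=10.0):
    """Return the (low, high) monthly fee a household might pay."""
    return hours_saved * rate_low, hours_saved * rate_high

low, high = monthly_willingness_to_pay(40)
print(f"${low:.0f} to ${high:.0f} per month")  # $200 to $400 per month
```

A robot that saves fewer hours, or saves hours people do not mind spending, justifies a proportionally lower fee.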

Let us assume that a well-designed robot will have a service life of five years.  So the home robot market looks a lot like automobile market from pricing and service life points of view. 

In order for robots to do forty hours’ worth of useful chores in homes, a lot of new technology will need to be developed. I believe that this technology could be sold at $200 per month if there were a few million customers. So here is the catch: unless there is a large market, the desired robots cannot be offered at the right price. However, unless the useful technology is available at the right price, the large market simply won’t exist.

Unfortunately, incremental development of home robot technology and its introduction in markets will be extremely slow. We will need to meet or exceed forty hours per month of useful robotic chores at home to create a significant home market and associated infrastructure. In my opinion, realizing a home robot is technologically feasible, but it will require billions of dollars of investment in technology development to ensure the high level of reliability and safety for home use. Unfortunately, venture capitalists don’t like these kinds of markets. My hope is that a cash-rich company such as Google, Microsoft, or Apple will go after developing this technology and create a new industry.  

Cell phones were invented for people to talk, but they have found new roles such as music players, web browsers, email clients, etc. The revenue growth in the cell phone market came because of the new roles played by cell phones.   I believe that the same thing is likely to happen for home robots as well. Initially people will be interested in getting robots at home to help with the household chores, but soon they will find new uses for these robots. I believe that robots might find an easier path to become fixtures in people’s home by adopting new roles such as tutors (e.g., music instructor, golf instructor), personal trainers, and entertainers.  

So, when can we buy robots to help us with household chores? If a cash-rich company like Google, Microsoft, or Apple goes after this technology, then we might have these robots before the end of this decade. Otherwise, we may need to wait for a while.

Saturday, November 23, 2013

International Robot Exhibition 2013: Interesting Trends and their Implications

I attended the International Robot Exhibition (iREX) in Tokyo in November 2013.  It was a mesmerizing display of robots – a gigantic hall filled with thousands of robots.  Robotics companies bring their latest and greatest robots to this exhibition.  As you walked through the exhibition hall, you saw a wide variety of amazing advances in the field of robotics.

I noticed several common trends in new product offerings from many different companies. The underlying technologies behind these products were proposed many years ago, but for a while they were serving niche markets. However, it appears that these technologies have suddenly become mainstream, and several large established companies are featuring new products based on these ideas. So finally, after many years of waiting, these ideas have moved from labs to the mainstream robotics industry.

Here is my pick of four noteworthy trends based on products offered by established companies in robotics space: 
 

Dual Arm Robots: Humans (and many other primates) have two arms, but industrial robots for the longest time have featured only a single arm. The argument was that if a task needed two arms, you could buy two arms and mount them next to each other. The mainstream robotics companies resisted the idea of connecting two arms to a body and selling it as an integrated package. However, it appears that thinking in the industrial robotics community has changed over the last couple of years. Many companies at iREX were displaying new robots with two arms. In my opinion, the dual arm configuration will enable new advances in dexterous manipulation, where two arms can be moved in a coordinated way to work with complex tools. Humans have a natural tendency to use both hands when doing a task. Imagine cooking dinner with one hand tied behind your back! So the dual arm configuration should make it much easier for humans and robots to collaborate on complex tasks.

ABB Dual Arm Robot
Nachi Dual Arm Robot
Eyes on the Hand: I saw several robots with cameras mounted very close to the hand. This configuration gives robots an unobstructed close-up view of the parts being manipulated. This idea was proposed more than twenty years ago, but there were reservations about implementing it on the shop floor due to concerns about acquiring quality images and registering images from a fast-moving camera. I am happy to see that these challenges have been overcome and this configuration is featured on many robots. It will enable new advances in visual servoing and enhance the accuracy of fine manipulation of objects previously unseen by the robot. It is interesting to note that in the first trend reported above, companies created robots that embraced the anthropomorphic configuration. However, this trend moves robots away from the anthropomorphic configuration by placing eyes on the hand. Cameras are inexpensive, so robots can afford to have eyes on their limbs. I am sure that many humans have wished they had a pair of extra eyes.

Motoman Robot with Camera on Hand
Wearable Robots: There were many different kinds of robots on display that people can wear to enhance their capabilities, ranging from walking assist devices to exoskeletons. Some of these robots target the physical therapy and rehabilitation market to help people recover from injuries or the loss of motor functions due to medical complications (e.g., stroke). Some target the assistive technology market to help people cope with diminished abilities due to aging or other medical conditions. It appears that the robotics industry has combined high-efficiency actuators, lightweight structural materials, and new battery technologies to finally create useful products. Wearable robots are expected to positively impact quality of life as the average human lifespan continues to increase due to advances in medicine. They also provide new ways to carry out physical therapy and rehabilitation. I believe that they will eventually enter the sports market to help with athlete training. There is plenty of room in the amateur market too. It would be great to have a wearable robot that can teach you how to swing a golf club.

Honda Walking Assist Device
High Speed Pick and Place Robots Based on Parallel Kinematics: Parallel kinematics holds significant promise for robots because the actuators can be placed near the base, significantly reducing the inertia of the moving links and enabling high speed operation. I was happy to see that every major company was featuring high speed pick and place robots based on parallel kinematics. Companies were reporting impressive workspace sizes, high repeatability, and large payload capacities. These robots are bringing speeds comparable to hardware-based fixed automation to programmable automation.
ABB Flex Picker
Kawasaki Delta Robot


Saturday, October 26, 2013

The Role of Robots in Engineering Education

Currently, the robotics industry does not have enough jobs to employ all the engineering graduates who would like to pursue a career in robotics. Sometimes this leads to disappointment among engineering graduates who have passionately pursued robotics in school and find it frustrating that they cannot find a job in the robotics industry.  Hopefully, the robotics industry will continue to grow and the job situation will be significantly better a few years down the road. But what should graduating students interested in robotics do in the meantime?

This blog post attempts to argue that students should use robots as learning tools to acquire a much broader engineering knowledge base to impress employers in a wide variety of engineering industries. 

Human beings are fascinated by robots. I don’t fully understand the reason for this fascination, but robots seem to get people excited and enthralled.  In addition to being popular inhabitants of our factories, hospitals, and farms, robots are also emerging as popular cultural icons with an ever-increasing presence in music, movies, and books. Clearly the idea of humans ultimately creating superhuman robots is quite thrilling and intriguing. Let us also keep in mind that robots can be quite entertaining. 

Robots have become ambassadors of STEM education in the US. Many K-12 students get their first glimpse of the engineering world by participating in FIRST robotics. Children as young as five years old are getting started in Junior FIRST LEGO Leagues, and they can continue advancing through various FIRST programs as they gain experience and grow older. The FIRST robotics experience finally culminates in the FIRST Robotics Competition. In this program, teams of high school students compete with each other in a high-profile national competition by building incredibly impressive robots. More than 350,000 students are expected to participate in FIRST programs in the 2013-2014 season.

Many undergraduate students who enter college with FIRST experience in high school take robotics courses in the college, continue their participation in challenging robotics competitions, and go on to build even more impressive robots. Some of the students interested in robotics go on to graduate school and continue their journey in the field of robotics. National Robotics Initiative and DARPA Robotics Challenge are providing many new exciting opportunities for students in the US.  

Along the way, some students start to focus too much on “learning robotics.” From an employment perspective, it would be better if students viewed robots as a vehicle for learning engineering. The FIRST programs do a great job of emphasizing this point, but somehow along the way this message gets garbled for at least some students, and the focus shifts to learning specialized tools for building robots.

Robots are very good examples of modern cyber-physical systems. They contain mechanical, electrical, electronic, and software components that interact in complex ways to produce the desired behavior and performance. In terms of building blocks, they are no different from modern dishwashers, automobiles, magnetic resonance imaging machines, and cranes. However, robots are a lot more fun to create than a dishwasher and hence much more effective for teaching engineering principles.

We should view robots as tools for learning modern engineering principles and have fun while doing it! Once you have learned how to design, build, and program a robot, you can create lots of other engineered artifacts that people are willing to buy today. Your experience with robots makes you an excellent catch in the job market. You should just learn to interpret the word robotics in the broadest possible sense.

Thursday, October 10, 2013

Can crowdsourcing be exploited to reduce the cost of autonomous robots?

Several recently developed prototypes show that it is possible to develop autonomous robots with remarkable capabilities. However, currently developing autonomous robots costs a fortune and takes forever!  In this post, I treat autonomous cars and unmanned air vehicles as robots.

Sophisticated autonomous robots require hundreds of thousands of lines of code. Manually writing this code for a new robot is very expensive and time consuming. Moreover, as the hardware changes, this code also requires significant upgrades. Often by the time code is written and debugged, the hardware is already obsolete. Therefore, developing autonomous robots is currently technically feasible but not affordable in many applications.

Human operators are very good at teleoperating robots in cluttered, unstructured, and dynamic environments with limited sensor data. Forget about expert operators teleoperating unmanned vehicles! Even five-year-olds can learn to teleoperate their first remote control cars within a couple of hours and successfully annoy their parents, siblings, and dogs by zipping tiny cars around their homes. I have also seen teenagers performing amazing feats with their remotely controlled helicopters. So obviously, we should be interested in characterizing and understanding the strategies employed by human operators during these operations and automatically extracting building blocks of the autonomy code based on this understanding. Many robotics researchers are pursuing this path, and this area of robotics is called learning from demonstrations.

Many impressive results, ranging from training surgical robots to teaching collision avoidance to unmanned vehicles, have been reported by the learning from demonstrations community. Most such case studies have utilized a small number of humans to perform the demonstrations. Because of the limited number of demonstrations, people often wonder how well the learned components of autonomy will perform in situations not encountered during demonstrations. Unfortunately, conducting extensive experiments in the physical world is highly time consuming and expensive. It also limits the kinds of scenarios that can be considered during demonstrations. Clearly, a demonstration that might pose a threat to the human or the robot has to be avoided. Conducting demonstrations in the virtual world is emerging as an attractive alternative.
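As a toy caricature of the learning from demonstrations idea, imagine logging (state, action) pairs while a human teleoperates a robot and then imitating the action of the nearest recorded state. Everything here, including the state encoding and the action labels, is an illustrative assumption rather than any particular research system:

```python
# A toy sketch of learning from demonstrations: record (state, action)
# pairs while a human teleoperates the robot, then answer queries with
# the action of the nearest recorded state. Real systems fit far richer
# models; the states and action names here are purely illustrative.

import math

class DemonstrationPolicy:
    def __init__(self):
        self.demos = []  # list of (state_vector, action) pairs

    def record(self, state, action):
        """Log one teleoperation sample."""
        self.demos.append((state, action))

    def act(self, state):
        """Imitate: return the action of the closest demonstrated state."""
        nearest_state, action = min(self.demos,
                                    key=lambda d: math.dist(d[0], state))
        return action

policy = DemonstrationPolicy()
# Hypothetical demonstrations: steer away from a nearby obstacle.
policy.record((0.2, 0.9), "steer_right")   # obstacle close on the left
policy.record((0.9, 0.2), "steer_left")    # obstacle close on the right
print(policy.act((0.3, 0.8)))  # steer_right
```

The generalization worry described above shows up immediately in this sketch: a query state far from every recorded demonstration still gets an answer, but there is no reason to trust it, which is why diversity in demonstrations matters.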

Over the last few years, tremendous progress has been made in the area of physics-based robot simulators. For example, the ongoing DARPA Robotics Challenge is making extensive use of simulation technology to test autonomy components. Simulations are routinely used to teach humans cognitive as well as motor skills. For example, flight simulators are routinely used for pilot training.

By combining advances in networked multi-player games and accurate robot simulations, new games can be developed in which humans compete and collaborate with each other by teleoperating virtual robots. This means that demonstrations need not be confined to a few experts. Instead, anyone with an Internet connection can participate in the training of a new robot. For example, DARPA used a publicly distributed Anti-Submarine Warfare game to learn how to track quiet submarines. We are ready to leverage crowds to impart autonomy to robots.

The use of crowdsourcing in robot training has many benefits. It provides rich diversity in demonstrations and hence enhances the probability of generalization. Some participants are likely to exhibit out-of-the-box thinking and demonstrate a highly creative or innovative way of doing a task. This is great news for optimizing robot performance. For some people, this way of training robots might serve as a means to earn money by performing demonstrations (basically acting as robot tutors). Playing games that involve robots is likely to be entertaining for at least a segment of the population. This paradigm can also be used in situations where a robot is stuck during a difficult task and needs a creative solution to get out of the bind.

Automatically learning autonomy components such as reasoning rules, controllers, and planners from the vast amount of demonstration data is an interesting challenge and will keep the research community busy for many years to come. But this seems to be the crucial advancement needed to reduce the cost of autonomous robots.

Don’t worry robots! The crowd will rescue you from the dungeons of high cost and long development times!  

Friday, September 27, 2013

How can Robo Raven “feed” itself in jungles?

Our previous version of Robo Raven needs to be plugged into an electric socket to charge its battery. Ultimately, we envision Robo Raven flying deep into jungles, far away from civilization, and hence from electric sockets. To do this, Robo Raven needs a way to “feed” itself to keep going during long missions.

Real ravens are omnivorous and are happy to eat whatever is available. Unfortunately, mimicking this feat in Robo Raven is not practical at this point because the equipment necessary to convert biomass into 30 W of electrical power would make Robo Raven too heavy to fly. So we had to come up with a different way to “feed” Robo Raven.

From an energy perspective, ravens are constantly converting biomass into mechanical energy to flap their wings. Common sources of biomass that ravens consume in the wild are carcasses of dead animals. The dead animals accumulated their biomass by consuming plants, which converted solar energy into biomass. Here is a high-level summary of the energy conversion process at work behind the flight of ravens. Solar energy is converted to biomass (i.e., plants). One type of biomass (i.e., plants) is converted into another type of biomass (i.e., meat). Finally, ravens convert the biomass (i.e., meat) into the mechanical energy needed to flap wings. So, ravens ultimately derive their energy from the sun.

We decided to bypass the multi-step energy conversion process used by ravens and instead have Robo Raven harness solar energy directly. Robo Raven features sufficiently large wings, so we decided to make the wings out of flexible solar cells, since there would be enough surface area for the solar cells to generate a usable amount of power. The underlying material of the flexible solar cells is different from the material used in the previous version of Robo Raven, so we needed to design new wings. Additionally, we had to develop a new additive manufacturing process for making these wings. The solar cells on Robo Raven do not produce enough power to directly drive the motors (they produce around 3.6 W while we need around 30 W), so we decided to charge the battery using the solar cells. I am happy to report that thanks to the hard work of Savannah Nolen, Ariel Perez-Rosado, and Luke Roberts, students in our lab (co-advised by Hugh Bruck and me), we have developed Robo Raven III, the first flapping wing micro air vehicle that flies with solar cells. Please see below for a video.




So how good is the performance? Solar cells currently cover less than half the wing area of Robo Raven III and produce 3.6 W of power on a sunny day. The efficiency of these solar cells appears to be around 6%, and the combined efficiency of the batteries and motors is somewhere between 25% and 50%. We hope that the performance will improve significantly as more efficient solar cells become available and as we cover more of Robo Raven III’s wing and body area with solar cells in future versions.
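To make the power gap concrete, here is a quick back-of-envelope sketch of how long Robo Raven III would need to sit in the sun to bank one minute of flight. The 3.6 W and 30 W figures come from the numbers above; the charging efficiency is a hypothetical value picked purely for illustration.

```python
# Back-of-envelope energy budget for Robo Raven III, using the
# figures quoted in this post (3.6 W from the solar cells, ~30 W
# to flap the wings). The charging efficiency is a guess.

SOLAR_POWER_W = 3.6      # measured output on a sunny day (from the post)
FLIGHT_POWER_W = 30.0    # power needed to flap the wings (from the post)
CHARGE_EFFICIENCY = 0.8  # assumed battery charging efficiency (hypothetical)

def charge_minutes_per_flight_minute(solar_w, flight_w, charge_eff):
    """Minutes of sunbathing needed to bank one minute of flight."""
    return flight_w / (solar_w * charge_eff)

ratio = charge_minutes_per_flight_minute(SOLAR_POWER_W, FLIGHT_POWER_W,
                                         CHARGE_EFFICIENCY)
print(f"~{ratio:.1f} minutes of charging per minute of flight")
```

Under these assumptions, every minute of flight costs roughly ten minutes of charging, which is why more efficient cells and more covered wing area matter so much.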

So how does this compare with the energy conversion efficiency found in nature? Plants convert less than 10% of the available solar energy into biomass. Plant-based biomass to meat-based biomass conversion is not very efficient either: it takes around ten grams of plant-based biomass (e.g., corn) to produce one gram of meat, even if you ignore the energy needs of other body parts and metabolism. In other words, an animal needs to eat at least 10 lbs. of corn to gain 1 lb. of body weight. Finally, converting the energy stored in biomass into mechanical energy is also not very efficient. Animals use aerobic respiration to derive energy from food, and typically less than one fourth of the energy available from respiration is converted into mechanical energy. Animals also lose a lot of energy to metabolism.
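The losses in a multi-stage conversion chain compound multiplicatively. Here is a small sketch that chains the rough upper-bound stage efficiencies quoted in this post; the specific values are taken from the text and should be read as crude estimates, not measurements.

```python
# Rough comparison of the two solar-to-mechanical energy paths.
# All stage efficiencies are the upper-bound estimates from the
# post, so both results are optimistic.

def chain_efficiency(*stages):
    """Overall efficiency of a multi-stage energy conversion chain."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Biological path: sun -> plants -> meat -> muscle work
biological = chain_efficiency(0.10, 0.10, 0.25)   # roughly 0.25%

# Engineered path: sun -> solar cells -> battery + motors -> wings
engineered = chain_efficiency(0.06, 0.50)         # roughly 3%

print(f"biological: {biological:.2%}, engineered: {engineered:.2%}")
print(f"advantage: ~{engineered / biological:.0f}x")
```

With these numbers the engineered path comes out roughly an order of magnitude ahead, which is the comparison made in the next paragraph.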

As described in the paragraphs above, using solar cells to convert solar energy directly into the mechanical energy for flapping wings is an order of magnitude more efficient than conversion via the biological path. This advantage will grow as solar cell technology improves, allowing conventional engineering to beat nature in terms of solar energy conversion efficiency.

However, nature has a significant edge over engineered systems in other areas. For example, one gram of meat stores 20 times more energy than one gram of current battery technology, so in terms of energy density we engineers have a lot of catching up to do. In nature, the solar energy collection devices (e.g., trees) are not on board the raven. Ravens thus exploit a very large collection area that concentrates energy into a highly dense storage medium (meat), giving them much longer range and better endurance than Robo Raven III.

We still need significant improvements in solar cell efficiency and battery energy density to replicate the endurance of real ravens, but the good news is that Robo Raven III has already demonstrated that we can fly on a solar cell and battery combination. Now that we have taken this step, swapping in more efficient technologies as they become available should be relatively simple!

Saturday, August 31, 2013

What will it take to develop autonomous cars for Indian roads?

Recent media reports about Google’s autonomous cars (also known as driverless or self-driving cars) have made people curious about this technology. Several of my friends have asked me how mature it is. I believe that the technology is almost ready for well-marked roads in the US, where drivers behave in predictable ways. But it will require significant improvements to work well on congested streets in developing countries. This post uses India as an example to discuss the challenges facing autonomous cars.

I recently visited India and spent a lot of time on the road. My road trips included visits to many different towns and cities in the northern part of the country. India is a truly splendid place with phenomenal sights and very imaginative people. It is densely populated, and the civil infrastructure has not kept pace with the explosion of cars on the roads. Driving tends to be quite challenging there, so I did not dare to drive myself. Instead, I sat in the passenger seat and had plenty of time to ponder how the current generation of autonomous car technology would fare on Indian roads. This was a good way to distract myself from all the chaos on the road and the intense driving. On a lighter note, I believe that taxi drivers in India are responsible for getting their passengers to pray more than all other religious influences combined. And these prayers are genuine!

It was the monsoon season in India, and at this time of year heavy downpours routinely arrive with very little warning, making driving even more treacherous. Based on my observations, here is a selected list of features that will be needed to realize autonomous cars for Indian roads.

  • Lane-Free Driving: The concept of lanes simply does not exist for many drivers in India. The number of vehicles on the road is quite large, and people like to utilize the roads to the fullest extent; lane-free driving allows every square centimeter of the road to be used. It is not an uncommon sight on highways to see a giant truck going the wrong way (if you believe in lanes!) and hurtling towards your tiny car. Often it appears that people are playing a game of “chicken” while driving, eventually requiring one of the vehicles to yield and go off-road to avoid an imminent collision. A major challenge for an autonomous car will be figuring out when it should attempt to intimidate other vehicles on the road and when it should get out of the way.
  • Amphibious Operation: There were several situations when roads we intended to take were covered with water due to heavy downpours. There was no good way to judge the water depth on the road, and we could not see the potholes under the water. The best idea I came up with to deal with this challenge was to wait patiently and let some other vehicle, driven by someone with prior experience of the area, go over the water-filled road first. If they were successful, then we could follow them; if they got stuck, then we had better find an alternate route. This idea gives a new twist to the learning-from-demonstration concept. You might be tempted to park your car and swim if you don’t need to go too far, but I would advise against it.
  • Adaptive Traffic Light Compliance: In many small towns, drivers tend to simply ignore traffic lights to increase road utilization and conserve petrol. Idling at traffic lights consumes precious petrol! This behavior by other drivers will create a dilemma for autonomous cars. If an autonomous car obeys a traffic light while everyone else is ignoring it, then someone is certainly going to rear-end it. On the other hand, if the autonomous car is in a neighborhood where a large number of vehicles follow traffic signals, then it must follow them too. Traffic light compliance varies significantly from one place to another in India, so the autonomous car will need the capability to autonomously adapt to the local custom of following or ignoring traffic lights.
  • Negotiating around Animals: Many Indians are very appreciative of animals and hence tolerant of their presence in public spaces. Vehicles in India share roads with a wide variety of animals (e.g., cows, dogs, cats, pigs, donkeys, elephants, camels).  So the autonomous car will need to be able to recognize different types of animals and estimate their capabilities, mood, and intentions. Your autonomous car obviously needs to behave differently depending upon whether the angry animal charging towards your car is a large cow or a small dog.  Autonomous cars on Indian roads will also encounter slow moving animal herds consisting of hundreds of members. Waiting is often the best strategy in this case. But if you are in a hurry, then the car may need to come up with creative off-road driving maneuvers to dodge the herd.  
  • Pedestrian Avoidance: Cities and towns in India are densely populated, and people seem to constantly appear in front of your car out of thin air; your car needs to keep moving forward without hitting anyone. It needs to select an appropriate avoidance maneuver depending upon whether the person appearing in front of it is a teenager on a cell phone, a vendor trying to sell you coconuts, a grandmother with a bad knee rushing to the other side of the road to buy fresh mangoes, or a street performer doing gymnastics between the cars. Pedestrians are used to communicating with drivers through animated gestures and occasional swearing, so autonomous cars on Indian roads will need to find a way to communicate with people on the streets.
  • Honking-Based Communication: Honking at each other is an important element of communicating your intent while driving in India. For example, when you are turning into a narrow alley with limited visibility, you should honk vigorously to make sure that other drivers, pedestrians, and animals know about your existence and intent. If you want to pass a slow-moving vehicle, you get behind it and tailgate while honking vigorously until the other vehicle pulls onto the shoulder (often unpaved dirt) so that you can pass. Your car horn is an important asset. Clearly, an autonomous car will need to be able to interpret honking by others and signal its own intent by honking appropriately.
  • Creative Parking: Unfortunately, many places in India do not have designated parking spots, so one has to be quite creative when parking a car in a crowded place.  Many people (including me) in the US find parallel parking on crowded city streets quite intimidating. It is difficult to describe in words the kind of crazy parking configurations I saw on city streets in India. It requires really advanced spatial reasoning to fit cars in really cramped spaces and then take them out without denting them. You have to utilize every centimeter of the space available to you. Autonomous cars will need to create new maneuvers on the spot to park themselves in the available spaces.         
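To give a flavor of what the "adaptive traffic light compliance" feature above might look like in software, here is a toy sketch: the car estimates the locally observed compliance rate and only stops when stopping is the locally expected behavior. The 50% threshold and the observation counts are invented for illustration; a real system would obviously need far more context than this.

```python
# Toy version of adaptive traffic light compliance: estimate how
# often local drivers actually stop at red lights, and stop only
# when stopping is the locally expected behavior. Threshold and
# observation API are hypothetical.

def should_stop_at_red(observed_stops, observed_runs, threshold=0.5):
    """Stop if the locally observed compliance rate exceeds threshold."""
    total = observed_stops + observed_runs
    if total == 0:
        return True  # no data yet: default to following the law
    return observed_stops / total > threshold

print(should_stop_at_red(observed_stops=9, observed_runs=1))  # compliant town
print(should_stop_at_red(observed_stops=2, observed_runs=8))  # everyone runs the light
```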
Realizing the above-mentioned features will require fundamental advances in perception, reasoning, planning, and control. Cars will need to recognize what is present in the environment, infer intent where necessary, and estimate capabilities. They will also need creative maneuvers to operate in extremely tight spaces. Significant advances in reactive planning and control will be needed to ensure safety in highly unpredictable environments. New advances will also be needed in mapping and localization to accommodate a landscape that changes fast due to the construction boom. Communication with pedestrians, animals, and other cars will also require novel approaches. As a researcher, I am very excited about these challenges. They will keep us busy for a long time.

If Google is really serious about autonomous cars, then they should start incorporating the above-mentioned features into their cars. These features would be useful in most developing countries and hence give Google access to a huge market!

I believe that the true test of an autonomous car would be to successfully drive on a crowded street in Agra during an evening rush hour in the monsoon season. Obviously the performance of an autonomous car would need to be benchmarked against an experienced taxi driver.

PS: Unfortunately, I did not carry a camera with me during my road trips in India.  So it will be great if readers can share photographs highlighting challenging use-cases for assisting the design of autonomous cars for Indian roads.

Monday, August 5, 2013

Exploiting Bio-Inspired Limbless Locomotion in Robots

Limbless locomotion is utilized by several creatures in nature. Snakes have perfected this mode of locomotion through millions of years of evolution. Limbless locomotion has several distinguishing characteristics. It enables a creature to move through extremely rugged and cluttered terrain: obstacle height simply does not matter, because the creature can just go around obstacles (or over them in some cases). It is also quite versatile and can be used to climb trees and jump over gaps. It appears to work across a wide range of size scales, from tiny worms to huge pythons. And at least in theory, this mode of locomotion is highly fault tolerant due to the high degree of redundancy in the joints needed to locomote.

Limbless locomotion has fascinated roboticists for decades. They have created very impressive platforms that represent remarkable advances in robotics. However, current robots that use limbless locomotion do not come close to their natural counterparts in terms of capabilities. Unfortunately, we do not yet have engineered actuators that can match the natural muscles found in biological creatures. We also simply do not have highly distributed, fault-tolerant, self-calibrating multi-modal sensors, or materials with highly anisotropic friction properties. So our design options for limbless locomotion are limited, and truly mimicking nature is simply not possible right now.

In the short term, we are better off taking a different approach to exploiting inspiration from biological creatures in the field of robotics. I believe that we should find a useful feature in nature and exploit it to the fullest extent. This often means that the feature will be highly exaggerated or amplified in the engineering context, which might ultimately make our bio-inspired robots look like caricatures of their natural counterparts. We need to keep in mind that nature imposes constraints on the size of features due to the inherent biological processes for realizing them; for example, no naturally evolved flying creature comes close to the size of an engineered jumbo jet. So the notion of our robots looking like caricatures of animals should not be viewed as a disappointment. We should simply “borrow” ideas from nature and “distort” them to fully exploit their engineering potential within our technological constraints.

James Hopkins, a graduate student in my lab, has been working hard to overcome the speed limitations of engineered limbless locomotion. He took his inspiration from the rectilinear gaits utilized by some snakes and decided to dramatically exaggerate them to increase speed. The gait is ultimately implemented by expanding and contracting the body. The current prototype can achieve a speed of one mile per hour. In this design, speed is linearly proportional to the length of the robot, so by doubling the length we should be able to achieve a speed of two miles per hour. Unfortunately, the motors used in the current design won’t allow us to go much beyond two miles per hour; for that, we will need better motors! James used actively actuated friction pads near the head and tail of the robot to improve traction. He has found that different terrains require different friction pads, ranging from a bed of nails for traversing grass to rubber for carpets. This robot required us to use 3D printing to realize a novel mechanism for expanding and contracting the body while maintaining a small body cross-section. You can check out the video of the current prototype below:
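The speed scaling argument above is simple enough to capture in a few lines. This sketch assumes, as stated in this post, that speed grows linearly with body length until the current motors top out around two miles per hour; the function and its cap are illustrative, not part of R2G2's actual control code.

```python
# Speed scaling of the rectilinear-gait robot, per the post:
# linear in body length, capped by what the motors can deliver.

BASELINE_SPEED_MPH = 1.0  # current prototype (from the post)
MOTOR_LIMIT_MPH = 2.0     # rough ceiling of the current motors (from the post)

def predicted_speed(length_ratio, baseline=BASELINE_SPEED_MPH,
                    cap=MOTOR_LIMIT_MPH):
    """Predicted speed for a robot scaled to length_ratio times the
    prototype length, clipped at the current motor limit."""
    return min(baseline * length_ratio, cap)

for ratio in (1.0, 2.0, 3.0):
    print(f"{ratio}x length -> {predicted_speed(ratio)} mph")
```

The cap is the interesting part: past a 2x length scale-up, the design gains nothing until the motors are upgraded.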





When people see this video for the first time, they often say, “this looks nothing like what we have seen in nature.” Some of them look disappointed and perplexed! Some people have suggested that we should try to make our robot move like a real snake. I am against creating a search and rescue robot that moves and looks like a real snake. Imagine that you are trapped in a building destroyed by an earthquake. I don’t know about you, but the last thing I want to see in that situation is a snake crawling towards me. For the record, I like birds, but I am not fond of snakes!

We wanted to come up with a clever name for James’s new robot. I wanted to call it LiLo (Limbless Locomotor), but unfortunately I found that Ms. Lindsay Lohan, a paparazzi-magnet Hollywood celebrity, is already known by that moniker. So we decided to call our robot R2G2 (Robot with Rectilinear Gait for Ground operations). It may come as a surprise to you, but James and I are Star Wars fans!

Tuesday, July 9, 2013

Accelerating Learning: Is it possible to beat 10,000-hour rule?

In his very well-written and popular book Outliers, Malcolm Gladwell popularized the 10,000-hour rule, which is based on work by the psychologist Anders Ericsson. The basic premise behind the rule is that it takes 10,000 hours of quality practice to become an expert at something. Another wonderfully written book, Bounce by Matthew Syed, also refers to this rule. Both books attempt to explain the anatomy of success, in particular extreme success. The 10,000-hour rule has been interpreted by the popular media in many different ways, and you might disagree with the exact number of hours it takes to become an expert. However, there appears to be no doubt that it currently takes a long time.

We are living in the age of rapid technological advances. The rapid change of technology is a harbinger of creative destruction. As an existing industry dies due to obsolescence of the underlying technology, many jobs associated with it disappear too. Similarly, the birth of a new industry creates many new jobs. We will soon be approaching the situation where people will need to retool themselves by acquiring new skills every five to ten years to ensure that they remain employed.      

This new reality is in conflict with the way the education system works today. To become an expert at something and get a well-paying job, one must spend years in post-secondary training. If you want to change your field significantly, you can count on spending several more years in school; the 10,000-hour rule seems to provide a justification for it! However, spending years in school to retool after losing a job is not going to be an economically viable option for most people.

We need to find a better way. One way would be to accelerate the learning process. Can we beat the 10,000-hour rule? Can we master a new craft in 1,000 hours instead?

In a conventional classroom, one memorizes lots of facts and information, develops the motor skills necessary to do the physical tasks associated with the profession (e.g., surgery), and learns problem solving and decision making skills. In disciplines that involve creating something new (e.g., engineering design, architecture), one also learns the synthesis process and divergent thinking needed to create new artifacts.

We live in a different world than that of the early twentieth century. However, we have not made any significant leap in the learning process over the past hundred years. I would like to share the following observations:

  • A large amount of time in a conventional education program is spent on memorizing facts and information. Clearly, this was necessary in the past. But within a few years, we can envision a smartphone that gives a person the ability to instantly search for virtually every known fact. How crucial is it to devote time to memorizing all the facts associated with a profession? We can instead imagine a scenario where a human memorizes the crucial high-level facts that convey how information is organized within the field, while the low-level facts need not be stored in the human brain at all; the human can access them from the cloud on an as-needed basis. This decreased emphasis on rote memorization could speed up the learning process.
     
  • In many educational programs, a significant amount of time is spent on motor skill development. Many future jobs will be done with assistance from robots (and perhaps exoskeletons), which should reduce the time needed to develop motor skills.
     
  • In the current education system, problem solving, decision making, synthesis, and divergent thinking skills are learned in the context of a discipline.  So these skills are not easily transferable to a new discipline. For example, let us assume that you are currently an architect and would like to switch to bio-medical engineering. Unfortunately, it will take you many years in school to accomplish this.  We ought to be able to structure education such that problem solving, decision making, synthesis, and divergent thinking skills are learned in such a way that they can be easily transferable from one career (e.g., architect) to another (e.g., bio-medical engineer).   
     
  • Technology can be used during the learning process to ensure that every hour spent on learning actually contributes to learning. Facial expression recognition (and perhaps non-invasive brain imaging) can help make sure that the person is not getting bored or frustrated, which ought to improve the learning process. Personalized computer-based tutoring systems should also improve the efficiency of learning.
In my opinion, accelerating the pace of learning is one of the biggest challenges and opportunities facing the human race. Clearly, training world-class athletes and musicians will continue to take more than 10,000 hours of quality practice. But we ought to be able to accelerate learning in many other fields.

Monday, June 24, 2013

3D Printing or Laser Cutting?

I am a big fan of 3D printing and we use it in our lab all the time. But I am beginning to realize that many new users don’t fully understand the limitations of 3D printers and try to use them when they would be much better served by a laser cutter.

3D printing is inherently a slow process. This becomes obvious if you try to print a big part. If you are trying to make a large part and you want it quickly, then you might want to consider laser (or waterjet) cutting instead. Let me try to compare laser cutting with 3D printing using the example of the support bracket shown in Figure 1.
Figure 1: Concept of a bracket

Figure 2 shows a 3D model of this bracket that you can make on a 3D printer. It will take several hours of printing time on a fused deposition modeling machine. You will also need to wait for many more hours for the support material to dissolve. Therefore, you will have to wait for an entire working day before you can get your bracket and start using it. 

Figure 2: Bracket design that can be made on 3D printer
Laser cutting is a fast process that requires no setup time. Unfortunately, fast speeds in laser cutting are only available if you are cutting two dimensional profiles. You can cut pockets using raster cuts, but then the laser cutting process tends to be slow. For the bracket shown in Figure 1, you can simply design it to be a four piece assembly (shown in Figure 3). Each of the four individual parts can be simply cut as a 2D profile on a laser cutter in a few minutes. 
Figure 3: Bracket design as a four part assembly; each part can be cut on a laser cutter.
If you can design the desired shape such that it can be assembled from parts produced by laser cutting two dimensional profiles, then you can realize your parts at a low cost and get them in a matter of minutes. You will be surprised by what can be realized by this simple process. For example, we have made several versions of legged robots using this process. 

If you really need integrated 3D parts and cannot convert your 3D model into an assembly of 2D parts, then you should go for 3D printing. For example, to build a robotic bird we used 3D printing to make the structural parts. Assembling a structural frame from small laser-cut parts would have been simply impossible, so 3D printing was the right choice in this case.

3D Printing or Laser Cutting? The answer depends upon the part. 3D printing is a great process and the right answer in many cases. Laser cutting is a process with remarkable capabilities, and you should think about it as an alternative to 3D printing. Laser cutting tends to be a lot faster and cheaper than 3D printing, but making it work requires creativity during design.
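The decision rule argued for in this post can be distilled into a tiny process picker. This is just my summary of the discussion above expressed as code, not an official guideline:

```python
# Rule-of-thumb process selection distilled from the post: prefer
# laser cutting when the shape can be assembled from flat 2D
# profiles; fall back to 3D printing for integrated 3D geometry.

def pick_process(decomposable_into_2d_profiles):
    """Suggest a manufacturing process for a part."""
    if decomposable_into_2d_profiles:
        return "laser cutting"  # minutes per part, low cost
    return "3D printing"        # slower, but handles integrated 3D shapes

print(pick_process(True))   # e.g., the four-piece bracket of Figure 3
print(pick_process(False))  # e.g., the robotic bird's structural frame
```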

Wednesday, June 5, 2013

Buying Custom Mechanical Parts on the Internet

Prior to the widespread use of the Internet in the mid-90s, getting a custom mechanical part manufactured was a time-consuming task. For example, if you were an inventor with a brilliant idea living in a small town, you had to travel to the nearest city with an appropriate manufacturing facility. Once there, it might take days to meet the manufacturing expert and find out how many and what kind of changes were needed to make your concept manufacturable.

Today, if you have an Internet connection, you have access to companies who will be happy to make your custom part. Once your CAD model is ready, you can order the part with a few clicks of your mouse. The Internet has enabled new e-commerce models in the manufacturing space and you have many options. You can directly order parts from manufacturers, let a broker find you a manufacturer, or work with a manufacturing service provider. Basically, you don’t need to leave your home to get your parts manufactured. 
So a natural question is – what option should you choose? Let us quickly review what you might need to think about before answering that question. Here are the four basic issues that you need to consider: 
  • Can the process under consideration make your part? Every manufacturing process imposes restrictions on shape, material, and achievable accuracy. So it is important to ensure that the process can produce the part.
     
  • How much will it cost? For some people, cost is the main driver. Other issues are less important. In many situations, other considerations are more important and hence cost minimization is not the right approach.
     
  • How long will it take before the part is delivered to you after you place the order? Sometimes, people are under tremendous time pressure and getting the part as soon as possible is the most important criterion. Some customers are willing to pay a premium price to get the part shipped quickly.
     
  • What is the probability that the part that is delivered to you actually conforms to your specifications? Unfortunately, many things can go wrong when placing a part order on the Internet. Moreover, an outfit with a good looking website and a promise of the lowest possible price may not actually have the right capability or expertise. If you receive a defective part, then it may cause a major problem for your project schedule.
Depending upon your requirements and situation, you may need to consider the following four other issues:
  • There might be a few different ways to make a given part. For example, a part can be laser cut or can be made using a water-jet cutter. So it might be useful if the manufacturer can provide multiple process options to you.
     
  • If you are new to designing mechanical parts, then you might need help in performing manufacturability analysis and improving your part design. Different companies offer different levels of help in this area.
     
  • If you are working on a sensitive project, then you may worry about protecting your intellectual property. How do you know that the manufacturer will not inadvertently share your CAD models with others? If this is your concern, then you may need to carefully review the data protection policy of the manufacturer. You may need to sign a non-disclosure agreement (NDA) with the manufacturer. Please keep in mind that enforcing NDA with an international company might be quite hard.
     
  • Different manufacturers accept different CAD model formats, so you will need to find someone who can accept the files produced by your CAD system.
There are three different types of models to buy custom mechanical parts. I will use representative companies in each model to explain the basic idea:
  • Direct Purchase from Manufacturers: There are many job shops with websites. You can upload your CAD model and they will make it and ship it to you. A well-known example is Protomold (www.protomold.com) for ordering injection molded parts. Usually, directly working with a well-known manufacturer gets you a good price and the fastest delivery. However, the process options might be limited. This appears to be the best option when you have experience with the process under consideration and do not need significant help in ensuring manufacturability.
     
  • Purchasing from Manufacturing Service Providers: Quickparts (http://www.quickparts.com/) uses a number of manufacturers to fulfill its orders. They take care of all the backend details of working with the manufacturer after you order the part. This appears to be the model of choice if you want process flexibility and do not want to directly deal with the manufacturer yourself. This model also provides good support to new designers. However, you might not necessarily get the lowest possible price or the fastest delivery time.
     
  • Finding a Supplier Using a Brokering Service: There are brokering services that will allow you to find a manufacturer to make your part. For example, MFG.COM (http://www.mfg.com/) can help you get quotes from different suppliers. You can then select the supplier who meets your needs. This appears to be a good model if you have experience in dealing with job shops and want to minimize the cost.
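One way to act on the criteria above is a simple weighted scoring matrix over the three purchasing models. The 1-to-5 scores below are my rough reading of the trade-offs described in this post, and the weights express a buyer's priorities; both are illustrative, not data.

```python
# Weighted scoring of the three purchasing models described above.
# Scores (1-5) are a rough, subjective reading of the post.

OPTIONS = {
    "direct from manufacturer": {"price": 5, "speed": 5, "flexibility": 2, "support": 2},
    "service provider":         {"price": 3, "speed": 3, "flexibility": 5, "support": 5},
    "brokering service":        {"price": 5, "speed": 3, "flexibility": 4, "support": 2},
}

def best_option(weights, options=OPTIONS):
    """Pick the option with the highest weighted score for a given
    set of priorities (weights should sum to 1)."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(options, key=lambda name: score(options[name]))

# A new designer who values hand-holding over price:
print(best_option({"price": 0.1, "speed": 0.2, "flexibility": 0.3, "support": 0.4}))
```

Changing the weights (say, toward price) pushes the answer toward direct purchase or brokering, which mirrors the guidance in the bullets above.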
In summary, this post gives you a list of questions to ask as you attempt to buy custom mechanical parts on the Internet. I look forward to hearing your experiences. Did I miss anything important?

Monday, May 27, 2013

Turning Lasers into Robotic Optical Hands for Manipulating Biological Cells

Newton’s scientific accomplishments are truly astonishing. One of his remarkable theories stated that light has momentum. If light has momentum, then it should be possible to move objects by shining a light on them. I am sure that it sounded like a crazy idea when Newton first proposed it.

Over the years, people have done numerous experiments to confirm this theory. The idea is so captivating that it even influenced the great George Lucas: the Star Wars movies featured the famous lightsabers, which utilized special properties of light to create a powerful Jedi weapon. But we have not seen such fantastic spectacles of light and matter interaction in our everyday macroscale world. Light carries very little momentum, so moving a heavy couch by shining a laser on it remains in the realm of science fiction. Unfortunately, if you make the laser too powerful, it will simply evaporate the couch and set your house on fire.

A different picture emerges at the microscale. It is certainly possible to move tiny objects by shining a laser on them, but this mode of interaction does not offer much control. In 1986, Ashkin figured out a better way: he created optical traps that can hold tiny particles in place. The basic idea is to bend and focus a laser beam tightly using an objective lens. Once an object enters the laser beam, the laser starts interacting with it and pushing it towards the focal point, where it gets trapped.

We can imagine the laser as a collection of rays. These rays are reflected and refracted by objects that intercept them. As the rays are bent, their momentum changes and they exert a force on the object. This phenomenon can be visualized as an interaction between a stationary ball and a moving ball: the direction of motion of the moving ball changes as it strikes the stationary ball, and hence it exerts a force on the stationary ball. Ashkin found that once the effect of all the rays in the laser beam was accounted for, the direction of the resultant force was such that the object was pushed towards the focal point of the beam. So as the object entered the laser beam, it was pushed towards the focal point, and once it reached that point it remained there. In essence, the focused laser beam created a particle trap. A trapped particle can be moved by moving the laser beam; thus the laser has been turned into a tweezing tool for grabbing small particles and moving them. Moving optical traps are often referred to as optical tweezers.
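Near the focal point, the trapping behavior described above is commonly approximated as a spring pulling the particle toward the focus. Here is a toy simulation of a bead relaxing into a stationary trap under overdamped dynamics; the stiffness, drag, and step size are made-up illustrative values, and Brownian noise is omitted for clarity.

```python
# Toy model of an optical trap near its focus: the restoring force
# is approximated as a spring, and inertia is neglected (overdamped
# motion), so drag * dx/dt = -stiffness * (x - trap_x).
# All parameter values are illustrative, not measured.

def simulate_trap(x0, trap_x, stiffness=1.0, drag=1.0, dt=0.01, steps=2000):
    """Integrate the overdamped motion of a bead in a harmonic trap
    and return its final position."""
    x = x0
    for _ in range(steps):
        x += -(stiffness / drag) * (x - trap_x) * dt
    return x

# A bead released one unit away from the focus relaxes onto it:
print(simulate_trap(x0=1.0, trap_x=0.0))
```

Moving `trap_x` slowly over time drags the bead along with it, which is exactly the tweezing behavior the post describes.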


Numerous groups have used optical traps to manipulate biological cells and study them. In fact, many important discoveries in biology have been made using optical traps. Biologists are primarily interested in fundamental scientific discoveries. So they are happy to create and move optical traps using tele-operation. Just like tele-operated robots, tele-operated optical traps have many inherent limitations. They are slow, require significant expertise, and limit what kind of manipulation is possible.

I was introduced to optical tweezers in 2004 during my sabbatical at the National Institute of Standards and Technology. Thank you, Arvind Balijepalli and Tom LeBrun! Since I am interested in robotics, once I learned about optical traps I became interested in turning them into robotic hands for automatically manipulating biological cells. In many situations, directly trapping biological cells can cause complications: cells might have an irregular shape, and they might be damaged by direct exposure to the laser.

We decided to take a different path. Rather than building robotic optical tweezers, we wanted to build robotic optical hands. The idea was to use the laser to trap and move microspheres made of silica or polystyrene. These microspheres can serve as “fingers” for gripping or pushing cells. In our scheme, the laser would act as an optical hand (i.e., the hand of a ghost) and the microspheres would act as “fingers”. This idea enabled free-floating “fingers” with no physical hand attached to them. This would truly be an alien hand with no biological counterpart on Earth! We could have as many “fingers” as we wanted by splitting the laser beam to create multiple traps. We could even have multiple hands if we wanted! It was a crazy idea, but it removed many constraints associated with conventional microscale robotic grippers and opened up several new possibilities. Soon we were hooked on making this idea a reality.

There were numerous challenges. Microspheres and cells float in the liquid medium and exhibit Brownian motion. We had to detect these objects in the scene, plan the next trap location, and make sure that the microspheres and cells moved the way we wanted them to move. However, we had only a few milliseconds for image analysis, planning, and control. The images are noisy and the environment has significant uncertainty. Moreover, the motions of all the hands and fingers need to be exquisitely coordinated. So this was a really tough robotics problem. +ashis banerjee , +Sagar Chowdhury , +Petr Svec , and +Atul Thakur worked incredibly hard to solve the challenging planning, perception, and control problems to realize this vision. They built upon the basic software capability provided by Andrew Pomerance. Wolfgang Losert and Chenlu Wang provided valuable help in conducting the experiments.
Thank you, National Science Foundation, for supporting this work!
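The sense-plan-act cycle described above can be sketched in a few lines of Python. This is a toy illustration only; the function names and the noise model are hypothetical, and this is not the group's actual software:

```python
import random

def detect_objects(true_positions):
    """Stand-in for noisy particle detection: returns estimated (x, y)
    positions corrupted by measurement noise, as real images would be."""
    return [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
            for (x, y) in true_positions]

def plan_trap_step(position, goal, step=0.1):
    """Plan the next trap location: a small step from the particle's
    estimated position towards its goal position."""
    (px, py), (gx, gy) = position, goal
    dx, dy = gx - px, gy - py
    dist = (dx**2 + dy**2) ** 0.5
    if dist < step:
        return goal
    return (px + step * dx / dist, py + step * dy / dist)

# One iteration of the cycle, which must complete within a few milliseconds:
true_positions = [(0.0, 0.0)]
goal = (1.0, 0.0)
estimate = detect_objects(true_positions)[0]   # sense (noisy)
next_trap = plan_trap_step(estimate, goal)     # plan
# ... act: move the laser trap to next_trap, then repeat
```

Running this loop for many particles at once, while keeping all the "fingers" coordinated, is what makes the real problem so hard.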
 
Our adventures in this area began by concurrently trapping multiple microspheres and moving them into an ensemble. We then used that ensemble to hold a cell and move it. We also developed the capability to move a cell into its desired location by pushing on it with a microsphere. If a cell is very sensitive to the laser, we can use an intermediate microsphere as a tool, so that the microsphere “finger” being trapped by the laser never touches the cell, ensuring physical separation between the cell and the laser. Please see below the video of our robotic optical hand.


Hopefully, our colleagues in biology and medicine will be able to think about new scientific studies that can be enabled by the robotic optical hands described above and their variants. Possibilities range from understanding the behavior of cancer cells to understanding how cells communicate.

Robotics solutions that enable precise automated manipulation of individual cells are expected to revolutionize medicine and biology. Our explorations in this area have taught us that robotics at the microscale requires out-of-the-box thinking. We are now busy coming up with even crazier ideas to marry robotics and biology. So stay tuned for updates.

Sunday, May 19, 2013

Recent Advances in Industrial Robots and Their Implications on Manufacturing

Industrial robots (e.g., ABB, PUMA) have been quite successful on mass production assembly lines. For example, they are routinely used to weld, paint, and join parts in the automobile industry. However, small and medium manufacturers (SMM) in the US have largely stayed away from industrial robots. They continue to rely on manual labor, and this makes it hard for them to compete with overseas suppliers with low labor costs.

The National Association of Manufacturers (NAM) defines small manufacturers as companies with 500 or fewer employees and medium-sized manufacturers as companies with 2,500 or fewer employees. The NAM estimates that the US has close to 300,000 SMM, representing a very important segment of the manufacturing sector. As we move towards shorter product life cycles and customized products, the future of manufacturing in the US will depend upon the ability of SMM to remain cost competitive.

This blog post explores the reasons behind the lack of adoption of industrial robotics technology by SMM and recent advances in robotics that might change the status quo.

Let us explore a representative scenario to understand the limitations of current industrial robots and why they are not used by SMM. Imagine that you are working in a small company, building a prototype of a new medical device. You are under extreme time pressure to meet an important deadline. As you are assembling the device, you realize that a bracket is too compliant. You need to laser cut it again in a much stiffer material. The good news is that cutting the bracket will take only six minutes. But the logistics associated with it will take an hour. You really need to continue assembling the rest of the device and testing the controller. You simply don’t have an hour to spend and could certainly use an assistant right now!

Here is what you would like your assistant to do: walk over to the material storage area, locate the right material, pick it up, take it to the laser cutter, open the laser cutter, place the material in it, press the button to start cutting, wait for the part to finish, open the laser cutter, pick up the part, clean it, and bring it to you. Obviously, human assistants can do all of these tasks without even flexing their cognitive muscles. I am sure they can do them while texting and surfing the net on their smartphones! Unfortunately, current industrial robots simply cannot do these tasks. So you simply cannot get a robot assistant today!

Robots that rule the assembly line have the following four limitations. First, they are immobile. They cannot go to the task location; the work has to be brought to them. Second, their dexterity is extremely limited. Simple tasks such as opening shelves and precisely placing and securing a previously unseen part in a machine are beyond their capabilities. Third, it takes a long time to program them, so using robots for non-repetitive tasks is simply counter-productive. Finally, robots cannot work in close proximity to humans because of safety concerns. So you can forget about a robot assistant walking over and handing you a tool or a part on the shop floor.

Most SMM use highly automated machines (e.g., CNC machines, laser cutters, water-jet cutters, CNC press brakes, 3D printers). However, SMM shop floors tend to be unstructured and often go through changes to meet the needs of the projects at hand. The main uses of manual labor in SMM are material transport and handling, machine setup and calibration, inspection, clean-up, and packaging. Unfortunately, current industrial robots, designed for mass production assembly lines, are of little use in these tasks. So industrial robots offer very little value to SMM!

Recent advances in robotics are challenging the status quo and aiming to turn robots into important tools for SMM. I would like to share the following important trends:

  • Mobile manipulators are robots that can transport themselves to the work site. I recently saw demonstrations of mobile manipulators developed by Kuka that show impressive capabilities. This capability will be very useful in expanding the role of robots in manufacturing, particularly from the SMM point of view.
  • Dexterity has been a major obstacle to the widespread use of robots in manufacturing. Recent developments in robot hands aim to overcome this obstacle (e.g., Schunk and Barrett hands). 3D printing also enables users to quickly create their own customized grippers in a few hours.
  • Baxter from Rethink Robotics is aiming to eliminate the need for writing code to program robots. Instead, robots can be programmed by demonstrating the tasks. This is expected to empower workers on the shop floor. They will be able to start utilizing robots without the need to wait for a robot programmer to assist them.
  • Recent advances in human-safe robots are enabling robots to work in close proximity to humans. The Kuka lightweight arm and Baxter are representative examples of advances in this area. Many researchers are developing methods to track human operators in the workspace, making robots aware of the humans around them so that planned robot motions can be changed to avert injury. For example, +Krishnanand Kaipa , +Carlos Morato , and +Boxuan Zhao in my lab have developed a system that monitors a human operator working in close proximity to a robot using four Microsoft Kinect sensors. This information is used by the robot to update its plan. The video of this system is shown below.
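One simple policy a human-aware robot can apply is speed scaling based on the distance to the nearest tracked person. The sketch below is a generic illustration with made-up zone thresholds; it is not the Kinect-based system from my lab:

```python
def safe_speed_scale(min_distance_m, slow_zone=1.5, stop_zone=0.5):
    """Scale robot speed from the distance to the nearest human:
    full speed beyond slow_zone, full stop inside stop_zone,
    and a linear ramp in between. Thresholds (in meters) are illustrative."""
    if min_distance_m >= slow_zone:
        return 1.0
    if min_distance_m <= stop_zone:
        return 0.0
    return (min_distance_m - stop_zone) / (slow_zone - stop_zone)

assert safe_speed_scale(2.0) == 1.0   # human far away: full speed
assert safe_speed_scale(0.3) == 0.0   # human too close: stop
assert safe_speed_scale(1.0) == 0.5   # in between: slow down proportionally
```

Real systems go further and replan the robot's motion, but even this simple rule captures the idea of changing robot behavior based on tracked human positions.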

I believe that ultimately the convergence of the above mentioned technologies will create the second generation of industrial robots that will revolutionize the manufacturing industry. 

Once the demand for these robots increases, their cost will start coming down. There is no reason why low-end industrial robots cannot be sold for less than ten thousand dollars once economies of scale kick in. This in turn will make robots affordable for SMM and manufacturing cost-competitive in high wage countries.

Tuesday, April 30, 2013

Robo Raven: A Step towards Bird-Inspired Flight

I have always been fascinated by birds. To me they represent beauty, freedom, and a design marvel. I am envious of bird watchers. What a wonderful hobby! Bird watching requires patience and travel to exotic places where birds like to hang out. Unfortunately, patience is not my forte. Also, at this stage in my life I am unable to travel to exotic places. So for now, I have compromised and settled for the next best thing: creating and watching my own “birds”. I am interested in building robotic birds, not the kind that look pretty on a shelf, but ones that can actually flap their wings and fly.
 
Eight years of experiments have taught me that designing and building robotic birds is hard, despite the apparent simplicity of the idea: flap the wings to generate thrust to propel forward, and use the moving air to generate lift to stay aloft. I am glad that this looked deceptively straightforward in the beginning; otherwise we would never have started on this adventurous journey. How hard can it be to build two wings and flap them with a motor? It turns out to be quite challenging if you want the bird to actually fly! It requires a long trial-and-error process due to the absence of accurate simulation tools. Many concepts that look good on paper lead to a spectacular crash during the flight test, often causing a fatal injury to the robotic bird! So design iterations are painfully slow.


We had our first successful flight in 2007. +Arvind Ananthanarayanan, Wojciech Bejgerowski, and +Dominik Müller were the main architects behind this feat. Hugh Bruck, my faculty colleague at the University of Maryland, offered valuable help with the wing evaluation. We subsequently created three more flying versions using similar ideas. The last one in this series was completed in 2010, and John Gerdes played a major role in building it. Please click here to see videos of these “birds” in action. We were able to put a tiny video camera on them and get the “bird’s eye view”. We were able to launch them using a small ground robot. They were able to fly in moderate winds of around 10 miles per hour. As I mentioned before, every design flaw led to a fatal crash. But to our surprise, a very successful design also led to fatal crashes for a different reason: a hawk felt threatened by our "bird" and tore it apart in mid-flight on multiple occasions!

Real birds can precisely control each wing during flight, which enables them to do all sorts of aerobatic maneuvers. This has been a very difficult feat to achieve in bird-inspired robots. In fact, prior efforts (including our own, mentioned above) utilized only simple wing motions in which both wings are driven by a single motor, so the motions of the two wings are coupled. Minor adjustments can be made using small secondary actuators, but the two wings cannot move completely independently. In the past, any major change in wing motion had to be accomplished by making a hardware change on the ground. Clearly this limited how close a robotic bird could come to a real bird in terms of flight characteristics.

I wanted to build a bird with completely independent wings that could be programmed with arbitrary motion profiles. We did a preliminary experiment five years ago, but unfortunately it was not successful at that time. So we shelved the idea for a few years. Hugh Bruck and I revived it about a year ago. I am happy to report that we finally had a breakthrough last week. The students responsible for this success are Eli Barnett, John Gerdes, Johannes Kempny, Ariel Perez-Rosado, and Luke Roberts. Our new robot is based on a fundamentally new design concept. We call it Robo Raven. It features programmable wings that can be controlled independently. We can now program any desired motion pattern for each wing. This allows us to try new in-flight aerobatics that would not have been possible before. For example, we can now dive and roll. Please see below for the video of Robo Raven.

The new design uses two actuators that are synchronized electronically to coordinate the motion of the two wings. The use of two actuators required a bigger battery and an on-board microcontroller. All of this made our robotic bird overweight. So how do we get Robo Raven to “diet” and lose weight? We used advanced manufacturing processes such as 3D printing and laser cutting to create lightweight polymer parts. However, this alone was not sufficient. We needed three other tricks to get Robo Raven to fly. First, we programmed wing motion profiles that ensured the wings maintained the optimal velocity during the flap cycle to achieve the right balance between lift and thrust. Second, we developed a method to measure the aerodynamic forces generated during the flapping cycle. This enabled us to quickly evaluate many different wing designs and select the best one. Finally, we performed system-level optimization to make sure that all components worked well as an integrated system.
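To make the idea of independently programmable wings concrete, here is a minimal sketch of a parametrized sinusoidal flap profile. The function name, frequency, and amplitudes are illustrative assumptions, not Robo Raven's actual control parameters:

```python
import math

def wing_angle(t, freq_hz=4.0, amplitude_deg=60.0, phase=0.0, offset_deg=0.0):
    """Flap angle (degrees) of one wing at time t for a sinusoidal motion profile.
    Each wing gets its own set of parameters, so the profiles are independent."""
    return offset_deg + amplitude_deg * math.sin(2 * math.pi * freq_hz * t + phase)

# With one actuator per wing, the two profiles can differ. For example,
# commanding asymmetric amplitudes is one way to induce a roll:
t = 0.05
left = wing_angle(t, amplitude_deg=60.0)
right = wing_angle(t, amplitude_deg=40.0)  # smaller flap on the right wing
```

With a single-motor design, both wings would be locked to one shared profile; independent actuation makes maneuvers like the dive and roll possible by simply changing the parameters in software.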

Robo Raven will enable us to explore new in-flight aerobatics. It will also allow us to more faithfully reproduce observed bird flights using robotic birds. I hope that this robotic bird will also inspire more people to choose “bird making” as their hobby!

Robotic birds (i.e., flapping wing micro air vehicles) are expected to enable advances in many different applications such as agriculture, surveillance, and environmental monitoring. Robo Raven is just the beginning. Many exciting developments lie ahead. The exotic bird that you spot on your next trip to Hawaii might actually be a robot!