A humanoid kitchen robot:
A robot that helps around the house is no longer a dream of the future. ARMAR, the humanoid robot, can understand commands and execute them independently, fetching the milk from the fridge, for instance. Thanks to cameras and sensors, it orients itself in the room, recognizes objects, and grasps them with the necessary sensitivity. It also reacts to gestures and learns how to empty a dishwasher or clean the counter by watching a human colleague. In this way it adapts naturally to a human environment. At CeBIT, ARMAR will show how it moves between a refrigerator, counter, and dishwasher.
Snake robots can use their many internal degrees of freedom to thread through tightly packed volumes, accessing locations that people and machinery otherwise cannot reach.
Moreover, these highly articulated devices can coordinate their internal degrees of freedom to perform a variety of locomotion modes that go beyond the capabilities of conventional wheeled robots and even of recently developed legged robots. The true power of these devices is their versatility, achieving behaviors not limited to crawling, climbing, and swimming.
According to Howie Choset, director of the Biorobotics Lab at Carnegie Mellon University:
Serpentine robots offer advantages over traditional mobile robots and robot arms because they have enhanced flexibility and reachability, especially in convoluted environments. These robots are well suited to inspect large space-faring truss structures such as the future space station and can also be used to inspect the Space Shuttle cargo bay before launch. On Earth, serpentine mechanisms offer unique capabilities for applications such as bridge inspection, search and rescue, surface coating, and minimally invasive surgery. The work described in this paper exploits a geometric structure, termed a roadmap, to guide the motions of a serpentine robot in highly convoluted spaces. This approach offers advantages over previous work with serpentine robots because it provides a general mathematical structure that is not mechanism-specific, and therefore applies to a large class of problems.
Hod Lipson at Cornell University in Ithaca, New York, presents successful hovering ornithopters developed with a variety of wing designs. The project builds on existing solutions to the power and stability problems and uses 3D printing as a novel approach to designing and manufacturing the key aerodynamic component: the wings.
If you want to mimic nature in the design of flying robots, the best way to do so is to mimic nature’s design process: evolution. One huge obstacle is time. Nature refines its designs over enormous timescales; if we hope to accomplish similar feats with robots, we have to shorten the prototyping process.
Thus far, producing effective flapping wings for research and ornithopter construction has been a time-consuming and delicate process taking days or longer. The 3D printing technique allows wings to be produced in a matter of minutes, dramatically shortening each design cycle. Overcoming this barrier to experimentation will allow a comprehensive study of lift production for a wide variety of wing shapes, including those replicating real insect wings.
This is some of the most animal-like movement I have ever seen in a robot. Even though it has a drunken style of walking, it reminds me of my dog climbing the stairs, or of some of the best Ray Harryhausen animations. Please watch the video below to be amazed.
The goal of the Learning Locomotion Project was to use machine learning techniques to create autonomous control software that lets a quadruped robot traverse unknown, rugged, and complex terrain. The LittleDog robot was chosen as the experimental platform; it is about 30 cm long and 20 cm high, with three degrees of freedom per leg. The project specifications required the robot to achieve a speed of at least 7.2 cm/s and to climb over obstacles up to 10.7 cm high (for a human, this would correspond to clearing obstacles of 50% body height at a slow walking pace). For additional information, follow the link to this article and the official project website.
The technical components of our approach included:
The ICRA 2010 paper was an Overall Best Paper Award finalist, i.e., among the best four papers out of 2000 submissions.
Kevin Warwick, besides being the world’s first cyborg, is Professor of Cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics and cyborgs.
He and his team have taken the brain cells from rats, cultured them, and used them as the guidance control circuit for simple wheeled robots. Electrical impulses from the bot enter the batch of neurons, and responses from the cells are turned into commands for the device. The cells can form new connections, making the system a true learning machine.
In the video below he says that the team will start using human neurons, and that the system will grow from 100,000 cells to about 30 million brain cells, which is very powerful.
This is heady stuff with the deepest moral and philosophical implications. This is exactly the kind of thing that creeps people out when it comes to cyborgs. And Warwick, being a fan of the Terminator films, appears to welcome the robot overlords with enthusiasm. He says, “Someday we’ll switch on that machine, and we won’t be able to switch it off.”
He recently made a presentation at The 2010 IEEE International Conference on Systems, Man, and Cybernetics in Istanbul, Turkey.
In his presentation he showed how implant and electrode technology can be used to create biological brains for robots, to enable human enhancement, and to diminish the effects of certain neural illnesses. In all cases the end result is to increase the range of abilities of the recipients. He gave an indication of a number of areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking a biological brain directly with computer technology. The emphasis is placed firmly on practical scientific studies that have been and are being undertaken and reported on, notably those using electrode technology, where a connection is made directly with the cerebral cortex and/or nervous system. The presentation also considered a future in which robots have biological, or part-biological, brains, and in which neural implants link the human nervous system bidirectionally with technology and the internet.
The world’s robot population has reached 8.6 million. To get a sense of this number, consider:
Erico Guizzo arrived at the 8.6 million estimate based on data from the latest edition of World Robotics, a great numbers-filled report prepared annually by the good folks at the International Federation of Robotics, or IFR. The report came out late last year (I finally had time to take a look at it) and covers the robot market up to the end of 2008.
One of the big challenges facing robot developers is getting the robot to understand and process its environment. It’s a big leap from a robot that can steer around obstacles, to one that can detect discrete objects and actions and then interpret what it has detected. Even more so to do that in real time.
Hae Jong Seo (서해종), a PhD student from the Multidimensional Signal Processing Research Group at UC Santa Cruz, teamed up with Willow Garage for a project doing just that. And the great part is that they were using low-cost web cameras!
They were using a system called LARK (Locally Adaptive Regression Kernels). Do you think Twitter is actually an obscure engineering acronym like LARK? If it is, I don’t want to know. I prefer thinking of birds, and I hope that LARK can detect a real lark someday.
In order for personal robots to interact with people, it is useful for robots to know where to look, locate and identify objects, and locate and identify human actions. As Willow Garage describes it:
LARK features have many applications, such as saliency detection. Saliency detection determines which parts of an image are most significant, such as regions containing objects or people. You can then focus object detection on the salient regions of the image in order to detect more quickly. Saliency detection can also be extended to “space-time” for use with video streams.
LARK features can also be used for generic object and action detection. As you can see in the video, objects such as door knobs, the PR2 robot, and human faces can be detected using LARK. Space-time LARK can also detect human actions, such as waving, sitting down, and approaching the camera.
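Seo’s actual implementation is not included in the post, but the core idea can be sketched in a few dozen lines: build a smoothing kernel whose shape is steered by the local gradient covariance, then score a pixel as salient when its kernel resembles its neighbors’ kernels poorly. The patch size, bandwidth, and cosine-similarity measure below are simplifications for illustration, not the published method.

```python
import numpy as np

def lark_descriptor(patch, h=1.0):
    """Toy LARK descriptor for a square grayscale patch: a kernel whose
    shape is steered by the local gradient covariance, so it elongates
    along edges and stays isotropic on flat regions."""
    gy, gx = np.gradient(patch.astype(float))
    # 2x2 gradient covariance matrix, lightly regularized
    C = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]]) + 1e-6 * np.eye(2)
    n = patch.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    dy, dx = ys - n // 2, xs - n // 2
    # Mahalanobis-like distance d^T C d from the patch centre
    dist = C[0, 0] * dx * dx + 2 * C[0, 1] * dx * dy + C[1, 1] * dy * dy
    K = np.exp(-dist / (2.0 * h ** 2))
    return (K / K.sum()).ravel()

def saliency(image, patch=7):
    """Self-resemblance saliency: a pixel is salient when its LARK
    descriptor has low cosine similarity to its neighbours' descriptors."""
    r = patch // 2
    H, W = image.shape
    desc = np.zeros((H, W, patch * patch))
    for y in range(r, H - r):
        for x in range(r, W - r):
            desc[y, x] = lark_descriptor(image[y - r:y + r + 1, x - r:x + r + 1])
    sal = np.zeros((H, W))
    for y in range(r + 1, H - r - 1):
        for x in range(r + 1, W - r - 1):
            c = desc[y, x]
            sims = [np.dot(c, desc[y + i, x + j])
                    / (np.linalg.norm(c) * np.linalg.norm(desc[y + i, x + j]) + 1e-12)
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
            sal[y, x] = 1.0 - np.mean(sims)  # low self-resemblance = salient
    return sal

# Toy image: a bright square on a flat background
img = np.zeros((20, 20))
img[8:12, 8:12] = 1.0
sal = saliency(img)
```

On this toy image the saliency score is highest around the edges of the square, where the steered kernels differ most from their surroundings, which is exactly the property that lets a robot focus detection on promising regions first.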
When I think aerial swarming robots, I think of something sinister like the swarming human meat grinders from The Matrix trilogy. I don’t think of sunny summer days in the French countryside and a pretty girl under a yellow umbrella and robot launches that remind me of youthful paper airplane experiments. But that’s what these experiments look like on the outside. On the inside there is some serious R&D happening:
The Swarming Micro Air Vehicle Network (SMAVNET) Project
Sabine Hauert, Severin Leven, Jean-Christophe Zufferey and Dario Floreano
Project Goal

Big Picture
The SMAVNET project aims at developing swarms of flying robots that can be deployed in disaster areas to rapidly create communication networks for rescuers. Flying robots are interesting for such applications because they are fast, can easily overcome difficult terrain, and benefit from line-of-sight communication.
To make aerial swarming a reality, robots and controllers need to be made as simple as possible.
From a hardware perspective, robots are designed to be robust, safe, light-weight and low-cost. Furthermore, protocols and human-swarm interfaces are developed to allow non-experts to easily and safely operate large groups of robots.
From a software perspective, controllers allow flying robots to work together. For swarming, robots react to wireless communication with neighboring robots or rescuers (communication-based behaviors). Using communication as a sensor is interesting because most flying robots are already equipped with off-the-shelf radio modules that are low-cost, lightweight, and relatively long-range. Furthermore, this strategy alleviates the need for position information, which is required by all existing aerial swarm algorithms and typically comes from sensors that depend on the environment (GPS, cameras) or are expensive and heavy (lasers, radars).
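The project’s actual controllers are not given in the post, but the “communication as a sensor” idea can be illustrated with a toy model: a fixed-wing robot that can never stop turns gently while the received signal strength (RSSI) from a transmitter improves and sharply while it degrades, which makes it spiral toward the source without any position estimate. The path-loss model, turn rates, and kinematics below are invented for illustration.

```python
import math

def comm_turn_rate(rssi_now, rssi_prev, gentle=0.3, sharp=0.8):
    """Communication-based behaviour for a fixed-wing robot that cannot
    stop: turn gently while the signal improves, sharply while it
    degrades. The radio itself is the only navigation sensor."""
    return gentle if rssi_now >= rssi_prev else sharp

def simulate(steps=4000, dt=0.1, speed=1.0):
    """2D kinematic plane homing on a transmitter at the origin using
    only RSSI differences (hypothetical log-distance path-loss model)."""
    x, y, heading = 50.0, 0.0, 0.0          # start 50 m away, flying away

    def rssi(px, py):
        return -10.0 * math.log10(max(math.hypot(px, py), 1e-3))

    prev = rssi(x, y)
    start_d = math.hypot(x, y)
    for _ in range(steps):
        now = rssi(x, y)
        heading += comm_turn_rate(now, prev) * dt
        prev = now
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return start_d, math.hypot(x, y)

start_d, end_d = simulate()
```

The robot spends more time on the wide arcs that carry it closer and less on the tight arcs that carry it away, so it drifts toward the transmitter and then loiters around it, a behavior reminiscent of bacterial chemotaxis.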
Platform

The flying robots were specifically designed for safe, inexpensive, and fast prototyping of aerial swarm experiments.
They are lightweight (420 g, 80 cm wingspan) and built out of Expanded Polypropylene (EPP), with an electric motor mounted at the back and two control surfaces serving as elevons (combined ailerons and elevator). The robots run on a LiPo battery with an endurance of 30 minutes. They are equipped with an autopilot that controls altitude, airspeed, and turn rate. Embedded in the autopilot is a microcontroller that runs a minimalist control strategy based on input from only three sensors: one gyroscope and two pressure sensors.
Swarm controllers are implemented on a Toradex Colibri PXA270 CPU board running Linux, connected to an off-the-shelf USB WiFi dongle. The output of these controllers, namely a desired turn rate, speed, or altitude, is sent as a control command to the autopilot.
In order to log flight trajectories, the robot is further equipped with a u-blox LEA-5H GPS module and a ZigBee (XBee PRO) transmitter.
Swarm Algorithms
Designing swarm controllers is typically challenging because no obvious relationship exists between the individual robot behaviors and the emergent behavior of the entire swarm. For this reason, we turn to biology for inspiration.
In a first approach, artificial evolution is used for its potential to automatically discover simple and unthought-of robot controllers. Good evolved controllers are then reverse-engineered into hand-designed controllers that capture the simple, efficient solutions found through evolution while remaining easy to understand and model. The resulting controllers can therefore be adapted to a variety of scenarios in a predictable manner, and can be extended to accommodate entirely new applications. Reverse-engineered controllers demonstrate a variety of behaviors, including exploration, synchronization, area coverage, and communication relay.
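The evolved controllers themselves are not published in the post; as a generic illustration of the first approach, here is a minimal (mu + lambda) evolution strategy over a controller parameter vector. The fitness function is a stand-in: in the real project it would score a simulated flight (area covered, communication links relayed), while this toy version simply rewards matching a hypothetical target vector.

```python
import random

def evolve(fitness, dim, pop_size=20, n_gen=100, sigma=0.3):
    """Minimal (mu + lambda) evolution strategy: mutate parameter
    vectors with Gaussian noise, keep the fittest, and shrink the
    mutation step over time."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        offspring = [[g + random.gauss(0.0, sigma) for g in random.choice(pop)]
                     for _ in range(pop_size)]
        # Truncation selection over parents plus offspring (elitist)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
        sigma *= 0.98                       # anneal the mutation step
    return pop[0]

# Stand-in fitness: rewards closeness to a hypothetical target vector
TARGET = [0.5, -0.2, 0.8]
def fitness(w):
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

random.seed(0)                              # repeatable demo
best = evolve(fitness, dim=3)
```

Because selection is elitist, fitness improves monotonically; the interesting (and hard) part in practice is the reverse-engineering step, turning the opaque evolved parameters into rules a human can model.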
In a second approach, inspiration is taken from ants, which deploy efficiently to search for food and maintain pheromone paths leading to discovered food sources. This is analogous to deploying and maintaining communication pathways between rescuers using the SMAVNET.
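The project’s deployment algorithm is likewise not given in the post, but a generic ant-colony sketch conveys the flavor: ants repeatedly walk a graph from a base node to a rescuer node, pheromone evaporates, and the best path found so far is reinforced, so the colony converges on the cheapest communication pathway. The graph, weights, and parameters below are all invented for illustration.

```python
import random

def path_cost(graph, path):
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

def ant_walk(graph, pher, start, goal, alpha=1.0, beta=2.0):
    """One ant walks from start to goal; each edge is chosen with
    probability proportional to pheromone^alpha * (1/length)^beta."""
    path, node, visited = [start], start, {start}
    while node != goal:
        options = [(n, w) for n, w in graph[node].items() if n not in visited]
        if not options:
            return None                     # dead end: abandon this ant
        weights = [pher[(node, n)] ** alpha * (1.0 / w) ** beta
                   for n, w in options]
        node = random.choices([n for n, _ in options], weights)[0]
        visited.add(node)
        path.append(node)
    return path

def aco_shortest_path(graph, start, goal, n_ants=20, n_iter=30, rho=0.1):
    pher = {(a, b): 1.0 for a in graph for b in graph[a]}
    best = None
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = ant_walk(graph, pher, start, goal)
            if p and (best is None or path_cost(graph, p) < path_cost(graph, best)):
                best = p
        for e in pher:                      # pheromone evaporation
            pher[e] *= 1.0 - rho
        if best:                            # reinforce the best path so far
            for a, b in zip(best, best[1:]):
                pher[(a, b)] += 1.0 / path_cost(graph, best)
                pher[(b, a)] += 1.0 / path_cost(graph, best)
    return best

# Toy network: node 0 is the base, node 3 the rescuer; edge weights are
# link costs. The cheapest pathway is 0 -> 1 -> 3 (cost 4.0).
graph = {
    0: {1: 2.0, 2: 4.5},
    1: {0: 2.0, 3: 2.0},
    2: {0: 4.5, 3: 1.0},
    3: {1: 2.0, 2: 1.0},
}
random.seed(7)                              # repeatable demo
best = aco_shortest_path(graph, 0, 3)
```

Evaporation is the key maintenance mechanism: if a link disappears, its pheromone fades and the colony rediscovers an alternative, which mirrors how a swarm would keep a relay chain alive between rescuers.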
All the software and hardware needed to perform experiments with 10 flying robots was developed within the scope of this project. To the best of our knowledge, this is, to date, the setup with the largest number of flying robots operating outdoors.
For fast deployment of large swarms, input from the swarm operator must be reduced to a minimum during robot calibration, testing, and all phases of flight (launch, swarming, landing). Robot reliability, safety, and autonomy must therefore be pushed to a maximum so that operators can easily perform experiments without safety pilots. In our setup, robots auto-calibrate and perform a self-check before being launched by the operator. Robots can be monitored and controlled through a swarm interface running on a single computer.
The critical issue of operational safety has been addressed by light-weight, low-inertia platform design and by implementing several security features in software. Among other things, we looked at mid-air collision avoidance using local communication links and negotiation of flight altitudes between robots. By providing a risk analysis for ground impact and mid-air collisions to the Swiss Federal Office for Civil Aviation (FOCA), we obtained an official authorization for beyond-line-of-sight swarm operation at our testing site.