18th June 2015
World's most lifelike bionic hand will transform the lives of amputees
A congenital amputee from London has become the first user in the UK to be fitted with a new prosthetic hand that launches this week and sets a new benchmark in small myoelectric hands.
Developed using Formula 1 technology and specifically scaled for women and teenagers, the bebionic small hand is built around an accurate skeletal structure with miniaturised components designed to provide the most true-to-life movements.
The bebionic small hand, developed by prosthetic experts Steeper, will enable fundamental improvements in the lives of thousands of amputees across the world. The hand marks a turning point in the world of prosthetics, as it closely mimics the functions of a real hand via 14 different precision grips. As a highly dexterous bionic extension of the arm, it allows amputees to engage in a range of activities that would previously have been complex or unmanageable.
Nicky Ashwell, 29, born without a right hand, received Steeper's latest innovation at a fitting by London Prosthetics Centre, a private facility providing expert services in cutting-edge prosthetics. Before being fitted with the bebionic small hand, Nicky would use a cosmetic hand without movement; as a result, Nicky learned to carry out tasks with one hand. The bebionic small hand has been a major improvement to Nicky's life, enabling her to do things previously impossible with one hand such as riding a bike, gripping weights with both hands, using cutlery and opening her purse.
Nicky, who is a Product Manager at an online fashion forecasting and trend service, said: "When I first tried the bebionic small hand it was an exciting and strange feeling; it immediately opened up so many more possibilities for me. I realised that I had been making life challenging for myself when I didn't need to. The movements now come easily and look natural; I keep finding myself being surprised by the little things, like being able to carry my purse while holding my boyfriend's hand. I've also been able to do things never before possible like riding a bike and lifting weights."
The bebionic small hand works using sensors triggered by the user's muscle movements, which connect to individual motors in each finger and to powerful microprocessors. The technology comprises a unique system that tracks and senses each finger through its every move – mimicking the functions of a real hand. Development follows seven years of research and manufacturing, including the use of Formula 1 techniques and military technology, along with advanced materials including aerograde aluminium and rare-earth magnets.
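In outline, a myoelectric hand maps muscle-signal (EMG) readings to motor commands for each finger. The sketch below illustrates that idea only – the thresholds, grip patterns and two-site control scheme are illustrative assumptions, not Steeper's actual firmware.

```python
# Illustrative sketch of a myoelectric control loop.
# All values and grip names are hypothetical, not Steeper's firmware.

GRIPS = {
    "tripod": [1.0, 1.0, 1.0, 0.0, 0.0],   # target closure per finger (0..1)
    "power":  [1.0, 1.0, 1.0, 1.0, 1.0],
    "pinch":  [1.0, 1.0, 0.0, 0.0, 0.0],
}

OPEN_THRESHOLD = 0.3    # assumed normalised EMG activation levels
CLOSE_THRESHOLD = 0.6

def select_command(emg_open: float, emg_close: float, grip: str):
    """Map two muscle-site EMG readings to per-finger motor targets."""
    if emg_close >= CLOSE_THRESHOLD:
        return GRIPS[grip]          # drive fingers into the selected grip
    if emg_open >= OPEN_THRESHOLD:
        return [0.0] * 5            # open the hand
    return None                     # below both thresholds: hold position

print(select_command(0.1, 0.8, "pinch"))   # close into a pinch grip
print(select_command(0.5, 0.2, "power"))   # open the hand
```

In a real device the microprocessor would run this decision loop continuously, smoothing the raw EMG signal and ramping each finger motor towards its target rather than jumping straight to it.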
Ted Varley, Technical Director at Steeper said, "Looking to the future, there's a trend of technology getting more intricate; Steeper has embraced this and created a smaller hand with advanced technology that is suitable for women and teenagers. An accurate skeletal structure was firstly developed, with the complex technology then specifically developed to fit within this in order to maintain anatomical accuracy. In other myoelectric hands the technology is developed first, at the expense of the lifelikeness."
Bebionic small hand at a glance:
• Contains 337 mechanical parts
• 14 grip patterns and hand positions to allow a range of precision movements
• Weighs approximately 390g – the same as a large bar of Galaxy chocolate
• 165mm from base to middle fingertip – the size of an average woman's hand
• Strong enough to handle up to 45kg – around the same as 25 bricks
• The only multi-articulated hand with patented finger control system using rare-earth magnets
• Specifically designed with women, teenagers and smaller-framed men in mind
30th May 2015
Cheetah robot can jump over obstacles
Engineers at the Massachusetts Institute of Technology (MIT) have developed a new version of the Cheetah robot, which is able to leap over obstacles while running at high speed. The eerily lifelike machine uses a laser distance sensor and real-time algorithms to perceive its environment. In this demonstration video, it is shown hurdling objects up to 40cm (16") in height, and performing multiple jumps without a safety harness.
"A running jump is a truly dynamic behaviour," says Sangbae Kim, assistant professor of mechanical engineering, in a press release. "You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviours."
In the future, this robot – and others like it – may serve important functions in the military. They could scout ahead of soldiers to provide real-time information on the battlefield, for example, or relieve troops of the burden of carrying ammunition, food, medical supplies, batteries and other equipment. These machines could also be useful in search and rescue operations, able to access difficult or remote terrain that would defeat other types of vehicle.
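The hurdling behaviour described above amounts to a timing problem: given the laser-measured distance to an obstacle and the robot's speed, decide when to take off and how hard to push upward. The snippet below is a simplified ballistic feasibility check only – the numbers are assumptions, and MIT's actual controller also plans approach steps and landing.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump_plan(run_speed: float, obstacle_dist: float, obstacle_h: float,
              margin: float = 0.1):
    """Toy ballistic check: vertical launch speed needed to clear the
    obstacle (plus a safety margin), and time until take-off at the
    current run speed. The real controller is far more sophisticated."""
    v_z = math.sqrt(2 * G * (obstacle_h + margin))
    time_to_obstacle = obstacle_dist / run_speed
    return v_z, time_to_obstacle

# A 40 cm hurdle, detected 1.1 m ahead while running at 2.2 m/s:
v_z, t = jump_plan(run_speed=2.2, obstacle_dist=1.1, obstacle_h=0.40)
print(f"launch v_z = {v_z:.2f} m/s, take-off in {t:.2f} s")
```

Even this toy version shows why the behaviour is "truly dynamic": the clearance calculation, the take-off timing and the landing all depend on speed measured fractions of a second earlier.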
28th May 2015
Robot masters new skills through trial and error
Researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a significant milestone in the field of artificial intelligence.
UC Berkeley researchers (from left to right) Chelsea Finn, Pieter Abbeel, BRETT, Trevor Darrell and Sergey Levine (Photo courtesy of UC Berkeley Robot Learning Lab).
Researchers at the University of California, Berkeley, have demonstrated a new type of reinforcement learning for robots. This allows a machine to complete various tasks without pre-programmed details about its surroundings – such as putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more.
“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel, Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
The latest developments are presented today, Thursday 28th May, at the International Conference on Robotics and Automation in Seattle. The work is part of a new People and Robots Initiative at UC’s Centre for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.
“Most robotic applications are in controlled environments, where objects are in predictable positions,” says UC Berkeley faculty member Trevor Darrell, who is leading the project with Abbeel. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”
Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates. Instead, the researchers turned to a new branch of AI known as deep learning. This is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.
“For all our versatility, humans are not born with a repertoire of behaviours that can be deployed like a Swiss army knife, and we do not need to be programmed,” explains postdoctoral researcher Sergey Levine, a member of the research team. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”
In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognise patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program, or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition. Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.
“Moving about in an unstructured 3D environment is a whole different ballgame,” says Ph.D. student Chelsea Finn, another team member. “There are no labelled directions, no examples of how to solve the problem in advance. There are no examples of the correct solution like one would have in speech and vision recognition programs.”
In their experiments, the researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks. They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT’s learning included a "reward" function that provided a score based on how well the robot was doing with the task.
BRETT takes in the scene including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based on the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that don't. The score feeds back through the neural net, so the robot can "learn" which movements are better for the task at hand. This end-to-end training process underlies the robot’s ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn.
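The score-feedback loop described above can be illustrated with a deliberately tiny stand-in: perturb the policy parameters, score the result, and keep whatever scores better. This is simple random-search hill climbing on a three-parameter toy task, not the gradient-based deep learning used to train BRETT's 92,000-parameter network, but the reward-driven principle is the same.

```python
import random

random.seed(0)

TARGET = [0.6, -0.2, 0.3]   # toy "task": joint angles that complete it

def reward(params):
    """Score an attempt: higher when the movement lands nearer the goal."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

# Trial and error: perturb the parameters, keep changes that raise the
# reward. A toy stand-in for BRETT's neural-net training loop.
params = [0.0, 0.0, 0.0]
best = reward(params)
for _ in range(2000):
    candidate = [p + random.gauss(0, 0.05) for p in params]
    r = reward(candidate)
    if r > best:                # reinforcement: keep what scored better
        params, best = candidate, r

print([round(p, 2) for p in params])   # converges near TARGET
```

The practical gap between this sketch and BRETT is scale: with 92,000 parameters and raw camera pixels as input, naive random search is hopeless, which is why the Berkeley team use deep neural networks and far more sample-efficient update rules.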
When given the relevant coordinates for the beginning and end of the task, the PR2 can master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.
Abbeel says the field will likely see big improvements as the ability to process vast amounts of data increases: “With more data, you can start learning more complex things. We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”
7th May 2015
The first licensed autonomous driving truck in the US
Vehicle manufacturer Daimler this week announced that its Freightliner Inspiration Truck has become the world's first autonomous truck to be granted a licence for road use in the State of Nevada.
In July last year, Daimler provided the world's first demonstration of an autonomous truck in action, when the Mercedes-Benz Future Truck 2025 drove along a cordoned-off section of the A14 autobahn near Magdeburg, Germany. Engineers then transferred the system to the US brand Freightliner and created the Inspiration Truck – modified for use on American highways. The result: the State of Nevada has certified no fewer than two Freightliner Inspiration Trucks for regular operations on public roads. Governor Brian Sandoval handed over the official Nevada licence plates during a ceremony at the Las Vegas Motor Speedway.
This futuristic vehicle is based on the existing Freightliner Cascadia model, but has the addition of "Highway Pilot" technology. The latter combines a sophisticated stereo camera and radar technology with systems providing lane stability, collision avoidance, speed control, braking, steering and an advanced dash display, allowing for safe autonomous operation on public highways. These components were extensively tested. As part of the truck's so-called Marathon Run, it covered over 10,000 miles (16,000 km) on a test circuit in Papenburg, Germany.
The radar unit in the front bumper scans the road ahead at both long and short range. The long-range radar, with a range of 820 feet and scanning an 18° segment, looks far and narrow to see vehicles ahead. The short-range radar, with a range of 230 feet and scanning a 130° segment, looks wider to see vehicles that might cut in front of the truck.
There is also a medium-range stereo camera, which is located behind the windscreen. The range of this camera is 328 feet, and it scans an area measuring 45° horizontal by 27° vertical. This camera is able to recognise lane markings and communicates to the Highway Pilot steering gear for autonomous lane guidance.
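The ranges and fields of view quoted above determine how wide a swathe of road each sensor covers. A quick geometric check (article figures, simple trigonometry only – this is not Daimler's software) shows the complementary design: narrow-but-far versus wide-but-near.

```python
import math

def coverage_width(range_ft: float, fov_deg: float) -> float:
    """Lateral width (feet) swept at maximum range by a centred sensor
    with the given horizontal field of view."""
    return 2 * range_ft * math.tan(math.radians(fov_deg / 2))

# Figures quoted for the Freightliner Inspiration Truck's sensors:
print(f"long-range radar:  {coverage_width(820, 18):6.0f} ft wide at 820 ft")
print(f"short-range radar: {coverage_width(230, 130):6.0f} ft wide at 230 ft")
print(f"stereo camera:     {coverage_width(328, 45):6.0f} ft wide at 328 ft")
```

The 18° long-range beam stays roughly lane-width-relevant even 820 feet out, while the 130° short-range radar blankets several lanes close in to catch vehicles cutting across the truck's path.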
In addition, tiny cameras are located on the exterior of the truck. These reduce blind spots and are capable of replacing exterior mirrors, while creating a slight boost in fuel efficiency (1.5 percent).
The vehicle operates safely under a wide range of conditions – it will automatically comply with posted speed limits, regulate the distance from the vehicle ahead and use the stop-and-go function during rush hour. The driver can deactivate the Highway Pilot manually and is able to override the system at any time. If the vehicle is no longer able to process crucial aspects of its environment, e.g. due to road construction or bad weather, the driver is prompted to retake control.
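The override and hand-back behaviour described above can be modelled as a small state machine. The class below is purely illustrative – the states, method names and transitions are assumptions for explanation, not Daimler's implementation.

```python
class HighwayPilotSketch:
    """Toy state machine for the handover behaviour described above.
    States and transitions are illustrative, not Daimler's design."""

    def __init__(self):
        self.mode = "MANUAL"
        self.takeover_prompt = False

    def engage(self):
        """Driver activates autonomous operation on the highway."""
        self.mode = "AUTONOMOUS"

    def driver_override(self):
        """The driver can deactivate Highway Pilot at any time."""
        self.mode = "MANUAL"
        self.takeover_prompt = False

    def perception_degraded(self):
        """e.g. road construction or bad weather: prompt the driver
        to retake control rather than continuing blindly."""
        if self.mode == "AUTONOMOUS":
            self.takeover_prompt = True

pilot = HighwayPilotSketch()
pilot.engage()
pilot.perception_degraded()
print(pilot.mode, pilot.takeover_prompt)   # AUTONOMOUS True
pilot.driver_override()
print(pilot.mode, pilot.takeover_prompt)   # MANUAL False
```

Note that in this model degraded perception prompts the driver but does not itself drop to manual – consistent with the article's description that the human remains the fallback authority.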
A large, state-of-the-art dash interface, combined with video displays from the various cameras, is designed to offer a great driver experience and to vastly improve how the truck's performance data is communicated to the driver. Highway Pilot informs the driver visually of its current status and also accepts commands from the driver.
According to U.S. government data, 90 percent of truck crashes involve human error – much of that due to fatigue. Wolfgang Bernhard, a member of the Board of Management at Daimler, commented: "An autonomous system never gets tired, never gets distracted. It is always on 100 percent."
For now, the Inspiration Trucks will be limited to Nevada, one of the lowest-density states in the country, but other states are likely to create similar regulations in the future, with California and Michigan expected to follow soon: "Ultimately, this has to be federally regulated to have a consistent basis across the country," says Martin Daum, president and CEO of Daimler Trucks North America.
The Inspiration Truck is only semi-autonomous, as it requires a human behind the wheel, who can take over in case of an emergency. The technology is advancing rapidly, however. Daimler and other manufacturers, including Nissan and Tesla, are planning to introduce fully autonomous vehicles (with no human driver on board) during the early 2020s. Worldwide, freight traffic shipped by road is predicted to triple by 2050, with self-driving vehicles expected to play an ever-increasing role in transportation.
Eventually, these autonomous vehicles will be intelligently connected – to their environment and other road users – to such an extent that they will be able to avoid areas with heavy traffic and contribute to reducing traffic jams. Traffic of the future will flow more smoothly and be far more predictable. Traffic systems will be more flexible and the infrastructure will be utilised better. Transport firms will operate more profitably, with fuel savings alongside lower maintenance costs as a result of less wear on the vehicle components, due to a more constant flow of traffic. Most importantly of all, road safety will be hugely improved – with many thousands of deaths prevented each year.
18th April 2015
World's first robotic kitchen to debut in 2017
Moley Robotics has unveiled an automated kitchen system, able to scan and replicate the movements of a human chef to produce recipes.
The world's first automated kitchen system was unveiled this week at Hanover Messe in Germany – the premier industrial robotics show. Developed by tech firm Moley Robotics, it features a dexterous robot integrated into a kitchen that cooks with the skill and flair of a master chef.
The company's goal is to produce a consumer version within two years, supported by an iTunes-style library of recipes that can be downloaded and created by the kitchen. The prototype at the exhibition is the result of two years' development and the collaboration of an international team, including Sebastian Conran, who designed the cooking utensils, and Mauro Izzo, DYSEGNO and the Yachtline company, who created the futuristic kitchen furniture.
Two complex, fully articulated hands, made by the Shadow Robot Company, comprise the kitchen's key enabling technology. The product of 18 years' research and development, Shadow's products are used in the nuclear industry and by NASA. Able to reproduce the movements of a human hand with astonishing accuracy, their utility underpins the unique capability of the automated kitchen.
The Moley Robotics system works by capturing human skills in motion. Tim Anderson – culinary innovator and winner of the BBC Master Chef competition – played an integral role in the kitchen's development. He first developed a dish that would test the system's capabilities – a crab bisque – and was then 3D recorded at a special studio cooking it. Every motion and nuance was captured, from the way Tim stirred the liquids to the way he controlled the temperature of the hob. His actions were then translated into elegant digital movement, using bespoke algorithms. The robot doesn't just cook like Tim – in terms of skill, technique and execution it is Tim producing the dish. The kitchen even 'signs off' its work with an 'OK' gesture – just as the chef does.
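"Capturing human skills in motion" boils down to recording timestamped poses and replaying them smoothly. The sketch below shows the core idea with linear interpolation between captured keyframes – the data format, joint names and helper function are illustrative assumptions, not Moley's actual recording pipeline.

```python
# Toy sketch of motion capture and playback: record timestamped joint
# poses from a chef, then replay them with linear interpolation.
# Data layout and names are hypothetical, not Moley's format.

def interpolate(recording, t):
    """Return the joint pose at time t by interpolating the capture."""
    if t <= recording[0][0]:
        return recording[0][1]
    for (t0, p0), (t1, p1) in zip(recording, recording[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return [x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1)]
    return recording[-1][1]

# (time_s, [shoulder, elbow, wrist] angles) captured while stirring:
stir = [(0.0, [0.0, 0.5, 0.0]),
        (1.0, [0.2, 0.7, 0.4]),
        (2.0, [0.0, 0.5, 0.0])]

print(interpolate(stir, 0.5))   # pose midway between the first two keyframes
```

A real system records at a far higher rate and must also reproduce forces, timing against the hob's temperature, and grasping of utensils – which is why the articulated Shadow hands are the enabling component.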
"To be honest, I didn't think this was possible," he said. "I chose crab bisque as a dish because it's a real challenge for a human chef to make well – never mind a machine. Having seen – and tasted – the results for myself, I am stunned. This is the beginning of something really significant: a whole new opportunity for producing good food and for people to explore the world's cuisines. It's very exciting."
Moley Robotics, headquartered in the UK, is now working to scale the technology ready for mass production and installation in regular-sized kitchens. Future iterations will be more compact, with smaller control arms but with added functionality in the form of a built-in refrigerator and dishwasher to complement a professional-grade hob and oven.
The company is working with designers, homebuilders, kitchen installers and food suppliers to promote the system. The mass-market product will be supported by a digital library of over 2,000 dishes when it launches in 2017 and it is envisaged that celebrity chefs will embrace 3D cooking downloads as an appealing addition to the cook book market. Home chefs will be able to upload their favourite recipes too, and so help create the 'iTunes' for food.
Moley Robotics was founded by London-based computer scientist, robotics and healthcare innovator Mark Oleynik. The company's aim is to produce technologies that address basic human needs and improve day-to-day quality of life.
"Whether you love food and want to explore different cuisines, or fancy saving a favourite family recipe for everyone to enjoy for years to come, the Automated Kitchen can do this," says Oleynik. "It is not just a labour saving device – it is a platform for our creativity. It can even teach us how to become better cooks!"
The robotic hands demonstrated this week offer a glimpse of the not-too-distant future, when even greater advances in movement, flexibility, touch and object recognition will have been achieved. Experts believe that near-perfect recreations of human hands, operating in a wide variety of environments, will be possible in just 10 years' time.