U.S. robotics company iRobot has won $7.2 million in contracts from the Brazilian government as the country prepares to host the FIFA World Cup.
iRobot is probably best known for its autonomous home vacuum cleaner, the Roomba. However, the company also develops a range of military and police robots. As part of a massive security operation, the contract includes 30 PackBot robots, similar to those used in Afghanistan, Iraq and inside Japan's Fukushima nuclear power plant. The machines will work alongside thousands of soldiers patrolling the host cities, performing surveillance and examining suspicious objects.
Since they were first deployed at the World Trade Center in the aftermath of 9/11, more than 4,500 PackBots have been delivered to military and civil defence forces worldwide. They are credited with saving countless lives in theatre while performing bomb disposal and other dangerous missions. They can operate in a variety of environments to support troops and public safety professionals – enhancing situational awareness, reducing risk and increasing mission success.
"iRobot continues its international expansion, and Brazil represents an important market for the company’s unmanned ground vehicles," said Frank Wilson, senior vice president and general manager of iRobot’s Defense & Security business unit. "iRobot is excited to be providing the company’s state-of-the-art robotic technologies to Brazil as the country prepares for several high profile international events, including the 2014 FIFA World Cup."
Already released to developers, Google Glass will have its consumer launch by early 2014. This video is from Playground – a digital creative agency based in Toronto. In the company's own words:
"For us at Playground, Google Glass is exciting. We are constantly trying to dissect the human-technology relationship and Glass represents information technology at its most intimate. The Explorer edition Glass and its Mirror API is an amazing techno-social experiment, but it is an experiment with limitations. We wanted to visualize what Glass may do as the platform matures past today's limits."
Aldebaran Robotics are the creators of NAO — an autonomous, programmable humanoid robot. The latest model features a higher level of interaction, more accuracy, faster and more reliable "Nuance" voice recognition, smart torque control, improved walking algorithms and measures to reduce collisions. One of the recent apps to have been developed, seen in the video below, allows NAO to write any word you ask him, then spell the word aloud as he writes it. This uses speech recognition to identify the word, text-to-speech to spell it out, and inverse kinematics for the writing itself.
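As a rough illustration of the inverse kinematics involved in the writing task, here is a minimal sketch for a planar two-link arm. This is a simplification for illustration only – NAO's arm has more joints, and Aldebaran's actual implementation is not public:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar two-link arm.

    Given a target pen position (x, y), return the shoulder and
    elbow angles (in radians) that place the arm's tip there.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Trace a letter stroke by solving IK for each point along its outline.
stroke = [(1.2, 0.4), (1.3, 0.6), (1.2, 0.8)]
angles = [two_link_ik(px, py) for px, py in stroke]
```

Solving the joint angles point-by-point along a stroke like this, then interpolating between them, is the basic idea behind making a robot arm draw a shape.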
A British engineer has unveiled a giant "mantis" robot, large enough to carry a human pilot, which is supported by multiple hydraulic legs. This all-terrain machine, which took four years to research and build, has reportedly attracted the interest of mining and marine research companies. For more information, visit the official website.
Back in October 2011, Boston Dynamics released a video of PETMAN – a surprisingly realistic humanoid robot. This was intended for eventual use in testing protective clothing worn by the military. The machine has since been redesigned with a head and other upgrades. These include sensors to detect leaks within a suit, along with artificial perspiration to simulate the micro-climate experienced by a real person. With its added clothing, PETMAN has been undergoing validation experiments and will soon be tested inside an exposure chamber with sarin, mustard gas and other deadly chemicals.
Japanese electronics firm Hitachi has unveiled "ROPITS" – Robot for Personal Intelligent Transportation System.
ROPITS is designed to aid the short-distance transportation of the elderly, or those with walking difficulties. The vehicle is equipped with a "specified arbitrary point autonomous pick-up and drop-off function" which can navigate to locations specified on a tablet or mobile device. Thanks to its small size and slow speed (3.7 mph, or 5.9 km/h), it can move safely across pavements, squares and open areas without being restricted to roads. On-board sensors provide a 360° view of the surrounding environment, allowing it to sense and react to pedestrians. Actuators and shock absorbers keep the body constantly maintained in a level position (horizontal state), so uneven surfaces can be handled without losing balance.
This and other such vehicles may be needed to support future societies with a higher proportion of old people than today. Japan faces a particular problem in this regard, having the largest proportion of elderly citizens in the world. Citizens aged 65 and over comprised 20% of Japan's population in 2006, a figure forecast to reach 40% by 2060.
Hitachi claims that ROPITS could also be used as an autonomous delivery vehicle for a variety of services. The company intends to continue testing its vehicle and will present further details at the Robotics and Mechatronics Conference (ROBOMEC 2013), to be held in the Tsukuba Special District from 22nd–25th May.
The system, called Zoe, bears a striking resemblance to Holly, the ship's computer in British sci-fi comedy, Red Dwarf. It is based on a template that, in the near future, could allow people to upload their own faces and voices. Users would be able to customise and personalise their own digital assistants for a range of applications – in mobile "face messages", gaming, audio-visual books, as a means of delivering online lectures or presentations, and in various user interfaces.
Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge: “This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being.”
As well as being more expressive than any previous system, Zoe is also remarkably data-light. The program used to run her is just tens of megabytes in size, which means that it can be easily incorporated into even the smallest computer devices, including tablets and smartphones.
It works by using a set of fundamental, “primary colour” emotions. Zoe’s voice, for example, has six basic settings – Happy, Sad, Tender, Angry, Afraid and Neutral. The user can adjust these settings to different levels, as well as altering the pitch, speed and depth of the voice itself. By combining these levels, it becomes possible to pre-set or create almost infinite emotional combinations.
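As an illustration of how a handful of "primary colour" settings can be mixed, here is a hypothetical sketch. The emotion names match those quoted above, but the blending scheme and its parameters are invented for illustration – this is not the Cambridge team's actual model:

```python
# Hypothetical sketch of "primary colour" emotion blending.
BASE_EMOTIONS = ["happy", "sad", "tender", "angry", "afraid", "neutral"]

def blend(levels, pitch=1.0, speed=1.0, depth=1.0):
    """Combine per-emotion levels into one normalised voice setting."""
    unknown = set(levels) - set(BASE_EMOTIONS)
    if unknown:
        raise ValueError(f"unknown emotions: {unknown}")
    total = sum(levels.values())
    if total <= 0:
        raise ValueError("at least one emotion level must be positive")
    # Normalise the levels so the mix always sums to 1.
    mix = {name: levels.get(name, 0.0) / total for name in BASE_EMOTIONS}
    return {"mix": mix, "pitch": pitch, "speed": speed, "depth": depth}

# A mostly tender, slightly sad delivery, spoken a little slower.
setting = blend({"tender": 0.7, "sad": 0.3}, speed=0.9)
```

Because each basic emotion is a continuous dial rather than an on/off switch, even this simple weighted mix yields an effectively unlimited range of combinations.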
Researchers from five European universities have developed a cloud-computing platform for robots. The platform allows robots connected to the Internet to directly access the powerful computational, storage, and communications infrastructure of modern data centers – the giant server farms behind the likes of Google, Facebook, and Amazon – for robotics tasks and robot learning.
With the development of the RoboEarth Cloud Engine, the team continues their work towards creating an Internet for robots. The new platform extends earlier work on allowing robots to share knowledge with other robots via a WWW-style database, greatly speeding up robot learning and adaptation in complex tasks.
More intelligent robots
The developed Platform as a Service (PaaS) allows robots to perform complex functions like mapping, navigation or processing of human voice commands within the cloud, at a fraction of the time required by robots' on-board computers. By making enterprise-scale computing infrastructure available to any robot with a wireless connection, the researchers believe that the new computing platform will help pave the way towards lighter, cheaper, more intelligent robots.
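The offloading pattern can be sketched as follows. This is an illustrative mock-up assuming a hypothetical HTTP endpoint and payload format – it is not the real RoboEarth Cloud Engine API:

```python
import json
import urllib.request

# Hypothetical endpoint; the real RoboEarth interface differs.
CLOUD_ENDPOINT = "http://cloud.example.com/tasks/mapping"

def offload_mapping(scan_points, timeout=2.0):
    """Send a laser scan to the cloud for heavy map-building work.

    Falls back to a coarse on-board estimate if the link is down,
    since a mobile robot cannot assume constant connectivity.
    """
    payload = json.dumps({"scan": scan_points}).encode()
    req = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)  # detailed map computed in the cloud
    except OSError:
        # Network failure: degrade gracefully to a local estimate.
        return {"map": "coarse-local", "points": len(scan_points)}

result = offload_mapping([(0.1, 1.2), (0.2, 1.1)])
```

The key design point is the fallback path: offloading buys computational power, but the robot must still behave sensibly when the wireless link drops.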
Mohanarajah Gajamohan, from the Swiss Federal Institute of Technology (ETH Zurich) and Technical Lead of the project: "The RoboEarth Cloud Engine is particularly useful for mobile robots, such as drones or autonomous cars, which require lots of computation for navigation. It also offers significant benefits for robot co-workers, such as factory robots working alongside humans, which require large knowledge databases, and for the deployment of robot teams."
Dr. Heico Sandee, RoboEarth's Program Manager at Eindhoven University of Technology in the Netherlands: "On-board computation reduces mobility and increases cost. With the rapid increase in wireless data rates caused by the booming demand of mobile communications devices, more and more of a robot's computational tasks can be moved into the cloud."
Impact on jobs
While high-tech companies that rely heavily on data centers have been criticised for creating fewer jobs than traditional companies (e.g. Google and Facebook employ fewer than half the number of workers of General Electric or Hewlett-Packard per dollar of revenue), the researchers don't believe that this new robotics platform should be cause for alarm. According to a recent study by the International Federation of Robotics and Metra Martech, entitled "Positive Impact of Industrial Robots on Employment," robots don't kill jobs but rather tend to lead to an overall growth in jobs. Whether this trend holds true in the coming decades, however, remains to be seen.
Peter Diamandis is the founder and chairman of X PRIZE Foundation, co-founder and chairman of Singularity University and the co-author of Abundance: The Future Is Better Than You Think. He is also co-founder of the asteroid mining company, Planetary Resources. In this video, he discusses the future of humans evolving into meta-intelligence group-minds and invites participants to the second international Global Future 2045 congress (June 2013).
Boston Dynamics, creators of BigDog, have released a new video. The robot is shown handling heavy objects, using the strength of its legs and torso to power the motions of a front arm attachment. The main purpose of this extra limb is to help soldiers pick up and carry heavy loads, but there's much potential for other uses. Chris Melhuish, director of Bristol Robotics Laboratory, told the BBC: "I think the potential is enormous: from pets, to robots that are going to help you move your shopping, to robots on building sites that move bricks from one place to another, following a bricklayer around."
The swift turning flight and aerodynamics of bats offer amazing possibilities for the design of small aircraft, among other applications. By building a robotic bat wing, researchers at Brown University have uncovered the flight secrets of real bats: the function of ligaments, the elasticity of skin, structural support of musculature, skeletal flexibility, upstroke and downstroke.
Tests showed the machine can match the basic flight parameters of bats, producing enough thrust to overcome drag and enough lift to carry the weight of the model species.
A paper describing the robot and presenting results from preliminary experiments is published in the journal Bioinspiration and Biomimetics. The work was done in labs of Brown professors Kenneth Breuer and Sharon Swartz, who are the senior authors on the paper. Breuer, an engineer, and Swartz, a biologist, have studied bat flight and anatomy for years.
The faux flapper generates data that could never be collected directly from live animals, said Joseph Bahlman, a graduate student at Brown who led the project. Bats can’t fly when connected to instruments that record aerodynamic forces directly, so that isn’t an option — and bats don’t take requests.
"We can’t ask a bat to flap at a frequency of eight hertz then raise it to nine hertz so we can see what difference that makes," Bahlman said. "They don’t really cooperate that way."
But the model does exactly what the researchers want it to do. They can control each of its movement capabilities — kinematic parameters — individually. That way, they can adjust one parameter while keeping the rest constant to isolate the effects. This data will not only give new insights into the mechanics of bat flight, it could aid the design of small flapping aircraft or "micro aerial vehicles".
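This one-parameter-at-a-time approach can be sketched as a simple sweep. The aerodynamic relation below is a toy stand-in for real force-transducer measurements, used only to show the experimental structure:

```python
def trial(frequency_hz, amplitude_deg=60, fold_fraction=0.5):
    """Crude stand-in for one wind-tunnel trial: returns mean lift.

    Real lift would come from force-transducer readings; this toy
    relation (lift rising with frequency and amplitude) only serves
    to illustrate the sweep.
    """
    return 0.02 * frequency_hz * amplitude_deg * (1 - 0.3 * fold_fraction)

# Hold every kinematic parameter fixed except flapping frequency,
# then vary the frequency across trials to isolate its effect.
baseline = {"amplitude_deg": 60, "fold_fraction": 0.5}
sweep = {f: trial(f, **baseline) for f in range(5, 11)}  # 5 Hz .. 10 Hz

best_freq = max(sweep, key=sweep.get)
```

Repeating the same sweep for each parameter in turn – amplitude, wing folding, and so on – is what lets the researchers attribute changes in lift and thrust to a single cause.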
"The next step is to start playing with the materials," Bahlman said. "We’d like to try different wing materials, different amounts of flexibility on the bones, looking to see if there are beneficial trade-offs in these material properties."
The research was funded by the U.S. Air Force Office of Scientific Research and the National Science Foundation.
Last year, Google announced "Project Glass" – a research and development program which aims to prototype and build an augmented reality (AR) head-mounted display. The project's intended purpose was to allow hands-free displaying of information currently found on smartphones, while providing interaction with the Internet via natural language voice commands, in a manner similar to the iPhone application Siri.
Developers were given early access to the device for $1,500, with a consumer version expected in 2014. New details have now emerged on the company's website, including this video which shows the glasses in action. The search giant is offering trials of the product to "bold, creative individuals" and wants people to suggest ways in which they would make use of the headset.
New research from Indiana University has found that machine learning – the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems – can drastically improve both the cost and quality of health care in the United States.
Using an artificial intelligence framework, combining Markov Decision Processes and Dynamic Decision Networks, IU School of Informatics and Computing researchers Casey Bennett and Kris Hauser show how simulation modeling that understands and predicts the outcomes of treatment could reduce healthcare costs by over 50 percent while also improving patient outcomes by nearly 50 percent.
The work by Hauser, assistant professor of computer science, and PhD student Bennett improves upon their earlier work that showed how machine learning could determine the best treatment at a single point in time for an individual patient.
By using a new framework that employs sequential decision-making, the previous single-decision research can be expanded into models that simulate numerous alternative treatment paths out into the future; maintain beliefs about patient health status over time even when measurements are unavailable or uncertain; and continually plan/re-plan as new information becomes available. In other words, it can "think like a doctor."
"The Markov Decision Processes and Dynamic Decision Networks enable the system to deliberate about the future, considering all the different possible sequences of actions and effects in advance, even in cases where we are unsure of the effects," Bennett said.
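To illustrate the kind of forward-looking deliberation a Markov Decision Process enables, here is a toy example solved by value iteration. The states, treatments, probabilities and rewards are all invented for illustration – they are not from the Indiana University study:

```python
# A toy clinical MDP: in each health state, pick the treatment that
# maximises expected long-term outcome minus treatment cost.
STATES = ["sick", "improving", "well"]
ACTIONS = ["treat_a", "treat_b"]

# P[state][action] -> list of (next_state, probability)
P = {
    "sick":      {"treat_a": [("improving", 0.6), ("sick", 0.4)],
                  "treat_b": [("improving", 0.3), ("sick", 0.7)]},
    "improving": {"treat_a": [("well", 0.5), ("improving", 0.5)],
                  "treat_b": [("well", 0.7), ("improving", 0.3)]},
    "well":      {"treat_a": [("well", 1.0)],
                  "treat_b": [("well", 1.0)]},
}
REWARD = {"sick": -1.0, "improving": 0.0, "well": 1.0}
COST = {"treat_a": 0.2, "treat_b": 0.1}
GAMMA = 0.9  # discount factor: how much future outcomes matter

def value_iteration(iters=100):
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        # Bellman update: best action's expected discounted value.
        v = {s: max(REWARD[s] - COST[a] +
                    GAMMA * sum(p * v[s2] for s2, p in P[s][a])
                    for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS,
                     key=lambda a: REWARD[s] - COST[a] +
                     GAMMA * sum(p * v[s2] for s2, p in P[s][a]))
              for s in STATES}
    return v, policy

values, policy = value_iteration()
```

Even in this tiny model, the planner reaches a non-obvious conclusion: it pays for the expensive, more effective treatment while the patient is sick, then switches to the cheaper option once the patient is improving – exactly the kind of sequential trade-off a single-decision model cannot capture.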
Moreover, the approach is non-disease-specific – it could work for any diagnosis or disorder – simply by plugging in the relevant information.
The new work addresses three vexing issues related to health care in the U.S.:
Rising costs, expected to reach 30 percent of GDP by 2050;
Quality of care, where patients receive correct diagnosis and treatment less than half the time on a first visit;
Lag time of 13 to 17 years between research and practice in clinical care.
"We're using modern computational approaches to learn from clinical data and develop complex plans through the simulation of numerous, alternative sequential decision paths," Bennett said. "The framework here easily out-performs the current treatment-as-usual, case-rate/fee-for-service models of health care."
Bennett is also a data architect and research fellow with Centerstone Research Institute, the research arm of Centerstone, the nation's largest not-for-profit provider of community-based behavioral health care. The two researchers had access to clinical data, demographics and other information on over 6,700 patients who had major clinical depression diagnoses, of which about 65 to 70 percent had co-occurring chronic physical disorders like diabetes, hypertension and cardiovascular disease.
Using 500 randomly selected patients from that group for simulations, the two compared actual doctor performance and patient outcomes against sequential decision-making models, all using real patient data. They found great disparity in the cost per unit of outcome change when the artificial intelligence model's cost of $189 was compared to the treatment-as-usual cost of $497.
"This was at the same time that the AI approach obtained a 30 to 35 percent increase in patient outcomes," Bennett said. "And we determined that tweaking certain model parameters could enhance the outcome advantage to about 50 percent more improvement at about half the cost."
While most medical decisions are based on case-by-case, experience-based approaches, there is a growing body of evidence that complex treatment decisions might best be handled through modeling rather than intuition alone.
"Modeling lets us see more possibilities out to a further point, which is something that is hard for a doctor to do," Hauser said. "They just don't have all of that information available to them."
Using the growing availability of electronic health records, health information exchanges, large public biomedical databases and machine learning algorithms, the researchers believe the approach could serve as the basis for personalised treatment through integration of diverse, large-scale data passed along to clinicians at the time of decision-making for each patient. Centerstone alone, Bennett noted, has access to health information on over 1 million patients each year.
"Even with the development of new AI techniques that can approximate or even surpass human decision-making performance, we believe that the most effective long-term path could be combining artificial intelligence with human clinicians," Bennett said. "Let humans do what they do well, and let machines do what they do well. In the end, we may maximise the potential of both."
"Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach" was published recently in Artificial Intelligence in Medicine. The research was funded by the Ayers Foundation, the Joe C. Davis Foundation and Indiana University.
NASA's Curiosity Mars rover has used its onboard drill to obtain the first deep rock sample ever retrieved from the surface of another planet.
The fresh hole, about 0.63" (1.6 cm) wide and 2.5" (6.4 cm) deep in a patch of fine-grained sedimentary bedrock, can be seen in images and other data beamed to Earth over the weekend. The rock is believed to hold evidence about long-gone wet environments. In pursuit of that evidence, the rover will use its laboratory instruments to analyse powder collected by the drill.
John Grunsfeld, NASA associate administrator for the agency's Science Mission Directorate: "The most advanced planetary robot ever designed is now a fully operating analytical laboratory on Mars. This is the biggest milestone accomplishment for the Curiosity team since the sky-crane landing last August, another proud day for America."
For the next several days, ground controllers will command the rover's arm to carry out a series of steps to process the sample, ultimately delivering portions to the instruments inside. Within the sample-handling device, the powder will be vibrated over a sieve that filters out any particles larger than six-thousandths of an inch (150 microns) across. Small portions of the sieved sample will fall through ports on the rover deck into the Chemistry and Mineralogy (CheMin) instrument and the Sample Analysis at Mars (SAM) instrument. These instruments then will begin the much-anticipated detailed analysis.
The rock Curiosity drilled is called "John Klein" in memory of a deputy project manager who died in 2011. Drilling for a sample is the latest new activity for NASA's Mars Science Laboratory Project, which is using the car-sized Curiosity rover to investigate whether Gale Crater once had conditions favourable for life.
RP-VITA, created by iRobot and InTouch Health, enables doctors to provide patient care from anywhere in the world via a telemedicine solution.
US technology firm iRobot Corp. has announced that its RP-VITA Remote Presence Robot has received 510(k) clearance from the U.S. Food and Drug Administration (FDA) for use in hospitals. RP-VITA is the first autonomous navigation remote presence robot to receive such authorisation.
This new machine is a joint effort between two industry leaders, iRobot and InTouch Health. The robot combines the latest in autonomous navigation and mobility technologies developed by iRobot with state-of-the-art telemedicine and electronic health record integration developed by InTouch Health. RP-VITA allows remote doctor-to-patient consults, ensuring that the physician is in the right place at the right time and has access to the necessary clinical information to take immediate action. The robot has unprecedented ease of use. It maps its own environment and uses an array of sophisticated sensors to autonomously move about a busy space without interfering with people or other objects. Using an intuitive iPad interface, a doctor can visit a patient, and communicate with hospital staff and patients with a single click, regardless of their location.
The FDA clearance specifies that RP-VITA can be used for active patient monitoring in pre-operative, peri-operative and post-surgical settings – including cardiovascular, neurological, prenatal, psychological and critical care assessments and examinations.
RP-VITA is being sold into the healthcare market by InTouch Health as its new flagship remote presence device. iRobot will continue to explore adjacent market opportunities for robots like RP-VITA and the iRobot Ava mobile robotics platform.
Colin Angle, chairman and CEO of iRobot: "FDA clearance of a robot that can move safely and independently through a fast-paced, chaotic and demanding hospital environment is a significant technological milestone for the robotics and healthcare industries. There are very few environments as difficult to maneuver as that of a busy ICU or emergency department. Having crossed this technology threshold, the potential for self-navigating robots in other markets, and for new applications, is virtually limitless."
Yulun Wang, chairman and CEO of InTouch Health: "Remote presence solutions have proven their worth in the medical arena for quite some time. RP-VITA has undergone stringent testing, and we are confident that the robot's ease of use and unique set of capabilities will enable new clinical applications and uses."
Every year, 2 million Americans – at least 1 in 20 patients – contract a hospital infection. Of these, over 100,000 will die from their illness. That is more deaths than breast cancer, AIDS/HIV and road fatalities combined, and the infections result in $30 billion of additional healthcare costs. One of the most dangerous pathogens, the highly resilient Clostridium difficile (C. diff), is able to survive for months on surfaces. With antibiotics, handwashing and traditional methods of disinfection proving to be increasingly ineffective, these "superbugs" have the potential to become a major crisis.
Thankfully, technology may come to the rescue once again, as a new machine has been developed that could revolutionise the hospital environment. Using pulses of ultraviolet (UV) light, a robot from Xenex Healthcare is 20 times more effective at eliminating bacteria than standard chemical cleaning. It can treat rooms in just five minutes, plus it has less environmental impact than chemical cleaning, with its discarded plastic containers and heavy use of disinfectants. A motion detection system and door guard ensure the safety of patients, visitors and staff.
Although expensive, at $82,000 each, the machine is now being used by a growing number of hospitals. The first to do so was Cooley Dickinson Hospital, Massachusetts, which has since witnessed an 82 percent drop in C. diff infections. Last month, Stamford Hospital became the first and only hospital in Connecticut to use the device.
Cyberpunk 2077 is a role-playing video game, based on the Cyberpunk series of pen-and-paper games. It is being produced by CD Projekt RED, developers of The Witcher and The Witcher 2: Assassins of Kings. Cyberpunk 2077 will feature a decadent futuristic world, in which ultra-modern technology co-exists with a degenerated human society.
The game aims to be a mature, ambitious title with character customisation being strongly tied to the plot. It will have a non-linear story with different character classes. The developers say that it won't be ready until 2015, but a trailer was released this week, which you can see below. For more info, visit the official website. If you're into sci-fi, check out the Fictional Future section of our forum.
For a number of years now, Google has been leading the way in self-driving, autonomous car technology. However, car makers Toyota and Audi are now developing the vehicles themselves, independently of the Internet search giant.
Both companies have confirmed that they will demonstrate self-driving systems at the Consumer Electronics Show (CES), the biggest technology trade show of the year, which begins on 8th January. Toyota released a brief, five-second teaser clip this week, showing its prototype Lexus LS 600h. This is apparently codenamed the AASRV (Advanced Active Safety Research Vehicle) and will "lead the industry into a new automated era."
As you can see in the video below, it appears very similar to a Google self-driving Prius – but as mentioned, Toyota has developed this model entirely independently, with no partnership involved. In addition to the vehicle itself, they will also discuss the state of Intelligent Transport Systems (ITS) research and development, which includes vehicle-to-vehicle and vehicle-to-infrastructure communications technology. This is expected to be fairly widespread by 2019 and could massively reduce the number of casualties on the roads.
As for Audi, there is no video available. However, a spokesperson has stated that its car will include a feature allowing it to find a parking space and park without a driver behind the wheel.
Thanks largely to Google's lobbying efforts, new laws were introduced last year – in California and Nevada – to make self-driving vehicles a reality. It's clear that this technology is moving forward and could soon enter the mainstream. In our recent poll, 70% of readers said they would feel safe riding in a computer-controlled car.
The Artificial Intelligence Laboratory at the University of Zurich is aiming to develop one of the most advanced humanoid robots in the world. Furthermore, they intend to achieve this goal within just nine months.
Roboy was a project that began in May 2012, so the machine is already nearing completion. In March 2013, it will be revealed to the public at the Robots on Tour exhibition, to celebrate the laboratory's 25th anniversary.
There are 15 project partners and over 40 engineers and scientists working on Roboy. He is described as a "soft robot" – and a more advanced version of his older brother Ecce. Thanks to his construction as a tendon-driven robot modelled on human beings ("normal" robots use motors in their joints), he will move almost as elegantly as a real person. The team is even designing a tricycle for him to carry heavy objects more easily. Furthermore, at a later stage in the project, he will be covered in "soft skin", so that interacting with him becomes safer and more pleasant.
Creating humanoid robots is a great challenge for researchers. Elements such as quick, smooth movements or durable yet flexible and soft skin are difficult to recreate. Fundamental new discoveries are needed for this purpose. It is precisely through projects like Roboy that innovation is possible. The findings from his predecessor "Ecce" are being studied and evaluated, leading to design improvements and new materials. A robotics platform is being created to investigate and further develop the principles of tendon-driven drive technology.
Another innovative aspect of Roboy is the way he is financed, through sponsorship and crowd funding. Those supporting the project benefit not only from direct access to the expertise involved, but also brand recognition: their names or company logos are engraved onto the robot.
Service robots are already being used today for household chores, surveillance work and cleaning, but also in hospitals and care homes. Our aging population means that in the future, caring for older people will be a vital area for the deployment of these machines. We can very safely assume that service robots will be widespread in the future, perhaps even as commonplace as everyday objects like cars and computers are today.
The laboratory has now released a teaser video. For more information, visit Roboy's official website. You can also follow his progress on Facebook.
For the past 10 years, the Defense Advanced Research Projects Agency (DARPA) has been developing a series of new military robots. One of these – a four-legged machine known as the Legged Squad Support System (LS3) – recently showed off its capabilities during field testing.
The LS3 is designed to accompany soldiers over terrain too difficult for conventional vehicles. It will carry up to 400 lbs of gear and enough fuel for missions covering 20 miles and lasting 24 hours. Due to enter service in 2014, it should greatly reduce the burden of equipment on soldiers. This latest version, seen in the video below, features new advances in control, stability and maneuverability – including "Leader Follow" decision making, enhanced roll recovery, exact foot placement over rough terrain, the ability to maneuver in an urban environment, and verbal command capability.
The robot would also be able to maneuver at night and serve as a mobile auxiliary power source to the squad, so troops could recharge batteries for radios and handheld devices while on patrol. The DARPA platform developer for the LS3 is engineering company Boston Dynamics, whose other work includes PETMAN and the Cheetah robot.
Researchers in Pittsburgh have developed a brain-computer interface (BCI) allowing a woman with quadriplegia to maneuver a robotic arm. Using just her thoughts alone, Jan Scheuermann was able to "high five" someone, grasp and move objects of different shapes and sizes, and feed herself chocolate.
Photo credit: UPMC
In a study published by The Lancet, the team describes how the BCI technology and training programs allowed Ms. Scheuermann, 53, to intentionally move an arm, turn and bend a wrist, and close a hand for the first time in nine years.
Senior investigator Andrew Schwartz, Professor at the Department of Neurobiology, Pittsburgh School of Medicine: "This is a spectacular leap toward greater function and independence for people who are unable to move their own arms. This technology, which interprets brain signals to guide a robot arm, has enormous potential that we are continuing to explore. Our study has shown us that it is technically feasible to restore ability; the participants have told us that BCI gives them hope for the future."
The researchers placed two quarter-inch square electrode grids with 96 contact points each in the regions of Ms. Scheuermann’s brain that would normally control right arm and hand movement. The electrode grids detect signals from individual neurons and then computer algorithms are used to identify the firing patterns associated with particular observed or imagined movements. That intent to move is then translated into actual movement of the robotic arm.
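One classic way of translating firing rates into movement intent is the population-vector approach, sketched below. The real Pittsburgh decoder is considerably more sophisticated, and the tuning directions and rates here are invented for illustration:

```python
# Minimal population-vector decoder: each recorded unit fires more
# when movement is intended along its "preferred direction".
preferred = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # unit vectors, x-y plane
baseline = 10.0  # spikes/s when no movement is intended

def decode(rates):
    """Estimate 2-D movement intent from per-unit firing rates.

    Each unit 'votes' along its preferred direction, weighted by how
    far its rate sits above or below baseline.
    """
    vx = sum((r - baseline) * d[0] for r, d in zip(rates, preferred))
    vy = sum((r - baseline) * d[1] for r, d in zip(rates, preferred))
    return vx, vy

# Units tuned to +x fire above baseline, units tuned to -x below it:
# the decoded intent points in the +x direction.
vx, vy = decode([18.0, 10.0, 4.0, 10.0])
```

With 96 contacts per grid rather than four toy units, the same principle – fitting each neuron's tuning during observed or imagined movements, then combining their weighted votes – yields a continuous control signal for the robotic arm.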
In a separate study, researchers also continue to study BCI technology that uses an electrocorticography (ECoG) grid, which sits on the surface of the brain, rather than penetrating the tissue as in the case of the grids used for Ms. Scheuermann.
Senior investigator Michael Boninger, M.D.: "We are learning so much about how the brain controls motor activity, thanks to the hard work and dedication of our trial participants. Perhaps in five to 10 years, we will have a device that can be used in the day-to-day lives of people who are not able to use their own arms."
Photo credit: UPMC
The next step for BCI technology will likely use a two-way electrode system that not only captures the intention to move, but also stimulates the brain to generate sensation – potentially allowing a user to adjust grip strength, to firmly grasp a doorknob or gently cradle an egg, for example.
After that, "we're hoping this can become a fully implanted, wireless system that people can actually use in their homes without our supervision," said Jennifer Collinger, Ph.D., assistant professor. "It might even be possible to combine brain control with a device that directly stimulates muscles, to restore movement of the individual's own limb."
For now, Ms. Scheuermann is expected to continue putting the BCI technology through its paces for two more months, then the implants will be removed in another operation.
"This is the ride of my life," she said. "This is the rollercoaster. This is skydiving. It's just fabulous, and I'm enjoying every second of it."
At the University of Colorado Boulder, Assistant Professor Nikolaus Correll likes to think in multiples. If one robot can accomplish a single task, think how much more could be accomplished if you had hundreds, thousands, or even millions of them.
Correll and his computer science research team – including research associate Dustin Reishus and professional research assistant Nick Farrow – have developed a basic robotic building block, which he hopes to reproduce in large quantities to develop increasingly complex systems.
Recently, the team created a swarm of 20 robots, each the size of a ping-pong ball, which they call "droplets." When the droplets swarm together, they form what Correll describes as a "liquid that thinks."
To accelerate the pace of innovation, he has created a lab where students can explore and develop new applications of robotics with basic, inexpensive tools.
Similar to the fictional "nanomorphs" depicted in the "Terminator" films, large swarms of intelligent robotic devices could be used for a range of tasks. Swarms of robots could be unleashed to contain an oil spill, or to self-assemble into a piece of hardware after being launched separately into space, Correll said.
He plans to use the droplets to demonstrate self-assembly and swarm-intelligent behaviors such as pattern recognition, sensor-based motion and adaptive shape change. These behaviors could then be transferred to large swarms for water- or air-based tasks.
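To give a flavour of the swarm-intelligent behaviours described, here is a toy aggregation rule: each simulated "droplet" drifts toward the centroid of the neighbours it can sense locally, so the swarm contracts into clusters. The rule and all parameters are hypothetical sketches, not Correll's actual control code:

```python
import numpy as np

def aggregate_step(positions, neighbour_radius=4.0, gain=0.1):
    """One step of a toy swarm-aggregation rule: each droplet moves a
    little toward the centroid of the neighbours within sensing range."""
    new = positions.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        nearby = positions[dists < neighbour_radius]  # includes the droplet itself
        centroid = nearby.mean(axis=0)
        new[i] = p + gain * (centroid - p)            # drift toward local centroid
    return new

rng = np.random.default_rng(1)
swarm = rng.uniform(-5, 5, size=(20, 2))              # 20 droplets scattered on a plane
spread_before = swarm.std()
for _ in range(100):
    swarm = aggregate_step(swarm)
spread_after = swarm.std()                            # the swarm has contracted
```

Each droplet acts only on local information – no robot knows the positions of the whole swarm – yet a global pattern (clustering) emerges, which is the essence of the self-assembly behaviours described above.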
Correll hopes to create a design methodology for aggregating the droplets into more complex behaviors – such as assembling parts of a large space telescope or an aircraft.
Correll has received the National Science Foundation's Faculty Early Career Development award known as "CAREER." In addition, he has received support from NSF's Early Concept Grants for Exploratory Research program, as well as NASA and the U.S. Air Force.
He is also continuing work on robotic garden technology he developed at the Massachusetts Institute of Technology in 2009. Correll has been working with Joseph Tanner in CU-Boulder's aerospace engineering sciences department to further develop the technology – autonomous sensors and robots that can tend gardens – in conjunction with a model of a long-term space habitat being built by students.
Correll says there is virtually no limit to what might be created through distributed intelligence systems: "Every living organism is made from a swarm of collaborating cells," he said. "Perhaps some day, our swarms will colonise space where they will assemble habitats and lush gardens for future space explorers."
This technology may bring potential dangers, however. Some futurists have warned of a Grey Goo scenario in which self-replicating, out-of-control nanobots consume the Earth.
Japan's latest android has been unveiled at the Humanoids Conference in Osaka.
Built by University of Tokyo researchers, Kenshiro is a musculo-skeletal humanoid robot that mimics one-quarter of the muscles found in humans. It has 160 pulley-like devices – 50 in the legs, 76 in the trunk, 12 in the shoulder and 22 in the neck – together with aluminium bones, including a spine, pelvis and rib cage. It has almost the same level of joint torque as a human.
Japan is a world leader in robotics and has produced some highly impressive models in recent years, such as the Geminoid. With androids becoming more and more human-like with each passing year, perhaps our 2150 prediction is looking a bit conservative now. What do you think? Let us know in the comments below. We're in the process of updating our 22nd century pages, so we'd appreciate your feedback.