Built by researchers in Germany, the latest generation of "Care-O-bot" is both cheaper and more versatile than its predecessors, offering a wide range of applications.
In recent years, there has been significant progress in robotics – both in terms of technological development and the number of these machines appearing in homes and workplaces. Examples include OSHbot (an automated retail assistant), RP-VITA (a telemedicine robot for hospitals), A.L.O. (a robotic butler, or "Botlr") and the latest model of ASIMO (arguably the most famous robot in the world). As they become cheaper, more versatile and ever more widespread, robots can fulfil an increasingly diverse range of activities. Within the next few decades, they are likely to become a common sight in countries around the world.
The Fraunhofer Institute in Germany has been developing service robots since the 1990s. One of their projects is the "Care-O-bot" – a two-armed, omnidirectional machine with autonomous navigation, object recognition and grasping abilities. It has gone through various design revisions since 1998 and the 4th generation has now been announced. Care-O-bot 4 is a major improvement on its predecessors. Described as "a universal helper for everyday scenarios", it is modular and can be adapted to a multitude of settings at a cost that makes it commercially viable – from airports and apartments to care homes, DIY stores, hospitals, hotels, museums, restaurants, security applications and warehouses.
"The fourth generation of the Care-O-bot is not only more agile, modular and charming than its predecessors, but it also stands out through the use of cost-reducing construction principles," explains Dr Ulrich Reiser, Project and Group Leader at Fraunhofer IPA.
Andreas Haug, co-founder and Managing Director of Phoenix Design, which partnered with Fraunhofer, comments: "Care-O-bot 4 is a successful symbiosis of design and engineering, as well as functionality and emotion, which quickly encourages user interaction."
All images credit: Fraunhofer IPA
Large parts of Care-O-bot 4's internal construction are made with folding sheet metal, which is economical to produce in small quantities. Its design is streamlined with a "head" section and two arms, meaning its form resembles a human being. However, the developers did not want its appearance to be overly human-like, as this would "encourage false expectations with regard to its capabilities", says Dr Reiser. It is just the robot's "internal values" that are human: it always maintains a respectful distance, shows what it has understood and what it intends to do, while also being able to make simple gestures and reflect emotions. As with its previous versions, social role models were used as a guiding vision in developing the design and functionality. While the concept for the Care-O-bot 3 was a more reserved, cautious butler, its successor is more friendly and dynamic.
Care-O-bot 4 also features a greater range of movements. Spherical joints around discreet pivot points on its neck and hips allow it to bend forward without losing its balance. Developers took inspiration from human anatomy, creating a moving part which shifts backwards when the robot bends over, ensuring balance is maintained when carrying a load in outstretched arms. It can make 360° rotations of its head and torso. An innovative one-finger hand with integrated sensors was developed in collaboration with manufacturing company Schunk.
Managing partner Henrik Schunk comments: "The Care-O-bot 4 represents a significant milestone in the mobile service robot industry, on account of its high degree of standardisation."
The Fraunhofer Institute was determined to ensure that Care-O-bot 4 is simple to use. The head is fitted with an easily accessible touchscreen and there is a microphone for speech recognition, along with cameras for personal and gesture recognition. Care-O-bot 4's spherical joints allow it to intuitively inform users what it is planning to do and what it has understood, including gestures such as nodding and shaking the head. A circle of LEDs on its torso and a laser pointer in the hand serve as information exchange points.
Care-O-bot 4 provides open software interfaces that make it easily expandable for developers. Ulrich Reiser is keen for as many scientists as possible to use the system developed in Stuttgart in order to steadily advance its possible areas of application: "The objective is to steadily grow the developer community that was established already around Care-O-bot 3," he explains. Numerous research institutions and universities around the world have already worked with Care-O-bot 3 and the new version 4 should follow suit.
The United Kingdom has joined a growing number of countries planning to allow driverless cars on roads. Yesterday, the government announced a review into highway regulations and maintenance checks in preparation for testing the new technology.
Credit: Department for Transport
A major review has confirmed the UK is uniquely positioned to develop driverless car technology. Up to now, the scope for testing driverless cars had been limited, but yesterday the industry was given the green light for testing on public roads. The UK's regulatory environment now sets it apart as a premium location for developing the new technology, with tremendous potential for reducing accidents and making traffic flow more smoothly.
"Driverless cars are the future," said Transport Minister Claire Perry. "I want Britain to be at the forefront of this exciting new development, to embrace a technology that could transform our roads and open up a brand new route for global investment. These are still early days, but today is an important step. The trials present a fantastic opportunity for this country to take a lead internationally in the development of this new technology."
Credit: Department for Transport
Business Secretary Vince Cable: "The UK is at the cutting edge of automotive technology – from the all-electric cars built in Sunderland, to the Formula 1 expertise in the Midlands. It's important for jobs, growth and society that we keep at the forefront of innovation – that's why I launched a competition to research and develop driverless cars. The projects we are now funding in Greenwich, Bristol, Milton Keynes and Coventry will help to ensure we are world-leaders in this field and able to benefit from what is expected to be a £900 billion industry by 2025.
"The government's industrial strategy is backing the automotive sector as it goes from strength to strength. We are giving business the confidence to invest over the long term and developing cutting-edge technology that will create high skilled jobs."
To mark the launch of the review, Vince Cable joined Claire Perry in Greenwich, home to one of the projects benefiting from £19 million of government funding for driverless car trials. They witnessed the first official testing of the fully autonomous Meridian shuttle in Greenwich and unveiled a prototype of a driverless pod that will be tested in public areas in Milton Keynes. They were also shown other autonomous vehicles involved in the trials, including a BAE Wildcat vehicle that is the result of years of advanced research and development by BAE Systems and will be tested in Bristol.
Credit: Department for Transport
The Department for Transport review, conducted over the past 6 months, considered the best and safest ways to trial automated vehicles where an individual is ready to take control of the car if necessary. It also looked further ahead to the implications of testing fully automated vehicles. The review provides legal clarity to encourage UK and international industry to invest in this technology and encourages the largest global businesses to come to the UK to develop and test new models.
The next step is for the government to introduce a code of practice to provide industry with the framework they need to trial cars in real-life scenarios, and to create more sophisticated versions of the models that already exist. This code of practice is scheduled for publication in spring 2015, with the first driverless cars supported by the prize fund expected to be tested on roads by the summer.
China is driving explosive growth in the robotics industry that is likely to continue for many years to come, according to a new report.
By 2017, more industrial robots will be operating in China's production plants than in the European Union or North America. Operating unit numbers there are forecast to more than double, from 182,000 to almost 428,000. For comparison, North America had 237,000 at the end of 2014, a number that will increase to about 291,000. That's according to the International Federation of Robotics (IFR) in their latest World Robot Statistics.
China is already the world's largest market for industrial robots when measured by annual sales, with 50,000 units shipped in 2014 – compared to 46,000 for the whole of Europe and 31,500 for North America. With vast numbers of robots being added to its factories each year, it will soon catch up in terms of operational stock numbers too.
Given China's still very low robotic density, alongside its very high human population, the market's future growth potential is enormous. The nation currently has only 30 industrial robots per 10,000 employees in manufacturing industries. For comparison, Germany's robotic density is ten times greater and in Japan the figure is 11 times greater. In North America, robotic density is five times higher than in China, where the majority of industrial robots are used for handling operations and for welding. The automotive industry is by far the largest sector to use robotics (approx. 40%).
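The density comparison above is simple arithmetic; as a rough sketch (the absolute figures for Germany, Japan and North America below are derived from the quoted multiples, not stated directly in the report):

```python
# Industrial-robot density = operational robots per 10,000 manufacturing employees.
CHINA_DENSITY = 30  # robots per 10,000 employees, per the IFR figures quoted

# Multiples of China's density quoted for the other regions.
multiples = {"Germany": 10, "Japan": 11, "North America": 5}

for region, factor in multiples.items():
    print(f"{region}: ~{CHINA_DENSITY * factor} robots per 10,000 employees")
```

This puts Germany at roughly 300 and Japan at roughly 330 robots per 10,000 manufacturing employees – an indication of how much headroom the Chinese market still has.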
"The automation of China's production plants has just started", says Per Vegard Nerseth, Managing Director of ABB Robotics. "As the first foreign robot manufacturer to arrive here, we have observed the market and developments for years now. We have witnessed swift, almost explosive growth over the last two or three years, surpassing even our expectations."
The Chinese government is simultaneously pushing forward with robotic research, partnering with leading foreign robotic manufacturers.
"Companies are forced to invest ever more in robots to be more productive and raise quality," says Gudrun Litzenberger, general secretary of the Frankfurt-based IFR.
NASA's Jet Propulsion Laboratory has proposed a Mars helicopter drone that could scout ahead of rovers and provide operators with a much better view of the surrounding Martian terrain.
Getting around on Mars is tricky business. Each NASA rover has delivered a wealth of information about the history and composition of the Red Planet, but a rover's vision is limited by the view of its on-board cameras, and images from spacecraft orbiting Mars are the only other clues to where to drive it. To have a better sense of where to go and what's worth studying, it could be useful to have a low-flying scout.
Enter the Mars Helicopter, a proposed add-on to Mars rovers of the future that could potentially triple the distance these vehicles currently drive in a Martian day, and deliver a new level of visual information for choosing which sites to explore. This drone would fly ahead of the rover almost every day, checking out various points of interest and helping engineers back on Earth plan the best possible driving route.
Scientists could also use the helicopter images to look for features for the rover to study in further detail. Another part of the drone's job would be to check out the best places for the rover to collect key samples and rocks for a cache, which a next-generation rover could pick up later.
The vehicle is envisioned to weigh 2.2 pounds (1 kg) and measure 3.6 feet (1.1 m) across from the tip of one blade to the other. The prototype body looks like a medium-size cubic tissue box. The current design is a proof-of-concept demonstration that has been tested at NASA's Jet Propulsion Laboratory in California.
A robot with advanced, non-invasive sensors and mobility could provide fast, accurate and objective data on the state of farms and vineyards.
French, German, Italian and Spanish universities and companies are developing an unmanned robot to assist with agriculture and wine production. Equipped with non-invasive advanced sensors and artificial intelligence systems, this machine will provide fast, reliable and objective information on the state of vineyards to grape growers – such as vegetative development, water status, production and grape composition.
The robot is part of the European project VineRobot, whose partners met recently at the Universitat Politècnica de València (UPV). The major advantages of this new technology are the large quantity of automatically obtained data, which any user can interpret easily since it is presented on simple maps, and the wireless transmission of that information from the smallholding.
"Robotics and precision agriculture provide producers with powerful tools in order to improve the competitiveness of their farms," says Javier Tardaguila, project manager and researcher at the University of La Rioja, Spain. "Robots like the one we are developing within this project will not substitute the vine grower, but will facilitate their work, so they can avoid the hardest part in field. It has several advantages, including the ability to predict grape production or its degree of ripeness in order to immediately assess its quality without touching it."
An additional benefit, explains Rovira, is the attractiveness of this new technology for young farmers, "as the high average age of farmers is a recurring matter of concern in industrialised countries."
During the project meeting held at the UPV, the researchers presented their first prototype, which they have been working on for a year. This includes a basic safety circuit with emergency switches and a bumper to stop the robot at any obstacle. The initial work has focused on two main areas: mobility in the field, improving the suspension and traction systems in order to climb up slopes with weeds; and the development of the various sensors.
The challenges for the next year are threefold: to give the robot enough autonomy to drive safely between vineyard rows using stereoscopic vision; to integrate a side camera providing information about the vegetation status of the plants and any visible grape bunches; and to couple the sensors to the robot.
The project will be completed in 2016, by which time hi-tech machines like these are expected to begin appearing on farms, as the technology enters the mainstream. In subsequent decades, the world faces a major challenge in terms of food and water production. Wine industries in particular will be severely affected by 2050, due to climate change. These fast, accurate and intelligent machines could go some way towards mitigating the impacts.
This week, DARPA revealed upgrades to its bipedal humanoid ATLAS robot. The machine was redesigned for DARPA by Boston Dynamics, with the goal of improving power efficiency to better support battery operation. Approximately 75 percent of the robot was rebuilt; only the lower legs and feet were carried over from the original design. In the future, ATLAS could assist emergency services in search and rescue operations, performing tasks such as shutting off valves, opening doors and operating powered equipment in environments where humans could not survive.
In addition to improved power and the ability to function without a power cord, other upgrades to ATLAS include:
• Repositioned shoulders and arms allow for increased workspace in front of the robot and let the robot view its hands in motion, thus providing additional sensor feedback to the operator.
• New electrically actuated lower arms will increase strength and dexterity and improve force sensing.
• The addition of an extra degree of freedom in the wrist means the robot will be able to turn a door handle simply by rotating its wrist as opposed to moving its entire arm.
• Three onboard computers are used for perception and task planning, and a wireless router in the head enables untethered communication.
• Re-sized actuators in the hip, knee, and back give the robot greater strength.
• A wireless emergency stop allows for safe operation.
• As a result of its new pump, ATLAS is much, much quieter than before.
The upgraded robot will be used by up to seven teams competing in the DARPA Robotics Challenge Finals, which take place on 5th and 6th June 2015 at Fairplex in Pomona, California. Admission to the event is free and open to the public. For more information see http://www.theroboticschallenge.org.
DARPA aims to give small unmanned aerial vehicles advanced perception and autonomy to rapidly search buildings or other cluttered environments without teleoperation.
Micro aerial vehicles based on insects and birds are likely to enter military use in the next few years. The US agency DARPA is planning a new generation of small, fast, agile flying vehicles – able to quickly navigate a maze of rooms, stairways and corridors or other obstacle-filled environments without a remote pilot.
Military teams patrolling dangerous urban locations overseas and rescue teams responding to disasters like earthquakes or floods currently rely on remotely piloted unmanned aerial vehicles to provide a bird’s-eye view of the situation and spot threats that can’t be seen from the ground. But to know what’s going on inside an unstable building or a threatening indoor space often requires physical entry, which can put troops or civilian response teams in danger.
To address these challenges, DARPA has issued a Broad Agency Announcement for its Fast Lightweight Autonomy (FLA) program. This will focus on creating a new class of algorithms, enabling the development of autonomous drones small enough to fit through an open window and fly at speeds up to 20 metres per second (45mph) – while navigating complex indoor spaces, independent of communication with outside operators or sensors and without reliance on GPS waypoints.
“Birds of prey and flying insects exhibit the kinds of capabilities we want for small UAVs,” says Mark Micire, program manager. “Goshawks, for example, can fly very fast through a dense forest without smacking into a tree. Many insects, too, can dart and hover with incredible speed and precision. The goal of the FLA program is to explore non-traditional perception and autonomy methods that would give small UAVs the capacity to perform in a similar way, including an ability to easily navigate tight spaces at high speed and quickly recognise if it had already been in a room before.”
If successful, the algorithms developed in the program could enhance unmanned system capabilities by reducing the amount of processing power, communications, and human intervention needed for low-level tasks, such as navigation around obstacles in a cluttered environment. The initial focus is on UAVs, but advances made through the FLA program could potentially be applied to ground, marine and underwater systems, which could be especially useful in GPS-degraded or denied environments.
“Urban and disaster relief operations would be obvious key beneficiaries, but applications for this technology could extend to a wide variety of missions using small and large unmanned systems linked together with manned platforms as a system of systems,” says Stefanie Tompkins, director of DARPA’s Defense Sciences Office. “By enabling unmanned systems to learn ‘muscle memory’ and perception for basic tasks like avoiding obstacles, it would relieve overload and stress on human operators so they can focus on supervising the systems and executing the larger mission.”
A Colorado man has become the first bilateral shoulder-level amputee to wear – and simultaneously control – two modular prosthetic limbs using his thoughts alone.
Image Credit: Johns Hopkins University Applied Physics Laboratory
A Colorado man has made history at the Johns Hopkins University Applied Physics Laboratory (APL), becoming the first bilateral shoulder-level amputee to wear and simultaneously control not one, but two Modular Prosthetic Limbs (MPL). Most importantly, Les Baugh – who lost both of his arms in an electrical accident forty years ago – was able to operate the system by simply thinking about moving his limbs, performing a variety of tasks during a short training period.
During two weeks of testing, Baugh took part in a research effort to further assess the usability of the MPL technology, developed over the past decade as part of the Revolutionising Prosthetics Program. Before putting the limb system through its paces, Baugh had to undergo a surgery at Johns Hopkins Hospital known as targeted muscle reinnervation.
“It’s a relatively new surgical procedure that reassigns nerves that once controlled the arm and the hand,” explained Johns Hopkins Trauma Surgeon Albert Chi, M.D. “By reassigning existing nerves, we can make it possible for people who have had upper-arm amputations to control their prosthetic devices by merely thinking about the action they want to perform.”
After recovery, Baugh visited the Laboratory for training on the use of the MPLs. First, he worked with researchers on the pattern recognition system.
Image Credit: Johns Hopkins University Applied Physics Laboratory
“We use pattern recognition algorithms to identify individual muscles that are contracting – how well they communicate with each other – and their amplitude and frequency,” Chi explained. “We take that information and translate that into actual movements within a prosthetic.”
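APL has not published the algorithm details, but the amplitude-and-frequency features Chi describes can be illustrated with a minimal sketch. The sampling rate, feature set and nearest-centroid classifier here are illustrative assumptions, not the actual MPL system:

```python
import numpy as np

def emg_features(window, fs=1000):
    """Amplitude and frequency features for one windowed EMG channel.

    Assumed feature set: root-mean-square amplitude plus the spectral
    centroid (a simple 'mean frequency' of the signal's spectrum).
    """
    rms = np.sqrt(np.mean(window ** 2))                      # amplitude
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mean_freq = np.sum(freqs * spectrum) / np.sum(spectrum)  # frequency
    return np.array([rms, mean_freq])

def classify(features, centroids):
    """Nearest-centroid pattern recognition: choose the trained movement
    whose average feature vector is closest to this window's features."""
    return min(centroids, key=lambda move: np.linalg.norm(features - centroids[move]))
```

In a real system, each reinnervated muscle site would contribute its own channel, the centroids would be learned during a calibration session, and the chosen movement class would be translated into motor commands for the prosthetic.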
Then Baugh was fitted for a custom socket for his torso and shoulders that supports the prosthetic limbs and also makes the neurological connections with the reinnervated nerves. While the socket got its finishing touches, the team had him work with the limb system through a Virtual Integration Environment (VIE) — a virtual-reality version of the MPL.
The VIE is completely interchangeable with the prosthetic limbs and through APL’s licensing process currently provides 19 groups in the research community with a low-cost means of testing brain–computer interfaces. It is used to test novel neural interface methods and study phantom limb pain, and serves as a portable training system.
By the time the socket was finished, Baugh said he was more than ready to get started. When he was fitted with the socket, and the prosthetic limbs were attached, he said, “I just went into a whole different world.” He moved several objects, including an empty cup from a counter-shelf height to a higher shelf, a task that required him to coordinate the control of eight separate motions to complete.
Image Credit: Johns Hopkins University Applied Physics Laboratory
“This task simulated activities that may commonly be faced in a day-to-day environment at home,” said Courtney Moran, a prosthetist who worked with Baugh. “This was significant because this is not possible with currently available prostheses. He was able to do this with only 10 days of training, which demonstrates the intuitive nature of the control.”
Moran said the research team was floored by what Baugh was able to accomplish.
“We expected him to exceed performance compared to what he might achieve with conventional systems, but the speed with which he learned motions and the number of motions he was able to control in such a short period of time was far beyond expectation,” she explained. “What really was amazing – and was another major milestone with MPL control – was his ability to control a combination of motions across both arms at the same time. This was a first for simultaneous bimanual control.”
Principal Investigator Michael McLoughlin: “I think we are just getting started. It’s like the early days of the Internet. There is just a tremendous amount of potential ahead of us, and we’ve just started down this road. And I think the next five to 10 years are going to bring phenomenal advancement.”
The next step, McLoughlin said, is to send Baugh home with a pair of limb systems so that he can see how they integrate with his everyday life.
Baugh is looking forward to that day: “Maybe for once I’ll be able to put change in the pop machine and get pop out of it,” he said. He’s looking forward to doing “simple things that most people don’t think of. And it’s re-available to me.”
Stanford University will lead a 100-year effort to study the long-term implications of artificial intelligence in all aspects of life.
Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how artificial intelligence (AI) will affect every aspect of how people work, live and play. This effort – called the One Hundred Year Study on Artificial Intelligence, or AI100 – has been initiated by computer scientist Eric Horvitz, a former president of the Association for the Advancement of Artificial Intelligence.
In 2009, Horvitz hosted a conference at which top researchers considered breakthroughs in AI and its influence on people and society. While the group concluded that the advances have been largely positive, their debate highlighted the need for longer-term studies of the implications. Now, along with Russ Altman, professor of bioengineering and computer science, Horvitz has formed a group that will begin a series of periodic studies on how AI will affect automation, democracy, ethics, law, national security, privacy, psychology and other issues. These subjects are outlined in a white paper.
"Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life," said John Hennessy, President of Stanford University, who helped initiate the project. "Given Stanford's pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children's children."
Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. The seven researchers will together form the first AI100 standing committee. It and subsequent committees will identify the most compelling topics in AI at any given time, and convene a panel of experts to study and report on these issues. Horvitz envisions this process repeating itself every several years, as new topics are chosen and the horizon of AI technology is scouted.
"I'm very optimistic about the future and see great value ahead for humanity, with advances in systems that can perceive, learn and reason," explains Horvitz, who is launching AI100 as a private philanthropic initiative. "However, it is difficult to anticipate all of the opportunities and issues, so we need to create an enduring process."
Altman, who studied computer science and medicine with Horvitz at Stanford during the late 1980s, said a university is the best place to nurture such a long-term effort: "If your goal is to create a process that looks ahead 30 to 50 to 70 years, it's not altogether clear what artificial intelligence will mean, or how you would study it," he said. "But it's a pretty good bet that Stanford will be around, and that whatever is important at the time, the university will be involved in it."
AI100 is funded by a gift from Eric and his wife Mary Horvitz. They envision that the program, with its century-long chain of committees, study panels and growing digital archive, will remain a centre of vigilance as the future unfolds: "We're excited about kicking off a hundred years of observation and thinking about the influences of AI on people and society. It's our hope that the study, with its extended memory and long gaze, will provide important insights and guidance over the next century and beyond," said Horvitz.
Long-term thinking will be vital if humanity is to survive and prosper in the future. More and more people are now recognising its importance as demonstrated by efforts such as the Long Now Foundation, Singularity University, the 100 Year Starship project, the climate projections of the IPCC and indeed this website, Future Timeline. The group of scientists who will join Horvitz and Altman in forming the first AI100 committee – and their comments – are listed below.
Barbara Grosz
Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems
"I'm excited about the potential for AI100 to focus attention on ways to design AI to work with and for people. We can shift the discussion about the societal impact of AI from the extremes to positions that take into account the nuances of societal values, human cognitive capacities and actual AI capabilities."
Alan Mackworth
Professor of computer science at the University of British Columbia and the Canada Research Chair in Artificial Intelligence, who created the world's first soccer-playing robot
"This study will provide a forum for us to consider critical issues in the design and use of AI systems, including their economic and social impact."
Tom Mitchell
Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web
"We won't be putting the genie back in the bottle. AI technology is progressing along so many directions and progress is being driven by so many different organisations that it is bound to continue. AI100 is an innovative and far-sighted response to this trend – an opportunity for us as a society to determine the path of our future and not to simply let it unfold unawares."
Deirdre K. Mulligan
Lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy
"The 100-year study provides an intellectual and practical home for the long-term interdisciplinary research necessary to document, understand and shape AI to support human flourishing and democratic ideals."
Yoav Shoham
Professor of computer science at Stanford, who seeks to incorporate common sense into AI
"The complexities of the field have tended to give rise to uninformed and misguided perceptions and commentaries. This long-term study will help create a more accurate and nuanced view of AI."
A new AI software program developed by researchers at Google and Stanford University can recognise objects in photos and videos at near-human levels of understanding.
It was only recently that computer systems became smart enough to identify unknown objects in photographs. Even then, recognition was generally limited to individual objects. Now, two separate teams of researchers at Google and Stanford University have created software able to describe entire scenes. This could lead to much better and more intelligent algorithms in the future.
While each team used a slightly different approach, they both combined deep convolutional neural networks with recurrent neural networks that excel at text analysis and natural language processing. The programs were able to "learn" from each new interaction, with algorithms enabling the system to improve its accuracy by scanning scene after scene, looking for patterns, and then using the accumulation of previously described scenes to extrapolate what is being depicted in the next unknown image.
"The system can analyse an unknown image and explain it in words and phrases that make sense," says Fei-Fei Li, a professor of computer science and director of the Stanford Artificial Intelligence Lab. "This is an important milestone. It's the first time we've had a computer vision system that could tell a basic story about an unknown image by identifying discrete objects and also putting them into some context."
These latest algorithms are being trained on a visual dictionary – the ImageNet project – with a database of more than 14 million objects. Each object is described by a mathematical term, or vector, that enables the machine to recognise the shape the next time it is encountered. Those mathematical definitions are linked to the words humans would use to describe the objects.
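Neither team has released this exact code, but the overall architecture – a convolutional network's image feature vector initialising a recurrent decoder that emits caption words one at a time – can be sketched schematically. The toy vocabulary and random, untrained weights below exist purely to show the structure, not to reproduce either team's model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<start>", "a", "person", "riding", "motorcycle", "<end>"]
V, H, F = len(vocab), 16, 32   # vocab size, RNN hidden size, CNN feature size

# Toy weight matrices standing in for a trained model (random, shapes only).
W_img = rng.normal(size=(H, F))   # projects CNN image features into the RNN state
W_in  = rng.normal(size=(H, V))   # embeds the previous word
W_h   = rng.normal(size=(H, H))   # recurrent connection
W_out = rng.normal(size=(V, H))   # maps hidden state to word scores

def caption(image_features, max_len=10):
    """Greedy decoding: the image feature vector seeds the RNN state,
    then each step feeds the previous word back in and picks the next."""
    h = np.tanh(W_img @ image_features)
    word = vocab.index("<start>")
    out = []
    for _ in range(max_len):
        x = np.zeros(V)
        x[word] = 1.0                       # one-hot encoding of previous word
        h = np.tanh(W_in @ x + W_h @ h)     # recurrent state update
        word = int(np.argmax(W_out @ h))    # most likely next word
        if vocab[word] == "<end>":
            break
        out.append(vocab[word])
    return " ".join(out)

print(caption(rng.normal(size=F)))  # weights are untrained, so output is arbitrary
```

In the real systems, the image features come from a deep convolutional network pre-trained on ImageNet, and all of the weights are learned jointly from millions of image-caption pairs rather than initialised at random.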
“I was amazed that even with the small amount of training data that we were able to do so well,” said Oriol Vinyals, a Google computer scientist who worked with members of the Google Brain project. “The field is just starting, and we will see a lot of increases.”
In the near term, computer vision systems that can discern the story in a picture will enable people to search photo or video archives and find highly specific images. Eventually, these advances will lead to robotic systems able to navigate unknown situations. Driverless cars would also be made safer. However, it also raises the prospect of even greater levels of government surveillance.
"A group of young people playing a game of Frisbee."
"A person riding a motorcycle on a dirt road."
"A pizza sitting on top of a pan on top of a stove."
Online retail giant Amazon has unveiled a new hi-tech speaker system with a wide range of interactive features. Called Echo, the cylindrical device is controlled by your voice, activated by a special "wake word" and uses far-field listening to hear from anywhere in the room. It can provide real-time information, music, news, weather, a timer/alarm, and many more services – even telling jokes. Crisp vocals with dynamic bass are fine-tuned to deliver an immersive sound from 360° omni-directional speakers.
With an always-on connection, it uses the cloud to continually learn and increase functionality over time – adapting to speech patterns, vocabulary and users' personal preferences. For now, Echo is only available to those with an invitation, but you can request an invite on its product page. It is currently priced at $199, but Prime members can obtain it for $99 for a limited time. Although its technology appears impressive, to some people it might seem rather Orwellian.
A new retail service robot can help customers shop smarter, navigate stores more efficiently, and instantly access information. OSHbot is being introduced by Orchard Supply Hardware at its flagship store in San Jose, California.
This holiday season, Lowe's Innovation Labs will introduce two autonomous retail service robots in a flagship Orchard Supply Hardware store in midtown San Jose, California to study how robotics technology can benefit customers and employees.
Called OSHbot, the robots will assist customers to quickly navigate stores by directing them to specific products and providing real-time information about product promotions and inventory. In the coming months, OSHbots will also be able to communicate with customers in multiple languages and remotely connect with expert employees at other Orchard stores.
"Using science fiction prototyping, we explored solutions to improve customer experiences by helping customers quickly find the products and information they came in looking for," said Kyle Nel, executive director of Lowe's Innovation Labs. "As a result we developed autonomous retail service robot technology to be an intuitive tool customers can use to ask for help, in their preferred language, and expect a consistent experience."
For store employees, OSHbot will provide an additional layer of support by handling simple customer questions, giving them more time to focus on delivering project expertise. Applications designed to support employees also include real-time inventory management and connecting with staff in other locations to share know-how and answer customer questions.
The OSHbot incorporates scanning technology first developed for the Lowe's Holoroom home improvement simulator. For example, a customer may bring in a spare part and scan the object using OSHbot's 3D sensing camera. After scanning and identifying the object, OSHbot will provide product information to the customer and help guide them to its location on store shelves.
The OSHbot was developed through a partnership between Lowe's Innovation Labs and Fellow Robots, a Silicon Valley company specialising in the design and development of autonomous service robots. The partnership was initiated through SU Labs – a Singularity University program that connects corporate innovation teams with startups and other organisations to explore exponentially accelerating technologies and create new sustainable business solutions.
"The last decade was one of rapid technological advancement and prototyping, especially in robotics," said Marco Mascorro, chief executive officer of Fellow Robots. "With OSHbot, we've worked closely with Lowe's Innovation Labs to take autonomous retail service robot technology out of the sandbox and into the consumer market – enhancing the in-store consumer experience and creating smarter shoppers."
The Office of Naval Research (ONR) has announced a technological breakthrough that allows unmanned surface vehicles (USV) to not only protect Navy ships, but also, for the first time, autonomously “swarm” offensively on hostile vessels.
First-of-its-kind technology – demonstrated on the James River in Virginia – allows unmanned, self-guided vessels to overwhelm an adversary. This is achieved using a combination of sensors and software called CARACaS (Control Architecture for Robotic Agent Command and Sensing). The hardware is small and light enough to be portable and can be installed on almost any boat. It is also inexpensive, at just $2000 for each kit.
These automated patrols could leave warships they're protecting and swarm around potential threats on the water. This technology could be utilised by the U.S. Navy within a year, defence officials say, adding it could help stop attacks like the deadly 2000 bombing of the USS Cole.
“Our Sailors and Marines can’t fight tomorrow’s battles using yesterday’s technology,” said Chief of Naval Research, Matthew Klunder. “This kind of breakthrough is the result of the Navy’s long-term support for innovative research in science and technology.”
Without a human physically needing to be at the controls, the boats can operate in sync with other unmanned vessels – choosing their own routes; swarming to interdict enemy vessels; and escorting/protecting naval assets.
“This networking unmanned platforms demonstration was a cost-effective way to integrate many small, cheap, and autonomous capabilities that can significantly improve our warfighting advantage,” said Admiral Jonathan Greenert, Chief of Naval Operations.
“This multiplies combat power by allowing CARACaS-enabled boats to do some of the dangerous work,” said Dr. Robert Brizzolara, program manager at the ONR. “It will remove our Sailors and Marines from many dangerous situations; for instance when they need to approach hostile or suspicious vessels. If an adversary were to fire on the USVs, no humans would be at risk.”
In the tests, as many as 13 Navy boats were operating together. First they escorted a high-value Navy ship, and then, when a simulated enemy vessel was detected, the boats sped into action, swarming around the threat. This demonstration comes near the anniversary of the USS Cole bombing off the coast of Yemen. In that October 2000 terrorist attack, a small boat laden with explosives was able to get near a guided-missile destroyer and detonate, killing 17 Sailors and injuring 39 others.
Autonomous unmanned surface vehicles could play a vital role in protecting people, ports and commerce. In the future, the capability could be scaled up to include even greater numbers of USVs – and even to other platforms such as drones, helicopters and jet fighters.
"This is something that you might find not only just on our naval vessels. We could certainly see this utilised to protect merchant vessels, to protect ports and harbours; used also to protect offshore oil rigs," Klunder said.
Software company IPsoft has announced a new artificial intelligence platform named “Amelia” that makes it possible to automate knowledge distribution over a wide range of functions. Exposed to the same information as any new hire, she instantly applies information to solve queries. With Amelia able to shoulder the burden of tedious, often laborious tasks, she partners with human co-workers to achieve new levels of productivity and service quality.
Whereas most other technologies demand that humans adapt their behaviour in order to interact with ‘smart machines’, Amelia is intelligent enough to interact like a human herself. She learns using the same natural language manuals as her colleagues, but in a matter of seconds. She understands the full semantic meaning of what she reads – rather than simply recognising individual words – by applying context, logic and inferring implications. Independently, rather than through time-intensive programming, Amelia creates her own process map of the information she is given so that she can work out for herself exactly what actions to take, depending on the specific problem being solved. Like a human worker, she learns from her colleagues and by observing their work, is able to continually build up knowledge.
In a fraction of the time it traditionally takes to train someone in a new role, Amelia is able to perform at a high level. What is more, as she already speaks over 20 languages, she can support international operations with ease. Her core knowledge of a process needs to be learned only once for her to be able to communicate with customers in their own language.
Much like machines transformed agriculture and manufacturing, cognitive technologies will drive the next evolution of the global workforce. In the future, companies will compete in the digital economy with a digital workforce that comprises a balance of human and virtual employees. Research firm Gartner predicts that by 2017, autonomics and cognitive platforms like Amelia will drive a 60 percent reduction in the cost of managed services. This technology is already being piloted within a number of Fortune 1000 companies and IPsoft expects to announce new customers and prominent industry partners before the end of this year.
"We want to make sure that human beings can dedicate their time to more valuable tasks. Taking out the more repetitive tasks is I think a noble aspiration for a company," said Frank Lansink, EU CEO of IPsoft, at a briefing in the firm's HQ at 30 St Mary Axe (the Gherkin). "Our purpose is to elevate human beings into a more meaningful role, adding value to society, or to enterprise, or the customer."
In this interview with CNN Money, Elon Musk says that a Tesla car able to self-drive up to 90% of the time will be launched in 2015. The company will also reveal its next electric vehicle – the model "D" – on 9th October, according to a tweet.
Researchers in Bangladesh have designed a computer program able to accurately recognise users’ emotional states as much as 87% of the time, depending on the emotion.
Writing in the journal Behaviour & Information Technology, Nazmul Haque Nahin and his colleagues describe how their study combined – for the first time – two established ways of detecting user emotions: keystroke dynamics and text-pattern analysis.
To provide data for the study, volunteers were asked to note their emotional state after typing passages of fixed text, as well as at regular intervals during their regular (‘free text’) computer use. This provided researchers with data about keystroke attributes associated with seven emotional states (joy, fear, anger, sadness, disgust, shame and guilt). To help them analyse sample texts, the researchers made use of a standard database of words and sentences associated with the same seven emotional states.
After running a variety of tests, the researchers found that their new ‘combined’ results were better than their separate results; what’s more, the ‘combined’ approach improved performance for five of the seven categories of emotion. Joy (87%) and anger (81%) had the highest rates of accuracy.
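One common way to combine two classifiers like this is "late fusion": average their per-emotion probability estimates and pick the highest. The sketch below is a hypothetical illustration of that idea – the numbers are invented, and the study's actual fusion method and weights are not reproduced here.

```python
# Hedged sketch of combining keystroke-dynamics and text-analysis
# classifiers by averaging their per-emotion probability estimates
# ("late fusion"). All numbers are invented for illustration.

EMOTIONS = ["joy", "fear", "anger", "sadness", "disgust", "shame", "guilt"]

def fuse(p_keystroke, p_text, w=0.5):
    """Weighted average of two probability distributions over emotions."""
    return {e: w * p_keystroke[e] + (1 - w) * p_text[e] for e in EMOTIONS}

def predict(p_keystroke, p_text):
    """Return the emotion with the highest fused probability."""
    fused = fuse(p_keystroke, p_text)
    return max(fused, key=fused.get)

# Invented example: keystroke timing hints at anger; word choice agrees.
p_keys = {"joy": 0.10, "fear": 0.05, "anger": 0.45, "sadness": 0.15,
          "disgust": 0.10, "shame": 0.10, "guilt": 0.05}
p_text = {"joy": 0.05, "fear": 0.10, "anger": 0.50, "sadness": 0.10,
          "disgust": 0.10, "shame": 0.05, "guilt": 0.10}

print(predict(p_keys, p_text))  # → anger
```

Fusing helps because the two signals fail in different ways: typing rhythm may be ambiguous where word choice is not, and vice versa.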
This research is an important contribution to ‘affective computing’, a growing field dedicated to ‘detecting user emotion in a particular moment’. As the authors note – for all the advances in computing power, performance and size in recent years, a lot more can still be done in terms of their interactions with end users. “Emotionally aware systems can be a step ahead in this regard,” they write. “Computer systems that can detect user emotion can do a lot better than the present systems in gaming, online teaching, text processing, video and image processing, user authentication and so many other areas where user emotional state is crucial.”
While much work remains to be done, this research is an important step towards 'emotionally intelligent' systems that recognise users' emotional states and adapt their music, graphics, content or approach to learning accordingly.
A huge, self-organising robot swarm consisting of 1,024 individual machines has been demonstrated by Harvard.
Swarm robotics is a new and emerging field of technology involving the coordination of multiple robots to perform a group task. By combining a large number of machines, it is possible to create a hive intelligence – capable of much greater achievements than a lone individual. In the same way that insects such as ants, bees and termites cooperate, researchers can build wireless networks of machines able to sense, navigate and communicate information about their surroundings.
Recent efforts have included a formation of 20 "droplets" created by the University of Colorado, a group of 40 robots developed at the Sheffield Centre for Robotics, and drones using augmented reality to produce "spatially targeted communication and self-assembly". Although impressive, those projects – and others since – have lacked the raw numbers to be considered a genuine "swarm" like the creatures mentioned earlier. This week, however, scientists at Harvard took research in the field to a whole new level, by demonstrating a network of more than 1,000 machines working simultaneously.
Known as "Kilobots", these devices are just a few centimetres across, roughly the size of a U.S. quarter. Each is equipped with tiny vibrating motors allowing them to slide across a surface, using an infrared transmitter and receiver to alert their neighbours and measure their proximity. From just a simple command, they can arrange themselves into a variety of complex shapes and patterns.
In 2011, Harvard developed and licensed open-source hardware and software to improve the algorithms used in machine networks. A report showed how groups of 25 Kilobots – demonstrating behaviours such as foraging, formation control and synchronisation – could potentially be scaled to much larger numbers. Following three years of further testing and experimentation, the university has now succeeded in coordinating a swarm of 1,024 units.
The new, smarter algorithm enables the Kilobots to correct their own mistakes, avoiding the traffic jams and errors that would otherwise become more likely in larger groups. If an individual deviates off-course, nearby robots can sense the problem and cooperate to fix it. As robots become cheaper and more numerous, with a continued trend towards miniaturisation, this form of social behaviour could lead to revolutionary applications in the future.
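One building block of this kind of collective behaviour is gradient formation: each robot repeatedly sets its value to one more than the smallest value among its neighbours, starting from a seed at zero, so every robot learns its hop-distance from the seed using only local communication. The sketch below is an illustrative toy on a line of robots – real Kilobots measure neighbour proximity with infrared, and Harvard's actual shape-assembly algorithm is considerably more involved.

```python
# Toy sketch of gradient formation, one ingredient of swarm
# self-assembly: each robot's value converges to one more than the
# minimum among its in-range neighbours, seeded at 0. Robots live on
# a line here for simplicity; this is not Harvard's actual algorithm.

def form_gradient(positions, comm_range=1.5, steps=20):
    n = len(positions)
    grad = [0 if i == 0 else None for i in range(n)]   # robot 0 is the seed
    for _ in range(steps):                              # repeated local updates
        for i in range(1, n):
            neigh = [grad[j] for j in range(n)
                     if j != i
                     and abs(positions[i] - positions[j]) <= comm_range
                     and grad[j] is not None]
            if neigh:
                grad[i] = min(neigh) + 1                # one hop further than nearest
    return grad

# Five robots spaced one unit apart: values count up from the seed.
print(form_gradient([0.0, 1.0, 2.0, 3.0, 4.0]))  # → [0, 1, 2, 3, 4]
```

Because every update uses only information from robots within communication range, the same rule scales from a handful of units to a thousand without any central coordinator.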
As Professor Radhika Nagpal explains in a press release: “Increasingly, we’re going to see large numbers of robots working together – whether it's hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways. Understanding how to design ‘good’ systems at that scale will be critical. We can simulate the behaviour of large swarms of robots, but a simulation can only go so far. The real-world dynamics – the physical interactions and variability – make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognise and prevent the failures that occur at these large scales.”
These latest developments are reported in the peer-reviewed journal Science.
From next week, guests at the Aloft hotel chain may feel like they are living in the future, as a new robotic butler offers its services.
Aloft Hotels has announced A.L.O. as the company’s first “Botlr” (robotic butler). This futuristic service will be introduced on 20th August, making Aloft the first major hotel brand to hire a robot for both front and back of house duties.
In this role, A.L.O. will be on call 24/7 as a robotic operative, assisting the human staff in delivering amenities to guest rooms. Professionally “dressed” in a custom shrink-wrapped, vinyl collared uniform and nametag, A.L.O. can modestly accept tweets as tips. It will not only free up time for employees, allowing them to create a more personalised experience for guests, but will also enhance the hotel’s image and technological features.
Brian McGuinness, Global Brand Leader: “As you can imagine, hiring for this particular position was a challenge as we were seeking a very specific set of automated skills, and one that could work – literally – around the clock. As soon as A.L.O. entered the room, we knew it was what we were looking for. A.L.O. has the work ethic of Wall-E, the humour of Rosie from The Jetsons and reminds me of my favourite childhood robot – R2-D2. We are excited to have it join our team.”
A.L.O. was developed by Savioke – a new Silicon Valley startup, backed by Google Ventures, whose debut the robotics community had been eagerly anticipating. It uses a combination of sonar wave technology, lasers and cameras to avoid people and obstacles. It can handle and prioritise multiple guest deliveries, communicate easily with guests and various hotel platforms, and navigate efficiently throughout the property – including calling the elevator via WiFi.
Steve Cousins, CEO of Savioke: “We are thrilled to introduce our robot to the world today through our relationship with Aloft Hotels. In our early testing, all of us at Savioke have seen the look of delight on those guests who receive a room delivery from a robot. We have also seen the front desk get busy at times, and expect Botlr will be especially helpful at those times, freeing up human talent to interact with guests on a personal level.”
The first A.L.O. reports for duty next week at Aloft Cupertino, next to the Apple HQ. If successful, all 100 of the company's hotels may introduce them during 2015. In the future, Cousins predicts a huge market for service robots like A.L.O.: “There are all these places, hotels, elder care facilities, hospitals, that have a few hundred robots maybe – but no significant numbers – and we think that's just a huge opportunity.”
Scientists at IBM Research have created a neuromorphic (brain-like) computer chip, featuring 1 million programmable neurons and 256 million programmable synapses.
IBM this week unveiled "TrueNorth" – the most advanced and powerful computer chip of its kind ever built. This neurosynaptic processor is the first to achieve one million individually programmable neurons, sixteen times more than the current largest neuromorphic chip. Designed to mimic the structure of the human brain, it represents a major departure from older computer architectures of the last 70 years. By merging the pattern recognition abilities of neurosynaptic chips with traditional system layouts, researchers aim to create "holistic computing intelligence".
Measured by device count, TrueNorth is the largest IBM chip ever fabricated, with 5.4 billion transistors at 28nm. Yet it consumes under 70 milliwatts while running at biological real time – orders of magnitude less power than a typical modern processor. This amazing feat is made possible because neurosynaptic chips are event driven, as opposed to the "always on" operation of traditional chips. In other words, they function only when needed, resulting in vastly less energy use and a much cooler temperature. It is hoped this combination of ultra-efficient power consumption and entirely new system architecture will allow computers to far more accurately emulate the brain.
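The event-driven principle can be illustrated with a leaky integrate-and-fire neuron, a standard abstraction in neuromorphic computing: charge accumulates only when input spikes arrive, leaks away otherwise, and an output is produced only when a threshold is crossed. This is a generic textbook model, not TrueNorth's actual neuron circuit.

```python
# Leaky integrate-and-fire neuron: the membrane potential decays
# ("leaks") each step, incoming spikes add charge, and the neuron fires
# only when a threshold is crossed -- work happens on events, not on
# every clock tick. Generic textbook model, not TrueNorth's circuit.

def simulate(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    v = 0.0
    out = []
    for t, spike in enumerate(input_spikes):
        v = leak * v + (weight if spike else 0.0)  # decay, then integrate input
        if v >= threshold:
            out.append(t)    # emit an output spike at time t
            v = 0.0          # reset membrane potential after firing
    return out

# Dense input early, then silence: the neuron fires only while driven.
spikes = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
print(simulate(spikes))  # → [2]
```

During the long quiet stretch the potential simply decays and no computation of consequence occurs – which is exactly why event-driven chips can run at milliwatt power levels.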
TrueNorth is composed of 4,096 cores, with each of these modules integrating memory, computation and communication. The cores are distributed in a parallel, flexible and fault-tolerant grid – able to continue operating when individual cores fail, similar to a biological system. And – like a brain cortex – adjacent TrueNorth chips can be seamlessly tiled and scaled up. To demonstrate this scalability, IBM also revealed a 16-chip motherboard with 16 million programmable neurons: roughly equivalent to a frog brain.
Each of these "neurons" features 256 inputs, whereas the human brain averages 10,000. That may sound like a huge difference – but in the world of computers and technology, progress tends to be exponential. In other words, we could see machines as computationally powerful as a human brain within 10–15 years. The implications are staggering. When sufficiently scaled up, this new generation of "cognitive computers" could transform society, leading to a myriad of applications able to intelligently analyse visual, auditory, and multi-sensory data.
Vince Cable, UK Business Secretary, has announced measures that give the green light for driverless cars on UK roads from January 2015.
UK cities can now bid for a share of a £10 million (US$17m) competition to host a driverless cars trial. The government is calling on cities to join together with businesses and research organisations to put forward proposals to become a test location. Up to three cities will be selected to host the trials from next year, with each project expected to last between 18 and 36 months, starting in January 2015.
Ministers have also launched a review to look at current road regulations to establish how the UK can stay at the forefront of driverless car technology and ensure there is an appropriate regime for testing driverless cars in the UK. Two areas will be covered in the review: cars with a qualified driver who can take over control of the driverless car, and fully autonomous vehicles where there is no driver.
Speaking at MIRA – a vehicle engineering consultancy, test and research facility – where he tested a driverless car with Science Minister Greg Clark, Business Secretary Vince Cable said: "The excellence of our scientists and engineers has established the UK as a pioneer in the development of driverless vehicles through pilot projects. Today’s announcement will see driverless cars take to our streets in less than six months, putting us at the forefront of this transformational technology and opening up new opportunities for our economy and society.
"Through the government's industrial strategy, we are backing the automotive sector as it goes from strength to strength. We are providing the right environment to give businesses the confidence to invest and create high skilled jobs."
Britain joins a growing number of countries planning to use this technology. Elsewhere in Europe, cities in Belgium, France and Italy intend to operate transport systems for driverless cars. In the USA, four states have passed laws permitting autonomous cars: Nevada, Florida, California, and Michigan. FutureTimeline.net predicts annual purchases of autonomous vehicles will reach almost 100 million worldwide by 2035. The benefits could be enormous, with drastic reductions in accident fatalities, traffic congestion and pollution.
With many of Earth's metals and minerals facing a supply crunch in the decades ahead, deep ocean mining could provide a way of unlocking major new resources. Amid growing commercial interest, the UN's International Seabed Authority has just issued seven exploration licences.
Credit: Nautilus Minerals Inc.
To build a fantastic utopian future of gleaming eco-cities, flying cars, robots and spaceships, we're going to need metal. A huge amount of it. Unfortunately, our planet is being mined at such a rapid pace that some of the most important elements face critical shortages in the coming decades. These include antimony (2022), silver (2029), lead (2031) and many others. To put the impact of our mining and other activities in perspective: on land, humans are now responsible for moving about ten times as much rock and earth as natural phenomena such as earthquakes, volcanoes and landslides. The UN predicts that on current trends, humanity's annual resource consumption will triple by 2050.
While substitution in the form of alternative metals could help, a longer term answer is needed. Asteroid mining could eventually provide an abundance from space – but a more immediate, technically viable and commercially attractive solution is likely to arise here on Earth. That's where deep sea mining comes in. Just as offshore oil and gas drilling was developed in response to fossil fuel scarcity on land, the same principle could be applied to unlock massive new metal reserves from the seabed. Oceans cover 72% of the Earth's surface, with vast unexplored areas that may hold a treasure trove of rare and precious ores. Further benefits would include:
• Curbing of China's monopoly on the industry. As of 2014, the country is sitting on nearly half the world's known reserves of rare earth metals and produces over 90% of the world's supply.
• Limited social disturbance. Seafloor production will not require the social dislocation and resulting impact on culture or disturbance of traditional lands common to many land-based operations.
• Little production infrastructure. As the deposits are located on the seafloor, production will be limited to a floating ship with little need for additional land-based infrastructure. The concentration of minerals is an order of magnitude higher than typical land-based deposits with a corresponding smaller footprint on the Earth's surface.
• Minimal overburden or stripping. The ore generally occurs directly on the seafloor and will not require large pre-strips or overburden removal.
• Improved worker safety. Operations will be mostly robotic and won't require human exposure to typically dangerous mining or "cutting face" activities. Only a hundred or so people will be employed on the production vessel, with a handful more in support logistics.
Credit: Nautilus Minerals Inc.
Interest in deep sea mining first emerged in the 1960s – but consistently low prices of mineral resources at the time halted any serious implementation. By the 2000s, the only resource being mined in bulk was diamonds, and even then, just a few hundred metres below the surface. In recent years, however, there has been renewed interest, due to a combination of rising demand and improvements in exploration technology.
The UN's International Seabed Authority (ISA) was set up to manage these operations and prevent them from descending into a free-for-all. Until 2011, only a handful of exploration permits had been issued – but since then, demand has surged. This week, seven new licences were issued to companies based in Brazil, Germany, India, Russia, Singapore and the UK. The number is expected to reach 26 by the end of 2014, covering a total area of seabed greater than 1.2 million sq km (463,000 sq mi).
Michael Lodge of the ISA told the BBC: "There's definitely growing interest. Most of the latest group are commercial companies so they're looking forward to exploitation in a reasonably short time – this move brings that closer."
So far, only licences for exploration have been issued, but full mining rights are likely to be granted over the next few years. The first commercial activity will take place off the coast of Papua New Guinea, where a Canadian company – Nautilus Minerals – plans to extract copper, gold and silver from hydrothermal vents. After 18 months of delays, this was approved outside the ISA system and is expected to commence in 2016. Nautilus has been developing Seafloor Production Tools (SPTs), the first of which was completed in April. This huge robotic machine is known as the Bulk Cutter and weighs 310 tonnes when fully assembled. The SPTs have been designed to work at depths of 1 mile (1.6 km), but operations as far down as 2.5 miles (4 km) should be possible eventually.
As with any mining activity, concerns have been raised from scientists and conservationists regarding the environmental impact of these plans, but the ISA says it will continue to demand high levels of environmental assessment from its applicants. Looking ahead, analysts believe that deep sea mining could be widespread in many parts of the world by 2040.
A Japanese humanoid robot called Pepper, whose makers claim can read people's emotions, has been unveiled in Tokyo. Telecoms company Softbank, which created the robot, says Pepper can understand 70 to 80 percent of spontaneous conversations. News agency AFP met the pint-sized chatterbox, who took time out from his day job greeting customers at SoftBank stores.
Developed by Microsoft, Project Adam is a new deep-learning system modelled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry. The goal of Project Adam is to enable software to visually recognise any object. This is being marketed as a competitor to Google's Brain project, currently being worked on by Ray Kurzweil.
The National Museum of Emerging Science and Innovation has today opened a new permanent exhibition entitled, "Android: What is Human?" where visitors can meet the world's most advanced androids – robots which closely resemble humans.
The National Museum of Emerging Science and Innovation, also known simply as the "Miraikan", was created by Japan's Science and Technology Agency. The new exhibition displays three android robots: the recently developed Kodomoroid and Otonaroid – a child android and an adult female android, respectively – and Telenoid, an android designed without individual human physical features. The exhibition is curated by Dr. Hiroshi Ishiguro, a leading android expert who has long studied the question, "What is human?"
Kodomoroid and Otonaroid will attempt to fill human roles as the world's first android announcer and as the Miraikan's android science communicator, respectively. The organisers of the exhibition claim it will be "a unique and rare event" – providing visitors with the opportunity to communicate with and operate these advanced robots, while shedding light on the attributes of humans in contrast with those of androids.
With soft skin made from special silicone and smooth motion enabled by artificial muscles, android robots are becoming increasingly similar to real humans. If an android gains the ability to talk and live identically to a human, you may no longer be able to distinguish between androids and humans. If this comes to pass, what would the word "human" mean? What is human? This question has been debated since ancient times, and efforts to find an answer continue across all fields, including the humanities, social sciences and art. Building an android can be described as a process of understanding what makes a human look like a human, as Ishiguro explains.
Kodomoroid is a teleoperated android resembling a child. It is a news announcer with potential exceeding that of its human equivalent. It can recite news reports gathered from around the world 24 hours a day, every day, in a variety of voices and languages. In a studio on the museum's third floor, you can watch her deliver news about global issues and weather reports.
Otonaroid is a teleoperated android robot resembling an adult female. She has been hired by the Miraikan as a robot science communicator. At the exhibition, you can talk with her in face-to-face conversations and also operate her movements.
Telenoid is a teleoperated android robot with a minimal design, created as an attempt to embody the minimum physical requirements for human-like communication. At the exhibition, you can talk with it and also operate it.
Researchers are claiming a major breakthrough in artificial intelligence with a machine program that can pass the famous Turing Test.
At the Royal Society in London yesterday, an event called Turing Test 2014 was organised by the University of Reading. This involved a chat program known as Eugene being presented to a panel of judges and trying to convince them it was human. These judges included the actor Robert Llewellyn – who played robot Kryten in sci-fi comedy TV series Red Dwarf – and Lord Sharkey, who led a successful campaign for Alan Turing's posthumous pardon last year. During this competition, which saw five computers taking part, Eugene fooled 33% of human observers into thinking it was a real person as it claimed to be a 13-year-old boy from Odessa in Ukraine.
In 1950, British mathematician and computer scientist Alan Turing published his seminal paper, "Computing Machinery and Intelligence", in which he proposed the now-famous test for artificial intelligence. Turing predicted that by the year 2000, machines with a storage capacity of around 10⁹ bits (roughly 120 MB) would be able to fool 30% of human judges after five minutes of questioning, and that people would no longer consider the phrase "thinking machine" contradictory.
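Turing's criterion, and the reported result from the Reading event, reduce to a simple proportion. The sketch below encodes that threshold check; the judge count used in the example is an invented illustration, since the article only reports the 33% figure, not the size of the panel.

```python
# Turing's criterion: a machine "passes" if, after five-minute text
# conversations, it is misidentified as human by MORE than 30% of judges.

def passes_turing_test(fooled: int, judges: int, threshold: float = 0.30) -> bool:
    """True if the machine fooled more than `threshold` of the judges."""
    return fooled / judges > threshold

# Illustrative panel size (an assumption, not from the article):
# 10 of 30 judges fooled ≈ 33%, just clearing Turing's 30% bar.
print(passes_turing_test(fooled=10, judges=30))   # True
print(passes_turing_test(fooled=9, judges=30))    # exactly 30% -> False
```

Note that exactly 30% does not pass: Turing's wording requires the interrogator to have *less* than a 70% chance of a correct identification.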
In the years since 1950, the test has proven both highly influential and widely criticised. A number of breakthroughs have emerged in recent times from groups claiming to have satisfied the criteria for "artificial intelligence". We have seen Cleverbot, for example, and IBM's Watson, as well as gaming bots and the CAPTCHA-solving Vicarious. It is therefore easy to be sceptical about whether Eugene represents something genuinely new and revolutionary.
Professor Kevin Warwick (who also happens to be the world's first cyborg), comments in a press release from the university: "Some will claim that the Test has already been passed. The words 'Turing Test' have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday."
Eugene's creator and part of the development team, Vladimir Veselov, commented: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year, we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward, we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."
Is the Turing Test a reliable indicator of intelligence? Who gets to decide the figure of 30% and what is the significance of this number? Surely imitation and pre-programmed replies cannot qualify as "understanding"? These questions and many others will be asked in the coming days, just as they have been asked following similar breakthroughs in the past. To gain a proper understanding of intelligence, we will need to reverse engineer the brain – something which is very much achievable in the next decade, based on current trends.
Regardless of whether Eugene is a bona fide AI, computing power will continue to grow exponentially in the coming years, with major implications for society in general. Benefits may include a 50% reduction in healthcare costs, as software programs are used for big data management to understand and predict the outcomes of treatment. Call centre staff, already competing with virtual employees today, could be almost fully automated in the 2030s, with zero waiting times for callers seeking help. Self-driving cars and other forms of AI could radically reshape our way of life.
Downsides to AI may include a dramatic rise in unemployment as humans are increasingly replaced by machines. Another big area of concern is security, as Professor Warwick explains: "Having a computer that can trick a human into thinking that someone – or even something – is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."
Further into the future, AI will gain increasingly mobile capabilities, able to learn and become aware of the physical world. No longer restricted to the realms of software and cyberspace, it will occupy hardware that includes machines literally indistinguishable from real people. By then, science fiction will have become reality and our civilisation will enter a profound, world-changing epoch that some have called a technological singularity. If Ray Kurzweil's ultimate prediction is to be believed, our galaxy and perhaps the entire universe may become saturated with intelligence, as formerly lifeless rocks are converted into sentient matter.
At the Code Conference in California, Microsoft has demonstrated Skype Translator – a new technology enabling cross-lingual conversations in real time. Resembling the "universal translator" from Star Trek, this feature will be available on Windows 8 by the end of 2014 as a limited beta. Microsoft has worked on machine translation for 15 years, and translating voice over Skype in real time had once been considered "a nearly impossible task." In the world of technology, however, miracles do happen. This video shows the software in action. According to CEO Satya Nadella, it does more than just automatic speech recognition, machine translation and voice synthesis: it can actually "learn" from different languages, through a brain-like neural net. When you consider that 300 million people are now connecting to Skype each month, making 2 billion minutes of conversation each day, the potential in terms of improved communication is staggering.
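The three stages Nadella names chain together into a single pipeline: speech recognition produces text, machine translation converts it, and voice synthesis speaks the result. The toy sketch below shows only that data flow; every stage is a stand-in lookup table rather than a real recogniser, translator or synthesiser, and the audio IDs and dictionary entries are invented for illustration.

```python
# Toy sketch of the speech-translation pipeline:
# speech recognition -> machine translation -> speech synthesis.

RECOGNISED = {"audio_de_001": "guten morgen"}          # audio -> text (stub ASR)
DICTIONARY = {"guten": "good", "morgen": "morning"}    # de -> en, word by word

def recognise(audio_id: str) -> str:
    """Stand-in for automatic speech recognition."""
    return RECOGNISED[audio_id]

def translate(text: str) -> str:
    """Stand-in for machine translation (naive word-by-word lookup)."""
    return " ".join(DICTIONARY.get(word, word) for word in text.split())

def synthesise(text: str) -> str:
    """Stand-in for text-to-speech; returns a label for the audio it would emit."""
    return f"<speech:{text}>"

def translate_call(audio_id: str) -> str:
    """Run one utterance through all three stages."""
    return synthesise(translate(recognise(audio_id)))

print(translate_call("audio_de_001"))  # <speech:good morning>
```

The word-by-word lookup is exactly what real machine translation is *not* — the neural-network approach Nadella describes learns whole-sentence mappings — but the stage boundaries are the same.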
Fully autonomous weapons, or “killer robots,” would jeopardise basic human rights, whether used in wartime or for law enforcement, Human Rights Watch said in a report released yesterday, on the eve of the first multilateral meeting on the subject at the United Nations.
The 26-page report, “Shaking the Foundations: The Human Rights Implications of Killer Robots,” is the first report to assess in detail the risks posed by these weapons during law enforcement operations – expanding the debate beyond the battlefield. Human Rights Watch found that fully autonomous weapons threaten rights and principles under international law as fundamental as the right to life, the right to a remedy, and the principle of dignity.
“In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” said Steve Goose, arms division director. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”
International debate over fully autonomous weapons has previously focused on their potential role in armed conflict and questions over whether they would comply with international humanitarian law, also called the laws of war. Human Rights Watch, in this new report, examines the potential impact of fully autonomous weapons under human rights law, which applies during peacetime as well as armed conflict.
Nations must adopt a preemptive international ban on these weapons, which could identify and fire on targets without meaningful human intervention, Human Rights Watch said. Countries are pursuing ever-greater autonomy in weapons, and precursors already exist.
The release of the report, co-published with Harvard Law School’s International Human Rights Clinic, coincides with the first ever multilateral meeting on the weapons. Many of the 117 countries that joined the Convention on Conventional Weapons will attend the meeting of experts on lethal autonomous weapons systems at the United Nations in Geneva this week. Members of the convention agreed at their annual meeting in November 2013 to begin work on the issue in 2014.
Human Rights Watch believes the agreement to work on these weapons in the Convention on Conventional Weapons forum could eventually lead to new international law prohibiting fully autonomous weapons. The convention preemptively banned blinding lasers in 1995.
Human Rights Watch is a founding member and coordinator of the Campaign to Stop Killer Robots. This coalition of 51 nongovernmental organisations in two dozen countries calls for a preemptive ban on the development, production, and use of fully autonomous weapons.
Human Rights Watch issued its first report on the subject, “Losing Humanity: The Case against Killer Robots,” back in November 2012. In April 2013, Christof Heyns – UN special rapporteur on extrajudicial, summary or arbitrary executions – issued a report citing a range of objections to the weapons, and called for all nations to adopt national moratoria and begin international discussions about how to address them.
Fully autonomous weapons could be prone to killing people unlawfully because these weapons could not be programmed to handle every situation, Human Rights Watch found. According to robot experts, there is little prospect that these weapons would possess human qualities, such as judgment, that facilitate compliance with the right to life in unforeseen situations.
Fully autonomous weapons would also undermine human dignity, Human Rights Watch said. These inanimate machines could not understand or respect the value of life, yet they would have the power to determine when to take it away.
Serious doubts exist about whether there could be meaningful accountability for the actions of a fully autonomous weapon. There would be legal and practical obstacles to holding anyone – a superior officer, programmer, or manufacturer – responsible for a robot’s actions. Both criminal and civil law are ill suited to the task, Human Rights Watch found.
“The accountability gap would weaken deterrence for future violations,” said Bonnie Docherty, senior researcher in the arms division at Human Rights Watch and lecturer at the Harvard clinic as well as author of the report. “It would be very difficult for families to obtain retribution or remedy for the unlawful killing of a relative by such a machine.”
The human rights impacts of killer robots compound a host of other legal, ethical, and scientific concerns – including the potential for an arms race, prospect of proliferation, and questions about their ability to protect civilians adequately on the battlefield or the street, Human Rights Watch found.
After eight years of development, a new hi-tech bionic arm has become the first of its kind to gain regulatory approval for mass production.
The DEKA Arm System is part of the $100m Revolutionising Prosthetics program launched by the Defense Advanced Research Projects Agency (DARPA). Upper-limb prosthetic technology had for many years lagged behind lower-limb technology and the program sought to address this issue. The DEKA was made possible through a combination of breakthroughs in both engineering and biology, resulting in a bionic arm that offers near-natural control. It is nicknamed "The Luke", after Star Wars' Luke Skywalker who received a robotic replacement for the hand he lost in a fight with Darth Vader.
Simultaneous control of multiple joints is enabled by miniature motors and a variety of input devices, including wireless signals generated by sensors on the user's feet. Constructed from lightweight but strong materials, the battery-powered arm system is of similar size and weight to a real limb and has six user-selectable grips.
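One way such foot-sensor inputs could drive the arm's six user-selectable grips is a simple mapping from sensed gestures to grip modes. The sketch below is purely illustrative: the gesture names and grip labels are invented, as the article does not describe the DEKA system's actual control scheme.

```python
# Hypothetical mapping from foot-sensor gestures to grip modes.
# Both the gesture vocabulary and the grip names are assumptions.

GRIP_MODES = ["power", "tool", "fine pinch",
              "lateral pinch", "chuck", "open"]   # six grips (names invented)

FOOT_GESTURES = {          # sensed gesture -> grip index
    "left_tilt": 0, "right_tilt": 1, "left_press": 2,
    "right_press": 3, "double_tap": 4, "hold": 5,
}

def select_grip(gesture: str) -> str:
    """Map a wireless foot-sensor gesture to one of the six grip modes."""
    return GRIP_MODES[FOOT_GESTURES[gesture]]

print(select_grip("left_press"))  # fine pinch
```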
During eight years of testing and development, 36 volunteers took part in studies to refine the arm's design. Their feedback helped engineers to create a mind-controlled device enabling amputees to perform a wide range of tasks – preparing food, using locks and keys, opening envelopes, brushing hair, using zippers and feeding themselves, all of which greatly enhances their independence and quality of life.
Similar devices are being developed around the world, but this is the first of its kind to gain approval from the U.S. Food and Drug Administration (FDA). Dr. Geoffrey Ling, Director of DARPA's Biological Technologies Office, comments in a press release: "DARPA is a place where we can bring dreams to life."
If bee populations continue to decline, the dystopian future depicted in this video could one day become a reality.
Bees and other pollinating insects play an essential role in ecosystems. A third of all our food depends on their pollination. A world without pollinators would be devastating for food production. Since the late 1990s, beekeepers around the world have observed the mysterious and sudden disappearance of bees, and report unusually high rates of decline in honeybee colonies. Although the exact causes are not yet fully understood, growing evidence suggests that chemical-intensive farming methods and the use of insecticides play a major role. Greenpeace has now launched a campaign demanding urgent action to address this issue – including a ban on the most harmful chemicals, along with increased science funding and more sustainable agricultural practices.
Honda this week showcased the newest version of ASIMO, the world's most advanced humanoid robot, for the first time in North America, featuring its latest innovations – including the ability to communicate in sign language and to climb stairs without stopping.
ASIMO – which stands for Advanced Step in Innovative Mobility – was first introduced 14 years ago. Since then, it has made significant advances – including physical improvements like running and hopping on one leg, as well as breakthroughs in dexterity and intelligence, that have furthered Honda's dream of creating humanoid robots to help society.
"This is an exciting project for Honda," said Satoshi Shigemi, senior chief engineer of Honda R&D and the leader of Honda's humanoid robotics program. "Our engineers are working tirelessly to develop new technologies aimed at helping ASIMO work in a real world environment."
The new version of ASIMO has undergone numerous changes to its 4'3", 110-pound body. Developments in the lower body have enhanced stability and balance control, allowing the robot to climb more smoothly, run faster and change directions in a more controlled fashion.
Enhancements in the upper body include major increases in the degrees of freedom available in the robot's hands. Each hand now contains 13 degrees of freedom, which allows ASIMO to perform many more intricate and precise tasks.
The increased hand dexterity provides additional movement in each finger, which also led to the development of ASIMO's new ability to communicate using both American and Japanese sign language. Force sensors in the robot's hands also provide instantaneous feedback allowing ASIMO to use the appropriate amount of force when performing a task. This allows the robot to pick up paper cups without crushing them, for example, but still allows it to use a stronger force when necessary.
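The force-feedback behaviour described above — gripping firmly enough to hold an object but not enough to crush it — is, at its simplest, a closed-loop controller that tightens the hand until the sensed force reaches a target. The sketch below is a minimal proportional controller illustrating that idea; it is not Honda's actual algorithm, and the gains and units are invented.

```python
# Minimal proportional force-feedback loop: increase grip closure until
# the force sensor reads the target force for the object being held.

def grip(target_force: float, stiffness: float,
         gain: float = 0.5, steps: int = 100) -> float:
    """Close the hand until sensed force converges on `target_force`.

    `stiffness` maps grip closure to sensed force (object-dependent:
    a paper cup is soft, a tool handle is stiff). Returns final force.
    """
    closure = 0.0
    for _ in range(steps):
        sensed = stiffness * closure            # force sensor reading
        error = target_force - sensed
        closure += gain * error / stiffness     # proportional correction
    return stiffness * closure

# A fragile paper cup needs a low target force; a firm object a high one.
print(round(grip(target_force=1.0, stiffness=5.0), 3))    # ~1.0
print(round(grip(target_force=10.0, stiffness=50.0), 3))  # ~10.0
```

The key point matching the text: the commanded force, not the commanded position, is what the loop regulates, so the same controller handles both delicate and forceful tasks.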
"It was obvious that overall flexibility was necessary, and many more complex tasks can now be performed because of the improved operational capacity in the hands," Shigemi continued. "But perhaps more importantly, these innovations enhance ASIMO's communication skills, which is essential to interact with human beings."
Advanced technologies derived from research on ASIMO have also benefited other Honda business lines. For example, the Vehicle Stability Assist (VSA) used in the Honda Civic, along with technologies in the championship-winning Honda MotoGP motorcycles, had their genesis in Honda's robotics research program.
Later this summer, the new ASIMO will follow in the footsteps of its predecessor to become a daily performer at Disneyland's Tomorrowland.
In 1997, Deep Blue became the first computer to win against a human chess champion, when it defeated Garry Kasparov. In 2011, IBM's Watson competed on the Jeopardy! quiz show against former winners Brad Rutter and Ken Jennings, defeating them both. Now, another competition between man and machine is about to unfold. On Tuesday 11th March, KUKA – a German manufacturer of high-end industrial robots – will open its first plant in Shanghai, China. The opening will be celebrated with a table tennis match between their KR AGILUS robot and Timo Boll, the German champion. This event is intended to demonstrate the speed, precision and flexibility of KUKA's industrial robots. For more information, click here.
Dan Barry is an engineer and scientist, currently serving as the Co-Chair of Artificial Intelligence and Robotics at Singularity University. In 2005, he started his own company, Denbar Robotics, that creates robotic assistants for home and commercial use. In 2011 he co-founded 9th Sense, a company that sells telepresence robots. He has seven patents, has published over 50 articles in scientific journals, and is a former NASA astronaut. In this video, Barry asks the question: "How are we going to know that a robot is self-aware?"
Headquartered in New York City's "Silicon Alley", the new Watson Group formed by IBM will fuel innovative products and startups – introducing cloud solutions to accelerate research, visualise Big Data and enable analytics exploration.
IBM today announced it will establish the IBM Watson Group, a new business unit dedicated to the development and commercialisation of cloud-delivered cognitive innovations. The move signifies a strategic shift by IBM to accelerate into the marketplace a new class of software, services and apps that can "think", improve by learning, and discover answers and insights to complex questions from massive amounts of Big Data.
IBM will invest more than $1 billion into the Watson Group, focusing on research and development to bring cloud-delivered cognitive applications and services to market. This will include $100 million available for venture investments to support IBM's recently launched ecosystem of start-ups and businesses, which are building a new class of cognitive apps powered by Watson, in the IBM Watson Developers Cloud.
According to technology research firm Gartner, smart machines will be the most disruptive change ever brought about by information technology, and can make people more effective, empowering them to do "the impossible."
The IBM Watson Group will have a new headquarters at 51 Astor Place in New York City's "Silicon Alley" technology hub, leveraging the talents of 2,000 professionals, whose goal is to design, develop and accelerate the adoption of Watson cognitive technologies that transform industries and professions. The new group will tap subject matter experts from IBM's Research, Services, Software and Systems divisions, as well as industry experts who will identify markets that cognitive computing can disrupt and evolve, such as healthcare, financial services, retail, travel and telecommunications.
Nearly three years after its triumph on the TV show Jeopardy!, IBM has advanced Watson from a quiz game innovation into a commercial technology. Now delivered from the cloud and powering new consumer apps, Watson is 24 times faster and 90 percent smaller – IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes.
Named after IBM founder Thomas J. Watson, the machine was developed in IBM’s Research labs. Using natural language processing and analytics, Watson handles information akin to how people think, representing a major shift in the ability to quickly analyse, understand and respond to Big Data. Watson’s ability to answer complex questions in natural language with speed, accuracy and confidence will transform decision making across a range of industries.
"Watson is one of the most significant innovations in IBM's 100 year history, and one that we want to share with the world," says IBM Senior Vice President Mike Rhodin (pictured below), who will lead the group. "These new cognitive computing innovations are designed to augment users’ knowledge – be it the researcher exploring genetic data to create new therapies, or a business executive who needs evidence-based insights to make a crucial decision."
At the Consumer Electronics Show (CES) in Las Vegas, Intel Corporation has been showing off its latest innovative technologies. These include an intelligent 3D camera system, a range of new wearable electronics, and a 22nm dual-core PC the size of an SD card.
Intel CEO Brian Krzanich has outlined a range of new products, initiatives and strategic relationships aimed at accelerating innovation across a range of mobile and wearable devices. He made the announcements during the pre-show keynote for the 2014 Consumer Electronics Show in Las Vegas, the biggest gathering of the tech industry in the USA.
Krzanich's keynote painted a vision of how the landscape of computing is being reshaped – one in which security is too important not to be embedded in all devices. The world is entering a new era of integrated computing defined not by the device, but by the integration of technology into people's lifestyles in ways that offer new utility and value. As examples, he highlighted several immersive and intuitive technologies that Intel will begin offering in 2014, such as Intel RealSense – hardware and software that will bring human senses to Intel-based devices. This will include 3D cameras that deliver more intelligent experiences – improving the way people learn, collaborate and are entertained.
The first Intel RealSense 3D camera features a best-in-class depth sensor and a full 1080p colour camera. It can detect finger-level movements, enabling highly accurate gesture recognition, and can read facial features to interpret movement and emotion. It can also distinguish foregrounds from backgrounds to allow gesture control, enhance interactive augmented reality (AR), scan items in three dimensions, and more.
This camera will be integrated into a growing spectrum of Intel-based devices including 2 in 1, tablet, Ultrabook, notebook, and all-in-one (AIO) designs. Systems with the new camera will be available beginning in the second half of 2014 from Acer, Asus, Dell, Fujitsu, HP, Lenovo and NEC.
To advance the computer's "hearing" sense, a new generation of speech recognition technology will be available on a variety of systems. This conversational personal assistant works with popular websites and applications. It comes with selectable personalities, and allows for ongoing dialogue with Intel-based devices. People can simply tell it to play music, get answers, connect with friends and find content – all by using natural language. This assistant is also capable of checking calendars, getting maps and directions, finding flights or booking a dinner reservation. Because it works offline, people can control their device, dictate notes and more without an Internet connection.
Krzanich then explained how Intel aims to accelerate wearable device innovation. A number of reference designs were highlighted including: smart earbuds providing biometric and fitness capabilities, a smart headset that is always ready and can integrate with existing personal assistant technologies, a smart wireless charging bowl, a smart baby onesie and a smart bottle warmer that will start warming milk when the onesie senses the baby is awake and hungry.
The smart earbuds (pictured below) provide full stereo audio and monitor heart rate and pulse, all while the applications on the user's phone keep track of running distance and calories burned. The product includes software to precision-tune workouts by automatically choosing music that matches the target heart rate profile. As an added bonus, it harvests energy directly from the audio microphone jack, eliminating the need for a battery or additional power source to charge the product.
The Intel CEO announced collaborations to increase dialogue and cooperation between the fashion and technology industries to explore and bring to market new smart wearable electronics. He also kicked off the Intel "Make it Wearable" challenge – a global effort aimed at accelerating creativity and innovation with technology. This effort will call upon the smartest and most creative minds to consider factors impacting the proliferation of wearable devices and ubiquitous computing, such as meaningful usages, aesthetics, battery life, security and privacy.
In addition to reference designs for wearable technology, Intel will offer a number of accessible, low-cost entry platforms aimed at lowering entry barriers for individuals and small companies, allowing them to create innovative web-connected wearables or other small form factor devices. Underscoring this point, Krzanich announced Intel Edison – a low-power, 22nm-based computer in an SD card form factor with built-in wireless abilities and support for multiple operating systems. From prototype to production, Intel Edison will enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when it becomes available this summer.
"Wearables are not everywhere today, because they aren't yet solving real problems and they aren't yet integrated with our lifestyles," said Krzanich. "We're focused on addressing this engineering innovation challenge. Our goal is: if something computes and connects, it does it best with Intel inside."
Krzanich also discussed how Intel is addressing a critical issue for the industry as a whole: conflict minerals from the Democratic Republic of the Congo (DRC). Intel has achieved a critical milestone: the minerals used in microprocessor silicon and packages manufactured in Intel's factories are now "conflict-free", as confirmed by third-party audits.
"Two years ago, I told several colleagues that we needed a hard goal, a commitment to reasonably conclude that the metals used in our microprocessors are conflict-free," Krzanich said. "We felt an obligation to implement changes in our supply chain to ensure that our business and our products were not inadvertently funding human atrocities in the Democratic Republic of the Congo. Even though we have reached this milestone, it is just a start. We will continue our audits and resolve issues that are found."