

5th November 2015

First AI-based scientific search engine will accelerate research process

A new search engine – Semantic Scholar – uses artificial intelligence to transform the research process for computer scientists.




The Allen Institute for Artificial Intelligence (AI2) this week launches its free Semantic Scholar service, which allows scientific researchers to quickly cull through the millions of scientific papers published each year to find those most relevant to their work. Leveraging AI2's expertise in data mining, natural-language processing and computer vision, Semantic Scholar provides an AI-enhanced way to quickly search and discover information. At launch, the system searches over three million computer science papers, and will add new scientific categories on an ongoing basis.

"No one can keep up with the explosive growth of scientific literature," said Dr. Oren Etzioni, CEO at AI2. "Which papers are most relevant? Which are considered the highest quality? Is anyone else working on this specific or related problem? Now, researchers can begin to answer these questions in seconds, speeding research and solving big problems faster."

With Semantic Scholar, computer scientists can:

• Home in quickly on what they are looking for, with advanced selection tools. Researchers can filter results by author, publication, topic, and date published. This gets the most relevant result in the fastest way possible, and reduces information overload.
• Instantly access a paper's figures and tables. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
• Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
• Be prompted with key phrases within each paper, to winnow the search further.




Using machine reading and vision methods, Semantic Scholar crawls the web – finding all PDFs of publicly available papers on computer science topics – extracting both text and diagrams/captions, and indexing it all for future contextual retrieval. Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of study and how influential its citations are. It provides the scientist with a simple user interface (optimised for mobile) that maps to academic researchers' expectations. Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
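The overall flow described above – crawl, extract, index, then filter and rank at query time – can be pictured with a short sketch. This is an illustrative outline only; the Paper fields, filter options and citation-based ranking are assumptions made for the example, not AI2's actual code or API.

```python
# Illustrative sketch of the crawl -> extract -> index -> filter/rank flow described above.
# The data model and ranking rule are assumptions for the example, not AI2's code.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    authors: list
    year: int
    topics: list
    citations: int
    key_phrases: list = field(default_factory=list)

def search(index, query, author=None, topic=None, year_from=None):
    """Filter indexed papers by the chosen facets, then rank by citation count."""
    results = [p for p in index if query.lower() in p.title.lower()]
    if author:
        results = [p for p in results if author in p.authors]
    if topic:
        results = [p for p in results if topic in p.topics]
    if year_from:
        results = [p for p in results if p.year >= year_from]
    return sorted(results, key=lambda p: p.citations, reverse=True)

index = [
    Paper("Neural word embeddings", ["A. Smith"], 2014, ["NLP"], 120),
    Paper("Word embeddings at scale", ["B. Jones"], 2015, ["NLP", "deep learning"], 45),
]
for paper in search(index, "embeddings", topic="NLP", year_from=2015):
    print(paper.title, paper.citations)
```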

Only a small number of free academic search engines are currently in widespread use. Google Scholar is by far the largest, with 100 million documents. However, researchers have noted problems with the current generation of these search engines.

"A significant proportion of the documents are not scholarly by anyone's measure," says Péter Jacsó, an information scientist at the University of Hawaii who identified a series of basic errors in search results from Google Scholar. While some of the issues have recently been fixed, says Jacsó, "there are still millions and millions of errors."

"Google has access to a lot of data. But there's still a step forward that needs to be taken in understanding the content of the paper," says Jose Manuel Gomez-Perez, who works on search engines and is director of research and development in Madrid for the software company Expert System.

Semantic Scholar builds on the foundation of current research paper search engines, adding AI methods to overcome information overload and paving the way for even more advanced and intelligent algorithms in the future.

"What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years' time, AI will be able to read – and more importantly, understand – scientific text," says Etzioni. "These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed. AI-based discovery engines will help find the answers to science's thorniest problems."





3rd November 2015

A shape-shifting, self-driving concept car by Nissan

A new futuristic concept car by Nissan has been unveiled at the 2015 Tokyo Motor Show.




At the Tokyo Motor Show 2015, Nissan Motor Company unveiled a concept vehicle that the company says embodies its vision for the future of autonomous driving and zero emission EVs: the Intelligent Driving System (IDS).

"Nissan's forthcoming technologies will revolutionise the relationship between car and driver, and future mobility," said Carlos Ghosn, Nissan president and CEO, presenting at the show. "Nissan Intelligent Driving improves a driver's ability to see, think and react. It compensates for human error, which causes more than 90% of all car accidents. As a result, time spent behind the wheel is safer, cleaner, more efficient and more fun."

After leading the development and expansion of EV technology, Nissan once again stands at the forefront of automotive technology. By integrating advanced vehicle control and safety technologies with cutting-edge artificial intelligence (AI), Nissan is among the leaders developing practical, real-world applications of autonomous driving. The company plans to include this technology on multiple vehicles by 2020, and progress is well on track to achieve this goal, said Ghosn.




Some have compared a future with autonomous drive to living in a world of conveyor belts that simply ferry people from point A to B, but the Nissan IDS promises a very different vision. Even when a driver selects Piloted Drive and turns over driving to the vehicle, the car's performance – from accelerating to braking to cornering – imitates the driver's own style and preferences.

In Manual Drive mode, the driver has control. The linear acceleration and cornering are pure and exhilarating. Yet behind the scenes, the Nissan IDS continues to provide assistance. Sensors constantly monitor conditions and assistance is available even while the driver is in control. In the event of imminent danger, the Nissan IDS will assist the driver in taking evasive action.

In addition to learning, the IDS concept's AI communicates like an attentive partner. From information concerning traffic conditions, the driver's schedule to personal interests, it has what is needed to create a driving experience that is comfortable, enjoyable and safe.




"A key point behind the Nissan IDS Concept is communication. For autonomous drive to become reality, as a society we have to consider not only communication between car and driver but also between cars and people. The Nissan IDS Concept's design embodies Nissan's vision of autonomous drive as expressed in the phrase together, we ride," says Mitsunori Morita, Design Director.

Together, we ride is demonstrated in the shape-shifting interior design: "The Nissan IDS Concept has different interiors, depending on whether the driver opts for Piloted Drive or Manual Drive. This was something that we thought was absolutely necessary to express our idea of autonomous drive," explains Morita.

In piloted self-driving mode, all four seats rotate inward, and the steering wheel recedes into the dashboard, giving the driver space to relax and making it easier to see and talk to other passengers. The interior, composed of natural materials such as mesh leather, is illuminated by soft light, adding a further layer of comfort that feels almost like a home living room.

"In every situation, it is about giving the driver more choices and greater control," Ghosn said at the show. "And the driver will remain the focus of our technology development efforts."




For autonomous drive to be widely accepted, people need to fully trust the technology. Through its innovative communication features, the Nissan IDS promotes confidence and a sense of harmony for those outside the car as well. Various exterior lights and displays convey to pedestrians and others the car's awareness of its surroundings and signals its intentions. The car's silver side body line, for example, is actually an LED that Nissan calls the Intention Indicator. If there are pedestrians or cyclists nearby, the strip shines red, signalling that the car is aware of them. Another electronic display, facing outside from the instrument panel, can flash messages such as "After you" to pedestrians.

Another feature of this electric vehicle is energy efficiency, with advanced aerodynamic performance for a greater driving range. The carbon fibre body is lightweight and constrained in height to sharply minimise aerodynamic drag, while the tires are designed to minimise air and roll resistance. The wheels have a layered form that creates tiny vortexes of air on their surface, which further contributes to smooth air flow. The Nissan IDS concept is fitted with a high-capacity 60 kWh battery.

"By the time Nissan Intelligent Driving technology is available on production cars, EVs will be able to go great distances on a single charge," says Mitsunori Morita, Design Director. "Getting to this point will, of course, require the further evolution of batteries – but aerodynamic performance is also very important. We incorporated our most advanced aerodynamic technology in the design of the Nissan IDS Concept."




At Nissan's annual shareholder meeting in June, Executive Vice President Hideyuki Sakamoto said: "Our zero emission strategy centres on EVs. We are pursuing improved electric powertrain technologies – such as motors, batteries and inverters – which will enable us to mass produce and market EVs that equal or surpass the convenience of gasoline-powered cars."

Other technologies on the Nissan IDS concept include "Piloted Park" that can be operated by smartphone or tablet, and wireless charging technologies. Through these, the driver can leave parking and charging to the car.

Self-driving, zero emission cars are clearly the future, and Nissan appears to be well-positioned for delivering this vision. The Nissan LEAF is the world's most popular electric vehicle, with 96% of customers willing to recommend the car to friends. Yesterday, the firm posted a rise of 37.4% in net income for the six months ending in September.

"Nissan has delivered solid revenue growth and improved profitability in the first half of the fiscal year, driven by encouraging demand for our vehicles in North America and a rebound in western Europe," said chief executive Carlos Ghosn.





19th October 2015

Robots that can pick and sort fruit

A new fruit-picking robot demonstrates an innovative solution to a complex automation challenge.


Credit: Cambridge Consultants


A robotics breakthrough by product design and development firm Cambridge Consultants is set to boost productivity across the food chain – from the field to the warehouse. Known as "Robocrop", it paves the way for robots to take on complex picking and sorting tasks involving irregular organic items – sorting fruit and vegetables, for example, or locating and removing specific weeds among crops in a field.

“Traditional robots struggle when it comes to adapting to deal with uncertainty,” says Chris Roberts, head of industrial robotics. “Our innovative blend of existing technologies and novel signal processing techniques has resulted in a radical new system design that is poised to disrupt the industry.”

Robot technology has been around for a long time, and robots are very good at doing the same thing over and over again within a controlled environment. Where they struggle is in doing nearly, but not quite, the same thing each time. Working within a changing environment – or performing tasks that need to vary from time to time – has traditionally been very challenging for robot systems.

Robots in car production lines, for example, can move metal parts weighing hundreds of kilograms from one place to another with sub-millimetre accuracy. This is simple for the robots – and the computers managing them – to achieve, since all the parts are identical and the positions never change.

Contrast this with the task of picking up fruit and vegetables in a warehouse. To succeed at this, robots must be able to work around people, cope with irregular items, and adapt to a changing environment. Designing a robot that is able to pick a number of different items like fruit requires many tasks to be performed – from recognising the correct objects and calculating what order to pick them in, to planning a grip, and the lifting and placing of items.


Credit: Cambridge Consultants


“Our world-class industrial sensing and control team has combined high-powered image-processing algorithms with low-cost sensors and commodity hardware to allow ‘soft’ control of robots when the task is not rigidly defined,” explains Roberts. “The system is capable of handling objects for which no detailed computer-aided design (CAD) model exists – a necessary step to using a robot with natural objects which, although they share some characteristics, are not identical.

“Our demonstration of the technology has fruit stacked randomly in a bowl – with our robot using machine vision and some smart software to identify which piece of fruit is on top. It translates this information into real-world co-ordinates and positions the ‘hand’ to pick the required fruit, whilst avoiding other objects. The custom-made hand adapts to the shape of the fruit and securely grips it without damaging it. Once picked, the fruit can also be sorted by colour so that, for example, red apples can be separated from green apples.

“The robot system demonstrates what is possible when you bring together experts from different fields to solve a problem. We’ve combined our programming, electronics and mechanical engineering expertise with our machine vision and robotics skills to demonstrate the kind of smart system that could transform a variety of industrial and commercial processes.”

Cambridge Consultants will be demonstrating its Robocrop technology at the Electronics Design Show, 21st-22nd October at the Ricoh Arena, Coventry, stand B5 – and at AgriTechnica, 10th-14th November, at the Messegelände in Hanover, Germany, hall 15, stand F13.







14th September 2015

DARPA prosthetic hand can "feel" physical sensations

Through DARPA, a 28-year-old paralysed man has become the first person to feel physical sensations through a prosthetic hand directly connected to his brain.




A 28-year-old man, paralysed for more than a decade as a result of a spinal cord injury, has become the first person to be able to "feel" physical sensations through a prosthetic hand directly connected to his brain. He can even identify which mechanical finger is being gently touched.

This advance, made possible by sophisticated neural technologies, has been developed under U.S. military agency DARPA's Revolutionising Prosthetics program that was first launched in 2006. It could lead to a future in which people living with paralysed or missing limbs will not only be able to manipulate objects by sending signals from their brain to robotic devices, but also be able to sense precisely what those devices are touching.

"We've completed the circuit," says DARPA program manager Justin Sanchez. "Prosthetic limbs that can be controlled by thoughts are showing great promise, but without feedback from signals travelling back to the brain it can be difficult to achieve the level of control needed to perform precise movements. By wiring a sense of touch from a mechanical hand directly into the brain, this work shows the potential for seamless bio-technological restoration of near-natural function."

The clinical work involved placing electrode arrays onto the paralysed volunteer's sensory cortex, the brain region responsible for identifying tactile sensations such as pressure. In addition, the team placed arrays on the volunteer's motor cortex, the part of the brain that directs body movements.

Wires were run from the arrays on the motor cortex to a mechanical hand developed by the Applied Physics Laboratory (APL) at Johns Hopkins University. That gave the volunteer – whose identity is withheld to protect his privacy – the capacity to control the hand's movements with his thoughts, a feat previously accomplished under the DARPA program by another person with similar injuries.

Then, breaking new technological ground, the researchers went on to provide the volunteer a sense of touch. The APL hand contains sophisticated torque sensors that can detect when pressure is being applied to any of its fingers, and can convert those physical "sensations" into electrical signals. The team used wires to route those signals to the arrays on the volunteer's brain.




In the very first set of tests, during which the researchers gently touched each of the hand's fingers while the volunteer was blindfolded, he was able to report with nearly 100% accuracy which mechanical finger was being touched. The feeling, he reported, was as if his own hand were being touched.

"At one point, instead of pressing one finger, the team decided to press two without telling him," said Sanchez. "He responded in jest asking whether somebody was trying to play a trick on him. That is when we knew that the feelings he was perceiving through the robotic hand were near-natural."

Sanchez described the basic findings at Wait, What? A Future Technology Forum, hosted by DARPA in St. Louis. Further details about the work are being withheld pending peer review and acceptance for publication in a scientific journal.

The restoration of sensation with implanted neural arrays is one of several neurotechnology-based advances emerging from DARPA's Biological Technologies Office, Sanchez said. Another major program is Restoring Active Memory (RAM), which seeks to develop brain interfaces to restore function to individuals living with memory loss from traumatic brain injury or complex neuropsychiatric illness.

"DARPA's investments in neurotechnologies are helping to open entirely new worlds of function and experience for individuals living with paralysis and have the potential to benefit people with similarly debilitating brain injuries or diseases," Sanchez added.







1st September 2015

Japan to open fully automated lettuce factory in 2017

Japanese factory operator SPREAD Co. has announced it will develop the world's first large-scale vegetable factory that is fully automated from seeding to harvest and capable of producing 30,000 heads of lettuce per day.


Credit: SPREAD Co.


SPREAD Co. was founded in 2006 and operates the world's largest vegetable factory using artificial lighting in Kameoka, Kyoto Prefecture. Four types of lettuce are currently produced, totalling 21,000 heads per day that are shipped to around 2,000 stores throughout the year.

As the company embarks on global expansion, it is now focussing on environmentally-friendly measures to be featured in the construction of a major next-generation vegetable factory. This new facility will be a vertical farm with total automation of the cultivation process from start to finish. It will cut labour costs by 50 percent, while energy costs will be reduced by 30 percent per head of lettuce through the use of artificial LED lighting specifically created for SPREAD, as well as the development of a unique air conditioning system. Up to 98 percent of water will be recycled onsite.

Thanks to indoor operations, this highly controlled environment will be unaffected by pests, temperature or weather conditions and will not require any chemical pesticides. Productivity per unit volume will be doubled in comparison to the company's existing factory in Kameoka, as a result of innovative efforts to save space in the cultivation area. Stacker machines will carry seedlings and hand them over to robots that will take care of transplanting them. Once fully grown, they will be harvested and delivered automatically to the packaging line.




The project will require up to 2 billion yen (US$16.7 million) of investment, which includes onsite R&D and testing facilities. The factory will have a total area of 4,400 square metres (47,400 sq ft) and be capable of producing 30,000 heads of lettuce per day. Construction is expected to start in spring 2016 with commercial operations beginning from summer 2017. The company is predicting annual sales of approximately 1 billion yen (US$8.4 million).
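For a sense of scale, the figures above imply roughly seven heads of lettuce per square metre of floor area per day, and annual sales of around 90 yen per head. A quick back-of-envelope check, using only the numbers quoted in the article:

```python
# Back-of-envelope check using only the figures quoted above.
heads_per_day = 30_000
floor_area_m2 = 4_400
annual_sales_yen = 1_000_000_000

print(round(heads_per_day / floor_area_m2, 1))            # ~6.8 heads per square metre per day
print(round(annual_sales_yen / (heads_per_day * 365)))    # ~91 yen of sales per head of lettuce
```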

SPREAD Co. has plans for major expansion. They intend to increase the scale of production to 500,000 heads of lettuce per day within five years and will continue expanding their franchise both domestically and internationally.





10th August 2015

First autonomous vessel to cross the ocean

Plans have been unveiled for the "Mayflower Autonomous Research Ship" (MARS), the world's first full-sized, fully autonomous unmanned ship to cross the Atlantic Ocean.




A pioneering project has been launched to design, build and sail the world’s first full-sized, fully autonomous unmanned ship across the Atlantic Ocean. The Mayflower Autonomous Research Ship, codenamed MARS, will be powered by state-of-the-art renewable energy technology, and will carry a variety of drones through which it will conduct experiments during the crossing.

MARS is being developed by a partnership of Plymouth University, autonomous craft specialists MSubs, and award-winning yacht designers Shuttleworth Design, and is expected to take two-and-a-half years to build. Following a year-long testing phase, the planned voyage in 2020 will also mark the 400th anniversary of the original Mayflower sailings from Plymouth to the North American continent.

Professor Kevin Jones, Executive Dean of the Faculty of Science and Engineering at the University, said: “MARS has the potential to be a genuine world-first, and will operate as a research platform, conducting numerous scientific experiments during the course of its voyage. And it will be a test bed for new navigation software and alternative forms of power, incorporating huge advancements in solar, wave and sail technology. As the eyes of the world follow its progress, it will provide a live educational resource to students – a chance to watch, and maybe participate in history in the making.”




Plymouth-based firm MSubs will be leading on the construction, using their expertise in building autonomous marine vessels for a variety of global customers. Managing Director Brett Phaneuf said the project would confront current regulations governing autonomous craft at sea, and confirmed that conversations had already been initiated with bodies such as the Maritime and Coastguard Agency and DNV GL, the international certification and classification society. 

“While advances in technology have propelled land and air-based transport to new levels of intelligent autonomy, it has been a different story on the sea,” Brett said. “The civilian maritime world has, as yet, been unable to harness the autonomous drone technology that has been used so effectively in situations considered unsuitable for humans. It begs the question, if we can put a rover on Mars and have it autonomously conduct research, why can't we sail an unmanned vessel across the Atlantic Ocean and, ultimately, around the globe? That's something we are hoping to answer with MARS.”

The concepts are being worked on by Isle of Wight-based Shuttleworth Design, and they will be preparing scale models for testing in the University's Marine Building. Many of the features of the trimaran are yet to be finalised, but it is expected to take advantage of advancements in solar panel technology to provide the energy required for its propulsion.




Orion Shuttleworth comments: “We want the vessel to really capture the imagination. It's of a scale unmatched by anything in the civilian world.”

The multi-million pound project is part of the University's 'Shape the Future' fundraising Campaign, recently launched at the House of Lords. Initial funding has been provided by the University, MSubs, and the ProMare Foundation, and corporate and private sponsorship will be sought for ongoing support. MARS will also create a large number of student internship opportunities for the University.

Christian Burden, Director of Development at the University, said: “MARS represents the very essence of the fundraising campaign we have recently launched at the University – not only does it reflect the values and characteristics of the University, but it is also a game changing project in every sense, one that will transform lives and have an enormous impact on the maritime and marine industries. MARS will be a multi-million pound project, providing benefit and value to many of our local partners who will be involved in building the ship. With the initial design phase underway, we are now seeking additional external sponsorship and philanthropy to help make this project become a reality – it's a once in a lifetime opportunity to be involved in a project like this, so we look forward to working with future supporters and partners.”







31st July 2015

Neural network is 10 times bigger than the previous world record

Digital Reasoning, a developer of cognitive computing, recently announced that it has trained the largest neural network in the world to date with a stunning 160 billion parameters. Google’s previous record was 11.2 billion, while the Lawrence Livermore National Laboratory trained a neural network with 15 billion parameters.




The results of Digital Reasoning’s research with deep learning and neural networks were published in the Journal of Machine Learning Research and on arXiv, alongside work from other notable companies like Google, Facebook, and Microsoft. They were presented at the prestigious 32nd International Conference on Machine Learning in Lille, France, earlier this month.

Neural networks are computer systems that are modelled after the human brain. Like the human brain, these networks can gather new data, process it, and react to it. Digital Reasoning’s paper, titled “Modelling Order in Neural Word Embeddings at Scale,” details both the impressive scale of its neural network and the marked improvement in quality it delivers.

In their research, Matthew Russell, Digital Reasoning’s Chief Technology Officer, and his team evaluated neural word embeddings on “word analogy” accuracy. Neural networks generate a vector of numbers for each word in a vocabulary, which allows the research team to do “word math.” For instance, “king” minus “man” plus “woman” yields “queen.” There is an industry-standard dataset of around 20,000 word analogies. Google's previous best on this benchmark was a 76.2% accuracy rate, meaning its system answered 76.2% of the word analogies correctly, while Stanford's best score is 75.0%. Digital Reasoning’s model achieves 85.8% accuracy, a nearly 40% reduction in error over both Google and Stanford and a substantial advance in the state of the art.
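The “word math” described above can be sketched in a few lines: each word maps to a vector, and the analogy answer is the vocabulary word whose vector lies closest to king minus man plus woman. The toy vectors below are invented purely for illustration; the final lines simply check the roughly 40% relative error reduction implied by the reported accuracy figures.

```python
import numpy as np

# Toy word vectors, invented purely to illustrate the "word math" idea.
vectors = {
    "king":   np.array([0.8, 0.9, 0.1]),
    "man":    np.array([0.7, 0.1, 0.1]),
    "woman":  np.array([0.7, 0.1, 0.9]),
    "queen":  np.array([0.8, 0.9, 0.9]),
    "prince": np.array([0.8, 0.5, 0.1]),
}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Return the word whose vector is closest to vectors[a] - vectors[b] + vectors[c]."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("king", "man", "woman"))   # -> "queen"

# Relative error reduction implied by the reported accuracies (76.2% vs 85.8%):
google_error, dr_error = 1 - 0.762, 1 - 0.858
print(round((google_error - dr_error) / google_error, 3))   # ~0.403, i.e. roughly 40%
```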

“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Russell. “This is a tremendous accomplishment for the company and marks an important milestone in putting a defensible stake in the ground towards our position as not just a thought leader in the space, but as an organisation that is truly advancing the state of the art in a rigorous, peer reviewed way.”





28th July 2015

Autonomous weapons: an open letter from AI and robotics researchers

The International Joint Conference on Artificial Intelligence (IJCAI) is currently taking place in Buenos Aires, Argentina. Today an open letter was officially announced at the conference, warning against the dangers of killer robots and a military AI arms race.


© Kgermolaev | Dreamstime.com


The letter is signed by over a thousand experts in the AI field. In addition to these researchers, the signatories also include high-profile names such as Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind's CEO Demis Hassabis, Professor Stephen Hawking, philosopher and cognitive scientist Daniel Dennett, and Noam Chomsky, who was voted the world's top public intellectual in a 2005 poll. It reads as follows:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.


The full list of signatories is available at http://futureoflife.org/AI/open_letter_autonomous_weapons.

This letter follows growing concern in recent years over the use of robots and AI. In 2013, a survey by the University of Massachusetts Amherst showed that a majority of Americans, across the political spectrum, opposed the outsourcing of lethal military and defence targeting decisions to machines.

Last year, a report by Human Rights Watch warned that fully autonomous weapons, or "killer robots," would jeopardise basic human rights, whether used in wartime or for law enforcement. Human Rights Watch is a founding member and coordinator of the Campaign to Stop Killer Robots – a coalition of 51 nongovernmental organisations calling for a preemptive ban on the development, production, and use of fully autonomous weapons.





24th July 2015

Deep Genomics creates deep learning technology to transform genomic medicine

Deep Genomics, a new technology start-up, was launched this week. The company aims to use deep learning and artificial intelligence to accelerate our understanding of the human genome.


Credit: Hui Y. Xiong et al./Science


Evolution has altered the human genome over hundreds of thousands of years – and now humans can do it in a matter of months. Faster than anyone expected, scientists have discovered how to read and write DNA code in a living body, using hand-held genome sequencers and gene-editing systems. But knowing how to write is different from knowing what to write. To diagnose and treat genetic diseases, scientists must predict the biological consequences of both existing mutations and those they plan to introduce.

Deep Genomics, a start-up company spun out of research at the University of Toronto, is on a mission to predict the consequences of genomic changes by developing new deep learning technologies.

“Our vision is to change the course of genomic medicine,” says Brendan Frey, the company’s president and CEO, who is also a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR). “We’re inventing a new generation of deep learning technologies that can tell us what will happen within a cell when DNA is altered by natural mutations, therapies or even by deliberate gene editing.”

Deep Genomics is the only company to combine more than a decade of world-leading expertise in both deep learning and genome biology. “Companies like Google, Facebook and DeepMind have used deep learning to hugely improve image search, speech recognition and text processing. We’re doing something very different. The mission of Deep Genomics is to save lives and improve health,” says Frey. CIFAR Senior Fellow Yann LeCun, the head of Facebook’s Artificial Intelligence lab, is also an advisor to the company.

"Our company, Deep Genomics, will change the course of genomic medicine. CIFAR played a crucial role in establishing the research network that led to our breakthroughs in deep learning and genomic medicine," Frey says.

Deep Genomics is now releasing its first product, called SPIDEX, which provides information about how hundreds of millions of DNA mutations may alter splicing in the cell, a process that is crucial for normal development. Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described earlier this year in the journal Science.

“The genome contains a catalogue of genetic variation that is our DNA blueprint for health and disease,” says CIFAR Senior Fellow Stephen Scherer, director of the Centre for Applied Genomics at SickKids and the McLaughlin Centre at the University of Toronto, and an advisor to Deep Genomics. “Brendan has put together a fantastic team of experts in artificial intelligence and genome biology – if anybody can decode this blueprint and harness it to take us into a new era of genomic medicine, they can.”

Until now, geneticists have spent decades experimentally identifying and examining mutations within specific genes that can be clearly connected to disease, such as the BRCA1 and BRCA2 genes for breast cancer. However, the number of mutations that could lead to disease is vast and most have not been observed before, let alone studied.

These mystery mutations pose an enormous challenge for current genomic diagnosis. Labs send the mutations they’ve collected to Deep Genomics, and the company uses their proprietary deep learning system, which includes SPIDEX, to ‘read’ the genome and assess how likely the mutation is to cause a problem. It can also connect the dots between a variant of unknown significance and a variant that has been linked to disease. “Faced with a new mutation that’s never been seen before, our system can determine whether it impacts cellular biochemistry in the same way as some other highly dangerous mutation,” says Frey.
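In outline, the workflow is: take a variant, predict how strongly it perturbs a cellular process such as splicing, and compare that predicted effect with variants already linked to disease. The snippet below is only a schematic of that idea; the scoring function, threshold and variant names are placeholders, not SPIDEX or Deep Genomics' actual model or data.

```python
# Schematic of "score a variant, then compare it with known disease-linked variants".
# predict_splicing_impact() stands in for a trained deep learning model; every value
# and variant name here is an invented placeholder, not SPIDEX output.

def predict_splicing_impact(variant):
    """Placeholder: a real system would run a model over the variant's sequence context."""
    toy_scores = {"chr1:12345A>G": 0.82, "chr7:55242C>T": 0.07}
    return toy_scores.get(variant, 0.0)

def assess(variant, disease_linked, threshold=0.5):
    score = predict_splicing_impact(variant)
    similar = [v for v, s in disease_linked.items() if abs(s - score) < 0.1]
    label = "likely disrupts splicing" if score >= threshold else "likely benign"
    return label, similar

disease_linked = {"chr17:41245466G>A": 0.79}   # invented reference entry
print(assess("chr1:12345A>G", disease_linked))
```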

Deep Genomics is committed to supporting publicly funded efforts to improve human health. “Soon after our Science paper was published, medical researchers, diagnosticians and genome biologists asked us to create a database to support academic research,” says Frey. “The first thing we’re doing with the company is releasing this database – that’s very important to us.”

“Soon, you’ll be able to have your genome sequenced cheaply and easily with a device that plugs into your laptop. The technology already exists,” explains Frey. “When genomic data is easily accessible to everyone, the big questions are going to be about interpreting the data and providing people with smart options. That’s where we come in.”

Deep Genomics envisions a future where computers are trusted to predict the outcome of experiments and treatments, long before anyone picks up a test tube. To realise that vision, the company plans to grow its team of data scientists and computational biologists. Deep Genomics will continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA, from transcription and splicing to polyadenylation and translation. Building a thorough understanding of these processes has massive implications for genetic testing, pharmaceutical research and development, personalised medicine and improving human longevity.





24th July 2015

New computer program is first to recognise sketches more accurately than a human

Researchers from Queen Mary University of London (QMUL) have built the first computer program that can recognise hand-drawn sketches better than humans.




Known as Sketch-a-Net, the program correctly identifies the subject of sketches 74.9 per cent of the time, compared with humans, who managed a success rate of only 73.1 per cent. As sketching becomes more relevant with the increasing use of touchscreens, this development could provide a foundation for new ways to interact with computers.

Touchscreens could understand what you are drawing – enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories. This improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research, which was accepted at the British Machine Vision Conference, also showed that the program performed better at determining finer details in sketches. For example, it was able to successfully distinguish the specific bird variants ‘seagull’, ‘flying-bird’, ‘standing-bird’ and ‘pigeon’ with 42.5 per cent accuracy, compared with humans, who achieved only 24.8 per cent.




Sketches are very intuitive to humans and have been used as a communication tool for thousands of years, but recognising free-hand sketches is challenging because they are abstract, varied and consist of black and white lines rather than coloured pixels like a photo. Solving sketch recognition will lead to a greater scientific understanding of visual perception.

Sketch-a-Net is a ‘deep neural network’ – a type of computer program designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, especially the order in which the strokes were drawn. This information was previously ignored, yet it is particularly important for understanding drawings on touchscreens.
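One way to picture how stroke order can be presented to a network is to rasterise the sketch into several channels that accumulate the strokes in the order they were drawn, so earlier strokes appear in more channels than later ones. The snippet below illustrates that general idea with plain NumPy; it is a simplified illustration, not the actual Sketch-a-Net architecture or preprocessing code.

```python
import numpy as np

def strokes_to_channels(strokes, size=32, n_channels=3):
    """Rasterise an ordered list of strokes into channels that accumulate strokes in
    drawing order, so a network can see which lines came first (illustrative only)."""
    channels = np.zeros((n_channels, size, size), dtype=np.float32)
    per_group = max(1, -(-len(strokes) // n_channels))   # ceiling division
    for k in range(n_channels):
        # Channel k contains every stroke drawn up to and including group k.
        for stroke in strokes[: per_group * (k + 1)]:
            for x, y in stroke:
                channels[k, y, x] = 1.0
    return channels

# Two toy strokes, each a list of (x, y) pixel coordinates in drawing order.
sketch = [[(5, 5), (6, 6), (7, 7)], [(20, 5), (20, 6), (20, 7), (20, 8)]]
print(strokes_to_channels(sketch).sum(axis=(1, 2)))   # later channels hold more strokes
```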




Timothy Hospedales, co-author of the study and Lecturer in the School of Electronic Engineering and Computer Science, QMUL, said: “It’s exciting that our computer program can solve the task even better than humans can. Sketches are an interesting area to study because they have been used since pre-historic times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again. This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”

The paper, 'Sketch-a-Net that Beats Humans' by Q. Yu, Y. Yang, Y. Song, T. Xiang and T. Hospedales, will be presented at the 26th British Machine Vision Conference on Tuesday 8th September 2015.







18th June 2015

World's most lifelike bionic hand will transform the lives of amputees

A congenital amputee from London has become the first user in the UK to be fitted with a new prosthetic hand that launches this week and sets a new benchmark in small myoelectric hands.




Developed using Formula 1 technology and scaled specifically for women and teenagers, the bebionic small hand is built around an accurate skeletal structure with miniaturised components designed to provide the most true-to-life movements.

The bebionic small hand, developed by prosthetic experts Steeper, will enable fundamental improvements in the lives of thousands of amputees across the world. The hand marks a turning point in the world of prosthetics, as it closely mimics the functions of a real hand via 14 different precision grips. A bionic extension of the arm offering the utmost dexterity, it will allow amputees to engage in a range of activities that would previously have been complex and unmanageable.

Nicky Ashwell, 29, born without a right hand, received Steeper's latest innovation at a fitting by the London Prosthetics Centre, a private facility providing expert services in cutting-edge prosthetics. Before being fitted with the bebionic small hand, Nicky used a cosmetic hand without movement; as a result, she learned to carry out tasks with one hand. The bebionic small hand has been a major improvement to Nicky's life, enabling her to do things previously impossible with one hand, such as riding a bike, gripping weights with both hands, using cutlery and opening her purse.

Nicky, who is a Product Manager at an online fashion forecasting and trend service, said: "When I first tried the bebionic small hand it was an exciting and strange feeling; it immediately opened up so many more possibilities for me. I realised that I had been making life challenging for myself when I didn't need to. The movements now come easily and look natural; I keep finding myself being surprised by the little things, like being able to carry my purse while holding my boyfriend's hand. I've also been able to do things never before possible like riding a bike and lifting weights."




The bebionic small hand works using sensors triggered by the user's muscle movements, which connect to individual motors in each finger and powerful microprocessors. The technology comprises a unique system that tracks and senses each finger through its every move – mimicking the functions of a real hand. Its development follows seven years of research and manufacturing, including the use of Formula 1 techniques and military technology, along with advanced materials such as aerograde aluminium and rare-earth magnets.
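In general terms, a myoelectric hand maps the level of a muscle (EMG) signal onto a grip pattern, and the microprocessor then drives each finger motor to the closure that pattern requires. The loop below is a generic, simplified illustration of that principle; the thresholds, grip names and motor interface are placeholders, not Steeper's firmware.

```python
# Generic sketch of a myoelectric control loop: EMG level -> grip selection -> finger motors.
# All thresholds, grip patterns and the print() motor stand-in are illustrative placeholders.

GRIPS = {
    "relaxed": [0.0, 0.0, 0.0, 0.0, 0.0],   # per-finger closure: 0 = open, 1 = fully closed
    "pinch":   [0.9, 0.9, 0.1, 0.1, 0.1],
    "power":   [1.0, 1.0, 1.0, 1.0, 1.0],
}

def select_grip(emg_level):
    """Map a normalised EMG reading (0..1) to one of the stored grip patterns."""
    if emg_level < 0.2:
        return "relaxed"
    return "pinch" if emg_level < 0.7 else "power"

def drive_fingers(closures):
    for finger, closure in enumerate(closures):
        print(f"finger {finger}: drive motor to {closure:.1f}")   # stand-in for a motor command

for reading in (0.1, 0.5, 0.9):          # simulated EMG samples
    grip = select_grip(reading)
    print("grip:", grip)
    drive_fingers(GRIPS[grip])
```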

Ted Varley, Technical Director at Steeper said, "Looking to the future, there's a trend of technology getting more intricate; Steeper has embraced this and created a smaller hand with advanced technology that is suitable for women and teenagers. An accurate skeletal structure was firstly developed, with the complex technology then specifically developed to fit within this in order to maintain anatomical accuracy. In other myoelectric hands the technology is developed first, at the expense of the lifelikeness."

Bebionic small hand at a glance:
• Contains 337 mechanical parts
• 14 grip patterns and hand positions to allow a range of precision movements
• Weighs approximately 390g – the same as a large bar of Galaxy chocolate
• 165mm from base to middle fingertip – the size of an average woman's hand
• Strong enough to handle up to 45kg – around the same as 25 bricks
• The only multi-articulated hand with patented finger control system using rare Earth magnets
• Specifically designed with women, teenagers and smaller-framed men in mind







30th May 2015

Cheetah robot can jump over obstacles

Engineers at the Massachusetts Institute of Technology (MIT) have developed a new version of the Cheetah robot, which is able to leap over obstacles while running at high speed. The eerily lifelike machine uses a laser distance sensor and real-time algorithms to perceive its environment. In this demonstration video, it is shown hurdling objects up to 40cm (16") in height, and performing multiple jumps without a safety harness.

"A running jump is a truly dynamic behaviour," says Sangbae Kim, assistant professor of mechanical engineering, in a press release. "You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviours."

In the future, this robot – and others like it – may serve important functions in the military. They could scout ahead of soldiers to provide real-time information on the battlefield, for example, or relieve troops of the burden of carrying ammunition, food, medical supplies, batteries and other equipment. These machines could also be useful in search and rescue operations, able to access difficult or remote terrain that would defeat other types of vehicle.







28th May 2015

Robot masters new skills through trial and error

Researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a significant milestone in the field of artificial intelligence.


UC Berkeley researchers (from left to right) Chelsea Finn, Pieter Abbeel, BRETT, Trevor Darrell and Sergey Levine (Photo courtesy of UC Berkeley Robot Learning Lab).


Researchers at the University of California, Berkeley, have demonstrated a new type of reinforcement learning for robots. This allows a machine to complete various tasks without pre-programmed details about its surroundings – such as putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more.

“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel, Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”

The latest developments are presented today, Thursday 28th May, at the International Conference on Robotics and Automation in Seattle. The work is part of a new People and Robots Initiative at UC’s Centre for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.

“Most robotic applications are in controlled environments, where objects are in predictable positions,” says UC Berkeley faculty member Trevor Darrell, who is leading the project with Abbeel. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates. Instead, the researchers turned to a new branch of AI known as deep learning. This is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.




“For all our versatility, humans are not born with a repertoire of behaviours that can be deployed like a Swiss army knife, and we do not need to be programmed,” explains postdoctoral researcher Sergey Levine, a member of the research team. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”

In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognise patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program, or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition. Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

“Moving about in an unstructured 3D environment is a whole different ballgame,” says Ph.D. student Chelsea Finn, another team member. “There are no labelled directions, no examples of how to solve the problem in advance. There are no examples of the correct solution like one would have in speech and vision recognition programs.”

In their experiments, the researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks. They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT’s learning included a "reward" function that provided a score based on how well the robot was doing with the task.




BRETT takes in the scene including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based on the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that don't. The score feeds back through the neural net, so the robot can "learn" which movements are better for the task at hand. This end-to-end training process underlies the robot’s ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn.
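The loop described here – try a movement, receive a reward score, adjust toward higher-scoring movements – can be illustrated with a toy example. The sketch below is a simple trial-and-error (hill-climbing) update on a single number standing in for the policy; it shows the act-score-update cycle only, not the Berkeley team's actual algorithm, which trains a 92,000-parameter neural network end to end.

```python
import random

# Toy trial-and-error learning loop: the "policy" is one number (a target position),
# and the reward is higher the closer an attempted movement lands to the goal.
# This illustrates the act -> score -> update cycle only, not the Berkeley algorithm.

goal = 0.8                                # where the task counts as "completed"
policy_mean, step_size, noise = 0.0, 0.5, 0.2

def reward(action):
    return -(action - goal) ** 2          # best possible reward is 0, exactly at the goal

for trial in range(200):
    action = policy_mean + random.gauss(0, noise)           # explore around the current policy
    if reward(action) > reward(policy_mean):                # did the exploration score better?
        policy_mean += step_size * (action - policy_mean)   # move the policy toward it

print(round(policy_mean, 2))              # ends up close to 0.8
```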

When given the relevant coordinates for the beginning and end of the task, the PR2 can master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.

Abbeel says the field will likely see big improvements as the ability to process vast amounts of data increases: “With more data, you can start learning more complex things. We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”





7th May 2015

The first licensed autonomous driving truck in the US

Vehicle manufacturer Daimler this week announced that its Freightliner Inspiration Truck has become the world's first autonomous truck to be granted a licence for road use in the State of Nevada.




In July last year, Daimler provided the world's first demonstration of an autonomous truck in action, when the Mercedes-Benz Future Truck 2025 drove along a cordoned-off section of the A14 autobahn near Magdeburg, Germany. Engineers then transferred the system to the US brand Freightliner and created the Inspiration Truck – modified for use on American highways. The result: the State of Nevada has certified no fewer than two Freightliner Inspiration Trucks for regular operations on public roads. Governor Brian Sandoval handed over the official Nevada licence plates during a ceremony at the Las Vegas Motor Speedway.

This futuristic vehicle is based on the existing Freightliner Cascadia model, but has the addition of "Highway Pilot" technology. The latter combines a sophisticated stereo camera and radar technology with systems providing lane stability, collision avoidance, speed control, braking, steering and an advanced dash display, allowing for safe autonomous operation on public highways. These components were extensively tested. As part of the truck's so-called Marathon Run, it covered over 10,000 miles (16,000 km) on a test circuit in Papenburg, Germany.




The radar unit in the front bumper scans the road ahead at both long and short range. The long-range radar, with a range of 820 feet and scanning an 18° segment, looks far and narrow to see vehicles ahead. The short-range radar, with a range of 230 feet and scanning a 130° segment, looks wider to see vehicles that might cut in front of the truck.

There is also a medium-range stereo camera, which is located behind the windscreen. The range of this camera is 328 feet, and it scans an area measuring 45° horizontal by 27° vertical. This camera is able to recognise lane markings and communicates to the Highway Pilot steering gear for autonomous lane guidance.

In addition, tiny cameras are located on the exterior of the truck. These reduce blind spots and are capable of replacing exterior mirrors, while creating a slight boost in fuel efficiency (1.5 percent).
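Taken together, the radar and camera figures above define overlapping fields of view around the front of the truck. A simple geometric check of whether a detected object falls inside a given sensor's range and angular segment looks something like the sketch below; the coverage numbers are the ones quoted in the article, while the code itself is purely illustrative, not Daimler's software.

```python
import math

# Sensor coverage quoted above: (maximum range in feet, horizontal field of view in degrees).
SENSORS = {
    "long_range_radar":  (820, 18),
    "short_range_radar": (230, 130),
    "stereo_camera":     (328, 45),
}

def covered_by(x, y, sensor):
    """True if a point (x, y) in feet, with the truck facing along +x, lies inside
    the sensor's range and horizontal field of view."""
    max_range, fov_deg = SENSORS[sensor]
    distance = math.hypot(x, y)
    bearing = abs(math.degrees(math.atan2(y, x)))
    return distance <= max_range and bearing <= fov_deg / 2

print(covered_by(600, 20, "long_range_radar"))    # far ahead, nearly straight on -> True
print(covered_by(100, 90, "short_range_radar"))   # close but well off to the side -> True
print(covered_by(600, 20, "stereo_camera"))       # beyond the camera's 328 ft range -> False
```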




The vehicle operates safely under a wide range of conditions – it will automatically comply with posted speed limits, regulate the distance from the vehicle ahead and use the stop-and-go function during rush hour. The driver can deactivate the Highway Pilot manually and is able to override the system at any time. If the vehicle is no longer able to process crucial aspects of its environment, e.g. due to road construction or bad weather, the driver is prompted to retake control.

A large, state-of-the-art dash interface, combined with video displays from the various cameras, is designed to offer a great driver experience and to vastly improve the way data from the truck's performance is communicated to the driver. Highway Pilot informs the driver visually on its current status and also accepts commands from the driver.




According to U.S. government data, 90 percent of truck crashes involve human error – much of that due to fatigue. Wolfgang Bernhard, a member of the Board of Management at Daimler, commented: "An autonomous system never gets tired, never gets distracted. It is always on 100 percent."

For now, the Inspiration Trucks will be limited to Nevada, one of the lowest-density states in the country, but other states are likely to create similar regulations in the future, with California and Michigan expected to follow soon: "Ultimately, this has to be federally regulated to have a consistent basis across the country," says Martin Daum, president and CEO of Daimler Trucks North America.

The Inspiration Truck is only semi-autonomous, as it requires a human behind the wheel, who can take over in case of an emergency. The technology is advancing rapidly, however. Daimler and other manufacturers, including Nissan and Tesla, are planning to introduce fully autonomous vehicles (with no human driver on board) during the early 2020s. Worldwide, freight traffic shipped by road is predicted to triple by 2050, with self-driving vehicles expected to play an ever-increasing role in transportation.

Eventually, these autonomous vehicles will be intelligently connected – to their environment and other road users – to such an extent that they will be able to avoid areas with heavy traffic and contribute to reducing traffic jams. Traffic of the future will flow more smoothly and be far more predictable. Traffic systems will be more flexible and the infrastructure will be utilised better. Transport firms will operate more profitably, with fuel savings alongside lower maintenance costs as a result of less wear on the vehicle components, due to a more constant flow of traffic. Most importantly of all, road safety will be hugely improved – with many thousands of deaths prevented each year.







18th April 2015

World's first robotic kitchen to debut in 2017

Moley Robotics has unveiled an automated kitchen system, able to scan and replicate the movements of a human chef to produce recipes.




The world's first automated kitchen system was unveiled this week at Hannover Messe in Germany – the premier industrial robotics show. Developed by tech firm Moley Robotics, it features a dexterous robot integrated into a kitchen that cooks with the skill and flair of a master chef.

The company's goal is to produce a consumer version within two years, supported by an iTunes-style library of recipes that can be downloaded and created by the kitchen. The prototype at the exhibition is the result of two years' development and the collaboration of an international team, including Sebastian Conran, who designed the cooking utensils, and Mauro Izzo, DYSEGNO and the Yachtline company, who created the futuristic kitchen furniture.

Two complex, fully articulated hands, made by the Shadow Robot Company, comprise the kitchen's key enabling technology. The product of 18 years' research and development, Shadow's hands are used in the nuclear industry and by NASA. Able to reproduce the movements of a human hand with astonishing accuracy, they underpin the unique capability of the automated kitchen.




The Moley Robotics system works by capturing human skills in motion. Tim Anderson – culinary innovator and winner of the BBC Master Chef competition – played an integral role in the kitchen's development. He first developed a dish that would test the system's capabilities – a crab bisque – and was then 3D recorded at a special studio cooking it. Every motion and nuance was captured, from the way Tim stirred the liquids to the way he controlled the temperature of the hob. His actions were then translated into elegant digital movement, using bespoke algorithms. The robot doesn't just cook like Tim – in terms of skill, technique and execution it is Tim producing the dish. The kitchen even 'signs off' its work with an 'OK' gesture – just as the chef does.
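At its simplest, "capturing human skills in motion" means recording a time-stamped sequence of joint positions and later stepping the robot's own joints through the same trajectory. The sketch below shows that record-and-replay idea in its barest form; the joint names and the set_joint() stand-in are invented for illustration, and Moley's real system adds vision, sensing and far more sophisticated motion processing.

```python
import time

# Minimal record-and-replay sketch: store time-stamped joint angles from a demonstration,
# then step the robot through the same trajectory. Joint names and set_joint() are
# illustrative placeholders, not Moley's motion-capture pipeline.

recording = [
    # (seconds from start, {joint: angle in degrees})
    (0.0, {"shoulder": 10, "elbow": 45, "wrist": 0}),
    (0.5, {"shoulder": 20, "elbow": 60, "wrist": 15}),
    (1.0, {"shoulder": 25, "elbow": 75, "wrist": 30}),
]

def set_joint(name, angle):
    print(f"{name} -> {angle} deg")       # stand-in for a real motor command

def replay(trajectory):
    start = time.time()
    for timestamp, pose in trajectory:
        while time.time() - start < timestamp:
            time.sleep(0.01)              # wait until this keyframe is due
        for joint, angle in pose.items():
            set_joint(joint, angle)

replay(recording)
```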

"To be honest, I didn't think this was possible," he said. "I chose crab bisque as a dish because it's a real challenge for a human chef to make well – never mind a machine. Having seen – and tasted – the results for myself, I am stunned. This is the beginning of something really significant: a whole new opportunity for producing good food and for people to explore the world's cuisines. It's very exciting."

Moley Robotics, headquartered in the UK, is now working to scale the technology ready for mass production and installation in regular-sized kitchens. Future iterations will be more compact, with smaller control arms but with added functionality in the form of a built-in refrigerator and dishwasher to complement a professional-grade hob and oven.

The company is working with designers, homebuilders, kitchen installers and food suppliers to promote the system. The mass-market product will be supported by a digital library of over 2,000 dishes when it launches in 2017 and it is envisaged that celebrity chefs will embrace 3D cooking downloads as an appealing addition to the cook book market. Home chefs will be able to upload their favourite recipes too, and so help create the 'iTunes' for food.




Moley Robotics was founded by London-based computer scientist, robotics and healthcare innovator Mark Oleynik. The company's aim is to produce technologies that address basic human needs and improve day-to-day quality of life.

"Whether you love food and want to explore different cuisines, or fancy saving a favourite family recipe for everyone to enjoy for years to come, the Automated Kitchen can do this," says Oleynik. "It is not just a labour saving device – it is a platform for our creativity. It can even teach us how to become better cooks!"

The robotic hands demonstrated this week offer a glimpse of the not-too-distant future, when even greater advances in movement, flexibility, touch and object recognition will have been achieved. Experts believe that near-perfect recreations of human hands, operating in a wide variety of environments, will be possible in just 10 years' time.







