30th May 2016

MasterCard unveils the first commerce application for humanoid robot Pepper

Customers at Pizza Hut restaurants in Asia will soon get the chance to have their order taken by a robot.

 


 

MasterCard has unveiled the first commerce application for SoftBank Robotics' humanoid robot Pepper. The application will be powered by MasterPass, the global digital payment service from MasterCard that connects consumers with merchants, enabling them to make fast, simple, and secure digital payments across channels and devices. Pizza Hut Restaurants Asia P/L will be the inaugural launch partner working together with MasterCard to create innovative customer engagement with Pepper.

A major first step in bringing conversational commerce experiences to merchants and consumers, this new app will extend the robot's ability to integrate customer service, access to information and sales into a seamless and consistent user experience. Pizza Hut Asia will be piloting the Pepper robot for order-taking and personalised engagement in its stores by the end of 2016.

"Consumers have come to expect personalised service, customised offers, and simple and seamless processes both in-store and online," said Tobias Puehse, Vice President for Innovation Management, Digital Payments & Labs at MasterCard. "The app's goal is to provide consumers with a more memorable and personalised shopping experience beyond today's self-serve machines and kiosks, by combining Pepper's intelligence with a secure digital payment experience via MasterPass."

 


 

The robot will be installed in "between six and ten stores in Asia this year," said John Sheldon, Global SVP, Innovation Management, MasterCard Labs. Pepper can speak 19 languages and will "add more intelligence to kiosk ordering. Pepper guides you through the process of placing the order and can answer nutritional questions and communicate any specials."

A customer will be able to initiate an engagement by simply greeting Pepper and pairing their MasterPass account, either by tapping the Pepper icon within the wallet or by scanning a QR code on the tablet that the robot holds. After pairing with MasterPass, Pepper can assist cardholders by providing personalised recommendations and offers, additional information on products, or assistance in checking out and paying for items. Pepper initiates, approves and completes each transaction by connecting to MasterPass over Wi-Fi, and the entire transaction happens within the wallet.
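Pieced together from that description, the flow amounts to: pair via QR code, build an order conversationally, then hand payment off to the wallet. The minimal sketch below illustrates that sequence; every class and method name is a hypothetical stand-in, not MasterCard's actual MasterPass API.

```python
# Hypothetical sketch of the pairing-and-checkout flow described above.
# None of these names come from MasterCard's real API.
import secrets

class MasterPassWallet:
    """Stand-in for a customer's MasterPass wallet app."""
    def __init__(self, cardholder):
        self.cardholder = cardholder

    def approve(self, amount, merchant):
        # In reality, approval happens inside the wallet app on the
        # customer's own device; here we simply simulate consent.
        print(f"{self.cardholder} approves {amount:.2f} to {merchant}")
        return True

class PepperCommerceSession:
    """One customer interaction, from greeting to payment."""
    def __init__(self, merchant):
        self.merchant = merchant
        self.wallet = None
        self.qr_token = secrets.token_hex(8)  # shown on Pepper's tablet

    def pair(self, wallet, scanned_token):
        # Pairing succeeds only if the customer scanned this session's QR code.
        if scanned_token == self.qr_token:
            self.wallet = wallet
            return True
        return False

    def checkout(self, order_total):
        if self.wallet is None:
            raise RuntimeError("No wallet paired")
        # Pepper relays the request over Wi-Fi; the transaction itself
        # completes within the customer's wallet, as the article notes.
        return self.wallet.approve(order_total, self.merchant)

session = PepperCommerceSession("Pizza Hut")
wallet = MasterPassWallet("Alice")
if session.pair(wallet, session.qr_token):
    session.checkout(23.90)
```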

Pepper has a number of human-like features. The robots "are intentionally designed to convey emotion," using sensors and cameras "to interpret the emotional state of the person they are interacting with and the cameras that it's using are evaluating the behaviour." For example, if the customer is excited and animated, so, too, would be Pepper. If the customer's movements are more muted, "then it would instead respond with a lot calmer and smaller gestures, so as to put that person at ease." If the customer gives his or her permission, the robot can remember their order history and ask if they want the same food or drink this time.

"We are excited to welcome Pepper to the Pizza Hut family," said Vipul Chawla, Managing Director of Pizza Hut Restaurants Asia. "Core to our digital transformation journey is the ability to make it easier for customers to engage, connect and transact with Pizza Hut. With an order-and-payment-enabled Pepper, customers can now come to expect personalised ordering, reduce wait time for carryout, and have a fun, frictionless user experience."

 

 

 

---
 

 

 

10th May 2016

Robotic diver can improve deep sea exploration

Stanford University has revealed "OceanOne", an underwater robot with artificial intelligence and haptic feedback able to move around the seabed using thrusters.

 

OceanOne. Credit: Frederic Osada and Teddy Seguin/DRASSM/Stanford University

 

Pictured here is OceanOne, a new underwater humanoid robot designed and built by scientists at Stanford University. Described as a "robotic mermaid", it is roughly five feet in length, with two fully articulated arms, a head featuring twin camera eyes, and a tail section housing batteries, computers and eight multi-directional thrusters.

The machine can function as a remote avatar for a human pilot, diving to depths that would be too dangerous for most people. It is remotely controlled using a set of joysticks and its human-like stereoscopic vision shows the pilot exactly what the robot is seeing. It is outfitted with haptic force feedback and an artificial brain – in essence, a virtual diver.

The pilot can take control at any moment, but usually won't need to lift a finger. Sensors throughout the robot gauge currents and turbulence, automatically activating thrusters to keep the robot in place. And even as the body moves, quick-firing motors adjust the arms to keep its hands steady as it works. Navigation relies on perception of the environment, from both sensors and cameras, and these data run through smart algorithms that help OceanOne avoid collisions. If it senses that its thrusters won't slow it down quickly enough, it can quickly brace for impact with its arms, an advantage of its humanoid build.
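The station-keeping behaviour described here is, at heart, a feedback-control loop: estimate drift from the hold position, then command the thrusters to cancel it. Below is a minimal one-dimensional sketch of such a loop using a PID controller; the gains, vehicle mass and current force are invented for illustration and bear no relation to OceanOne's actual control software.

```python
# Toy 1-D station-keeping loop: a steady current pushes the robot off
# station, and a PID controller drives the thrusters to hold position.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.05                       # 20 Hz control loop
controller = PID(kp=400.0, ki=40.0, kd=240.0, dt=dt)
position, velocity = 0.0, 0.0   # drift along one axis, metres
current_force = 2.0             # steady current pushing the robot, N (invented)
mass = 200.0                    # assumed vehicle mass, kg

for step in range(400):         # simulate 20 seconds
    error = 0.0 - position              # hold position 0
    thrust = controller.update(error)   # thruster command, newtons
    accel = (thrust + current_force) / mass
    velocity += accel * dt
    position += velocity * dt

print(f"residual drift after 20 s: {position:.4f} m")
```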

"OceanOne will be your avatar," says Oussama Khatib, Professor of Computer Science and Mechanical Engineering at Stanford. "The intent here is to have a human diving virtually, to put the human out of harm's way. Having a machine that has human characteristics that can project the human diver's embodiment at depth is going to be amazing."

 

The wreck of La Lune, sunk in 1664. Credit: Frederic Osada and Teddy Seguin/DRASSM/Stanford University

 

Indeed, Khatib recently used this robot to explore the wreck of La Lune, the flagship of King Louis XIV, which sank in 1664 some 20 miles (32 km) off the southern coast of France. Lying 100 metres below the surface of the Mediterranean, the ruins – and the many treasures and artifacts the ship had once carried – had remained untouched by humans in the centuries since.

With guidance from a team of deep-sea archaeological experts who had studied the site, Khatib spotted a grapefruit-sized vase. He was able to hover precisely over the vase, reach out, feel its contours and weight (via the haptic feedback) and stick a finger inside to get a better grip – all while sitting comfortably in a boat using joysticks to control OceanOne. The vase was placed gently in a recovery basket and brought back to the surface. When the vase returned to the boat, Khatib was the first person to touch it in hundreds of years. It was in remarkably good condition, though it showed every day of its time underwater: the surface was covered in ocean detritus, and smelled like raw oysters.

 


Vase recovered by OceanOne. Credit: Frederic Osada and Teddy Seguin/DRASSM/Stanford University

 

The expedition to La Lune was OceanOne's maiden voyage. Based on its astonishing success, Khatib hopes the robot will one day take on highly skilled underwater tasks, opening up a whole new realm of ocean exploration.

"We connect the human to the robot in a very intuitive and meaningful way," Khatib said. "The human can provide intuition, expertise and cognitive abilities to the robot. The two bring together an amazing synergy. The human and robot can do things in areas too dangerous for a human, while the human is still there."

Earth is not the only world in our Solar System with oceans. In the distant future, machines like OceanOne could perhaps be used to virtually explore ocean-bearing moons such as Europa, Enceladus or Titan.

 

 

 

---
 

 

 

16th April 2016

Artificial intelligence finds cancer cells more efficiently

By using a laser at nanosecond speeds, in combination with deep learning algorithms, a new microscope detects cancer cells more efficiently than standard methods.

 


 

Scientists at the University of California, Los Angeles (UCLA) have developed a new technique for identifying cancer cells in blood samples, faster and more accurately than current standard methods.

One common approach to testing for cancer involves doctors adding biochemicals to blood samples. These biochemicals attach biological "labels" to cancer cells, which enable instruments to detect and identify them. However, the biochemicals can damage cells and render the samples unusable for future analyses. Other techniques are available that don't use labelling, but these can be inaccurate, because they only identify cancer cells based on a single physical characteristic.

The new technique, demonstrated by the California NanoSystems Institute at UCLA, images cells without destroying them. Not only that, but it can identify up to 16 physical characteristics – including size, granularity and biomass – instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, for rapidly imaging cells in blood samples, and a deep learning program that identifies cancer cells with over 95 percent accuracy.

The "photonic time stretch" was invented by Professor Barham Jalali, who holds a patent for this technology, and its use in microscopes is just one of many possible applications. It works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly – in nanoseconds, or billionths of a second – that the images would be too weak to be detected and too fast to be digitised by normal instrumentation. The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitised at a rate of 36 million images per second. It then uses deep learning to distinguish the cancer cells from healthy white blood cells. Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data, with the goal of achieving accurate decision making.

 


Time-stretch quantitative phase imaging (TS-QPI) and analytics system (credit: Claire Lifan Chen et al./Nature)

 

"Each frame is slowed down in time and optically amplified so it can be digitised," explains Ata Mahjoubfar, a UCLA postdoctoral fellow. "This lets us perform fast cell imaging that the artificial intelligence component can distinguish."

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA method eliminates that problem too: "The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination," said Claire Lifan Chen, a UCLA doctoral student.

In their paper – published in the journal Scientific Reports – the researchers write that their system could lead to data-driven diagnoses based on cells' physical characteristics, allowing quicker and earlier diagnosis of cancer, for example, and a better understanding of tumour-specific gene expression in cells, potentially leading to new treatments for disease.

---
 

 

 

10th April 2016

Self-driving trucks complete journey across Europe

Fleets of self-driving trucks this week completed the European Truck Platooning Challenge.

 


 

As part of the world's first cross-border initiative with smart trucks, six "platoons" of semi-automated trucks have completed their journeys from various cities across Europe, reaching their final destination at the Port of Rotterdam on 6th April.

They were participating in the European Truck Platooning Challenge, organised by the Dutch government as part of its EU Presidency. The European Automobile Manufacturers' Association (ACEA) and its commercial vehicle members – including Daimler, Scania and Volvo – are active partners in the initiative, with each manufacturer supplying a platoon. One set of trucks made by Scania travelled over 2,000 km and crossed four borders. Daimler made headlines in 2014 when the company demonstrated the world's first autonomous truck in Magdeburg, Germany, and in 2015 its Freightliner Inspiration Trucks gained a licence for road use in Nevada.

Truck platooning – which has the potential to make transport cleaner, safer and more efficient – is the linking of two or three trucks in a convoy. These vehicles follow each other at a fixed, close distance, by using connectivity technology and automated driving support systems.

 

 

 

Using this technique cuts fuel use by 15%, prevents human error from causing accidents, and reduces congestion, according to research firm TNO. Expenses can be lowered significantly.

Two trucks doing 100,000 miles annually can save €6,000 ($6,840) on fuel by platooning, compared to driving on cruise control. Safety is greatly improved by using technology such as Volvo's emergency braking system and Daimler's Highway Pilot Connect – systems with braking reaction times of under 0.1 seconds, compared to 1.4 seconds for a human driver. A Wi-Fi connection between the trucks ensures synchronised braking and can prevent sudden jolting effects.

When operating in platoon mode, a convoy of three semi-autonomous trucks can travel much closer together, requiring only 80 metres of road space from end to end. For comparison, the same three trucks driven by humans would fill 185 metres of road. If more self-driving vehicles are deployed in the future, road congestion should therefore be greatly reduced, and pollution lowered with it.
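The road-space saving follows directly from reaction times: the following gap a vehicle needs scales with the distance covered before braking begins. Here is a back-of-the-envelope sketch using assumed truck lengths and speeds; note that the article's 80 m and 185 m figures will also include safety margins beyond pure reaction distance, so the numbers below only illustrate the mechanism.

```python
# Why shorter reaction times shrink a platoon: the inter-truck gap must at
# least cover the distance travelled before braking starts.

def platoon_length(n_trucks, truck_len_m, speed_kmh, reaction_s):
    """Road space from the nose of the first truck to the tail of the last."""
    speed_ms = speed_kmh / 3.6
    gap = speed_ms * reaction_s      # distance covered before braking begins
    return n_trucks * truck_len_m + (n_trucks - 1) * gap

TRUCK_LEN = 18.75   # assumed overall truck length, metres
SPEED = 80          # assumed motorway truck speed, km/h

human = platoon_length(3, TRUCK_LEN, SPEED, reaction_s=1.4)   # human driver
automated = platoon_length(3, TRUCK_LEN, SPEED, reaction_s=0.1)  # Wi-Fi linked
print(f"human drivers:    {human:.0f} m of road")
print(f"platooned trucks: {automated:.0f} m of road")
```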

 

 

 

Melanie Schultz van Haegen, the Dutch minister for Infrastructure and the Environment who spearheaded this initiative, commented: "The results of this first ever major try-out in Europe are promising. The hands-on experience gained here will be very useful in the informal European transport council on 14th April in Amsterdam. It will certainly help my colleagues and me discuss the adjustments needed to make self-driving transport a reality."

There are still a number of barriers standing in the way of truck platooning across Europe. These barriers are not of a technical nature as platooning technology exists already; rather they are caused by differences in legislation between the EU member states: "Harmonisation is needed if we want a wide-scale introduction of platooning," stated Harrie Schippers, President of DAF Trucks.

Sufficient demand is also crucial to ensure the right level of market uptake. Following the Truck Platooning Challenge, there have been encouraging expressions of interest from the business community and the transport sector, including Unilever and major Dutch supermarkets. The testing phase is the most important next step. More and more national governments are offering industry the opportunity to test their latest vehicles and technologies, thereby also supporting efforts to increase public awareness, understanding and acceptance. Such testing, however, is also vital on a pan-European scale.

"It is precisely for this reason that we believe that the European Truck Platooning Challenge has been a huge success: it has fostered much-needed cooperation between all relevant stakeholders right across the EU, facilitating cross-border driving and encouraging compatibility on legal and technical issues," said Schippers. "We look forward to harvesting the learnings from this initiative so that, together, we can make truck platoons a common sight on Europe's roads in the future."

 


 

---
 

 

 

19th March 2016

The world's first autonomous pizza delivery vehicle

Domino's Pizza Enterprises in Australia has unveiled the world's first autonomous pizza delivery vehicle.

 

 

 

Domino's Pizza Enterprises in Australia has revealed plans for an autonomous delivery vehicle, named DRU (Domino's Robotic Unit). While still at the prototype stage, the company says it demonstrates just how serious it is about informing regulation in this space.

The machine is designed with sleek, refined forms, combined with a friendly persona and lighting to help customers identify and interact with it. DRU is a four-wheeled vehicle with compartments built to keep the customer's order piping hot and drinks cold, whilst travelling on the footpath at a safe speed from the store to the customer's door. It can select the best path of travel, with on-board sensors enabling it to perceive obstacles along the way and avoid them if necessary. The pizza is kept in a locked storage compartment, accessible through a security code sent to the customer's phone.
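The locked compartment described here amounts to a one-time-code scheme: a code is issued per order, delivered to the customer's phone, and consumed on first use. Below is a minimal sketch of how such a scheme might work, purely as an illustration of the idea, not Domino's actual implementation.

```python
# Hypothetical one-time unlock code for a delivery compartment.
import hmac
import secrets

def issue_unlock_code(order_id, codes):
    """Generate a 6-digit one-time code and record it against the order."""
    code = f"{secrets.randbelow(10**6):06d}"
    codes[order_id] = code
    return code  # in the real system this would be sent to the customer's phone

def try_unlock(order_id, entered, codes):
    expected = codes.pop(order_id, None)   # single use: removed on first attempt
    if expected is None:
        return False
    return hmac.compare_digest(expected, entered)  # timing-safe comparison

codes = {}
code = issue_unlock_code("order-42", codes)
print(try_unlock("order-42", code, codes))   # True: compartment opens
print(try_unlock("order-42", code, codes))   # False: code already consumed
```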

Domino's Group CEO and Managing Director, Don Meij said that autonomous vehicles would open up new opportunities and create an impetus for innovation both in Australia and around the world: "This highlights what can happen when disruptive thinking is fostered – it turns into a commercially viable and revolutionary product. It allows Domino's to explore new concepts and push the boundaries of what is possible for our customers. The DRU prototype is only the first step in our research and development as we continue to develop a range of innovations set to revolutionise the entire pizza ordering experience."

 


 

Meij confirmed the idea for DRU came from the company's internal innovation sessions and has been developed in Domino's own DLAB, a purpose-built lab to help budding entrepreneurs commercialise their ideas: "With a dedicated innovation lab, this project has been accelerated much faster than normal projects, without losing any of the quality control," he said.

DRU is powered by technology from Australian start-up company, Marathon Robotics.

"To launch DRU from concept through to development of a prototype highlights the extraordinary talent and resources available on our doorstep – both with excellent external talent such as Marathon and the knowledge and experience of our internal team at Domino's."

Domino's has been working with the Queensland Department of Transport and Main Roads, along with other global partners to ensure the delivery droid concept meets relevant legislative requirements as it is trialled and tested.

"We are also working with Government agencies on the project to ensure all legal requirements are met. The agencies have been very supportive in the process to date and we're all excited about what this technology can lead to."

While autonomous drones and cars still need to pass a number of regulatory hurdles and challenges before they're delivering pizza on Australian streets, DRU shows that Domino's is actively engaged in the field and working with regulators on the future commercialisation of such technology. The machine has already been involved in a number of customer deliveries in restricted streets identified by the Department under a special permit and is currently operated in semi-autonomous mode. These early trials are a big step forward in commercialising fully autonomous delivery vehicles.

"With autonomous vehicles opening up possibilities for saving lives, saving time and moving goods more efficiently, we look forward to continuing our work in this field and leading the commercial trials so that our customers can reap the benefits," Mr Meij said. He confirmed that DRU will one day fit into and enhance the existing team of delivery personnel, and when he does will be a welcome addition to the team: "DRU is cheeky and endearing and we are confident that one day, he will become an integral part of the Domino's family. He's a road to the future and one that we are very excited about exploring further."

The global service robotics market is forecast to be worth $18 billion USD by 2020, according to analyst firm Research and Markets. This value will most likely be higher if DRU proves successful, influencing other fast-food suppliers to implement robotic technology in their delivery services.

---
 

 

 

24th February 2016

"Next generation" humanoid robot revealed

Boston Dynamics has revealed the latest version of its Atlas humanoid robot, featuring eerily lifelike movements and reactions.

This new generation of the Atlas robot – designed to operate both outdoors and inside – is specialised for mobile manipulation. Electrically powered and hydraulically actuated, it uses sensors in its body and legs to balance, with LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help with navigation and manipulate objects. This machine is 5' 9" (1.75 m) tall – about a head shorter than the DRC Atlas – and weighs 180 lbs (81 kg), much lighter than its 330 lb (150 kg) predecessor from a couple of years ago.

 

 

 

---
 

 

 

16th January 2016

U.S. government pledges $4bn for self-driving vehicles

The U.S. government this week revealed plans for a 10-year, nearly $4 billion investment to accelerate the development and adoption of safe vehicle automation, through real-world pilot projects.

 


 

In his final State of the Union address, President Obama signalled his intent to invest in a 21st century transportation system. U.S. Transportation Secretary Anthony Foxx this week revealed part of the president's proposal: a 10-year, nearly $4 billion investment to accelerate the development and adoption of safe vehicle automation, through real-world pilot projects.

Secretary Foxx also announced the removal of potential roadblocks to the integration of innovative, transformational automotive technology that can significantly improve safety, mobility, and sustainability. Foxx made the announcement at the North American International Auto Show (NAIAS) in Detroit, where he was joined by leaders in technology, executives of traditional auto manufacturers, and newcomers to the industry.

"We are on the cusp of a new era in automotive technology with enormous potential to save lives, reduce greenhouse gas emissions, and transform mobility for the American people," said Foxx. "Today's actions and those we will pursue in the coming months will provide the foundation and the path forward for manufacturers, state officials, and consumers to use new technologies and achieve their full safety potential."

The President's 2017 budget proposal would provide nearly $4 billion over 10 years for pilot programs to test connected vehicle systems in designated corridors throughout the country, and work with industry leaders to ensure a common, multistate framework for connected and autonomous vehicles.

 


 

Secretary Foxx has also confirmed an update for the National Highway Traffic Safety Administration's (NHTSA) 2013 preliminary policy statement on autonomous vehicle technology. This new guidance reflects the reality that the widespread deployment of fully autonomous vehicles is now feasible.

Administrator Mark Rosekind said: "The NHTSA is using all of its available tools to accelerate the deployment of technologies that can eliminate 94% of fatal crashes involving human error. We will work with state partners toward creating a consistent national policy on these innovations, provide options now and into the future for manufacturers seeking to deploy autonomous vehicles, and keep our safety mission paramount at every stage."

Under Foxx's leadership, the Department of Transportation has been working to transform government for the 21st century, by harnessing technology and innovation that will improve people's lives. In 2015, he refocused the national dialogue about the future of America's transport system by releasing Beyond Traffic – a report examining the challenges facing the country's infrastructure between now and 2045. The draft framework has already influenced decisions by elected officials, planners, and stakeholders nationwide. He also launched the Smart City Challenge, a national competition to implement bold, data-driven ideas that make transportation safer, easier, and more reliable in cities. He also worked to accelerate the Department's efforts to incorporate vehicle-to-vehicle (V2V) communication technology into new vehicles.

Numerous forecasts by technology analysts point to a future dominated by intelligent, self-driving vehicles and smarter road systems. These new measures announced by the U.S. government will help to bring that futuristic vision closer to reality.

 

---
 

 

 

1st January 2016

New social and telepresence robots developed in Singapore

Two new humanoid robots have been demonstrated by researchers at Nanyang Technological University (NTU) in Singapore.

 

Nadine (left) and EDGAR (right). Photos courtesy of NTU.

 

Say hello to Nadine – a "receptionist" at Nanyang Technological University in Singapore. She is friendly and will greet you back. Next time you meet her, she will remember your name and your previous conversation with her. She looks almost human, with soft skin and flowing brunette hair. She smiles when greeting you, looks you in the eye when talking, and can shake hands with you. Unlike most conventional robots, Nadine has her own personality, mood and emotions. She can be happy or sad, depending on the conversation. She also has a good memory: she can visually recognise the people she has met and remember what they said before.

Nadine is the latest social robot developed by scientists at NTU. The doppelganger of its creator, Prof Nadia Thalmann, Nadine is powered by intelligent software similar to Apple’s Siri and Microsoft’s Cortana. In the future, robots like Nadine could be used as personal assistants in the office, or used as social companions for the young and the elderly at home.

A humanoid robot like Nadine is just one of the interfaces where NTU's technology can be applied. It can also be made virtual and appear on a TV or computer screen, and become a low-cost virtual social companion. With further progress in robotics driven by technological improvements in silicon chips, sensors and computation, physical social robots like Nadine are poised to become more visible in offices and homes in the coming decades.

 

 

 

A second robot – known as EDGAR – was also demonstrated at NTU's new media showcase. EDGAR is available in two different configurations. The first allows him to autonomously deliver speeches and read from a script. With an integrated webcam, he automatically tracks the people he meets to engage them in conversation, giving informative or witty replies to questions. This makes him ideal for use at public venues like tourist attractions and shopping centres, where he can offer practical information to visitors.

The other configuration allows EDGAR to become a tele-presence robot, optimised to replicate the movements of a human user. By standing in front of a specialised webcam, the user can control EDGAR remotely from anywhere in the world. Their face and expressions are projected onto the robot’s face in real time, while the robot mimics the person’s upper body movements, which include two highly articulated arms.
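Conceptually, this telepresence configuration is a capture-serialise-replay loop running at video rate: estimate the user's pose, stream it over the network, and replay it on the robot. The sketch below shows the shape of such a loop; the pose-capture and robot-link functions are placeholders, not NTU's software.

```python
# Stripped-down telepresence loop: capture the user's upper-body pose,
# serialise it, and send it to the robot for replay.
import json
import math
import time

def capture_pose(t):
    """Placeholder for a webcam-based pose estimator; returns joint angles
    in radians for a few upper-body joints (values faked for demonstration)."""
    wave = 0.5 * math.sin(t)
    return {"shoulder_l": wave, "shoulder_r": -wave,
            "elbow_l": 0.3, "elbow_r": 0.3, "head_yaw": 0.1 * math.sin(2 * t)}

def send_to_robot(packet):
    """Placeholder for the network link to the robot; here we just print."""
    print(packet)

start = time.time()
for _ in range(3):                     # a real loop would run continuously
    pose = capture_pose(time.time() - start)
    send_to_robot(json.dumps({"t": time.time(), "joints": pose}))
    time.sleep(1 / 30)                 # ~30 Hz, matching a typical webcam
```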

 

     

 

Led by Prof Gerald Seet, from the School of Mechanical and Aerospace Engineering, this robot is the result of three years of research and development: “EDGAR is a real demonstration of how telepresence and social robots can be used for business and education,” explains Seet. “Telepresence provides an additional dimension to mobility. The user may project his or her physical presence at one or more locations simultaneously, meaning that geography is no longer an obstacle.

“In future, a renowned educator giving lectures or classes to large groups of people in different locations at the same time could become commonplace. Or you could attend classes or business meetings all over the world using robot proxies – saving time and travel costs.”

Given that some companies have expressed interest in the robot technologies, the next step for the NTU scientists is to look at how they can partner with industry to bring them to market.

 


 

The Nadine and EDGAR robots are among NTU’s many exciting innovations that companies can leverage for commercialisation, Professor Thalmann says: “Robotics technologies have advanced significantly over the past few decades and are already being used in manufacturing and logistics. As countries worldwide face challenges of an aging population, social robots can be one solution to address the shrinking workforce, become personal companions for children and the elderly at home, and even serve as a platform for healthcare services in future.”

“Over the past four years, our team at NTU have been fostering cross-disciplinary research in social robotics technologies – involving engineering, computer science, linguistics, psychology and other fields – to transform a virtual human, from within a computer, into a physical being that is able to observe and interact with other humans.

“This is somewhat like a real companion that is always with you – and conscious of what is happening. So in future, these socially intelligent robots could be like C-3PO, the iconic golden droid from Star Wars, with knowledge of language and etiquette.”

 

---
 

 

 

16th December 2015

"OpenAI" – a new venture to create benevolent artificial intelligence

A team of world-class researchers, engineers and technology experts have announced a non-profit collaboration – OpenAI – that will work to encourage the development of AI that benefits humanity as a whole, unconstrained by a need to generate financial return.

 


 

OpenAI is a brand new artificial intelligence (AI) research organisation that has just been announced in San Francisco, California. The company aims to advance and develop "friendly" AI in such a way as to benefit humanity as a whole. One of the largest differences between OpenAI and other organisations working on artificial intelligence is OpenAI's non-profit status. In a press release explaining its founding, OpenAI states that its research will therefore be "unconstrained by a need to generate financial return", allowing it to "better focus on a positive human impact."

Related to this, another major goal of the organisation is to prevent corporations and governments from gaining too much power through their use of advanced AI, and instead to ensure that the benefits of AI are distributed as evenly as possible. To meet that objective, the organisation will aim to "freely collaborate" with other institutions and researchers by making its research open source.

The team's co-chairs are Tesla Motors and SpaceX CEO Elon Musk and entrepreneur Sam Altman. Ilya Sutskever, formerly a research scientist at Google, is the research director, and former Stripe CTO Greg Brockman is the CTO. They are supported by $1 billion in commitments from various sources. Many of the employees and board members have openly stated their concern about existential risk from advanced AI – most notably Elon Musk, who has declared his desire to personally oversee research done in this area.

"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," the company says in a statement. Artificial intelligence "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely."

You can read more on the company's website, OpenAI.com. They can also be followed on Twitter at @open_ai.

 

---
 

 

 

5th November 2015

First AI-based scientific search engine will accelerate research process

A new search engine – Semantic Scholar – uses artificial intelligence to transform the research process for computer scientists.

 


 

The Allen Institute for Artificial Intelligence (AI2) this week launches its free Semantic Scholar service, which allows scientific researchers to quickly sift through the millions of scientific papers published each year to find those most relevant to their work. Leveraging AI2's expertise in data mining, natural-language processing and computer vision, Semantic Scholar provides an AI-enhanced way to quickly search and discover information. At launch, the system searches over three million computer science papers, and will add new scientific categories on an ongoing basis.

"No one can keep up with the explosive growth of scientific literature," said Dr. Oren Etzioni, CEO at AI2. "Which papers are most relevant? Which are considered the highest quality? Is anyone else working on this specific or related problem? Now, researchers can begin to answer these questions in seconds, speeding research and solving big problems faster."

With Semantic Scholar, computer scientists can:

• Home in quickly on what they are looking for, with advanced selection tools. Researchers can filter results by author, publication, topic, and date published. This gets the most relevant result in the fastest way possible, and reduces information overload.
• Instantly access a paper's figures and tables. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
• Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
• Be prompted with key phrases within each paper, to winnow the search further.

 


 

Using machine reading and vision methods, Semantic Scholar crawls the web – finding all PDFs of publicly available papers on computer science topics – extracting both text and diagrams/captions, and indexing it all for future contextual retrieval. Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of study each is and how influential its citations are.

It provides the scientist with a simple user interface (optimised for mobile) that maps to academic researchers' expectations. Filters such as topic, date of publication, author and where published are built in, along with smart, contextual recommendations for further keyword filtering. Together, these search and discovery tools provide researchers with a quick way to separate the wheat from the chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
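At its core, this is a crawl-extract-index-rank pipeline. The toy sketch below shows the bare bones of one: an inverted index over titles, with results ordered by citation count as a crude stand-in for citation influence. Semantic Scholar's real system (PDF crawling, figure extraction, NLP-based ranking) is far richer, and the papers here are fabricated examples.

```python
# Toy indexing-and-ranking pipeline: build an inverted index over paper
# titles, then answer conjunctive queries ordered by citation count.
from collections import defaultdict

papers = [  # fabricated entries, for illustration only
    {"id": 1, "title": "Deep learning for image classification", "citations": 900},
    {"id": 2, "title": "A survey of deep reinforcement learning", "citations": 340},
    {"id": 3, "title": "Indexing methods for web-scale search", "citations": 120},
]

index = defaultdict(set)
for paper in papers:
    for token in paper["title"].lower().split():
        index[token].add(paper["id"])

def search(query):
    """Return papers matching all query terms, most-cited first."""
    ids = None
    for token in query.lower().split():
        ids = index[token] if ids is None else ids & index[token]
    hits = [p for p in papers if p["id"] in (ids or set())]
    return sorted(hits, key=lambda p: p["citations"], reverse=True)

for hit in search("deep learning"):
    print(hit["citations"], hit["title"])
```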

Only a small number of free academic search engines are currently in widespread use. Google Scholar is by far the largest, with 100 million documents. However, researchers have noted problems with the current generation of these search engines.

"A significant proportion of the documents are not scholarly by anyone's measure," says Péter Jacsó, an information scientist at the University of Hawaii who identified a series of basic errors in search results from Google Scholar. While some of the issues have recently been fixed, says Jacsó, "there are still millions and millions of errors."

"Google has access to a lot of data. But there's still a step forward that needs to be taken in understanding the content of the paper," says Jose Manuel Gomez-Perez, who works on search engines and is director of research and development in Madrid for the software company Expert System.

Semantic Scholar builds on the foundation of current research paper search engines, adding AI methods to overcome information overload and paving the way for even more advanced and intelligent algorithms in the future.

"What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years' time, AI will be able to read – and more importantly, understand – scientific text," says Etzioni. "These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed. AI-based discovery engines will help find the answers to science's thorniest problems."

 

---
 

 

 

3rd November 2015

A shape-shifting, self-driving concept car by Nissan

A new futuristic concept car by Nissan has been unveiled at the 2015 Tokyo Motor Show.

 

 

 

At the Tokyo Motor Show 2015, Nissan Motor Company unveiled a concept vehicle that the company says embodies its vision for the future of autonomous driving and zero emission EVs: the Intelligent Driving System (IDS).

"Nissan's forthcoming technologies will revolutionise the relationship between car and driver, and future mobility," said Carlos Ghosn, Nissan president and CEO, presenting at the show. "Nissan Intelligent Driving improves a driver's ability to see, think and react. It compensates for human error, which causes more than 90% of all car accidents. As a result, time spent behind the wheel is safer, cleaner, more efficient and more fun."

After leading the development and expansion of EV technology, Nissan once again stands at the forefront of automotive technology. By integrating advanced vehicle control and safety technologies with cutting-edge artificial intelligence (AI), Nissan is among the leaders developing practical, real-world applications of autonomous driving. The company plans to include this technology on multiple vehicles by 2020, and progress is well on track to achieve this goal, said Ghosn.

 


 

Some have compared a future with autonomous drive to living in a world of conveyor belts that simply ferry people from point A to B, but the Nissan IDS promises a very different vision. Even when a driver selects Piloted Drive and turns over driving to the vehicle, the car's performance – from accelerating to braking to cornering – imitates the driver's own style and preferences.

In Manual Drive mode, the driver has control. The linear acceleration and cornering are pure and exhilarating. Yet behind the scenes, the Nissan IDS continues to provide assistance. Sensors constantly monitor conditions and assistance is available even while the driver is in control. In the event of imminent danger, the Nissan IDS will assist the driver in taking evasive action.

In addition to learning, the IDS concept's AI communicates like an attentive partner. From traffic conditions and the driver's schedule to personal interests, it has the information needed to create a driving experience that is comfortable, enjoyable and safe.

 


 

"A key point behind the Nissan IDS Concept is communication. For autonomous drive to become reality, as a society we have to consider not only communication between car and driver but also between cars and people. The Nissan IDS Concept's design embodies Nissan's vision of autonomous drive as expressed in the phrase together, we ride," says Mitsunori Morita, Design Director.

"Together, we ride" is demonstrated in the shape-shifting interior design: "The Nissan IDS Concept has different interiors, depending on whether the driver opts for Piloted Drive or Manual Drive. This was something that we thought was absolutely necessary to express our idea of autonomous drive," explains Morita.

In piloted self-driving mode, all four seats rotate inward and the steering wheel recedes into the dashboard, giving the driver space to relax and making it easier to see and talk to other passengers. The interior, made of natural materials such as mesh leather, is illuminated by soft light, adding a further layer of comfort that feels almost like a home living room.

"In every situation, it is about giving the driver more choices and greater control," Ghosn said at the show. "And the driver will remain the focus of our technology development efforts."

 


 

For autonomous drive to be widely accepted, people need to fully trust the technology. Through its innovative communication features, the Nissan IDS promotes confidence and a sense of harmony for those outside the car as well. Various exterior lights and displays convey to pedestrians and others the car's awareness of its surroundings and signal its intentions. The car's silver side body line, for example, is actually an LED strip that Nissan calls the Intention Indicator. If there are pedestrians or cyclists nearby, the strip shines red, signalling that the car is aware of them. Another electronic display, facing outward from the instrument panel, can flash messages such as "After you" to pedestrians.
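The signalling logic this implies is simple to state: perception output drives the light strip and the outward-facing display. A tiny hypothetical sketch, with invented names and conditions:

```python
# Hypothetical mapping from perception output to exterior signals,
# loosely modelled on the Intention Indicator behaviour described above.

def intention_signals(pedestrians_nearby, yielding):
    led = "red" if pedestrians_nearby else "white"   # red means "I see you"
    message = "After you" if (pedestrians_nearby and yielding) else ""
    return led, message

print(intention_signals(pedestrians_nearby=True, yielding=True))
# ('red', 'After you')
```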

Another feature of this electric vehicle is energy efficiency, with advanced aerodynamic performance for a greater driving range. The carbon fibre body is lightweight and constrained in height to sharply minimise aerodynamic drag, while the tyres are designed to minimise air and rolling resistance. The wheels have a layered form that creates tiny vortexes of air on their surface, which further contributes to smooth airflow. The Nissan IDS concept is fitted with a high-capacity 60 kWh battery.

"By the time Nissan Intelligent Driving technology is available on production cars, EVs will be able to go great distances on a single charge," says Mitsunori Morita, Design Director. "Getting to this point will, of course, require the further evolution of batteries – but aerodynamic performance is also very important. We incorporated our most advanced aerodynamic technology in the design of the Nissan IDS Concept."

 


 

At Nissan's annual shareholder meeting in June, Executive Vice President Hideyuki Sakamoto said: "Our zero emission strategy centres on EVs. We are pursuing improved electric powertrain technologies – such as motors, batteries and inverters – which will enable us to mass produce and market EVs that equal or surpass the convenience of gasoline-powered cars."

Other technologies on the Nissan IDS concept include "Piloted Park" that can be operated by smartphone or tablet, and wireless charging technologies. Through these, the driver can leave parking and charging to the car.

Self-driving, zero emission cars are clearly the future, and Nissan appears to be well-positioned for delivering this vision. The Nissan LEAF is the world's most popular electric vehicle, with 96% of customers willing to recommend the car to friends. Yesterday, the firm posted a rise of 37.4% in net income for the six months ending in September.

"Nissan has delivered solid revenue growth and improved profitability in the first half of the fiscal year, driven by encouraging demand for our vehicles in North America and a rebound in western Europe," said chief executive Carlos Ghosn.

 
