Blog » AI & Robotics

 
     
 

16th January 2016

U.S. government pledges $4bn for self-driving vehicles

The U.S. government this week revealed plans for a 10-year, nearly $4 billion investment to accelerate the development and adoption of safe vehicle automation, through real-world pilot projects.

 


 

In his final State of the Union address, President Obama signalled his intent to invest in a 21st century transportation system. U.S. Transportation Secretary Anthony Foxx this week revealed part of the president's proposal: a 10-year, nearly $4 billion investment to accelerate the development and adoption of safe vehicle automation, through real-world pilot projects.

Secretary Foxx also announced the removal of potential roadblocks to the integration of innovative, transformational automotive technology that can significantly improve safety, mobility, and sustainability. Foxx made the announcement at the North American International Auto Show (NAIAS) in Detroit, where he was joined by leaders in technology, executives of traditional auto manufacturers, and newcomers to the industry.

"We are on the cusp of a new era in automotive technology with enormous potential to save lives, reduce greenhouse gas emissions, and transform mobility for the American people," said Foxx. "Today's actions and those we will pursue in the coming months will provide the foundation and the path forward for manufacturers, state officials, and consumers to use new technologies and achieve their full safety potential."

The President's 2017 budget proposal would provide nearly $4 billion over 10 years for pilot programs to test connected vehicle systems in designated corridors throughout the country, and work with industry leaders to ensure a common, multistate framework for connected and autonomous vehicles.

 


 

Secretary Foxx has also confirmed an update to the National Highway Traffic Safety Administration's (NHTSA) 2013 preliminary policy statement on autonomous vehicle technology. The new guidance reflects the reality that the widespread deployment of fully autonomous vehicles is now feasible.

Administrator Mark Rosekind said: "The NHTSA is using all of its available tools to accelerate the deployment of technologies that can eliminate 94% of fatal crashes involving human error. We will work with state partners toward creating a consistent national policy on these innovations, provide options now and into the future for manufacturers seeking to deploy autonomous vehicles, and keep our safety mission paramount at every stage."

Under Foxx's leadership, the Department of Transportation has been working to transform government for the 21st century by harnessing technology and innovation that will improve people's lives. In 2015, he refocused the national dialogue about the future of America's transport system by releasing Beyond Traffic – a report examining the challenges facing the country's infrastructure between now and 2045. The draft framework has already influenced decisions by elected officials, planners, and stakeholders nationwide. Foxx also launched the Smart City Challenge – a national competition to implement bold, data-driven ideas that make transportation safer, easier, and more reliable in cities – and accelerated the Department's efforts to incorporate vehicle-to-vehicle (V2V) communication technology into new vehicles.

Numerous forecasts by technology analysts point to a future dominated by intelligent, self-driving vehicles and smarter road systems. These new measures announced by the U.S. government will help to bring that futuristic vision closer to reality.

 

 

 

 

1st January 2016

New social and telepresence robots developed in Singapore

Two new humanoid robots have been demonstrated by researchers at Nanyang Technological University (NTU) in Singapore.

 

Nadine (left) and EDGAR (right). Photos courtesy of NTU.

 

Say hello to Nadine – a “receptionist” at Nanyang Technological University in Singapore. She is friendly and will greet you back. Next time you meet her, she will remember your name and your previous conversation with her. She looks almost like a human being, with soft skin and flowing brunette hair. She smiles when greeting you, looks you in the eye when talking, and can also shake hands with you. Unlike most conventional robots, Nadine has her own personality, mood and emotions. She can be happy or sad, depending on the conversation. She also has a good memory: she can visually recognise the people she has met and remember what they said before.
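
The article does not describe Nadine's software internals, but the behaviour above – recognising a returning visitor and recalling the earlier conversation – can be illustrated with a minimal sketch. The class, threshold and field names below are hypothetical placeholders for illustration only, not NTU's implementation.

# Minimal, hypothetical sketch of an identity-keyed conversation memory,
# illustrating the behaviour described above. Not NTU's implementation.
import math

class ConversationMemory:
    def __init__(self, match_threshold=0.6):
        self.people = []                  # one record per person met so far
        self.match_threshold = match_threshold

    def _closest(self, embedding):
        # Find the stored face embedding nearest to the new one; treat it as
        # the same person only if it falls within the matching threshold.
        best, best_dist = None, float("inf")
        for record in self.people:
            dist = math.dist(record["embedding"], embedding)
            if dist < best_dist:
                best, best_dist = record, dist
        return best if best_dist < self.match_threshold else None

    def greet(self, embedding):
        record = self._closest(embedding)
        if record is None:
            record = {"embedding": embedding, "name": None, "history": []}
            self.people.append(record)
            return "Hello! I don't think we've met. What is your name?"
        last = record["history"][-1] if record["history"] else "nothing yet"
        return f"Welcome back, {record['name']}. Last time we talked about: {last}"

    def remember(self, embedding, name, utterance):
        record = self._closest(embedding)
        if record is not None:
            record["name"] = name
            record["history"].append(utterance)

memory = ConversationMemory()
face = (0.1, 0.7, 0.3)                    # stand-in for a face embedding
print(memory.greet(face))                 # first visit: asks for a name
memory.remember(face, "Alice", "the weather in Singapore")
print(memory.greet(face))                 # next visit: recalls the earlier topic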

Nadine is the latest social robot developed by scientists at NTU. A doppelganger of her creator, Prof Nadia Thalmann, Nadine is powered by intelligent software similar to Apple’s Siri and Microsoft’s Cortana. In the future, robots like Nadine could serve as personal assistants in offices, or as social companions for the young and the elderly at home.

A humanoid robot like Nadine is just one of the interfaces where NTU's technology can be applied. It can also be made virtual and appear on a TV or computer screen, and become a low-cost virtual social companion. With further progress in robotics driven by technological improvements in silicon chips, sensors and computation, physical social robots like Nadine are poised to become more visible in offices and homes in the coming decades.

 

 

 

A second robot – known as EDGAR – was also demonstrated at NTU’s new media showcase. EDGAR is available in two different configurations. The first allows him to deliver speeches autonomously, reading from a script. With an integrated webcam, he automatically tracks the people he meets to engage them in conversation, giving informative or witty replies to questions. This makes him ideal for use at public venues like tourist attractions and shopping centres, where he can offer practical information to visitors.

The other configuration allows EDGAR to become a telepresence robot, optimised to replicate the movements of a human user. By standing in front of a specialised webcam, the user can control EDGAR remotely from anywhere in the world. Their face and expressions are projected onto the robot’s face in real time, while the robot mimics the person’s upper body movements, including its two highly articulated arms.
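
NTU has not published EDGAR's control software, but the telepresence loop described above – capture the operator's face and upper-body pose with a webcam, then replay both on the remote robot – can be sketched roughly as follows. The Camera and Robot classes are hypothetical placeholders for whatever capture and actuation APIs the real system uses.

# Rough, hypothetical sketch of the telepresence loop described above:
# capture the operator's face and upper-body pose and stream them to the
# remote robot, which mirrors the motion. Not NTU's actual software.
import time

class Camera:
    def capture_pose(self):
        # Placeholder: a real implementation would return a face image and
        # estimated upper-body joint angles from the specialised webcam.
        return "face_frame", {"left_shoulder": 0.0, "right_elbow": 0.3}

class Robot:
    def project_face(self, face_frame):
        print("projecting operator's face:", face_frame)

    def set_upper_body(self, joint_angles):
        print("mirroring joints:", joint_angles)

def teleoperate(camera, robot, rate_hz=30, duration_s=1.0):
    # Send face and pose updates at a fixed rate so the robot tracks
    # the operator in near real time.
    period = 1.0 / rate_hz
    end = time.time() + duration_s
    while time.time() < end:
        face, joints = camera.capture_pose()
        robot.project_face(face)
        robot.set_upper_body(joints)
        time.sleep(period)

teleoperate(Camera(), Robot())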

 

     

 

Developed by a team led by Prof Gerald Seet from the School of Mechanical and Aerospace Engineering, the robot is the result of three years of research and development: “EDGAR is a real demonstration of how telepresence and social robots can be used for business and education,” explains Seet. “Telepresence provides an additional dimension to mobility. The user may project his or her physical presence at one or more locations simultaneously, meaning that geography is no longer an obstacle.

“In future, a renowned educator giving lectures or classes to large groups of people in different locations at the same time could become commonplace. Or you could attend classes or business meetings all over the world using robot proxies – saving time and travel costs.”

With some companies having already expressed interest in the robot technologies, the next step for the NTU scientists is to explore partnerships with industry to bring them to market.

 


 

The Nadine and EDGAR robots are among NTU’s many exciting innovations that companies can leverage for commercialisation, Professor Thalmann says: “Robotics technologies have advanced significantly over the past few decades and are already being used in manufacturing and logistics. As countries worldwide face challenges of an aging population, social robots can be one solution to address the shrinking workforce, become personal companions for children and the elderly at home, and even serve as a platform for healthcare services in future.”

“Over the past four years, our team at NTU have been fostering cross-disciplinary research in social robotics technologies – involving engineering, computer science, linguistics, psychology and other fields – to transform a virtual human, from within a computer, into a physical being that is able to observe and interact with other humans.

“This is somewhat like a real companion that is always with you – and conscious of what is happening. So in future, these socially intelligent robots could be like C-3PO, the iconic golden droid from Star Wars, with knowledge of language and etiquette.”

 

 

 

 

16th December 2015

"OpenAI" – a new venture to create benevolent artificial intelligence

A team of world-class researchers, engineers and technology experts have announced a non-profit collaboration – OpenAI – that will work to encourage the development of AI that benefits humanity as a whole, unconstrained by a need to generate financial return.

 


 

OpenAI is a brand new artificial intelligence (AI) research organisation that has just been announced in San Francisco, California. It aims to advance and develop "friendly" AI in such a way as to benefit humanity as a whole. One of the largest differences between OpenAI and other organisations working on artificial intelligence is OpenAI's non-profit status. In a press release explaining its founding, OpenAI states that its research will therefore be "unconstrained by a need to generate financial return", allowing it to "better focus on a positive human impact."

Related to this, another major goal of the organisation is to prevent corporations and governments from gaining too much power through their use of advanced AI, and instead to ensure that the benefits of AI are distributed as evenly as possible. To meet that objective, the organisation will aim to "freely collaborate" with other institutions and researchers by making its research open source.

The team's co-chairs are Tesla Motors and SpaceX CEO Elon Musk and entrepreneur Sam Altman. Ilya Sutskever, formerly a research scientist at Google, is the research director, while former Stripe CTO Greg Brockman is the CTO. They are supported by $1 billion in funding commitments from various sources. Many of the employees and board members have previously voiced their concerns about existential risk from advanced AI – most notably Elon Musk, who has declared his desire to personally oversee research in this area.

"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," the company says in a statement. Artificial intelligence "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely."

You can read more on the company's website, OpenAI.com. They can also be followed on Twitter at @open_ai.

 

 

 

 

5th November 2015

First AI-based scientific search engine will accelerate research process

A new search engine – Semantic Scholar – uses artificial intelligence to transform the research process for computer scientists.

 


 

The Allen Institute for Artificial Intelligence (AI2) this week launches its free Semantic Scholar service, which allows scientific researchers to quickly sift through the millions of scientific papers published each year and find those most relevant to their work. Leveraging AI2's expertise in data mining, natural-language processing and computer vision, Semantic Scholar provides an AI-enhanced way to quickly search and discover information. At launch, the system searches over three million computer science papers, and will add new scientific categories on an ongoing basis.

"No one can keep up with the explosive growth of scientific literature," said Dr. Oren Etzioni, CEO at AI2. "Which papers are most relevant? Which are considered the highest quality? Is anyone else working on this specific or related problem? Now, researchers can begin to answer these questions in seconds, speeding research and solving big problems faster."

With Semantic Scholar, computer scientists can:

• Home in quickly on what they are looking for, with advanced selection tools. Researchers can filter results by author, publication, topic, and date published. This surfaces the most relevant results as quickly as possible and reduces information overload.
• Instantly access a paper's figures and tables. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
• Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
• Be prompted with key phrases within each paper, to winnow the search further.

 


 

Using machine reading and vision methods, Semantic Scholar crawls the web – finding all PDFs of publicly available papers on computer science topics – extracting both text and diagrams/captions, and indexing it all for future contextual retrieval. Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of study and how influential its citations are. It provides the scientist with a simple user interface (optimised for mobile) that maps to academic researchers' expectations. Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
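
AI2 has not released the Semantic Scholar code, but the workflow described above – crawl publicly available PDFs, extract text and figures, index the results, then filter and rank them – can be outlined in a short sketch. Every function and field name below is a hypothetical placeholder, not the actual Semantic Scholar implementation or API.

# Hypothetical outline of a crawl -> extract -> index -> search pipeline,
# mirroring the workflow described above. Not AI2's actual code.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    authors: list
    venue: str
    year: int
    citation_count: int
    topics: list = field(default_factory=list)
    figures: list = field(default_factory=list)   # extracted diagrams and captions

def extract_paper(pdf_bytes):
    # Placeholder for the machine-reading step: pull text, figures, captions,
    # topics and citation data out of one crawled PDF.
    raise NotImplementedError("stand-in for the extraction stage")

def search(index, query, author=None, venue=None, year_from=None):
    # Apply the built-in filters (author, venue, date), then rank the
    # survivors by citation count as a crude stand-in for citation influence.
    hits = [p for p in index
            if query.lower() in p.title.lower()
            and (author is None or author in p.authors)
            and (venue is None or p.venue == venue)
            and (year_from is None or p.year >= year_from)]
    return sorted(hits, key=lambda p: p.citation_count, reverse=True)

# Example usage with a toy index of two hand-made records:
index = [Paper("Scalable semantic search", ["A. Researcher"], "ACL", 2014, 57),
         Paper("Search interfaces for scientists", ["B. Author"], "CHI", 2012, 12)]
print(search(index, "search", year_from=2013))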

Only a small number of free academic search engines are currently in widespread use. Google Scholar is by far the largest, with 100 million documents. However, researchers have noted problems with the current generation of these search engines.

"A significant proportion of the documents are not scholarly by anyone's measure," says Péter Jacsó, an information scientist at the University of Hawaii who identified a series of basic errors in search results from Google Scholar. While some of the issues have recently been fixed, says Jacsó, "there are still millions and millions of errors."

"Google has access to a lot of data. But there's still a step forward that needs to be taken in understanding the content of the paper," says Jose Manuel Gomez-Perez, who works on search engines and is director of research and development in Madrid for the software company Expert System.

Semantic Scholar builds on the foundation of current research paper search engines, adding AI methods to overcome information overload and paving the way for even more advanced and intelligent algorithms in the future.

"What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years' time, AI will be able to read – and more importantly, understand – scientific text," says Etzioni. "These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed. AI-based discovery engines will help find the answers to science's thorniest problems."

 

 

 

 

3rd November 2015

A shape-shifting, self-driving concept car by Nissan

A new futuristic concept car by Nissan has been unveiled at the 2015 Tokyo Motor Show.

 

 

 

At the Tokyo Motor Show 2015, Nissan Motor Company unveiled a concept vehicle that the company says embodies its vision for the future of autonomous driving and zero emission EVs: the Intelligent Driving System (IDS).

"Nissan's forthcoming technologies will revolutionise the relationship between car and driver, and future mobility," said Carlos Ghosn, Nissan president and CEO, presenting at the show. "Nissan Intelligent Driving improves a driver's ability to see, think and react. It compensates for human error, which causes more than 90% of all car accidents. As a result, time spent behind the wheel is safer, cleaner, more efficient and more fun."

After leading the development and expansion of EV technology, Nissan once again stands at the forefront of automotive technology. By integrating advanced vehicle control and safety technologies with cutting-edge artificial intelligence (AI), Nissan is among the leaders developing practical, real-world applications of autonomous driving. The company plans to include this technology on multiple vehicles by 2020, and progress is well on track to achieve this goal, said Ghosn.

 


 

Some have compared a future with autonomous drive to living in a world of conveyor belts that simply ferry people from point A to B, but the Nissan IDS promises a very different vision. Even when a driver selects Piloted Drive and turns over driving to the vehicle, the car's performance – from accelerating to braking to cornering – imitates the driver's own style and preferences.

In Manual Drive mode, the driver has control. The linear acceleration and cornering are pure and exhilarating. Yet behind the scenes, the Nissan IDS continues to provide assistance. Sensors constantly monitor conditions and assistance is available even while the driver is in control. In the event of imminent danger, the Nissan IDS will assist the driver in taking evasive action.

In addition to learning, the IDS concept's AI communicates like an attentive partner. From traffic conditions and the driver's schedule to personal interests, it has the information needed to create a driving experience that is comfortable, enjoyable and safe.

 


 

"A key point behind the Nissan IDS Concept is communication. For autonomous drive to become reality, as a society we have to consider not only communication between car and driver but also between cars and people. The Nissan IDS Concept's design embodies Nissan's vision of autonomous drive as expressed in the phrase together, we ride," says Mitsunori Morita, Design Director.

"Together, we ride" is demonstrated in the shape-shifting interior design: "The Nissan IDS Concept has different interiors, depending on whether the driver opts for Piloted Drive or Manual Drive. This was something that we thought was absolutely necessary to express our idea of autonomous drive," explains Morita.

In piloted self-driving mode, all four seats rotate inward and the steering wheel recedes into the dashboard, giving the driver space to relax and making it easier to see and talk to other passengers. The interior, made from natural materials such as mesh leather, is illuminated by soft light, adding a further layer of comfort that feels almost like a home living room.

"In every situation, it is about giving the driver more choices and greater control," Ghosn said at the show. "And the driver will remain the focus of our technology development efforts."

 


 

For autonomous drive to be widely accepted, people need to fully trust the technology. Through its innovative communication features, the Nissan IDS promotes confidence and a sense of harmony for those outside the car as well. Various exterior lights and displays convey to pedestrians and others the car's awareness of its surroundings and signal its intentions. The car's silver side body line, for example, is actually an LED that Nissan calls the Intention Indicator. If there are pedestrians or cyclists nearby, the strip shines red, signalling that the car is aware of them. Another electronic display, facing outwards from the instrument panel, can flash messages such as "After you" to pedestrians.
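
Nissan has not published the logic behind these exterior signals, but the behaviour described above – turn the side LED strip red when a pedestrian or cyclist is detected, and flash a courtesy message when yielding – might look roughly like the sketch below. All names and values here are illustrative placeholders, not Nissan's implementation.

# Illustrative sketch only: exterior signalling of the kind described above.
def update_exterior_signals(detections, yielding):
    # detections: nearby road users reported by the car's sensors,
    # e.g. ["pedestrian", "cyclist"]; yielding: True if the car is giving way.
    signals = {}
    if any(d in ("pedestrian", "cyclist") for d in detections):
        signals["intention_indicator"] = "red"     # acknowledge: "I can see you"
    else:
        signals["intention_indicator"] = "silver"  # default body-line colour
    signals["panel_message"] = "After you" if yielding else None
    return signals

print(update_exterior_signals(["pedestrian"], yielding=True))
# {'intention_indicator': 'red', 'panel_message': 'After you'}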

Another feature of this electric vehicle is energy efficiency, with advanced aerodynamic performance for a greater driving range. The carbon fibre body is lightweight and constrained in height to sharply minimise aerodynamic drag, while the tyres are designed to minimise air and rolling resistance. The wheels have a layered form that creates tiny vortexes of air on their surface, which further contributes to smooth airflow. The Nissan IDS concept is fitted with a high-capacity 60 kWh battery.
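
The article quotes the battery capacity but not a range figure. As a rough back-of-the-envelope illustration only – the 150 Wh/km consumption value below is an assumption, not a Nissan specification – a 60 kWh pack corresponds to a range on the order of 400 km, which is why drag and rolling resistance matter so much.

# Back-of-the-envelope range estimate. The consumption figure is an
# assumption for illustration, not a Nissan specification.
battery_kwh = 60
consumption_wh_per_km = 150            # assumed average consumption
range_km = battery_kwh * 1000 / consumption_wh_per_km
print(f"Estimated range: {range_km:.0f} km")   # -> roughly 400 km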

"By the time Nissan Intelligent Driving technology is available on production cars, EVs will be able to go great distances on a single charge," says Mitsunori Morita, Design Director. "Getting to this point will, of course, require the further evolution of batteries – but aerodynamic performance is also very important. We incorporated our most advanced aerodynamic technology in the design of the Nissan IDS Concept."

 


 

At Nissan's annual shareholder meeting in June, Executive Vice President Hideyuki Sakamoto said: "Our zero emission strategy centres on EVs. We are pursuing improved electric powertrain technologies – such as motors, batteries and inverters – which will enable us to mass produce and market EVs that equal or surpass the convenience of gasoline-powered cars."

Other technologies on the Nissan IDS concept include "Piloted Park", which can be operated by smartphone or tablet, and wireless charging. Through these, the driver can leave both parking and charging to the car.

Self-driving, zero emission cars are clearly the future, and Nissan appears well positioned to deliver this vision. The Nissan LEAF is the world's most popular electric vehicle, with 96% of customers willing to recommend the car to friends. Yesterday, the firm posted a 37.4% rise in net income for the six months ending in September.

"Nissan has delivered solid revenue growth and improved profitability in the first half of the fiscal year, driven by encouraging demand for our vehicles in North America and a rebound in western Europe," said chief executive Carlos Ghosn.

 

 

 

 

 
     
   