AI & Robotics
1st December 2016

Almost half of tech professionals expect their job to be automated within ten years

45% of technology professionals believe a significant part of their job will be automated by 2027 – rendering their current skills redundant. Changes in technology are so rapid that 94% say their career would be severely limited if they didn't teach themselves new technical skills.

That's according to the Harvey Nash Technology Survey 2017, representing the views of more than 3,200 technology professionals from 84 countries.

The chance of automation varies greatly with job role. Testers and IT Operations professionals are most likely to expect their role to be significantly affected in the next decade (67% and 63% respectively), while Chief Information Officers (CIOs), Vice Presidents of Information Technology (VPs of IT) and Programme Managers expect to be least affected (30–31%).

David Savage, associate director, Harvey Nash UK, commented: "Through automation, it is possible that ten years from now the Technology team will be unrecognisable in today's terms. Even for those roles relatively unaffected directly by automation, there is a major indirect effect – anything up to half of their work colleagues may be machines by 2027."

 


In response to automation technology, professionals are prioritising learning over all other career development tactics. Self-learning matters significantly more to them than formal training or qualifications: only 12% cite "more training" as a key thing they want in their job, and only 27% see gaining qualifications as a top priority for their career.

Despite the increase in automation, the survey reveals that technology professionals remain in high demand, with participants receiving at least seven headhunt calls in the last year. Software Engineers and Developers are most in demand, followed by Analytics / Big Data roles. Respondents expect the most important technologies in the next five years to be Artificial Intelligence, Augmented / Virtual Reality and Robotics, as well as Big Data, Cloud and the Internet of Things. Unsurprisingly, these are also the key areas cited as the "hot skills to learn".

"Technology careers are in a state of flux," says Simon Hindle, a director at Harvey Nash Switzerland. "On one side, technology is 'eating itself', with job roles increasingly being commoditised and automated. On the other side, new opportunities are being created, especially around Artificial Intelligence, Big Data and Automation. In this rapidly changing world, the winners will be the technology professionals who take responsibility for their own skills development, and continually ask: 'where am I adding value that no other person – or machine – can add?'"

 


Key highlights from the Harvey Nash Technology Survey 2017:

AI growth: The biggest technology growth area is expected to be Artificial Intelligence (AI). 89% of respondents expect it to be important to their company in five years' time, almost four times the current figure of 24%.

Big Data is big, but still unproven. 57% of organisations are implementing Big Data at least to some extent. For many, it is moving away from being an 'experiment' into something more core to their business; 21% say they are using it in a 'strategic way'. However, only three in ten organisations with a Big Data strategy are reporting success to date.

Immigration is key to the tech industry, and Brexit is a concern. The technology sector is overwhelmingly in favour of immigration; 73% believe it is critical to their country's competitiveness. 33% of respondents to the survey were born outside the country they are currently working in. Almost four in ten tech immigrants in the UK are from Europe, equating to one in ten of the entire tech working population in the UK. Moreover, UK workers make up over a fifth of the tech immigrant workforce of Ireland and Germany.

Where are all the women? This year's report reveals that 16% of respondents are women; not very different from the 13% who responded in 2013. The pace of change is glacial and – at this rate – it will take decades before parity is reached.

Tech people don't trust the cloud. Four in ten have little or no trust in how cloud companies are using their personal data, while a further five in ten have at least some concerns about it. Trust in the cloud also declines with age: the older you are, the less you trust it.

The end of the CIO role? Just 3% of those under 30 aspire to be a CIO; instead they would prefer to be a CTO (14% chose this), entrepreneur (19%) or CEO (11%). This suggests that the traditional role of the CIO is relatively unattractive to Gen Y.

Headhunters' radar: Software Engineers and Developers get headhunted the most, followed closely by Analytics / Big Data roles. At the same time, 75% believe recruiters are too focused on assessing technical skills, and overlook good people as a result.

 


Supporting data from the survey (global averages):

 

Which technologies are important to your company now, and which do you expect to be important in five years' time?


Agree or disagree? Within ten years, a significant part of my job that I currently perform will be automated.



 

 

25th November 2016

Tesla demonstrates its self-driving car technology

Innovative American car company Tesla has released a video showcasing the self-driving technology that will be included in all vehicles it manufactures from now on.

 

 

 

The video above demonstrates just how advanced Tesla's Enhanced Autopilot hardware is. The time-lapse footage follows the car on its journey as it correctly follows the rules of the road, identifying road signs, traffic management systems and other road users. A person is seen sitting in the car, but the video makes clear that this is purely for legal reasons.

The automated system comes equipped with eight cameras, providing full 360° visibility around the vehicle at up to 250 metres' range. A dozen updated ultrasonic sensors detect both hard and soft objects at nearly twice the distance of Tesla's previous hardware. A forward-facing radar gives additional data about the driving environment on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead.
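
The redundancy in this arrangement is easier to see written out as data. Below is a toy sketch: the 250 m camera range comes from the article, but the ultrasonic and radar range figures and the fog rule are illustrative assumptions, not Tesla specifications.

```python
# Toy model of the sensor suite described above. Only the 250 m camera range
# is from the article; the other figures are illustrative assumptions.
SENSORS = [
    {"name": "camera",     "count": 8,  "range_m": 250, "works_in_fog": False},
    {"name": "ultrasonic", "count": 12, "range_m": 8,   "works_in_fog": True},
    {"name": "radar",      "count": 1,  "range_m": 160, "works_in_fog": True},
]

def sensors_covering(distance_m: float, heavy_fog: bool) -> list:
    """Return the sensor types that could detect an object at this distance."""
    return [s["name"] for s in SENSORS
            if s["range_m"] >= distance_m and (s["works_in_fog"] or not heavy_fog)]

print(sensors_covering(100, heavy_fog=True))   # ['radar'] – the redundant channel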

Tesla's Chief Executive, Elon Musk, certainly has faith in the technology and has predicted that by the end of 2017 a Tesla will be able to drive itself from one US coast to the other. Drivers wanting to adopt this new technology will have to be patient, however: in addition to regulatory approval, Tesla plans to conduct millions of miles of testing to ensure the system can be operated safely.

 


Earlier this month, Tesla agreed a deal to buy Grohmann Engineering – a German specialist in automated manufacturing – in a bid to accelerate production. The firm's founder, Klaus Grohmann, will also be joining Tesla to head a new division within the automaker, called Tesla Advanced Automation Germany.

"Because automation is such a vital part of the future of Tesla, the phrase I've used before is that it's about building the machine that's building the machine," Musk commented. "That actually becomes more important than the machine itself as the volume increases. We think it's important to bring in world-class engineering talent and our first choice was Grohmann."


 

 

13th November 2016

Machine learning can identify a suicidal person

Using a person's spoken or written words, a new computer algorithm identifies with high accuracy whether that person is suicidal, mentally ill but not suicidal, or neither.

 


A new study shows that technology known as machine learning is up to 93% accurate in correctly classifying a suicidal person, and 85% accurate when also distinguishing those who are mentally ill but not suicidal, or neither. These results provide strong evidence for using intelligent software as a decision-support tool to help clinicians and caregivers identify and prevent suicidal behaviour.

"These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed," explains John Pestian, PhD, professor in Biomedical Informatics & Psychiatry at Cincinnati Children's Hospital Medical Centre and the study's lead author. "When you look around healthcare facilities, you see tremendous support from technology, but not so much for those who care for mental illness. Only now are our algorithms capable of supporting those caregivers. This methodology can easily be extended to schools, shelters, youth clubs, juvenile justice centres, and community centres, where earlier identification may help to reduce suicide attempts and deaths."

Pestian and his team enrolled 379 patients over the study's 18-month period, drawn from emergency departments, as well as inpatient and outpatient centres, across three sites. Those enrolled included patients who were suicidal, diagnosed as mentally ill but not suicidal, or neither (serving as a control group).

Each patient completed standardised behavioural rating scales and participated in a semi-structured interview, answering five open-ended questions to stimulate conversation such as "Do you have hope?" "Are you angry?" and "Does it hurt emotionally?"

The researchers extracted and analysed both verbal and non-verbal language from the data. They then used machine learning algorithms to classify the patients into one of the three groups. Their results showed that machine learning algorithms could tell the difference between the groups with an accuracy of up to 93%. The scientists also noticed that the control patients tended to laugh more during interviews, sigh less, and express less anger, less emotional pain and more hope.
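
The paper does not ship code, but the general shape of such a decision-support classifier is straightforward. Here is a minimal sketch using scikit-learn, with toy stand-in data; the feature choices and model are illustrative assumptions, not the authors' published pipeline.

```python
# Minimal sketch of a three-way text classifier in the spirit of the study
# (suicidal / mentally ill / control). Features and model are illustrative
# assumptions, not the authors' published method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for interview transcripts, one label per patient.
transcripts = [
    "i do not have any hope left and it hurts all the time",
    "i get angry a lot but i am managing day to day",
    "things are mostly fine, work keeps me busy",
]
labels = ["suicidal", "mentally_ill", "control"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and word-pair features
    LogisticRegression(max_iter=1000),     # linear classifier over those features
)
model.fit(transcripts, labels)             # a real study would cross-validate

print(model.predict(["does anything even matter any more, it hurts"]))
```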

This software could become more and more useful in the future, as depression is expected to become the number one global disease burden by 2030. However, such intelligent algorithms may raise concerns over privacy and civil liberties, with potential for information to be abused. For example, authorities might use the software to spy on citizens as they communicate via email or social media, perhaps deciding from the data and wording style that a certain individual is dangerous and must be imprisoned, even if that person is actually innocent.

The study is published in the journal Suicide and Life-Threatening Behavior.


 

 

13th November 2016

AI can beat humans at lip-reading

The University of Oxford has demonstrated "LipNet", a new AI algorithm capable of lip-reading around 40 percentage points more accurately than an experienced human lip-reader.

 

 

 

2016 has been a big year for artificial intelligence, with many important breakthroughs that we've covered on our blog. Yet again, what was once confined to science fiction has become a reality, as this week a research team presented a new AI lip-reading system able to beat humans.

The University of Oxford's Department of Computer Science has developed "LipNet", a visual recognition system that can process whole sentences and learn which letter corresponds to the slightest mouth movement.

"The end-to-end model eliminates the need to segment videos into words before predicting a sentence," the research team explains. "LipNet requires neither hand-engineered spatiotemporal visual features, nor a separately-trained sequence model."

While an experienced human lip-reader can achieve an accuracy of 52%, LipNet reaches 93%. It's eerily reminiscent of HAL 9000, the sentient computer in Arthur C. Clarke's 2001: A Space Odyssey.

However, while LipNet has proven to be very promising, it is still at a relatively early stage of development. So far, it has been trained and tested on short, formulaic videos that show a well-lit person face-on. In its current form, LipNet could not be used on more challenging video footage – so it is currently unsuitable for use as a surveillance tool. But the team is keen to develop it further in real-world situations, especially as an aid for people with hearing disabilities.


 

 

25th October 2016

AI predicts outcomes of human rights trials

The judicial decisions of the European Court of Human Rights (ECtHR) have been predicted to 79% accuracy using an artificial intelligence (AI) method developed by researchers at University College London (UCL), the University of Sheffield and the University of Pennsylvania.

The method is the first to predict the outcomes of a major international court by automatically analysing case text using a machine learning algorithm.

"We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes," explained Dr Nikolaos Aletras, who led the study at UCL Computer Science. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights."

In developing their method, the team found that judgements by the ECtHR correlate strongly with non-legal facts, rather than with directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, 'realists' rather than 'formalists'. This supports findings from previous studies of the decision-making processes of other high-level courts, including the US Supreme Court.

"The study, which is the first of its kind, corroborates the findings of other empirical work on the determinants of reasoning performed by high level courts. It should be further pursued and refined, through the systematic examination of more data," explained co-author Dr Dimitrios Tsarapatsanis, Lecturer in Law at the University of Sheffield.

 


A team of computer and legal scientists from the UK worked alongside Daniel Preoțiuc-Pietro – a postdoctoral researcher in natural language processing and machine learning from the University of Pennsylvania – to extract case information published by the ECtHR. They identified English-language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention: Article 3 forbids torture and inhuman and degrading treatment (250 cases); Article 6 protects the right to a fair trial (80 cases); and Article 8 provides a right to respect for one's "private and family life, his home and his correspondence" (254 cases). They then applied an AI algorithm to find patterns in the text. To prevent bias and mislearning, they selected an equal number of violation and non-violation cases.

The most reliable factors for predicting the court's final decision were found to be the language used, as well as the topics and the circumstances mentioned in the case text. The 'circumstances' section includes information about the factual background to the case. By combining the information extracted from the abstract 'topics' that the cases cover and 'circumstances' across data for all three Articles, an accuracy of 79% was achieved.
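
In outline, the approach amounts to text features plus a linear classifier over a balanced set of cases. A minimal sketch with invented toy data follows; the paper's exact feature set (which also included topic clusters) and its 584 real cases are not reproduced here.

```python
# Sketch of the general approach: word n-gram features from case text, a
# linear SVM, and an evenly balanced violation / non-violation set.
# The data below is invented; the study used 584 real ECtHR cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

case_texts = [   # stand-ins for the 'circumstances' section of each judgement
    "the applicant was detained for months without judicial review",
    "the applicant alleged ill treatment during questioning by police",
    "the proceedings concluded promptly and counsel was present throughout",
    "the authorities answered each complaint within a matter of days",
]
outcomes = [1, 1, 0, 0]   # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),   # contiguous word n-grams
    LinearSVC(),
)
print(cross_val_score(model, case_texts, outcomes, cv=2).mean())
```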

"Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge – so this is the first time judgements have been predicted using analysis of text prepared by the court," said co-author Dr Lampos, UCL Computer Science.

"There is no reason why it cannot be extended to understand testimonies from witnesses or lawyers' notes," said Dr Aletras.

The study appears in the journal PeerJ Computer Science.


 

 

21st October 2016

AI milestone: a new system can match humans in conversational speech recognition

A new automated system that can achieve parity and even beat humans in conversational speech recognition has been announced by researchers at Microsoft.

 


A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognises spoken words in a conversation as well as a real person does.

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9% – equal to that of professional transcriptionists, and the lowest ever recorded against the industry-standard Switchboard speech recognition task.
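
For context, WER is the word-level edit distance between the system's transcript and a reference transcript – substitutions plus deletions plus insertions – divided by the number of reference words. A small self-contained implementation:

```python
# Word error rate: edit distance between hypothesis and reference word
# sequences (substitutions + deletions + insertions) / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits to turn the first i reference words into
    # the first j hypothesis words
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1/6 ≈ 0.167
```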

“We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 


In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

The team used Microsoft's own Computational Network Toolkit – an open source, deep learning framework – which can process deep learning algorithms across multiple computers running specialised GPUs, greatly improving speed and the quality of research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana.

“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256


 

 

19th September 2016

How AI might affect urban life in 2030

A diverse panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence might affect life in a typical North American city, and to spur discussion about how to ensure that AI is deployed in ways that are safe, fair and beneficial.

 


In December 2014, Stanford University began a century-long project known as the One Hundred Year Study on Artificial Intelligence (or AI100). This was intended to study the long-term implications of artificial intelligence in all aspects of work, life and play – providing guidance on the ethical development of smart software, sensors and machines. The team behind AI100 has now published the results of their first investigation, titled: "Artificial Intelligence and Life in 2030."

"We believe specialised AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life," said Peter Stone, a computer scientist from the University of Texas at Austin and chair of the 17-member panel of international experts. "But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared."

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

"AI technologies can be reliable and broadly beneficial," Grosz said. "Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion."

The report investigates eight areas of human activity in which AI technologies are already beginning to affect urban life, in ways that will become increasingly pervasive and profound by 2030. The 28,000-word study includes a glossary to help non-technical readers understand new AI applications – such as how computer vision might help screen tissue samples for cancers, for example, or how natural language processing will enable computers to grasp not simply the literal definitions, but the connotations and intent, behind words.

"It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared," the researchers write in their report, noting the need for public discourse. "Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies. [...] Who is responsible when a self-driven car crashes, or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?"

 


The eight sections discuss:

• Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

• Home/service robots: Like the robotic vacuum cleaners already in some homes, specialised robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

• Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

• Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

• Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organise and deliver media in more engaging, personalised and interactive ways.

• Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

• Public safety and security: Cameras, drones and software to analyse crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

• Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid change, with many existing jobs lost and new ones created.

"Until now, most of what is known about AI comes from science fiction books and movies," Stone says. "This study provides a realistic foundation to discuss how AI technologies are likely to affect society." Meanwhile, Grosz said she hopes the AI 100 report "initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies."

The full report can be downloaded at https://ai100.stanford.edu/sites/default/files/ai_100_report_0831fnl.pdf


 

 

7th September 2016

Technology vs. Humanity: The coming clash between man and machine

In his latest book, Technology vs. Humanity, futurist Gerd Leonhard asks the question: "How can society stay in control as machines enter deeper into our lives, our bodies, and eventually our brains?"

 

 

 

Fast Future Publishing has announced the launch of a groundbreaking new book by futurist and humanist Gerd Leonhard, exploring the critical challenges and choices we face in balancing mankind's urge to upgrade and automate everything (including human biology itself) with our quest for freedom and happiness.

Technology vs. Humanity is the second in the company's "FutureScapes" series of books, looking at the core issues and ideas shaping mankind's future. The book is available at fastfuturepublishing.com, with a 20% pre-launch discount for purchases made before the 8th September launch date.

The ever-accelerating pace of technology has driven a migration from the mainframe to the desktop, to the laptop, to the smartphone, to wearables and soon to brain-computer interfaces. As we blur the distinction between human and machine with implants and ingestible inserts, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange on what society needs and wants, and how best to steer the relentless pace of innovation.

Leonhard argues that, "Before it's too late, we must stop and ask the big questions: How do we embrace technology without becoming it? How do we ensure all technological progress is geared towards the service of humanity? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth, and we as humans have to be in control of it."

 


Leonhard puts a spotlight on key issues and developments that will shape our future world:

• What are the technological "megashifts" that will transform life, work, business, the economy, and government?

• Are we approaching the end of work-as-we-know-it?

• Will scientific advances enable the next generation to live for centuries?

• Why don't Big Data, the Internet of Things, and Artificial Intelligence have the same kind of global governance policies and standards that we've demanded and imposed on previous technological revolutions, such as nuclear power?

• How can we address the urgent need for "digital ethics" before Silicon Valley assumes control of the species previously known as Homo sapiens?

If we are, indeed, the last fully "human-only" generation in history, shouldn't 2016 see the beginning of a conversation about where all this is leading? Gerd asks: What moral values are we prepared to stand up for – before being "human" alters its meaning forever?

Technology vs. Humanity by Gerd Leonhard is published 8th September by Fast Future Publishing.


 

 

26th August 2016

World's first commercial drone delivery service

Domino's Pizza Enterprises Limited has joined forces with a global leader in drone deliveries, Flirtey, to launch the first commercial drone delivery service in the world.

 

 

 

The two companies exhibited the first stage of their partnership yesterday with a demonstration of pizza delivery by drone in Auckland, New Zealand. The successful demonstration was attended by the Civil Aviation Authority (CAA) and Minister of Transport Simon Bridges.

The test was conducted under Civil Aviation Rules Part 101 and marks a final step in Flirtey's approval process – following which, the partnership will aim to connect people with pizza via CAA-approved trial store-to-door drone deliveries from a selected Domino's New Zealand store with flights to customer homes later this year.

New Zealand was selected as the launch market because its current regulations allow businesses to embrace unmanned aircraft opportunities, enabling the gradual testing of new and innovative technologies. Domino's Group CEO and Managing Director, Don Meij, said the company's growth in recent years had led to a significant increase in the number of deliveries, and that Domino's is constantly looking for innovative and futuristic ways to improve its service.

"With the increased number of deliveries we make each year, we were faced with the challenge of ensuring our delivery times continue to decrease and that we strive to offer our customers new and progressive ways of ordering from us," he said. "Research into different delivery methods led us to Flirtey. Their success within the airborne delivery space has been impressive and it's something we have wanted to offer our customers."

The use of drones as a delivery method is designed to work alongside Domino's current delivery fleet and will be fully integrated into online ordering and GPS systems.

"Domino's is all about providing customers with choice and making customer's lives easier. Adding innovation such as drone deliveries means customers can experience cutting-edge technology and the convenience of having their Supreme pizza delivered via air to their door. This is the future. We have invested heavily to provide our stores with different delivery fleet options – such as electric scooters, e-bikes and even the Domino's Robotic Unit - DRU that we launched earlier this year.

"We've always said that it doesn't make sense to have a 2-tonne machine delivering a 2-kilogram order. DRU DRONE is the next stage of the company's expansion into the artificial intelligence space and gives us the ability to learn and adopt new technologies in the business."

The Flirtey delivery drone is constructed from carbon fibre, aluminium and 3D-printed components. It is a lightweight, autonomous and electrically driven unmanned aerial vehicle. It lowers its cargo via a tether and has built-in safety features, such as returning to a safe location when the battery runs low and auto-returning home if the GPS signal or communication link is lost.
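
Those safety behaviours amount to a small decision rule. Here is a sketch of how such failsafe logic might be structured; the thresholds and field names are hypothetical, not Flirtey's.

```python
# Illustrative failsafe logic for a delivery drone, based on the behaviours
# described above. Thresholds and field names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class DroneStatus:
    battery_pct: float
    gps_locked: bool
    link_ok: bool        # radio link to the operator

def next_action(s: DroneStatus) -> str:
    if not s.gps_locked or not s.link_ok:
        return "AUTO_RETURN_HOME"         # GPS or communication loss
    if s.battery_pct < 25.0:
        return "RETURN_TO_SAFE_LOCATION"  # low battery
    return "CONTINUE_MISSION"

print(next_action(DroneStatus(battery_pct=18.0, gps_locked=True, link_ok=True)))
```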

 


The reach that a drone offers is greater than other current options which are restricted by traffic, roads and distance. Domino's will look to the results of the trial to determine where drones are implemented further.

"What drones allow us to do is to extend that delivery area by removing barriers such as traffic and access, as well as offering a much faster, safer delivery option, which means we can deliver further afield than we currently do to our rural customers while reaching our urban customers in a much more efficient time."

The trial flights are set to commence later this year, following the start of daylight saving in New Zealand. Domino's will offer Drone Delivery Specials at the launch of the trial, with plans to extend the dimensions, weight and distance of deliveries based on results and customer feedback.

"These trial deliveries will help provide the insight we need to extend the weight carried by the drone and distance travelled," said Meij. "It is this insight that we hope will lead to being able to consider a drone delivery option for the majority of our orders. We are planning a phased trial approach which is based on the CAA granting approval, as both Domino's and Flirtey are learning what is possible with the drone delivery for our products – but this isn't a pie in the sky idea. It's about working with the regulators and Flirtey to make this a reality."

Flirtey CEO Matt Sweeny said: "Launching the first commercial drone delivery service in the world is a landmark achievement for Flirtey and Domino's, heralding a new frontier of on-demand delivery for customers across New Zealand and around the globe. New Zealand has the most forward-thinking aviation regulations in the world, and with our new partnership, we are uniquely positioned to bring the same revolutionary drone delivery service to customers globally. We are getting closer to the time where you can push a button on your smartphone and have Domino's delivered by drone to your home."

Domino's is looking at opportunities for drone delivery trials in its six other markets – Australia, Belgium, France, The Netherlands, Japan and Germany.


 

 

25th August 2016

World's first public trial of self-driving taxi

A company in Singapore is conducting the world's first public trial of a self-driving taxi. If successful, the service will be launched in 2018.

 


nuTonomy, a company developing state-of-the-art software for self-driving cars, today launched the first-ever public trial of a robo-taxi service. The open-ended trial is being held within Singapore's "one-north", a 2.5-square-mile business district where nuTonomy has been conducting daily autonomous vehicle (AV) testing since April.

Beginning today, select Singapore residents will be invited to use nuTonomy's ride-hailing smartphone app to book a no-cost ride in a nuTonomy self-driving car that employs the company's sophisticated software, which has been integrated with high-performance sensing and computing components. Rides will be provided in a Renault Zoe or Mitsubishi i-MiEV electric vehicle that nuTonomy has specially configured for autonomous driving. An engineer will ride in the vehicle to observe system performance and assume control if needed to ensure passenger comfort and safety.

Throughout the trial, nuTonomy will collect and evaluate valuable data related to software system performance, vehicle routing efficiency, the vehicle booking process, and the overall passenger experience. This data will enable nuTonomy to refine its software in preparation for the launch of a widely-available commercial robo-taxi service in Singapore from 2018.

 

 

 

Earlier this month, nuTonomy was selected by the Singapore Land Transport Authority (LTA) as an R&D partner, to support the development of a commercial AV service in Singapore. This trial represents the first, rapid result of that partnership. nuTonomy is the first, and to date only, private enterprise approved by the Singapore government to test AVs on public roads.

CEO and co-founder of nuTonomy, Karl Iagnemma, said: "nuTonomy's first-in-the-world public trial is a direct reflection of the level of maturity that we have achieved with our AV software system. The trial represents an extraordinary opportunity to collect feedback from riders in a real-world setting, and this feedback will give nuTonomy a unique advantage as we work toward deployment of a self-driving vehicle fleet in 2018."

Autonomous taxis could eventually reduce the number of cars on Singapore's roads from 900,000 to 300,000, according to Doug Parker, the firm's chief operating officer: "When you are able to take that many cars off the road, it creates a lot of possibilities. You can create smaller roads, you can create much smaller car parks. I think it will change how people interact with the city going forward."

In May of this year, nuTonomy completed a $16m Series A funding led by Highland Capital Partners that included participation from Fontinalis Partners, Signal Ventures, Samsung Ventures, and EDBI, the dedicated corporate investment arm of the Singapore Economic Development Board.

In addition to Singapore, nuTonomy is operating self-driving cars in Michigan and the United Kingdom, where it tests software in partnership with major automotive manufacturers such as Jaguar Land Rover.

 

 

 


 

 

15th August 2016

Computer program learns to replicate human handwriting

Researchers at University College London have devised a software algorithm able to scan and replicate almost anyone's handwriting.

 


In a world increasingly dominated by the QWERTY keyboard, computer scientists at University College London (UCL) have developed software which may spark the comeback of the handwritten word, by analysing the handwriting of any individual and accurately replicating it.

The scientists have created "My Text in Your Handwriting" – a program that semi-automatically examines a sample of a person's handwriting, which can be as little as one paragraph, and generates new text saying whatever the user wishes, as if the author had handwritten it themselves.

"Our software has lots of valuable applications," says lead author, Dr Tom Haines. "Stroke victims, for example, may be able to formulate letters without the concern of illegibility, or someone sending flowers as a gift could include a handwritten note without even going into the florist. It could also be used in comic books where a piece of handwritten text can be translated into different languages without losing the author's original style."

Published in ACM Transactions on Graphics, the machine learning algorithm is built around glyphs – specific instances of characters. Authors produce different glyphs to represent the same element of writing: the way one individual writes an "a" will usually differ from the way others write an "a". Although an individual's writing has slight variations, every author has a recognisable style that manifests in their glyphs and spacing. The software learns what is consistent across an individual's style and reproduces this.

 


To generate an individual's handwriting, the software analyses and replicates the author's specific character choices, pen-line texture, colour and the inter-character ligatures (the joining-up between letters), as well as vertical and horizontal spacing.
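
In outline, synthesis then reduces to choosing a sampled glyph instance for each character and advancing the pen by the learned spacing. A toy sketch of that idea follows; the data structures are illustrative, and the real system also renders pen-line texture, colour and ligatures.

```python
# Toy sketch of glyph-based synthesis: for each character, pick one of the
# author's sampled glyph instances and advance the pen by a learned spacing.
# Data structures are illustrative, not the UCL system's internals.
import random

glyphs = {          # character -> list of glyph instances (here, just widths)
    "a": [{"width": 11}, {"width": 12}],
    "n": [{"width": 13}],
    " ": [{"width": 8}],
}
spacing = 2         # learned average inter-character gap (pixels)

def layout(text):
    x, placed = 0, []
    for ch in text:
        glyph = random.choice(glyphs[ch])   # natural variation between copies
        placed.append((ch, x, glyph))
        x += glyph["width"] + spacing
    return placed

for ch, x, g in layout("an a"):
    print(f"draw glyph {ch!r} at x={x}")
```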

Co-author, Dr Oisin Mac Aodha (UCL Computer Science), said: "Up until now, the only way to produce computer-generated text that resembles a specific person's handwriting would be to use a relevant font. The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we've developed removes this problem and so could be used in a wide variety of commercial and personal circumstances."

The system is flexible enough that samples from historical documents can be used with little extra effort. Thus far, the scientists have analysed and replicated the handwriting of such figures as Abraham Lincoln, Frida Kahlo and Arthur Conan Doyle. Famously, Conan Doyle never actually wrote Sherlock Holmes as saying "Elementary, my dear Watson" – but the team have produced evidence to make you think otherwise.

To test the effectiveness of their software, the research team asked people to distinguish between handwritten envelopes and ones created by their automatic software. People were tricked by the computer-generated writing up to 40% of the time. Given how convincing it can be, some may believe this method could help in forging documents – but the team explained it works both ways and could actually help in detecting forgeries.

"Forgery and forensic handwriting analysis are still almost entirely manual processes – but by taking the novel approach of viewing handwriting as texture-synthesis, we can use our software to characterise handwriting to quantify the odds that something was forged," explained Dr Gabriel Brostow, senior author. "For example, we could calculate what ratio of people start their 'o's' at the bottom versus the top and this kind of detailed analysis could reduce the forensics service's reliance on heuristics."

 

 

 


 

 

14th July 2016

Robots could build giant telescopes in space

Researchers have published a new concept for space telescope design that uses a modular structure and an assembly robot to build an extremely large telescope in space, faster and more efficiently than human astronauts could.

 


Enhancing astronomers' ability to peer ever more deeply into the cosmos may hinge on developing larger space-based telescopes. A new concept in space telescope design makes use of a modular structure and an assembly robot to build an extremely large telescope in space, performing tasks that would be too difficult, expensive, or time-consuming for human astronauts.

The Robotically Assembled Modular Space Telescope (RAMST) is described by Nicolas Lee and his colleagues at the California Institute of Technology and the Jet Propulsion Laboratory in an article published this week by the Journal of Astronomical Telescopes, Instruments, and Systems (JATIS).

Ground-based telescopes, while very large and powerful, are limited by atmospheric effects and their fixed location on Earth. Space-based telescopes do not have those problems – but have other limits, such as launch vehicle volume and mass capacity. A new modular space telescope that overcomes restrictions on volume and mass could allow telescope components to be launched incrementally, enabling the design and deployment of truly enormous space telescopes.

The Hubble Space Telescope features a mirror diameter of 2.4 m (7.8 ft). Its successor, the James Webb Space Telescope – due for launch in 2018 – will be nearly triple this size at 6.5 m (23 ft). A longer-term proposal known as the Advanced Technology Large-Aperture Space Telescope (ATLAST) would be even larger, with a mirror up to 16 m (52 ft) in width. The future concept by Lee and his colleagues, however, would dwarf all of these, spanning 100 m (328 ft). This would be powerful enough to obtain detailed views of exoplanets in other star systems, as well as images from the deep universe with phenomenal clarity.
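
To put those diameters in perspective, a telescope's light-gathering power scales with mirror area – that is, with the square of the diameter – so the 100 m concept is a far bigger leap than the figures alone suggest. A quick calculation:

```python
# Light-gathering power scales with mirror area (~ diameter squared), so the
# 100 m concept collects over a thousand times more light than Hubble.
import math

mirrors_m = {"Hubble": 2.4, "James Webb": 6.5, "ATLAST": 16.0, "RAMST concept": 100.0}
hubble_area = math.pi * (mirrors_m["Hubble"] / 2) ** 2

for name, d in mirrors_m.items():
    area = math.pi * (d / 2) ** 2
    print(f"{name:>13}: {d:6.1f} m diameter, {area / hubble_area:7.0f}x Hubble's area")
```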

 


The team's paper, "Architecture for in-space robotic assembly of a modular space telescope," focuses primarily on a robotic system to perform tasks in which astronaut fatigue would be a problem. The observatory would be constructed in Earth orbit and operated at the Sun–Earth Lagrange Point 2.

"Our goal is to address the principal technical challenges associated with such an architecture, so that future concept studies addressing a particular science driver can consider robotically assembled telescopes in their trade space," the authors write.

The main features of their proposed architecture include a mirror built with a modular structure, a general-purpose robot to put the telescope together and provide ongoing servicing, and advanced metrology technologies to support the assembly and operation of the telescope. An optional feature is the potential ability to fly the unassembled components of the telescope in formation. The system architecture is scalable to a variety of telescope sizes and would not be limited to particular optical designs.

"The capability to assemble a modular space telescope has other potential applications," says Harley Thronson, a senior scientist for Advanced Astrophysics Concepts at NASA's Goddard Space Flight Centre. "For example, astronomers using major ground-based telescopes are accustomed to many decades of operation, and the Hubble Space Telescope has demonstrated that this is possible in space if astronauts are available. A robotic system of assembly, upgrade, repair, and resupply offers the possibility of very long useful lifetimes of space telescopes of all kinds."

 



 

 

30th May 2016

MasterCard unveils the first commerce application for humanoid robot Pepper

Customers at Pizza Hut restaurants in Asia will soon get the chance to have their order taken by a robot.

 


MasterCard has unveiled the first commerce application for SoftBank Robotics' humanoid robot Pepper. The application will be powered by MasterPass, the global digital payment service from MasterCard that connects consumers with merchants, enabling them to make fast, simple, and secure digital payments across channels and devices. Pizza Hut Restaurants Asia P/L will be the inaugural launch partner working together with MasterCard to create innovative customer engagement with Pepper.

A major first step in bringing conversational commerce experiences to merchants and consumers, this new app will extend the robot's ability to integrate customer service, access to information and sales into a seamless and consistent user experience. Pizza Hut Asia will be piloting the Pepper robot for order-taking and personalised engagement in its stores by the end of 2016.

"Consumers have come to expect personalised service, customised offers, and simple and seamless processes both in-store and online," said Tobias Puehse, Vice President for Innovation Management, Digital Payments & Labs at MasterCard. "The app's goal is to provide consumers with a more memorable and personalised shopping experience beyond today's self-serve machines and kiosks, by combining Pepper's intelligence with a secure digital payment experience via MasterPass."

 


The robot will be installed in "between six and ten stores in Asia this year," said John Sheldon, Global SVP, Innovation Management, MasterCard Labs. Pepper can speak 19 languages and will "add more intelligence to kiosk ordering. Pepper guides you through the process of placing the order and can answer nutritional questions and communicate any specials."

A customer will be able to initiate an engagement by simply greeting Pepper, then pairing their MasterPass account either by tapping the Pepper icon within the wallet or by scanning a QR code on the tablet that the robot holds. After pairing with MasterPass, Pepper can assist cardholders by providing personalised recommendations and offers, additional information on products, or assistance in checking out and paying for items. Pepper will initiate, approve and complete a transaction by connecting to MasterPass over Wi-Fi; the entire transaction happens within the wallet.
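
Laid out end to end, that interaction is a simple step-by-step flow. Here is a hypothetical sketch of it in code; the step names are ours for illustration and come from nowhere in MasterCard's actual SDK.

```python
# Hypothetical walk-through of the pairing-and-checkout flow described above.
# Step names are illustrative assumptions, not MasterCard's SDK.
def checkout_flow(pairing: str, order_history: list) -> list:
    assert pairing in ("wallet_icon_tap", "qr_scan")   # the two pairing routes
    steps = ["engage: customer greets Pepper",
             f"pair MasterPass account via {pairing}"]
    if order_history:
        steps.append(f"offer personalised suggestion: repeat '{order_history[-1]}'?")
    steps += ["take order and answer product questions",
              "initiate, approve and complete payment in the wallet over Wi-Fi"]
    return steps

for step in checkout_flow("qr_scan", ["Supreme pizza"]):
    print("-", step)
```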

Pepper has a number of human-like features. The robots "are intentionally designed to convey emotion," using their sensors and cameras "to interpret the emotional state of the person they are interacting with" and to evaluate that person's behaviour. For example, if the customer is excited and animated, so, too, would be Pepper. If the customer's movements are more muted, "then it would instead respond with a lot calmer and smaller gestures, so as to put that person at ease." If the customer gives his or her permission, the robot can remember their order history and ask if they want the same food or drink this time.

"We are excited to welcome Pepper to the Pizza Hut family," said Vipul Chawla, Managing Director of Pizza Hut Restaurants Asia. "Core to our digital transformation journey is the ability to make it easier for customers to engage, connect and transact with Pizza Hut. With an order-and-payment-enabled Pepper, customers can now come to expect personalised ordering, reduce wait time for carryout, and have a fun, frictionless user experience."

 

 

 


 

 

 
     
   