AI & Robotics Blog

 
     
 

24th August 2014

Computer program recognises emotions with 87% accuracy

Researchers in Bangladesh have designed a computer program able to accurately recognise users’ emotional states as much as 87% of the time, depending on the emotion.

 

[Image: keyboard typing]

 

Writing in the journal Behaviour & Information Technology, Nazmul Haque Nahin and his colleagues describe how their study combined – for the first time – two established ways of detecting user emotions: keystroke dynamics and text-pattern analysis.

To provide data for the study, volunteers were asked to note their emotional state after typing passages of fixed text, as well as at regular intervals during their regular (‘free text’) computer use. This provided researchers with data about keystroke attributes associated with seven emotional states (joy, fear, anger, sadness, disgust, shame and guilt). To help them analyse sample texts, the researchers made use of a standard database of words and sentences associated with the same seven emotional states.

After running a variety of tests, the researchers found that the combined approach outperformed either method used on its own, improving accuracy for five of the seven emotion categories. Joy (87%) and anger (81%) were recognised with the highest accuracy.
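The paper's exact features and classifier are not described above, so the following is a purely illustrative sketch of the 'combined' approach: one group of features from keystroke timings, another from lexicon matches in the text, concatenated and fed to an off-the-shelf classifier. The feature choices, lexicon format and use of scikit-learn are assumptions, not the authors' method.

```python
# Illustrative sketch only: feature choices, lexicon format and classifier are
# assumptions, not the method published in Behaviour & Information Technology.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["joy", "fear", "anger", "sadness", "disgust", "shame", "guilt"]

def keystroke_features(hold_times_ms, flight_times_ms):
    """Summarise keystroke dynamics as simple timing statistics (assumed features)."""
    h = np.asarray(hold_times_ms, dtype=float)
    f = np.asarray(flight_times_ms, dtype=float)
    return np.array([h.mean(), h.std(), f.mean(), f.std()])

def text_features(text, lexicon):
    """Normalised count of emotion-lexicon hits for each of the seven emotions."""
    words = text.lower().split()
    hits = np.array([sum(w in lexicon[e] for w in words) for e in EMOTIONS], dtype=float)
    return hits / max(len(words), 1)

def combined_features(hold_ms, flight_ms, text, lexicon):
    # The "combined" approach: a single vector containing both feature groups.
    return np.concatenate([keystroke_features(hold_ms, flight_ms),
                           text_features(text, lexicon)])

def train_classifier(X, y):
    """Fit an off-the-shelf classifier on rows of combined feature vectors."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```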

This research is an important contribution to ‘affective computing’, a growing field dedicated to ‘detecting user emotion in a particular moment’. As the authors note, for all the advances in computing power, performance and size in recent years, much more can still be done to improve how computers interact with end users. “Emotionally aware systems can be a step ahead in this regard,” they write. “Computer systems that can detect user emotion can do a lot better than the present systems in gaming, online teaching, text processing, video and image processing, user authentication and so many other areas where user emotional state is crucial.”

While much work remains to be done, this research is an important step towards ‘emotionally intelligent’ systems that recognise users’ emotional states and adapt their music, graphics, content or approach to learning accordingly.

 


16th August 2014

Swarm of 1,000 robots able to self-organise

A huge, self-organising robot swarm consisting of 1,024 individual machines has been demonstrated by Harvard.

 

[Image: robot swarm]

 

Swarm robotics is a new and emerging field of technology involving the coordination of multiple robots to perform a group task. By combining a large number of machines, it is possible to create a hive intelligence – capable of much greater achievements than a lone individual. In the same way that insects such as ants, bees and termites cooperate, researchers can build wireless networks of machines able to sense, navigate and communicate information about their surroundings.

Recent efforts have included a formation of 20 "droplets" created by the University of Colorado, a group of 40 robots developed at the Sheffield Centre for Robotics, and drones using augmented reality to produce "spatially targeted communication and self-assembly". Although impressive, those projects – and others since – have lacked the raw numbers to be considered a genuine "swarm" like the creatures mentioned earlier. This week, however, scientists at Harvard took research in the field to a whole new level, by demonstrating a network of more than 1,000 machines working simultaneously.

Known as "Kilobots", these devices are just a few centimetres across, roughly the size of a U.S. quarter. Each is equipped with tiny vibrating motors allowing them to slide across a surface, using an infrared transmitter and receiver to alert their neighbours and measure their proximity. From just a simple command, they can arrange themselves into a variety of complex shapes and patterns.

 

[Image: robot swarm]

 

In 2011, Harvard developed and licensed open-source hardware and software to improve the algorithms used in machine networks. A report showed how groups of 25 Kilobots – demonstrating behaviours such as foraging, formation control and synchronisation – could potentially be scaled up to much larger numbers. Following three years of further testing and experimentation, the university has now succeeded in coordinating a swarm of 1,024 units.

The new, smarter algorithm enables the Kilobots to correct their own mistakes, avoiding traffic jams and errors that would otherwise become more likely in larger-scale groups. If an individual deviates off-course, nearby robots can sense the problem and cooperate to fix it. As robots become cheaper and more numerous, with a continued trend in miniaturisation, this form of social behaviour could lead to revolutionary applications in the future.
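The published system combines gradient formation, distributed localisation and edge-following. As a toy illustration of just the first ingredient – each robot deriving a hop-count 'gradient' from a seed using only messages from neighbours within radio range – a minimal simulation might look like the sketch below; the radio range, robot count and update loop are illustrative assumptions, not Harvard's code.

```python
# Minimal sketch of hop-count gradient formation with purely local communication.
# All parameters are illustrative assumptions.
import math
import random

COMM_RANGE = 3.0  # assumed radio range, in robot-diameters

class Bot:
    def __init__(self, x, y, is_seed=False):
        self.x, self.y = x, y
        self.gradient = 0 if is_seed else math.inf

def neighbours(bot, swarm):
    """Robots close enough to exchange infrared messages with this one."""
    return [b for b in swarm if b is not bot
            and math.hypot(b.x - bot.x, b.y - bot.y) <= COMM_RANGE]

def update_gradients(swarm, rounds=50):
    """Repeatedly set each robot's value to (minimum neighbour gradient) + 1."""
    for _ in range(rounds):
        for bot in swarm:
            if bot.gradient == 0:       # the seed keeps gradient 0
                continue
            nearby = neighbours(bot, swarm)
            if nearby:
                bot.gradient = min(n.gradient for n in nearby) + 1

if __name__ == "__main__":
    random.seed(1)
    swarm = [Bot(0, 0, is_seed=True)] + \
            [Bot(random.uniform(0, 20), random.uniform(0, 5)) for _ in range(200)]
    update_gradients(swarm)
    print(sorted({b.gradient for b in swarm}))   # hop counts radiating from the seed
```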

As Professor Radhika Nagpal explains in a press release: “Increasingly, we’re going to see large numbers of robots working together – whether it's hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways. Understanding how to design ‘good’ systems at that scale will be critical. We can simulate the behaviour of large swarms of robots, but a simulation can only go so far. The real-world dynamics – the physical interactions and variability – make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognise and prevent the failures that occur at these large scales.”

These latest developments are reported in the peer-reviewed journal Science.

 

 


14th August 2014

Robotic butlers to appear in hotels

From next week, guests at the Aloft hotel chain may feel like they are living in the future, as a new robotic butler offers its services.

 

[Image: robotic butler]

 

Aloft Hotels has announced A.L.O. as the company’s first “Botlr” (robotic butler). This futuristic service will be introduced on 20th August, making Aloft the first major hotel brand to hire a robot for both front and back of house duties.

In this role, A.L.O. will be on call 24/7 as a robotic operative, assisting the human staff in delivering amenities to guest rooms. Professionally “dressed” in a custom shrink-wrapped, vinyl collared uniform and nametag, A.L.O. can modestly accept tweets as tips. It will not only free up time for employees, allowing them to create a more personalised experience for guests, but will also enhance the hotel’s image and technological features.

Brian McGuinness, Global Brand Leader: “As you can imagine, hiring for this particular position was a challenge as we were seeking a very specific set of automated skills, and one that could work – literally – around the clock. As soon as A.L.O. entered the room, we knew it was what we were looking for. A.L.O. has the work ethic of Wall-E, the humour of Rosie from The Jetsons and reminds me of my favourite childhood robot – R2-D2. We are excited to have it join our team.”

 

 

A.L.O. was developed by Savioke – a new Silicon Valley startup, backed by Google Ventures, whose debut the robotics community has been eagerly anticipating. It uses a combination of sonar, lasers and cameras to avoid people and obstacles. It can facilitate and prioritise multiple guest deliveries, communicate easily with guests and various hotel platforms, and efficiently navigate throughout the property – even riding the elevator, with which it communicates via WiFi.

Steve Cousins, CEO of Savioke: “We are thrilled to introduce our robot to the world today through our relationship with Aloft Hotels. In our early testing, all of us at Savioke have seen the look of delight on those guests who receive a room delivery from a robot. We have also seen the front desk get busy at times, and expect Botlr will be especially helpful at those times, freeing up human talent to interact with guests on a personal level.”

The first A.L.O. reports for duty next week at Aloft Cupertino, next to the Apple HQ. If successful, all 100 of the company's hotels may introduce them during 2015. In the future, Cousins predicts a huge market for service robots like A.L.O.: “There are all these places, hotels, elder care facilities, hospitals, that have a few hundred robots maybe – but no significant numbers – and we think that's just a huge opportunity.”

 

[Image: hotel robot butler]

 


9th August 2014

Brain-like supercomputer the size of a postage stamp

Scientists at IBM Research have created a neuromorphic (brain-like) computer chip, featuring 1 million programmable neurons and 256 million programmable synapses.

 

[Image: brain computer]

 

IBM this week unveiled "TrueNorth" – the most advanced and powerful computer chip of its kind ever built. This neurosynaptic processor is the first to achieve one million individually programmable neurons, sixteen times more than the current largest neuromorphic chip. Designed to mimic the structure of the human brain, it represents a major departure from older computer architectures of the last 70 years. By merging the pattern recognition abilities of neurosynaptic chips with traditional system layouts, researchers aim to create "holistic computing intelligence".

Measured by device count, TrueNorth is the largest IBM chip ever fabricated, with 5.4 billion transistors at 28nm. Yet it consumes under 70 milliwatts while running at biological real time – orders of magnitude less power than a typical modern processor. This amazing feat is made possible because neurosynaptic chips are event driven, as opposed to the "always on" operation of traditional chips. In other words, they function only when needed, resulting in vastly less energy use and a much cooler temperature. It is hoped this combination of ultra-efficient power consumption and entirely new system architecture will allow computers to far more accurately emulate the brain.
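To illustrate the event-driven principle described above – neurons do work only when a spike event arrives, rather than on every clock tick – here is a generic leaky integrate-and-fire sketch. It is not IBM's TrueNorth neuron model; the threshold, leak factor and tiny example network are illustrative assumptions.

```python
# Generic event-driven spiking sketch (not the TrueNorth neuron model).
from collections import defaultdict, deque

THRESHOLD = 1.0   # assumed firing threshold
LEAK = 0.9        # assumed decay applied when an event arrives

class Neuron:
    def __init__(self):
        self.potential = 0.0
        self.targets = []          # list of (target_id, synaptic_weight)

def run(neurons, initial_spikes):
    """Process a queue of spike events; silent neurons consume no compute at all."""
    events = deque(initial_spikes)            # each event: (neuron_id, input_weight)
    fired = defaultdict(int)
    while events:
        nid, w = events.popleft()
        n = neurons[nid]
        n.potential = n.potential * LEAK + w  # integrate the incoming spike
        if n.potential >= THRESHOLD:          # fire, reset, and emit spikes downstream
            n.potential = 0.0
            fired[nid] += 1
            events.extend((tgt, weight) for tgt, weight in n.targets)
    return dict(fired)

# Example: a three-neuron chain 0 -> 1 -> 2; two input spikes arrive at neuron 0.
neurons = {i: Neuron() for i in range(3)}
neurons[0].targets = [(1, 1.0)]
neurons[1].targets = [(2, 1.0)]
print(run(neurons, [(0, 1.0), (0, 0.2)]))     # {0: 1, 1: 1, 2: 1}
```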

TrueNorth is composed of 4,096 cores, with each of these modules integrating memory, computation and communication. The cores are distributed in a parallel, flexible and fault-tolerant grid – able to continue operating when individual cores fail, similar to a biological system. And – like a brain cortex – adjacent TrueNorth chips can be seamlessly tiled and scaled up. To demonstrate this scalability, IBM also revealed a 16-chip motherboard with 16 million programmable neurons: roughly equivalent to a frog brain.

Each of these "neurons" features 256 inputs, whereas the human brain averages 10,000. That may sound like a huge difference – but in the world of computers and technology, progress tends to be exponential. In other words, we could see machines as computationally powerful as a human brain within 10–15 years. The implications are staggering. When sufficiently scaled up, this new generation of "cognitive computers" could transform society, leading to a myriad of applications able to intelligently analyse visual, auditory, and multi-sensory data.
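As a rough sanity check of that reasoning – considering only the inputs-per-neuron gap, and assuming a Moore's-law-like doubling every two years (both assumptions of this sketch, not figures from IBM) – the arithmetic does land in the quoted range:

```python
# Back-of-envelope check of the exponential-growth argument above.
import math

current_inputs = 256      # inputs per TrueNorth neuron, as stated above
brain_inputs = 10_000     # average inputs per biological neuron, as stated above

doublings = math.log2(brain_inputs / current_inputs)   # ≈ 5.3 doublings needed
years_per_doubling = 2                                 # assumed cadence
print(round(doublings * years_per_doubling, 1))        # ≈ 10.6 years
```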

 


31st July 2014

UK government to allow driverless cars on roads from January 2015

Vince Cable, UK Business Secretary, has announced measures that give the green light for driverless cars on UK roads from January 2015.

 

[Image: driverless cars]

 

UK cities can now bid for a share of a £10 million (US$17m) competition to host a driverless cars trial. The government is calling on cities to join together with businesses and research organisations to put forward proposals to become a test location. Up to three cities will be selected to host the trials from next year, with each project expected to last between 18 and 36 months, starting in January 2015.

Ministers have also launched a review to look at current road regulations to establish how the UK can stay at the forefront of driverless car technology and ensure there is an appropriate regime for testing driverless cars in the UK. Two areas will be covered in the review: cars with a qualified driver who can take over control of the driverless car, and fully autonomous vehicles where there is no driver.

Speaking at MIRA – a vehicle engineering consultancy, test and research facility – where he tested a driverless car with Science Minister Greg Clark, Business Secretary Vince Cable said: "The excellence of our scientists and engineers has established the UK as a pioneer in the development of driverless vehicles through pilot projects. Today’s announcement will see driverless cars take to our streets in less than six months, putting us at the forefront of this transformational technology and opening up new opportunities for our economy and society.

"Through the government's industrial strategy, we are backing the automotive sector as it goes from strength to strength. We are providing the right environment to give businesses the confidence to invest and create high skilled jobs."

Britain joins a growing number of countries planning to use this technology. Elsewhere in Europe, cities in Belgium, France and Italy intend to operate transport systems for driverless cars. In the USA, four states have passed laws permitting autonomous cars: Nevada, Florida, California, and Michigan. FutureTimeline.net predicts annual purchases of autonomous vehicles will reach almost 100 million worldwide by 2035. The benefits could be enormous, with drastic reductions in accident fatalities, traffic congestion and pollution.

 


25th July 2014

Deep sea mining moves a step closer

With many of Earth's metals and minerals facing a supply crunch in the decades ahead, deep ocean mining could provide a way of unlocking major new resources. Amid growing commercial interest, the UN's International Seabed Authority has just issued seven exploration licences.

 

[Image: deep sea mining. Credit: Nautilus Minerals Inc.]

 

To build a fantastic utopian future of gleaming eco-cities, flying cars, robots and spaceships, we're going to need metal. A huge amount of it. Unfortunately, our planet is being mined at such a rapid pace that some of the most important elements face critical shortages in the coming decades. These include antimony (2022), silver (2029), lead (2031) and many others. To put the impact of our mining and other activities in perspective: on land, humans are now responsible for moving about ten times as much rock and earth as natural phenomena such as earthquakes, volcanoes and landslides. The UN predicts that on current trends, humanity's annual resource consumption will triple by 2050.

While substitution in the form of alternative metals could help, a longer term answer is needed. Asteroid mining could eventually provide an abundance from space – but a more immediate, technically viable and commercially attractive solution is likely to arise here on Earth. That's where deep sea mining comes in. Just as offshore oil and gas drilling was developed in response to fossil fuel scarcity on land, the same principle could be applied to unlock massive new metal reserves from the seabed. Oceans cover 72% of the Earth's surface, with vast unexplored areas that may hold a treasure trove of rare and precious ores. Further benefits would include:

• Curbing of China's monopoly on the industry. As of 2014, the country is sitting on nearly half the world's known reserves of rare earth metals and produces over 90% of the world's supply.

• Limited social disturbance. Seafloor production will not require the social dislocation and resulting impact on culture or disturbance of traditional lands common to many land-based operations.

• Little production infrastructure. As the deposits are located on the seafloor, production will be limited to a floating ship with little need for additional land-based infrastructure. The concentration of minerals is an order of magnitude higher than typical land-based deposits with a corresponding smaller footprint on the Earth's surface.

• Minimal overburden or stripping. The ore generally occurs directly on the seafloor and will not require large pre-strips or overburden removal.

• Improved worker safety. Operations will be mostly robotic and won't require human exposure to typically dangerous mining or "cutting face" activities. Only a hundred or so people will be employed on the production vessel, with a handful more included in the support logistics.

 

[Image: robot mining. Credit: Nautilus Minerals Inc.]

 

Interest in deep sea mining first emerged in the 1960s – but consistently low prices of mineral resources at the time halted any serious implementation. By the 2000s, the only resource being mined in bulk was diamonds, and even then, just a few hundred metres below the surface. In recent years, however, there has been renewed interest, due to a combination of rising demand and improvements in exploration technology.

The UN's International Seabed Authority (ISA) was set up to manage these operations and prevent them from descending into a free-for-all. Until 2011, only a handful of exploration permits had been issued – but since then, demand has surged. This week, seven new licences were issued to companies based in Brazil, Germany, India, Russia, Singapore and the UK. The number is expected to reach 26 by the end of 2014, covering a total area of seabed greater than 1.2 million sq km (463,000 sq mi).

Michael Lodge of the ISA told the BBC: "There's definitely growing interest. Most of the latest group are commercial companies so they're looking forward to exploitation in a reasonably short time – this move brings that closer."

So far, only licences for exploration have been issued, but full mining rights are likely to be granted over the next few years. The first commercial activity will take place off the coast of Papua New Guinea, where a Canadian company – Nautilus Minerals – plans to extract copper, gold and silver from hydrothermal vents. After 18 months of delays, this was approved outside the ISA system and is expected to commence in 2016. Nautilus has been developing Seafloor Production Tools (SPTs), the first of which was completed in April. This huge robotic machine is known as the Bulk Cutter and weighs 310 tonnes when fully assembled. The SPTs have been designed to work at depths of 1 mile (1.6 km), but operations as far down as 2.5 miles (4 km) should be possible eventually.

As with any mining activity, concerns have been raised by scientists and conservationists regarding the environmental impact of these plans, but the ISA says it will continue to demand high levels of environmental assessment from its applicants. Looking ahead, analysts believe that deep sea mining could be widespread in many parts of the world by 2040.

 

 


17th July 2014

An interview with Japanese humanoid robot Pepper

A Japanese humanoid robot called Pepper, which its makers claim can read people's emotions, has been unveiled in Tokyo. Telecoms company SoftBank, which created the robot, says Pepper can understand 70 to 80 percent of spontaneous conversations. News agency AFP met the pint-sized chatterbox, who took time out from his day job greeting customers at SoftBank stores.

 

 


15th July 2014

Project Adam: a new deep-learning system

Developed by Microsoft, Project Adam is a new deep-learning system modelled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry. The goal of Project Adam is to enable software to visually recognise any object. This is being marketed as a competitor to Google's Brain project, currently being worked on by Ray Kurzweil.

 

 


25th June 2014

Realistic androids go on display in Japan

The National Museum of Emerging Science and Innovation has today opened a new permanent exhibition entitled "Android: What is Human?", where visitors can meet the world's most advanced androids – robots which closely resemble humans.

 

[Image: Japanese androids]

 

The National Museum of Emerging Science and Innovation, also known simply as the "Miraikan", was created by Japan's Science and Technology Agency. This new exhibition displays three android robots: the recently developed Kodomoroid and Otonaroid – a child android and an adult female android, respectively – and Telenoid, an android designed without individual human physical features. The exhibition is curated by Dr. Hiroshi Ishiguro, a leading android expert who has been studying the question, "What is human?"

Kodomoroid and Otonaroid will attempt to fill human roles as the world's first android announcer and as the Miraikan's android science communicator, respectively. The organisers of the exhibition claim it will be "a unique and rare event" – providing visitors with the opportunity to communicate with and operate these advanced robots, while shedding light on the attributes of humans in contrast with those of androids.

With soft skin made from special silicone and smooth motion made possible by artificial muscles, android robots are becoming increasingly similar to real humans. If an android robot gains the ability to talk and live identically to a human, you may not be able to distinguish between androids and humans. If this comes to pass, what would the word human mean? What is human? This question has been subject to debate since ancient times, and efforts to find an answer are still being made in all fields, including the humanities, social sciences, and art. Building an android can be described as a process of understanding what makes a human look like a human, as Ishiguro explains. The exhibition's three androids are introduced below:

 

 

[Images: Kodomoroid, Otonaroid and Telenoid]
 

Kodomoroid is a teleoperated android resembling a child. It is a news announcer with potential exceeding that of its human equivalent. It can recite news reports gathered from around the world 24 hours a day, every day, in a variety of voices and languages. In a studio on the museum's third floor, you can watch her deliver news about global issues and weather reports.

 

Otonaroid is a teleoperated android robot resembling an adult female. She has been hired by the Miraikan as a robot science communicator. At the exhibition, you can talk with her in face-to-face conversations and also operate her movements.

 

Telenoid is a teleoperated android robot with a minimal design, created as an attempt to embody the minimum physical requirements for human-like communication. At the exhibition, you can talk with it and also operate it.

 

 


8th June 2014

Turing Test passed? Researchers claim breakthrough in artificial intelligence

Researchers are claiming a major breakthrough in artificial intelligence with a machine program that can pass the famous Turing Test.

 

[Image: sentient program]

 

At the Royal Society in London yesterday, an event called Turing Test 2014 was organised by the University of Reading. This involved a chat program known as Eugene being presented to a panel of judges and trying to convince them it was human. These judges included the actor Robert Llewellyn – who played robot Kryten in sci-fi comedy TV series Red Dwarf – and Lord Sharkey, who led a successful campaign for Alan Turing's posthumous pardon last year. During this competition, which saw five computers taking part, Eugene fooled 33% of human observers into thinking it was a real person as it claimed to be a 13-year-old boy from Odessa in Ukraine.

In 1950, British mathematician and computer scientist Alan Turing published his seminal paper, "Computing Machinery and Intelligence", in which he proposed the now-famous test for artificial intelligence. Turing predicted that by the year 2000, machines with a storage capacity of about 10^9 bits (roughly 120 MB) would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.

In the years since 1950, the test has proven both highly influential and widely criticised. A number of breakthroughs have emerged in recent times from groups claiming to have satisfied the criteria for "artificial intelligence". We have seen Cleverbot, for example, and IBM's Watson, as well as gaming bots and the CAPTCHA-solving Vicarious. It is therefore easy to be sceptical about whether Eugene represents something genuinely new and revolutionary.

Professor Kevin Warwick (who also happens to be the world's first cyborg) comments in a press release from the university: "Some will claim that the Test has already been passed. The words 'Turing Test' have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday."

Eugene's creator and part of the development team, Vladimir Veselov, said as follows: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year, we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward, we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."

 

[Image: Eugene]

 

Is the Turing Test a reliable indicator of intelligence? Who gets to decide the figure of 30% and what is the significance of this number? Surely imitation and pre-programmed replies cannot qualify as "understanding"? These questions and many others will be asked in the coming days, just as they have been asked following similar breakthroughs in the past. To gain a proper understanding of intelligence, we will need to reverse engineer the brain – something which is very much achievable in the next decade, based on current trends.

Regardless of whether Eugene is a bona fide AI, computing power will continue to grow exponentially in the coming years, with major implications for society in general. Benefits may include a 50% reduction in healthcare costs, as software programs are used for big data management to understand and predict the outcomes of treatment. Call centre staff, already competing with virtual employees today, could be almost fully automated in the 2030s, with zero waiting times for callers trying to seek help. Self-driving cars and other forms of AI could radically reshape our way of life.

Downsides to AI may include a dramatic rise in unemployment as humans are increasingly replaced by machines. Another big area of concern is security, as Professor Warwick explains: "Having a computer that can trick a human into thinking that someone – or even something – is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."

Further into the future, AI will gain increasingly mobile capabilities, able to learn and become aware of the physical world. No longer restricted to the realms of software and cyberspace, it will occupy hardware that includes machines literally indistinguishable from real people. By then, science fiction will have become reality and our civilisation will enter a profound, world-changing epoch that some have called a technological singularity. If Ray Kurzweil's ultimate prediction is to be believed, our galaxy and perhaps the entire universe may become saturated with intelligence, as formerly lifeless rocks are converted into sentient matter.

 


29th May 2014

A breakthrough in real-time translated conversations

At the Code Conference in California, Microsoft has demonstrated Skype Translator – a new technology enabling cross-lingual conversations in real time. Resembling the "universal translator" from Star Trek, this feature will be available on Windows 8 by the end of 2014 as a limited beta. Microsoft has worked on machine translation for 15 years, and translating voice over Skype in real time had once been considered "a nearly impossible task." In the world of technology, however, miracles do happen. This video shows the software in action. According to CEO Satya Nadella, it does more than just automatic speech recognition, machine translation and voice synthesis: it can actually "learn" from different languages, through a brain-like neural net. When you consider that 300 million people are now connecting to Skype each month, making 2 billion minutes of conversation each day, the potential in terms of improved communication is staggering.
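Functionally, the paragraph above describes a three-stage pipeline: speech recognition, then machine translation, then speech synthesis. The sketch below shows that data flow only; the toy stand-ins (a fake recogniser and a two-word dictionary) are assumptions for illustration and bear no relation to Microsoft's actual components.

```python
# Conceptual sketch of a speech-to-speech translation pipeline (ASR -> MT -> TTS).
# The stand-in functions are assumptions used purely to show the data flow.
from typing import Callable

def make_pipeline(recognise: Callable[[bytes], str],
                  translate: Callable[[str], str],
                  synthesise: Callable[[str], bytes]) -> Callable[[bytes], bytes]:
    """Chain the three stages into a single audio-in, audio-out function."""
    def run(audio_in: bytes) -> bytes:
        text = recognise(audio_in)      # source-language transcript
        translated = translate(text)    # target-language text
        return synthesise(translated)   # target-language audio
    return run

# Toy stand-ins to exercise the flow end to end:
fake_recognise = lambda audio: audio.decode("utf-8")
fake_translate = lambda text: " ".join({"hello": "hola", "world": "mundo"}.get(w, w)
                                       for w in text.split())
fake_synthesise = lambda text: text.encode("utf-8")

pipeline = make_pipeline(fake_recognise, fake_translate, fake_synthesise)
print(pipeline(b"hello world"))         # b'hola mundo'
```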

 

 


13th May 2014

The human rights implications of killer robots

Fully autonomous weapons, or “killer robots,” would jeopardise basic human rights, whether used in wartime or for law enforcement, Human Rights Watch said in a report released yesterday, on the eve of the first multilateral meeting on the subject at the United Nations.

 

[Image: killer robot]

 

The 26-page report, “Shaking the Foundations: The Human Rights Implications of Killer Robots,” is the first report to assess in detail the risks posed by these weapons during law enforcement operations – expanding the debate beyond the battlefield. Human Rights Watch found that fully autonomous weapons threaten rights and principles under international law as fundamental as the right to life, the right to a remedy, and the principle of dignity.

“In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” said Steve Goose, arms division director. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”

International debate over fully autonomous weapons has previously focused on their potential role in armed conflict and questions over whether they would comply with international humanitarian law, also called the laws of war. Human Rights Watch, in this new report, examines the potential impact of fully autonomous weapons under human rights law, which applies during peacetime as well as armed conflict.

Nations must adopt a preemptive international ban on these weapons, which could identify and fire on targets without meaningful human intervention, Human Rights Watch said. Countries are pursuing ever-greater autonomy in weapons, and precursors already exist.

 

[Image: robot]

 

The release of the report, co-published with Harvard Law School’s International Human Rights Clinic, coincides with the first ever multilateral meeting on the weapons. Many of the 117 countries that joined the Convention on Conventional Weapons will attend the meeting of experts on lethal autonomous weapons systems at the United Nations in Geneva this week. Members of the convention agreed at their annual meeting in November 2013 to begin work on the issue in 2014.

Human Rights Watch believes the agreement to work on these weapons in the Convention on Conventional Weapons forum could eventually lead to new international law prohibiting fully autonomous weapons. The convention preemptively banned blinding lasers in 1995.

Human Rights Watch is a founding member and coordinator of the Campaign to Stop Killer Robots. This coalition of 51 nongovernmental organisations in two dozen countries calls for a preemptive ban on the development, production, and use of fully autonomous weapons.

 

[Image: Campaign to Stop Killer Robots. © 2013 Campaign to Stop Killer Robots]

 

Human Rights Watch issued its first report on the subject, “Losing Humanity: The Case against Killer Robots,” back in November 2012. In April 2013, Christof Heyns – UN special rapporteur on extrajudicial, summary or arbitrary executions – issued a report citing a range of objections to the weapons, and called for all nations to adopt national moratoria and begin international discussions about how to address them.

Fully autonomous weapons could be prone to killing people unlawfully because these weapons could not be programmed to handle every situation, Human Rights Watch found. According to robot experts, there is little prospect that these weapons would possess human qualities, such as judgment, that facilitate compliance with the right to life in unforeseen situations.

Fully autonomous weapons would also undermine human dignity, Human Rights Watch said. These inanimate machines could not understand or respect the value of life, yet they would have the power to determine when to take it away.

Serious doubts exist about whether there could be meaningful accountability for the actions of a fully autonomous weapon. There would be legal and practical obstacles to holding anyone – a superior officer, programmer, or manufacturer – responsible for a robot’s actions. Both criminal and civil law are ill suited to the task, Human Rights Watch found.

“The accountability gap would weaken deterrence for future violations,” said Bonnie Docherty, senior researcher in the arms division at Human Rights Watch and lecturer at the Harvard clinic as well as author of the report. “It would be very difficult for families to obtain retribution or remedy for the unlawful killing of a relative by such a machine.”

The human rights impacts of killer robots compound a host of other legal, ethical, and scientific concerns – including the potential for an arms race, prospect of proliferation, and questions about their ability to protect civilians adequately on the battlefield or the street, Human Rights Watch found. 

 


11th May 2014

FDA approves the first prosthetic arm controlled by muscle electrical signals

After eight years of development, a new hi-tech bionic arm has become the first of its kind to gain regulatory approval for mass production.

 

[Image: the Luke prosthetic arm]

 

The DEKA Arm System is part of the $100m Revolutionising Prosthetics program launched by the Defense Advanced Research Projects Agency (DARPA). Upper-limb prosthetic technology had for many years lagged behind lower-limb technology and the program sought to address this issue. The DEKA was made possible through a combination of breakthroughs in both engineering and biology, resulting in a bionic arm that offers near-natural control. It is nicknamed "The Luke", after Star Wars' Luke Skywalker who received a robotic replacement for the hand he lost in a fight with Darth Vader.

Simultaneous control of multiple joints is enabled by miniature motors and a variety of input devices, including wireless signals generated by sensors on the user's feet. Constructed from lightweight but strong materials, the battery-powered arm system is of similar size and weight to a real limb and has six user-selectable grips.

During eight years of testing and development, 36 volunteers took part in studies to refine the arm's design. Their feedback helped engineers to create a device – controlled by electrical signals from the user's muscles – that enables amputees to perform a wide range of tasks: preparing food, using locks and keys, opening envelopes, brushing hair, using zippers and feeding themselves, all of which greatly enhances their independence and quality of life.

Similar devices are being developed around the world, but this is the first of its kind to gain approval from the U.S. Food and Drug Administration (FDA). Dr. Geoffrey Ling, Director of DARPA's Biological Technologies Office, comments in a press release: "DARPA is a place where we can bring dreams to life."

 

 


28th April 2014

RoboBees

If bee populations continue to decline, the dystopian future depicted in this video could one day become a reality.

Bees and other pollinating insects play an essential role in ecosystems. A third of all our food depends on their pollination. A world without pollinators would be devastating for food production. Since the late 1990s, beekeepers around the world have observed the mysterious and sudden disappearance of bees, and report unusually high rates of decline in honeybee colonies. Although the exact causes are not yet fully understood, growing evidence suggests that chemical-intensive farming methods and the use of insecticides play a major role. Greenpeace has now launched a campaign demanding urgent action to address this issue – including a ban on the most harmful chemicals, along with increased science funding and more sustainable agricultural practices.

 

 


18th April 2014

Latest version of ASIMO robot makes debut

Honda this week showcased the newest version of ASIMO, the world's most advanced humanoid robot, for the first time in North America, featuring its latest innovations – including the ability to communicate in sign language and to climb stairs without stopping.

 

 

ASIMO – which stands for Advanced Step in Innovative Mobility – was first introduced 14 years ago. Since then, it has made significant advances – including physical improvements like running and hopping on one leg, as well as breakthroughs in dexterity and intelligence, that have furthered Honda's dream of creating humanoid robots to help society.

"This is an exciting project for Honda," said Satoshi Shigemi, senior chief engineer of Honda R&D and the leader of Honda's humanoid robotics program. "Our engineers are working tirelessly to develop new technologies aimed at helping ASIMO work in a real world environment."

The new version of ASIMO has undergone numerous changes to its 4'3", 110-pound body. Developments in the lower body have enhanced stability and balance control, allowing the robot to climb more smoothly, run faster and change directions in a more-controlled fashion.

Enhancements in the upper body include major increases in the degrees of freedom available in the robot's hands. Each hand now contains 13 degrees of freedom, which allows ASIMO to perform many more intricate and precise tasks.

The increased hand dexterity provides additional movement in each finger, which also led to the development of ASIMO's new ability to communicate using both American and Japanese sign language. Force sensors in the robot's hands also provide instantaneous feedback allowing ASIMO to use the appropriate amount of force when performing a task. This allows the robot to pick up paper cups without crushing them, for example, but still allows it to use a stronger force when necessary.
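Honda hasn't published the control law, but the behaviour described above – squeezing just hard enough for the object at hand – can be sketched as a simple proportional force-feedback loop: close the fingers until the measured fingertip force reaches a per-object target, so a paper cup gets a light grip and a sturdier object a firmer one. The gain, step limit and force values below are illustrative assumptions.

```python
# Toy proportional force-feedback grasping loop (illustrative assumptions only).

def grip_step(measured_force_n, target_force_n, grip_position, gain=0.01, max_step=0.002):
    """One control cycle: move the fingers in proportion to the force error."""
    error = target_force_n - measured_force_n
    step = max(-max_step, min(max_step, gain * error))   # clamp for safety
    return grip_position + step

# Example: closing on a paper cup with a low target force of 2 N.
pos = 0.0
for measured in [0.0, 0.5, 1.2, 1.9, 2.0]:   # pretend fingertip readings, in newtons
    pos = grip_step(measured, target_force_n=2.0, grip_position=pos)
print(round(pos, 4))                          # fingers stop advancing once the target force is reached
```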

"It was obvious that overall flexibility was necessary, and many more complex tasks can now be performed because of the improved operational capacity in the hands," Shigemi continued. "But perhaps more importantly, these innovations enhance ASIMO's communication skills, which is essential to interact with human beings."

Advanced technologies derived from research on ASIMO have also benefited other Honda business lines. For example, the Vehicle Stability Assist (VSA) system used in the Honda Civic, along with technologies in Honda's championship-winning MotoGP motorcycles, had their genesis in Honda's robotics research program.

Later this summer, the new ASIMO will follow in the footsteps of its predecessor to become a daily performer at Disneyland's Tomorrowland.

 


10th March 2014

Table tennis: man vs machine

In 1997, Deep Blue became the first computer to win against a human chess champion, when it defeated Garry Kasparov. In 2011, IBM's Watson competed on the Jeopardy! quiz show against former winners Brad Rutter and Ken Jennings, defeating them both. Now, another competition between man and machine is about to unfold. On Tuesday 11th March, KUKA – a German manufacturer of high-end industrial robots – will open its first plant in Shanghai, China. The opening will be celebrated with a table tennis match between its KR AGILUS robot and Timo Boll, the German champion. This event is intended to demonstrate the speed, precision and flexibility of KUKA's industrial robots.

 

 


1st February 2014

Philosophy of mind and robotics with Dan Barry

Dan Barry is an engineer and scientist, currently serving as the Co-Chair of Artificial Intelligence and Robotics at Singularity University. In 2005, he started his own company, Denbar Robotics, that creates robotic assistants for home and commercial use. In 2011 he co-founded 9th Sense, a company that sells telepresence robots. He has seven patents, has published over 50 articles in scientific journals, and is a former NASA astronaut. In this video, Barry asks the question: "How are we going to know that a robot is self-aware?"

 

 


9th January 2014

IBM forms Watson Group to meet growing demand for cognitive innovations

Headquartered in New York City's "Silicon Alley", the new Watson Group formed by IBM will fuel innovative products and startups – introducing cloud solutions to accelerate research, visualise Big Data and enable analytics exploration.

 

 

IBM today announced it will establish the IBM Watson Group, a new business unit dedicated to the development and commercialisation of cloud-delivered cognitive innovations. The move signifies a strategic shift by IBM to accelerate into the marketplace a new class of software, services and apps that can "think", improve by learning, and discover answers and insights to complex questions from massive amounts of Big Data.

IBM will invest more than $1 billion into the Watson Group, focusing on research and development to bring cloud-delivered cognitive applications and services to market. This will include $100 million available for venture investments to support IBM's recently launched ecosystem of start-ups and businesses, which are building a new class of cognitive apps powered by Watson, in the IBM Watson Developers Cloud.

According to technology research firm Gartner, smart machines will be the most disruptive change ever brought about by information technology, and can make people more effective, empowering them to do "the impossible."

The IBM Watson Group will have a new headquarters at 51 Astor Place in New York City's "Silicon Alley" technology hub, leveraging the talents of 2,000 professionals, whose goal is to design, develop and accelerate the adoption of Watson cognitive technologies that transform industries and professions. The new group will tap subject matter experts from IBM's Research, Services, Software and Systems divisions, as well as industry experts who will identify markets that cognitive computing can disrupt and evolve, such as healthcare, financial services, retail, travel and telecommunications.

Nearly three years after its triumph on the TV show Jeopardy!, IBM has advanced Watson from a quiz game innovation into a commercial technology. Now delivered from the cloud and powering new consumer apps, Watson is 24 times faster and 90 percent smaller – IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes.

Named after IBM founder Thomas J. Watson, the machine was developed in IBM’s Research labs. Using natural language processing and analytics, Watson handles information akin to how people think, representing a major shift in the ability to quickly analyse, understand and respond to Big Data. Watson’s ability to answer complex questions in natural language with speed, accuracy and confidence will transform decision making across a range of industries.

"Watson is one of the most significant innovations in IBM's 100 year history, and one that we want to share with the world," says IBM Senior Vice President Mike Rhodin (pictured below), who will lead the group. "These new cognitive computing innovations are designed to augment users’ knowledge – be it the researcher exploring genetic data to create new therapies, or a business executive who needs evidence-based insights to make a crucial decision."

 

[Image: Mike Rhodin, IBM Watson Group]

 


7th January 2014

Intel at CES 2014

At the Consumer Electronics Show (CES) in Las Vegas, Intel Corporation has been showing off its latest innovative technologies. These include an intelligent 3D camera system, a range of new wearable electronics, and a 22nm dual-core PC the size of an SD card.

 

[Image: Intel Edison 22nm dual-core PC]

 

Intel CEO Brian Krzanich has outlined a range of new products, initiatives and strategic relationships aimed at accelerating innovation across a range of mobile and wearable devices. He made the announcements during the pre-show keynote for the 2014 Consumer Electronics Show in Las Vegas, the biggest gathering of the tech industry in the USA.

Krzanich's keynote painted a vision of how the landscape of computing is being re-shaped – one in which security is too important not to be embedded in every device. The world is entering a new era of integrated computing defined not by the device, but by the integration of technology into people's lifestyles in ways that offer new utility and value. As examples, he highlighted several immersive and intuitive technologies that Intel will begin offering in 2014, such as Intel RealSense – hardware and software that will bring human senses to Intel-based devices. This will include 3D cameras that deliver more intelligent experiences – improving the way people learn, collaborate and are entertained.

The first Intel RealSense 3D camera features a best-in-class depth sensor and a full 1080p colour camera. It can detect finger-level movements for highly accurate gesture recognition, and track facial features to interpret movement and emotion. It can also distinguish foregrounds from backgrounds – allowing new forms of control, enhanced interactive augmented reality (AR), simple 3D scanning of objects, and more.
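One of those capabilities – telling foreground from background – follows almost directly from having a per-pixel depth map. The toy sketch below simply thresholds such a map; the depth values and cut-off are arbitrary assumptions and have nothing to do with Intel's actual SDK.

```python
# Toy depth-threshold segmentation: pixels closer than a cut-off count as foreground.
import numpy as np

def foreground_mask(depth_m: np.ndarray, max_depth_m: float = 1.0) -> np.ndarray:
    """Mark valid pixels nearer than max_depth_m as foreground (True)."""
    return (depth_m > 0) & (depth_m < max_depth_m)

depth = np.array([[0.6, 0.7, 3.2],     # a hand at ~0.6-0.8 m in front of
                  [0.8, 2.9, 3.1]])    # a background ~3 m away (made-up values)
print(foreground_mask(depth))
```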

This camera will be integrated into a growing spectrum of Intel-based devices including 2 in 1, tablet, Ultrabook, notebook, and all-in-one (AIO) designs. Systems with the new camera will be available beginning in the second half of 2014 from Acer, Asus, Dell, Fujitsu, HP, Lenovo and NEC.

To advance the computer's "hearing" sense, a new generation of speech recognition technology will be available on a variety of systems. This conversational personal assistant works with popular websites and applications. It comes with selectable personalities, and allows for ongoing dialogue with Intel-based devices. People can simply tell it to play music, get answers, connect with friends and find content – all by using natural language. This assistant is also capable of calendar checks, getting maps and directions, finding flights or booking a dinner reservation. Available offline, people can control their device, dictate notes and more without an Internet connection.

 

 

Krzanich then explained how Intel aims to accelerate wearable device innovation. A number of reference designs were highlighted including: smart earbuds providing biometric and fitness capabilities, a smart headset that is always ready and can integrate with existing personal assistant technologies, a smart wireless charging bowl, a smart baby onesie and a smart bottle warmer that will start warming milk when the onesie senses the baby is awake and hungry.

The smart earbuds (pictured below) provide full stereo audio, monitor heart rate and pulse all while the applications on the user's phone keep track of running distance and calories burned. The product includes software to precision-tune workouts by automatically choosing music that matches the target heart rate profile. As an added bonus, it harvests energy directly from the audio microphone jack, eliminating the need for a battery or additional power source to charge the product.

 

[Image: Intel smart earbuds]

 

The Intel CEO announced collaborations to increase dialogue and cooperation between the fashion and technology industries to explore and bring to market new smart wearable electronics. He also kicked off the Intel "Make it Wearable" challenge – a global effort aimed at accelerating creativity and innovation with technology. This effort will call upon the smartest and most creative minds to consider factors impacting the proliferation of wearable devices and ubiquitous computing, such as meaningful usages, aesthetics, battery life, security and privacy.

In addition to reference designs for wearable technology, Intel will offer a number of accessible, low-cost entry platforms aimed at lowering entry barriers for individuals and small companies, allowing them to create innovative web-connected wearables or other small form factor devices. Underscoring this point, Krzanich announced Intel Edison – a low-power, 22nm-based computer in an SD card form factor with built-in wireless abilities and support for multiple operating systems. From prototype to production, Intel Edison will enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when available this summer.

 

[Image: Intel Edison 22nm dual-core PC]

 

"Wearables are not everywhere today, because they aren't yet solving real problems and they aren't yet integrated with our lifestyles," said Krzanich. "We're focused on addressing this engineering innovation challenge. Our goal is: if something computes and connects, it does it best with Intel inside."

Krzanich also discussed how Intel is addressing a critical issue for the industry as a whole: conflict minerals from the Democratic Republic of the Congo (DRC). Intel has achieved an important milestone: the minerals used in microprocessor silicon and packages manufactured in Intel's factories are now "conflict-free", as confirmed by third-party audits.

"Two years ago, I told several colleagues that we needed a hard goal, a commitment to reasonably conclude that the metals used in our microprocessors are conflict-free," Krzanich said. "We felt an obligation to implement changes in our supply chain to ensure that our business and our products were not inadvertently funding human atrocities in the Democratic Republic of the Congo. Even though we have reached this milestone, it is just a start. We will continue our audits and resolve issues that are found."

 

[Image: Intel conflict minerals]

 
