With many of Earth's metals and minerals facing a supply crunch in the decades ahead, deep ocean mining could provide a way of unlocking major new resources. Amid growing commercial interest, the UN's International Seabed Authority has just issued seven exploration licences.
To build a fantastic utopian future of gleaming eco-cities, flying cars, robots and spaceships, we're going to need metal. A huge amount of it. Unfortunately, our planet is being mined at such a rapid pace that some of the most important elements face critical shortages in the coming decades. These include antimony (2022), silver (2029), lead (2031) and many others. To put the impact of our mining and other activities in perspective: on land, humans are now responsible for moving about ten times as much rock and earth as natural phenomena such as earthquakes, volcanoes and landslides. The UN predicts that on current trends, humanity's annual resource consumption will triple by 2050.
While substitution in the form of alternative metals could help, a longer-term answer is needed. Asteroid mining could eventually provide an abundance from space – but a more immediate, technically viable and commercially attractive solution is likely to arise here on Earth. That's where deep sea mining comes in. Just as offshore oil and gas drilling was developed in response to fossil fuel scarcity on land, the same principle could be applied to unlock massive new metal reserves from the seabed. Oceans cover 71% of the Earth's surface, with vast unexplored areas that may hold a treasure trove of rare and precious ores. Further benefits would include:
• Curbing China's near-monopoly on the industry. As of 2014, the country sits on nearly half the world's known reserves of rare earth metals and produces over 90% of the world's supply.
• Limited social disturbance. Seafloor production will not require the social dislocation and resulting impact on culture or disturbance of traditional lands common to many land-based operations.
• Little production infrastructure. As the deposits are located on the seafloor, production will be limited to a floating ship with little need for additional land-based infrastructure. The concentration of minerals is an order of magnitude higher than typical land-based deposits with a corresponding smaller footprint on the Earth's surface.
• Minimal overburden or stripping. The ore generally occurs directly on the seafloor and will not require large pre-strips or overburden removal.
• Improved worker safety. Operations will be mostly robotic and won't require human exposure to typically dangerous mining or "cutting face" activities. Only a hundred or so people will be employed on the production vessel, with a handful more included in the support logistics.
Interest in deep sea mining first emerged in the 1960s – but consistently low prices of mineral resources at the time halted any serious implementation. By the 2000s, the only resource being mined in bulk was diamonds, and even then, just a few hundred metres below the surface. In recent years, however, there has been renewed interest, due to a combination of rising demand and improvements in exploration technology.
The UN's International Seabed Authority (ISA) was set up to manage these operations and prevent them from descending into a free-for-all. Until 2011, only a handful of exploration permits had been issued – but since then, demand has surged. This week, seven new licences were issued to companies based in Brazil, Germany, India, Russia, Singapore and the UK. The number is expected to reach 26 by the end of 2014, covering a total area of seabed greater than 1.2 million sq km (463,000 sq mi).
Michael Lodge of the ISA told the BBC: "There's definitely growing interest. Most of the latest group are commercial companies so they're looking forward to exploitation in a reasonably short time – this move brings that closer."
So far, only licences for exploration have been issued, but full mining rights are likely to be granted over the next few years. The first commercial activity will take place off the coast of Papua New Guinea, where a Canadian company – Nautilus Minerals – plans to extract copper, gold and silver from hydrothermal vents. After 18 months of delays, this was approved outside the ISA system and is expected to commence in 2016. Nautilus has been developing Seafloor Production Tools (SPTs), the first of which was completed in April. This huge robotic machine is known as the Bulk Cutter and weighs 310 tonnes when fully assembled. The SPTs have been designed to work at depths of 1 mile (1.6 km), but operations as far down as 2.5 miles (4 km) should be possible eventually.
As with any mining activity, concerns have been raised from scientists and conservationists regarding the environmental impact of these plans, but the ISA says it will continue to demand high levels of environmental assessment from its applicants. Looking ahead, analysts believe that deep sea mining could be widespread in many parts of the world by 2040.
A Japanese humanoid robot called Pepper, which its makers claim can read people's emotions, has been unveiled in Tokyo. Telecoms company SoftBank, which created the robot, says Pepper can understand 70 to 80 percent of spontaneous conversations. News agency AFP met the pint-sized chatterbox, who took time out from his day job greeting customers at SoftBank stores.
Developed by Microsoft, Project Adam is a new deep-learning system modelled on the human brain, which its creators say achieves greater image classification accuracy and is 50 times faster than other systems in the industry. The goal of Project Adam is to enable software to visually recognise any object. It is being positioned as a competitor to Google's Brain project, on which Ray Kurzweil is currently working.
The National Museum of Emerging Science and Innovation has today opened a new permanent exhibition entitled, "Android: What is Human?" where visitors can meet the world's most advanced androids – robots which closely resemble humans.
The National Museum of Emerging Science and Innovation, also known simply as the "Miraikan", was created by the Japan Science and Technology Agency. The new exhibition displays three android robots: the recently developed Kodomoroid and Otonaroid – a child android and an adult female android, respectively – and Telenoid, an android designed without individual human physical features. The exhibition is curated by Dr. Hiroshi Ishiguro, a leading android expert who has been studying the question, "What is human?"
Kodomoroid and Otonaroid will attempt to fill human roles as the world's first android announcer and as the Miraikan's android science communicator, respectively. The organisers of the exhibition claim it will be "a unique and rare event" – providing visitors with the opportunity to communicate with and operate these advanced robots, while shedding light on the attributes of humans in contrast with those of androids.
With soft skin made from special silicone and smooth motion made possible by artificial muscles, android robots are becoming increasingly similar to real humans. If an android robot gains the ability to talk and live identically to a human, you may no longer be able to distinguish between androids and humans. If this comes to pass, what would the word "human" mean? What is human? This question has been subject to debate since ancient times, and efforts to find an answer are still being made in all fields, including the humanities, social sciences, and art. Building an android, as Ishiguro explains it, can be described as a process of understanding what makes a human look like a human.
Kodomoroid is a teleoperated android resembling a child. It is a news announcer with potential exceeding that of its human equivalent. It can recite news reports gathered from around the world 24 hours a day, every day, in a variety of voices and languages. In a studio on the museum's third floor, you can watch her deliver news about global issues and weather reports.
Otonaroid is a teleoperated android robot resembling an adult female. She has been hired by the Miraikan as a robot science communicator. At the exhibition, you can talk with her in face-to-face conversations and also operate her movements.
Telenoid is a teleoperated android robot with a minimal design, created as an attempt to embody the minimum physical requirements for human-like communication. At the exhibition, you can talk with it and also operate it.
Researchers are claiming a major breakthrough in artificial intelligence with a machine program that can pass the famous Turing Test.
At the Royal Society in London yesterday, an event called Turing Test 2014 was organised by the University of Reading. This involved a chat program known as Eugene being presented to a panel of judges and trying to convince them it was human. These judges included the actor Robert Llewellyn – who played robot Kryten in sci-fi comedy TV series Red Dwarf – and Lord Sharkey, who led a successful campaign for Alan Turing's posthumous pardon last year. During this competition, which saw five computers taking part, Eugene fooled 33% of human observers into thinking it was a real person as it claimed to be a 13-year-old boy from Odessa in Ukraine.
In 1950, British mathematician and computer scientist Alan Turing published his seminal paper, "Computing Machinery and Intelligence", in which he proposed the now-famous test for artificial intelligence. Turing predicted that by the year 2000, machines with 10⁹ bits of storage (roughly 120 MB) would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.
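Turing's criterion lends itself to a simple tally. The sketch below is purely illustrative (it is not the competition's actual scoring code, and the function names are made up): it computes the fraction of judges fooled and compares it with the 30% threshold. Eugene's 33% corresponds to, say, 10 fooled judges out of 30 – just over the line.

```python
# Illustrative sketch of Turing's pass criterion; not the official
# scoring code used at the Royal Society event.

def fool_rate(verdicts):
    """Fraction of judges who believed the machine was human.

    verdicts: list of booleans, True = judge was fooled.
    """
    return sum(verdicts) / len(verdicts)

def passes_turing_criterion(verdicts, threshold=0.30):
    """Turing's 1950 prediction: the machine 'passes' if it fools
    more than 30% of judges after five-minute text conversations."""
    return fool_rate(verdicts) > threshold

# Example: 10 of 30 judges fooled -> 33%, just clearing the 30% bar.
sample = [True] * 10 + [False] * 20
print(fool_rate(sample))                # prints 0.3333333333333333
print(passes_turing_criterion(sample))  # prints True
```

Note that with this criterion, 9 fooled judges out of 30 (exactly 30%) would not count as a pass, since Turing's figure is a strict threshold.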
In the years since 1950, the test has proven both highly influential and widely criticised. A number of breakthroughs have emerged in recent times from groups claiming to have satisfied the criteria for "artificial intelligence". We have seen Cleverbot, for example, and IBM's Watson, as well as gaming bots and the CAPTCHA-solving Vicarious. It is therefore easy to be sceptical about whether Eugene represents something genuinely new and revolutionary.
Professor Kevin Warwick (who also happens to be the world's first cyborg) comments in a press release from the university: "Some will claim that the Test has already been passed. The words 'Turing Test' have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing's Test was passed for the first time on Saturday."
Vladimir Veselov, one of Eugene's creators and part of the development team, said: "Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year, we improved the 'dialog controller', which makes the conversation far more human-like when compared to programs that just answer questions. Going forward, we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."
Is the Turing Test a reliable indicator of intelligence? Who gets to decide the figure of 30% and what is the significance of this number? Surely imitation and pre-programmed replies cannot qualify as "understanding"? These questions and many others will be asked in the coming days, just as they have been asked following similar breakthroughs in the past. To gain a proper understanding of intelligence, we will need to reverse engineer the brain – something which is very much achievable in the next decade, based on current trends.
Regardless of whether Eugene is a bona fide AI, computing power will continue to grow exponentially in the coming years, with major implications for society in general. Benefits may include a 50% reduction in healthcare costs, as software programs are used for big data management to understand and predict the outcomes of treatment. Call centre staff, already competing with virtual employees today, could be almost fully automated in the 2030s, with zero waiting times for callers seeking help. Self-driving cars and other forms of AI could radically reshape our way of life.
Downsides to AI may include a dramatic rise in unemployment as humans are increasingly replaced by machines. Another big area of concern is security, as Professor Warwick explains: "Having a computer that can trick a human into thinking that someone – or even something – is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."
Further into the future, AI will gain increasingly mobile capabilities, able to learn and become aware of the physical world. No longer restricted to the realms of software and cyberspace, it will occupy hardware that includes machines literally indistinguishable from real people. By then, science fiction will have become reality and our civilisation will enter a profound, world-changing epoch that some have called a technological singularity. If Ray Kurzweil's ultimate prediction is to be believed, our galaxy and perhaps the entire universe may become saturated with intelligence, as formerly lifeless rocks are converted into sentient matter.
At the Code Conference in California, Microsoft has demonstrated Skype Translator – a new technology enabling cross-lingual conversations in real time. Resembling the "universal translator" from Star Trek, this feature will be available on Windows 8 by the end of 2014 as a limited beta. Microsoft has worked on machine translation for 15 years, and translating voice over Skype in real time had once been considered "a nearly impossible task." In the world of technology, however, miracles do happen. This video shows the software in action. According to CEO Satya Nadella, it does more than just automatic speech recognition, machine translation and voice synthesis: it can actually "learn" from different languages, through a brain-like neural net. When you consider that 300 million people are now connecting to Skype each month, making 2 billion minutes of conversation each day, the potential in terms of improved communication is staggering.
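The three stages Nadella mentions – speech recognition, machine translation and voice synthesis – chain together as a pipeline. The sketch below is a hypothetical illustration of that composition only: the function names, the fixed transcript and the toy word-for-word dictionary are stand-ins, not Microsoft's actual APIs or methods (real systems translate whole sentences with statistical or neural models, not word lookups).

```python
# Illustrative three-stage pipeline: speech -> text -> translated
# text -> synthesised voice. All names and data here are made up.

def recognise_speech(audio):
    # A real system would run an acoustic model over the audio;
    # we return a fixed transcript for illustration.
    return "hello world"

def translate(text, source="en", target="de"):
    # Toy word-for-word lookup, standing in for real machine
    # translation (which models whole sentences in context).
    toy_dict = {"hello": "hallo", "world": "welt"}
    return " ".join(toy_dict.get(word, word) for word in text.split())

def synthesise_voice(text):
    # Real text-to-speech would return audio; we return a tagged string.
    return f"<audio:{text}>"

def translate_call(audio):
    """Chain the three stages, as in a real-time translated call."""
    return synthesise_voice(translate(recognise_speech(audio)))

print(translate_call(b"..."))  # prints <audio:hallo welt>
```

The "learning" Nadella refers to sits inside the first two stages, where neural networks are trained on large volumes of speech and parallel text.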
Fully autonomous weapons, or “killer robots,” would jeopardise basic human rights, whether used in wartime or for law enforcement, Human Rights Watch said in a report released yesterday, on the eve of the first multilateral meeting on the subject at the United Nations.
The 26-page report, “Shaking the Foundations: The Human Rights Implications of Killer Robots,” is the first report to assess in detail the risks posed by these weapons during law enforcement operations – expanding the debate beyond the battlefield. Human Rights Watch found that fully autonomous weapons threaten rights and principles under international law as fundamental as the right to life, the right to a remedy, and the principle of dignity.
“In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” said Steve Goose, arms division director. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”
International debate over fully autonomous weapons has previously focused on their potential role in armed conflict and questions over whether they would comply with international humanitarian law, also called the laws of war. Human Rights Watch, in this new report, examines the potential impact of fully autonomous weapons under human rights law, which applies during peacetime as well as armed conflict.
Nations must adopt a preemptive international ban on these weapons, which could identify and fire on targets without meaningful human intervention, Human Rights Watch said. Countries are pursuing ever-greater autonomy in weapons, and precursors already exist.
The release of the report, co-published with Harvard Law School’s International Human Rights Clinic, coincides with the first ever multilateral meeting on the weapons. Many of the 117 countries that joined the Convention on Conventional Weapons will attend the meeting of experts on lethal autonomous weapons systems at the United Nations in Geneva this week. Members of the convention agreed at their annual meeting in November 2013 to begin work on the issue in 2014.
Human Rights Watch believes the agreement to work on these weapons in the Convention on Conventional Weapons forum could eventually lead to new international law prohibiting fully autonomous weapons. The convention preemptively banned blinding lasers in 1995.
Human Rights Watch is a founding member and coordinator of the Campaign to Stop Killer Robots. This coalition of 51 nongovernmental organisations in two dozen countries calls for a preemptive ban on the development, production, and use of fully autonomous weapons.
Human Rights Watch issued its first report on the subject, “Losing Humanity: The Case against Killer Robots,” back in November 2012. In April 2013, Christof Heyns – UN special rapporteur on extrajudicial, summary or arbitrary executions – issued a report citing a range of objections to the weapons, and called for all nations to adopt national moratoria and begin international discussions about how to address them.
Fully autonomous weapons could be prone to killing people unlawfully because these weapons could not be programmed to handle every situation, Human Rights Watch found. According to robot experts, there is little prospect that these weapons would possess human qualities, such as judgment, that facilitate compliance with the right to life in unforeseen situations.
Fully autonomous weapons would also undermine human dignity, Human Rights Watch said. These inanimate machines could not understand or respect the value of life, yet they would have the power to determine when to take it away.
Serious doubts exist about whether there could be meaningful accountability for the actions of a fully autonomous weapon. There would be legal and practical obstacles to holding anyone – a superior officer, programmer, or manufacturer – responsible for a robot’s actions. Both criminal and civil law are ill suited to the task, Human Rights Watch found.
“The accountability gap would weaken deterrence for future violations,” said Bonnie Docherty, senior researcher in the arms division at Human Rights Watch and lecturer at the Harvard clinic as well as author of the report. “It would be very difficult for families to obtain retribution or remedy for the unlawful killing of a relative by such a machine.”
The human rights impacts of killer robots compound a host of other legal, ethical, and scientific concerns – including the potential for an arms race, prospect of proliferation, and questions about their ability to protect civilians adequately on the battlefield or the street, Human Rights Watch found.
After eight years of development, a new hi-tech bionic arm has become the first of its kind to gain regulatory approval for mass production.
The DEKA Arm System is part of the $100m Revolutionising Prosthetics program launched by the Defense Advanced Research Projects Agency (DARPA). Upper-limb prosthetic technology had for many years lagged behind lower-limb technology and the program sought to address this issue. The DEKA was made possible through a combination of breakthroughs in both engineering and biology, resulting in a bionic arm that offers near-natural control. It is nicknamed "The Luke", after Star Wars' Luke Skywalker who received a robotic replacement for the hand he lost in a fight with Darth Vader.
Simultaneous control of multiple joints is enabled by miniature motors and a variety of input devices, including wireless signals generated by sensors on the user's feet. Constructed from lightweight but strong materials, the battery-powered arm system is of similar size and weight to a real limb and has six user-selectable grips.
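As a rough illustration of how discrete foot-sensor signals might drive grip selection, the sketch below cycles through a list of grips. The gesture names and grip labels are hypothetical stand-ins, not DEKA's actual control scheme; the only detail taken from the source is that the arm offers six user-selectable grips.

```python
# Hypothetical mapping from foot-sensor gestures to grip selection.
# Gesture names and grip labels are invented for illustration.

GRIPS = ["power", "tool", "pinch-open", "pinch-closed", "lateral", "chuck"]

def select_grip(current_index, foot_signal):
    """Cycle through the six grips in response to a foot gesture."""
    if foot_signal == "toe-tap":
        return (current_index + 1) % len(GRIPS)
    if foot_signal == "heel-tap":
        return (current_index - 1) % len(GRIPS)
    return current_index  # unrecognised signal: keep the current grip

idx = 0
idx = select_grip(idx, "toe-tap")
print(GRIPS[idx])  # prints tool
```

The modulo arithmetic simply wraps the selection around, so repeated taps step through all six grips and back to the first.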
During eight years of testing and development, 36 volunteers took part in studies to refine the arm's design. Their feedback helped engineers to create a mind-controlled device enabling amputees to perform a wide range of tasks – preparing food, using locks and keys, opening envelopes, brushing hair, using zippers and feeding themselves – greatly enhancing their independence and quality of life.
Similar devices are being developed around the world, but this is the first of its kind to gain approval from the U.S. Food and Drug Administration (FDA). Dr. Geoffrey Ling, Director of DARPA's Biological Technologies Office, comments in a press release: "DARPA is a place where we can bring dreams to life."
If bee populations continue to decline, the dystopian future depicted in this video could one day become a reality.
Bees and other pollinating insects play an essential role in ecosystems. A third of all our food depends on their pollination. A world without pollinators would be devastating for food production. Since the late 1990s, beekeepers around the world have observed the mysterious and sudden disappearance of bees, and report unusually high rates of decline in honeybee colonies. Although the exact causes are not yet fully understood, growing evidence suggests that chemical-intensive farming methods and the use of insecticides play a major role. Greenpeace has now launched a campaign demanding urgent action to address this issue – including a ban on the most harmful chemicals, along with increased science funding and more sustainable agricultural practices.
Honda this week showcased the newest version of ASIMO, the world's most advanced humanoid robot, for the first time in North America, featuring its latest innovations – including the ability to communicate in sign language and to climb stairs without stopping.
ASIMO – which stands for Advanced Step in Innovative Mobility – was first introduced 14 years ago. Since then, it has made significant advances – including physical improvements like running and hopping on one leg, as well as breakthroughs in dexterity and intelligence – that have furthered Honda's dream of creating humanoid robots to help society.
"This is an exciting project for Honda," said Satoshi Shigemi, senior chief engineer of Honda R&D and the leader of Honda's humanoid robotics program. "Our engineers are working tirelessly to develop new technologies aimed at helping ASIMO work in a real world environment."
The new version of ASIMO has undergone numerous changes to its 4'3", 110-pound body. Developments in the lower body have enhanced stability and balance control, allowing the robot to climb stairs more smoothly, run faster and change directions in a more controlled fashion.
Enhancements in the upper body include major increases in the degrees of freedom available in the robot's hands. Each hand now contains 13 degrees of freedom, which allows ASIMO to perform many more intricate and precise tasks.
The increased hand dexterity provides additional movement in each finger, which also led to the development of ASIMO's new ability to communicate using both American and Japanese sign language. Force sensors in the robot's hands also provide instantaneous feedback allowing ASIMO to use the appropriate amount of force when performing a task. This allows the robot to pick up paper cups without crushing them, for example, but still allows it to use a stronger force when necessary.
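Force-limited grasping of the kind described above can be sketched as a simple feedback loop: ramp the commanded grip force until the hand's force sensor reports a target contact force, and never exceed a hard ceiling. The function names and numbers below are hypothetical illustrations, not Honda's control code.

```python
# Illustrative force-limited grasp: stop squeezing once the sensor
# reports enough contact force, so a paper cup is not crushed.
# All names and values are made up for illustration.

def grip(target_force, max_force, read_sensor, step=0.1):
    """Ramp the commanded force until the measured contact force
    reaches target_force, capped at max_force.

    read_sensor maps a commanded force to a measured contact force.
    """
    command = 0.0
    while command < max_force:
        if read_sensor(command) >= target_force:
            return command  # enough grip: stop ramping
        command += step
    return max_force  # hard ceiling reached

# A compliant object: measured force simply tracks the command.
paper_cup = lambda f: f
print(round(grip(target_force=0.5, max_force=5.0, read_sensor=paper_cup), 1))
```

A stiffer object would register the target force at a much lower command, so the same loop naturally applies stronger force only when the object can take it.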
"It was obvious that overall flexibility was necessary, and many more complex tasks can now be performed because of the improved operational capacity in the hands," Shigemi continued. "But perhaps more importantly, these innovations enhance ASIMO's communication skills, which is essential to interact with human beings."
Advanced technologies derived from research on ASIMO have also benefited other Honda business lines. For example, the Vehicle Stability Assist (VSA) system used in the Honda Civic and technologies in the championship-winning Honda MotoGP motorcycles had their genesis in Honda's robotics research program.
Later this summer, the new ASIMO will follow in the footsteps of its predecessor to become a daily performer at Disneyland's Tomorrowland.
In 1997, Deep Blue became the first computer to win against a human chess champion, when it defeated Garry Kasparov. In 2011, IBM's Watson competed on the Jeopardy! quiz show against former winners Brad Rutter and Ken Jennings, defeating them both. Now, another competition between man and machine is about to unfold. On Tuesday 11th March, KUKA – a German manufacturer of high-end industrial robots – will open its first plant in Shanghai, China. The opening will be celebrated with a table tennis match between their KR AGILUS robot and Timo Boll, the German champion. This event is intended to demonstrate the speed, precision and flexibility of KUKA's industrial robots.
Dan Barry is an engineer and scientist, currently serving as the Co-Chair of Artificial Intelligence and Robotics at Singularity University. In 2005, he started his own company, Denbar Robotics, that creates robotic assistants for home and commercial use. In 2011 he co-founded 9th Sense, a company that sells telepresence robots. He has seven patents, has published over 50 articles in scientific journals, and is a former NASA astronaut. In this video, Barry asks the question: "How are we going to know that a robot is self-aware?"
Headquartered in New York City's "Silicon Alley", the new Watson Group formed by IBM will fuel innovative products and startups – introducing cloud solutions to accelerate research, visualise Big Data and enable analytics exploration.
IBM today announced it will establish the IBM Watson Group, a new business unit dedicated to the development and commercialisation of cloud-delivered cognitive innovations. The move signifies a strategic shift by IBM to accelerate into the marketplace a new class of software, services and apps that can "think", improve by learning, and discover answers and insights to complex questions from massive amounts of Big Data.
IBM will invest more than $1 billion into the Watson Group, focusing on research and development to bring cloud-delivered cognitive applications and services to market. This will include $100 million available for venture investments to support IBM's recently launched ecosystem of start-ups and businesses, which are building a new class of cognitive apps powered by Watson, in the IBM Watson Developers Cloud.
According to technology research firm Gartner, smart machines will be the most disruptive change ever brought about by information technology, and can make people more effective, empowering them to do "the impossible."
The IBM Watson Group will have a new headquarters at 51 Astor Place in New York City's "Silicon Alley" technology hub, leveraging the talents of 2,000 professionals, whose goal is to design, develop and accelerate the adoption of Watson cognitive technologies that transform industries and professions. The new group will tap subject matter experts from IBM's Research, Services, Software and Systems divisions, as well as industry experts who will identify markets that cognitive computing can disrupt and evolve, such as healthcare, financial services, retail, travel and telecommunications.
Nearly three years after its triumph on the TV show Jeopardy!, IBM has advanced Watson from a quiz game innovation into a commercial technology. Now delivered from the cloud and powering new consumer apps, Watson is 24 times faster and 90 percent smaller – IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes.
Named after IBM founder Thomas J. Watson, the machine was developed in IBM’s Research labs. Using natural language processing and analytics, Watson handles information akin to how people think, representing a major shift in the ability to quickly analyse, understand and respond to Big Data. Watson’s ability to answer complex questions in natural language with speed, accuracy and confidence will transform decision making across a range of industries.
"Watson is one of the most significant innovations in IBM's 100 year history, and one that we want to share with the world," says IBM Senior Vice President Mike Rhodin (pictured below), who will lead the group. "These new cognitive computing innovations are designed to augment users’ knowledge – be it the researcher exploring genetic data to create new therapies, or a business executive who needs evidence-based insights to make a crucial decision."
At the Consumer Electronics Show (CES) in Las Vegas, Intel Corporation has been showing off its latest innovative technologies. These include an intelligent 3D camera system, a range of new wearable electronics, and a 22nm dual-core PC the size of an SD card.
Intel CEO Brian Krzanich has outlined a range of new products, initiatives and strategic relationships aimed at accelerating innovation across a range of mobile and wearable devices. He made the announcements during the pre-show keynote for the 2014 Consumer Electronics Show in Las Vegas, the biggest gathering of the tech industry in the USA.
Krzanich's keynote painted a vision of a computing landscape being re-shaped – one in which security is too important not to be embedded in all devices. The world is entering a new era of integrated computing, defined not by the device but by the integration of technology into people's lifestyles in ways that offer new utility and value. As examples, he highlighted several immersive and intuitive technologies that Intel will begin offering in 2014, such as Intel RealSense – hardware and software that will bring human senses to Intel-based devices. This will include 3D cameras that deliver more intelligent experiences – improving the way people learn, collaborate and are entertained.
The first Intel RealSense 3D camera features a best-in-class depth sensor and a full 1080p colour camera. It can detect finger-level movements, enabling highly accurate gesture recognition, and can track facial features to interpret movement and emotion. It can also distinguish foregrounds from backgrounds to enable gesture control, enhance interactive augmented reality (AR), scan items in three dimensions, and more.
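The foreground/background separation a depth sensor makes possible can be illustrated with a simple rule: keep the pixels nearer than a cut-off distance. The sketch below is a toy example – the tiny depth map and the cut-off value are invented, and a real RealSense frame would be a full 2-D array delivered by the camera's SDK.

```python
# Illustrative depth-threshold segmentation: pixels closer than the
# cut-off (in metres) are treated as foreground. The depth map below
# is made up for illustration.

def foreground_mask(depth_map, cutoff_m=1.0):
    """Return a mask with True wherever the pixel is nearer than cutoff_m."""
    return [[d < cutoff_m for d in row] for row in depth_map]

depth = [
    [0.6, 0.6, 2.5],   # e.g. a hand at 0.6 m, a wall at 2.5 m
    [0.7, 2.5, 2.5],
]
print(foreground_mask(depth))
# prints [[True, True, False], [True, False, False]]
```

Real systems refine this with smoothing and edge-aware filtering, but the depth cut-off is the core idea that lets background pixels be replaced or ignored.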
This camera will be integrated into a growing spectrum of Intel-based devices including 2 in 1, tablet, Ultrabook, notebook, and all-in-one (AIO) designs. Systems with the new camera will be available beginning in the second half of 2014 from Acer, Asus, Dell, Fujitsu, HP, Lenovo and NEC.
To advance the computer's "hearing" sense, a new generation of speech recognition technology will be available on a variety of systems. This conversational personal assistant works with popular websites and applications. It comes with selectable personalities, and allows for ongoing dialogue with Intel-based devices. People can simply tell it to play music, get answers, connect with friends and find content – all by using natural language. The assistant is also capable of calendar checks, getting maps and directions, finding flights or booking a dinner reservation. Because it works offline, people can control their device, dictate notes and more without an Internet connection.
Krzanich then explained how Intel aims to accelerate wearable device innovation. A number of reference designs were highlighted including: smart earbuds providing biometric and fitness capabilities, a smart headset that is always ready and can integrate with existing personal assistant technologies, a smart wireless charging bowl, a smart baby onesie and a smart bottle warmer that will start warming milk when the onesie senses the baby is awake and hungry.
The smart earbuds (pictured below) provide full stereo audio, monitor heart rate and pulse all while the applications on the user's phone keep track of running distance and calories burned. The product includes software to precision-tune workouts by automatically choosing music that matches the target heart rate profile. As an added bonus, it harvests energy directly from the audio microphone jack, eliminating the need for a battery or additional power source to charge the product.
The Intel CEO announced collaborations to increase dialogue and cooperation between the fashion and technology industries to explore and bring to market new smart wearable electronics. He also kicked off the Intel "Make it Wearable" challenge – a global effort aimed at accelerating creativity and innovation with technology. This effort will call upon the smartest and most creative minds to consider factors impacting the proliferation of wearable devices and ubiquitous computing, such as meaningful usages, aesthetics, battery life, security and privacy.
In addition to reference designs for wearable technology, Intel will offer a number of accessible, low-cost entry platforms aimed at lowering entry barriers for individuals and small companies, allowing them to create innovative web-connected wearables or other small form factor devices. Underscoring this point, Krzanich announced Intel Edison – a low-power, 22nm-based computer in an SD card form factor with built-in wireless abilities and support for multiple operating systems. From prototype to production, Intel Edison will enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when available this summer.
"Wearables are not everywhere today, because they aren't yet solving real problems and they aren't yet integrated with our lifestyles," said Krzanich. "We're focused on addressing this engineering innovation challenge. Our goal is: if something computes and connects, it does it best with Intel inside."
Krzanich also discussed how Intel is addressing a critical issue for the industry as a whole: conflict minerals from the Democratic Republic of the Congo (DRC). Intel has achieved a major milestone: the minerals used in microprocessor silicon and packages manufactured in Intel's factories are now "conflict-free", as confirmed by third-party audits.
"Two years ago, I told several colleagues that we needed a hard goal, a commitment to reasonably conclude that the metals used in our microprocessors are conflict-free," Krzanich said. "We felt an obligation to implement changes in our supply chain to ensure that our business and our products were not inadvertently funding human atrocities in the Democratic Republic of the Congo. Even though we have reached this milestone, it is just a start. We will continue our audits and resolve issues that are found."