20th November 2015
Self-healing sensor brings 'electronic skin' closer to reality
Scientists have developed a self-healing, flexible sensor that mimics the healing properties of human skin. Cuts or scratches to the sensor "heal" themselves in less than a day.
Flexible sensors have been developed for use in consumer electronics, robotics, health care, and space flight. Future possible applications could include the creation of ‘electronic skin’ and prosthetic limbs that allow wearers to ‘feel’ changes in their environments.
One problem with current flexible sensors, however, is that they can be easily scratched or otherwise damaged, potentially destroying their functionality. Researchers in the Department of Chemical Engineering at the Technion – Israel Institute of Technology in Haifa (Israel), inspired by the healing properties of human skin, have developed materials that can be integrated into flexible devices to "heal" incidental scratches or damaging cuts that might compromise device functionality. The advance uses a new kind of synthetic polymer (a polymer being a large molecule composed of many repeated smaller units) with self-healing properties that mimic human skin, meaning that e-skin "wounds" can "heal" themselves in a remarkably short time – less than a day.
A paper outlining the characteristics and applications of the unique, self-healing sensor has been published in the current issue of Advanced Materials.
“The vulnerability of flexible sensors used in real-world applications calls for the development of self-healing properties similar to how human skin heals,” said self-healing sensor co-developer Professor Hossam Haick. “Accordingly, we have developed a complete, self-healing device in the form of a bendable and stretchable chemiresistor where every part – no matter where the device is cut or scratched – is self-healing.”
The new sensor comprises a self-healing substrate, high-conductivity electrodes, and molecularly modified gold nanoparticles. "The gold particles on top of the substrate and between the self-healing electrodes are able to 'heal' cracks that could completely disconnect electrical connectivity," explains Prof. Haick.
Once healed, the polymer substrate of the self-healing sensor demonstrates sensitivity to volatile organic compounds (VOCs), with detection capability down to tens of parts per billion. It also demonstrates superior healability at temperatures ranging from -20 degrees C to 40 degrees C. This property, said the researchers, could extend applications of the self-healing sensor to regions of the world with extreme climates. From sub-freezing cold to equatorial heat, the self-healing sensor is environment-stable.
The healing polymer works quickest, said the researchers, when the temperature is between 0 degrees C and 10 degrees C, when moisture condenses and is then absorbed by the substrate. Condensation makes the substrate swell, allowing the polymer chains to begin to flow freely and, in effect, begin "healing." Once healed, the nonbiological chemiresistor still has high sensitivity to touch, pressure and strain, which the researchers verified in demanding stretching and bending tests.
Another unique feature is that the electrode resistance increases after healing, and the sensor can survive 20 or more cutting/healing cycles. Essentially, healing makes the self-healing sensor even stronger. The researchers noted in their paper that "the healing efficiency of this chemiresistor is so high that the sensor survived several cuttings at random positions."
The researchers are currently experimenting with carbon-based self-healing composites and self-healing transistors.
“The self-healing sensor raises expectations that flexible devices might someday be self-administered, which increases their reliability,” explained co-developer Dr. Tan-Phat Huynh, also of the Technion, whose work focuses on the development of self-healing electronic skin. “One day, the self-healing sensor could serve as a platform for biosensors that monitor human health using electronic skin.”
9th November 2015
Fastest ever brain-computer interface for spelling
Researchers in China have achieved high-speed spelling with a noninvasive brain-computer interface.
Brain–computer interfaces (BCI) are a relatively new and emerging technology allowing direct communication between the brain and an external device. They are used for assisting, augmenting, or repairing cognitive or sensory-motor functions. Research on BCIs began in the 1970s and the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
The past 20 years have seen major progress in BCIs. However, they are still limited by low communication rates, caused by interference from spontaneous electroencephalography (EEG) signals. Now, a team of researchers from Tsinghua University in China, the State Key Laboratory of Integrated Optoelectronics, the Institute of Semiconductors (IOS), and the Chinese Academy of Sciences has developed a greatly improved system. Their EEG-based BCI speller can achieve information transfer rates (ITRs) of 60 characters (∼12 words) per minute – by far the highest ever reported for BCI spellers, whether noninvasive or invasive. In some of the tests, they reached up to 5.32 bits per second. For comparison, most other systems in recent years have achieved ITRs of only around 1 or 2 bits per second.
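The figures quoted above are mutually consistent under the standard (Wolpaw) definition of ITR, which converts the number of possible targets, selection accuracy and selection rate into bits per second. As a rough illustration – not the authors' own code – a 40-target speller at perfect accuracy and one selection per second yields log2(40) ≈ 5.32 bits per second:

```python
import math

def itr_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer rate per selection, in bits."""
    if accuracy >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

def itr_bits_per_second(n_targets: int, accuracy: float,
                        selections_per_minute: float) -> float:
    """Scale per-selection ITR by the selection rate."""
    return itr_bits_per_selection(n_targets, accuracy) * selections_per_minute / 60.0

# 40-character speller, perfect accuracy, 60 selections per minute:
print(round(itr_bits_per_second(40, 1.0, 60), 2))  # → 5.32
```

Imperfect accuracy lowers the rate: at 90% accuracy the same speller transfers noticeably fewer bits per selection, which is why the high single-trial consistency reported by the team matters.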
According to the researchers, they achieved this via an extremely high consistency of frequency and phase between the visual flickering signals and the elicited single-trial steady-state visual evoked potentials. Specifically, they developed a new joint frequency-phase modulation (JFPM) method to tag 40 characters with 0.5-second-long flickering signals, and created a user-specific target identification algorithm using individual calibration data. A paper describing this breakthrough appears in the 3rd November edition of the journal Proceedings of the National Academy of Sciences (PNAS).
In the not-too-distant future, this kind of technology could be applied to other uses, besides medicine. For example, it could be incorporated into smartphones and other consumer electronics to allow texting, typing or other on-screen actions by thought power alone. A partnership between the Japanese government and private sector aims to achieve this by 2020. With continued progress in the speed of BCIs, a new form of "virtual telepathy" could emerge within a few decades.
5th November 2015
First AI-based scientific search engine will accelerate research process
A new search engine – Semantic Scholar – uses artificial intelligence to transform the research process for computer scientists.
The Allen Institute for Artificial Intelligence (AI2) this week launches its free Semantic Scholar service, which allows scientific researchers to quickly sift through the millions of scientific papers published each year and find those most relevant to their work. Leveraging AI2's expertise in data mining, natural-language processing and computer vision, Semantic Scholar provides an AI-enhanced way to quickly search and discover information. At launch, the system searches over three million computer science papers, and will add new scientific categories on an ongoing basis.
"No one can keep up with the explosive growth of scientific literature," said Dr. Oren Etzioni, CEO at AI2. "Which papers are most relevant? Which are considered the highest quality? Is anyone else working on this specific or related problem? Now, researchers can begin to answer these questions in seconds, speeding research and solving big problems faster."
With Semantic Scholar, computer scientists can:
• Home in quickly on what they are looking for, with advanced selection tools. Researchers can filter results by author, publication, topic, and date published. This gets the most relevant result in the fastest way possible, and reduces information overload.
• Instantly access a paper's figures and tables. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
• Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
• Be prompted with key phrases within each paper, to winnow the search further.
Using machine reading and vision methods, Semantic Scholar crawls the web – finding all PDFs of publicly available papers on computer science topics – extracting both text and diagrams/captions, and indexing it all for future contextual retrieval. Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of study and how influential its citations are. It provides the scientist with a simple user interface (optimised for mobile) that maps to academic researchers' expectations. Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
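The filter-and-rank workflow described above can be illustrated with a toy in-memory index. This is purely a hypothetical sketch – the field names, example papers and rank-by-citations heuristic are illustrative assumptions, not Semantic Scholar's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    """Minimal metadata record, standing in for an indexed PDF."""
    title: str
    authors: list
    year: int
    topics: list
    citations: int = 0

def search(papers, query=None, author=None, year_from=None, topic=None):
    """Apply the metadata filters described in the article, then
    rank the survivors by citation count as a crude influence proxy."""
    results = papers
    if query:
        results = [p for p in results if query.lower() in p.title.lower()]
    if author:
        results = [p for p in results if author in p.authors]
    if year_from:
        results = [p for p in results if p.year >= year_from]
    if topic:
        results = [p for p in results if topic in p.topics]
    return sorted(results, key=lambda p: p.citations, reverse=True)

# Hypothetical example papers:
papers = [
    Paper("Deep Parsing of Figures", ["A. Smith"], 2014, ["vision"], 120),
    Paper("Neural Topic Models", ["B. Jones"], 2015, ["nlp"], 45),
]
print(search(papers, topic="vision")[0].title)  # → Deep Parsing of Figures
```

The real system layers natural-language processing and citation analysis on top of this kind of structured filtering, but the user-facing behaviour – narrow by author, date, topic, then surface the most influential results – follows the same shape.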
Only a small number of free academic search engines are currently in widespread use. Google Scholar is by far the largest, with 100 million documents. However, researchers have noted problems with the current generation of these search engines.
"A significant proportion of the documents are not scholarly by anyone's measure," says Péter Jacsó, an information scientist at the University of Hawaii who identified a series of basic errors in search results from Google Scholar. While some of the issues have recently been fixed, says Jacsó, "there are still millions and millions of errors."
"Google has access to a lot of data. But there's still a step forward that needs to be taken in understanding the content of the paper," says Jose Manuel Gomez-Perez, who works on search engines and is director of research and development in Madrid for the software company Expert System.
Semantic Scholar builds on the foundation of current research paper search engines, adding AI methods to overcome information overload and paving the way for even more advanced and intelligent algorithms in the future.
"What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years' time, AI will be able to read – and more importantly, understand – scientific text," says Etzioni. "These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed. AI-based discovery engines will help find the answers to science's thorniest problems."
17th October 2015
World's first community-wide 10 gigabit Internet
The world's fastest broadband service is now available in Tennessee, USA, offering up to 10Gbps.
Chattanooga, the fourth largest city in the state of Tennessee, became known as "Gig City" for being the first in the USA to build a community-wide fibre optic network delivering 1 gigabit (1,000 Mbps) Internet. This week, the city announced an even bigger milestone as public utility EPB is now offering the world's first 10 gigabit (10 Gbps) broadband across a large community-wide territory. Unlike point-to-point commercial installations, which have been possible for some time, EPB's 10 Gig service is now available for access by every home and business in a 600 square mile area, via a new generation of fibre access technology known as Time and Wavelength-Division Multiplexed Passive Optical Network (TWDM-PON), provided by Alcatel-Lucent.
Alcatel-Lucent’s pioneering new solution is the world’s most advanced ultra-broadband technology, because it delivers scalability across a large region including urban, suburban and rural homes and businesses. Residents everywhere in EPB’s service area can obtain it for $299/month with free installation, no contracts and no cancellation fees.
“Five years ago, Chattanooga and Hamilton County became the first in the USA to offer up to 1 Gig Internet speeds,” said Harold DePriest, CEO of EPB. “Today, we become the first community in the world capable of delivering up to 10 Gigs to all 170,000 households and businesses in our service area.”
EPB is also launching 5 Gig and 10 Gig Internet products for small businesses as well as 3 Gig, 5 Gig and 10 Gig “Professional” products for larger enterprises. These Internet services are available at varying price points.
“Chattanooga’s 10 Gig fibre optic network is a world-class platform for innovation,” DePriest said. “In recent years, the need for faster Internet speeds has increased rapidly. Chattanooga is the perfect place for companies to enhance their productivity today and test the applications everyone in the country will want tomorrow.”
For companies that need to upload and download huge files – including ventures involved in 3D printing, film production, gaming, medical imaging, advanced software development, big data, etc. – Chattanooga offers a unique opportunity to dramatically increase productivity and workflow whether employees are working from home or the office.
“Chattanooga is a city ready to compete in the 21st Century innovation economy,” said Mayor Andy Berke. “The 1 gigabit service has already played a pivotal role in transforming our city, attracting new businesses and providing our residents with affordable high-speed connectivity. The 10 Gig offering will continue to grow wages, diversify our local economy and propel Chattanooga as a centre for technology and invention.”
Chattanooga’s fibre optic network has produced tangible results. A recent study by the University of Tennessee shows the “Gig Network” helped the city to generate at least 2,800 new jobs and over $865 million in economic and social benefits. The study also found that EPB's smart grid – the cornerstone of the fibre optic network – has avoided 125 million minutes of electric service interruptions by automatically re-routing power (often in under a second) to prevent an outage or dramatically reduce outage durations.
Chattanooga has emerged as a role model for other cities across the nation, illustrating the benefits that high-speed Internet can bring. The likes of Comcast and Time Warner Cable have tried – and failed – to kill proposals by the city to expand its public broadband networks, with regulator the FCC ruling against these larger ISPs.
For comparison, the average connection in the USA is currently estimated at 11.9Mbps, while South Korea has the world's highest average at 23.6Mbps and the global average recently surpassed 4Mbps. Internet speeds – like many areas of computing – are an example of an exponentially improving technology. Based on current trends in technology and price performance, FutureTimeline predicts that terabit connections (1Tbps) will be commonplace by the early 2030s.
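That terabit prediction can be sanity-checked with a simple compound-growth sketch. Assuming, purely for illustration, that top-tier speeds double roughly every 2.5 years (broadly in line with historical bandwidth trends), going from 2015's 10 Gbps to 1 Tbps takes about 17 years:

```python
import math

def years_to_reach(current_gbps: float, target_gbps: float,
                   doubling_period_years: float) -> float:
    """Years needed for speed to grow from current to target,
    given a fixed doubling period."""
    doublings = math.log2(target_gbps / current_gbps)
    return doublings * doubling_period_years

# 10 Gbps (2015) to 1 Tbps (1,000 Gbps), doubling every ~2.5 years:
years = years_to_reach(10, 1000, 2.5)
print(2015 + round(years))  # → 2032
```

The assumed doubling period is the illustrative variable here; a faster cadence pulls the date earlier, a slower one pushes it later, but values in this range land in the early 2030s.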
11th October 2015
Smartphone app fixes 10,000 problems in Detroit
A smartphone app launched in Detroit earlier this year has vastly improved the reporting and fixing of neighbourhood problems.
Six months ago this week, the city of Detroit launched a new smartphone app – Improve Detroit – which has been downloaded by over 6,500 residents. More than 10,000 complaints made via the app have been closed since April. The average time to close a case is nine days, a vast improvement from when problems often languished for years.
Residents have used the app to get:
• More than 3,000 illegal dumping sites cleaned up
• 2,092 potholes repaired
• 991 complaints resolved related to running water in an abandoned structure
• 565 abandoned vehicles removed
• 506 water main breaks taken care of
• 277 traffic signal issues fixed
“The Improve Detroit app has ushered in a new era of customer service and accountability in city government,” Mayor Mike Duggan said. “It’s never been easier for Detroiters to get their voices heard and their complaints taken care of.”
Not only are problems getting resolved, but residents are raving about the app, which has a four-star rating on the Google Play store.
"It saves time, it gets results, and I love how I can follow the progress being made on the complaint," comments Dan Wroblewski, who lives on Detroit's far west side and uses it to report issues while patrolling his neighbourhood.
The Improve Detroit app is just one of several the City of Detroit has made available as it works to bring its customer service into the digital age. Residents can also download the Detroit Police Connect app to get up-to-date information on DPD, contact police anonymously with tips to help keep their neighbourhood safe, find numbers for precincts, bureaus and other departments, and more. Meanwhile, the DDOT Bus app provides riders with the real-time location, movement and arrival time of the next bus at their stop, saving them time and sparing them unnecessary waits in inclement weather. They can also plan their trip by seeing which routes and transfers to take.
3rd October 2015
A breakthrough in replacing silicon with carbon nanotubes
IBM has announced a breakthrough that will accelerate the replacement of silicon transistors with carbon nanotubes. Their new method could work down to 1.8 nanometre node sizes.
IBM this week announced a major engineering breakthrough that could accelerate carbon nanotubes replacing silicon transistors to power future computing technologies. Researchers at the company have demonstrated a new way to shrink transistor contacts, without reducing performance of carbon nanotube devices – paving the way to dramatically faster, smaller and more powerful chips beyond the capabilities of traditional semiconductors. The details were published yesterday in the journal Science.
IBM has overcome a major hurdle that silicon and other transistor technologies face when scaling down. In any transistor, two things scale: the channel and its two contacts. As devices become ever smaller, increased contact resistance for carbon nanotubes has hindered performance gains, until now. These results could overcome contact resistance challenges all the way to 1.8 nanometre nodes – four technology generations away.
Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling Big Data to be analysed faster, increasing the power and battery life of mobile devices and the Internet of Things, and allowing cloud data centres to deliver services more efficiently and economically.
Silicon transistors – tiny switches that carry information on a chip – have been shrunk year after year since the mid-20th century, but are now approaching the limits of miniaturisation. With Moore's Law running out of steam, shrinking the size of the transistor, including the channels and contacts – without compromising its performance – has been a major challenge in recent years.
IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometres – roughly 10,000 times thinner than a strand of human hair, and less than half the size of today's leading silicon technology. IBM's new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.
Earlier this summer, IBM unveiled the first 7 nanometre node silicon test chip, pushing the limits of silicon technologies. By advancing the research of carbon nanotubes to replace traditional silicon devices, IBM is paving the way for a post-silicon future and delivering on its $3 billion chip R&D investment announced in July 2014.
"These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems," said vice president of Science & Technology at IBM Research, Dario Gil. "As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected."
IBM's new carbon nanotube bonding technique, showing the fabricated nanotube transistor with end-bonded contact and a contact length below 10 nm, with potential to scale to 1.8 nm.
Carbon nanotubes represent a new class of semiconductor materials, consisting of single atomic sheets of carbon rolled into a tube. They form the core of a transistor device whose superior electrical properties could allow Moore's Law to continue for at least several more generations.
Electrons in carbon transistors can move more easily than in silicon-based devices, and the ultra-thin body of carbon nanotubes provides additional advantages at the atomic scale. Inside a chip, contacts are the valves that control the flow of electrons from metal into the channels of a semiconductor. As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. Until now, decreasing the size of the contacts on a device caused a commensurate drop in performance – a challenge facing both silicon and carbon nanotube transistor technologies.
IBM researchers had to forgo traditional contact schemes by inventing a metallurgical process akin to "microscopic welding", which chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunk below 10 nanometres without deteriorating performance of the carbon nanotube devices.
“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” Gil added. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology within the decade.”
A set of end-contacted nanotube transistors. Credit: IBM Research
24th September 2015
New world record for quantum teleportation distance
Researchers at the National Institute of Standards and Technology (NIST) have "teleported" quantum information over 100 km of optical fibre – four times farther than the previous record.
Researchers at NIST have “teleported” or transferred quantum information carried in light particles over 100 km (62 miles) of optical fibre – four times farther than the previous record. The experiment confirmed that quantum communication is feasible over long distances in fibre. Other research groups have teleported quantum information over longer distances in free space, but the ability to do so over conventional fibre-optic lines offers more flexibility for network design.
Not to be confused with Star Trek's fictional "beaming up" of people, quantum teleportation involves the transfer, or remote reconstruction, of information encoded in quantum states of matter or light. Teleportation is useful in both quantum communications and quantum computing, which offer prospects for novel capabilities such as unbreakable encryption. The basic method for quantum teleportation was first proposed more than 20 years ago and has been performed by a number of research groups, including one at NIST using atoms in 2004.
The new record, described in Optica, involved transferring quantum information contained in one photon – its specific time slot in a sequence – to another photon transmitted over 102 km of spooled fibre in a laboratory in Colorado. The achievement was made possible by advanced single-photon detectors designed and made at NIST.
"Only about 1 percent of photons make it all the way through 100 km of fibre," says NIST's Marty Stevens. "We never could have done this experiment without these new detectors, which can measure this incredibly weak signal."
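Stevens' 1 percent figure is consistent with the attenuation of standard telecom fibre, which is roughly 0.2 dB/km at common telecom wavelengths. A quick back-of-the-envelope conversion:

```python
import math

def attenuation_db_per_km(fraction_surviving: float, length_km: float) -> float:
    """Convert a photon survival fraction over a given fibre length
    into loss in decibels per kilometre."""
    total_loss_db = -10 * math.log10(fraction_surviving)
    return total_loss_db / length_km

# ~1% of photons survive 102 km of spooled fibre; over a round 100 km:
print(attenuation_db_per_km(0.01, 100))  # → 0.2
```

In other words, the experiment's losses are set by the fibre itself, which is why detector efficiency – recovering as many of the surviving photons as possible – was the critical enabling factor.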
Until now, so much quantum data was lost in fibre that transmission rates and distances were low. This new teleportation technique could be used to make devices called quantum repeaters that could resend data periodically, in order to extend network reach, perhaps enough to eventually build a "quantum internet." Previously, researchers thought quantum repeaters might need to rely on atoms or other matter, instead of light – a difficult engineering challenge that would also slow down transmission.
Various quantum states can be used to carry information; the NIST experiment used quantum states that indicate when in a sequence of time slots a single photon arrives. This method is novel, in that four of NIST's photon detectors were positioned to filter out specific quantum states. The detectors rely on superconducting nanowires made of molybdenum silicide. They can record over 80 percent of arriving photons, revealing whether they are in the same or different time slots, each just 1 nanosecond long. The experiments were performed at wavelengths commonly used in telecommunications.
12th September 2015
Constant use of social media technology causes teen anxiety and depression
A study by the British Psychological Society warns that constant pressure on teenagers to use social media technology causes lower sleep quality, lower self-esteem, higher anxiety and increased depression levels.
The need to be constantly available and respond 24/7 on social media accounts can cause depression, anxiety and reduced sleep quality for teenagers, says a study presented yesterday at a British Psychological Society conference in Manchester.
The researchers, Dr Heather Cleland Woods and Holly Scott of the University of Glasgow, provided questionnaires for 467 teenagers regarding their overall and night-time specific social media use. A further set of tests measured sleep quality, self-esteem, anxiety, depression and emotional investment in social media which relates to the pressure felt to be available 24/7 and the anxiety around, for example, not responding immediately to texts or posts.
Dr Cleland Woods explained: "Adolescence can be a period of increased vulnerability for the onset of depression and anxiety, and poor sleep quality may contribute to this. It is important that we understand how social media use relates to these. Evidence is increasingly supporting a link between social media use and wellbeing, particularly during adolescence, but the causes of this are unclear."
Analysis showed that overall and night-time specific social media use along with emotional investment were related to poorer sleep quality, lower self-esteem as well as higher anxiety and depression levels.
Lead researcher Dr Cleland Woods said: "While overall social media use impacts on sleep quality, those who log on at night appear to be particularly affected. This may be mostly true of individuals who are highly emotionally invested. This means we have to think about how our kids use social media, in relation to time for switching off."
Last year, a similar study found that increased use of digital media is causing children's social skills to decline.
8th September 2015
3-D printing of transparent glass is now possible
Researchers at MIT have demonstrated the first 3D printing technique able to make transparent glass objects.
The range of materials that 3D printers can work with has been steadily growing in recent years – from bioprinted cartilage constructs, to combinations of different plastic types in full colour, to elastic silicon membranes for heart attack patients, and even artificial rhino horn.
Some materials have been more difficult to develop, such as glass. Until now, it was only possible for opaque glass to be 3D printed. However, a team at the Massachusetts Institute of Technology (MIT) has achieved the first method for creating fully transparent glass, as demonstrated in this video.
The platform, known as "G3DP", is based on a dual-heated chamber concept. An upper chamber acts as a kiln cartridge, while the lower chamber serves to anneal the structures. The kiln cartridge operates at over 1,000°C (1,832°F), with molten material being funnelled through a custom nozzle of alumina-zircon-silica. Objects are formed inside a third chamber, where they are cooled in a gradual, controlled way to ensure they don't break.
Finding a nozzle suitable for molten glass was a major challenge, according to the researchers. It had to be made of a material able to handle both high temperatures and resist the glass sticking to it. A paper describing their work, "Additive Manufacturing of Optically Transparent Glass", is available online.
7th September 2015
The world's first quantum dot monitor
Electronics maker Philips has launched the world's first quantum dot monitor in Europe, featuring 99% Adobe RGB colour accuracy.
Pictured here is the 27" E6, the world's first quantum dot monitor, delivering 99% Adobe RGB colour in full HD 1080p (1920x1080) at a mainstream price. It has been developed in a collaboration between Philips and QD Vision.
The monitor is based on Colour IQ technology, which uses quantum dots – semiconductor nanocrystals engineered to emit light in any colour. The size of each quantum dot determines the energy it emits, which determines the exact colour. In addition to their large and extremely accurate colour palette, quantum dots are also very stable: the colours of individual dots won't warp or fade over time.
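The size-colour relationship can be sketched with a simple particle-in-a-sphere model, using only the leading (confinement) term of the Brus equation. The bandgap and effective masses below are rough literature values for CdSe, chosen purely for illustration – commercial quantum dot materials are engineered far more precisely:

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
M0 = 9.1093837015e-31    # electron rest mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def emission_wavelength_nm(radius_nm, bandgap_ev=1.74, me=0.13, mh=0.45):
    """Estimate a quantum dot's emission wavelength from its radius.

    Uses the leading confinement term of the Brus equation for a
    spherical dot; defaults are rough CdSe literature values.
    """
    r = radius_nm * 1e-9
    confinement_j = (HBAR**2 * math.pi**2 / (2 * r**2)) \
        * (1 / (me * M0) + 1 / (mh * M0))
    total_j = bandgap_ev * EV + confinement_j
    return H * C / total_j * 1e9  # photon wavelength in nm

# Smaller dots confine carriers more tightly, so they emit bluer light:
print(round(emission_wavelength_nm(2.0)))  # ~464 nm (blue)
print(round(emission_wavelength_nm(3.0)))  # ~576 nm (yellow-green)
```

The key point the sketch captures is the trend: shrinking the dot raises the confinement energy, shifting emission toward the blue, which is how manufacturers tune dots across the visible spectrum by size alone.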
"Quantum dot technology is changing the way monitor users think about colour, and the new 27" E Line monitor is the first on the market to showcase this new technology," said Stefan Sommer, a director at MMD, which exclusively markets and sells Philips branded LCD displays worldwide. "QD Vision is helping us create a monitor with 99% Adobe RGB colour at a very aggressive price point, making it accessible to everyone who uses a monitor."
Even at the highest price points, most of today's monitors are capable of displaying less than 95% of the Adobe RGB standard, with mainstream models typically covering only around 70% of the colour spectrum. Using the Colour IQ system, it is now possible for monitors to deliver the full Adobe RGB standard (>99%), but at much lower overall cost.
"The superior colour of our edge-lit quantum dots and our strong price-performance characteristics make them an ideal catalyst for positive disruption in the global monitor industry," said Matt Mazzuchi, Vice President, Market and Business Development at QD Vision. "Our close collaboration with Philips monitors brought this full gamut colour monitor to European consumers."
The new E6 quantum dot monitor will be available in Europe from October 2015. There's no word yet on when the product will launch in the U.S.
4th August 2015
New memory technology is 1,000 times faster
Intel and Micron have unveiled "3D XPoint" – a new memory technology that is 1,000 times faster than NAND and 10 times denser than conventional DRAM.
Intel Corporation and Micron Technology, Inc. have unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionise any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash in 1989.
The explosion of connected devices and digital services is generating massive amounts of new data. To make this data useful, it must be stored and analysed very quickly, creating challenges for service providers and system builders who must balance cost, power and performance trade-offs when they design memory and storage solutions. 3D XPoint technology combines the performance, density, power, non-volatility and cost advantages of all available memory technologies on the market today. This technology is up to 1,000 times faster, with up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.
"For decades, the industry has searched for ways to reduce the lag time between the processor and data to allow much faster analysis," says Rob Crooke, senior vice president and general manager of Intel's Non-Volatile Memory Solutions Group. "This new class of non-volatile memory achieves this goal and brings game-changing performance to memory and storage solutions."
"One of the most significant hurdles in modern computing is the time it takes the processor to reach data on long-term storage," says Mark Adams, president of Micron. "This new class of non-volatile memory is a revolutionary technology that allows for quick access to enormous data sets and enables entirely new applications."
As the digital world balloons exponentially – from 4.4 zettabytes of data created in 2013, to an expected 44 zettabytes by 2020 – 3D XPoint technology can turn this immense amount of data into valuable information in nanoseconds. For example, retailers could use 3D XPoint technology to identify patterns of fraud in financial transactions more quickly; healthcare researchers could process and analyse much larger data sets in real time, accelerating complex tasks such as genetic analysis and disease tracking.
The performance benefits of 3D XPoint technology could also enhance the PC experience, allowing consumers to enjoy faster interactive social media and collaboration as well as more immersive gaming experiences. The non-volatile nature of this technology also makes it a great choice for a variety of low-latency storage applications, since data is not erased when the device is powered off.
Following more than a decade of research and development, 3D XPoint technology was built from the ground up to address the need for non-volatile, high-performance, high-endurance and high-capacity storage and memory at an affordable cost. It ushers in a new class of non-volatile memory that significantly reduces latencies, allowing much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage.
The innovative, transistor-less cross point architecture creates a three-dimensional checkerboard where memory cells sit at the intersection of word lines and bit lines, allowing the cells to be addressed individually. As a result, data can be written and read in small sizes, leading to faster and more efficient read/write processes.
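The addressing scheme described above can be sketched in a few lines of Python. This is a simplified, hypothetical model – not Intel or Micron's actual design – but it shows the key idea: cells sit in a grid, and selecting one word line and one bit line picks out exactly one cell, with no per-cell transistor required.

```python
# Hypothetical sketch of cross point addressing: memory cells sit at the
# intersection of word lines (rows) and bit lines (columns), so each cell
# can be selected individually by driving one line of each type.

class CrossPointArray:
    def __init__(self, word_lines, bit_lines):
        # One cell per (word line, bit line) intersection
        self.cells = [[0] * bit_lines for _ in range(word_lines)]

    def write(self, word_line, bit_line, value):
        # Selecting one word line and one bit line isolates a single cell,
        # so data can be written in small sizes
        self.cells[word_line][bit_line] = value

    def read(self, word_line, bit_line):
        return self.cells[word_line][bit_line]

grid = CrossPointArray(4, 4)
grid.write(2, 3, 1)
print(grid.read(2, 3))  # -> 1
```

Because each intersection is individually addressable, reads and writes need not sweep whole pages or blocks, which is what enables the faster, finer-grained access the architecture promises.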
3D XPoint technology will sample later this year with select customers, and Intel and Micron are developing individual products based on the technology.
31st July 2015
Neural network is 10 times bigger than the previous world record
Digital Reasoning, a developer of cognitive computing, recently announced that it has trained the largest neural network in the world to date with a stunning 160 billion parameters. Google’s previous record was 11.2 billion, while the Lawrence Livermore National Laboratory trained a neural network with 15 billion parameters.
The results of Digital Reasoning's research with deep learning and neural networks were published in the Journal of Machine Learning Research and on arXiv, alongside work from other notable companies such as Google, Facebook, and Microsoft. They were presented at the prestigious 32nd International Conference on Machine Learning in Lille, France, earlier this month.
Neural networks are computer systems modelled after the human brain. Like the human brain, these networks can gather new data, process it, and react to it. Digital Reasoning's paper, titled “Modeling Order in Neural Word Embeddings at Scale,” details both the impressive scope of its neural network and the significant improvement in quality.
In their research, Matthew Russell, Digital Reasoning's Chief Technology Officer, and his team evaluated neural word embeddings on “word analogy” accuracy. Neural networks generate a vector of numbers for each word in a vocabulary, which allows researchers to do “word math”: for instance, “king” minus “man” plus “woman” yields “queen”. There is an industry-standard dataset of around 20,000 word analogies. Google's previous best on this metric was 76.2% accuracy; in other words, Google's system answered 76.2% of the word analogies correctly. Stanford's best score is 75.0% accuracy. Digital Reasoning's model achieves 85.8% accuracy, a nearly 40% reduction in error over both Google and Stanford and a major advance in the state of the art.
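The "word math" idea can be illustrated with a toy example. The 3-dimensional vectors below are invented purely for illustration – real models learn embeddings with hundreds of dimensions – but the mechanics are the same: add and subtract vectors, then find the vocabulary word closest to the result by cosine similarity.

```python
# Illustrative sketch of "word math" with invented toy embeddings.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.5, 0.1, 0.1],
    "woman": [0.5, 0.1, 0.9],
    "queen": [0.9, 0.8, 0.9],
    "apple": [0.1, 0.9, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def analogy(a, b, c):
    # Solve a - b + c ~= ?, excluding the query words themselves
    target = [x - y + z for x, y, z in
              zip(embeddings[a], embeddings[b], embeddings[c])]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # -> queen
```

Accuracy on the 20,000-analogy benchmark is simply the fraction of such queries for which the nearest vector is the expected answer.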
“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Russell. “This is a tremendous accomplishment for the company and marks an important milestone in putting a defensible stake in the ground towards our position as not just a thought leader in the space, but as an organisation that is truly advancing the state of the art in a rigorous, peer reviewed way.”
24th July 2015
Deep Genomics creates deep learning technology to transform genomic medicine
Deep Genomics, a new technology start-up, was launched this week. The company aims to use deep learning and artificial intelligence to accelerate our understanding of the human genome.
Credit: Hui Y. Xiong et al./Science
Evolution has altered the human genome over hundreds of thousands of years – and now humans can do it in a matter of months. Faster than anyone expected, scientists have discovered how to read and write DNA code in a living body, using hand-held genome sequencers and gene-editing systems. But knowing how to write is different from knowing what to write. To diagnose and treat genetic diseases, scientists must predict the biological consequences of both existing mutations and those they plan to introduce.
Deep Genomics, a start-up company spun out of research at the University of Toronto, is on a mission to predict the consequences of genomic changes by developing new deep learning technologies.
“Our vision is to change the course of genomic medicine,” says Brendan Frey, the company’s president and CEO, who is also a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR). “We’re inventing a new generation of deep learning technologies that can tell us what will happen within a cell when DNA is altered by natural mutations, therapies or even by deliberate gene editing.”
Deep Genomics is the only company to combine more than a decade of world-leading expertise in both deep learning and genome biology. “Companies like Google, Facebook and DeepMind have used deep learning to hugely improve image search, speech recognition and text processing. We’re doing something very different. The mission of Deep Genomics is to save lives and improve health,” says Frey. CIFAR Senior Fellow Yann LeCun, the head of Facebook’s Artificial Intelligence lab, is also an advisor to the company.
"Our company, Deep Genomics, will change the course of genomic medicine. CIFAR played a crucial role in establishing the research network that led to our breakthroughs in deep learning and genomic medicine," Frey says.
Deep Genomics is now releasing its first product, called SPIDEX, which provides information about how hundreds of millions of DNA mutations may alter splicing in the cell, a process that is crucial for normal development. Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described earlier this year in the journal Science.
“The genome contains a catalogue of genetic variation that is our DNA blueprint for health and disease,” says CIFAR Senior Fellow Stephen Scherer, director of the Centre for Applied Genomics at SickKids and the McLaughlin Centre at the University of Toronto, and an advisor to Deep Genomics. “Brendan has put together a fantastic team of experts in artificial intelligence and genome biology – if anybody can decode this blueprint and harness it to take us into a new era of genomic medicine, they can.”
Until now, geneticists have spent decades experimentally identifying and examining mutations within specific genes that can be clearly connected to disease, such as the BRCA1 and BRCA2 genes for breast cancer. However, the number of mutations that could lead to disease is vast and most have not been observed before, let alone studied.
These mystery mutations pose an enormous challenge for current genomic diagnosis. Labs send the mutations they’ve collected to Deep Genomics, and the company uses their proprietary deep learning system, which includes SPIDEX, to ‘read’ the genome and assess how likely the mutation is to cause a problem. It can also connect the dots between a variant of unknown significance and a variant that has been linked to disease. “Faced with a new mutation that’s never been seen before, our system can determine whether it impacts cellular biochemistry in the same way as some other highly dangerous mutation,” says Frey.
Deep Genomics is committed to supporting publicly funded efforts to improve human health. “Soon after our Science paper was published, medical researchers, diagnosticians and genome biologists asked us to create a database to support academic research,” says Frey. “The first thing we’re doing with the company is releasing this database – that’s very important to us.”
“Soon, you’ll be able to have your genome sequenced cheaply and easily with a device that plugs into your laptop. The technology already exists,” explains Frey. “When genomic data is easily accessible to everyone, the big questions are going to be about interpreting the data and providing people with smart options. That’s where we come in.”
Deep Genomics envisions a future where computers are trusted to predict the outcome of experiments and treatments, long before anyone picks up a test tube. To realise that vision, the company plans to grow its team of data scientists and computational biologists. Deep Genomics will continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA, from transcription and splicing to polyadenylation and translation. Building a thorough understanding of these processes has massive implications for genetic testing, pharmaceutical research and development, personalised medicine and improving human longevity.
24th July 2015
New computer program is first to recognise sketches more accurately than a human
Researchers from Queen Mary University of London (QMUL) have built the first computer program that can recognise hand-drawn sketches better than humans.
Known as Sketch-a-Net, the program correctly identifies the subject of sketches 74.9 per cent of the time, compared with humans, who managed a success rate of only 73.1 per cent. As sketching becomes more relevant with the increasing use of touchscreens, this development could provide a foundation for new ways to interact with computers.
Touchscreens could understand what you are drawing – enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories. This improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.
The research, which was accepted at the British Machine Vision Conference, also showed that the program performed better at determining finer details in sketches. For example, it was able to successfully distinguish the specific bird variants ‘seagull’, ‘flying-bird’, ‘standing-bird’ and ‘pigeon’ with 42.5 per cent accuracy, compared with humans, who achieved only 24.8 per cent.
Sketches are very intuitive to humans and have been used as a communication tool for thousands of years, but recognising free-hand sketches is challenging because they are abstract, varied and consist of black and white lines rather than coloured pixels like a photo. Solving sketch recognition will lead to a greater scientific understanding of visual perception.
Sketch-a-Net is a ‘deep neural network’ – a type of computer program designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, particularly the order the strokes were drawn. This was information that was previously ignored but is especially important for understanding drawings on touchscreens.
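One way to see how stroke order can be fed to a network – a hypothetical illustration, not QMUL's actual implementation – is to rasterise early and late strokes into separate image channels, so the model sees timing as well as shape:

```python
# Hypothetical illustration (not the actual Sketch-a-Net code): a touchscreen
# sketch arrives as an ordered list of strokes, each a list of (x, y) points.
# Rasterising early and late strokes into separate channels preserves the
# drawing order that a flat pixel image throws away.

def rasterise(strokes, size=8, channels=2):
    # One size x size grid per channel; stroke index decides the channel
    grids = [[[0] * size for _ in range(size)] for _ in range(channels)]
    per_channel = max(1, -(-len(strokes) // channels))  # ceiling division
    for i, stroke in enumerate(strokes):
        ch = min(i // per_channel, channels - 1)
        for x, y in stroke:
            grids[ch][y][x] = 1
    return grids

# Two strokes: a horizontal line drawn first, a vertical line drawn second
sketch = [[(0, 0), (1, 0), (2, 0)], [(3, 1), (3, 2), (3, 3)]]
early, late = rasterise(sketch)
print(sum(map(sum, early)), sum(map(sum, late)))  # -> 3 3
```

A conventional photo-recognition network would see only the merged black-and-white lines; splitting by drawing order gives the model the extra temporal signal the paper identifies as important.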
Timothy Hospedales, co-author of the study and Lecturer in the School of Electronic Engineering and Computer Science, QMUL, said: “It’s exciting that our computer program can solve the task even better than humans can. Sketches are an interesting area to study because they have been used since pre-historic times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again. This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”
The paper, 'Sketch-a-Net that Beats Humans' by Q. Yu, Y. Yang, Y. Song, T. Xiang and T. Hospedales, will be presented at the 26th British Machine Vision Conference on Tuesday 8th September 2015.
20th July 2015
New massless particle is observed for the first time
Scientists report the discovery of the Weyl fermion after an 85-year search. This massless quasiparticle could lead to future electronics that are faster and produce less waste heat.
An international team led by Princeton University scientists has discovered an elusive massless particle, first theorised 85 years ago. This particle is known as the Weyl fermion, and could give rise to faster and more efficient electronics, because of its unusual ability to behave as both matter and antimatter inside a crystal. Weyl fermions, if applied to next-generation electronics, could allow a nearly free and efficient flow of electricity in electronics – and thus greater power – especially for computers. The researchers report their discovery in the journal Science.
Proposed by the mathematician and physicist Hermann Weyl in 1929, Weyl fermions have been long sought by scientists, because they are regarded as possible building blocks of other subatomic particles, and are even more basic than electrons. Their basic nature means that Weyl fermions could provide a much more stable and efficient transport of particles than electrons, the main particle behind modern electronics. Unlike electrons, Weyl fermions are massless and possess a high degree of mobility.
"The physics of the Weyl fermion are so strange – there could be many things that arise from this particle that we're just not capable of imagining now," explained Professor M. Zahid Hasan, who led the team.
The researchers' find differs from other particle discoveries, in that the Weyl fermion can be reproduced and potentially applied. Particles such as the Higgs boson are typically detected in the fleeting aftermath of collisions. The Weyl fermion, however, was captured inside a specially designed synthetic metallic crystal called tantalum arsenide.
Professor M. Zahid Hasan
The Weyl fermion has two characteristics that could improve future electronics, possibly helping to continue the exponential growth in computer power, while also proving useful in developing efficient quantum computing. Firstly, Weyl fermions behave like a composite of monopole- and antimonopole-like particles inside a crystal. This means that Weyl particles with opposite, magnetic-like charges can nonetheless move independently of each other with a high degree of mobility. Secondly, Weyl fermions can be used to create massless electrons that move very quickly with no backscattering. In electronics, backscattering hinders efficiency and generates heat. While normal electrons are lost when they collide with an obstruction, Weyl electrons simply move through and around roadblocks.
"It's like they have their own GPS and steer themselves without scattering," said Hasan. "They will move and move only in one direction since they are either right-handed or left-handed and never come to an end because they just tunnel through. These are very fast electrons that behave like unidirectional light beams and can be used for new types of quantum computing."
Hasan and his group researched and simulated dozens of crystal structures before finding the one suitable for holding Weyl fermions. Once fashioned, the crystals were loaded into a scanning tunnelling spectromicroscope (pictured above) and cooled to near absolute zero. Crystals passing the spectromicroscope test were taken to the Lawrence Berkeley National Laboratory in California, for testing with high-energy photon beams. Once fired through the crystal, the beams' shape, size and direction indicated the presence of the long-elusive Weyl fermion.
The hunt for the Weyl fermion began in the earliest days of quantum theory, when physicists first realised that their equations implied the existence of antimatter counterparts to electrons and other commonly known particles.
"People figured that although Weyl's theory was not applicable to relativity or neutrinos, it is the most basic form of fermion and had all other kinds of weird and beautiful properties that could be useful," said Hasan.
"After more than 80 years, we found that this fermion was already there, waiting. It is the most basic building block of all electrons," he said. "It is exciting that we could finally make it come out following Weyl's 1929 theoretical recipe."
14th July 2015
China maintains supercomputing lead
For the fifth consecutive time, Tianhe-2, a supercomputer developed by China's National University of Defence Technology, has retained its position as the world's no. 1 system, according to the 45th edition of the twice-yearly TOP500 list.
Tianhe-2, which means "Milky Way-2", continues to lead the TOP500 list with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark.
In second place is Titan, a Cray XK7 system at the Department of Energy's (DOE) Oak Ridge National Laboratory. Titan, the top system in the US and one of the most energy-efficient systems on the list, achieved 17.59 petaflop/s on the Linpack benchmark.
The only new entry in the top ten, at no. 7, is Shaheen II – a Cray XC40 system installed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Shaheen II achieved 5.54 petaflop/s on the Linpack benchmark, making it the highest-ranked Middle East system in the 22-year history of the list and the first from the region to crack the top ten.
There are 68 systems with performance greater than 1 petaflop/s on the list, up from 50 last November. In total, the combined performance of all 500 systems has grown to 363 petaflop/s, compared to 309 petaflop/s last November and 274 petaflop/s one year ago. HP has the lead in the total number of systems with 178 (35.6%), compared to IBM with 111 systems (22.2%).
Nine of the top ten systems were installed in 2011 or 2012, and this low level of turnover among the top supercomputers reflects a slowing trend that began in 2008. However, new systems are in the pipeline that may reignite the pace of development and get performance improvements back on track. For example, Oak Ridge National Laboratory is building the IBM/Nvidia "Summit", featuring up to 300 petaflops – an order of magnitude faster than China's Tianhe-2 – which is planned for 2018. Meanwhile, British company Optalysys claims it will have a multi-exaflop optical computer by 2020.
To view the complete list, visit top500.org.
13th July 2015
7 nanometre chips enable Moore's Law to continue
Researchers have announced a breakthrough in the manufacture of 7 nanometre (nm) computer chips, enabling the trend of Moore's Law to continue for the next few years.
IBM Research has announced the semiconductor industry's first 7nm (nanometre) node test chips with functioning transistors. The breakthrough was accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute's Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) and could result in the ability to place more than 20 billion tiny switches – transistors – on the fingernail-sized chips that power everything from smartphones to spacecraft.
To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered in this collaboration were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.
Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging "exponential" technologies. This accomplishment was part of IBM's $3 billion, five-year investment in chip R&D announced last year.
"For business and society to get the most out of tomorrow's computers and devices, scaling to 7nm and beyond is essential," said Arvind Krishna, senior vice president and director of IBM Research. "That's why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come."
Microprocessors utilising 22nm and 14nm technology power today's servers, cloud data centres and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today's most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, process innovations to stack them below 30nm pitch and full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation systems that will power the Big Data, cloud and mobile era. These new 7nm chips are expected to start appearing in computers and other gadgets in 2017-18.
8th July 2015
The world's first 2TB consumer SSDs
Samsung has announced the first 2 terabyte solid state drives for the consumer market – continuing the exponential trend in data storage.
Samsung has announced two new SSDs – the 850 Pro and 850 EVO – both offering double the capacity of the previous generation. The 2.5" form factor drives can greatly boost performance for desktops and laptops. They will be especially useful for storing and accessing 4K video, which often involves enormous file sizes. Available capacities range from 120GB, 250GB, 500GB and 1TB all the way up to 2TB.
The 850 Pro is designed for power users needing the maximum possible speed, while the 850 EVO is less powerful but somewhat cheaper. The 850 Pro features up to 550MBps sequential read and 520MBps sequential write rates and 100,000 random I/Os per second (IOPS). The 850 EVO has 540MBps sequential read and 520MBps write rates, with up to 90,000 random IOPS. Both models feature 3D V-NAND technology, which stacks 32 layers of transistors on top of each other. The drives also use multi-level cell (MLC) and triple-level cell (TLC) (2- and 3-bit per cell) technology for even greater memory density.
Until recently, consumers were forced to choose between speed and size when upgrading their hard drives. For pure speed, a solid state drive was the best option, while larger capacities were typically served by slower, clunkier spinning drives. These new terabyte-scale SSDs are going to change that, combining both high speed and high capacity. Price may still be an issue, as Samsung's new product line doesn't come cheap: the 2TB version of the 850 Pro will retail for $999.99 and the 850 EVO for $799.99. However, given the price-performance trend witnessed in earlier generations of data storage, these high-capacity SSDs are likely to become much cheaper before long.
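A quick back-of-envelope calculation puts the launch figures in perspective (decimal units are assumed here, i.e. 2TB treated as 2,000GB):

```python
# Rough figures derived from the announced specifications and prices
capacity_gb = 2000          # 2TB drive, decimal units assumed
seq_write_mbps = 520        # 850 Pro sequential write rate, MB/s
price_850_pro = 999.99      # launch price in USD

fill_time_s = capacity_gb * 1000 / seq_write_mbps
price_per_gb = price_850_pro / capacity_gb

print(f"Time to fill the drive sequentially: {fill_time_s / 60:.0f} minutes")
print(f"Price per gigabyte: ${price_per_gb:.2f}")
```

Around 64 minutes to fill the entire 2TB drive at full sequential speed, and roughly $0.50 per gigabyte, which is still several times the cost of spinning disks of the era but far cheaper per gigabyte than earlier SSD generations.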
"Samsung experienced a surge in demand for 500 gigabyte (GB) and higher capacity SSDs with the introduction of our V-NAND SSDs," says Un-Soo Kim, Senior Vice President of Branded Product Marketing, Memory Business, in a press release from Samsung. "The release of the 2TB SSD is a strong driver into the era of multi-terabyte SSD solutions. We will continue to expand our ultra-high performance and large density SSD product portfolio and provide a new computing experience to users around the globe."
26th June 2015
70% of the world using smartphones by 2020
By 2020, advanced mobile technology will be commonplace around the globe, according to a new report from Ericsson.
The latest edition of the Ericsson Mobility Report shows that by 2020, advanced mobile technology will be commonplace in every corner of the globe — smartphone subscriptions will more than double, reaching 6.1 billion, 70% of the world's population will be using smartphones, and over 90% will be covered by mobile broadband networks.
The report – a comprehensive update on the latest mobile trends – shows that growth in mature markets comes from an increasing number of devices per individual. In developing regions, it comes from a swell of new subscribers as smartphones become more affordable; almost 80% of smartphone subscriptions added by year-end 2020 will be from Asia Pacific, the Middle East, and Africa.
With the continued rise of smartphones comes an exponential growth in data usage: smartphone data is predicted to increase ten-fold by 2020, when 80% of all mobile data traffic will come from smartphones (as opposed to basic feature phones). In North America, monthly data usage per smartphone will increase from an average of 2.4 GB today to 14 GB by 2020. It is likely that the 5G standard will be adopted by then.
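The implied compound annual growth rate can be worked out from the North American figures above, assuming the forecast spans the five years from 2015 to 2020 (the report's horizon; the exact baseline year is an assumption):

```python
# Implied compound annual growth: monthly smartphone data usage in North
# America rising from 2.4 GB to 14 GB over an assumed five-year period.
start_gb, end_gb, years = 2.4, 14.0, 5

cagr = (end_gb / start_gb) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 42% per year
```

Sustained growth of this magnitude compounds quickly, which is why the report frames the shift as requiring new network standards such as 5G rather than incremental capacity upgrades.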
Rima Qureshi, Senior Vice President and Chief Strategy Officer of Ericsson, says: "This immense growth in advanced mobile technology and data usage, driven by a surge in mobile connectivity and smartphone uptake, will make today's big data revolution feel like the arrival of a floppy disk. We see the potential for mass-scale transformation, bringing a wealth of opportunities for telecom operators and others to capture new revenue streams. But it also requires greater focus on cost efficient delivery and openness to new business models to compete and remain effective."
An expanding range of applications and business models, coupled with falling modem costs, are key factors driving the growth of connected devices. Added to this, new use cases are emerging for both short and long range applications, leading to even stronger growth of connected devices moving forward. Ericsson's forecast, outlined in the report, points to 26 billion connected devices by 2020, confirming we are well on the way to reaching the vision of 50 billion connected devices.
Each year until 2020, mobile video traffic will grow by a staggering 55 percent per year and will constitute around 60 percent of all mobile data traffic by the end of that period. Growth is largely driven by shifting user preferences towards video streaming services, and the increasing prevalence of video in online content including news, advertisements and social media.
When looking at data consumption in advanced mobile broadband markets, findings show a significant proportion of traffic is generated by a limited number of subscribers. These heavy data users represent 10 percent of total subscribers, but generate 55 percent of total data traffic. Video is dominant among heavy users, who typically watch around one hour of video per day, which is 20 times more than the average user.
To accompany the Mobility Report, Ericsson has created a Traffic Exploration Tool for creating customised graphs and tables, using data from the report. The information can be filtered by region, subscription, technology, traffic, and device type.
8th June 2015
New mobile app could revolutionise human rights justice
The International Bar Association (IBA) today launched the eyeWitness app – a new tool for documenting and reporting human rights atrocities in a secure and verifiable way, so the information can be used as evidence in a court of law.
With social media increasingly serving as a forum for reporting human rights abuses, many online images have raised awareness of atrocities around the world, but they typically lack the attribution or information necessary to be used as evidence in a court of law. Now anyone with an Android-enabled smart phone – including human rights defenders, journalists, and investigators – can download the eyeWitness to Atrocities app and help hold accountable the perpetrators of atrocity crimes, such as genocide, crimes against humanity, torture and war crimes.
"The eyeWitness to Atrocities app will be a transformational tool in the fight for human rights, providing a solution to the evidentiary challenges surrounding mobile phone footage," said IBA Executive Director Mark Ellis. "Until now, it has been extremely difficult to verify the authenticity of these images and to protect the safety of those brave enough to record them. As an advocate for the voiceless, the International Bar Association is dedicated to empowering activists on the ground who are witnessing these atrocities with the ability to bring criminals to justice."
The app design is based on extensive research on the rules of evidence in international, regional and national courts and tribunals. It includes several features to guarantee authenticity, facilitate verification and protect confidentiality by allowing the user to decide whether or not to be anonymous.
"Putting information and technology in the hands of citizens worldwide has a powerful role to play in advancing the rule of law," said Ian McDougall, EVP and General Counsel of LexisNexis Legal & Professional, which partnered with the IBA. "LexisNexis Legal & Professional's world class data hosting capabilities will provide the eyeWitness programme with the same technology that we use to safeguard sensitive and confidential material for our clients every day. It's all part of our company's broader commitment to advancing the rule of law around the world, as we believe every business has a role to play in building a safer, more just global society."
How the App Works
When a user records an atrocity, the app automatically collects and embeds into the video file the GPS coordinates, date and time, device sensor data, and information about the surrounding environment, such as nearby Bluetooth and Wi-Fi networks. The user has the option of adding any additional identifying information about the image. This metadata provides information integral to verifying and contextualising the footage. The images and accompanying data are encrypted and securely stored within the app. The app also embeds a chain of custody record to verify that the footage has not been edited or digitally manipulated. The user then submits this information directly from the app to a database maintained by the eyeWitness organisation.
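The chain-of-custody idea can be sketched with a simple hash chain. This is a hypothetical illustration, not the app's actual scheme: each record's hash incorporates the previous hash, so any later edit to the footage or metadata breaks every hash that follows it.

```python
# Hypothetical hash-chain sketch (not the eyeWitness app's actual scheme):
# each entry's hash covers the previous hash plus the new data, so editing
# any earlier record invalidates all subsequent hashes.
import hashlib

def chain_hash(prev_hash, data):
    return hashlib.sha256(prev_hash + data).hexdigest().encode()

records = [
    b"video-chunk-1",
    b"gps:48.85,2.35 time:2015-06-08T10:00Z",  # invented example metadata
    b"video-chunk-2",
]

chain = [b"genesis"]
for record in records:
    chain.append(chain_hash(chain[-1], record))

def verify(records, chain):
    # Recompute the chain from scratch and compare against the stored hashes
    h = chain[0]
    for record, expected in zip(records, chain[1:]):
        h = chain_hash(h, record)
        if h != expected:
            return False
    return True

print(verify(records, chain))                          # -> True
tampered = [records[0], b"edited-metadata", records[2]]
print(verify(tampered, chain))                         # -> False
```

Any verifier holding the original chain can thus detect whether footage or metadata was altered after capture, without needing to trust the party who stored it.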
Once the video is transmitted, it is stored in a secure repository that functions as a virtual evidence locker safeguarding the original, encrypted footage for future investigations and legal proceedings. The submitted footage is only accessible by a group of legal experts at eyeWitness who will analyse the footage and identify the appropriate authorities, including international, regional or national courts, to pursue relevant cases.
"The IBA is proud to be spearheading the project and allocating $1 million of IBA reserves as part of its efforts to promote, protect and enforce human rights under a just rule of law," said David Rivkin, IBA President. The IBA is working in partnership with LexisNexis Legal & Professional, a part of RELX Group, which is hosting the secure repository, database and backup system to store and analyse data collected via the app. The IBA is also partnering with human rights organisations to put the app in the hands of those working in some of the world's most severe conflict zones.
"The eyeWitness app promises to revolutionise the effectiveness of ground-level human rights reporting," said Deirdre Collings, Executive Director of the SecDev Foundation, a Canadian research organisation. "We also see the app's usefulness for media activists in conflict and authoritarian environments who undertake vital but high-risk reporting. We're proud to include eyeWitness in our training programme for our partners in Syria and will be rolling it out across our projects in the CIS region and Vietnam."
Established in 1947 and headquartered in London, the IBA is the world's leading organisation of international legal practitioners, bar associations and law societies. Through its global membership of individual lawyers, law firms, bar associations and law societies, it influences the development of international law reform and shapes the future of the legal profession throughout the world.
5th May 2015
'Centimetre accurate' GPS system could transform virtual reality and mobile devices
Researchers at the University of Texas at Austin have developed a centimetre-accurate GPS-based positioning system that could revolutionise geolocation on virtual reality headsets, cellphones and other technologies – making global positioning and orientation far more precise than what is currently available on a mobile device.
The researchers' new system could allow unmanned aerial vehicles to deliver packages to a specific spot on a consumer's back porch, improve collision avoidance technologies on cars and allow virtual reality (VR) headsets to be used outdoors. This ultra-accurate GPS, coupled with a smartphone camera, could be used to quickly build a globally referenced 3-D map of one's surroundings that would greatly expand the radius of a VR game. Currently, VR does not use GPS, which limits its use to indoors and usually a two- to three-foot radius.
"Imagine games where, rather than sit in front of a monitor and play, you are in your backyard actually running around with other players," said Todd Humphreys, lead researcher and assistant professor in the Department of Aerospace Engineering and Engineering Mechanics. "To be able to do this type of outdoor, multiplayer virtual reality game, you need highly accurate position and orientation that is tied to a global reference frame."
Humphreys and his team in the Radionavigation Lab have designed a low-cost system that reduces location errors from the size of a large car to the size of a nickel – a more than hundredfold improvement in accuracy. Humphreys collaborated on the new technology with Professor Robert W. Heath from the Department of Electrical and Computer Engineering, along with graduate students.
Centimetre-accurate positioning systems are already used in geology, surveying and mapping – but the survey-grade antennas these systems employ are too large and costly for use in mobile devices. This breakthrough by Humphreys and his team is a powerful and sensitive software-defined GPS receiver that can extract centimetre accuracies from the inexpensive antennas found in mobile devices. Such precise measurements were not previously possible. The researchers anticipate that their software's ability to leverage low-cost antennas will reduce the overall cost of centimetre accuracy and make it economically feasible for mobile devices.
Humphreys and his team have spent six years building a specialised receiver, called GRID, to extract so-called carrier phase measurements from low-cost antennas. GRID currently operates outside the phone, but it will eventually run on the phone's internal processor. To further develop this technology, they recently co-founded a startup, called Radiosense. Humphreys and his team are working with Samsung to develop a snap-on accessory that will tell smartphones, tablets and virtual reality headsets their precise position and orientation.
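The gap between ordinary and carrier-phase GPS comes down to what the receiver measures. A standard receiver times the C/A code, whose chips are roughly 293 m long; a carrier-phase receiver tracks the L1 carrier wave itself, whose wavelength is about 19 cm. Assuming a receiver can resolve roughly 1% of a chip or cycle (a common rule of thumb, not a published GRID specification), the back-of-envelope arithmetic reproduces the car-to-nickel improvement:

```python
# Back-of-envelope comparison of code-phase vs carrier-phase ranging
# precision, using standard GPS L1 signal figures.
C = 299_792_458.0            # speed of light, m/s
L1_FREQ = 1575.42e6          # GPS L1 carrier frequency, Hz
CA_CHIP_RATE = 1.023e6       # C/A code chipping rate, Hz

carrier_wavelength = C / L1_FREQ       # ~0.19 m per carrier cycle
code_chip_length = C / CA_CHIP_RATE    # ~293 m per code chip

RESOLUTION = 0.01  # assumed: receiver resolves ~1% of a chip/cycle
code_precision_m = code_chip_length * RESOLUTION       # metres
carrier_precision_cm = carrier_wavelength * RESOLUTION * 100  # centimetres

print(f"code-phase precision    ~ {code_precision_m:.1f} m")
print(f"carrier-phase precision ~ {carrier_precision_cm:.2f} cm")
```

Tracking the carrier thus buys roughly three orders of magnitude in ranging precision, at the cost of having to resolve the integer number of whole cycles – the hard problem a carrier-phase receiver must solve.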
The researchers designed their system to deliver precise position and orientation information – how one's head rotates or tilts – to less than one degree of measurement accuracy. This level of accuracy could enhance VR environments that are based on real-world settings, as well as improve other applications including visualisation and 3-D mapping. Additionally, it could make a significant difference in people's daily lives, including transportation, where centimetre-accurate GPS could allow better vehicle-to-vehicle communication technology.
"If your car knows in real time the precise position and velocity of an approaching car that is blocked from view by other traffic, your car can plan ahead to avoid a collision," Humphreys said.
28th March 2015
10TB solid state drives may soon be possible
An innovative new process architecture can extend Moore's Law for flash storage – bringing significant improvements in density while lowering the cost of NAND flash.
Intel Corporation – in partnership with Micron – has announced the availability of 3D NAND, the world's highest-density flash memory. Flash is the storage technology used inside the lightest laptops, fastest data centres, and nearly every cellphone, tablet and mobile device.
3D NAND works by stacking the components in vertical layers with extraordinary precision to create devices with three times higher data capacity than competing NAND technologies. This enables more storage in a smaller space, bringing significant cost savings, low power usage and higher performance to a range of mobile consumer devices, as well as the most demanding enterprise deployments.
As data cells begin to approach the size of individual atoms, traditional "planar" NAND is nearing its practical scaling limits. This poses a major challenge for the memory industry. 3D NAND is poised to make a dramatic impact by keeping flash storage aligned with Moore's Law – the exponential trend of performance gains and cost savings – and driving more widespread use of flash storage in the future.
"3D NAND technology has the potential to create fundamental market shifts," said Brian Shirley, vice president of Memory Technology and Solutions at Micron Technology. "The depth of the impact that flash has had to date – from smartphones to flash-optimised supercomputing – is really just scratching the surface of what's possible."
One of the most significant aspects of this breakthrough is in the foundational memory cell itself. Intel and Micron used a floating gate cell, a universally utilised design refined through years of high-volume planar flash manufacturing. This is the first use of a floating gate cell in 3D NAND, which was a key design choice to enable greater performance, quality and reliability.
The data cells are stacked vertically in 32 layers to achieve 256Gb multilevel cell (MLC) and 384Gb triple-level cell (TLC) dies within a standard package. This can enable gum stick-sized SSDs with 3.5TB of storage and standard 2.5-inch SSDs with greater than 10TB. Because capacity is achieved by stacking cells vertically, individual cell dimensions can be considerably larger. This is expected to increase both performance and endurance and make even the TLC designs well-suited for data centre storage.
Key product features of this 3D NAND design include:
• Large Capacities – Triple the capacity of existing technology, up to 48GB of NAND per die, enabling 750GB to fit in a single fingertip-sized package.
• Reduced Cost per GB – First-generation 3D NAND is architected to achieve better cost efficiencies than planar NAND.
• Fast – High read/write bandwidth, I/O speeds and random read performance.
• Green – New sleep modes enable low-power use by cutting power to inactive NAND die (even when other dies in the same package are active), dropping power consumption significantly in standby mode.
• Smart – Innovative new features improve latency and increase endurance over previous generations, and also make system integration easier.
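The headline figures are consistent with simple arithmetic: a 384Gb TLC die holds 48GB, so the "750GB fingertip-sized package" corresponds to 16 stacked dies (16 × 48 = 768GB, rounded down in marketing terms). The sketch below checks these numbers; the 16-dies-per-package count is an inference from the quoted figures, not a stated specification.

```python
# Sanity-check the capacity figures quoted for 3D NAND.
GBITS_PER_TLC_DIE = 384
gb_per_die = GBITS_PER_TLC_DIE / 8      # bits -> bytes: 48 GB per die

# Assumption: 16 dies per package, inferred from the "750GB" figure.
DIES_PER_PACKAGE = 16
gb_per_package = gb_per_die * DIES_PER_PACKAGE   # 768 GB, sold as "750GB"

# Rough die count needed for a 2.5-inch SSD exceeding 10 TB:
dies_for_10tb = 10_000 / gb_per_die     # ~208 dies, ~13 packages

print(f"per die:     {gb_per_die:.0f} GB")
print(f"per package: {gb_per_package:.0f} GB")
print(f"dies for 10 TB: {dies_for_10tb:.0f}")
```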
The 256Gb MLC version of 3D NAND is sampling with select partners today, and the 384Gb TLC design will be sampling later this spring. The fab production line has already begun initial runs, and both devices will be in full production by the fourth quarter of this year. Both companies are also developing individual lines of SSD solutions based on 3D NAND technology and expect those products to be available within the next year.