
Computers & the Internet


3rd October 2015

A breakthrough in replacing silicon with carbon nanotubes

IBM has announced a breakthrough that could accelerate the replacement of silicon transistors with carbon nanotubes. The company's new contact method could work down to the 1.8 nanometre node.




IBM this week announced a major engineering breakthrough that could accelerate the replacement of silicon transistors with carbon nanotubes in future computing technologies. Researchers at the company have demonstrated a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices – paving the way to dramatically faster, smaller and more powerful chips beyond the capabilities of traditional semiconductors. The details were published yesterday in the journal Science.

IBM has overcome a major hurdle that silicon and other transistor technologies face when scaling down. In any transistor, two things scale: the channel and its two contacts. As devices become ever smaller, increased contact resistance for carbon nanotubes has hindered performance gains – until now. These results could overcome contact resistance challenges all the way to 1.8 nanometre nodes – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling Big Data to be analysed faster, increasing the power and battery life of mobile devices and the Internet of Things, and allowing cloud data centres to deliver services more efficiently and economically.

Silicon transistors – tiny switches that carry information on a chip – have been shrunk year after year since the mid-20th century, but are now approaching the limits of miniaturisation. With Moore's Law running out of steam, shrinking the size of the transistor, including the channels and contacts – without compromising its performance – has been a major challenge in recent years.




IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometres – around 10,000 times thinner than a strand of human hair and less than half the size of today's leading silicon technology. IBM's new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

Earlier this summer, IBM unveiled the first 7 nanometre node silicon test chip, pushing the limits of silicon technologies. By advancing the research of carbon nanotubes to replace traditional silicon devices, IBM is paving the way for a post-silicon future and delivering on its $3 billion chip R&D investment announced in July 2014.

"These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems," said vice president of Science & Technology at IBM Research, Dario Gil. "As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected."



IBM's new carbon nanotube bonding technique, showing the fabricated nanotube transistor with end-bonded contact and a contact length below 10 nm, with potential to scale to 1.8 nm.


Carbon nanotubes represent a new class of semiconductor materials, consisting of single atomic sheets of carbon rolled into a tube. They form the core of a transistor device whose superior electrical properties could allow Moore's Law to continue for at least several more generations.

Electrons in carbon transistors can move more easily than in silicon-based devices, and the ultra-thin body of carbon nanotubes provides additional advantages at the atomic scale. Inside a chip, contacts are the valves that control the flow of electrons from metal into the channels of a semiconductor. As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. Until now, decreasing the size of the contacts on a device caused a commensurate drop in performance – a challenge facing both silicon and carbon nanotube transistor technologies.
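
To make the scaling problem concrete, here is a toy comparison (the resistance values are illustrative assumptions, not IBM's measurements): in a conventional side-bonded contact, resistance grows roughly in inverse proportion to contact length, while an idealised end-bonded contact of the kind described below is largely insensitive to it.

```python
# Toy model of contact resistance vs contact length (illustrative only).
def side_bonded_resistance(contact_length_nm, rho_c=3000.0):
    """Hypothetical side-bonded contact: R ~ rho_c / L (kilo-ohms)."""
    return rho_c / contact_length_nm

def end_bonded_resistance(contact_length_nm, r_end=36.0):
    """Idealised end-bonded contact: resistance roughly constant (kilo-ohms)."""
    return r_end

for length in [40, 20, 10, 5]:
    print(f"L = {length:>2} nm: side-bonded ~ {side_bonded_resistance(length):5.0f} kOhm, "
          f"end-bonded ~ {end_bonded_resistance(length):3.0f} kOhm")
```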

IBM researchers had to forgo traditional contact schemes, inventing instead a metallurgical process akin to "microscopic welding" that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunk below 10 nanometres without deteriorating the performance of the carbon nanotube devices.

“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” Gil added. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology within the decade.”


A set of end-contacted nanotube transistors. Credit: IBM Research





24th September 2015

New world record for quantum teleportation distance

Researchers at the National Institute of Standards and Technology (NIST) have "teleported" or transferred quantum information carried in light particles over 100 kilometres (km) of optical fibre, four times farther than the previous record.




Researchers at NIST have “teleported” or transferred quantum information carried in light particles over 100 km (62 miles) of optical fibre – four times farther than the previous record. The experiment confirmed that quantum communication is feasible over long distances in fibre. Other research groups have teleported quantum information over longer distances in free space, but the ability to do so over conventional fibre-optic lines offers more flexibility for network design.

Not to be confused with Star Trek's fictional "beaming up" of people, quantum teleportation involves the transfer, or remote reconstruction, of information encoded in quantum states of matter or light. Teleportation is useful in both quantum communications and quantum computing, which offer prospects for novel capabilities such as unbreakable encryption. The basic method for quantum teleportation was first proposed more than 20 years ago and has been performed by a number of research groups, including one at NIST using atoms in 2004.

The new record, described in Optica, involved transferring quantum information contained in one photon – its specific time slot in a sequence – to another photon transmitted over 102 km of spooled fibre in a laboratory in Colorado. The achievement was made possible by advanced single-photon detectors designed and made at NIST.




"Only about 1 percent of photons make it all the way through 100 km of fibre," says NIST's Marty Stevens. "We never could have done this experiment without these new detectors, which can measure this incredibly weak signal."

Until now, so much quantum data was lost in fibre that transmission rates and distances were low. This new teleportation technique could be used to make devices called quantum repeaters that could resend data periodically, in order to extend network reach, perhaps enough to eventually build a "quantum internet." Previously, researchers thought quantum repeaters might need to rely on atoms or other matter, instead of light – a difficult engineering challenge that would also slow down transmission.

Various quantum states can be used to carry information; the NIST experiment used quantum states that indicate when in a sequence of time slots a single photon arrives. This method is novel in that four of NIST's photon detectors were positioned to filter out specific quantum states. The detectors rely on superconducting nanowires made of molybdenum silicide. They can record over 80 percent of arriving photons, revealing whether they are in the same or different time slots, each just 1 nanosecond long. The experiments were performed at wavelengths commonly used in telecommunications. Below is an infographic with more details.







12th September 2015

Constant use of social media linked to teen anxiety and depression

A study presented at a British Psychological Society conference warns that the pressure on teenagers to be constantly available on social media is linked to poorer sleep quality, lower self-esteem, higher anxiety and increased depression levels.


© Bantuquka | Dreamstime.com


The need to be constantly available and respond 24/7 on social media accounts can cause depression, anxiety and reduced sleep quality for teenagers, says a study presented yesterday at a British Psychological Society conference in Manchester.

The researchers, Dr Heather Cleland Woods and Holly Scott of the University of Glasgow, gave questionnaires to 467 teenagers about their overall and night-time-specific social media use. A further set of tests measured sleep quality, self-esteem, anxiety, depression and emotional investment in social media – the pressure felt to be available 24/7 and the anxiety around, for example, not responding immediately to texts or posts.

Dr Cleland Woods explained: "Adolescence can be a period of increased vulnerability for the onset of depression and anxiety, and poor sleep quality may contribute to this. It is important that we understand how social media use relates to these. Evidence is increasingly supporting a link between social media use and wellbeing, particularly during adolescence, but the causes of this are unclear."

Analysis showed that overall and night-time-specific social media use, along with emotional investment, were related to poorer sleep quality, lower self-esteem, and higher anxiety and depression levels.

Lead researcher Dr Cleland Woods said: "While overall social media use impacts on sleep quality, those who log on at night appear to be particularly affected. This may be mostly true of individuals who are highly emotionally invested. This means we have to think about how our kids use social media, in relation to time for switching off."

Last year, a similar study found that increased use of digital media is causing children's social skills to decline.


© Nito100 | Dreamstime.com





8th September 2015

3-D printing of transparent glass is now possible

Researchers at MIT have demonstrated the first 3D printing technique able to make transparent glass objects.

The range of materials that 3D printers can work with has been steadily growing in recent years – from bioprinted cartilage constructs, to combinations of different plastic types in full colour, to elastic silicone membranes for heart attack patients, and even artificial rhino horn.

Some materials have been more difficult to develop, such as glass. Until now, only opaque glass could be 3D printed. However, a team at the Massachusetts Institute of Technology (MIT) has achieved the first method for creating fully transparent glass, as demonstrated in this video.

The platform, known as "G3DP", is based on a dual-heated chamber concept. An upper chamber acts as a kiln cartridge, while the lower chamber serves to anneal the structures. The kiln cartridge operates at over 1,000°C (1,832°F), with molten material being funnelled through a custom alumina-zircon-silica nozzle. Printed objects are cooled in the annealing chamber in a gradual, controlled way to ensure they don't break.
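
The purpose of the gradual cooling is stress relief. A minimal sketch of what such a controlled annealing ramp might look like – the temperatures and rates here are illustrative figures for soda-lime glass, not the team's actual schedule:

```python
# Illustrative annealing schedule: soak near the annealing point to relax
# stress, cool slowly through the glass transition, then ramp down faster.
def annealing_schedule(t_hours):
    """Return a target temperature (deg C) at time t, piecewise-linear."""
    if t_hours < 1.0:                              # soak at the annealing point
        return 515.0
    elif t_hours < 5.0:                            # slow cool: -60 C/h
        return 515.0 - (t_hours - 1.0) * 60.0
    else:                                          # faster ramp once relaxed
        return max(25.0, 275.0 - (t_hours - 5.0) * 125.0)

for t in [0.5, 2, 4, 6, 7]:
    print(f"t = {t:>3} h -> {annealing_schedule(t):6.1f} C")
```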

Finding a nozzle suitable for molten glass was a major challenge, according to the researchers. It had to be made of a material able to handle both high temperatures and resist the glass sticking to it. A paper describing their work, "Additive Manufacturing of Optically Transparent Glass", is available online.







7th September 2015

The world's first quantum dot monitor

Electronics maker Philips has launched the world's first quantum dot monitor in Europe, covering 99% of the Adobe RGB colour gamut.




Pictured here is the 27" E6, the world's first quantum dot monitor, delivering 99% Adobe RGB colour in full HD 1080p (1920x1080) at a mainstream price. It was developed in collaboration between Philips and QD Vision.

The monitor is based on Colour IQ technology, which uses quantum dots – semiconductor nanocrystals engineered to emit light in any colour. The size of each quantum dot determines the energy it emits, which determines the exact colour. In addition to their large and extremely accurate colour palette, quantum dots are also very stable: the colours of individual dots won't warp or change over time.
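
The size-colour relationship follows from quantum confinement: the smaller the dot, the larger its band gap and the bluer its emission. A rough estimate using the simplified Brus (particle-in-a-sphere) model with textbook CdSe parameters – CdSe is an assumption for illustration, since the article doesn't say which material QD Vision uses:

```python
# Rough Brus-model estimate of emission wavelength vs dot diameter
# (Coulomb term ignored; CdSe parameters assumed; illustrative only).
H = 6.626e-34          # Planck constant, J*s
M0 = 9.109e-31         # electron rest mass, kg
EV = 1.602e-19         # joules per eV

E_GAP_BULK = 1.74      # CdSe bulk band gap, eV
M_E, M_H = 0.13 * M0, 0.45 * M0   # CdSe effective electron/hole masses

def emission_wavelength_nm(diameter_nm):
    d = diameter_nm * 1e-9
    confinement = (H**2 / (8 * d**2)) * (1/M_E + 1/M_H) / EV   # eV
    return 1239.84 / (E_GAP_BULK + confinement)                # eV -> nm

for d in [2, 3, 4, 5, 6, 7]:
    print(f"{d} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
# Smaller dots come out blue (~460 nm), larger ones red (~680 nm).
```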

"Quantum dot technology is changing the way monitor users think about colour, and the new 27" E Line monitor is the first on the market to showcase this new technology," said Stefan Sommer, a director at MMD, which exclusively markets and sells Philips branded LCD displays worldwide. "QD Vision is helping us create a monitor with 99% Adobe RGB colour at a very aggressive price point, making it accessible to everyone who uses a monitor."




Even at the highest price points, most of today's monitors display less than 95% of the Adobe RGB standard, with mainstream models typically covering only around 70% of the colour space. Using the Colour IQ system, it is now possible for monitors to deliver the full Adobe RGB standard (>99%), but at much lower overall cost.

"The superior colour of our edge-lit quantum dots and our strong price-performance characteristics make them an ideal catalyst for positive disruption in the global monitor industry," said Matt Mazzuchi, Vice President, Market and Business Development at QD Vision. "Our close collaboration with Philips monitors brought this full gamut colour monitor to European consumers."

The new E6 quantum dot monitor will be available in Europe from October 2015. There's no word yet on when the product will launch in the U.S.







4th August 2015

New memory technology is 1,000 times faster

Intel and Micron have unveiled "3D XPoint" – a new memory technology that is 1,000 times faster than NAND and 10 times denser than conventional DRAM.




Intel Corporation and Micron Technology, Inc. have unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionise any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash in 1989.

The explosion of connected devices and digital services is generating massive amounts of new data. To make this data useful, it must be stored and analysed very quickly, creating challenges for service providers and system builders who must balance cost, power and performance trade-offs when they design memory and storage solutions. 3D XPoint technology combines the performance, density, power, non-volatility and cost advantages of all available memory technologies on the market today. This technology is up to 1,000 times faster, with up to 1,000 times greater endurance than NAND, and is 10 times denser than conventional memory.




"For decades, the industry has searched for ways to reduce the lag time between the processor and data to allow much faster analysis," says Rob Crooke, senior vice president and general manager of Intel's Non-Volatile Memory Solutions Group. "This new class of non-volatile memory achieves this goal and brings game-changing performance to memory and storage solutions."

"One of the most significant hurdles in modern computing is the time it takes the processor to reach data on long-term storage," says Mark Adams, president of Micron. "This new class of non-volatile memory is a revolutionary technology that allows for quick access to enormous data sets and enables entirely new applications."

As the digital world balloons exponentially – from 4.4 zettabytes of data created in 2013, to an expected 44 zettabytes by 2020 – 3D XPoint technology can turn this immense amount of data into valuable information in nanoseconds. For example, retailers may use 3D XPoint technology to identify fraud patterns in financial transactions more quickly; healthcare researchers could process and analyse much larger data sets in real time, accelerating complex tasks such as genetic analysis and disease tracking.

The performance benefits of 3D XPoint technology could also enhance the PC experience, allowing consumers to enjoy faster interactive social media and collaboration as well as more immersive gaming experiences. The non-volatile nature of this technology also makes it a great choice for a variety of low-latency storage applications, since data is not erased when the device is powered off.




Following more than a decade of research and development, 3D XPoint technology was built from the ground up to address the need for non-volatile, high-performance, high-endurance and high-capacity storage and memory at an affordable cost. It ushers in a new class of non-volatile memory that significantly reduces latencies, allowing much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage.

The innovative, transistor-less cross point architecture creates a three-dimensional checkerboard where memory cells sit at the intersection of word lines and bit lines, allowing the cells to be addressed individually. As a result, data can be written and read in small sizes, leading to faster and more efficient read/write processes.
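
Conceptually, the addressing scheme works like this minimal sketch – an illustration of the cross-point idea only, not Intel and Micron's actual selector or cell physics:

```python
# Cross-point addressing: each cell sits at a word-line/bit-line
# intersection, so driving one line of each kind selects exactly one cell.
class CrossPointArray:
    def __init__(self, word_lines, bit_lines):
        self.cells = [[0] * bit_lines for _ in range(word_lines)]

    def write(self, word_line, bit_line, value):
        # One word line + one bit line addresses a single cell --
        # no per-cell access transistor is needed in this architecture.
        self.cells[word_line][bit_line] = value

    def read(self, word_line, bit_line):
        return self.cells[word_line][bit_line]

array = CrossPointArray(word_lines=4, bit_lines=4)
array.write(2, 3, 1)
print(array.read(2, 3))   # -> 1: individually addressed, small-granularity access
```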

3D XPoint technology will sample later this year with select customers, and Intel and Micron are developing individual products based on the technology.







31st July 2015

Neural network is 10 times bigger than the previous world record

Digital Reasoning, a developer of cognitive computing, recently announced that it has trained the largest neural network in the world to date with a stunning 160 billion parameters. Google’s previous record was 11.2 billion, while the Lawrence Livermore National Laboratory trained a neural network with 15 billion parameters.




The results of Digital Reasoning's research with deep learning and neural networks were published in the Journal of Machine Learning and on arXiv, alongside work from other notable companies such as Google, Facebook, and Microsoft. They were presented at the prestigious 32nd International Conference on Machine Learning in Lille, France, earlier this month.

Neural networks are computer systems modelled after the human brain. Like the human brain, these networks can gather new data, process it, and react to it. Digital Reasoning's paper, titled "Modelling Order in Neural Word Embeddings at Scale," details both the impressive scope of their neural network and the marked improvement in quality.

In their research, Matthew Russell, Digital Reasoning's Chief Technology Officer, and his team evaluated neural word embeddings on "word analogy" accuracy. Neural networks generate a vector of numbers for each word in a vocabulary, which allows researchers to do "word math": for instance, "king" minus "man" plus "woman" yields "queen". On an industry-standard dataset of around 20,000 word analogies, Google's previous best accuracy was 76.2% – in other words, its system answered 76.2% of the analogies correctly – while Stanford's best score was 75.0%. Digital Reasoning's model achieves 85.8% accuracy, a nearly 40% reduction in error over both Google and Stanford and a major advance in the state of the art.
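
The "word math" itself is simple vector arithmetic. A toy reconstruction with made-up three-dimensional vectors – real embeddings are learned from text and have hundreds of dimensions, but the analogy test works exactly like this:

```python
# Word-analogy evaluation with hypothetical toy vectors.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.1, 0.1]),
    "woman": np.array([0.8, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.9, 0.5]),
}

def analogy(a, b, c):
    """Return the word closest (by cosine) to vec(a) - vec(b) + vec(c)."""
    target = embeddings[a] - embeddings[b] + embeddings[c]
    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = (w for w in embeddings if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("king", "man", "woman"))   # -> "queen"
```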

“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Russell. “This is a tremendous accomplishment for the company and marks an important milestone in putting a defensible stake in the ground towards our position as not just a thought leader in the space, but as an organisation that is truly advancing the state of the art in a rigorous, peer reviewed way.”





24th July 2015

Deep Genomics creates deep learning technology to transform genomic medicine

Deep Genomics, a new technology start-up, was launched this week. The company aims to use deep learning and artificial intelligence to accelerate our understanding of the human genome.


Credit: Hui Y. Xiong et al./Science


Evolution has altered the human genome over hundreds of thousands of years – and now humans can do it in a matter of months. Faster than anyone expected, scientists have discovered how to read and write DNA code in a living body, using hand-held genome sequencers and gene-editing systems. But knowing how to write is different from knowing what to write. To diagnose and treat genetic diseases, scientists must predict the biological consequences of both existing mutations and those they plan to introduce.

Deep Genomics, a start-up company spun out of research at the University of Toronto, is on a mission to predict the consequences of genomic changes by developing new deep learning technologies.

“Our vision is to change the course of genomic medicine,” says Brendan Frey, the company’s president and CEO, who is also a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR). “We’re inventing a new generation of deep learning technologies that can tell us what will happen within a cell when DNA is altered by natural mutations, therapies or even by deliberate gene editing.”

Deep Genomics is the only company to combine more than a decade of world-leading expertise in both deep learning and genome biology. “Companies like Google, Facebook and DeepMind have used deep learning to hugely improve image search, speech recognition and text processing. We’re doing something very different. The mission of Deep Genomics is to save lives and improve health,” says Frey. CIFAR Senior Fellow Yann LeCun, the head of Facebook’s Artificial Intelligence lab, is also an advisor to the company.

"Our company, Deep Genomics, will change the course of genomic medicine. CIFAR played a crucial role in establishing the research network that led to our breakthroughs in deep learning and genomic medicine," Frey says.

Deep Genomics is now releasing its first product, called SPIDEX, which provides information about how hundreds of millions of DNA mutations may alter splicing in the cell, a process that is crucial for normal development. Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described earlier this year in the journal Science.

“The genome contains a catalogue of genetic variation that is our DNA blueprint for health and disease,” says CIFAR Senior Fellow Stephen Scherer, director of the Centre for Applied Genomics at SickKids and the McLaughlin Centre at the University of Toronto, and an advisor to Deep Genomics. “Brendan has put together a fantastic team of experts in artificial intelligence and genome biology – if anybody can decode this blueprint and harness it to take us into a new era of genomic medicine, they can.”

Geneticists have spent decades experimentally identifying and examining mutations within specific genes that can be clearly connected to disease, such as the BRCA1 and BRCA2 genes for breast cancer. However, the number of mutations that could lead to disease is vast, and most have never been observed, let alone studied.

These mystery mutations pose an enormous challenge for current genomic diagnosis. Labs send the mutations they’ve collected to Deep Genomics, and the company uses their proprietary deep learning system, which includes SPIDEX, to ‘read’ the genome and assess how likely the mutation is to cause a problem. It can also connect the dots between a variant of unknown significance and a variant that has been linked to disease. “Faced with a new mutation that’s never been seen before, our system can determine whether it impacts cellular biochemistry in the same way as some other highly dangerous mutation,” says Frey.

Deep Genomics is committed to supporting publicly funded efforts to improve human health. “Soon after our Science paper was published, medical researchers, diagnosticians and genome biologists asked us to create a database to support academic research,” says Frey. “The first thing we’re doing with the company is releasing this database – that’s very important to us.”

“Soon, you’ll be able to have your genome sequenced cheaply and easily with a device that plugs into your laptop. The technology already exists,” explains Frey. “When genomic data is easily accessible to everyone, the big questions are going to be about interpreting the data and providing people with smart options. That’s where we come in.”

Deep Genomics envisions a future where computers are trusted to predict the outcome of experiments and treatments, long before anyone picks up a test tube. To realise that vision, the company plans to grow its team of data scientists and computational biologists. Deep Genomics will continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA, from transcription and splicing to polyadenylation and translation. Building a thorough understanding of these processes has massive implications for genetic testing, pharmaceutical research and development, personalised medicine and improving human longevity.





24th July 2015

New computer program is first to recognise sketches more accurately than a human

Researchers from Queen Mary University of London (QMUL) have built the first computer program that can recognise hand-drawn sketches better than humans.




Known as Sketch-a-Net, the program correctly identifies the subject of sketches 74.9 per cent of the time, compared with a success rate of 73.1 per cent for humans. As sketching becomes more relevant with the increasing use of touchscreens, this development could provide a foundation for new ways to interact with computers.

Touchscreens could understand what you are drawing – enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories. This improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research, which was accepted at the British Machine Vision Conference, also showed that the program performed better at determining finer details in sketches. For example, it successfully distinguished the specific bird variants 'seagull', 'flying-bird', 'standing-bird' and 'pigeon' with 42.5 per cent accuracy, compared with just 24.8 per cent for humans.




Sketches are very intuitive to humans and have been used as a communication tool for thousands of years, but recognising free-hand sketches is challenging because they are abstract, varied and consist of black and white lines rather than coloured pixels like a photo. Solving sketch recognition will lead to a greater scientific understanding of visual perception.

Sketch-a-Net is a 'deep neural network' – a type of computer program designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, especially the order in which the strokes were drawn. This information was previously ignored, but is especially important for understanding drawings on touchscreens.
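
One simple way to expose stroke order to a neural network – a sketch of the general idea, with details simplified rather than taken from the paper – is to rasterise cumulative subsets of strokes into separate input channels, so the network can see what was drawn first:

```python
# Order-aware input channels: channel k contains only the first strokes.
import numpy as np

def rasterise(strokes, size=32):
    """Mark the points of each stroke (lists of (x, y) in [0,1]) on a grid."""
    img = np.zeros((size, size), dtype=np.float32)
    for stroke in strokes:
        for x, y in stroke:
            img[int(y * (size - 1)), int(x * (size - 1))] = 1.0
    return img

def order_aware_channels(strokes, n_channels=3, size=32):
    """Channel k shows the first (k+1)/n fraction of the strokes."""
    channels = []
    for k in range(1, n_channels + 1):
        cutoff = max(1, round(len(strokes) * k / n_channels))
        channels.append(rasterise(strokes[:cutoff], size))
    return np.stack(channels)           # shape: (n_channels, size, size)

sketch = [[(0.1, 0.1), (0.9, 0.1)],     # stroke 1: top edge
          [(0.9, 0.1), (0.9, 0.9)],     # stroke 2: right edge
          [(0.9, 0.9), (0.1, 0.9)]]     # stroke 3: bottom edge
print(order_aware_channels(sketch).shape)   # -> (3, 32, 32)
```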




Timothy Hospedales, co-author of the study and Lecturer in the School of Electronic Engineering and Computer Science, QMUL, said: “It’s exciting that our computer program can solve the task even better than humans can. Sketches are an interesting area to study because they have been used since pre-historic times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again. This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”

The paper, 'Sketch-a-Net that Beats Humans' by Q. Yu, Y. Yang, Y. Song, T. Xiang and T. Hospedales, will be presented at the 26th British Machine Vision Conference on Tuesday 8th September 2015.







20th July 2015

New massless particle is observed for the first time

Scientists report the discovery of the Weyl fermion after an 85-year search. This massless quasiparticle could lead to future electronics that are faster and produce less waste heat.




An international team led by Princeton University scientists has discovered an elusive massless particle, first theorised 85 years ago. This particle is known as the Weyl fermion, and could give rise to faster and more efficient electronics, because of its unusual ability to behave as both matter and antimatter inside a crystal. Weyl fermions, if applied to next-generation electronics, could allow a nearly free and efficient flow of electricity in electronics – and thus greater power – especially for computers. The researchers report their discovery in the journal Science.

Proposed by the mathematician and physicist Hermann Weyl in 1929, Weyl fermions have long been sought by scientists, because they are regarded as possible building blocks of other subatomic particles, and are even more basic than electrons. Their basic nature means that Weyl fermions could provide much more stable and efficient particle transport than electrons, the main particle behind modern electronics. Unlike electrons, Weyl fermions are massless and possess a high degree of mobility.

"The physics of the Weyl fermion are so strange – there could be many things that arise from this particle that we're just not capable of imagining now," explained Professor M. Zahid Hasan, who led the team.

The researchers' discovery differs from other particle finds in that the Weyl fermion can be reproduced and potentially applied. Particles such as the Higgs boson are typically detected in the fleeting aftermath of collisions. The Weyl fermion, however, was captured inside a specially designed synthetic metallic crystal called tantalum arsenide.


Professor M. Zahid Hasan


The Weyl fermion has two characteristics that could improve future electronics, possibly helping to continue the exponential growth in computer power, while also proving useful in developing efficient quantum computing. Firstly, Weyl fermions behave like a composite of monopole- and antimonopole-like particles inside a crystal. This means that Weyl particles with opposite, magnetic-like charges can nonetheless move independently of each other with a high degree of mobility. Secondly, Weyl fermions can be used to create massless electrons that move very quickly with no backscattering. In electronics, backscattering hinders efficiency and generates heat. While normal electrons are lost when they collide with an obstruction, Weyl electrons simply move through and around roadblocks.

"It's like they have their own GPS and steer themselves without scattering," said Hasan. "They will move and move only in one direction since they are either right-handed or left-handed and never come to an end because they just tunnel through. These are very fast electrons that behave like unidirectional light beams and can be used for new types of quantum computing."

Hasan and his group researched and simulated dozens of crystal structures before finding the one suitable for holding Weyl fermions. Once fashioned, the crystals were loaded into a scanning tunnelling spectromicroscope (pictured above) and cooled to near absolute zero. Crystals passing the spectromicroscope test were taken to the Lawrence Berkeley National Laboratory in California, for testing with high-energy photon beams. Once fired through the crystal, the beams' shape, size and direction indicated the presence of the long-elusive Weyl fermion.

The hunt for the Weyl fermion began in the earliest days of quantum theory, when physicists first realised that their equations implied the existence of antimatter counterparts to electrons and other commonly known particles.

"People figured that although Weyl's theory was not applicable to relativity or neutrinos, it is the most basic form of fermion and had all other kinds of weird and beautiful properties that could be useful," said Hasan.

"After more than 80 years, we found that this fermion was already there, waiting. It is the most basic building block of all electrons," he said. "It is exciting that we could finally make it come out following Weyl's 1929 theoretical recipe."





14th July 2015

China maintains supercomputing lead

For the fifth consecutive time, Tianhe-2, a supercomputer developed by China's National University of Defence Technology, has retained its position as the world's no. 1 system, according to the 45th edition of the twice-yearly TOP500 list.




Tianhe-2, which means "Milky Way-2", continues to lead the TOP500 list with a performance of 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark.

In second place is Titan, a Cray XK7 system at the Department of Energy's (DOE) Oak Ridge National Laboratory. Titan, the top system in the US and one of the most energy-efficient systems on the list, achieved 17.59 petaflop/s on the Linpack benchmark.

The only new entry in the top ten is at no. 7 – Shaheen II is a Cray XC40 system installed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Shaheen II achieved 5.54 petaflop/s on the Linpack benchmark, making it the highest-ranked Middle East system in the 22-year history of the list and the first to crack the top ten.

There are 68 systems with performance greater than 1 petaflop/s on the list, up from 50 last November. In total, the combined performance of all 500 systems has grown to 363 petaflop/s, compared to 309 petaflop/s last November and 274 petaflop/s one year ago. HP has the lead in the total number of systems with 178 (35.6%), compared to IBM with 111 systems (22.2%).

Nine of the top ten systems were installed in 2011 or 2012, and this low level of turnover among the top supercomputers reflects a slowing trend that began in 2008. However, new systems are in the pipeline that may reignite the pace of development and get performance improvements back on track. For example, Oak Ridge National Laboratory is building the IBM/Nvidia "Summit" – featuring up to 300 petaflops, nearly an order of magnitude faster than China's Tianhe-2 – planned for 2018. Meanwhile, British company Optalysys claims it will have a multi-exaflop optical computer by 2020.

To view the complete list, visit top500.org.







13th July 2015

7 nanometre chips enable Moore's Law to continue

Researchers have announced a breakthrough in the manufacture of 7 nanometre (nm) computer chips, enabling the trend of Moore's Law to continue for the next few years.




IBM Research has announced the semiconductor industry's first 7nm (nanometre) node test chips with functioning transistors. The breakthrough was accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute's Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) and could result in the ability to place more than 20 billion tiny switches – transistors – on the fingernail-sized chips that power everything from smartphones to spacecraft.

To achieve the higher performance, lower power and scaling benefits promised by 7nm technology, researchers had to bypass conventional semiconductor manufacturing approaches. Among the novel processes and techniques pioneered in this collaboration were a number of industry-first innovations, most notably Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.

Industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging "exponential" technologies. This accomplishment was part of IBM's $3 billion, five-year investment in chip R&D announced last year.




"For business and society to get the most out of tomorrow's computers and devices, scaling to 7nm and beyond is essential," said Arvind Krishna, senior vice president and director of IBM Research. "That's why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology. Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come."

Microprocessors utilising 22nm and 14nm technology power today's servers, cloud data centres and mobile devices, and 10nm technology is well on the way to becoming a mature technology. The IBM Research-led alliance achieved close to 50 percent area scaling improvements over today's most advanced technology, introduced SiGe channel material for transistor performance enhancement at 7nm node geometries, process innovations to stack them below 30nm pitch and full integration of EUV lithography at multiple levels. These techniques and scaling could result in at least a 50 percent power/performance improvement for next generation systems that will power the Big Data, cloud and mobile era. These new 7nm chips are expected to start appearing in computers and other gadgets in 2017-18.
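
The "close to 50 percent area scaling" figure matches simple geometry, if node names are taken at face value (they are partly marketing labels, so treat this as a rough consistency check rather than a layout calculation):

```python
# If linear dimensions shrink from a 10nm node to a 7nm node,
# area scales with the square of the ratio.
ratio = 7 / 10
print(f"area factor: {ratio**2:.2f}")   # 0.49 -> roughly 50% area reduction
```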







8th July 2015

The world's first 2TB consumer SSDs

Samsung has announced the first 2 terabyte solid state drives for the consumer market – continuing the exponential trend in data storage.




Samsung has announced two new SSDs – the 850 Pro and 850 EVO – both offering double the capacity of the previous generation. The 2.5" form factor drives can greatly boost performance for desktops and laptops. They will be especially useful for accessing and storing 4K video, which can often involve enormous file sizes. Available capacities range from 120GB, 250GB, 500GB and 1TB all the way up to 2TB.

The 850 Pro is designed for power users needing the maximum possible speed, while the 850 EVO is less powerful but somewhat cheaper. The 850 Pro features up to 550MBps sequential read and 520MBps sequential write rates and 100,000 random I/Os per second (IOPS). The 850 EVO has 540MBps sequential read and 520MBps write rates, with up to 90,000 random IOPS. Both models feature 3D V-NAND technology, which stacks 32 layers of transistors on top of each other. The drives also use multi-level cell (MLC, 2-bit per cell) and triple-level cell (TLC, 3-bit per cell) technology for even greater memory density.
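
A quick back-of-the-envelope figure from those headline specifications: filling the 2TB drive end to end at the quoted sequential write rate would take about an hour (sustained rates in practice may be lower than the headline number):

```python
# Time to write 2TB at the quoted 520 MB/s sequential write rate.
capacity_gb = 2000
write_mb_per_s = 520
seconds = capacity_gb * 1000 / write_mb_per_s
print(f"{seconds / 60:.0f} minutes")   # ~64 minutes
```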

Until recently, consumers were forced to choose between speed and size when upgrading their hard drives. For pure speed, a solid state drive was the best option, while larger capacities were typically served by slower, clunkier spinning drives. These new terabyte-scale SSDs are set to change that – combining high speed with high capacity. Price may still be an issue, as Samsung's new product line doesn't come cheap: the 2TB version of the 850 Pro will retail for $999.99 and the 850 EVO for $799.99. However, given the price-performance trend witnessed in earlier generations of data storage, these high-capacity SSDs are likely to become much cheaper before long.

"Samsung experienced a surge in demand for 500 gigabyte (GB) and higher capacity SSDs with the introduction of our V-NAND SSDs," says Un-Soo Kim, Senior Vice President of Branded Product Marketing, Memory Business, in a press release from Samsung. "The release of the 2TB SSD is a strong driver into the era of multi-terabyte SSD solutions. We will continue to expand our ultra-high performance and large density SSD product portfolio and provide a new computing experience to users around the globe."







26th June 2015

70% of the world using smartphones by 2020

By 2020, advanced mobile technology will be commonplace around the globe, according to a new report from Ericsson.




The latest edition of the Ericsson Mobility Report shows that by 2020, advanced mobile technology will be commonplace in every corner of the globe: smartphone subscriptions will more than double to 6.1 billion; 70% of the world's population will be using smartphones; and over 90% will be covered by mobile broadband networks.

The report – a comprehensive update on the latest mobile trends – shows that growth in mature markets comes from an increasing number of devices per individual. In developing regions, it comes from a swell of new subscribers as smartphones become more affordable; almost 80% of smartphone subscriptions added by year-end 2020 will be from Asia Pacific, the Middle East, and Africa.

With the continued rise of smartphones comes an exponential growth in data usage: smartphone data is predicted to increase ten-fold by 2020, when 80% of all mobile data traffic will come from smartphones (as opposed to basic feature phones). In North America, monthly data usage per smartphone will increase from an average of 2.4 GB today to 14 GB by 2020. It is likely that the 5G standard will be adopted by then.
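
The quoted jump from 2.4 GB to 14 GB per month over roughly five years implies a compound annual growth rate of around 42 per cent:

```python
# Implied compound annual growth rate of North American smartphone data use.
start, end, years = 2.4, 14.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%} per year")   # ~42% per year
```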




Rima Qureshi, Senior Vice President and Chief Strategy Officer of Ericsson, says: "This immense growth in advanced mobile technology and data usage, driven by a surge in mobile connectivity and smartphone uptake, will make today's big data revolution feel like the arrival of a floppy disk. We see the potential for mass-scale transformation, bringing a wealth of opportunities for telecom operators and others to capture new revenue streams. But it also requires greater focus on cost efficient delivery and openness to new business models to compete and remain effective."

An expanding range of applications and business models, coupled with falling modem costs, are key factors driving the growth of connected devices. Added to this, new use cases are emerging for both short- and long-range applications, leading to even stronger growth of connected devices. Ericsson's forecast, outlined in the report, points to 26 billion connected devices by 2020, confirming we are well on the way to reaching the vision of 50 billion connected devices.

Mobile video traffic will grow by a staggering 55 percent per year until 2020, by which point it will constitute around 60 percent of all mobile data traffic. Growth is largely driven by shifting user preferences towards video streaming services, and the increasing prevalence of video in online content including news, advertisements and social media.

When looking at data consumption in advanced mobile broadband markets, findings show a significant proportion of traffic is generated by a limited number of subscribers. These heavy data users represent 10 percent of total subscribers, but generate 55 percent of total data traffic. Video is dominant among heavy users, who typically watch around one hour of video per day, which is 20 times more than the average user.

To accompany the Mobility Report, Ericsson has created a Traffic Exploration Tool for creating customised graphs and tables, using data from the report. The information can be filtered by region, subscription, technology, traffic, and device type.





8th June 2015

New mobile app could revolutionise human rights justice

The International Bar Association (IBA) today launched the eyeWitness app – a new tool for documenting and reporting human rights atrocities in a secure and verifiable way, so the information can be used as evidence in a court of law.




With social media increasingly the forum for communicating human rights, many online images have raised awareness of atrocities around the world, but they typically lack the attribution or information necessary to be used as evidence in a court of law. Now anyone with an Android smartphone – including human rights defenders, journalists, and investigators – can download the eyeWitness to Atrocities app and help hold accountable the perpetrators of atrocity crimes, such as genocide, crimes against humanity, torture and war crimes.

"The eyeWitness to Atrocities app will be a transformational tool in the fight for human rights, providing a solution to the evidentiary challenges surrounding mobile phone footage," said IBA Executive Director Mark Ellis. "Until now, it has been extremely difficult to verify the authenticity of these images and to protect the safety of those brave enough to record them. As an advocate for the voiceless, the International Bar Association is dedicated to empowering activists on the ground who are witnessing these atrocities with the ability to bring criminals to justice."

The app design is based on extensive research on the rules of evidence in international, regional and national courts and tribunals. It includes several features to guarantee authenticity, facilitate verification and protect confidentiality by allowing the user to decide whether or not to be anonymous.

"Putting information and technology in the hands of citizens worldwide has a powerful role to play in advancing the rule of law," said Ian McDougall, EVP and General Counsel of LexisNexis Legal & Professional, which partnered with the IBA. "LexisNexis Legal & Professional's world class data hosting capabilities will provide the eyeWitness programme with the same technology that we use to safeguard sensitive and confidential material for our clients every day. It's all part of our company's broader commitment to advancing the rule of law around the world, as we believe every business has a role to play in building a safer, more just global society."

How the App Works

When a user records an atrocity, the app automatically collects and embeds into the video file the GPS coordinates, date and time, device sensor data, and information about surrounding objects such as Bluetooth and Wi-Fi networks. The user has the option of adding any additional identifying information about the image. This metadata provides information integral to verifying and contextualising the footage. The images and accompanying data are encrypted and securely stored within the app. The app also embeds a chain-of-custody record to verify that the footage has not been edited or digitally manipulated. The user then submits this information directly from the app to a database maintained by the eyeWitness organisation.
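
The article doesn't spell out the cryptography, but a chain-of-custody record of this kind can be as simple as hashing the footage together with its metadata at capture time and re-checking the hash on receipt – a minimal sketch of the idea, not the eyeWitness app's actual scheme:

```python
# Minimal chain-of-custody fingerprint: hash video bytes + metadata.
import hashlib, json

def custody_record(video_bytes, metadata):
    payload = video_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

video = b"...raw video bytes..."
meta = {"gps": [51.5, -0.12], "utc": "2015-06-08T12:00:00Z"}
fingerprint = custody_record(video, meta)          # stored at capture time

# Later, an analyst recomputes the hash; any edit to the video or its
# metadata changes the fingerprint, so tampering is detected.
assert custody_record(video, meta) == fingerprint
print("footage verified:", fingerprint[:16], "...")
```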

Once the video is transmitted, it is stored in a secure repository that functions as a virtual evidence locker safeguarding the original, encrypted footage for future investigations and legal proceedings. The submitted footage is only accessible by a group of legal experts at eyeWitness who will analyse the footage and identify the appropriate authorities, including international, regional or national courts, to pursue relevant cases.

"The IBA is proud to be spearheading the project and allocating $1 million of IBA reserves as part of its efforts to promote, protect and enforce human rights under a just rule of law," said David Rivkin, IBA President. The IBA is working in partnership with LexisNexis Legal & Professional, a part of RELX Group, which is hosting the secure repository, database and backup system to store and analyse data collected via the app. The IBA is also partnering with human rights organisations to put the app in the hands of those working in some of the world's most severe conflict zones.

"The eyeWitness app promises to revolutionise the effectiveness of ground-level human rights reporting," said Deirdre Collings, Executive Director of the SecDev Foundation, a Canadian research organisation. "We also see the app's usefulness for media activists in conflict and authoritarian environments who undertake vital but high-risk reporting. We're proud to include eyeWitness in our training programme for our partners in Syria and will be rolling it out across our projects in the CIS region and Vietnam."

Established in 1947 and headquartered in London, the IBA is the world's leading organisation of international legal practitioners, bar associations and law societies. Through its global membership of individual lawyers, law firms, bar associations and law societies, it influences the development of international law reform and shapes the future of the legal profession throughout the world.







5th May 2015

'Centimetre accurate' GPS system could transform virtual reality and mobile devices

Researchers at the University of Texas at Austin have developed a centimetre-accurate GPS-based positioning system that could revolutionise geolocation on virtual reality headsets, cellphones and other technologies – making global positioning and orientation far more precise than what is currently available on a mobile device.




The researchers' new system could allow unmanned aerial vehicles to deliver packages to a specific spot on a consumer's back porch, improve collision avoidance technologies on cars and allow virtual reality (VR) headsets to be used outdoors. This ultra-accurate GPS, coupled with a smartphone camera, could be used to quickly build a globally referenced 3-D map of one's surroundings that would greatly expand the radius of a VR game. Currently, VR does not use GPS, which limits its use to indoors and usually a two- to three-foot radius.

"Imagine games where, rather than sit in front of a monitor and play, you are in your backyard actually running around with other players," said Todd Humphreys, lead researcher and assistant professor in the Department of Aerospace Engineering and Engineering Mechanics. "To be able to do this type of outdoor, multiplayer virtual reality game, you need highly accurate position and orientation that is tied to a global reference frame."

Humphreys and his team in the Radionavigation Lab have designed a low-cost system that reduces location errors from the size of a large car to the size of a nickel – a more than 100 times increase in accuracy. Humphreys collaborated on the new technology with Professor Robert W. Heath from the Department of Electrical and Computer Engineering, along with graduate students.

Centimetre-accurate positioning systems are already used in geology, surveying and mapping – but the survey-grade antennas these systems employ are too large and costly for use in mobile devices. This breakthrough by Humphreys and his team is a powerful and sensitive software-defined GPS receiver that can extract centimetre accuracies from the inexpensive antennas found in mobile devices. Such precise measurements were not previously possible. The researchers anticipate that their software's ability to leverage low-cost antennas will reduce the overall cost of centimetre accuracy and make it economically feasible for mobile devices.
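
Carrier-phase measurements are far more precise than ordinary code-based GPS because the L1 carrier wavelength is only about 19 cm, so resolving the phase to even a hundredth of a cycle pins the range down to millimetres. (The hard part, not shown here, is resolving the whole-number-of-cycles ambiguity.)

```python
# Why carrier phase gives centimetre-level (or better) ranging precision.
c = 299_792_458.0          # speed of light, m/s
f_l1 = 1575.42e6           # GPS L1 carrier frequency, Hz
wavelength = c / f_l1
print(f"L1 wavelength: {wavelength * 100:.1f} cm")          # ~19.0 cm
print(f"1% of a cycle: {wavelength * 0.01 * 1000:.1f} mm")  # ~1.9 mm
```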




Humphreys and his team have spent six years building a specialised receiver, called GRID, to extract so-called carrier phase measurements from low-cost antennas. GRID currently operates outside the phone, but it will eventually run on the phone's internal processor. To further develop this technology, they recently co-founded a startup, called Radiosense. Humphreys and his team are working with Samsung to develop a snap-on accessory that will tell smartphones, tablets and virtual reality headsets their precise position and orientation.

The researchers designed their system to deliver precise position and orientation information – how one's head rotates or tilts – to less than one degree of measurement accuracy. This level of accuracy could enhance VR environments that are based on real-world settings, as well as improve other applications including visualisation and 3-D mapping. Additionally, it could make a significant difference in people's daily lives, including transportation, where centimetre-accurate GPS could allow better vehicle-to-vehicle communication technology.

"If your car knows in real time the precise position and velocity of an approaching car that is blocked from view by other traffic, your car can plan ahead to avoid a collision," Humphreys said.





28th March 2015

10TB solid state drives may soon be possible

An innovative new process architecture can extend Moore's Law for flash storage – bringing significant improvements in density while lowering the cost of NAND flash.




Intel Corporation – in partnership with Micron – has announced the availability of 3D NAND, the world's highest-density flash memory. Flash is the storage technology used inside the lightest laptops, fastest data centres, and nearly every cellphone, tablet and mobile device.

3D NAND works by stacking the components in vertical layers with extraordinary precision to create devices with three times higher data capacity than competing NAND technologies. This enables more storage in a smaller space, bringing significant cost savings, low power usage and higher performance to a range of mobile consumer devices, as well as the most demanding enterprise deployments.

As data cells begin to approach the size of individual atoms, traditional "planar" NAND is nearing its practical scaling limits. This poses a major challenge for the memory industry. 3D NAND is poised to make a dramatic impact by keeping flash storage aligned with Moore's Law, the exponential trend of performance gains and cost savings, driving more widespread use of flash storage in the future.




"3D NAND technology has the potential to create fundamental market shifts," said Brian Shirley, vice president of Memory Technology and Solutions at Micron Technology. "The depth of the impact that flash has had to date – from smartphones to flash-optimised supercomputing – is really just scratching the surface of what's possible."

One of the most significant aspects of this breakthrough is in the foundational memory cell itself. Intel and Micron used a floating gate cell, a universally utilised design refined through years of high-volume planar flash manufacturing. This is the first use of a floating gate cell in 3D NAND, which was a key design choice to enable greater performance, quality and reliability.

The data cells are stacked vertically in 32 layers to achieve 256Gb multilevel cell (MLC) and 384Gb triple-level cell (TLC) dies within a standard package. This can enable gum stick-sized SSDs with 3.5TB of storage and standard 2.5-inch SSDs with greater than 10TB. Because capacity is achieved by stacking cells vertically, individual cell dimensions can be considerably larger. This is expected to increase both performance and endurance and make even the TLC designs well-suited for data centre storage.
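
Checking those headline numbers is simple arithmetic: a 384Gb TLC die holds 48GB, so a 3.5TB gum-stick drive implies roughly 73 such dies, and a 10TB 2.5-inch drive a little over 200:

```python
# Sanity-checking the capacity claims: gigabits per die -> gigabytes,
# then dies needed for the quoted drive capacities.
die_gb = 384 / 8                                               # 48GB per die
print(f"dies per 3.5TB gum-stick SSD: {3500 / die_gb:.0f}")    # ~73
print(f"dies per 10TB 2.5-inch SSD:   {10000 / die_gb:.0f}")   # ~208
```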




Key product features of this 3D NAND design include:

Large Capacities – Triple the capacity of existing technology, up to 48GB of NAND per die, enabling 750GB to fit in a single fingertip-sized package.

Reduced Cost per GB – First-generation 3D NAND is architected to achieve better cost efficiencies than planar NAND.

Fast – High read/write bandwidth, I/O speeds and random read performance.

Green – New sleep modes enable low-power use by cutting power to inactive NAND die (even when other dies in the same package are active), dropping power consumption significantly in standby mode.

Smart – Innovative new features improve latency and increase endurance over previous generations, and also make system integration easier.

The 256Gb MLC version of 3D NAND is sampling with select partners today, and the 384Gb TLC design will be sampling later this spring. The fab production line has already begun initial runs, and both devices will be in full production by the fourth quarter of this year. Both companies are also developing individual lines of SSD solutions based on 3D NAND technology and expect those products to be available within the next year.





