Computers & the Internet

 
     
 

5th December 2016

Construction of practical quantum computers radically simplified

Scientists at the University of Sussex have invented a ground-breaking new method that puts the construction of large-scale quantum computers within reach of current technology.

Quantum computers could solve certain problems in just a few milliseconds that would take the fastest supercomputer millions of years to calculate. They have the potential to create new materials and medicines, as well as solve long-standing scientific and financial problems.

Universal quantum computers can be built in principle, but the technological challenges are tremendous. The engineering required to build one is considered more difficult than manned space travel to Mars – until now.

Quantum computing experiments on a small scale using trapped ions (charged atoms) are carried out by aligning individual laser beams onto individual ions, with each ion forming a quantum bit. However, a large-scale quantum computer would need billions of quantum bits – and therefore billions of precisely aligned lasers, one for each ion.

Instead, scientists at the University of Sussex have invented a simple method where voltages are applied to a quantum computer microchip (without having to align laser beams) – to the same effect. The team also succeeded in demonstrating the core building block of this new method with an impressively low error rate.
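
For readers who want a feel for what such control pulses do, here is a minimal, illustrative Python sketch of a single trapped-ion qubit being rotated from its ground state into an equal superposition – the basic operation that either a focused laser or, in the Sussex scheme, a global field steered by applied voltages must perform. This is a generic textbook illustration, not the team's code:

    import numpy as np

    # Qubit state |0> as a 2-component complex vector
    ket0 = np.array([1.0, 0.0], dtype=complex)

    def rx(theta):
        """Rotation about the X axis by angle theta - what a resonant
        control pulse of a given duration implements on a trapped ion."""
        return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                         [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

    # A "pi/2 pulse" puts the ion into an equal superposition of 0 and 1
    state = rx(np.pi / 2) @ ket0
    print(np.abs(state) ** 2)   # -> [0.5, 0.5]: equal chance of measuring 0 or 1

Scaling this up is the hard part: a register of n such qubits spans 2^n amplitudes, which is why control hardware that avoids one precisely aligned laser per ion matters for large machines.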

 

Credit: University of Sussex

 

"This development is a game changer for quantum computing making it accessible for industrial and government use," said Professor Winfried Hensinger, who heads the Ion Quantum Technology Group at the university and is director of the Sussex Centre for Quantum Technologies. "We will construct a large-scale quantum computer at Sussex making full use of this exciting new technology."

Quantum computers may revolutionise society in much the same way as the emergence of classical computers did. "Developing this step-changing new technology has been a great adventure and it is absolutely amazing observing it actually work in the laboratory," said Hensinger's colleague, Dr Seb Weidt.

The Ion Quantum Technology Group forms part of the UK's National Quantum Technology Programme, a £270 million investment by the government to accelerate the introduction of quantum technologies into the marketplace.

A paper on this latest research, 'Trapped-ion quantum logic with global radiation fields', is published in the journal Physical Review Letters.

 

Professor Winfried Hensinger (left) and Dr Seb Weidt (right).

 

---

 

 

 

1st December 2016

Almost half of tech professionals expect their job to be automated within ten years

45% of technology professionals believe a significant part of their job will be automated by 2027 – rendering their current skills redundant. Changes in technology are so rapid that 94% say their career would be severely limited if they didn't teach themselves new technical skills.

That's according to the Harvey Nash Technology Survey 2017, representing the views of more than 3,200 technology professionals from 84 countries.

The chance of automation varies greatly with job role. Testers and IT Operations professionals are most likely to expect their job role to be significantly affected in the next decade (67% and 63% respectively). Chief Information Officers (CIOs) and Vice Presidents of Information Technology (VP IT) expect to be least affected (31%), along with Programme Managers (30%).

David Savage, associate director, Harvey Nash UK, commented: "Through automation, it is possible that ten years from now the Technology team will be unrecognisable in today's terms. Even for those roles relatively unaffected directly by automation, there is a major indirect effect – anything up to half of their work colleagues may be machines by 2027."

 


 

In response to automation technology, professionals are prioritising learning over all other career development tactics. Self-learning is significantly more important to them than formal training or qualifications; only 12% indicate "more training" as a key thing they want in their job, and only 27% see gaining qualifications as a top priority for their career.

Despite the increase in automation, the survey reveals that technology professionals remain in high demand, with participants receiving at least seven headhunt calls in the last year. Software Engineers and Developers are most in demand, followed by Analytics / Big Data roles. Respondents expect the most important technologies in the next five years to be Artificial Intelligence, Augmented / Virtual Reality and Robotics, as well as Big Data, Cloud and the Internet of Things. Unsurprisingly, these are also the key areas cited in what are the "hot skills to learn".

"Technology careers are in a state of flux," says Simon Hindle, a director at Harvey Nash Switzerland. "On one side, technology is 'eating itself', with job roles increasingly being commoditised and automated. On the other side, new opportunities are being created, especially around Artificial Intelligence, Big Data and Automation. In this rapidly changing world, the winners will be the technology professionals who take responsibility for their own skills development, and continually ask: 'where am I adding value that no other person – or machine – can add?'"

 


 

Key highlights from the Harvey Nash Technology Survey 2017:

AI growth: The biggest technology growth area is expected to be Artificial Intelligence (AI). 89% of respondents expect it to be important to their company in five years' time, almost four times the current figure of 24%.

Big Data is big, but still unproven. 57% of organisations are implementing Big Data at least to some extent. For many, it is moving away from being an 'experiment' into something more core to their business; 21% say they are using it in a 'strategic way'. However, only three in ten organisations with a Big Data strategy are reporting success to date.

Immigration is key to the tech industry, and Brexit is a concern. The technology sector is overwhelmingly in favour of immigration; 73% believe it is critical to their country's competitiveness. 33% of respondents to the survey were born outside the country they are currently working in. Almost four in ten tech immigrants in the UK are from Europe, equating to one in ten of the entire tech working population in the UK. Moreover, UK workers make up over a fifth of the tech immigrant workforce of Ireland and Germany.

Where are all the women? This year's report reveals that 16% of respondents are women; not very different from the 13% who responded in 2013. The pace of change is glacial and – at this rate – it will take decades before parity is reached.

Tech people don't trust the cloud. Four in ten have little or no trust in how cloud companies are using their personal data, while five in ten at least worry about it. Trust in the cloud is affected by age (the older you are, the less you trust).

The end of the CIO role? Just 3% of those under 30 aspire to be a CIO; instead they would prefer to be a CTO (14% chose this), entrepreneur (19%) or CEO (11%). This suggests that the traditional role of the CIO is relatively unattractive to Gen Y.

Headhunters' radar: Software Engineers and Developers get headhunted the most, followed closely by Analytics / Big Data roles. At the same time, 75% believe recruiters are too focused on assessing technical skills, and overlook good people as a result.

 


 

 

Supporting data from the survey (global averages):

 

Which technologies are important to your company now, and which do you expect to be important in five years' time?


 

Agree or disagree? Within ten years, a significant part of my job that I currently perform will be automated.


 

---

 

 

 

3rd November 2016

A virus-sized computing device

Researchers at the University of California, Santa Barbara, have designed a functional nanoscale computing element that could be packed into a space no bigger than 50 nanometres on any side.

 


 

In 1959, renowned physicist Richard Feynman, in his talk “There's Plenty of Room at the Bottom”, spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule- and atom-sized world remained for years in the realm of science fiction. And then, scientists and other creative thinkers began to realise Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometres on any side.

“Novel computing paradigms are needed to keep up with demand for faster, smaller and more energy-efficient devices,” said Gina Adam, a postdoctoral researcher at UCSB’s Department of Electrical and Computer Engineering and lead author of the paper “Optimised stateful material implication logic for three dimensional data manipulation” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

While efforts to shrink computing devices have been ongoing for decades – in fact, Feynman’s challenges as he presented them in 1959 have been met – scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. An 8-bit adder operating in 50 x 50 x 50 nanometre dimensions, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality.

“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” says Dmitri Strukov, UCSB professor of computer science.

 


 

Key to this development is a system called material implication logic, combined with memristors – circuit elements whose resistance depends on the amount and direction of charge that has most recently flowed through them. Unlike the conventional computing logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and locally. This greatly reduces the need for components and space typically used to perform logic operations and to move data back and forth between operation and memory storage. The result of the computation is immediately stored in a memory element, which prevents data loss in the event of power outages – a critical function in autonomous systems such as robotics.
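
As a rough guide to what material implication (IMPLY) logic computes – and why, together with a FALSE operation, it forms a complete logic set – here is a minimal Python sketch of the truth tables. It is purely illustrative and says nothing about how the UCSB memristor circuit realises these operations physically:

    # Material implication: p IMPLY q  is equivalent to  (not p) or q
    def imply(p, q):
        return (not p) or q

    def false():
        return False

    # NOT and NAND built from IMPLY + FALSE, showing the pair is universal
    def not_(p):
        return imply(p, false())

    def nand(p, q):
        return imply(q, not_(p))   # q IMPLY (p IMPLY 0)

    for p in (False, True):
        for q in (False, True):
            print(int(p), int(q), "->", int(imply(p, q)), int(nand(p, q)))

In a memristive implementation, the outcome of each IMPLY step is written directly into the resistance state of a device, which is what lets logic and storage happen in the same place.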

In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.

“Previous groups have shown that individual blocks can be scaled to very small dimensions,” said Strukov, who worked at technology company Hewlett-Packard’s labs when they ramped up development of memristors. By applying those results to his group’s developments, he said, the challenge could easily be met.

Memristors are being heavily researched in academia and in industry for their promising uses in future memory storage and neuromorphic computing. While implementations of material implication logic are rather exotic and not yet mainstream, uses for it could pop up any time, particularly in energy-scarce systems such as robotics and medical implants.

“Since this technology is still new, more research is needed to increase its reliability and lifetime and to demonstrate large-scale, 3-D circuits tightly packed in tens or hundreds of layers,” Adam said.

---

 

 

 

3rd November 2016

1,000-fold increase in 3-D scanning speed

Researchers at Penn State University report a 1,000-fold increase in the scanning speed for 3-D printing, using a space-charge-controlled KTN beam deflector with a large electro-optic effect.

 


 

A major technological advance in the field of high-speed beam-scanning devices has resulted in a speed boost of up to 1000 times, according to researchers in Penn State's College of Engineering. Using a space-charge-controlled KTN beam deflector – a kind of crystal made of potassium tantalate and potassium niobate – with a large electro-optic effect, researchers have found that scanning at a much higher speed is possible.

"When the crystal materials are applied to an electric field, they generate uniform reflecting distributions, that can deflect an incoming light beam," said Professor Shizhuo Yin, from the School of Electrical Engineering and Computer Science. "We conducted a systematic study on indications of speed and found out the phase transition of the electric field is one of the limiting factors."

To overcome this issue, Yin and his team of researchers eliminated the electric field-induced phase transition in a nanodisordered KTN crystal by making it work at a higher temperature. They not only went beyond the Curie temperature (at which certain materials lose their permanent magnetic properties, to be replaced by induced magnetism), they went beyond the critical end point (beyond which a liquid and its vapour become indistinguishable).

 


Credit: Penn State

 

This increased the scanning speed from the microsecond range to the nanosecond range, and led to improved high-speed imaging, broadband optical communications and ultrafast laser display and printing. The researchers believe this could lead to a new generation of 3-D printers, with objects that once took an hour to print now taking a matter of seconds.
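
As a rough back-of-envelope check on those claims (the dwell time and print time below are assumed round numbers for illustration, not measurements from the paper):

    # Back-of-envelope: what a 1,000x faster beam deflector implies,
    # assuming (hypothetically) that beam scanning is the bottleneck.
    speedup = 1_000

    old_dwell = 1e-6                 # microsecond-range deflection, in seconds
    new_dwell = old_dwell / speedup
    print(f"per-point deflection: {new_dwell * 1e9:.0f} ns")           # ~1 ns

    old_print_time_s = 3600          # an object that takes an hour today
    print(f"print time at 1000x: {old_print_time_s / speedup:.1f} s")  # 3.6 s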

Yin said technology like this would be especially useful in the medical industry, where high-speed imaging would become possible in real time. For example, optometrists who use a non-invasive test that employs light waves to take cross-section pictures of a person's retina would be able to view a 3-D image of the patient's retina during surgery, so they can see what needs to be corrected during the procedure.

The group's findings are published in the journal Scientific Reports.

---

 

 

 

21st October 2016

AI milestone: a new system can match humans in conversational speech recognition

A new automated system that can achieve parity and even beat humans in conversational speech recognition has been announced by researchers at Microsoft.

 


 

A team at Microsoft's Artificial Intelligence and Research group has published a study in which they demonstrate a technology that recognises spoken words in a conversation as well as a real person does.

Last month, the same team achieved a word error rate (WER) of 6.3%. In their new paper this week, they report a WER of just 5.9%, which is equal to that of professional transcriptionists and is the lowest ever recorded against the industry standard Switchboard speech recognition task.
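
Word error rate is the standard yardstick here: the minimum number of word substitutions, deletions and insertions needed to turn the system's transcript into the reference transcript, divided by the number of words in the reference. A minimal Python sketch of the calculation (illustrative only, not Microsoft's evaluation code):

    def word_error_rate(reference, hypothesis):
        """WER = (substitutions + deletions + insertions) / reference length,
        via the standard Levenshtein dynamic programme over words."""
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.167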

“We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.”

“Even five years ago, I wouldn’t have thought we could have achieved this,” said Harry Shum, the group's executive vice president. “I just wouldn’t have thought it would be possible.”

Microsoft has been involved in speech recognition and speech synthesis research for many years. The company developed Speech API in 1994 and later introduced speech recognition technology in Office XP and Office 2003, as well as Internet Explorer. However, the word error rates for these applications were much higher back then.

 


 

In their new paper, the researchers write: "the key to our system's performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training."

The team used Microsoft’s own Computational Network Toolkit – an open source, deep learning framework. This was able to run deep learning algorithms across multiple computers equipped with specialised GPUs, greatly improving speed and the quality of the research. The team believes their milestone will have broad implications for both consumer and business products, including entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription, and personal digital assistants such as Cortana.

“This will make Cortana more powerful, making a truly intelligent assistant possible,” Shum said.

“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

Future improvements may also include speech recognition that works well in more real-life settings – places with lots of background noise, for example, such as at a party or while driving on the highway. The technology will also become better at assigning names to individual speakers when multiple people are talking, as well as working with a wide variety of voices, regardless of age, accent or ability.

The full study – Achieving Human Parity in Conversational Speech Recognition – is available at: https://arxiv.org/abs/1610.05256

---

 

 

 

20th October 2016

Quantum computers: 10-fold boost in stability achieved

A team at Australia's University of New South Wales has created a new quantum bit that remains in a stable superposition for 10 times longer than previously achieved.

 

Credit: Arne Laucht/UNSW

 

Australian engineers have created a new quantum bit which remains in a stable superposition for 10 times longer than previously achieved, dramatically expanding the time during which calculations could be performed in a silicon quantum computer.

The new quantum bit, consisting of the spin of a single atom in silicon merged with an electromagnetic field – known as a 'dressed qubit' – retains quantum information for much longer than an 'undressed' atom, opening up new avenues for building and operating the super-powerful quantum computers of the future.

"We have created a new quantum bit where the spin of a single electron is merged together with a strong electromagnetic field," comments Arne Laucht from the School of Electrical Engineering & Telecommunications at University of New South Wales (UNSW), lead author of the paper. "This quantum bit is more versatile and more long-lived than the electron alone, and will allow us to build more reliable quantum computers."

Building a quantum computer is a difficult and ambitious challenge, but one with the potential to deliver revolutionary tools for otherwise impossible calculations – such as the design of complex drugs and advanced materials, or the rapid search of massive, unsorted databases. Its speed and power lie in the fact that quantum systems can host multiple 'superpositions' of different initial states, which in a computer are treated as inputs that all get processed at the same time.

"The greatest hurdle in using quantum objects for computing is to preserve their delicate superpositions long enough to allow us to perform useful calculations," said Andrea Morello, Program Manager in the Centre for Quantum Computation & Communication Technology at UNSW. "Our decade-long research program had already established the most long-lived quantum bit in the solid state, by encoding quantum information in the spin of a single phosphorus atom inside a silicon chip placed in a static magnetic field," he said.

What Laucht and colleagues did was push this further: "We have now implemented a new way to encode the information: we have subjected the atom to a very strong, continuously oscillating electromagnetic field at microwave frequencies, and thus we have 'redefined' the quantum bit as the orientation of the spin with respect to the microwave field."

 

Tuning gates (red), microwave antenna (blue), and single electron transistor used for spin readout (yellow).
Credit: Guilherme Tosi & Arne Laucht/UNSW

 

The results are striking: since the electromagnetic field steadily oscillates at a very high frequency, any noise or disturbance at a different frequency results in a zero net effect. The UNSW researchers achieved an improvement by a factor of 10 in the time span during which a quantum superposition can be preserved, with a dephasing time of T2*=2.4 milliseconds.
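
To see what a ten-fold longer dephasing time buys, here is a small illustrative Python sketch. It assumes a simple Gaussian dephasing envelope and an 'undressed' T2* of 0.24 ms purely for comparison – both are assumptions made for illustration; only the 2.4 ms figure comes from the paper:

    import numpy as np

    def coherence(t, t2_star):
        """Fraction of the superposition surviving after time t, assuming a
        simple Gaussian dephasing envelope exp(-(t/T2*)^2)."""
        return np.exp(-(t / t2_star) ** 2)

    t = 0.5e-3                   # half a millisecond of computation
    undressed_t2 = 0.24e-3       # illustrative 'undressed' value (assumed)
    dressed_t2 = 2.4e-3          # dephasing time reported for the dressed qubit

    print(f"undressed: {coherence(t, undressed_t2):.3f}")   # ~0.01, essentially lost
    print(f"dressed:   {coherence(t, dressed_t2):.3f}")     # ~0.96, still coherent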

"This new 'dressed qubit' can be controlled in a variety of ways that would be impractical with an 'undressed qubit'," adds Morello. "For example, it can be controlled by simply modulating the frequency of the microwave field, just like an FM radio. The 'undressed qubit' instead requires turning the amplitude of the control fields on and off, like an AM radio. In some sense, this is why the dressed qubit is more immune to noise: the quantum information is controlled by the frequency, which is rock-solid, whereas the amplitude can be more easily affected by external noise."

Since the device is built upon standard silicon technology, this result paves the way to the construction of powerful and reliable quantum processors based on the same fabrication process already used for today's computers. The UNSW team leads the world in developing silicon quantum computing, and Morello's team is part of a consortium that has struck an A$70 million deal between UNSW, researchers, business and the Australian government to develop a prototype silicon quantum integrated circuit – a major step towards building the world's first quantum computer in silicon.

A functional quantum computer would allow massive increases in speed and efficiency for certain computing tasks – even when compared with today's fastest silicon-based 'classical' computers. In a number of key areas – such as searching enormous databases, solving complicated sets of equations, and modelling atomic systems such as biological molecules or drugs – they would far surpass today's computers. They would also be extremely useful in the finance and healthcare industries, and for government, security and defence organisations.

Quantum computers could identify and develop new medicines by vastly accelerating the computer-aided design of pharmaceutical compounds (minimising lengthy trial and error testing), and develop new, lighter and stronger materials spanning consumer electronics to aircraft. They would also make possible new types of computing applications and solutions that are beyond our ability to foresee.

The UNSW study appears this week in the peer-reviewed journal, Nature Nanotechnology.

 

 

 

---

 

 

 

19th October 2016

Large-scale deployment of body-worn cameras for London police

The Metropolitan Police Service (MPS) is taking a global lead with what is believed to be the largest rollout of body-worn cameras by any police force in the world, in order to enhance the service it gives to London.

 

Credit: Met Police

 

This week sees the beginning of a large-scale deployment of Body Worn Video (BWV) which is being issued to more than 22,000 frontline police officers in the British capital. The Met Commissioner, Sir Bernard Hogan-Howe, was joined in Lewisham by the London Mayor, Sadiq Khan, to witness the rollout of the cameras, which follows a successful trial and wide-ranging public consultation and academic evaluation. Over the coming months, cameras will be issued to all 32 London boroughs and a number of frontline specialist roles, including overt firearms officers.

The devices have already shown they can bring speedier justice for victims. This has proved particularly successful in domestic abuse cases, which have seen an increase in earlier guilty pleas from offenders who know their actions have been recorded. The technology offers greater transparency for those in front of the camera as well as behind it. Londoners can feel reassured during their interactions with police, whilst allowing officers to demonstrate professionalism in many challenging and contentious interactions, such as the use of stop and search.

All footage recorded on BWV is subject to legal safeguards and guidance. Footage from the Axon Body Camera is automatically uploaded to secure servers once the device has been docked, and flagged for use as evidence at court or in other proceedings. Video not retained as evidence or for a policing purpose is automatically deleted within 31 days. If members of the public wish to view footage taken of them, they can request it in writing under freedom of information laws. The request must be made within 31 days, unless the footage has been marked as policing evidence and is therefore retained.
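
Expressed in code, the stated retention rule amounts to something like the following sketch – a simplified illustration of the policy as described above, not the Met's actual system:

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=31)

    def should_delete(recorded_at, flagged_as_evidence, now=None):
        """Illustrative rule: non-evidential footage is removed after 31 days;
        footage flagged as evidence or for a policing purpose is kept."""
        now = now or datetime.utcnow()
        return (not flagged_as_evidence) and (now - recorded_at > RETENTION)

    # Example: unflagged footage from 1st October, checked on 19th November
    print(should_delete(datetime(2016, 10, 1), False, now=datetime(2016, 11, 19)))  # True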

The cameras will be worn attached to the officer's uniform and will not be permanently recording. This ensures that officers' interactions with the public are not unnecessarily impeded. Members of the public will be informed as soon as practical that they are being recorded. Recording is highly visible in any case, with a flashing red circle in the centre of the camera and a frequent beeping noise while the device is activated.


 

 

Mayor of London, Sadiq Khan, said: "Body Worn Video is a huge step forward in bringing our capital's police force into the 21st century and encouraging trust and confidence in community policing. This technology is already helping drive down complaints against officers and making them more accountable, as well as helping to gather better evidence for swifter justice."

Metropolitan Police Commissioner Sir Bernard Hogan-Howe said: "Body Worn Video will support our officers in the many challenging situations they have to deal with, at the same time as building the public's confidence. Our experience of using cameras already shows that people are more likely to plead guilty when they know we have captured the incident on a camera. That then speeds up justice, puts offenders behind bars more quickly and most importantly protects potential victims. Video captures events in a way that can't be represented on paper in the same detail – a picture paints a thousand words, and it has been shown the mere presence of this type of video can often defuse potentially violent situations without the need for force to be used."

Last month, a study published by the University of Cambridge found that body-worn cameras led to a 93% drop in complaints made against police by the UK and US public, suggesting the cameras result in behavioural changes that ‘cool down’ potentially volatile encounters. A similar study in 2014 found that officers wearing cameras witnessed a 59% drop in their use-of-force, while complaints against them fell by 87% compared to the previous year.

The deployment of all 22,000 cameras in London will be managed in a phased approach and is expected to be complete by next summer.

---

 

 

 

12th October 2016

Scientists create the smallest ever transistor – just a single nanometre long

Researchers at the Department of Energy's Lawrence Berkeley National Laboratory have demonstrated a working 1 nanometre (nm) transistor.

 

Credit: Sujay Desai/UC Berkeley

 

For more than a decade, engineers have been eyeing the finish line in the race to shrink the size of components in integrated circuits. They knew that the laws of physics had set a 5-nanometre threshold on the size of transistor gates among conventional semiconductors, about one-third the size of high-end 14-nanometre-gate transistors currently on the market.

However, some laws are made to be broken, or at least challenged.

A research team led by faculty scientist Ali Javey at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has done just that by creating a transistor with a functioning 1-nanometre gate. For comparison, a strand of human hair is about 50,000 nanometres thick.

"We made the smallest transistor reported to date," said Javey, lead principal investigator of the Electronic Materials program in Berkeley Lab's Materials Science Division. "The gate length is considered a defining dimension of the transistor. We demonstrated a 1-nanometre-gate transistor, showing that with the choice of proper materials, there is a lot more room to shrink our electronics."

The key was to use carbon nanotubes and molybdenum disulfide (MoS2), an engine lubricant commonly sold in auto parts shops. MoS2 is part of a family of materials with immense potential for applications in LEDs, lasers, nanoscale transistors, solar cells, and more.

This breakthrough could help in keeping alive Intel co-founder Gordon Moore's prediction that the density of transistors on integrated circuits would double every two years, enabling the increased performance of our laptops, mobile phones, televisions, and other electronics.
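
Moore's prediction is simply a doubling rule, which a few lines of Python make concrete (the starting density below is a made-up round number, not a figure from the study):

    # Moore's-law doubling: density doubles roughly every two years
    # (a rule of thumb, not a law of physics).
    def density(start_density, years, doubling_period=2.0):
        return start_density * 2 ** (years / doubling_period)

    d0 = 100e6   # hypothetical 100 million transistors per mm^2 today
    for years in (2, 6, 10):
        print(f"+{years} yr: {density(d0, years) / 1e6:,.0f} M transistors/mm^2")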

 


 

"The semiconductor industry has long assumed that any gate below 5 nanometres wouldn't work – so anything below that was not even considered," said study lead author Sujay Desai, a graduate student in Javey's lab. "This research shows that sub-5-nanometre gates should not be discounted. Industry has been squeezing every last bit of capability out of silicon. By changing the material from silicon to MoS2, we can make a transistor with a gate that is just 1 nanometre in length, and operate it like a switch."

"This work demonstrated the shortest transistor ever," said Javey, who is also a UC Berkeley professor of electrical engineering and computer sciences. "However, it's a proof of concept. We have not yet packed these transistors onto a chip, and we haven't done this billions of times over. We also have not developed self-aligned fabrication schemes for reducing parasitic resistances in the device. But this work is important to show that we are no longer limited to a 5-nanometre gate for our transistors. Moore's Law can continue a while longer by proper engineering of the semiconductor material and device architecture."

His team's research is published this month in the peer-reviewed journal Science.

 

Credit: Qingxiao Wang/UT Dallas

 

---

 

 

 

21st September 2016

1 terabit per second achieved in optical fibre trial

Terabit-per-second data transmission using a novel modulation approach in optical fibre has been announced by researchers in Germany.

 


 

Nokia Bell Labs, Deutsche Telekom T-Labs and the Technical University of Munich have achieved unprecedented transmission capacity and spectral efficiency in an optical communications field trial with a new modulation technique. This breakthrough could extend the capability of optical networks to meet surging data traffic demands in the future.

Their research has shown that the flexibility and performance of optical networks can be maximised when adjustable transmission rates are dynamically adapted to channel conditions and traffic demands. As part of the Safe and Secure European Routing (SASER) project, the experiment over a deployed optical fibre network achieved a net transmission rate of 1 terabit per second (Tbit/s). This is close to the theoretical maximum information transfer rate of that channel and thus approaches the Shannon Limit, discovered in 1948 by Claude Shannon, the "father of information theory."

The trial of this novel modulation approach – known as Probabilistic Constellation Shaping (PCS) – uses quadrature amplitude modulation (QAM) formats to achieve higher transmission capacity over a given channel, significantly improving the spectral efficiency of optical communications. PCS modifies the probability with which constellation points (the alphabet of the transmission) are used. Traditionally, all constellation points are used with the same probability. PCS instead uses high-amplitude constellation points less frequently than low-amplitude ones, sending signals that are, overall, more resilient to noise and other potential disruption. This allows the data transmission rate to be tailored to ideally fit the transmission channel, delivering up to 30% greater reach.
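
A minimal sketch of the shaping idea, assuming a Maxwell-Boltzmann-style distribution over the amplitude levels of one 16-QAM axis – an illustration of the general technique, not the exact modulation used in the trial:

    import numpy as np

    # Amplitude levels along one axis of a 16-QAM constellation (illustrative)
    amplitudes = np.array([-3.0, -1.0, 1.0, 3.0])

    def shaped_probabilities(amps, lam):
        """Maxwell-Boltzmann-style shaping: p(a) ~ exp(-lam * a^2),
        so high-amplitude points are sent less often than low-amplitude ones."""
        w = np.exp(-lam * amps ** 2)
        return w / w.sum()

    uniform = np.full(len(amplitudes), 1 / len(amplitudes))
    shaped = shaped_probabilities(amplitudes, lam=0.15)

    def avg_power(p):
        return float(np.sum(p * amplitudes ** 2))

    print("uniform average power:", avg_power(uniform))   # 5.0
    print("shaped average power: ", avg_power(shaped))    # lower, for the same spacing

The lower average transmit power for the same point spacing is what can be traded for extra noise resilience, reach or capacity on a given channel.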

This research is a key milestone in proving that PCS could be used in the future to improve optical communications. With 5G wireless technology forecast to emerge by 2020, today's optical transport systems must evolve to meet the exponentially growing demand of network data traffic, increasing at a cumulative annual rate of 100%. PCS is now part of this evolution, allowing increases in optical fibre flexibility and performance that will move data traffic faster and over greater distances without increasing the network complexity.

Marcus Weldon, President of Nokia Bell Labs and Nokia's Chief Technology Officer, commented: "Future optical networks not only need to support orders of magnitude higher capacity, but also the ability to dynamically adapt to channel conditions and traffic demand. Probabilistic Constellation Shaping offers great benefits to service providers and enterprises, by enabling optical networks to operate closer to the Shannon Limit to support massive datacentre interconnectivity and provide the flexibility and performance required for modern networking in the digital era."

---

 

 

 

21st September 2016

World's first 1 terabyte SD card is announced

Hard drive manufacturer Western Digital has announced the first 1 terabyte capacity SD card at Photokina 2016.

 


 

Western Digital Corporation (WDC), which acquired SanDisk for US$19 billion in May, has unveiled a 1 terabyte (TB) SDXC card prototype at the world's leading trade fair for photo and video professionals. With ever-increasing demand for high resolution content, such as 4K and 8K, the company continues to push the boundaries of technology and to demonstrate the power of exponential growth.

"Showcasing the most advanced imaging technologies is truly exciting for us," said Dinesh Bahal, Vice President of Product Management. "16 years ago we introduced the first SanDisk 64MB SD card and today we are enabling capacities of 1TB. Over the years, our goal has remained the same: continue to innovate and set the pace for the imaging industry. The SanDisk 1TB SD card prototype represents another significant achievement as growth of high-resolution content and capacity-intensive applications such as virtual reality, video surveillance and 360 video, are progressing at astounding rates."

Since the introduction of a record-breaking 512GB memory card at Photokina 2014, Western Digital has proven it can nearly double the capacity in the same SD card form factor using proprietary technology. Higher capacity cards expand the possibilities for professional videographers and photographers, giving them even greater ability to create more of the highest quality content, without the interruption of changing cards.

"Just a few short years ago, the idea of a 1TB capacity point in an SD card seemed so futuristic," said Sam Nicholson, CEO of Stargate Studios and a member of the American Society of Cinematographers. "It's amazing that we're now at the point where it's becoming a reality. With growing demand for applications like VR, we can certainly use 1TB when we're out shooting continuous high-quality video. High-capacity cards allow us to capture more without interruption – streamlining our workflow, and eliminating the worry that we may miss a moment because we have to stop to swap out cards."

Western Digital will be demonstrating the SanDisk 1TB card prototype and showcasing its newest offerings at Photokina, Hall 02.1 Stand A014.
