Chinese scientists report the transmission of entangled photons between the orbiting satellite Micius and ground stations on Earth. More satellites could follow in the near future, with plans for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
In a landmark study, Chinese scientists report the successful distribution of entangled photons from an orbiting satellite to ground stations on Earth. Whereas the previous record for entanglement distribution was around 100 km (62 miles), here entanglement was maintained between receiving stations more than 1,200 km (746 miles) apart.
The distribution of quantum entanglement, especially across vast distances, holds major implications for quantum teleportation and encryption networks. Yet, efforts to entangle quantum particles – essentially "linking" them together over long distances – have been limited to about 100 km or less, mainly because entanglement is lost as the photons are transmitted along optical fibres, or through open air at ground level.
One way to overcome this issue is to break the line of transmission into smaller segments and repeatedly swap, purify and store quantum information along the optical fibre. Another approach to achieving global quantum networks is to make use of lasers and satellite technologies. Using a Chinese satellite called Micius, launched last year and equipped with specialised quantum instruments, Juan Yin et al. demonstrated the latter approach. The Micius satellite communicated with three ground stations across China, separated from each other by up to 1,200 km.
The separation between the orbiting satellite and these ground stations varied from 500 to 2,000 km. On board the satellite, a laser pumped a crystal that emitted pairs of polarisation-entangled photons; the two photons of each pair were then directed along separate downlinks, one to each of two ground stations. In this way, entangled photons were received at the separate ground stations.
"It's a huge, major achievement," Thomas Jennewein, physicist at the University of Waterloo in Canada, told Science. "They started with this bold idea and managed to do it."
"The Chinese experiment is quite a remarkable technological achievement," said Artur Ekert, a professor of quantum physics at the University of Oxford, in an interview with Live Science. "When I proposed entanglement-based quantum key distribution back in 1991, as a student in Oxford, I did not expect it to be elevated to such heights."
One of the many challenges faced by the team was keeping the beams of photons focused precisely on the ground stations as the satellite hurtled through space at nearly 8 kilometres per second.
Quantum encryption, if successfully developed, could revolutionise communications. Information sent via this method would, in theory, be absolutely secure and practically impossible for hackers to intercept. If two people shared an encrypted quantum message, a third person would be unable to access it without changing the information in an unpredictable way. Further satellite tests are planned by China in the near future, with potential for a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
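The security guarantee works because measuring a quantum system disturbs it. A toy simulation illustrates the idea (an illustrative sketch in the spirit of entanglement-based key distribution, not the Micius team's actual protocol; all names and numbers below are invented):

```python
import random

def run_qkd(n_pairs, eavesdrop=False):
    """Toy entanglement-based key sifting.

    Each 'pair' yields perfectly correlated outcomes when Alice and Bob
    happen to measure in the same basis. A toy intercept-resend
    eavesdropper on Bob's channel randomises his outcome whenever she
    guesses the basis wrong, corrupting ~25% of the sifted key.
    """
    key_a, key_b = [], []
    for _ in range(n_pairs):
        basis_a = random.choice("XZ")
        basis_b = random.choice("XZ")
        bit = random.randint(0, 1)           # shared entangled outcome
        bit_a, bit_b = bit, bit
        if eavesdrop:
            eve_basis = random.choice("XZ")
            if eve_basis != basis_b:         # wrong basis disturbs the photon
                bit_b = random.randint(0, 1)
        if basis_a == basis_b:               # sift: keep matching bases only
            key_a.append(bit_a)
            key_b.append(bit_b)
    errors = sum(a != b for a, b in zip(key_a, key_b))
    return len(key_a), errors

random.seed(1)
kept, errors = run_qkd(10_000)
print(kept, errors)             # no eavesdropper: zero errors in the sifted key
kept, errors = run_qkd(10_000, eavesdrop=True)
print(kept, errors / kept)      # eavesdropper: error rate jumps to ~25%
```

Comparing a random sample of the sifted key is how the two parties detect the error rate an interceptor unavoidably introduces.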
Traditional silicon-based transistors revolutionised electronics with their ability to switch current on and off. By controlling the flow of current, they made possible the creation of smaller computers and other devices. Over the decades, rapid gains in miniaturisation saw computers shrink from room-sized monoliths, to wardrobe-sized machines, to desktops, laptops and eventually handheld smartphones – progress driven by Moore's Law, the observation that transistor counts double roughly every two years. In recent years, however, concerns have arisen that this rate of progress has slowed, or could even be approaching a fundamental limit.
A solution may be on the horizon. This month, researchers proposed a next-generation transistor based not on silicon but on a ribbon of graphene, a two-dimensional carbon material just one atom thick. Their findings – reported in Nature Communications – could have big implications for the future of electronics, computing speeds and big data. Graphene-based transistors may someday lead to computers that are 1,000 times faster and use a hundredth of the power of today's machines.
"If you want to continue to push technology forward, we need faster computers to be able to run bigger and better simulations for climate science, for space exploration, for Wall Street. To get there, we can't rely on silicon transistors anymore," said Ryan M. Gelfand, director of the NanoBioPhotonics Laboratory at UCF.
University of Central Florida Assistant Professor Ryan M. Gelfand
His team found that by applying a magnetic field to a graphene ribbon, they could change the resistance of current flowing through it. In this device, the magnetic field is controlled by increasing or decreasing the current through adjacent carbon nanotubes. The strength of that field in turn governs the flow of current through the new kind of transistor, much as a valve controls the flow of water through a pipe.
Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But clock speeds that rely on silicon transistors have been relatively stagnant for over a decade now, and are mostly still stuck in the 3 to 4 gigahertz range.
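The logic-gate role of transistors can be sketched in a few lines (a purely functional toy model treating each transistor as an ideal switch, not a circuit-level simulation):

```python
def nand(a: bool, b: bool) -> bool:
    """Two switches in series pulling the output low: the output is
    False only when both inputs are switched on."""
    return not (a and b)

# NAND is functionally complete: every other gate can be built from it,
# which is how arrangements of transistors become microprocessor logic.
def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

def or_gate(a, b):
    return nand(not_gate(a), not_gate(b))

def xor_gate(a, b):
    # One bit of a half adder - the basis of arithmetic circuits
    return and_gate(or_gate(a, b), nand(a, b))

print(xor_gate(True, False))   # True
print(xor_gate(True, True))    # False
```

The clock speed mentioned above is simply how many times per second such gates can settle into their next state.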
A cascading series of graphene transistor-based logic circuits could produce a massive jump, explains Gelfand, with clock speeds approaching the terahertz range – 1,000 times faster – because communication between the graphene nanoribbons would occur via electromagnetic waves, rather than the physical movement of electrons. The circuits would also be smaller and far more efficient, allowing device-makers to shrink technology and squeeze in more functionality.
"The concept brings together an assortment of existing nanoscale technologies and combines them in a new way," said Dr. Joseph Friedman, assistant professor of electrical and computer engineering at UT Dallas, who collaborated with Gelfand and his team. While the concept is still in the early stages, Friedman said work towards a prototype all-carbon, cascaded spintronic computing system will continue in the NanoSpinCompute research laboratory.
Varjo, a tech startup based in Helsinki, Finland, has today unveiled a new VR/AR technology it has been developing in secret. The display features nearly 70 times the pixel count of current-generation headsets – enough to match the resolution of the human eye.
Varjo ("Shadow" in Finnish) Technologies today announced it has emerged from stealth and is now demonstrating the world's first human eye-resolution head-mounted display for upcoming Virtual Reality, Augmented Reality and Mixed Reality (VR/AR/MR) products. Designed for professional users, and with graphics an order of magnitude beyond any currently shipping or announced head-mounted display, this major advancement will enable unprecedented levels of immersion and realism.
This breakthrough is accomplished by Varjo's patented technology, which replicates how the human eye naturally works by creating a super-high-resolution image wherever the user's gaze is directed. This is combined with video-see-through (VST) technology for unparalleled AR/MR capabilities.
Codenamed "20|20" after perfect vision, Varjo's prototype is based on unique technology created by a team of optical scientists, creatives and developers who formerly occupied top positions at Microsoft, Nokia, Intel, Nvidia and Rovio. It will be shipping in Varjo-branded products specifically for professional users and applications starting in late Q4, 2017.
"Varjo's patented display innovation pushes VR technology 10 years ahead of the current state-of-the-art, where people can experience unprecedented resolution of VR and AR content limited only by the perception of the human eye itself," said Urho Konttori, CEO and founder. "This technology – along with Varjo VST – jump-starts the immersive computing age overnight: VR is no longer a curiosity, but now can be a professional tool for all industries."
In a study involving simulated out-of-hospital cardiac arrests, drones carrying an automated external defibrillator arrived in less time than emergency medical services.
Drones carrying an automated external defibrillator could dramatically improve the response time for heart emergencies – potentially saving many thousands of lives each year – according to a study published in the Journal of the American Medical Association (JAMA).
Out-of-hospital cardiac arrest (OHCA) in the United States has a low survival rate (less than 10%), and reducing the time to defibrillation is the most important factor in increasing survival. A drone activated by a dispatcher and sent to an address provided by a 911 caller could carry an automated external defibrillator (AED) to the scene for a bystander to use. Whether drones could actually reduce response times in a real-life situation is unknown, however. Researchers from the Karolinska Institutet in Stockholm, Sweden, therefore compared the time to delivery of an AED using fully autonomous drones for simulated OHCAs vs emergency medical services (EMS).
A drone was developed and then certified by the Swedish Transportation Agency, equipped with an AED weighing 771 grams (1.7 lbs), and placed at a fire station in a municipality north of Stockholm. The drone carried a global positioning system (GPS) and high-definition camera and was integrated with an autopilot software system. It was dispatched to locations within a 10 km (6.2 mile) radius of the fire station where cardiac arrests had previously occurred.
A total of 18 remotely operated flights were performed with a median flight distance of about two miles. The median time from call to dispatch of the EMS was 3:00 minutes, while the median time from dispatch to drone launch was three seconds. The median time from dispatch to arrival of the drone was 5:21 minutes vs 22:00 minutes for EMS. The drone arrived more quickly than EMS in all cases with a median reduction in response time of 16:39 minutes.
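The headline figure is consistent with the two medians reported above (a quick arithmetic check; the study itself reports the median of per-case differences):

```python
def to_seconds(mm_ss: str) -> int:
    """Convert an 'MM:SS' time string to whole seconds."""
    minutes, seconds = map(int, mm_ss.split(":"))
    return minutes * 60 + seconds

drone_arrival = to_seconds("5:21")   # median dispatch-to-arrival, drone
ems_arrival = to_seconds("22:00")    # median dispatch-to-arrival, EMS

saved = ems_arrival - drone_arrival
print(f"{saved // 60}:{saved % 60:02d}")   # → 16:39, matching the reported figure
```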
"Saving 16 minutes is likely to be clinically important," the authors write. "Nonetheless, further test flights, technological development, and evaluation of integration with dispatch centers and aviation administrators are needed. The outcomes of OHCA using the drone-delivered AED by bystanders vs resuscitation by EMS should be studied."
Two new moons – S/2016 J1 and S/2017 J1 – are reported to be orbiting Jupiter, bringing the gas giant's total number of known natural satellites to 69.
They were found by astronomers including Scott Sheppard from the Carnegie Institution for Science in Washington, DC – who is credited with dozens of previous moon discoveries in the outer Solar System – along with minor planet specialists David Tholen and Chadwick Trujillo. The team had been conducting a survey of much more distant objects in the Kuiper Belt, when they happened to spot these points of light moving near Jupiter, which was conveniently close in the sky at the time.
S/2016 J1 (Jupiter LIV)
S/2017 J1 (Jupiter LIX)
The first of these (since designated Jupiter LIV) is only about 1 km (0.6 miles) in diameter – tiny indeed compared to the likes of Ganymede, Callisto, Io and Europa, which are thousands of times bigger. The second moon (designated Jupiter LIX) is larger, at 2 km (1.2 miles). Both are members of the widely dispersed Pasiphae group – a family of retrograde satellites with similar orbits and a common origin: they are believed to have formed when Jupiter captured a 60 km asteroid, which subsequently broke up in a collision.
Jupiter LIV and Jupiter LIX are located about 20.5 million km and 23.5 million km from Jupiter, respectively.
In addition to these new discoveries, the team also rediscovered a number of "lost" moons from earlier years: "There are several lost moons of Jupiter that were discovered in 2003," they write on their website. "They are known moons, but their orbits are not well enough known to accurately predict where they are now, so they are considered lost. There were 14 of these lost moons at the beginning of 2016. We have for sure recovered five."
More discoveries are likely to be on the way, as Sheppard writes: "There are likely a few more new moons as well in our 2017 observations, but we need to reobserve them in 2018 to determine which of the discoveries are new and which are lost 2003 moons. Stay tuned."
Researchers from the Netherlands and Germany have identified seven risk genes for insomnia.
An international team of researchers has found, for the first time, seven risk genes for insomnia. This discovery is an important step forward in understanding the biological mechanisms of sleep. In addition, it is evidence that insomnia is not, as is often claimed, a purely psychological condition.
Insomnia is among the most common health complaints – affecting between 10% and 30% of adults worldwide at any given point in time, and up to half of adults in a given year. Even after treatment, poor sleep can remain a persistent vulnerability for many people. Professor Van Someren, a sleep specialist from the Vrije Universiteit Amsterdam (VU), believes his team's findings could lead to an understanding of insomnia at the level of communication within and between neurons, providing new ways of treating the condition. He also hopes this breakthrough will improve the recognition of insomnia.
"Compared to the severity, prevalence and risks of insomnia, only a few studies have targeted its causes," he says. "Insomnia is all too often dismissed as being 'all in your head'. Our research brings a new perspective: insomnia is also in the genes."
From a sample of 113,000 individuals, the researchers identified seven risk genes for insomnia. These play a role in the regulation of transcription, the process by which DNA is read in order to make an RNA copy of it, and in exocytosis, the release of molecules by cells in order to communicate with their environment. One of the identified genes, MEIS1, has previously been related to two other sleep disorders: Periodic Limb Movements of Sleep (PLMS) and Restless Legs Syndrome (RLS). By collaborating with Konrad Oexle and colleagues from the Institute of Neurogenomics in Munich, Germany, the team concluded that variants in this gene seem to contribute to all three disorders. Strikingly, PLMS and RLS are characterised by restless movement and sensation, respectively, whereas insomnia is characterised mainly by a restless stream of consciousness.
The researchers also found a strong genetic overlap with other traits – such as anxiety disorders, depression, neuroticism, and low subjective wellbeing: "This is an interesting finding, because these characteristics tend to go hand in hand with insomnia. We now know that this is partly due to the shared genetic basis," says neuroscientist Anke Hammerschlag (VU), PhD student and first author of the study.
The team also studied whether the same genetic variants were important for men and women. "Some of the genetic variants turned out to be different," says Professor Danielle Posthuma, a statistical geneticist at VU Amsterdam. "This suggests that, in part, different biological mechanisms may lead to insomnia in men and women. We also found a difference between men and women in terms of prevalence: in the sample we studied, of mainly people older than 50, 33% of the women reported suffering from insomnia. For men, this was 24%."
A large survey of experts in artificial intelligence suggests there is a 50% chance of AI outperforming humans in all tasks within 45 years and of automating all human jobs in 120 years.
A team from the University of Oxford and Yale University has published a new survey that reveals experts' opinions on the likely timing of future milestones in artificial intelligence (AI). For this paper, they wrote to all researchers who had published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning). Of the 1,634 authors contacted, a total of 352 (21%) responded.
Questions given to participants concerned the timing of specific AI capabilities (e.g. folding laundry, language translation), superiority at specific occupations (e.g. truck driver, surgeon), superiority over humans at all tasks, and the social impacts of advanced AI.
According to this survey, experts believe that AI will outperform humans in many activities in the near future, such as translating languages (by 2024), writing a high-school essay (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks within 45 years (shown in the graph below), and of automating all human jobs within 120 years.
Aggregate subjective probability of 'High-Level Machine Intelligence' (HLMI) arrival by future years. Each respondent provided three data points for their forecast.
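A simple version of such aggregation is to average per-respondent probability curves, each linearly interpolated between that respondent's three elicited points (the respondents and numbers below are invented for illustration, and the study's actual aggregation method may differ):

```python
import numpy as np

# Hypothetical respondents: each gives the years by which they assign
# 10%, 50% and 90% probability to HLMI arrival.
respondents = [
    (2030, 2055, 2100),
    (2040, 2070, 2150),
    (2025, 2045, 2090),
]
probs = np.array([0.10, 0.50, 0.90])
years = np.arange(2020, 2201)

# Build each respondent's CDF by linear interpolation between their three
# points (clamped to 0 and 1 outside that range), then average the CDFs.
cdfs = [np.interp(years, r, probs, left=0.0, right=1.0) for r in respondents]
aggregate = np.mean(cdfs, axis=0)

# Year at which the aggregate forecast first reaches 50% probability
median_year = years[np.searchsorted(aggregate, 0.5)]
print(median_year)
```

Averaging probabilities rather than dates matters: the aggregate curve can cross 50% at a different year than the average of the individual medians.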
Interestingly, Asian respondents predict these dates much sooner than North Americans. There was no correlation between the seniority of a researcher and the predictions they made. A majority of researchers believe the field of machine learning has accelerated in recent years, with 67% saying progress was faster in the second half of their career.
While there is much hype and fear about the danger of robots, as depicted in movies such as The Terminator, the researchers take a more optimistic view of the longer-term future of AI. The majority believe it will benefit humanity, assigning only a 5% chance to an "extremely bad" outcome (i.e. human extinction).
"These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI," the survey team concludes.
Most of the questions focus on the cognitive aspects of intelligence that fit well-defined tasks. "But parts of intelligence – such as emotional intelligence – go beyond cognition," says Georgios Yannakakis, an Associate Professor at the Institute of Digital Games, University of Malta. "It would be interesting to ask when AI will surpass humans at being art or movie critics."
The full survey results can be downloaded as a PDF at arXiv.org.
As the price of installation continues to fall, renewable power has set another new record, with 161 gigawatts (GW) being added in 2016 – increasing total global capacity to more than 2 terawatts (TW).
Renewable Energy Policy Network for the 21st Century (REN21) this week published its Renewables 2017 Global Status Report (GSR), the most comprehensive annual overview of the state of renewable energy.
The report finds that additions of installed renewable power capacity set new records in 2016, with 161 gigawatts (GW) added, increasing total global capacity by almost 9% over 2015, to nearly 2,017 GW. Solar PV accounted for about 47% of capacity added, followed by wind power at 34% and hydropower at 16%.
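Those percentage shares translate into rough capacity additions per technology (back-of-envelope arithmetic from the report's figures):

```python
added_gw = 161   # total renewable power capacity added in 2016

shares = {"solar PV": 0.47, "wind power": 0.34, "hydropower": 0.16}

for tech, share in shares.items():
    print(f"{tech}: ~{added_gw * share:.0f} GW added")

# The remaining ~3% spans other technologies such as bio-power and geothermal
remainder = 1 - sum(shares.values())
print(f"other renewables: ~{added_gw * remainder:.0f} GW added")
```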
In a growing number of countries, renewables are becoming the least cost option. Recent deals in Denmark, Egypt, India, Mexico, Peru and the UAE saw renewable electricity being supplied at $0.05 per kilowatt-hour or less. This is well below equivalent costs for fossil fuel and nuclear generating capacity in each of these countries. Auctions are increasingly able to rely only on the wholesale price of power, without the need for government subsidies.
The inherent need for "baseload" is a myth, according to the report. Integrating large shares of variable renewable generation can be done without fossil fuel and nuclear baseload with sufficient flexibility in the power system – through grid interconnections, sector coupling and enabling technologies such as ICT, storage systems, electric vehicles and heat pumps. This sort of flexibility not only balances variable generation, it also optimises the system and reduces generation costs overall. It should come as no surprise, therefore, that the number of countries successfully managing peaks approaching or exceeding 100% electricity generation from renewable sources is on the rise. In 2016, Denmark and Germany, for example, successfully managed peaks of renewables-based electricity of 140% and 86.3%, respectively.
Global CO2 emissions from fossil fuels and industry remained stable for a third year in a row, despite 3% growth in the global economy and an increased demand for energy. This can be attributed mainly to the decline of coal, but also to the rapid growth in renewable energy capacity and improvements in energy efficiency.
Other positive trends include:
• Innovations and breakthroughs in storage technology, which increasingly provide additional flexibility to the power system. In 2016, around 0.8 GW of new advanced energy storage became operational, bringing the year-end total to 6.4 GW. As shown in the graph below, grid-connected battery storage grew by 50% to over 1.7 GW.
• Markets for mini-grids and stand-alone systems are evolving rapidly, and Pay-As-You-Go (PAYG) business models, supported by mobile technology, are now exploding. In 2012, investments in PAYG solar companies amounted to only $3 million; by 2016 that figure had risen to over $223 million (up from $158 million during 2015).
"The world is now adding more renewable power capacity each year than it adds in new capacity from all fossil fuels combined," says Arthouros Zervos, Chair of REN21. "One of the most important findings of this year's GSR is that holistic, systemic approaches are key and should become the rule rather than the exception. As the share of renewables grows, we will need investment in infrastructure as well as a comprehensive set of tools: integrated and interconnected transmission and distribution networks, measures to balance supply and demand, sector coupling (for example the integration of power and transport networks); and deployment of a wide range of enabling technologies."
Despite these encouraging trends, however, the energy transition is not happening fast enough. To achieve the goals of the Paris Agreement, an even greater acceleration of clean tech will be required. Investment continues to be heavily focused on wind and solar PV – however, all renewable energy technologies need to be deployed in order to keep global warming below 2°C.
Transport, heating and cooling sectors continue to lag behind the power sector. The deployment of renewable technologies in the heating and cooling sector remains a challenge in light of the unique and distributed nature of this market. Renewables-based decarbonisation of the transport sector is not yet being seriously considered, or seen as a priority. Despite a significant expansion in the sales of electric vehicles, primarily due to the declining cost of battery technology, much more needs to be done to ensure that sufficient infrastructure is in place and that they are powered by renewable electricity. While the shipping and aviation sectors present the greatest challenges, government policies or commercial disruption have not sufficiently stimulated the development of solutions.
Fossil fuel subsidies continue to impede progress. Globally, subsidies for fossil fuels and nuclear power continue to dramatically exceed those for renewable technologies. By the end of 2016, more than 50 countries had committed to phasing out fossil fuel subsidies, and some reforms have occurred – but not enough. The ratio of fossil fuel subsidies to renewable energy subsidies is 4:1. For every $1 spent on renewables, governments spend $4 perpetuating our dependence on fossil fuels.
Christine Lins, Executive Secretary of REN21, explains: "The world is in a race against time. The single most important thing we could do to reduce CO2 emissions quickly and cost-effectively, is phase-out coal and speed up investments in energy efficiency and renewables. When China announced in January that it was cancelling over 100 coal plants currently in development, they set an example for governments everywhere: change happens quickly when governments act by establishing clear, long-term policy and financial signals and incentives."
Chipmaker Intel has announced a new generation of processors, including the Core i9 series, its first teraflop desktop CPUs.
Intel has this week introduced a new family of microprocessors – the Core X-series – which the company describes as the most scalable, accessible and powerful desktop platform ever developed. This includes a new Core i9 brand and Core i9 Extreme Edition, the first consumer desktop CPU with 18 cores and 36 threads. The company is also launching the Intel X299 chipset, which adds even more I/O and overclocking capabilities.
Given their extreme power and speed, this family of processors is being pitched at gamers, content creators, and overclocking enthusiasts. Intel expects to increase its presence in high-end desktop markets and believes that customers will pay premiums in exchange for higher performance. Prices for the i9 line-up will range from $999 to $1999.
Prior to this announcement, Intel's high-end desktop processors (known as Broadwell-E) came with six, eight or 10-core options. The Core X-series will include five Core i9 chips, with a minimum of 10 cores and the top-end i9-7980XE featuring a massive 18 cores. A major update has also been announced for Intel's Turbo Boost Max Technology 3.0, which will identify the two best-performing cores and direct critical workloads to them, for a big jump in single- and multithreaded performance.
The Core i9-7980XE will be the first Intel consumer processor to exceed a teraflop of computing power, meaning it can perform a trillion floating-point operations every second. To put this in perspective, that is comparable to ASCI Red, which reigned as the world's most powerful supercomputer from 1997 until 2000. Core i9 chips will have 3.3 GHz base clock speeds, with up to 4.5 GHz using Turbo Boost 3.0, and up to 44 PCIe lanes.
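The teraflop claim is consistent with a back-of-envelope peak-throughput estimate, assuming the chip can issue 32 double-precision floating-point operations per core per cycle (two 8-wide fused multiply-add units via AVX-512 – an assumption about the microarchitecture, not a figure from Intel's announcement):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak throughput = cores x clock x FLOPs issued per cycle."""
    return cores * clock_ghz * flops_per_cycle

# 18 cores at the 3.3 GHz base clock, 32 DP FLOPs per core per cycle
print(peak_gflops(18, 3.3, 32) / 1000)   # ~1.9 TFLOPS, comfortably past one teraflop
```

Real workloads reach only a fraction of this theoretical peak, but the estimate shows why the teraflop threshold is within reach.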
"The possibilities with this type of performance are endless," says Gregory Bryant, a senior vice president, in a blog post. "Content creators can have fast image rendering, video encoding, audio production and real-time preview – all running in parallel seamlessly so they spend less time waiting and more time creating. Gamers can play their favourite game, while they also stream, record and encode their gameplay, and share on social media – all while surrounded by multiple screens for a 12K experience with up to four discrete graphics cards."
In addition to Core i9, there are also three new i7 chips and an i5, including the quad-core i5-7640X and i7 models in 4, 6 and 8-core variants. Prices will range from $242 for the i5, to $599 for the i7-7820X.
The largest optical and infrared telescope ever to be built is on track for operation in 2024. It will feature 256 times the light gathering area of the Hubble Space Telescope and provide images 16 times sharper.
Credit: ESO/L. Calçada
A ceremony marking the first stone of the Extremely Large Telescope (ELT) has been attended by President of Chile, Michelle Bachelet Jeria. The event was held at the European Southern Observatory's (ESO) Paranal Observatory in northern Chile, close to the site of the future telescope. This milestone marked the beginning of construction of the dome and main structure of the world's biggest optical telescope, ushering in a new era in astronomy.
In her speech, the President emphasised: "With the symbolic start of this construction work, we are building more than a telescope here: it is one of the greatest expressions of scientific and technological capabilities and of the extraordinary potential of international cooperation."
Tim de Zeeuw, Director General of ESO, thanked the President and her Government for their continuing support of ESO in Chile and their protection of the country's unequalled skies: "The ELT will produce discoveries that we simply cannot imagine today, and it will surely inspire numerous people around the world to think about science, technology and our place in the Universe," he said. "This will bring great benefit to ESO Member States, to Chile, and the rest of the world."
Dome of the ELT compared with existing major ground-based telescopes. Credit: ESO
With a main mirror 39 m (128 ft) in diameter, the Extremely Large Telescope (ELT) will be the largest optical/infrared telescope in the world and will take telescope engineering into new territory. It will be housed in a gigantic rotating dome 85 m (279 ft) in diameter – comparable in area to a football pitch. This will provide 256 times the light gathering area of the Hubble Space Telescope and generate images 16 times sharper.
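The quoted comparisons can be sanity-checked from the two primary-mirror diameters (treating both as filled circular apertures, which ignores central obstructions and segment gaps, so the area figure is a slight overestimate):

```python
import math

def collecting_area(diameter_m: float) -> float:
    """Light-gathering area of a filled circular aperture."""
    return math.pi * (diameter_m / 2) ** 2

elt, hubble = 39.0, 2.4   # primary mirror diameters in metres

area_ratio = collecting_area(elt) / collecting_area(hubble)
print(round(area_ratio))     # ~264 for filled apertures, in line with the quoted 256x
print(round(elt / hubble))   # resolution scales with diameter: ~16x sharper
```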
The ELT site was donated by the Government of Chile, and is surrounded by a further large concession of land, protecting the future operations of the telescope from interference of all kinds – and helping to retain Chile's status as the astronomy capital of the world.
The ELT will be the biggest "eye" ever pointed towards the sky and may revolutionise our perception of the Universe. It will study the atmospheres of extrasolar planets and look for signs of alien life, study the nature of dark energy and dark matter, and observe the Universe's early stages to explore our origins. Its suite of instruments will allow astronomers to probe the earliest stages of the formation of planetary systems and to detect water and organic molecules within protoplanetary discs around stars in the making. Thus, the ELT will answer fundamental questions regarding the formation and evolution of planets. By probing the most distant bodies, the telescope will provide clues to understanding the formation and relationship of the first objects that appeared in the universe: primordial stars, primordial galaxies and black holes.
One of the more ambitious goals of the ELT is the possibility of making a direct measurement of the acceleration of the Universe's expansion. This could have a major impact on our understanding of the Universe. It will also look for variations in fundamental physical constants. An unambiguous detection of such variations would have far-reaching consequences for our comprehension of the general laws of physics.
The ELT could raise entirely new questions that we cannot conceive of today – as well as improving life here on Earth through new technology and engineering breakthroughs. First light is planned for 2024.