Online retailer Amazon has revealed a new rapid delivery method that will use unmanned aerial vehicles to send packages to customers within 30 minutes. Assuming the Federal Aviation Administration (FAA) approves it, this futuristic service – "Amazon Prime Air" – could be introduced by 2015.
Tianhe-2, a Chinese supercomputer, has retained its position as the world's no. 1 system with 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark – according to the latest TOP500 list of the world's most powerful supercomputers.
The 42nd edition of the twice-yearly TOP500 list was announced yesterday at the SC13 conference in Denver, Colorado. While a typical desktop PC has four cores, Tianhe-2 (which means “Milky Way 2”) features 3,120,000 – each using Intel's "Ivy Bridge" 22 nanometre processors. It has 1,024,000 gigabytes of random-access memory (RAM), 12.4 petabytes of storage space and draws 17,800 kilowatts (kW) of power. Including external cooling, it requires 24,000 kW. The entire complex occupies 720 square metres of floor space and cost 2.4 billion yuan (US$390 million).
China’s National University of Defence Technology (NUDT) – which developed Tianhe-2 – says it will be offered as a "research and education" tool once tests are completed. Local reports suggest that the car industry is a "priority" client, so it may be useful in complex engine simulations, for example, or devising new materials and more efficient components.
Titan – installed at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory – remains the no. 2 system, achieving 17.59 petaflop/s on the Linpack benchmark. Titan is among the most energy efficient systems on the list, consuming a total of 8.21 MW of electrical power and delivering 2.14 gigaflops per watt, compared to 1.9 for Tianhe-2.
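Those efficiency figures follow directly from dividing each system's Linpack score by its power draw; a quick sanity check in Python, a simple illustration using the numbers quoted above:

```python
# Energy efficiency = Linpack performance / power draw,
# using the figures quoted in the TOP500 announcement.
systems = {
    "Tianhe-2": (33.86e15, 17.8e6),   # flop/s, watts (compute power only)
    "Titan":    (17.59e15, 8.21e6),
}

for name, (flops, watts) in systems.items():
    gflops_per_watt = flops / watts / 1e9
    print(f"{name}: {gflops_per_watt:.2f} gigaflops per watt")
```

Titan works out to 2.14 gigaflops per watt against Tianhe-2's 1.90, matching the figures above.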
Sequoia, an IBM BlueGene/Q system installed at the DOE’s Lawrence Livermore National Laboratory, is the no. 3 system. First delivered in 2011, Sequoia reached 17.17 petaflop/s on the Linpack benchmark.
In all, there are 31 systems with performance greater than a petaflop/s on the list, an increase of five compared to the June 2013 list. Intel continues to provide the processors for the largest share (82.4 percent) of TOP500 systems.
Although China holds the no. 1 spot, the U.S. is clearly the leading consumer of supercomputers, with 265 of the top 500 systems (253 last time). The European share (102 systems compared to 112 last time) is still lower than the Asian share (115 systems, down from 118 last time).
Like many forms of information technology, the growth of supercomputing power has followed a remarkably smooth and consistent trend. If this trend continues, we can expect to see the first exaflop machine by 2019. An exaflop is 1,000,000,000,000,000,000 (a million trillion, or a quintillion) calculations per second. Such computing power will be invaluable to researchers – providing faster and more accurate simulations of climate, weather, astrophysics, genetics, molecular dynamics and many other processes. Zettaflop machines could emerge by 2030.
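As a rough sketch of where that prediction comes from, the projection below assumes the no. 1 system's performance keeps doubling roughly every 1.2 years, an assumed rate broadly consistent with the historical TOP500 trend:

```python
import math

current_pf = 33.86      # Tianhe-2, 2013 (petaflop/s)
target_pf = 1000.0      # one exaflop/s
doubling_years = 1.2    # assumed doubling time for the no. 1 system

# Number of doublings needed, then convert to calendar years.
doublings = math.log2(target_pf / current_pf)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings -> roughly {2013 + years:.0f}")
```

About 4.9 doublings are needed, landing close to 2019 under this assumption.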
As scientists develop the next wave of smartwatches and other wearable computing, they might want to continue focusing their attention on the arms and the wrists. According to a recent study, portable electronic devices placed on the collar, torso, waist or legs may cause awkwardness, embarrassment or strange looks.
In a paper titled “Don’t Mind Me Touching My Wrist,” Georgia Tech researchers reported a case study of interaction with on-body technology in public. Specifically, they surveyed people in both the United States and South Korea to gain cultural insights into perceptions of e-textiles – electronic devices stitched into everyday clothing.
For the study, researchers directed participants to watch videos of people receiving incoming alerts from e-textile interfaces on various parts of their body, including wrists, forearms, collarbones, torsos, waists and front pant pockets. They were asked to describe their thoughts about the interaction (such as whether it appeared normal, silly or awkward) and its placement on the body.
In general, the study found that in both countries, the wrist and forearm were the most preferred locations for e-textiles, as well as the most normal placement when watching someone use the devices.
“This may be due to the fact that these locations are already being used for wearable technology,” said Halley Profita, former Georgia Tech industrial design graduate student, who led the study. “People strap smartphones or MP3 players to their arms while exercising. Runners wear GPS watches.”
According to the study:
Gender of the technology user affected opinions about the interaction. For example, Americans were uncomfortable when men used a device located at the front pant pocket region or when women reached for their torsos or collarbones.
South Koreans reported exceptionally low acceptance of women using the devices anywhere except for their arms.
Respondents expressed differing views on the most important factors in deciding how to use e-textiles. Americans focused on ease of operation and accessibility; South Koreans raised personal perception issues.
“The South Koreans also said they wanted an easy-to-use system – but the technology should not make them look awkward or weird,” Profita said. “This isn’t surprising, because their culture emphasises modesty, politeness and avoidance of embarrassing situations.”
The findings were presented at the International Symposium on Wearable Computing, held in Switzerland.
The ability to shrink laboratory-scale processes to automated chip-sized systems would revolutionise biotechnology and medicine. For example, inexpensive and highly portable devices that process blood samples to detect biological agents, such as anthrax, are needed by the U.S. military and for homeland security efforts.
A microfluidic bioreactor. Credit: Adam Fenster/University of Rochester.
One of the challenges of "lab-on-a-chip" technology is the need for miniaturised pumps to move solutions through micro-channels. Electroosmotic pumps (EOPs) — devices in which fluids appear to magically move through porous media in the presence of an electric field — are ideal, because they can be readily miniaturised. EOPs, however, require bulky, external power sources, which defeats the goal of portability. But a super-thin silicon membrane developed at the University of Rochester could now make it possible to drastically shrink the power source, paving the way for new diagnostic devices the size of a credit card.
"Up until now, electroosmotic pumps have had to operate at a very high voltage, about 10 kilovolts," said James McGrath, associate professor of biomedical engineering. "Our device works in the range of one-quarter of a volt, which means it can be integrated into devices and powered with small batteries."
McGrath's research paper is published this week by the journal Proceedings of the National Academy of Sciences.
McGrath and his colleagues use porous nanocrystalline silicon (pnc-Si) membranes that are microscopically thin – it takes more than one thousand stacked on top of each other to equal the width of a human hair. And that's what allows for a low-voltage system.
A porous membrane needs to be placed between two electrodes in order to create what's known as electroosmotic flow, which occurs when an electric field interacts with ions on a charged surface, causing fluids to move through channels. Membranes previously used in EOPs have resulted in a significant voltage drop between the electrodes, forcing engineers to begin with bulky, high-voltage power sources. The thin pnc-Si membranes allow the electrodes to be placed much closer to each other, creating a much stronger electric field with a much smaller drop in voltage. As a result, a smaller power source is needed.
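The underlying relationship is simply that field strength equals voltage divided by electrode separation (E = V/d). The figures below (a ~30 nm membrane versus a ~1 cm electrode gap) are illustrative assumptions chosen to show the scale of the effect, not measurements from the paper:

```python
# E = V / d: a thinner gap yields a stronger field from a far
# smaller voltage. Both gap sizes are illustrative assumptions.
conventional = 10_000 / 0.01   # 10 kV across an assumed ~1 cm gap
pnc_si       = 0.25 / 30e-9    # 0.25 V across an assumed ~30 nm membrane

print(f"conventional EOP: {conventional:.1e} V/m")
print(f"pnc-Si membrane:  {pnc_si:.1e} V/m")
```

Even at a quarter of a volt, the assumed nanometre-scale gap gives a field several times stronger than 10 kV across a centimetre.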
"Until now, not everything associated with miniature pumps was miniaturised," said McGrath. "Our device opens the door for a tremendous number of applications."
Along with medical applications, it's been suggested that EOPs could be used to cool electronic devices. As electronic devices get smaller, components are packed more tightly, making it easier for the devices to overheat. With miniature power supplies, it may be possible to use EOPs to help cool laptops and other portable electronic devices.
McGrath said there's one other benefit to the silicon membranes. "Due to scalable fabrication methods, the nanocrystalline silicon membranes are inexpensive to make and can be easily integrated on silicon or silica-based microfluidic chips."
A new software algorithm is capable of solving CAPTCHAs – a test commonly used in computing to determine whether or not the user is human.
Vicarious, a startup developing artificial intelligence software, has announced that its algorithms can now reliably solve modern CAPTCHAs, including Google's reCAPTCHA, the world's most widely used test of a machine's ability to act human.
A CAPTCHA (which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart") is considered broken if an algorithm can solve it at least 1% of the time. Leveraging core insights from machine learning and neuroscience, the Vicarious AI can achieve success rates of up to 90% on modern CAPTCHAs from Google, Yahoo, PayPal, Captcha.com, and others. This advancement, the company says, renders text-based CAPTCHAs no longer effective as a Turing test.
"Recent AI systems like IBM’s Watson and deep neural networks rely on brute force: connecting massive computing power to massive datasets. This is the first time this distinctively human act of perception has been achieved, and it uses relatively minuscule amounts of data and computing power. The Vicarious algorithms achieve a level of effectiveness and efficiency much closer to actual human brains", said Vicarious co-founder D. Scott Phoenix.
"Understanding how the brain creates intelligence is the ultimate scientific challenge. Vicarious has a long-term strategy for developing human-level artificial intelligence, and it starts with building a brain-like vision system. Modern CAPTCHAs provide a snapshot of the challenges of visual perception, and solving those in a general way required us to understand how the brain does it", said co-founder Dr. Dileep George.
Solving CAPTCHA is the first public demonstration of Recursive Cortical Network (RCN) technology. Although commercial applications are still many years away, RCN technology will have broad implications for robotics, medical image analysis, image and video search, and many other fields.
"We should not underestimate the significance of Vicarious crossing this milestone," said Facebook co-founder and board member Dustin Moskovitz. "This is an exciting time for artificial intelligence research, and they are at the forefront of building the first truly intelligent machines."
By altering the friction encountered by a person's fingertip, a new algorithm developed by Disney can create the perception of a 3D bump on a touch surface, without having to physically move the surface. This method can be used to simulate the feel of a wide variety of objects and textures.
The algorithm is based on a discovery that when a person slides their finger over a real physical bump, the person perceives the bump largely because lateral friction forces stretch and compress skin on the sliding finger.
"Our brain perceives the 3D bump on a surface mostly from information that it receives via skin stretching," said Ivan Poupyrev, who directs Disney Research in Pittsburgh. "Therefore, if we can artificially stretch skin on a finger as it slides on the touch screen, the brain will be fooled into thinking an actual physical bump is on a touch screen, even though the touch surface is completely smooth."
In their experiments, the researchers used electrovibration to modulate the friction between the sliding finger and the touch surface with electrostatic forces. They created and validated a "psycho-physical model" that closely simulates friction forces perceived by the human finger when it slides over a real bump.
The model was then incorporated into an algorithm that dynamically modulates the frictional forces on a sliding finger so that they match the tactile properties of the visual content displayed on the touch screen along the finger's path. A wide variety of visual artifacts thus can be dynamically enhanced with tactile feedback that adjusts as the display changes.
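A minimal sketch of that idea, assuming a hypothetical height field and gain (Disney's actual psycho-physical model is not published in this article), might map the on-screen bump's slope under the finger to a friction command:

```python
import math

def bump_height(x, centre=0.5, width=0.1):
    """A smooth 1-D bump rendered on screen (arbitrary units, assumed shape)."""
    return math.exp(-((x - centre) / width) ** 2)

def friction_command(x, dx=1e-4, gain=0.8):
    """Friction rises on the up-slope and falls away on the down-slope,
    mimicking how lateral forces stretch and compress the sliding skin.
    The gain and linear mapping are illustrative assumptions."""
    slope = (bump_height(x + dx) - bump_height(x - dx)) / (2 * dx)
    return max(0.0, gain * slope)   # electrovibration can only add friction

# As the finger slides left to right, friction peaks on the up-slope
# and vanishes past the crest:
for x in (0.3, 0.45, 0.5, 0.55, 0.7):
    print(f"x={x:.2f}  friction={friction_command(x):.2f}")
```

In a real system the command would drive the electrovibration voltage, updated continuously as the finger's position and the displayed content change.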
"The traditional approach to tactile feedback is to have a library of canned effects that are played back whenever a particular interaction occurs," said Ali Israr, who was the research lead on the project. "This makes it difficult to create a tactile feedback for dynamic visual content, where the sizes and orientation of features constantly change. With our algorithm we do not have one or two effects, but a set of controls that make it possible to tune tactile effects to a specific visual artifact on the fly."
"Touch interaction has become the standard for smartphones, tablets and even desktop computers, so designing algorithms that can convert the visual content into believable tactile sensations has immense potential for enriching the user experience," Poupyrev said. "We believe our algorithm will make it possible to render rich tactile information over visual content and that this will lead to new applications for tactile displays."
Disney will present their findings at the ACM Symposium on User Interface Software and Technology being held this week in St Andrews, Scotland – one of the leading conferences in the field of human-computer interaction.
Researchers at Cambridge University have developed a new technique allowing carbon nanotube "forests" to be grown at five times the density of previous methods.
Scanning electron microscope images of CNT forests, low and high density.
Carbon nanotubes' outstanding mechanical, electrical and thermal properties make them an alluring material to electronics manufacturers. Until recently, however, scientists believed that growing the high density of tiny graphene cylinders needed for many microelectronics applications would be difficult.
Now a team from Cambridge University in England has devised a simple technique to increase the density of nanotube forests grown on conductive supports about five times over previous methods. The high density nanotubes might one day replace some metal electronic components, leading to faster devices.
"The high density aspect is often overlooked in many carbon nanotube growth processes, and is an unusual feature of our approach," says John Robertson, a professor in the electronic devices and materials group in the department of engineering at Cambridge. High-density forests are necessary for certain applications of carbon nanotubes, such as electronic interconnects and thermal interface materials.
Robertson and his colleagues grew carbon nanotubes on a conductive copper surface coated with the co-catalysts cobalt and molybdenum. In a novel approach, they grew the forests at a lower temperature than is typical – one compatible with semiconductor industry processes. X-ray photoelectron spectroscopy analysis of the metal interactions revealed the creation of a more supportive substrate for the forests to root in. The subsequent nanotube growth exhibited the highest mass density reported so far.
"In microelectronics, this approach to growing high-density carbon nanotube forests on conductors can potentially replace and outperform the current copper-based interconnects in a future generation of devices," says Cambridge researcher Hisashi Sugime. In the future, more robust carbon nanotube forests may also help to improve thermal interface materials, battery electrodes, and supercapacitors.
The article, "Low temperature growth of ultra-high mass density carbon nanotube forests on conductive supports" appears in the journal Applied Physics Letters.
A study by the Oxford Martin School shows that nearly half of US jobs could be at risk of computerisation within 20 years. Transport, logistics and office roles are most likely to come under threat.
The new study, a collaboration between Dr Carl Benedikt Frey (Oxford Martin School) and Dr Michael A. Osborne (Department of Engineering Science, University of Oxford), found that jobs in transportation, logistics, as well as office and administrative support, are at "high risk" of automation. More surprisingly, occupations within the service industry are also highly susceptible, despite recent job growth in this sector.
"We identified several key bottlenecks currently preventing occupations being automated," says Dr. Osborne. "As big data helps to overcome these obstacles, a great number of jobs will be put at risk."
The study examined over 700 detailed occupation types, noting the types of tasks workers perform and the skills required. By weighting these factors, as well as the engineering obstacles currently preventing computerisation, the researchers assessed the degree to which these occupations may be automated in the coming decades.
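A toy illustration of that weighting step, with invented weights and skill ratings (the actual study trains a classifier over detailed O*NET occupation data), might look like this:

```python
# Toy illustration: occupations demanding more of the "bottleneck"
# skills are harder to automate. All weights and ratings here are
# invented for illustration, not taken from the Frey-Osborne paper.
BOTTLENECKS = ("perception_manipulation", "creative", "social")
WEIGHTS = {"perception_manipulation": 0.30, "creative": 0.35, "social": 0.35}

def automation_risk(skill_ratings):
    """Higher bottleneck-skill requirements -> lower automation risk."""
    protection = sum(WEIGHTS[b] * skill_ratings[b] for b in BOTTLENECKS)
    return round(1.0 - protection, 2)

telemarketer = {"perception_manipulation": 0.1, "creative": 0.1, "social": 0.3}
engineer     = {"perception_manipulation": 0.5, "creative": 0.9, "social": 0.6}
print("telemarketer:", automation_risk(telemarketer))  # high risk
print("engineer:    ", automation_risk(engineer))      # low risk
```

The point is structural rather than numerical: under any such weighting, routine occupations low on all three bottleneck skills score as highly automatable.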
"Our findings imply that as technology races ahead, low-skilled workers will move to tasks that are not susceptible to computerisation – i.e., tasks that require creative and social intelligence," the paper states. "For workers to win the race, however, they will have to acquire creative and social skills."
"While computerisation has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labour in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations."
The low susceptibility of engineering and science occupations to computerisation, on the other hand, is largely due to the high degree of creative intelligence they require. However, even these occupations could be taken over by computers in the longer term.
Dr Frey said the United Kingdom is expected to face a similar challenge to the US. "While our analysis was based on detailed datasets relating to US occupations, the implications are likely to extend to employment in the UK and other developed countries," he said.
Researchers have discovered a "global ecology" of interacting machines that trade on the global markets at speeds too fast for humans, causing periodic outages. These high frequency trading algorithms could lead to increasingly large crashes, as the volume of data in the world continues to grow exponentially.
Recently, the global financial market experienced a series of computer glitches that abruptly brought operations to a halt. This was so serious that – on one day – it resulted in a third fewer shares being traded in the USA. One reason for these "flash freezes" may be the sudden emergence of mobs of ultrafast robots, which trade on the global markets and operate at speeds beyond human capability, thus overwhelming the system. The appearance of this "ultrafast machine ecology" is documented in a new study published today in the journal Scientific Reports.
The findings suggest that for time scales less than one second, the financial world makes a sudden transition into a cyber jungle inhabited by packs of aggressive trading algorithms. "These algorithms can operate so fast that humans are unable to participate in real time, and instead, an ultrafast ecology of robots rises up to take control," explains Neil Johnson, professor of physics in the College of Arts and Sciences at the University of Miami (UM).
"Our findings show that, in this new world of ultrafast robot algorithms, the behaviour of the market undergoes a fundamental and abrupt transition to another world where conventional market theories no longer apply," Johnson says.
Society's push for ever faster systems that outpace competitors has led to algorithms capable of operating faster than the response time for a human. For instance, the quickest a person can react to potential danger is about one second. Even a chess grandmaster takes around 650 milliseconds to realise that he is in trouble – yet microchips for trading can operate in a fraction of a millisecond (1 millisecond is 0.001 seconds).
In this study, the researchers assembled and analysed a high-throughput millisecond-resolution price stream of multiple stocks and exchanges. From January 2006 through February 2011, they found 18,520 extreme events lasting less than 1.5 seconds, including both crashes and spikes.
The team realised that as the duration of these ultrafast extreme events fell below human response times, the number of crashes and spikes increased dramatically. They created a model to understand the behaviour and concluded that the events were the product of ultrafast computer trading and not attributable to other factors, such as regulations or mistaken trades. Johnson, who is head of the inter-disciplinary research group on complexity at UM, compares the situation to an ecological environment.
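A simplified sketch of how such sub-second extreme events might be flagged in a tick stream (the thresholds are illustrative assumptions; the paper's exact definition differs in detail):

```python
# Sketch: flag ultrafast extreme events in a (timestamp_ms, price)
# tick stream -- a monotone run of at least MIN_TICKS same-direction
# moves completed in under WINDOW_MS. Thresholds are assumptions.
MIN_TICKS = 4
WINDOW_MS = 1500

def extreme_events(ticks):
    events = []
    run_start = 0
    for i in range(1, len(ticks)):
        same_dir = (
            i >= 2 and
            (ticks[i][1] - ticks[i-1][1]) * (ticks[i-1][1] - ticks[i-2][1]) > 0
        )
        if not same_dir:                       # direction changed: new run
            run_start = i - 1
        run_len = i - run_start
        duration = ticks[i][0] - ticks[run_start][0]
        if run_len >= MIN_TICKS and duration < WINDOW_MS:
            kind = "spike" if ticks[i][1] > ticks[run_start][1] else "crash"
            events.append((ticks[run_start][0], kind))
    return events

# A five-tick crash unfolding in 400 ms, then a reversal:
stream = [(0, 100.0), (100, 99.5), (200, 99.0), (300, 98.2), (400, 97.0),
          (900, 97.1)]
print(extreme_events(stream))
```

Shrinking WINDOW_MS below typical human reaction times isolates exactly the events that only machines could have produced, which is the regime the study examines.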
"As long as you have the normal combination of prey and predators, everything is in balance, but if you introduce predators that are too fast, they create extreme events," Johnson says. "What we see with the new ultrafast computer algorithms is predatory trading. In this case, the predator acts before the prey even knows it's there."
Johnson explains that in order to regulate these ultrafast computer algorithms, we need to understand their collective behaviour. This is a daunting task, but is made easier by the fact that the algorithms that operate below human response times are relatively simple, because simplicity allows faster processing.
"There are relatively few things that an ultrafast algorithm will do," Johnson says. "This means that they are more likely to start adopting the same behaviour, and hence form a cyber crowd or cyber mob which attacks a certain part of the market. This is what gives rise to the extreme events that we observe," he says. "Our math model is able to capture this collective behaviour by modelling how these cyber mobs behave."
In fact, Johnson believes this new understanding of cyber-mobs may have other important applications outside of finance – such as dealing with cyber-attacks and cyber-warfare.
By analysing MRI images of the brain with an elegant mathematical model, it is possible to reconstruct thoughts more accurately than ever before. In this way, researchers from Radboud University Nijmegen, Netherlands, have succeeded in determining which letter a test subject was looking at.
Functional MRI scanners have been used in cognition research primarily to determine which brain areas are active while test subjects perform a specific task. The question is simple: is a particular brain region on or off? A research group at the Donders Institute for Brain, Cognition and Behaviour at Radboud University has gone a step further: they have used data from the scanner to determine what a test subject is looking at.
The researchers 'taught' a model how small volumes of 2x2x2 mm from the brain scans – known as voxels – respond to individual pixels. By combining all the information about the pixels from the voxels, it became possible to reconstruct the image viewed by the subject. The result was not a clear image, but a somewhat fuzzy speckle pattern. In this study, the researchers used hand-written letters.
Prior knowledge improves model performance
"After this we did something new", says lead researcher Marcel van Gerven. "We gave the model prior knowledge: we taught it what letters look like. This improved the recognition of the letters enormously. The model compares the letters to determine which one corresponds most exactly with the speckle image, and then pushes the results of the image towards that letter. The result was the actual letter, a true reconstruction."
"Our approach is similar to how we believe the brain itself combines prior knowledge with sensory information. For example, you can recognise the lines and curves in this article as letters only after you have learned to read. And this is exactly what we are looking for: models that show what is happening in the brain in a realistic fashion. We hope to improve the models to such an extent that we can also apply them to the working memory or to subjective experiences such as dreams or visualisations. Reconstructions indicate whether the model you have created approaches reality."
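A numpy sketch of the two-stage idea described above, with synthetic data and stand-in weights (the real model is trained on actual fMRI recordings): a linear map from voxels to pixels yields the speckle image, and a letter prior then pulls it towards the best-matching template.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_pixels = 1200, 56 * 56   # 1200 voxels, as in the study

# Stage 1: a linear map from voxel responses to pixel intensities
# produces a noisy, speckled reconstruction. W stands in for the
# weights learned during training; here it is random for illustration.
W = rng.normal(size=(n_pixels, n_voxels)) * 0.01
voxels = rng.normal(size=n_voxels)
speckle = W @ voxels

# Stage 2: prior knowledge -- compare the speckle against known letter
# templates and push the result towards the closest match.
templates = {c: rng.normal(size=n_pixels) for c in "BRAINS"}  # stand-ins
best = max(templates, key=lambda c: np.dot(speckle, templates[c]))
reconstruction = 0.5 * speckle + 0.5 * templates[best]
print("decoded letter:", best)
```

With real trained weights and real letter images, stage 2 is what turns the fuzzy speckle into a recognisable reconstruction of the viewed letter.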
Improved resolution; more possibilities
Japanese researchers achieved a similar feat in 2008. However, this latest research at Radboud University is based on a higher resolution. Sanne Schoenmakers, who is working on a thesis about decoding thoughts, explains: "In our further research we will be working with a more powerful MRI scanner. Due to the higher resolution of the scanner, we hope to be able to link the model to more detailed images. We are currently linking images of letters to 1200 voxels in the brain; with the more powerful scanner we will link images of faces to 15,000 voxels."
The team's research is published in the journal NeuroImage.
Worldwide, mobile phone sales totalled 435 million units in the second quarter of 2013 – an increase of 3.6 percent from the same period in 2012 – according to research firm Gartner. Smartphone sales reached 225 million units, up 47 percent from last year, while basic feature phones totalled 210 million units, a decline of nearly 25 percent.
Asia/Pacific, Latin America and Eastern Europe had the highest smartphone growth rates of 74.1 percent, 55.7 percent and 31.6 percent respectively, as smartphone sales grew in all regions.
Samsung maintained the no. 1 position in the global smartphone market, as its share of sales reached 31.7 percent, up from 29.7 percent in the second quarter of 2012. Apple’s smartphone sales reached 32 million units in the second quarter of 2013, up 10.2 percent from a year ago.
Anshul Gupta, principal research analyst at Gartner, said: “With second-quarter 2013 sales broadly on track, we see little need to adjust our forecast for worldwide mobile phone sales to total 1.82 billion units this year. Flagship devices brought to market in time for the holidays, and the continued price reduction of smartphones, will drive consumer adoption in the second half of the year.”
In the smartphone operating system (OS) market, Microsoft overtook BlackBerry for the first time, taking the no. 3 spot with 3.3 percent market share in the second quarter of 2013. “While Microsoft has managed to increase share and volume in the quarter, it should continue to focus on growing interest from app developers to help grow its appeal among users,” said Mr. Gupta. The Android OS continued to increase its lead, garnering 79 percent of the market in the second quarter, followed by Apple's iOS with 14.2 percent.
Basic feature phones will be a hard sell in about five to 10 years' time, says Gupta: "It will reach a point where sales of a new model of feature phone will not be able to justify the amount of time and money spent on developing it."
A new automated medical system has initiated research that could one day radically improve how neurological and psychological diseases are treated.
Medtronic, Inc. has announced the first implant of a novel deep brain stimulation (DBS) system that – for the first time – enables the sensing and recording of select brain activity while simultaneously providing targeted DBS therapy. This will initiate research on how the brain responds to the therapy and could yield major insights that significantly change the way people with devastating neurological and psychological disorders are treated.
The Activa PC+S DBS system delivers proven Medtronic DBS therapy, while at the same time sensing and recording electrical activity in key areas of the brain, using sensing technology and an adjustable algorithm, which enable the system to gather brain signals at various moments as selected by a physician. Initially, this new technology will be made available to a select group of physicians worldwide for use in clinical studies. These physicians will use the system to map the brain’s responses to Medtronic DBS therapy and explore applications for the treatment across a range of neurological and psychological conditions.
The Activa PC+S system was implanted for the first time at Ludwig Maximilians University in Munich, Germany, in a person with Parkinson’s disease. This patient will be treated by a team that includes the neurologist Kai Bötzel and neurosurgeon Jan Mehrkens. Dr. Bötzel will be the first to use data gathered by the Activa PC+S system to gain unprecedented insight into how the brain responds.
Conventional DBS therapy uses an implanted medical device, similar to a pacemaker, to deliver mild electrical pulses to precisely targeted areas of the brain. This needs to be programmed and adjusted by a trained clinician to maximise symptom control and minimise side effects.
"Everything that is on the market today is a one-way stimulator," says Joseph Neimat, a neurosurgeon at Vanderbilt University Medical Center who specialises in deep brain stimulation. "The devices don't record or respond to a patient. What would be better would be to have a system that could anticipate or read a patient's state, and then respond with an appropriate stimulus."
“DBS therapy works for people with Parkinson’s disease and other movement disorders, but there is much to learn about how the brain responds to the therapy,” said Dr. Bötzel. “This new system will allow us to treat patients with conventional DBS therapy, while at the same time opening the door for research that was not possible until now. We hope these insights will lead to the development of effective new treatments tailored to the needs of individuals.”
“Devastating conditions like Parkinson’s disease and obsessive-compulsive disorder take a significant toll on countless people, as well as their loved ones,” said Lothar Krinke, Ph.D., vice president and general manager. “Medtronic is excited to provide this new system to researchers worldwide, and we expect that their respective studies will lead to accelerated understanding of how neurological and psychological conditions develop and progress. This represents a significant milestone for DBS therapy and the long-term journey toward a closed-loop DBS system, which could personalise therapy by using device data to automatically adjust to the needs of individual patients.”
Medtronic’s Activa PC+S system received CE (Conformité Européenne) mark in January 2013. It is not approved by the U.S. Food and Drug Administration for commercial use in the United States, and will be made available to select physicians for investigational use only. Additional implants of the Activa PC+S system, including the first implant in the United States, will take place in the coming months.
London and New York-based company botObjects recently announced the ProDesk3D, which they claimed to be the first full-colour 3D printer small enough to fit on a desktop. In addition to its colour abilities and compactness, they confirmed that it would print at 25 microns – some four times more accurate than its competitors (Makerbot's Replicator 2 has a resolution of 100 microns).
This gives an extremely smooth finish, overcoming the surface grooves that often appear in 3D-printed objects. The machine switches between different-coloured cartridges on the fly, just like an inkjet printer, instead of requiring single-colour spools of raw plastic to be swapped out. This includes a palette of new "translucent" PLA colours for some impressive blending effects, customisable with software on Windows 7 and Mac OS X. There is no complex or tricky setup, as the ProDesk3D arrives ready to use out of the box.
The company has received over 100,000 enquiries and expects to ship its first orders by 1st October 2013. The standard and limited edition models both have a somewhat hefty price tag of nearly $3,000 each, making them high-end products. However, the cost of 3D printing has fallen rapidly in recent years and if this continues, it is expected to become a mainstream consumer technology by 2015. Following their recent announcement, the company has now released a video of the product in action:
Nokia has revealed the Lumia 1020 – a Windows Phone 8 smartphone that includes a 41-megapixel camera sensor, PureView technology, optical image stabilisation and high-resolution zoom.
This is Nokia's second phone to feature such a camera. Last year, the company launched the 808 PureView model. However, this was based on the ageing Symbian operating system, which limited its appeal. The new Lumia 1020 instead runs on the latest Microsoft OS with over 160,000 apps.
As well as its ultra-high resolution (7712 × 5360 pixels), the camera uses a process called "oversampling". This generates a smaller 5MP version of the image that removes unwanted visual noise, achieves higher definition and light sensitivity, and enables lossless zoom. Unlike its predecessor, the Lumia 1020 can save both versions at the same time, so the owner does not need to worry about switching settings.
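The core idea behind oversampling can be sketched in a few lines. The following is a minimal illustration of pixel binning – averaging blocks of neighbouring sensor pixels into one output pixel – which is the principle involved, not Nokia's actual (and more sophisticated) algorithm:

```python
def oversample(pixels, k):
    """Average each k x k block of a 2-D greyscale image into one
    output pixel - reducing resolution while suppressing random noise."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - h % k, k):
        row = []
        for x in range(0, w - w % k, k):
            block = [pixels[y + dy][x + dx]
                     for dy in range(k) for dx in range(k)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A noisy 4x4 image binned 2x2 into a cleaner 2x2 image:
img = [[10, 12, 50, 52],
       [14, 16, 54, 56],
       [90, 92, 20, 22],
       [94, 96, 24, 26]]
print(oversample(img, 2))  # → [[13.0, 53.0], [93.0, 23.0]]
```

Because random sensor noise averages towards zero across each block, the smaller image is cleaner than any single-pixel readout would be.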
In addition, the camera's video mode takes advantage of the higher resolution, by allowing the user to zoom in four times while filming at 1080p without any loss of quality, and six times at 720p. The lens system is mounted on ball-bearings and is fitted with a gyroscope and motors to counteract movement and camera shake.
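The quoted zoom figures follow directly from the sensor's pixel width: the camera can crop into the 7712-pixel-wide frame until the crop matches the output video width, with no interpolation needed. A quick check of the arithmetic:

```python
# Lossless zoom = sensor width / output frame width (no upscaling needed
# as long as the cropped region still covers the full output frame).
SENSOR_WIDTH = 7712  # pixels, from the full-resolution still

def lossless_zoom(frame_width):
    """Maximum zoom factor before the crop drops below frame_width."""
    return round(SENSOR_WIDTH / frame_width, 1)

print(lossless_zoom(1920))  # 1080p → 4.0x
print(lossless_zoom(1280))  # 720p  → 6.0x
```

This matches the four-times and six-times figures Nokia quotes for 1080p and 720p respectively.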
Given the exponential progress of digital technology, cameras like this should be fairly standard and low-cost within the next few years. By the 2020s, we will probably be saying the same about gigapixel cameras.
Using nanostructured glass, scientists at the University of Southampton have, for the first time, demonstrated the recording and retrieval of five-dimensional digital data by femtosecond laser writing. The storage offers unprecedented parameters: 360 TB/disc data capacity, thermal stability up to 1,000°C and a practically unlimited lifetime.
Dubbed the 'Superman' memory crystal – as the glass has been compared to the "memory crystals" used in the Superman films – the medium records data via self-assembled nanostructures created in fused quartz, which can store vast quantities of data for over a million years. The information is encoded in five dimensions: the size and orientation of these nanostructures, in addition to their three-dimensional position.
A 300 kb text file was successfully recorded in 5D using an ultrafast laser, which produces extremely short and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometres (millionths of a metre). The self-assembled nanostructures change the way light travels through the glass, modifying the polarisation of light, which can then be read using a combination of an optical microscope and a polariser, similar to that found in Polaroid sunglasses.
The research is led by Jingyu Zhang at the Optoelectronics Research Centre (ORC) and conducted under a joint project with Eindhoven University of Technology.
"We are developing a very stable and safe form of portable memory using glass, which could be highly useful for organisations with big archives," says Jingyu. "At the moment, companies have to back up their archives every five to ten years because hard-drive memory has a relatively short lifespan. Museums who want to preserve information or places like the national archives where they have huge numbers of documents, would really benefit."
Professor Peter Kazansky, the ORC's group supervisor, added: "It is thrilling to think that we have created the first document which will likely survive the human race. This technology can secure the last evidence of civilisation: all we've learnt will not be forgotten."
The team presented their paper at the Conference on Lasers and Electro-Optics (CLEO'13) in San Jose. They are now looking for industry partners to commercialise this ground-breaking new technology.