15th August 2016
Computer program learns to replicate human handwriting
Researchers at University College London have devised a software algorithm able to scan and replicate almost anyone's handwriting.
In a world increasingly dominated by the QWERTY keyboard, computer scientists at University College London (UCL) have developed software which may spark the comeback of the handwritten word, by analysing the handwriting of any individual and accurately replicating it.
The scientists have created "My Text in Your Handwriting" – a program which semi-automatically examines a sample of a person's handwriting, which can be as little as one paragraph, and generates new text saying whatever the user wishes, as if the author had handwritten it themselves.
"Our software has lots of valuable applications," says lead author, Dr Tom Haines. "Stroke victims, for example, may be able to formulate letters without the concern of illegibility, or someone sending flowers as a gift could include a handwritten note without even going into the florist. It could also be used in comic books where a piece of handwritten text can be translated into different languages without losing the author's original style."
Published in ACM Transactions on Graphics, the machine learning algorithm is built around glyphs – a specific instance of a character. Authors produce different glyphs to represent the same element of writing – the way one individual writes an "a" will usually be different to the way others write an "a". Although an individual's writing has slight variations, every author has a recognisable style that manifests in their glyphs and spacing. The software learns what is consistent across an individual's style and reproduces this.
To generate an individual's handwriting, the software analyses and replicates the author's specific character choices, pen-line texture, colour and the inter-character ligatures (the joining-up between letters), as well as vertical and horizontal spacing.
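In outline, synthesis of this kind amounts to choosing a stored glyph instance for each character and laying the instances out with the author's typical spacing. The sketch below is a heavily simplified illustration of that idea, not the UCL system: the glyph library, widths and spacing values are all invented.

```python
import random

# Illustrative glyph library: each character maps to several observed
# instances (here just widths in arbitrary units; the real system also
# stores stroke geometry, pen-line texture and colour).
glyph_library = {
    "a": [{"width": 9}, {"width": 10}],
    "c": [{"width": 8}],
    "t": [{"width": 7}, {"width": 8}],
}

def synthesise(text, h_spacing=2, seed=0):
    """Pick a stored glyph instance for each character and lay the
    instances out with a typical horizontal spacing."""
    rng = random.Random(seed)
    x, layout = 0, []
    for ch in text:
        glyph = rng.choice(glyph_library[ch])  # vary glyph choice, as a writer would
        layout.append((ch, x))                 # place glyph at current pen position
        x += glyph["width"] + h_spacing
    return layout

print(synthesise("cat"))
```

The real system additionally models inter-character ligatures and fits spacing statistically from the writing sample, rather than using a fixed constant.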
Co-author, Dr Oisin Mac Aodha (UCL Computer Science), said: "Up until now, the only way to produce computer-generated text that resembles a specific person's handwriting would be to use a relevant font. The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we've developed removes this problem and so could be used in a wide variety of commercial and personal circumstances."
The system is flexible enough that samples from historical documents can be used with little extra effort. Thus far, the scientists have analysed and replicated the handwriting of such figures as Abraham Lincoln, Frida Kahlo and Arthur Conan Doyle. Famously, Conan Doyle never actually wrote Sherlock Holmes as saying, "Elementary, my dear Watson" – but the team have produced evidence to make you think otherwise.
To test the effectiveness of their software, the research team asked people to distinguish between handwritten envelopes and ones created by their automatic software. People were tricked by the computer-generated writing up to 40% of the time. Given how convincing it can be, some may believe this method could help in forging documents – but the team explained it works both ways and could actually help in detecting forgeries.
"Forgery and forensic handwriting analysis are still almost entirely manual processes – but by taking the novel approach of viewing handwriting as texture-synthesis, we can use our software to characterise handwriting to quantify the odds that something was forged," explained Dr Gabriel Brostow, senior author. "For example, we could calculate what ratio of people start their 'o's' at the bottom versus the top and this kind of detailed analysis could reduce the forensics service's reliance on heuristics."
• Follow us on Twitter
• Follow us on Facebook
30th July 2016
Vortex laser offers hope for Moore's Law
A new laser that travels in a corkscrew pattern is shown to carry ten times or more the information of conventional lasers, potentially offering a way to extend Moore's Law.
Like a whirlpool, a new light-based communication tool carries data in swift, circular motions. This optics advancement could become a central component of the next generation of computers designed to handle society's growing demand for information sharing. It may also help to ease concerns for those worried about the predicted end of Moore's Law – the idea that researchers will find new ways to make computers ever smaller, faster and cheaper.
"To transfer more data while using less energy, we need to rethink what's inside these machines," says Liang Feng, PhD, assistant professor in the Department of Electrical Engineering at the University at Buffalo's (UB) School of Engineering and Applied Sciences.
For decades, researchers have been able to cram exponentially increasing numbers of components onto silicon-based chips. Their success explains why a typical handheld smartphone has more computing power than the world's most powerful computers of the 1980s, which cost millions in today's dollars and were the size of a large filing cabinet.
But researchers are approaching a bottleneck, in which existing technology may no longer meet society's demand for data. Predictions vary, but many suggest this could happen within the next five years. This problem is being addressed in numerous ways, including optical communications, which use light to carry information. Examples of optical communications vary from old lighthouses to modern fibre optic cables used to watch television and browse the web. Lasers are a key part of today's optical communication systems and researchers have been manipulating them in various ways, most commonly by funnelling different signals into one path, to pack more information together. But these techniques are also reaching their limits.
The UB-led research team is pushing laser technology forward using another light control method, known as orbital angular momentum. This distributes the laser in a corkscrew pattern with a vortex at the centre, as pictured above. Vortex lasers are usually too large to work in today's computers, but the team was able to shrink theirs to the point where it is compatible with modern chips. Because the laser beam travels in a corkscrew pattern, encoding information into different vortex twists, it can deliver at least 10 times the information of conventional lasers, which move linearly.
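The "at least 10 times" figure follows from simple multiplexing arithmetic: each distinguishable twist state behaves as an independent channel sharing the same beam path. The numbers below are illustrative assumptions, not values from the Science paper.

```python
# Back-of-the-envelope gain from orbital angular momentum (OAM) multiplexing.
per_channel_gbps = 10      # assumed data rate of one conventional laser channel
oam_modes = 10             # assumed number of distinguishable twist states

conventional = per_channel_gbps            # a linear beam carries one channel
with_oam = per_channel_gbps * oam_modes    # each twist state adds a parallel channel

print(with_oam / conventional)  # -> 10.0
```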
However, the vortex laser is just one component of many – such as advanced transmitters and receivers – which will ultimately be needed to continue building more powerful computers and data centres in the future.
The study was published yesterday in the peer-reviewed journal Science. The research was supported with grants from the U.S. Army Research Office, the U.S. Department of Energy and National Science Foundation.
19th July 2016
Smallest ever hard disk writes information atom by atom
Scientists in the Netherlands, working at the limits of miniaturisation, have used one bit per atom to create 1 kilobyte of data storage.
Every day, modern society creates more than a billion gigabytes of new data. To store all this information, it is increasingly important that each single bit occupies as little space as possible. A team of scientists at the Kavli Institute of Nanoscience at Delft University, Netherlands, managed to bring this reduction to the ultimate limit: they built a memory of 1 kilobyte (8,000 bits), where each bit is represented by the position of one single chlorine atom.
"In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp," says lead scientist Sander Otte. The team reached a storage density of 500 terabits per square inch (Tbpsi), 500 times better than the best commercial hard disk currently available, and reports on the breakthrough in Nature Nanotechnology.
In 1959, physicist Richard Feynman challenged his colleagues to engineer the world at the smallest possible scale. In his famous lecture, There's Plenty of Room at the Bottom, he speculated that a platform allowing us to arrange individual atoms, in an exact orderly pattern, would make it possible to store one piece of information per atom. To honour the visionary Feynman, Otte and his team have now coded a section of Feynman's lecture on an area 100 nanometres wide.
STM scan (96 nm wide, 126 nm tall) of the 1 kB memory, written to a section of Feynman's lecture, There's Plenty of Room at the Bottom.
The team used a scanning tunnelling microscope (STM), in which a sharp needle probes the atoms of a surface, one by one. Using these probes, scientists not only see the atoms, but can also push them around: "You could compare it to a sliding puzzle", Otte explains. "Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions. If the chlorine atom is in the top position, there is a hole beneath it – we call this a 1. If the hole is in the top position and the chlorine atom is therefore on the bottom, then the bit is a 0." Because the chlorine atoms are surrounded by other chlorine atoms, except near the holes, they keep each other in place. That is why this method with holes is much more stable than methods with loose atoms and more suitable for data storage.
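Otte's sliding-puzzle description can be captured in a toy model: each bit is a pair of lattice sites sharing one chlorine atom and one vacancy, and writing a bit means sliding the atom between them. The representation below is purely illustrative and says nothing about the real STM geometry.

```python
# Toy model of the Delft bit: a pair of sites holds one chlorine atom
# ("Cl") and one hole. Atom in the top position = 1, hole on top = 0.

def write_bit(pair, value):
    """Slide the atom to the top site for a 1, to the bottom site for a 0."""
    pair["top"], pair["bottom"] = ("Cl", "hole") if value else ("hole", "Cl")

def read_bit(pair):
    return 1 if pair["top"] == "Cl" else 0

def write_byte(pairs, byte):
    for i, pair in enumerate(pairs):
        write_bit(pair, (byte >> (7 - i)) & 1)  # most significant bit first

memory = [{"top": "hole", "bottom": "Cl"} for _ in range(8)]  # 8 bits = 1 byte
write_byte(memory, ord("F"))                    # store one character
print([read_bit(p) for p in memory])            # -> [0, 1, 0, 0, 0, 1, 1, 0]
```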
The researchers from Delft organised their memory in blocks of 8 bytes (64 bits). Each block has a marker, made of the same type of 'holes' as the raster of chlorine atoms. Inspired by the pixelated square barcodes (QR codes) often used to scan tickets for airplanes and concerts, these markers work like miniature QR codes that carry information about the precise location of the block on the copper layer. The code will also indicate if a block has been damaged, for instance due to some local contaminant or an error in the surface. This allows memory to be scaled up easily to very big sizes, even if the copper surface is not entirely perfect.
The new method offers excellent prospects in terms of stability and scalability. Still, this type of memory should not be expected in commercial use anytime soon: "In its current form, the memory can operate only in very clean vacuum conditions and at liquid nitrogen temperature (77 K), so the actual storage of data on an atomic scale is still some way off," explains Otte. "But through this achievement, we have certainly come a big step closer".
21st June 2016
The first supercomputer to reach 100 petaflops
China has announced the Sunway TaihuLight – the world's fastest supercomputer, with a Linpack rating of 93 petaflops and peak performance of 125 petaflops.
The Sunway TaihuLight is the first system in the world to reach a peak performance of over 100 petaflops (100,000,000,000,000,000 floating point operations per second). It is a completely home-grown machine, designed and operated by the National Supercomputing Centre in Wuxi (NSCC-Wuxi), eastern China.
As the world's fastest supercomputer, it will contribute to research such as Earth system modelling, ocean surface wave modelling, atomistic simulations, phase-field simulations, hi-tech manufacturing and big data analytics. With advancements in these and other fields, the models that scientists use are becoming increasingly complex, and the temporal and spatial resolutions they require are also increasing rapidly. All of these factors contribute to the demand for exponential improvements in computing power.
The Sunway TaihuLight is almost three times faster than the previous record holder, Tianhe-2, which ran at 34 petaflops. In fact, it surpasses the next five machines on the TOP500 list combined. It has a total of 10.6 million CPU cores and features 1.3 petabytes of RAM. The system is so powerful that it requires about 15 megawatts (MW) of electricity. However, this is less than the 17.8 MW needed by Tianhe-2, making it far more energy efficient. The system runs on its own operating system, Raise OS.
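The energy-efficiency claim is easy to check from the quoted figures: dividing petaflops by megawatts reduces directly to gigaflops per watt.

```python
# Rough energy-efficiency comparison using the figures quoted above.
# petaflops / megawatts reduces to gigaflops per watt.
taihulight = 93 / 15       # ~6.2 gigaflops per watt (Linpack rating)
tianhe2 = 34 / 17.8        # ~1.9 gigaflops per watt

print(round(taihulight / tianhe2, 1))  # -> 3.2, so TaihuLight is over 3x more efficient
```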
"As the first number one system of China that is completely based on home-grown processors, the Sunway TaihuLight system demonstrates the significant progress that China has made in the domain of designing and manufacturing large-scale computation systems," said director of the NSCC, Prof. Guangwen Yang.
China now has more supercomputers among the world's top 500 than any other nation. Although currently lagging behind in the supercomputer race, America is planning to launch a new machine of its own in 2018, called Summit, which should run at between 150 and 300 petaflops. By 2019, experts believe the first exaflop computer may arrive. Longer term, zettaflop and yottaflop machines could arise in the 2030s and 2040s, respectively. If trends continue, a billion human brains could be simulated in real time by the 2050s.
17th May 2016
IBM scientists achieve storage memory breakthrough
A new technology developed at IBM can speed up machine learning and access to the Internet of Things, mobile phone apps and cloud storage.
For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell, using a relatively new technology known as phase-change memory (PCM).
The current memory landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years, PCM has attracted the industry's attention as a potential "universal" memory technology, based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM won't lose data when powered off (unlike DRAM), and endures at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.
This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things. Along with standalone PCM, hybrid applications could be possible, which combine PCM and flash storage together, with PCM as an extremely fast cache. For example, a mobile phone's operating system could be stored in PCM, enabling the phone to launch in a few seconds. For enterprise-level systems, entire databases could be stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions. In addition, machine learning algorithms using big datasets would see a speed boost by reducing the latency overhead when reading data between iterations.
PCM materials exhibit two stable states – the amorphous (without a clearly defined structure) and the crystalline (with structure) phases, of low and high electrical conductivity, respectively.
To store a '0' or a '1' – known as a bit – in a PCM cell, a high or medium electrical current is applied to the material. A '0' can be programmed to be written in the amorphous phase or a '1' in the crystalline phase, or vice versa. The bit is then read back by applying a low voltage.
Previously, scientists at IBM and other institutes have demonstrated the ability to store 1 bit per cell in PCM, but today at the IEEE International Memory Workshop in Paris, IBM scientists presented, for the first time, successfully storing 3 bits per cell in a 64k-cell array, at elevated temperatures and after a million endurance cycles.
To achieve multi-bit storage, the researchers developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.
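Storing 3 bits per cell means programming and reliably distinguishing 2³ = 8 analogue levels rather than 2. A minimal sketch of multi-level write and threshold-based read is below; the level values are arbitrary, and IBM's actual contribution – the drift-immune metrics and drift-tolerant coding mentioned above – is not modelled here.

```python
import bisect

# Idealised cell resistances for the 8 levels (arbitrary units). Real PCM
# levels drift over time, which is what IBM's coding schemes compensate for.
LEVELS = [1, 2, 4, 8, 16, 32, 64, 128]
THRESHOLDS = [(a + b) / 2 for a, b in zip(LEVELS, LEVELS[1:])]  # decision boundaries

def write_cell(symbol):
    """Program the cell to the level encoding a 3-bit symbol (0-7)."""
    return LEVELS[symbol]

def read_cell(resistance):
    """Recover the symbol by comparing the readout against the thresholds."""
    return bisect.bisect_left(THRESHOLDS, resistance)

print(read_cell(write_cell(5)))  # -> 5
```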
"Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry," said Dr. Haris Pozidis, manager of non-volatile memory research at IBM. "Reaching 3 bits per cell is a significant milestone, because at this density the cost of PCM will be significantly less than DRAM and closer to flash."
11th May 2016
Samsung announces 256GB microSD card
Samsung Electronics has introduced the EVO Plus 256GB MicroSD Card, featuring the highest capacity in its class.
Samsung Electronics has unveiled its newest memory card globally – the EVO Plus 256GB microSD card. This device offers the highest capacity for any microSD card in its class. The previous record holder was a 200GB offering from SanDisk, announced in March 2015.
Samsung's new card provides fast speeds and expanded memory storage for use in premium smartphones and tablets, 360-degree video recorders, action cameras, drones and more. Consumers can now record up to 12 hours of 4K UHD video or 33 hours of Full HD video on their mobile device or action camera without needing to change or replace the memory card, allowing them to experience more and worry less about running out of memory.
The EVO Plus 256GB raises the bar for capacity and performance of microSD cards, thanks to Samsung's advanced V-NAND technology offering high read and write speeds of up to 95MB/s and 90MB/s, respectively. This level of performance will provide general consumers and professionals with superb convenience for storing large volumes of high-resolution photography and 4K video, as well as graphics-intensive multimedia like virtual reality (VR) and gaming.
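A quick sanity check on the quoted write speed: at a sustained 90MB/s, filling the entire card takes under an hour (decimal units are used here, as card capacities are marketed that way).

```python
# Time to fill the card at the quoted sustained write speed.
capacity_mb = 256 * 1000     # 256 GB expressed in megabytes (decimal)
write_mb_s = 90              # quoted sequential write speed

minutes = capacity_mb / write_mb_s / 60
print(round(minutes))        # -> 47, roughly three quarters of an hour
```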
"With the upward trend of consumers using high-performance, high-capacity mobile devices, our new, V-NAND-based 256GB microSD card allows us to deliver the memory card consumers have been craving," said Un-Soo Kim, Senior Vice President of Brand Product Marketing. "Our card will provide consumers with large capacity, and high read and write speeds. We are excited to offer our customers convenient and seamless multimedia experiences when they access, store and share all of the content they create and capture."
The card is water, temperature, x-ray and magnet proof, and comes with a full 10-year warranty. It will be offered in more than 50 countries including the USA, Europe, China and other regions starting in June 2016 for $249.99 (manufacturer's suggested retail price). Given the continued rapid progress of microSD card storage capacities, it surely can't be long before the first 1TB card emerges.
25th March 2016
Future autonomous cars may include windshield movie displays
Ford Motor Company has filed a patent for what it calls an "Autonomous Vehicle Entertainment System" to be used in self-driving cars. If successfully developed and commercialised, this would enable the interior side of a windshield to be turned into a 50-inch movie screen.
The patent describes a dual-screen system, which consists of a projector, along with a large and small screen. The larger screen can be automatically deployed at the front of the vehicle's interior – completely replacing the window view – while the seat layout can also be adjusted to provide additional comfort and leg room, offering a more "theatre"-like experience. The projector is mounted to the ceiling, aimed at the large screen, playing a variety of media content that may include movies, television shows, games, music and so on.
This large cinematic screen is only intended for deployment in fully autonomous mode, i.e. when the vehicle is driving itself. It retracts back into the ceiling when a human driver is needed at the wheel. If entering non-autonomous mode, Ford states the media content might continue to be viewable on a smaller screen elsewhere – such as the dashboard, instrument cluster, or rear-view mirror:
At this stage, the Autonomous Vehicle Entertainment System is only a concept, without a prototype or demonstration in the works: "We submit patents on innovative ideas as a normal course of business," says company spokesman Allan Hall, in an interview with Forbes magazine. "Patent applications are intended to protect new ideas, but aren't necessarily an indication of new business or product plans."
However, Ford has been investing heavily in autonomous vehicle research. At the Consumer Electronics Show in Las Vegas earlier this year, it was revealed that the company is tripling its fleet of fully autonomous Fusion Hybrid models – making it the largest in the industry – with about 30 vehicles being tested on roads in California, Arizona and Michigan. Ford is also using a lightweight, next-generation sensor technology featuring higher resolution and a longer range of 200 metres, capable of handling a greater variety of driving scenarios.
Given the massive cost savings, improved safety, reduced congestion and other benefits, it seems almost inevitable that self-driving cars will be commonplace in the not-too-distant future and will revolutionise the world of transport. A large majority (75%) of new cars will be autonomous by 2035, according to a forecast by Navigant Research. The market for in-car entertainment systems like that seen in Ford's patent could be huge. Perhaps their idea could be further refined to remove the need for a ceiling projector – instead using a flexible, roll-up electronic display (pictured below). Either way, long distance journeys could become a lot less boring in the future.
17th March 2016
Sony reveals price and release date for PlayStation VR
Sony has announced that PlayStation VR, a virtual reality system for the PlayStation 4, will launch from October 2016 in North America, Japan, Europe and Asia, at a suggested retail price of $399 USD, 44,980 yen, €399 and £349.
“Ever since we unveiled PS VR during the 2014 Game Developers Conference, we’ve received a tremendous response from gamers and developers alike,” said Andrew House, President and Global CEO of Sony Computer Entertainment Inc. “To make sure that we are able to prepare and deliver enough units of PS VR and a wide variety of software titles to consumers worldwide, we have decided to launch PS VR in October 2016. For those who are looking forward to its launch, we would like to thank everyone for their patience and continued support. We are beyond excited to deliver to consumers the amazing experience that PS VR offers.”
Currently, more than 230 developers and publishers are working on PS VR software – from smaller independent teams, to larger studios at the industry's top publishers such as Ubisoft and 2K Games. In addition to gaming, the system also has a Cinematic mode, which lets users enjoy a variety of content in a large virtual screen while wearing the headset. Supported content for the Cinematic mode includes standard PS4 games and videos, as well as a variety of PS4 features including Share Play and Live from PlayStation. Users will also be able to enjoy 360 degree photos and videos that are captured by devices such as omnidirectional cameras on PS VR, via PS4 Media Player, allowing them to feel as if they are physically “inside” the captured scene.
PS VR features a panel resolution of 1920×1080 (960×1080 per eye), refresh rates of 120Hz/90Hz, a 100° field of view and 3D audio processing. Although somewhat less technically advanced than its rivals, the PS VR is considerably cheaper. It costs $200 less than the Oculus Rift and $400 less than HTC's Vive. The global market for head-mounted VR is set to explode over the next few years, according to forecasts.
“When Oculus and HTC announced their relative headset pricing, Sony was offered an open goal opportunity to take an early lead in the consumer VR market, which it has taken with aplomb,” said IHS analyst Piers Harding-Rolls. “Sony's walled garden approach to the PS4 platform means it is well placed to provide a better controlled and consistent VR experience to consumers. This will be important in driving adoption and positive word of mouth.”
14th March 2016
Viessmann launches the UK's first WiFi-enabled boiler
Leading international heating systems manufacturer Viessmann has announced it is introducing the first domestic heating boilers with WiFi and Internet connectivity.
At the EcoBuild exhibition and conference in London, international heating systems manufacturer Viessmann announced it is introducing the first domestic heating boilers with WiFi and Internet connectivity. This new technology will enable homeowners to control their heating and hot water settings from anywhere in the world that has a mobile phone or Internet connection, simply by using the new, free Viessmann Vicare app.
The app – available for Apple and Android smartphones, as well as the iPad, iPod touch and Android tablets – allows homeowners to set the boiler’s daily programme and to adjust the boiler’s functions, improving heating comfort and convenience and saving unnecessary energy costs. The app will also remind the homeowner and the homeowner’s chosen registered gas engineer when the boiler’s annual service is due.
Unlike other apps for domestic boilers, which communicate only with thermostats, Viessmann’s is the first to connect with the boiler and to continually monitor its performance. If a technical fault should develop, the app will automatically inform the gas engineer, with a diagnosis of the problem and list of the parts needed for rectification.
All new Viessmann Vitodens 100 and 200 gas condensing boilers, which go on sale in September, can be WiFi enabled. All 100 and 200 range models installed since 2007 can also be WiFi enabled retrospectively, with a £60 control accessory. Internet connectivity is achieved via Viessmann’s Vitocom 100 system, which connects to the homeowner’s WiFi.
Viessmann’s marketing director, Darren McMahon commented: “We’re living in an increasingly connected world where we expect to have all information at our fingertips, and the health and servicing needs of our domestic heating and hot water boilers should be no different. Internet-enabled boilers are a big step forward in boiler development and Viessmann is proud to be the first manufacturer to make this available.
“We’ve given homeowners increased security and the potential to save money and reduce their carbon footprint, whilst increasing peace of mind.”
5th March 2016
World's largest capacity SSD begins shipping with 15.36TB of storage
Electronics giant Samsung has announced that its new enterprise-grade solid state drive (SSD) – the "PM1633a" – is now shipping.
The SSD pictured above, first revealed at the Flash Memory Summit in August 2015, has a whopping 15.36 terabytes (TB) of storage. It offers sequential read and write speeds of up to 1,200MB/s – roughly twice those of a typical SATA SSD – while its random read IOPS performance is 1,000 times that of SAS-type "spinning" hard disks.
The PM1633a drive supports 1 DWPD (drive writes per day), which means 15.36TB of data can be written every day on this single drive, without failure – a level of reliability that will improve cost of ownership for enterprise storage systems. Because it comes in a 2.5-inch form factor, enterprise managers can fit twice as many of these drives in a standard 19-inch, 2U rack, compared to an equivalent 3.5-inch storage drive. These performance gains stem from Samsung’s latest vertical NAND (V-NAND) flash technology, as well as the company’s proprietary controller and firmware technology.
The dies are stacked in 16 layers to form a single 512GB package, with a total of 32 NAND flash packages in the drive. Using 3rd generation, 256-gigabit V-NAND which stacks cell-arrays in 48 layers, the PM1633a offers major performance and reliability upgrades from its predecessor, which used Samsung’s 2nd generation, 32-layer, 128Gb V-NAND memory.
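The raw capacity can be reconstructed from these figures. The gap between the roughly 16.4TB of raw flash and the usable capacity is typically set aside as spare area for wear management – that interpretation is our inference, not a Samsung figure.

```python
# Reconstructing the PM1633a's raw flash capacity from the figures above.
die_gbit = 256                  # one 3rd-generation, 48-layer V-NAND die
dies_per_package = 16
packages = 32

package_gb = die_gbit * dies_per_package / 8   # bits -> bytes: 512 GB per package
raw_tb = package_gb * packages / 1000

print(raw_tb)   # -> 16.384 TB of raw flash behind the drive's usable capacity
```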
“To satisfy an increasing market need for ultra-high-capacity SSDs from leading enterprise storage system manufacturers, we are directing our best efforts toward meeting our customers’ SSD requests,” said Jung-Bae Lee, senior vice president for memory products at Samsung Electronics. “We will continue to lead the industry with next-generation SSDs, using our advanced 3D V-NAND memory technology, in order to accelerate the growth of the premium memory market while delivering greater performance and efficiency to our customers.”
Consumer-level SSDs are not expected to reach this capacity until 2018, according to analysts' forecasts. By then, enterprise-grade models will have grown exponentially in size, reaching at least 128TB.
17th February 2016
Tiny crystal stores 360TB of data for billions of years
Scientists have announced a major step forward in creating "5D" data storage that can survive for billions of years.
Scientists at the University of Southampton, England, have achieved a major step forward in the creation of digital data storage that is capable of surviving for billions of years. Using nanostructured glass, researchers from the University's Optoelectronics Research Centre (ORC) have developed the recording and retrieval processes of five dimensional (5D) digital data by femtosecond laser writing.
The storage allows unprecedented properties including 360 terabytes (TB) per disc capacity, thermal stability up to 1,000°C and a virtually unlimited lifetime at room temperature (or 13.8 billion years at 190°C), opening a new era of eternal data archiving. As an extremely stable and safe form of portable memory, the technology could be highly useful in organisations with big archives, such as national archives, museums and libraries, to ensure their information and records are kept perfectly preserved.
The technology was first experimentally demonstrated in July 2013, when a simple 300 kb text file was recorded in 5D. Now, major documents from throughout human history – such as the Universal Declaration of Human Rights, Newton's Opticks, Magna Carta and the King James Bible – have been saved as digital copies that could outlast the human race.
The documents were recorded using an ultrafast laser, producing extremely short and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometres (a micrometre is one millionth of a metre). The self-assembled nanostructures change how light travels through the glass, modifying its polarisation, which is then read using a combination of an optical microscope and a polariser, similar to that found in Polaroid sunglasses.
Dubbed the "Superman memory crystal", as the glass memory has been compared to the "memory crystals" used in the Superman films, the medium stores data via self-assembled nanostructures created within fused quartz. The information encoding is realised in five dimensions: the size and orientation of these nanostructures, in addition to their three-dimensional position.
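The capacity gain from the two extra "dimensions" is straightforward: if each dot can take one of several distinguishable sizes and orientations, it stores log₂(sizes × orientations) bits instead of one. The counts below are hypothetical, chosen only to illustrate the arithmetic, not values from the Southampton paper.

```python
import math

# Hypothetical counts of distinguishable states per nanostructure.
sizes, orientations = 4, 8

bits_per_dot = math.log2(sizes * orientations)
print(bits_per_dot)   # -> 5.0 bits per dot, versus 1 for a conventional pit/land
```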
Professor Peter Kazansky, from the ORC, comments: "It is thrilling to think that we have created the technology to preserve documents and information and store it in space for future generations. This technology can secure the last evidence of our civilisation: all we've learnt will not be forgotten."
The researchers are presenting their research today at the International Society for Optical Engineering Conference in San Francisco, USA. Their invited paper is titled "Eternal 5D data storage by ultrafast laser writing in glass." The team are now looking for industry partners to further develop and commercialise their ground-breaking new technology.
16th February 2016
Virtual reality therapy could help people with depression
A new immersive virtual reality therapy could help people with depression to be less critical and more compassionate towards themselves, reducing depressive symptoms, finds a new study from University College London (UCL) and ICREA-University of Barcelona.
This new therapy, previously tested by healthy volunteers, was used by 15 depressed patients aged 23-61. Nine reported reduced depressive symptoms a month after the therapy, of whom four experienced a clinically significant drop in depression severity. The study is published in the British Journal of Psychiatry Open and was funded by the Medical Research Council.
Patients in the study wore a virtual reality headset to see from the perspective of a life-size 'avatar' or virtual body. Seeing this virtual body in a mirror moving in the same way as their own body typically produces the illusion that this is their own body. This is called 'embodiment'.
While embodied in an adult avatar, participants were trained to express compassion towards a distressed virtual child. As they talked to the child it appeared to gradually stop crying and respond positively to the compassion. After a few minutes, the patients were embodied in the virtual child and saw the adult avatar deliver their own compassionate words and gestures back to them. This brief, eight-minute scenario was repeated three times at weekly intervals and patients were followed up a month later.
"People who struggle with anxiety and depression can be excessively self-critical when things go wrong in their lives," explains study lead Professor Chris Brewin (UCL Clinical, Educational & Health Psychology). "In this study, by comforting the child and then hearing their own words back, patients are indirectly giving themselves compassion. The aim was to teach patients to be more compassionate towards themselves and less self-critical, and we saw promising results. A month after the study, several patients described how their experience had changed their response to real-life situations in which they would previously have been self-critical."
The study offers a promising proof-of-concept, but as a small trial without a control group it cannot show whether the intervention is responsible for the clinical improvement in patients.
"We now hope to develop the technique further to conduct a larger controlled trial, so that we can confidently determine any clinical benefit," says co-author Professor Mel Slater (ICREA-University of Barcelona and UCL Computer Science). "If a substantial benefit is seen, then this therapy could have huge potential. The recent marketing of low-cost home virtual reality systems means that methods such as this could potentially be part of every home and be used on a widespread basis."
22nd January 2016
Brain implant will connect a million neurons with superfast bandwidth
A neural interface being created by the United States military aims to greatly improve the resolution and connection speed between biological and non-biological matter.
The Defense Advanced Research Projects Agency (DARPA) – a branch of the U.S. military – has announced a new research and development program known as Neural Engineering System Design (NESD). This aims to create a fully implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world.
The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. A communications link would be achieved in a biocompatible device no larger than a cubic centimetre. This could lead to breakthrough treatments for a number of brain-related illnesses, as well as providing new insights into possible future upgrades for aspiring transhumanists.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” says Phillip Alvelda, program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among NESD’s potential applications are devices that could help restore sight or hearing, by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that communicate clearly and individually with any of up to one million neurons in a given region of the brain.
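The scale of the jump implied by those figures is easy to check with a back-of-the-envelope calculation. The 30,000 neurons-per-channel value below is an assumed midpoint of the article's "tens of thousands"; only the 100-channel and one-million-neuron figures come from the text.

```python
# Figures from the article (neurons_per_channel is an assumed midpoint).
current_channels = 100
neurons_per_channel = 30_000       # "tens of thousands" aggregated per channel
nesd_neurons = 1_000_000           # NESD goal: individually resolved neurons

# Today's interfaces touch many neurons, but blur them together per channel.
aggregated = current_channels * neurons_per_channel

print(f"Today: {current_channels} channels blur ~{aggregated:,} neurons")
print(f"NESD goal: {nesd_neurons:,} individually addressed neurons")
print(f"Channel-count gain: {nesd_neurons // current_channels:,}x")
```

In other words, NESD targets roughly a ten-thousand-fold increase in the number of independent channels, while also eliminating the per-channel blurring.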
To achieve these ambitious goals and ensure the technology is practical outside of a research setting, DARPA will integrate and work in parallel with numerous areas of science and technology – including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques, to transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent the data with minimal loss.
The NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping, manufacturing services and intellectual property. In later phases of the program, these partners could help transition the resulting technologies into commercial applications. DARPA will invest up to $60 million in the NESD program between now and 2020.
12th January 2016
World's first virtual reality rollercoaster
In a groundbreaking move that could revolutionise the world of theme parks, the UK's Alton Towers Resort announces today it is launching a rollercoaster entirely dedicated to virtual reality.
Set to open in April, Galactica is the world's first rollercoaster entirely customised for the full virtual reality experience, transforming riders into astronauts and plunging them into outer space with a G-force of 3.5 – stronger than the 3 G experienced in a real rocket launch.
The exhilarating new ride will combine the physical thrills and adrenaline rush of Alton Towers' iconic flying rollercoaster with the breathtaking sensation of travelling through space. Cutting-edge technology launches riders into a different world, complete with virtual space suits, stunning visuals and an exciting adventure. The visuals have been perfectly synchronised to the thrilling twists, turns and loops of the rollercoaster to recreate the sensation of hurtling through space. Visitors will ride in a prone position along the 840-metre (2,760 ft) track, recreating the feeling of flying.
Galactica's epic space theme is set to be hugely popular following Tim Peake's maiden voyage into space in December 2015. Stunning, high-quality visuals deliver an immersive experience that its designers claim is breathtakingly realistic. Each rider wears a modified Samsung Gear VR headset. Through this, an on-board artificial intelligence guides them from the launch pad up into space – flying and looping beyond the stars, banking through wormholes and speeding across distant galaxies, revealing the wonders of the cosmos in stunning clarity.
Commenting on the new attraction, Marketing Director Gill Riley says: "Galactica uses groundbreaking technology to give riders a breathtaking and completely unique rollercoaster experience. Tim Peake captured the imagination of millions of Brits last year when he set off on his mission to the International Space Station – and now our visitors can become astronauts too.
"There is nowhere else in the world that people can experience the feeling of a flying rollercoaster combined with soaring through the universe. For two minutes, our guests will be transported into space and we believe Galactica showcases the future for theme parks around the world – it's a complete game changer!"
4th December 2015
1,000-fold increase in 3-D imaging resolution
A new system developed by MIT can increase the resolution of conventional 3-D imaging devices by 1,000 times.
Researchers at the Massachusetts Institute of Technology (MIT) have shown that by exploiting the polarisation of light – the physical phenomenon behind polarised sunglasses and most 3-D movie systems – they can increase the resolution of conventional 3-D imaging devices by up to 1,000 times. This technique could lead to high-quality 3-D cameras built into smartphones, or the ability to snap photos of objects and then use 3-D printing to produce accurate replicas. Further out, the work may also improve the ability of driverless cars to see in rain, snow and other reduced-visibility conditions.
"Today, they can miniaturise 3-D cameras to fit on cellphones," says Achuta Kadambi, a PhD student in the MIT Media Lab and one of the system's developers. "But they make compromises to the 3-D sensing, leading to very coarse recovery of geometry. That's a natural application for polarisation, because you can still use a low-quality sensor, and adding a polarising filter gives you something that's better than many machine-shop laser scanners."
The researchers have described their new system – which they call Polarised 3D – in a paper to be presented at the International Conference on Computer Vision later this month.
Their experimental setup consisted of a Microsoft Kinect – which gauges depth using reflection time – combined with an ordinary polarising photographic lens placed in front of its camera. In each experiment, they took three photos of an object, rotating the polarising filter each time, and their algorithms compared the light intensities of the resulting images.
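Three photos at different polariser angles are enough because measured intensity varies sinusoidally with the filter angle. The sketch below illustrates that standard polarisation-imaging relation for a single pixel; it is a minimal textbook illustration under assumed names, not MIT's actual code.

```python
import math

def polarisation_from_three(i0, i45, i90):
    """Recover polarisation parameters from intensities at polariser
    angles 0, 45 and 90 degrees, using I(p) = A + B*cos(2p) + C*sin(2p)."""
    a = (i0 + i90) / 2.0          # A: unpolarised (average) component
    b = (i0 - i90) / 2.0          # B: cos(2p) coefficient
    c = i45 - a                   # C: sin(2p) coefficient
    dop = math.hypot(b, c) / a    # degree of polarisation
    phi = 0.5 * math.atan2(c, b)  # polarisation angle (radians)
    return dop, phi

# Synthetic check: simulate a pixel with known parameters, then recover them.
def intensity(p, a, dop, phi):
    return a * (1 + dop * math.cos(2 * (p - phi)))

a_true, dop_true, phi_true = 1.0, 0.4, 0.3
shots = [intensity(math.radians(p), a_true, dop_true, phi_true)
         for p in (0, 45, 90)]
dop_est, phi_est = polarisation_from_three(*shots)
```

The recovered polarisation angle constrains the orientation of the surface at that pixel, which is the extra geometric information the hybrid system adds to the Kinect's coarse depth map.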
On its own, at a distance of several metres, the Kinect can resolve physical features as small as a centimetre or so across. But with the addition of the polarisation information, the hybrid system was able to resolve features in the range of tens of micrometres: one-thousandth the size. For comparison, they also imaged several of their test objects with a high-precision laser scanner, which requires that the object be inserted into the scanner bed. Polarised 3D still offered the higher resolution.
A mechanically rotated polarisation filter would probably be impractical in a cellphone camera, but grids of tiny polarisation filters that can overlay individual pixels in a light sensor would work. The paper also offers the tantalising prospect that polarisation systems may help in the development of self-driving cars. Experimental self-driving vehicles of today are reliable under normal illumination conditions – but their vision algorithms go haywire in rain, snow, or fog, due to water particles in the air scattering light in unpredictable ways. Polarised 3D could exploit information contained in interfering waves of light to handle scattering.
Yoav Schechner, associate professor of electrical engineering, comments on the research: "The work fuses two 3-D sensing principles, each having pros and cons. One principle provides the range for each scene pixel – the state of the art for most 3-D imaging systems. The second principle does not provide range. On the other hand, it derives the object slope, locally. In other words, per scene pixel, it tells how flat or oblique the object is."
"The work uses each principle to solve problems associated with the other principle," Schechner explains. "Because this approach practically overcomes ambiguities in polarisation-based shape sensing, it can lead to wider adoption of polarisation in the toolkit of machine-vision engineers."
30th November 2015
Bitcoin debit card introduced in USA
Coinbase has unveiled the first US-issued Bitcoin debit card, accepted at over 38 million merchants worldwide. Meanwhile, one analyst firm predicts that Bitcoin will become the world's sixth largest reserve currency by 2030.
Bitcoin has been around for a while now. Launched worldwide in 2009, this form of digital money isn't controlled or issued by any bank or government and isn't pegged to the value of any other currency. Instead, it works as a decentralised virtual currency with an open network managed by its users. Fast, secure and reliable, Bitcoin is designed for the Internet age – allowing the transfer of any amount of money to anyone in the world, without needing a bank.
An increasing number of large online businesses now accept Bitcoin payments, including Dell, Expedia, Google, OkCupid, Paypal, Reddit and many others. It has drawn the support of some politicians, notably U.S. Presidential candidate Rand Paul, who accepts donations in Bitcoin.
Although growing in popularity, Bitcoin is still difficult to use for regular day-to-day purchases in the USA, such as buying petrol from a station, groceries at a neighbourhood store, or a meal at a restaurant. That could be about to change, however, thanks to a new Bitcoin debit card. Known as the Shift Card, it functions like a normal VISA debit card – allowing users in 24 states across the USA to spend their virtual money both online and offline at over 38 million merchants around the world. It can also be used to withdraw cash from an ATM, with funds taken from the person's online Bitcoin balance rather than a bank account, although this incurs a fee. An accompanying Shift mobile app enables users to check account balances and transaction details, or easily add and edit account information for quick selection at the time of payment.
Shift Card has been developed by Coinbase, a Bitcoin wallet and exchange company founded in 2012 and headquartered in San Francisco. It operates exchanges between Bitcoin and fiat currencies in 32 countries, and Bitcoin transactions and storage in 190 countries worldwide. Coinbase and Shift are working through legal and regulatory issues to make the card available throughout all 50 states of the USA.
"At the end of the day, what we're trying to do is make Bitcoin easy to use," says Adam White, vice president at Coinbase. "We want to make it easy to buy and sell Bitcoin, and we want to make it easy to spend. A mainstream debit card based on Bitcoin is a key element."
"It's now possible to live on Bitcoin alone, through a combination of an employer paying the user in Bitcoin and the user spending Bitcoin for everyday items via their debit card," says White.
In a related story, UK-based Magister Advisors has predicted that Bitcoin will become the world's sixth largest reserve currency by 2030. According to their survey, banks and financial institutions are willing to spend around $1 billion on developing blockchain technology over the next two years.
"Blockchain technology will underpin a growing number of routine transactions globally as trust grows," said Jeremy Millar, a partner at Magister Advisors, in a statement released by the organisation. "Our interviews with 30 of the leading Bitcoin companies worldwide cement our view that the currency is gaining traction. Growing vendor acceptance and the adoption of Bitcoin in developing markets are creating a pincer movement that will lead to widespread business and consumer acceptance and adoption over time."