Aaron Council is founder and CEO of the popular Gyges3D.com online community. He has published a series of books that explore 3D printing across social, political and economic landscapes and seeks to promote discussion about the importance of this evolving technology. Michael Petch is the founder of Black Dog Consulting, which provides strategic advice, media solutions and start-up services for a range of companies operating in the new technology sphere, including crypto-currency, 3D printing and innovative mobile app developers.
3D Printing: Rise of the 3rd Industrial Revolution explores what is arguably the single most important technology to arrive in recent years. Going beyond the headline-grabbing stories of 3D printed guns, this new book graphically illustrates how 3D printing will change the world. The authors thoroughly examine the history, the current market and the future.
Themes explored include how 3D printing is used in next-generation games consoles, such as the Xbox One, and how a robot can be created by combining these technologies. The impact of 3D printing on medicine and healthcare is covered in depth – including how 3D printing will allow drugs to be downloaded from the Internet and printed using common household materials.
Credit: Nanoscribe GmbH
The philosophy behind 3D printing is examined in clear English and the authors point out how 3D printing is likely to change the current economic system for the better. The importance of the technology for the future of society and how it will create jobs in both the U.S. and the developing world is given a detailed chapter. The political and social implications – such as a reduction in materialism and even an end to conflict – are also explored.
"Reality is stranger than science fiction," the authors say. "3D printing will create future spaceships and create robots to manufacture off-world colonies in orbit and on new planets."
3D Printing: Rise of the 3rd Industrial Revolution contains thought-provoking material for both experienced 3D printing enthusiasts and those new to the subject alike.
Download a free copy from Amazon (for a limited time only).
Controlled by facial expressions, this new "wearable ear PC" can perform a range of functions such as opening apps and monitoring users' health. With a built-in GPS, compass, gyro-sensor, speaker and microphone, the device weighs just 17g. Its creator, Japanese engineer Kazuhiro Taniguchi, says the device will be launched in 2016.
Flash memory manufacturer SanDisk has announced the first microSD card to reach 128GB capacity. Originally scheduled for release in 2012, the card was delayed because the company had major problems fitting such a huge amount of storage into so small a form factor. To overcome these technical issues, SanDisk developed an innovative proprietary technique allowing 16 memory die to be vertically stacked, each shaved to be thinner than a strand of hair. The card delivers twice the speed of ordinary microSD memory cards and offers the highest video recording performance available. SanDisk is displaying the new product at Mobile World Congress, being held this week in Barcelona.
"The technology used to design the 128GB Ultra microSDXC card is well in line with what mobile users expect, and demonstrates SanDisk's commitment to mobility," said Christopher Chute at Worldwide Digital Imaging, IDC. "Being able to fit this much capacity into a microSD card smaller than a fingernail is a game changer, and expands the possibilities of what people can do with their mobile devices. The 128GB Ultra microSDXC card frees users from constant concerns around storage limitations."
Data storage has come a long way during the last 60 years. The RAMAC 305 – the first computer with a hard disk drive – was a wardrobe-sized machine weighing over a ton, with a capacity of just 4.4 MB.
Researchers at IBM Labs have achieved a new technological breakthrough enabling data transfer rates of up to 400 Gigabits per second (Gb/s) at extremely low power.
Photo Credit: IBM Research
The device pictured above is a new, ultra-fast, energy efficient analog-to-digital converter (ADC) presented this week at the International Solid-State Circuits Conference (ISSCC) in San Francisco. It can transfer Big Data between clouds and data centres four times faster than current technology. At this speed, 160 Gigabytes – or the equivalent of a two-hour, 4K ultra-high definition movie – could be downloaded in only a few seconds.
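The quoted download time is easy to verify with a back-of-the-envelope calculation (a minimal sketch using the figures from the text, ignoring protocol overhead):

```python
# Time to transfer 160 gigabytes over a 400 Gb/s link,
# converting bytes to bits (1 byte = 8 bits).
size_gigabytes = 160
link_gbps = 400                      # link speed in gigabits per second

size_gigabits = size_gigabytes * 8   # 1,280 gigabits
seconds = size_gigabits / link_gbps

print(seconds)  # 3.2 seconds -- "only a few seconds", as stated
```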
While only a lab prototype, a previous version of the design has been licensed to Semtech Corp, a leading supplier of analog and mixed-signal semiconductors. The company is using this technology to develop advanced communications platforms expected to be announced later this year.
As Big Data and Internet traffic continue to grow exponentially, future networking standards will have to support higher data rates. In 1992, for example, 100 gigabytes of data was transferred per day, whereas today, that figure has grown to over two exabytes daily, a 20 million-fold increase.
To support the increase in traffic, scientists at IBM Research and Ecole Polytechnique Fédérale de Lausanne (EPFL) have been developing ADC technology to enable complex digital equalization across long-distance fiber channels. An ADC converts analog signals to digital, approximating the right combination of zeros and ones to digitally represent data so it can be stored on computers and analysed for patterns.
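The "right combination of zeros and ones" an ADC searches for is an n-bit code per sample. A minimal sketch of uniform quantization illustrates the idea (illustrative only – real high-speed ADCs use far more sophisticated architectures):

```python
def quantize(voltage, bits=8, vmin=-1.0, vmax=1.0):
    """Map an analog voltage to the nearest n-bit digital code."""
    levels = 2 ** bits
    # Clamp the input to the converter's range.
    v = min(max(voltage, vmin), vmax)
    # Scale to [0, levels - 1] and round to the nearest code.
    return round((v - vmin) / (vmax - vmin) * (levels - 1))

# A 3-bit converter has only 8 codes across its full input range:
print([quantize(v, bits=3) for v in (-1.0, -0.3, 0.4, 1.0)])
```

More bits give a finer approximation of the analog waveform; the engineering challenge IBM addresses is doing this billions of times per second at very low power.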
For example, the Square Kilometre Array (SKA), an international project to build the world’s largest and most sensitive radio telescope, will use hundreds of thousands of ADCs to convert the analog radio signals originating from the Big Bang. The radio data that the SKA collects from deep space is expected to produce 10 times the global internet traffic and the prototype ADC would be an ideal candidate to transport its signals fast and at very low power – a critical requirement, considering the thousands of antennas that will be spread over 3,000 kilometres (1,900 miles).
Image used with permission from Jo Bowler, SKA Program Development Office, Jodrell Bank Centre for Astrophysics.
Dr. Martin Schmatz, IBM Research: “Our ADC supports IEEE standards for data communication and brings together speed and energy efficiency at 32 nanometres, enabling us to start tackling the largest Big Data applications. With Semtech as our partner, we are bringing our previous generation of the ADC to market less than 12 months since it was first developed and tested.”
Semtech signed a non-exclusive technology licensing agreement, including access to patented designs and technological know-how, with IBM to develop the technology for its own family of products, ranging from optical communications to advanced radar systems.
Craig Hornbuckle, from Semtech: “Through leveraging the IBM 32nm SOI process with its unique feature set, we are developing products that are well-suited for meeting the challenge presented by the next step in high performance communications systems, such as 400 Gb/s Optical systems and Advanced Radar systems. We are also seeing an expanding range of applications in the existing radio frequency communications marketplace where high-speed digital logic is replacing functions that have been traditionally performed by less flexible analog circuitry.”
Stratasys Ltd. has announced the launch of its ground-breaking Objet500 Connex3, the first and only machine to combine colours with multi-material 3D printing.
A game-changer for product design, engineering and manufacturing processes, the Objet500 Connex3 features a unique triple-jetting technology. This combines droplets of three base materials to produce parts with virtually unlimited combinations of rigid, flexible, and transparent colour materials as well as colour digital materials – all in a single print run.
This ability to achieve the characteristics of an assembled part without assembly or painting is a significant time-saver, helping manufacturers to validate designs and make decisions earlier before committing to manufacturing, and bringing products to market 50% faster.
"Stratasys' goal is to help our customers revolutionise their design and manufacturing processes," says Stratasys CEO, David Reis. "I believe our new Objet500 Connex3 Colour Multi-material 3D Printer will transform the way our customers design, engineer and manufacture new products. In general and with Connex technology in particular, we will continue to push the envelope of what's possible in a 3D world."
Much as a 2D inkjet printer combines inks, three colour materials – VeroCyan, VeroMagenta and VeroYellow – are combined to produce hundreds of vivid colours. These colour materials join Stratasys' extensive range of PolyJet photopolymer materials, including digital materials, rigid, rubber-like, transparent, and high temperature materials to simulate both standard and higher temperature engineering plastics.
The Objet500 Connex3 also features six palettes for new rubber-like Tango colours, ranging from opaque to transparent in various Shore hardness values, to address markets such as automotive, consumer and sporting goods, and fashion.
Stratasys VP of product marketing and sales operations, Igal Zeitun: "As the first true multi-purpose 3D printer, we believe the Objet500 Connex3 is in a league of its own – enabling you to dream up a product in the morning, and hold it in your hands by the afternoon, with the exact intended colour, material properties and surface finish."
Duncan Wood, publisher of 3D printing magazine TCT, told the BBC: "This is groundbreaking stuff. Being able to produce single products incorporating materials of different rigidity and colour has been the holy grail of 3D printing to date. This is industrial-grade technology that will afford designers a level of creativity they've never had before."
Globally, an estimated 285 million people have diabetes – a chronic disease that occurs when the pancreas does not produce enough insulin, or when the body cannot effectively use the insulin it produces. Its incidence is growing rapidly, and by 2030, the number of cases is predicted to almost double. By 2050, as many as one in three U.S. adults could be affected if current trends continue.
To keep their blood sugar levels under control, sufferers need to constantly monitor themselves. This can involve pricking their finger to get a blood sample, two to four times per day. For many people, managing this condition is therefore a painful and disruptive process.
To address this problem, Internet giant Google has announced it is developing a smart contact lens. This wearable tech will measure glucose levels in tears, using a tiny wireless chip and miniaturised sensor, embedded between two layers of soft contact lens material. When glucose levels fall below a certain threshold, tiny LED lights will activate themselves to function as a warning system for the wearer.
Google admits it is still "early days" for this technology, but there is clearly great potential for improving the lives of diabetes sufferers around the world. To achieve its goal, the company intends to partner with other technology companies that have previous experience of bringing products like this to market. You can read more at the Google Official Blog.
Headquartered in New York City's "Silicon Alley", the new Watson Group formed by IBM will fuel innovative products and startups – introducing cloud solutions to accelerate research, visualise Big Data and enable analytics exploration.
IBM today announced it will establish the IBM Watson Group, a new business unit dedicated to the development and commercialisation of cloud-delivered cognitive innovations. The move signifies a strategic shift by IBM to accelerate into the marketplace a new class of software, services and apps that can "think", improve by learning, and discover answers and insights to complex questions from massive amounts of Big Data.
IBM will invest more than $1 billion into the Watson Group, focusing on research and development to bring cloud-delivered cognitive applications and services to market. This will include $100 million available for venture investments to support IBM's recently launched ecosystem of start-ups and businesses, which are building a new class of cognitive apps powered by Watson, in the IBM Watson Developers Cloud.
According to technology research firm Gartner, smart machines will be the most disruptive change ever brought about by information technology, and can make people more effective, empowering them to do "the impossible."
The IBM Watson Group will have a new headquarters at 51 Astor Place in New York City's "Silicon Alley" technology hub, leveraging the talents of 2,000 professionals, whose goal is to design, develop and accelerate the adoption of Watson cognitive technologies that transform industries and professions. The new group will tap subject matter experts from IBM's Research, Services, Software and Systems divisions, as well as industry experts who will identify markets that cognitive computing can disrupt and evolve, such as healthcare, financial services, retail, travel and telecommunications.
Nearly three years after its triumph on the TV show Jeopardy!, IBM has advanced Watson from a quiz game innovation into a commercial technology. Now delivered from the cloud and powering new consumer apps, Watson is 24 times faster and 90 percent smaller – IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes.
Named after IBM founder Thomas J. Watson, the machine was developed in IBM’s Research labs. Using natural language processing and analytics, Watson handles information akin to how people think, representing a major shift in the ability to quickly analyse, understand and respond to Big Data. Watson’s ability to answer complex questions in natural language with speed, accuracy and confidence will transform decision making across a range of industries.
"Watson is one of the most significant innovations in IBM's 100 year history, and one that we want to share with the world," says IBM Senior Vice President Mike Rhodin (pictured below), who will lead the group. "These new cognitive computing innovations are designed to augment users’ knowledge – be it the researcher exploring genetic data to create new therapies, or a business executive who needs evidence-based insights to make a crucial decision."
A Polish company – Better Reality – has been developing a new graphics platform known as "Thorskan". This is able to scan real environments and recreate them in 3D with spectacular resolution and detail. Though it is mainly being used by advertisers and Hollywood studios like 20th Century Fox, Better Reality says it can also work in video games.
Another Polish company, The Farm 51, has indeed been using Thorskan to create a new PC/XB1/PS4 game – Get Even – that is set for release in 2015. This will feature "ambitious new dynamic and photorealistic graphics", a taste of which can be found in the preview below. A trailer was also released yesterday. We recommend watching in full screen HD.
At the Consumer Electronics Show (CES) in Las Vegas, Intel Corporation has been showing off its latest innovative technologies. These include an intelligent 3D camera system, a range of new wearable electronics, and a 22nm dual-core PC the size of an SD card.
Intel CEO Brian Krzanich has outlined a range of new products, initiatives and strategic relationships aimed at accelerating innovation across a range of mobile and wearable devices. He made the announcements during the pre-show keynote for the 2014 Consumer Electronics Show in Las Vegas, the biggest gathering of the tech industry in the USA.
Krzanich's keynote painted a vision of how the landscape of computing is being re-shaped, with security now too important not to be embedded in every device. The world is entering a new era of integrated computing defined not by the device, but by the integration of technology into people's lifestyles in ways that offer new utility and value. As examples, he highlighted several immersive and intuitive technologies that Intel will begin offering in 2014, such as Intel RealSense – hardware and software that will bring human senses to Intel-based devices. This will include 3D cameras that deliver more intelligent experiences – improving the way people learn, collaborate and are entertained.
The first Intel RealSense 3D camera features a best-in-class depth sensor and a full 1080p colour camera. It can detect finger-level movements, enabling highly accurate gesture recognition, and can read facial features to understand movement and emotions. It can understand foregrounds and backgrounds to allow control, enhance interactive augmented reality (AR), simply scan items in three dimensions, and more.
This camera will be integrated into a growing spectrum of Intel-based devices including 2 in 1, tablet, Ultrabook, notebook, and all-in-one (AIO) designs. Systems with the new camera will be available beginning in the second half of 2014 from Acer, Asus, Dell, Fujitsu, HP, Lenovo and NEC.
To advance the computer's "hearing" sense, a new generation of speech recognition technology will be available on a variety of systems. This conversational personal assistant works with popular websites and applications. It comes with selectable personalities, and allows for ongoing dialogue with Intel-based devices. People can simply tell it to play music, get answers, connect with friends and find content – all by using natural language. The assistant is also capable of calendar checks, getting maps and directions, finding flights or booking a dinner reservation. Because it also works offline, people can control their device, dictate notes and more without an Internet connection.
Krzanich then explained how Intel aims to accelerate wearable device innovation. A number of reference designs were highlighted including: smart earbuds providing biometric and fitness capabilities, a smart headset that is always ready and can integrate with existing personal assistant technologies, a smart wireless charging bowl, a smart baby onesie and a smart bottle warmer that will start warming milk when the onesie senses the baby is awake and hungry.
The smart earbuds (pictured below) provide full stereo audio and monitor heart rate and pulse, while the applications on the user's phone keep track of running distance and calories burned. The product includes software to precision-tune workouts by automatically choosing music that matches the target heart rate profile. As an added bonus, it harvests energy directly from the audio microphone jack, eliminating the need for a battery or additional power source to charge the product.
The Intel CEO announced collaborations to increase dialogue and cooperation between fashion and technology industries to explore and bring to market new smart wearable electronics. He also kicked-off the Intel "Make it Wearable" challenge – a global effort aimed at accelerating creativity and innovation with technology. This effort will call upon the smartest and most creative minds to consider factors impacting the proliferation of wearable devices and ubiquitous computing, such as meaningful usages, aesthetics, battery life, security and privacy.
In addition to reference designs for wearable technology, Intel will offer a number of accessible, low-cost entry platforms aimed at lowering entry barriers for individuals and small companies, allowing them to create innovative web-connected wearables or other small form factor devices. Underscoring this point, Krzanich announced Intel Edison – a low-power, 22nm-based computer in an SD card form factor with built-in wireless abilities and support for multiple operating systems. From prototype to production, Intel Edison will enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when available this summer.
"Wearables are not everywhere today, because they aren't yet solving real problems and they aren't yet integrated with our lifestyles," said Krzanich. "We're focused on addressing this engineering innovation challenge. Our goal is: if something computes and connects, it does it best with Intel inside."
Krzanich also discussed how Intel is addressing a critical issue for the industry as a whole: conflict minerals from the Democratic Republic of the Congo (DRC). Intel has achieved a critical milestone: the minerals used in microprocessor silicon and packages manufactured in Intel's factories are now "conflict-free", as confirmed by third-party audits.
"Two years ago, I told several colleagues that we needed a hard goal, a commitment to reasonably conclude that the metals used in our microprocessors are conflict-free," Krzanich said. "We felt an obligation to implement changes in our supply chain to ensure that our business and our products were not inadvertently funding human atrocities in the Democratic Republic of the Congo. Even though we have reached this milestone, it is just a start. We will continue our audits and resolve issues that are found."
Online retailer Amazon has revealed a new rapid delivery method that will use unmanned aerial vehicles to send packages to customers within 30 minutes. Assuming the Federal Aviation Administration (FAA) approves it, this futuristic service – "Amazon Prime Air" – could be introduced by 2015. Read more at the company's press release.
Tianhe-2, a Chinese supercomputer, has retained its position as the world's no. 1 system with 33.86 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark – according to the latest TOP500 list of the world's most powerful supercomputers.
The 42nd edition of the twice-yearly TOP500 list was announced yesterday at the SC13 conference in Denver, Colorado. While a typical desktop PC has four cores, Tianhe-2 (which means “Milky Way 2”) features 3,120,000, based on Intel's "Ivy Bridge" 22 nanometre processors. It has 1,024,000 gigabytes of random-access memory (RAM), 12.4 petabytes of storage space and needs 17,800 kilowatts (kW) of electricity to work. Including external cooling, it requires 24,000 kW. The entire complex occupies 720 square metres of floor space and cost 2.4 billion Yuan (US$390 million).
China’s National University of Defence Technology (NUDT) – which developed Tianhe-2 – says it will be offered as a "research and education" tool once tests are completed. Local reports suggest that the car industry is a "priority" client, so it may be useful in complex engine simulations, for example, or devising new materials and more efficient components.
Titan – installed at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory – remains the no. 2 system, achieving 17.59 petaflop/s on the Linpack benchmark. Titan is among the most energy efficient systems on the list, consuming a total of 8.21 MW of electrical power and delivering 2.14 gigaflops per watt, compared to 1.9 for Tianhe-2.
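The efficiency figures follow directly from sustained performance divided by power draw – a quick check of the numbers above:

```python
# Energy efficiency = sustained performance / power draw.
# 1 petaflop/s = 1,000,000 gigaflop/s; 1 MW = 1,000,000 W.
def gflops_per_watt(petaflops, megawatts):
    return (petaflops * 1e6) / (megawatts * 1e6)

titan = gflops_per_watt(17.59, 8.21)     # ~2.14 GF/W
tianhe2 = gflops_per_watt(33.86, 17.8)   # ~1.90 GF/W
print(round(titan, 2), round(tianhe2, 2))
```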
Sequoia, an IBM BlueGene/Q system installed at the DOE’s Lawrence Livermore National Laboratory, is the no. 3 system. First delivered in 2011, Sequoia reached 17.17 petaflop/s on the Linpack benchmark.
In all, there are 31 systems with performance greater than a petaflop/s on the list, an increase of five compared to the June 2013 list. Intel continues to provide the processors for the largest share (82.4 percent) of TOP500 systems.
Although China holds the no.1 spot, the U.S. is clearly the leading consumer of supercomputers, with 265 of the top 500 systems (253 last time). The European share (102 systems compared to 112 last time) is still lower than the Asian share (115 systems, down from 118 last time).
Like many forms of information technology, the growth of supercomputing power has followed a remarkably smooth and consistent trend. As shown in the graph below, we can expect to see the first exaflop machine by 2019. An exaflop is 1,000,000,000,000,000,000 (a million trillion, or a quintillion) calculations per second. Such computing power will be invaluable to researchers – providing faster and more accurate simulations of climate, weather, astrophysics, genetics, molecular dynamics and many other processes. Zettaflop machines could emerge by 2030.
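The 2019 estimate can be reproduced by simple extrapolation, assuming the long-run TOP500 trend of roughly a tenfold performance increase every 3.6 years continues (an illustrative growth rate, not a guarantee):

```python
import math

# Extrapolate from Tianhe-2 (33.86 petaflop/s in 2013) to 1 exaflop/s
# (1,000 petaflop/s), assuming performance grows ~10x every 3.6 years.
current_pf, target_pf, base_year = 33.86, 1000.0, 2013
years_needed = 3.6 * math.log10(target_pf / current_pf)
print(base_year + years_needed)  # lands in the 2018-2019 range
```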
As scientists develop the next wave of smartwatches and other wearable computing, they might want to continue focusing their attention on the arms and the wrists. According to a recent study, portable electronic devices placed on the collar, torso, waist or legs may cause awkwardness, embarrassment or strange looks.
In a paper titled “Don’t Mind Me Touching My Wrist,” Georgia Tech researchers reported a case study of interaction with on-body technology in public. Specifically, they surveyed people in both the United States and South Korea to gain cultural insights into perceptions of e-textiles, or electronic devices, stitched into everyday clothing.
For the study, researchers directed participants to watch videos of people receiving incoming alerts from e-textile interfaces on various parts of their body, including wrists, forearms, collarbones, torsos, waists and front pant pockets. They were asked to describe their thoughts about the interaction (such as whether it appeared normal, silly or awkward) and its placement on the body.
In general, the study found that in both countries, the wrist and forearm were the most preferred locations for e-textiles, as well as the most normal placement when watching someone use the devices.
“This may be due to the fact that these locations are already being used for wearable technology,” said Halley Profita, former Georgia Tech industrial design graduate student, who led the study. “People strap smartphones or MP3 players to their arms while exercising. Runners wear GPS watches.”
According to the study:
Gender of the technology user affected opinions about the interaction. For example, Americans were uncomfortable when men used a device located at the front pant pocket region or when women reached for their torsos or collarbones.
South Koreans reported exceptionally low acceptance of women using the devices anywhere except for their arms.
Respondents expressed differing views on the most important factors in deciding how to use e-textiles. Americans focused on ease of operation and accessibility; South Koreans raised personal perception issues.
“The South Koreans also said they wanted an easy-to-use system – but the technology should not make them look awkward or weird,” Profita said. “This isn’t surprising, because their culture emphasises modesty, politeness and avoidance of embarrassing situations.”
The findings were presented at the International Symposium on Wearable Computing, held in Switzerland.
The ability to shrink laboratory-scale processes to automated chip-sized systems would revolutionise biotechnology and medicine. For example, inexpensive and highly portable devices that process blood samples to detect biological agents, such as anthrax, are needed by the U.S. military and for homeland security efforts.
A microfluidic bioreactor. Credit: Adam Fenster/University of Rochester.
One of the challenges of "lab-on-a-chip" technology is the need for miniaturised pumps to move solutions through micro-channels. Electroosmotic pumps (EOPs) — devices in which fluids appear to magically move through porous media in the presence of an electric field — are ideal, because they can be readily miniaturised. EOPs, however, require bulky, external power sources, which defeats the concept of portability. But a super-thin silicon membrane developed at the University of Rochester could now make it possible to drastically shrink the power source, paving the way for new diagnostic devices the size of a credit card.
"Up until now, electroosmotic pumps have had to operate at a very high voltage, about 10 kilovolts," said James McGrath, associate professor of biomedical engineering. "Our device works in the range of one-quarter of a volt, which means it can be integrated into devices and powered with small batteries."
McGrath's research paper is published this week by the journal Proceedings of the National Academy of Sciences.
McGrath and his colleagues use porous nanocrystalline silicon (pnc-Si) membranes that are microscopically thin – it takes more than one thousand stacked on top of each other to equal the width of a human hair. And that's what allows for a low-voltage system.
A porous membrane needs to be placed between two electrodes in order to create what's known as electroosmotic flow, which occurs when an electric field interacts with ions on a charged surface, causing fluids to move through channels. Membranes previously used in EOPs have resulted in a significant voltage drop between the electrodes, forcing engineers to begin with bulky, high-voltage power sources. The thin pnc-Si membranes allow the electrodes to be placed much closer to each other, creating a much stronger electric field with a much smaller drop in voltage. As a result, a smaller power source is needed.
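The advantage follows from the definition of field strength, E = V/d: shrinking the electrode spacing raises the field in proportion, for the same voltage. A sketch with illustrative numbers (the spacings below are hypothetical, chosen only to show the scaling, not taken from the paper):

```python
# Electric field strength between two parallel electrodes: E = V / d.
def field_strength(volts, metres):
    return volts / metres

# Conventional EOP: 10 kV across electrodes ~10 cm apart (illustrative).
conventional = field_strength(10_000, 0.10)     # ~1e5 V/m

# Ultrathin membrane: 0.25 V across ~50 nm spacing (illustrative).
thin_membrane = field_strength(0.25, 50e-9)     # ~5e6 V/m

# The thin membrane sustains a far stronger field at a tiny voltage.
print(thin_membrane / conventional)             # roughly 50x stronger
```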
"Until now, not everything associated with miniature pumps was miniaturised," said McGrath. "Our device opens the door for a tremendous number of applications."
Along with medical applications, it's been suggested that EOPs could be used to cool electronic devices. As electronic devices get smaller, components are packed more tightly, making it easier for the devices to overheat. With miniature power supplies, it may be possible to use EOPs to help cool laptops and other portable electronic devices.
McGrath said there's one other benefit to the silicon membranes. "Due to scalable fabrication methods, the nanocrystalline silicon membranes are inexpensive to make and can be easily integrated on silicon or silica-based microfluidic chips."
A new software algorithm is capable of solving CAPTCHAs – a test commonly used in computing to determine whether or not the user is human.
Vicarious, a startup developing artificial intelligence software, has announced that its algorithms can now reliably solve modern CAPTCHAs, including Google's reCAPTCHA, the world's most widely used test of a machine's ability to act human.
A CAPTCHA (which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart") is considered broken if an algorithm is able to achieve a precision of at least 1%. Leveraging core insights from machine learning and neuroscience, the Vicarious AI can achieve success rates of up to 90% on modern CAPTCHAs from Google, Yahoo, PayPal, Captcha.com, and others. This advancement, the company says, renders text-based CAPTCHAs no longer effective as a Turing test.
"Recent AI systems like IBM’s Watson and deep neural networks rely on brute force: connecting massive computing power to massive datasets. This is the first time this distinctively human act of perception has been achieved, and it uses relatively minuscule amounts of data and computing power. The Vicarious algorithms achieve a level of effectiveness and efficiency much closer to actual human brains", said Vicarious co-founder D. Scott Phoenix.
"Understanding how the brain creates intelligence is the ultimate scientific challenge. Vicarious has a long-term strategy for developing human level artificial intelligence, and it starts with building a brain-like vision system. Modern CAPTCHAs provide a snapshot of the challenges of visual perception, and solving those in a general way required us to understand how the brain does it", said co-founder Dr. Dileep George.
Solving CAPTCHA is the first public demonstration of Recursive Cortical Network (RCN) technology. Although commercial applications are still many years away, RCN will have broad implications for robotics, medical image analysis, image and video search, and many other fields.
"We should not underestimate the significance of Vicarious crossing this milestone," said Facebook co-founder and board member Dustin Moskovitz. "This is an exciting time for artificial intelligence research, and they are at the forefront of building the first truly intelligent machines."
By altering the friction encountered by a person's fingertip, a new algorithm developed by Disney can create the perception of a 3D bump on a touch surface, without having to physically move the surface. This method can be used to simulate the feel of a wide variety of objects and textures.
The algorithm is based on a discovery that when a person slides their finger over a real physical bump, the person perceives the bump largely because lateral friction forces stretch and compress skin on the sliding finger.
"Our brain perceives the 3D bump on a surface mostly from information that it receives via skin stretching," said Ivan Poupyrev, who directs Disney Research in Pittsburgh. "Therefore, if we can artificially stretch skin on a finger as it slides on the touch screen, the brain will be fooled into thinking an actual physical bump is on a touch screen, even though the touch surface is completely smooth."
In their experiments, the researchers used electrovibration to modulate the friction between the sliding finger and the touch surface with electrostatic forces. They created and validated a "psycho-physical model" that closely simulates friction forces perceived by the human finger when it slides over a real bump.
The model was then incorporated into an algorithm that dynamically modulates the frictional forces on a sliding finger so that they match the tactile properties of the visual content displayed on the touch screen along the finger's path. A wide variety of visual artifacts thus can be dynamically enhanced with tactile feedback that adjusts as the display changes.
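Disney has not published the equations behind its psycho-physical model, but the core idea – map the local slope of a virtual bump to a lateral friction force on the sliding finger – can be sketched in a few lines. Everything below (the Gaussian bump shape, the gains and the clamping) is an invented illustration, not Disney's actual model:

```python
import math

def bump_height(x, width=1.0, amplitude=1.0):
    """Height of a virtual 1-D Gaussian bump centred at x = 0 (illustrative shape)."""
    return amplitude * math.exp(-(x / width) ** 2)

def bump_slope(x, width=1.0, amplitude=1.0, eps=1e-6):
    """Numerical derivative of the bump profile at finger position x."""
    return (bump_height(x + eps, width, amplitude) -
            bump_height(x - eps, width, amplitude)) / (2 * eps)

def friction_command(x, normal_force=0.5, base_friction=0.1, gain=1.0):
    """Lateral friction force to render at position x (finger moving in +x).

    Climbing the bump (positive slope) should feel like extra resistance;
    descending should feel like less. The result is clamped at zero because
    electrovibration can only add friction, never push the finger forward.
    """
    f = base_friction + gain * normal_force * bump_slope(x)
    return max(f, 0.0)
```

Because the commanded friction tracks the gradient of whatever height map is currently on screen, the same loop works for dynamic visual content: as the displayed feature moves or rescales, the slope under the finger – and hence the rendered force – changes with it.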
"The traditional approach to tactile feedback is to have a library of canned effects that are played back whenever a particular interaction occurs," said Ali Israr, who was the research lead on the project. "This makes it difficult to create a tactile feedback for dynamic visual content, where the sizes and orientation of features constantly change. With our algorithm we do not have one or two effects, but a set of controls that make it possible to tune tactile effects to a specific visual artifact on the fly."
"Touch interaction has become the standard for smartphones, tablets and even desktop computers, so designing algorithms that can convert the visual content into believable tactile sensations has immense potential for enriching the user experience," Poupyrev said. "We believe our algorithm will make it possible to render rich tactile information over visual content and that this will lead to new applications for tactile displays."
Disney will present its findings at the ACM Symposium on User Interface Software and Technology, held this week in St Andrews, Scotland. By impact factor, it is the leading conference in the field of human-computer interaction.
Researchers at Cambridge University have developed a new technique allowing carbon nanotube "forests" to be grown at five times the density of previous methods.
Scanning electron microscope images of CNT forests, low and high density.
Carbon nanotubes' outstanding mechanical, electrical and thermal properties make them an alluring material to electronics manufacturers. Until recently, however, scientists believed that growing the high density of tiny graphene cylinders needed for many microelectronics applications would be difficult.
Now a team from Cambridge University in England has devised a simple technique to increase the density of nanotube forests grown on conductive supports about five times over previous methods. The high density nanotubes might one day replace some metal electronic components, leading to faster devices.
"The high density aspect is often overlooked in many carbon nanotube growth processes, and is an unusual feature of our approach," says John Robertson, a professor in the electronic devices and materials group in the department of engineering at Cambridge. High-density forests are necessary for certain applications of carbon nanotubes, such as electronic interconnects and thermal interface materials.
Robertson and his colleagues grew carbon nanotubes on a conductive copper surface coated with the co-catalysts cobalt and molybdenum. In a novel approach, they grew the nanotubes at a lower temperature than is typical, making the process compatible with semiconductor manufacturing. X-ray photoelectron spectroscopy analysis of the interacting metals revealed that they create a more supportive substrate for the forests to root in. The subsequent nanotube growth exhibited the highest mass density reported so far.
"In microelectronics, this approach to growing high-density carbon nanotube forests on conductors can potentially replace and outperform the current copper-based interconnects in a future generation of devices," says Cambridge researcher Hisashi Sugime. In the future, more robust carbon nanotube forests may also help to improve thermal interface materials, battery electrodes, and supercapacitors.
The article, "Low temperature growth of ultra-high mass density carbon nanotube forests on conductive supports" appears in the journal Applied Physics Letters.
A study by the Oxford Martin School shows that nearly half of US jobs could be at risk of computerisation within 20 years. Transport, logistics and office roles are most likely to come under threat.
The new study, a collaboration between Dr Carl Benedikt Frey (Oxford Martin School) and Dr Michael A. Osborne (Department of Engineering Science, University of Oxford), found that jobs in transportation, logistics, as well as office and administrative support, are at "high risk" of automation. More surprisingly, occupations within the service industry are also highly susceptible, despite recent job growth in this sector.
"We identified several key bottlenecks currently preventing occupations being automated," says Dr. Osborne. "As big data helps to overcome these obstacles, a great number of jobs will be put at risk."
The study examined over 700 detailed occupation types, noting the types of tasks workers perform and the skills required. By weighting these factors, as well as the engineering obstacles currently preventing computerisation, the researchers assessed the degree to which these occupations may be automated in the coming decades.
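As a rough illustration of this kind of weighting – the study itself used a more sophisticated probabilistic classifier, and the scores and weights below are entirely invented – a toy model might score each occupation on the three bottleneck skill areas and map them to an automation probability:

```python
import math

# The study's three "engineering bottlenecks", scored 0-1, where higher means
# the occupation depends more heavily on that human skill. The weights and
# bias below are invented for illustration, not the paper's fitted model.
BOTTLENECKS = ("perception_manipulation", "creativity", "social_intelligence")

def automation_probability(scores, weights=(-3.0, -4.0, -4.0), bias=4.0):
    """Toy logistic model: high bottleneck scores push the probability down."""
    z = bias + sum(w * scores[k] for w, k in zip(weights, BOTTLENECKS))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example occupations
telemarketer = {"perception_manipulation": 0.1, "creativity": 0.1,
                "social_intelligence": 0.3}
surgeon = {"perception_manipulation": 0.9, "creativity": 0.7,
           "social_intelligence": 0.9}
```

With these made-up numbers, the routine occupation lands near the "high risk" end of the scale and the bottleneck-heavy one near the "low risk" end – the same qualitative pattern the study reports.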
"Our findings imply that as technology races ahead, low-skilled workers will move to tasks that are not susceptible to computerisation – i.e., tasks that require creative and social intelligence," the paper states. "For workers to win the race, however, they will have to acquire creative and social skills."
"While computerisation has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labour in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations."
The low susceptibility of engineering and science occupations to computerisation, on the other hand, is largely due to the high degree of creative intelligence they require. However, even these occupations could be taken over by computers in the longer term.
Dr Frey said the United Kingdom is expected to face a similar challenge to the US. "While our analysis was based on detailed datasets relating to US occupations, the implications are likely to extend to employment in the UK and other developed countries," he said.
Researchers have discovered a "global ecology" of interacting machines that trade on the global markets at speeds too fast for humans, causing periodic outages. These high frequency trading algorithms could lead to increasingly large crashes, as the volume of data in the world continues to grow exponentially.
Recently, the global financial market experienced a series of computer glitches that abruptly brought operations to a halt. This was so serious that – on one day – it resulted in a third fewer shares being traded in the USA. One reason for these "flash freezes" may be the sudden emergence of mobs of ultrafast robots, which trade on the global markets and operate at speeds beyond human capability, thus overwhelming the system. The appearance of this "ultrafast machine ecology" is documented in a new study published today in Nature Scientific Reports.
The findings suggest that for time scales less than one second, the financial world makes a sudden transition into a cyber jungle inhabited by packs of aggressive trading algorithms. "These algorithms can operate so fast that humans are unable to participate in real time, and instead, an ultrafast ecology of robots rises up to take control," explains Neil Johnson, professor of physics in the College of Arts and Sciences at the University of Miami (UM).
"Our findings show that, in this new world of ultrafast robot algorithms, the behaviour of the market undergoes a fundamental and abrupt transition to another world where conventional market theories no longer apply," Johnson says.
Society's push for ever faster systems that outpace competitors has led to algorithms capable of operating faster than the response time for a human. For instance, the quickest a person can react to potential danger is about one second. Even a chess grandmaster takes around 650 milliseconds to realise that he is in trouble – yet microchips for trading can operate in a fraction of a millisecond (1 millisecond is 0.001 seconds).
In this study, the researchers assembled and analysed a high-throughput, millisecond-resolution price stream covering multiple stocks and exchanges. Between January 2006 and February 2011, they identified 18,520 extreme events lasting less than 1.5 seconds, including both crashes and spikes.
The team realised that as the duration of these ultrafast extreme events fell below human response times, the number of crashes and spikes increased dramatically. They created a model to understand the behaviour and concluded that the events were the product of ultrafast computer trading and not attributable to other factors, such as regulations or mistaken trades. Johnson, who is head of the inter-disciplinary research group on complexity at UM, compares the situation to an ecological environment.
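A minimal sketch of how such sub-second crashes and spikes might be flagged in tick data follows. The thresholds are invented, and the study's actual event definition differs in detail:

```python
def find_extreme_events(ticks, min_move=0.008, max_duration=1.5):
    """Scan (time_seconds, price) ticks for crashes and spikes.

    An "extreme event" here is a monotone run of same-direction price changes
    whose total relative move exceeds min_move and which completes in under
    max_duration seconds. Both thresholds are illustrative only.
    """
    events = []
    i = 0
    while i < len(ticks) - 1:
        j = i + 1
        direction = ticks[j][1] - ticks[i][1]
        # extend the run while prices keep moving the same way
        while j + 1 < len(ticks) and (ticks[j + 1][1] - ticks[j][1]) * direction > 0:
            j += 1
        duration = ticks[j][0] - ticks[i][0]
        move = abs(ticks[j][1] - ticks[i][1]) / ticks[i][1]
        if 0 < duration < max_duration and move >= min_move:
            events.append(("spike" if direction > 0 else "crash",
                           ticks[i][0], duration))
        i = j
    return events
```

The key point the researchers make is captured by the `max_duration` cut-off: below roughly one second, no human can intervene, so whatever the scan finds in that regime is machine-on-machine behaviour.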
"As long as you have the normal combination of prey and predators, everything is in balance, but if you introduce predators that are too fast, they create extreme events," Johnson says. "What we see with the new ultrafast computer algorithms is predatory trading. In this case, the predator acts before the prey even knows it's there."
Johnson explains that in order to regulate these ultrafast computer algorithms, we need to understand their collective behaviour. This is a daunting task, but is made easier by the fact that the algorithms that operate below human response times are relatively simple, because simplicity allows faster processing.
"There are relatively few things that an ultrafast algorithm will do," Johnson says. "This means that they are more likely to start adopting the same behaviour, and hence form a cyber crowd or cyber mob which attacks a certain part of the market. This is what gives rise to the extreme events that we observe," he says. "Our math model is able to capture this collective behaviour by modelling how these cyber mobs behave."
In fact, Johnson believes this new understanding of cyber-mobs may have other important applications outside of finance – such as dealing with cyber-attacks and cyber-warfare.
By analysing MRI images of the brain with an elegant mathematical model, it is possible to reconstruct thoughts more accurately than ever before. In this way, researchers from Radboud University Nijmegen, Netherlands, have succeeded in determining which letter a test subject was looking at.
Functional MRI scanners have been used in cognition research primarily to determine which brain areas are active while test subjects perform a specific task. The question is simple: is a particular brain region on or off? A research group at the Donders Institute for Brain, Cognition and Behaviour at Radboud University has gone a step further: they have used data from the scanner to determine what a test subject is looking at.
The researchers 'taught' a model how small volumes of 2x2x2 mm from the brain scans – known as voxels – respond to individual pixels. By combining all the information about the pixels from the voxels, it became possible to reconstruct the image viewed by the subject. The result was not a clear image, but a somewhat fuzzy speckle pattern. In this study, the researchers used hand-written letters.
Prior knowledge improves model performance
"After this we did something new", says lead researcher Marcel van Gerven. "We gave the model prior knowledge: we taught it what letters look like. This improved the recognition of the letters enormously. The model compares the letters to determine which one corresponds most exactly with the speckle image, and then pushes the results of the image towards that letter. The result was the actual letter, a true reconstruction."
"Our approach is similar to how we believe the brain itself combines prior knowledge with sensory information. For example, you can recognise the lines and curves in this article as letters only after you have learned to read. And this is exactly what we are looking for: models that show what is happening in the brain in a realistic fashion. We hope to improve the models to such an extent that we can also apply them to the working memory or to subjective experiences such as dreams or visualisations. Reconstructions indicate whether the model you have created approaches reality."
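The two-stage idea – a fuzzy pixel-level reconstruction that is then pulled towards the closest known letter – can be sketched as follows. The tiny 3×3 templates and blending weight are invented for illustration and bear no relation to the study's actual model:

```python
# Toy 3x3 binary letter templates (the "prior knowledge" of what letters look like)
LETTERS = {
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
    "L": [1, 0, 0,
          1, 0, 0,
          1, 1, 1],
}

def best_letter(speckle):
    """Pick the template most similar to the fuzzy reconstruction
    (smallest sum of squared pixel differences)."""
    return min(LETTERS,
               key=lambda k: sum((s - t) ** 2 for s, t in zip(speckle, LETTERS[k])))

def reconstruct(speckle, alpha=0.7):
    """Blend the fuzzy speckle image towards the best-matching template,
    mimicking how the prior 'pushes' the result towards a real letter."""
    template = LETTERS[best_letter(speckle)]
    return [alpha * t + (1 - alpha) * s for s, t in zip(speckle, template)]
```

The blending weight `alpha` plays the role of the prior's strength: at 0 the output is the raw speckle pattern, at 1 it is the idealised letter.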
Improved resolution; more possibilities
Japanese researchers achieved a similar feat in 2008. However, this latest research at Radboud University is based on a higher resolution. Sanne Schoenmakers, who is working on a thesis about decoding thoughts, explains: "In our further research we will be working with a more powerful MRI scanner. Due to the higher resolution of the scanner, we hope to be able to link the model to more detailed images. We are currently linking images of letters to 1200 voxels in the brain; with the more powerful scanner we will link images of faces to 15,000 voxels."
The team's research is published in the journal Neuroimage.
Worldwide, mobile phone sales totalled 435 million units in the second quarter of 2013 – an increase of 3.6 percent from the same period in 2012 – according to research firm Gartner. Smartphone sales reached 225 million units, up 47 percent from last year, while basic feature phones totalled 210 million units, a decline of nearly 25 percent.
Asia/Pacific, Latin America and Eastern Europe had the highest smartphone growth rates of 74.1 percent, 55.7 percent and 31.6 percent respectively, as smartphone sales grew in all regions.
Samsung maintained the no. 1 position in the global smartphone market, as its share of sales reached 31.7 percent, up from 29.7 percent in the second quarter of 2012. Apple’s smartphone sales reached 32 million units in the second quarter of 2013, up 10.2 percent from a year ago.
Anshul Gupta, principal research analyst at Gartner: “With second quarter of 2013 sales broadly on track, we see little need to adjust our expectations for worldwide mobile phone sales forecast to total 1.82 billion units this year. Flagship devices brought to market in time for the holidays, and the continued price reduction of smartphones will drive consumer adoption in the second half of the year.”
In the smartphone operating system (OS) market, Microsoft overtook BlackBerry for the first time, taking the no. 3 spot with 3.3 percent market share in the second quarter of 2013. “While Microsoft has managed to increase share and volume in the quarter, it should continue to focus on growing interest from app developers to help grow its appeal among users,” said Mr. Gupta. The Android OS continued to increase its lead, garnering 79 percent of the market in the second quarter, followed by Apple's iOS with 14.2 percent.
Basic feature phones will be a hard sell in about five to ten years' time, says Gupta: "It will reach a point where sales of a new model of feature phone will not be able to justify the amount of time and money that is spent developing it."
A new automated medical system has initiated research that could one day radically improve how neurological and psychological diseases are treated.
Medtronic, Inc. has announced the first implant of a novel deep brain stimulation (DBS) system that – for the first time – enables the sensing and recording of select brain activity while simultaneously providing targeted DBS therapy. This will initiate research on how the brain responds to the therapy and could yield major insights that significantly change the way people with devastating neurological and psychological disorders are treated.
The Activa PC+S DBS system delivers proven Medtronic DBS therapy, while at the same time sensing and recording electrical activity in key areas of the brain, using sensing technology and an adjustable algorithm, which enable the system to gather brain signals at various moments as selected by a physician. Initially, this new technology will be made available to a select group of physicians worldwide for use in clinical studies. These physicians will use the system to map the brain’s responses to Medtronic DBS therapy and explore applications for the treatment across a range of neurological and psychological conditions.
The Activa PC+S system was implanted for the first time at Ludwig Maximilians University in Munich, Germany, in a person with Parkinson’s disease. This patient will be treated by a team that includes the neurologist Kai Bötzel and neurosurgeon Jan Mehrkens. Dr. Bötzel will be the first to use data gathered by the Activa PC+S system to gain unprecedented insight into how the brain responds.
Conventional DBS therapy uses an implanted medical device, similar to a pacemaker, to deliver mild electrical pulses to precisely targeted areas of the brain. This needs to be programmed and adjusted by a trained clinician to maximise symptom control and minimise side effects.
"Everything that is on the market today is a one-way stimulator," says Joseph Neimat, a neurosurgeon at Vanderbilt University Medical Center who specialises in deep brain stimulation. "The devices don't record or respond to a patient. What would be better would be to have a system that could anticipate or read a patient's state, and then respond with an appropriate stimulus."
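The closed-loop system Neimat describes does not yet exist, but the control idea can be sketched in a few lines. The biomarker, thresholds and gains below are entirely invented for illustration; a real controller would be clinician-tuned and far more conservative:

```python
def closed_loop_stimulation(beta_power, threshold=2.0, gain=0.5, max_amp=3.0):
    """Map a sensed biomarker to a stimulation amplitude in mA.

    beta_power stands in for a hypothetical symptom marker read from the
    brain (e.g. beta-band local field potential power). Below the threshold
    the patient's state looks fine and no stimulation is delivered; above it,
    the amplitude scales with the excess, capped at a safe maximum.
    All numbers here are invented.
    """
    if beta_power <= threshold:
        return 0.0
    return min(gain * (beta_power - threshold), max_amp)
```

Conventional DBS, by contrast, is the degenerate case of this loop: a fixed amplitude delivered regardless of what the sensed signal is doing.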
“DBS therapy works for people with Parkinson’s disease and other movement disorders, but there is much to learn about how the brain responds to the therapy,” said Dr. Bötzel. “This new system will allow us to treat patients with conventional DBS therapy, while at the same time opening the door for research that was not possible until now. We hope these insights will lead to the development of effective new treatments tailored to the needs of individuals.”
“Devastating conditions like Parkinson’s disease and obsessive-compulsive disorder take a significant toll on countless people, as well as their loved ones,” said Lothar Krinke, Ph.D., vice president and general manager. “Medtronic is excited to provide this new system to researchers worldwide, and we expect that their respective studies will lead to accelerated understanding of how neurological and psychological conditions develop and progress. This represents a significant milestone for DBS therapy and the long-term journey toward a closed-loop DBS system, which could personalise therapy by using device data to automatically adjust to the needs of individual patients.”
Medtronic’s Activa PC+S system received CE (Conformité Européenne) mark in January 2013. It is not approved by the U.S. Food and Drug Administration for commercial use in the United States, and will be made available to select physicians for investigational use only. Additional implants of the Activa PC+S system, including the first implant in the United States, will take place in the coming months.
London and New York-based company botObjects recently announced the ProDesk3D, which they claimed to be the first full-colour 3D printer small enough to fit on a desktop. In addition to its colour abilities and compactness, they confirmed that it would print at 25 microns – some four times more accurate than its competitors (Makerbot's Replicator 2 has a resolution of 100 microns).
This gives an extremely smooth finish, overcoming the surface grooves that often appear in 3D-printed objects. The machine switches between different-coloured cartridges on the fly, just like an inkjet printer, instead of requiring single-colour spools of raw plastic to be swapped out. This includes a palette of new "translucent" PLA colours for some impressive blending effects, customisable with software on Windows 7 and Mac OS X. There is no complex or tricky set-up, as the ProDesk3D arrives complete out of the box.
The company has received over 100,000 enquiries and expects to ship its first orders by 1st October 2013. The standard and limited edition models both have a somewhat hefty price tag of nearly $3,000 each, making them high-end products. However, the cost of 3D printing has fallen rapidly in recent years and if this continues, it is expected to become a mainstream consumer technology by 2015. Following their recent announcement, the company has now released a video of the product in action:
Nokia has revealed the Lumia 1020 – a Windows Phone 8 smartphone that includes a 41 megapixel camera sensor, PureView technology, Optical Image Stabilisation and high-resolution zoom.
This is Nokia's second phone to feature such a camera. Last year, the company launched the PureView 808 model. However, this was based on the ageing Symbian operating system, which limited its appeal. The new Lumia 1020 instead runs on the latest Microsoft OS with over 160,000 apps.
As well as its ultra-high resolution (7712 × 5360 pixels), the camera uses a process called "oversampling". This generates a smaller 5MP version of the image that removes unwanted visual noise, achieves higher definition and light sensitivity, and enables lossless zoom. Unlike its predecessor, the Lumia 1020 can save both types at the same time, meaning that the owner does not need to worry about switching settings.
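At its core, the oversampling step is block averaging: many noisy sensor pixels are combined into one cleaner output pixel, since averaging n noisy samples cuts random noise by roughly the square root of n. A minimal 1-D sketch follows; real binning and demosaicing are far more involved:

```python
def oversample(pixels, block=8):
    """Average each run of `block` sensor pixels into one output pixel.

    With ~41 million sensor pixels feeding a ~5 MP output, each output
    pixel draws on roughly eight sensor pixels, so random noise falls by
    about sqrt(8) while the average (the true signal) is preserved.
    """
    return [sum(pixels[i:i + block]) / block
            for i in range(0, len(pixels) - block + 1, block)]
```

For example, a sensor row that alternates noisily between 95 and 105 around a true value of 100 averages out to a flat 100 in the downsampled output.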
In addition, the camera's video mode takes advantage of the higher resolution, by allowing the user to zoom in four times while filming at 1080p without any loss of quality, and six times at 720p. The lens system is mounted on ball-bearings and is fitted with a gyroscope and motors to counteract movement and camera shake.
Given the exponential progress of digital technology, these sort of cameras should be fairly standard and low cost within the next few years. By the 2020s, we will probably be saying the same about gigapixel cameras.
Using nanostructured glass, scientists at the University of Southampton have, for the first time, demonstrated the recording and retrieval of five-dimensional digital data by femtosecond laser writing. The technology allows unprecedented parameters, including 360 TB/disc data capacity, thermal stability up to 1000°C and a practically unlimited lifetime.
Dubbed the 'Superman' memory crystal, because the glass memory has been compared to the "memory crystals" used in the Superman films, the device records data via self-assembled nanostructures created in fused quartz, which can store vast quantities of data for over a million years. The information is encoded in five dimensions: the size and orientation of these nanostructures, in addition to their three-dimensional position.
A 300 kb text file was successfully recorded in 5D using an ultrafast laser – producing extremely short and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometres (a micrometre is one millionth of a metre). The self-assembled nanostructures change the way light travels through glass, modifying the polarisation of light, which can then be read by a combination of an optical microscope and a polariser, similar to those found in Polaroid sunglasses.
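Conceptually, each laser-written dot carries data in five parameters: its position (x, y, z) plus its size and orientation. A toy encoding with invented discrete levels might look like this; the real system's physical levels and readout are of course very different:

```python
# Two size levels carry 1 bit; four orientations carry 2 bits, so each dot
# stores 3 bits on top of its (x, y, z) address. These discrete levels are
# invented purely for illustration.
SIZES = (0.5, 1.0)           # arbitrary size units -> 1 bit
ANGLES = (0, 45, 90, 135)    # orientation in degrees -> 2 bits

def encode(bits3, x, y, z):
    """Turn a 3-bit value into one 5-D dot record."""
    size = SIZES[bits3 >> 2]
    angle = ANGLES[bits3 & 0b11]
    return (x, y, z, size, angle)

def decode(dot):
    """Read the 3-bit value back from a measured dot."""
    _, _, _, size, angle = dot
    return (SIZES.index(size) << 2) | ANGLES.index(angle)
```

The payoff of the extra two dimensions is density: every physical location stores several bits rather than the single bit of a conventional optical disc pit.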
The research is led by Jingyu Zhang at the Optoelectronics Research Centre (ORC) and conducted under a joint project with Eindhoven University of Technology.
"We are developing a very stable and safe form of portable memory using glass, which could be highly useful for organisations with big archives," says Jingyu. "At the moment, companies have to back up their archives every five to ten years because hard-drive memory has a relatively short lifespan. Museums who want to preserve information or places like the national archives where they have huge numbers of documents, would really benefit."
Professor Peter Kazansky, the ORC's group supervisor: "It is thrilling to think that we have created the first document which will likely survive the human race. This technology can secure the last evidence of civilisation: all we've learnt will not be forgotten."
The team presented their paper at the Conference on Lasers and Electro-Optics (CLEO'13) in San Jose. They are now looking for industry partners to commercialise this ground-breaking new technology.