27th August 2014

Lockheed Martin agrees deal to track space debris

US defence giant Lockheed Martin has formed a partnership with Australian tech firm Electro Optic Systems to monitor the growing problem of space debris.




Lockheed Martin announced yesterday that it will join forces with Electro Optic Systems (EOS) to provide space tracking services to the space industry. This joint development will see the establishment of a new facility in Western Australia – operational by early 2016 – to offer a more detailed picture of space debris for both government and commercial customers.

The site will use a combination of lasers and highly sensitive optical systems to detect, track and characterise an estimated 200,000 man-made objects. These advanced electro-optical technologies will zoom in on specific objects to determine how fast they’re moving, what direction they’re spinning and what they’re made of.

“Ground-based space situational awareness is a growing priority for government and commercial organisations around the world that need to protect their investments in space,” said Rick Ambrose, executive vice president, Lockheed Martin Space Systems. “Through this agreement with Electro Optic Systems, we’ll offer customers a clearer picture of the objects that could endanger their satellites, and do so with great precision and cost-effectiveness.”

“The partnership with Lockheed Martin will help both organisations establish a global network of space sensors, while simultaneously increasing the market reach of the partners’ data and services,” said Electro Optic Systems Chief Executive Officer Ben Greene. “We consider the strategic partnership with Lockheed Martin a major step towards the achievement of critical mass of sensors, data and services, all of which are critical in providing detailed yet easily usable information on space debris.”

Space junk has been accumulating in Earth orbit for over 50 years. It is made up of everything from spent rocket stages, to defunct satellites, to debris left over from accidental collisions. The amount of space junk is expected to triple by 2030. Experts have expressed concerns about a potential scenario known as Kessler Syndrome – in which a catastrophic chain reaction occurs, making space exploration and satellite communications impossible. This was depicted in the recent sci-fi movie Gravity.





26th August 2014

Fully-functional organ grown from scratch inside a living animal for the first time

A whole functioning organ – a thymus – has been grown inside a mouse. The treatment could be available to humans within 10 years.



Lab-grown replacement organs have moved a step closer, thanks to a major breakthrough by researchers at the University of Edinburgh, Scotland. Scientists have for the first time grown a complex, fully-functional organ from scratch in a living animal by transplanting cells that were originally created in a laboratory. The researchers created a thymus – an organ next to the heart that produces immune cells known as T cells that are vital for guarding against disease. With further research, this discovery could lead to new treatments for elderly patients and others with a weakened immune system.

The team from the MRC Centre for Regenerative Medicine at the University of Edinburgh took cells called fibroblasts from a mouse embryo. They turned the fibroblasts into a completely different type of cell called thymus cells, using a technique called reprogramming. The new cells changed shape to look like thymus cells and were also capable of supporting development of T cells in the lab – a specialised function that only thymus cells can perform.

When the researchers mixed reprogrammed cells with other key thymus cell types and transplanted them into a live mouse, the cells formed a replacement organ. This new organ had the same structure, complexity and function as a healthy adult thymus. It is the first time that scientists have made an entire living organ from cells that were created outside of the body by reprogramming.


thymus grown inside animal


Doctors have already shown that patients with thymus disorders can be treated with infusions of extra immune cells, or by transplantation of a thymus shortly after birth. However, both approaches are limited by a lack of organ donors and the difficulty of matching tissue to the recipient. Within the next decade, lab-grown cells could form the basis of a thymus transplant treatment for people with a weakened immune system.

The technique may also offer a way of making patient-matched T cells in the laboratory that could be used in cell therapies. Such treatments may benefit bone marrow transplant patients, by helping speed up the rate at which they rebuild their immune system after transplant. The discovery offers hope to babies born with genetic conditions that prevent the thymus from developing properly. Older people could also be helped as the thymus is the first organ to deteriorate with age. The results of this study are published in the journal Nature Cell Biology.







24th August 2014

Human Longevity, Inc. signs agreement to develop and commercialise new stem cell therapies

Human Longevity Inc. (HLI) has signed an agreement with Celgene Cellular Therapeutics (CCT) to license, develop, and co-promote Celgene's proprietary placental cell population, PSC-100.




Human Longevity Inc. (HLI) was co-founded in March 2014 by Peter Diamandis, Craig Venter and Robert Hariri, with a specific purpose: extending the healthy, high performance human lifespan. This new company intends to achieve innovations in two main areas – genomics and stem cell sciences.

Over the next decade, HLI has the objective of sequencing the human genomes of 1 million individuals, while also collecting phenotypic, microbiome, imaging and metabolomic data. All of this information will be crunched using artificial intelligence and machine learning to provide extraordinary insights into human aging. In the arena of stem cells, the company will work to harness stem cells as the regenerative engine of the body. Peter Diamandis has stated that it should be possible to extend the healthy human lifespan by 30-40 years if these treatments are successfully developed.

HLI has now made a significant step towards its eventual goals by signing an agreement with Celgene Cellular Therapeutics (CCT) to license, develop, and co-promote Celgene’s proprietary placental cell population, PSC-100. A variety of applications will be explored for this unique cell population – including for sarcopenia, an age-related condition resulting in degenerative loss of skeletal muscle mass, quality and strength. HLI will use its expertise and technology to model PSC-100 at the molecular level, complementing the data Celgene has gleaned from PSC-100 in Phase 1 human studies.

“We think that cellular-based therapeutics combined with our genomics-based discovery systems offer exciting potential for age-related diseases,” said Craig Venter, PhD, HLI CEO and co-founder. PSC-100 provides an “advanced basis” for testing cell therapy on diseases, he concluded.

HLI itself will be licensing access to its database, and developing new diagnostics and therapeutics as part of their product offerings.







24th August 2014

Computer program recognises emotions with 87% accuracy

Researchers in Bangladesh have designed a computer program able to accurately recognise users’ emotional states as much as 87% of the time, depending on the emotion.




Writing in the journal Behaviour & Information Technology, Nazmul Haque Nahin and his colleagues describe how their study combined – for the first time – two established ways of detecting user emotions: keystroke dynamics and text-pattern analysis.

To provide data for the study, volunteers were asked to note their emotional state after typing passages of fixed text, as well as at regular intervals during their regular (‘free text’) computer use. This provided researchers with data about keystroke attributes associated with seven emotional states (joy, fear, anger, sadness, disgust, shame and guilt). To help them analyse sample texts, the researchers made use of a standard database of words and sentences associated with the same seven emotional states.

After running a variety of tests, the researchers found that their new ‘combined’ results were better than their separate results; what’s more, the ‘combined’ approach improved performance for five of the seven categories of emotion. Joy (87%) and anger (81%) had the highest rates of accuracy.
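The paper's actual classifier is not detailed here, but the general idea of combining keystroke dynamics with text analysis can be sketched as a simple late fusion of per-emotion scores. Everything below – the scores, the equal weighting, the `combine` helper – is illustrative, not taken from the study:

```python
# Hypothetical late-fusion sketch: average the per-emotion scores from two
# independent detectors and pick the best label. All numbers are invented.
EMOTIONS = ["joy", "fear", "anger", "sadness", "disgust", "shame", "guilt"]

def combine(keystroke_scores, text_scores, weight=0.5):
    """Fuse two per-emotion score dicts by weighted average; return top label."""
    fused = {e: weight * keystroke_scores.get(e, 0.0)
                + (1 - weight) * text_scores.get(e, 0.0)
             for e in EMOTIONS}
    return max(fused, key=fused.get)

# Keystroke features lean towards 'anger'; text analysis leans towards 'joy'.
ks = {"joy": 0.30, "anger": 0.55, "sadness": 0.15}
tx = {"joy": 0.60, "anger": 0.25, "sadness": 0.15}
print(combine(ks, tx))  # → joy (fused: joy 0.45 vs anger 0.40)
```

In the study itself, fusing the two sources in this general spirit is what lifted performance for five of the seven emotion categories.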

This research is an important contribution to ‘affective computing’, a growing field dedicated to ‘detecting user emotion in a particular moment’. As the authors note – for all the advances in computing power, performance and size in recent years, a lot more can still be done in terms of their interactions with end users. “Emotionally aware systems can be a step ahead in this regard,” they write. “Computer systems that can detect user emotion can do a lot better than the present systems in gaming, online teaching, text processing, video and image processing, user authentication and so many other areas where user emotional state is crucial.”

While much work remains to be done, this research is an important step towards making ‘emotionally intelligent’ systems a reality – systems that recognise users’ emotional states and adapt their music, graphics, content or approach to learning accordingly.





24th August 2014

New satellite data shows "unprecedented" ice loss from Greenland and West Antarctica

Ice loss from the Greenland and West Antarctic ice sheets has more than doubled in the last five years, based on extensive mapping by the European satellite CryoSat-2. This "unprecedented" rate of melting – around 500 cubic kilometres of ice per year – could mean future sea levels have been underestimated.




Researchers from the Alfred Wegener Institute in Germany have – for the first time – extensively mapped both Greenland’s and Antarctica’s ice sheets using the recently launched ESA satellite CryoSat-2. This new data shows that the ice crusts of these regions are declining at a rate never seen before.

Elevation changes were calculated from the satellite's high-precision altimeter, using 200 million data points for Antarctica and 14.3 million for Greenland. The loss of ice volume since 2009 was found to have doubled in Greenland and tripled in West Antarctica, with a combined thinning of 500 km3 (120 mi3) per year. That is equivalent to a 6 cm (2.4") layer of water covering the entire surface of the contiguous United States.
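The comparison to a layer of water over the United States follows directly from the quoted ice-loss figure – a minimal sanity check, with the contiguous-US land area as an assumed approximate value:

```python
# Quick check of the comparison in the text. The ice-loss figure is the
# article's; the contiguous-US land area (~8.08 million km²) is an
# approximation introduced here.
ice_loss_km3 = 500.0          # combined annual thinning, km³ per year
us_area_km2 = 8.08e6          # approx. area of the contiguous United States

layer_cm = ice_loss_km3 / us_area_km2 * 1e5   # 1 km = 100,000 cm
print(round(layer_cm, 1))     # → 6.2, consistent with the ~6 cm quoted
```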

The areas where CryoSat-2 found the biggest elevation changes were Jakobshavn Glacier in West Greenland and Pine Island Glacier in West Antarctica. Since February 2014, scientists have known that the Jakobshavn Glacier is collapsing into the ocean at a record rate of 46 m/day. The Pine Island Glacier hit the headlines in July 2013, when a tabular iceberg the size of Hamburg was seen breaking off the tip of its ice shelf.

West Antarctica is rapidly losing ice volume, while East Antarctica is gaining it – a point frequently raised by climate sceptics. However, the small increase occurring in the east is nowhere near enough to compensate for the huge losses on the other side of the continent, as the maps below show.



Maps of elevation changes in Greenland (left) and Antarctica (right). Red shows ice losses, while blue shows ice gains.


These latest findings, published in the 20th August issue of The Cryosphere, will add to concerns that future sea levels may be underestimated by the Intergovernmental Panel on Climate Change (IPCC). A survey by the Vision Prize – which provides impartial and independent polling of experts on important scientific issues – found that a majority of expert respondents believe future sea levels will be at the upper end of the IPCC's projections. New research in Nature Geoscience recently showed how Greenland is far more vulnerable to warming ocean waters than previously thought, with NASA glaciologist Eric Rignot stating that "the globe's ice sheets will contribute far more to sea level rise than current projections show." Indeed, according to NASA, the West Antarctic Ice Sheet has now begun an "irreversible" process of collapse which has "passed the point of no return".

It is also worth noting that the so-called warming pause can be explained by missing heat data in the polar regions – the Arctic, for example, has warmed roughly eight times faster than the rest of the planet. There is certainly no hiatus when looking at extreme high temperature records.

This has grave implications for the future. Low-lying regions such as Bangladesh, the Maldives and Kivalina will be among the first places to be affected, followed by Bangkok, then major Western cities by the 2050s and 2060s, resulting in massive geoengineering projects. Because of the delayed reaction from these interventions, the crisis will likely continue into the 2070s and possibly beyond. If society collapses and the mitigation efforts fail, sea levels could ultimately rise 66 metres (216 ft) by 7000 AD.







22nd August 2014

Tidal stream and wave power – slower than expected growth by 2020

Bloomberg New Energy Finance has revised down its forecasts for global tidal stream and wave power deployment in 2020 – by 11 percent and 72 percent respectively.


The Ness of Duncansby project. Credit: ScottishRenewables.com


Global installations of tidal stream and wave power are set to grow to 148MW and 21MW respectively by 2020, from almost nothing today, according to new research from Bloomberg New Energy Finance. However, these remain trifling amounts in the context of the world’s power system.

The emergence of marine renewable energy technologies is taking longer than hoped, due to project setbacks, fatigue among venture capital investors, and the sheer difficulty of deploying devices in the harsh marine environment. This latest forecast represents a downward revision from the figures of 167MW for tidal stream and 74MW for wave that Bloomberg New Energy Finance published a year ago.

Tidal stream power involves using machines resembling underwater wind turbines to convert the energy of the tides into electricity. Wave power involves the use of buoys, snakes, flaps and other devices to capture the energy of the waves. Engineers and entrepreneurs have been working hard on both for the last two decades, spending hundreds of millions of dollars.


Credit: AlphaGalileo Foundation


Angus McCrone, senior analyst at Bloomberg New Energy Finance, said: “Governments in countries such as the UK, France, Australia and Canada have identified tidal and wave as large opportunities not just for clean power generation, but also for creating local jobs and building national technological expertise. That continues to be the case, and we will see further progress over the rest of this decade. But caution is necessary because taking devices from the small-scale demonstrator stage to the pre-commercial array stage is proving even more expensive and time-consuming than many companies – and their investors – expected.”

The last 12 months have seen a number of wave power companies fail or falter. Oceanlinx and Wavebob went out of business, Wavegen was folded back into parent company Voith, AWS Ocean Energy scaled back its activities, and Ocean Power Technologies has cancelled two of its main projects. Other wave firms such as Aquamarine, Carnegie, Pelamis and Seabased have pressed on with device and project development.

There have been clearer positives for tidal stream technology companies, with Andritz Hydro Hammerfest and Alstom/TGL both earning Renewable Obligation Certificates for electricity generated from devices at the European Marine Energy Centre in Orkney, Scotland, and Atlantis Resources raising £12m (US$20m) in an initial public offering on London’s AIM in February.

However, the amount of marine energy capacity installed and generating consistently for a period of years remains tiny. There is only the 1.2MW SeaGen tidal stream device owned by Siemens/Marine Current Turbines in Strangford Lough, Northern Ireland, and a few small pilot wave power plants.

Michael Liebreich, chairman of the advisory board at Bloomberg New Energy Finance, commented: “Tidal stream and wave power companies continue to face huge challenges. Although the potential is almost limitless, it’s a tough environment. It is possible to make equipment reliable, as the offshore oil and gas industry has shown, but it’s not cheap. And you have to put a huge amount of steel and concrete into the water, which is inherently expensive. It is still unclear whether this can be done at a cost competitive with offshore wind, let alone other clean energy generating technologies.”





20th August 2014

DARPA aims to revolutionise tank design

The U.S. military is researching possible designs for a new generation of stealthier, faster, more mobile tanks.




For the past 100 years of mechanised warfare, protection for ground-based armoured fighting vehicles and their occupants has boiled down almost exclusively to a simple equation: more armour equals more protection. Weapons’ ability to penetrate armour, however, has advanced faster than armour’s ability to withstand penetration. As a result, achieving even incremental improvements in crew survivability has required significant increases in vehicle mass and cost.

The trend of increasingly heavy, less mobile and more expensive combat platforms has limited Soldiers’ and Marines’ ability to rapidly deploy and manoeuvre in theatre and accomplish missions in varied and evolving threat environments. Moreover, larger vehicles are limited to roads, require more logistical support, and are more expensive to design, develop, field and replace. The U.S. military has now reached a point where – considering tactical mobility, strategic mobility, survivability and cost – innovative and disruptive solutions are necessary for a new generation of armoured fighting vehicles.

The Defense Advanced Research Projects Agency (DARPA) has created the Ground X-Vehicle Technology (GXV-T) program to overcome these challenges. GXV-T seeks to investigate revolutionary ground-vehicle technologies that would simultaneously improve the mobility and survivability of vehicles through means other than adding more armour – i.e. avoiding detection, engagement and hits by adversaries. This improved stealth and mobility would enable future U.S. ground forces to more efficiently and cost-effectively tackle the varied and unpredictable combat situations of the 21st century.




“GXV-T’s goal is not just to improve or replace one particular vehicle – it’s about breaking the ‘more armour’ paradigm and revolutionising protection for all armoured fighting vehicles,” says Kevin Massey, DARPA program manager. “Inspired by how X-plane programs have improved aircraft capabilities over the past 60 years, we plan to pursue groundbreaking fundamental research and development to help make future armoured fighting vehicles significantly more mobile, effective, safe and affordable.”

Technical goals include the following improvements relative to today’s armoured fighting vehicles:

  • Reduce vehicle size and weight by 50 percent
  • Reduce onboard crew needed to operate vehicle by 50 percent
  • Increase vehicle speed by 100 percent
  • Access 95 percent of terrain
  • Reduce signatures that enable adversaries to detect and engage vehicles

DARPA says these four technical areas are examples of where advanced technologies could be developed that would meet the program’s objectives:

  • Radically enhanced mobility – ability to traverse diverse off-road terrain, including slopes and various elevations; advanced suspensions and novel track/wheel configurations; extreme speed; rapid omnidirectional movement changes in three dimensions
  • Survivability through agility – autonomously avoid incoming threats without harming occupants through technologies such as agile motion (dodging) and active repositioning of armour
  • Crew augmentation – improved physical and electronically assisted situational awareness for crew and passengers; semi-autonomous driver assistance and automation of key crew functions, similar to capabilities found in modern commercial airplane cockpits
  • Signature management – reduction of detectable signatures, including visible, infrared (IR), acoustic and electromagnetic (EM)

DARPA aims to develop GXV-T technologies over a period of 24 months, from 2015 to 2017.







19th August 2014

Earth Overshoot Day 2014

Today – 19th August – is the date when our ecological footprint exceeds our planet's budget for this year.




It has taken less than eight months for humanity to use up nature’s entire budget for the year and go into "ecological overshoot" – according to data from the Global Footprint Network (GFN), an international sustainability think tank with offices in North America, Europe and Asia.

Global Footprint Network monitors humanity’s demand on the planet (ecological footprint) against nature’s biocapacity, i.e. its ability to replenish the planet's resources and absorb waste, including CO2. Earth Overshoot Day marks the date when humanity's footprint in a given year exceeds what Earth can regenerate in that year. Since the year 2000, overshoot has grown, according to GFN’s calculations. Consequently, Earth Overshoot Day has moved from 1st October in 2000 to 19th August this year.
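The date itself follows from GFN's published method: Overshoot Day falls on day (biocapacity ÷ footprint) × 365 of the year. A minimal sketch – the input ratio below is back-derived from the 19th August date rather than taken from GFN's own accounts:

```python
import datetime

# Overshoot Day falls on day (biocapacity / footprint) × 365 of the year,
# per GFN's published method. The 2014 ratio used below is back-derived
# from the 19th August date, not an official GFN figure.
def overshoot_day(biocapacity_to_footprint, year):
    day_of_year = round(biocapacity_to_footprint * 365)
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day_of_year - 1)

print(overshoot_day(231 / 365, 2014))  # → 2014-08-19
```

A ratio of 1.0 (demand exactly matching regeneration) would push the date to 31st December; ratios above 1.0 mean no overshoot at all that year.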

"Global overshoot is becoming a defining challenge of the 21st century. It is both an ecological and an economic problem," says Mathis Wackernagel, president of the GFN and co-creator of the resource accounting metric. "Countries with resource deficits and low income are exceptionally vulnerable. Even high-income countries that have had the financial advantage to shield themselves from the most direct impacts of resource dependence need to realise that a long-term solution requires addressing such dependencies before they turn into a significant economic stress."

In 1961, humanity used just three-quarters of the biocapacity Earth had available that year for generating food, fibre, timber, fish stock and absorbing greenhouse gases. Most countries had biocapacities larger than their own respective footprints. By the early 1970s, economic and demographic growth had increased humanity’s footprint beyond what the planet could renewably produce. We went into ecological overshoot.


biocapacity and overshoot


Today, 86 percent of the world's population lives in countries that demand more from nature than their own ecosystems can renew. According to the GFN's calculations, it would take 1.5 Earths to produce the renewable resources necessary to support humanity’s current footprint. Future trends in population, energy, food and other resource consumption indicate this will rise to three planets by the 2050s, which could be physically unfeasible.

The costs of our ecological overspending are becoming more evident by the day. The "interest" we are paying on our mounting ecological debt – in the form of deforestation, freshwater scarcity, soil erosion, biodiversity loss and the build-up of CO2 in our atmosphere – also comes with mounting human and economic costs.

Governments who ignore resource limits in their decision-making put their long-term economic performance at risk. In times of persistent overshoot, countries running biocapacity deficits will find that reducing their resource dependence is aligned with their self-interest. Conversely, countries that are endowed with biocapacity reserves have an incentive to preserve these ecological assets that constitute a growing competitive advantage in a world of tightening ecological constraints.




More and more countries are taking action in a variety of ways. The Philippines is on track to adopt the GFN's Ecological Footprint at the national level – the first country in Southeast Asia to do so – via its National Land Use Act. This policy, the first of its kind in the Philippines, is designed to protect areas from haphazard development and plan for the country's use and management of its own physical resources. Legislators are seeking to integrate the Ecological Footprint metric into this national policy, putting resource limits at the centre of decision-making.

The United Arab Emirates (UAE), a high-income country, intends to significantly reduce its per capita Ecological Footprint – one of the world’s highest – starting with carbon emissions. Its Energy Efficiency Lighting Standard will result in only energy-efficient indoor-lighting products being made available throughout the territory before the end of this year.

Morocco wants to collaborate with the Global Footprint Network on a review of the nation’s 15-year strategy for sustainable development in agriculture – Plan Maroc Vert – through the lens of the Ecological Footprint. Specifically, Morocco is interested in comprehensively assessing how the plan contributes to the sustainability of the agriculture sector, as well as a society-wide transition towards sustainability.

Regardless of a nation’s specific circumstances, incorporating ecological risk into economic planning and development strategy is not just about foresight – it has become an urgent necessity.





19th August 2014

Atomic-force microscopes 20 times more sensitive

Laser physicists have found a way to make atomic-force microscope probes 20 times more sensitive and capable of detecting forces as small as the weight of an individual virus.




The technique – developed by researchers in the Quantum Optics Group of the Australian National University, Canberra – uses laser beams to cool a nanowire probe to minus 265 degrees Celsius.

“The level of sensitivity achieved after cooling is accurate enough for us to sense the weight of a large virus, 100 billion times lighter than a mosquito,” said Professor Ping Koy Lam, the leader of the Quantum Optics Group.
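That sensitivity claim is easy to put on a scale (the mosquito mass below is an assumed typical value of about 2.5 milligrams, not a figure from the announcement):

```python
# Scale check of the "100 billion times lighter than a mosquito" figure.
# The mosquito mass is an assumed typical value, not from the announcement.
mosquito_kg = 2.5e-6                 # ~2.5 mg
virus_kg = mosquito_kg / 100e9       # 100 billion times lighter
print(virus_kg)                      # 2.5e-17 kg, i.e. tens of femtograms
```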

This could be used to improve the resolution of atomic-force microscopes, which are state-of-the-art tools for measuring nanoscopic structures and the tiny forces between molecules. Atomic force microscopes can achieve ultra-sensitive measurements of microscopic features by scanning a wire probe over a surface. However, such probes – around 500 times finer than a human hair – are prone to vibration.

“At room temperature the probe vibrates, just because it is warm, and this can make your measurements noisy,” said co-author Dr Ben Buchler. “We can stop this motion by shining lasers at the probe.”


silver gallium nanowire
Credit: Quantum Optics Group, Australian National University


The force sensor, pictured above, was a 200 nm-wide silver gallium nanowire coated with gold.

“The laser makes the probe warp and move due to heat. But we have learned to control this warping effect, and were able to use the effect to counter the thermal vibration of the probe,” said Giovanni Guccione, a PhD student on the team.

However, the probe cannot be used while the laser is on, as the laser’s effect overwhelms the sensitive probe. The laser therefore has to be switched off, and measurements made quickly before the probe heats up again – a window of just a few milliseconds. By averaging measurements over a number of heating and cooling cycles, accurate values can be determined.
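The cycle-averaging idea can be illustrated with a toy simulation: one quick, noisy reading is taken in each laser-off window, and the readings are averaged. The true force and noise level below are invented purely for illustration:

```python
import random
import statistics

# Toy illustration of cycle averaging: a single noisy force reading per
# laser-off window, averaged across many heating/cooling cycles. The
# "true" force and the noise level are invented for illustration.
random.seed(42)
true_force = 1.0
readings = [true_force + random.gauss(0, 0.2) for _ in range(100)]  # 100 cycles

estimate = statistics.mean(readings)
print(round(estimate, 2))   # close to 1.0; single readings scatter by ±0.2
```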

“We now understand this cooling effect really well,” says Harry Slatyer, another PhD student. “With clever data processing, we might be able to improve the sensitivity, and even eliminate the need for a cooling laser.”





16th August 2014

Swarm of 1,000 robots able to self-organise

A huge, self-organising robot swarm consisting of 1,024 individual machines has been demonstrated by Harvard.




Swarm robotics is a new and emerging field of technology involving the coordination of multiple robots to perform a group task. By combining a large number of machines, it is possible to create a hive intelligence – capable of much greater achievements than a lone individual. In the same way that insects such as ants, bees and termites cooperate, researchers can build wireless networks of machines able to sense, navigate and communicate information about their surroundings.

Recent efforts have included a formation of 20 "droplets" created by the University of Colorado, a group of 40 robots developed at the Sheffield Centre for Robotics, and drones using augmented reality to produce "spatially targeted communication and self-assembly". Although impressive, those projects – and others since – have lacked the raw numbers to be considered a genuine "swarm" like the creatures mentioned earlier. This week, however, scientists at Harvard took research in the field to a whole new level, by demonstrating a network of more than 1,000 machines working simultaneously.

Known as "Kilobots", these devices are just a few centimetres across, roughly the size of a U.S. quarter. Each is equipped with tiny vibrating motors allowing them to slide across a surface, using an infrared transmitter and receiver to alert their neighbours and measure their proximity. From just a simple command, they can arrange themselves into a variety of complex shapes and patterns.




In 2011, open-source hardware and software was developed and licensed by Harvard to improve the algorithms used in machine networks. A report showed how groups of 25 Kilobots – demonstrating behaviours such as foraging, formation control and synchronisation – had the potential to scale to much larger numbers. Following three years of further testing and experimentation, the university has now succeeded in coordinating a swarm of 1,024 units.

The new, smarter algorithm enables the Kilobots to correct their own mistakes, avoiding traffic jams and errors that would otherwise become more likely in larger-scale groups. If an individual deviates off-course, nearby robots can sense the problem and cooperate to fix it. As robots become cheaper and more numerous, with a continued trend in miniaturisation, this form of social behaviour could lead to revolutionary applications in the future.
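The paper's algorithm is not reproduced here, but a classic building block of this kind of swarm self-assembly – a hop-count "gradient" that each robot computes from its neighbours' messages – can be sketched in a few lines. The 1-D positions and communication radius below are invented for illustration:

```python
# Toy version of the hop-count "gradient" primitive used in swarm
# self-assembly: each robot takes 1 + the minimum value among neighbours
# within communication range, seeded from a source robot.
def gradient(positions, source=0, radius=1.5):
    INF = float("inf")
    values = [INF] * len(positions)
    values[source] = 0
    changed = True
    while changed:              # iterate to a fixed point, as the real swarm
        changed = False         # does asynchronously via infrared messages
        for i, pi in enumerate(positions):
            for j, pj in enumerate(positions):
                if i != j and abs(pi - pj) <= radius and values[j] + 1 < values[i]:
                    values[i] = values[j] + 1
                    changed = True
    return values

print(gradient([0.0, 1.0, 2.0, 3.0]))  # → [0, 1, 2, 3]
```

Because each robot's value depends only on its neighbours, a robot that drifts off-course simply recomputes a consistent value once it rejoins the group – one way local rules can absorb individual errors at scale.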

As Professor Radhika Nagpal explains in a press release: “Increasingly, we’re going to see large numbers of robots working together – whether it's hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways. Understanding how to design ‘good’ systems at that scale will be critical. We can simulate the behaviour of large swarms of robots, but a simulation can only go so far. The real-world dynamics – the physical interactions and variability – make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognise and prevent the failures that occur at these large scales.”

These latest developments are reported in the peer-reviewed journal Science.






14th August 2014

Robotic butlers to appear in hotels

From next week, guests at the Aloft hotel chain may feel like they are living in the future, as a new robotic butler offers its services.


robotic butler


Aloft Hotels has announced A.L.O. as the company’s first “Botlr” (robotic butler). This futuristic service will be introduced on 20th August, making Aloft the first major hotel brand to hire a robot for both front and back of house duties.

In this role, A.L.O. will be on call 24/7 as a robotic operative, assisting the human staff in delivering amenities to guest rooms. Professionally “dressed” in a custom shrink-wrapped, vinyl collared uniform and nametag, A.L.O. can modestly accept tweets as tips. It will not only free up time for employees, allowing them to create a more personalised experience for guests, but will also enhance the hotel’s image and technological features.

Brian McGuinness, Global Brand Leader: “As you can imagine, hiring for this particular position was a challenge as we were seeking a very specific set of automated skills, and one that could work – literally – around the clock. As soon as A.L.O. entered the room, we knew it was what we were looking for. A.L.O. has the work ethic of Wall-E, the humour of Rosie from The Jetsons and reminds me of my favourite childhood robot – R2-D2. We are excited to have it join our team.”



A.L.O. was developed by Savioke – a Silicon Valley-based startup, funded by Google Ventures, whose debut the robotics community had been eagerly anticipating. It uses a combination of sonar, lasers and cameras to avoid people and obstacles. It can facilitate and prioritise multiple guest deliveries, communicate easily with guests and various hotel platforms, and efficiently navigate throughout the property – even riding the elevator – using WiFi to communicate with the hotel's systems.

Steve Cousins, CEO of Savioke: “We are thrilled to introduce our robot to the world today through our relationship with Aloft Hotels. In our early testing, all of us at Savioke have seen the look of delight on those guests who receive a room delivery from a robot. We have also seen the front desk get busy at times, and expect Botlr will be especially helpful at those times, freeing up human talent to interact with guests on a personal level.”

The first A.L.O. reports for duty next week at Aloft Cupertino, next to the Apple HQ. If successful, all 100 of the company's hotels may introduce them during 2015. In the future, Cousins predicts a huge market for service robots like A.L.O.: “There are all these places, hotels, elder care facilities, hospitals, that have a few hundred robots maybe – but no significant numbers – and we think that's just a huge opportunity.”


hotel robot butler





13th August 2014

CO2 "sponge" could soak up pollution

A new polymer that could help to absorb man-made emissions from power plants has been announced by the American Chemical Society.


co2 sponge


A sponge-like plastic that soaks up the greenhouse gas carbon dioxide (CO2) might ease our transition away from polluting fossil fuels and toward new energy sources, such as hydrogen. The material — a relative of the plastics used in food containers — could play a role in the U.S. government's plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future. A report on the new material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place in San Francisco this week.

“The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well,” says Andrew Cooper, Ph.D. “It’s geared toward function in a real-world environment. In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.”

Adsorbents are most commonly used to remove greenhouse gas pollutants from smokestacks at power plants where fossil fuels are burned. However, Cooper and his team intend this adsorbent — a microporous organic polymer — for a different application. The new material would become part of an emerging technology called integrated gasification combined cycle (IGCC), which can convert fossil fuel into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation, because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated.

Cooper, who is from the University of Liverpool, claims that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, the adsorbent deflates and releases the CO2, which can then be collected for storage or conversion into useful compounds.

The material — a brown, sand-like powder — is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action.

One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, proving it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers — whether created from plastics or metals or in liquid form — do not always hold up well, he says. Another benefit of this new adsorbent is its ability to adsorb CO2 without also taking on water vapour, which can clog up other materials and make them less effective. Its low cost, reusability, and long lifetime also makes the sponge polymer attractive. In his report, Cooper also describes how it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust. Combining two types of scrubbers could make even better adsorbents, by harnessing the strengths of each.





12th August 2014

Stem cells show promise for stroke in pilot study

Worldwide, stroke is among the leading causes of death, killing over 6.2 million people each year. A new therapy using stem cells extracted from bone marrow has shown promising results in the first trial of its kind in humans.


stroke brain scans


Five patients received stem cell treatment in a pilot study conducted by Imperial College Healthcare NHS Trust and scientists at Imperial College London. The therapy was found to be safe, with all patients showing improvements in clinical measures of disability. These findings are published in the journal Stem Cells Translational Medicine. It is the first UK human trial of a stem cell treatment for acute stroke to be published.

The therapy uses a type of cell called CD34+ cells, a set of stem cells in the bone marrow that give rise to blood cells and blood vessel lining cells. Previous research has demonstrated that treatment using these cells can significantly improve recovery from stroke in animals. Rather than developing into brain cells themselves, the cells are thought to release chemicals that trigger the growth of new brain tissue and new blood vessels in the area damaged by stroke.

The patients were treated within seven days of a severe stroke – in contrast to several other stem cell trials, most of which have treated patients six months or more after the event. The Imperial researchers believe early treatment may improve the chances of a better recovery. A bone marrow sample was taken from each patient. The CD34+ cells were isolated from the sample and then infused into an artery that supplies the brain. No previous trial has selectively used CD34+ cells this early after a stroke.

Although the trial was mainly designed to assess the safety and tolerability of the treatment, the patients all showed improvements in their condition in clinical tests over a six-month follow-up period. Four out of five patients had the most severe type of stroke: only four per cent of people who experience this kind of stroke are expected to be alive and independent six months later. In the trial, all four of these patients were alive and three were independent after six months.

Dr Soma Banerjee, a lead author and Consultant in Stroke Medicine at Imperial College Healthcare NHS Trust, said: “This study showed that the treatment appears to be safe and that it’s feasible to treat patients early when they might be more likely to benefit. The improvements we saw in these patients are very encouraging, but it’s too early to draw definitive conclusions about the effectiveness of the therapy. We need to do more tests to work out the best dose and timescale for treatment before starting larger trials.”

Worldwide, stroke was the second most frequent cause of death in 2011, accounting for 6.2 million deaths (~11% of the total). The incidence of stroke increases exponentially from 30 years of age. Advanced age is among the most significant risk factors, with two-thirds of strokes occurring in those over the age of 65. However, stroke can occur at any age, including in childhood. Survivors can be affected by a wide range of mental and physical symptoms, and many never recover their independence. Stem cell therapy is seen as an exciting new potential avenue of treatment for stroke, but its exact role is yet to be clearly defined.

Dr Paul Bentley, also a lead author of the study, from the Department of Medicine at Imperial College London: “This is the first trial to isolate stem cells from human bone marrow and inject them directly into the damaged brain area using keyhole techniques. Our group are currently looking at new brain scanning techniques to monitor the effects of cells once they have been injected.”

Professor Nagy Habib, Principal Investigator of the study, from the Department of Surgery and Cancer at Imperial College London: "These are early but exciting data worth pursuing. Scientific evidence from our lab further supports the clinical findings and our aim is to develop a drug, based on the factors secreted by stem cells, that could be stored in the hospital pharmacy so that it is administered to the patient immediately following the diagnosis of stroke in the emergency room. This may diminish the minimum time to therapy and therefore optimise outcome. Now the hard work starts to raise funds for this exciting research.”





9th August 2014

Brain-like supercomputer the size of a postage stamp

Scientists at IBM Research have created a neuromorphic (brain-like) computer chip, featuring 1 million programmable neurons and 256 million programmable synapses.


brain computer


IBM this week unveiled "TrueNorth" – the most advanced and powerful computer chip of its kind ever built. This neurosynaptic processor is the first to achieve one million individually programmable neurons, sixteen times more than the current largest neuromorphic chip. Designed to mimic the structure of the human brain, it represents a major departure from older computer architectures of the last 70 years. By merging the pattern recognition abilities of neurosynaptic chips with traditional system layouts, researchers aim to create "holistic computing intelligence".

Measured by device count, TrueNorth is the largest IBM chip ever fabricated, with 5.4 billion transistors at 28nm. Yet it consumes under 70 milliwatts while running at biological real time – orders of magnitude less power than a typical modern processor. This amazing feat is made possible because neurosynaptic chips are event driven, as opposed to the "always on" operation of traditional chips. In other words, they function only when needed, resulting in vastly less energy use and a much cooler temperature. It is hoped this combination of ultra-efficient power consumption and entirely new system architecture will allow computers to far more accurately emulate the brain.

TrueNorth is composed of 4,096 cores, with each of these modules integrating memory, computation and communication. The cores are distributed in a parallel, flexible and fault-tolerant grid – able to continue operating when individual cores fail, similar to a biological system. And – like a brain cortex – adjacent TrueNorth chips can be seamlessly tiled and scaled up. To demonstrate this scalability, IBM also revealed a 16-chip motherboard with 16 million programmable neurons: roughly equivalent to a frog brain.
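The headline numbers are easy to cross-check: 4,096 cores with 256 neurons each, and 256 synaptic inputs per neuron, yield the quoted totals. The per-core figures follow from the article's own counts; the arithmetic is ours:

```python
# Cross-checking the TrueNorth figures quoted in the article.
cores = 4096
neurons_per_core = 256      # 4,096 x 256 = the "1 million" neurons
inputs_per_neuron = 256     # programmable synaptic inputs per neuron

neurons = cores * neurons_per_core       # 1,048,576
synapses = neurons * inputs_per_neuron   # 268,435,456 - marketed as
                                         # "256 million" (256 x 2**20)

# A 16-chip motherboard therefore carries ~16.8 million neurons:
# the "16 million" the article compares to a frog brain.
board_neurons = 16 * neurons
```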

Each of these "neurons" features 256 inputs, whereas the human brain averages 10,000. That may sound like a huge difference – but in the world of computers and technology, progress tends to be exponential. In other words, we could see machines as computationally powerful as a human brain within 10–15 years. The implications are staggering. When sufficiently scaled up, this new generation of "cognitive computers" could transform society, leading to a myriad of applications able to intelligently analyse visual, auditory, and multi-sensory data.





9th August 2014

New technology can extract audio from visual data

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analysing microscopic vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a crisp packet, photographed from 15 feet away through sound-proof glass.



In other experiments, the researchers extracted useful audio signals from videos of aluminium foil, the surface of a glass of water, and even the leaves of a potted plant. Their findings are presented at this year’s SIGGRAPH, the world's largest conference on computer graphics and interactive techniques.

“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realise that this information was there.”

Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera able to capture 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
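The sampling constraint described here is the Nyquist criterion: the recoverable audio frequency is at most half the frame rate. A one-line function makes the implied bandwidths concrete:

```python
# The highest audio frequency recoverable from video is half the
# frame rate (the Nyquist limit).
def max_audio_hz(frames_per_second):
    return frames_per_second / 2

# The high-speed camera used in the experiments:
low_end = max_audio_hz(2000)    # 1000 Hz
high_end = max_audio_hz(6000)   # 3000 Hz - comfortably covering the
                                # range where speech is intelligible

# An ordinary 60 fps camera directly captures vibrations only up to:
standard = max_audio_hz(60)     # 30 Hz
```

This is why the sensor quirk described in the next paragraph matters: it lets a 60 fps camera infer vibrations faster than its nominal Nyquist limit would allow.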

In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras' sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn't as faithful as that achieved with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room, the number of speakers, and even – given accurate enough information about the acoustic properties of speakers' voices – their identities.

The researchers’ technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a new kind of imaging: “We’re recovering sounds from objects. That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.”


crisp packet


In their experiments, the researchers have been measuring the material, mechanical, and structural properties of objects based on motions less than a tenth of a micrometre in size. That corresponds to 1/5000th of a pixel in close-up images — but it's possible to infer motions smaller than a pixel by looking at the way a single pixel’s colour value fluctuates over time.
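The last point – inferring sub-pixel motion from how a single pixel's value fluctuates over time – can be demonstrated on synthetic data. The edge model, contrast value and zero-crossing frequency estimate below are all invented for illustration, and are far simpler than the researchers' actual method:

```python
import math

# Synthetic demo: a pixel on a high-contrast edge brightens and darkens
# as the edge vibrates by far less than one pixel. All numbers invented.
FPS = 100                # "frames" per second
VIB_HZ = 5               # vibration frequency to recover
AMPLITUDE_PX = 0.02      # sub-pixel motion amplitude
EDGE_CONTRAST = 50.0     # brightness change per pixel of motion
PHASE = 0.5              # radians; keeps samples off exact zeros

frames = [128.0 + EDGE_CONTRAST * AMPLITUDE_PX *
          math.sin(2 * math.pi * VIB_HZ * t / FPS + PHASE)
          for t in range(FPS)]   # one second of video

# Recover the frequency from the pixel's intensity trace alone:
# mean-subtract, then count zero crossings (two per cycle).
mean = sum(frames) / len(frames)
trace = [f - mean for f in frames]
crossings = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
recovered_hz = crossings / 2   # the trace spans one second
```

Even though the motion is a fiftieth of a pixel, the vibration frequency falls straight out of one pixel's brightness history.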

“This is new and refreshing. It’s the kind of stuff that no other group would do right now,” says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. “We’re scientists, and sometimes we watch these movies, like James Bond, and we think, ‘This is Hollywood theatrics. It’s not possible to do that. This is ridiculous.’ And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there’s surveillance footage of his potato chip bag vibrating.”

However, technology of this kind may raise concerns over privacy in the future — particularly with ongoing, exponential advances in screen resolution, computer power and sensing abilities. Imagine a miniaturised version, for instance, able to be incorporated into glasses or even bionic eyes. The use of surveillance drones and high-definition CCTV will also increase greatly in the coming years. Looking at the more distant future, the algorithms will be orders of magnitude more accurate and detailed, possibly combined with X-ray camera vision to peer through walls and other intervening obstacles. Perhaps by then, we will enter a world in which privacy becomes a thing of the past.





8th August 2014

Air traffic growth will outpace carbon reduction efforts

Carbon reduction efforts by airlines will be outweighed by growth in air traffic, even if the most contentious mitigation measures are implemented, according to new research by the University of Southampton.




Even if proposed mitigation measures are agreed upon and put in place, air traffic growth rates are likely to outpace emission reductions, unless demand is substantially reduced.

"There is little doubt that increasing demand for air travel will continue for the foreseeable future," says Professor John Preston, travel expert and study co-author. "As a result, civil aviation is going to become an increasingly significant contributor to greenhouse gas emissions."

The authors of the new study – published in the journal Atmospheric Environment – have calculated that the ticket price increase necessary to drive down demand would value CO2 emissions at up to one hundred times current valuations.

“This would translate to a yearly 1.4 per cent increase in ticket prices, reversing the long-term trend of falling airfares,” says co-author and researcher Matt Grote. “Domestic ticket prices dropped by 1.3 per cent a year between 1979 and 2012, and international fares fell by 0.5 per cent per annum between 1990 and 2012.”
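Compounded over the periods quoted, those small annual percentages amount to large cumulative changes – a quick check of the figures (our arithmetic, not the study's):

```python
# Cumulative effect of the quoted annual fare changes, compounded.
def compound(annual_pct, years):
    return (1 + annual_pct / 100) ** years

# Domestic fares: -1.3% a year over 1979-2012 (33 years)
domestic = compound(-1.3, 33)       # ~0.65: fares fell about 35% overall

# International fares: -0.5% a year over 1990-2012 (22 years)
international = compound(-0.5, 22)  # ~0.90: about 10% cheaper overall
```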

However, the research suggests any move to suppress demand would be resisted by the airline industry and national governments. The researchers say a global regulator ‘with teeth’ is urgently needed to enforce CO2 emission reduction measures.

"Some mitigation measures can be left to the aviation sector to resolve," says Professor Ian Williams, Head of the Centre for Environmental Science at the University of Southampton. "For example, the industry will continue to seek improvements to fuel efficiency as this will reduce costs. However, other essential measures, such as securing international agreements, setting action plans, regulations and carbon standards will require political leadership at a global level."

The literature review conducted by the researchers suggests that the UN's International Civil Aviation Organisation (ICAO) "lacks the legal authority to force compliance and therefore is heavily reliant on voluntary cooperation and piecemeal agreements".

Current targets, set at the most recent ICAO Assembly Session last October, include a global average fuel-efficiency improvement of two per cent a year (up to 2050) and keeping global net CO2 emissions for international aviation at 2020 levels thereafter. Global market-based measures (MBM) have yet to be agreed upon, while Boeing predicts that the number of aircraft in service will double between 2011 and 2031.





7th August 2014

Implanted neurons could lead to Parkinson's cure

Scientists at the Luxembourg Centre for Systems Biomedicine (LCSB) have grafted neurons reprogrammed from skin cells into the brains of mice for the first time with long-term stability. Six months after implantation, the neurons had become fully functional and integrated into the brain. This successful demonstration of lastingly stable neuron implantation raises hope for future therapies in humans that could replace sick neurons with healthy ones in the brains of Parkinson’s disease patients, for example.

The LCSB research group led by Prof. Jens Schwamborn and Kathrin Hemmer is working continuously to bring cell replacement therapy to maturity as a treatment for neurodegenerative diseases. The path towards successful therapy in humans, however, is long. “Successes in human therapy are still a long way off, but I am sure successful cell replacement therapies will exist in future. Our research results have taken us a step further in this direction,” claims Prof. Schwamborn.


mouse neurons in brain
Credit: Luxembourg Centre for Systems Biomedicine (LCSB)


In their latest tests, the research group succeeded in creating stable nerve tissue in the brain from neurons that had been reprogrammed from skin cells. The stem cell researchers’ technique of producing neurons – or more specifically, induced neuronal stem cells (iNSC) – in a petri dish from the host’s own skin cells greatly improves the compatibility of the implanted cells. The treated mice showed no adverse side effects, even six months after implantation into the hippocampus and cortex regions of the brain. In fact it was quite the opposite – the implanted neurons were fully integrated into the complex network of the brain. The neurons exhibited normal activity and were connected to the original brain cells via newly formed synapses, the contact points between nerve cells.

These tests demonstrate that the scientists are continually gaining a better understanding of how to treat such cells in order to successfully replace damaged or dead tissue. “Building upon the current insights, we will now be looking specifically at the type of neurons that die off in the brain of Parkinson’s patients – namely the dopamine-producing neurons,” Schwamborn reports.

In the future, implanted neurons could produce the missing dopamine directly in the patient’s brain and transport it to the appropriate sites. This could amount to an actual cure – something that has so far been impossible. The researchers have published their results in the current issue of Stem Cell Reports.





6th August 2014

Rosetta probe arrives at comet 67P

As we reported last month, the European Space Agency's Rosetta probe has been nearing its destination: the icy comet 67P/Churyumov-Gerasimenko. After a journey of ten years, five months and four days – covering a distance of 4 billion miles (6.4 billion km) – it has finally arrived in orbit. The journey involved looping around the Sun five times, followed by a series of ten rendezvous manoeuvres that began in May to adjust its speed and trajectory to gradually match those of the comet, which is rushing towards the inner Solar System at nearly 34,000 mph (55,000 km/h). If any of those manoeuvres had failed, the mission would have been lost, and the spacecraft would simply have flown by the comet.
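The article's metric/imperial figures can be cross-checked with a quick conversion (the conversion factor is standard; the rounding matches the text):

```python
# Cross-checking the article's metric/imperial figures.
MILES_PER_KM = 0.621371   # standard conversion factor

distance_km = 6.4e9
distance_miles = distance_km * MILES_PER_KM   # ~3.98 billion: "4 billion"

speed_kmh = 55_000
speed_mph = speed_kmh * MILES_PER_KM          # ~34,175: "nearly 34,000 mph"
```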

ESA's Director of Science and Robotic Exploration, Alvaro Giménez: "Today's achievement is a result of a huge international endeavour spanning several decades. We have come an extraordinarily long way since the mission concept was first discussed in the late 1970s and approved in 1993, and now we are ready to open a treasure chest of scientific discovery that is destined to rewrite the textbooks on comets for even more decades to come."

Rosetta will now perform a detailed study of the comet, identifying a target site for the Philae robotic lander. As many as five possible landing sites will be identified by late August, before the primary site is selected in mid-September. The final timeline for the sequence of events for deploying Philae – currently expected on 11th November – will be confirmed by the middle of October. After landing, Rosetta will continue to accompany the comet until its closest approach to the Sun in August 2015 and beyond, watching its behaviour from close quarters to provide a unique insight and real-time experience of how a comet works as it hurtles around the Sun. This could reveal new clues about the origins of the Solar System, our home planet and life itself.


comet 67p


comet 67p


comet closeup image


comet close up image

All images credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA





4th August 2014

Five daily portions of fruit and vegetables may be enough to lower risk of early death

Eating five daily portions of fruit and vegetables is associated with a lower risk of death from any cause, particularly from cardiovascular disease – but beyond five portions appears to have no further effect, finds a new study.


fruit and vegetables


These results conflict with a recent study published in BMJ's Journal of Epidemiology and Community Health suggesting that seven or more daily portions of fruits and vegetables were linked to lowest risk of death.

There is growing evidence that increasing fruit and vegetable consumption is related to a lower risk of death from cardiovascular disease and cancer. However, the results are not entirely consistent. So a team of researchers based in China and the United States decided to examine the association between fruit and vegetable intake and risk of all-cause, cardiovascular, and cancer deaths.

They analysed the results of sixteen studies involving a total of 833,000 participants and 56,000 deaths. Differences in study design and quality were taken into account to minimise bias. Higher consumption of fruit and vegetables was significantly associated with a lower risk of death from all causes, particularly from cardiovascular diseases.

Average risk of death from all causes was reduced by five per cent for each additional daily serving of fruit and vegetables, while risk of cardiovascular death was reduced by four per cent per serving. But the researchers identified a threshold of around five servings per day, beyond which the risk of death did not reduce further.
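The reported dose–response can be summarised as a small function. Treating the five-per-cent-per-serving reduction as multiplicative, with a hard cutoff at five servings, is our simplification of the study's findings, not its actual statistical model:

```python
# Sketch of the reported dose-response: a 5% relative-risk reduction per
# daily serving of fruit and vegetables, flattening out at five servings.
def relative_risk(servings, reduction_per_serving=0.05, threshold=5):
    effective = min(servings, threshold)
    return (1 - reduction_per_serving) ** effective

rr_none = relative_risk(0)    # 1.0 (baseline)
rr_five = relative_risk(5)    # ~0.77
rr_seven = relative_risk(7)   # identical to five: no further benefit
```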

In contrast, a higher consumption of fruit and vegetables was not appreciably associated with risk of death from cancer. The researchers suggest that — as well as advice to eat adequate amounts of fruit and vegetables — the adverse effects of obesity, physical inactivity, smoking and high alcohol intake on cancer risk should be further emphasised.

Although a threshold of five servings was identified, the team reiterates the importance of regular fruit and vegetable intake, concluding that their study "provides further evidence that a higher consumption of fruits and vegetables is associated with a lower risk of mortality from all causes, particularly from cardiovascular diseases. The results support current recommendations to increase consumption of fruits and vegetables to promote health and longevity."





3rd August 2014

Tesla and Panasonic to build $5 billion "Gigafactory"

Tesla has reached an agreement with Panasonic to build a $5 billion "Gigafactory". This will produce more batteries than all other lithium-ion battery factories in the world combined, slashing costs by nearly one-third and boosting the adoption of electric vehicles.


tesla gigafactory 2020


Tesla Motors and Panasonic had been in talks for several months over a massive new factory to produce electric car batteries. This week, they signed an agreement to build the $5 billion facility. Dubbed the "Gigafactory," its location is still unknown – but sites are being evaluated in Arizona, California, Nevada, New Mexico and Texas. Tesla will be responsible for the land, buildings and utilities, while Panasonic will handle the equipment, manufacturing and supply side, based on their mutual approval.

Ground-breaking is planned for later this year, and the first batteries are expected to roll off the assembly line in 2017. By 2020, the factory is expected to produce 35 GWh of battery cells and 50 GWh of battery packs annually – enough for 500,000 vehicles per year. These will be used to power Tesla's Model S and Model X cars, along with the cheaper Model 3 sedan being introduced in 2017. The Model 3 is expected to cost around $35,000 – half the price of a Model S.
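Dividing the planned pack output by Tesla's stated 2020 target of batteries for 500,000 vehicles gives the implied average pack size – a calculation of ours, not a figure from the press release:

```python
# What the 2020 production targets imply per vehicle.
cell_output_gwh = 35         # GWh of battery cells per year
pack_output_gwh = 50         # GWh of finished packs per year
vehicles_per_year = 500_000  # Tesla's stated vehicle target for 2020

kwh_per_vehicle = pack_output_gwh * 1e6 / vehicles_per_year       # 100.0 kWh
cell_kwh_per_vehicle = cell_output_gwh * 1e6 / vehicles_per_year  # 70.0 kWh
# The gap between packs and cells suggests some cells would come from
# outside the Gigafactory - an inference, not a press-release statement.
```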

According to the press release, cost reductions at the Gigafactory will be driven by economies of scale previously impossible in battery cell production. Further savings will be achieved by manufacturing cells that have been optimised for electric vehicle design – both in size and function – by co-locating suppliers on-site to eliminate packaging, transportation and duty costs and inventory carrying costs, and by manufacturing at a location with lower utility and operating expenses. As shown in the rendering above, localised solar and wind turbines will be used to power the facility.

Tesla co-founder and CEO, Elon Musk, says there will eventually be a need for "several more" of these Gigafactories. Other efforts by Tesla to boost electric cars include its revolutionary supercharger network, which offers free high-speed charging that can replenish a battery in less than an hour. There are now more than 100 of these stations operating in the United States, with many more planned – covering 98 per cent of the population by the end of 2015. Networks are also being established in Europe and Asia. The company released its patents in June this year, to encourage the spread of its technology. Future historians will surely look back on Elon Musk favourably.





2nd August 2014

A new data transfer record: 43 terabits per second

A team in Denmark has broken the world record for single-fibre data transmission, achieving a transfer rate of 43 terabits per second over a distance of 41 miles (67 km). They also report a speed of 1 petabit per second (1,000 terabits) when combining multiple lasers.


43 terabits per second world data speed transfer record 2014


In 2009, a research group at the Technical University of Denmark (DTU) was the first to break the 1 terabit barrier for data transfer. Their record was shattered in 2011, when the Karlsruhe Institute of Technology in Germany achieved 26 terabits per second. Now, DTU have regained the title, demonstrating 43 terabits per second (Tbps) through a single optical fibre. This is fast enough to download a 1GB file in about 0.0002 seconds – or the entire contents of a 1TB hard drive in 0.2 seconds.
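The quoted download times follow directly from the bit rate, remembering that a byte is eight bits:

```python
# Transfer times at 43 terabits per second.
RATE_BPS = 43e12   # 43 Tbit/s

def transfer_seconds(size_bytes, rate_bps=RATE_BPS):
    return size_bytes * 8 / rate_bps   # 8 bits per byte

one_gb = transfer_seconds(1e9)    # ~0.000186 s: the article's "0.0002 s"
one_tb = transfer_seconds(1e12)   # ~0.186 s: the article's "0.2 s"
```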

The Danish team's effort may seem excessive – almost comical. However, current trends show that transfer speeds of this magnitude will be necessary in the relatively near future. Like a digital explosion, the Internet continues to expand and grow exponentially – doubling in size every two years. Improvements in video quality and image resolution mean the amount of data appearing online is mushrooming to enormous proportions, while at the same time, billions more people are gaining access to the web.

All of this requires energy – the Internet already accounts for about two per cent of global CO2 emissions. It is therefore essential to find solutions that significantly reduce the Internet's power consumption while simultaneously expanding its bandwidth.

DTU's researchers achieved their latest record using a new type of optical fibre from the Japanese telecoms giant NTT. This fibre contains seven cores (glass threads), instead of the single core used in standard fibres, making it possible to carry far more data. Despite having seven cores, the new fibre takes up no more space than the standard version.

As to when speeds in the tens of terabits range might be affordable to mainstream consumers, we reckon sometime in the 2030s.





1st August 2014

NASA reveals payload for Mars 2020 rover

NASA has announced the payload for its Mars 2020 rover mission, an upgraded version of the Curiosity rover currently exploring the Red Planet.


mars rover 2020


The next rover NASA sends to Mars, launching in 2020, will carry seven instruments for unprecedented science and exploratory investigations. The agency confirmed the selected payload yesterday at its headquarters in Washington. Managers made their selections from 58 proposals received in January from researchers and engineers worldwide – twice the usual number submitted for instrument competitions in the recent past, an indication of the extraordinary interest from the science community in the future exploration of Mars. The selected proposals have a total value of approximately $130 million for research and development.

The Mars 2020 mission will be based on the design of the highly successful Mars Science Laboratory rover – Curiosity – which landed in 2012 and is currently operating on Mars. The new rover will carry more sophisticated, upgraded hardware and new instruments to conduct geological assessments of the rover's landing site, determine the potential habitability of the environment, and directly search for signs of ancient Martian life. It will identify and store a collection of 30 rock and soil samples for return to Earth by a later mission. The rover will feature a new set of wheels, tougher and more durable than its predecessor's, potentially extending the mission's lifespan.

"The Mars 2020 rover, with these new advanced scientific instruments – including those from our international partners – holds the promise to unlock more mysteries of Mars' past as revealed in the geological record," said John Grunsfeld, a former astronaut, and associate administrator of NASA's Science Mission Directorate in Washington. "This mission will further our search for life in the universe and also offer opportunities to advance new capabilities in exploration technology."

The Mars 2020 rover will also help to advance our knowledge of how future human explorers could use natural resources available on the surface of the Red Planet. An ability to live off the Martian land would transform future exploration of the planet. Designers of manned expeditions can use this mission to understand the hazards posed by Martian dust and demonstrate technology to process carbon dioxide from the atmosphere to produce oxygen. These experiments will help engineers learn how to use Martian resources to produce oxygen for human respiration and potentially for use as an oxidiser for rocket fuel.
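MOXIE's approach is to split atmospheric CO2 into carbon monoxide and oxygen (2 CO2 → 2 CO + O2) via solid oxide electrolysis. A back-of-envelope stoichiometric calculation – illustrative only, using simple molar masses rather than any mission specification – gives the theoretical maximum oxygen yield per kilogram of CO2 processed:

```python
# Stoichiometric upper bound on oxygen yield from Martian CO2.
# Reaction: 2 CO2 -> 2 CO + O2 (solid oxide electrolysis).
# Illustrative figures only; molar masses in g/mol.

M_CO2 = 44.01   # molar mass of carbon dioxide
M_O2 = 32.00    # molar mass of molecular oxygen

# 2 mol of CO2 yields 1 mol of O2, so per kilogram of CO2:
o2_per_kg_co2 = M_O2 / (2 * M_CO2)   # kg O2 per kg CO2

print(f"Max O2 yield: {o2_per_kg_co2:.3f} kg per kg of CO2")  # ~0.364
```

In other words, every kilogram of Martian atmosphere fed through such a device could yield at most about a third of a kilogram of oxygen, before accounting for real-world efficiency losses.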

"The 2020 rover will help answer questions about the Martian environment that astronauts will face and test technologies they need before landing on, exploring and returning from the Red Planet," said William Gerstenmaier, associate administrator for the Human Exploration and Operations Mission Directorate at the NASA Headquarters in Washington. "Mars has resources needed to help sustain life, which can reduce the amount of supplies that human missions will need to carry. Better understanding the Martian dust and weather will be valuable data for planning human Mars missions. Testing ways to extract these resources and understand the environment will help make the pioneering of Mars feasible."


mars 2020 rover payload


The selected payload instruments are:

  • Mastcam-Z, an advanced hi-res camera system with panoramic, stereoscopic and zoom ability.
  • SuperCam, an instrument that can provide imaging, chemical composition analysis, and mineralogy. The instrument will also be able to detect the presence of organic compounds in rocks and regolith from a distance.
  • Planetary Instrument for X-ray Lithochemistry (PIXL), an X-ray fluorescence spectrometer that will also contain an imager with high resolution to determine the fine scale elemental composition of Martian surface materials. PIXL will provide capabilities that permit more detailed detection and analysis of chemical elements than ever before.
  • Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals (SHERLOC), a spectrometer that will provide fine-scale imaging and uses an ultraviolet (UV) laser to determine fine-scale mineralogy and detect organic compounds. SHERLOC will be the first UV Raman spectrometer to fly to the surface of Mars and will provide complementary measurements with other instruments in the payload.
  • The Mars Oxygen ISRU Experiment (MOXIE), a device that will produce oxygen from Martian atmospheric CO2, demonstrating a technology of critical importance in future manned exploration.
  • Mars Environmental Dynamics Analyzer (MEDA), a set of sensors that will provide measurements of temperature, wind speed and direction, pressure, relative humidity and dust size and shape.
  • Radar Imager for Mars' Subsurface Exploration (RIMFAX), a ground-penetrating radar that will provide centimetre-scale resolution of the geologic structure of the subsurface.

This announcement comes in the same week that an earlier rover – Opportunity, which landed in 2004 – set a new "off-world" record for the greatest distance driven, having travelled more than 25 miles (40 kilometres). It surpasses the previous record held by the Soviet Union's Lunokhod 2 rover, which travelled 24 miles (39 kilometres).




