
30th November 2016

A significant collapse of the West Antarctic ice sheet "within our lifetimes"

A massive iceberg that splintered away from West Antarctica was caused by a deep subsurface rift opening within the ice shelf – the first time such an event has been observed, say researchers. They warn that a significant collapse of the ice sheet is possible "within our lifetimes", with major consequences for sea level rise.

 

[Image: Rift in Pine Island Glacier ice shelf, West Antarctica. Credit: NASA/Nathan Kurtz]

 

A major glacier in Antarctica is breaking apart from the inside out – suggesting that the ocean is weakening ice on the edges. The Pine Island Glacier – part of the ice shelf that bounds the West Antarctic Ice Sheet – is one of two glaciers that researchers believe are most likely to undergo rapid retreat, bringing more ice from the interior of the ice sheet to the ocean, where its melting would flood coastlines around the world.

In 2015, a nearly 225 square mile (582 square kilometre) iceberg broke away from the glacier. However, it wasn’t until researchers were testing some new image-processing software that they noticed something strange in satellite images taken before the event. In these images, they saw evidence that a rift had formed at the very base of the ice shelf, nearly 20 miles (32 km) inland, during 2013. Over the next two years, the rift propagated upward until it broke through the ice surface and set the iceberg adrift over 12 days in late July and early August 2015.

"It's no longer a question of whether the West Antarctic Ice Sheet will melt, it's a question of when," said Ian Howat, an associate professor of earth sciences at Ohio State University. "This kind of rifting behaviour provides another mechanism for rapid retreat of these glaciers – adding to the probability that we may see significant collapse of West Antarctica in our lifetimes."

 

 

 

While this is the first time that researchers have witnessed a deep subsurface rift opening within Antarctic ice, they have seen similar breakups in the Greenland Ice Sheet – in spots where ocean water has seeped inland along the bedrock and begun to melt the ice from underneath. Howat said the satellite images provide the first strong evidence that these large Antarctic ice shelves respond to changes at their ocean edge in a similar way as observed in Greenland.

"Rifts usually form at the margins of an ice shelf, where the ice is thin and subject to shearing that rips it apart," he explained. "However, this latest event in the Pine Island Glacier was due to a rift that originated from the centre of the ice shelf and propagated out to the margins. This implies that something weakened the centre of the ice shelf, with the most likely explanation being a crevasse melted out at the bedrock level by a warming ocean."

Another clue: the rift opened in the bottom of a "valley" where the ice had thinned compared to the surrounding ice shelf. The valley is likely a sign of something researchers have long suspected: because the bottom of the West Antarctic Ice Sheet lies below sea level, ocean water can intrude far inland and remain unseen. New valleys forming on the surface would be one outward sign that ice was melting away far below.

The origin of the rift in the Pine Island Glacier would have gone unseen, too, except that the satellite images Howat and his team were analysing happened to be taken when the Sun was low in the sky. Long shadows cast across the ice drew their attention to the valley that had formed there.
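
The role of the sun angle is easy to appreciate with a standard hillshade calculation – the same illumination model terrain-visualisation software uses. Below is a minimal sketch (a generic illustration, not the team's actual processing pipeline; the toy "ice surface" is made up):

```python
import numpy as np

def hillshade(elevation, sun_altitude_deg, sun_azimuth_deg=315.0):
    """Simulated illumination of a surface: at low sun altitudes, even a
    shallow valley casts long shadows and stands out clearly."""
    zenith = np.radians(90.0 - sun_altitude_deg)
    azimuth = np.radians(sun_azimuth_deg)
    dz_dy, dz_dx = np.gradient(elevation)          # surface gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = (np.cos(zenith) * np.cos(slope) +
             np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    return np.clip(shade, 0.0, 1.0)

# Toy ice surface with a shallow valley running across it.
x = np.linspace(0.0, 10.0, 200)
surface = np.tile(5.0 - 0.5 * np.exp(-(x - 5.0)**2), (200, 1))

low_sun = hillshade(surface, sun_altitude_deg=5)    # long shadows: valley visible
high_sun = hillshade(surface, sun_altitude_deg=60)  # flat lighting: valley faint
print(low_sun.std() > high_sun.std())               # True: more contrast at low sun
```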

"The really troubling thing is that there are many of these valleys further up-glacier," Howat said. "If they are actually sites of weakness that are prone to rifting, we could potentially see more accelerated ice loss in Antarctica."

 

[Image credit: NASA]

 

More than half of the world's fresh water is frozen in Antarctica. The Pine Island Glacier and its nearby twin, the Thwaites Glacier, sit at the outer edge of one of the most active ice streams on the continent. Like corks in a bottle, they block the ice flow and keep nearly 10% of the West Antarctic Ice Sheet from draining into the sea.

Studies indicate that the West Antarctic Ice Sheet is particularly unstable, and could collapse within the next 100 years. This would lead to a sea level rise of nearly 3 metres (10 ft), engulfing major U.S. cities such as New York and Miami and displacing 150 million people living on coasts worldwide.
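
For scale, that 3-metre figure can be sanity-checked in one line: spreading a volume of meltwater over the global ocean surface (about 3.61 × 10^14 m², a standard reference value not given in the article) raises sea level by volume divided by area.

```python
# Rough sanity check: how much meltwater does a 3 m rise imply?
OCEAN_AREA_M2 = 3.61e14   # global ocean surface area (standard value)
rise_m = 3.0              # projected rise from a West Antarctic collapse
volume_m3 = rise_m * OCEAN_AREA_M2
print(f"{volume_m3:.2e} m^3  (~{volume_m3 / 1e9:,.0f} km^3 of water)")
# ~1.08e15 m^3, i.e. roughly a million cubic kilometres of ice
```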

"We need to understand exactly how these valleys and rifts form, and what they mean for ice shelf stability," Howat said. "We're limited in what information we can get from space, so this will mean targeting air and field campaigns to collect more detailed observations. The U.S. and the U.K. are partnering on a large field science program targeted at that area of Antarctica, so this will provide another piece to the puzzle."

The study by Howat and colleagues – "Accelerated ice shelf rifting and retreat at Pine Island Glacier, West Antarctica" – is published this week in the journal Geophysical Research Letters.


 

 

29th November 2016

The speed of light could be variable, say researchers

Scientists behind a theory that the speed of light is variable – and not constant as Einstein suggested – have produced a model that puts an exact figure on the spectral index, which they say makes it testable.

 


 

Scientists behind a theory that the speed of light is variable – and not constant as Einstein suggested – have made a prediction that could be tested.

Einstein postulated that the speed of light is the same for all observers in all situations – and this means that space and time can be different in different situations.

The assumption that the speed of light is fixed, and always has been, underpins many theories in physics, such as Einstein's theory of general relativity. It plays an especially important role in models of what happened during the very early universe, seconds after the Big Bang.

But some researchers have suggested that the speed of light could have been much higher in this early universe. Now, one of this theory's originators, Professor João Magueijo from Imperial College London, working with Dr Niayesh Afshordi at the Perimeter Institute in Canada, has made a prediction that could be used to test the theory's validity.

Large structures, such as galaxies, all formed from fluctuations in the early universe – tiny differences in density from one region to another. A record of these early fluctuations is imprinted on the cosmic microwave background – a map of the oldest light in the universe – in the form of a 'spectral index'.

 


 

Working with their theory that the fluctuations were influenced by a varying speed of light in the early universe, Professor Magueijo and Dr Afshordi have now used a model to put an exact figure on the spectral index. The predicted figure and model it is based on are published this month in the peer-reviewed journal Physical Review D.

Cosmologists have been getting ever more precise readings of this figure, so the prediction could soon be tested – either confirming or ruling out the team's model of the early universe. Their figure is a very precise 0.96478. This is close to the current estimate from readings of the cosmic microwave background, which put it at around 0.968, with some margin of error.
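
As a quick consistency check – assuming an error bar of roughly ±0.006 on the measured value, a representative figure that is not quoted in the article – the prediction currently sits well within the observational uncertainty:

```python
# Is the predicted spectral index consistent with CMB observations?
n_s_predicted = 0.96478            # Magueijo & Afshordi's exact figure
n_s_observed = 0.968               # current CMB-based estimate
sigma = 0.006                      # assumed observational error bar
z = abs(n_s_predicted - n_s_observed) / sigma
print(f"prediction is {z:.2f} sigma from the measurement")  # ~0.54 sigma
# Future, more precise measurements will shrink sigma and could
# either confirm the prediction or rule the model out.
```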

"The theory, which we first proposed in the late-1990s, has now reached a maturity point – it has produced a testable prediction. If observations in the near future do find this number to be accurate, it could lead to a modification of Einstein's theory of gravity," explains Professor Magueijo. "The idea that the speed of light could be variable was radical when first proposed – but with a numerical prediction, it becomes something physicists can actually test. If true, it would mean that the laws of nature were not always the same as they are today."

The testability of the varying speed of light theory sets it apart from the more mainstream rival theory: inflation. Inflation says that the early universe went through an extremely rapid expansion phase, much faster than the current rate of expansion of the universe.

 

[Image credit: Yinweichen, CC BY-SA 3.0, via Wikimedia Commons]

 

These theories are necessary to overcome what physicists call the 'horizon problem'. The universe as we see it today appears to be broadly the same everywhere. For example, it has a relatively homogeneous density.

This could only be true if all regions of the universe were able to influence each other. However, if the speed of light has always been the same, then not enough time has passed for light to have travelled to the edge of the universe, and 'even out' the energy.

As an analogy, to heat up a room evenly, the warm air from radiators at either end has to travel across the room and mix fully. The problem for the universe is that the 'room' – the observed size of the universe – appears to be too large for this to have happened in the time since it was formed.

The varying speed of light theory suggests that the speed of light was much higher in the early universe, allowing the distant edges to be connected as the universe expanded. The speed of light would have then dropped in a predictable way as the density of the universe changed. This variability led the team to their prediction published this month.
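
Schematically, the 'evening out' is limited by the particle horizon – the distance light can have travelled since the Big Bang. Written for a time-varying speed of light (a standard textbook form, not an equation taken from the paper):

```latex
\chi_{\mathrm{hor}}(t) \;=\; \int_{0}^{t} \frac{c(t')}{a(t')}\, \mathrm{d}t'
```

where a(t′) is the cosmic scale factor. A much larger c at early times enlarges the integral, allowing widely separated regions to interact and equilibrate without invoking inflation.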

The alternative theory is inflation, which attempts to solve this problem by saying that the very early universe "evened out" while incredibly small, and then suddenly expanded, with the uniformity already imprinted on it. While this means the speed of light and the other laws of physics as we know them are preserved, it requires the invention of an 'inflaton field' – a set of conditions that only existed at the time.


 

 

25th November 2016

North America’s largest tidal turbine array begins generating power

A massive new tidal turbine has been deployed on the coast of Nova Scotia, Canada, and is now connected to the electrical grid.

 


 

Cape Sharp Tidal, a joint venture between Emera and OpenHydro, is now lighting up homes in Nova Scotia, Canada, after being successfully connected to the grid. The first turbine of what will be North America’s largest tidal array, this 2 megawatt (MW) machine weighs 1,000 tonnes and stands as tall as a five-storey building when fitted on its base. Following deployment in the Bay of Fundy, a marine operations team safely connected its subsea cable to an onshore substation and then thoroughly tested the entire system, including all monitors and communications links. More than 300 people have been employed on the project in areas such as fabrication, environmental monitoring, engineering, health and safety, and marine services.

“This is a huge achievement for Cape Sharp Tidal, a company combining DCNS, OpenHydro and our partners Emera,” said Thierry Kalanquin, Chairman of OpenHydro and Senior Vice President of Energy and Marine Infrastructure at DCNS. “Last week, the Open-Centre Turbine supported the most powerful tidal stream of the year without any stress to the system. The successful delivery of this turbine, the most powerful in North America, also represents a significant milestone for the global tidal industry.”

The Cape Sharp Tidal turbine has a simple design with four key components: a horizontal axis rotor, a permanent magnet generator, a hydrodynamic duct and a subsea gravity base foundation. The turbine base sits directly on the seabed, out of the way of ships, and without drilling. Nova Scotia’s tides are some of the most powerful in the world. All operational surfaces are treated with anti-fouling protection, to minimise growth from algae and zooplankton, which could affect the generator and the drag coefficients of the structure.

The machine will be joined by another turbine next year, the pair together producing 4 MW from the strength of the tides. Each will displace the need to burn 1,000 tonnes of coal, eliminating 3,000 tonnes of CO2 emissions. Subject to regulatory approval, the array will grow to 16 MW of capacity in 2017, 50 MW in 2019, and up to 300 MW in the 2020s, generating power for nearly 75,000 customers.
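
As a rough check on the "75,000 customers" figure: assuming a capacity factor of around 40% (plausible for tidal, though not stated in the article) and an average household consumption of about 14 MWh per year, the numbers line up:

```python
# Back-of-envelope: can 300 MW of tidal capacity serve ~75,000 customers?
capacity_mw = 300
capacity_factor = 0.40        # assumed; tides are intermittent but predictable
hours_per_year = 8760
annual_mwh = capacity_mw * capacity_factor * hours_per_year
per_home_mwh = 14             # assumed annual household consumption
print(f"{annual_mwh:,.0f} MWh/yr -> ~{annual_mwh / per_home_mwh:,.0f} homes")
# 1,051,200 MWh/yr -> ~75,086 homes, consistent with the quoted figure
```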

 


 

“Cape Sharp Tidal will be one of the largest generating, in-stream tidal energy arrays anywhere in the world,” Kalanquin added. “The project is providing us with unique insights into what is required to build commercial-scale arrays. It will help us accelerate delivery of the pipeline of ocean energy projects we have secured across the globe and grow our position at the forefront of the tidal power industry.”

Data is being collected from a number of monitoring devices mounted on the turbine to collect information on fish and mammal interactions with the structure. Monitoring reports will contribute to a growing international body of research. The turbine makes about six to eight rotations per minute – similar to walking speed – with fish and mammals able to swim through the 4.5-metre centre.

“We know from other turbine installations around the world that fish and marine mammals are not colliding with turbines,” said Sarah Dawson, spokesperson for Cape Sharp Tidal. “In ten years of similar devices installed in Scotland, there hasn’t been a single incident where any marine mammal – dolphin or whale – has collided.”

Cape Sharp Tidal is an important part of Nova Scotia’s energy future. By 2020, 40% of the province's electricity must be generated from renewable sources. Clean and efficient tidal power can be part of the solution, while also creating an entirely new industry and jobs.

Seawater is roughly 830 times denser than air, so a tidal current carries far more kinetic energy than wind moving at the same speed. Ocean currents therefore have extremely high energy density and can be harnessed with much smaller devices than wind power requires. Since oceans cover 70% of Earth’s surface, ocean energy (including wave power, tidal current power and ocean thermal energy conversion) is a vast untapped resource, estimated at 3,000 terawatt hours (TWh) per year. For comparison, this is greater than all of the nuclear power generation in the world (2,500 TWh during 2011).
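
The density advantage is straightforward to quantify: the kinetic power flowing through each square metre of a turbine's swept area is ½ρv³, a textbook formula (the comparison below is illustrative, not from the article):

```python
def kinetic_power_density(rho, v):
    """Kinetic power per square metre of swept area: P/A = 0.5 * rho * v^3."""
    return 0.5 * rho * v**3

KNOT = 0.5144  # metres per second

tidal = kinetic_power_density(rho=1025.0, v=5 * KNOT)  # 5-knot seawater stream
wind = kinetic_power_density(rho=1.225, v=5 * KNOT)    # air at the same speed
print(f"tidal: {tidal:,.0f} W/m^2 vs wind at same speed: {wind:.1f} W/m^2")
# ~8,700 W/m^2 vs ~10 W/m^2: the ~840x density ratio carries straight through,
# which is why a tidal rotor can be far smaller than a wind rotor of equal power
```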

 


 

 

25th November 2016

Tesla demonstrates its self-driving car technology

Innovative American car company Tesla has released a video showcasing the self-driving technology that will be included in all vehicles it manufactures from now on.

 
[Video: Tesla's Enhanced Autopilot demonstration]
 

The video above demonstrates just how advanced Tesla's Enhanced Autopilot hardware is. The time-lapse footage follows the car on its journey as it correctly follows the rules of the road, identifying road signs, traffic management systems and other road users. A person is seen sitting in the car, but the video makes clear that this is purely for legal reasons.

The automated system comes equipped with eight cameras, providing full 360° visibility around the vehicle at up to 250 metres' range. A dozen updated ultrasonic sensors detect both hard and soft objects at nearly twice the distance of Tesla's previous hardware. A forward-facing radar gives additional data about the driving environment on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead.

Tesla's Chief Executive Elon Musk certainly has faith in the technology and has predicted that by the end of 2017 a Tesla will be able to drive itself from one US coast to the other. Drivers wanting to adopt this new technology will have to be patient, however: in addition to awaiting legislation, Tesla plans to conduct millions of miles of testing to ensure the system can be operated safely.

 


 

Earlier this month, Tesla agreed a deal to buy German automated manufacturing specialists Grohmann Engineering, in a bid to accelerate production. The firm's founder, Klaus Grohmann, will also be joining Tesla to head a new division within the automaker, called Tesla Advanced Automation Germany.

"Because automation is such a vital part of the future of Tesla, the phrase I've used before is that it's about building the machine that's building the machine," Musk commented. "That actually becomes more important than the machine itself as the volume increases. We think it's important to bring in world-class engineering talent and our first choice was Grohmann."


 

 

24th November 2016

Huge deposit of water ice found below surface of Mars

NASA reports that its Mars Reconnaissance Orbiter has found a huge deposit of water ice just under the surface of the planet Mars, in the region known as Utopia Planitia.

 

[Image: Utopia Planitia on Mars. Credit: NASA]

 

Frozen beneath a region of cracked and pitted plains on Mars lies a volume of water equivalent to Lake Superior, the largest of the Great Lakes, researchers using NASA's Mars Reconnaissance Orbiter have determined.

Scientists examined part of Mars' Utopia Planitia region, in the mid-northern latitudes, which forms part of the largest recognised impact basin on Mars – and in the Solar System – with an estimated diameter of 3,300 km. It is also where the Viking 2 lander made its historic touchdown in September 1976.

The Mars Reconnaissance Orbiter's ground-penetrating Shallow Radar (SHARAD) instrument was used to record data from over 600 overhead passes, revealing a deposit more extensive in area than the state of New Mexico. The water ice ranges in thickness from about 80 metres (260 feet) to about 170 metres (560 feet) with a composition that is 50 to 85 percent water ice, mixed with dust or larger rocky particles.
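
Those figures allow a rough volume estimate. Taking New Mexico's area (about 315,000 km², a figure not given in the article) as a lower bound, the midpoints of the reported ranges land in the same order of magnitude as the Lake Superior comparison:

```python
# Order-of-magnitude check against the 'Lake Superior' comparison
area_km2 = 315_000                 # New Mexico; the deposit is somewhat larger
thickness_km = (0.080 + 0.170) / 2 # midpoint of the 80-170 m range
ice_fraction = (0.50 + 0.85) / 2   # midpoint of the 50-85% composition range
water_ice_km3 = area_km2 * thickness_km * ice_fraction
print(f"~{water_ice_km3:,.0f} km^3 of water ice")
# ~26,600 km^3; Lake Superior holds ~12,100 km^3, so the comparison is
# conservative under these midpoint assumptions
```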

At the latitude of this deposit, halfway from the equator to the pole, water ice cannot persist on the surface today. It sublimes into water vapour in the planet's thin, dry atmosphere. The Utopia deposit is shielded from the atmosphere by a soil covering estimated to be 1 to 10 metres (3 to 33 feet) thick.

 


 

"This deposit probably formed as snowfall accumulating into an ice sheet mixed with dust, during a period in Mars history when the planet's axis was more tilted than it is today," said Cassie Stuurman of the Institute for Geophysics at the University of Texas, Austin. She is the lead author of a report in the journal Geophysical Research Letters.

The name Utopia Planitia translates as the "plains of paradise." The newly surveyed ice deposit represents less than one percent of all known water ice on Mars – but it more than doubles the volume of thick, buried ice sheets known in the northern plains. Ice deposits close to the surface are being considered as a resource for astronauts.

"This deposit is probably more accessible than most water ice on Mars, because it is at a relatively low latitude and it lies in a flat, smooth area where landing a spacecraft would be easier than at some of the other areas with buried ice," said Jack Holt of the University of Texas, co-author of the paper.

 

[Image: Location and distribution of the water ice. Credit: NASA]

 

"It's important to expand what we know about the distribution and quantity of Martian water," said Deputy Project Scientist Leslie Tamppari, of NASA's Jet Propulsion Laboratory. "We know early Mars had enough liquid water on the surface for rivers and lakes. Where did it go? Much of it left the planet from the top of the atmosphere. Other missions have been examining that process. But there's also a large quantity that is now underground ice, and we want to keep learning more about that."

"The ice deposits in Utopia Planitia aren't just an exploration resource, they're also one of the most accessible climate change records on Mars," explains Joe Levy of the University of Texas, a co-author of the new study. "We don't understand fully why ice has built up in some areas of the Martian surface and not in others. Sampling and using this ice with a future mission could help keep astronauts alive, while also helping them unlock the secrets of Martian ice ages."

Evidence of a recent, extreme ice age on Mars was published in the journal Science earlier this year. Just 370,000 years ago, the planet would have appeared more white than red.

 


 

 

22nd November 2016

Green gas can replace traditional fossil fuel-based gas

British clean energy company Ecotricity has unveiled its plan to make gas from grass grown on marginal farmland – of which Britain has enough to heat almost every home in the country.

 

[Image credit: Ecotricity]

 

Ecotricity has unveiled its plan for Britain to make its own gas from grass grown on marginal farmland – of which Britain has enough to heat almost every home in the country. The company outlines this potential in a new report, Green Gas: The Opportunity for Britain, which shows that green gas from grass could meet the gas needs of 97% of British homes, pump £7.5 billion annually into the economy, and create a new industry with up to 150,000 jobs.

Green gas made this way is virtually carbon neutral, so could play a significant role in Britain meeting its climate targets. Additionally, it would create new habitats for wildlife on an unprecedented scale. Ecotricity has just received permission to build a prototype 'green gas mill', the first of its kind in Britain. This is expected to be operational in 2018. Grass at the plant will be turned into biomethane within 45 days and then injected into the national network. Around 5,000 of these mills could supply the entire country by 2035, if sufficient efforts were made to scale up production.

Dale Vince, Ecotricity founder, said: "As North Sea reserves run out, the big question is where we're going to get our gas from next. The government thinks fracking is the answer, but this new report shows that we have a better option. Recently, it's become possible to make green gas and put it into the grid, in the same way we've been doing with green electricity for the last two decades. The current way of doing that is through energy crops and food waste – but both have their drawbacks. Through our research, we've found that using grass is a better alternative, and has none of the drawbacks of energy crops, food waste or fracking – in fact, it has no drawbacks at all."

"Our first Green Gas Mill has just been given the go-ahead, and we hope to build it soon – though that does depend on whether government energy policy will support this simple, benign and abundant energy source."

 

[Image credit: Ecotricity]

 

"As our report shows, the benefits of Britain making its gas this way are astounding. And in the light of this new option available to us, I call on Theresa May to review the government's plan for where Britain gets its gas – post-North Sea.

"We now have a more than viable alternative to fracking, which people have been fighting tooth and nail up and down the country to prevent. It's not too late, because fracking hasn't started yet. We need a proper review of where Britain gets its gas from – we can either frack the countryside, or we can grow the grass. It's that simple."

In summary: using green gas from grass would cut CO2 emissions, help Britain become energy independent, support food production by improving soils, create wildlife habitats, and provide support for farmers who are set to lose EU subsidies following Brexit.

Lynne Featherstone, Liberal Democrat MP, commented: "If the government would only throw its weight behind green gas, it would go a long way to delivering on our renewable heating targets and secure our energy for the future. I am very grateful to Ecotricity and others who want and are willing to push forward on this vital part of our energy mix."

Doug Parr, Chief Scientist and Policy Director of Greenpeace UK, said: "As long as it's not competing with food production, green gas like this project can be really helpful in getting the UK onto a cleaner and lower carbon path. Agriculture need not simply be part of the problem in tackling climate change, but through innovation it can be part of the solution, and improve wildlife habitats at the same time."

 

 

 


 

 

21st November 2016

Gut tissue grown from stem cells

Researchers have used human pluripotent stem cells to grow human intestinal tissues with functioning nerves, and then used these to recreate and study a severe intestinal nerve disorder.

 


 

Gut tissue is highly complex, so it is difficult to create in the lab. It has an inner layer that absorbs nutrients and secretes digestive enzymes, muscles that push food along its length, and nerves that coordinate muscle contractions.

However, researchers at Cincinnati Children's Hospital Medical Center have achieved a breakthrough in creating lab-grown gut tissue. They report using human pluripotent stem cells to grow human intestinal tissue with functioning nerves, which pulses like the real thing. They later used this tissue to recreate and study a severe intestinal nerve disorder called Hirschsprung's disease.

Published in the journal Nature Medicine, their findings describe an unprecedented approach to engineer and study tissues in the intestine – the body's largest immune organ, its food processor and main interface with the outside world. The study authors believe that medical science is now a step closer to using human pluripotent stem cells (which can become any cell type in the body) for regenerative medicine and growing patient-specific human intestines for transplant.

"One day, this technology will allow us to grow a section of healthy intestine for transplant into a patient – but the ability to use it now, to test and ask countless new questions, will help human health to the greatest extent," said Michael Helmrath, MD, co-lead study investigator.

This ability starts with being able to model and study intestinal disorders in functioning human organ tissue with genetically-specific patient cells. It will also allow researchers to test new therapeutics in functioning lab-grown human intestine before clinical trials in patients.

 

[Image: Human intestinal organoids with nerves. Credit: Cincinnati Children's Hospital Medical Center]

 

"Many oral medications give you diarrhoea, cramps and impair intestinal motility. A fairly immediate goal for this technology that would help the largest number of people is as a first-pass screen for new drugs to look for off-target toxicities and prevent side effects in the intestine," explained Jim Wells, PhD, co-lead investigator and director of the Pluripotent Stem Cell Facility at Cincinnati Children's.

"We tried a few different approaches largely based on the hypothesis that, if you put the right cells together at the right time in the petri dish, they'll know what to do. It was a longshot, but it worked," said Wells.

The appropriate mix caused enteric nerve precursor cells and intestines to grow together in a manner resembling developing fetal intestine. The result was the first evidence for generating complex and functional three-dimensional intestinal organoids in a petri dish, and fully derived from human pluripotent stem cells.

"This is one of the most complex tissues to have been engineered," said Wells, who explained that the gastrointestinal tract contains the second largest number of nerves in the human body. He and colleagues used their tissue to study a rare form of Hirschsprung's disease – a condition in which the rectum and colon fail to develop a normal nervous system. A severe form of Hirschsprung's is caused by a fault in the gene PHOX2B. Tests in a petri dish and mice demonstrated that mutating PHOX2B causes profound detrimental changes to innervated intestinal tissues.

Helmrath is now making and testing hollow tubes of the lab-grown tissue. These are 2 centimetres long, but if extended to 10 centimetres, they could make good transplants for short bowel syndrome, a condition that can affect premature babies.

As science continues to learn more about how important intestinal health is to overall health, using functioning lab-generated human intestine creates an array of new research opportunities, Wells and Helmrath said. This will include the ability to conduct deeper studies into nutritional health, diabetes, severe intestinal diseases like inflammatory bowel disease and Crohn's disease, and other biochemical changes in the body.

Their work is described today in the paper, Engineered human pluripotent-stem-cell-derived intestinal tissues with a functional enteric nervous system.

 

 

 


 

 

20th November 2016

Researchers discover new antibiotics by sifting through the human microbiome

Scientists at Rockefeller University have identified which genes in a microbe's genome ought to produce antibiotic compounds and then synthesised those compounds to discover two promising new antibiotics.

 

[Image credit: Sean Brady]

 

Most antibiotics in use today are based on natural molecules produced by bacteria, and given the rise of antibiotic resistance, there's an urgent need to find more of them. Yet coaxing bacteria to produce new antibiotics is a tricky proposition. Most bacteria won't grow in the lab. And even when they do, most of the genes that cause them to churn out molecules with antibiotic properties never get switched on.

Researchers at the Rockefeller University in New York have found a way around these problems, however. By using computational methods to identify which genes in a microbe's genome ought to produce antibiotic compounds and then synthesising those compounds themselves, they were able to discover two promising new antibiotics without having to culture a single bacterium.

The team, led by Sean Brady, head of the Laboratory of Genetically Encoded Small Molecules, began by trawling publicly available databases for the genomes of bacteria that reside in the human body. They then used specialised computer software to scan hundreds of those genomes for clusters of genes that were likely to produce molecules known as non-ribosomal peptides, which form the basis of many antibiotics. They also used the software to predict the chemical structures of the molecules that the gene clusters ought to produce.
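
The team's own cluster-detection software isn't described here, but the flavour of that first scanning step can be sketched with Biopython, flagging genes whose annotations suggest non-ribosomal peptide synthetase (NRPS) machinery. The file name and keyword list are illustrative, and a real pipeline would use domain models rather than text matching:

```python
from Bio import SeqIO  # pip install biopython

NRPS_KEYWORDS = ("non-ribosomal peptide synthetase", "nrps")  # illustrative

def nrps_like_genes(genbank_path):
    """Yield coordinates of CDS features whose product annotation
    suggests non-ribosomal peptide synthetase machinery."""
    for record in SeqIO.parse(genbank_path, "genbank"):
        for feature in record.features:
            if feature.type != "CDS":
                continue
            product = feature.qualifiers.get("product", [""])[0].lower()
            if any(keyword in product for keyword in NRPS_KEYWORDS):
                yield record.id, int(feature.location.start), int(feature.location.end)

# Neighbouring hits hint at a biosynthetic gene cluster worth a closer look.
for contig, start, end in nrps_like_genes("rhodococcus_genome.gb"):
    print(contig, start, end)
```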

The software initially identified 57 potentially useful gene clusters, which the researchers winnowed down to 30. Brady and his colleagues then used a method called solid-phase peptide synthesis to manufacture 25 different chemical compounds. By testing those compounds against human pathogens, the researchers successfully identified two closely related antibiotics, which they dubbed humimycin A and humimycin B. Both are found in a family of bacteria called Rhodococcus – microbes that had never yielded anything resembling the humimycins when cultured via traditional laboratory techniques.

 

[Image: Rhodococcus. Credit: Jerry Sims]

 

The humimycins proved especially effective against Staphylococcus and Streptococcus bacteria, which can cause dangerous infections in humans and tend to grow resistant to various antibiotics. Further experiments suggested that the humimycins work by inhibiting an enzyme that bacteria use to build their cell walls – and once that cell wall-building pathway is interrupted, the bacteria die.

A similar mode of action is employed by beta-lactams, a broad class of commonly prescribed antibiotics whose effect often wanes as bacteria develop ways to resist them. Yet the scientists found that one of the humimycins could be used to re-sensitise bacteria to beta-lactams that they had previously outsmarted.

In one experiment, they exposed beta-lactam resistant Staphylococcus microbes to humimycin A in combination with a beta-lactam antibiotic, and the bugs once again succumbed. Remarkably, that held true even when humimycin A had little effect by itself – a result that Brady attributes to the fact that both compounds work by interrupting different steps in the same biological pathway.

"It's like taking a hose and pinching it in two spots," explains Prof. Brady. Even if neither kink halts the flow altogether on its own, "eventually, no more water comes through."

To further test that proposition, Brady and his colleagues infected mice with a beta-lactam resistant strain of Staphylococcus aureus, a microbe that often causes antibiotic-resistant infections in hospital patients. Mice that were subsequently treated with a mixture containing both humimycin A and a beta-lactam antibiotic fared far better than those treated with only one drug or the other – a finding that could point towards a new treatment regimen for humans infected with beta-lactam resistant S. aureus.

Brady hopes that this discovery will inspire scientists to mine the genomes of bacteria for more molecules that could yield similarly useful results. And he looks forward to applying his methods to the many bacterial species beyond the human microbiome, which might harbour their own molecular treasures – not to mention the even greater number of bacteria whose genomes have not yet been sequenced, but that undoubtedly will be over time.


 

 

16th November 2016

Solar nanotech clothing could revolutionise wearable technology

Scientists at the University of Central Florida are researching and developing solar nanotech-powered clothing.

 

[Image credit: University of Central Florida]

 

Scientists have developed filaments that harvest and store the Sun's energy – and can be woven into textiles. The breakthrough would essentially turn jackets and other clothing into wearable, solar-powered batteries that never need to be plugged in. It could one day revolutionise wearable technology, helping everyone from soldiers who now carry heavy loads of batteries, to smartphone users who could charge a device while on the move by simply placing it in their pocket.

"The idea came to me: we make energy storage devices and we make solar cells in the labs. Why not combine these two devices together?" said Associate Professor Jayan Thomas, a nanotechnology scientist at the University of Central Florida's NanoScience Technology Centre.

Thomas has already been lauded for earlier ground-breaking research. Last year, he received an R&D 100 Award – given to the top inventions of the year worldwide – for his development of a cable that not only transmits energy like a normal cable, but can also store energy like a battery. He's also developing a semi-transparent solar cell that can be applied to windows, allowing some light to pass through, while simultaneously harvesting energy. This new work builds on that research.

Thomas was inspired by the clothing worn by Marty McFly in 80s sci-fi classic, Back to the Future Part II: "That movie was the motivation," he says. "If you can develop self-charging clothes or textiles, you can realise those cinematic fantasies – that's the cool thing."

His research team developed filaments in the form of copper ribbons that are thin, flexible and lightweight. The ribbons have a solar cell on one side and energy-storing layers on the other. Using a small, tabletop loom, the ribbons were woven into a square of yarn.

 

[Image credit: University of Central Florida]

 

The proof-of-concept shows that the filaments could be laced throughout jackets or other outerwear to harvest and store energy to power phones, personal health sensors and other tech gadgets. It's an advancement that overcomes the main shortcoming of solar cells: the energy they produce must flow into the power grid or be stored in a battery, which limits their portability.

"A major application could be with our military," Thomas explains. "When you think about our soldiers in Iraq or Afghanistan, they're walking in the Sun. Some of them are carrying over 30 pounds of batteries on their bodies. It's hard for the military to deliver batteries to these soldiers in this hostile environment. A garment like this can harvest and store energy at the same time if sunlight is available."

There are a host of other potential uses, including electric cars that could generate and store energy whenever they're in the Sun.

"That's the future. What we've done is demonstrate that it can be made," Thomas said. "It's going to be very useful for the general public and the military and many other applications."

His team's research is published in the academic journal Nature Communications.


 

 

16th November 2016

Genetically modified "superwheat" could boost yields by 40%

Researchers in the UK have announced a genetically modified "superwheat" that increases the efficiency of photosynthesis to boost yields by 20 to 40 percent. Field trials are expected in 2017.

 


 

With global population expected to reach 10 billion by mid-century, and rapid economic growth in emerging nations, the world's agricultural systems will be under immense pressure. The UN Food and Agriculture Organisation (FAO) estimates that global agricultural production in 2050 will need to increase by at least 60% compared to now. But with arable land declining due to climate change, soil depletion and other environmental issues, achieving this could be a major challenge. Wheat grain is grown on more land area than any other commercial crop and is the most important staple food for humans, but the growth rate in yields has slowed significantly in recent years. This trend is likely to worsen in the future, with a 6% drop in production for every 1°C rise in global temperature.

One possible way to increase yields is through genetic engineering. The public tends to be distrustful of such methods, according to most opinion polls. However, the vast majority of scientists agree that genetically modified (GM) foods are safe. The American Association for the Advancement of Science states on its website: "consuming foods containing ingredients derived from GM crops is no riskier than consuming the same foods containing ingredients from crop plants modified by conventional plant improvement techniques." Meanwhile, the U.S. National Academy of Sciences states that "no adverse health effects attributed to genetic engineering have been documented in the human population," and a report issued by the European Commission made the same claim. The World Health Organisation concludes that GM foods "are not likely, nor have been shown, to present risks for human health."

None of the GM crops widely grown around the world are currently designed to boost yields directly – but that could be about to change, as scientists in the UK have announced a potentially transformative breakthrough. A team at Rothamsted Research, the longest running agricultural research station in the world, collaborated with the Universities of Essex and Lancaster. They believe it may be possible to increase wheat yields by up to 40%, based on the promising early results of a glasshouse trial.

The researchers focused on improving the efficiency of photosynthesis, the process by which energy from sunlight increases plant biomass, by adding genes from a grass called stiff brome (pictured below). The new GM wheat was found to assimilate carbon dioxide better than conventional wheat.

 

[Image: Brachypodium distachyon (stiff brome). Credit: Neil Harris, University of Alberta, CC BY-SA 4.0, via Wikimedia Commons]

 

“We have seen yield increases of 20 to 40% in greenhouse pots, although this is not a yield indication for the field,” said Christine Raines from the University of Essex at a briefing. “The efficiency of the process of photosynthesis integrated over the season is the major determinant of crop yield. However, to date, photosynthesis has not been used to select for high yielding crops in conventional breeding programmes and represents an unexploited opportunity. But there is now evidence that improving the efficiency of photosynthesis by genetic modification is one of the promising approaches to achieve higher wheat yield potential.”

“In this project, we have genetically modified wheat plants to increase the efficiency of the conversion of energy from sunlight into biomass. We have shown that these plants carry out photosynthesis more efficiently in glasshouse conditions. One of the steps in photosynthesis shown to limit this process is carried out by the enzyme sedoheptulose-1,7-bisphosphatase (SBPase). We have engineered GM wheat plants to produce increased levels of SBPase by introducing an SBPase gene from Brachypodium distachyon (common name stiff brome), a plant species related to wheat and used as a model in laboratory experiments,” Raines added.

 

[Image: Christine Raines. Credit: University of Essex]

 

Rothamsted Research has now applied to the UK's Department for Environment, Food and Rural Affairs (DEFRA) for permission to conduct a field trial of their GM wheat plants. If approved, just under 100 sq m of the crop will be grown in Rothamsted’s fenced GM-dedicated growing area from spring 2017.

“If we are granted permission to perform a controlled experiment in our already established facilities here at Rothamsted Research, it will be a significant step forward,” said Dr Malcolm Hawkesford, Head of the Plant Biology and Crop Science Department at Rothamsted and the lead scientist for this trial. “We will be able to assess in ‘real environmental conditions’ the potential of these plants to ultimately produce more, using the same resources and land area as their non-GM counterparts. These field trials are the only way to assess the viability of a solution that can bring economic benefits to farmers, returns to the UK tax payer of the long-term investment in this research, benefits to the economy as a whole and the environment in general.”

 


 

Dr Elizabete Carmo-Silva, co-investigator in this project at Lancaster University, added: “We have produced two types of plants: one in which two extra copies of SBPase are functional and one in which six extra copies of SBPase are functional. If granted permission to carry out the field trial, we will measure the photosynthetic efficiency of the plants in the field and we will determine total above-ground plant biomass and grain yield on an area basis at full maturity. We will also measure the number of wheat ears on an area basis and the grain number and weight per ear. From this data, we will estimate the harvest index, which is the proportion of biomass allocated to the grain.”

The researchers believe wheat yields could be boosted even further by looking at other enzyme levels. If the field trials are successful, they hope their technique could be applied to other crops too. At present, the only GM crop grown commercially in the EU is maize – mostly in Spain, where it protects against the European corn borer, an insect pest. Opposition to GM food is so high in Europe that no other GM crops have been approved and subsequently grown since 1998. However, George Eustice, the UK's Minister for Agriculture, Fisheries & Food, has suggested that British farmers could grow GM crops once the UK leaves the EU, following the Brexit vote in June.


 

 

13th November 2016

Lab-grown mini lungs successfully transplanted into mice

Scientists can now grow 3-D models of lungs from stem cells, creating new ways to study respiratory diseases.

 

[Image credit: Briana R Dye, Priya H Dedhia, Alyssa J Miller, Melinda S Nagy, Eric S White, Lonnie D Shea, Jason R Spence]

 

Researchers at the University of Michigan have transplanted lab-grown mini lungs into immunosuppressed mice where the structures were able to survive, grow and mature.

"In many ways, the transplanted mini lungs were indistinguishable from human adult tissue," says senior study author Jason Spence, Ph.D., associate professor in the Department of Internal Medicine and the Department of Cell and Developmental Biology at U-M Medical School.

The findings were published in eLife and described by authors as a potential new tool to study lung disease.

Respiratory diseases account for nearly 1 in 5 deaths worldwide, and lung cancer survival rates remain poor despite numerous therapeutic advances during the past 30 years. The numbers highlight the need for new, physiologically relevant models for translational lung research.

Lab-grown lungs can help because they provide a human model to screen drugs, understand gene function, generate transplantable tissue and study complex human diseases, such as asthma.

Lead study author Briana Dye, a graduate student in the U-M Department of Cell and Developmental Biology, used numerous signalling pathways involved with cell growth and organ formation to coax stem cells – the body's master cells – to make the miniature lungs.

The researchers' previous study showed mini lungs grown in a dish consisted of structures that exemplified both the airways that move air in and out of the body, known as bronchi, and the small lung sacs called alveoli, which are critical to gas exchange during breathing.

But to overcome the immature and disorganised structure, the researchers attempted to transplant the miniature lungs into mice, an approach that has been widely adopted in the stem cell field. Several initial strategies to transplant the mini lungs into mice were unsuccessful.

Working with Lonnie Shea, Ph.D., professor of biomedical engineering at the University of Michigan, the team used a biodegradable scaffold, which had been developed for transplanting tissue into animals, to achieve successful transplantation of the mini lungs into mice. The scaffold provided a stiff structure to help the airway reach maturity.

"In just eight weeks, the resulting transplanted tissue had impressive tube-shaped airway structures similar to the adult lung airways," says Dye.

They characterised the transplanted mini lungs as well-developed tissue, possessing a highly organised epithelial layer lining the lungs. One drawback was that the alveolar cell types did not grow in the transplants. Still, several specialised lung cell types were present, including mucus-producing cells, multiciliated cells and stem cells found in the adult lung.


 

 

13th November 2016

Machine learning can identify a suicidal person

Using a person's spoken or written words, a new computer algorithm identifies with high accuracy whether that person is suicidal, mentally ill but not suicidal, or neither.

 


 

A new study shows that technology known as machine learning is up to 93% accurate in correctly classifying a suicidal person and 85% accurate in identifying a person who has a mental illness but is not suicidal, or neither. These results provide strong evidence for using intelligent software as a decision-support tool to help clinicians and caregivers identify and prevent suicidal behaviour.

"These computational approaches provide novel opportunities to apply technological innovations in suicide care and prevention, and it surely is needed," explains John Pestian, PhD, professor in Biomedical Informatics & Psychiatry at Cincinnati Children's Hospital Medical Centre and the study's lead author. "When you look around healthcare facilities, you see tremendous support from technology, but not so much for those who care for mental illness. Only now are our algorithms capable of supporting those caregivers. This methodology can easily be extended to schools, shelters, youth clubs, juvenile justice centres, and community centres, where earlier identification may help to reduce suicide attempts and deaths."

Pestian and his team enrolled 379 patients over the study's 18-month period – from emergency departments as well as inpatient and outpatient centres across three sites. Those enrolled included patients who were suicidal, diagnosed as mentally ill but not suicidal, or neither (serving as a control group).

Each patient completed standardised behavioural rating scales and participated in a semi-structured interview, answering five open-ended questions to stimulate conversation such as "Do you have hope?" "Are you angry?" and "Does it hurt emotionally?"

The researchers extracted and analysed both verbal and non-verbal language from the data. They then used machine learning algorithms to classify the patients into one of the three groups. Their results showed that machine learning algorithms could tell the difference between the groups with an accuracy of up to 93%. The scientists also noticed that the control patients tended to laugh more during interviews, sigh less, and express less anger, less emotional pain and more hope.
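
The paper's exact feature set and classifier aren't described here, but the general shape of such a text-based pipeline is standard in machine learning. A minimal scikit-learn sketch, using made-up stand-ins for interview transcripts (the real study also drew on acoustic and non-verbal features):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for interview transcripts, two per group
texts = ["i don't have hope anymore", "it hurts all the time",
         "the anger never really goes away", "some days are harder than others",
         "things have been looking up lately", "i laughed a lot this week"]
labels = ["suicidal", "suicidal", "mentally_ill", "mentally_ill",
          "control", "control"]

# Bag-of-words features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))

# Cross-validated accuracy is how a figure like the reported 93% is estimated
print(cross_val_score(model, texts, labels, cv=2).mean())
```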

This software could become more and more useful in the future, as depression is expected to become the number one global disease burden by 2030. However, such intelligent algorithms may raise concerns over privacy and civil liberties, with potential for information to be abused. For example, authorities might use the software to spy on citizens as they communicate via email or social media, perhaps deciding from the data and wording style that a certain individual is dangerous and must be imprisoned, even if that person is actually innocent.

The study is published in the journal Suicide and Life-Threatening Behavior.


 

 

13th November 2016

AI can beat humans at lip-reading

The University of Oxford has demonstrated "LipNet", a new AI algorithm whose lip-reading accuracy is more than 40 percentage points higher than that of an experienced human lip-reader.

 

 

 

2016 has been a big year for artificial intelligence, with many important breakthroughs that we've covered on our blog. Yet again, what was once confined to science fiction has become a reality, as this week a research team presented a new AI lip-reading system able to beat humans.

The University of Oxford's Department of Computer Science has developed "LipNet", a visual recognition system that can process whole sentences and learn which letter corresponds to the slightest mouth movement.

"The end-to-end model eliminates the need to segment videos into words before predicting a sentence," the research team explains. "LipNet requires neither hand-engineered spatiotemporal visual features, nor a separately-trained sequence model."

While an experienced human lip-reader can achieve an accuracy of 52%, LipNet achieves 93%. It's eerily reminiscent of HAL 9000, the sentient computer in Arthur C. Clarke's 2001: A Space Odyssey.

However, while LipNet has proven to be very promising, it is still at a relatively early stage of development. So far, it has been trained and tested on short, formulaic videos that show a well-lit person face-on. In its current form, LipNet could not be used on more challenging video footage – so it is currently unsuitable for use as a surveillance tool. But the team is keen to develop it further in real-world situations, especially as an aid for people with hearing disabilities.


 

 

12th November 2016

Graphic cigarette warnings could prevent 652,000 deaths over next 50 years

A study published in the journal Tobacco Control finds that graphic warnings on cigarette packs could prevent 652,000 deaths in the U.S. over the next 50 years.

 


 

Using prominent, graphic images on cigarette packs warning against the dangers of smoking could avert more than 652,000 deaths, up to 92,000 low birth weight infants, up to 145,000 preterm births, and about 1,000 cases of sudden infant deaths in the U.S. over the next 50 years, say researchers from Georgetown Lombardi Comprehensive Cancer Center.

Their study, published online in the journal Tobacco Control, is the first to estimate the effects of pictorial warnings on cigarette packs on the health of both adults and infants in the U.S.

Although more than 70 nations have adopted, or are considering adopting, front- and back-of-pack pictorial warnings under the World Health Organisation's Framework Convention on Tobacco Control, they have not been implemented in the U.S. Pictorial warnings are required by U.S. law, but an industry lawsuit has stalled their implementation. Currently, a text-only warning appears on the side of cigarette packs in the U.S.

 


 

The study used a tobacco control policy model, known as "SimSmoke", developed by Georgetown Lombardi's David T. Levy, PhD, which looks at the effects of past smoking policies, as well as future policies. SimSmoke is peer-reviewed, and has been used and validated in more than 20 countries.

In this study, Levy and his colleagues looked at changes in smoking rates in Australia, Canada and the UK, which have already implemented prominent pictorial warning labels (PWLs). Eight years after PWLs were implemented in Canada, there was an estimated 12 to 20 percent relative reduction in smoking prevalence. After PWLs began to be used in Australia in 2006, adult smoking prevalence fell from 21.3 percent in 2007 to 19 percent in 2008. After implementation in the UK during 2008, smoking prevalence fell 10 percent in the following year.

The researchers used these and other studies and, employing the SimSmoke model, estimated that implementing PWLs in the U.S. would directly reduce smoking prevalence in relative terms by 5 percent in the near term, increasing to 10 percent over the long-term. If implemented in 2016, PWLs are estimated to reduce the number of smoking attributable deaths (heart disease, lung cancer and COPD) by an estimated 652,800 by 2065.
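
Note that these are relative, not absolute, reductions. Applied to the 2015 adult smoking prevalence quoted further below, the model's long-term effect looks like this (a rough illustration, not the SimSmoke calculation itself):

```python
# Relative vs absolute: a 10% *relative* drop from 15.1% prevalence
prevalence = 0.151          # U.S. adult smoking rate in 2015 (see below)
relative_reduction = 0.10   # model's long-term effect of pictorial warnings
new_prevalence = prevalence * (1 - relative_reduction)
print(f"{prevalence:.1%} -> {new_prevalence:.1%}")   # 15.1% -> 13.6%
# i.e. about 1.5 percentage points off the smoking rate, not a fall to 5.1%
```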

"The bottom line is that requiring large pictorial warnings would help protect the public health of people in the United States," says Prof. Levy. "There is a direct association between these warnings and increased smoking cessation and reduced smoking initiation and prevalence. That would lead to significant reduction of death and morbidity, as well as medical cost."

As of today, 40 percent of cancers diagnosed in the U.S. may have a link to tobacco use, according to the Centres for Disease Control and Prevention (CDC). It is the leading preventable cause of cancer and cancer deaths. Tobacco causes more than just lung cancer – based on current evidence, it can cause cancers of the mouth and throat, voice box, oesophagus, stomach, kidney, pancreas, liver, bladder, cervix, colon, rectum and a type of leukaemia. At least 70 chemicals found in tobacco smoke are known to cause cancer, with exposure to second-hand smoke (aka passive smoking) also causing it. Cigarette smoking is estimated to result in $289 billion a year in medical costs and productivity loss. About 70% of all smokers want to quit – and if they do so before the age of 40, they can gain almost all of the 10 years of life expectancy they would otherwise have lost.

"There are more than 36 million smokers in the U.S.," says Tom Frieden, CDC Director. "Sadly, nearly half could die prematurely from tobacco-related illnesses, including 6 million from cancer, unless we implement the programs that will help smokers quit."

New data released from the National Health Interview Survey shows that cigarette smoking among U.S. adults declined from 20.9 percent (45.1 million) in 2005 to 15.1 percent (36.5 million) in 2015. During 2014-2015 alone, there was a 1.7 percentage point decline, resulting in the lowest prevalence of adult cigarette smoking since the CDC's NHIS began collecting such data in 1965.

"When states invest in comprehensive cancer control programs – including tobacco control – we see greater benefits for everyone, and fewer deaths from tobacco-related cancers," said Lisa Richardson, director of CDC's Division of Cancer Prevention and Control. "We have made progress, but our work is not done."

 

cigarettes historical trend


---

 

 

 

11th November 2016

Mouse neurons seen firing in real-time in 3-D

Scientists at Rockefeller University have used a technique called "light sculpting" to see the neurons of a mouse brain firing in real-time in 3-D.

No single neuron produces a thought or a behaviour – anything the brain accomplishes is a vast collaborative effort between cells. When at work, neurons talk rapidly to each other, forming networks as they communicate. Researchers at Rockefeller University in New York are developing technology that would make it possible to record brain activity as it plays out across these networks. In research published in Nature Methods, they recorded the activity of mouse neurons layered in 3-D sections of brain as they signalled to each other in real time.

"The ultimate goal of our work is to investigate how large numbers of interconnected neurons throughout the brain interact in real time and how their dynamics lead to behaviour," says Alipasha Vaziri, Ph.D., head of the Laboratory of Neurotechnology and Biophysics. "By developing a new method based on 'light sculpting' and using it to capture the activity of the majority of the neurons within a large portion of the cortex, a layered brain structure involved amongst others in higher brain function, we have taken a significant step in this direction."

This type of recording presents a considerable technical challenge because it requires tools capable of capturing short-lived events within individual cells, all while observing large volumes of brain tissue. Vaziri began working toward this goal about six years ago. His group first succeeded in developing a light-microscope–based approach to observing the activity in a 302-neuron roundworm brain, before moving on to the 100,000-neuron larval zebrafish. Their next target was the mouse brain, which is more challenging for two reasons: not only is it more complex, with 70 million neurons, but the rodent brain is also opaque, unlike the more transparent worm and larval fish brains.

To make the activity of neurons visible, they had to be altered. The researchers engineered the mice so that their neurons emitted fluorescent light when they signalled to one another – the stronger the signal, the brighter the cells would shine. The system they developed had to meet competing demands: it needed to generate a spherically shaped spot of excitation light, slightly smaller than a neuron's cell body, capable of exciting fluorescence from individual cells, while also moving quickly enough to scan thousands of these cells in three dimensions as they fired in real time.

The team accomplished this using a technique called "light sculpting," in which short pulses of laser light – each lasting only a quadrillionth of a second – are dispersed into their coloured components. These are then brought back together to generate the "sculpted" excitation sphere. This sphere is scanned to illuminate the neurons within a plane, then refocused on another layer of neurons above or below, allowing neural signals to be recorded in three dimensions.
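As a rough illustration of the acquisition order this implies – raster the sculpted sphere across one plane, then refocus to the next depth – here is a toy sketch. The function and array names are hypothetical, and the real system of course works on physical optics rather than arrays:

```python
import numpy as np

def scan_volume(fluorescence):
    """Toy model of the acquisition order described above: raster each
    x-y plane with the excitation spot, then refocus to the next z-layer."""
    nz, ny, nx = fluorescence.shape
    recorded = np.empty_like(fluorescence)
    for z in range(nz):                    # refocus the sphere to the next depth
        for y in range(ny):                # raster the sphere across...
            for x in range(nx):            # ...the current plane
                recorded[z, y, x] = fluorescence[z, y, x]  # excite + collect
    return recorded

volume = np.random.rand(8, 32, 32)         # hypothetical brightness values
movie = scan_volume(volume)                # one frame per depth plane
```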

In this way, Vaziri and his colleagues recorded the activity in one-eighth of a cubic millimetre of the animal's brain cortex – a cube 0.5 mm on a side – a volume that represents the majority of a unit known as a cortical column. By simultaneously capturing and analysing the dynamic activity of the neurons within a cortical column, researchers think they might be able to understand brain computation as a whole. In this case, the section of cortex they studied is responsible for planning movement. They are currently working to capture the activity of an entire such unit.

"Progress in neuroscience, and many other areas of biology, is limited by the available tools," Vaziri says. "By developing increasingly faster, higher-resolution imaging techniques, we hope to be able to push the study of the brain into new frontiers."

---

 

 

 

5th November 2016

Construction of the James Webb Space Telescope mirror is completed

After 20 years of development, the successor to the Hubble Space Telescope has reached a major milestone, with its primary mirror now complete and ready for testing prior to a 2018 launch.

 

james webb space telescope future timeline
Credit: NASA/Chris Gunn

 

In the massive clean room of NASA's Goddard Space Flight Centre, assembly was finished this week on the James Webb Space Telescope's (JWST) primary mirror. Thousands of people have been involved in its design and construction, including a member of the FutureTimeline forum.

The JWST will replace the aging Hubble Space Telescope (launched in 1990) and provide a fresh pair of eyes on the universe. It will offer unprecedented resolution and sensitivity from long-wavelength visible light, through the near-infrared, to the mid-infrared. While Hubble has a 2.4 m (7.9 ft) primary mirror, the JWST features a much larger 6.5 m (21.3 ft) mirror, composed of 18 hexagonal segments each measuring 1.3 m (4.2 ft). Combined, these give around seven times the light-collecting area of Hubble, alongside instruments up to 100 times more sensitive.

The telescope will operate near the Sun–Earth Lagrange point 2, about 1.5 million km (930,000 miles) from Earth, where a large sunshield will keep its mirror and science instruments below 50 K (−223°C; −370°F). This will minimise interference from external sources of light and heat (like the Sun, Earth and Moon), as well as from heat emitted by the observatory itself.

When fully operational, the JWST will be the most powerful space telescope ever built, capable of seeing the very first generation of stars which ignited less than 200 million years after the Big Bang – a time when the universe was about 1.4% of its current age.
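Both headline figures can be sanity-checked with back-of-the-envelope arithmetic, treating each primary mirror as a simple filled circle (this ignores JWST's hexagonal segmentation and each telescope's central obstruction, so the real area ratio is somewhat lower):

```python
import math

area = lambda d: math.pi * (d / 2) ** 2            # collecting area of a filled circle
hubble, jwst = 2.4, 6.5                            # primary mirror diameters in metres
print(f"Hubble: {area(hubble):.1f} m^2")           # ~4.5 m^2
print(f"JWST:   {area(jwst):.1f} m^2")             # ~33.2 m^2
print(f"ratio:  {area(jwst) / area(hubble):.1f}x") # ~7.3x, i.e. roughly "seven times"

# The age claim: first stars at ~200 million years, universe now ~13.8 billion years old
print(f"{200e6 / 13.8e9:.1%} of the universe's current age")  # ~1.4%
```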

 

hubble james webb space telescope comparison
Credit: NASA

 

"Today, we're celebrating the fact that our telescope is finished, and we're about to prove that it works," said John Mather, senior project scientist, at the press conference in Maryland. "We've done two decades of innovation and hard work, and this is the result – we're opening up a whole new territory of astronomy."

The telescope would be powerful enough to spot a bumblebee on the Moon's surface, explained Mather – both in its reflected light, and from body heat emitted by the insect. Its mirrors are so smooth that if you scaled the array to the size of the U.S., the hills and valleys of irregularity would be only a few inches high.

After environmental testing at Goddard Space Flight Centre in Maryland to simulate the stresses of launch, more cryogenic tests will be performed at the Johnson Space Centre in Houston, Texas. The mirror will then be transported to Northrop Grumman in Los Angeles for the final phase of construction, connecting it to the sunshield and spacecraft bus. If all goes according to plan, the mission will launch in October 2018.

There are four primary scientific objectives:

• to search for light from the first stars and galaxies that formed in the Universe after the Big Bang;
• to study the formation and evolution of galaxies;
• to understand the formation of stars and planetary systems;
• to study planetary systems and the origins of life.

 

james webb space telescope galaxies universe future timeline

 

These goals can be achieved more effectively by observing in near-infrared light, rather than the visible part of the spectrum. For this reason, the JWST's instruments will have a much greater capacity for infrared astronomy than Hubble – allowing the study of far more objects and regions obscured by gas and dust, such as molecular clouds where stars are born, circumstellar disks that give rise to planets, and the cores of active galaxies. One of the early goals of the JWST will be to gather enough data to determine which theory best explains the mysterious dimming of the star KIC 8462852.

With a history of major cost overruns and delays, the JWST almost didn't happen. The first realistic budget estimates were around $1.6 billion with a planned launch date of 2011. In that year, however, the U.S. House of Representatives voted to terminate funding, after $3 billion had been spent and 75% of its hardware was in production. Funding was later restored, but capped at $8 billion, with a revised launch date of 2018. The costs have now exceeded $8.7 billion, according to mission director Bill Ochs at this week's news conference.

The JWST will be launched on an Ariane 5 rocket supplied by the European Space Agency (ESA). An adapter ring will be included in the unlikely event of a major deployment problem; this could be used by a future spacecraft to grapple the observatory. Because of its distance, however, the telescope itself will not be serviceable – unlike the much closer Hubble, astronauts will be unable to reach it to fix instruments.

"It's critically important to get it right here on the ground, and that's the purpose for the tests that we're doing here and, most importantly, for the tests when we get it down to Johnson [Space Centre] in Chamber A, the big vacuum chamber," said NASA Administrator Charles Bolden.

The JWST is designed to have a minimum lifetime of 5.5 years after launch, but NASA hopes it can remain in operation for 10 years or more. A number of other, even larger observatories have been proposed for the 2030s and beyond. These include the 11.7 m (38.3 ft) High-Definition Space Telescope and a longer-term concept for a robotically assembled telescope reaching 100 metres in size.

Given these trends in observational capability, combined with exponential growth in the number of exoplanets being found, it seems likely that alien life signatures will be detected in the not-too-distant future.

 

 

 

---

 

 

 

4th November 2016

First commercial asteroid prospecting mission to launch by 2020

Planetary Resources, Inc., the asteroid mining company, announced yesterday that it has finalised a 25 million euro agreement – comprising a direct capital investment of 12 million euros and grants of 13 million euros – from the Government of the Grand Duchy of Luxembourg and the banking institution Société Nationale de Crédit et d'Investissement (SNCI). This funding will accelerate the company's technical work, with the aim of launching the first commercial asteroid prospecting mission by 2020. The agreement fulfils the intent of the Memorandum of Understanding with the Grand Duchy and its SpaceResources.lu initiative, signed earlier this year.

"We are excited in welcoming the Grand Duchy as a partner and an investor," said Chris Lewicki, President and CEO of Planetary Resources. "Just as the country's vision and initiative propelled the satellite communications industry through its public-private partnerships, this funding and support will fast-track our business – advancing and building upon our substantial accomplishments. We plan to launch the first commercial asteroid prospecting mission by 2020 and look forward to collaborating with our European partner in this pivotal new industry."

 

asteroid mining 2020 future timeline

 

Étienne Schneider, Deputy Prime Minister and Minister of the Economy, Government of Luxembourg, said: "The Grand-Duchy of Luxembourg becoming a shareholder in Planetary Resources seals our partnership and lays the groundwork for the principles of our cooperation in the years to come, while demonstrating the Government's strong commitment to support the national space sector by attracting innovative activities in space resource utilisation and other related areas. The Grand Duchy has a renowned history in public-private partnerships. In 1985, Luxembourg became one of the founding shareholders of SES, a landmark for satellite telecommunications and now a world leader in this sector."

Planetary Resources is establishing a European headquarters in Luxembourg that will conduct key research and development activities in support of its commercial asteroid prospecting capabilities, as well as support international business activities.

Core hardware and software technologies developed at Planetary Resources were tested in orbit last year. The company's next mission, now undergoing final testing, will validate a thermographic sensor that will precisely measure temperature differences of objects on Earth. When deployed on future commercial asteroid prospecting missions, the sensor will acquire key data related to the presence of water and water-bearing minerals on asteroids. Obtaining and using these key resources in space promises to fast-track the development of off-planet economic activities as the commercial industry continues to accelerate.

Planetary Resources was founded in 2009 by Eric Anderson, Peter Diamandis and Chris Lewicki. The company's vision is to establish a new paradigm for resource utilisation that will bring the Solar System within humanity's economic sphere of influence. The pathway to identifying the most commercially viable near-Earth water-rich asteroids has led to the development of multiple transformative technologies that are applicable to global markets, including the agriculture, energy, mining and insurance industries. Planetary Resources is financed by industry-launching visionaries who are committed to expanding the world's resource base so humanity can continue to grow and prosper for centuries to come.

 

 

 

---

 

 

 

3rd November 2016

A virus-sized computing device

Researchers at University of California, Santa Barbara, have designed a functional nanoscale computing element that could be packed into a space no bigger than 50 nanometres on any side.

 

red blood cell nanotechnology nanotech future timeline

 

In 1959, the renowned physicist Richard Feynman, in his talk “There's Plenty of Room at the Bottom”, spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule- and atom-sized world remained for years in the realm of science fiction. Then scientists and other creative thinkers began to realise Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometres on any side.

“Novel computing paradigms are needed to keep up with demand for faster, smaller and more energy-efficient devices,” said Gina Adam, a postdoctoral researcher at UCSB’s Department of Electrical and Computer Engineering and lead author of the paper “Optimised stateful material implication logic for three dimensional data manipulation” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

While efforts to shrink computing devices have been ongoing for decades – in fact, Feynman’s challenges as he presented them in 1959 have been met – scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. An 8-bit adder operating in 50 x 50 x 50 nanometre dimensions, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality.
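For a sense of what that challenge actually asks for, here is a gate-level sketch of an 8-bit ripple-carry adder – a hypothetical illustration of the logic involved; the prize additionally demands that the equivalent of these few dozen gates fit inside the 50-nanometre cube:

```python
def full_adder(a, b, cin):
    """One bit position: sum and carry-out from two input bits plus carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add8(x, y):
    """Ripple-carry addition of two 8-bit integers, least significant bit first."""
    carry, total = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry              # 8-bit result plus overflow carry

assert add8(200, 55) == (255, 0)
assert add8(200, 56) == (0, 1)       # 256 wraps around; the carry flag is set
```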

“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” says Dmitri Strukov, UCSB professor of computer science.

 

nanoscale computer device nanotechnology future timeline

 

Key to this development is a system called material implication logic, combined with memristors – circuit elements whose resistance depends on the amount and direction of the charge that has previously flowed through them. Unlike the conventional logic and circuitry found in present-day computers and other devices, in this form of computing the logic operation and the storage of its result happen simultaneously and in the same place. This greatly reduces the need for the components and space typically used to perform logic operations and to move data back and forth between processor and memory. The result of a computation is also immediately stored in a memory element, which prevents data loss in the event of a power outage – a critical feature for autonomous systems such as robots.
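To show what "logic and storage in the same element" means in practice, here is a hypothetical boolean sketch of stateful material implication. In memristor hardware, q ← (p IMPLY q) is a single pulse that overwrites the state of the q device; modelling the device states as plain booleans, two IMPLY steps plus a reset yield NAND – the standard demonstration that this logic is functionally complete, since any circuit can be built from NAND gates:

```python
def imply(p, q):
    """Material implication: False only when p is True and q is False."""
    return (not p) or q

def nand(p, q):
    s = False              # FALSE operation: reset the working memristor
    s = imply(p, s)        # s becomes NOT p
    return imply(q, s)     # q IMPLY (NOT p) == NOT (p AND q)

for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
```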

In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.

“Previous groups have shown that individual blocks can be scaled to very small dimensions,” said Strukov, who worked at technology company Hewlett-Packard’s labs when it ramped up development of memristors. By applying those results to his group’s developments, he said, the challenge could easily be met.

Memristors are being heavily researched in academia and in industry for their promising uses in future memory storage and neuromorphic computing. While implementations of material implication logic are rather exotic and not yet mainstream, uses for it could pop up any time, particularly in energy-scarce systems such as robotics and medical implants.

“Since this technology is still new, more research is needed to increase its reliability and lifetime and to demonstrate large-scale, 3-D circuits tightly packed in tens or hundreds of layers,” Adam said.

---

 

 

 

3rd November 2016

1,000-fold increase in 3-D scanning speed

Researchers at Penn State University report a 1,000-fold increase in scanning speed – with applications including 3-D printing and imaging – achieved using a space-charge-controlled KTN beam deflector with a large electro-optic effect.

 

3d printer scanner future timeline

 

A major technological advance in the field of high-speed beam-scanning devices has resulted in a speed boost of up to 1,000 times, according to researchers in Penn State's College of Engineering. Using a space-charge-controlled KTN beam deflector – a crystal of potassium tantalate-niobate – with a large electro-optic effect, the researchers have found that scanning at much higher speeds is possible.

"When the crystal materials are applied to an electric field, they generate uniform reflecting distributions, that can deflect an incoming light beam," said Professor Shizhuo Yin, from the School of Electrical Engineering and Computer Science. "We conducted a systematic study on indications of speed and found out the phase transition of the electric field is one of the limiting factors."

To overcome this issue, Yin and his team of researchers eliminated the electric field-induced phase transition in a nanodisordered KTN crystal by making it work at a higher temperature. They went not only beyond the Curie temperature (at which certain materials lose their permanent magnetic properties, replaced by induced magnetism), but beyond the critical end point (the analogue of the point at which a liquid and its vapour become indistinguishable).

 

3d printer scanner future timeline

Credit: Penn State

 

This increased the scanning speed from the microsecond range to the nanosecond range, and led to improved high-speed imaging, broadband optical communications and ultrafast laser display and printing. The researchers believe this could lead to a new generation of 3-D printers, with objects that once took an hour to print now taking a matter of seconds.

Yin said technology like this would be especially useful in the medical industry, where high-speed imaging could now happen in real time. For example, eye surgeons using optical coherence tomography – a non-invasive test that uses light waves to take cross-section pictures of the retina – would be able to view a 3-D image of the patient's retina during surgery, seeing exactly what needs to be corrected as the procedure takes place.

The group's findings are published in Scientific Reports, a Nature journal.

---

 

 

 

1st November 2016

Urban trees can improve millions of lives by reducing air pollution and temperature, says report

A global investment of just $100 million a year in tree planting could provide 77 million people with cooler cities and 68 million with significant reductions in particulate matter pollution. A new study by the Nature Conservancy finds that trees can cut nearby particulate pollution by up to 24%, while providing a local cooling effect of up to 2°C (3.6°F).

 

urban trees future timeline

 

Investment in tree planting of just $4 per resident in some of the world’s largest cities could improve the health of tens of millions of people by reducing air pollution and cooling city streets. The Nature Conservancy's Planting Healthy Air study, released at the American Public Health Association (APHA) annual meeting, takes well-established research into how trees clean and cool the air locally and applies it on a global scale, identifying the places where tree planting can make the biggest impact.

The Conservancy partnered with the C40 Cities Climate Leadership group to develop the study, with the aim of providing urban leaders with the data they need to demonstrate that investments in tree planting can improve public health in their cities.

“Trees can have a significant local impact on pollution levels and temperatures,” said Rob McDonald, lead scientist for global cities at the Nature Conservancy and the study’s primary author. “Urban trees can save lives, and are just as cost-effective as more traditional solutions like putting scrubbers on smokestacks or painting roofs white.”

 

urban trees air quality

 

The challenges facing cities are significant, but trees can be an important part of the solution:

• Every year, more than 3 million people die from the effects of fine particulate matter – air pollution so small that it can enter the bloodstream and lungs, causing such ailments as asthma, heart disease and stroke. In cities, much of this pollution comes from the burning of fossil fuels, including in car engines. Trees remove as much as a quarter of the particulate matter pollution within a few hundred metres, and when planted in the right places, can offer a very effective barrier, filtering bad air and protecting local residents.

• Urban heat is already the deadliest type of weather-related disaster facing the world, and the impacts will only increase as our climate continues to change. In France in 2003, a summer heat wave killed 11,000 people in one week, so many that the Paris city morgue was overwhelmed and the bodies had to be stored at a vegetable market. The most vulnerable to deadly heat waves are elderly people without access to air conditioning. Trees can cool their immediate vicinity by as much as 2 degrees C, offering a means of protecting people from the impacts of a changing climate.

 

urban trees 2 degrees C

 

The Conservancy’s Planting Healthy Air study found that an annual global investment of US$100 million in tree planting could provide 77 million people with cooler cities and 68 million people with measurable reductions in particulate matter pollution.

Cities with high population density, high pollution and heat levels, and low cost of tree planting showed the highest return on investment, with nations like India, Pakistan and Bangladesh topping the global rankings. But the data also shows neighbourhoods in every city that offer a high potential benefit to residents from tree planting.

Trees are the only solution that both cleans and cools the air, while simultaneously offering other benefits: urban green space for residents, habitat for wildlife, and carbon sequestration. Tree planting is a solution that mayors and other municipal leaders around the globe can implement to improve the lives of residents within their communities, reducing air pollution and slowing climate change.

“Trees alone cannot solve all of the world’s urban air and heat challenges – but they’re an important piece of the solution. In this urban century when there are going to be an extra two billion people living in cities, smart cities should be thinking about how nature and trees can be part of the solution to keep air healthy," explained McDonald. "One of our goals outlined in the report is to remind cities that you have the parks or urban forestry department on one side, and the health department on the other side. On this issue at least, they need to be talking to each other. I am really hopeful that if more cities start thinking that way, then we will see a rebirth in urban tree planting."

 

urban trees future timeline

 

---

 

 

 

1st November 2016

New pollution limits for shipping by 2020

In a landmark decision for both the environment and human health, 1st January 2020 has been set as the implementation date for a significant reduction in the sulphur content of the fuel oil used by ships.

 

ships sulphur pollution limit 2020

 

The decision to implement a global sulphur cap of 0.50% m/m (mass/mass) in 2020 was taken by the International Maritime Organisation (IMO), the regulatory authority for international shipping, during its Marine Environment Protection Committee (MEPC), meeting for its 70th session in London.

This represents a significant cut from the 3.5% m/m global limit currently in place and demonstrates a clear commitment by the IMO to ensure shipping meets its environmental obligations.

"The reductions in sulphur oxide emissions resulting from the lower global sulphur cap are expected to have a significant beneficial impact on the environment and on human health, particularly that of people living in port cities and coastal communities, beyond the existing emission control areas," said IMO Secretary-General Kitack Lim.

Further work to ensure effective implementation will continue in the Sub-Committee on Pollution Prevention and Response. Under the new global cap, ships will have to use fuel oil on board with a sulphur content of no more than 0.50% m/m, against the current limit of 3.50%, which has been in effect since 1st January 2012. The interpretation of "fuel oil used on board" includes use in main and auxiliary engines and boilers. Exemptions are provided for situations involving the safety of the ship or saving life at sea, or if a ship or its equipment is damaged.

Ships can meet the requirement by using low-sulphur compliant fuel oil. They can also use exhaust gas cleaning systems or "scrubbers", which "clean" the emissions before they are released into the atmosphere. Overall, the new limit is expected to slash SO2 emissions in the shipping industry by 85% compared to today's levels, and reduce the number of premature deaths by 200,000 globally every year.
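For reference, the cut in the cap itself is consistent with that projected 85% drop in SO2 emissions, as a trivial check shows (actual emissions will also depend on fuel mixes and scrubber uptake, so the two figures need not match exactly):

```python
old_cap, new_cap = 3.50, 0.50                     # permitted sulphur content, % m/m
cut = (old_cap - new_cap) / old_cap
print(f"{cut:.0%} reduction in the sulphur cap")  # ~86%
```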

In addition to limits on sulphur pollution, another agreement was reached at the IMO's committee meeting. Ships of 5,000 gross tonnage and above will be required to collect consumption data for each type of fuel oil they use, as well as other, additional, specified data including proxies for transport work. These larger ships account for approximately 85% of CO2 emissions from international shipping. The data collected will provide a firm basis on which future decisions on additional measures, over and above those already adopted, can be made. The MEPC approved a roadmap (2017 through to 2023) for developing a "Comprehensive IMO strategy on reduction of GHG emissions from ships", which foresees an initial GHG strategy to be adopted in 2018.

---
