16th April 2016
Artificial intelligence finds cancer cells more efficiently
Using a laser that pulses at nanosecond speeds, in combination with deep learning algorithms, a new microscope detects cancer cells more efficiently than standard methods.
Scientists at the University of California, Los Angeles (UCLA) have developed a new technique for identifying cancer cells in blood samples, faster and more accurately than current standard methods.
One common approach to testing for cancer involves doctors adding biochemicals to blood samples. These biochemicals attach biological "labels" to cancer cells, which enable instruments to detect and identify them. However, the biochemicals can damage cells and render the samples unusable for future analyses. Other techniques are available that don't use labelling, but these can be inaccurate, because they only identify cancer cells based on a single physical characteristic.
The new technique, demonstrated by the California NanoSystems Institute at UCLA, images cells without destroying them. Not only that, but it can identify up to 16 physical characteristics – including size, granularity and biomass – instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, for rapidly imaging cells in blood samples, and a deep learning program that identifies cancer cells with over 95 percent accuracy.
The "photonic time stretch" was invented by Professor Bahram Jalali, who holds a patent for this technology, and its use in microscopes is just one of many possible applications. It works by taking pictures of flowing blood cells using laser bursts, in the way that a camera uses a flash. This process happens so quickly – in nanoseconds, or billionths of a second – that the images would be too weak to be detected and too fast to be digitised by normal instrumentation. The new microscope overcomes these challenges using specially designed optics that boost the clarity of the images while simultaneously slowing them enough to be detected and digitised at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells. Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data, with the goal of accurate decision-making.
Time-stretch quantitative phase imaging (TS-QPI) and analytics system (credit: Claire Lifan Chen et al./Nature)
"Each frame is slowed down in time and optically amplified so it can be digitised," explains Ata Mahjoubfar, a UCLA postdoctoral fellow. "This lets us perform fast cell imaging, from which the artificial intelligence component can distinguish cancer cells."
Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA method eliminates that problem too: "The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination," said Claire Lifan Chen, a UCLA doctoral student.
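The deep learning classification step described above can be pictured as a supervised model trained on labelled cells. The sketch below is purely illustrative – it is not UCLA's actual network, and the data are synthetic – but it shows the basic idea: a classifier learns a decision boundary over 16 numeric features per cell, standing in for size, granularity, biomass and the other measured characteristics.

```python
import math
import random

random.seed(0)

NUM_FEATURES = 16  # stand-ins for size, granularity, biomass, etc.

def make_cell(is_cancer):
    """Generate one synthetic cell: 16 features plus a label.

    Cancer cells are shifted by one unit on every feature - a toy
    separation, not real measurement statistics.
    """
    shift = 1.0 if is_cancer else 0.0
    return [random.gauss(shift, 1.0) for _ in range(NUM_FEATURES)], is_cancer

def sigmoid(z):
    # Clamp extreme inputs to avoid math.exp overflow
    if z > 60.0:
        return 1.0
    if z < -60.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=100, lr=0.05):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w, b = [0.0] * NUM_FEATURES, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss for this sample
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

train_set = [make_cell(i % 2 == 1) for i in range(400)]
test_set = [make_cell(i % 2 == 1) for i in range(200)]
w, b = train(train_set)
accuracy = sum(predict(w, b, x) == y for x, y in test_set) / len(test_set)
```

A deep learning model replaces the single logistic unit here with many stacked layers, which is what lets the real system exceed 95 percent accuracy on far messier measurements.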
In their paper – published in the journal Scientific Reports – the researchers write that their system could lead to data-driven diagnoses based on cells' physical characteristics. This could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumour-specific gene expression in cells, leading to new treatments for disease.
11th April 2016
The first high-res 3D images of DNA segments
First-of-their-kind images by researchers at Berkeley Lab could aid in the use of DNA to build nanoscale devices.
Credit: Berkeley Lab
An international team working at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has captured the first high-resolution 3-D images from individual double-helix DNA segments, attached at either end of gold nanoparticles. The images detail the flexible structure of the DNA segments, which appear as nanoscale "jump ropes".
This unique imaging capability, pioneered by Berkeley Lab scientists, could aid in the use of DNA segments as building blocks for molecular devices that function as nanoscale drug-delivery systems, markers for biological research, and components for computer memory and electronic devices. It could also lead to images of disease-relevant proteins that have proven elusive for other imaging techniques, and of the assembly process that forms DNA from separate, individual strands.
The shapes of the coiled DNA strands, which were sandwiched between polygon-shaped gold nanoparticles, were reconstructed in 3-D using a cutting-edge electron microscope technique called individual-particle electron tomography (IPET). This was combined with a protein-staining process and sophisticated software that provided structural details down to a scale of just 2 nanometres (nm), or about two billionths of a metre.
"We had no idea about what the double-strand DNA would look like between the nanogold particles," said Gang Ren, a Berkeley Lab scientist who led the research. "This is the first time for directly visualising an individual double-strand DNA segment in 3-D."
While the 3-D reconstructions show the basic nanoscale structure of the samples, Ren said the next step will be to improve the resolution to the sub-nanometre scale: "Even in this current state, we begin to see 3-D structures at 1- to 2-nanometre resolution," he said. "Through better instrumentation and improved computational algorithms, it would be promising to push the resolution to that needed to visualise a single DNA helix within an individual protein."
The technique, he said, has already excited interest among some prominent pharmaceutical companies and nanotechnology researchers, and his science team already has dozens of related research projects being planned. In future studies, they could attempt to improve the imaging resolution for complex structures that incorporate more DNA segments as a sort of "DNA origami," Ren said. Researchers hope to build and better characterise nanoscale molecular devices using DNA segments that can, for example, store and deliver drugs to targeted areas in the body.
"DNA is easy to program, synthesise and replicate, so it can be used as a special material to quickly self-assemble into nanostructures and to guide the operation of molecular-scale devices," he said. "Our current study is just a proof of concept for imaging these kinds of molecular devices' structures."
His team's work is published in the journal Nature Communications.
Berkeley Lab researchers Gang Ren (standing) and Lei Zhang. Photo by Roy Kaltschmidt/Berkeley Lab.
29th March 2016
World's first minimal synthetic bacterial cell
Craig Venter's team has synthesised a minimal bacterial genome, containing only the genes necessary for life, and consisting of just 473 genes. This builds upon their earlier research that synthesised Mycoplasma laboratorium in 2010.
JCVI-syn3.0 cells magnified about 15,000 times. Credit: J. Craig Venter Institute (JCVI)
A newly created synthetic organism, pictured above, has been announced by Dr. Craig Venter and his team of researchers, whose earlier work includes the Human Genome Project and Mycoplasma laboratorium. Known as JCVI-syn3.0, it contains just 473 genes, making it the world's first minimal synthetic bacterial cell, with the smallest genome of any organism that can be grown in laboratory media. For comparison, the earlier JCVI-syn1.0 (synthesised in 2010) contained 901 genes, while a human cell has over 20,000 genes.
Nearly one-third (149) of this organism's genes are of unknown biological function, suggesting the presence of undiscovered functions that are essential for life. World-renowned geneticist Dr. Venter explains: "Our attempt to design and create a new species – while ultimately successful – revealed that 32% of the genes essential for life in this cell are of unknown function, and showed that many are highly conserved in numerous species. All the bioinformatics studies over the past 20 years have underestimated the number of essential genes by focusing only on the known world. This is an important observation that we are carrying forward into the study of the human genome."
In 2010, Dr. Venter and his team created the first artificial cell (JCVI-syn1.0), providing proof of principle that genomes can be designed in the computer, chemically made in the lab, and transplanted into a recipient cell to produce a new, self-replicating organism controlled only by the synthetic genome. They then set about their ultimate objective: to synthesise a minimal cell containing only the genes necessary for life in its simplest form, an effort that could help scientists understand the function of every essential gene in a cell.
To achieve this, Venter and colleagues again turned to Mycoplasma. They designed minimal genomes in eight different segments, each of which could be tested to accurately classify constituent genes as "essential" or not. During this design-build-test process, quasi-essential genes were also identified – those needed for robust growth, but not absolutely required for life. They whittled away at the synthetic, reduced genome, repeating experiments until no more genes could be disrupted and the genome was as small as possible. Their study also revealed some genes initially classified as "non-essential" do in fact perform the same essential function as a second gene; thus, one of the pair of genes needs to be retained in the minimal genome.
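The paired-gene pitfall described above can be illustrated with a toy model (the gene names and functions below are invented, not from the study): a gene looks individually dispensable whenever another gene covers all of its functions, yet a redundant pair cannot both be deleted from the minimal genome.

```python
# Hypothetical genes mapped to the essential functions they perform
genes = {
    "geneA": {"membrane_transport"},
    "geneB": {"membrane_transport"},   # redundant with geneA
    "geneC": {"dna_replication"},
}

def individually_dispensable(genes):
    """Return the genes whose every function is also covered by some other gene.

    Deleting any ONE of these still leaves all functions covered, but a
    redundant pair (geneA/geneB) cannot both be removed: one member of
    each pair must be retained in the minimal genome.
    """
    dispensable = set()
    for name, funcs in genes.items():
        covered_by_others = set().union(
            *(f for other, f in genes.items() if other != name)
        )
        if funcs <= covered_by_others:
            dispensable.add(name)
    return dispensable
```

Here `individually_dispensable(genes)` returns both `geneA` and `geneB`, yet removing the pair together would leave membrane transport uncovered – exactly the situation that made some "non-essential" genes reappear in the final genome.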
A paper describing the organism is published in the journal Science. The team hopes to decode the unknown 149 genes in the future. They conclude that a major outcome of this minimal cell program is new tools and semi-automated processes for whole genome synthesis.
"This paper represents more than five years of work by an amazingly talented group of people," says co-author Dr. Clyde Hutchison. "Our goal is to have a cell for which the precise biological function of every gene is known."
"This paper signifies a major step toward our ability to design and build synthetic organisms from the bottom up with predictable outcomes," says Daniel Gibson, PhD, an associate professor at the J. Craig Venter Institute (JCVI). "The tools and knowledge gained from this work will be essential to producing next-generation production platforms for a wide range of disciplines."
In the future, synthetic organisms could be used for industrial applications to create revolutionary new medicines, biochemicals, biofuels, agricultural processes and much more.
10th March 2016
Gene mutation lowers heart attack risk by 50%
German researchers have identified a specific gene mutation in humans that provides a 50 percent lower risk of suffering a heart attack.
Worldwide, heart attacks are responsible for an estimated 7.3 million deaths each year, or about 13% of all deaths globally. They are the leading cause of death in high- or middle-income countries and second only to lower respiratory infections in lower-income countries. In the United States, someone has a heart attack every 34 seconds.
This week, researchers have announced a significant finding that could lead to future treatments able to prevent heart attacks. An international team headed by cardiologist Prof. Heribert Schunkert, of the German Heart Centre at the Technical University of Munich (TUM), found that people with a specific gene mutation have a 50 percent lower risk of suffering a heart attack. If this gene were switched off with medications, it could greatly reduce the risk of coronary disease.
"This discovery makes it considerably easier to develop new medications that simulate the effect of this mutation," explains Prof. Schunkert. "This gives follow-on research aiming at reducing heart attacks in the future a concrete goal."
For the large-scale study, the scientists analysed 13,700 different genes from a pool of 200,000 participants – both heart attack patients and a group of healthy control persons. They were on the lookout for correlations between gene mutations and coronary artery disease. For a number of genes, researchers noticed a clear correlation, including the ANGPTL4 (angiopoietin-like 4) gene. In addition, subjects with the mutated ANGPTL4 gene had significantly lower triglyceride values in their blood. Triglycerides are the main constituent of body fat in humans and animals.
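The kind of correlation the team searched for is typically quantified with a case-control odds ratio. The sketch below is illustrative only – the 2x2 counts are invented and chosen so that mutation carriers come out at roughly half the risk, matching the reported 50 percent reduction.

```python
def odds_ratio(cases_with, controls_with, cases_without, controls_without):
    """Odds ratio of disease for mutation carriers vs non-carriers.

    A value near 0.5 corresponds to roughly half the heart attack
    risk, as reported for ANGPTL4 mutation carriers.
    """
    return (cases_with / controls_with) / (cases_without / controls_without)

# Hypothetical table: 50 carriers among heart attack patients vs 100
# among controls; 1000 non-carriers in each group.
ratio = odds_ratio(50, 100, 1000, 1000)
```

Real genome-wide analyses add confidence intervals and multiple-testing corrections on top of this basic statistic, but the core comparison is the same.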
"The blood fat triglyceride serves as an energy store for the body," explains Prof. Jeanette Erdmann, from the University of Lübeck, who collaborated on the work. "However, as with LDL cholesterol, elevated values lead to increased risk of cardiovascular disease. Low values, by contrast, lower the risk."
Credit: Jeanette Erdmann / University of Luebeck / New England Journal of Medicine
Until now, the significance of triglycerides for human health has been underestimated, according to Prof. Schunkert: "For most patients, the focus still lies on cholesterol. A differentiation is always made between the healthy HDL and the harmful LDL cholesterol variants. However, in the meantime we know that the HDL values always run inversely proportional to those of the triglycerides and that HDL itself actually tends to behave in a neutral manner."
"The triglycerides, on the other hand, are the second important blood fat, alongside the harmful LDL cholesterol. We published this in the Lancet two years ago. The only reason HDL blood values are still measured is that, together with total cholesterol and triglyceride values, they can be used to derive LDL values, which cannot easily be measured directly."
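The derivation Prof. Schunkert alludes to is the standard Friedewald estimate, which is not spelled out in the article: LDL cholesterol is computed from total cholesterol, HDL and triglycerides rather than measured directly.

```python
def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL cholesterol; all values in mg/dL.

    LDL = total cholesterol - HDL - triglycerides / 5
    Conventionally considered unreliable when triglycerides
    exceed 400 mg/dL.
    """
    if triglycerides >= 400:
        raise ValueError("triglycerides too high for Friedewald estimate")
    return total_chol - hdl - triglycerides / 5.0

# Example: total cholesterol 200, HDL 50, triglycerides 150
ldl = ldl_friedewald(200, 50, 150)   # 200 - 50 - 30 = 120 mg/dL
```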
The current study now shows that the concentration of triglycerides in the blood is influenced not only by nutrition and predisposition, but also by the ANGPTL4 gene. "At the core of our data is the lipoprotein lipase (LPL) enzyme. It causes the decomposition of triglycerides in the blood," says Erdmann.
Normally, ANGPTL4 inhibits the LPL enzyme, causing blood fat values to rise. The mutations identified by the researchers disable the gene and thereby ensure that the triglyceride level drops significantly.
"At the same time," says Erdmann, "we discovered that the body does not even need the ANGPTL4 gene and manages wonderfully without it. It seems to be superfluous." Shutting down the gene or inhibiting the LPL enzyme in another manner may ultimately protect against coronary disease.
"Based on our results, medications now need to be developed to neutralise the effect of the ANGPTL4 gene, thereby reducing the risk of a heart attack," adds Prof. Schunkert. "Other researchers have already done this successfully in animal tests. They drastically reduced the blood fat levels in monkeys that received a neutralising antibody against ANGPTL4. This feeds the hope that antibody preparations with a similar effect can soon be used successfully in humans."
The study is published in the New England Journal of Medicine.
25th February 2016
Pancreatic cancer breakthrough: four subtypes identified
Pancreatic cancer has been found to have four separate subtypes, each with a different cause and requiring a different treatment.
An international team led by Australian researchers has studied the genetics of pancreatic cancer, revealing it is actually four separate diseases each with different genetic triggers and survival rates, paving the way for more accurate diagnoses and treatments.
These major findings also include 10 genetic pathways at the core of transforming normal pancreatic tissue into cancerous tumours. Some of these processes are related to bladder and lung cancers – opening up the possibility of using treatments for these cancers to also treat pancreatic cancer.
The study, led by Prof Sean Grimmond at the University of Melbourne Centre for Cancer Research, was published yesterday in Nature. Over seven years, his team analysed the genomes of 456 pancreatic tumours to find the core processes that are damaged when normal pancreatic tissues change into aggressive cancers.
Professor Grimmond said there was an urgent need for more knowledge about the genetic causes of pancreatic cancer, given its very low survival rate – most patients live only a few months after diagnosis – and the fact that the condition is predicted to become the second most common cancer in Western countries by 2025.
"We identified 32 genes, from 10 genetic pathways that are consistently mutated in pancreatic tumours, but further analysis of gene activity revealed four distinct subtypes of tumours," said Prof. Grimmond. "This study demonstrates that pancreatic cancer is better considered as four separate diseases, with different survival rates, treatments and underlying genetics. Knowing which subtype a patient has would allow a doctor to provide a more accurate prognosis and treatment recommendations."
Importantly, Prof. Grimmond said there are already cancer drugs, and drugs in development, that can potentially target parts of the 'damaged machinery' that drives pancreatic cancers. For example, some subtypes of pancreatic cancer are unexpectedly associated with mutations normally seen in colon cancer or leukaemia, for which experimental drugs are available or in development. Other pancreatic cancers bear strong similarities to some bladder and lung cancers, and researchers can now start to draw on that knowledge to improve treatments.
In a world first, his team performed an integrated genomic analysis – meaning they combined results of several techniques to examine not only the genetic code, but also variations in structure and gene activity, revealing more information than ever before about the genetic damage that leads to pancreatic cancer.
22nd February 2016
Half the world to be short-sighted by 2050
By 2050, half the world's population (nearly 5 billion) will be short-sighted (myopic), with up to one-fifth of them (1 billion) at significantly increased risk of blindness if current trends continue, says a new study.
The number of people with vision loss from high myopia is expected to increase seven-fold from 2000 to 2050, with myopia to become a leading cause of permanent blindness worldwide.
The rapid increase in the prevalence of myopia globally is attributed to "environmental factors (nurture), principally lifestyle changes resulting from a combination of decreased time outdoors and increased near-work activities, among other factors," say study authors from the Brien Holden Vision Institute, University of New South Wales Australia and the Singapore Eye Research Institute.
Their findings – published in the journal Ophthalmology – point to a major public health problem. The authors suggest that planning for comprehensive eye care services is needed to manage the rapid increase in high myopes (a five-fold increase from 2000), along with the development of treatments to control the progression of myopia and prevent people from becoming highly myopic.
Graph showing the number of people estimated to have myopia and high myopia for each decade from 2000 through 2050. Error bars represent the 95% confidence intervals.
"We also need to ensure our children receive a regular eye examination from an optometrist or ophthalmologist, preferably each year, so that preventative strategies can be employed if they are at risk," said co-author Professor Kovin Naidoo, CEO of Brien Holden Vision Institute. "These strategies may include increased time outdoors and reduced time spent on near based activities including electronic devices that require constant focussing up close.
"Furthermore, there are other options such as specially designed spectacle lenses and contact lenses or drug interventions, but increased investment in research is needed to improve the efficacy and access of such interventions."
18th February 2016
Animals revived after being in a frozen state for over 30 years
A study in Cryobiology describes how microscopic tardigrades were successfully revived, and reproduced, after being frozen for over 30 years.
This week, it is reported that tardigrades (water bears) were successfully revived and reproduced after having been frozen for 30 years. A moss sample collected from Antarctica in November 1983, then stored at -20°C (-4°F), was thawed in May 2014. Two individuals and a separate egg retrieved from the thawed sample were revived, thereby providing the longest record of survival for tardigrades as animals or eggs. Subsequently, one of the revived tardigrades and the hatchling repeatedly reproduced after recovering from their long-term cryptobiosis.
The previous records for tardigrade revival after long-term storage were 9 years for eggs in dried storage at room temperature and 8 years for dried storage under a frozen condition. These animals have the ability to temporarily shut down their metabolic activities, induced by certain physiological stimuli including desiccation and freezing, which is called "cryptobiosis."
Tardigrades have a typical lifespan of up to six months – meaning these specimens survived at least 61 times longer than they normally would. So in terms of human years, that is equivalent to a U.S. adult male being frozen today and waking up around 6800 AD.
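The article's comparison can be reproduced with a quick back-of-envelope calculation. The human lifespan figure below is an assumption for illustration, not a number from the study:

```python
FROZEN_YEARS = 30.5     # November 1983 to May 2014
LIFESPAN_YEARS = 0.5    # typical tardigrade lifespan of about six months

# Number of normal lifespans the animals spent frozen
ratio = FROZEN_YEARS / LIFESPAN_YEARS          # 61

HUMAN_LIFESPAN = 78.5   # assumed average human lifespan, in years
equivalent_year = 2016 + ratio * HUMAN_LIFESPAN  # roughly 6800 AD
```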
In previous studies on the long-term survival of small cryptobiotic animals, survival has been the primary observation, whereas the recovery of animals or their subsequent reproduction (i.e. indicating long-term viability) has generally not been reported. Thus, the recovery conditions and reproduction following the revival of tardigrades, from an Antarctic moss sample frozen for 30 years, were documented to help further understand the mechanisms underlying their long-term survival in cryptobiosis.
The tardigrades were approximately 0.2 mm long, which is barely visible to the naked eye. After 30 years of storage, the moss containing the animals was defrosted at 3°C for 24 hours, then soaked in water for an additional 24 hours. Two individuals and one egg were collected from the moss sample and reared on agar plates, with algae provided as food. One of the tardigrades and the juvenile that hatched from the revived egg went on to continuously reproduce.
On the first day after rehydration, one of the revived tardigrades slightly moved its fourth pair of legs. The recovery process was slow, taking two weeks for this animal to crawl and eat. It laid 19 eggs, of which 14 hatched successfully. The time taken for the first egg laid after revival of this individual to hatch was almost double (19 days) the median time taken by all the eggs (9.5 days). The other revived tardigrade also moved its fourth pair of legs on the first day. However, it did not recover successfully and died 20 days after rehydration. The juvenile that hatched from a revived egg ate, grew, and reproduced without any obvious abnormality observed. It laid 15 eggs, of which 7 successfully hatched. The offspring were morphologically identified as Acutuncus antarcticus, a species endemic to Antarctica.
Possible damage accumulated over 30 years of cryptobiosis was indicated by the long recovery time required for the animals and the longer time required for the first egg laid after the revival to hatch. On the other hand, no obvious damage was observed in the specimen that hatched from the revived egg.
"Our team now aims at unravelling the mechanisms underlying the long-term survival of cryptobiotic organisms by studying damage to tardigrades' DNA and their ability to repair it," said Megumu Tsujimoto, lead researcher at the National Institute of Polar Research. The team's paper, "Recovery and reproduction of an Antarctic tardigrade retrieved from a moss sample frozen for over 30 years", appears in the February 2016 issue of Cryobiology.
16th February 2016
Virtual reality therapy could help people with depression
A new immersive virtual reality therapy could help people with depression to be less critical and more compassionate towards themselves, reducing depressive symptoms, finds a new study from University College London (UCL) and ICREA-University of Barcelona.
This new therapy, previously tested on healthy volunteers, was used by 15 depressed patients aged 23-61. Nine reported reduced depressive symptoms a month after the therapy, of whom four experienced a clinically significant drop in depression severity. The study is published in the British Journal of Psychiatry Open and was funded by the Medical Research Council.
Patients in the study wore a virtual reality headset to see from the perspective of a life-size 'avatar' or virtual body. Seeing this virtual body in a mirror moving in the same way as their own body typically produces the illusion that this is their own body. This is called 'embodiment'.
While embodied in an adult avatar, participants were trained to express compassion towards a distressed virtual child. As they talked to the child, it appeared to gradually stop crying and respond positively to the compassion. After a few minutes, the patients were embodied in the virtual child and saw the adult avatar deliver their own compassionate words and gestures back to them. This brief, eight-minute scenario was repeated three times at weekly intervals, and patients were followed up a month later.
"People who struggle with anxiety and depression can be excessively self-critical when things go wrong in their lives," explains study lead Professor Chris Brewin (UCL Clinical, Educational & Health Psychology). "In this study, by comforting the child and then hearing their own words back, patients are indirectly giving themselves compassion. The aim was to teach patients to be more compassionate towards themselves and less self-critical, and we saw promising results. A month after the study, several patients described how their experience had changed their response to real-life situations in which they would previously have been self-critical."
The study offers a promising proof-of-concept, but as a small trial without a control group it cannot show whether the intervention is responsible for the clinical improvement in patients.
"We now hope to develop the technique further to conduct a larger controlled trial, so that we can confidently determine any clinical benefit," says co-author Professor Mel Slater (ICREA-University of Barcelona and UCL Computer Science). "If a substantial benefit is seen, then this therapy could have huge potential. The recent marketing of low-cost home virtual reality systems means that methods such as this could potentially be part of every home and be used on a widespread basis."
2nd February 2016
Graphene shown to safely interface with neurons in the brain
Researchers in Europe have demonstrated that graphene can be successfully interfaced with neurons, while maintaining the integrity of these vital nerve cells. It is believed this could lead to greatly improved brain implants.
A new study published in the journal ACS Nano demonstrates how it is possible to interface graphene with neurons, whilst maintaining the integrity of these vital nerve cells. The research was part of the EU's Graphene Flagship – a €1 billion project that aims to bring graphene from laboratories into commercial applications within 10 years. The study involved a collaboration between nanotechnologists, chemists, biophysicists and neurobiologists from the University of Trieste in Italy, the University Castilla-La Mancha in Spain and the Cambridge Graphene Centre in the UK.
Prof. Laura Ballerini, lead neuroscientist in the study: "For the first time, we interfaced graphene to neurons directly, without any peptide coating used in the past to favour neuronal adhesion. We then tested the ability of neurons to generate electrical signals known to represent brain activities and found that the neurons retained unaltered their neuronal signalling properties. This is the first functional study of neuronal synaptic activity using uncoated, graphene-based materials."
Using electron microscopy and immuno-fluorescence in rat brain cell cultures, the researchers observed that the neurons interfaced well with the untreated graphene electrodes – remaining healthy, transmitting normal electric impulses and, importantly, showing no adverse glial reaction which can lead to damaging scar tissue. This is therefore the first step towards using pristine, graphene-based material for a neuro-interface.
Graphene-based electrodes implanted in the brain could restore sensory functions for amputees or paralysed patients, or treat individuals with motor disorders such as epilepsy or Parkinson's disease. Further into the future, perhaps they could be used to enhance or upgrade the abilities of normal, healthy people too, bringing the age of transhumanism closer to reality.
Too often, the modern electrodes used for neuro-interfaces (based on tungsten or silicon) suffer partial or complete loss of signal over time. This is often caused by scar tissue formation during the electrode insertion and by its rigid nature preventing the electrode from moving with the natural movements of the brain. Graphene, by contrast, appears to be a highly promising material to solve these problems. It has excellent conductivity, flexibility, biocompatibility and stability within the body.
"Hopefully this will pave the way for better deep brain implants to both harness and control the brain, with higher sensitivity and fewer unwanted side effects," said Ballerini.
"These initial results show how we are just at the tip of the iceberg when it comes to the potential of graphene and related materials in bio-applications and medicine," said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. "The expertise developed at the Cambridge Graphene Centre allows us to produce large quantities of pristine material in solution, and this study proves the compatibility of our process with neuro-interfaces."
29th January 2016
Pen-sized microscope identifies cancer cells
Researchers at the University of Washington have developed a new handheld, pen-sized microscope that could identify cancer cells in doctor's offices and operating rooms.
Surgeons removing a malignant brain tumour don't want to leave cancerous material behind. But they're also trying to protect healthy brain matter and minimise neurological harm. Once they open up a patient's skull, there's no time to send tissue samples to a pathology lab – where they are typically frozen, sliced, stained, mounted on slides and investigated under a bulky microscope – to clearly distinguish between cancerous and normal brain cells.
But a handheld, miniature microscope being developed by University of Washington (UW) engineers could allow surgeons to "see" at a cellular level in the operating room and determine precisely where to stop cutting. This technology, made in collaboration with Memorial Sloan Kettering Cancer Centre, Stanford University and the Barrow Neurological Institute, is outlined in the February 2016 issue of Biomedical Optics Express.
"Surgeons don't have a very good way of knowing when they're done cutting out a tumour," said Jonathan Liu, senior author on the paper and a UW assistant professor of mechanical engineering. "They're using their sense of sight, sense of touch, pre-operative images of the brain – and oftentimes it's pretty subjective. Being able to zoom and see at the cellular level during the surgery would really help them to accurately differentiate between tumour and normal tissues and improve patient outcomes."
Similarly, dentists who find a suspicious-looking lesion in a patient's mouth will often have to cut it out and send it to a lab to be biopsied for oral cancer, a process that subjects patients to an invasive procedure and overburdens pathology labs. A miniature microscope with high enough resolution to see changes at a cellular level could be used in dental or dermatological clinics to assess which lesions or moles are normal and which need to be biopsied.
Real-time microscope images (bottom) illuminate similar details in mouse tissues as the images (top) produced during an expensive, multi-day process at a clinical pathology lab. Credit: University of Washington
"The microscope technologies that have been developed over the last couple of decades are expensive and still pretty large," said Milind Rajadhyaksha, at the Memorial Sloan Kettering Cancer Centre in NYC, co-author on the study. "So there's a need for creating much more miniaturised microscopes."
The new microscope developed by UW combines technologies in a compact and novel way, generating high-quality images at faster speeds than existing bulkier devices. It uses "dual-axis confocal microscopy" to illuminate and more clearly see through opaque tissue, capturing details up to half a millimetre beneath the tissue surface, where some types of cancerous cells originate. In one demonstration, for example, the team produced images of fluorescent blood vessels in a mouse ear. In their paper, they show that the device has sufficient resolution to see subcellular details.
"For brain tumour surgery, there are often cells left behind that are invisible to the neurosurgeon. This device will really be the first to let you identify these cells during the operation and determine exactly how much further you can reduce this residual," said project collaborator Nader Sanai, a professor of neurosurgery at the Barrow Neurological Institute in Phoenix. "That's not possible to do today."
Human clinical trials are expected to start in 2017 and the team hopes it can be introduced into surgeries by 2018-2020.
22nd January 2016
Brain implant will connect a million neurons with superfast bandwidth
A neural interface being created by the United States military aims to greatly improve the resolution and connection speed between biological and non-biological matter.
The Defense Advanced Research Projects Agency (DARPA) – a branch of the U.S. military – has announced a new research and development program known as Neural Engineering System Design (NESD). This aims to create a fully implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world.
The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. A communications link would be achieved in a biocompatible device no larger than a cubic centimetre. This could lead to breakthrough treatments for a number of brain-related illnesses, as well as providing new insights into possible future upgrades for aspiring transhumanists.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” says Phillip Alvelda, program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among NESD’s potential applications are devices that could help restore sight or hearing, by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that communicate clearly and individually with any of up to one million neurons in a given region of the brain.
To achieve these ambitious goals and ensure the technology is practical outside of a research setting, DARPA will integrate and work in parallel with numerous areas of science and technology – including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques, to transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent the data with minimal loss.
The NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping, manufacturing services and intellectual property. In later phases of the program, these partners could help transition the resulting technologies into commercial applications. DARPA will invest up to $60 million in the NESD program between now and 2020.
21st January 2016
Nanoparticles kill 90% of antibiotic-resistant bacteria
Light-activated nanoparticles able to kill over 90% of antibiotic-resistant bacteria have been demonstrated at the University of Colorado.
Salmonella bacteria under a microscope. Photo by NIAID / Wikipedia.
Antibiotic-resistant bacteria such as Salmonella, E. coli and Staphylococcus infect some two million people and kill 23,000 in the U.S. each year. Efforts to defeat these so-called "superbugs" have consistently fallen short, due to the bacteria's ability to adapt rapidly and develop immunity to common antibiotics such as penicillin. In 2014, the World Health Organisation declared this a "major global threat" and warned that the world is heading for a post-antibiotic era, in which even common infections and minor injuries that have been treatable for decades could once again kill.
In this ever-escalating evolutionary battle with drug-resistant bacteria, we may soon have an advantage, however, thanks to adaptive, light-activated nanotherapy developed by scientists at the University of Colorado Boulder. Their latest research suggests that the solution to this big global problem might be to think small – very small.
In findings published by the journal Nature Materials, researchers at the Department of Chemical and Biological Engineering and the BioFrontiers Institute describe new light-activated nanoparticles known as "quantum dots." These dots, which are 20,000 times smaller than a human hair and resemble the tiny semiconductors used in consumer electronics, successfully killed 92% of drug-resistant bacterial cells in a lab-grown culture.
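As a rough sense of scale, the "20,000 times smaller than a human hair" comparison can be checked with a line of arithmetic. The ~75 micrometre hair width used here is an assumed typical value, not a figure from the study:

```python
# Sanity check on the scale claim: dividing an assumed typical hair
# width of ~75 micrometres by 20,000 gives a few nanometres - the
# usual size range for semiconductor quantum dots.
hair_diameter_m = 75e-6                      # assumed hair width, metres
dot_diameter_m = hair_diameter_m / 20_000    # "20,000 times smaller"
print(f"{dot_diameter_m * 1e9:.2f} nm")      # ~3.75 nm
```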
"By shrinking these semiconductors down to the nanoscale, we're able to create highly specific interactions within the cellular environment that only target the infection," said Prashant Nagpal, senior author of the study.
Credit: University of Colorado Boulder / BioFrontiers Institute
Previous research has shown that metal nanoparticles – created from silver and gold, among various other metals – can be effective at combating antibiotic-resistant infections, but can indiscriminately damage surrounding cells as well. Quantum dots, however, can be tailored to particular infections thanks to their light-activated properties. The dots remain inactive in darkness, but can be "activated" on command by exposing them to light, allowing researchers to modify the wavelength in order to alter and kill the infected cells.
"While we can always count on these superbugs to adapt and fight the therapy, we can quickly tailor these quantum dots to come up with a new therapy and therefore fight back faster in this evolutionary race," said Nagpal.
The specificity of this innovation may help reduce or eliminate the potential side effects of other treatment methods, as well as provide a path forward for future development and clinical trials.
"Antibiotics are not just a baseline treatment for bacterial infections, but HIV and cancer as well," said Anushree Chatterjee, an assistant professor in the Department of Chemical and Biological Engineering at CU-Boulder and a senior author of the study. "Failure to develop effective treatments for drug-resistant strains is not an option, and that's what this technology moves closer to solving."
Nagpal and Chatterjee are co-founders of PRAAN Biosciences, a Colorado-based startup that can sequence genetic profiles using just a single molecule – technology that may aid in the diagnosis and treatment of superbug strains. The authors have filed a patent on their new quantum dot technology.
21st January 2016
Tiny electronic implants that monitor brain injury, then melt away
Researchers have developed a new class of small, thin electronic sensors that monitor temperature and pressure within the skull – crucial health parameters after a brain injury or surgery – then melt away when no longer needed. This eliminates the need for additional surgery to remove the monitors and reduces the risk of infection and haemorrhage.
Similar sensors can be adapted for postoperative monitoring in other body systems as well, the researchers say. Led by John A. Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign, and Wilson Ray, a professor of neurological surgery at the Washington University School of Medicine in St. Louis, the researchers have published their work in the journal Nature.
"This is a new class of electronic biomedical implants," said Professor Rogers. "These kinds of systems have potential across a range of clinical practices, where therapeutic or monitoring devices are implanted or ingested, perform a sophisticated function, and then resorb harmlessly into the body after their function is no longer necessary."
After a traumatic brain injury or brain surgery, it is crucial to monitor the patient for swelling and pressure on the brain. Current monitoring technology is bulky and invasive, Rogers said, and the wires restrict the patient's movement and hamper physical therapy as they recover. Because they require continuous, hard-wired access into the head, such implants also carry the risk of allergic reactions, infection and haemorrhage, and could even exacerbate the inflammation they are meant to monitor.
"If you simply could throw out all the conventional hardware and replace it with very tiny, fully implantable sensors capable of the same function, constructed out of bioresorbable materials in a way that also eliminates or greatly miniaturises the wires, then you could remove a lot of the risk and achieve better patient outcomes," Rogers said. "We were able to demonstrate all of these key features in animal models, with a measurement precision that's just as good as that of conventional devices."
The new devices incorporate dissolvable silicon technology developed by Rogers' group. The sensors, smaller than a grain of rice, are built on extremely thin sheets of silicon – which are naturally biodegradable – that are configured to function normally for a few weeks, then dissolve away, completely and harmlessly in the body's own fluids.
Rogers' group teamed with Illinois materials science and engineering professor Paul V. Braun to make the silicon platforms sensitive to clinically relevant pressure levels in the intracranial fluid surrounding the brain. They also added a tiny temperature sensor and connected it to a wireless transmitter roughly the size of a postage stamp, implanted under the skin but on top of the skull.
The Illinois group worked with clinical experts in traumatic brain injury at Washington University to implant the sensors in rats, testing for performance and biocompatibility. They found that the temperature and pressure readings from the dissolvable sensors matched the conventional monitoring devices for accuracy.
"The ultimate strategy is to have a device that you can place in the brain – or in other organs in the body – that is entirely implanted, intimately connected with the organ you want to monitor and can transmit signals wirelessly to provide information on the health of that organ, allowing doctors to intervene if necessary to prevent bigger problems," said Rory Murphy, a neurosurgeon at Washington University and co-author of the paper. "After the critical period that you actually want to monitor, it will dissolve away and disappear."
The researchers are moving toward human trials for this technology, as well as extending its functionality for other biomedical applications.
"We have established a range of device variations, materials and measurement capabilities for sensing in other clinical contexts," Rogers said. "In the near future, we believe that it will be possible to embed therapeutic function – such as electrical stimulation or drug delivery – into the same systems while retaining the essential bioresorbable character."
Dissolution of NFC system. All images credit: University of Illinois
26th December 2015
New genes associated with extreme longevity identified
A new Big Data statistical method has identified five longevity loci, providing clues about the physiological mechanisms of successful aging.
Centenarians – that is, people who live to be 100 or more – make up around 0.1% of the 40 million U.S. adults aged 65 and older. These individuals demonstrate successful aging as they remain active and alert even at very old ages. In a study this month, scientists at Stanford University and the University of Bologna have uncovered new clues about the basis for longevity, by finding genetic loci associated with extreme lifespans.
Previous research has indicated that centenarians have health and dietary habits similar to the average person, suggesting that factors in their genetic make-up could contribute to successful aging. However, prior studies have identified only a single gene (APOE, known to be involved in Alzheimer's) that was different in centenarians versus normal agers. The results from this latest study indicate that, in fact, several disease variants may be absent in centenarians versus the general population.
Disease GWAS show substantial genetic overlap with longevity. Shown are results for coronary artery disease and Alzheimer's disease. The y-axis is the observed P values for longevity, and the x-axis is the expected P values under the null hypothesis that the disease is independent of longevity. Cyan, blue and purple lines show the P values for longevity of the top 100, 250, and 500 disease SNPs from independent genetic loci, respectively. Red lines show the background distribution of longevity P values for all independent genetic loci tested in both the longevity and disease GWAS. The grey diagonal line corresponds to threshold for nominal significance (P< = 0.05) for longevity.
The report by Kristen Fortney and colleagues, published in PLOS Genetics, is an example of using Big Data to glean information about an extremely complicated trait such as longevity. To find the longevity genes, they first developed a new statistical method, known as informed genome-wide association studies (iGWAS). This took advantage of existing data from 14 diseases to narrow the search for genes associated with longevity. By using their iGWAS method, the scientists found five longevity loci, providing valuable clues about the physiological mechanisms for healthy aging. These loci are known to be involved in various processes – including cell senescence, autoimmunity and cell signalling, as well as Alzheimer's disease.
The incidence of nearly all diseases increases with age, so understanding the genetic factors for successful aging could have a large impact on health. Future work may lead to a better understanding of precisely how these genes enable successful aging. Future studies could also identify additional longevity genes by recruiting a greater number of centenarians for analysis.
8th December 2015
Genes for longer and healthier life identified
From a 'haystack' of 40,000 genes in three different organisms, scientists have found genes that are involved in physical aging. If you influence only one of these genes, the healthy lifespan of laboratory animals is extended – and possibly that of humans, too.
Driven by the quest for eternal youth, humankind has spent centuries obsessed with the question of how exactly it is that we age. With advancements in molecular genetics in recent decades, the search for genes involved in the aging process has greatly accelerated. Until now, this was mostly limited to genes of individual model organisms such as the C. elegans nematode, which revealed that around 1% of its genes could influence life expectancy. However, researchers have long assumed that such genes arose during the course of evolution and exist in all living beings whose cells contain a nucleus – from yeast to humans.
Researchers at ETH Zurich and the JenAge consortium in Germany have now systematically gone through the genomes of three different organisms in search of the genes associated with the aging process that are present in all three species – and thus, derived from a common ancestor. Although they are found in different organisms, these so-called orthologous genes are closely related to each other, and they are all found in humans, too.
To detect them, researchers examined around 40,000 genes in the nematode C. elegans, zebrafish and mice. By screening them, the scientists wanted to determine which genes are regulated in an identical manner in all three organisms at each comparable aging stage: young, mature and old. As a measure of gene activity, they recorded the amount of messenger RNA (mRNA) molecules found in the cells of these animals. mRNA is the transcript of a gene and the blueprint of a protein. When there are many copies of an mRNA of a specific gene, the gene is very active; it is said to be "upregulated". Conversely, fewer mRNA copies are regarded as a sign of low activity.
From this information, the researchers used statistical models to establish an intersection of genes that were regulated in the same manner in the worms, fish and mice. This showed that the three organisms have only 30 genes in common that significantly influence the aging process.
From left to right: C. elegans nematode, zebrafish and mouse.
Credit: Bob Goldstein [CC BY-SA 3.0]
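The screening logic described above – classify each gene's activity change from young to old by its mRNA count, then keep only genes regulated the same way in all three species – can be sketched in a few lines of Python. The gene names, counts and fold-change threshold below are invented for illustration; the actual study used statistical models over genome-wide mRNA measurements:

```python
# Sketch of the cross-species intersection step described in the text.
# All data values here are invented for illustration only.

def regulation_calls(mrna_young, mrna_old, threshold=1.5):
    """Classify each gene as 'up', 'down' or 'flat' by mRNA fold change."""
    calls = {}
    for gene in mrna_young:
        ratio = mrna_old[gene] / mrna_young[gene]
        if ratio >= threshold:
            calls[gene] = "up"
        elif ratio <= 1 / threshold:
            calls[gene] = "down"
        else:
            calls[gene] = "flat"
    return calls

# Toy mRNA counts (young vs old) for three species, keyed by orthologue name
worm  = regulation_calls({"bcat-1": 10, "geneX": 8}, {"bcat-1": 20, "geneX": 8})
fish  = regulation_calls({"bcat-1": 6,  "geneX": 5}, {"bcat-1": 14, "geneX": 2})
mouse = regulation_calls({"bcat-1": 9,  "geneX": 7}, {"bcat-1": 19, "geneX": 7})

# Keep only genes regulated the same way (and not flat) in all three organisms
shared = {g for g in worm
          if worm[g] != "flat" and worm[g] == fish[g] == mouse[g]}
print(shared)  # {'bcat-1'}
```

In the real study, this intersection shrank ~40,000 candidate genes to just 30 shared age-regulated genes.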
By conducting experiments in which the mRNAs of the corresponding genes were selectively blocked, the researchers pinpointed their effect on the aging process in nematode worms. For a dozen of these genes, blocking them extended the lifespan by at least five percent.
One of these genes proved to be particularly influential: the bcat-1 gene. "When we blocked the effect of this gene, it significantly extended the mean lifespan of the nematode by up to 25 percent," says Professor Michael Ristow, coordinating author of the recently published study and Professor of Energy Metabolism at ETH.
When the gene activity of bcat-1 was inhibited, branched-chain amino acids accumulated in the tissue, triggering a molecular signalling cascade that increased longevity. Moreover, the timespan during which the worms remained healthy was extended. As a measure of vitality, the researchers observed the accumulation of aging pigments, the speed at which the creatures moved, and how often the nematodes successfully reproduced. All of these parameters improved markedly.
Professor Ristow has no doubt that the same mechanism occurs in humans: "We looked only for the genes that are conserved in evolution and therefore exist in all organisms including humans," he says. A follow-up study is already planned. "However, we can't measure the life expectancy of humans for obvious reasons," he adds. Instead, they plan to incorporate various health parameters, such as cholesterol or blood sugar levels in their study to obtain indicators on the health status of their subjects.
Multiple branched-chain amino acids are already being used to treat liver damage and also feature in sports nutrition products. This follow-up study will deliver new and important indicators on how the aging process could be influenced and how age-related diseases might be prevented.
"However, the point is not for people to grow even older – but rather, to stay healthy for longer," the researchers argue. Given the unfavourable demographics and steadily increasing life expectancy, it is important to extend the healthy life phase – or "healthspan" – and not to simply reach an even higher age that is characterised by chronic diseases. With such preventive measures, elderly people could greatly improve their quality of life, while at the same time cutting their healthcare costs by more than half.
28th November 2015
Cell survival genes identified
By switching off, one by one, almost 18,000 genes — about 90 per cent of the entire human genome — scientists have identified the genes that are essential for cell survival. This could improve our understanding of which genes are most important in diseases like cancer.
Cancer cells grown in a dish. Credit: University of Toronto
Scientists from the University of Toronto's Donnelly Centre have mapped out the genes that keep our cells alive, creating a long-awaited foothold for understanding how our genome works and which genes are crucial in diseases like cancer. A team of researchers led by Professor Jason Moffat has switched off, one by one, almost 18,000 genes — 90% of the entire human genome — to find the genes that are essential for cell survival.
The data, published on 25th November in the peer-reviewed journal Cell, reveals a "core" set of 1,500 essential genes. This lays the foundation for reaching the long-standing goal in biomedical research of pinpointing a role for every single gene in the human genome.
By turning genes off in five different cancer cell lines — including brain, retinal, ovarian, and two kinds of colorectal cancer cells — the team uncovered that each set of cells relies on a unique set of genes that can be targeted by specific drugs. This finding raises hope of devising new treatments that would target only cancerous cells, leaving the healthy tissue unharmed.
"It's when you get outside the core set of essential genes, that it starts to get interesting in terms of how to target particular genes in different cancers and other disease states," says Moffat.
Sequencing of the human genome in 2003 allowed scientists to compile a list of parts – our 20,000 genes – which form our cells and bodies. But despite this major achievement, they still didn't understand the function of each individual gene, or how some genes make us sick when they go wrong. For this, scientists realised they would have to switch genes off, one by one across the entire genome to determine what processes go wrong in the cells. But the available tools were either inaccurate or too slow.
The recent arrival of the gene-editing technology CRISPR has finally made it possible to turn genes off, swiftly and with pinpoint accuracy, kicking off a global race among multiple competing research teams. The Toronto study, along with a parallel paper from Harvard and MIT published recently in Science, found that roughly 8% of our genes are essential for cell survival.
These findings show the majority of human genes play more subtle roles in the cell, because switching them off doesn't kill the cell. But if two or more of such genes are mutated at the same time, or the cells are under environmental stress, their loss begins to count.
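The headline numbers are easy to reconcile: a core of roughly 1,500 essential genes among the ~18,000 screened is the "roughly 8%" figure both studies report. A trivial check:

```python
# The ~1,500 core essential genes out of ~18,000 screened correspond
# to the "roughly 8%" figure quoted in the text.
genes_screened = 18_000   # ~90% of the genome, switched off one by one
core_essential = 1_500    # genes whose loss kills the cell
fraction = core_essential / genes_screened
print(f"{fraction:.1%}")  # 8.3%
```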
Because different cancers have different mutations, they tend to rely on different sets of genes to survive. Professor Moffat's team have identified distinct sets of "smoking gun" genes for each of the tested cancers – each set susceptible to different drugs.
"We can now interrogate our genome at unprecedented resolution in human cells that we grow in the lab with incredible speed and accuracy," he says. "In short order, this will lead to a functional map of cancer that will link drug targets to DNA sequence variation."
Already, his team has shown how this can work. In their study, a widely prescribed diabetes drug called metformin successfully killed brain cancer cells and those of one form of colorectal cancer – but was useless against the other cancers studied. However, the antibiotics chloramphenicol and linezolid were effective against another form of colorectal cancer, and not against brain or other cancers studied. These results illustrate the clinical potential of the data in pointing to more precise treatments for different cancers – and show the value of personalised medicine.
"The Moffat group has developed a powerful CRISPR library that could be used by investigators around the world to identify new strategies for the treatment of cancer," says Dr. Aaron Schimmer from Princess Margaret Cancer Centre in Toronto, who was not involved in the study. "I would be interested in using this tool to identify new treatment approaches for acute myeloid leukaemia – a blood cancer with a high mortality rate."
23rd November 2015
Global drug spending to increase 30% by 2020
Global spending on medicines is predicted to rise by 30% over the next five years – driven by expensive new drugs, price hikes, aging populations and increased generic drug use in developing countries, according to a new forecast by IMS Health.
More than half of the world’s population will live in countries where medicine use will exceed one dose per person per day by 2020 – up from 31 percent in 2005, as the medicine use gap between the developed and “pharmerging” markets narrows. According to new research by the IMS Institute for Healthcare Informatics, total spending on medicines will reach $1.4 trillion by 2020, due to greater patient access to chronic disease treatments and breakthrough innovations in drug therapies. Global spending is forecast to grow at a 4-7 percent compound annual rate over the next five years.
The report, Global Medicines Use in 2020: Outlook and Implications, found that total global spend for pharmaceuticals will increase by $349 billion on a constant-dollar basis, compared with $182 billion during the past five years. Spending is measured at the ex-manufacturer level before adjusting for rebates, discounts, taxes and other adjustments that affect net sales received by manufacturers. The impact of these factors is estimated to reduce growth by $90 billion or approximately 25 percent of the growth forecast through 2020.
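These figures are mutually consistent: a 2020 total of $1.4 trillion reached by adding $349 billion over five years implies a compound annual growth rate near the middle of the forecast 4-7 percent band. A quick back-of-envelope check, taking the quoted constant-dollar figures at face value:

```python
# Back-of-envelope check of the IMS forecast figures quoted above.
target_2020 = 1.4e12        # forecast total spend in 2020, USD
growth_5yr = 349e9          # constant-dollar increase over 2015-2020
base_2015 = target_2020 - growth_5yr

# Implied compound annual growth rate over the five-year window
cagr = (target_2020 / base_2015) ** (1 / 5) - 1
print(f"implied 2015 base: ${base_2015 / 1e12:.3f} trillion")  # $1.051 trillion
print(f"implied CAGR: {cagr:.1%}")                             # ~5.9%
```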
“During the next five years, we expect to see a surge of innovative medicines emerging from R&D pipelines, as well as technology-enabled advances that will deliver measurable improvements to health outcomes,” said Murray Aitken, IMS Health senior vice president and executive director of the IMS Institute for Healthcare Informatics. “With unprecedented treatment options, greater availability of low-cost drugs and better use of evidence to inform decision making, stakeholders around the world can expect to get more ‘bang for their medicine buck’ in 2020 than ever before.”
In its latest study, the IMS Institute highlights the following findings:
• Global medicine use in 2020 will reach 4.5 trillion doses, up 24 percent from 2015. Most of the global increase in use of medicines will take place in pharmerging markets, with India, China, Brazil and Indonesia representing nearly half of that growth. Volumes in developed markets will remain relatively stable and trend toward original branded products, as use of specialty medicines becomes more widespread. Generics, non-original branded and over-the-counter (OTC) products will account for 88 percent of total medicine use in pharmerging markets by 2020, and provide the greatest contribution to increased access to medicines in those countries. Newer specialty medicines, which typically have low adoption rates in pharmerging countries lacking the necessary healthcare infrastructure, will represent less than one percent of the total volume in those markets.
• Global spending will grow by 29-32 percent through 2020, compared with an increase of 35 percent in the prior five years. Spending levels will be driven by branded drugs primarily in developed markets, along with the greater use of generics in pharmerging markets – offset by the impact of patent expiries. Brand spending in developed markets will rise by $298 billion as new products are launched and as price increases are applied in the U.S., most of which will be offset by off-invoice discounts and rebates. Patent expiries are expected to result in $178 billion in reduced spending on branded products, including $41 billion in savings on biologics as biosimilars become more widely adopted. Many of the newest treatments are specialty medicines used to address chronic, rare or genetic diseases and yielding significant clinical value. By 2020, global spending on these medicines is expected to reach 28 percent of the total.
• More than 90 percent of U.S. medicines will be dispensed as generics by 2020. Generic medicines will continue to provide the vast majority of the prescription drug usage in the U.S., rising from 88 percent to 91-92 percent of all prescriptions dispensed by 2020. Spending on medicines in the U.S. will reach $560-590 billion, a 34 percent increase in spending over 2015 on an invoice price basis. While invoice price growth – which does not reflect discounts and rebates received by payers – is expected to continue at historic levels during the next five years, net price trends for protected brands will remain constrained by payers and competition, resulting in 5-7 percent annual price increases. The impact of the Affordable Care Act (ACA) will continue to have an effect on medicine spending during the next five years largely due to expanded insurance coverage. By 2020, there will be broad adoption of ACA provisions that encourage greater care coordination and movement of at least one-third of spending to an outcomes or performance basis.
• More than 225 medicines will be introduced by 2020, with one-third focused on treating cancer. Disease treatments in 2020 will be transformed by the increased number and quality of new drugs in clusters of innovation around cancer, hepatitis C, autoimmune disorders, heart disease and an array of rare diseases. During the next five years, an additional 75 new orphan drugs are expected to be available for dozens of therapeutic areas that currently have limited or no treatment options. By 2020, technology will be enabling more rapid changes to treatment protocols, increasing patient engagement and accountability, shifting patient-provider interaction, and accelerating the adoption of behaviour changes that will improve patient adherence to treatments. Every patient with multiple chronic conditions will have the potential to use wearables, mobile apps and other technologies to manage their health, interact with providers, fellow patients and family members. The ubiquity of smartphones, tablets, apps and related wearable devices, as well as electronic medical records and exponentially increasing real-world data volumes, will open new avenues to connect healthcare while offering providers and payers new mechanisms to control costs.
The full report, including a detailed description of the methodology, is available at theimsinstitute.org. It can also be downloaded as an app via iTunes at https://itunes.apple.com. The study was produced independently as a public service, without industry or government funding.
22nd November 2015
Genetically modified salmon approved by FDA
For the first time, the U.S. Food and Drug Administration (FDA) has approved genetically modified fish for human consumption.
AquaBounty Technologies, Inc., a biotechnology company focused on enhancing productivity in aquaculture, announced this week that the FDA has approved its application for the production, sale and consumption of "AquAdvantage Salmon". This Atlantic salmon has been genetically enhanced to reach market size in less time than conventional farmed Atlantic salmon.
Ronald Stotish, Ph.D., CEO of AquaBounty, commented: "AquAdvantage Salmon is a game-changer that brings healthy and nutritious food to consumers in an environmentally responsible manner without damaging the ocean and other marine habitats. Using land-based aquaculture systems, this rich source of protein and other nutrients can be farmed close to major consumer markets in a more sustainable manner."
The U.S. currently imports over 90% of its seafood – and more specifically, over 95% of the Atlantic salmon it consumes. AquAdvantage Salmon will create the opportunity to grow an economically viable, domestic aquaculture industry. Through greater efficiency and localised production, AquaBounty claims it can increase productivity while reducing the costs and environmental impacts of current salmon farming operations. Land-based aquaculture systems can provide a continuous supply of fresh, safe, traceable and sustainable GM salmon to communities across the U.S. and do so with a lower carbon footprint. This offers an alternative approach to fish farming that does not exploit the oceans.
Jack Bobo, Senior Vice President and Chief Communications Officer at parent company Intrexon, stated: "The U.S. Dietary Guidelines Advisory Committee encourages Americans to eat a wide variety of seafood, including wild caught and farmed, as part of a healthy diet rich in healthy fatty acids. However, this must occur in an environmentally friendly and sustainable manner. FDA's approval of the AquAdvantage Salmon is an important step in this direction."
The AquAdvantage fish program is based on a molecular modification that results in more rapid growth during early development. A gene responsible for growth hormone regulation is taken from a Pacific Chinook salmon, combined with a promoter from an ocean pout, then inserted into the Atlantic salmon's genome of roughly 40,000 genes. This makes the fish grow year-round, instead of only during spring and summer, without affecting its ultimate size or other qualities. The GM fish grows to market size in 16 to 18 months, rather than three years.
The AquAdvantage program has other qualities that improve its sustainability credentials. The fish require 25% less feed than other Atlantic salmon on the market today. When farmed in land-based facilities close to major metropolitan areas, they will travel only a short distance to the consumer. Not only will this make them the freshest fish on the market, it will significantly cut the transportation distance from farm to table. Unlike salmon imported from Norway and Chile that travel thousands of miles by airfreight and are then trucked to markets, AquaBounty's salmon will have a carbon footprint that is 23 to 25 times smaller.
The FDA determined that the approval of the GM technology would not have a significant environmental impact, because of multiple and redundant measures taken to contain the fish and prevent their escape into the wild. These measures include a series of physical barriers placed in the tanks and in the plumbing that carries water out of the facilities to block the eggs and fish. Furthermore, the AquAdvantage Salmon are reproductively sterile, so that even in the highly unlikely event of an escape, they would be unable to interbreed or establish populations in the wild. The FDA will maintain regulatory oversight of the production and facilities and will conduct inspections to confirm these containment measures remain adequate.
Despite a lengthy and detailed review process, however, the FDA's approval has provoked an angry response from some, who have questioned the safety aspects and object to the fact that no labelling will be required to indicate the fish were genetically engineered. The Center for Food Safety (CFS), a non-profit organisation working to protect human health and promote organic food methods, has already announced plans to sue the FDA and prevent the modified salmon being sold in the U.S.
"The review process by FDA was inadequate, failed to fully examine the likely impacts of the salmon's introduction and lacked a comprehensive analysis," said executive director Andrew Kimbrell in a press statement, citing the 2 million people who filed public comments in opposition, the largest number of comments the FDA has ever received on any issue. "This decision sets a dangerous precedent, lowering the standards of safety in this country. CFS will hold FDA to their obligations to the American people."
Globally, traditional "capture" fisheries have been on a plateau since the late 1980s due to unsustainable yields. Aquaculture is now among the fastest growing industries in the agricultural sector and is projected to supply the majority of the world's seafood by the mid-2020s, overtaking wild catch harvests by weight. With fisheries collapsing from over-exploitation, pollution, climate change and other problems, aquaculture is likely to become a sustainable and vitally important industry of the 21st century.
20th November 2015
Self-healing sensor brings 'electronic skin' closer to reality
Scientists have developed a self-healing, flexible sensor that mimics the self-healing properties of human skin. Cuts or scratches to the sensors "heal" themselves in less than one day.
Flexible sensors have been developed for use in consumer electronics, robotics, health care, and space flight. Future possible applications could include the creation of ‘electronic skin’ and prosthetic limbs that allow wearers to ‘feel’ changes in their environments.
One problem with current flexible sensors, however, is that they can be easily scratched or otherwise damaged, potentially destroying their functionality. Researchers in the Department of Chemical Engineering at the Technion – Israel Institute of Technology in Haifa, Israel, inspired by the healing properties of human skin, have developed materials that can be integrated into flexible devices to “heal” incidental scratches or damaging cuts that might compromise device functionality. The advance uses a new kind of synthetic polymer (a polymer is a large molecule composed of many repeated smaller units) with self-healing properties that mimic human skin, meaning that e-skin “wounds” can “heal” themselves in a remarkably short time – less than a day.
A paper outlining the characteristics and applications of the unique, self-healing sensor has been published in the current issue of Advanced Materials.
“The vulnerability of flexible sensors used in real-world applications calls for the development of self-healing properties similar to how human skin heals,” said self-healing sensor co-developer Professor Hossam Haick. “Accordingly, we have developed a complete, self-healing device in the form of a bendable and stretchable chemiresistor where every part – no matter where the device is cut or scratched – is self-healing.”
The new sensor comprises a self-healing substrate, high-conductivity electrodes, and molecularly modified gold nanoparticles. “The gold particles on top of the substrate and between the self-healing electrodes are able to ‘heal’ cracks that could completely disconnect electrical connectivity,” explains Prof. Haick.
Once healed, the polymer substrate of the self-healing sensor demonstrates sensitivity to volatile organic compounds (VOCs), with detection capability down to tens of parts per billion. It also demonstrates superior healability at extreme temperatures, from -20 degrees C to 40 degrees C. This property, said the researchers, can extend applications of the self-healing sensor to areas of the world with extreme climates. From sub-freezing cold to equatorial heat, the self-healing sensor is environment-stable.
The healing polymer works quickest, said the researchers, when the temperature is between 0 degrees C and 10 degrees C, when moisture condenses and is then absorbed by the substrate. Condensation makes the substrate swell, allowing the polymer chains to begin to flow freely and, in effect, begin “healing.” Once healed, the nonbiological, chemiresistor still has high sensitivity to touch, pressure and strain, which the researchers tested in demanding stretching and bending tests.
Another unique feature is that the electrode's resistance increases after healing, and the healed electrode can survive 20 times (or more) as many cutting/healing cycles as it could prior to healing. Essentially, healing makes the self-healing sensor even stronger. The researchers noted in their paper that “the healing efficiency of this chemiresistor is so high that the sensor survived several cuttings at random positions.”
The researchers are currently experimenting with carbon-based self-healing composites and self-healing transistors.
“The self-healing sensor raises expectations that flexible devices might someday be self-administered, which increases their reliability,” explained co-developer Dr. Tan-Phat Huynh, also of the Technion, whose work focuses on the development of self-healing electronic skin. “One day, the self-healing sensor could serve as a platform for biosensors that monitor human health using electronic skin.”
9th November 2015
Fastest ever brain-computer interface for spelling
Researchers in China have achieved high-speed spelling with a noninvasive brain-computer interface.
Brain–computer interfaces (BCIs) are a relatively new and emerging technology allowing direct communication between the brain and an external device. They are used for assisting, augmenting, or repairing cognitive or sensory-motor functions. Research on BCIs began in the 1970s and the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
The past 20 years have seen major progress in BCIs. However, they are still limited by low communication rates, caused by interference from spontaneous electroencephalography (EEG) signals. Now, a team of researchers from Tsinghua University in China, the State Key Laboratory of Integrated Optoelectronics, the Institute of Semiconductors (IOS), and the Chinese Academy of Sciences have developed a greatly improved system. Their EEG-based BCI speller can achieve information transfer rates (ITRs) of 60 characters (∼12 words) per minute, by far the highest ever reported in BCI spellers for either noninvasive or invasive methods. In some of the tests, they reached up to 5.32 bits per second. For comparison, most other systems in recent years have achieved ITRs of only around 1 or 2 bits per second.
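To see where a figure like 5.32 bits per second can come from, a common way to quantify ITR is the Wolpaw formula, which converts the number of selectable targets, the selection accuracy and the time per selection into bits per second. The sketch below is purely illustrative (it is not the authors' exact calculation); the one-second trial time, combining the flicker with an assumed gaze-shift interval, is our assumption.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate, in bits per second.

    n_targets:     number of selectable symbols on the speller
    accuracy:      probability of a correct selection (0 < p <= 1)
    trial_seconds: time taken per selection, including any gaze-shift time
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)  # maximum information per selection
    if p < 1.0:
        # penalty for imperfect accuracy (errors spread over n-1 wrong targets)
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits / trial_seconds

# 40 targets selected with perfect accuracy, one selection per second,
# gives log2(40) ~ 5.32 bits/s -- matching the peak rate reported above.
peak = wolpaw_itr(40, 1.0, 1.0)
```

Note how quickly the rate drops with accuracy: at 90% accuracy over the same 40 targets, the same formula yields well under 5 bits per second, which is why the system's very high single-trial classification accuracy matters as much as its speed.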
According to the researchers, they achieved this via an extremely high consistency of frequency and phase between the visual flickering signals and the elicited single-trial steady-state visual evoked potentials. Specifically, they developed a new joint frequency-phase modulation (JFPM) method to tag 40 characters with 0.5-second-long flickering signals, and created a user-specific target identification algorithm using individual calibration data. A paper describing this breakthrough appears in the 3rd November edition of the journal Proceedings of the National Academy of Sciences (PNAS).
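The idea behind JFPM-style tagging is that each on-screen character flickers as a sinusoid with its own unique (frequency, phase) pair, so the evoked brain response can be matched back to a single target. The sketch below illustrates the concept only; the base frequency, step sizes and 60 Hz refresh rate are our assumptions, not the paper's exact parameters.

```python
import math

# Illustrative JFPM-style tagging: target k gets frequency F0 + k*DF
# and phase PHI0 + k*DPHI (values here are assumptions for illustration).
N_TARGETS = 40
F0, DF = 8.0, 0.2                  # base frequency (Hz) and frequency step
PHI0, DPHI = 0.0, 0.35 * math.pi   # base phase and phase step (radians)
REFRESH = 60.0                     # assumed monitor refresh rate (Hz)
DURATION = 0.5                     # flicker duration per trial (s)

def flicker(k):
    """Frame-by-frame luminance sequence for target k over one 0.5 s trial."""
    f = F0 + k * DF
    phi = (PHI0 + k * DPHI) % (2 * math.pi)
    n_frames = int(REFRESH * DURATION)
    # Sampled sinusoid mapped into the [0, 1] luminance range, one value
    # per display frame.
    return [0.5 * (1 + math.sin(2 * math.pi * f * (i / REFRESH) + phi))
            for i in range(n_frames)]

frames = flicker(39)  # highest-frequency target under these assumptions
```

Because every target differs in both frequency and phase, a decoder can correlate the recorded EEG against each candidate (frequency, phase) template and pick the best match, which is roughly what the user-specific identification algorithm does with calibration data.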
In the not-too-distant future, this kind of technology could be applied to other uses, besides medicine. For example, it could be incorporated into smartphones and other consumer electronics to allow texting, typing or other on-screen actions by thought power alone. A partnership between the Japanese government and private sector aims to achieve this by 2020. With continued progress in the speed of BCIs, a new form of "virtual telepathy" could emerge within a few decades.