The second quarter of the 21st century is marked by a rapid rise in unemployment around much of the world.* This results in considerable economic, political and cultural upheaval. For most of the 200 years since the Industrial Revolution, new advances in technology and automation had tended to create more jobs than they destroyed. By the 21st century, however, this was no longer true. A fundamental change had begun to occur.**
Median wages, already falling in recent decades, had continued to stagnate – particularly in the West.*** Globalisation and the outsourcing of jobs to overseas markets with lower international labour rates had, of course, been partly responsible in the past. But a growing and rapidly accelerating trend was the impact of machines and intelligent software programs. Not only were their physical abilities becoming more humanlike;******** in many ways their analytical and cognitive skills were beginning to match those of people too.******
Blue collar workers had traditionally borne the brunt of layoffs from technological unemployment. This time, white collar jobs were no longer safe either.* Advanced robotics, increasingly sophisticated algorithms, deep learning networks, exponential growth in computer processing power and bandwidth, voice/facial recognition and other tech – all were paving the way towards a highly automated society. Furthermore, of the (few) new jobs being created, most were in highly skilled roles, making it hard or impossible for those made redundant to adapt. Many workers now faced permanent unemployment.
By 2025, transport was among the sectors feeling the biggest impacts.* The idea of self-driving vehicles had once been science fiction, but money was being poured into research and development. In 2015, the first licensed autonomous truck was announced. These hi-tech vehicles saw rapid adoption. Initially they required a driver to be present, who could take over in case of emergencies, but later versions were fully autonomous.* In the US alone, there were 3.5 million truck drivers, with a further 5.2 million people in non-driving jobs dependent on the truck-driving industry, such as highway cafes and motels where drivers would stop to eat, drink, rest and sleep. A similar trend would follow with other vehicle types,* such as taxis, alongside public transport including trains – notably the London Underground.* With human drivers accounting for a third of operating costs through their salaries alone, the business case was strong. Self-driving vehicles would never require a salary, training, sleep, pension payments, health insurance, holidays or other associated costs and time, would never drink alcohol, and would never be distracted by mobile phones or tempted by road rage.
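The operating-cost argument above can be sketched with a minimal calculation. All figures below are hypothetical, for illustration only; the only input taken from the text is the assumption that driver salaries make up roughly a third of operating costs.

```python
# Hypothetical illustration of the operating-cost argument.
# Assumption (from the text): driver salary makes up ~1/3 of a
# truck's annual operating cost. The $180,000 figure is invented.

def annual_saving(total_operating_cost: float, driver_share: float = 1/3) -> float:
    """Cost removed per vehicle per year once a driver is no longer needed."""
    return total_operating_cost * driver_share

saving = annual_saving(180_000)
print(f"Saving per truck: ${saving:,.0f}/year")
```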
Manufacturing was another area seeing rapid change. This sector had already witnessed heavy automation in earlier decades, in the form of robots capable of constructing cars. In general, however, these machines were limited to a fixed set of pre-defined movements – repetitive actions performed over and over again. Robots with far more adaptability and dynamism would emerge during the early 21st century. Just one example was "Baxter", developed by Rethink Robotics.* Baxter could understand its environment and was safe enough to work shoulder-to-shoulder with people while offering a broad range of skills. Priced at only $22,000, this model was aimed at small and midsize manufacturers – companies that had never been able to afford robots before. It was fast and easy to configure, going from delivery to the factory floor in under an hour, unlike traditional robots that required manufacturers to develop custom software and make additional capital investments.
Robots were increasingly used in aerospace,* agriculture,*** cleaning,* delivery services (via drone),** elderly care homes, hospitals,* hotels,** kitchens,** military operations,**** mining,* retail environments,* security patrols** and warehouses.* In the scientific arena, some machines were now performing the equivalent of 12 years' worth of human research in a week.* Rapid growth in solar PV installations led some analysts to believe that a new era of green jobs was about to explode,* but robots were capable of this task with greater speed and efficiency than human engineers.*
Holographic representations of people were also being deployed in various public assistant/receptionist roles. While the first generation lacked the ability to hold a two-way conversation, later versions became more interactive and intelligent.**
Other examples of automation included self-service checkouts,* later followed by more advanced forms of "instant" payment via a combination of RFID tracking and doorway scanners* (which also enabled stock levels to be monitored and audited without humans). Cafes and restaurants had begun using a system of touchscreen displays, tablets and mobile apps to improve the speed and accuracy of the order process,* with many establishments also providing machines to rapidly create and dispense meals/drinks,* particularly in fast food chains like McDonald's.
AI software, algorithms and mobile apps had exploded in use during the 2010s and this trend continued in subsequent decades. Some bots were now capable of writing and publishing their own articles online.* Virtual lawyers were being developed to predict the likely outcome and impact of law suits; there were virtual doctors and medical bots (such as Watson), with increasingly computerised analysis and reporting of big data (able to find the proverbial "needle in a haystack" with hyper-accuracy and speed);* virtual teachers and other virtual professions.
3D printing was another emerging trend, which by the 2020s had become a mainstream consumer phenomenon for the home* and was increasingly used in large-scale formats and industrial settings too; even for the construction of buildings and vehicles. By 2040, traditional manufacturing jobs had been largely eliminated in the US* and many other Western societies. Meanwhile, the ability to quickly and cheaply print shoes, clothing and other personal items was impacting large numbers of jobs in developing nations, particularly those in Asian sweatshops.*
The tide of change was undeniable. All of these developments led to a growing unemployment crisis; not immediately and not everywhere, but enough to become a major issue for society. Unions in the past had attempted to protect their workers from such impacts, but memberships were at record lows – and in any case, they had never been particularly effective in slowing the march of technology and economics.
Sources: World Bank* and the Oxford Martin Programme on the Impacts of Future Technology*
Governments were now facing profound questions about the nature and future direction of their economies. If more and more people were being made permanently unemployed, how could they afford to buy goods and services needed to stimulate growth? Where would tax revenues come from? Confronted by increasingly angry and desperate voters, now protesting on scales dwarfing Occupy Wall Street, many leaders between 2025 and 2050 began formulating a welfare system to handle these extraordinary circumstances. This had gone by several names in the past – such as basic income, basic income guarantee, universal basic income, universal demogrant and citizen's income – but was most commonly referred to as the unconditional basic income (UBI).
The concept of UBI was not new. A minimum income for the poor had been discussed as far back as the early 16th century; unconditional grants were proposed in the 18th century; the two were combined for the first time in the 19th century to form the idea of unconditional basic income.* This theory received further attention during the 20th century. The economist Milton Friedman in 1962 advocated a guaranteed income via a "negative income tax". Martin Luther King Jr. in his final book, Where Do We Go from Here: Chaos or Community?, wrote: "I am now convinced that the simplest approach will prove to be the most effective – the solution to poverty is to abolish it directly by a now widely discussed measure: guaranteed income." US President Richard Nixon supported the idea and tried (unsuccessfully) to pass a version of Friedman's plan. His opponent in the 1972 election, George McGovern, also suggested a guaranteed annual income.
Traditional welfare payments, such as housing benefit and jobseeker's allowance, were heavily means-tested. In general, they provided only the bare minimum for survival and well-being of a household. By contrast, UBI would be more generous. Unconditional and automatic, it could be paid to each and every individual, regardless of other income sources and with no requirement for a person to work or even be looking for work. The amount paid would make a citizen "economically active", rather than idle, in turn stimulating growth. Some would use the UBI to return to education and improve their skills. Those with jobs would continue to earn more than those who did not work.
In most countries, UBI would be funded, in part, by increased taxation on the very rich.* At first glance, this appeared to be a radical left-wing concept involving massive wealth redistribution. For this reason, opposition was initially strong, particularly in the US. As time went by, however, the arguments in favour began to make sense to both sides of the political spectrum. For example, UBI could also be funded by cutting dozens of entitlement programs and replacing them with a single unified solution, reducing the size of government and giving citizens more freedom over their personal finances. Demographics in the US were also shifting in ways that made it very difficult for Republicans to maintain their traditional viewpoints.* With pressure mounting from mass social protests – and few other plausible alternatives to stimulate consumer spending – bipartisan support was gradually achieved. Nevertheless, its adoption in the United States (as with universal healthcare) occurred later than most other countries. Switzerland, for example, conducted a popular referendum on UBI as early as 2016,* with a proposed amount of $2,800/month. Meanwhile, a small-scale pilot project in Namibia during 2004 cut poverty from 76% to 37%, boosted education and health, increased non-subsidised incomes, and cut crime.* An experiment involving 6,000 people in India had similar success.*
In the short to medium term, rising unemployment was highly disruptive and triggered an unprecedented crisis.* For the US, in particular, it led to some of the biggest economic reforms in modern history.* In the longer term, however, it was arguably a positive development for humanity.* UBI acted as a temporary bridge or stepping stone to a post-scarcity world, with even greater advances in robotics and automation occurring in the late 21st century and beyond.**
All television is becoming Internet-based
During this period, cable TV and other traditional modes of television are beginning to disappear in favour of Internet-based streaming. The inflexibility of scheduled programmes had made them increasingly unattractive, with users shifting instead towards on-demand services providing greater choice, convenience and value for money. By the late 2010s, more people were streaming video online each day than watching scheduled linear TV.* This trend continued into the following two decades,* resulting in a huge loss of subscribers for older traditional media companies,* which were forced to either evolve or die.
In Britain, the traditional TV licence fee (which stood at £145 annually, as of 2016) is now being seriously called into question. A Royal Charter had guaranteed licence fee funding until 2026. However, briefing papers for the government proposed alternative methods of financing the BBC in the future and moves are now underway to implement a new system.* Options being considered include the use of advertising revenue, a new broadcasting levy and the switch to a subscription-based system.* Longer term, this reduces the power and influence of the BBC.
The visual quality of TV sets, tablets and other devices has markedly improved compared to previous generations, with 4K and even higher resolutions now cheap and ubiquitous. Connection speeds are improving in parallel, with 5G and its successor generating exponential growth in web data. By the early 2030s, it is fairly common for users in developed nations to have a terabit web connection.
Furthermore, access and coverage have been made easier via expanded rural and remote networks, greater use of public Wi-Fi, high-altitude balloons (such as Google's Project Loon), etc. As more of the world comes online, the flow of knowledge it brings contributes to further public uprisings like the Arab Spring. Increased availability of the web leads to increased awareness of political issues, corruption and injustice. Citizen journalists in these volatile regions can record and disseminate their experiences on video – using mobile apps to capture footage of war crimes and human rights abuses, for example.*
Small modular nuclear reactors gain widespread adoption
Small modular reactors (SMRs) are a new class of smaller, cheaper, safer and more adaptable nuclear power plants that gain widespread adoption from the mid-2020s to the mid-2030s.* They are defined by the International Atomic Energy Agency as generating an electric output of less than 300 MW, reaching as low as 10 MW for some of the smallest versions. This is compared to larger, conventional reactors, which typically produce 1 to 2 GW.
Electricity was first generated from nuclear energy in 1951, during tests of an experimental reactor in the high desert of Idaho. The original output was estimated at 45 kW. In subsequent decades, reactors grew much larger, with outputs reaching the gigawatt scale. Later, more than half a century after the first commercial use of nuclear energy, reactor designs with lower electrical outputs were starting to be developed again.
In the early decades of the 21st century, the need for small modular reactors was arising due to several different factors. Firstly, they could be built at a much lower cost than traditional reactors, making them less risky from an investment viewpoint. They were especially attractive to developing nations (which lacked the ability to spend tens of billions of dollars on infrastructure), to remote communities without long distance transmission lines, and for areas with limited water and/or space.
SMRs could be designed with flexibility in mind. Unlike the larger power plants (most of which used "light water" designs based on uranium fuel and ordinary water for cooling), they were being developed in a broad range of shapes and sizes, with various fuels and cooling systems. Some could even use existing legacy radioactive waste as an energy source. Among the most promising concepts were those able to be assembled in factories and delivered in sealed containers – meaning the plant would never require decommissioning, but could simply have its power source replaced like a battery, further reducing costs. In a similar vein, some of the other proposed concepts generated far less waste than conventional reactors. SMRs would also allow increments of capacity to be gradually added as power needs increased over time.
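The "increments of capacity" idea above can be sketched in a few lines. The 60 MW module size used here is a hypothetical figure, chosen only because it sits within the IAEA's sub-300 MW definition; real SMR designs vary widely.

```python
# Sketch of incremental capacity planning with identical small modules,
# rather than a single 1-2 GW conventional plant built up front.
# Module size (60 MW) is an assumed, illustrative value.
import math

def modules_needed(demand_mw: float, module_mw: float = 60) -> int:
    """Smallest number of identical modules covering the demand."""
    return math.ceil(demand_mw / module_mw)

for demand in (50, 150, 400):
    n = modules_needed(demand)
    print(f"{demand} MW demand -> {n} module(s), {n * 60} MW installed")
```

Capacity is added only as demand grows, which is the core of the investment argument made above.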
There were yet more advantages. The smaller size and safety features of the SMRs would mean both a reduced environmental impact and little or no damage from an accident – easing public concerns – while ensuring a faster and simpler planning process. Being much smaller and easier to construct, the time required from ground breaking to commercial operation could be greatly reduced, compared to larger power plants that often required decades to plan, build and test. Additionally, the threat of nuclear weapons proliferation was eliminated by the design, materials and safety aspects of SMRs.
This variety and flexibility, alongside the demand for lower carbon energy, was leading to a renaissance in nuclear power generation. By the mid-2010s, around 50 experimental prototype SMRs were in development (excluding nuclear submarines and ships). A small number achieved commercial viability in the early 2020s** and these paved the way to greater adoption through the following decade.* By 2035, the SMR industry is generating several tens of gigawatts of capacity and is valued at nearly half a trillion dollars worldwide.*
Manned missions to the Moon
During this period, at least two space agencies conduct manned exploration of the Moon. This occurs in parallel with private commercial ventures including lunar tourists. The huge length of time since Apollo had led to a perception among the general public that space travel was making little or no progress. In reality, a number of developments were underway.
Perhaps most notable was the rapid emergence of China. In 2003, its first astronaut had been placed into orbit. This was followed by two additional manned missions in 2005 and 2008. Within a decade, China was building its first space station,* while launching probes to the lunar surface including a sample return mission.* The country had even greater ambitions, however, putting its first astronauts on the Moon by the late 2020s.* This would take place in the southern polar region, with abundant solar energy, relatively stable temperatures and the presence of water-ice.*
Russia was making strides too. After years of stagnation, its space program saw a resurgence in the 2010s with a major increase in funding.* A new spaceport is operational by 2025, while rockets have been developed based on cheaper acetylene and ammonia fuel,* capable of carrying huge payloads of up to 180 tons. By the early 2030s, this combination of better infrastructure and technology, increased funding and government commitment would lead to a manned Russian presence on the Moon.*
China on the Moon, 2025-2030.
NASA had been developing a new rocket – the Space Launch System (SLS)* – along with a manned spacecraft placed at Earth-Moon Lagrange Point 2.* The agency's longer term goals included sending astronauts to Mars, rather than the Moon's surface.* However, private commercial ventures, such as inflatable modules designed by Bigelow Aerospace, were also getting underway and involved some testing and collaboration with NASA.** Additionally, the SLS had performed lunar orbits during its testing,* along with crewed asteroid missions.*
The European Space Agency (ESA) was less vocal than other agencies when it came to manned lunar missions. Announced in 2001, its Aurora Programme included the goal of sending astronauts to the Moon and Mars during the late 2020s and early 2030s. However, these plans were quietly dropped after being challenged by ESA's main financial contributors (France, Germany and Italy). Lacking direction and leadership, the programme became focused on robotic-only exploration of Mars.*
Other nations had shown an interest in manned lunar exploration and even permanent bases – including Japan, India and Iran. However, despite making significant progress, a lack of technical experience and the sheer financial commitment needed would postpone their goals until somewhat further into the future.
The Advanced Technology Large-Aperture Space Telescope (ATLAST) conducts its life-searching mission
The Advanced Technology Large-Aperture Space Telescope (ATLAST) is a major new space observatory launched by NASA. It has substantially higher resolution than Hubble and the James Webb Space Telescope (JWST), with a primary mirror that dwarfs both. Its angular resolution is 10 times better than JWST, with a sensitivity limit up to 2,000 times better than Hubble.
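The roughly tenfold resolution gain quoted above follows from the standard diffraction limit, θ ≈ 1.22 λ/D. The sketch below assumes a 16 m mirror for ATLAST (the largest of the proposed configurations) observing in visible light at ~500 nm, compared with JWST's 6.5 m mirror at an infrared wavelength of ~2 µm; the wavelengths are illustrative choices, not mission specifications.

```python
# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D.
# Mirror sizes: JWST 6.5 m (actual); ATLAST 16 m (one proposed design).
# Observing wavelengths are illustrative assumptions.
import math

def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Angular resolution in arcseconds for a circular aperture."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

jwst = diffraction_limit_arcsec(2.0e-6, 6.5)    # infrared, 6.5 m mirror
atlast = diffraction_limit_arcsec(0.5e-6, 16.0) # visible, 16 m mirror

print(f"JWST  : {jwst:.4f} arcsec")
print(f"ATLAST: {atlast:.4f} arcsec")
print(f"Improvement: ~{jwst / atlast:.1f}x")
```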
ATLAST is a flagship mission of the 2025-2035 period, designed to address one of the most compelling questions of our time: is there life elsewhere in the Galaxy? It attempts to accomplish this by detecting "biosignatures" (such as molecular oxygen, ozone, water and methane) in the spectra of terrestrial exoplanets.*
Operating in the ultraviolet, optical and infrared wavelengths, its mirror is so powerful that it can distinguish the atmosphere and surface of Earth-sized exoplanets, at distances up to 150 light years – including their climate and rotation rate.* ATLAST enables astronomers to glean information on the nature of dominant land features, along with changes in cloud cover. It even has the potential to detect seasonal variations in surface vegetation.
In addition to searching for life, ATLAST has the performance required to reveal the underlying physics driving star formation and to trace complex interactions between dark matter, galaxies and the intergalactic medium.
The observatory is placed at Sun-Earth Lagrange point L2. Servicing and maintenance are performed using a robotic ferry, with occasional help from astronaut crews flying in the Orion spacecraft (which allows NASA to gain experience for manned Solar System missions). Like the Hubble Space Telescope, ATLAST has a 20-year lifespan. By the 2050s, it is being succeeded by telescopes of truly prodigious magnitude, offering detailed close-up views of distant exoplanets.*
Mouse revival from cryopreservation
Cryopreservation – a process where cells or whole tissues are preserved by cooling to sub-zero temperatures – witnesses major advances during this period. By far the most notable achievement is a mouse being revived from storage at −196°C.
In the past, among the most serious challenges to overcome had been damage from crystallisation as a result of the freezing process. During the first decade of the 21st century, this problem was comprehensively solved by the development of cryoprotectants offering complete vitrification. In other words, the body being preserved was turned into a glass, rather than crystalline solid.
A number of issues remained, however – such as the toxicity of these cryoprotectants, as well as the fracturing that occurred due to simple thermal stress. In subsequent decades, research saw a dramatic acceleration and resulted in progressively more successful techniques, culminating in the mouse revival.*
Although a human revival is still many years away (and fraught with ethical, legal and social hurdles), such a feat now appears to be a realistic prospect. Once considered the stuff of science fiction, cryopreservation becomes an increasingly regular feature in mainstream scientific literature. Many new startup companies are formed around this time, promising to "resurrect" people at some future date.
Photo courtesy of Alcor Life Extension Foundation.
The threat of bioterrorism is peaking
Biotechnology is now sufficiently advanced, widespread and inexpensive that a small group of people – or even a single person – can threaten the survival of humanity. Desktop fabrication labs, genetic databases and AI software are becoming accessible to the public. These enable the rapid research and synthesis of DNA, for those with appropriate technical knowledge.
Criminal organisations have already begun to exploit this – providing access to drugs and other substances without prescriptions, for example (like offshore Internet pharmacies of earlier decades) – and now terrorists are making use of it too.
In the past, government agencies were able to combat bioterrorism by restricting access to the pathogens themselves. This was achieved by regulating the laboratory use of potentially deadly agents, such as the Ebola virus. However, the advent of DNA synthesis technology means that simply restricting access to the actual pathogen no longer provides the security it once did. Since the gene sequence is a "blueprint", i.e. a form of coded information, once an organism has been sequenced it can be synthesised without using culture samples or stock DNA.
As biotechnology has continued to advance, it has become cheaper, more accessible and far easier to utilise. Like the personal computer revolution of the early 1980s, biotechnology is diffusing into mainstream society. At the same time, the ongoing need for medical breakthroughs has necessitated a gradual easing of database regulations. Furthermore, the DNA sequences for certain pathogens – such as anthrax, botulism and smallpox – have already been available on the Internet for decades.
As a result, it has become alarmingly easy to produce a new virus (possibly an even deadlier version of an existing one) using a relatively low level of knowledge and equipment. Another, more sinister, consequence is the ability to target specific races or genetic groups of people.
"home made" bioweapon is unleashed around this time, with
devastating results. There are significant casualties worldwide.* The threat
begins to subside in the 2030s, as new defensive technologies – such
as nanobots – become available
to the general population. These devices, injected into the bloodstream,
can be programmed to identify and eliminate harmful pathogens.
Hypersonic missiles are in military use
Traditionally, cruise missiles have reached speeds of around 500-600 mph (800-965 km/h). However, a new generation of air-breathing supersonic combustion ramjet (scramjet) engines is now emerging on the battlefield after many years of testing and development. These are capable of exceeding Mach 5, or about 3,840 mph (6,150 km/h), making them hypersonic.*
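The Mach 5 figure above can be checked with a quick conversion, assuming a sea-level speed of sound of roughly 768 mph (1,236 km/h); the exact value varies with altitude and temperature.

```python
# Quick sanity check of the Mach 5 figure, assuming sea-level
# conditions (speed of sound ~768 mph / ~1,236 km/h).
SPEED_OF_SOUND_MPH = 768
SPEED_OF_SOUND_KMH = 1236

mach = 5
mph = mach * SPEED_OF_SOUND_MPH
kmh = mach * SPEED_OF_SOUND_KMH
print(f"Mach {mach} ~ {mph:,} mph ({kmh:,} km/h)")
```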
As well as enhancing the responsiveness of a warfighter, the survivability of these missiles as they fly over enemy territory is greatly improved, since they are difficult (if not impossible) to hit at such a high speed.
Now that military use of scramjets has been perfected, commercial use will soon follow. In the 2030s, the first hypersonic airliners begin to appear, capable of travelling around the globe in under four hours.**
Some of Britain's most well-known animal species are going extinct
Due to a combination of habitat loss, agricultural intensification, road accidents, pesticides, pollution and other human interference, some of Britain's most iconic and well-known animals are disappearing. This includes hedgehogs, red squirrels, cuckoos, brown hares, Scottish wildcats, natterjack toads, red-necked phalaropes, woodland grouse, and turtle doves.*** Many butterfly species have also declined drastically.*
Rhinos are going extinct in the wild
Rhinos are among the largest remaining megafauna, a class of giant animals that were common in the last ice age.* Of the five main species of rhino, the white rhinoceros is the heaviest, with adults weighing up to 3,500 kg (7,700 lb) and reaching a head-to-body length of 4.6 m (15 ft).
Because of their size, rhinos have few natural predators other than humans. Alongside the mammoth, woolly rhinos became numerous during the Last Glacial Maximum (20,000-25,000 years ago), but were eventually hunted to extinction by early man. In modern times, the remaining species have declined even more rapidly. The black rhino, for example, fell in numbers from 70,000 in the late 1960s to only 3,600 in 2004.* A subspecies – the West African black rhino – was declared extinct in 2011, while the Javan rhino died out in Vietnam the same year.
The early 21st century witnessed an alarming upward trend in poaching. By 2012, more rhinos were being killed in South Africa during a single week than were killed in a whole year a decade previously. Fetching a street value of £40,000 a kilo, rhino horn was becoming even more valuable than gold, due to the misguided perception that it cured cancer. It was also popular in some cultures as a form of jewellery. Organised crime had become involved, with gangs now using hi-tech equipment for industrial-scale killing.
Despite conservation efforts, the situation continued to worsen. By the late 2020s, the last remaining rhinos are disappearing from the wild.* Only a handful remain in captivity. It is doubtful that any viable breeding population can be restored anytime soon, if ever.
A billion human genomes have been sequenced
DNA testing is now so cheap, fast and routinely accessible that over a billion human genomes have been sequenced around the world. Back in 1990, when the first attempt was made to identify and map all 3.3 billion base pairs in a person – an effort known as the Human Genome Project – the cost of doing so ran into billions of dollars. The time required was over a decade and involved many scientists from all over the globe in what became the largest ever collaboration on a biological project.
In the years following the completion of the Human Genome Project, tremendous improvements were made in sequencing times and costs. These new techniques allowed many more individuals to have their DNA read. The cost per genome fell by orders of magnitude – from $100 million in 2001, to under a million dollars by 2008, less than $10,000 by 2011 and just $1,000 by 2016. This was a trend even faster than Moore's Law.*
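The "faster than Moore's Law" claim follows directly from the figures above: a 100,000-fold cost drop between 2001 and 2016, versus the roughly 181-fold improvement Moore's Law would predict over the same 15 years (assuming a cost-halving every two years).

```python
# Comparing the genome cost decline ($100M in 2001 -> $1,000 in 2016)
# against a Moore's Law baseline (halving every ~2 years).
import math

years = 2016 - 2001                      # 15 years
fold_drop = 100_000_000 / 1_000          # 100,000x decline
moore_fold = 2 ** (years / 2)            # ~181x over the same period
halving_time = years / math.log2(fold_drop)  # implied cost-halving time

print(f"Genome cost drop: {fold_drop:,.0f}x; Moore's Law baseline: ~{moore_fold:.0f}x")
print(f"Implied cost-halving time: ~{halving_time:.2f} years")
```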
DNA sequencing began to enter the mainstream in the second half of the 2010s.* In the United Kingdom, for example, the National Health Service (NHS) offered its first medical diagnoses via genetic testing in 2015 and three years later had completed the 100,000 Genomes Project. Similar initiatives were attempted in many other regions, as the benefits of large-scale health databases became clear. The increasing portability and availability of consumer testing kits, such as those offered by 23andMe, led to a further acceleration of this trend. Initially restricted to partial scans, it was now technically and financially viable to conduct whole genome sequencing to provide a full and complete analysis of an individual's DNA. As well as future health risks and personalised treatments, information could also be gleaned about their ancestry and family history.
By 2025, a billion human genomes have been sequenced – about one-eighth of the world's population.* The quantity of genomic data is now reaching into the exabyte scale,* larger than the video file content of the entire YouTube website. This has created huge demand for improved storage capacities and led to a surge in cloud computing networks. The sheer volume and complexity of Big Data has made AI programs such as IBM's Watson far more commonly used for medical and research purposes. Among the latest discoveries are thousands of genes for intelligence,* providing new insights and targets for the treatment of impaired cognitive abilities. With around 75% of a person's IQ attributed to genetic differences,* these genes will play a role in creating super-intelligent humans in the more distant future.
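The exabyte-scale claim above can be checked with a back-of-envelope estimate. The figure of ~100 GB of raw sequencing data per genome is an assumed ballpark (actual volumes vary widely with coverage depth and compression).

```python
# Back-of-envelope estimate of global genomic data volume.
# Assumption: ~100 GB of raw sequencing data per genome.
GENOMES = 1_000_000_000            # one billion genomes (from the text)
BYTES_PER_GENOME = 100 * 10**9     # ~100 GB each (assumed)

total_bytes = GENOMES * BYTES_PER_GENOME
exabytes = total_bytes / 10**18
print(f"~{exabytes:,.0f} EB of raw sequencing data")
```

Even with heavy compression (say, ~1 GB per genome), the total would still reach the exabyte scale.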
While great progress is now being made in genetics, there are privacy and security implications of so much health information being generated and stored online. Various hacking scandals involving theft and selling of personal data have made the news headlines recently. Insurance firms and others with vested interests, particularly in the U.S., are keen to exploit the treasure trove of medical information now available and have stepped up their lobbying efforts. There is growing concern about the injustice of genetic prejudice and discrimination.
Complete human brain simulations are becoming possible
The first complete simulation of a single neuron was perfected in 2005. This was followed by a neocortical column with 10,000 neurons in 2008; then a cortical mesocircuit with 1,000,000 neurons in 2011. A team of researchers from the Allen Institute in Seattle achieved a major milestone in 2014 by simulating an entire mouse brain, containing 75 million neurons.
By 2025, the exponential growth of data has made it possible to form accurate models of every part of the human brain and its 100 billion neurons.** Between 2000 and 2025, there was a millionfold increase in computational power, together with vastly improved scanning resolution and bandwidth. Much like the Human Genome Project, there were many in the scientific community who doubted that the brain could be mapped so quickly. Once again, they failed to account for the exponential (rather than linear) growth of information technology.
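The millionfold figure quoted above corresponds to a doubling of computational power roughly every 15 months, as a short calculation shows:

```python
# A millionfold increase between 2000 and 2025 implies a doubling
# time of roughly 15 months.
import math

years = 25
doublings = math.log2(1_000_000)      # ~19.9 doublings
doubling_time = years / doublings     # ~1.25 years per doubling

print(f"~{doublings:.1f} doublings -> one every ~{doubling_time * 12:.0f} months")
```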
Although it's now possible to scan and map a complete human brain down to the neuron level, analysing the enormous volumes of data it contains and using that to fully understand its workings will take much longer. Nonetheless, this represents a major milestone in neurology and leads to increased funding towards various brain-related ailments.
Credit: Sergey Nivens
3D-printed human organs
Additive manufacturing, also known as 3D printing, was first developed in the mid-1980s. Initially used for industrial applications such as rapid prototyping, it fell dramatically in cost during the 2010s and became available to a much wider audience.*
Arguably the most transformative breakthroughs were occurring in health and medicine. Customised, 3D-printed body parts were saving people's lives and included artificial jaw bones,* bioresorbable splints for breathing* and replacement skull parts,* among many other uses. Non-critical applications included dental implants* and exoskeletons to assist with mobility and joint movement.*
Even greater advances were taking place, however. 3D printing was no longer limited to inorganic materials like polymers or metals. It was being adapted to construct living, biological systems. Layer after layer of cells, dispensed from printer heads, could be placed exactly where needed with precision down to micrometre scales. Initially demonstrated for simple components like blood vessels and tissues,** more sophisticated versions later emerged in combination with scaffolds to hold larger structures in place. Eventually, the first complete organs were developed with sufficient nutrients, oxygen and growth factors to survive as fully-functioning replacements in mouse models.
By 2025 – after testing on animals – customised 3D-printing of major human organs is becoming feasible for the first time.** Although yet to be fully perfected (as certain types of organs remain too complex), this is nevertheless a major boost for life extension efforts. In the coming decades, more and more of the 78 organs in the human body will become printable.*
Much of China is now highly urbanised and densified. Its growing economy has led to the construction of tens of thousands of new skyscrapers all over the country. There are now over 200 cities with more than a million inhabitants, compared with just 35 in the whole of Europe circa 2010.* Even remote and isolated regions have seen development on an unprecedented scale. Widespread infrastructure such as maglev trains, airports, bridges and tunnels is forming an extensive network to all corners of the nation, leaving few areas untouched. China is well on its way to becoming a developed country.

Some of the largest metropolitan areas – such as Hong Kong and Shenzhen – have begun to overlap and form mega-cities overtaking Tokyo in population and land area. Many of the world's tallest buildings can now be found in China, including kilometre-high "supertalls".
This has had a considerable impact on the price of steel and other materials, leading to cutbacks of many projects in Europe, America and elsewhere. The rise of neighbouring India is adding to this, and the West now has reduced influence on setting the price of metals. Meanwhile, enormous profits are being made by construction and mining firms, leading to many high-profile takeovers and acquisitions. At the same time, record accident numbers – a result of so much construction activity – are leading to tighter industrial regulations and improved safety standards.

As China grows, its energy requirements are soaring. The country has prepared for this by strengthening relations with Central Asian countries and importing more oil and gas from them, especially Turkmenistan, which has made significant new discoveries. China's entry into Central Asia was also partly motivated by the need to reduce its dependency on (a) the Middle East, and (b) the Malacca Strait for shipping oil from the Persian Gulf and Africa; this stretch of water had become increasingly vulnerable to pirate attacks and was the subject of ongoing political tensions regarding its control.
As well as strengthening its oil imports, China has made gains through efficiency and conservation programmes, along with increased use of nuclear power. By 2025, its nuclear power generation is nearly 150 billion kilowatt-hours (kWh) annually, surpassing that of both Canada and Russia.* In the near future, this will increase still further, as 4th generation plants become available. Falling prices have also greatly boosted solar and wind.
With a total population fast approaching 8 billion, world food demand has continued to climb. At the same time, however, the increasingly dire effects of climate change, as well as other environmental factors, are now having a serious impact. Droughts, desertification and the growing unpredictability of rainfall are reducing crop yields in many countries, while shrinking fossil fuel reserves are making large-scale commercial farming ever more costly. Decades of heavy pesticide use and excess irrigation have also played a role. The United States, for example, has been losing almost 3 tons of topsoil per acre, per year. This is between 10 and 40 times the rate at which it can be naturally replenished – a trend that, if allowed to continue, would mean all topsoil disappearing by 2070.* As this predicament worsens and food prices soar, the world is now approaching a genuine, major crisis.*
Amid the deepening sense of urgency and panic, a number of potential solutions have emerged. One such innovation has been the appearance of vertical farms. These condense the enormous resources and land area required for traditional farming into a single vertical structure, with crops being stacked on top of each other like the floors of a building. Singapore opened the world's first commercial vertical farm in 2012.* By the mid-2020s, they have become widespread, with most major urban areas using them in one form or another.*
Vertical farms offer a number of advantages. An urban site of just 1.32 hectares, for example, can produce the same food quantity as 420 hectares (1,052 acres) of conventional farming, feeding tens of thousands of people. Roughly 150 of these buildings, each 30 stories tall, could potentially give the entire population of New York City a sustainable supply of food.* Genetically modified crops have increased in use recently* and these are particularly well-suited to the enclosed, tightly-controlled environments within a vertical farm. Another benefit is that food can then be sold in the same place as it is grown. Farming locally in urban centres greatly reduces the energy costs associated with transporting and storing food, while giving city dwellers access to fresher and more organic produce.
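The land-use claim above can be checked with simple arithmetic. The New York population figure below is an illustrative assumption, not stated in the text:

```python
# Checking the quoted land-use figures for vertical farming.
site_ha = 1.32           # urban footprint of one vertical farm (hectares)
conventional_ha = 420    # equivalent conventional farmland (hectares)
ratio = conventional_ha / site_ha       # land-efficiency multiple

nyc_population = 8_400_000              # assumed approximate NYC population
towers = 150                            # 30-storey buildings, per the text
people_per_tower = nyc_population / towers

print(f"~{ratio:.0f}x land efficiency, ~{people_per_tower:,.0f} people per tower")
```

The result of roughly 56,000 people per tower is consistent with the "tens of thousands" claim above.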
Another major advantage of vertical farming is its sustainability. Most structures are primarily powered on site, using a combination of solar panels and wind turbines. Glass panels coated in titanium oxide cover the buildings, protecting the plants inside from any outside pollution or contaminants. These are also designed in accordance with the floor plan to maximise natural light. Any other necessary light can be provided artificially. The crops themselves are usually grown through hydroponics and aeroponics, substantially reducing the amount of space, soil, water and fertiliser required.
Computers and automation are relied upon to intelligently manage and control the distribution of these resources. Programmed systems on each level control water sprayers, lights and room temperature. These are adjusted according to the species of plant and are used to simulate weather variations, seasons and day/night cycles. Some of the more advanced towers even use robots to tend to crops.* Excess water lost through evapotranspiration is recaptured via condensers in the ceiling of each level, while any runoff is funnelled into nearby tanks. This water is then reused, creating a self-contained irrigation loop. Any water still needed for the system can be filtered out of the city's sewage system.
Vertical farms also offer environmental benefits. The tightly controlled system contained in each structure conserves and recycles not just water – but also soil and fertilisers such as phosphorus, making the total ecological footprint orders of magnitude smaller than older methods of agriculture. On top of that, the reduced reliance on arable land helps to discourage deforestation and habitat destruction. Vertical farms can also be used to generate electricity, with any inedible organic material transformed into biofuel, via methane digesters.
Credit: Jacobs, Gordon Graff, Spa Atelier
Solid waste is reaching crisis levels
Solid waste has been accumulating in urban areas and landfills for many decades. Poor funding for waste disposal and lack of adequate recycling measures, together with population growth and associated consumption have ensured a never-ending rise in trash levels. By the mid-2020s, global output of solid waste has almost doubled to nearly 2.5 billion tons annually, compared to 1.4 billion in 2012.** The cost of dealing with this quantity of garbage has nearly doubled as well, rising to $375 billion annually.
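For context, the quoted rise from 1.4 to nearly 2.5 billion tons implies a steady compound growth rate. Taking 2025 as the endpoint of "the mid-2020s" (an assumption for this sketch):

```python
# Implied compound annual growth rate of global solid waste output.
start, end = 1.4e9, 2.5e9   # tonnes per year, 2012 vs mid-2020s
years = 2025 - 2012

growth_rate = (end / start) ** (1 / years) - 1
print(f"~{growth_rate * 100:.1f}% per year")
```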
Developing nations, lacking the money and infrastructure to properly dispose of their trash, face the greatest crisis, with solid waste increasing five-fold in some regions. Public health is being seriously affected, since groundwater is becoming more and more polluted as a result. E-waste is proving to be even more damaging. In India, for example, discarded cellphones have increased eighteen-fold.* Rapid advances in technology, ever-more frequent upgrades to electronic products, and the aspiration for Western lifestyles have only exacerbated this situation.
Developed nations are better able to handle the problem, but since only 30% of their waste is recycled, it continues to build rapidly. Plastics are a particular problem, especially in oceans and rivers, since they require centuries to fully degrade.* As well as causing direct environmental damage, this waste is releasing large amounts of the greenhouse gas methane, which contributes to global warming.* Public activism, though increasing at this time, has little effect in halting the overall trend.
Kivalina has been inundated
Kivalina was a small Alaskan village located on the southern tip of a 7.5 mi (12 km) long barrier island. Home to around 400 indigenous Inuit, its people survived over countless generations by hunting and fishing. During the late 20th and early 21st centuries, a dramatic retreat of Arctic sea ice left the village extremely vulnerable to coastal erosion and storms. The US Army built a defensive wall, but this was only a temporary measure and failed to halt the advancing sea. By 2025, Kivalina has been completely abandoned, its small collection of buildings disappearing beneath the waves. The Alaska region has been warming at twice the rate of the USA as a whole, affecting many other Inuit islands. At the same time, opportunities are emerging to exploit untapped oil reserves made available by the melting ice.*
Completion of the East Anglia Zone
The United Kingdom, one of the best locations for wind power in the world, greatly expanded its use of this energy source in the early 21st century – offshore wind in particular. Wind speeds are higher offshore than on land, allowing offshore turbines to supply proportionally more electricity, while NIMBY opposition to construction was usually much weaker. The United Kingdom became the world leader in offshore wind power when it overtook Denmark in 2008. It also developed the largest offshore wind farm in the world, the 175-turbine London Array.
As costs fell and technology improved, various new projects got underway. By 2014, the United Kingdom had installed 3,700 MW – by far the world's largest capacity – more than Denmark (1,271 MW), Belgium (571 MW), Germany (520 MW), the Netherlands (247 MW) and Sweden (212 MW) combined. Growing at between 25 and 35 per cent annually, the United Kingdom's offshore wind capacity was on track to reach 18,000 MW by 2020,* enough to supply one-fifth of the country's electricity.
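The projection holds up arithmetically: at roughly 30 per cent annual growth, six years of compounding takes the 2014 figure to about 18,000 MW:

```python
# Does 25-35% annual growth take the UK's 3,700 MW (2014) to ~18,000 MW by 2020?
capacity_2014 = 3700  # MW installed

for rate in (0.25, 0.30, 0.35):
    capacity_2020 = capacity_2014 * (1 + rate) ** 6  # six years of compounding
    print(f"{rate:.0%}: {capacity_2020:,.0f} MW")
```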
The largest of these projects, known as "Dogger Bank", was built off the northeast coast of England in the North Sea. This gigantic installation featured 600 turbines covering an area the size of Yorkshire* and generating 7,200 MW from the early 2020s. Eight other major sites were being planned around the United Kingdom* with potential for up to 31,000 MW.
Among the biggest of these other sites was the East Anglia Zone. This was divided into six separate areas, each with 1,200 MW capacity, for a combined total of 7,200 MW – the same as Dogger Bank. Each turbine would have a rotor diameter of 200 m and a tip height of up to 245 m. The first stage received planning permission in 2014 and was operational by 2019,* providing a clean, renewable energy source for 820,000 homes. The remaining five stages were approved between 2016 and 2020,* followed by a similar schedule for construction. When fully completed in 2025, the whole East Anglia Zone would supply a total of four million homes.
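The zone's headline figures can be checked directly:

```python
# Quick arithmetic on the East Anglia Zone figures quoted above.
stages = 6
mw_per_stage = 1200
total_mw = stages * mw_per_stage        # 7,200 MW, matching Dogger Bank

homes_per_mw = 820_000 / mw_per_stage   # first stage: ~683 homes per MW
print(total_mw, round(homes_per_mw))
```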
With ongoing concerns over energy and climate change, offshore wind capacity in the United Kingdom continued to grow rapidly in subsequent decades. Eventually it became integrated into a continent-wide "supergrid" stretching across Europe.* This was followed by "peak wind" in the late 21st century* as the resources utilised offshore reached a theoretical maximum of 2,200 GW* – though alternative energies such as fusion had arrived by then.*
The UK phases out coal power
As the world's first industrialised country, the United Kingdom had a long history of coal use. Even before the Industrial Revolution, there was some evidence of coal mining in ancient and medieval times. Stone and Bronze Age flint axes, for example, were discovered embedded in coal, showing that it was mined in Britain before the Roman invasion.
The surge of coal mining in the 18th and 19th centuries was driven by demand for steam engines, the rapid expansion of the rail network and other industries throughout the Victorian period. Coal was widely used for domestic heating, due to its low cost and widespread availability. The manufacture of coke also provided coal gas, which could be used for heating and lighting. Coal production peaked in 1913 at 287 million tonnes. Coal remained the main source of energy produced in the UK until the late 1960s, with post-war output peaking at 228 million tonnes in 1952.
From the 1970s onwards, the UK became increasingly reliant on imports, which coincided with initiatives for cleaner energy generation. By the 2010s, only a dozen or so coal-fired power stations remained in the UK. One third of these were closed by 2016 to meet EU air quality legislation. As part of the ongoing drive towards cleaner energy, the UK Energy Secretary proposed that coal power should be phased out within 10 years. The last remaining coal power plants in the UK are shut down by the mid-2020s. Most of the resulting shortfall in electricity production is made up by gas and nuclear, a move criticised by environmentalists.*
Photo credit: Lynne Kirton [CC BY-SA 2.0], via Wikimedia Commons
Graph by Plazak (Own work) [CC BY-SA 3.0], via Wikimedia Commons
The Giant Magellan Telescope is fully operational
The Giant Magellan Telescope (GMT) is a major new astronomical observatory completed in 2025.* Costing around $1 billion, this international project is led by the US, in partnership with Australia, Brazil, and Korea, with Chile as the host country. The telescope is built on a mountain top in the southern Atacama Desert of Chile, at an altitude of 2,516 m (8,255 ft). The site was chosen for its outstanding night sky quality and clear weather throughout most of the year, along with minimal atmospheric pollution and a sparse surrounding population, giving it very low light pollution.
The GMT consists of seven 8.4 m (27.6 ft) diameter primary segments, with a combined resolving power equivalent to a 24.5 m (80.4 ft) mirror. It has a total light-gathering area of 368 sq m (3,960 sq ft), which is 15 times that of the older, neighbouring Magellan telescopes. It is 10 times more powerful than the Hubble Space Telescope.
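As a geometric check, seven filled 8.4 m circles would give slightly more area than the quoted 368 sq m; the difference arises because the off-axis segments are not perfect filled apertures:

```python
import math

# Naive combined collecting area of seven 8.4 m circular mirrors,
# compared with the quoted effective figure of 368 square metres.
n_mirrors = 7
diameter = 8.4                        # metres
naive_area = n_mirrors * math.pi * (diameter / 2) ** 2

print(f"{naive_area:.0f} m² naive vs 368 m² quoted")
```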
The GMT operates at near infrared and visible wavelengths of the spectrum. It features adaptive optics, which helps to correct image blur caused by the Earth's atmospheric interference. The first of the seven mirrors was cast in 2005, with polishing completed to a surface accuracy of 19 nanometres RMS. By 2015, four of the mirrors had been cast and the mountain top was being prepared for construction.
The GMT achieves first light in 2021, with full operational capability in 2025.* It is just the latest in a whole series of major telescopes being constructed around this time, heralding a new era of higher resolution astronomy. Others include the European Extremely Large Telescope (2022), Thirty Metre Telescope (2024) and Square Kilometre Array (2024), in addition to numerous space-based observatories. This new generation of telescopes leads to huge advances in knowledge of the early universe, major new discoveries of Earth-like planets around other stars, and breakthroughs in understanding the mysterious dark matter and dark energy that influence the structure and expansion of the universe.*
By Giant Magellan Telescope - GMTO Corporation [CC BY-SA 3.0], via Wikimedia Commons
The first test flight of the Skylon spaceplane
Until now, all spacecraft launching from Earth into space have used multiple stages, jettisoning parts of the launch vehicle in flight to reduce weight. During the 2020s, however, a new reusable spaceplane is developed that can operate without the need for external booster rockets, fuel tanks, engines or other jettisoned components – instead utilising a single stage, hybrid jet/rocket system.*
Known as Skylon, the vehicle is designed by Reaction Engines Limited, a British aerospace manufacturer based in Oxfordshire, England, with funding provided by the UK government, European Space Agency and BAE Systems. The total program cost was projected to be £7.1 billion ($10.1 billion), with a unit cost of about £190 million ($270 million). BAE Systems acquired a 20% stake in the company during 2015, investing an initial amount of £20.6 million ($29.4 million) to develop the engine system.
Skylon takes off from a specially strengthened runway. It uses a precooled jet engine (rather than a scramjet) to reach speeds of Mach 5.5 (1,700 m/s) at 26 km (16 miles) altitude, using oxygen from the atmosphere to "breathe". This provides a significant reduction in propellant consumption. It then closes the air inlet and operates as a highly efficient rocket to complete the remainder of its journey to orbit, 300 km (186 miles) above the Earth. This concept is known as the Synergetic Air-Breathing Rocket Engine ("SABRE").*
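The two speed figures are mutually consistent. This sketch assumes standard-atmosphere conditions at 26 km (roughly 223 K), which are not stated in the text:

```python
import math

# Rough consistency check of the Mach 5.5 / 1,700 m/s figures.
gamma, R = 1.4, 287.05        # air: ratio of specific heats, gas constant J/(kg·K)
T = 223                       # kelvin, assumed temperature at ~26 km altitude
a = math.sqrt(gamma * R * T)  # local speed of sound, ~300 m/s

speed = 5.5 * a               # Mach 5.5 in metres per second
print(f"speed of sound ≈ {a:.0f} m/s, Mach 5.5 ≈ {speed:.0f} m/s")
```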
Although its payload capacity is only 15 tons (about one-third that of the Space Shuttle), each plane is cheaper (about one-tenth the cost) and vastly more fuel efficient than earlier spacecraft, largely thanks to the reduced weight offered by the SABRE. After completing a mission, it re-enters the atmosphere with its skin protected by a strong ceramic, landing back on the runway like a normal aeroplane. It then undergoes any necessary maintenance and is capable of flying again in just two days (compared to two months for the Space Shuttle).
Ground-based tests of the SABRE engine commence in 2019. The first unmanned test flights were originally planned for 2020, but subsequently faced delays until 2025.* Although initially crewless, the Skylon is later used to carry astronauts to and from space stations. Future versions are even capable of being adapted for space tourism, transporting up to 30 passengers in a purpose-built module and costing under $500,000 per person. Skylon is hailed as the biggest breakthrough in aerospace propulsion technology since the invention of the jet engine – revolutionising access to space.* It also leads to commercial airliners capable of travelling around the globe in under four hours.
The first manned flights from Russia's new spaceport
Despite being a major space power, Russia for decades lacked its own proper independent space launch facility for manned flights. Instead it was reliant on the Baikonur Cosmodrome in neighbouring Kazakhstan – leased from the government of that nation until 2050, at a cost of $115 million per year.
In 2011, construction began on the Vostochny Cosmodrome, a new spaceport located in the Amur Oblast region in Russia's Far East. This was intended to reduce Russia's dependency on Kazakhstan, enabling most missions to be launched from its own soil. The area devoted to this new infrastructure would be nearly 100 sq km (39 sq mi) with four separate launch pads, an airport, train station, academic campus, training and space tourism facilities, business centres and a town of 30,000 capacity for housing workers and their families.*
Roscosmos had suffered a number of setbacks and launch failures in the 2000s and early 2010s, including the loss of its Phobos-Grunt probe. To address this issue and restore the nation's reputation in space, Vladimir Putin announced a major boost in funding: a budget of 1.6 trillion rubles ($51.8 billion or €39 billion) for 2013-2020, a far greater increase than that of any other space agency in the world.
Nevertheless, the spaceport faced delays. The first manned flights had been scheduled for 2018,* but were subsequently put back until 2025. Plans for the launch vehicle were also revised to incorporate a new craft with a two-stage, heavy-lift Angara A5B rocket, instead of the older Soyuz. Russia is now beginning a moon exploration program based on this modernised launch vehicle.*
Contact with the Voyager probes is lost
Voyager I is the farthest man-made object from Earth – more than 14 billion miles (22 billion km) away, or 150 times the distance between the Sun and Earth. Both Voyager I and its sister probe, Voyager II, have remained operational for nearly half a century, continuing to transmit data back to NASA. They have left the heliosphere and are now headed towards the Oort Cloud. By 2025, however, onboard power is finally starting to wane. Instruments begin shutting down, one by one, until eventually all contact is lost.*
Each spacecraft carries a gold-plated audio-visual disc, in the event that either is ever found by intelligent alien life. The discs carry images of Earth and its lifeforms, a range of scientific information, along with a medley, "Sounds of Earth", that includes the sounds of whales, a baby crying, waves breaking on a shore, a variety of music from different cultures and eras, plus greetings in 60 different languages.*
High-speed rail networks are being expanded in many countries
Many countries have radically overhauled their rail transport infrastructure. In China, more than 10,000 km of high-speed track has been laid, making it the most extensive network in the world. Some 90 percent of the country's population now lives within 50 km of a bullet train station.*
In Britain, the first phase of a major high-speed rail line is nearing completion. This will travel up the central spine of the country – connecting London with England's next largest city, Birmingham. It will eventually be expanded to Manchester and the north. Trains will be capable of reaching 250 mph, slashing previous journey times.*
In Japan, Tokyo will soon be connected with Nagoya via superfast magnetic levitation trains. Tests conducted in previous decades showed that it was possible to build a railway tunnel in a straight route through the Southern Japanese Alps. The first generation of these trains already held the world speed record, at 581 km/h (361 mph); recent advances in carriage design have pushed this still further, to speeds fast enough to compete with commercial airliners.*
Many other countries are investing in high-speed rail during this time, due to its speed and convenience, along with soaring fuel costs and environmental factors that have made car and air travel less desirable. Even America – which for decades had neglected its rail network – is making big progress in this area.*
A comprehensive overhaul of the U.S. airspace system is complete
The final upgrades of the Next Generation Air Transportation System (NextGen) are completed this year. This has involved a complete overhaul of the existing air transport network. Many aspects of the National Airspace System (NAS) had been failing because of a reliance on largely obsolete technology. The navigation system, for example, which relied on ground-based radar beacons, was based on technology from the 1940s.
NextGen brings pervasive upgrades and improvements to the entire system during the 2010s and early 2020s. This includes physical infrastructure as well as computer systems. Hundreds of new ground-based stations are built to allow satellite surveillance coverage of nearly the entire country. New safety and navigation procedures are introduced that markedly reduce flight times, while offering a more dynamic method of air traffic control.
Advances in computer power and digital communication have produced what is now a far more integrated and efficient national system. One of the largest technical advances is the complete replacement of the previous radar navigation system with a modern, GPS-based version. This creates detailed, three-dimensional highways in the sky, and takes into account variations in topography and weather – enabling pilots to fly shorter, more precise routes. By 2018, this system was in place at every major US airport.
Once on the runway, taxiing planes are guided by automated systems. These use data gathered on the position of every other plane and vehicle to present pilots and controllers with detailed, real-time traffic maps of the tarmac. Runway capacity is increased with the introduction of multiple take-off and landing pathways, as opposed to the older, single route approach.
Overall, these upgrades offer substantial improvements in flight-times, air pollution and fuel consumption. Delays are reduced by nearly 40%, saving tens of billions of dollars. Over 1.4 billion gallons of fuel are saved and CO2 emissions are cut by 14 million metric tons. These numbers will continue to improve steadily over the years.*
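The fuel and CO2 savings quoted above are mutually consistent, assuming a typical jet-fuel emissions factor of roughly 9.6 kg of CO2 per gallon (an approximation, not stated in the text):

```python
# Cross-checking the NextGen fuel and CO2 figures against each other.
gallons_saved = 1.4e9          # gallons of fuel saved, per the text
kg_co2_per_gallon = 9.6        # assumed emissions factor for jet fuel

co2_tonnes = gallons_saved * kg_co2_per_gallon / 1000
print(f"{co2_tonnes / 1e6:.1f} million tonnes CO2")  # close to the quoted 14 Mt
```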
Aircraft themselves are evolving in form, function and efficiency. A number of striking new designs have emerged with significant technological and environmental benefits.*
U.S. fuel economy standards have dramatically improved
In addition to rail and air travel (described earlier), road vehicles are witnessing major improvements. In the U.S., fuel economy standards have reached almost 55 miles per gallon (mpg) for cars and 39 miles per gallon for trucks. In other words, these vehicles now travel nearly twice as far on the same amount of fuel as they did in 2010.
This surge in efficiency, enacted by the Obama administration, was prompted by concerns over energy security and reducing U.S. dependence on foreign oil. Another factor was the urgent need to reduce carbon emissions. Light duty vehicles reached an average of 34 mpg by 2016 and these advances continued into the 2020s.* Around 6 billion metric tons of greenhouse gases have now been curtailed by this program – more than the total CO2 emitted by the entire United States in 2010.
Electric and hybrid vehicles are growing rapidly in number thanks to falling costs and improvements in battery life. By the early 2030s, they will account for the majority of new vehicles on the nation's roads.
Railguns are in use by the U.S. navy
After years of research and development, railguns are now in common use on U.S. naval ships.* Unlike traditional artillery, which generates force from explosive propellants, the railgun is powered entirely by electricity from the ship's grid. It works by storing up a supply of electrical power, using what is called a pulse-forming network, which is then released as an electromagnetic pulse. This travels up the barrel along parallel tracks of magnetic rails, forcing the projectile out of the gun, away from the power source.
The weapon is capable of firing an 18-inch metal projectile, itself equipped with complex internal guidance systems, over 100 miles at close to Mach 6. This is fast enough to set the air around the projectile ablaze, while delivering it to targets in mere minutes. Explosive rounds are unnecessary, since the kinetic energy released upon impact yields more power than traditional bombs of much greater size. New rapid-fire systems allow for a launch rate of around ten per minute.
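To see why explosive rounds become unnecessary, consider the kinetic energy involved. The 10 kg projectile mass below is an illustrative assumption, as no mass is given above:

```python
# Kinetic energy of a railgun round at roughly Mach 6.
mass = 10.0            # kg -- assumed, for illustration only
velocity = 2000.0      # m/s, approximately Mach 6 at sea level

energy_mj = 0.5 * mass * velocity ** 2 / 1e6   # KE = 1/2 m v², in megajoules
tnt_kg = energy_mj * 1e6 / 4.184e6             # TNT equivalent, 4.184 MJ per kg

print(f"{energy_mj:.0f} MJ ≈ {tnt_kg:.1f} kg of TNT on impact")
```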
A number of technical issues first had to be overcome to reach this point though. Advances in materials technology were required to keep the barrel from wearing out after repeated firings, while the projectiles needed to be outfitted in a way that protected internal guidance systems during launch. New cooling techniques also had to be introduced. The guns themselves originally required more electricity than standard naval ships could provide. This was overcome with advances in energy efficiency, along with ultra-dense storage batteries.
In combat situations, the railgun offers major benefits. It has greater accuracy over extremely long ranges. It can be used as initial cover fire for marines landing on shore, or as a defense against incoming missiles and other threats. Ships armed with these hi-tech weapons are able to attack with virtual impunity, safe from almost any retaliatory strike. Railguns become widespread around the world in the 2030s, adopted by many other navies. This devastating form of weaponry provides a considerable advantage in modern conflicts.**
The global crowdfunding market reaches $100bn
Crowdfunding is a form of alternative finance that involves raising monetary contributions from a large number of people – usually online – to collectively fund a project or venture. It first emerged in the arts and music communities, before eventually spreading into other areas. The rise of social media allowed it to gain popular and mainstream use. In 2009, crowdfunding generated slightly under a billion dollars worldwide, but by 2016 this had expanded 20-fold. Among the biggest platforms were GoFundMe, Indiegogo, Kickstarter, Patreon and Teespring.
With even greater potential yet to be fully realised, crowdfunding saw ongoing, rapid growth in the late 2010s and into the 2020s.* Further momentum was gained from the billions of new Internet users appearing online (from 1.7 billion in 2010 to 5 billion by 2020), with social media continuing to play a major role. China was now the largest market, representing half the global total, followed by the rest of East Asia. By 2025, the crowdfunding market has reached almost $100bn worldwide* – roughly 1.8 times the size of the global venture capital industry a decade earlier.
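The growth figures above imply the following compound annual rates, taking the stated years as endpoints:

```python
# Growth arithmetic from the quoted figures: ~$1bn in 2009, a 20-fold
# expansion by 2016, and ~$100bn by 2025.
v2009, v2016, v2025 = 1e9, 20e9, 100e9

cagr_early = (v2016 / v2009) ** (1 / 7) - 1    # 2009-2016
cagr_late = (v2025 / v2016) ** (1 / 9) - 1     # 2016-2025
print(f"{cagr_early:.0%} then {cagr_late:.0%} per year")
```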
Crowdfunding enables creators to attain low-cost capital from people around the world, reaching untapped markets. It also creates a forum to engage with audiences in the production process via updates and sharing of feedback. Pre-release access to content, or the opportunity to beta-test products, can be offered to project backers as part of the funding incentives. Fraud is also reduced through standards-based crowdfunding platforms.
The democratisation of fundraising through crowdfunding is a major breakthrough for entrepreneurs and non-profit organisations, allowing them to outmanoeuvre larger companies and corporations. Some of the more ambitious projects being crowdfunded include satellites and space probes.
21 "...robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance." See AI, Robotics, and the Future of Jobs, Pew Research: http://www.pewinternet.org/2014/08/06/future-of-jobs/
Accessed 14th June 2015.
23 "Daimler and other manufacturers, including Nissan and Tesla, are planning to introduce fully autonomous vehicles (with no human driver on board) during the early 2020s."
See The first licenced autonomous driving truck in the US, Future Timeline Blog: http://www.futuretimeline.net/blog/2015/05/7.htm
Accessed 14th June 2015.
71 Peter Diamandis on technology: "I'm a libertarian-capitalist at heart, but we are heading towards a future of socialism."
See Peter Diamandis talks about "Robots Will Steal Your Job, But That's OK", YouTube: https://www.youtube.com/watch?v=sXdA1lP7DKY
Accessed 14th June 2015.
72 "The endgame, where we're going as a species if we don't screw up badly and destroy ourselves or burn out all our resources before we get there, is some kind of post-scarcity society."
See America Has Hit "Peak Jobs", TechCrunch: http://techcrunch.com/2013/01/26/america-has-hit-peak-jobs/
Accessed 14th June 2015.
89 "China will be able to land two to three astronauts on the moon by 2025, with the South Pole the most likely landing site, authorities of the Chinese lunar exploration program said." See Chinese astronauts could land on moon by 2025, China.org.cn: http://www.china.org.cn/china/2010-11/08/content_21295188.htm
Accessed 31st May 2013.
107 Commercial hypersonic scramjet flight would likely come soon after the technology achieves military use, he added. "You look historically, after things are used for military applications, there's a couple of years and it usually then works its way into commercial application," Vogel said. But in this case, he added, commercial and military use "may happen about the same time, because the timeframes have been shrinking over the course of many years."
See Hypersonic Weapons Could Hit Battlefield by 2025, Space.com: http://www.space.com/21089-hypersonic-weapons-air-force.html
Accessed 22nd May 2013.
132 "By 2025, it is feasible that we will be able to fabricate complete functional organs, tailored for an individual patient." See Scientists 'grow' new cartilage with help of 3D printing technology, University of Wollongong: http://media.uow.edu.au/news/UOW155364.html
Accessed 10th August 2013.
133 "Sometime next decade, replacement human organs may be bioprinted to patient specification using technology derived from today's inkjet printers."
See Bioprinting, ExplainingTheFuture.com: http://www.explainingthefuture.com/bioprinting.html
Accessed 10th August 2013.
141 "Imagine a cluster of 30-story towers on Governors Island or in Hudson Yards producing fruit, vegetables, and grains while also generating clean energy and purifying wastewater. Roughly 150 such buildings, Despommier estimates, could feed the entire city of New York for a year."
See Skyfarming, New York Magazine: http://nymag.com/news/features/30020/
Accessed 6th February 2013.