
The Geopolitics of Artificial Intelligence

artificial intelligence AI geopolitics China Russia labor society data science big data economics

2 replies to this topic

Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,573 posts
  • Location: New Orleans, LA

The Geopolitics of Artificial Intelligence

Something out of the ordinary stood out during a speech by China’s president, Xi Jinping, in January 2018. Behind Xi, on a bookshelf, were two books on artificial intelligence (AI). Why were those books there? As in 2015, when Russia “accidentally” aired designs for a new weapon, the placement of the books may not have been an accident. Was China sending a message?
If it was, perhaps it was this: For decades, China has been operating in an Americanized world. To escape, China is turning to AI.
By 2030, China wants to be the world’s leading AI power, with an AI industry valued at $150 billion. How does China plan to achieve this?
Take health care. Ping An, a large Chinese conglomerate, has unveiled AI doctors. It has launched clinics known as “One-Minute Clinic,” where AI doctors diagnose symptoms and propose medications. Within three years, Ping An plans to build hundreds of thousands of these clinics across China.
Could China export 10,000 AI doctors to Russia? Such a move would transform geopolitics.
The biggest impact is that it would shift the China-Russia relationship, from energy and currency, areas that the U.S. can influence, to Chinese AI, over which the U.S. has no control. The AI doctors may make Russian society more China-centric, and future generations in Russia may be more familiar with Ping An than with IBM or Intel.
There are other geopolitical implications too.

The U.S. should not try to stop China from taking its AI around the world. It’s too late for that. Instead, the U.S. should focus on controlling how Chinese AI behaves.
To do this, the U.S. should create the world’s first “AI Trade Organization” or AITO. Just like in the 20th century, when the U.S. created the World Trade Organization (WTO) to govern traditional trade, AITO would govern AI trade.
AITO would establish the international rules, ethics and standards for AI.
Of course, China, and others, may refuse to join AITO, for obvious reasons. But, those nations are not the target. The target is countries that China could take its AI to in the future.

As nations compete around AI, they are part of the biggest battle for global power since World War II. Except this battle is not about land or resources. It is about data, defense and the economy, and ultimately how these give a nation more control over the world.
This is not a cold war. It is an algorithmic war.
Except this battle is not just between the U.S. and China. There are also countries like India, Russia, Israel and Japan, each of which has its own ambitions and vision.
The U.S. and China, though, stand to lose and gain the most. For the U.S., AI could lead to a de-Americanized world. For China, AI could truly ring in the Chinese century. Perhaps Russian President Vladimir Putin’s words are more important than ever before, when he warned that the country that controls AI will control the world.
The question, however, is whether that was just a casual statement or, as with the AI books on display during the Chinese president’s speech, Russia’s president sending a message.
Here's an angle of AI that we've touched upon many times before: AI's impact on geopolitics.
It, along with synthetic media, is a solid area of what I consider to be "Near-Term AI". Whenever we talk about AI impacting politics and society, we almost always default to general AI in Skynet scenarios, where the machines dominate nations and become presidents. That's more "Long-Term AI": stuff beyond 2040. Near-Term AI involves the impacts of AI from the present to roughly 2030, give or take a few years. This, of course, is the age of "AXI", where neural networks are becoming capable of multi-purpose functionality. This is the age of "cognitive agents" and "collaborative data agents" that are basically facsimiles of you online, monitoring your health and content.
This is when we've mastered the narrow AI field and no longer have to take brute-force shortcuts to make things work.
Likewise, AI's impact on politics is going to be a tale of optimization and subterfuge in industry. It's AI networks developed by media and retail corporations (e.g. Facebook, Alibaba, Alphabet) used to subtly alter hearts and minds. It's algorithms becoming part of infrastructure and medicine.


And remember my friend, future events such as these will affect you in the future.




  • Members
  • 1,062 posts

I thought the discussion about AI doctors was interesting.  It hadn't occurred to me that China could export them to Russia, and forge a stronger trading partnership in the process.  It could have a big effect on Russian health, too, extending lifespan significantly.


Economists like Paul Krugman don't seem that interested in thinking about the economic implications of AI.  They treat it like just another technology, and roll their eyes at any mention of tech unemployment.  Because I respect Krugman, this makes me question my own beliefs on the subject.  "Could I be missing something?  Or are these guys too old and not keeping up with the latest developments?", I wonder.  Krugman, it seems, thinks AI is improving very gradually, based on the accuracy of the digital transcription service he uses.  I can't shake the feeling that he is grossly underestimating the rate of change.





  • Members
  • 9,526 posts

Here are a couple more articles relevant to the subject:






The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls
