Welcome to FutureTimeline.forum

How the Enlightenment Ends | Henry Kissinger: Philosophically, intellectually—in every way—human society is unprepared for the rise of AI

artificial intelligence AI Enlightenment Henry Kissinger geopolitics Age of Reason liberalism Singularity AGI DeepMind

4 replies to this topic

#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,565 posts
  • Location: New Orleans, LA

How the Enlightenment Ends


Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence

By HENRY KISSINGER

Three years ago, at a conference on transatlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session—it lay outside my usual concerns—but the beginning of the presentation held me in my seat.
The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.
The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.
As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?
Aware of my lack of technical competence in this field, I organized a number of informal dialogues on the subject, with the advice and cooperation of acquaintances in technology and the humanities. These discussions have caused my concerns to grow.
Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.
But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.
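The self-play training the speaker describes ("the computer played innumerable games against itself, learning from its mistakes") can be made concrete with a toy sketch. To be clear, this is not AlphaGo's actual method, which combined deep neural networks with Monte Carlo tree search; it is a minimal tabular learner for a trivial game (Nim: take 1 or 2 stones, whoever takes the last stone wins), and every name and parameter here is illustrative.

```python
import random

# Two copies of the same agent play Nim against each other and update a
# shared action-value table Q from game outcomes: the general shape of
# "learning from its mistakes" through self-play.
Q = {}  # (stones_left, stones_taken) -> estimated value for the mover

def legal(stones):
    # In this toy game a player may take 1 or 2 stones per turn.
    return [t for t in (1, 2) if t <= stones]

def choose(stones, eps):
    # Epsilon-greedy: explore occasionally, otherwise pick the move with
    # the highest learned value.
    moves = legal(stones)
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda t: Q.get((stones, t), 0.0))

def play_one_game(eps=0.2, lr=0.1):
    stones, player, history = 7, 0, []
    while stones > 0:
        move = choose(stones, eps)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = history[-1][0]  # whoever took the last stone wins
    for p, s, m in history:  # credit every move with the final outcome
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] = Q.get((s, m), 0.0) + lr * (reward - Q.get((s, m), 0.0))

random.seed(0)
for _ in range(5000):
    play_one_game()

# With moves of 1 or 2, optimal play leaves the opponent a multiple of 3,
# so the learned best first move from 7 stones should be to take 1.
print(max(legal(7), key=lambda t: Q.get((7, t), 0.0)))
```

The table-based learner works only because Nim's state space is tiny; Go's vastness is exactly why AlphaGo needed function approximation and search rather than a lookup table.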


  • starspawn0 likes this

And remember my friend, future events such as these will affect you in the future.


#2
funkervogt

    Member

  • Members
  • 464 posts

This article didn't impress me. It said nothing new, and I don't see how Henry Kissinger is in any way an expert on the implications of AI. 



#3
That-Guy

    Member

  • Members
  • 56 posts

Why would the war criminal even care? He has more than likely contributed money and ideas to the foundation of AI in warfare.

Here's hoping he kicks the bucket slowly and painfully.



#4
Sciencerocks

    Member

  • Members
  • 11,235 posts

I'd be more afraid of humans. They're ending the Enlightenment as we speak with their extremist religious BS and anti-government BS.



#5
Sciencerocks

    Member

  • Members
  • 11,235 posts

At least AI would keep advancing in these fields. It might kick humans out, but who knows, maybe we'll merge with them.

What can I say about the rise of a Taliban-like Christianity in America? Give me the goddamn AI.






