You probably know the first two authors. Huttenlocher, you might not. Here is his bio:
Daniel Huttenlocher is the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech and in the Computer Science Department and the Johnson School at Cornell University. As Dean, he has overall responsibility for the new campus, including the academic quality and direction of the campus’ degree programs and research. Working with both internal and external stakeholders, he is developing approaches for working with companies, nonprofits, government agencies and early stage investors, as well as overseeing the faculty recruitment and entrepreneurial initiatives of the campus. Huttenlocher has a mix of academic and industry background, having worked at the Xerox Palo Alto Research Center (PARC) and served as CTO of Intelligent Markets, as well as being a faculty member at Cornell for over two decades. He received his bachelor’s degree from the University of Michigan and both his Master’s and Doctorate degrees from the Massachusetts Institute of Technology (MIT). He currently serves as a Director of Amazon, Inc., Corning, Inc. and the John D. and Catherine T. MacArthur Foundation.
The article itself is heavy and serious in tone. Passages like:
Humanity is at the edge of a revolution driven by artificial intelligence. It has the potential to be one of the most significant and far-reaching revolutions in history, yet it has developed out of disparate efforts to solve specific practical problems rather than a comprehensive plan. Ironically, the ultimate effect of this case-by-case problem solving may be the transformation of human reasoning and decision making.
This revolution is unstoppable. Attempts to halt it would cede the future to that element of humanity more courageous in facing the implications of its own inventiveness. Instead, we should accept that AI is bound to become increasingly sophisticated and ubiquitous, and ask ourselves: How will its evolution affect human perception, cognition, and interaction? What will be its impact on our culture and, in the end, our history?
Hardly any of these strategic verities can be applied to a world in which AI plays a significant role in national security. If AI develops new weapons, strategies, and tactics by simulation and other clandestine methods, control becomes elusive, if not impossible. The premises of arms control based on disclosure will alter: Adversaries’ ignorance of AI-developed configurations will become a strategic advantage—an advantage that would be sacrificed at a negotiating table where transparency as to capabilities is a prerequisite. The opacity (and also the speed) of the cyberworld may overwhelm current planning models.
The authors seem worried about the awesome world-transforming power that AI will unleash, and are looking for ways to muzzle it so that it doesn't end in obliteration.