The Pentagon’s Rush To Deploy AI-Enabled Weapons Is Going To Kill Us All

by Michael Klare, Activist Post:

While experts warn about the risk of human extinction, the Department of Defense plows full speed ahead…

The recent boardroom drama over the leadership of OpenAI—the San Francisco–based tech startup behind the immensely popular ChatGPT computer program—has been described as a corporate power struggle, an ego-driven personality clash, and a strategic dispute over the release of more capable ChatGPT variants. It was all that and more, but at heart represented an unusually bitter fight between those company officials who favor unrestricted research on advanced forms of artificial intelligence (AI) and those who, fearing the potentially catastrophic outcomes of such endeavors, sought to slow the pace of AI development.

At approximately the same time as this epochal battle was getting under way, a similar struggle was unfolding at the United Nations in New York and in government offices in Washington, D.C., over the development of autonomous weapons systems—drone ships, planes, and tanks operated by AI rather than humans. In this contest, a broad coalition of diplomats and human rights activists has sought to impose a legally binding ban on such devices—called “killer robots” by opponents—while officials at the Departments of State and Defense have argued for their rapid development.

At issue in both sets of disputes are competing views over the trustworthiness of advanced forms of AI, especially the “large language models” used in “generative AI” systems like ChatGPT. (Programs like these are called “generative” because they can create human-quality text or images based on a statistical analysis of data culled from the Internet.) Those who favor the development and application of advanced AI—whether in the private sector or the military—claim that such systems can be developed safely; those who caution against such action say it cannot, at least not without substantial safeguards.

Without going into the specifics of the OpenAI drama—which ended, for the time being, on November 21 with the appointment of new board members and the return of AI whiz Sam Altman as chief executive after being fired five days earlier—it is evident that the crisis was triggered by concerns among members of the original board of directors that Altman and his staff were veering too far in the direction of rapid AI development, despite pledges to exercise greater caution.

As Altman and many of his colleagues see things, human technicians are on the verge of creating “general AI” or “superintelligence”—AI programs so powerful that they can duplicate all aspects of human cognition and program themselves, making human programming unnecessary. Such systems, it is claimed, will be able to cure most human diseases and perform other beneficial miracles—but, detractors warn, they will also eliminate most human jobs and may, eventually, choose to eliminate humans altogether.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” Altman and his top lieutenants wrote in May. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”

For Altman, as for many others in the AI field, that risk has an “existential” dimension, entailing the possible collapse of human civilization—and, at the extreme, human extinction. “I think if this technology goes wrong, it can go quite wrong,” he told a Senate hearing on May 16. Altman also signed an open letter released by the Center for AI Safety on May 30 warning of the possible “risk of extinction from AI.” Mitigating that risk, the letter avowed, “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Read More @ ActivistPost.com