My Turn: AI, us and unintended consequences

By Robert Scott and Len Kennedy

The responses to developments in Artificial Intelligence and generative computing (or machine learning from prompts) are increasingly loud. They include U.S. Sen. Chuck Schumer’s (D-NY) proposals for new federal rules, as European countries have done. They also include industry calls for a pause in developments that are already decades ahead of estimates.

Many assert that AI is as serious a threat as climate change and nuclear war because we are not prepared, either individually or collectively, to protect society from its nefarious uses. The concerns are not only about automation eliminating jobs and dislocating communities and families, but also about privacy violations and breaches of cybersecurity.

We have witnessed the unintended consequences, as well as the benefits, of algorithms. Some were designed to foster online communities and became tools to recruit radicals of every stripe. In praising the algorithms for increasing the number of those interested in streaming videos and other products, their progenitors failed to consider how harmful online communities, whether pornographic, terrorist, or hate-focused, could be launched and promoted using the same tools.

Some advocates for a “pause” in the development of AI until safety concerns can be addressed are industry leaders. However, we question both the adequacy of a pause and the sincerity of its advocates. Who can have confidence that a voluntary pause would be honored? Have we forgotten the lessons learned from the development of the internet, the record of voluntary compliance with health and safety standards, and the anti-regulation spirit in the country?

Why should we trust those who benefit most from the commercial development of AI and from the use and sale of our private information? Have they shown any conscience in policing their own behavior? They have even tried to dodge mandated compliance with federal protections for consumer information.

But we need to do something. We need to protect against the loss of human judgment through subservience to machines. We must be prepared for the “act” or “delay” decisions that could risk the extinction of human and animal life. These may be low-probability events, but they are high-consequence possibilities.

Imagine an AI-generated image of a missile headed toward the White House, a false image of Ukraine’s surrender to Russia, or misinformation about basic medical and health care. We need to be able to authenticate information quickly and accurately, precisely because machines can calculate at lightning speed.

One place to start in ensuring the transparency, safety, reliability and management control of AI is to establish audit standards tied to the material risks of AI, as other countries are doing. These standards would require directors and officers of both public and private companies to recognize and sign off on potential unintended consequences. This is risk management at a new level.

President Truman created the Atomic Energy Commission to promote safety standards; its regulatory mission was later assumed by the Nuclear Regulatory Commission. We now have the Cyber Safety Review Board, whose review of its inaugural proceedings provides a roadmap for ensuring that the board “endures as a sustainable, replicable, and professional model for public-private collaboration.” This could become a path to limiting what has been called computer-aided “surveillance capitalism.” Once codified in law, it could help ensure that our systems strengthen the security of critical infrastructure owners, operators, and users of all sizes, locations, and sectors.

As has long been noted, the so-called “law of unintended consequences” holds that the actions of people always have effects that are unanticipated. While social scientists and economists have studied these effects for many years, politicians and the public generally ignore them, at our peril.

The American sociologist Robert Merton identified five sources of unintended consequences. The first two were “ignorance” and “error.” The third was the purposeful ignoring of unintended results, which produces conscious or unconscious bias. The fourth he called “basic values,” by which he meant that hard work and asceticism can lead to their own decline through the accumulation of wealth and possessions. The fifth is the “self-defeating” prediction, which proves false because the prediction itself alters the course of history.

While complaints of unintended consequences, especially those due to ignorance and error, are often leveled at governmental programs, they occur in private enterprise and in our private lives as well. Think of businesses ignoring unintended results because they did not fit expectations.

Given our vulnerability to unintended consequences of any type, it is especially important to map alternative outcomes in our planning. Cybersecurity is, or should be, a concern for every individual and organization. We now have the ability, if not the authority, to wage war without time for analysis and consultation.

Cyber-fraud of bank and credit card accounts, hacking of emails and other transmissions, some of high security, interference with the electrical grid, and even with the flight paths of airliners and drones, are major concerns. Our homes have become hubs for the internet of things, and every appliance linked to the web may be accessible to others without authorization. The threats from unintended consequences in the use of AI are everywhere, affecting our nation, our communities and even our relationships with one another.

Our legislators need to follow Schumer’s lead: consider the outsized threats we have already encountered in the digital age in government, civil society and private affairs, and embrace the challenges posed by AI and generative computing. We need to protect individual rights, deter misinformation and build trust between providers and users even as we seek to benefit from new technologies. Leaders in these sectors must be active participants in the legislative process if pragmatic results are to follow and if we are to protect ourselves and society from threats that have yet to emerge.

Robert A. Scott is President Emeritus of Adelphi University. He served in the Naval Security Group while on active duty in the U.S. Navy and was a member of the Department of Homeland Security Academic Advisory Committee under Secretary Jeh Johnson. Len Kennedy, Esq., is a specialist in telecommunications and media law and an Adjunct Professor of Law at Cornell Law School.
