California Forum

Artificial Intelligence keeps Gov.-elect Newsom up at night. Here’s what he can do about it

Shocking video shows pedestrian hit by self-driving Uber

Footage released by police in Tempe, Arizona, shows the moment a female pedestrian pushing her bicycle was knocked down by a self-driving Uber car on March 18. The victim, 49-year-old Elaine Herzberg, later died in the hospital from her injuries.

An out-of-control trolley rushes toward five people tied to the tracks. Your hand is on the switch that can redirect the trolley to a side track. But the side track has one person similarly incapacitated. Do you pull the switch, choosing to kill one but spare five?

This ethical thought experiment — the “trolley problem” — has new relevance following the first pedestrian death caused by a self-driving car, a 2018 incident that did little to slow the race to a world of autonomous vehicles and artificial intelligence. How does an AI system make an ethical call about sparing a driver vs. a pedestrian or a school bus?

Nowhere is this existential question more urgent than in California, the global center of technology and AI. And no single person in 2019 has a more important role to play in making sure we craft policy that gets AI right — pursuing its opportunities, protecting against its risks — than Governor-elect Gavin Newsom. Newsom has said the issue keeps him up at night. And it should: The choices his administration makes on AI policy will shape California’s future.

Two principles should guide Newsom on AI policymaking: urgency and an open mind. Urgency to get this right and an open mind to consider AI’s opportunities as well as its risks.

A look around the country clarifies the policy landscape Newsom needs to chart. Most states look at AI from a defensive posture. In New York, for instance, lawmakers formed an unpaid, temporary commission to regulate AI. That gives short shrift to a matter of such wide-ranging economic and ethical importance.


Indiana and Michigan are doing it better. Indiana’s future of work task force is looking for ways to boost growth and productivity and protect vulnerable populations. Michigan is using AI to predict the likelihood of drug-related deaths and better understand the state’s opioid epidemic.

Because AI is so new, the general public has very little understanding of it. That’s why it’s important for the governor-elect to reach out to the public, even if he has his own well-formed views. If he engages the public in a discussion, his ideas will be better and more likely to last.

Andrew Sullivan, left, and David Beier

Social scientist Daniel Yankelovich said that people work through complex issues in seven stages. The first is a period of “dawning awareness,” when people’s attitudes are only beginning to take shape and are therefore still capable of shifting. The final stage, “public judgment,” is when people come to a fixed point of view after working through the issue.

This framework is important for AI because the public is in the earliest stages of the path to public judgment. Opinions are raw and unformed. Even the strongest views are still prone to shift.

It’s critical for the governor-elect and state policymakers to help the public understand the choices posed by AI. By sharing their views, collecting feedback and considering tradeoffs alongside the public, our state leaders will be in a better position to craft policy that reflects the public’s priorities.

California’s Little Hoover Commission, a state oversight agency, has produced a detailed set of recommendations on AI. It’s a head start the governor-elect can use as the basis for a coherent AI agenda.

Among its key points, the commission calls on California to develop a holistic AI plan that looks at risks and opportunities in equal measure. A holistic plan should offer views of how to apply state resources to high-priority projects not generally viewed as AI, such as forecasting floods and wildfires in disaster-prone areas or detecting lead in drinking water.

The Little Hoover Commission also suggests establishing a formal AI leadership role within state government. The role should have a direct line to the governor and should be supported by a governance structure spanning the public and private sectors. One possible model is an advisory board of business leaders, educators, community leaders, worker representatives and policy experts.

Most important, the commission argues for focusing like a laser on AI in education and lifelong learning. California — from local school districts to the UC system, regional workforce development organizations and beyond — will need a tactical plan to upskill the state’s current and future workforce.

AI will affect nearly every aspect of our lives. In the future, virtually every California worker will need a basic understanding of computer science and AI-related disciplines. These disciplines include engineering, mathematics, psychology and statistics, to name just a few.

Above all, of course, is ethics, which will need to be at the heart of California’s education and training on artificial intelligence.

Our new governor has a long list of priorities, but few have greater urgency than this.

Andrew Sullivan is a founding partner of Hudson Pacific, a San Francisco political and public affairs strategy firm. David Beier is a Managing Director of Bay City Capital, a San Francisco venture firm, and a Commissioner of California’s Little Hoover Commission.