The Case for a Light Hand With AI and a Hard Line on China

Last week, at the WIRED HQ at CES, I spoke with Michael Kratsios, the chief technology officer of the United States. We dug into the government’s recent regulatory framework for AI, the potential for an AI cold war with China, and whether or not the NSA is building a quantum computer. The conversation has been lightly edited for clarity.

Nicholas Thompson: You've just laid out a memorandum to the heads of executive departments and agencies on how to regulate artificial intelligence. Why don’t you give us the quick top-line summary of what you've done and why it matters?

Michael Kratsios: Yeah, absolutely. To kind of step back a second, this requires a little bit of context to see where this fits into the larger effort we've been making as a country.

Beginning in 2017, we identified some of the core emerging technologies in which we need to ensure American leadership. First and foremost was artificial intelligence. And we launched the US national strategy on AI beginning last year, called the American AI Initiative. The American AI Initiative really is a whole-of-government approach to ensuring American leadership in AI, and it consists of four key areas: One is research and development leadership, two is workforce, three is regulations, and four is international engagement.

Now for each of those pillars, we continued to work very diligently over the last year. And then the big announcement that came out yesterday was on that third pillar of regulations. We want to create an environment where it’s the United States that encourages and drives entrepreneurs to make next-generation AI discoveries here in the US. We want to do that in a way that's still true to the values that we as Americans hold dear, those of privacy and civil rights and freedoms. And in order to do that, we developed this regulatory memo. What this memo essentially does, this is a direction from the White House to agencies that regulate AI-powered technologies and gives them considerations or more guidance as to how we should do that.

So in our system, we have lots of agencies that touch AI-powered technology. You could be at the Food and Drug Administration looking at AI-powered medical diagnostics, and those need to be approved. You could be at the [Federal Aviation Administration] dealing with drones that are flying, or the [Department of Transportation] looking at autonomous vehicles. Each of those agencies is dealing with an AI-powered technology, but they need to have some flexibility in the way they approach the regulation of that particular AI-powered tech. So what our memo does is essentially provide 10 principles that they should be focusing on. And that's out for comment now in the community; we're excited to get comments back on how we can improve this in the next 60 days. Then, once the memo is final, any time an agency is attempting to put forward a regulation impacting AI-powered technology, it must essentially comply.

NT: So my quick summary of the memo—you can tell me whether this is accurate or not—is: AI is important, there are lots of really hard questions involved, please don't overregulate and fuck this up.

MK: That's about right, yes. Generally there are three main categories of considerations that agencies should have. The first is to ensure public engagement. First and foremost, we recognize as bureaucrats in Washington that we don't have all the answers—even if we pull the best people together from all across government to sit down and think about the regulatory considerations for a next-generation autonomous vehicle, we won't have the right answers. So we're directing all of our agencies, when they are moving forward with regulating AI, to call a committee, go talk to the community, have stakeholder meetings, bring people to Washington to discuss it with them.

Number two is to promote this light-touch regulatory approach, this idea that if we're too heavy-handed with artificial intelligence, we end up stifling entire industries that we want to foster and grow here in the United States. And the last, and I think this is the one that tends to get the most coverage, is this idea of promoting trustworthy AI. We want Americans who interact with these technologies in the private sector to trust them. Like when you go and take a prescription drug, you have confidence that the FDA has done a very thorough process ensuring that that drug is safe. We want to make sure the same thinking comes into play with these types of AI technologies.

Source: Wired
