Lots of people are worried about the effects of artificial intelligence. Misused AI can cause harm. The Federal Drive with Tom Temin spoke with someone who said federal contracts can provide a line of defense against improper use of AI: University of Pennsylvania law professor and federal regulation expert Cary Coglianese.
Cary Coglianese Contracts can include provisions to address transparency problems, which is a big, big concern about the use of AI, by requiring contractors to disclose information about how these tools are designed and structured. Contracts can also address other kinds of substantive concerns about AI, such as bias and safety. And they can impose requirements for auditing and validation procedures. In Washington these days we often think about the need for legislation and regulation on AI, and that may be needed, but it’s going to be some time coming. Contracts can be, and are being, written every day, and they can be written in a way that requires contractors to use AI tools in a responsible manner.
Tom Temin And as you point out, there is no law and not much regulation yet. But there is some case history, though not at the federal level. You cite a case that happened in Houston, where public school teachers found that an algorithm amounted to a black box, and a judge agreed with them. Tell us about that case and what it says about the use of AI.
Cary Coglianese Yeah, it was a relatively unsophisticated algorithm by the standards of what we have today. But several years ago, school teachers in the city of Houston took the school district to court, because the district had been applying a performance algorithm to evaluate teachers for pay and continued employment. The algorithm had been developed and was run by a private contractor that claimed trade secret protection over it. And the teachers said, wait a minute: we’re public employees; school district, you’re a public entity. We have constitutional due process rights to some degree of transparency and fairness in how we are being evaluated, and we can’t even know what that is. And the court agreed with them. In retrospect, an obvious fix would have been for the school district, during the contracting process, to require the vendor to provide adequate information. That doesn’t mean turning over everything, but due process considerations would require a minimal amount of information about how the algorithm was structured, what it was designed to optimize for, what the sources of data were, and how it was tested and validated. In fact, a lot of private companies right now are voluntarily disclosing that kind of information in what are called model cards or system cards. And that suggests you can actually expect companies to release adequate information about how their algorithms work without running afoul of legitimate concerns about confidential business information.
Tom Temin Well, right. And even in the intellectual property space, which is not our topic today, you have to disclose something about what an invention is before you can get a patent on it. So even revealing those trade secrets doesn’t give anyone the right to copy them. It just means that you’re transparent about it. Fair to say?
Cary Coglianese That’s right. This is what I call an avoidable nested opacity problem. We’re legitimately concerned about the opacity of AI tools: can we really understand why they’re generating the results they’re generating? But there’s a second layer of opacity that can be created when the private vendors developing these tools won’t share information about them. There is information that can be disclosed and should be disclosed. And it’s a no-brainer today, as government entities contract for digital services and AI tools, to be careful about ensuring that the contractual language provides a basis for the government entity to demand and expect the disclosure of some basic information the public deserves to know about how these tools are being designed and used.
Tom Temin We are speaking with Cary Coglianese. He’s a law professor and director of the Penn Program on Regulation, all at the University of Pennsylvania. And I found it interesting that you call procurement and AI a two-way relationship, because not only can contracts ensure that this visibility and transparency is available to the contracting entity, but AI can also transparently help the procurement process itself. Tell us more about that.
Cary Coglianese That’s right. There’s an emerging area, procuretech, in which tools are being developed that use AI-based algorithms to parse contract proposals and flag issues, such as places where the proposals are deficient. Government agencies are experimenting with chatbots that can provide question-and-answer services for understanding regulatory requirements for procurement. Tools can also be developed using AI to help government agencies assess proposals for various risks or possible delays as those proposals are evaluated. And once contracts are accepted, AI tools can be used for auditing contractual performance and managing supply chains. So there’s a lot of potential for using AI within the procurement process, at the same time that the procurement process itself can be used as a means for governing governmental use of AI.
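To make the proposal-flagging idea concrete, here is a minimal sketch in Python; the required-section list and keyword matching are illustrative assumptions, far simpler than the AI-based parsing he describes, and only demonstrate the flagging workflow.

# Hypothetical sketch of a procuretech deficiency check.
# A real tool would use AI/NLP parsing; this keyword scan only
# illustrates how deficient proposals could be flagged for review.
REQUIRED_SECTIONS = ["price schedule", "delivery timeline", "past performance"]

def flag_deficiencies(proposal_text: str) -> list[str]:
    """Return the required sections the proposal appears to be missing."""
    text = proposal_text.lower()
    return [section for section in REQUIRED_SECTIONS if section not in text]

# A proposal with no delivery timeline gets flagged for human review.
print(flag_deficiencies("Price Schedule ... Past Performance ..."))
# -> ['delivery timeline']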
Tom Temin So in the first situation, it would be incumbent upon the contractor to provide this window into how its algorithm works. But in the second case, it’s the government that would have to provide the transparency. Otherwise, every time they use the algorithm, there’d be a protest from everyone who didn’t get the contract.
Cary Coglianese That’s right. And there may be a complex loop here, because the government may be using a private contractor to design and develop the procuretech tool that it then uses to assess procurement bids, and it would need to make sure it has adequate access to information about that tool to withstand the challenges to denied awards that would certainly be expected. But there’s nothing, I think, inherent in the use of AI tools that should keep government agencies from using them in the procurement context or in many other contexts, as long as they’re careful to ensure that there will be adequate information about how these tools are designed, what they’re aiming for, what data they rely upon and have been trained on, and how they have been validated and shown to work. I like to say that in some ways these tools can be analogized to a thermometer or any other kind of mechanical instrument. Government agencies are not precluded from relying upon such tools to make determinations that affect private interests, whether in the procurement context or any other. You just have to make sure those tools are validated, working properly and designed for the proper purpose. If we think about AI in those terms and ensure adequate disclosure and responsible assurance that these machines, if you will, have been well validated, government entities, I think, can safely rely upon them. But they have to make sure they can disclose the information to demonstrate that.
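His instrument analogy suggests a simple acceptance gate: before relying on a tool, check that its documented performance clears a validation bar. A minimal sketch, assuming a single accuracy metric and a threshold chosen purely for illustration:

# Hypothetical acceptance gate: treat the model like a calibrated instrument.
def is_validated(measured_accuracy: float, required_accuracy: float = 0.95) -> bool:
    """Return True only if the tool's documented accuracy meets the required bar."""
    return measured_accuracy >= required_accuracy

# A tool validated at 97% accuracy clears the bar; one at 90% does not.
assert is_validated(0.97)
assert not is_validated(0.90)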
Tom Temin And what form does that disclosure take, as a final question? A lot of agencies are pursuing software bills of materials for cybersecurity and supply chain purposes, and I’m making an analogy here. The software bill of materials can be an incomprehensibly long digital document, and so you get a big “what do I do with it now?” What are some of the elements that might be on these model cards or system cards, such that people could decipher what the vendor is showing about their own algorithm?
Cary Coglianese Well, we would definitely want to see the mathematical objective the algorithm has been designed to optimize for; that’s critical, and that’s going to be human-determined. What is it supposed to be doing? Then, where is it getting the data that trains the algorithm? That’s important. And what measures are the contractors or the agencies using to audit and ensure, first and foremost, that the data and the model design are actually achieving that objective? Then I think there’s a range of reasonable side effects that would be worth disclosing as well. Has it been audited for bias, for example? That’s standard in what seems to be an emerging practice, mainly among big tech firms, in their model and system cards. And I think over time we’re going to develop a more systematic understanding of what proper disclosure entails. It may well vary from use case to use case, but roughly speaking: objectives, data, validation, auditing.
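To make that list concrete, here is a minimal sketch in Python of what such a model-card disclosure could capture; the field names and example values are illustrative assumptions, not any official schema or an actual vendor’s card.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal disclosure record for a contracted AI tool (illustrative fields)."""
    name: str
    objective: str             # what the model is designed to optimize for
    data_sources: list[str]    # where the training data comes from
    validation: str            # how performance against the objective was measured
    audits: list[str] = field(default_factory=list)  # e.g., bias audits performed

card = ModelCard(
    name="proposal-risk-screener",
    objective="Rank incoming bids by estimated risk of schedule delay",
    data_sources=["Historical contract performance records (hypothetical)"],
    validation="Back-tested on held-out contract years; error rates reported",
    audits=["Disparate-impact review across vendor size categories"],
)
print(card.objective)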
Tom Temin And do I detect a preference for a market-driven approach rather than a regulatory or legal approach to keeping AI in its swim lane?
Cary Coglianese Well, I think in many respects this is kind of an all-hands-on-deck governance challenge. I don’t think one can say across the board that there should be a preference for one set of tools or another. There are aspects and uses of AI that will demand legislative and regulatory responses, but even those responses will probably need to be fairly flexible and adaptable, because although we might talk about AI as a singular technology, it’s actually many, many different technologies. Once you recognize that, you see another advantage of thinking about procurement as a governance tool. Procurement, as I’ve said, can be used today; we don’t have to wait for legislative or regulatory action. It can also be customized to the specific use case at issue, and that’s going to be very important to any kind of governance approach to AI. So you can call that market-driven; I might call it customization, or a holistic approach to AI. I also think, by the way, that there’s absolutely a need for government vigilance. Even in the procurement contracting context, no government agency should be lulled into thinking that because we’ve put these terms in the contract, we’ve done all we need to do to govern the use of AI under this contract. AI is a dynamic technology: as the data upon which it’s trained vary, its results may vary, and the models may need to be updated over time. As a result, contracts for AI may themselves need to be updated, or at least contain provisions that allow for an ongoing flow of information about how the tools are being used. So ongoing vigilance is really, really important in AI governance today.