Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is “trustworthy”: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper published in Frontiers in Artificial Intelligence poses a different question: What if an AI is just too good?
Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, about the industry's attitudes toward adopting AI. A notable concern was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.
For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.
“The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted,” Alexander said.
Alexander and her co-authors, Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary "on-ramp" that would allow companies to begin using AI while exploring the benefits and risks and ways to mitigate them. This would also give courts, legislators, and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political, and regulatory decisions.
"We need ways for businesses to opt in and try out AI technology," Alexander said. Subsidies, for example to help digitize existing records, might be especially helpful for small companies. "We're really hoping to generate more research and discussion on what could be a significant issue," Alexander said. "It's going to take all of us to figure it out."