Ethics and Legal Considerations for AI in the Enterprise

Deploying artificial intelligence (AI) applications in the enterprise raises a host of issues beyond technology. “We think of AI as being a neutral application, but it’s not,” said attorney Martha Buyer of Buffalo, NY. “You need to understand the inputs, variables, assumptions, and math before knowing how much value to give the outcome.”

An AI solution also needs to be compliant with legal and regulatory provisions, such as HIPAA in healthcare and Europe’s GDPR rules. “Do not rely on a vendor’s guidance – do your own due diligence,” Buyer said.

From writing code to deciding where to deploy an AI solution, IT professionals need to take a close look at the underlying biases and assumptions, Buyer said during her Enterprise Connect 2019 session, “AI in the Enterprise: Factoring Ethics and Law into the Equation.”

Some questions to consider:

• Whose AI tools are being used?
• How reliable is the data going into the AI model?
• What factors are included – and what may have been missed?
• How are the factors weighted?
• How is the outcome determined?
• How will the outcome be used?
• How much will the provider share about how it does the math?

“Most providers are reluctant to tell you their ‘secret sauce,’” Buyer said. “But if you don’t know what’s going into the AI application, you won’t know the outcomes either.”
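To make the weighting questions concrete, here is a minimal, hypothetical sketch of how a scoring model might combine weighted factors. The factor names, weights, and inputs are invented for illustration and do not represent any real vendor’s model; the point is that the same inputs yield different outcomes depending on which factors are included and how they are weighted.

```python
# Hypothetical illustration: the same inputs can yield different outcomes
# depending on which factors a model includes and how it weights them.
# All factor names and weights below are invented for this sketch.

def score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum over whichever factors the model chooses to include."""
    return sum(weight * factors.get(name, 0.0) for name, weight in weights.items())

applicant = {"income": 0.8, "tenure": 0.4, "zip_code_risk": 0.9}

# Two plausible weightings of the same data:
model_a = {"income": 0.5, "tenure": 0.5}                        # omits a factor
model_b = {"income": 0.3, "tenure": 0.2, "zip_code_risk": 0.5}  # adds a proxy

print(score(applicant, model_a))  # 0.60
print(score(applicant, model_b))  # 0.77 -- a different decision may follow
```

Neither score is “neutral”: each reflects upstream choices about factors and weights, which is exactly what the questions above are meant to surface.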

Earlier in her career, Buyer worked at a bank’s call center, where the agents were ranked monthly. “The smartest agent we had always came in second behind an agent who would disconnect complex calls so she could show a higher volume,” Buyer said.
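Her anecdote is, at bottom, about metric design. Here is a minimal sketch of the problem, with invented numbers (not from the session), assuming a monthly ranking keyed only to call volume:

```python
# Hypothetical sketch: ranking agents on raw call volume rewards gaming it.
# The numbers are invented to mirror the shape of Buyer's anecdote.

agents = [
    {"name": "thorough", "calls": 80,  "resolved": 78},  # takes the hard calls
    {"name": "gamer",    "calls": 120, "resolved": 60},  # drops complex calls
]

top_by_volume = max(agents, key=lambda a: a["calls"])
top_by_resolution = max(agents, key=lambda a: a["resolved"] / a["calls"])

print(top_by_volume["name"])      # "gamer" tops the monthly ranking
print(top_by_resolution["name"])  # "thorough" wins once resolution is counted
```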

Another example of the misuse of metrics occurred in Washington, DC, where teachers whose students performed poorly on an end-of-year test were let go. However, the students had not been tested at the start of the year, so there was no way to know whether a teacher had improved their students’ performance or not, she said.
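The flaw here is a measurement problem: a growth (“value-added”) score needs both a start-of-year baseline and an end-of-year result. A minimal sketch with hypothetical scores:

```python
# Hypothetical sketch: without a baseline, "improvement" is undefined.

def growth(pre: float | None, post: float) -> float:
    """Value-added score: requires both a baseline and a final measurement."""
    if pre is None:
        raise ValueError("no start-of-year score; the end-of-year result "
                         "cannot be attributed to the teacher")
    return post - pre

print(growth(62.0, 71.0))  # 9.0 -- genuine evidence of improvement

try:
    growth(None, 55.0)     # a low final score alone proves nothing
except ValueError as err:
    print(f"cannot rank this teacher: {err}")
```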

Buyer then turned to liability concerns and other risks connected with AI in the enterprise. “What happens if a business decision based on AI outcomes turns out to be wrong?” she said. “Who is at fault here?”

Enterprises can also face risks related to data security and privacy as an AI application crunches information from customers, prospects, or employees to generate scenarios or likely outcomes.

IT professionals should also take a close look at the provisions in AI contracts, including who owns the data and how the data will be compiled and weighted. Other contract issues include service expectations and planning for future technology innovations. If an AI model evolves, for instance, does the service provider have an obligation to deliver a more advanced version?

“You want as much contract flexibility as possible since this is such a fast-moving field,” Buyer said. “You should also consider a termination strategy right from the start. If the implementation doesn’t work, you need to be able to get out of the contract as efficiently as possible.”

Finally, Buyer reminded IT professionals that AI, however powerful, is only a tool that can’t measure the intangibles in the workplace. As she said, “When you reduce everything to numbers, you lose the human aspect, which is critical to making smart decisions.”
