This article is based on a presentation at the Fall 2023 Accounting & Finance Forum on Data with Dignity by Chad Dau, Associate Vice President – Decision Analytics and Optimization at Lilly.
AI-driven resource allocation is quickly becoming one of the most common applications of artificial intelligence in the business world, and for good reason. Leveraging data to decide how resources are applied is tremendously attractive for organizations like pharmaceutical company Eli Lilly, which are continually looking for opportunities to make efficiency gains and optimize costs.
But AI-driven decision making isn’t exclusively upside, says Chad Dau, Associate Vice President of Decision Analytics and Optimization at Lilly. Introducing AI into decision-making processes poses an entirely new set of risks, and organizations may not be aware of the growing challenges of using AI responsibly.
“AI modeling tends to focus on short-term impacts and proximal drivers, and leaves out the complexity of interactions between decision makers,” says Dau. “When those things are ignored, you can build biases into your models that affect people’s lives and health.”