This article originally appeared on Forbes.com.
As the use of decision intelligence technology becomes more widespread, its traditional user base is starting to evolve. For those of us in the industry, this typically means that instead of speaking primarily with mathematical optimization experts, we’re increasingly talking with AI specialists.
This is a great opportunity to see our own field from a different perspective—and recognize important differences in how we think and talk about these two technologies.
While most of the differences are expected and fairly easy to overcome, we’ve found one that was quite surprising and often difficult to bridge.
Mathematical Optimization Vs. AI: Some Fundamental Differences
The first, probably least surprising difference we typically notice is in assumptions about the types of problems decision intelligence technology can solve.
AI models are generally oriented around predictions—even large language models (LLMs) are fundamentally about predicting the next word in a sentence. In contrast, the goal of a mathematical optimization model is to use such predictions to guide complex decisions.
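To make that distinction concrete, here is a minimal sketch (with made-up numbers, and scipy's `linprog` standing in for industrial-grade solvers): a demand forecast, the kind of output a prediction model produces, becomes an input to an optimization model that decides how to allocate limited capacity.

```python
from scipy.optimize import linprog

# Forecast demand for two products. In practice, this is where an
# AI prediction model would plug in; the numbers here are illustrative.
forecast_demand = [30, 60]   # maximum sellable units of products 1 and 2
profit_per_unit = [40, 30]   # dollars per unit
hours_per_unit = [2, 1]      # machine-hours needed per unit
hours_available = 100

# linprog minimizes, so negate the profits to maximize them instead.
result = linprog(
    c=[-p for p in profit_per_unit],
    A_ub=[hours_per_unit],          # total machine-hours used...
    b_ub=[hours_available],         # ...must not exceed what's available
    bounds=[(0, d) for d in forecast_demand],
)

print(result.x)     # optimal production plan: [20, 60]
print(-result.fun)  # maximum profit: 2600.0
```

The forecast alone says nothing about what to build; the optimization model turns it into a decision, and the same structure scales to thousands of products and constraints.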
Not all decisions are complex, and AI models can make reasonable decisions even in the presence of some complexity. Still, it can be challenging to convey that decision intelligence technology handles far greater complexity than AI approaches, and typically handles it far more efficiently.
A second unsurprising difference is in assumptions about how AI and optimization models are built. AI models are inferred from reams of data through a training process. Mathematical optimization models, on the other hand, are built based on the modeler's detailed understanding of a particular business process.
There are definitely tradeoffs between the two approaches. The most fundamental are typically (1) using machine time to train the model versus using expert modeler time to craft the model and (2) the robustness of the resulting model in the face of data that isn’t fully represented in the training set.
Finally, there are also differences in terminology—which is almost inevitable when people from two different fields interact. This is often just a matter of learning the lingo of the other field, but there can also be subtle differences in how the same words are used.
Conveying Impact Through Storytelling
I’m sure that none of the differences I've raised so far have come as a surprise. The big surprise to us was the difference in stories we tell to establish credibility.
Mathematical optimization has a long track record of solving important problems in a variety of domains, including supply chain management, electrical power distribution, financial modeling and even sports scheduling. We are quite proud of the impact this technology has had, so it probably isn’t a shock that we tend to tell stories of the many ways the technology has impacted modern society.
There’s no doubt that AI has also had a major impact. But when people talk about it, they typically focus on potential future applications: self-driving cars, robotic home assistants or computers replacing programmers in software development.
I’m not claiming that decision intelligence can make predictions that are as bold as these—my claim is simply that there are many bold, credible claims that we can make but don’t, primarily for cultural reasons. This can lead to the conclusion that decision intelligence is old technology while AI is the future. That's far from the case.
To give a concrete example, I can relate a conversation I had with some colleagues quite recently. We were trying to produce a rough estimate of the number of decision intelligence applications in use today. While we have lots of quantitative data about the number of users of our product and lots of qualitative information about how they use it, there are several factors that we couldn’t easily pin down.
It would be tough to get within a factor of 10 of the real number, and our conclusion was that it would be irresponsible for us to put out a number that inaccurate—no matter how many caveats we surround it with. To me, that was evidence of a cultural difference. When Mark Zuckerberg predicts that AI will replace coders in 2025, nobody talks about the margin of error in that statement.
Changing Perspective: Learning To Look Ahead
My conclusion is that we in the decision intelligence space need to get better at talking about our technology’s groundbreaking potential. While there is clearly value in talking proudly about past accomplishments, I also see value in articulating potential future impact, even if we can’t provide statistically valid confidence intervals around those predictions.
To that end, here’s a prediction about mathematical optimization that I feel safe in making: The world is moving to electrification, with electricity demand predicted to grow faster in the next five years than it has since the 1980s. With new types of generation and storage coming online constantly, including wind, solar and batteries, the complexity of matching supply to demand is exploding.
The primary technology that has made this growth in the grid possible—and that will enable massive future growth and greatly help us as a society reach our ambitious climate goals—is mathematical optimization.
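A toy version of that supply-demand matching problem, known in grid parlance as economic dispatch, can be sketched in a few lines (illustrative capacities and costs, with scipy's `linprog` again standing in for production-grade solvers):

```python
from scipy.optimize import linprog

# Illustrative generators: (name, cost in $/MWh, capacity in MW).
generators = [("wind", 0, 40), ("solar", 0, 30), ("gas", 50, 100)]
demand_mw = 120

costs = [cost for _, cost, _ in generators]
bounds = [(0, cap) for _, _, cap in generators]

# Minimize total generation cost subject to exactly meeting demand.
result = linprog(
    c=costs,
    A_eq=[[1] * len(generators)],  # sum of all generation...
    b_eq=[demand_mw],              # ...must equal demand
    bounds=bounds,
)

for (name, _, _), mw in zip(generators, result.x):
    print(f"{name}: {mw:.0f} MW")  # wind: 40, solar: 30, gas: 50
print(f"cost: ${result.fun:.0f}")  # cost: $2500
```

Real dispatch adds unit commitment, ramp rates, network constraints and battery state of charge, which is exactly where the explosion in complexity comes from and where industrial optimization solvers earn their keep.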
I encourage everyone in our field to speak more about the future that our technology will make possible.
