
Source: Forbes as of 20-04-2020

All too often with data projects, the knee-jerk reaction is to "throw as much AI at it as possible." Given AI's promise, this is understandable, but it pays to step back and think about the problem before rushing in. To work effectively, AI needs to be used in the right way for the right applications. You wouldn't use a precision laser to tailor clothing: it could work, but it would add unnecessary cost and risk with no meaningful improvement over fabric scissors.

Likewise, you shouldn't use complex AI for a problem that can be solved better with something simpler. Complex AI is useful when there is a genuine need to process vast, complex data, as with self-driving cars or Google's image search. But often there isn't. In my experience, AI is frequently rolled out because it is the most powerful tool available, not the most suitable one. At best, this means wasting time and money on something that could have been done faster and cheaper. At worst, it means project failure or opaque systems that users don't trust.

The AI Complexity/Failure Problem

AI applications don't operate like traditional software, which is programmed to respond in a certain way to certain inputs. AI ingests data and learns the relationships within it, and more complexity means more opportunity for confusion and failure. A common situation is as follows: An organization has reams of sensor data coming off its machines (vibrations, temperature, movements and so on) and wants to use it to predict when a machine might fail. It builds a neural network to handle these complex datasets, feeding them in and learning which combinations of sensor measurements are correlated with imminent failure. This may well be a good approach, but the problem with this thinking is that it misses the opportunity to find a potentially better solution. Every problem has a different degree of complexity, and the key is to match the complexity of the machine-learning solution to the complexity of the problem. Anyone who has studied machine learning will be familiar with the trade-off between problem complexity, model complexity and model error.
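To make that trade-off concrete, here is a minimal sketch in Python with scikit-learn, on synthetic stand-in "sensor" data. Every name and number is an illustrative assumption, not from a real project. It fits decision trees of increasing depth and tracks cross-validated accuracy; past some depth, added complexity stops helping and can start to hurt.

```python
# Illustrative sketch of the complexity/error trade-off on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in for machine sensor readings: 20 channels, binary failure label.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=4, random_state=0)

# Grow the model's capacity step by step and watch validation accuracy.
for depth in [1, 2, 4, 8, 16, None]:
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"max_depth={depth}: CV accuracy = {score:.3f}")
```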

There are times when complex AI is the right route; image recognition and natural language, for example, often involve data too complex to handle simply. Google's image search is pretty good, but any search soon starts showing results outside what we expected. We may accept that when searching for shopping inspiration, but not when trying to spot that a machine needs to be shut down immediately. The vast majority of problems are not that complex. "Lots of data" is not the same as "lots of complexity"; a better measure is "lots of relevant data." Often, the data needed to solve the problem at hand comes from just a few sensors. If we take the time to identify those key datasets, we may find a more accurate and robust solution with simpler models we can build quickly. The trick for a successful AI project is finding the optimum level of complexity for the specific problem.
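One way to put "lots of relevant data" into practice is to rank the channels by how much each one actually tells you about the outcome and keep only the top few. The sketch below does this with mutual information on synthetic data; the sensor indices and counts are invented purely for illustration.

```python
# Hedged sketch: separate "lots of data" from "lots of relevant data"
# by ranking synthetic sensor channels against the failure label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=3, random_state=0)

scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]  # most informative channels first

for i in ranking[:5]:
    print(f"sensor_{i:02d}: mutual information = {scores[i]:.3f}")
# Downstream models can then be trained on X[:, ranking[:5]] alone,
# which is often simpler, faster and more robust.
```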

The Most Appropriate Solution

At the start of a data project, we don't usually know where on the complexity scale our problem resides. We can either start simple and work up or start complex and work down. We favor the former, for two reasons. First, if a simple model (LASSO, ridge regression, decision trees, random forests) works, you save time and money, it is easier to deploy, and simpler models are more robust in an industrial setting. Second, you can iteratively add complexity until additional complexity stops reducing error. Statistical models are easy to understand, so as you iterate, you learn what is actually driving the problem in the real world and can make informed improvements rather than guesses based on assumptions about the data. Going from complex to simple, by contrast, is very hard: neural networks are black boxes, and working out what is happening inside them and what is driving their decisions is extremely difficult (though this is an active area of research). In many cases, we have achieved 98% to 99% accuracy with simple models and then built more sophisticated neural networks to explore whether this could be improved, only to find that accuracy drops.
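As an illustration of that start-simple loop, the following sketch baselines the simple models named above with cross-validation before any neural network is considered. The dataset and hyperparameters are assumptions made purely for demonstration, not a recipe.

```python
# Minimal sketch of "start simple, work up": baseline the simple
# candidates first; escalate only if none of them is good enough.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=1000, n_features=15,
                       n_informative=5, noise=10.0, random_state=0)

candidates = {
    "LASSO": Lasso(alpha=1.0),
    "Ridge": Ridge(alpha=1.0),
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    r2 = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: CV R^2 = {r2:.3f}")
# Only if all of these fall short does it make sense to move up to a
# more complex (and less interpretable) model.
```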

The AI Explainability Problem

Another issue with complex AI is that because it learns connections among many intertwined relationships in the data, it is sometimes impossible for a human to understand its decision-making logic. This means complex AI requires trust without certainty from the user. That isn't to say it is wrong, but it is harder to be sure, and that may not be acceptable when decisions are high risk (critical component failure) or require an explanation (a loan rejection, a regulatory case for drug approval).
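To see why simple models sidestep this, consider the sketch below: the weights of an L1-regularized logistic regression map directly to reasons a human can audit. The feature names and data here are hypothetical.

```python
# Sketch of explainability with a simple model: each weight is a
# human-readable reason for the prediction. Names and data are invented.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["temperature", "vibration", "rpm", "pressure"]
X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, n_redundant=0, random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
for name, w in zip(feature_names, clf.coef_[0]):
    print(f"{name}: weight = {w:+.2f}")
# A near-zero weight means that reading played no role; large weights
# identify exactly which inputs drove the decision, something a deep
# network cannot offer without extra interpretability tooling.
```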

A Sensible Approach To AI: Keep It Simple

Many problems stem from the hype around AI: everyone seemingly wants the latest, coolest tech, and once people start looking for AI solutions, they find no shortage of newly formed AI vendors willing to sell them one, irrespective of whether it is the best fit. It's far better to keep things as simple as possible. Start with the simplest likely solution and build accuracy iteratively rather than starting with the most complex, gradually working up until the complexity of the solution matches the complexity of the problem. Although industrial neural networks are sometimes the right answer, more often than not there are more suitable solutions. I believe AI decision-makers should work with people who have experience with their challenge and ask for the best way to solve it, not with people who claim to have the most powerful AI solution. As with anything, the solution should be designed to solve the specific problem, not based on how much firepower you can throw at it. As a saying commonly attributed to Albert Einstein puts it: "Everything should be made as simple as possible, but no simpler."
