NXP opens a window into how ML algorithms work

As we place more decision-making on machine learning (ML) algorithms, companies are investing in technologies that let algorithms “show their work” so that we can avoid errors. I wrote about this need back in January for IEEE Spectrum, and this month I had a conversation with Gowri Chindalore, head of technology and business strategy for NXP’s microcontrollers business, about how the chip giant is trying to help data scientists build what it calls explainable AI.
At its core, NXP is trying to help machine learning models alert data scientists when they work from a compromised image (maybe it’s blurry or super cropped) or images they have never encountered before. To understand why this helps, we should probably talk about how machine learning models are built and what actually happens when an AI identifies an object or recognizes an anomaly.
NXP wants to improve ML models so they can “show their work.” When building a model, a data scientist inputs a lot of data into a computer. In image recognition, for example, to teach a computer to “see” COVID-19 in lungs, people first annotate the data (tell the computer what it’s looking at) and then the data scientist feeds those images into the computer. From there, the computer starts spitting out suggestions, and the data scientist tweaks the way the computer weighs different values to get it closer to an accurate (and, in this case, correctly COVID-19-positive) diagnosis.
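The annotate-feed-tweak loop above can be sketched in a few lines of code. This is purely an illustrative toy, not NXP’s method: the “images” are reduced to made-up two-number feature vectors, the labels are the annotations, and the model is a simple logistic regression whose weights get nudged toward better answers.

```python
import math

# Toy stand-in for the annotation step: each "image" is reduced to two
# hand-picked features, and the label says COVID-19 (1) or healthy (0).
annotated_data = [
    ([0.9, 0.8], 1),  # features suggestive of COVID-19
    ([0.8, 0.9], 1),
    ([0.1, 0.2], 0),  # features of a healthy lung
    ([0.2, 0.1], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

def predict(features):
    """Weighted sum of the features, squashed to a 0-1 probability."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Tweaking the weights": for each annotated example, nudge every weight
# in the direction that shrinks the gap between guess and annotation.
for _ in range(1000):
    for features, label in annotated_data:
        error = predict(features) - label
        for i, x in enumerate(features):
            weights[i] -= learning_rate * error * x
        bias -= learning_rate * error
```

After the loop, `predict` leans heavily toward the correct label for inputs resembling the training data, which is the behavior the data scientist is steering toward.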
But the data scientist doesn’t really “know” how the computer draws its conclusion about what it sees. Indeed, this black-box situation can result in hilarity when machine learning algorithms come to radically different conclusions than people. But it’s also worth noting that with machine learning, computers don’t come to a definitive conclusion; they produce a probability. So with our example, the computer will look at an image of X-rayed lungs, run that image through its algorithm, and declare that it is 90% sure the lungs in the image match those of people diagnosed with COVID-19.
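That “90% sure” figure typically comes from converting the model’s raw output scores into probabilities. A common way to do this is a softmax function; the class names and score values below are made up for illustration.

```python
import math

def softmax(logits):
    """Turn raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(logits)) for s in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an X-ray classifier's final layer,
# one per class.
classes = ["covid", "pneumonia", "healthy"]
probs = softmax([3.1, 0.9, 0.5])

# The model's "conclusion" is just the most probable class, reported
# with its probability rather than as a definitive yes/no.
best = max(range(len(probs)), key=probs.__getitem__)
print(f"{classes[best]}: {probs[best]:.0%} confident")
```

The key point is that even the winning class only carries a probability, never a certainty, which is exactly what makes over-confident mistakes possible.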
The higher that percentage, the more confident the computer is. But sometimes even a really confident computer gets it wrong. And when it does, the folks at NXP think it’s being led astray in two ways. The first is when it’s given bad input. For example, the lung X-ray may be blurry. The second is when it encounters something new. Maybe the lung X-ray is from someone who had a rare form of cancer that disfigured their lungs pre-COVID. It’s unlikely that the model was trained with a similar lung image.
But both errors can still lead to high confidence on the part of the computer, which then leads to misclassification. To stop the misclassification, NXP researchers have come up with a way to teach a machine learning model how to “tell” data scientists that the input data is wonky or that it has just encountered something new and is using an “educated guess” to reach its ultimate conclusion.
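The article doesn’t detail NXP’s technique, but the two warnings it describes can be crudely sketched: a bad-input check (here, flagging images with almost no pixel variation, a rough proxy for blur) and a novelty check (here, flagging predictions whose top probability is low). The thresholds, function names, and signals below are all assumptions made for illustration; a real system would use far more sophisticated measures.

```python
import math

def softmax(logits):
    """Turn raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(logits)) for s in logits]
    total = sum(exps)
    return [e / total for e in exps]

def variance(pixels):
    """Spread of the pixel values; near zero means almost no detail."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

# Hypothetical thresholds -- a real system would calibrate these on held-out data.
BLUR_THRESHOLD = 0.01        # very low pixel variance suggests a blurry image
CONFIDENCE_THRESHOLD = 0.7   # a low top probability suggests unfamiliar input

def classify_with_warnings(pixels, logits):
    """Return the prediction plus any warnings about why it might be wrong."""
    warnings = []
    if variance(pixels) < BLUR_THRESHOLD:
        warnings.append("input may be compromised (low detail/blur)")
    probs = softmax(logits)
    top = max(probs)
    if top < CONFIDENCE_THRESHOLD:
        warnings.append("input looks unfamiliar; treating this as an educated guess")
    return probs.index(top), top, warnings
```

Instead of silently emitting a label, the model hands the data scientist its answer plus the reasons to distrust it, which is the behavior NXP is after.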
The idea is...