The application of artificial intelligence (AI) is growing rapidly and becoming increasingly intertwined with our daily lives and with high-stakes industries such as healthcare, telecom, and energy. But with great power comes great responsibility: AI systems sometimes make mistakes or give uncertain answers, and that can have major consequences.
MIT’s Themis AI, co-founded and led by Professor Daniela Rus of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), offers a groundbreaking solution: technology that enables AI models to ‘know what they don’t know’. Such models can indicate when they are uncertain about their predictions, so that errors can be caught before they cause harm.
Why is this so important?
Many AI models, even advanced ones, sometimes exhibit so-called ‘hallucinations’: they give incorrect or unfounded answers. In sectors where decisions carry significant weight, such as medical diagnosis or autonomous driving, this can have disastrous consequences. Themis AI therefore developed Capsa, a platform for uncertainty quantification: it measures how uncertain a model is about its output in a detailed and reliable way.
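To make uncertainty quantification concrete, here is a minimal sketch using Monte Carlo dropout, a standard generic technique chosen purely for illustration (it is not necessarily what Capsa does internally): the same input is passed through the network several times with dropout active, and the spread of the predictions serves as the uncertainty estimate.

import torch
import torch.nn as nn

# A small regression network with dropout, purely for illustration.
model = nn.Sequential(
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(32, 1),
)

def predict_with_uncertainty(model, x, n_samples=50):
    # Monte Carlo dropout: run several stochastic forward passes
    # and use the spread of the outputs as an uncertainty estimate.
    model.train()  # keep dropout active during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 8)
prediction, uncertainty = predict_with_uncertainty(model, x)
print(prediction.item(), uncertainty.item())

A high spread across the passes signals an input the model is unsure about, which is exactly the kind of signal a platform like Capsa turns into a usable risk label.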
How does it work?
Capsa teaches models uncertainty awareness, so that every output comes with a risk or reliability label. For example, a self-driving car can indicate that it is uncertain about a situation and hand control over to a human driver. This increases not only safety but also users’ confidence in AI systems.
In PyTorch, a model is wrapped with capsa_torch.wrapper(), after which the output consists of both the prediction and the risk:
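The following is a minimal sketch of what that looks like in code. The wrapper call follows the snippet above, but its exact signature and the (prediction, risk) return format are assumptions here, not verified API details; consult the Capsa documentation for the real interface.

import torch
import torch.nn as nn
import capsa_torch  # Themis AI's Capsa library for PyTorch

# An ordinary PyTorch model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Wrap the model so that every forward pass also returns a risk estimate.
# NOTE: usage assumed from the capsa_torch.wrapper() snippet above.
wrapped_model = capsa_torch.wrapper(model)

x = torch.randn(4, 8)
prediction, risk = wrapped_model(x)  # assumed: (prediction, risk) pair

# The risk value can then drive a decision, as in the self-driving example:
THRESHOLD = 0.5  # hypothetical, application-specific threshold
if risk.mean() > THRESHOLD:
    print("Model is uncertain about this input: request human intervention")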
Conclusion
The MIT team shows that the future of AI is not just about becoming smarter, but above all about functioning more safely and fairly. At NetCare, we believe that AI only becomes truly valuable when it is transparent about its own limitations. With advanced uncertainty quantification tools such as Capsa, you too can put that vision into practice.