The application of artificial intelligence (AI) is growing rapidly and becoming increasingly intertwined with our daily lives and high‑stakes industries such as healthcare, telecom, and energy. But with great power comes great responsibility: AI systems sometimes make mistakes or provide uncertain answers that can have serious consequences.
MIT’s Themis AI, co‑founded and led by Professor Daniela Rus of the CSAIL lab, offers a groundbreaking solution. Their technology enables AI models to ‘know what they don’t know’. This means AI systems can indicate when they are uncertain about their predictions, allowing errors to be prevented before they cause harm.
Why is this so important?
Many AI models, even advanced ones, sometimes exhibit so-called 'hallucinations': confident-sounding answers that are erroneous or unfounded. In sectors where decisions carry heavy weight, such as medical diagnosis or autonomous driving, this can have disastrous consequences. Themis AI developed Capsa, a platform for uncertainty quantification: it measures how uncertain an AI model's output is, in a detailed and reliable manner.
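To make "quantifying uncertainty" concrete, here is a minimal, generic baseline that is not Capsa's method: the entropy of a classifier's predictive distribution, which is low when one class clearly dominates and high when the model is torn between classes.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Entropy of the predictive distribution: near 0 for a fully
    confident prediction, up to log(n_classes) for maximal uncertainty."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([9.0, 0.5, 0.1])   # one class dominates
uncertain = softmax([1.0, 1.1, 0.9])   # classes nearly tied

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Dedicated tools go well beyond this simple score, but the principle is the same: attach a number to each output that says how much the model should be trusted on this input.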
How does it work?
When models are taught uncertainty awareness, they can attach a risk or confidence label to each output. A self-driving car, for example, can signal that it is unsure about a situation and hand control back to a human. This not only enhances safety but also increases users' trust in AI systems.
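A simple sketch of such a human-in-the-loop policy, assuming the model already emits a risk score (the function name and threshold are hypothetical, not part of any Capsa API):

```python
def act_or_defer(prediction, risk, risk_threshold=0.3):
    """Hypothetical deferral policy: act autonomously on low-risk
    outputs, hand high-risk cases to a human operator."""
    if risk > risk_threshold:
        return ("defer_to_human", prediction)
    return ("act", prediction)

print(act_or_defer("lane_change", risk=0.05))  # ('act', 'lane_change')
print(act_or_defer("lane_change", risk=0.62))  # ('defer_to_human', 'lane_change')
```

The threshold encodes how much risk the application tolerates; a medical triage system would set it far lower than a movie recommender.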
In code, this looks like wrapping a model with `capsa_torch.wrapper()`, so that the output consists of both the prediction and the risk:
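Capsa's actual wrapper and its risk methods are more sophisticated than this; the sketch below is a hypothetical stand-in that only mimics the prediction-plus-risk output shape described above, using ensemble disagreement as the risk signal.

```python
import statistics

def wrap(models):
    """Hypothetical stand-in for a prediction-plus-risk wrapper
    (NOT the capsa_torch API): risk is the spread (standard
    deviation) across a small ensemble of models."""
    def wrapped(x):
        outputs = [m(x) for m in models]
        prediction = statistics.mean(outputs)
        risk = statistics.stdev(outputs)
        return prediction, risk
    return wrapped

# Three toy "models" that agree near x=0 and diverge for large x.
models = [lambda x: 2 * x, lambda x: 2 * x + 0.1, lambda x: 2.5 * x]
model = wrap(models)

pred_small, risk_small = model(0.1)
pred_large, risk_large = model(10.0)
print(risk_small < risk_large)  # True: the models disagree more at x=10
```

The caller gets back a `(prediction, risk)` pair instead of a bare prediction, which is exactly the contract that makes downstream safety checks possible.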

Conclusion
The MIT team demonstrates that the future of AI is not only about becoming smarter, but above all about operating more safely and fairly. At NetCare we believe AI only becomes truly valuable when it is transparent about its own limitations. With advanced uncertainty quantification tools such as Capsa, you too can put that vision into practice.