MIT Team Teaches AI Models to Know What They Don't Know

The application of artificial intelligence (AI) is rapidly expanding, becoming increasingly intertwined with our daily lives and high-stakes industries such as healthcare, telecommunications, and energy. However, with great power comes great responsibility: AI systems sometimes make errors or provide uncertain answers that can have significant consequences.

MIT’s Themis AI, co-founded and led by Professor Daniela Rus of the CSAIL lab, offers a groundbreaking solution. Their technology enables AI models to “know what they don’t know.” This means AI systems can indicate when they are uncertain about their predictions, preventing errors before they cause harm.

Why is this so important?
Many AI models, even advanced ones, sometimes exhibit “hallucinations”: they confidently provide incorrect or unfounded answers. In sectors where decisions carry significant weight, such as medical diagnosis or autonomous driving, this can have disastrous consequences. Themis AI developed Capsa, a platform for uncertainty quantification: it measures the uncertainty of an AI model's output in a detailed and reliable manner.
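To make the idea of uncertainty quantification concrete, here is a minimal, library-agnostic sketch (this is not Capsa's actual method or API): several independently trained models predict on the same input, and the spread of their answers serves as an uncertainty estimate.

```python
# Library-agnostic sketch of uncertainty quantification (not Capsa's API):
# the disagreement within an ensemble of models is used as a risk signal.
from statistics import mean, pstdev

def ensemble_predict(models, x):
    """Return (mean prediction, uncertainty) for an ensemble of models."""
    outputs = [model(x) for model in models]
    return mean(outputs), pstdev(outputs)

# Three hypothetical regressors that mostly agree near x = 1.0.
models = [lambda x: 2.0 * x, lambda x: 2.1 * x, lambda x: 1.9 * x]
pred, risk = ensemble_predict(models, 1.0)  # small spread -> low uncertainty
```

When the models disagree strongly, `risk` grows, signaling that the prediction should not be trusted blindly.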

How does it work?
Capsa instills uncertainty awareness in models, so every output can carry a risk or reliability label. A self-driving car, for example, can signal that it is unsure about a situation and hand control over for human intervention. This increases not only safety but also user trust in AI systems.
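The decision logic described above can be sketched in a few lines. This is a hypothetical illustration (the threshold value and function names are assumptions, not part of Capsa): act on the prediction when the risk is low, and defer to a human otherwise.

```python
# Hypothetical sketch: an uncertainty-aware model returns (prediction, risk);
# above a risk threshold we defer to a human instead of acting autonomously.
RISK_THRESHOLD = 0.3  # illustrative value, tuned per application in practice

def decide(prediction, risk, threshold=RISK_THRESHOLD):
    """Act autonomously when confident; otherwise request a human handoff."""
    if risk > threshold:
        return "request_human_intervention"
    return prediction

low_risk_action = decide("continue_lane", 0.05)   # confident -> act
high_risk_action = decide("continue_lane", 0.80)  # uncertain -> hand off
```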

Examples of technical implementation

  • When integrating with PyTorch, the model is wrapped via capsa_torch.wrapper(); the wrapped model then returns both the prediction and an associated risk estimate.
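The original code sample is missing here. Since Capsa's exact signature is not shown in the article, the following is a framework-free sketch of the wrapping pattern it describes: a wrapper takes an existing model and returns one that emits a (prediction, risk) pair. All names (`wrapper`, `risk_fn`) are illustrative assumptions, not the real capsa_torch API.

```python
# Framework-free sketch of the wrapping pattern described above
# (capsa_torch.wrapper() itself is not reproduced; names are illustrative).
def wrapper(risk_fn):
    """Wrap a plain model so each call returns (prediction, risk)."""
    def wrap(model):
        def wrapped(x):
            pred = model(x)
            return pred, risk_fn(x, pred)
        return wrapped
    return wrap

# Toy model and toy risk function: inputs outside the assumed training
# range [0, 1] are flagged as fully uncertain.
model = lambda x: 2.0 * x
risk_fn = lambda x, pred: 0.0 if 0.0 <= x <= 1.0 else 1.0

wrapped_model = wrapper(risk_fn)(model)
pred, risk = wrapped_model(0.5)  # in range -> low risk
```

The caller receives the risk alongside the prediction, so downstream code can branch on it without changing the underlying model.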

  • For TensorFlow models, Capsa works with a decorator that attaches a risk estimate to the model's output.
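The TensorFlow sample is likewise missing. As a stand-in, here is the decorator pattern in plain Python (not Capsa's real decorator; `with_risk` and its risk heuristic are illustrative assumptions):

```python
# Decorator-style sketch mirroring the integration described above
# (not Capsa's actual decorator; names and logic are illustrative).
def with_risk(model_fn):
    """Decorator: attach a simple out-of-range risk score to the output."""
    def wrapped(x):
        pred = model_fn(x)
        risk = 0.0 if -1.0 <= x <= 1.0 else 1.0  # toy uncertainty heuristic
        return pred, risk
    return wrapped

@with_risk
def model(x):
    return 3.0 * x

pred, risk = model(0.5)  # within the trusted range -> low risk
```

The decorator form means existing model functions gain a risk output without any change to their own body.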

The impact for businesses and users
For NetCare and its clients, this technology represents a huge step forward. We can deliver AI applications that are not only intelligent but also safe and more predictable, with a reduced chance of hallucinations. It helps organizations make better-informed decisions and reduce risks when implementing AI in business-critical applications.

Conclusion
The MIT team demonstrates that the future of AI is not just about becoming smarter, but primarily about functioning more safely and fairly. At NetCare, we believe that AI truly becomes valuable when it is transparent about its own limitations. With advanced uncertainty quantification tools like Capsa, you can put that vision into practice.

Gerard

Gerard serves as an AI consultant and manager. With extensive experience at large organizations, he excels at quickly dissecting complex problems and developing effective solutions. His background in economics also ensures that choices remain financially sound and commercially viable.
