MIT conducts research to make AI smarter

An MIT team teaches AI models to recognize what they don't know.

The application of artificial intelligence (AI) is growing rapidly and becoming increasingly intertwined with our daily lives and high-stakes industries such as healthcare, telecom and energy. But with great power comes great responsibility: AI systems sometimes make mistakes or provide uncertain answers that can have significant consequences.

MIT’s Themis AI, co-founded and led by Professor Daniela Rus of the CSAIL lab, offers a groundbreaking solution. Their technology enables AI models to "know what they don't know." This means AI systems can indicate when they are uncertain about their predictions, allowing errors to be prevented before they cause harm.

Why is this so important?
Many AI models, even advanced ones, sometimes exhibit so-called "hallucinations": they produce incorrect or unfounded answers. In sectors where decisions carry heavy weight, such as medical diagnosis or autonomous driving, this can have disastrous consequences. Themis AI has developed Capsa, a platform that applies uncertainty quantification: it measures the uncertainty of AI output in a granular, reliable way.
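As a concrete illustration of what "quantifying uncertainty" can mean in its simplest form (this is a generic textbook measure, not Capsa's own method): the entropy of a classifier's predicted probability distribution is low when the model is confident and high when the probability mass is spread out.

```python
import math

# Predictive entropy: a simple, generic uncertainty measure for a
# classifier's output distribution (illustrative; Capsa supports
# richer uncertainty-quantification methods).
def predictive_entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.02, 0.01]   # nearly all mass on one class
uncertain = [0.34, 0.33, 0.33]   # mass spread almost evenly

# A confident prediction has much lower entropy than an uncertain one.
print(predictive_entropy(confident) < predictive_entropy(uncertain))
```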

How does it work?
By teaching models uncertainty awareness, they can provide outputs labeled with a risk or confidence score. For example: a self-driving car can indicate that it is unsure about a situation and therefore trigger human intervention. This not only increases safety but also user trust in AI systems.
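The hand-off described above can be sketched in a few lines. Note that the function name, the threshold value, and the labels here are purely illustrative assumptions, not part of Capsa:

```python
# Illustrative sketch: route a model output based on its reported risk
# score, escalating uncertain cases to a human. The threshold is an
# assumed, application-specific cut-off.
RISK_THRESHOLD = 0.2

def route_decision(prediction, risk, threshold=RISK_THRESHOLD):
    """Accept confident predictions; escalate uncertain ones to a human."""
    if risk > threshold:
        return "escalate_to_human"
    return prediction

print(route_decision("brake", risk=0.05))  # low risk: prediction is acted on
print(route_decision("brake", risk=0.65))  # high risk: human intervention
```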

Examples of technical implementation

  • When integrated with PyTorch, the model is wrapped via capsa_torch.wrapper(), and the output then consists of both the prediction and the risk:

Python example with Capsa
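The original code sample is missing here, and Capsa's actual capsa_torch.wrapper() API is not shown in the article. The pure-Python sketch below only mimics the pattern it describes: wrapping a predictor so that every call returns both a prediction and a risk score, with the risk estimated from the spread across repeated stochastic forward passes. All names and mechanics are assumptions, not Capsa's real interface.

```python
import random
import statistics

# Hypothetical stand-in for a Capsa-style wrapper: wrap a predictor so
# each call returns (prediction, risk). Risk is the standard deviation
# across stochastic forward passes (sampling-based uncertainty).
def wrap_with_risk(predict, n_samples=50):
    def wrapped(x):
        samples = [predict(x) for _ in range(n_samples)]
        prediction = statistics.mean(samples)
        risk = statistics.stdev(samples)
        return prediction, risk
    return wrapped

# Toy stochastic model: noisier (less certain) far from x = 0.
def noisy_model(x):
    return 2 * x + random.gauss(0, 0.01 + abs(x))

random.seed(0)
wrapped_model = wrap_with_risk(noisy_model)
pred_near, risk_near = wrapped_model(0.1)
pred_far, risk_far = wrapped_model(5.0)
# The wrapper reports a higher risk score in the region where the
# underlying model is less reliable.
print(risk_near, risk_far)
```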

For TensorFlow models, Capsa works with a decorator:

TensorFlow example with Capsa
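The TensorFlow sample is also missing, and the article does not name Capsa's actual decorator, so the generic Python decorator below only sketches the idea: augmenting a prediction function so it additionally returns a confidence score. The decorator name, the sampling scheme, and the confidence formula are all illustrative assumptions.

```python
import functools
import random
import statistics

# Hypothetical decorator in the spirit of the Capsa integration the
# article describes (the real decorator's name and signature may differ):
# it augments a prediction function with a confidence score in (0, 1].
def with_confidence(n_samples=50):
    def decorate(predict):
        @functools.wraps(predict)
        def wrapped(x):
            samples = [predict(x) for _ in range(n_samples)]
            spread = statistics.stdev(samples)
            confidence = 1.0 / (1.0 + spread)  # more spread, less confidence
            return statistics.mean(samples), confidence
        return wrapped
    return decorate

@with_confidence()
def score(x):
    # Toy stochastic scorer; in practice this would be a model's forward pass.
    return x + random.gauss(0, abs(x) * 0.5)

random.seed(1)
score_a, conf_a = score(0.2)   # low-noise input
score_b, conf_b = score(4.0)   # high-noise input: lower confidence
print(conf_a, conf_b)
```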

The impact for companies and users
For NetCare and its clients, this technology represents a major step forward. We can deliver AI applications that are not only intelligent but also safer and more predictable, with a lower risk of hallucinations. It helps organizations make better-informed decisions and reduce risks when introducing AI into mission-critical applications.

Conclusion
The MIT team shows that the future of AI is not only about becoming smarter, but above all about operating more safely and fairly. At NetCare we believe AI only becomes truly valuable when it is transparent about its own limitations. With advanced uncertainty quantification tools like Capsa, you can put that vision into practice.

Gerard

Gerard works as an AI consultant and manager. With extensive experience at large organizations, he can unravel a problem exceptionally quickly and work toward a solution. Combined with his economics background, this enables him to make commercially sound decisions.