Making AI systems more trustworthy through uncertainty disentanglement
Room A008
- time: 11h00-12h00
Given the increasing use of machine learning (ML) models for decisions that directly affect humans, it is essential that these models not only provide accurate predictions but also offer a credible representation of their uncertainty. Recent advances have led to probabilistic models capable of disentangling two types of uncertainty: aleatoric and epistemic. Aleatoric uncertainty is inherent to the data and cannot be eliminated, while epistemic uncertainty stems from the ML model itself and can be reduced with better modeling approaches or more data. In this talk, I will elaborate on the opportunities and limitations of uncertainty disentanglement in explaining why an ML model fails to deliver accurate predictions. Furthermore, I will discuss several use cases that demonstrate the potential of uncertainty disentanglement for biotechnology applications.
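The abstract does not specify how the disentanglement is computed; one common approach (an illustrative assumption here, not necessarily the speaker's method) uses an ensemble of models: the entropy of the averaged prediction is the total uncertainty, the average per-member entropy approximates the aleatoric part, and their difference (a mutual information) captures the epistemic part. A minimal sketch for a classifier:

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def disentangle(member_probs):
    """Split an ensemble's predictive uncertainty for one input.

    member_probs: one class-probability vector per ensemble member.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    n = len(member_probs)
    k = len(member_probs[0])
    # Mean predictive distribution across ensemble members
    mean = [sum(p[c] for p in member_probs) / n for c in range(k)]
    total = entropy(mean)                                   # total predictive uncertainty
    aleatoric = sum(entropy(p) for p in member_probs) / n   # expected data-inherent noise
    epistemic = total - aleatoric                           # member disagreement (mutual information)
    return total, aleatoric, epistemic

# Members agree the input is ambiguous: uncertainty is purely aleatoric.
t1, a1, e1 = disentangle([[0.5, 0.5], [0.5, 0.5]])
# Members disagree confidently: a large epistemic component appears.
t2, a2, e2 = disentangle([[0.9, 0.1], [0.1, 0.9]])
```

In the first case the epistemic term is zero (more data will not help); in the second, the disagreement signals reducible model uncertainty, which is the distinction the talk builds on.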