Decision-making in science and engineering is largely informed by model-based predictions: we first pose and refine the questions we want to answer, and then build models to answer them. However, predictions without uncertainty quantification (UQ) do not provide the trust needed to inform decisions. Conversely, when neither the questions nor the underlying models are well defined, machine learning (ML) can be used to develop data-driven models. Again, the predictions of ML models cannot reliably support decision-making without UQ.

In a nutshell, UQ is the mathematical treatment of uncertainty in numerical models. More specifically, UQ identifies the sources of uncertainty and quantifies their impact on the behavior of the model in order to:

  • Enable robust predictions (forward UQ)
  • Infer uncertainties in the model from data (inverse UQ)
  • Calculate the probability of failure (reliability analysis), as illustrated in the sketch after this list
  • Prioritize the sources of uncertainty (sensitivity analysis)
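
To make the first and third of these concrete, here is a minimal sketch of forward UQ and reliability analysis by Monte Carlo simulation, written in plain Python/NumPy; the response function, input distributions, and failure threshold are hypothetical and chosen purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical response function y = g(x1, x2), standing in for an
    # expensive computational model.
    def model(x):
        return x[:, 0] ** 2 + np.sin(3.0 * x[:, 1])

    # Forward UQ: propagate two independent standard-normal inputs through
    # the model by Monte Carlo sampling and summarize the output.
    n = 100_000
    x = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    y = model(x)
    print("mean of y:", y.mean(), "variance of y:", y.var())

    # Reliability analysis: estimate the probability that the response
    # exceeds a (hypothetical) failure threshold.
    threshold = 3.0
    print("estimated probability of failure:", np.mean(y > threshold))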

My research centers on methodological development for uncertainty quantification and machine learning. Specific projects include:

  • Manifold learning for low-dimensional representation of high-dimensional models
  • Neural networks for supervised learning tasks
  • Uncertainty quantification in machine learning models
  • Active learning for exploring design spaces
  • Statistical inference in the presence of small or incomplete data
  • Sensitivity and reliability analysis on the manifold

I am also interested in scientific software design and development. I am the Lead Developer of UQpy, a general-purpose, open-source Python package that provides a wide variety of methods for forward and inverse propagation of uncertainty, surrogate modeling, and reliability and sensitivity analysis.
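
As a flavor of the workflows UQpy enables, the minimal sketch below draws Monte Carlo samples from two independent input distributions. It assumes the module layout and class names of recent UQpy releases (UQpy.distributions, UQpy.sampling), which may differ in older versions, so treat it as an illustrative starting point rather than a definitive recipe.

    # Illustrative only: module paths and class names assume a recent UQpy
    # release; consult the UQpy documentation for your installed version.
    from UQpy.distributions import Normal
    from UQpy.sampling import MonteCarloSampling

    # Two independent standard-normal inputs, sampled 1,000 times.
    mcs = MonteCarloSampling(
        distributions=[Normal(loc=0.0, scale=1.0), Normal(loc=0.0, scale=1.0)],
        nsamples=1000,
    )

    # mcs.samples holds the input realizations, ready to be pushed through
    # a computational model for forward propagation of uncertainty.
    print(mcs.samples.shape)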