

Expectation Propagation, Model uncertainty in RANS simulation, Variational Auto-Encoders

Introduction to Expectation Propagation [Slides]

Souvik Chakraborty

 

Expectation propagation (EP) is an approximate Bayesian inference algorithm that constructs tractable approximations to complex probability distributions. EP extends assumed density filtering (ADF), which is itself a generalization of the Kalman filter to non-Gaussian models. This seminar reviews EP and the variants that have been developed over the last decade or so.
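For concreteness, here is a minimal sketch of the EP loop on the one-dimensional "clutter problem" from Minka (2001), where a Gaussian of unknown mean is observed through a mixture with a broad clutter component. Function and parameter names are illustrative, not taken from the slides.

```python
# A minimal EP sketch, assuming the 1-D "clutter problem" from Minka (2001):
# theta ~ N(0, prior_var), y_i ~ (1-w) N(theta, 1) + w N(0, clutter_var).
import numpy as np

def ep_clutter(y, w=0.25, prior_var=100.0, clutter_var=10.0, n_sweeps=20):
    """Return the EP approximation q(theta) = N(m, v)."""
    n = len(y)
    r = np.zeros(n)          # site precisions
    b = np.zeros(n)          # site precision-times-mean
    v, m = prior_var, 0.0    # q starts at the prior
    for _ in range(n_sweeps):
        for i in range(n):
            # 1. Cavity: divide site i out of q (natural parameters subtract).
            r_cav = 1.0 / v - r[i]
            if r_cav <= 0:   # skip update if the cavity would be improper
                continue
            v_cav = 1.0 / r_cav
            m_cav = v_cav * (m / v - b[i])
            # 2. Tilted distribution: cavity times the exact mixture likelihood.
            s = v_cav + 1.0
            z_in = np.exp(-0.5 * (y[i] - m_cav)**2 / s) / np.sqrt(2*np.pi*s)
            z_out = np.exp(-0.5 * y[i]**2 / clutter_var) / np.sqrt(2*np.pi*clutter_var)
            rho = (1 - w) * z_in / ((1 - w) * z_in + w * z_out)  # P(inlier)
            m1 = m_cav + v_cav * (y[i] - m_cav) / s   # inlier-branch mean
            v1 = v_cav - v_cav**2 / s                 # inlier-branch variance
            # 3. Moment-match the two-component tilted mixture.
            mean = rho * m1 + (1 - rho) * m_cav
            e2 = rho * (v1 + m1**2) + (1 - rho) * (v_cav + m_cav**2)
            var = e2 - mean**2
            # 4. New q is the matched Gaussian; new site is q / cavity.
            v, m = var, mean
            r[i] = 1.0 / v - 1.0 / v_cav
            b[i] = m / v - m_cav / v_cav
    return m, v

rng = np.random.default_rng(0)
inliers = rng.normal(2.0, 1.0, size=50)              # true theta = 2
clutter = rng.normal(0.0, np.sqrt(10.0), size=50)
y = np.where(rng.random(50) < 0.25, clutter, inliers)
print(ep_clutter(y))   # posterior mean should land near 2
```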

 

References:

Minka, T. P. (2001). Expectation propagation for approximate Bayesian inference. Uncertainty in Artificial Intelligence, 17, 362–369.

Minka, T. P. (2001). A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology.

Minka, T. P. (2005). Power EP. Technical Report MSR-TR-2004-149, Microsoft Research, Cambridge.

Qi, Y., and Guo, Y. (2012). Message passing with relaxed moment matching. arXiv preprint arXiv:1204.4166.

Uncertainty Quantification of Model-Form Error with RANS Simulations

Nick Geneva

 

While Reynolds-averaged Navier-Stokes (RANS) simulations are the workhorse of the practical CFD community, the closure models needed to represent the unresolved turbulent scales can introduce significant error into the simulation. We review a Bayesian approach to quantifying this model-form error in order to further improve the accuracy of RANS simulations.
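The inference engine in Xiao et al. (2016) is an iterative ensemble Kalman method. Below is a hedged sketch of a single ensemble Kalman analysis step of the kind used to condition Reynolds-stress perturbations on sparse velocity data; the array shapes, names, and perturbed-observation variant are illustrative assumptions, not code from the paper.

```python
# A hedged sketch of one ensemble Kalman analysis step; this is not the
# paper's implementation, only the style of update it iterates.
import numpy as np

def enkf_update(X, HX, y, obs_var, rng):
    """X:  (n_state, n_ens) ensemble of discrepancy parameters.
       HX: (n_obs, n_ens) predicted observations (e.g. velocities at sensors).
       y:  (n_obs,) measurements; obs_var: observation noise variance."""
    n_ens = X.shape[1]
    Xp  = X  - X.mean(axis=1, keepdims=True)    # state anomalies
    HXp = HX - HX.mean(axis=1, keepdims=True)   # predicted-obs anomalies
    C_xy = Xp @ HXp.T / (n_ens - 1)             # state-observation covariance
    C_yy = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(len(y))
    K = np.linalg.solve(C_yy, C_xy.T).T         # Kalman gain
    # Perturbed observations keep the analysis-ensemble spread consistent.
    Y = y[:, None] + np.sqrt(obs_var) * rng.standard_normal((len(y), n_ens))
    return X + K @ (Y - HX)
```

In an iterative scheme, each outer iteration re-runs the RANS forward model on the updated ensemble to produce a new `HX`, then applies the analysis step again.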

 

References:

Xiao, H., et al. (2016). Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach. Journal of Computational Physics, 324, 115–136.

Variational Auto-Encoders [Slides]

Yinhao Zhu

 

Generative models allow us to create new samples from an underlying (high-dimensional) data distribution instead of running expensive simulations. This seminar reviews the variational auto-encoder (VAE), one of the most successful generative models, which scales variational Bayes to deep neural networks using the reparameterization trick.
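A minimal sketch of the reparameterization trick and the negative evidence lower bound (ELBO) in PyTorch is shown below; the architecture and layer sizes are illustrative assumptions, not the configuration used in the talk.

```python
# Minimal VAE sketch (Kingma & Welling, 2013): reparameterization trick
# plus the negative ELBO. Layer sizes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)        # encoder mean head
        self.logvar = nn.Linear(h_dim, z_dim)    # encoder log-variance head
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through mu and logvar despite the sampling step.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def neg_elbo(x_hat, x, mu, logvar):
    # Reconstruction term plus analytic KL(q(z|x) || N(0, I)).
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```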

 

References:

Kingma, D. P., and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
