Deep Feedforward Networks, Bayesian Gaussian Process Latent Variable Model
Deep Feedforward Networks [Slides]
Navid Shervani-Tabar
We review deep feedforward networks. The general setup and design decisions are discussed: choosing the optimizer, the cost function, and the form of the output units. We review the basics of gradient-based learning, then proceed to confront further design decisions, e.g. choosing the activation functions used to compute the hidden layer values and the architecture of the network.
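As a concrete illustration of these design decisions, the sketch below builds a small classifier in PyTorch with ReLU hidden units, softmax output units paired with a cross-entropy cost, and SGD as the optimizer; the architecture, toy data, and hyperparameters are illustrative choices, not taken from the slides.

    import torch
    import torch.nn as nn

    # A small feedforward classifier illustrating common design choices:
    # ReLU hidden activations, a linear output layer paired with a
    # cross-entropy cost (i.e. softmax output units), and SGD as the optimizer.
    model = nn.Sequential(
        nn.Linear(2, 32),   # input features -> hidden units
        nn.ReLU(),          # hidden activation function
        nn.Linear(32, 3),   # hidden units -> 3 class logits
    )
    loss_fn = nn.CrossEntropyLoss()                            # cost function
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    # Toy data: 2-D inputs with labels in {0, 1, 2} (illustrative only)
    X = torch.randn(256, 2)
    y = (X[:, 0] > 0).long() + (X[:, 1] > 0).long()

    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)   # forward pass and cost
        loss.backward()               # backpropagation of gradients
        optimizer.step()              # gradient-based parameter update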
Implementation of Neural networks using PyTorch
We review and discuss the structure and implementation of basic neural networks using PyTorch. Polynomial fitting, classification, and mixture density networks are discussed, along with coding details for replicating results from the literature. [Slides] References: Bishop, Christopher M. Pattern Recognition and Machine Learning, Chapter 5, Springer, 2006.
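As a minimal example of the polynomial-fitting exercise, the sketch below fits a cubic to noisy samples of sin(2*pi*x) by gradient descent in PyTorch; the degree, learning rate, and toy data are illustrative assumptions rather than the settings used in the talk.

    import torch

    # Fit a cubic polynomial to noisy samples of sin(2*pi*x) by gradient descent.
    torch.manual_seed(0)
    x = torch.linspace(0, 1, 30)
    t = torch.sin(2 * torch.pi * x) + 0.1 * torch.randn(30)

    # Design matrix of monomials [1, x, x^2, x^3]
    Phi = torch.stack([x ** k for k in range(4)], dim=1)
    w = torch.zeros(4, requires_grad=True)        # polynomial coefficients
    optimizer = torch.optim.Adam([w], lr=0.1)

    for step in range(2000):
        optimizer.zero_grad()
        loss = torch.mean((Phi @ w - t) ** 2)     # mean squared error
        loss.backward()
        optimizer.step()

    print(w.detach())                             # fitted coefficients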
Sparse Gaussian Processes
We review the motivation for and implementation of sparse Gaussian processes. Special attention is given to the variational method of Titsias (2009), which addresses many of the shortcomings of the previous state of the art and serves as a foundation for many current extensions. [Slides] References: Titsias, M. "Variational Learning of Inducing Variables in Sparse Gaussian Processes." International Conference on Artificial Intelligence and Statistics (2009): 567-574.
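For reference, a naive NumPy sketch of the collapsed variational lower bound of Titsias (2009) is given below, F = log N(y | 0, Q_nn + s^2 I) - (1/(2 s^2)) tr(K_nn - Q_nn) with Q_nn = K_nm K_mm^{-1} K_mn; the RBF kernel, jitter, and toy data are illustrative assumptions, and the O(n^2) form shown is written for clarity rather than numerical efficiency.

    import numpy as np

    def rbf(X1, X2, lengthscale=1.0, variance=1.0):
        # Squared-exponential (RBF) kernel matrix
        d2 = (np.sum(X1 ** 2, 1)[:, None] + np.sum(X2 ** 2, 1)[None, :]
              - 2.0 * X1 @ X2.T)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def titsias_bound(X, y, Z, noise_var, **kern):
        # Collapsed variational lower bound of Titsias (2009):
        #   F = log N(y | 0, Q_nn + s^2 I) - 1/(2 s^2) tr(K_nn - Q_nn),
        # with Q_nn = K_nm K_mm^{-1} K_mn and Z the inducing inputs.
        n = X.shape[0]
        knn_diag = np.full(n, kern.get("variance", 1.0))   # RBF diagonal = variance
        Kmm = rbf(Z, Z, **kern) + 1e-6 * np.eye(len(Z))    # jitter for stability
        Knm = rbf(X, Z, **kern)
        Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
        L = np.linalg.cholesky(Qnn + noise_var * np.eye(n))
        alpha = np.linalg.solve(L, y)
        log_marg = -0.5 * (n * np.log(2 * np.pi)
                           + 2.0 * np.sum(np.log(np.diag(L)))
                           + alpha @ alpha)
        trace_term = -0.5 / noise_var * np.sum(knn_diag - np.diag(Qnn))
        return log_marg + trace_term

    # Toy usage: 200 noisy observations of sin(x), 10 inducing points
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    print(titsias_bound(X, y, Z=X[::20], noise_var=0.01,
                        lengthscale=1.0, variance=1.0))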
Expectation Propagation, Model uncertainty in RANS simulation, Variational Auto-Encoders
Introduction to Expectation Propagation [Slides]
Souvik Chakraborty
Expectation propagation (EP) is an approximate Bayesian inference algorithm that constructs tractable approximations to complex probability distributions. EP is an extension of assumed density filtering, which in turn extends the Kalman filtering algorithm. This seminar reviews EP and the variants of it developed over the last decade or so. References: Minka, T. P. "Expectation Propagation for Approximate Bayesian Inference." Uncertainty in Artificial Intelligence (2001).
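As a concrete illustration, the sketch below runs EP on Minka's one-dimensional clutter problem, where each mixture-likelihood factor is approximated by a Gaussian site via moment matching; the mixing weight, clutter variance, prior variance, and toy data are illustrative assumptions, not values from the talk.

    import numpy as np

    # EP on Minka's 1-D "clutter problem": each observation comes from
    #   (1 - w) * N(theta, 1) + w * N(0, a),   with prior theta ~ N(0, v0).
    # Each mixture-likelihood factor is approximated by an unnormalized Gaussian
    # site, stored in natural parameters (precision tau, precision-times-mean nu).
    w, a, v0 = 0.2, 10.0, 100.0                 # illustrative hyperparameters
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(2.0, 1.0, 40),          # signal near theta = 2
                        rng.normal(0.0, np.sqrt(a), 10)])  # clutter

    def norm_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    n = len(y)
    tau, nu = np.zeros(n), np.zeros(n)          # site parameters
    q_tau, q_nu = 1.0 / v0, 0.0                 # global approximation q(theta)

    for sweep in range(20):
        for i in range(n):
            # 1) cavity distribution: remove site i from q
            c_tau, c_nu = q_tau - tau[i], q_nu - nu[i]
            if c_tau <= 0:
                continue
            c_var, c_mean = 1.0 / c_tau, c_nu / c_tau
            # 2) moments of cavity * exact mixture factor
            z1 = (1 - w) * norm_pdf(y[i], c_mean, c_var + 1.0)  # signal part
            z2 = w * norm_pdf(y[i], 0.0, a)                     # clutter part
            r = z1 / (z1 + z2)
            m1 = c_mean + c_var / (c_var + 1.0) * (y[i] - c_mean)
            v1 = c_var - c_var ** 2 / (c_var + 1.0)
            mean = r * m1 + (1 - r) * c_mean
            var = r * (v1 + m1 ** 2) + (1 - r) * (c_var + c_mean ** 2) - mean ** 2
            # 3) update site i by moment matching, then 4) refresh q
            tau[i], nu[i] = 1.0 / var - c_tau, mean / var - c_nu
            q_tau, q_nu = c_tau + tau[i], c_nu + nu[i]

    print("posterior mean %.3f, variance %.4f" % (q_nu / q_tau, 1.0 / q_tau))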