Parallel approaches for the Bayesian Gaussian process latent variable model; Regularization in optimization
Parallel approaches for the Bayesian Gaussian process latent variable model [Slides]
Steven Atkinson
We consider the task of training a Bayesian GP-LVM and identify opportunities to exploit parallelism in computing the collapsed lower bound on the model evidence and its gradients with respect to the variational parameters. We will also discuss the implementation within a C++ code base for deep GPs.
Reference:
Gal, Yarin, Mark van der Wilk, and Carl Edward Rasmussen. "Distributed variational inference in sparse Gaussian process regression and latent variable models." Advances in Neural Information Processing Systems. 2014.
Regularization in optimization [Slides]
Govinda Anantha Padmanabha