Deep Gaussian Processes (DGPs) [1] were introduced as a deep learning model that extends traditional single-layer Gaussian process models. Unlike an ordinary deep neural network (e.g. a stacked RBM), the model evidence of a DGP admits an analytical approximation via variational methods, which makes training and inference tractable within a Bayesian framework. As a consequence, DGPs naturally guard against overfitting and automatically select the dimensionality of their learned latent spaces, making them particularly well suited to situations in which data is scarce and traditional deep learning approaches would fail.
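The core construction behind a DGP is the composition of Gaussian process layers: the output of one GP-distributed function serves as the input to the next. As a minimal illustrative sketch (not the variational training procedure of the paper), the following NumPy snippet draws a sample from a two-layer DGP prior by composing two independent GP draws with RBF kernels; the function names and kernel settings here are illustrative choices, not taken from the source.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two input sets."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sample_gp_layer(X, rng, jitter=1e-6):
    """Draw one function sample f ~ GP(0, k) evaluated at the rows of X."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))  # jitter for stability
    L = np.linalg.cholesky(K)
    return (L @ rng.standard_normal(len(X)))[:, None]

rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 100)[:, None]
h = sample_gp_layer(X, rng)   # hidden layer: h = f1(X)
y = sample_gp_layer(h, rng)   # output layer: y = f2(f1(X))
```

Even though each layer uses a stationary kernel, the composed sample `y` is generally nonstationary as a function of the original input `X`, which is one intuition for the added modeling capacity of depth.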
These properties motivate us to consider the application of DGPs to tasks in uncertainty quantification (UQ). To obtain a complete characterization of the model uncertainty, we extend the model proposed in [1] to additionally treat all model hyperparameters in a Bayesian manner, allowing us to quantify the epistemic uncertainty in the trained models that arises from limited data.
We demonstrate how our fully Bayesian DGP can be used both to perform nonlinear dimensionality reduction on high-dimensional stochastic inputs and to construct surrogate models that characterize stochastic systems’ response surfaces. The former is accomplished with an unsupervised variant that allows us both to project observed inputs to a reduced-dimensional representation and to uplift any point in the learned latent space to generate a corresponding instance in data space. Both mappings are probabilistic and thus rigorously encode the uncertainty of the dimensionality reduction. For the latter, we show that supervised DGP models offer enhanced modeling capacity over standard single-layer Gaussian processes, enabling us to model systems that exhibit complex, nonstationary behavior and sensitivity to initial conditions.
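For reference, the single-layer baseline that the supervised DGP extends is standard GP regression, whose posterior mean and pointwise uncertainty have a closed form. The sketch below implements that baseline in plain NumPy under assumed settings (fixed RBF lengthscale, small observation noise); it is a minimal illustration, not the surrogate-modeling pipeline of the paper.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """RBF kernel with fixed lengthscale ell (an illustrative choice)."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * sq / ell**2)

def gp_posterior(X, y, Xs, noise=1e-4, ell=0.5):
    """Closed-form GP posterior mean and std. dev. at test inputs Xs."""
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ell)
    Kss = rbf(Xs, Xs, ell)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy surrogate: 20 noisy evaluations of a smooth response
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(6.0 * X[:, 0])
Xs = np.linspace(0.0, 1.0, 100)[:, None]
mean, std = gp_posterior(X, y, Xs)
```

A single stationary GP of this kind struggles when the response changes character across the input domain; replacing the fixed kernel with learned GP layers is what gives the DGP surrogate its extra flexibility.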
[1] A. C. Damianou and N. D. Lawrence, “Deep Gaussian Processes,” AISTATS 2013.