[Paper Reading] A Fast Learning Algorithm for Deep Belief Nets
DBN
It is an unsupervised probabilistic generative graphical model that learns the data distribution P(X), whereas LeNet, AlexNet, and similar networks are discriminative models that learn P(Y|X).
The top two layers of a DBN form an undirected bipartite graph called a Restricted Boltzmann Machine (RBM)
The lower layers form a directed sigmoid belief network
A DBN can be formed by “stacking” RBMs. Later work often used autoencoders as the building block instead.
Greedy, layer-by-layer learning
Optionally fine-tuned with gradient descent and backpropagation.
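The greedy layer-by-layer idea can be sketched as follows: train one RBM on the raw data, then treat its hidden-unit probabilities as the "data" for the next RBM, and repeat. This is a minimal NumPy sketch using CD-1 updates, not the paper's exact algorithm; the helper name `train_rbm`, the layer sizes, and the tiny random dataset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    """Train a single RBM with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))  # visible-hidden weights
    b = np.zeros(n_visible)  # visible biases
    c = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W + c)                        # p(h=1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden
        pv1 = sigmoid(h0 @ W.T + b)                       # reconstruction
        ph1 = sigmoid(pv1 @ W + c)
        # CD-1 update: <v h>_data - <v h>_reconstruction
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        b += lr * (v0 - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Greedy stacking: each RBM's hidden probabilities feed the next layer.
data = (rng.random((100, 16)) < 0.5).astype(float)  # toy binary data
stack, x = [], data
for n_hidden in [8, 4]:
    W, b, c = train_rbm(x, n_hidden)
    stack.append((W, b, c))
    x = sigmoid(x @ W + c)  # propagate up; becomes input to the next RBM
```

After this unsupervised pass, the stacked weights can initialize a feed-forward network that is fine-tuned with backpropagation, as noted above.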
RBM
RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph
Architecture: an RBM has an input layer (also referred to as the visible layer) and a single hidden layer, with restricted connections: every visible unit connects to every hidden unit, but there are no visible-visible or hidden-hidden connections. So the connectivity between the two layers looks like a single fully connected MLP layer.
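A consequence of the bipartite restriction is that the hidden units are conditionally independent given the visible units (and vice versa), so the posterior factorizes into per-unit sigmoids. A small sketch of the standard energy function and factorized conditional; the variable names and toy sizes are my own, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, W, b, c):
    # Joint energy of a visible/hidden configuration:
    # E(v, h) = -b·v - c·h - v·W·h
    return -(b @ v) - (c @ h) - (v @ W @ h)

def p_h_given_v(v, W, c):
    # No hidden-hidden connections => the conditional factorizes:
    # p(h_j = 1 | v) = sigmoid(c_j + sum_i v_i W_ij)
    return sigmoid(v @ W + c)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (4, 3))   # 4 visible units, 3 hidden units
b, c = np.zeros(4), np.zeros(3)
v = np.array([1.0, 0.0, 1.0, 0.0])
h = np.array([1.0, 0.0, 1.0])
e = energy(v, h, W, b, c)
p = p_h_given_v(v, W, c)  # three independent Bernoulli probabilities
```

This factorization is what makes Gibbs sampling in an RBM cheap: a whole layer can be sampled in one vectorized step.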