References:
[1] HINTON G E, ZEMEL R S. Autoencoders, minimum description length and Helmholtz free energy[C]//Conference on Neural Information Processing Systems (NIPS). Denver, USA, 1993: 3-10.
[2] SOCHER R, HUANG E H, PENNINGTON J, et al. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection[C]//Proc Neural Information Processing Systems. Granada, Spain, 2011: 801-809.
[3] SWERSKY K, RANZATO M, BUCHMAN D, et al. On score matching for energy based models: generalizing autoencoders and simplifying deep learning[C]//Proc Int’l Conf Machine Learning. Bellevue, Washington, USA, 2011: 1201-1208.
[4] FISCHER A, IGEL C. An introduction to restricted Boltzmann machines[C]//Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Guadalajara, Mexico, 2012: 14-36.
[5] BREULEUX O, BENGIO Y, VINCENT P. Quickly generating representative samples from an RBM-derived process[J]. Neural computation, 2011, 23(8): 2053-2073.
[6] COURVILLE A, BERGSTRA J, BENGIO Y. Unsupervised models of images by spike-and-slab RBMs[C]//Proc Int’l Conf Machine Learning. Bellevue, Washington, USA, 2011: 1145-1152.
[7] SCHMAH T, HINTON G E, ZEMEL R, et al. Generative versus discriminative training of RBMs for classification of fMRI images[C]//Proc Neural Information Processing Systems. Vancouver, Canada, 2008: 1409-1416.
[8] ERHAN D, BENGIO Y, COURVILLE A, et al. Why does unsupervised pre-training help deep learning?[J]. Journal of machine learning research, 2010, 11: 625-660.
[9] VINCENT P. Extracting and composing robust features with denoising autoencoders[C]//International Conference on Machine Learning (ICML). Helsinki, Finland, 2008: 1096-1103.
[10] VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion[J]. Journal of machine learning research, 2010, 11: 3371-3408.
[11] CHEN M, XU Z, WEINBERGER K Q, et al. Marginalized denoising autoencoders for domain adaptation[C]//International Conference on Machine Learning. Edinburgh, Scotland, 2012: 1206-1214.
[12] VINCENT P. A connection between score matching and denoising autoencoders[J]. Neural computation, 2011, 23(7): 1661-1674.
[13] RIFAI S. Contractive auto-encoders: explicit invariance during feature extraction[C]//Proceedings of the Twenty-Eighth International Conference on Machine Learning. Bellevue, USA, 2011: 833-840.
[14] RIFAI S, BENGIO Y, DAUPHIN Y, et al. A generative process for sampling contractive auto-encoders[C]//Proc Int’l Conf Machine Learning. Edinburgh, Scotland, UK, 2012: 1206-1214.
[15] LECUN Y. Neural networks: tricks of the trade (2nd ed.)[M]. Germany: Springer, 2012: 9-48.
[16] ZEILER M D, TAYLOR G W, FERGUS R. Adaptive deconvolutional networks for mid and high level feature learning[C]//IEEE International Conference on Computer Vision. Barcelona, Spain, 2011: 2013-2025.