[1] YANG Mengduo, LUAN Yonghong, LIU Wenjun, et al. Feature transfer algorithm based on an auto-encoder[J]. CAAI Transactions on Intelligent Systems, 2017, 12(6): 894-898. [doi:10.11992/tis.201706037]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 12
Issue: 2017(6)
Pages: 894-898
Column: Academic Papers: Machine Learning
Publication date: 2017-12-25
Title: Feature transfer algorithm based on an auto-encoder
Author(s): YANG Mengduo1; LUAN Yonghong1; LIU Wenjun1; LI Fanzhang2
1. Department of Software and Service Outsourcing, Suzhou Vocational Institute of Industrial Technology, Suzhou 215104, China;
2. School of Computer Science and Technology, Soochow University, Suzhou 215006, China
Keywords: auto-encoder; feature transfer; deep network; deep learning; image classification; mid-level image representation; visual recognition; large-scale datasets
CLC: TP181
DOI: 10.11992/tis.201706037
Abstract:
The stacked auto-encoder (SAE) has recently shown outstanding image classification performance on large-scale datasets. Compared with the hand-crafted low-level features used by other image classification methods, the success of the SAE lies in its deep network, which can learn rich mid-level image features. However, estimating millions of parameters requires a very large number of annotated image samples, which prevents the application of SAE to tasks with only small-scale training data. This paper proposes an algorithm that efficiently transfers the image representation learned by an SAE on a large-scale dataset to other visual recognition tasks with limited training data. In the experimental section, a method is designed to reuse the hidden layers trained on the MNIST dataset to compute the mid-level image representation of the MNIST-variation datasets. Experimental results show that, despite the differences between the image datasets, the transferred image features significantly improve the classification performance of the model.
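The transfer procedure summarized in the abstract can be sketched as follows: pre-train the SAE's hidden layers on the large source dataset, reuse (and freeze) those layers as a mid-level feature extractor for the small target dataset, and train only a new classifier on top. The paper does not publish code, so the sketch below is an illustrative assumption: the framework (PyTorch), layer sizes, optimizer, training schedule, and the random tensors standing in for MNIST and MNIST-variation data are not the authors' settings.

```python
# Minimal sketch of SAE feature transfer (all hyperparameters are assumptions).
import torch
import torch.nn as nn

# Stand-ins for the source data (MNIST) and a small target set (MNIST-variation).
x_src = torch.rand(1000, 784)          # source images, flattened 28x28
x_tgt = torch.rand(200, 784)           # limited target images
y_tgt = torch.randint(0, 10, (200,))   # target labels

# 1) Pre-train a stacked auto-encoder on the large source dataset.
encoder = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid(),
                        nn.Linear(256, 64), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(64, 256), nn.Sigmoid(),
                        nn.Linear(256, 784), nn.Sigmoid())
sae = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for _ in range(50):                    # reconstruction training on the source set
    opt.zero_grad()
    loss = nn.functional.mse_loss(sae(x_src), x_src)
    loss.backward()
    opt.step()

# 2) Transfer: freeze the learned hidden layers and reuse them as a
#    mid-level feature extractor for the target task.
for p in encoder.parameters():
    p.requires_grad = False

# 3) Train only a small classifier on top of the transferred features,
#    using the limited target data.
clf = nn.Linear(64, 10)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    feats = encoder(x_tgt)             # transferred mid-level representation
    loss = nn.functional.cross_entropy(clf(feats), y_tgt)
    loss.backward()
    opt.step()
```

In this reading, only the final classifier is estimated from the small target set, which is why the transferred mid-level features can help when annotated samples are scarce.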