1608 JOURNAL OF COMPUTERS, VOL. 8, NO. 6, JUNE 2013
In this paper, we first extended the traditional kernel-based learning problem to multiclass cases in an Extreme Learning Machine (ELM) context. To enhance the performance of ELM, a deformed kernel was proposed that exploits the underlying information in both labeled and unlabeled samples. To speed up the algorithm, we further approximated the deformed kernel by means of random feature mapping, so our method requires no kernel parameter tuning. The experimental results show that the DKELM algorithm achieves better generalization performance at a faster learning speed than traditional semi-supervised classification algorithms. In future work, we will further optimize the proposed framework and study sparse regularization within it.
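The ELM training scheme that the random feature mapping plugs into can be illustrated with a minimal sketch: hidden-layer weights are drawn at random and never tuned, and only the output weights are solved in closed form by regularized least squares. The function names, the sigmoid activation, and the ridge parameter below are illustrative assumptions, not the paper's DKELM implementation (the deformed-kernel construction itself is not reproduced here).

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, reg=1e-3, rng=None):
    """Basic ELM: random feature mapping + ridge solve for output weights.

    X: (n_samples, n_features), Y: (n_samples, n_classes) one-hot targets.
    Hypothetical sketch of the standard ELM recipe, not the paper's code.
    """
    rng = np.random.default_rng(rng)
    # Random, untuned input weights and biases (the "random feature mapping")
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden-layer outputs
    # Closed-form regularized least squares for the output weights beta
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                                  # class scores; argmax gives labels
```

Because the only trained parameters come from a single linear solve, training cost is dominated by one `n_hidden x n_hidden` factorization, which is the source of the learning-speed advantage the conclusion refers to.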
ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China under Grant No. 50674086.
© 2013 ACADEMY PUBLISHER