PaGaLiNNeT – Return Phase

The aim of the Return Phase of this Marie Curie International Incoming Fellowship 908524 “PaGaLiNNeT – Parallel Grid-aware Library for Neural Networks Training” was to deploy the developed software library, which contains the enhanced parallel algorithms for training neural networks on heterogeneous computational Grids, and to test it on a number of practical tasks.

The objectives set out at the beginning of the project (for the period 01/08/2011 – 31/07/2012) were as follows:

1. to deploy the parallel Grid-aware library for neural network training on the computational Grid of the return host;

2. to test the parallel Grid-aware library experimentally on the computational systems of both the host institution and the return host.

The main outcomes of the Return Phase of this Fellowship are:

  • The parallel Grid-aware library for neural network training has been deployed and tested on different computational systems: (i) on the level of a separate high-performance computer/cluster with homogeneous processors, (ii) on the level of a cluster with heterogeneous processors and (iii) on the level of a computational Grid system. The highest parallelization efficiency of the parallel neural network training algorithms was obtained on the cluster with an Infiniband interconnect. The library’s existing subroutines (the parallel batch pattern back propagation training algorithms of the multi-layer perceptron and the recurrent neural network, the resource broker and the modular parallelization with dynamic mapping) were extended with parallel batch pattern back propagation training algorithms of the recirculation neural network and the radial basis function neural network. Experimental research on the latter algorithms showed high parallelization efficiencies of 93%, 89% and 81% on 2 to 8 processors of high-performance computing systems. A minimal illustrative sketch of the underlying parallelization scheme is given after this list.
  • The parallel Grid-aware library for neural network training has been successfully applied to a number of application tasks: (i) stock price prediction for financial markets and (ii) spot instance price prediction for Cloud resources, for the host institution (University of Calabria, Italy); (iii) data compression and principal component analysis within a neural network-based intrusion detection and classification method for computer networks and (iv) identification of the individual conversion characteristic of a multisensor, for the return host (Research Institute of Intelligent Computer Systems, Ternopil National Economic University, Ukraine). In all application cases the library provided a high speedup of the parallel training algorithms, which improved the quality of the scientific research.
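
To illustrate the parallelization scheme that underlies the library’s batch pattern training algorithms, a minimal sketch in C with MPI is given below. It is not code from the library itself: the single-output perceptron model, the synthetic data and all identifiers are simplifying assumptions made purely for illustration. The point it reproduces is that each process accumulates weight deltas over its own subset of the training patterns, the partial deltas are then summed across all processes with MPI_Allreduce, and every process applies the same synchronous weight update.

  /* Illustrative sketch (not library code): batch pattern parallel training
   * of a single-layer, single-output perceptron by gradient descent.
   * Each MPI process holds a slice of the training patterns, accumulates
   * local weight deltas over that slice, and the deltas are summed across
   * processes before the synchronous weight update.
   * Compile: mpicc sketch.c -lm */
  #include <mpi.h>
  #include <math.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define N_INPUTS   8      /* inputs per pattern (assumed for the sketch) */
  #define N_PATTERNS 1024   /* total number of training patterns (assumed) */
  #define N_EPOCHS   100
  #define LRATE      0.1

  static double activate(double s) { return 1.0 / (1.0 + exp(-s)); }

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Block distribution of the pattern set across processes. */
      int chunk = N_PATTERNS / size;
      int first = rank * chunk;
      int last  = (rank == size - 1) ? N_PATTERNS : first + chunk;

      /* Synthetic training set, generated identically on every process. */
      static double x[N_PATTERNS][N_INPUTS], t[N_PATTERNS];
      srand(42);
      for (int p = 0; p < N_PATTERNS; p++) {
          double s = 0.0;
          for (int i = 0; i < N_INPUTS; i++) {
              x[p][i] = (double)rand() / RAND_MAX;
              s += x[p][i];
          }
          t[p] = (s > N_INPUTS / 2.0) ? 1.0 : 0.0;   /* toy target value */
      }

      double w[N_INPUTS] = {0.0};

      for (int epoch = 0; epoch < N_EPOCHS; epoch++) {
          double dw_local[N_INPUTS] = {0.0}, dw[N_INPUTS];
          double err_local = 0.0, err;

          /* Each process works only on its own slice of the batch. */
          for (int p = first; p < last; p++) {
              double s = 0.0;
              for (int i = 0; i < N_INPUTS; i++) s += w[i] * x[p][i];
              double y = activate(s);
              double delta = (t[p] - y) * y * (1.0 - y);
              for (int i = 0; i < N_INPUTS; i++) dw_local[i] += delta * x[p][i];
              err_local += 0.5 * (t[p] - y) * (t[p] - y);
          }

          /* Sum the partial deltas and the error over all processes; every
           * process then applies the same update, so the weights stay in sync. */
          MPI_Allreduce(dw_local, dw, N_INPUTS, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
          MPI_Allreduce(&err_local, &err, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
          for (int i = 0; i < N_INPUTS; i++) w[i] += LRATE * dw[i];

          if (rank == 0 && epoch % 20 == 0)
              printf("epoch %3d  SSE = %f\n", epoch, err);
      }

      MPI_Finalize();
      return 0;
  }

Because only the accumulated deltas (a vector the size of the weight set) and the summed error are exchanged once per training epoch, the communication volume does not grow with the number of training patterns; this property of the batch pattern scheme is what makes high parallelization efficiency achievable on clusters with a fast interconnect such as Infiniband.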

The results of the Return Phase have been published in 6 scientific papers; 1 further paper is in press. The results were presented at 5 scientific seminars and 3 international scientific conferences.

The socio-economic impact of the project lies in (i) the enlargement of the range of scientific and computationally intensive real-world applications based on neural networks, in which the developed parallel Grid-aware library can significantly speed up the training of neural networks, (ii) the effective use of the existing large pool of high-performance and Grid computing resources for new scientific and industry-based applications and (iii) the improvement of the quality of scientific research.

Return Host: Research Institute of Intelligent Computer Systems, Ternopil National Economic University, 3 Peremoga Square, Ternopil, 46004, UKRAINE

Marie Curie researcher: Dr. Volodymyr Turchenko

Tel: +380352-475050 ext.12315
e-mail: vtu@tneu.edu.ua

Scientist in charge: Prof. Anatoly Sachenko

Tel: +380352-475050 ext.12322
e-mail: as@tneu.edu.ua

Address of the project web page:

http://uweb.deis.unical.it/turchenko/research-projects/pagalinnet/

Publications resulting from FP7 Marie Curie IIF 908524 PaGaLiNNeT – Return Phase

  1. Turchenko V., Puhol T., Sachenko A. and Grandinetti L. Cluster-Based Implementation of Resource Brokering Strategy for Parallel Training of Neural Networks,  Proceedings of the 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2011), 15-17 September 2011, Prague, Czech Republic, pp. 212-217, print ISBN: 978-1-4577-1426-9, CD ISBN 978-1-4577-1424-5, electronic link http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6072743&contentType=Conference+Publications&ranges%3D2011_2011_p_Publication_Year%26queryText%3Dturchenko
  2. Turchenko V., Beraldi P., De Simone F. and Grandinetti L. Short-term Stock Price Prediction Using MLP in Moving Simulation Mode, Proceedings of the 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2011), 15-17 September 2011, Prague, Czech Republic, pp. 666-671, print ISBN: 978-1-4577-1426-9, CD ISBN 978-1-4577-1424-5, electronic link http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6072853&contentType=Conference+Publications&ranges%3D2011_2011_p_Publication_Year%26queryText%3Dturchenko
  3. Turchenko V. Efficiency Comparison of Batch Pattern Training Algorithm of Multilayer Perceptron on Parallel Computer and Computational Cluster, Scientific Journal of National Technical University of Ukraine “Kyiv Polytechnic Institute”, Kyiv, 2011, No 54, pp. 130-138 (in Ukrainian), ISSN 0201-744X, ISSN 0135-1729, open access http://it-visnyk.kpi.ua/?page_id=1497&lang=en.
  4. Sachenko A., Kulakov Yu., Kochan V., Turchenko V., Bykovvy P., Borovyy A. Computer Networks: A Tutorial, Ternopil, Ekonomichna dumka, 2012, 476 p. // Chapter 15. Grid-computations based on network technologies, pp. 416-439 (in Ukrainian).
  5. Turchenko V., Grandinetti L., Sachenko A. Parallel Batch Pattern Training of Neural Networks on Computational Clusters, Proceedings of the 2012 International Conference on High Performance Computing & Simulation (HPCS 2012), July 2 – 6, 2012, Madrid, Spain, pp. 202-208, CD ISBN: 978-1-4673-2361-1, electronic link http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6266912&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D6266912.
  6. Turchenko V., Golovko V., Sachenko A. Parallel Batch Pattern Training of Recirculation Neural Network, Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2012), July 28 – 31, 2012, Rome, Italy, pp. 644-650, ISBN: 978-989-8565-21-1, electronic link http://www.scitepress.org/DigitalLibrary/Default.aspx.
  7. Turchenko V., Golovko V., Sachenko A. Parallel Training Algorithm for Radial Basis Function Neural Network, 7th International Conference on Neural Networks and Artificial Intelligence (ICNNAI’2012), October 10-12, 2012, Minsk, Belarus, in press.

DISSEMINATION ACTIVITIES

  • During the Return Phase I published 6 scientific papers; 1 further paper has been accepted for publication (see the list of publications above).
  • Dissemination was carried out through presentations at the following seminars, working meetings and workshops:
  1. “Development of a Library for Parallel Neural Network Training”, presented at the joint scientific seminar of the Research Institute of Intelligent Computer Systems and the Department of Information Computing Systems and Control at the home institution, Ternopil National Economic University, 02 November 2011;
  2. “Development of Parallel Neural Network Training Algorithms” presented at the working meeting at the Supercomputing Center, National Technical University of Ukraine “Kyiv Polytechnic Institute”, Kyiv, Ukraine, 12 December 2011;
  3. “Parallelization of Batch Pattern Training Algorithms”, presented at the working meeting at the Laboratory of Artificial Neural Networks, Department of Intelligent Information Technologies, Brest State Technical University, Brest, Belarus, 03 April 2012;
  4. “Neural Networks and Parallel Computing Research Group”, presented at the University Science Day 2012, the local scientific conference for students of the Faculty of Computer Information Technologies, Ternopil National Economic University, 10 April 2012;
  5. “Application of BSP-Based Computational Cost Model to Predict Parallelization Efficiency of MLP Training Algorithm”, presented at the University Science Day 2012, the local scientific conference for lecturers of the Faculty of Computer Information Technologies, Ternopil National Economic University, 11 April 2012.
  • Dissemination activities were also carried out through participation in the following scientific conferences:
  1. The 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2011), September 15-17, 2011, Prague, Czech Republic, with an oral presentation of the paper “Short-term Stock Price Prediction Using MLP in Moving Simulation Mode” and a poster presentation of the paper “Cluster-Based Implementation of Resource Brokering Strategy for Parallel Training of Neural Networks”;
  2. The 2012 International Conference on High Performance Computing & Simulation (HPCS 2012), July 2-6, 2012, Madrid, Spain, with an oral presentation of the paper “Parallel Batch Pattern Training of Neural Networks on Computational Clusters”;
  3. The 9th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2012), July 28-31, 2012, Rome, Italy (special session on Artificial Neural Networks and Intelligent Information Processing), with an oral presentation of the paper “Parallel Batch Pattern Training of Recirculation Neural Network”.