New article published on Information Sciences

Our latest article, “Distributed Learning for Random Vector Functional-Link Networks”, published in Information Sciences, is now available online as an in-press article. In the paper, we investigate the situation where the training data for an RVFL network is distributed over a generic network of agents:

Data-distributed learning in a network of interconnected agents (figure taken from the paper).

Our question was: how can the agents agree on a single model, taking into account all the local datasets, without a centralized controller and without exchanging data points? In the paper, we investigate two solutions. The first optimizes a global cost function in a decentralized fashion, using the Alternating Direction Method of Multipliers (ADMM). The second is a heuristic strategy, in which the output weights of the network are simply averaged across the nodes by means of a consensus protocol. The interesting result is that this latter strategy can achieve excellent performance (comparable to ADMM in some cases), while being extremely efficient to implement.
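To give a flavour of the second strategy, here is a minimal Python/NumPy sketch of consensus-based averaging of RVFL output weights. It is not the paper's implementation (the actual code lives in the Lynx toolbox, linked below); the ring topology, sigmoid activation, regularization value and synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared random hidden layer: every agent uses the same random expansion,
# so only the output weights differ between agents.
n_features, n_hidden = 5, 50
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    # Sigmoid activation on the random projection (illustrative choice).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def local_output_weights(X, y, reg=1e-3):
    # Ridge-regression readout trained only on the agent's local data.
    H = hidden(X)
    return np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)

# Toy setup: four agents on a ring, each holding its own synthetic dataset.
n_agents = 4
datasets = []
for _ in range(n_agents):
    X = rng.standard_normal((30, n_features))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
    datasets.append((X, y))

beta = np.stack([local_output_weights(X, y) for X, y in datasets])

# Consensus protocol: each agent repeatedly replaces its output weights with
# the average of its own and its neighbours' weights (no data is exchanged).
neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
for _ in range(50):
    beta = np.stack([beta[[i] + neighbours[i]].mean(axis=0) for i in range(n_agents)])

# After enough iterations all agents hold (approximately) the same model.
print("max disagreement between agents:", np.max(np.abs(beta - beta.mean(axis=0))))
```

With a symmetric, doubly stochastic mixing rule like the simple neighbourhood average above, every agent's weight vector converges to the average of the local solutions, so all agents end up holding (approximately) the same model without ever sharing their data points.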

The code for the simulations in the paper is freely available in the Lynx toolbox:
http://ispac.ing.uniroma1.it/scardapane/software/lynx/dist-learning/

