DISTRIBUTED BANACH-PICARD ITERATIONS FOR DISTRIBUTED INFERENCE: THEORY AND APPLICATIONS
Andrade, F. L.; Xavier, J.; Figueiredo, M. A. T.
Distributed Banach-Picard Iterations for Distributed Inference: Theory and Applications, Proc. 8th Iberian Mathematical Meeting (8IMM), Institute of Mathematics of the University of Seville, online, October 2022.
Abstract
Many inference problems can be mathematically formulated as finding a fixed
point of a contractive operator/map. In modern distributed scenarios (e.g., distributed
machine learning or sensor networks), this map can be naturally written as the average
of individual maps held locally and privately (i.e., the agents do not want to share their
local data with the others) by a set of agents linked by a (possibly sparse) communication
network. Starting with the classical Banach-Picard iteration (BPI), a widely used and
natural choice for finding fixed points of locally contractive maps, this talk shows how
to extend the BPI to these distributed settings. We do not assume that the locally con-
tractive map comes from an underlying optimization problem, which precludes exploiting
strong global properties such as convexity, coercivity, or Lipschitz continuity. Yet, we present a
distributed algorithm (called distributed Banach-Picard iteration – DBPI) that keeps the
linear convergence rate of the standard BPI for the average locally contractive map. As an
application, we derive, and prove the linear convergence of, two distributed algorithms for
two classical data analysis problems: the expectation-maximization algorithm for parameter
estimation from noisy and faulty distributed sensors, and principal component analysis with
distributed data (equivalently, finding the top m eigenvectors of a positive semidefinite
matrix that is the average of local matrices held by the network agents).
More details about this work can be found in [1, 2].
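As a toy illustration of the starting point of this work (not the authors' DBPI, which runs over a communication network without any agent seeing the others' maps), the classical Banach-Picard iteration applied to an average of local contractive maps can be sketched as follows. The agents' maps, the contraction factor, and all constants here are hypothetical, chosen only so the average map has an easily checked fixed point:

```python
def local_map(i, x):
    # Hypothetical contractive map held by agent i: an affine contraction
    # with factor 0.5 and an agent-specific offset.
    return 0.5 * x + i

def average_map(x, n_agents=4):
    # Global map F = (1/N) * sum of local maps; its fixed point solves x = F(x).
    return sum(local_map(i, x) for i in range(n_agents)) / n_agents

def banach_picard(x0, tol=1e-10, max_iter=1000):
    # Classical BPI: iterate x <- F(x) until successive iterates agree to tol.
    # For a contraction this converges linearly to the unique fixed point.
    x = x0
    for _ in range(max_iter):
        x_next = average_map(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Here the average map is x ↦ 0.5x + 1.5, whose fixed point is x = 3; the sketch highlights what the distributed algorithm must reproduce without ever forming `average_map` at a single node.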