555,441 Hits in 5.0 sec

A Fast Alternating Direction Method of Multipliers Algorithm for Big Data Applications

Huihui Wang, Xingguo Chen
2020 IEEE Access  
In this paper, we propose a novel fast distributed algorithm via Alternating Direction Method of Multipliers with Adaptive Local Update (ADMM-ALU), which uses an efficient adaptive local update strategy  ...  Moreover, the data are often distributed and stored across different computation resources in many big data applications.  ...  These algorithms lack a study of the trade-off between communication and local computation with variable local updates.  ... 
doi:10.1109/access.2020.2967843 fatcat:4k7m3rpwsngotdtgxsjtsjp2aa
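The entry above concerns distributed ADMM with local primal/dual updates and a shared global variable. As a hedged illustration (this is generic consensus ADMM on a toy scalar objective, not the paper's ADMM-ALU algorithm), one communication round per iteration looks like:

```python
def consensus_admm(a, rho=1.0, n_iter=100):
    """Consensus ADMM for min_x sum_i (x - a_i)^2.

    Each agent i holds a local copy x_i and a dual variable u_i;
    the global variable z is the consensus value (here, mean(a)).
    """
    n = len(a)
    x = [0.0] * n
    u = [0.0] * n
    z = 0.0
    for _ in range(n_iter):
        # local primal updates (run in parallel on each agent)
        x = [(2 * a_i + rho * (z - u_i)) / (2 + rho)
             for a_i, u_i in zip(a, u)]
        # global variable update (the one step needing communication)
        z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / n
        # local dual updates
        u = [u_i + x_i - z for x_i, u_i in zip(x, u)]
    return z
```

Only the z-update requires communication; that per-iteration cost is exactly what adaptive local-update schemes such as ADMM-ALU aim to reduce by doing more local work between rounds.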

Adaptive distributed monitoring with accuracy objectives

Alberto Gonzalez Prieto, Rolf Stadler
2006 Proceedings of the 2006 SIGCOMM workshop on Internet network management - INM '06  
It dynamically configures local filters that control whether an update is sent towards the root of the tree.  ...  can be provided in real-time.  ...  This paper was supported in part by the EC IST-EMANICS Network of Excellence (#26854).  ... 
doi:10.1145/1162638.1162649 fatcat:ejcvsjli2zfdzcsat5anup5d3e
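A local filter of the kind described above can be sketched as a dead-band on the reported value: an update travels toward the root only when the local variable has drifted past a filter width. (This is a simplified reading of the mechanism; the fixed-threshold policy here is an assumption, whereas the paper configures filter widths dynamically.)

```python
def filtered_updates(samples, width):
    """Report a value only when it has moved more than `width`
    away from the last reported value."""
    sent = []
    last = None
    for v in samples:
        if last is None or abs(v - last) > width:
            sent.append(v)
            last = v
    return sent
```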

Computing histograms of local variables for real-time monitoring using aggregation trees

Dan Jurca, Rolf Stadler
2009 IFIP/IEEE International Symposium on Integrated Network Management  
In this paper we present a protocol for the continuous monitoring of a local network state variable.  ...  Our aim is to provide a management station with the value distribution of the local variables across the network, by means of partial histogram aggregation, with minimum protocol overhead.  ...  Often, management variables that are monitored in these tasks are aggregates that are computed from local device variables across the network.  ... 
doi:10.1109/inm.2009.5188837 dblp:conf/im/JurcaS09 fatcat:hbvuf2llv5hdnjzus7bpxvoqwq
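Partial histogram aggregation along a tree reduces, at each inner node, to merging the histograms received from the children; a minimal sketch (the tree topology and protocol overhead management are omitted):

```python
from collections import Counter

def merge_histograms(children):
    """Merge partial histograms (value -> count maps) from child
    nodes into one histogram, as an inner node of an aggregation
    tree would before forwarding it toward the management station."""
    total = Counter()
    for h in children:
        total.update(h)
    return dict(total)
```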

A Local Least Squares Framework for Ensemble Filtering

Jeffrey L. Anderson
2003 Monthly Weather Review  
The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables.  ...  First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter.  ...  Fig. 1 but showing the application of local least squares fits, in this case using only the nearest neighbor in y, to compute the updates for x given the updates for y.  ... 
doi:10.1175/1520-0493(2003)131<0634:allsff>2.0.co;2 fatcat:7mdimc4nyzbtzdb2qz4flz43vq
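The two-step update described in the abstract — a scalar ensemble filter in observation space, followed by a (local) least squares regression of the observation increments onto each state variable — can be sketched as follows. The scalar-filter step is assumed to have already produced the increments `dy`:

```python
def regress_increments(x_prior, y_prior, dy):
    """Map observation-variable increments dy onto a state variable x
    via the least squares slope cov(x, y) / var(y), computed from the
    prior ensemble."""
    n = len(x_prior)
    xbar = sum(x_prior) / n
    ybar = sum(y_prior) / n
    cov = sum((xi - xbar) * (yi - ybar)
              for xi, yi in zip(x_prior, y_prior)) / (n - 1)
    var = sum((yi - ybar) ** 2 for yi in y_prior) / (n - 1)
    slope = cov / var
    return [slope * d for d in dy]
```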

H-GAP: estimating histograms of local variables with accuracy objectives for distributed real-time monitoring

Dan Jurca, Rolf Stadler
2010 IEEE Transactions on Network and Service Management  
We present H-GAP, a protocol for continuous monitoring, which provides a management station with the value distribution of local variables across the network.  ...  Using SUM as an example, we show how general aggregation functions over local variables can be efficiently computed with H-GAP. We evaluate our protocol through simulation using real traces.  ...  We assume that at each time step, each process updates its local variable to a new value from S according to a uniform distribution.  ... 
doi:10.1109/tnsm.2010.06.i8p0292 fatcat:tf4dk7gtkbgfvde46e52tukafm

A-GAP: An Adaptive Protocol for Continuous Network Monitoring with Accuracy Objectives

A.G. Prieto, R. Stadler
2007 IEEE Transactions on Network and Service Management  
The protocol quickly adapts to a node failure and exhibits short spikes in the estimation error. Lastly, it can provide an accurate estimate of the error distribution in real-time.  ...  Based on a stochastic model, it dynamically configures local filters that control whether an update is sent towards the root of the tree.  ...  This paper was supported in part by the EC IST-EMANICS Network of Excellence (#26854).  ... 
doi:10.1109/tnsm.2007.030101 fatcat:dk4ho2kug5gmvb6vzjtaa7e5bm

An Efficient Message Filtering Strategy Based on Asynchronous ADMM with L1 Regularization

Jiafeng Zhang
2019 Journal of Physics: Conference Series  
However, as the data grows, the dimension of the dataset increases rapidly, which raises the communication traffic in the distributed computing cluster and degrades performance  ...  Besides, we update the algorithm asynchronously to reduce the waiting time of the master node.  ...  In the distributed ADMM algorithm, each slave node is responsible for updating its local variable and its dual variable.  ... 
doi:10.1088/1742-6596/1284/1/012066 fatcat:q4pkhpcwendufpcrstv3gmvxou
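The L1 regularizer in such ADMM formulations typically enters through the soft-thresholding (shrinkage) proximal operator applied in the global z-update; a minimal sketch of that operator (its placement in the paper's asynchronous scheme is an assumption):

```python
def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1: shrink each coordinate
    toward zero by kappa, setting small coordinates exactly to zero."""
    return [max(0.0, x - kappa) - max(0.0, -x - kappa) for x in v]
```

Zeroing out small coordinates is what makes the exchanged model sparse, which directly reduces the communication traffic the abstract is concerned with.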

Toward understanding the optimization of complex systems

Jiming Liu, Yu-Wang Chen
2011 Artificial Intelligence Review  
In response to the increasing demands for solving various optimization problems arising from complex systems, the paper focuses on the study of a general-purpose distributed/decentralized self-organized  ...  computation.  ...  For example, in the Ising computing model the local fitness of the spin σ_i can be represented as f_σ(σ_i) = σ_i (h_i + Σ_{j∈N(i)} J_ij σ_j) (2), where N(i) denotes the neighborhood of spin i.  ...  Local update rules of emergent computation  ... 
doi:10.1007/s10462-011-9256-4 fatcat:kkb2rz6jljcj3mt3o5xsjjnuxy
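The local fitness in the snippet, f_σ(σ_i) = σ_i (h_i + Σ_{j∈N(i)} J_ij σ_j), admits a direct local update rule. The greedy flip rule below is one common choice for illustration, not necessarily the rule used in the paper:

```python
def local_fitness(i, spins, h, J, neighbors):
    """f(sigma_i) = sigma_i * (h_i + sum over neighbors j of J_ij * sigma_j)."""
    field = h[i] + sum(J[i][j] * spins[j] for j in neighbors[i])
    return spins[i] * field

def local_update(i, spins, h, J, neighbors):
    """Greedy local rule: flip spin i if that raises its local fitness
    (i.e. if the current fitness is negative). Mutates `spins` in place."""
    if local_fitness(i, spins, h, J, neighbors) < 0:
        spins[i] = -spins[i]
    return spins[i]
```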

An iterative scheme for distributed model predictive control using Fenchel's duality

Minh Dang Doan, Tamás Keviczky, Bart De Schutter
2011 Journal of Process Control  
We conclude by discussing open issues of the proposed method and by providing an outlook on research in the field.  ...  The underlying decomposition technique relies on Fenchel's duality and allows subproblems to be solved using local communications only.  ...  Research supported by the European 7th framework STREP project "Hierarchical and distributed model predictive control", contract number INFSO-ICT-223854.  ... 
doi:10.1016/j.jprocont.2010.12.009 fatcat:3r5c7dsfmvd3rhwg45a4xm35ye

Accelerating Expectation-Maximization Algorithms with Frequent Updates

Jiangtao Yin, Yanfeng Zhang, Lixin Gao
2012 IEEE International Conference on Cluster Computing  
Despite the popularity of EM algorithms, it is challenging to efficiently implement these algorithms in a distributed environment.  ...  Accordingly, we propose two approaches to parallelize such EM algorithms in a distributed environment so as to scale to massive data sets.  ...  If the overhead is large, it is reasonable to compute the distribution for a subset of data points (or compute the distribution in a subrange of the hidden variable) and then update the parameters.  ... 
doi:10.1109/cluster.2012.81 dblp:conf/cluster/YinZG12 fatcat:xzghbedpg5eo7dpvwxxr2xchf4
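The "frequent updates" idea — recomputing the distribution (E-step) for only a subset of data points and then updating the parameters immediately, rather than after a full pass — can be sketched on a toy mixture. Both the model (two Gaussian components with known means 0 and 5, unit variance, unknown mixing weight) and the batch schedule are illustrative assumptions:

```python
import math

def gauss(x, mu):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_frequent(data, batch_size=2, sweeps=5):
    """EM for the mixing weight pi of a two-component mixture:
    the M-step runs after every mini-batch ('frequent updates'),
    reusing the stored responsibilities of the untouched points."""
    pi = 0.5
    resp = [0.5] * len(data)  # per-point responsibilities (sufficient stats)
    for _ in range(sweeps):
        for start in range(0, len(data), batch_size):
            # partial E-step on one mini-batch only
            for k in range(start, min(start + batch_size, len(data))):
                p1 = pi * gauss(data[k], 0.0)
                p2 = (1 - pi) * gauss(data[k], 5.0)
                resp[k] = p1 / (p1 + p2)
            # frequent M-step using all current responsibilities
            pi = sum(resp) / len(resp)
    return pi
```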

A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization [article]

Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro
2019 arXiv   pre-print
We derive fully decentralized quasi-Newton updates that approximate second-order information to reduce the computational burden relative to dual methods and to make the method more robust in ill-conditioned  ...  The PD-QN method performs quasi-Newton updates on both the primal and dual variables of the consensus optimization problem to find the optimal point of the augmented Lagrangian.  ...  permits local and distributed computation to find exact solutions to (1).  ... 
arXiv:1809.01212v2 fatcat:242ob46pu5bdlprkjxsisreum4

Distributed Parallel Computing Using Navigational Programming

Lei Pan, Ming Kin Lai, Koji Noguchi, Javid J. Huseynov, Lubomir F. Bic, Michael B. Dillencourt
2004 International journal of parallel programming  
Message Passing (MP) and Distributed Shared Memory (DSM) are the two most common approaches to distributed parallel computing. MP is difficult to use, while DSM is not scalable.  ...  Like DSM, NavP supports incremental parallelization and shared-variable programming and is therefore easy to use.  ...  Table 1. A taxonomy of variables:

              local           distributed
  private     (none)          agent variable
  public      node variable   DSV

Table 2. Problems and solutions in distributed sequential computing.  ... 
doi:10.1023/b:ijpp.0000015563.36375.17 fatcat:6jxfqeabv5dilo3xraj7oi2cdu

A primal-dual Newton method for distributed Quadratic Programming

Emil Klintberg, Sebastien Gros
2014 53rd IEEE Conference on Decision and Control  
In this paper, the local problems are solved using a primal-dual interior point method and the dual variables are updated using a Newton iteration, providing a fast convergence rate.  ...  This paper considers the problem of solving Quadratic Programs (QP) arising in the context of distributed optimization and optimal control.  ...  The local factorisations are re-used to form the dual Hessian alongside linear predictors for the local primal-dual variables and for the dual variables at a negligible computational cost.  ... 
doi:10.1109/cdc.2014.7040304 dblp:conf/cdc/KlintbergG14 fatcat:3ndt4nh245dglb62r4by44zmbm
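The structure described — local primal solves plus a Newton iteration on the dual variables — reduces to a few lines for a separable QP. The toy problem below (Q = I, a single coupling constraint sum(x) = b) is an assumption for illustration, not the paper's interior-point formulation:

```python
def dual_newton_qp(c, b, n_iter=5):
    """Dual Newton method for  min 1/2 ||x||^2 + c.x  s.t.  sum(x) = b.

    The primal solve x(lam) is closed-form and fully local per
    component; the dual variable lam is updated by a Newton step on
    the dual gradient. For this quadratic, one step is exact."""
    n = len(c)
    lam = 0.0
    x = [0.0] * n
    for _ in range(n_iter):
        x = [-(ci + lam) for ci in c]  # local primal solves
        g = sum(x) - b                 # dual gradient: A x - b
        H = -float(n)                  # dual Hessian: -A A^T
        lam -= g / H                   # Newton step on the dual
    return x, lam
```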

Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [article]

Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund, H. Vincent Poor
2020 arXiv   pre-print
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.  ...  To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning.  ...  In our scheme, local gradients are calculated in dispersed ECNs, while the primal, dual, and global variables are updated in the corresponding agent.  ... 
arXiv:2010.00914v1 fatcat:o7oy4w4hznehtok35l546kard4

Distributed Particle Swarm Optimization Based on Primal-Dual Decomposition Architectures

Yuji Wakasa, Sho Yamasaki
2015 Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications  
Distributed optimization methods have recently been studied, motivated by emerging applications in smart grids, multi-robot systems, etc.  ...  In most of these studies, convexity and smoothness of the objective and constraint functions are assumed, while such assumptions do not always hold in practice.  ...  computation: For i = 1, ..., N, each agent i computes the local perturbation points by (13). Local variable updates: For i = 1, ..., N, each agent i updates (14) and (11) sequentially.  ... 
doi:10.5687/sss.2015.97 fatcat:qx7vw6r5uze2jjule73cqktfwm
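The "local variable updates" in such schemes follow the standard PSO velocity/position rule applied per particle; a minimal per-particle step (the inertia and acceleration coefficients below are conventional defaults, not values from the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One particle's update: inertia on the current velocity, a pull
    toward the particle's personal best (pbest), and a pull toward
    the swarm's global best (gbest), per dimension."""
    new_v, new_x = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = rng.random(), rng.random()
        vi = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```

When the particle already sits at both bests, the random pulls vanish and only the damped inertia term remains, which makes the rule easy to sanity-check.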