Communication, Control and Signal Processing Seminar
Abstracts of talks
Spring 2019
01/31 Jie Li (Southeast Jiaotong University), Systematic Constructions of MDS Codes with Small Sub-packetization Level and Near Optimal Repair Bandwidth
ABSTRACT: In the literature, all the known high-rate MDS codes with the optimal repair bandwidth possess a significantly large sub-packetization level, which may prevent the codes from being implemented in practical systems. To build MDS codes with a small sub-packetization level, existing results and theoretical bounds imply that one may have to sacrifice the optimality of the repair bandwidth.
In this talk, we present a powerful transformation that can greatly reduce the sub-packetization level of any MDS code while keeping the code length $n$ unchanged. We then give three applications of the transformation, which yield three high-rate MDS codes that have both a small sub-packetization level and near-optimal repair bandwidth, albeit over a sufficiently large finite field. To further reduce the field size, we also propose two explicit $(n=sn',k)$ MDS codes that have both a small sub-packetization level and near-optimal repair bandwidth, where the required field size $q$ is reduced to $q>n$ and $q>s(n-k)$, respectively.
02/14 Melih Bastopcu (UMD), Minimizing Age of Information with Soft Updates
ABSTRACT: We consider an information updating system where an information provider and an information receiver engage in an update process over time. Different from the existing literature where updates are countable (hard) and take effect either immediately or after a delay, but \emph{instantaneously} in both cases, here updates start taking effect right away but \emph{gradually} over time. We coin this setting \emph{soft updates}. When the updating process starts, the age decreases until the soft update period ends. We constrain the number of times the information provider and the information receiver meet (the number of update periods) and the total duration of the update periods. We consider two models for the decrease of age during an update period: in the first model, the rate of decrease of age is proportional to the current age, and in the second model, the rate of decrease of age is constant. The first model results in an exponentially decaying age, and the second model results in a linearly decaying age. In both cases, we determine the optimum updating scheme, i.e., the optimum start times and durations of the updates, subject to the constraints on the number of update periods and the total update duration.
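To make the two decay models concrete, the following is a minimal simulation sketch (not from the talk): the unit age-growth rate, the decay rates, and the two-period update schedule are illustrative assumptions.

```python
import numpy as np

# Simulate the age of information under the two soft-update models above.
# Outside update periods the age grows at unit rate; inside a period it
# decays either exponentially (da/dt = -rate * a) or linearly (da/dt = -rate).
def age_trajectory(T, dt, updates, mode, rate):
    """updates: list of (start, duration) soft-update periods."""
    t_grid = np.arange(0.0, T, dt)
    age = np.empty_like(t_grid)
    a = 0.0
    for i, t in enumerate(t_grid):
        updating = any(s <= t < s + d for s, d in updates)
        if updating:
            if mode == "exponential":        # rate of decrease proportional to age
                a -= rate * a * dt
            else:                            # constant rate of decrease, age >= 0
                a = max(a - rate * dt, 0.0)
        else:
            a += dt                          # age grows at unit rate otherwise
        age[i] = a
    return t_grid, age

schedule = [(2.0, 1.0), (6.0, 1.0)]          # two update periods of length 1
_, exp_age = age_trajectory(10.0, 0.01, schedule, "exponential", 3.0)
_, lin_age = age_trajectory(10.0, 0.01, schedule, "linear", 3.0)
```

Note the qualitative difference the abstract describes: the exponential model never drives the age exactly to zero during a finite update period, while the linear model can.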
02/28 Alexander Barg (UMD), Optimal locally private estimation under $\ell_p$ loss for $1 \leq p \leq 2$
ABSTRACT: We consider the minimax estimation problem of a discrete distribution with support size $k$ under local differential privacy constraints. A privatization scheme is applied to each raw sample independently, and we need to estimate the distribution of the raw samples from the privatized samples. A positive number $\epsilon$ measures the privacy level of a privatization scheme. Recently, we proposed a family of new privatization schemes and the corresponding estimator. We also proved that our scheme and estimator are order optimal in the regime $e^{\epsilon} \ll k$ under both $\ell_2^2$ (mean square) and $\ell_1$ loss.
In this talk, we sharpen this result by showing asymptotic optimality of the proposed scheme under the $\ell_p$ loss for all $1 \leq p \leq 2$. More precisely, we show that for any $p \in [1,2]$ and any $k$ and $\epsilon$, the ratio between the worst-case $\ell_p$ estimation loss of our scheme and the optimal value approaches $1$ as the number of samples tends to infinity. The lower bound on the minimax risk of private estimation that we establish as a part of the proof is valid for any loss function $\ell_p, p \geq 1$. Joint work with Min Ye (Princeton).
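As a concrete instance of this estimation setting (though not the order-optimal scheme of the talk), the sketch below privatizes each raw sample with classical $k$-ary randomized response, which satisfies exact $\epsilon$-local differential privacy, and estimates the distribution by inverting the channel; the true distribution used is an arbitrary illustrative choice.

```python
import numpy as np

# k-ary randomized response (k-RR): report the true symbol with probability
# e^eps / (e^eps + k - 1), otherwise a uniform *other* symbol, so that the
# likelihood ratio between any two inputs is exactly e^eps.
rng = np.random.default_rng(1)
k, eps, n = 5, 1.0, 500_000
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])       # true (unknown) distribution

x = rng.choice(k, size=n, p=p)                  # raw samples
E = np.exp(eps)
truthful = rng.random(n) < E / (E + k - 1)
lies = (x + rng.integers(1, k, size=n)) % k     # uniform over the other k-1 symbols
y = np.where(truthful, x, lies)                 # privatized samples

# Unbiased estimator: invert the k-RR channel on the empirical frequencies,
# using E[freq_v] = (p_v (e^eps - 1) + 1) / (e^eps + k - 1).
freq = np.bincount(y, minlength=k) / n
p_hat = (freq * (E + k - 1) - 1) / (E - 1)
```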
03/14 Karthik Abinav Sankararaman (UMD), Adversarial Bandits with Knapsacks
ABSTRACT: In this talk, we will discuss the multi-armed bandit problem with resource constraints, in the adversarial setting. In this problem, we have an interactive and repeated game between the algorithm and an adversary. Given $T$ time-steps, $d$ resources, $m$ actions and budgets $B_1, B_2,\ldots, B_d$, the algorithm chooses one of the $m$ actions at each time-step. An adversary then reveals a reward and a consumption of each of the $d$ resources, corresponding to this action. At the first time-step at which the algorithm runs out of some resource (i.e., the total consumption of some resource $j$ exceeds $B_j$), the game stops, and the total reward is the sum of rewards obtained until the stopping time. The goal is to maximize the competitive ratio: the ratio of the total reward of the algorithm to the expected reward of a fixed distribution over actions that knows the rewards and consumptions for all actions and time-steps beforehand. We give an algorithm for this problem whose competitive ratio is tight (it matches the lower bound). The algorithm achieves the optimal bound both in expectation and with high probability. Moreover, the algorithmic framework is modular and can be used in a black-box manner to obtain algorithms for many related problems such as the stochastic, contextual, and semi-bandit settings, as well as the more general online constrained convex optimization setting.
The talk is based on the paper at https://arxiv.org/abs/1811.11881.
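The interaction protocol above can be sketched as follows; the uniformly random player here is only a toy stand-in for the paper's algorithm, and the rewards, consumptions, and budgets are synthetic.

```python
import numpy as np

# Adversarial bandits-with-knapsacks protocol: at each step the algorithm
# picks an action, the adversary reveals that step's reward and per-resource
# consumption, and the game stops once any budget would be exhausted.
rng = np.random.default_rng(3)
T, m, d = 1000, 3, 2
B = np.array([50.0, 80.0])               # budgets B_1, ..., B_d

consumed = np.zeros(d)
total_reward = 0.0
for t in range(T):
    a = rng.integers(m)                  # toy policy: uniformly random action
    reward = rng.random(m)               # adversary's rewards for this step
    cost = rng.random((m, d))            # adversary's consumptions for this step
    if np.any(consumed + cost[a] > B):   # some resource j would exceed B_j:
        break                            # the game stops at this time-step
    consumed += cost[a]
    total_reward += reward[a]            # sum of rewards until the stopping time
```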
03/28 David Hartman (UMD), Supermodular Solution to the Non-Uniform Sampling Problem in Kalman Filtering
ABSTRACT: In the optimal sampling problem, we are interested in selecting the optimal subset of times at which to sample a sensor such that the mean square estimation error (MSE) between the unobserved states and the estimated states is minimized. In this problem, the states evolve according to a discrete-time LTI system, and the sensor takes measurements according to a discrete-time LTI observation model. A Kalman filter recursively estimates the evolving states based on the sensor measurements. Ideally, we would select all available times (in the horizon of interest) to sample the sensor for estimating the states. However, there are communication and energy costs associated with sampling, and we therefore aim to minimize the estimation error when the number of times we can sample is fixed. There have been multiple attempts to solve this problem by relaxing the original problem, which is NP-hard. Such relaxations allow for nice algorithms but provide no guarantees on the gap between the solution of the relaxed problem and the solution of the original problem. We leverage the idea of supermodularity in discrete optimization to show that a greedy solution to the sampling problem produces a near-optimal solution with a provable approximation factor. To prove the supermodularity of the mean squared estimation error as a function of the sample set, we assume that the covariance matrix of the measurement noise is diagonal.
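A minimal sketch of the greedy procedure this guarantee applies to: each round adds the sample time that most reduces the summed error-covariance trace of the Kalman filter. The system matrices, horizon, and budget below are illustrative assumptions, not the speaker's setup.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time LTI dynamics
C = np.array([[1.0, 0.0]])               # linear sensor model
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # measurement noise covariance (diagonal)
P0 = np.eye(2)                           # initial error covariance
T = 20                                   # horizon of interest
budget = 4                               # fixed number of allowed samples

def total_mse(samples):
    """Sum of Kalman error-covariance traces over the horizon for a sample set."""
    P = P0.copy()
    cost = 0.0
    for t in range(T):
        P = A @ P @ A.T + Q                                  # time update
        if t in samples:                                     # measurement update
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P = P - K @ C @ P
        cost += np.trace(P)
    return cost

# Greedy selection: repeatedly add the single time that helps the most.
chosen = set()
for _ in range(budget):
    best = min((t for t in range(T) if t not in chosen),
               key=lambda t: total_mse(chosen | {t}))
    chosen.add(best)
```

Supermodularity of `total_mse` is what turns this myopic loop into a near-optimal solution rather than a mere heuristic.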
04/11 Amrit Singh Bedi (Army Research Laboratory), Distributed and Online Learning with Stochastic Gradient Methods
ABSTRACT: The advancements in the sensing and communication capabilities of recent technologies such as wireless communication and networks have led to unprecedented growth in the complexity and bandwidth requirements of network services in multi-agent network systems. The resulting stress on the network infrastructure has motivated network designers to move away from simpler or modular architectures toward optimal ones. The problem is challenging for three reasons: first, data arrives persistently and sequentially from an unknown environment; second, in multi-agent systems, the agents of the network may correspond to different classes of objects; and third, each agent in a network has a different timescale requirement. These challenges motivate the need for online decentralized optimization algorithms that can robustly handle network delays and packet losses. This work deals with the development of online, asynchronous, and distributed stochastic convex optimization algorithms to address various challenges in multi-agent network systems. Based on the basic framework of the proposed algorithms, we categorize the work into three parts.
The first part details the work done in the classical online setting, where the network state is sequentially revealed. We develop the first memory-efficient stochastic algorithms for compositional online learning with kernels (COLK). The convergence guarantees of COLK are established, and experiments with robust formulations of supervised learning are demonstrated. The second part focuses on the development of asynchronous and distributed stochastic optimization algorithms for multi-agent heterogeneous networks. The third part considers the problem of online learning in time-varying adversarial environments. We formulate a target tracking problem as a time-varying optimization problem and put forth an inexact online gradient descent algorithm for solving it sequentially. The performance of the proposed algorithm is studied by characterizing its dynamic regret. This part also considers the problem of designing the user trajectory in a device-to-device setting. The theoretical results are backed by detailed numerical tests that establish the efficacy of the proposed algorithms under various settings.
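As an illustration of the third part, here is a minimal online gradient descent tracker for a slowly moving least-squares target; the target path, step size, and horizon are illustrative assumptions, and the accumulated per-step loss plays the role of dynamic regret (the per-step optimum $x = \theta_t$ has zero loss).

```python
import numpy as np

# Online gradient descent on a time-varying objective f_t(x) = ||x - theta_t||^2,
# where theta_t is a target moving slowly around the unit circle.
T, eta = 500, 0.3
x = np.zeros(2)                              # tracker state
dyn_regret = 0.0
for t in range(T):
    theta = np.array([np.cos(0.01 * t), np.sin(0.01 * t)])  # moving target
    grad = 2.0 * (x - theta)                 # gradient of f_t at the current x
    x = x - eta * grad                       # one (possibly inexact) gradient step
    dyn_regret += np.sum((x - theta) ** 2)   # loss vs. per-step optimum f_t(theta) = 0
```

Because the target drifts slowly relative to the step size, the accumulated dynamic regret stays far below the no-learning baseline of $\sum_t \|\theta_t\|^2 = T$.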
04/18 Viveck Cadambe (Pennsylvania State University), Coded Matrix Multiplications
ABSTRACT: There is a lot of recent interest in the emerging area of coded computing, which uses redundancy to mitigate stragglers and errors in distributed data processing. This talk will present some ideas and results in this area. Specifically, given two matrices $A$, $B$, we consider the problem of computing the product $AB$ over $P$ processing nodes in a fault-tolerant manner. We assume that the $i$-th node, for $i = 1,2,\ldots,P$, receives $g_i(A)$ and $h_i(B)$, where $g_i, h_i$ are linear operators on the space of matrices, and each node computes the product of the matrices it receives, i.e., $g_i(A) h_i(B)$. The functions are designed so that the product $AB$ is recoverable from any $K$ of the $P$ nodes; for a given scheme, we refer to the smallest such $K$ as the recovery threshold of the scheme.
We will survey results related to the minimum known recovery threshold under various restrictions/constraints on the functions $g_i$, $h_i$. We will describe several schemes via polynomial-evaluation-based encoding and polynomial-interpolation-based decoding. We first present conceptually simple code constructions that work well for small $P$ but perform poorly in larger systems due to numerical precision errors. We then present some of our recent results in numerical analysis and show how the conceptually simple code constructions can be modified to resolve some of these precision errors.
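A small numerical sketch of a polynomial-evaluation scheme of the kind surveyed, under assumed parameters: $A$ split into two row blocks, $B$ into two column blocks, giving recovery threshold $K=4$ with $P=6$ nodes, over the reals.

```python
import numpy as np

m_blk, n_blk = 2, 2
K = m_blk * n_blk                        # recovery threshold: any 4 products suffice
P = 6                                    # total worker nodes

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))

A0, A1 = np.split(A, m_blk, axis=0)      # row blocks of A
B0, B1 = np.split(B, n_blk, axis=1)      # column blocks of B

xs = np.arange(1.0, P + 1)               # distinct evaluation points x_1, ..., x_P

# Node i gets g_i(A) = A0 + A1 x_i and h_i(B) = B0 + B1 x_i^2, and computes
# g_i(A) h_i(B) = A0 B0 + A1 B0 x_i + A0 B1 x_i^2 + A1 B1 x_i^3.
products = [(A0 + A1 * x) @ (B0 + B1 * x**2) for x in xs]

# Decode from any K nodes: interpolate the degree-(K-1) matrix polynomial
# entrywise by solving a Vandermonde system.
survivors = [0, 2, 3, 5]                 # the remaining nodes straggle
V = np.vander(xs[survivors], K, increasing=True)
rhs = np.stack([products[i] for i in survivors]).reshape(K, -1)
coeffs = np.linalg.solve(V, rhs).reshape(K, 2, 2)

# The coefficient of x^{j+2k} is A_j B_k, so the blocks of AB reassemble as:
C = np.block([[coeffs[0], coeffs[2]],
              [coeffs[1], coeffs[3]]])
```

The Vandermonde solve in the decoder is exactly where the numerical-conditioning issues mentioned above arise: as $P$ (and hence the polynomial degree) grows, real-valued Vandermonde systems become severely ill-conditioned, which motivates the numerically robust modifications discussed in the talk.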
05/02 Ashutosh Nayyar (University of Southern California), Decentralized control over unreliable communication links
ABSTRACT: Decentralized control problems have been a topic of significant research interest due to their relevance to multi-agent systems and large-scale distributed systems. The design of optimal decentralized control strategies has been investigated under various models for inter-controller communication, such as graph-based communication models and communication with delays. A common feature of much of the prior work is that the underlying communication structure of the decentralized system is assumed to be fixed and unchanging. For example, several works assume a fixed communication graph among controllers whose edges describe perfect communication links between controllers. Similarly, when the communication graph incorporates delays, the delays are assumed to be fixed and known. This is a key limitation, since in many situations communication among controllers may suffer from imperfections such as random packet loss and random packet delays. These imperfections introduce a new layer of uncertainty in the information structure that is not present in the models considered in prior work. In this talk, we will describe a decentralized LQG control problem where some of the communication links suffer from random packet loss. We will first identify optimal decentralized control strategies for the finite-horizon version of our problem. We will then discuss the infinite-horizon problem and show that there are critical thresholds for the packet loss probabilities, above which no strategy can achieve finite cost and below which optimal strategies can be explicitly identified.