Communication, Control and Signal Processing Seminar
Abstracts of talks
Spring 2017
02/09 Abbas Kazemipour (UMD), Fast and Stable Signal Deconvolution via Compressible State-Space Models
ABSTRACT: Common biological measurements are in the form of noisy convolutions of signals of interest with possibly unknown and transient blurring kernels; examples include EEG and calcium imaging data. Signal deconvolution of these measurements is therefore crucial to understanding the underlying biological processes. In this presentation we develop fast and stable solutions for signal deconvolution from noisy, blurred and undersampled data, where the signals are in the form of discrete events distributed in time and space. We introduce compressible state-space models as a framework to model and estimate such discrete events. These state-space models admit abrupt changes in the states, have a convergent transition matrix, and are coupled with compressive linear measurements. We consider a dynamic compressive sensing optimization problem and develop a fast solution, using two nested Expectation-Maximization algorithms, to jointly estimate the states as well as their transition matrices. Under suitable sparsity assumptions on the dynamics, we prove optimal stability guarantees for the recovery of the states. Finally, we present applications to calcium deconvolution and sleep spindle detection, which show significant improvements over existing techniques.
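The full method couples two nested EM algorithms with the compressible state-space model; as a much simpler illustration of the core deconvolution step only, the sketch below recovers sparse events from an AR(1)-blurred, noisy trace by solving an l1-regularized least-squares problem with ISTA. The decay, penalty weight, and noise level are illustrative choices, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
T, a, lam = 200, 0.95, 0.5   # trace length, AR(1) decay, l1 penalty weight

# Ground truth: sparse events passed through a convergent AR(1) transition,
# mimicking a fluorescence trace with fast-onset, slow-decay transients.
s_true = (rng.random(T) < 0.03) * rng.uniform(1.0, 2.0, T)
G = np.tril(a ** np.subtract.outer(np.arange(T), np.arange(T)))  # convolution matrix
y = G @ s_true + 0.1 * rng.standard_normal(T)

# ISTA (proximal gradient) for: min_s 0.5*||y - G s||^2 + lam*||s||_1.
L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the smooth part
s = np.zeros(T)
for _ in range(1000):
    z = s - G.T @ (G @ s - y) / L                            # gradient step
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold

print("true event times:", np.flatnonzero(s_true))
print("estimated times :", np.flatnonzero(s > 0.2))
```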
03/01 Ayfer Ozgur (Stanford)
ABSTRACT: Formulating the problem of determining the communication capacity of channels as a problem in high-dimensional geometry is one of Shannon's most important insights, and one that led to the conception of information theory. In his classic 1949 paper "Communication in the presence of noise," Shannon develops a geometric representation of any point-to-point communication system and provides a geometric proof of the coding theorem for the AWGN channel, where the converse is based on a sphere-packing argument in high-dimensional space. We show that a similar geometric approach can be used to prove converses for network communication problems. In particular, we solve a long-standing open problem posed by Cover as "The Capacity of the Relay Channel" in Open Problems in Communication and Computation (Springer-Verlag, 1987). The key step in our proof is a strengthening of the isoperimetric inequality on a high-dimensional sphere, which we use to develop a packing argument on a spherical cap, similar to Shannon's original approach. We discuss the promise of this geometric approach for solving other open problems in network information theory.
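As a back-of-the-envelope companion to Shannon's sphere-packing picture (not material from the talk): received n-sequences concentrate in a ball of radius sqrt(n(P+N)), decoding balls around codewords have radius sqrt(nN), and the ratio of their volumes caps the number of distinguishable codewords, giving the familiar AWGN rate bound.

```python
import numpy as np

def sphere_packing_rate(P, N):
    """Sphere-packing bound: at most M <= (sqrt(n(P+N)) / sqrt(nN))**n
    disjoint decoding balls fit inside the received ball, so the rate is
    R = log2(M)/n = 0.5*log2(1 + P/N), independent of the blocklength n."""
    return 0.5 * np.log2(1.0 + P / N)

for snr_db in (0, 10, 20):
    P = 10 ** (snr_db / 10)          # signal power for unit noise power
    print(f"SNR {snr_db:2d} dB: rate <= {sphere_packing_rate(P, 1.0):.3f} bits/use")
```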
03/09 Aneesh Raghavan (UMD)
ABSTRACT: In this presentation, we consider the problem of detecting which of two Markov chain models generated an observed time series. The state of the Markov chain cannot be observed directly; only a function of the state can be observed. Using these observations, the aim is to determine which of the two Markov chains generated them. We consider two observers, each observing a function of the state of the Markov chains, and formulate a binary hypothesis testing problem for each observer. Each observer makes a decision on the hypothesis based on its own observations. Observer 1 then communicates its decision to observer 2, and vice versa. If the decisions are the same, consensus has been achieved; if they differ, the binary hypothesis testing continues. This process is repeated until consensus is achieved. We solve the binary hypothesis testing problem and prove the convergence of the consensus algorithm.
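A toy version of the setup, under illustrative assumptions (two-state chains, binary noisy observations, a growing observation window): each observer scores its own observation sequence under both hypothesized chains with the forward algorithm, and the observers exchange decisions until they agree. This sketches the flavor of the scheme, not the algorithm analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_loglik(obs, P, E, pi):
    """Log-likelihood of an observation sequence under a hidden Markov
    chain with transition P, emission E, and initial distribution pi."""
    alpha = pi * E[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ P) * E[:, o]
        s = alpha.sum(); ll += np.log(s); alpha /= s
    return ll

P0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothesis 0 chain
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])   # hypothesis 1 chain
E1 = np.array([[0.8, 0.2], [0.3, 0.7]])   # observer 1's noisy view of the state
E2 = np.array([[0.6, 0.4], [0.1, 0.9]])   # observer 2's noisy view of the state
pi = np.array([0.5, 0.5])

# Simulate the true chain (hypothesis 0) and both observation streams.
x = [0]
for _ in range(499):
    x.append(rng.choice(2, p=P0[x[-1]]))
y1 = np.array([rng.choice(2, p=E1[s]) for s in x])
y2 = np.array([rng.choice(2, p=E2[s]) for s in x])

# Each observer tests H0 vs H1 on a growing window until decisions agree.
t = 50
while t <= 500:
    d1 = int(forward_loglik(y1[:t], P1, E1, pi) > forward_loglik(y1[:t], P0, E1, pi))
    d2 = int(forward_loglik(y2[:t], P1, E2, pi) > forward_loglik(y2[:t], P0, E2, pi))
    if d1 == d2:
        print(f"consensus on H{d1} after {t} samples")
        break
    t += 50   # disagreement: keep observing and test again
```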
03/16 Ajaykrishnan Nageswaran (UMD)
ABSTRACT: We consider the Bayesian parameter estimation problem where the value of a finite-valued parameter X is to be decided on the basis of n i.i.d. samples Y^n. In this context, it is well known that the minimum possible probability of error in estimating X is achieved by the maximum a posteriori probability (MAP) estimator. Also, the amount of missing information about X after observing Y^n may be evaluated by the posterior entropy of X given Y^n.
In this talk, the asymptotic relation between the posterior entropy, the MAP error probability, and the sample size will be discussed. It will be shown that the posterior entropy and the MAP error probability decay to zero with the sample size at an identical exponential rate, and that the maximum achievable exponent for this decay is determined by the minimum Chernoff information over all possible pairs of distinct parameter values.
This talk is based on "The Asymptotics of Posterior Entropy and Error Probability for Bayesian Estimation," by F. Kanaya and T. S. Han.
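A small numerical illustration of the exponent result, with distributions and sample sizes chosen for illustration only: the empirical exponent of the MAP error approaches the Chernoff information between the two conditional distributions.

```python
import numpy as np

def chernoff_information(P, Q):
    """C(P,Q) = max over 0 <= t <= 1 of -log sum_y P(y)^t Q(y)^(1-t),
    evaluated here by a simple grid search."""
    ts = np.linspace(0.0, 1.0, 1001)
    return max(-np.log(np.sum(P**t * Q**(1.0 - t))) for t in ts)

# Binary parameter X with uniform prior; Y ~ P if X = 0, Y ~ Q if X = 1.
P = np.array([0.7, 0.3]); Q = np.array([0.4, 0.6])
C = chernoff_information(P, Q)

rng = np.random.default_rng(0)
for n in (20, 50, 100):
    errors, trials = 0, 5000
    for _ in range(trials):
        x = rng.integers(2)
        y = rng.choice(2, size=n, p=(P if x == 0 else Q))
        # Uniform prior, so MAP reduces to maximum likelihood.
        map_est = int(np.log(Q[y]).sum() > np.log(P[y]).sum())
        errors += (map_est != x)
    exponent = -np.log(max(errors, 1) / trials) / n
    print(f"n={n:3d}: empirical exponent {exponent:.3f}  (Chernoff info {C:.3f})")
```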
04/06 Proloy Das (UMD)
ABSTRACT: Classical non-parametric spectral analysis uses sliding windows to capture the dynamic nature of real-world time series. This widely used approach fails to exploit the temporal continuity in the data and is not well-suited for signals with highly structured time-frequency representations. For a time series whose time-varying mean is the superposition of a small number of oscillatory components, we formulate non-parametric batch spectral analysis as a Bayesian estimation problem. We introduce prior distributions on the time-frequency plane that yield maximum a posteriori (MAP) spectral estimates that are continuous in time yet sparse in frequency. Our spectral decomposition procedure, termed spectro-temporal pursuit, can be efficiently computed using an iteratively re-weighted least-squares algorithm and scales well with increasing data length. We show that spectro-temporal pursuit works by applying to the time series a set of data-derived filters. Using a link between Gaussian mixture models, \ell_{1} minimization, and the expectation-maximization algorithm, we prove that spectro-temporal pursuit converges to the global MAP estimate.
This talk is based on the paper "Robust spectrotemporal decomposition by iteratively re-weighted least squares," by Demba Ba, Behtash Babadi, Patrick L. Purdon, and Emery N. Brown.
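The paper's estimator is computed by iteratively re-weighted least squares (IRLS); the sketch below applies the same re-weighting idea in stripped-down form, recovering a frequency-sparse representation of a noisy two-tone signal over a cosine/sine dictionary. The dictionary, penalty weight, and iteration count are illustrative assumptions, and this omits the paper's temporal-continuity prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
y = (np.sin(2*np.pi*10*t/n) + 0.5*np.sin(2*np.pi*37*t/n)
     + 0.3*rng.standard_normal(n))           # two tones plus noise

# Real cosine/sine dictionary: columns are the oscillatory components.
freqs = np.arange(1, n//2)
A = np.hstack([np.cos(2*np.pi*np.outer(t, freqs)/n),
               np.sin(2*np.pi*np.outer(t, freqs)/n)])

# IRLS: re-weighted ridge regression approximating an l1 penalty, which
# drives most frequency coefficients to (near) zero.
lam, eps = 1.0, 1e-6
x = np.zeros(A.shape[1])
for _ in range(30):
    w = 1.0 / (np.abs(x) + eps)              # weights from the current iterate
    x = np.linalg.solve(A.T @ A + lam*np.diag(w), A.T @ y)

amps = np.hypot(x[:len(freqs)], x[len(freqs):])   # per-frequency amplitude
print("dominant frequencies:", freqs[np.argsort(amps)[-2:]])
```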
04/13 Min Ye (UMD)
ABSTRACT: We consider the minimax estimation of a discrete distribution with support size $k$ under privacy constraints. A privatization scheme is applied to each raw sample independently, and we need to estimate the distribution of the raw samples from the privatized samples. A positive number $\epsilon$ measures the privacy level of a privatization scheme. For a given $\epsilon$, we consider the problem of constructing optimal privatization schemes with $\epsilon$-privacy level, i.e., schemes that minimize the expected estimation loss for the worst-case distribution. Two schemes in the literature provide order-optimal performance in the high privacy regime where $\epsilon$ is very close to 0, and in the low privacy regime where $e^{\epsilon}\approx k$, respectively.
In this talk, we propose a new family of schemes which substantially improve the performance of the existing schemes in the medium privacy regime where $1 \ll e^{\epsilon} \ll k$. More concretely, we prove that when $3.8 < \epsilon < \ln(k/9)$, our schemes reduce the expected estimation loss by 50% under the $\ell_2^2$ metric and by 30% under the $\ell_1$ metric over the existing schemes. We also prove a lower bound for the regime $e^{\epsilon} \ll k$, which implies that our schemes are order-optimal in this regime.
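For concreteness, here is one standard privatization scheme of the kind the talk benchmarks against: k-ary randomized response, which reports the true symbol with probability e^eps/(e^eps + k - 1) and a uniformly random other symbol otherwise, followed by the unbiased channel inversion. Parameter values are illustrative; this is not the new family of schemes proposed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
k, eps, n = 10, 1.0, 200_000
p_true = rng.dirichlet(np.ones(k))           # unknown raw-sample distribution

# k-ary randomized response: keep the true symbol w.p. e^eps/(e^eps + k - 1),
# otherwise report one of the other k-1 symbols uniformly at random.
e = np.exp(eps)
keep = e / (e + k - 1)
x = rng.choice(k, size=n, p=p_true)          # raw samples
flip = rng.random(n) > keep
noise = rng.integers(k - 1, size=n)
z = np.where(flip, (x + 1 + noise) % k, x)   # privatized samples

# Invert the channel q_j = ((e-1) p_j + 1) / (e + k - 1) for an unbiased
# estimate (it may fall slightly outside the simplex at finite n).
q = np.bincount(z, minlength=k) / n
p_hat = (q * (e + k - 1) - 1) / (e - 1)
print("l1 estimation error:", np.abs(p_hat - p_true).sum())
```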
04/20 Debdipta Goswami (UMD)
ABSTRACT: Traditionally, dynamical systems are described in terms of trajectories defined by a flow function or iterative map on a finite-dimensional space. There is, however, an alternative, operator-theoretic framework that relies on (linear) operators acting on infinite-dimensional function spaces. Operator-theoretic methods work by embedding finite-dimensional dynamics in an infinite-dimensional function space in which functions evolve under a linear operator. One such operator is the Koopman operator, which acts on an observable function to describe its evolution along the trajectories of the original system.
This presentation considers the problem of global bilinearization of the drift and control vector fields of a control-affine system. The proposed method uses the Koopman Canonical Transform, built from the Koopman eigenfunctions of the drift vector field, to transform the dynamics into a bilinear system under certain assumptions. Bilinearity is ensured by projecting the Koopman operator associated with the control vector fields onto the eigenspace of the drift Koopman operator. The resulting bilinear system is then subjected to controllability analysis using the Myhill semigroup method and Lie algebraic structures.
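As a minimal illustration of the operator-theoretic viewpoint (standard EDMD on a hand-picked dictionary, not the Koopman Canonical Transform of the talk): for the quadratic map below, the observables (x1, x2, x1^2) evolve exactly linearly, so a least-squares fit recovers a finite-dimensional Koopman matrix.

```python
import numpy as np

# Nonlinear map: x1 <- a*x1, x2 <- b*x2 + c*x1^2. Over the dictionary
# (x1, x2, x1^2) the dynamics are exactly linear in observable space.
a, b, c = 0.9, 0.5, 1.0
step = lambda s: np.array([a*s[0], b*s[1] + c*s[0]**2])
psi  = lambda s: np.array([s[0], s[1], s[0]**2])     # observable dictionary

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                     # sampled states
Y = np.array([step(s) for s in X])                   # their one-step images

# EDMD least squares: find K with psi(x_next) ~ K @ psi(x).
Px = np.array([psi(s) for s in X])
Py = np.array([psi(s) for s in Y])
K = np.linalg.lstsq(Px, Py, rcond=None)[0].T
print(np.round(K, 3))    # rows recover [a,0,0], [0,b,c], [0,0,a^2]
```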
06/12 Amin Rahimian (UPenn/MIT)
ABSTRACT: Many important real-world decision-making problems involve group interactions among individuals with purely informational externalities. Such situations arise, for example, in jury deliberations, expert committees, and medical diagnoses. We model the purely informational interactions of group members, where they receive private information and act based on that information while also observing other people's beliefs or actions.
In the first part of the talk, we address the computations that a Bayesian agent should undertake to realize the optimal actions at every decision epoch. We use iterated elimination of infeasible signals (IEIS) to model the thinking process, as well as the calculations, of a Bayesian agent in a group decision scenario. We show that the IEIS algorithm runs in exponential time; however, when the group structure is a partially ordered set, the Bayesian calculations simplify and polynomial-time computation of the Bayesian recommendations is possible. We further analyze the computational complexity of Bayesian belief formation in groups and show that it is NP-hard. We investigate the factors underlying this computational complexity and show how belief calculations simplify in special network structures or in cases with strong inherent symmetries. We finally give insights about the statistical efficiency (optimality) of the beliefs and its relation to computational efficiency.
In the second part of the talk, we propose a model of inference for heuristic decision-making that is rooted in Bayes' rule but avoids the complexities of rational inference in group interactions. Accordingly, the group members behave rationally at the initiation of their interactions with each other; however, in the ensuing decision epochs, they rely on heuristics that replicate their experiences from the first stage and can be justified as optimal responses to simplified versions of their complex environments. We study how the information structure, together with the properties of the probability distributions, determines the structure of the so-called "Bayesian heuristics" that the agents follow in this model. We then analyze the group decision outcomes in two classes of linear action updates and log-linear belief updates, and show that many inefficiencies arise in group decisions as a result of repeated interactions between individuals, leading to overconfident beliefs as well as choice shifts toward extreme actions. Nevertheless, balanced regular structures demonstrate a measure of efficiency in terms of aggregating the initial information of individuals. We end the talk by describing experimental setups that test which of the two models, rational or heuristic, is the better descriptor of actual human behavior in groups.
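For reference, the simplest member of the linear-action-update class is DeGroot-style repeated averaging; in the sketch below (weights and initial actions are arbitrary illustrative choices, not the talk's Bayesian heuristics), repeated interaction drives the group to a consensus that aggregates the agents' initial information.

```python
import numpy as np

# DeGroot-style linear action updates on a network: each agent repeatedly
# replaces its action with a weighted average of its neighbors' actions.
W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])      # row-stochastic influence weights
x = np.array([1.0, 0.0, 0.5])       # initial actions from private signals
for _ in range(50):
    x = W @ x
print(x)   # consensus value: a weighted average of the initial actions
```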