Communication, Control and Signal Processing Seminar
Abstracts of talks
Spring 2013
02/07 Marcelo Firer (U. Campinas, Brazil), Coding in the Presence of Semantic Value of Information
ABSTRACT:
This talk will explore possibilities for coding when errors have different
(semantic) values. The main idea is to exchange the usual question of
"minimizing the quantity of errors" by "minimizing the importance of errors".
We introduce the basic mathematical formulation of the problem, present
some existential results that justify this approach, and then explore initial
strategies for the encoding-decoding process.
02/21 Itzhak Tamo (UMD), What I heard at the ITA
ABSTRACT:
A short informal discussion of a few talks I heard in CA
last week.
http://ita.ucsd.edu/index.php
03/20 Vijay Subramanian (Northwestern University), Network Games: Spotting Trendsetters and Bargaining With Middlemen
ABSTRACT:
Network games provide a basic framework for studying the
diffusion of new ideas or behaviors through a population. In these models,
agents decide to adopt a new idea based on optimizing pay-off that depends
on the adoption decisions of their neighbors in an underlying network. After
introducing the model, we present results for two problems.
The first considers the problem of inferring early adopters or first
movers given a snapshot of the adoption state at a given time. We present
some results on solving this problem in what is known as the low temperature
regime. We conclude the first part with a discussion on reducing the
complexity of such inference problems for large networks.
The second considers a dynamic and decentralized market modeled by a
non-cooperative networked bargaining game. Our goal is to study how the
network structure of the market and the role of middlemen influence the
market's efficiency and fairness. We introduce the concept of limit stationary
equilibrium in a general trading network and use it to analyze how
endogenous delay emerges in trade and how surplus is shared between sellers
and buyers.
03/26 Sidharth Jaggi (CUHK), Robust sparse recovery and applications: order-optimal measurements and complexity
ABSTRACT:
Sparse recovery problems are usually posed in a setting in which a vector x of ambient dimension n
has only k "significant" entries. This vector x is "compressively measured" (with possibly "noisy"
measurements) as y, where y has just m << n components, and the goal is to reconstruct (a good
approximation to) x.
We consider three sparse recovery problems:
* Compressive Sensing: A linear problem in which y is a (possibly noisy) linear transform of x,
a vector in R^n.
* Group Testing: A non-linear problem in which y is a binary vector comprising (possibly noisy)
disjunctive measurements of a binary vector x.
* Network tomography: A linear problem in which x, a vector in R^n, denotes network parameters
(such as edge/node delays) to be estimated via constrained, end-to-end measurements
(such as path delays).
In each of these cases we present sparse recovery algorithms that have good reconstruction
performance, and have information-theoretically order-optimal decoding complexity and number of
measurements (O(k) measurements and decoding complexity for compressive sensing and network
tomography, and O(k log(n)) measurements and decoding complexity for group testing.)
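The abstract does not spell out the order-optimal algorithms themselves; as a generic illustration of the compressive-sensing setup described above, here is a minimal sketch using Orthogonal Matching Pursuit, a standard greedy decoder (not the O(k) method of the talk), with made-up dimensions n, m, k:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4           # ambient dimension, measurements, sparsity

# Ground-truth k-sparse signal x and a random Gaussian sensing matrix A.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                      # noiseless linear measurements, m << n

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        residual = y - A[:, S] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat

x_hat = omp(A, y, k)
# Exact recovery is typical at these (generous) dimensions.
print("recovery error:", np.linalg.norm(x_hat - x))
```

OMP costs O(kmn) here, far from the order-optimal complexity claimed in the talk; it is shown only to make the measurement model y = Ax concrete.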
03/28, 04/04 Sikai Qu (UMD), Introduction to the Hammersley-Clifford Theorem
ABSTRACT:
A Markov random field (or Markov network) is a set of random variables having the Markov property
described by an undirected graph; i.e., the dependency structure induces the neighborhood structure of the graph.
According to the Hammersley-Clifford theorem, a Markov random field can be represented by a Gibbs measure,
which allows us to factorize the probability distribution of the Markov random field according to the
cliques of the graph. In the seminar, I will give the definitions of Markov random fields and Gibbs distributions
and then prove the Hammersley-Clifford theorem. One application of this theorem is that realizations
of some network models can be characterized by the numbers of triangles and stars.
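As a toy illustration of the Gibbs factorization (not taken from the talk), the sketch below builds a distribution on the path graph 0 -- 1 -- 2 from two hypothetical edge-clique potentials and checks numerically that the Markov property the theorem guarantees actually holds:

```python
import itertools
import numpy as np

# Hypothetical clique potentials on the path graph 0 -- 1 -- 2.
# The Gibbs measure factorizes over the (edge) cliques:
#   p(x0, x1, x2) proportional to psi01(x0, x1) * psi12(x1, x2)
rng = np.random.default_rng(1)
psi01 = rng.uniform(0.1, 1.0, size=(2, 2))
psi12 = rng.uniform(0.1, 1.0, size=(2, 2))

p = np.zeros((2, 2, 2))
for x0, x1, x2 in itertools.product((0, 1), repeat=3):
    p[x0, x1, x2] = psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()

# Markov property implied by the factorization: X0 is independent of X2
# given X1, i.e. p(x0 | x1, x2) does not depend on x2.
for x1 in (0, 1):
    cond_given_x2_0 = p[:, x1, 0] / p[:, x1, 0].sum()
    cond_given_x2_1 = p[:, x1, 1] / p[:, x1, 1].sum()
    assert np.allclose(cond_given_x2_0, cond_given_x2_1)
print("X0 independent of X2 given X1, as the factorization predicts")
```

The theorem's content is the converse direction: any positive distribution with this Markov property admits such a clique factorization.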
04/11 Praneeth Boda (UMD), A Decomposition theorem for Binary Markov Random Fields
ABSTRACT:
In this talk, the Hajek-Berger [1987] paper on a decomposition
theorem for binary Markov random fields will be discussed.
A Markov random field is a set of random variables having Markov property described
by an undirected graph. The random field considered here is a stationary binary Markov
random field whose neighborhood structure is specified by a countable graph with nodes of
uniformly bounded degree. Under a minimal assumption, Hajek and Berger [1987] have
shown that such a Markov random field can be represented as the node-wise modulo 2 sum
of two independent binary random fields, one of which is white binary noise of positive
weight. This decomposition can be used to compute an exact expression for the per-site
rate-distortion function of the random field over an interval of distortions not exceeding
this weight.
04/18 Isaak Mayergoyz (UMD), Riemann Hypothesis and Plasmon Resonances
ABSTRACT:
Some very basic facts concerning the Riemann hypothesis for the zeta function will be first
outlined. Then, the possible connection between the Riemann hypothesis and the spectral
analysis of plasmon resonances in nano-wires will be explored. Finally, the approach to
the Riemann hypothesis based on a theorem of J. Grommer will be presented.
04/23 Venu Veeravalli (UIUC), Data-Efficient Quickest Change Detection
ABSTRACT:
The problem of detecting abrupt changes or anomalies in
stochastic systems and time series, often referred to as the
quickest change detection (QCD) problem, arises in various
branches of science and engineering. Applications
include infrastructure monitoring, quality control engineering,
intrusion detection in computer networks and security systems,
detection of the onset of an epidemic, biomedical signal and
image processing, and financial markets. The classical QCD
problem is formulated as one where there is a sequence
of observations whose distribution changes at an unknown
time, and the goal is to detect the change as quickly as
possible, subject to a false alarm constraint. In the first half
of this talk, we provide a brief overview of the classical
QCD problem and its solutions. In many engineering applications
of QCD, there may be a cost (e.g., energy) associated
with acquiring the observations. In the second half of
the talk, we consider the QCD problem with an additional
constraint on the cost of the observations used in the
detection process, and we develop QCD algorithms
that are data-efficient or energy-efficient. We demonstrate
the working of one of these algorithms on a mote with a light sensor.
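The abstract does not name its algorithms; the classical baseline for the QCD problem it describes is Page's CUSUM test. Below is a minimal sketch on synthetic Gaussian data with an assumed known post-change mean (all parameters invented for illustration, and no data-efficiency constraint):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stream: N(0, 1) before the change at t = 200, N(0.5, 1) after.
change_point, mu1 = 200, 0.5
xs = np.concatenate([rng.normal(0.0, 1.0, change_point),
                     rng.normal(mu1, 1.0, 300)])

def cusum(xs, mu1, threshold):
    """Page's CUSUM for a known Gaussian mean shift (sigma = 1):
    W_n = max(0, W_{n-1} + log f1(x_n)/f0(x_n)); alarm when W_n > threshold."""
    w = 0.0
    for n, x in enumerate(xs):
        llr = mu1 * x - mu1**2 / 2      # log-likelihood ratio of one sample
        w = max(0.0, w + llr)
        if w > threshold:
            return n                    # alarm time
    return None

alarm = cusum(xs, mu1, threshold=8.0)
print("alarm raised at sample", alarm)
```

Raising the threshold lowers the false-alarm rate at the cost of a longer detection delay; the data-efficient variants in the talk additionally decide, online, which samples to acquire at all.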
05/02 Daniel Cullina (UIUC), Upper Bounds on the Size of Deletion Correcting Codes
ABSTRACT:
We consider the combinatorial (as opposed to probabilistic) variant of the
deletion/insertion channel. It is well known that a code capable of correcting
s deletions can also correct any combination of s total deletions and insertions.
Somewhat surprisingly, channels performing different mixtures of deletions
and insertions lead to different upper bounds on code size. This observation
allows an improvement upon Levenshtein’s asymptotic upper bound on
the size of an s-deletion correcting code. Levenshtein’s bound comes from
the channel performing only deletions, but whenever the number of errors
is larger than the alphabet size it is better to consider a mixture of insertions
and deletions.
This talk will cover some basics of deletion/insertion channels, the technique,
applicable to any combinatorial channel, that leads to both the new
and old upper bounds, and some of the intermediate results required for
the new upper bound.
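The abstract concerns upper bounds, not constructions; for concreteness, a classical single-deletion-correcting code is the Varshamov-Tenengolts code VT_0(n). The sketch below (not from the talk) enumerates VT_0(7) and verifies the defining combinatorial property: single-deletion balls of distinct codewords are disjoint.

```python
from itertools import product

def vt0(n):
    """Varshamov-Tenengolts code VT_0(n): binary words x of length n with
    sum of i * x_i congruent to 0 mod (n + 1) -- corrects one deletion."""
    return [w for w in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(w, start=1)) % (n + 1) == 0]

def deletion_ball(w):
    """All words obtainable from w by deleting exactly one symbol."""
    return {w[:i] + w[i + 1:] for i in range(len(w))}

n = 7
code = vt0(n)
balls = [deletion_ball(w) for w in code]
# A code corrects one deletion iff these balls are pairwise disjoint.
ok = all(balls[i].isdisjoint(balls[j])
         for i in range(len(code)) for j in range(i + 1, len(code)))
print(len(code), "codewords; single-deletion balls disjoint:", ok)
```

Counting how many such disjoint balls fit inside the set of length-(n-1) words is exactly the packing argument behind Levenshtein-style upper bounds; the talk's improvement comes from choosing a more favorable mixture of deletions and insertions for the channel used in that argument.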
05/08 Rajesh Sundaresan (IISC), Recursive Distributional Equations, Endogeny, and Belief Propagation via Three Examples
ABSTRACT:
In this talk, I will introduce the notion of a recursive distributional
equation (RDE) via three combinatorial optimization problems -- matching,
edge-cover, and traveling salesman -- in the random mean field setting. I
will then highlight a property called endogeny, introduced by Aldous and
Bandyopadhyay, for capturing the notion of correlation decay. On the three
examples, I will then indicate how the RDE and endogeny could be useful in
coming up with and establishing validity of belief propagation equations.
The approach works for the matching and edge-cover problems, but endogeny
and validity of belief propagation remain open for the traveling salesman
problem. The talk will be based on joint work with Mustafa Khandwawala on
the edge-cover problem.
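The RDE machinery itself is beyond a short sketch; as a minimal reminder of why belief propagation equations are plausible in the first place, the following checks that sum-product messages recover an exact marginal on a tree (potentials are made up; establishing the validity of the analogous equations on random mean-field instances is precisely the talk's subject):

```python
import itertools
import numpy as np

# Sum-product (belief propagation) on the small tree 0 -- 1 -- 2,
# with arbitrary hypothetical pairwise potentials.
rng = np.random.default_rng(3)
psi01 = rng.uniform(0.1, 1.0, size=(2, 2))
psi12 = rng.uniform(0.1, 1.0, size=(2, 2))

# Leaf-to-root messages into node 1, then the belief at node 1.
m0_to_1 = psi01.sum(axis=0)        # sum over x0 of psi01[x0, x1]
m2_to_1 = psi12.sum(axis=1)        # sum over x2 of psi12[x1, x2]
belief1 = m0_to_1 * m2_to_1
belief1 /= belief1.sum()

# Brute-force marginal of X1 for comparison.
p = np.zeros((2, 2, 2))
for x0, x1, x2 in itertools.product((0, 1), repeat=3):
    p[x0, x1, x2] = psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()
marg1 = p.sum(axis=(0, 2))

assert np.allclose(belief1, marg1)
print("BP belief matches the exact marginal on a tree")
```

On trees this agreement is automatic; on locally tree-like random instances it must be argued, and endogeny of the associated RDE is one route to such an argument.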
05/23 Ufuk Topcu (UPENN), Networked, Information-Based Systems: Synthesis of Correct-By-Construction Control Protocols
ABSTRACT:
How can we affordably build trustworthy networked,
information-based systems? The talk begins with an interpretation of
this question in the context of autonomous systems and more-electric
air vehicles. Partly motivated by the difficulty and cost of
establishing trust in these applications, I describe a shift from the
traditional "design+verify" approach to "specify+synthesize." I then
discuss our recent advances in automated, correct-by-construction
synthesis of embedded control protocols. I present two results on
receding horizon temporal logic planning and distributed synthesis.
These results combine ideas from control theory with those from
computer science, and exploit underlying system-theoretic
interpretations to alleviate the resulting computational complexity. I
conclude with an outlook on future research opportunities in autonomy,
advanced vehicular systems, and energy networks.