**Communication, Control and Signal Processing Seminar**

**Abstracts of talks**

*Fall 2012*

**09/20 Himanshu Tyagi (UMD), How many queries will resolve common randomness?**

ABSTRACT: A set of terminals observing correlated signals communicate
among themselves to agree on a sequence of shared bits. What is the
largest number of queries that must be asked by an observer of this
communication to learn the shared bits? We provide an answer to this
question for a general set of correlated signals. When the individual signals
at each terminal are i.i.d., our answer takes a computable single-letter form.
Examples include: querying a biometric signature or a hardware signature, based on
helper data.

This is a joint work with Prakash Narayan.
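The notion of "resolving" the shared bits by queries can be seen in miniature. The toy sketch below (our illustration, not the paper's construction) has an observer ask "is the key x?" for one candidate value after another; when the key is nearly uniform given the observed communication, resolving a k-bit key takes on the order of 2^k queries.

```python
import itertools

# Toy sketch (our illustration, not the paper's construction): an
# observer queries candidate values of a k-bit shared key one by one
# until it finds the key.
def queries_to_resolve(key, candidates):
    """Count the queries ("is the key x?") until the key is found."""
    for n, guess in enumerate(candidates, start=1):
        if guess == key:
            return n

k = 3
key = (1, 0, 1)
candidates = itertools.product([0, 1], repeat=k)  # observer's query order
print(queries_to_resolve(key, candidates))        # here: the 6th query succeeds
```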

**09/27 Woomyoung Park (UMD), New results on q-ary polar codes, q=2^r**

ABSTRACT: Polarization is a new concept in information theory, discovered in the
context of constructing capacity-achieving families of codes for symmetric memoryless channels.
For channels with input alphabet of size q=2^r, repeated application of the kernel H_2
causes the virtual channels that arise in the process of channel evolution to converge
to one of (r+1) extremal configurations, in which j out of the r bits are transmitted
nearly perfectly while the remaining (r-j) bits carry almost no information. In this talk,
we find a kernel such that some of the intermediate "levels" disappear in the polarization limit.

This is a joint work with Alexander Barg.
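The binary case (r=1) is easy to observe numerically. In the sketch below (our illustration, using Arikan's kernel on the binary erasure channel), the Bhattacharyya (erasure) parameter z of a virtual channel splits into a worse channel 2z − z² and a better channel z² at each level, and the virtual channels cluster at the two extremes; in the q=2^r setting of the talk the evolution can also settle at intermediate levels.

```python
# Density-evolution sketch of binary polarization on the BEC: the
# erasure parameter z of each virtual channel splits into a worse
# channel (2z - z^2) and a better channel (z^2) at every level.
def polarize(z0, levels):
    zs = [z0]
    for _ in range(levels):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z,   # worse
                                         lambda z: z*z)]        # better
    return zs

zs = polarize(0.5, 10)       # the 2^10 virtual channels of BEC(1/2)
print(min(zs), max(zs))      # extremes: nearly perfect / nearly useless
print(sum(zs) / len(zs))     # mean erasure rate stays 0.5 (martingale)
```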

**10/04, 10/11 Talha Cihad Gulcu (UMD), Spatially Coupled Ensembles Universally Achieve Capacity Under Belief Propagation**

ABSTRACT: This talk is based on the paper of the same title by S. Kudekar,
T. Richardson, and R. Urbanke (January 2012), which investigates
spatially coupled code ensembles.
It is shown that, given a desired error probability and
a gap to capacity, it is possible to construct a
spatially coupled ensemble satisfying these
constraints universally over the class of binary-input
memoryless output-symmetric channels under
belief propagation decoding.
In fact, most codes in that ensemble have this property.
The qualifier "universal" refers to a single
ensemble/code that is good for all channels in the class,
although the channel is assumed to be known at the receiver.
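The underlying threshold-saturation effect is easy to observe numerically on the binary erasure channel. The sketch below (our illustration, using the standard density-evolution recursions for the uncoupled (3,6)-regular ensemble and a terminated (3,6,L,w) coupled chain) picks an erasure rate at which uncoupled belief propagation fails but the coupled chain decodes, because the known boundary sections seed a decoding wave.

```python
def uncoupled_de(eps, l=3, r=6, iters=5000, tol=1e-7):
    # Density evolution for the (l, r)-regular LDPC ensemble on BEC(eps):
    # x is the erasure probability of a variable-to-check message.
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x)**(r - 1))**(l - 1)
        if x < tol:
            return True
    return False

def coupled_de(eps, L=32, w=3, l=3, r=6, iters=20000, tol=1e-7):
    # Density evolution for the terminated (l, r, L, w) coupled chain:
    # positions outside [0, L-1] are known (erasure probability 0),
    # which starts a decoding wave moving in from both boundaries.
    x = [eps] * L
    for _ in range(iters):
        x = [eps * (sum(1 - (1 - sum(x[i + j - k] for k in range(w)
                                     if 0 <= i + j - k < L) / w)**(r - 1)
                        for j in range(w)) / w)**(l - 1)
             for i in range(L)]
        if max(x) < tol:
            return True
    return False

eps = 0.45   # above the (3,6) BP threshold (~0.4294), below the coupled one
print(uncoupled_de(eps), coupled_de(eps))   # uncoupled fails, coupled decodes
```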

**10/17 Olgica Milenkovic (UIUC), A Constrained-Distance Based Approach to
Social Choice Theory**

This is a joint work with Farzad Farnoud and Behrouz Touri.

**11/12 Rahul Jain (USC), The Art of Gambling in a Team: Multi-player multi-armed bandits**

ABSTRACT: Multi-armed bandits are an elegant model of
learning in an unknown and uncertain environment. Such models are relevant in
many scenarios, and have lately received increased attention due to
various problems of distributed control arising in wireless networks,
pricing models on the internet, etc. We consider the non-Bayesian multi-armed
bandit setting proposed by Lai & Robbins in the mid-80s. There are multiple arms,
each of which generates an i.i.d. reward from an unknown distribution. There are
multiple players, each of whom chooses an arm to play. If two or more
players choose the same arm, they all receive zero reward. The problem is to design
a learning algorithm for the players that results in an orthogonal
matching of players to arms (e.g., of users to channels in wireless networks) and,
moreover, minimizes the expected regret.

We first consider this as an online bipartite matching problem.
We model this combinatorial problem as a classical multi-armed bandit problem,
but with dependent arms, and propose an index-based learning algorithm that
achieves logarithmic regret. From prior results, it is known that this is
order-optimal. We then consider the distributed problem, where players do not
communicate or coordinate in any way. We propose an index-based algorithm that
uses Bertsekas' auction mechanism to determine the bipartite matching. We show
that the algorithm has expected regret at most near-log-squared. This is the
first distributed multi-armed bandit learning algorithm in a general setting.
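The matching step can be done with Bertsekas' auction algorithm. The sketch below is our simplified, centralized rendition with a known payoff matrix (the talk's algorithm runs on learned indices, in a distributed fashion): unassigned players repeatedly bid for their most valuable arm, raising its price by the gap to their second-best option.

```python
def auction_assignment(payoff, eps=0.01):
    """Bertsekas' forward auction for a maximum-payoff assignment.

    payoff[i][j] is player i's (known) value for arm j. For eps small
    enough, the returned assignment is within n*eps of optimal.
    """
    n = len(payoff)
    prices = [0.0] * n        # current price of each arm
    owner = [None] * n        # owner[j]: player currently holding arm j
    assigned = [None] * n     # assigned[i]: arm held by player i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [payoff[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=values.__getitem__)
        second = max((values[j] for j in range(n) if j != best),
                     default=values[best])
        prices[best] += values[best] - second + eps   # the bid increment
        if owner[best] is not None:                   # outbid previous owner
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assigned[i] = best
    return assigned

print(auction_assignment([[10, 1], [1, 10]]))   # [0, 1]
```

Note how the price mechanism resolves contention: in `[[10, 9], [10, 1]]` both players prefer arm 0, but the bidding drives its price up until player 0 switches to arm 1, yielding the higher-total matching [1, 0].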

ABSTRACT: In this presentation we trace the development of coupled sparse codes on graphs, from their beginning as a way of constructing an LDPC convolutional code, by applying an unwrapping procedure to the parity-check matrix of an LDPC block code, to the current perspective of edge-spreading and coupling together a chain of protographs. Encoding and decoding procedures will be reviewed, and asymptotic minimum distance and iterative decoding threshold results will be summarized. The key concept of code termination, which leads to the threshold saturation phenomenon, will be highlighted, and a brief summary of recent advances will be included.

ABSTRACT: MDS array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem: in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct, the rebuilding ratio is 1 (all the remaining information is accessed). However, the interesting and more practical case is when the number of erasures is smaller than the erasure-correcting capability of the code. For example, consider an MDS code that can correct two erasures: what is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem.

In this talk, we present a code construction that resolves this problem and shows that, for the case of a single erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure correcting MDS array codes that has optimal rebuilding ratio of e/r in the case of e erasures, 0 < e < r+1. The code has a variety of important properties, including the use of a small finite field.
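The 1/2 ratio can be seen in miniature. The sketch below is a toy, hypothetical illustration of the access pattern only (this plain XOR code is not MDS; the genuine constructions need a larger finite field): a 2x2 information array is stored with a row-parity column and a "zigzag" parity column, and an erased information column is rebuilt while reading only 3 of the 6 surviving symbols.

```python
import random

# Toy illustration of the rebuilding-ratio idea (access pattern only;
# this XOR code is not MDS, which is why the real constructions use a
# larger finite field). 2x2 information array: columns a and b.
a = [random.randrange(2), random.randrange(2)]   # column to be erased
b = [random.randrange(2), random.randrange(2)]
p = [a[0] ^ b[0], a[1] ^ b[1]]                   # row-parity column
q = [a[0] ^ b[1], a[1] ^ b[0]]                   # zigzag-parity column

# Rebuild column a by reading only b[0], p[0], q[1]: the single element
# b[0] is shared by the row parity p[0] and the zigzag parity q[1].
rebuilt = [p[0] ^ b[0], q[1] ^ b[0]]
assert rebuilt == a
print(3 / 6)    # fraction of the 6 surviving symbols read: 0.5
```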

If time permits, a new type of codes for storage, termed Locally Repairable Codes (LRC), will be presented. These codes address a different metric: minimizing the number of disks to be accessed during the rebuilding process.