CCSP Seminar

The online CCSP Seminar, on recent advances in communication, control, and signal processing at large, is a teaching seminar. Each invited speaker is asked to present a lecture (60 to 90 minutes) that describes just one or two mathematical techniques and just as many key results. The lecture is given at a Zoom whiteboard in classroom fashion, at classroom pace, and is recorded for open access if the speaker so desires.
http://ccsp.ece.umd.edu//
Mitigating Coherent Noise in Quantum Computing using the Classical MacWilliams Identities

<p>Noise in quantum systems can be coherent or stochastic (incoherent). The former is more damaging, since such noise can accumulate in one direction and grow quadratically in the number of qubits. While standard quantum error correction (QEC) addresses noise actively, by measuring syndromes and applying corrections, we develop conditions for a stabilizer code to tackle coherent noise passively. When the noise introduces a coherent Z-rotation by an angle theta on all qubits, our codes remain unaffected and act as a decoherence-free subspace (DFS). Given any [[n,k,d]] stabilizer code and any even M, we can produce an [[Mn,k,>= d]] code that is a DFS for this noise.</p>
<p>In this talk, we will begin by revisiting the classical result of MacWilliams that relates the weight enumerator of a code to that of its dual. Then, after reviewing the essentials of stabilizer codes, we will discuss conditions for a transversal Z-rotation exp(i <em>theta</em> Z) to fix the code space of a stabilizer code. Subsequently, we consider the case where this transversal rotation is required to fix the code for all theta, and show that this necessitates a large number of weight-2 Z-stabilizers. By organizing these suitably, we will develop DFSs for the aforementioned form of coherent noise. If time permits, we will briefly discuss our analysis of the case where only theta <= pi/2^l is required to preserve the code, in order to induce non-trivial logical gates.</p>
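As a concrete warm-up for the classical identity the talk begins from, the sketch below numerically verifies the MacWilliams identity W_{C^perp}(x, y) = (1/|C|) W_C(x + y, x - y) for the binary [3,1] repetition code and its dual, the [3,2] even-weight code (the choice of example code is ours, for illustration only):

```python
from itertools import product

def codewords(G):
    # all GF(2) linear combinations of the rows of the generator matrix G
    n = len(G[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                        for j in range(n)))
    return words

def weight_enum(words, x, y):
    # W(x, y) = sum over codewords of x^(n - weight) * y^weight
    n = len(next(iter(words)))
    return sum(x ** (n - sum(w)) * y ** sum(w) for w in words)

# [3,1] repetition code and its dual, the [3,2] even-weight code
C  = codewords([(1, 1, 1)])
Cd = codewords([(1, 1, 0), (0, 1, 1)])

# MacWilliams: W_{C^perp}(x, y) = (1/|C|) * W_C(x + y, x - y)
for (x, y) in [(1.0, 0.5), (2.0, 3.0), (0.7, 0.2)]:
    assert abs(weight_enum(Cd, x, y) - weight_enum(C, x + y, x - y) / len(C)) < 1e-9
```

Here W_C(x, y) = x^3 + y^3 and W_{C^perp}(x, y) = x^3 + 3xy^2, so the identity can also be checked symbolically by hand.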
<h3 id="recorded-talk">Recorded Talk</h3>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/PFr6Ux1GMbg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 08 Apr 2021 07:21:29 -0400
http://ccsp.ece.umd.edu//2021/04/08/rengaswamy-coherent-noise-in-quantum-computing/
Self-regularizing Property of Nonparametric Maximum Likelihood Estimator in Mixture Models

<p>Introduced by Kiefer and Wolfowitz in 1956, the nonparametric maximum likelihood estimator (NPMLE) is a widely used methodology for learning mixture models and for empirical Bayes estimation. Sidestepping the non-convexity of the mixture likelihood, the NPMLE estimates the mixing distribution by maximizing the total likelihood over the space of probability measures, which can be viewed as an extreme form of overparameterization.</p>
<p>In this work, we discover a surprising property of the NPMLE solution. Consider, for example, a Gaussian mixture model on the real line with a subgaussian mixing distribution. Leveraging complex-analytic techniques, we show that, with high probability, the NPMLE based on a sample of size n has O(log n) atoms (mass points), significantly improving the deterministic upper bound of n due to Lindsay (1983). Notably, any such Gaussian mixture is statistically indistinguishable from a finite one with O(log n) components (and this is tight for certain mixtures). Thus, absent any explicit form of model selection, the NPMLE automatically chooses the right model complexity, a property we term self-regularization. Statistical applications and extensions to other exponential families will be given. Connections to rate-distortion functions will be briefly discussed.</p>
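A fixed-grid variant of the NPMLE is easy to experiment with: restrict the mixing distribution to a grid of candidate atoms and run EM on the mixing weights alone (the likelihood is concave in the weights). The toy sketch below is our own illustration, not the authors' code, and all parameters are our choices; it fits a unit-variance Gaussian mixture and counts the atoms that retain non-negligible mass:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# sample from a two-component, unit-variance Gaussian mixture
x = np.where(rng.random(n) < 0.5, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

atoms = np.linspace(-6.0, 6.0, 121)        # candidate support for the mixing distribution
w = np.full(atoms.size, 1.0 / atoms.size)  # uniform initial mixing weights

# component likelihoods: phi(x_i - a_j) for the unit-variance Gaussian kernel
lik = np.exp(-0.5 * (x[:, None] - atoms[None, :]) ** 2) / np.sqrt(2.0 * np.pi)

# EM on the weights only: w_j <- average over i of P(atom j | x_i)
for _ in range(2000):
    post = lik * w
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)

support = atoms[w > 1e-4]
print(f"{support.size} of {atoms.size} atoms carry essentially all the mass")
```

Most grid weights decay toward zero, and the surviving atoms cluster near the true mixing points, consistent with the self-regularization phenomenon described above (the grid and threshold here are heuristics, not part of the theorem).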
<p>This is based on joint work with Yury Polyanskiy (MIT): <a href="https://arxiv.org/abs/2008.08244">https://arxiv.org/abs/2008.08244</a></p>
<h3 id="recorded-talk">Recorded Talk</h3>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/glCfe1Saq2s" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 01 Apr 2021 07:21:29 -0400
http://ccsp.ece.umd.edu//2021/04/01/wu-self-regularising-property-of-npmles/
A Single-Letter Upper Bound on the Mismatch Capacity

<p>Finding a single-letter formula for the mismatch capacity, the supremum of rates of reliable communication achievable when the receiver uses a sub-optimal decoding rule, has been a long-standing open problem. This question has many applications in communications, information theory, and computer science. For example, the zero-error capacity of a channel is a special case of the mismatch capacity.</p>
<p>In this talk, I will give a brief overview of the problem, and introduce a new bounding technique called the “multicasting approach,” which straightforwardly yields single-letter upper bounds on the mismatch capacity of stationary memoryless channels. I will also present equivalence classes of isomorphic channel-metric pairs that share the same mismatch capacity, and a sufficient condition for the tightness of the bound for the entire equivalence class.</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Coming soon!
<!--<div class="video-container">
<iframe src="https://www.youtube.com/embed/0OTczuUDWnw" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>--></p>
Thu, 25 Mar 2021 07:21:29 -0400
http://ccsp.ece.umd.edu//2021/03/25/somekh-baruch-mismatched-capacity/
Codes for Adversaries - Between Worst-Case and Average-Case Jamming

<p>Over the last 70 years, information theory and coding have enabled communication technologies with an astounding impact on our lives. This success stems from the match between encoding/decoding strategies and the corresponding channel models. Traditional studies of channels have mostly taken one of two extremes: Shannon-theoretic models are inherently average-case, with channel noise governed by a memoryless stochastic process, whereas coding-theoretic (referred to as “Hamming”) models take a worst-case, adversarial view of the noise. However, for several existing and emerging communication systems, the Shannon/average-case view may be too optimistic, whereas the Hamming/worst-case view may be too pessimistic. In this talk, I will survey a collection of results on channel models that fall between the Shannon and Hamming perspectives.</p>
<p>The talk is based on joint works with Z. Chen, A. Budkuley, B. K. Dey, I. Haviv, S. Jaggi, A. D. Sarwate, C. Wang, and Y. Zhang.</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/aSYpsqH-B1M" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 11 Mar 2021 06:21:29 -0500
http://ccsp.ece.umd.edu//2021/03/11/langberg-codes-for-adversaries/
The zero-error list decoding capacity of the q/(q-1) channel

<p>We will start by reviewing the arguments of Krichevskii, Hansel, and Pippenger on covering graphs using bipartite graphs, and use them to motivate Körner’s graph entropy. We will combine the graph covering argument with counting arguments of increasing complexity to derive the following:</p>
<ol>
<li>The Fredman-Komlos lower bound on the size of a family of perfect hash functions;</li>
<li>A bound on the zero-error list decoding capacity of the <em>4/3</em> channel;</li>
<li>A bound on the zero-error list decoding capacity of the <em>q/(q-1)</em> channel.</li>
</ol>
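Concretely, on the q/(q-1) channel the output at each position is a (q-1)-subset of the alphabet containing the input, so a set of codewords shares a possible output exactly when no coordinate sees all q symbols among them. Under that characterization, zero-error list decodability with list size L is a finite combinatorial property that can be brute-forced for toy codes; the sketch below, including the example codewords, is our own illustration:

```python
from itertools import combinations

def confusable(words, q):
    # On the q/(q-1) channel, a set of codewords shares a possible output
    # iff in every coordinate their symbols miss at least one of the q values.
    n = len(words[0])
    return all(len({w[i] for w in words}) < q for i in range(n))

def zero_error_list_decodable(code, q, L):
    # list size L is achievable iff no L+1 codewords are jointly confusable
    return not any(confusable(S, q) for S in combinations(code, L + 1))

# toy example over q = 3: every 3 of these codewords take all 3 symbols
# in some coordinate, so list-of-2 decoding is zero-error
code = [(0, 0, 1), (1, 1, 0), (2, 0, 2), (0, 2, 0)]
print(zero_error_list_decodable(code, q=3, L=2))  # prints True
```

For L + 1 = q this condition says every q codewords are "separated" in some coordinate, which is exactly the perfect-hash-family property behind the Fredman-Komlós bound in item 1.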
<p>Handwritten notes for the talk can be found <a href="/files/jaikumar-notes.pdf">here</a>.</p>
<h3 id="references">References</h3>
<ol>
<li>M. Dalai, V. Guruswami and J. Radhakrishnan, “An Improved Bound on the Zero-Error List-Decoding Capacity of the 4/3 Channel,” in <em>IEEE Transactions on Information Theory</em>, vol. 66, no. 2, pp. 749-756, Feb. 2020, doi: 10.1109/TIT.2019.2933424. <a href="https://ieeexplore.ieee.org/document/8788642">Link</a></li>
<li>S. Bhandari and J. Radhakrishnan, “Bounds on the Zero-Error List-Decoding Capacity of the q/(q-1) Channel,” <em>2018 IEEE International Symposium on Information Theory (ISIT)</em>, Vail, CO, 2018, pp. 906-910, doi: 10.1109/ISIT.2018.8437609. <a href="https://ieeexplore.ieee.org/document/8437609">Link</a></li>
</ol>
<h3 id="recorded-talk">Recorded Talk</h3>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/qxaiDfJq4h8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 11 Feb 2021 06:21:29 -0500
http://ccsp.ece.umd.edu//2021/02/11/radhakrishnan-zero-error-list-decoding-capacity-q-q-1-channel/
http://ccsp.ece.umd.edu//2021/02/11/radhakrishnan-zero-error-list-decoding-capacity-q-q-1-channel/Sharp Thresholds for Random Subspaces, and Applications to LDPC Codes<p>What combinatorial properties are likely to be satisfied by a random subspace over a finite field? For example, is it likely that not too many points lie in any Hamming ball? What about any cube? We show that there is a sharp threshold on the dimension of the subspace at which the answers to these questions change from “extremely likely” to “extremely unlikely,” and moreover we give a simple characterization of this threshold for different properties. Our motivation comes from error correcting codes, and we use this characterization to make progress on the questions of list-decoding and list-recovery for random linear codes, and also to establish the list-decodability of random Low Density Parity-Check (LDPC) codes.</p>
<p>This talk is based on joint works with Venkat Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Noga Ron-Zewi, and Shashwat Silas.</p>
<h3 id="references">References</h3>
<ul>
<li><a href="https://arxiv.org/abs/1909.06430">LDPC Codes Achieve List Decoding Capacity</a> [FOCS 2020]</li>
<li><a href="https://arxiv.org/abs/2004.13247">Bounds for list-decoding and list-recovery of random linear codes</a> [RANDOM 2020]</li>
<li><a href="https://arxiv.org/abs/2009.04553">Sharp threshold rates for random codes</a> [ITCS 2021]</li>
</ul>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Thanks to Mary for allowing us to record the talk!</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/W4CqtwKpIX4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 03 Dec 2020 06:21:29 -0500
http://ccsp.ece.umd.edu//2020/12/03/wootters-sharp-thresholds-for-random-subspaces/
http://ccsp.ece.umd.edu//2020/12/03/wootters-sharp-thresholds-for-random-subspaces/Rényi information inequalities and their mathematical ramifications<p>Rényi entropies are a natural one-parameter generalization of Shannon entropy that were first introduced over half a century ago, but about which fundamental questions remain incompletely answered. After a (very) brief introduction to why Rényi information functionals (entropies, divergences, etc.) are of interest from an information-theoretic viewpoint, we will attempt to expose the relevance of Rényi information inequalities for several areas of mathematics. For example, they allow for the unification of several interesting inequalities — including the entropy power inequality (which plays a fundamental role in information theory), the Brunn-Minkowski inequality (which plays a fundamental role in convex geometry), and Rogozin’s convolution inequality (which is fundamental to the area of “small ball” estimates in probability theory). They also allow for the quantification of uncertainty principles in harmonic analysis. In another direction, they are relevant to the field of additive combinatorics, which has seen burgeoning activity over the last two decades due to applications in theoretical computer science as well as other parts of mathematics.</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Thanks to Mokshay for allowing us to record the talk!</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/2256Hd7WSKo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 12 Nov 2020 06:21:29 -0500
http://ccsp.ece.umd.edu//2020/11/12/madiman-renyi-information-inequalities/
http://ccsp.ece.umd.edu//2020/11/12/madiman-renyi-information-inequalities/Strong Converse on Bitwise Decoding for Random Linear Code Ensemble<p>In this talk, I will prove a strong converse result on bitwise decoding when communicating with random linear codes over binary symmetric channels (BSC). Our converse theorem shows extreme unpredictability of even a single message bit for random coding at rates slightly above capacity. This talk is based on joint work with Venkatesan Guruswami and Andrii Riazanov (<a href="https://arxiv.org/abs/1911.03858">arXiv:1911.03858</a>), where we proved a more general version of this converse theorem that holds for arbitrary binary-input memoryless symmetric (BMS) channels, and we further used this converse theorem to construct polar codes with near-optimal convergence to channel capacity.</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Thanks to Min for allowing us to record the talk!</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/u-3uPf-bUV0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 29 Oct 2020 07:21:29 -0400
http://ccsp.ece.umd.edu//2020/10/29/ye-strong-converse-on-bitwise-decoding-for-random-linear-code-ensemble/
http://ccsp.ece.umd.edu//2020/10/29/ye-strong-converse-on-bitwise-decoding-for-random-linear-code-ensemble/The Auxiliary Receiver Approach in Network Information Theory<p>We introduce the auxiliary receiver approach as a mathematical tool to write outer bounds in network information theory. This technique yields new outer bounds for basic settings in network information theory such as the relay, interference, and broadcast channel settings. These bounds strictly outperform classical outer bounds at least in some regimes. In this teaching seminar, I take a pedagogical approach and (for the most part) only assume the basic knowledge of a first course in information theory.</p>
<p>This is joint work with Chandra Nair.</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Thanks to Amin for allowing us to record the talk!</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/z0z1Lz53Vek" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 15 Oct 2020 07:21:29 -0400
http://ccsp.ece.umd.edu//2020/10/15/gohari-auxiliary-receiver-approach-in-network-it/
http://ccsp.ece.umd.edu//2020/10/15/gohari-auxiliary-receiver-approach-in-network-it/Revisiting Identification and Common Randomness<p>We revisit the problem of identification via a channel, introduced by Ahlswede and Dueck. In contrast to the standard channel coding problem in which exponentially many messages can be transmitted, doubly exponentially many messages can be identified in the identification problem. This is closely related to how much common randomness can be created. We shall explain a basic construction (achievability result) that connects identification capacity and common randomness capacity. Also, we shall explain the method of Han and Verdu to prove the converse theorem of identification capacity using the concept of channel resolvability. We shall try to review the identification problem from the perspective of information theory as well as theoretical computer science.</p>
<p>If time permits, I shall discuss some open problems, and also describe my recent observation regarding the identification capacity of general (possibly non-ergodic) channels.</p>
<p>Part of the talk is based on the review paper “Communication for Generating Correlation: A Unifying Survey,” IEEE Transactions on Information Theory, 2020 (M. Sudan, H. Tyagi, and S. Watanabe).</p>
<h3 id="recorded-talk">Recorded Talk</h3>
<p>Thanks to Shun for allowing us to record the talk!</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/HcvABfbXpZI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
</div>
Thu, 01 Oct 2020 07:21:29 -0400
http://ccsp.ece.umd.edu//2020/10/01/watanabe-revisiting-identification-and-common-randomness/