Communication, Control and Signal Processing Seminar 


Abstracts of talks

Spring 2020

02/13 Gilles Zemor (University of Bordeaux), Recent progress on quantum LDPC codes

ABSTRACT: It is commonly assumed that quantum computers will eventually have to rely on error-correcting codes of LDPC type, hence the interest, among other motivations, in studying them. Many mysteries remain in our current knowledge, among them whether we have hit a fundamental limit for the minimum distance of quantum LDPC codes or whether we simply do not yet know how to go beyond it. We shall discuss recent constructions of quantum LDPC codes that achieve a minimum distance slightly above the square root of the block length and that rely on higher-dimensional simplicial complexes.
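
For background, the display below sketches the standard recipe that turns a chain complex over $\mathbb{F}_2$ into a CSS quantum code; simplicial complexes are one source of such chain complexes, and, for instance, the hypergraph-product construction yields quantum LDPC codes whose minimum distance scales like the square root of the block length. This is the generic textbook recipe, not the specific construction of the talk.

    % A chain complex over F_2: boundary maps satisfying \partial_1 \partial_2 = 0
    C_2 \xrightarrow{\partial_2} C_1 \xrightarrow{\partial_1} C_0
    % Qubits live on C_1; take the CSS parity-check matrices
    H_X = \partial_1, \qquad H_Z = \partial_2^{\top}
    % \partial_1 \partial_2 = 0 is exactly the CSS condition H_X H_Z^{\top} = 0.
    % Block length: n = \dim C_1.  Minimum distance: the least Hamming weight
    % of a vector in (\ker \partial_1 \setminus \operatorname{im} \partial_2)
    % \cup (\ker \partial_2^{\top} \setminus \operatorname{im} \partial_1^{\top}).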

02/27 Sagnik Bhattacharya (UMD), Shared Randomness in Arbitrarily Varying Channels

ABSTRACT: We study an adversarial communication problem in which a sender, Alice, wishes to send a message m to a receiver, Bob, over an arbitrarily varying channel (AVC) controlled by a malicious adversary, James. We assume that Alice and Bob share randomness K unknown to James. Using K, Alice first encodes the message m into a codeword X and transmits it over the AVC. James knows the message m, the (randomized) codebook, and the codeword X. James then inputs a jamming state S to disrupt communication; we assume a state-deterministic AVC, where S completely specifies the channel noise. Bob receives a noisy version Y of the codeword X and outputs an estimate of the message using Y and the shared randomness K. We show that for AVCs that are 'adversary-weakened', exactly log(n) equiprobable and independent bits of shared randomness are both necessary and sufficient for achieving the randomized coding capacity. The sufficiency proof is based on deterministic list codes combined with a polynomial hashing technique that uses the shared randomness. The converse takes a known approach for binary AVCs and extends it to general 'adversary-weakened' AVCs using a notion of confusable codewords.
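
To illustrate the hashing step, here is a minimal Python sketch (under illustrative assumptions, not the exact scheme of the talk) of how a key of about log2(Q) shared random bits can pick the true message out of a list decoder's output: the key selects a polynomial hash over a prime field, and two distinct length-t messages collide under at most t-1 keys, so James, who knows m but not K, can plant a confusable list entry only with small probability. The field size Q, the assumption that the hash tag reaches Bob intact, and the toy messages are all inventions for illustration.

    # Illustrative sketch: a keyed polynomial hash over a prime field,
    # used to disambiguate the output of a list decoder.
    Q = 2**13 - 1  # 8191, prime, so arithmetic mod Q is a field;
                   # the shared key costs about log2(Q) random bits

    def poly_hash(message, key):
        # Evaluate sum_i message[i] * key**i mod Q via Horner's rule.
        # Two distinct length-t messages agree on at most t-1 keys,
        # since their difference is a nonzero polynomial of degree < t.
        acc = 0
        for coeff in reversed(message):
            acc = (acc * key + coeff) % Q
        return acc

    def disambiguate(list_output, tag, key):
        # Bob keeps the unique list entry consistent with the keyed tag.
        matches = [m for m in list_output if poly_hash(m, key) == tag]
        return matches[0] if len(matches) == 1 else None

    key = 3571                    # drawn from the shared randomness K
    m = [5, 17, 42]               # Alice's message, as symbols mod Q
    tag = poly_hash(m, key)       # assume the tag reaches Bob intact
    list_output = [[5, 17, 41], m, [9, 0, 3]]   # list decoder's output
    assert disambiguate(list_output, tag, key) == m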

03/05 Soheil Feizi (UMD), Certifiable Defenses against Adversarial Attacks

ABSTRACT: While neural networks have achieved high performance on different learning tasks, their accuracy drops significantly in the presence of small adversarial perturbations of their inputs. In the last couple of years, several practical defenses based on regularization and adversarial training have been proposed, only to be defeated soon afterwards by stronger attacks. To escape this cycle, a new line of work focuses on developing certifiably robust classifiers. In these models, for a given input sample, one can compute a robustness certificate such that for any perturbation of the input within the robustness radius, the classification output provably remains unchanged. In this talk, I will present two certifiable defenses: (1) Wasserstein smoothing, to defend against non-additive Wasserstein adversarial attacks, and (2) curvature-based robust training, to certifiably defend against $L_2$ attacks by globally bounding the curvature values of the network.

This is joint work with Alex Levine and Sahil Singla.
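
As a rough illustration of the curvature idea (a generic second-order bound with assumed constants, not the talk's exact certificate): if the classification margin at input x is m, its gradient norm is at most g, and the Hessian of the margin is globally bounded in spectral norm by K, then after any perturbation of $L_2$ norm r the margin is at least m - g*r - (K/2)*r^2, so the prediction provably cannot change while that expression stays positive. The Python sketch below solves for the largest such r; all numbers in the usage line are made up.

    import math

    def certified_l2_radius(margin, grad_norm, curvature_bound):
        # Illustrative sketch under assumed constants. A second-order
        # Taylor bound gives, for every ||delta||_2 <= r,
        #   margin(x + delta) >= margin - grad_norm*r - 0.5*curvature_bound*r**2,
        # so the label is certified for all r keeping this positive.
        # We return the largest such r (positive root of the quadratic).
        if margin <= 0:
            return 0.0                      # already misclassified
        if curvature_bound == 0:
            return margin / grad_norm       # Lipschitz-only certificate
        g, K = grad_norm, curvature_bound
        return (-g + math.sqrt(g * g + 2.0 * K * margin)) / K

    # Toy usage: margin 2.0, gradient norm 1.5, and global curvature
    # bound 0.8 give a certified L2 radius of about 1.04.
    print(certified_l2_radius(2.0, 1.5, 0.8))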