We revisit the problem of identification via a channel, introduced by Ahlswede and Dueck. In contrast to the standard channel coding problem, in which exponentially many messages can be transmitted, doubly exponentially many messages can be identified in the identification problem. This is closely related to the question of how much common randomness can be generated. We shall explain a basic construction (an achievability result) that connects the identification capacity to the common randomness capacity. We shall also explain the method of Han and Verdú for proving the converse theorem for the identification capacity via the concept of channel resolvability. We shall try to review the identification problem from the perspective of information theory as well as theoretical computer science.
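To make the contrast in scaling concrete, the standard statement (due to Ahlswede and Dueck) can be summarized as follows; here $n$ is the blocklength and $C$ is the Shannon capacity of the channel:

```latex
% Transmission: the number of messages grows (at most) exponentially in n,
\[
  N_{\mathrm{tx}}(n) \approx 2^{nR}, \qquad R < C .
\]
% Identification: the number of identifiable messages grows doubly
% exponentially in n, with the same first-order rate,
\[
  N_{\mathrm{id}}(n) \approx 2^{2^{nR}}, \qquad R < C ,
\]
% i.e., for a discrete memoryless channel the identification capacity
% (second-order exponent) coincides with the Shannon capacity C.
```

The point is that relaxing the decoding task from "which message was sent?" to "was a particular message sent?" changes the exponent itself, not just the rate.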
If time permits, I shall discuss some open problems, and also describe my recent observation regarding the identification capacity of general (possibly non-ergodic) channels.
Part of the talk is based on the review paper "Communication for generating correlation: A unifying survey" by M. Sudan, H. Tyagi, and S. Watanabe, IEEE Trans. on Information Theory, 2020.
Thanks to Shun for allowing us to record the talk!