Program

2021 National Conference on Communications (NCC)
Abstract: Streaming codes are relevant to the 5G objective of achieving ultra-reliable, low-latency communication (URLLC) and address the need for an error-correction scheme at the packet level that ensures reliability in the face of dropped or lost packets. The streaming codes discussed here may be viewed at the packet level as convolutional codes, yet they are built out of scalar block codes by employing a diagonal-embedding technique. A sliding-window channel model is adopted as a tractable approximation to the two-state Gilbert-Elliott channel, which is capable of causing both random and burst erasures. Rate bounds and efficient code constructions will be presented, as well as an experimental demonstration that includes adaptation to a time-varying channel.
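As a rough illustration of the channel being approximated (not material from the talk), the Python sketch below simulates a two-state Gilbert-Elliott packet-erasure channel; the state-transition and erasure probabilities are assumed values chosen only for this example.

```python
# Minimal sketch (illustrative assumptions only): a two-state Gilbert-Elliott
# packet-erasure channel, which produces both random and burst erasures.
import random

def gilbert_elliott_erasures(n_packets, p_g2b=0.05, p_b2g=0.4,
                             eps_good=0.01, eps_bad=0.9, seed=0):
    """Return a list of booleans: True means the packet was erased.

    p_g2b / p_b2g   : transition probabilities good->bad and bad->good.
    eps_good/eps_bad: erasure probabilities in the good and bad states.
    """
    rng = random.Random(seed)
    state_bad = False
    erasures = []
    for _ in range(n_packets):
        # Erase the current packet according to the current state.
        eps = eps_bad if state_bad else eps_good
        erasures.append(rng.random() < eps)
        # Move to the next state.
        if state_bad:
            state_bad = rng.random() >= p_b2g   # stay bad unless we recover
        else:
            state_bad = rng.random() < p_g2b    # fall into the bad state
    return erasures

if __name__ == "__main__":
    pattern = gilbert_elliott_erasures(50)
    print("".join("X" if e else "." for e in pattern))  # X = erased packet
```

Runs of consecutive "X" characters correspond to burst erasures caused by the bad state, which is what the sliding-window model abstracts.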
Bio: P. Vijay Kumar received the B.Tech. and M.Tech. degrees from IIT Kharagpur and IIT Kanpur, respectively, and the Ph.D. degree from USC. From 1983 to 2003, he was on the faculty of the EE-Systems Department at USC. Since 2003, he has been on the faculty of IISc, Bengaluru. His current research interests include codes for distributed storage, low-latency communication and low-correlation sequences. He is a recipient of the 1995 IEEE Information Theory (IT) Society Prize Paper Award and the IEEE Data Storage Best Paper Award of 2011/2012. A pseudorandom sequence family designed in a 1996 paper co-authored by him formed the short scrambling code of the 3G WCDMA cellular standard.

Abstract: The tutorial is intended for bachelor's, master's and doctoral students who may be interested in pursuing research in radar signal processing, specifically in the context of automotive radars. The tutorial will be divided into five sections. In the first section, an introduction to radar systems will be presented, including concepts pertaining to the transmitter, receiver, targets, clutter and noise encountered in automotive radar scenarios. This will be followed by the second section, where the radar signal models for simple and extended targets will be discussed in detail. The third part of the tutorial will delve into the specifics of radar waveforms and the corresponding signal processing algorithms: matched filtering for range estimation, Doppler processing, and Fourier-based azimuth and elevation estimation. The following part of the tutorial will cover the fundamentals of radar detection with a focus on the ubiquitous Neyman-Pearson detection rule, the likelihood ratio test and constant false alarm rate detection. In the final section, advanced concepts related to the use of modern machine learning and deep learning algorithms on automotive radar data for varied applications such as pedestrian detection, object classification and parking assistance will be presented. Throughout the tutorial, MATLAB-based software demos will be presented to supplement the theoretical concepts.
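As a small companion to one of the techniques listed above, the Python sketch below (the tutorial itself uses MATLAB demos) applies matched filtering to a simulated chirp echo and reads the target range from the correlation peak. The chirp parameters, target range and noise level are assumptions made only for this illustration.

```python
# Minimal sketch (assumed parameters): matched filtering for range estimation.
import numpy as np

c = 3e8                       # speed of light (m/s)
fs = 20e6                     # sampling rate (Hz)
T = 20e-6                     # chirp duration (s)
B = 10e6                      # chirp bandwidth (Hz)

t = np.arange(0, T, 1 / fs)
tx = np.exp(1j * np.pi * (B / T) * t**2)          # baseband LFM chirp

true_range = 900.0                                # assumed target range (m)
delay_samples = int(round(2 * true_range / c * fs))

rx = np.zeros(2048, dtype=complex)
rx[delay_samples:delay_samples + len(tx)] += 0.5 * tx       # delayed echo
rx += 0.1 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

# Matched filter = convolution with the conjugated, time-reversed chirp.
mf_out = np.abs(np.convolve(rx, np.conj(tx[::-1]), mode="valid"))
est_range = np.argmax(mf_out) / fs * c / 2
print(f"estimated range: {est_range:.1f} m")      # ~900 m
```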
Abstract: Recent years have witnessed a dramatically growing interest in machine learning (ML) methods. These data-driven trainable structures have demonstrated an unprecedented empirical success in various applications, including computer vision and speech processing. The benefits of ML-driven techniques over traditional model-based approaches are twofold: first, ML methods are independent of the underlying stochastic model, and thus can operate efficiently in scenarios where this model is unknown or its parameters cannot be accurately estimated; second, when the underlying model is extremely complex, ML algorithms have demonstrated the ability to extract and disentangle the meaningful semantic information from the observed data. Nonetheless, not every problem can and should be solved using deep neural networks (DNNs). In fact, in scenarios for which model-based algorithms exist and are computationally feasible, these analytical methods are typically preferable over ML schemes due to their theoretical performance guarantees and possible proven optimality.

A notable application area where model-based schemes are typically preferable, and whose characteristics are fundamentally different from conventional deep learning applications, is wireless communications. In this talk, I will present methods for combining DNNs with traditional model-based algorithms. We will show hybrid model-based/data-driven implementations which arise from classical methods in wireless communications, and demonstrate how fundamental classic techniques can be implemented without knowledge of the underlying statistical model, while achieving improved robustness to uncertainty.

Abstract: Since its introduction in 2008 in the form of the Bitcoin blockchain, blockchain technology has been known more as a cryptocurrency enabler than for its actual foundation as a platform for enabling trust. The idea of the cryptocurrency blockchain was anonymous transactions, mining of currency on the digital platform itself, and providing trust through the transparency of a public, distributed and replicated ledger of all transactions, which are cryptographically signed. The use of cryptographic signatures, and the use of hash functions to link blocks of transactions, provided the defense against forgery and attacks on the integrity of the digital records of transactions. The permanence of the records is ensured by crowdsourcing the computational power of a large number of participants, making it almost impossible to change the history of transactions unless 51% of the computational power is procured by a single participant or a group of colluding participants. Blockchain 2.0 ushered in the concept of smart contracts, thereby enabling more automation of digital transactions and also the ability to tokenize non-currency assets. However, with programmability in a Turing-complete language came the possibility of bugs in smart contracts and thereby security vulnerabilities. A number of cyber attacks followed, and in several cases insider attacks on crypto-exchanges led to huge losses for account holders. On top of all this, the pseudo-anonymity offered by cryptocurrency blockchains made them a favorite medium of transaction for criminals, from the Silk Road to today's ransomware gangs. We also find malicious usage of cryptocurrency platforms for illegal gambling, phishing, money laundering and various other crimes. While all this led to suspicion about cryptocurrency among regulators and law enforcement, technologists discovered that the trust offered by blockchain technology itself has a great deal of transformative potential. With the advent of Blockchain 3.0 and the introduction of Hyperledger and similar distributed ledger technology platforms, we started seeing the use of blockchain platforms for supply chain provenance, renewable energy billing, land registry systems, voting systems, agri-market transactions and, more recently, NFTs. In this tutorial, we first introduce the concept of blockchain and its underlying technologies, including public-key cryptosystems, hashing, distributed computing, fault-tolerant consensus, Byzantine algorithms, public vs. private ledgers, permissioned vs. permissionless ledgers, etc. We then gently introduce why non-currency usage of blockchain can enhance trust in data integrity in many applications which are today implemented as centralized systems, leading to a trust deficit among the users of those systems, such as land record registration systems, supply chain integrity, the DND system of mobile operators, or even the DNS system. We will expose the audience to various applications of permissioned private blockchains in creating trust mechanisms in such applications. Finally, we will touch on certain security issues for these applications.
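To make the hash-linking idea mentioned above concrete, here is a minimal, purely illustrative Python sketch (not part of the tutorial material) showing how each block stores the hash of its predecessor, so that tampering with an earlier record breaks every later link; real blockchains add digital signatures, Merkle trees and consensus on top of this.

```python
# Minimal sketch (illustrative only): hash-chained blocks and tamper detection.
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON encoding of the block contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(chain):
    # Each block must reference the hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
b1 = make_block(["Alice pays Bob 5"], prev_hash=block_hash(genesis))
b2 = make_block(["Bob pays Carol 2"], prev_hash=block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))                     # True
b1["transactions"] = ["Alice pays Bob 500"]    # tamper with history
print(verify_chain(chain))                     # False: b2 no longer links to b1
```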
Abstract: Barely seen in action movies until a decade ago, the progressive blending of UAVs into our daily lives will greatly impact labor and leisure activities alike. Most stakeholders regard reliable connectivity as a must-have for the UAV ecosystem to thrive, and the wireless research community has been rolling up its sleeves to drive native and long-lasting support for UAVs in 5G and beyond. Moving up, the recent introduction of more affordable insertion into low orbit is luring new players to the space race, making a marriage between the satellite and cellular industries more likely than ever. In this talk, we will navigate from 5G to 6G use cases, requirements and enablers involving aerial and spaceborne communications, also acting as a catalyst for much-needed new research.

Abstract: The importance and ubiquity of the Discrete Fourier Transform (DFT) cannot be overstated. Algorithms to compute the DFT (collectively referred to as the Fast Fourier Transform, or FFT) have a long history, probably starting in the 19th century itself. The goal of this tutorial is to give a brief overview of the development of the FFT. The tutorial will be in three parts. In the first part, we review the classical approach to the FFT, including the Cooley-Tukey algorithm, the prime-factor algorithm and Rader's FFT. With the ever-increasing data sizes that we operate with, there is a need to reduce the complexity beyond what a classical FFT provides. In addition, even though data sizes are increasing, the data also have underlying structure, thus providing algorithm designers with opportunities to exploit this structure for faster computation. Recent research on the FFT operates at the intersection of these two notions: the focus is on speeding up the DFT computation for a structured (or restrictive) class of signals. The most popular structural model is spectral sparsity, where we assume the signal has very few non-zero frequency coefficients. In the second part of the tutorial, we discuss the key ideas behind sparse FFT algorithms. Our coverage here will be more illustrative than exhaustive. While many of these sparse FFT algorithms are randomized, we also discuss some deterministic algorithms for the sparse FFT. In the final portion of the talk, we attempt to go beyond sparsity. In particular, we try to find the DFT when more structural information on the spectral support is available. We briefly discuss some work on finding the DFT of block-sparse signals, and conclude with some of our recent work that tries to make progress towards structured FFTs.
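As a small, purely illustrative companion to the classical part of this tutorial, the sketch below implements the textbook recursive radix-2 Cooley-Tukey FFT for power-of-two lengths and checks it against numpy; the input length and random signal are arbitrary choices, not the tutorial's material.

```python
# Minimal sketch: recursive radix-2 Cooley-Tukey FFT, verified against numpy.
import numpy as np

def fft_radix2(x):
    """Recursive Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    evens = fft_radix2(x[0::2])      # DFT of even-indexed samples
    odds = fft_radix2(x[1::2])       # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = np.exp(-2j * np.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

x = np.random.randn(64)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

The two half-size DFTs plus the twiddle-factor combination give the familiar N log N complexity that the tutorial's later parts try to push below for structured signals.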
Bio: Aditya Siripuram received his B.Tech. and M.Tech. degrees in Electrical Engineering from the Indian Institute of Technology Bombay in 2009. He completed his Ph.D. at Stanford University in 2017 and was a recipient of the Stanford Graduate Fellowship. He is currently a faculty member in the Department of Electrical Engineering at IIT Hyderabad. He is interested broadly in the theory of signal processing and machine learning, particularly in sampling, Fourier analysis and graph signal processing.

Abstract: A reconfigurable intelligent surface (RIS) is an emerging technology that enables the control of electromagnetic waves. An RIS is a thin sheet of electromagnetic material made of many nearly passive scattering elements that are controlled through low-cost and low-power electronic circuits. By appropriately configuring the electronic circuits, different wave transformations can be realized. Recent research has shown that RISs whose geometric size is sufficiently large can outperform other technologies, e.g., relays, at a reduced hardware and signal processing complexity, and can enhance the reliability of wireless links by reducing the fading severity. In addition, the achievable performance of RIS-assisted systems has been proved to be robust to various hardware impairments, e.g., phase noise, which may further reduce the implementation cost. To quantify the performance gains offered by RISs in wireless networks, realistic communication models need to be employed. In this talk, we offer a critical appraisal of the communication models currently employed for analyzing the ultimate performance limits and for optimizing RIS-assisted wireless networks. Furthermore, we introduce a new tractable, electromagnetic-compliant, and circuit-based communication model for RIS-assisted transmission and discuss its applications to the modeling and optimization of wireless systems.
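As a toy illustration of the phase-configuration idea described above (and emphatically not the talk's electromagnetic-compliant, circuit-based model), the sketch below co-phases the reflected paths of an idealized N-element RIS with the direct link and compares the resulting channel gain against random phases; the i.i.d. Rayleigh channels and element count are assumptions.

```python
# Minimal sketch (assumed channel model): phase-only RIS configuration that
# makes every reflected path add coherently with the direct path.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # number of RIS elements (assumed)

h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # direct link
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # Tx -> RIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # RIS -> Rx

# Rotate each cascaded path onto the phase of the direct path.
theta = np.angle(h_d) - np.angle(h * g)
gain_cophased = np.abs(h_d + np.sum(h * g * np.exp(1j * theta))) ** 2
gain_random = np.abs(h_d + np.sum(h * g * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))) ** 2

print(f"channel gain, co-phased RIS : {gain_cophased:.2f}")
print(f"channel gain, random phases : {gain_random:.2f}")
```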
Bio: Marco Di Renzo (Fellow, IEEE) received the Laurea (cum laude) and Ph.D. degrees in electrical engineering from the University of L'Aquila, Italy, in 2003 and 2007, respectively, and the Habilitation à Diriger des Recherches (Doctor of Science) degree from University Paris-Sud (now Paris-Saclay University), France, in 2013. Since 2010, he has been with the French National Center for Scientific Research (CNRS).

Abstract: This talk discusses various aspects of wireless transmitters and radio frequency (RF) power amplifier (PA) design for 5G cellular applications. Wireless transmitter and RF PA design require several new considerations to be useful for the New Generation Radio Access Network (NG-RAN) in 5G applications. The wireless transmitter design must strive for spectrum and energy efficiency to provide linear power amplification of high-crest-factor signals with the least consumption of power. Maintaining good linearity can meet the spectrum efficiency requirements but requires special RF power amplification schemes to guarantee low power consumption. For example, linearization schemes such as digital predistortion require a load-modulation-based power amplifier (a Doherty PA, a Chireix outphasing PA, etc.) for handling high-crest-factor signals with good power efficiency. Alternatively, delta-sigma-modulation-based transmitters can exhibit good performance in terms of error vector magnitude (EVM), where high-efficiency switch-mode PAs can be used along with RF filters for suppressing out-of-band quantization noise. Apart from the various transmitter and PA architectures, it is essential to enhance the bandwidth of the design to cope with the wideband modulated signals anticipated in 5G communication. In general, switch-mode PAs are a popular choice for systems where high efficiency is required. However, these PAs are inherently narrowband. Moreover, it is difficult to obtain a feasible design space that leads to realizable loads resulting in high-efficiency operation over a wide bandwidth. In case load-pull is used, it is difficult to find an appropriate set of loads that can be realized with a matching network. A continuum of classes, such as Class B/J, continuous Class F and Class E PAs, provides many useful solutions, which can be represented by a drain voltage waveform set at each frequency of operation over the band. In such a case, high efficiency is maintained over a wide bandwidth. This waveform engineering is performed by selecting an appropriate set of fundamental and harmonic loads at the intrinsic current-generator plane of the transistor. The talk will discuss the design aspects of wireless transmitters and RF PA design while discussing various challenges in transmitter architecture, device selection, circuit design, modeling, etc.
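As a brief, purely illustrative aside on the "high crest factor" signals mentioned above (not material from the talk), the Python sketch below measures the peak-to-average power ratio (PAPR) of an OFDM-like multicarrier symbol; the subcarrier count, modulation and oversampling factor are assumed values.

```python
# Minimal sketch (assumed waveform): crest factor (PAPR) of one OFDM-like symbol,
# the quantity that forces a linear PA to operate backed off from saturation.
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 256
oversample = 4

# Random QPSK symbols on each subcarrier, zero-padded for oversampling.
symbols = (rng.choice([-1, 1], n_subcarriers)
           + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
spectrum[:n_subcarriers] = symbols
waveform = np.fft.ifft(spectrum)

papr_db = 10 * np.log10(np.max(np.abs(waveform) ** 2)
                        / np.mean(np.abs(waveform) ** 2))
print(f"PAPR of one OFDM-like symbol: {papr_db:.1f} dB")   # often around 10 dB
```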
Bio: Karun Rawat received his Ph.D. degree in electrical engineering

doi:10.1109/ncc52529.2021.9530194