Simple Joint Source-Channel Coding Schemes for Colored Gaussian Sources

Amir Ingber
2008 · IEEE Data Compression Conference (DCC), Proceedings
Shannon's well-known separation theorem is a powerful result and a basic tool for designing communication systems. However, in order to achieve optimal performance, both the channel and source encoders and decoders may require arbitrarily large complexity and latency, and solving both problems jointly can result in simpler and more efficient designs. In this work we study simple joint source-channel coding (JSCC) schemes for transmitting k independent Gaussian source samples with different variances
over k additive white Gaussian noise (AWGN) channels. The channels have an average power constraint, and the distortion measure is the average squared error. This problem arises in several practical cases, e.g. in transform coding, where the output of the decorrelation process serves as our source and may have different variances. For simplicity, we focus on the high signal-to-noise ratio (SNR) regime.

In the simple case where the source variances are equal, it is known that sending a scaled version of the source over the channel is optimal: the distortion equals the distortion-rate function evaluated at the channel capacity (R(D) = C). For a given SNR the optimal signal-to-distortion ratio (SDR) is SDR = 1 + SNR, so at high SNR

SDR = SNR. (1)

When the source variances are not equal, and the i-th source has variance P_i, the optimal SDR (resulting from Shannon's R(D) = C bound at high SNR) is given by

SDR = SNR · ((1/k) Σ_i P_i) / (Π_i P_i)^(1/k). (2)

If we transmit the sources directly to the channel as in the equal-variance case, we get only SDR = SNR, which is lower than (2) and gets even worse as the source variances become more spread. When individual scaling of the sources is allowed, we show by direct optimization that the optimal scaling factors are given by α_i ∝ P_i^(-1/4), and the optimal SDR achievable by such a system is given by

SDR = SNR · ((1/k) Σ_i P_i) / ((1/k) Σ_i √P_i)². (3)

We also show that (3) is the optimal performance for general scalar nonlinear functions, and even for matrix operations. When general nonlinear functions are allowed, we show an example where the SDR is better than (3): we take k = 3 and a system that includes nonlinear bandwidth expansion and reduction mappings. In this example, (3) is 4.77 dB better than (1), but the proposed system achieves an SDR that is 19.3 dB better than (1). We then propose a JSCC system where a source X_i is transmitted as sign(X_i) · √|X_i|. This system is universal in the sense that it does not need to know the source variances.
We show that it achieves an SDR that is a factor of π/8 (only 4.06 dB) away from the optimal linear system (Eq. 3), regardless of the source variance distribution. The results presented here can be generalized in a straightforward manner to general SNR and to bandwidth expansion/reduction.
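The claimed gaps can be checked numerically. The following Monte Carlo sketch is not from the paper; the variances, SNR, and sample counts are arbitrary illustrative choices. It compares, at high SNR, (a) uniform scaling of all sources, (b) per-source linear scaling with α_i ∝ P_i^(-1/4), and (c) a square-root companding scheme of the universal type described above, transmitting sign(X)·√|X| and decoding with sign(Y)·Y².

```python
# Monte Carlo sketch (illustrative, not the paper's simulation): compare the
# average SDR of three analog JSCC schemes for Gaussian sources with unequal
# variances over AWGN channels, at high SNR.
import numpy as np

rng = np.random.default_rng(0)
k = 4
P = np.array([0.1, 0.5, 1.0, 4.0])      # source variances P_i (assumed values)
snr = 10.0 ** (50 / 10)                 # 50 dB: "high SNR" regime
n = 200_000                             # samples per source
S, N = 1.0, 1.0 / snr                   # per-channel power and noise variance

X = rng.normal(0.0, np.sqrt(P)[:, None], size=(k, n))
Z = rng.normal(0.0, np.sqrt(N), size=(k, n))

def sdr_db(Xhat):
    mse = np.mean((X - Xhat) ** 2)      # average distortion over all k sources
    return 10 * np.log10(np.mean(P) / mse)

# (a) uniform scaling: one gain for all sources -> SDR ~ SNR at high SNR
a = np.sqrt(k * S / P.sum())
direct = sdr_db((a * X + Z) / a)

# (b) optimal per-source linear scaling: alpha_i proportional to P_i^(-1/4)
alpha = P ** -0.25
alpha *= np.sqrt(k * S / np.sum(alpha ** 2 * P))  # meet total power constraint
lin = sdr_db((alpha[:, None] * X + Z) / alpha[:, None])

# (c) universal square-root companding: send sign(X)*sqrt(|X|), invert by
#     sign(Y)*Y^2; the power normalization adapts to the empirical powers only
Y = np.sign(X) * np.sqrt(np.abs(X))
b = np.sqrt(k * S / np.sum(np.mean(np.abs(X), axis=1)))
R = b * Y + Z
univ = sdr_db(np.sign(R) * (R / b) ** 2)

print(f"uniform {direct:.2f} dB, optimal linear {lin:.2f} dB, "
      f"universal {univ:.2f} dB")
```

For this variance profile the per-source scaling gains about 1.4 dB over uniform scaling (the ratio in Eq. 3), and the universal scheme lands close to 10·log10(8/π) ≈ 4.06 dB below the optimal linear system, matching the factor-of-π/8 claim; the gap should stay near 4.06 dB for any choice of variances.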
doi:10.1109/dcc.2008.83 · dblp:conf/dcc/Ingber08 · fatcat:dcm2leqxvvbv7ksvwscicvffuq