The partial side information problem with additional reconstructions

Viswanathan Ramachandran

Department of Electrical Engineering, Indian Institute of Technology Bombay, IIT Area, Powai, Mumbai, 400076, India
Keywords: Rate-distortion theory; Source coding; Network information theory; Wyner-Ziv compression; MMSE estimation; Correlated Gaussian sources

Abstract
We consider a quadratic Gaussian distributed lossy source coding setup with the additional constraint of identical reconstructions at the encoder and the decoder. The setup consists of two correlated Gaussian sources, one of which has to be reconstructed to within a distortion constraint while matching a corresponding reconstruction at the encoder; the other source acts as coded side information. We study the trade-off between the rates of the two encoders for a given distortion constraint on the reconstruction. An explicit characterization of this trade-off is the main result of the paper. We also give close inner and outer bounds for the discrete memoryless version of the problem.
1. Introduction

We consider a quadratic Gaussian distributed lossy source coding setup with two correlated sources $X_1$ and $X_2$. There are two encoders, one observing each of the two sources, and a single decoder wishing to reconstruct only $X_1$ to within a distortion constraint. Thus, the second source acts only as a helper providing coded side information for the reconstruction of the first source. The only assumption made about the side information is that it is rate-limited to $R_2$, the rate of operation of encoder 2. This assumption is motivated by practical considerations: obtaining complete side information (which corresponds to $R_2 \to \infty$) may be prohibitively costly. We also impose an identical reconstruction constraint on $X_1$; that is, the reconstructions at the encoder observing $X_1$ and at the receiver have to agree with high probability. In other words, the estimate of the Gaussian source at the receiver has to agree with the corresponding estimate at the transmitter. The performance of a scheme is characterized in terms of whether an arbitrarily small average probability of error can be achieved for the identical reconstruction of the source, and whether the fidelity criterion is met, while operating at a tuple $(R_1, R_2, D)$ of respective rates and distortion.

Our problem setting is motivated by distributed data sensing problems (see Fig. 1). For example, consider a wireless sensor network deployed for measuring the temperature or rainfall at various locations across a city. Suppose that each sensor node compresses its measurement and transmits it to a common destination via a noiseless link. It is plausible that different sensor measurements are correlated, hence the correlation between sources in our model is
justified. The question, then, is: what are the minimum transmission rates needed so that the destination can make lossy estimates of the sensor measurements to within some tolerable distortion? We study an abstraction of this model with two correlated sources. The motivation for studying a helper problem is that the destination may be interested in a lossy reconstruction of only some of the sensor measurements, while the other sensor's information acts as correlated side information. Last but not least, we consider an identical reconstruction constraint as well, because the sensor nodes might want to ensure that the estimates at the destination match their own estimates; otherwise, a retransmission might be performed to ensure that the destination's estimates are satisfactory. The model we study in this paper is motivated by this sensor network scenario. Other practical aspects that might be relevant here include fading channels, imperfect or noisy observations of the source at the sensor nodes, and so on. However, we are interested in a holistic information theoretic study of the fundamental performance trade-offs involved in such systems, and so we do not consider implementation aspects of a particular model.

Our problem framework falls into the category of rate distortion theory and multi-terminal lossy source coding. The discipline of distributed lossy source coding was initiated in the seminal works of Berger [1] and Tung [2], where inner and outer bounds for the rate distortion region were given. The general Berger-Tung problem remains unresolved to date. Berger et al. [3] considered a relaxation of the problem in which only one of the sources is to be reconstructed while the other source acts as coded side information to aid this reconstruction. An inner bound was given in that paper, whose optimality remains unknown. We refer to this problem henceforth as the partial side information problem.
Fig. 1. Motivating scenario.
Oohama [4] gave a complete characterization of the rate distortion region of the partial side information problem in the quadratic Gaussian case. Other variations of the Berger-Tung setup include the case where one of the sources is a deterministic function of the other, i.e., the Berger-Kaspi setup [5], and the case where one of the sources is to be reconstructed losslessly while the other is reconstructed with distortion, namely, the Berger-Yeung setup [6]. The Berger-Tung outer bound was improved by Wagner and Anantharam [7]. Argyriou et al. [8] studied joint source channel coding schemes to send correlated sources over fading multiple access channels. Recently, Shkel and Verdú [9] derived the rate-distortion function for the Wyner-Ziv problem in the single-shot setting under a logarithmic distortion criterion. The state of the art in multiterminal source coding is presented in the work of Vega et al. [10], wherein the theory was expanded from two- to three-terminal networks. A general inner bound was developed therein, which recovers several classical results as special cases.

The notion of identical reconstructions shared by the transmitter and the receiver was first introduced in Ref. [11]. This model naturally arises in the compression or transmission of sensitive information, for instance, medical image files such as EEG and ECG recordings. In such scenarios, it is desirable to have an identical copy of the receiver's estimate at the transmitter, so that both entities have an identical reference for consultation purposes. Identical reconstruction over a point-to-point channel with state was also analysed by Steinberg [11]. Specifically, Ref. [11] obtained a complete characterization of the distortion-rate trade-off under an identical reconstruction constraint. The same work also contained results on the rate distortion function for the Wyner-Ziv [12] problem when an identical reconstruction constraint is imposed. Since then, the idea of identical reconstruction has been studied in several information theoretic scenarios.

In this paper, we address the lossy partial side information (one-helps-one) extension of the identical reconstruction model of Ref. [11]. A discrete memoryless partial side information problem with identical reconstructions is also of interest; as noted in the concluding remarks of [13], it is yet another interesting unsolved problem. Nevertheless, we obtain achievable regions as well as outer bounds for the discrete memoryless counterpart with identical reconstructions. The trade-off between the rates of the encoders and the reconstruction distortion for the quadratic Gaussian case is also characterized in the present paper. Thus, our main contributions can be summarized as follows:
• We formulate an identical reconstruction framework for two correlated sources in the quadratic Gaussian setting, where only one of the sources needs to be reconstructed in a lossy manner.
• We derive a complete characterization of the optimal trade-off between the rates of the two encoders and the distortion incurred in source reconstruction, under an identical reconstruction constraint.
• In addition, we derive close inner and outer bounds for the rates versus distortion trade-off in the discrete memoryless setting, wherein the bounds differ only in certain Markov conditions.

The paper is organized as follows: we introduce the system model and state our main theorems in Section 2; Sections 3 and 4 provide the proofs of achievability and of the converse, respectively; in Section 5, we give very close inner and outer bounds for the optimal rate region of the discrete memoryless counterpart of the problem; finally, we conclude the paper in Section 6.

2. System model and results

Consider the distributed lossy source coding setup shown in Fig. 2. The setup consists of a 2-WGN$(1, \rho)$ (White Gaussian Noise) source, i.e., $X_1 \sim \mathcal{N}(0, 1)$, $X_2 \sim \mathcal{N}(0, 1)$ and $\mathbb{E}[X_1 X_2] = \rho$. Thus, $\rho$ denotes the correlation between the two sources. The role of encoder 2 is to provide coded side information for the reconstruction of $X_1$; we do not wish to reconstruct the source $X_2$. The decoder aims to obtain an estimate $\hat{X}_1$, which should agree with the estimate $\psi(X_1)$ at the encoder with probability approaching one.
Fig. 2. Partial side information problem with identical reconstructions.
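For intuition, the following minimal sketch (our own illustration, not from the paper; numpy is assumed and the parameter values are arbitrary) draws i.i.d. pairs from the 2-WGN$(1, \rho)$ source using the decomposition $X_1 = \rho X_2 + N$ with $N \sim \mathcal{N}(0, 1 - \rho^2)$, and checks the empirical correlation.

```python
import numpy as np

# Sample the 2-WGN(1, rho) source: X1, X2 ~ N(0,1) with E[X1*X2] = rho.
# rho and n are arbitrary illustration values.
rng = np.random.default_rng(0)
rho, n = 0.7, 100_000
x2 = rng.standard_normal(n)
x1 = rho * x2 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print("empirical E[X1*X2]:", np.mean(x1 * x2))   # should be close to rho = 0.7
```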
While the notation $U^n := (U_1, \dots, U_n)$ is standard, we will also sometimes write $U^n$ as $\mathbf{U}$. In this convention, capital letters in bold signify multi-letter variables, whereas the corresponding normal font represents single-letter variables.

Definition 1. An $(n, R_1, R_2, D, \varepsilon)$ identical reconstruction communication scheme consists of two encoder maps $\mathcal{E}_i: \mathbb{R}^n \to [1:2^{nR_i}]$, $i = 1, 2$, a sender quantization map $\psi: \mathbb{R}^n \to \{1, \dots, 2^{n\hat{R}_1}\}$, and a decoding map $g: [1:2^{nR_1}] \times [1:2^{nR_2}] \to \mathbb{R}^n$ such that

$$\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[(X_{1i} - \hat{X}_{1i})^2\big] \le D + \varepsilon, \qquad (1)$$

$$\mathbb{P}\big(\psi(X_1^n) \neq \hat{X}_1^n\big) \le \varepsilon. \qquad (2)$$

From our notation, let $M_1 = \mathcal{E}_1(X_1^n)$, $M_2 = \mathcal{E}_2(X_2^n)$ and $g(M_1, M_2) = \hat{X}_1^n$. We say that a triple $(R_1, R_2, D)$ is achievable provided that an $(n, R_1, R_2, D, \varepsilon)$ communication scheme exists for any $\varepsilon > 0$, possibly by making $n$ large enough. Let $\mathcal{R}_{CR}(D)$ denote the optimal $(R_1, R_2, D)$ trade-off region.
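Before stating the main result, a toy sketch of Definition 1 may be helpful (entirely our own illustration; the 4-bit uniform quantizer is an arbitrary choice). Here encoder 1, the sender quantization map $\psi$, and the decoder all share the same scalar quantizer and the coded side information is ignored, so the identical reconstruction constraint (2) holds trivially, at the cost of a suboptimal rate-distortion trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)

# Toy scheme: a shared 4-bit uniform scalar quantizer (R1 = 4 bits/symbol).
levels = np.linspace(-3.0, 3.0, 16)
idx = np.argmin(np.abs(x1[:, None] - levels[None, :]), axis=1)
x1_hat_enc = levels[idx]   # psi(X1^n): the encoder's own estimate
x1_hat_dec = levels[idx]   # decoder reproduces the same index

print("P(psi(X1) != X1_hat):", np.mean(x1_hat_enc != x1_hat_dec))  # 0 -> (2) holds
print("empirical distortion:", np.mean((x1 - x1_hat_dec) ** 2))    # checks (1)
```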
Our main result is now stated as:

Theorem 2. $\mathcal{R}_{CR}(D)$ is the set of all triples $(R_1, R_2, D) \in \mathbb{R}_+^3$, with $D \le 1$, such that

$$R_1 \ge \frac{1}{2} \log^+ \left( \frac{1 - \rho^2 + \rho^2 2^{-2R_2}}{D} \right), \qquad (3)$$

where

$$\log^+(x) = \max(0, \log(x)). \qquad (4)$$

Proof. The achievability is given in Section 3. See Section 4 for the converse.

3. Proof of achievability

We will state and prove an achievability result for the discrete memoryless partial side information problem with identical reconstructions, and then specialize to Gaussians by an appropriate choice of random variables. Let $\mathcal{X}_1$ and $\hat{\mathcal{X}}_1$ respectively denote the alphabets of $X_1$ and $\hat{X}_1$, and let $d: \mathcal{X}_1 \times \hat{\mathcal{X}}_1 \to \mathbb{R}_+$ be an arbitrary distortion measure. Let $U_1$ be a random variable that satisfies the Markov chain $U_1 \to X_1 \to U$. The random variable $U_1$ here stands for the reconstruction $\hat{X}_1$ of source $X_1$, while $U$ stands for the coded side information of source $X_2$. Let $\phi: \mathcal{U}_1 \to \hat{\mathcal{X}}_1$ be a map which is known at both encoder 1 and the decoder. Our result is stated in the form of the following theorem.

Theorem 3. The optimal rate distortion trade-off region for the discrete memoryless partial side information problem with arbitrary distortion measure $d(\cdot, \cdot)$ and identical reconstruction constraints contains the set of $(R_1, R_2, D)$ triples such that

$$R_1 \ge I(X_1; \hat{X}_1 | U), \qquad (5)$$

$$R_2 \ge I(U; X_2), \qquad (6)$$

$$D \ge \mathbb{E}[d(X_1, \phi(U_1))], \qquad (7)$$

for some pmf $p(u|x_2)$ such that $\hat{X}_1 \to X_1 \to U$ and $\mathbb{E}[d(X_1, \hat{X}_1)] \le D$.

Proof. To prove the achievable region, we use ideas from the achievability proofs of Wyner's lossless helper problem [14] as well as Steinberg's Wyner-Ziv framework with identical reconstructions [11]. We outline the proof as follows. Fix the pmf $p(u|x_2)$. As in [14], the auxiliary $U$ is intended to be a covering of $X_2$. This gives the rate constraint

$$R_2 \ge I(U; X_2). \qquad (8)$$

Now $U$ acts as side information for the reconstruction of the source $X_1$. With $U$ as the side information and $X_1$ as the source to be reconstructed with identical reconstructions, our setup exactly mirrors Steinberg's Wyner-Ziv model with identical reconstructions [11]. Thus, the minimum rate required is

$$R_1 \ge I(\hat{X}_1; X_1 | U), \qquad (9)$$

with the Markov constraint $\hat{X}_1 \to X_1 \to U$ as in Ref. [11]. This concludes the outline of the proof.

Let us now specialize Theorem 3 to the case of Gaussian sources with a squared error distortion metric. Let $U_1$ be a random variable that satisfies the long Markov chain $U_1 \to X_1 \to X_2 \to U$. As before, the random variable $U_1$ stands for the reconstruction $\hat{X}_1$ of the source $X_1$, while $U$ stands for the coded side information from the source $X_2$. Let us choose the auxiliary random variables as follows (this specifies $p(u|x_2)$):

$$U_1 = X_1 + V_1, \qquad (10)$$

$$U = X_2 + V_2, \qquad (11)$$

where $V_1 \sim \mathcal{N}(0, N_1)$ and $V_2 \sim \mathcal{N}(0, N_2)$ are independent of each other and of $(X_1, X_2)$. Further, let the reconstruction $\hat{X}_1$ be the MMSE (Minimum Mean Square Error) estimate

$$\hat{X}_1 = \mathbb{E}[X_1 | U_1, U]. \qquad (12)$$
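Since $(X_1, U_1, U)$ in (10)-(12) are jointly Gaussian, the MMSE estimate is linear, and the conditional variances used in the evaluation below follow from Schur complements of the joint covariance matrix. A minimal numerical sketch (ours; numpy assumed, with arbitrary values for $\rho$, $N_1$, $N_2$):

```python
import numpy as np

rho, N1, N2 = 0.7, 0.5, 1.0     # arbitrary illustration parameters

# Joint covariance of (X1, U1, U) with U1 = X1 + V1 and U = X2 + V2:
K = np.array([[1.0,      1.0,      rho],
              [1.0, 1.0 + N1,      rho],
              [rho,      rho, 1.0 + N2]])

def cond_var(K, t, given):
    """Var[Z_t | Z_given] for a zero-mean Gaussian vector (Schur complement)."""
    Kgg = K[np.ix_(given, given)]
    Ktg = K[np.ix_([t], given)]
    return (K[t, t] - Ktg @ np.linalg.solve(Kgg, Ktg.T)).item()

print("Var[X1 | U]     =", cond_var(K, 0, [2]))     # appears in (13)
print("Var[X1 | U1, U] =", cond_var(K, 0, [1, 2]))  # the distortion D in (15)
```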
By evaluating the rate bounds on $R_1$ and $R_2$ and the MMSE error with the above choices, we obtain

$$\begin{aligned}
R_1 &= I(U_1; X_1 | U) = h(X_1 | U) - h(X_1 | U, U_1) \\
&= \frac{1}{2} \log \left( \frac{\mathrm{Var}[X_1 | U]}{\mathrm{Var}[X_1 | U, U_1]} \right) = \frac{1}{2} \log \left( \frac{\min_a \mathbb{E}[(X_1 - aU)^2]}{\min_{a,b} \mathbb{E}[(X_1 - aU - bU_1)^2]} \right) \\
&= \frac{1}{2} \log \left( \frac{(1 + N_1)(1 + N_2) - \rho^2}{N_1 (1 + N_2)} \right),
\end{aligned} \qquad (13)$$

$$\begin{aligned}
R_2 &= I(U; X_2) = h(X_2) - h(X_2 | U) \\
&= \frac{1}{2} \log \left( \frac{\mathrm{Var}[X_2]}{\mathrm{Var}[X_2 | U]} \right) = \frac{1}{2} \log \left( \frac{1}{\min_a \mathbb{E}[(X_2 - aU)^2]} \right) \\
&= \frac{1}{2} \log \left( \frac{1 + N_2}{N_2} \right),
\end{aligned} \qquad (14)$$

$$D = \mathbb{E}\big[(X_1 - \mathbb{E}[X_1 | U_1, U])^2\big] = \min_{a,b} \mathbb{E}\big[(X_1 - aU_1 - bU)^2\big] = \frac{N_1 (1 + N_2 - \rho^2)}{(1 + N_1)(1 + N_2) - \rho^2}. \qquad (15)$$

With $D$ as in (15) and with $2^{-2R_2} = N_2 / (N_2 + 1)$ from (14), we see that

$$\frac{1}{2} \log \left( \frac{1 - \rho^2 + \rho^2 2^{-2R_2}}{D} \right) \qquad (16)$$

evaluates to

$$\frac{1}{2} \log \left( \frac{1 - \rho^2 + \rho^2 \frac{N_2}{1 + N_2}}{\frac{N_1 (1 + N_2 - \rho^2)}{(1 + N_1)(1 + N_2) - \rho^2}} \right) = \frac{1}{2} \log \left( \frac{(1 + N_1)(1 + N_2) - \rho^2}{N_1 (1 + N_2)} \right), \qquad (17)$$

which is nothing but (13). This proves that the rate in (3) is achievable.
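The algebra in (13)-(17) is easy to sanity-check numerically. The following sketch (ours, using base-2 logarithms and arbitrary parameter values) confirms that the bound of Theorem 2, evaluated at the achievable pair $(R_2, D)$ from (14)-(15), coincides with the rate $R_1$ in (13):

```python
import numpy as np

rho, N1, N2 = 0.7, 0.5, 1.0   # arbitrary illustration parameters

R1 = 0.5 * np.log2(((1 + N1) * (1 + N2) - rho**2) / (N1 * (1 + N2)))  # (13)
R2 = 0.5 * np.log2((1 + N2) / N2)                                     # (14)
D  = N1 * (1 + N2 - rho**2) / ((1 + N1) * (1 + N2) - rho**2)          # (15)

# (16)/(17): Theorem 2's bound evaluated at (R2, D) should reproduce (13).
lhs = 0.5 * np.log2((1 - rho**2 + rho**2 * 2 ** (-2 * R2)) / D)
print(np.isclose(lhs, R1))   # True
```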
4. Proof of converse

In this section, we prove the converse for the quadratic Gaussian partial side information problem with identical reconstructions. We note that, by Fano's inequality, any successful scheme satisfies

$$H\big(\psi(X_1^n) \,\big|\, g(M_1, M_2)\big) \le n \varepsilon_n, \qquad (18)$$

where $\varepsilon_n \to 0$ as $n \to \infty$. Also, for any successful identical reconstruction scheme, $\psi(X_1^n) = \hat{X}_1^n$ with high probability. Hence we can use $\psi(X_1^n)$ and $\hat{X}_1^n$ interchangeably in the converse proof. For the rate $R_1$, we can write the following chain of inequalities:

$$\begin{aligned}
nR_1 &\ge H(M_1) \ge H(M_1 | M_2) \\
&= H(\psi, M_1 | M_2) - H(\psi | M_1, M_2) \\
&\stackrel{(a)}{=} H(\hat{X}_1^n, M_1 | M_2) - H(\psi | M_1, M_2, g(M_1, M_2)) \\
&\stackrel{(b)}{\ge} I(\hat{X}_1^n, M_1; X_1^n | M_2) - n\varepsilon_n \\
&= h(X_1^n | M_2) - h(X_1^n | M_2, \hat{X}_1^n, M_1) - n\varepsilon_n \\
&\ge h(X_1^n | M_2) - h(X_1^n | \hat{X}_1^n) - n\varepsilon_n \\
&= h(X_1^n | M_2) - h(X_1^n - \hat{X}_1^n | \hat{X}_1^n) - n\varepsilon_n \\
&\ge h(X_1^n | M_2) - h(X_1^n - \hat{X}_1^n) - n\varepsilon_n \\
&\ge h(X_1^n | M_2) - \sum_{i=1}^{n} h(X_{1i} - \hat{X}_{1i}) - n\varepsilon_n \\
&\stackrel{(c)}{\ge} h(X_1^n | M_2) - \sum_{i=1}^{n} \frac{1}{2} \log\big( 2\pi e \, \mathbb{E}[(X_{1i} - \hat{X}_{1i})^2] \big) - n\varepsilon_n \\
&\stackrel{(d)}{\ge} h(X_1^n | M_2) - \frac{n}{2} \log\left( 2\pi e \, \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}[(X_{1i} - \hat{X}_{1i})^2] \right) - n\varepsilon_n \\
&\stackrel{(e)}{\ge} h(X_1^n | M_2) - \frac{n}{2} \log(2\pi e D) - n\varepsilon_n,
\end{aligned} \qquad (19)$$

where (a) follows because the identical reconstruction constraint is met, (b) follows from Fano's inequality, (c) follows from the differential entropy maximizing property of Gaussians under a variance constraint, (d) follows from the concavity of the logarithm and Jensen's inequality, and (e) follows from the distortion constraint.

For the rate $R_2$, we can write the following chain of inequalities:

$$nR_2 \ge H(M_2) \ge I(X_2^n; M_2) = h(X_2^n) - h(X_2^n | M_2) = \frac{n}{2} \log(2\pi e) - h(X_2^n | M_2). \qquad (20)$$

We now couple the terms $h(X_1^n | M_2)$ and $h(X_2^n | M_2)$ from expressions (19) and (20) via the Entropy Power Inequality (EPI). Notice that we can write

$$X_1 = \rho X_2 + N, \qquad (21)$$

where $N \perp X_2$ and $N \sim \mathcal{N}(0, 1 - \rho^2)$. Let $m$ be a realization of $M_2$. Then, since $M_2 \to X_2^n \to X_1^n$, we can write

$$X_1^n(m) = \rho X_2^n(m) + N^n. \qquad (22)$$

Now, applying the conditional EPI [15], we obtain

$$2^{\frac{2}{n} h(X_1^n | M_2)} \ge \rho^2 \, 2^{\frac{2}{n} h(X_2^n | M_2)} + (2\pi e)(1 - \rho^2). \qquad (23)$$

Hence we have

$$h(X_1^n | M_2) \ge \frac{n}{2} \log\left( \rho^2 \, 2^{\frac{2}{n} h(X_2^n | M_2)} + (2\pi e)(1 - \rho^2) \right). \qquad (24)$$

Applying inequality (20) to expression (24), we obtain

$$h(X_1^n | M_2) \ge \frac{n}{2} \log\Big( (2\pi e)\big(1 - \rho^2 + \rho^2 2^{-2R_2}\big) \Big). \qquad (25)$$

Finally, on applying (25) to expression (19), we obtain

$$nR_1 \ge \frac{n}{2} \log\left( \frac{1 - \rho^2 + \rho^2 2^{-2R_2}}{D} \right) - n\varepsilon_n. \qquad (26)$$

Dividing throughout by $n$ and letting $n \to \infty$ (so that $\varepsilon_n \to 0$), we obtain

$$R_1 \ge \frac{1}{2} \log\left( \frac{1 - \rho^2 + \rho^2 2^{-2R_2}}{D} \right), \qquad (27)$$

which is nothing but expression (3). This completes the proof of the converse. We illustrate the trade-off of $R_2$ vs $R_1$ for various values of $D$, with system parameter $\rho = 0.7$, in Fig. 3.

Fig. 3. R2 vs R1 for various values of D.
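A plot in the spirit of Fig. 3 can be generated directly from the closed form (3); the sketch below (ours; matplotlib assumed, with arbitrarily chosen distortion levels) traces the boundary $R_1 = \frac{1}{2}\log^+\big((1-\rho^2+\rho^2 2^{-2R_2})/D\big)$ for $\rho = 0.7$.

```python
import numpy as np
import matplotlib.pyplot as plt

rho = 0.7
r2 = np.linspace(0.0, 3.0, 200)
for D in (0.3, 0.5, 0.7, 0.9):   # arbitrary distortion levels
    r1 = np.maximum(0.0, 0.5 * np.log2((1 - rho**2 + rho**2 * 2 ** (-2 * r2)) / D))
    plt.plot(r2, r1, label=f"D = {D}")
plt.xlabel("R2 (bits)")
plt.ylabel("minimum R1 (bits)")
plt.legend()
plt.show()
```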
5. Inner and outer bounds for the discrete memoryless case

In this section, we derive inner and outer bounds on the optimal rate region for the discrete memoryless counterpart of the partial side information problem. The inner bound is given as follows:

Theorem 4. The optimal rate distortion trade-off region for the discrete memoryless partial side information problem with arbitrary distortion measure $d(\cdot, \cdot)$ and identical reconstruction constraints contains the set of $(R_1, R_2, D)$ triples such that

$$R_1 \ge I(X_1; \hat{X}_1 | U), \qquad (28)$$

$$R_2 \ge I(U; X_2), \qquad (29)$$

for some pmf $p(u|x_2)$ such that $\hat{X}_1 \to X_1 \to X_2 \to U$ and $\mathbb{E}[d(X_1, \hat{X}_1)] \le D$.

Proof. Recall that the achievable region for the setup without identical reconstruction constraints, i.e., Berger's inner bound [3], is given by the set of $(R_1, R_2)$ pairs such that

$$R_1 \ge I(U_1; X_1 | U_2), \qquad (30)$$

$$R_2 \ge I(U_2; X_2), \qquad (31)$$

for some pmf $p(u_2|x_2) p(u_1|x_1)$ such that $U_1 \to X_1 \to X_2 \to U_2$ and $\mathbb{E}[d(X_1, \hat{X}_1)] \le D$. The proof is completed by setting $U_1 = \hat{X}_1$ and $U_2 = U$ in the above region.

Notice that Theorems 3 and 4 have the same rate constraints; they differ only in the Markov constraints. While $\hat{X}_1 \to X_1 \to U$ holds in Theorem 3, $\hat{X}_1 \to X_1 \to X_2 \to U$ holds in Theorem 4, and the long Markov chain of Theorem 4 is clearly the stronger constraint. Though both Theorems 3 and 4 hold in the discrete memoryless case, Theorem 4 imposes the stronger constraint. It turns out, however, that in the quadratic Gaussian case there is no loss of optimality in relaxing the Markov constraint to $\hat{X}_1 \to X_1 \to U$; thus, Theorem 3 is good enough in the quadratic Gaussian case.
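Although the paper evaluates only the Gaussian case, the inner bound of Theorem 4 can be explored numerically for simple discrete sources. The sketch below is entirely our own construction under assumed test channels: it takes a doubly symmetric binary source with crossover $p$ and Hamming distortion, restricts attention to $U = \mathrm{BSC}(a)(X_2)$ and $\hat{X}_1 = \mathrm{BSC}(b)(X_1)$, which satisfy the long Markov chain $\hat{X}_1 \to X_1 \to X_2 \to U$, and sweeps $(a, b)$ to trace achievable $(R_1, R_2, D)$ points.

```python
import numpy as np
from itertools import product

def h2(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def cascade(p, q):   # crossover probability of two BSCs in cascade
    return p * (1 - q) + (1 - p) * q

p = 0.1              # P(X1 != X2) for the binary source, an arbitrary choice
points = []
for a, b in product(np.linspace(0.01, 0.49, 25), repeat=2):
    q  = cascade(p, a)               # P(X1 != U) for U = BSC(a)(X2)
    R2 = 1.0 - h2(a)                 # I(U; X2)
    R1 = h2(cascade(q, b)) - h2(b)   # I(X1; X1_hat | U) for X1_hat = BSC(b)(X1)
    points.append((R1, R2, b))       # Hamming distortion is E[d] = b here
# a few of the lowest-R1 points meeting D <= 0.1:
print(sorted(pt for pt in points if pt[2] <= 0.1)[:3])
```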
Now, in order to prove the outer bound, we first introduce the Berger-Kaspi setup [5], shown in Fig. 4. The setup is essentially the same as that of Berger et al. [3], except that the side information is made available to encoder 1 as well. Berger and Kaspi [5] obtained a complete characterization of the rate region, as given below.

Fig. 4. The Berger-Kaspi setup.

Theorem 5. The optimal rate distortion trade-off region for the Berger-Kaspi setup is the set of $(R_1, R_2, D)$ triples such that

$$R_1 \ge I(X_1; \hat{X}_1 | U), \qquad (32)$$

$$R_2 \ge I(U; X_2), \qquad (33)$$

for some pmf $p(u|x_2)$ such that $\hat{X}_1 \to X_1 \to X_2$ and $X_1 \to X_2 \to U$, and $\mathbb{E}[d(X_1, \hat{X}_1)] \le D$.

Proof. Refer to [5] for the proof of the theorem.

Now we make the following crucial observation: even if we impose an identical reconstruction constraint in the Berger-Kaspi setup, the rate region remains the same. This is because the side information is available at encoder 1 as well, and hence the identical reconstruction constraint can always be met. Though, in general, a lower rate should suffice for a given distortion when more side information is available, Berger and Kaspi [5] showed that, in their setting, the extra encoder side information is not useful. All we are saying here is that, since the side information is available at both encoder 1 and the decoder, the identical reconstruction constraint is automatically met; this is why the identical reconstruction rate region remains unchanged.

Now we describe our outer bound. Let $U_1$ be a random variable that satisfies the Markov chain $U_1 \to X_1 \to X_2$, and also let $X_1 \to X_2 \to U$. The random variable $U_1$ here stands for the reconstruction $\hat{X}_1$ of source $X_1$, while $U$ stands for the coded side information of source $X_2$. Let $\phi: \mathcal{U}_1 \to \hat{\mathcal{X}}_1$ be a map which is known at both encoder 1 and the decoder.

Theorem 6. The optimal rate distortion trade-off region for the discrete memoryless partial side information problem with arbitrary distortion measure $d(\cdot, \cdot)$ and identical reconstruction constraints is contained in the set of $(R_1, R_2, D)$ triples such that

$$R_1 \ge I(X_1; \hat{X}_1 | U), \qquad (34)$$

$$R_2 \ge I(U; X_2), \qquad (35)$$

$$D \ge \mathbb{E}[d(X_1, \phi(U_1))], \qquad (36)$$

for some pmf $p(u|x_2)$ such that $\hat{X}_1 \to X_1 \to X_2$ and $X_1 \to X_2 \to U$, and $\mathbb{E}[d(X_1, \hat{X}_1)] \le D$.

Proof. From our observation above, the Berger-Kaspi setup with identical reconstructions has the same rate region as given in this theorem. But note that any achievable region for the Berger-Kaspi setup with identical reconstructions is always an outer bound for the partial side information problem with identical reconstructions, because the side information is not available at encoder 1 in the partial side information problem. This yields the outer bound.

Remark 7. Note that our inner and outer bounds have the same rate constraints, and they differ only in the Markov conditions, viz. $\hat{X}_1 \to X_1 \to X_2 \to U$ for the inner bound, and $\hat{X}_1 \to X_1 \to X_2$ together with $X_1 \to X_2 \to U$ for the outer bound.

6. Conclusion

We have studied a quadratic Gaussian distributed lossy source coding setting with a constraint of identical reconstructions between the encoder and the decoder. We have given an explicit characterization of the trade-off between the rates of the two encoders for a given distortion constraint on the reconstruction. We have also given inner and outer bounds for the discrete memoryless version of the problem. Future work would involve studying the case where both sources are to be reconstructed and have to agree with corresponding estimates at the respective encoders. We believe that suitable extensions of the current results can tackle this problem.

Acknowledgements

The author would like to thank the reviewers as well as the editors for a careful reading of the manuscript and several insightful comments and suggestions which have greatly helped to improve the presentation of the paper.
References

[1] T. Berger, Multiterminal Source Coding, Springer-Verlag, 1978.
[2] S.-Y. Tung, Multiterminal Source Coding, Ph.D. thesis, 1979.
[3] T. Berger, K. Housewright, J. Omura, S. Yung, J. Wolfowitz, An upper bound on the rate distortion function for source coding with partial side information at the decoder, IEEE Trans. Inf. Theory 25 (6) (1979) 664-666.
[4] Y. Oohama, Gaussian multiterminal source coding, in: Proceedings of the 1995 IEEE International Symposium on Information Theory, IEEE, 1995, p. 261.
[5] A. Kaspi, T. Berger, Rate-distortion for correlated sources with partially separated encoders, IEEE Trans. Inf. Theory 28 (6) (1982) 828-840.
[6] T. Berger, R.W. Yeung, Multiterminal source encoding with one distortion criterion, IEEE Trans. Inf. Theory 35 (2) (1989) 228-236.
[7] A.B. Wagner, V. Anantharam, An improved outer bound for multiterminal source coding, IEEE Trans. Inf. Theory 54 (5) (2008) 1919-1937.
[8] A. Argyriou, Ö. Alay, P. Palantas, Modeling the lossy transmission of correlated sources in multiple access fading channels, Phys. Commun. 24 (2017) 34-45.
[9] Y.Y. Shkel, S. Verdú, A single-shot approach to lossy source coding under logarithmic loss, IEEE Trans. Inf. Theory 64 (1) (2018) 129-147.
[10] L.R. Vega, P. Piantanida, A.O. Hero, The three-terminal interactive lossy source coding problem, IEEE Trans. Inf. Theory 63 (1) (2017) 532-562.
[11] Y. Steinberg, Coding and common knowledge, in: Information Theory and Applications Workshop (ITA 2008), University of California, San Diego, La Jolla, CA, 2008.
[12] A. Wyner, J. Ziv, The rate-distortion function for source coding with side information at the decoder, IEEE Trans. Inf. Theory 22 (1) (1976) 1-10, https://doi.org/10.1109/TIT.1976.1055508.
[13] B. Ahmadi, R. Tandon, O. Simeone, H.V. Poor, Heegard-Berger and cascade source coding problems with common reconstruction constraints, IEEE Trans. Inf. Theory 59 (3) (2013) 1458-1474.
[14] A. Wyner, On source coding with side information at the decoder, IEEE Trans. Inf. Theory 21 (3) (1975) 294-300, https://doi.org/10.1109/TIT.1975.1055374.
[15] A. El Gamal, Y.-H. Kim, Network Information Theory, Cambridge University Press, 2011.