Channel Estimation Error Model for SRS in LTE

University essay from KTH/Signalbehandling

Author: Pontus Arvidsson; [2011]


Abstract: In 3GPP Long Term Evolution (LTE), sounding is used to obtain a wideband estimate of the uplink channel. This channel estimate may then be used for several radio resource management applications, such as frequency-selective scheduling and beamforming. Code division multiplexing (CDM) enables several users to transmit sounding reference signals (SRS) simultaneously on the same time and frequency resource. Since the multiplexed users may interfere with one another, there is a trade-off between having users transmit SRS as often as possible to obtain frequent channel estimates and using a lower periodicity to obtain higher-quality estimates. To assess this trade-off, one must understand what causes the errors in the channel estimate, so that the sounding resource can be used as efficiently as possible. This thesis proposes a method to model the channel estimation error from sounding for use in a system simulator environment. The method consists of estimating a standard deviation of the channel estimates, with per-resource-block resolution, as a function of the received signal powers of the target user, the interfering users and the background noise. This estimated estimation error may then be applied in the system simulator as noise on a known ideal channel estimate. The main limiting source of error is shown to be interference, both from sounding users in the same cell and from users in other cells, together with some effects of limited frequency resolution. Simulation results indicate that a cleverly designed sounding resource handler is needed to fully utilize the possible gains of sounding.
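To make the modelling idea concrete, the sketch below shows how a per-resource-block error standard deviation, computed from the received powers of the target user, interfering SRS users and background noise, could be applied as noise to an ideal channel estimate in a simulator. The inverse-SINR error model and all function names are illustrative assumptions, not the estimator derived in the thesis.

```python
import numpy as np

def srs_error_std(p_target, p_interference, p_noise):
    """Illustrative per-resource-block error standard deviation.

    The thesis estimates this quantity from the received powers of the
    target user, interfering sounding users and background noise; the
    inverse-SINR form used here is only a placeholder for that model.
    """
    sinr = p_target / (p_interference + p_noise)
    return np.sqrt(1.0 / sinr)

def apply_estimation_error(h_ideal, p_target, p_interference, p_noise, rng=None):
    """Perturb an ideal per-RB channel estimate with complex Gaussian noise
    whose standard deviation follows the modelled estimation error."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = srs_error_std(p_target, p_interference, p_noise)
    noise = sigma / np.sqrt(2) * (rng.standard_normal(h_ideal.shape)
                                  + 1j * rng.standard_normal(h_ideal.shape))
    return h_ideal + noise

# Example: 50 resource blocks with an ideal channel known to the simulator.
rng = np.random.default_rng(0)
n_rb = 50
h_ideal = (rng.standard_normal(n_rb) + 1j * rng.standard_normal(n_rb)) / np.sqrt(2)
p_target = np.full(n_rb, 1.0)        # linear received power of the sounding user
p_interference = np.full(n_rb, 0.2)  # power from co-scheduled SRS users (same and other cells)
p_noise = np.full(n_rb, 0.05)        # background noise power
h_noisy = apply_estimation_error(h_ideal, p_target, p_interference, p_noise, rng)
```

In this setup the simulator keeps the true channel, and the quality of the "sounded" estimate degrades as interference from co-scheduled SRS users grows, which is the trade-off the abstract describes.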
