Workshop Program

Tuesday, May 17

8:30 AM - 12:30 PM

School of Information Theory with Instructor Syed Ali Jafar, University of California, Irvine

Interference Alignment: A New Look at Signal Dimensions in Interference Networks
Interference is the primary bottleneck limiting the data rate capacity of most wireless and many wired networks. The recent emergence of the idea of interference alignment has shown that the throughput limits of interference networks may be orders of magnitude higher than previously imagined. In the relatively short period of three years since its emergence, the idea has gained tremendous momentum in research pursued by industry as well as academia within the network information theory, communication theory, signal processing, and network coding communities. This tutorial introduces the audience to the idea of interference alignment, traces its origins, reviews a variety of interference alignment schemes, summarizes the diverse settings where the idea of interference alignment is applicable, and highlights the common principles that cut across these diverse applications.
Room: ART 214
Chair: Robert Schober (University of British Columbia, Canada)

12:30 PM - 2:00 PM

Lunch Break

2:00 PM - 6:00 PM

School of Information Theory with Instructor Mung Chiang, Princeton University

Optimizing Wireless Network Resource Allocation
This tutorial discusses the optimization models and methods in wireless network resource allocation, especially power control and scheduling. Emphasis will be on deriving distributed algorithms, bridging the theory-practice gap, and evaluating fairness of allocation. Both classical results and recent advances will be covered, and implications to industry standards and practical implementation discussed.
Room: ART 214
Chair: Julian Cheng (University of British Columbia, Canada)

6:30 PM - 9:00 PM

Welcoming Reception

Room: Sun Room, Admin Building

Wednesday, May 18

8:00 AM - 9:00 AM

Plenary Speaker: Frank Kschischang, University of Toronto

Subspace Codes and Network Coding
In classical coding theory, information transmission is modeled as vector transmission: the transmitter sends a vector, the receiver gathers a vector possibly perturbed by noise, and the coding problem is to design a codebook having a large minimum distance between vectors. In this talk we generalize to the case of network coding and, motivated by the property that linear network coding is vector-space preserving, we model information transmission as vector-space transmission: the transmitter sends a (basis for a) vector space, the receiver gathers a (basis for a) vector space possibly perturbed by noise, and the coding problem is to design a codebook having a large minimum distance between vector spaces. We will show that so-called "lifted" maximum rank distance (MRD) codes such as Gabidulin codes play essentially the same role as that played by maximum distance separable (MDS) codes such as Reed-Solomon codes, both for information transmission in the presence of adversarial errors and for security against a wiretapper. When errors are introduced randomly (rather than chosen by an adversary), we show that a simple matrix-based coding scheme can approach capacity. Finally, we describe how some of these ideas may be useful in the context of lattice-theoretic physical-layer network-coding schemes based on compute-and-forward relaying.
Room: ASC 140
Chair: Norman C. Beaulieu (University of Alberta, Canada)

9:00 AM - 10:40 AM

Coding and Information Theory I

Room: ASC 140
Chair: Frank R. Kschischang (University of Toronto, Canada)
Information Rates of Cyclostationary Faster-than-Nyquist Signaling
Yong Jin Daniel Kim (McGill University, Canada); Jan Bajcsy (McGill University, Canada)
Faster-than-Nyquist (FTN) signaling, in which information-carrying symbols are sent faster than the Nyquist rate determined by the physical channel bandwidth, has been studied as an alternative transmission technology for communication systems. This paper analyzes the information rates of cyclostationary FTN signaling on AWGN and linear time-invariant (LTI) channels, presents related numerical capacity results, and discusses potential merits of FTN signaling.
Design of Multi-Edge-Type LDPC Codes for High-Order Coded Modulation
Lei Zhang (University of Toronto, Canada); Frank R. Kschischang (University of Toronto, Canada)
A design method for bandwidth-efficient LDPC coded modulation for $2^{2m}$-QAM constellations at rate $(2m-1)/(2m)$ in complex AWGN is presented. A multi-edge-type parameterization is used to exploit the distinct bit-channel capacities unique to high-order modulation using LDPC structures. EXIT analysis is adapted to the multi-edge setting by introducing a multi-dimensional EXIT iterated-function-system analysis. Under this framework, a successful decoding condition is developed by estimating fixed points of the dynamical system using its numerical gradient. Optimized ensembles are found for 16-QAM with thresholds matching the best known ensembles of equal complexity. For 64- to 1024-QAM, sufficiently high bit-channel capacities allow lower-order optimized ensembles to be extended, leading to a practical nested code structure. The nested structure provides flexible rate selection with a single decoder. All optimized code ensembles with maximum variable degree 15 are within 0.21 dB of the constrained, non-iterative capacity.
A New Approach for FEC Decoding Based on the BP Algorithm in LTE and WiMAX Systems
Ahmed Refaey Hussein (Laval University, Canada); Sebastien Roy (Laval University, Canada); Paul Fortier (Laval University, Canada)
Many wireless communications systems such as IS-54, enhanced data rates for GSM evolution (EDGE), worldwide interoperability for microwave access (WiMAX), and long term evolution (LTE) have adopted low-density parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward error correcting (FEC) schemes for data and overhead channels. Consequently, many efficient algorithms have been proposed for decoding these codes. However, the different decoding approaches for these two families of codes usually lead to different hardware architectures. Since these codes work side by side in these new wireless systems, it is desirable to introduce a universal decoder that handles both families. The present work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed, the BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all convolutional code classes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, this is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
The Delay Selector Channel: Definition and Capacity Bounds
Lu Cui (York University, Canada); Andrew Eckford (York University, Canada)
A new timing channel, known as the delay selector channel (DSC), is proposed as an abstract model for applications with timing noise. In this model, channel inputs are delayed by a random amount, and delayed transmissions are summed at the output. Molecular communication is discussed as a principal application of the DSC, since the channel mimics the propagation and reception of molecules under Brownian motion. In this paper, the DSC is described in detail, and a closed-form lower bound is given on capacity.
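To make the channel model concrete, here is a minimal simulation sketch of the DSC as described above (my own illustration, not the authors' code); the geometric delay law and the parameter values are assumptions, since the abstract only states that inputs are delayed by a random amount and that delayed transmissions add at the output.

    import numpy as np

    rng = np.random.default_rng(0)

    def delay_selector_channel(x, p=0.5, max_delay=20):
        """Each transmitted pulse x[t] in {0,1} is held back by a random delay;
        pulses landing in the same output slot are summed, as in the DSC."""
        y = np.zeros(len(x) + max_delay, dtype=int)
        for t, pulse in enumerate(x):
            if pulse:
                d = min(rng.geometric(p) - 1, max_delay)  # assumed delay distribution
                y[t + d] += 1                             # delayed arrivals accumulate
        return y

    x = rng.integers(0, 2, size=15)                       # binary channel input
    y = delay_selector_channel(x)
    print("input :", x)
    print("output:", y[: len(x) + 5])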
On the Transmission of a Gaussian Source over an AWGN channel with Correlated Interference
Morteza Varasteh (Sharif University of Technology, Iran); Hamid Behroozi (Sharif University of Technology, Iran)
We study hybrid digital-analog (HDA) joint source-channel coding schemes for transmitting an analog Gaussian source over an AWGN channel in the presence of interference correlated with the source. We analyze these schemes to obtain achievable (squared-error) distortion-power tradeoffs. For comparison, we also obtain an outer bound on the achievable distortion-power tradeoff. Using numerical examples, we demonstrate that a two-layered coding scheme consisting of analog and Costa coding performs well compared to the other HDA schemes provided.

10:40 AM - 11:00 AM

Morning Coffee Break

Room: ASC 120 Foyer

11:00 AM - 12:20 PM

Interference and Cognitive Radio

Room: ASC 140
Chair: Andrew Eckford (York University, Canada)
Optimum Cognitive Radio Transmission Scheme for Reducing Average Interference Power
Sang Joon Kim (Samsung, Korea); Saeed S. Ghassemzadeh (AT&T Labs - Research, USA); Robert Miller (AT&T Labs - Research, USA); Vahid Tarokh (Harvard University, USA)
Reducing interference induced by secondary users is one of the most challenging problems in spectrum sharing. In this paper, we derive the optimum transmission strategy that minimizes the average interference power created by cognitive radios at a given point in space, i.e., the location of a primary receiver. Surprisingly, this is not given by the water-filling solution.
Interference Alignment and Neutralization in a Cognitive 3-User MAC-Interference Channel: Degrees of Freedom
Anas Chaaban (Ulm University, Germany); Aydin Sezgin (Ulm University, Germany)
A network consisting of a point-to-point (P2P) link and a multiple access channel (MAC) sharing the same medium is considered. The resulting interference network, with three transmitters and two receivers, is studied from a degrees of freedom (DoF) perspective, with and without cognition. Several cognition variants are examined. Namely, the setup is studied with (1) no cognitive transmitters, (2) a cognitive P2P transmitter, (3) one cognitive MAC transmitter, and (4) two cognitive MAC transmitters. It is shown that having a cognitive P2P transmitter does not bring any DoF gain to the network; this is obtained by showing that the DoF of the first two cases (1) and (2) is 1. However, a cognitive MAC transmitter is more beneficial, since the latter two cases (3) and (4) have 3/2 DoF. The achievability of 3/2 DoF is guaranteed by using a combination of interference neutralization and interference alignment.
On the Capacity of the Cognitive Z-Interference Channel
Mojtaba Vaezi (McGill University, Canada); Mai Vu (McGill University, Canada)
We study the cognitive interference channel (CIC) with two transmitters and two receivers, in which the cognitive transmitter non-causally knows the message and the codeword of the primary transmitter. We first introduce a discrete memoryless more capable CIC, which is an extension of the more capable broadcast channel (BC). Then, using superposition coding, we propose an inner bound and an outer bound on its capacity region. We then apply these bounds to the cognitive Gaussian Z-interference channel (CGZIC), in which only the primary receiver suffers from interference. We show that the jointly Gaussian distribution maximizes the outer bound for the CGZIC and compute this outer bound explicitly. It appears to be the best outer bound on the capacity of the CGZIC at strong interference. More importantly, this outer bound coincides with the inner bound for $a > \sqrt{1 + P_1}$. Thus, we establish the capacity of the CGZIC in this range and show that superposition encoding at the cognitive transmitter and successive decoding at the primary receiver are capacity-achieving.
Capacity Bounds for the Three-User Cognitive Z-Interference Channel
Mahtab Mirmohseni (Sharif University of Technology, Iran); Bahareh Akhbari (Sharif University of Technology, Iran); Mohammad Reza Aref (Sharif University of Tech., Iran)
In this paper, we consider a three-user cognitive radio network with two primary users and one cognitive user. We concentrate on a three-user Interference Channel (IFC) in which one of the transmitters has cognitive capabilities and non-causally knows the messages of the other two transmitters. Moreover, we assume that the cognitive transmitter does not cause any interference at the receivers of the primary users, and we introduce the three-user Cognitive Z-Interference Channel (C-ZIFC). We first obtain an inner bound on the capacity region of the three-user C-ZIFC, where our coding scheme makes use of a collaborative strategy via rate splitting and a cooperative strategy via superposition coding. Moreover, the cognitive user uses Gel'fand-Pinsker binning in order to reduce the interference at its own receiver. We also provide an outer bound on the capacity region of this channel and show that characterizing the capacity region of the three-user C-ZIFC does not follow trivially from the two-user case; the reason lies in the fact that the three-user setup retains key features of the IFC, such as independent channel inputs. We also consider the Gaussian case and find an achievable rate region for the Gaussian C-ZIFC.

12:20 PM - 2:00 PM

Lunch Break

2:00 PM - 3:20 PM

Sequences and Reed-Solomon Codes

Room: ASC 140
Chair: Vahid Tarokh (Harvard University, USA)
Peak Power Analysis of MC-CDMA Employing Golay Complementary Sequences
Lin Dong (Lakehead University, Canada); Nam Yul Yu (Lakehead University, Canada)
Golay complementary sequences are a good solution for reducing the high peak-to-average power ratio (PAPR) of multicarrier communication systems. In this paper, we present a simple but novel technique to develop theoretical PAPR bounds for downlink MC-CDMA systems using Golay complementary sequences for spreading and coding. The developed PAPR bounds are independent of the spreading factor in uncoded MC-CDMA. Furthermore, they have no dependency on the number of spreading processes or the spreading factor in coded MC-CDMA. Simulation results demonstrate that the theoretical bounds are well followed by the 99.9% PAPRs, which are also independent of the spreading factors (for uncoded and coded cases) and the number of spreading processes (for the coded case only). Practically, this independence gives a useful insight for peak power control in MC-CDMA employing Golay complementary sequences.
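As background to the bounds above, the following sketch (my illustration, not the authors' code) builds a length-16 binary Golay complementary pair with the standard doubling recursion and checks numerically that a multicarrier signal carrying one of the sequences has PAPR at most 2 (about 3 dB); the 16x oversampling factor is an assumption used only to approximate the continuous-time peak.

    import numpy as np

    def golay_pair(m):
        """Binary (+1/-1) Golay complementary pair of length 2**m via the doubling recursion."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(m):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(4)                                   # length-16 pair
    acf = lambda s: np.correlate(s, s, "full")             # aperiodic autocorrelation
    # Complementarity: the two autocorrelations sum to a delta of height 2N.
    assert np.allclose(acf(a) + acf(b), np.r_[np.zeros(15), 2 * len(a), np.zeros(15)])

    # PAPR of the multicarrier signal carrying `a` on its subcarriers (16x oversampled IDFT).
    N, L = len(a), 16
    s = np.fft.ifft(np.r_[a, np.zeros((L - 1) * N)])       # overall scale is irrelevant for PAPR
    papr = np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
    print(f"PAPR = {papr:.3f} ({10 * np.log10(papr):.2f} dB), at most 2 for a Golay sequence")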
Group Randomness Properties of Pseudo-Noise and Gold Sequences
Behtash Babadi (Harvard University, USA); Saeed S. Ghassemzadeh (AT&T Labs - Research, USA); Vahid Tarokh (Harvard University, USA)
In this paper, we study the group randomness of pseudo-random sequences based on shortened first-order Reed-Muller codes and of Gold sequences. In particular, we characterize the empirical spectral distribution of random matrices formed from shortened first-order Reed-Muller codes. We show that although these sequences have very appealing randomness properties across individual codewords, they do not possess certain group randomness properties of i.i.d. sequences. In other words, the spectral distribution of random matrices formed from these sequences differs dramatically from that of randomly generated i.i.d. matrices. In contrast, Gold sequences manifest the group randomness properties of random i.i.d. sequences. Upper bounds on the Kolmogorov complexity of these sequences are established, and it is shown that these bounds are much lower than those of random i.i.d. sequences when the sequence length is large enough. We discuss the implications of these observations and motivate the need to develop novel randomness tests encompassing both individual and group randomness of sequences.
Adaptive Single-Trial Error/Erasure Decoding of Reed-Solomon Codes
Christian Senger (Ulm University, Germany); Vladimir Sidorenko (University of Ulm, Germany); Steffen Schober (Ulm University, Germany); Martin Bossert (University of Ulm, Germany); Victor V. Zyablov (Institute for Information Transmission Problems (IITP) RAS, Russia)
Algebraic decoding algorithms are commonly applied for the decoding of Reed-Solomon codes. Their main advantages are low computational complexity and predictable decoding capabilities. Many algorithms can be extended to correct both errors and erasures. This enables the decoder to exploit binary quantized reliability information obtained from the transmission channel: received symbols with high reliability are forwarded to the decoding algorithm, while symbols with low reliability are erased. In this paper we investigate adaptive single-trial error/erasure decoding of Reed-Solomon codes, i.e., we derive an adaptive erasing strategy which minimizes the residual codeword error probability after decoding. Our result is applicable to any error/erasure decoding algorithm as long as its decoding capabilities can be expressed by a decoder capability function. Examples are Bounded Minimum Distance decoding with the Berlekamp-Massey or Sugiyama algorithms and the Guruswami-Sudan list decoder.
Multivariate Interpolation Decoding of Heterogeneous Interleaved Reed-Solomon Codes
Farnaz Shayegh (Concordia University, Canada); M. Reza Soleymani (Concordia University, Canada)
We derive and analyze an algorithm for collaborative decoding of heterogeneous Interleaved Reed-Solomon (IRS) codes. In order to generate IRS codes, several codewords from different RS codes with the same length over the same Galois field are interleaved. The basis of the decoding algorithm is similar to the Guruswami-Sudan (GS) decoding method. However, here multivariate interpolation is used in order to decode all the codewords of the interleaved scheme simultaneously. In the presence of burst errors, it is shown that the error correction capability of this algorithm is larger than that of independent decoding of each codeword using the standard GS method. In the latter case, the error correction capability is equal to the decoding radius of the GS algorithm for the RS code with the largest dimension.

3:20 PM - 3:40 PM

Afternoon Coffee Break

Room: ASC 120 Foyer

3:40 PM - 5:20 PM

Source Coding

Room: ASC 140
Chair: Jing Wang (Research In Motion Limited, Canada)
On Optimum Fixed-Rate Causal Scalar Quantization Design for Causal Video Coding
Lin Zheng (University of Waterloo, Canada); En-hui Yang (University of Waterloo, Canada)
Causal video coding is a coding paradigm in which video source frames are encoded in a frame-by-frame manner: the encoder for each frame can use all previous frames and all previous encoded frames, while the corresponding decoder can use only all previous encoded frames. In this paper, the design of causal video coding is considered from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization. By extending the classic Lloyd-Max algorithm for a single source to this multiple-source case, we then propose an algorithm for designing optimum fixed-rate causal scalar quantizers for causal video coding to minimize the total distortion among all sources. The proposed algorithm converges in the sense that the total distortion cost is monotonically decreasing until a stationary point is reached. Simulation results show that in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers as much as a $16\%$ quality improvement (distortion reduction).
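For readers unfamiliar with the single-source algorithm that this paper extends, here is a minimal sketch of the classic Lloyd-Max iteration for a fixed-rate scalar quantizer (my own illustration; the Gaussian training source and 4-level codebook are arbitrary choices): alternate nearest-codeword partitioning with centroid (conditional-mean) updates until the distortion stops decreasing.

    import numpy as np

    def lloyd_max(samples, levels, iters=50):
        """Design a fixed-rate scalar quantizer (codebook) minimizing MSE on `samples`."""
        codebook = np.quantile(samples, np.linspace(0.05, 0.95, levels))  # initial levels
        for _ in range(iters):
            # Partition step: assign each sample to its nearest reconstruction level.
            idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
            # Centroid step: each level becomes the conditional mean of its cell.
            for j in range(levels):
                if np.any(idx == j):
                    codebook[j] = samples[idx == j].mean()
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        mse = np.mean((samples - codebook[idx]) ** 2)
        return np.sort(codebook), mse

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)        # stand-in for samples of one stationary source
    cb, mse = lloyd_max(x, levels=4)
    print("4-level codebook:", np.round(cb, 3), " MSE:", round(float(mse), 4))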
Hard-decision Quantization with Adaptive Reconstruction Levels for High Efficiency Video Coding
Jing Wang (Research In Motion Limited, Canada); Xiang Yu (Research In Motion Limited, Canada); Da-ke He (Research In Motion/SlipStream, Canada)
In this paper, a quantization scheme based on hard-decision partitioning and adaptive reconstruction levels is proposed for High Efficiency Video Coding (HEVC), the video coding standard currently under development, which has demonstrated approximately 40% bit savings compared to H.264/AVC. For each video frame, the residual signal after motion estimation is transformed and then quantized by a hard-decision scalar quantizer. The reconstruction levels of the quantization outputs are adaptively computed based on the statistics of the residual signals of previously coded frames, and are selectively transmitted to the decoder. Compared with the rate-distortion optimized quantization (RDOQ) method in HEVC, which aims to be optimal among all encoder-side quantization techniques, the proposed scheme addresses the decoder-side reconstruction problem and achieves further rate reduction in the low-delay low-complexity setting, while the encoding computational complexity is significantly reduced and the decoder complexity remains almost the same.
Exploiting Memory and Soft-Decision Information in Channel Optimized Quantization for Correlated Fading Channels
Shervin Shahidi (Queen's University, Canada); Fady Alajaji (Queen's University, Canada); Tamas Linder (Queen's University, Canada)
A channel optimized vector quantizer (COVQ) scheme is studied and evaluated for a recently introduced discrete binary-input $2^q$-ary-output channel with Markovian ergodic noise based on a finite queue. This channel can effectively model binary-modulated correlated Rayleigh fading channels with output quantization of resolution q. It is shown that the system can successfully exploit the channel's memory and soft-decision information. Signal-to-distortion gains of up to 1.4 dB are obtained with only 2 bits of soft-decision quantization over COVQ schemes designed for a hard-decision (q = 1) demodulated channel. Furthermore, gains as high as 4.6 dB can be achieved for a highly correlated channel in comparison with systems designed for the ideally interleaved (memoryless) channel. Finally, the queue-based noise model is validated as an effective approximation of correlated fading channels by testing a COVQ trained using this model over the fading channel.
Hybrid Digital-Analog Source-Channel Coding with One-to-Three Bandwidth Expansion
Ahmad Abou Saleh (Queen's University, Canada); Fady Alajaji (Queen's University, Canada); Wai-Yip Geoffrey Chan (Queen's University, Canada)
We consider the problem of bandwidth expansion for lossy joint source-channel coding over a memoryless Gaussian channel. A low delay 1:3 bandwidth expansion hybrid digital-analog coding system, which combines a scalar quantizer and a 1:2 nonlinear analog coder, is proposed. It is shown that our proposed system outperforms the 1:3 generalized hybrid scalar quantizer linear coder in terms of signal-to-distortion ratio (SDR). A lower bound on the system SDR is also derived.
Credit-Based Variable-to-Variable Length Coding: Key Concepts and Preliminary Redundancy Analysis
Jin Meng (University of Waterloo, Canada); En-hui Yang (University of Waterloo, Canada)
A new coding concept called credit-based variable-to-variable length (cbv2v) coding is proposed in this paper. A binary cbv2v code is constructed, and analysis of its performance shows that cbv2v coding can achieve much better trade-off among the coding delay, redundancy, and space complexity than does variable-to-variable length (v2v) coding. Specifically, let $L$ be the total number of source words. With finite coding delay, the redundancy of our proposed cbv2v code decreases in the order of $O(L^{-0.5})$ while the redundancy of binary v2v coding is lower bounded by $\Omega( (\log L)^{-5 - \epsilon} )$ where $\epsilon$ is an arbitrary positive real number. Furthermore, we also show that under mild conditions, the redundancy of any cbv2v code can be lower bounded by $\Omega(L^{-2|\mathcal{X}| -1 -\epsilon})$, where $|\mathcal{X}|$ is the size of source alphabet.

5:00 PM - 6:30 PM

Canadian Society of Information Theory Meeting

Room: UNC 334

Thursday, May 19

8:00 AM - 9:00 AM

Plenary Speaker: Vahid Tarokh, Harvard University

Near Theoretical Lower Bound Sparse Representation and Compressive Sampling
In this talk, we consider the problem of sparse representation of signals and the associated problem of sampling sparse signals. By making a connection to Shannon's theories of information and coding, we develop universal (Shannon-type) lower bounds on sparse representations of signals, and study the minimum number of samples required to reconstruct sparse signals both in the absence and presence of noise. These results significantly improve upon those of the L_1/Dantzig-based approach commonly taken in the literature. We then produce concrete sampling methods that perform within 2 dB of the theoretical lower bounds. Applications to image coding will also be discussed.
Room: ASC 140
Chair: Robert Schober (University of British Columbia, Canada)

9:00 AM - 10:40 AM

Network Coding and Cooperative Diversity

Room: ASC 140
Chair: Ashish Khisti (University of Toronto, Canada)
Lattice Network Coding over Finite Rings
Chen Feng (University of Toronto, Canada); Danilo Silva (Federal University of Santa Catarina, Brazil); Frank R. Kschischang (University of Toronto, Canada)
Lattice network coding has recently been proposed as a practical implementation of Nazer and Gastpar's compute-and-forward relaying strategy. Previous investigations of lattice network coding have mainly been over finite fields. In this paper, we extend lattice network coding from finite fields to finite rings. In addition to being of theoretical interest in its own right, this extension provides an alternative viewpoint on Nazer and Gastpar's relaying strategy and expands the design space of lattice network codes. In particular, we show that this extension enables the use of complex Construction D to design lattice network codes, leading to potentially higher encoder rates.
On Noisy Network Coding for a Gaussian Relay Chain Network with Correlated Noises
Lei Zhou (University of Toronto, Canada); Wei Yu (University of Toronto, Canada)
Noisy network coding, which elegantly combines the conventional compress-and-forward relaying strategy with ideas from network coding, has recently drawn much attention for its simplicity and its optimality in achieving to within a constant gap of the capacity of the multisource multicast Gaussian network. The constant-gap result, however, applies only to Gaussian relay networks with independent noises. This paper investigates the application of noisy network coding to networks with correlated noises. By focusing on a four-node Gaussian relay chain network with a particular noise correlation structure, it is shown that noisy network coding can no longer achieve to within a constant gap of capacity with the choice of Gaussian inputs and Gaussian quantization. The cut-set bound of the relay chain network in this particular case, however, can be achieved to within half a bit by a simple concatenation of a correlation-aware noisy network coding strategy and a decode-and-forward scheme.
Soliton-Like Network Coding for a Single Relay
Andrew Liau (Queen's University, Canada); Shahram Yousefi (Queen's University, Canada); Il-Min Kim (Queen's University, Canada)
For the binary erasure channel, Luby Transform (LT) and Raptor codes have been shown to achieve capacity in multicasting scenarios through carefully designed degree distributions. Generalizing fountain codes to multihop networks requires transport nodes to perform network coding (NC). However, if intermediate nodes perform decentralized NC blindly, the statistical properties imposed by the fountain code are lost, and a Gaussian elimination decoder must be used at the sink at the cost of a significant increase in complexity compared to a belief propagation (BP) decoder. Addressing this problem, in this paper we propose a new protocol, namely Soliton-like rateless coding (SLRC), which exploits the benefits of fountain coding and NC over a Y-network. By ensuring that key properties of the fountain code are preserved, BP can be applied effectively even when transport nodes perform NC. The SLRC scheme is evaluated against buffer-and-forwarding and other state-of-the-art schemes; SLRC exhibits a 5% reduction in overhead at high decoding success rates. Simulations show that the proposed scheme preserves the benefits of NC and fountain coding.
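The degree distribution whose shape SLRC aims to preserve through the relay is the robust soliton distribution of LT codes; the sketch below builds that distribution and samples encoding-symbol degrees from it (my illustration; k = 1000, c = 0.1, and delta = 0.05 are arbitrary parameter choices, not values from the paper).

    import numpy as np

    def robust_soliton(k, c=0.1, delta=0.05):
        """Robust soliton degree distribution mu(1..k) used by LT codes."""
        d = np.arange(1, k + 1, dtype=float)
        rho = np.where(d == 1, 1.0 / k, 1.0 / (d * (d - 1)))   # ideal soliton part
        R = c * np.log(k / delta) * np.sqrt(k)
        spike = int(round(k / R))
        tau = np.zeros(k)
        tau[: spike - 1] = R / (d[: spike - 1] * k)             # 1 <= d < k/R
        tau[spike - 1] = R * np.log(R / delta) / k              # spike at d = k/R
        mu = rho + tau
        return mu / mu.sum()

    k = 1000
    mu = robust_soliton(k)
    rng = np.random.default_rng(0)
    degrees = rng.choice(np.arange(1, k + 1), size=10, p=mu)    # degrees of 10 encoded symbols
    print("sampled encoding-symbol degrees:", degrees)
    print("average degree:", round(float((np.arange(1, k + 1) * mu).sum()), 2))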
Optimal Rates for Decode-And-Forward Cooperative Networks with Partial CSI
Ehsan Karamad (University of Toronto, Canada); Raviraj Adve (University of Toronto, Canada)
Centralized algorithms for relay selection and power allocation in cooperative networks have been widely considered in the literature. As effective as the proposed algorithms are, the complexity of centralized implementation and the feedback required to communicate the required channel state information make these solutions impractical. Here we investigate, in terms of the achievable rates for the nodes, relaying and power allocation for cooperative networks and consider the effects of partial channel state information. We first consider 1-bit knowledge of the relevant channels, followed by the multi-bit case. We also consider the case of multiple sources, wherein relay resources must be sub-divided amongst the sources.
Average Outage and Non-Outage Duration of Selective Decode-and-Forward Relaying
Nikola Zlatanov (University of British Columbia, Canada); Robert Schober (University of British Columbia, Canada); Zoran Hadzi-Velkov (Ss. Cyril and Methodius University, Macedonia); George K. Karagiannidis (Aristotle University of Thessaloniki, Greece)
In this paper, assuming transmission with fixed rate and fixed power, we derive the average duration of capacity outage and non-outage events of selective decode-and-forward relaying with repetition coding over slow Rayleigh fading channels. Furthermore, we develop high signal-to-noise ratio (SNR) approximations for both durations which provide significant insight into the impact of various system and channel parameters. For high SNR, on a double logarithmic scale, both the average outage duration (AOD) and the average non-outage duration (ANOD) become straight lines when plotted as functions of the SNR. However, while the slope of the ANOD improves with increasing diversity order, the slope of the AOD is −1/2 independent of the diversity order.

10:40 AM - 11:00 AM

Morning Coffee Break

Room: ASC 120 Foyer

11:00 AM - 12:20 PM

Optical Communications

Room: ASC 140
The Per-Sample Capacity of Zero-dispersion Optical Fibers
Mansoor Isvand Yousefi (University of Toronto, Canada); Frank R. Kschischang (University of Toronto, Canada)
The capacity of the channel defined by the stochastic nonlinear Schrödinger equation, which includes the effects of the Kerr nonlinearity and amplified spontaneous emission noise, is considered in the case of zero dispersion. For the first time, the exact capacity subject to peak and average power constraints is numerically quantified using dense multiple ring modulation formats. It is shown that, for a fixed noise power, the per-sample capacity grows unbounded with input signal power, clarifying recent contradictory results in the literature for this particular problem. A distribution with a half-Gaussian profile on amplitude and uniform phase is shown to provide an excellent lower bound to the capacity which is simple and asymptotically optimal at high SNRs. The channel can be partitioned into amplitude and phase subchannels, and it is shown that, although the capacity of the entire channel has no peak as a function of the signal input power, the capacity of the phase channel declines for large input powers.
Spectrally Factorized Optical OFDM
Kasra Asadzadeh (McMaster University, Canada); Ahmed A. Farid (McMaster University, Canada); Steve Hranilovic (McMaster University, Canada)
A novel bandwidth-efficient method to implement orthogonal frequency division multiplexing (OFDM) on intensity-modulated direct-detection (IM/DD) channels is presented, termed spectrally factorized optical OFDM (SFO-OFDM). It is shown that a necessary and sufficient condition for a band-limited periodic signal to be non-negative for all time is that its frequency coefficients form an autocorrelation sequence. Instead of sending data directly on the subcarriers, the autocorrelation of the complex data sequence is performed before transmission to guarantee non-negativity. In the $z$-domain, the average optical power is linked to the position of the zeros and is used for the design of signal sets. In contrast to previous approaches, SFO-OFDM is able to use the entire bandwidth for data transmission and does not require reserved subcarriers. Using a sub-optimal design technique with 9 subcarriers and 8 bits per symbol, SFO-OFDM has a 0.5 dB gain over ACO-OFDM at a BER of $10^{-5}$ and a reduction in peak-to-average ratio of more than 30%.
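The non-negativity condition stated above is easy to verify numerically: if the subcarrier coefficients are the autocorrelation of a data sequence x, the band-limited periodic signal they define equals |X(t)|^2 and can never go negative. Below is a small check of this fact (my own illustration; the 5-symbol QPSK data vector and 64-point sampling grid are arbitrary assumptions, not the authors' signal-set design).

    import numpy as np

    rng = np.random.default_rng(2)
    N = 5
    x = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)  # QPSK data

    # Autocorrelation c_k = sum_n x[n] conj(x[n-k]) for k = 0..N-1 (and c_{-k} = conj(c_k)).
    c = np.array([np.sum(x[k:] * np.conj(x[: N - k])) for k in range(N)])

    # Band-limited periodic signal with these frequency coefficients (period T = 1):
    # s(t) = c_0 + 2 Re{ sum_{k>=1} c_k exp(j 2 pi k t) }.
    t = np.linspace(0, 1, 64, endpoint=False)
    s = c[0].real + 2 * sum((c[k] * np.exp(2j * np.pi * k * t)).real for k in range(1, N))

    X = sum(x[n] * np.exp(2j * np.pi * n * t) for n in range(N))             # s(t) = |X(t)|^2
    assert np.allclose(s, np.abs(X) ** 2)
    print("min of s(t):", round(float(s.min()), 6), "-> non-negative, hence IM/DD-compatible")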
Multi-Hop Relaying over the Atmospheric Poisson Channel
Majid Safari (University of Waterloo, Canada); Mohammad Rad (Université Laval, Canada); Murat Uysal (Ozyegin University, Turkey)
In this paper, we study the outage behavior of a decode-and-forward multi-hop free-space optical (FSO) system over a Poisson channel degraded by atmospheric turbulence. We assume that perfect channel side information (CSI) is available at the receiver side and consider both cases of perfect CSI and no CSI at the transmitter side. We solve the outage probability minimization problem subject to a peak power constraint as well as a short- or long-term average sum power constraint. As a result, optimal power control strategies are presented for different scenarios under consideration. A sub-optimal yet low-complexity solution is further proposed under the short-term power constraint. Our results demonstrate that multi-hop relaying yields significant performance improvements which are particularly important for long-range FSO links.
DMT Analysis of Multi-Hop Coherent FSO Communication over Atmospheric Channels
Sahar Molla Aghajanzadeh (University of Waterloo, Canada); Murat Uysal (Ozyegin University, Turkey)
In this paper, we investigate the diversity multiplexing tradeoff (DMT) of a multi-hop coherent free-space optical (FSO) system over atmospheric channels. We consider an FSO relay-assisted system with decode-and-forward relay nodes and multiple heterodyne receivers with modal compensation. Based on a recently introduced statistical characterization for the combined effects of turbulence-induced amplitude fluctuations and phase aberrations, we quantify the potential performance improvements through the derivation of DMT. Our results demonstrate significant performance gains that can be achieved by multi-hop transmission in coherent FSO systems.

12:20 PM - 2:00 PM

Lunch Break

2:00 PM - 3:20 PM

Relay-Assisted Communication

Room: ASC 140
Chair: Chintha Tellambura (University of Alberta, Canada)
Joint Typicality Analysis for Half-Duplex Cooperative Communication
Ahmad Abu Al Haija (McGill University, Canada); Mai Vu (McGill University, Canada)
We propose a half-duplex cooperative scheme for a discrete memoryless channel (DMC) consisting of two users communicating with one destination. The half-duplex constraint is satisfied by performing the communication over 3 time slots with variable durations in each code block. Each user alternatively transmits and receives during the first 2 time slots, then both of them transmit during the last one. Different from [1], here we use joint typicality instead of maximum likelihood (ML) decoding to derive the rate region. The main contribution is in the joint decoding and proof techniques that concurrently combine code segments of different lengths.
Distributed Optimization of the Bhattacharyya Parameter in Wireless Relay Networks
Josephine Chu (MaxLinear Inc., USA); Andrew Eckford (York University, Canada); Raviraj Adve (University of Toronto, Canada)
A distributed optimization algorithm is presented for resource allocation in wireless relay networks. This algorithm is particularly useful in multiple-source, multiple-relay wireless networks employing fractional cooperation, in which each relay contributes only a fraction of its resources to each source; in this case, the algorithm is used to find the optimal fractions. The advantage of this distributed algorithm is to permit optimization in a large network, even though each node is only aware of the local topology of the network. Examples and simulation results are presented, illustrating the use of this algorithm.
Complexity-Efficient Detection for MIMO Relay Networks
Shuangshuang Han (University of Alberta, Canada); Chintha Tellambura (University of Alberta, Canada)
This paper derives the equivalent maximum likelihood (ML) detector at the destination of multi-branch dual-hop multiple-input multiple-output (MIMO) relay networks. Complexity-efficient detectors are proposed by extending both the complexity-efficient sphere decoder (CSD) and the fixed-complexity sphere decoder. Compared to direct-link detection and cooperative partial detection, our CSD-based detection method shows almost-fixed, reduced complexity at a negligible performance loss. Although detect-and-forward relaying is the main focus, this detection also performs well in amplify-and-forward relaying. The simulation results show that the CSD achieves nearly optimal ML performance while keeping the complexity of MIMO relay detection fixed and reduced, making this algorithm suitable for hardware implementation.
Packet Level Analysis for AMC in a Wireless Cooperative Communication System over Nakagami-m Fading Channels
Ning Wang (University of Victoria, Canada); T. Aaron Gulliver (University of Victoria, Canada)
We study a single-source single-destination wireless cooperative communication system with multiple relays operating in a modified decode-and-forward mode. Nakagami-$m$ fading with additive white Gaussian noise is assumed for all inter-node channels. A packet feedback model is proposed to approximately characterize packet loss events in the wireless channel. An approximation to the steady state distribution of the proposed generalized discrete time M/G/1 queue at the source is obtained by state truncation. The packet level performance of four transmission modes is analyzed under a variety of channel and traffic conditions using this truncated model. The calculated network power is used as a criterion in finding boundary points for adaptive modulation and coding scheduling.

3:20 PM - 3:40 PM

Afternoon Coffee Break

Room: ASC 120 Foyer

3:40 PM - 5:20 PM

Coding Theory and Information Theory II

Room: ASC 140
Chair: Wei Yu (University of Toronto, Canada)
Combinatorial Properties as Predictors for the Performance of the Sum-Product Algorithm
Sotiria Lampoudi (UCSB, USA); John Brevik (California State, Long Beach, USA); Michael O'Sullivan (San Diego State University, USA)
We examine various algebraic/combinatorial properties of Low-Density Parity-Check codes as predictors for the performance of the sum-product algorithm on the AWGN channel in the error floor region. We consider three families of check matrices, two algebraically constructed and one sampled from an ensemble, expurgated to remove short cycles. The three families have similar properties: all are (3, 6)-regular, have girth 8, and have code length roughly 280. The best predictors are small trapping sets, and the predictive value is much higher for the algebraically constructed families than for the randomly sampled family.
On the Equivalence of the Berlekamp-Massey and the Euclidean Algorithms for Algebraic Decoding
Todd Mateer (Howard Community College, USA)
Dornstetter, Heydtmann, and Jensen have previously demonstrated that the Extended Euclidean Algorithm and the Berlekamp-Massey Algorithm are two equivalent methods used for solving the so-called Key Equation in BCH and Reed-Solomon decoding. This paper presents a new algorithm which makes this correspondence more explicit and is an improvement over each of the two algorithms.
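For context, the Berlekamp-Massey algorithm discussed above finds the shortest LFSR generating a given sequence (run on syndromes over GF(2^m), this yields the error-locator polynomial of the Key Equation). The sketch below is the plain GF(2) special case (my own illustration, not the paper's new algorithm), which already exhibits the discrepancy-driven iteration that the Euclidean algorithm mirrors.

    def berlekamp_massey_gf2(s):
        """Shortest LFSR (length L, connection polynomial C) generating the binary sequence s."""
        n = len(s)
        C, B = [1] + [0] * n, [1] + [0] * n   # current and previous connection polynomials
        L, m = 0, 1                           # LFSR length, steps since last length change
        for i in range(n):
            d = s[i]                          # discrepancy between LFSR prediction and s[i]
            for j in range(1, L + 1):
                d ^= C[j] & s[i - j]
            if d == 0:
                m += 1
            else:
                T = C[:]
                for j in range(n + 1 - m):
                    C[j + m] ^= B[j]          # C(x) <- C(x) + x^m B(x)
                if 2 * L <= i:
                    L, B, m = i + 1 - L, T, 1
                else:
                    m += 1
        return L, C[: L + 1]

    # Sequence generated by the recursion s[i] = s[i-2] XOR s[i-3] (connection poly 1 + x^2 + x^3).
    seq = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
    print(berlekamp_massey_gf2(seq))          # expected: (3, [1, 0, 1, 1])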
Using Variable-Length Codes to Correct Insertion, Deletion and Substitution Errors
Victor Buttigieg (University of Malta, Malta)
A maximum likelihood metric is derived for the decoding of variable-length codes over a Binary Substitution, Insertion and Deletion channel. Using this metric a near-maximum likelihood decoder is derived. It is shown that variable-length codes can be used effectively to correct for insertion, deletion and substitution errors.
Streaming Codes for a Double-Link Burst Erasure Channel
Devin Lui (University of Toronto, Canada); Ahmed Badr (University of Toronto, Canada); Ashish Khisti (University of Toronto, Canada)
A sender and receiver are connected by two links, which both pass through a burst erasure channel. The channel induces an erasure burst of length B onto both links, but the bursts are separated by d time units. Source packets arrive at the sender, and are encoded with a streaming code such that the receiver can decode with a delay T. If source packet s[t] arrives at the sender at time t, then the receiver must be able to decode s[t] by time t+T from its received packets. Given the parameters B, T and d, we find the upper bound for the rate of the streaming code, and also discover codes that can operate at capacity for certain parameter values. The code constructions also internally make use of SCo codes. Finally, we find that by exploiting the dependence of the burst erasure locations on either link, we can achieve a higher rate than if we simply used single-link SCo codes on each link.
A New Secret Key Agreement Scheme in a Four-Terminal Network
Parisa Babaheidarian (Sharif University of Technology, Iran); Somayeh Salimi (Sharif University of Technology, Iran); Mohammad Reza Aref (Sharif University of Tech., Iran)
A new scenario for generating a secret key and two private keys among three Terminals in the presence of an external eavesdropper is considered. Terminals 1, 2 and 3 intend to share a common secret key concealed from the external eavesdropper (Terminal 4) and simultaneously, each of Terminals 1 and 2 intends to share a private key with Terminal 3 while keeping it concealed from each other and from Terminal 4. All four Terminals observe i.i.d. outputs of correlated sources and there is a public channel from Terminal 3 to Terminals 1 and 2. An inner bound of the "secret key-private keys capacity region" is derived and the single letter capacity regions are obtained for some special cases.

5:45 PM - 9:00 PM

Workshop Banquet

Room: UNC 200, University Centre Ball Room

Friday, May 20

8:00 AM - 9:00 AM

Plenary Speaker: David Tse, University of California, Berkeley

Feedback for Interference Mitigation
As the density of wireless devices increases and network architectures become more heterogeneous, interference becomes a central barrier to improving the capacity of wireless networks. Recently proposed approaches to overcome this interference barrier include multiuser MIMO, coordinated multipoint transmission and interference alignment. In all of these approaches, a bottleneck limiting the success of existing schemes is the availability of timely channel information at the transmitters. We show, contrary to popular belief, that timely channel information is not fundamentally necessary for these approaches to work. We propose alternative schemes which can extract a significant part of the multiplexing gains even when the fed back channel information is so delayed that it is completely independent of the current channel state. We discuss these results in the context of the role of feedback in information theory and communications.
Room: ASC 140
Chair: Julian Cheng (University of British Columbia, Canada)

9:00 AM - 10:40 AM

MIMO and Multi-Antenna Systems

Room: ASC 140
Chair: Sebastien Roy (Laval University, Canada)
Contribution of Multiplexing and Diversity to Ergodic Capacity of Spatial Multiplexing MIMO Channels at Finite SNR
Maher Arar (University of Ottawa, Canada); Abbas Yongacoglu (University of Ottawa, Canada)
We show that while the multiplexing effect of the spatial multiplexing MIMO channel has the greatest influence on ergodic capacity at infinite SNR, the contribution of diversity to enhancing ergodic capacity at finite SNR is comparable to or greater than that of multiplexing as the number of antennas increases to a moderate level. For instance, we show that the contribution of diversity to ergodic capacity surpasses that of multiplexing for an equal number of transmit and receive antennas of eight and SNR < 13 dB, a possible practical scenario for the upcoming 4G standards such as LTE-Advanced. This result leads us to conclude that the use of spatial multiplexing detection algorithms that extract the full diversity, such as the Maximum Likelihood (ML) algorithm, becomes more crucial when the number of antennas grows to a moderate level.
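For reference, the quantity being decomposed into multiplexing and diversity contributions is the usual spatial-multiplexing ergodic capacity C = E[log2 det(I + (SNR/Nt) H H^H)]; the Monte Carlo sketch below evaluates it for an 8x8 i.i.d. Rayleigh channel at 13 dB SNR (my own illustration, echoing the example in the abstract; it does not reproduce the paper's decomposition).

    import numpy as np

    def ergodic_capacity(nt, nr, snr_db, trials=2000, seed=0):
        """Monte Carlo estimate of E[log2 det(I + (SNR/nt) H H^H)] for i.i.d. Rayleigh H."""
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        total = 0.0
        for _ in range(trials):
            H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
            _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * H @ H.conj().T)
            total += logdet / np.log(2)
        return total / trials

    print("8x8 ergodic capacity at 13 dB:", round(ergodic_capacity(8, 8, 13), 2), "bit/s/Hz")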
Sparse Space Codes for Multi-Antenna Systems
Sagar Dhakal (Research In Motion, USA); Alireza Bayesteh (Research In Motion, Canada)
Sparse space codes (SSC) are proposed as a novel transmission scheme for an under-determined MIMO channel. Each SSC codeword is a sparse vector whose size equals the number of transmit antennas. Information is imparted through (i) uncertainty in the positions of the non-zero elements and (ii) uncertainty in the symbol-space of the non-zero elements. Basis pursuit (BPD) and LASSO detectors that exploit knowledge of the code sparsity are used to detect SSC; however, their performance is degraded compared to the Maximum Likelihood Detector (MLD). A runner-up basis pursuit algorithm is proposed that provides MLD-like performance with a small increase in complexity over BPD. Analytical and simulation results show that SSC outperforms orthogonal space-time block codes in terms of word error rate over a range of SNRs.
On Joint Detection and Channel Estimation over Rank-Deficient MIMO Links with Sphere Decoding
José L Lagunas Morales (Université Laval, Canada); Sebastien Roy (Laval University, Canada)
In this paper, we present a method which jointly performs robust channel estimation and signal detection in MIMO systems. This technique achieves an excellent estimation of the channel matrix by using all detected symbols within a data frame. This is equivalent to having the training preamble occupy the entire frame. The Mean Square Error (MSE) of the channel estimation for this method is very low over a wide range of preamble lengths when compared to linear estimation techniques. The approach is applicable in both full rank (N=M) and rank-deficient (N<M) channels with practically the same performance as that of a clairvoyant detector with perfect CSI knowledge. This characteristic is crucial in receiving more co-channel signals than there are receive antennas, i.e. virtual MIMO processing.
Low Complexity Piecewise Linear LLR Calculation for MIMO-BICM Systems
Chao Zheng (University of Alberta, Canada); Raman Yazdani (University of Alberta, Canada); Masoud Ardakani (University of Alberta, Canada)
Log-likelihood ratios (LLRs) are efficient metrics used in the decoding of modern channel codes such as low-density parity-check and turbo codes. LLR calculation, however, can be a complex task since LLRs are usually complicated non-linear functions of the channel output. In multi-input multi-output (MIMO) channels, the complexity of the calculation increases exponentially with the number of antennas. In this paper, an LLR approximation method is proposed for MIMO-BICM systems. In particular, piecewise linear approximation functions are introduced and an accuracy measure is provided to optimize the parameters. Simulations verify that the performance of the optimized approximate LLRs is better than the existing approximation method and quite close to that of true LLRs.
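As a baseline for the optimized piecewise-linear functions studied in this paper, the sketch below compares the exact LLR with the standard max-log approximation (itself piecewise linear in the channel output) for the most significant bit of a Gray-mapped 4-PAM symbol on an AWGN channel; this is my own single-antenna illustration, not the authors' MIMO-BICM method.

    import numpy as np

    symbols = np.array([-3.0, -1.0, 1.0, 3.0])   # 4-PAM with Gray labels 00, 01, 11, 10
    msb = np.array([0, 0, 1, 1])                 # most significant bit of each symbol
    sigma2 = 1.0                                 # noise variance (assumed)

    def exact_llr(y):
        """LLR = log P(b=0|y)/P(b=1|y) for equiprobable symbols on an AWGN channel."""
        metrics = np.exp(-((y[:, None] - symbols[None, :]) ** 2) / (2 * sigma2))
        return np.log(metrics[:, msb == 0].sum(1)) - np.log(metrics[:, msb == 1].sum(1))

    def maxlog_llr(y):
        """Max-log approximation: keep only the nearest symbol for each bit value."""
        d2 = (y[:, None] - symbols[None, :]) ** 2
        return (d2[:, msb == 1].min(1) - d2[:, msb == 0].min(1)) / (2 * sigma2)

    y = np.linspace(-5, 5, 11)
    for yi, le, lm in zip(y, exact_llr(y), maxlog_llr(y)):
        print(f"y={yi:+.1f}  exact={le:+.3f}  max-log={lm:+.3f}")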
RF Beamforming with Closely Spaced Antennas
William Chou (University of Toronto, Canada); Raviraj Adve (University of Toronto, Canada)
This paper investigates performance enhancements using radio-frequency (RF) beamforming in a multiple-input multiple-output (MIMO) communication system. Of specific interest is the case of an array of extremely closely spaced antennas under complexity constraints, implemented here as the array being restricted to a single RF chain. In this paper, we re-derive the capacity equations for both RF and digital beamforming techniques. We also compare the capacity improvement between different receiver combining techniques. Moreover, the simulation results indicate that in a closely spaced system with only one RF chain available, beamforming-based selection provides higher capacity than the traditional approach of element selection. Further, beamforming-based MIMO processing is more robust to different scattering environments.

10:40 AM - 11:00 AM

Morning Coffee Break

11:00 AM - 12:40 PM

Interference Channels and Communication Systems

Room: ASC 140
Chair: Amin Mobasher (Research In Motion, Canada)
Strong Interference Conditions for Multiple Access-Cognitive Interference Channel
Mahtab Mirmohseni (Sharif University of Technology, Iran); Bahareh Akhbari (Sharif University of Technology, Iran); Mohammad Reza Aref (Sharif University of Tech., Iran)
In this paper, we analyze the capacity region of a communication network in which a Multiple Access Channel (MAC) and a point-to-point channel share the same medium and interfere with each other, and the transmitter of the point-to-point channel has cognitive capabilities. We introduce the Multiple Access-Cognitive Interference Channel (MA-CIFC), which consists of a two-user MAC as the primary network and a cognitive transmitter-receiver pair in which the cognitive transmitter knows the messages of all transmitters non-causally. We obtain an inner bound on the capacity region of the MA-CIFC and derive two sets of strong interference conditions under which we establish the capacity regions. We also consider the Gaussian case and find capacity results for the Gaussian MA-CIFC. Some numerical examples are also provided.
Downlink Multi-User Interference Alignment in Two-Cell Scenario
Alireza Bayesteh (Research In Motion, Canada); Amin Mobasher (Research In Motion, Canada); Yongkang Jia (Research in Motion, Canada)
In this paper, the problem of Downlink Multi-User MIMO (DL MU-MIMO) transmission from two interfering transmitters, each equipped with M antennas, to multiple User Equipments (UEs), each equipped with K antennas, is considered. It is assumed that each UE receives a single rank-one data stream from only one of the transmitters. A novel transmission/reception scheme is proposed based on the idea of Interference Alignment (IA), which aligns the interference arriving from each transmitter at the UEs in the other cell along a single predetermined vector and hence leaves more degrees of freedom for signal transmission from each transmitter. Furthermore, unlike other IA-based schemes in the literature, only local Channel State Information (CSI) is required at the nodes. It is shown that for the case of K ≥ M, a total of 2M − 2 degrees of freedom is achievable. The proposed scheme is also extended to the case of K < M based on the ideas of Euclidean distance minimization and time/frequency extension. Finally, simulation results are provided to compare the performance of the proposed scheme with existing results in the literature.
Downlink Multi-User Interference Alignment in Compound MIMO-X Channels
Amin Mobasher (Research In Motion, Canada); Alireza Bayesteh (Research In Motion, Canada); Yongkang Jia (Research in Motion, Canada)
In this paper, we elaborate on a new interference alignment technique for a downlink multi-user system in a MIMO-X channel with multiple antennas. We consider two transmitters, each equipped with M antennas, and L users, each equipped with K antennas. Here, we consider a MIMO-X scenario in which some users receive transmissions from both transmitters. The proposed approach focuses on rank-1 transmission to each user, while each transmitter only needs to know the CSI of its serving users and each user only needs to know its corresponding channels to the transmitters.
Channel Estimation with Amplitude Constraint: Superimposed Training or Conventional Training?
Gongpu Wang (University of Alberta, Canada); Feifei Gao (Jacobs University, Bremen, Germany); Chintha Tellambura (University of Alberta, Canada)
This paper utilizes a general superimposed-training-based transmission scheme that includes superimposed training and pilot symbol assisted modulation (PSAM) as special cases. The channel estimator of the scheme is the linear minimum mean square error (LMMSE) estimator. By taking into account the errors of this estimator, we derive a closed-form lower bound on the data throughput under the constraint of limited amplitude for each symbol. This lower bound is maximized by varying the power and time allocations between pilot symbols and information symbols. Our study shows that, under the per-symbol amplitude constraint, conventional PSAM performs better in the high signal-to-noise ratio (SNR) region, while at low SNR the superimposed scheme performs better.
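For readers who want to see the estimator named in the abstract in its simplest form, here is a generic pilot-based LMMSE channel estimation sketch (my own illustration with arbitrary dimensions and a unit-power tap prior; the paper's superimposed-training structure, power/time allocation, and amplitude constraint are not modeled here).

    import numpy as np

    rng = np.random.default_rng(3)
    L, Np, sigma2 = 4, 8, 0.1                     # channel taps, pilot symbols, noise variance

    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)  # true channel
    R_h = np.eye(L)                               # assumed prior covariance of the taps

    # Known pilot convolution matrix X (Np x L) and received pilots y = X h + n.
    p = rng.choice([1.0, -1.0], Np + L - 1)       # BPSK training sequence
    X = np.array([p[i : i + L][::-1] for i in range(Np)], dtype=complex)
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Np) + 1j * rng.standard_normal(Np))
    y = X @ h + n

    # LMMSE estimate: h_hat = R_h X^H (X R_h X^H + sigma2 I)^{-1} y.
    h_hat = R_h @ X.conj().T @ np.linalg.solve(X @ R_h @ X.conj().T + sigma2 * np.eye(Np), y)
    print("MSE of the LMMSE estimate:", round(float(np.mean(np.abs(h_hat - h) ** 2)), 4))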
Spectrum Trading for Risky Environments in IEEE802.22 Cognitive Networks
Mohsen Nader Tehrani (University of Waterloo, Canada); Murat Uysal (Ozyegin University, Turkey)
In this paper, we consider the shared-use model in the context of IEEE 802.22 cognitive networks and investigate spectrum trading via an auction approach. We consider two different scenarios in which the dominant risk is associated either with imperfect spectrum sensing or with uncertainties in the environment. Taking these risks into account, we propose a bidding strategy for a sealed-bid first-price auction. Our numerical results demonstrate significant revenue increases in comparison to conventional auction methods in risky environments.

12:40 PM - 2:00 PM

Lunch Break

2:00 PM - 3:40 PM

Resource Allocation

Room: ASC 140
Chair: Derrick Wing Kwan Ng (University of British Columbia, Canada)
Resource Allocation to Achieve Cross-Layer Metrics in Cooperative Networks
Ehsan Karamad (University of Toronto, Canada); Raviraj Adve (University of Toronto, Canada)
We consider a network utility maximization (NUM) framework to allocate resources in a cooperative network comprising multiple sources, dedicated relays and a single destination. The allocation is designed to ensure the average queue length at each source is below a chosen demand. The optimization is over power allocation at all nodes, relay selection and relaying strategy. We formulate the NUM problem and propose a solution to achieve the optimal allocation scheme. The two main contributions here are the formulation including queue length and an efficient solution that has only linear complexity in the number of source nodes. Furthermore, unlike previous works, it avoids a brute-force search over rates.
Resource Allocation for Secure OFDMA Decode-and-Forward Relay Networks
Derrick Wing Kwan Ng (University of British Columbia, Canada); Robert Schober (University of British Columbia, Canada)
In this paper, we formulate an optimization problem for resource allocation and scheduling in orthogonal frequency division multiple access (OFDMA) half-duplex decode-and-forward (DF) relay-assisted networks. Our problem formulation takes into account artificial noise generation to combat a multiple-antenna eavesdropper. The secrecy data rate, power, and subcarrier allocation policies are optimized to maximize the average secrecy outage capacity (bit/s/Hz securely delivered to the users via relays). The optimization problem is solved by dual decomposition, which results in an efficient iterative algorithm. Simulation results illustrate that the proposed iterative algorithm converges in a small number of iterations and guarantees a non-zero secrecy data rate for a given target secrecy outage probability.
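The dual decomposition mentioned above works by relaxing the coupling power constraint and solving independent per-subcarrier subproblems coordinated by a dual variable; the sketch below shows the simplest instance of this idea, ordinary water-filling obtained by bisecting on the dual variable (water level) of a sum-power constraint. It is my own generic illustration, not the paper's secrecy-aware algorithm, and the channel gains are arbitrary.

    import numpy as np

    def waterfill(gains, p_total, tol=1e-9):
        """Maximize sum(log2(1 + p_i g_i)) s.t. sum(p_i) <= p_total by bisecting on the water level."""
        lo, hi = 0.0, 1.0 / gains.min() + p_total            # water level mu = 1/lambda
        while hi - lo > tol:
            mu = (lo + hi) / 2
            if np.maximum(0.0, mu - 1.0 / gains).sum() > p_total:
                hi = mu
            else:
                lo = mu
        p = np.maximum(0.0, (lo + hi) / 2 - 1.0 / gains)     # per-subcarrier KKT solution
        return p, np.log2(1 + p * gains).sum()

    gains = np.array([2.0, 1.0, 0.5, 0.1])                   # per-subcarrier channel gains (assumed)
    p, rate = waterfill(gains, p_total=4.0)
    print("power allocation:", np.round(p, 3), " sum rate:", round(float(rate), 3), "bit/s/Hz")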
Green Resource Allocation With QoS Provisioning for Cooperative Cellular Network
Umesh Phuyal (University of British Columbia, Canada); Satish Chandra Jha (University of British Columbia, Canada); Vijay Bhargava (University of British Columbia, Canada)
Relay-based cooperative transmission in cellular networks has recently been an area of tremendous research. Transmission via relays introduces power consumption at both the source and relay stations, which may lead to a less power-efficient system. Because of the increasing energy cost of cellular systems and concern over environmental issues, energy-efficient design of resource allocation schemes in cooperative cellular networks is of prime importance. In this paper, we propose a novel resource allocation scheme in order to maximize the energy-aware system performance. The proposed low-complexity scheme allocates power to the base station and relay using a strategy that minimizes the required transmit power per unit of achieved throughput while guaranteeing a predefined quality of service (QoS), specified in terms of the minimum end-to-end data rate required by each user. Simulation results show that the proposed scheme outperforms existing power allocation schemes by decreasing the power required to guarantee the QoS without increasing the system outage penalty, which is essential for green communication systems.
Cross-Layer Resource Allocation Approach for Multi-hop Distributed Cognitive Radio Network
Satish Chandra Jha (University of British Columbia, Canada); Umesh Phuyal (University of British Columbia, Canada); Vijay Bhargava (University of British Columbia, Canada)
In a multi-hop distributed cognitive radio network, link-layer resource allocation must consider how many hops a packet has already traveled in order to optimize overall resource utilization: the loss of a packet after several hops wastes all the resources allocated to it on previous hops. Existing resource allocation schemes may not provide optimal resource utilization in such networks because this issue has been largely ignored. Therefore, in this paper, we propose a scheme that allocates transmit power to different packets, favoring those which have traveled more hops before reaching a particular node. We present a cross-layer approach in which the link layer obtains the hop-count information from the network layer. Distributed implementation is possible with the proposed scheme because each node can access this information. We formulate the power allocation problem as a convex optimization problem and obtain its analytical solution using Lagrangian duality. Simulation results show that the proposed scheme is capable of minimizing the wastage of network resources used by packets on their previous hops without any degradation in throughput or outage performance.
Energy-Aware User Selection and Power Allocation for Cooperative Communication System with Guaranteed Quality-of-Service
Rajiv Devarajan (University of British Columbia, Canada); Satish Chandra Jha (University of British Columbia, Canada); Umesh Phuyal (University of British Columbia, Canada); Vijay Bhargava (University of British Columbia, Canada)
Energy consumption in wireless communication systems is rapidly increasing due to the growth in wireless multimedia access. Combating the adverse effects of excessive energy consumption demands energy-aware system design, known as green communication, which has recently become a major focus of many researchers. In this paper, we propose user selection and power allocation schemes for a multi-user multi-relay cooperative cellular system in order to minimize the cost of transmission. In the proposed schemes, the cost function is first formulated with the objective of optimizing the sum power consumption. It is then extended to a more general multi-objective optimization scheme which jointly optimizes the sum power and throughput. The former scheme makes the system energy efficient, while the latter keeps a balance between energy efficiency and throughput. In both schemes, quality-of-service is guaranteed in terms of the end-to-end signal-to-noise ratio. Numerical results are presented to confirm the system performance enhancement achieved with the proposed schemes.