21 Feb 2017
Realizing the Promise of 5G: utilizing the technologies
Harpinder Matharu and Paul Newson of Xilinx look at the technologies needed to ensure 5G reaches its potential
The 5th generation of mobile broadband wireless access networks is expected to meet the system and service requirements of new use cases and applications in 2020 and beyond.
Connecting industries and enabling new services is the most significant aspect of 5G in meeting the demands of the information society of 2020 and beyond. The 4th generation, or 4G LTE, is all about people and places, with communication and information sharing as its core theme.
5G extends the scope to people, places and machines by adding reliable and resilient control and monitoring to the 4G theme of communication and information sharing. This shift has a profound impact on system requirements and design principles. The 5G vision is all-encompassing, touching every aspect of our lives: how we produce goods, how we manage energy and the environment while producing, transporting, storing and consuming those goods, and how we live, work, commute, entertain and relax.
As a result, there is a need to push the envelope of 5G system and network performance in order to guarantee higher network capacity, higher user throughput, higher spectrum efficiency, wider bandwidths, lower latency, lower power consumption, higher reliability and greater connection density using virtualized and software-defined networks. The 5G architecture will include modular network functions that can be deployed and scaled on demand to accommodate the various use cases in a cost-efficient manner.
4G LTE is a highly successful and capable technology for sub-6GHz spectrum. 5G adds spectrum above 6GHz, opening up large chunks of under-utilized spectrum for the radio access network. It supports carriers wider than 20MHz, reduces control overhead and introduces flexibility in the RAN to address multiple use cases. Support for above-6GHz operation is one of the most promising attributes of 5G, and also one of the most challenging. The accuracy of the channel models for above-6GHz frequencies, released in June 2016 by 3GPP, is critical to getting base station and UE designs right. The reality is that considerably more work, time and field trials are needed to improve the accuracy of these models. In the meantime, system designs will need to incorporate flexibility and inherent programmability to adapt and improve the underlying algorithms based on lessons learnt in the field.
5G frame and TTI
Reducing end-to-end latency to below 1ms is another important 5G goal, addressing ultra-reliable low latency use cases for mission-critical applications as well as extended mobile broadband use cases, such as gaming, that promise higher revenue for service providers. 5G improves the frame structure to achieve this objective. Figure 1 shows one of the pre-standard 5G frame structure proposals.
A short transmit time interval (TTI) on the order of 100-200 microseconds, roughly 10x smaller than the 1ms TTI of 4G LTE, combined with fast Hybrid ARQ acknowledgements, is being considered to reduce system latency. Front-loaded demodulation reference and control signals enable frame processing while the frame is still being received, rather than waiting to buffer the complete sub-frame. The frame structure is also designed to simplify and speed up scheduling requests on a per-sub-frame basis. As a result, the compute required in a 5G baseband design jumps significantly compared to 4G LTE systems, since each sub-frame must be processed within one TTI.
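The timing arithmetic behind the shorter TTI can be sketched as follows. The sketch assumes an LTE-like numerology of 14 OFDM symbols per TTI and ignores cyclic-prefix overhead; the subcarrier spacings shown are illustrative, not figures taken from the proposal in Figure 1.

```python
# Sketch: how scaling OFDM subcarrier spacing (SCS) shortens the TTI.
# Assumption: 14 symbols per TTI, cyclic-prefix overhead ignored.

def tti_us(scs_khz, symbols_per_tti=14):
    """Approximate TTI duration in microseconds for a given SCS (kHz)."""
    symbol_us = 1e3 / scs_khz          # OFDM symbol period = 1 / SCS
    return symbols_per_tti * symbol_us

for scs in (15, 60, 120):              # kHz
    print(f"{scs} kHz SCS -> ~{tti_us(scs):.0f} us TTI")
```

At 15kHz spacing the TTI works out to roughly 1ms, the familiar LTE figure, while widening the spacing to 60-120kHz lands in the 100-230 microsecond range the text describes.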
5G is expected to support a flexible frame structure that adapts to different use cases and application requirements, such as packet length and end-to-end latency. Two sub-frame scaling methodologies are under consideration: a flexible number of symbols per sub-frame, and a variable sub-frame length. A hybrid scheme is also possible. Both methodologies support multiple transmission types (downlink, uplink and hybrid). Sub-frame duration and sample rate remain as defined for the baseline 5G numerology.
The flexible frame structure has implications for the physical (PHY) layer implementation. FFT lengths and the cyclic prefix may vary on a symbol-by-symbol basis. The number of symbols, OFDM subcarriers per physical resource block and QAM symbols may vary on a sub-frame basis, with a variable guard period position and length. This significantly increases the complexity of implementing the 5G PHY. The most expedient way to build 5G systems, at least in the early years, is to leverage programmable FPGAs and SoCs, which allow systems to scale up and change as the standards evolve and mature, and allow implementation schemes to be adapted based on performance measurements in the field.
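One consequence of a scaled numerology is that the FFT length can shrink in step with the wider subcarrier spacing so that the sample rate stays constant, as the text notes. The mapping below is a hypothetical one for illustration, not a set of mandated values:

```python
# Sketch: per-numerology PHY parameters under symbol-by-symbol scaling.
# Assumption: sample_rate = SCS * FFT_size. The SCS-to-FFT mapping below
# is an illustrative choice, not taken from any 3GPP table.

NUMEROLOGIES = {
    # scs_khz: fft_size
    15: 4096,
    30: 2048,
    60: 1024,
}

def sample_rate_msps(scs_khz):
    """Baseband sample rate in Msps implied by SCS and FFT size."""
    return scs_khz * 1e3 * NUMEROLOGIES[scs_khz] / 1e6

for scs, nfft in NUMEROLOGIES.items():
    print(f"SCS {scs:3d} kHz, FFT {nfft}: {sample_rate_msps(scs):.2f} Msps")
```

Because the SCS-FFT products are equal, every numerology yields the same sample rate; the PHY's burden is that the FFT engine and cyclic-prefix handling must be reconfigurable between symbols.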
Figure 1: One of the pre-standard Baseline 5G Frame structure Proposal
MIMO for 5G
MIMO techniques are well suited to centimetre-wave (3-30GHz) and millimetre-wave (30-300GHz) frequencies, an inexpensive and under-utilised spectrum resource available in large contiguous chunks. At higher frequencies the transmitted signal experiences higher propagation loss, but the narrow pencil beams possible at these frequencies produce large antenna gains that compensate for it. In addition, as the carrier frequency increases, the antenna elements get smaller, making it possible to pack more elements into a given area. For example, a state-of-the-art antenna containing 20 elements at 2.6GHz is roughly one meter tall.
At 15GHz, it is possible to design an antenna with 200 elements that is only 5cm wide and 20cm tall. With more antenna elements, it becomes possible to steer the transmitted signal precisely towards the intended receiver. Since each transmission is concentrated in a particular direction, and a system can form many such beams, coverage and capacity improve significantly.
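The scaling of antenna size with frequency follows directly from the wavelength. A quick sketch, assuming the common half-wavelength element spacing (real array geometries and polarization schemes differ), shows why the 2.6GHz and 15GHz examples above come out so differently:

```python
# Sketch: antenna element spacing shrinks with carrier frequency.
# Assumption: elements spaced at half a wavelength (lambda / 2).

C = 3e8  # speed of light, m/s

def element_spacing_cm(freq_ghz):
    """Half-wavelength element spacing in centimetres."""
    wavelength_m = C / (freq_ghz * 1e9)
    return 100 * wavelength_m / 2

for f in (2.6, 15.0):
    print(f"{f} GHz: ~{element_spacing_cm(f):.1f} cm element spacing")
```

At 2.6GHz the spacing is nearly 6cm, so 20 elements in a column stand about a meter tall; at 15GHz it drops to about 1cm, which is how hundreds of elements fit into a 5cm x 20cm panel.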
The 5G NR (New Radio) specification supports up to 16 MIMO layers. 5G systems intend to support rapid reconfiguration of user resource allocation on a per-TTI basis for higher spectrum utilization, which has a compounding effect on system complexity when multiple MIMO layers are supported. Figure 2 depicts an example of user resource allocation in a 5G MIMO system. Time division duplex (TDD) eases 5G massive MIMO implementation, since channel state information can be determined using channel reciprocity.
This approach does not, however, account for non-linearities in customer premises equipment or terminals. It is important to point out that in a 5G base station implementation, terminals are expected to keep track of multiple beams and periodically request resource allocation from the base station on the best beam for uplink data transmission. Channel state information must be recomputed each time the UE terminal switches beam. To practically realize such complex systems, it is important to incorporate sufficient flexibility and programmability to adapt the implementation and achieve the desired performance with different terminals.
Figure 2: MIMO in Baseline 5G System
5G systems are typically expected to have up to 64 antenna elements for below-6GHz deployments; higher numbers of antenna elements are feasible above 6GHz. Digital beamforming, performed in the baseband, is likely to be used at sub-6GHz frequencies, while a hybrid scheme combining digital and analog beamforming suits deployments above 6GHz. Massive MIMO configurations comprising 64 or more antenna elements significantly increase the complexity and cost of supporting the large number of active radio signal chains and the pre-coding computation in the L1 baseband for digital beamforming.
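The pre-coding cost grows with the product of antenna count and layer count, which is why 64-element digital beamforming is so demanding. A rough sketch, with illustrative parameter values (the subcarrier and symbol-rate figures are assumptions, not numbers from this article):

```python
# Sketch: linear-precoding compute scales with antennas x layers.
# Per OFDM symbol, each subcarrier needs an (antennas x layers)
# matrix-vector product. Assumed: 3300 active subcarriers,
# 14,000 symbols/s (14 symbols per 1 ms sub-frame) -- illustrative only.

def cmults_per_second(antennas, layers, subcarriers=3300, symbols_per_s=14_000):
    """Complex multiplies per second for linear precoding."""
    return antennas * layers * subcarriers * symbols_per_s

for n_ant in (8, 64):
    rate = cmults_per_second(n_ant, layers=8)
    print(f"{n_ant:2d} antennas, 8 layers: {rate / 1e9:.1f} G complex-mults/s")
```

Going from 8 to 64 antennas multiplies the precoding workload eightfold at a fixed layer count, before adding channel estimation and the per-antenna radio chains.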
Connectivity requirements between the baseband processing signal chains and remote radio heads increase sharply. To realize these systems economically, it is necessary to integrate layer 1 baseband signal processing, or a portion of it, with the radio. Such a functional split may in the future lead to network nodes where L1-L2 and radio functions are co-located. Figure 3 depicts the connectivity requirements for a 64-antenna-element massive MIMO system at various functional boundaries, underscoring the need for co-location of L1 with the radio.
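The scale of the fronthaul problem is easy to estimate from raw I/Q rates. The carrier bandwidth, sample rate and word width below are assumed values for illustration, not figures read from Figure 3:

```python
# Sketch: raw I/Q fronthaul rate for a 64-antenna massive MIMO radio,
# illustrating why L1 processing must move towards the radio.
# Assumptions: ~100 MHz carrier sampled at 122.88 Msps,
# 16-bit I and 16-bit Q per sample (32 bits total) -- illustrative.

def iq_rate_gbps(antennas, sample_rate_msps=122.88, bits_per_sample=2 * 16):
    """Aggregate raw I/Q rate in Gbps across all antenna streams."""
    return antennas * sample_rate_msps * 1e6 * bits_per_sample / 1e9

print(f"64 antennas: ~{iq_rate_gbps(64):.0f} Gbps of raw I/Q")
```

Roughly 250 Gbps of raw I/Q for a single sector is impractical to haul to a remote baseband, whereas moving beamforming and part of L1 into the radio reduces the interface to the much smaller number of spatial layers.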
Figure 3: Connectivity challenges in 5G massive MIMO systems
The scope of 5G is fairly broad, and the industry community has been hyperactive in submitting hundreds of proposals, resulting in prolonged deliberations. Simulations of proposed algorithms and network configurations are useful but not sufficient: proofs of concept, pilot field trials and test beds are critical in evaluating these proposals. This makes it difficult for standards bodies to review all the proposals, and there is mounting market pressure to release the 5G specification sooner.
Some operators are unhappy that standardization of the massive machine-type communication (mMTC) and ultra-reliable low latency communication (URLLC) use cases has been pushed out to a later phase, expected to complete in late CY2019. 3GPP has selected LDPC codes for the data channel and Polar codes for the control channel for the eMBB use case. For the mMTC and URLLC use cases, LDPC, Polar and Turbo codes are all under consideration, and the industry will have to wait longer for a conclusion. In many cases user terminals, as well as 5G base stations, are likely to support multiple 5G use cases, which makes baseband codec design challenging and expensive.
To compound matters, operators are not yet clear on how 5G use cases will be commercially deployed and which will come to the forefront in market adoption. Fixed wireless access for last-mile fiber replacement and smart cities are the two leading use cases. Vertical industry integration using URLLC, automated transportation and similar applications will take longer to emerge from the lab and restricted field trials into broader market adoption. For these reasons, 5G systems are expected to build in sufficient flexibility and programmability to fine-tune system functions and performance as these use cases are adopted and market realities evolve.
Xilinx All Programmable FPGAs and SoCs are playing a critical role in implementing 5G proofs of concept, test beds and early commercialization trials for the eMBB, URLLC and mMTC use cases. Merchant silicon does not exist, and ASICs are not viable this early in the 5G standardization phase. The key value proposition of platforms based on Xilinx All Programmable FPGAs and SoCs is that these systems can be dynamically repurposed to support any function and to enhance algorithmic implementations. Vendors are using these platforms to run field trials, measuring performance in actual deployment scenarios to optimize their system implementations. The first wave of commercial 5G systems is likely to rely on these optimized systems. Xilinx® UltraScale™ and UltraScale+™ All Programmable FPGAs and SoCs are specifically designed to address 5G market requirements.
About the author
Paul Newson is a system architect for wireless communication systems at Xilinx. He has more than 25 years of experience in the design and development of wireless systems focusing principally on Physical Layer signal processing algorithm definition and implementation. He has experience with telecommunications operators, wireless equipment manufacturers and silicon device suppliers and has published many technical papers and contributed to numerous patents.
Harpinder S Matharu is Director, Communications Market, at Xilinx, where he manages strategy and technical marketing for the communications market. He has more than 25 years of experience in different capacities in the high technology, embedded, wireline and wireless industries. At Xilinx, he has managed connectivity, radio, switching and packet processing products for wireless infrastructure and backhaul. He has published many papers, spoken at industry events and chaired standards technical and marketing bodies.
Xilinx is the leading provider of All Programmable FPGAs, SoCs, MPSoCs and 3D ICs. Xilinx uniquely enables applications that are both software defined, yet hardware optimized – powering industry advancements in Cloud Computing, SDN/NFV, Video/Vision, Industrial IoT and 5G Wireless. For more information, visit www.xilinx.com.