02 Apr 2012
Realising Telecommunications User Expectations
Geoff Varrall of RTT looks at whether cellular telecommunications technology can keep pace with increasing user expectations
With nearly six billion mobile handsets now in use, and with some markets having smart phone penetration in excess of 50%, it would seem that the electronics industry has been profoundly successful at meeting customer needs, creating demand for new applications that people are willing to pay for.
The smart phone is a miracle of materials and manufacturing innovation: the radio front ends contain surface acoustic wave and FBAR filters which combine high Q with low insertion loss, and silicon-on-sapphire switches handle ten or more switch paths with high isolation and linearity.
RF power amplifiers handle high-amplitude signals with high efficiency, while low-noise amplifiers on the receive path combine high dynamic range with high gain.
Parallel improvements in display, memory and baseband processing technologies mean that every new smart phone brought to market works better than the one it replaces, costs less and realises additional user value.
The two Nokia smart phones illustrated demonstrate ten years of technology progress.
Nokia 9210 Introduced in 2002
Reproduced with permission of Nokia Corporation
The Nokia 9210, introduced in 2002, had a 4096 colour (12 bit) 110 by 35 mm display with 150 dpi resolution and a display refresh time of 50 milliseconds which limited the frame rate to 12 frames per second. The device had 40 MB of memory, a 32 bit ARM9 processor running at 80 MHz and a 1300 mAh lithium ion battery giving between four and ten hours of talk time and between 80 and 230 hours of standby.
Nokia N8 Introduced in 2011/2012
Reproduced with permission of Nokia Corporation
The contemporary N8 phone has a 16.7 million colour capacitive touch screen display with an orientation sensor (accelerometer), a compass (magnetometer) and the ability to support high definition video recording and playback. The device has 16 GB of internal storage, 256 MB of RAM and 512 MB of ROM, plus up to 32 GB of plug-in memory using a microSD card. There is a 12 megapixel camera. The ARM11 processor clocks at 680 MHz. The device runs off a 1200 mAh battery giving twelve hours of talk time on GSM, six hours on WCDMA and 400 hours of standby.
The 9210 was a dual band EGSM 900/1800 phone supporting 'high speed' circuit switched data at 48 kbps. Ten years on, the N8 supports quad band GSM/EDGE at 850, 900, 1800 and 1900 MHz plus Band 1 WCDMA (1900/2100 MHz). The peak downlink data rate is 10.2 Mbps, albeit under ideal conditions, with a 2 Mbps uplink.
Over ten years that represents close to an order of magnitude increase in processor clock speed, an 800-fold increase in solid state memory capacity and roughly a 250-fold increase in downlink speed.
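As a rough cross-check, those ratios can be derived directly from the specification figures quoted above. A minimal sketch in Python (the numbers are the ones given in this article, not official datasheet values):

    # Rough check of the ten-year improvement ratios quoted above,
    # using the specification figures given earlier in this article.
    clock_2002_mhz = 80            # Nokia 9210 ARM9 clock
    clock_2012_mhz = 680           # Nokia N8 ARM11 clock
    memory_2002_mb = 40            # Nokia 9210 internal memory
    memory_2012_mb = 32 * 1024     # Nokia N8 with 32 GB microSD fitted
    downlink_2002_kbps = 48        # 9210 circuit switched data
    downlink_2012_kbps = 10_200    # N8 peak HSPA downlink, ideal conditions

    print(f"Clock speed: {clock_2012_mhz / clock_2002_mhz:.1f}x")          # ~8.5x
    print(f"Memory:      {memory_2012_mb / memory_2002_mb:.0f}x")          # ~819x
    print(f"Downlink:    {downlink_2012_kbps / downlink_2002_kbps:.0f}x")  # ~212x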
Similar step-function increases are happening on the uplink: an HSPA+ phone has potentially 5 Mbps of uplink data rate available, and an LTE handset increases that to 50 Mbps.
And this is where we come across a disconnect. An old-fashioned mobile phone used for voice and texting generates an average of 30 megabytes of traffic per month; smart phones generate between 300 and 500 megabytes, and sometimes much more.
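The scale of that gap is straightforward arithmetic; a quick sketch using the monthly traffic figures quoted above:

    # Monthly traffic multiplier, using the averages quoted above.
    voice_and_text_mb = 30            # basic phone: voice and texting
    smart_phone_mb = (300, 500)       # typical smart phone monthly range

    low, high = (v / voice_and_text_mb for v in smart_phone_mb)
    print(f"A smart phone generates roughly {low:.0f}x to {high:.0f}x the traffic")
    # ~10x to ~17x the traffic of a voice-and-text handset, before heavy users.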
Operators have therefore decided that they need additional spectrum and have just spent six weeks in Geneva at the ITU World Radiocommunication Conference negotiating the release of additional bands that will add approximately 500 MHz to the existing band plan. The quad band N8 can already access 510 MHz of spectrum, so this means close to a doubling of the bandwidth that needs to be supported.
The reason this is a problem is illustrated by the image of the circuit board used in the iPhone, which, like the Nokia N8, is a quad band device.
Cellular Handset Components
Reproduced with permission of UBM techInsights
Each separate band has its own filter path, power amplifier and low noise amplifier. It is possible to have broadband amplifiers that cover multiple bands, but each band has to be individually noise matched on the receive path and power matched on the transmit path. By the time this has been realised, you may as well have separate power amplifiers for each path.
Either way, each additional band incurs extra component cost and a performance loss of approximately 1 dB per band as a result of increased insertion loss.
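To see what that figure means in practice, here is a short illustrative sketch which simply accumulates 1 dB of loss per supported band (an assumption for illustration; real designs share some paths) and converts the total into surviving signal power:

    # Illustrative only: assume ~1 dB of extra front end insertion loss per
    # supported band, as quoted above, and see how much signal power survives.
    def surviving_power_fraction(loss_db: float) -> float:
        return 10 ** (-loss_db / 10)

    for bands in (1, 2, 4, 8):
        loss_db = bands * 1.0
        print(f"{bands} band(s): {loss_db:.0f} dB -> "
              f"{surviving_power_fraction(loss_db):.0%} of power delivered")
    # 4 bands: ~40% delivered (60% lost); 8 bands: only ~16% delivered.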
A similar performance trade-off happened in the TV world in the 1960s, when the growth in the number of broadcast stations and the increase in the bandwidth needed to support colour transmission meant that receivers had to be designed to cover the whole of the UHF band (470 to 790 MHz).
Getting TV tuners to work efficiently over these bandwidths was challenging, and most analogue TV receivers suffered from poor receive sensitivity and limited dynamic range, meaning that the front ends of the devices were easily compressed by other proximate signals.
Going digital has not helped, mainly because the front end of the TV is of course still analogue!
And smart phones are of course transceivers which need to receive and transmit. The transmit power is only of the order of 200 to 250 milliwatts, but this can be sufficient to desensitise the front end of the smart phone, particularly if the front end is a wide band device, and to desensitise other proximate devices. Consider that a cellular phone in 1985 had to be capable of transmitting between 880 and 915 MHz and receiving between 925 and 960 MHz, a frequency spread of 80 MHz.
A phone capable of accessing any of the twenty-five bands already identified by the ITU, plus the additional 500 MHz of bandwidth expected to come out of the World Radiocommunication Conference, will need to be able to transmit and receive on radio bands that could be anywhere between 700 MHz and 3700 MHz, a frequency spread of 3 GHz. In addition, the most recent standardisation efforts require future designs to be capable of supporting simultaneous transmission and reception across more than one band.
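One way to quantify the difficulty is fractional bandwidth, the tuning span divided by its centre frequency, a rough proxy for how hard the antenna, filters and amplifiers have to work. A minimal sketch using the spans quoted above:

    # Compare the tuning ranges quoted above as fractional bandwidths
    # (span / centre frequency), a rough measure of front end difficulty.
    def fractional_bandwidth(f_low_mhz: float, f_high_mhz: float) -> float:
        centre = (f_low_mhz + f_high_mhz) / 2
        return (f_high_mhz - f_low_mhz) / centre

    print(f"1985 phone (880-960 MHz):       {fractional_bandwidth(880, 960):.0%}")
    print(f"Next generation (700-3700 MHz): {fractional_bandwidth(700, 3700):.0%}")
    # Roughly 9% versus 136%: well over an order of magnitude more relative
    # bandwidth for the antenna, filters and amplifiers to cover.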
This increases the likelihood of unwanted intermodulation between multiple frequencies, either within the phone, between phones within the network, or between networks, which of course includes the possibility of interference into TV receivers at 700 and 800 MHz.
The solution is to change the architecture used in the front end of the phone, but this is easier said than done. Some phones still use the superheterodyne radio architecture invented by Edwin Armstrong in 1918; others use direct conversion, first used in Plessey paging devices in the 1980s.
The superhet still works, but the need for intermediate frequencies in the transmit and receive chains compounds the problem of multiple components.
Direct conversion receivers have their own drawbacks and in practice do not solve the multi band problem.
The snag is that no one has yet come up with anything different.
And the reason for this is that materials innovation is needed in order to enable a different architectural approach.
In 1918 Armstrong was able to revisit the principle of the superhet (the mixing of two frequencies to produce a difference product that would be at a lower frequency and hence easier to process) because valves had enabled circuits with sufficient frequency stability to allow stable mixing to be realised.
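The principle is easy to demonstrate numerically: multiplying an incoming carrier by a local oscillator tone yields sum and difference products, and the difference term is the lower intermediate frequency the superhet exploits. A small sketch (with arbitrary, scaled-down frequencies rather than real cellular values):

    # Mixing two sinusoids produces sum and difference frequencies.
    # Frequencies here are arbitrary, scaled-down examples.
    import numpy as np

    fs = 10_000                          # sample rate, Hz
    t = np.arange(0, 1, 1 / fs)          # one second of samples
    f_rf, f_lo = 900.0, 800.0            # 'incoming' signal and local oscillator
    mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    print(freqs[spectrum > spectrum.max() / 2])
    # -> [ 100. 1700.]: the 100 Hz difference product (the IF) and the
    #    1700 Hz sum product, which a real receiver filters away.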
Forty years later, transistors and integrated circuits made the superhet, and direct conversion, an even older technique, more efficient, but they did not enable new architectural choices. The contemporary candidate for change is graphene, essentially graphite constructed in sheets one atom thick.
Mid-way through last year, and to mark the company's 100th anniversary, IBM Research scientists announced that they had built an integrated circuit fabricated from wafer-scale graphene and applied as a broadband frequency mixer operating at frequencies up to 10 GHz. Potentially this solves a number of the problems presently exercising multi band cellular phone design teams, including noise and insertion loss in broadband devices across presently allocated bands and the need to maintain efficiency at higher frequencies and across broader bandwidths in the very near future.
In September 2011 the University of California and Samsung announced work on the use of graphene as a storage layer within a silicon substrate.
With silicon based flash, as memory cells get smaller the transistor gates have to be thicker relative to the rest of the circuit in order to store enough charge, and the thick-gated cells start to interfere with their neighbours. Gates made from graphene are ultra-thin and therefore do not interfere with one another; they also hold more charge than silicon with lower leakage. Graphene based devices should scale down to about 10 nm, whereas conventional flash scales down to about 22 nm, below which it becomes unstable.
In January 2012 the University of California, the University of Texas at Austin and at Dallas, and Xiamen University in China announced work on structurally modified graphene with high thermal conductivity which could be used in mobile phones and laptops to improve heat dissipation.
In terms of solving the multi band problem, the IBM announcement is the most significant, though whether the implementation time scales are close enough to produce a solution fast enough is open to question. The conundrum is that smart phones are successfully meeting a market need but are becoming a victim of that success.
Each additional band increases cost and compromises performance.
Operators can compensate for this by building denser networks, but this increases network cost, both capital and operational.
Architectural innovation in user devices is clearly the best option but is dependent on materials and manufacturing innovation; mass producing materials at the atomic level has never been done before. In another part of Switzerland, not that far from Geneva, the scientists at CERN are busy studying the science of the small in order to gain insight into the science of the large: 13.7 billion years of history contained within a fleeting moment of time.
The science of the very small may well enable a new generation of super-fast, super-efficient smart phones. At least it's only years, not light years, away.
This topic is covered in more detail in the new book from RTT, Making Telecoms Work - from technical innovation to commercial success published by John Wiley and available from Amazon either directly or via the RTT book shop. It is the featured book this month on Radio-Electronics.com
About the author
Geoff Varrall is the author of four books including his latest, 'Making Telecoms Work - from technical innovation to commercial success', and director of RTT Programmes. He joined RTT in 1985 as an executive director and shareholder to develop RTT's international business as a provider of technology and business services to the wireless industry. Varrall co-developed RTT's original series of design and facilitation workshops, including 'RF Technology', 'Data Over Radio', 'Introduction to Mobile Radio' and 'Private Mobile Radio Systems', and developed 'The Oxford Programme', a five day strategic technology and market programme presented annually with the Shosteck Group. Over fifteen years, thousands of senior level delegates have attended these programmes. Varrall is also a Director of Cambridge Wireless.