Next Generation Memory Technology Needed for Developing Systems

Manish Deo
Senior Product Marketing Manager
New memory technology for new systems
As electronic systems advance in functionality and sophistication, there is a growing need for revolutionary next generation memory technologies, especially for applications handling Big Data.

A combination of ever-growing communications bandwidth, the internet of things (IoT), 8K broadcasting and advances in Big Data analytics is causing data volumes around the world to explode. Total data centre traffic is expected to hit 10.4 ZB by 2019 – that’s 10.4 billion terabytes. This is data on an unprecedented scale.
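As a quick sanity check of that scale claim, the conversion from zettabytes to terabytes can be sketched using decimal (SI) prefixes, as storage and networking vendors conventionally do:

```python
# Convert the quoted data centre traffic figure from zettabytes to
# terabytes using decimal (SI) prefixes: 1 ZB = 10^21 bytes, 1 TB = 10^12.
ZB = 10**21  # bytes in a zettabyte
TB = 10**12  # bytes in a terabyte

traffic_zb = 10.4
traffic_tb = traffic_zb * ZB / TB
print(f"{traffic_zb} ZB = {traffic_tb:.0f} TB")  # 10.4 billion terabytes
```

Since a zettabyte is 10^9 terabytes, 10.4 ZB works out to 10.4 billion TB, matching the figure above.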

And more data – combined with our incessant need to get things done as quickly as possible – means we’re putting ever-greater demands on the technologies that process it.

One of the most important components in any data-handling system is the memory, or RAM. And as system designers push traditional memory technology to the furthest limits of what it’s capable of, it’s really starting to feel the strain. So much so, in fact, that we’re fast reaching a point where conventional memory technology will no longer be able to keep up with modern data needs, meaning we urgently need an alternative solution.

In this piece, we’ll look at the limitations of traditional memory technology and explore some of the revolutionary new memory technologies that aim to meet tomorrow’s data requirements.

The challenges of conventional memory technology in today’s data-driven world

The first issue with traditional memory is that we’re getting to a stage where it’s not physically possible to have enough I/O pins to support a memory bus capable of delivering the required bandwidth. In wireline networking, for example, a 200Gbps system needs more than 700 pins and five DDR4 DIMMs for basic data plane memory functionality. A 400Gbps system will require in excess of 1,100 pins and eight DDR4 DIMMs. In future, I/O pin counts won’t be able to increase sufficiently to meet bandwidth needs.
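To see why pin counts become untenable, a rough extrapolation can be made from the two data points above. This is purely illustrative: it assumes pin count grows linearly with bandwidth, which the quoted figures are consistent with but do not guarantee.

```python
# Illustrative only: fit a straight line through the two data points
# quoted above (200 Gbps -> ~700 pins, 400 Gbps -> ~1,100 pins) and
# extrapolate, assuming pin count keeps growing linearly with bandwidth.
def pins_needed(gbps: float) -> float:
    slope = (1100 - 700) / (400 - 200)   # roughly 2 extra pins per Gbps
    return 700 + slope * (gbps - 200)

for bw in (200, 400, 800):
    print(f"{bw} Gbps -> ~{pins_needed(bw):.0f} pins")
```

On this (hypothetical) linear trend, an 800Gbps system would need around 1,900 pins for the memory interface alone, well beyond what packaging and board design can realistically accommodate.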

The second challenge is around power. Using conventional memory technology, one way to achieve greater bandwidth is to combine more components on the printed circuit board (PCB). But more components mean longer PCB traces to connect them together, and the longer the trace, the more power you require. However, as memory power needs rise, system-level power budgets have at best remained constant, and in some cases are shrinking. The current trajectory isn’t sustainable: the introduction of next generation memory technologies is imperative.

Thirdly, there are physical size constraints. Using conventional memory technology to deliver greater bandwidth typically means combining an ever-greater number of components on the PCB. But board layout guidelines limit designers’ ability to pack these components more closely together. This means we’ll soon hit a point where it simply isn’t possible to deliver the required bandwidth with traditional memory without some impact on the form factor of the device it’s powering. This is another driver for the introduction of next generation memory technologies.

The final area to think about is the speed at which DDR, the dominant conventional memory technology, has developed. Each generation of DDR has delivered approximately twice the bandwidth of its predecessor. This suggests that DDR5, when it comes around, will give us about 40Gbps per DIMM. However, next-generation applications are expected to need far more bandwidth than this, meaning that DDR is unlikely to be sufficient to meet tomorrow’s technology needs.
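The doubling pattern described above can be sketched as a simple projection. The DDR4 starting figure (~20Gbps per DIMM) is inferred here from the article’s DDR5 estimate; treat these as illustrative numbers, not datasheet values.

```python
# Sketch of the generational doubling described above. The DDR4 starting
# figure is a working assumption chosen so that doubling reproduces the
# article's ~40 Gbps per-DIMM estimate for DDR5.
DDR4_BANDWIDTH_GBPS = 20.0

def projected_bandwidth(generation: int) -> float:
    """Project per-DIMM bandwidth, doubling each generation after DDR4."""
    return DDR4_BANDWIDTH_GBPS * 2 ** (generation - 4)

for gen in (4, 5, 6):
    print(f"DDR{gen}: ~{projected_bandwidth(gen):.0f} Gbps per DIMM")
```

Even allowing for a hypothetical DDR6 at ~80Gbps per DIMM, this generational pace falls far short of the terabit-class bandwidths next-generation applications are expected to demand.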

What must tomorrow’s memory deliver?

It’s clear that the next generation of memory must deliver significant improvements over conventional technology. It needs to offer much higher bandwidth than DDR. It must deliver a much greater density of bandwidth, to free up space on the PCB that can be used for other purposes. And it must offer significantly better performance per watt of power, to ensure it isn’t limited by the trend of decreasing power budgets.

Technology options

There are various new technologies being developed to address these challenges, many of them tailored towards specific use cases.

Technologies such as low-power DDR (LPDDR) are highly energy-efficient, so they address the power challenge and are ideal for mobile devices. However, the bandwidth they provide is relatively low.

Then there are technologies that offer medium bandwidth and medium-high power efficiency. DDR3 and DDR4 are at the lower end of this band, with wide I/O 2 (WIO2), which stacks memory components on top of a computing element, delivering both better bandwidth and power efficiency.

Lastly, there are the high-bandwidth solutions, notably hybrid memory cube (HMC) and high-bandwidth memory (HBM and its successor HBM2).

The image shows where these differing technologies sit in the power efficiency vs bandwidth spectrum.

One step further

While these new technologies offer interesting opportunities to solve some of the challenges we’ve talked about, there’s something truly exciting happening at the very high end of the bandwidth spectrum.

Here, solutions are emerging that combine HBM2 memory tiles with a field programmable gate array (FPGA) to create a single memory package. This delivers exceptionally high bandwidth of up to 1TBps – sufficient to support tasks such as high-performance compute (HPC) and data centre use, 8K UHD video processing, machine learning, data analytics, image recognition or overall workload acceleration.

FPGA-based HBM2 memory modules use less power than a DDR4-based architecture, thanks in part to much shorter traces. They’re also significantly smaller, and the form factor savings vs DDR4 get larger as the memory bandwidth grows, meaning this is a much more sustainable long-term technology.

The small overall size of these memory modules also frees up space on the PCB, while the single-unit approach (vs the traditional method of combining multiple components) means the board is simpler to design and cheaper to produce.

Tomorrow’s memory technology

With data volumes and the demands we’re placing on our computing systems growing faster than ever, traditional memory technology is struggling to keep up. We need to embrace revolutionary and disruptive technologies that can sustainably deliver significantly higher bandwidth, while requiring relatively little power and space compared to their traditional counterparts.

It’s these technologies that will enable us to reach new heights in advanced data analytics, UHD broadcast, wireline networking and high-performance computing – applications that will have profound positive effects on our everyday lives.
