We’ve all seen the predictions about the amount of data there will be in the world in years to come. Much of this is driven by the Internet of Things (IoT), which is fuelling a boom in connected kit. Some predictions suggest there will be as many as 50 billion internet-connected devices by 2020.
This includes everything from our cars to things in our homes, offices and even the clothes we wear, all of which are being equipped with sensors to collect data. The ability to put lightweight, low-cost sensors into everyday things is great, but to be of value, the data they generate needs to be turned into insights that drive better decision-making or appropriate actions by the host device or those around it.
The drivers for greater compute power
And this is where the challenge comes in. Turning such enormous volumes of data into something meaningful – and doing so quickly – requires significant compute power. Some IoT data will be sent to the cloud for processing, but other data will need to be analysed and acted upon on the device itself (known as ‘edge’ computing), or in the ‘fog’ that sits between the edge and the cloud.
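The division of labour between edge and cloud often comes down to triage: handle or filter data locally, and forward only what needs wider analysis. The sketch below illustrates the idea with hypothetical temperature readings and an assumed "normal" band; only out-of-band readings are forwarded upstream, while the rest are summarised on the device.

```python
# Sketch: edge-side triage of sensor readings (all values hypothetical).
# Anomalous readings are forwarded to the cloud; normal ones are reduced
# to a local summary, cutting bandwidth and cloud-side processing load.

NORMAL_RANGE = (15.0, 35.0)  # assumed acceptable temperature band, in degrees C

def triage(readings):
    """Split readings into anomalies (sent upstream) and a local summary."""
    in_band = lambda r: NORMAL_RANGE[0] <= r <= NORMAL_RANGE[1]
    anomalies = [r for r in readings if not in_band(r)]
    normal = [r for r in readings if in_band(r)]
    summary = {
        "count": len(normal),
        "mean": sum(normal) / len(normal) if normal else None,
    }
    return anomalies, summary

anomalies, summary = triage([21.5, 22.0, 48.3, 20.9, -4.1])
```

The same pattern scales up: a fog node might apply a second, heavier filter to the anomalies from many edge devices before anything reaches the cloud.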
Consider a self-driving car: as well as dealing with ‘normal’ road conditions and communicating with the cloud to obtain traffic data, the vehicle will need to be able to handle unexpected and emergency situations safely. In these split-second scenarios, the car itself needs to be capable of responding correctly, without relying on external compute capability. And this requires substantial on-board compute power.
Security is another issue that demands large amounts of compute capability: connected devices must be able to verify that the instructions they receive come from trusted sources. This demands complex algorithms, which must continually evolve to keep pace with threats.
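One simple form of this verification is a message authentication code: the device recomputes a cryptographic tag over each incoming command and accepts it only if the tag matches. The sketch below uses Python's standard `hmac` module with a hypothetical pre-shared key; a real deployment would typically use per-device keys or asymmetric signatures.

```python
import hmac
import hashlib

# Sketch: authenticating commands sent to a device. The shared key and
# command strings are illustrative assumptions, not a real protocol.

SHARED_KEY = b"device-provisioning-key"  # hypothetical pre-shared key

def sign(command: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the command."""
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify(command: bytes, tag: bytes) -> bool:
    """Accept the command only if its tag checks out."""
    # compare_digest is constant-time, resisting timing attacks.
    return hmac.compare_digest(sign(command), tag)

tag = sign(b"unlock-door")
```

A tampered command, or one from a sender without the key, fails verification and can be dropped before it reaches the device's control logic.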
A new approach to computing
Wherever the processing power is required – be it in the cloud, the fog or the edge – the magnitude of it is like nothing ever seen before, and it demands a whole new approach. Current hardware infrastructure is already struggling to keep pace with the data explosion, which is why many businesses are working on what comes next. The field-programmable gate array (FPGA) will play an important role.
According to research by Forbes, only 37% of organisations currently use FPGAs in their hardware designs, compared to 55% who use traditional CPUs. But looking ahead, the report found a lot of support for FPGAs, with almost two-thirds of respondents believing they’re necessary if hardware is to match the potential of future software.
The advantages of FPGAs
This popularity is perhaps because FPGAs typically offer better performance than other types of processor, while consuming less energy. With many systems constrained by power budgets, performance-per-watt is an important measure. Both FPGAs and application-specific integrated circuits (ASICs) offer superior performance per watt to general-purpose processors. But FPGAs have an additional advantage over ASICs: they can be dynamically reconfigured.
This flexibility means FPGAs are suited to all manner of workloads, from data centres to embedded applications. In particular, FPGAs are an attractive solution for servers that need to deal with a mixture of workloads, where ASICs aren’t a practical solution. Moreover, FPGAs’ lower development costs and quicker times to market than ASICs mean they’re starting to replace them.
FPGAs are also an ideal way to meet IoT security demands. They can be dynamically reprogrammed to handle changing security keys, which isn’t possible with fixed-function silicon such as ASICs.
Once developers get to grips with FPGAs, the benefits are many. These include running more than one FPGA on tasks simultaneously, creating purpose-built hardware for very-high-speed processing, and switching between tasks very quickly. Figure 1 shows further advantages.
FPGAs in use
Microsoft is one example of a cloud data centre provider already using FPGAs. Wireless communication providers, including Sprint, are doing the same.
Other uses for FPGAs include robotics: the FPGA could be set up to control and analyse data from visual sensors, perhaps to inspect an asset. Once the analysis is complete and any potential asset faults identified, the same FPGA can be reconfigured to control tools to repair the asset. It’s easy to see how this approach could equally be applied to surgery.
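The inspect-then-repair workflow above hinges on swapping the FPGA's configuration between stages. The sketch below models that idea in software; the `load_bitstream` method and the bitstream names are hypothetical, and real vendor runtimes expose quite different APIs.

```python
# Sketch: modelling dynamic FPGA reconfiguration. The class, method and
# bitstream names are illustrative assumptions, not a real toolchain API.

class ReconfigurableFPGA:
    def __init__(self):
        self.active = None  # name of the currently loaded configuration

    def load_bitstream(self, name):
        # On real hardware this would stream a new configuration
        # into the fabric, replacing the previous circuit.
        self.active = name

fpga = ReconfigurableFPGA()
fpga.load_bitstream("vision_inspection")  # analyse sensor data, flag faults
fpga.load_bitstream("tool_control")       # then drive the repair tooling
```

The point is that one piece of hardware serves both stages in turn, where an ASIC-based design would need separate chips for each role.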
The challenges of delivering high-performance computing
The existence of FPGAs in itself is only part of the solution: a third of those surveyed in the Forbes report cited difficulties with programming FPGAs. Other challenges highlighted include development or implementation costs being too high, and too little support from hardware providers.
These things are changing: FPGA programming languages (such as OpenCL) are being improved, while the number of specially created software libraries is increasing, with significant support from FPGA-makers, such as Intel. This is helping make FPGA programming simpler and, as a consequence, faster and cheaper.
Harnessing the data deluge
As the data explosion continues, organisations across all sectors need to find ways to take advantage of this new goldmine of potential insights. Traditional computing hardware is struggling to keep pace, so a new approach is needed, and FPGAs are an ideal solution.
By offering a blend of high performance, low energy use and inherent flexibility, they’re effectively ‘custom silicon’ that can alter the behaviour of a system on-the-fly, perhaps to change functionality or upgrade security settings.
Intel, for example, has launched devices with both a Xeon processor and an FPGA, the goal being to empower designers to create products that deliver higher performance than either device can achieve alone.