27 Aug 2013

Maximising the Return from Testing

Jeremy Twaits, Automated Test Product Manager, National Instruments UK & Ireland, looks at ways of maximising the return from testing.

Sometimes, testing can be hard to justify. Everyone knows the benefits – better quality products, fewer returns, more satisfied customers – but there will always be the counter-arguments that it’s “too expensive”, “delays new product introduction” or that “if we just designed the product better, we wouldn’t need to test it”.

Most of these concerns come down to two things: time and cost. And, as the well-worn phrase goes, time is money, so really they are one and the same.

The concern is understandable. When it comes down to it, none of us would have jobs if we weren’t contributing to the bottom line of the business. The lazy approach would be to view test simply as something that companies must spend money on, and hence as a drain on that bottom line. This would be a mistake. The question executives should ask is not “how do I reduce my cost of test?” but rather “which test investments do I need to make to improve business metrics?”

Return on Investment (ROI)

A good way for a test organisation to prove its worth is to measure financial metrics: return on investment (ROI), cost per unit tested, annual test costs and savings, payback periods on investments, and the breakdown of capital versus non-capital costs. Proper modelling uncovers the full lifetime cost of each asset and provides a financial framework for justifying future investments.

The keystone of this financial framework is the Total Cost of Ownership (TCO). TCO deserves highlighting because it gives a more accurate representation of the test organisation’s expenditure, taking into account more than just the capital costs that are typically considered.
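As a rough illustration of how these metrics fit together, the Python sketch below models TCO, ROI and payback period. Every figure in it is invented to show the shape of the calculation, not to represent real data.

```python
# Illustrative only: a minimal TCO/ROI model with invented example figures.

def total_cost_of_ownership(capital, development, operation_per_year, years):
    """Lifetime cost: capital outlay, plus development costs (NRE,
    software tools, training), plus recurring operation and maintenance."""
    return capital + development + operation_per_year * years

def roi_percent(total_savings, investment):
    """Return on investment, as a percentage of the money put in."""
    return 100.0 * (total_savings - investment) / investment

def payback_months(investment, savings_per_year):
    """Months until cumulative savings repay the initial investment."""
    return 12.0 * investment / savings_per_year

tco = total_cost_of_ownership(capital=250_000, development=120_000,
                              operation_per_year=40_000, years=5)
print(f"5-year TCO: ${tco:,}")                                  # $570,000
print(f"ROI: {roi_percent(1_500_000, tco):.0f}%")               # 163%
print(f"Payback: {payback_months(tco, 1_500_000):.1f} months")  # 4.6 months
```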

Image: The elements of the total cost of ownership

Now, this doesn’t immediately sound like it will help the test engineer – surely the stated expenditure will be greater than before if development costs such as strategy planning, software tools, Non-Recurring Engineering (NRE) and internal training are included in the equation? The key point is that if test leaders can accurately calculate TCO, they can show the added value of a test strategy and justify future investments.

Using TCO models, test managers can determine where to allocate resources and invest for maximum ROI. Companies that have performed financial modelling for test have tracked different metrics, but on the whole they tend to measure ROI and payback period. I will discuss three examples, each of which optimises the test organisation via a different approach.

Engineering Test Reuse

The first example is Philips Home Healthcare Solutions (HHS), who were able to save $4.5 million annually by reducing the cost of quality by 81%. The definition of product quality is not uniform across companies: for a physical product it could be return rate (or reliability), appearance or safety; for software it could be the number of bugs found or corrective action requests filed. Philips HHS’ products are software-centric embedded systems that deliver therapy and track patients’ progress in their own homes.

This, of course, means the products must be extremely safe, but it also drives a need to validate the quality of the embedded software. In situations like this, where a company’s ability to differentiate its products is governed by its ability to adhere to high quality standards, the focus is on investing more in systems that reuse test components to capture defects earlier.

To do this, test engineers are taking a leaf out of the design engineers’ book. Product designers, particularly in regulated industries like automotive, aerospace and medical, have long followed rigorous development standards and certifications. Many of you will be familiar with the V-model software development process, in which each phase on the project definition side has a corresponding step in project test and integration. This model is commonly used to ensure the quality of embedded software, but it is now being used more frequently to validate test software too.

Image: Re-use of test tools

Essentially, the tester is being tested. This means more effective test software, and hence a better chance of catching bugs. A benefit of the V-model is the ability to reuse code modules at the same level on the corresponding sides of the “V”, and this code reuse is where Philips HHS were able to make such astounding cost savings.
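To make the reuse idea concrete, here is a minimal sketch of a test component written and validated once and then called from both sides of the V; the names and limits are hypothetical.

```python
# Hypothetical sketch: a test component written and validated once,
# then reused at the corresponding levels on both sides of the V.
from dataclasses import dataclass

@dataclass
class LimitCheck:
    """A reusable pass/fail step with a name and measurement limits."""
    name: str
    low: float
    high: float

    def run(self, measured: float) -> bool:
        passed = self.low <= measured <= self.high
        print(f"{self.name}: {measured} V in [{self.low}, {self.high}] "
              f"-> {'PASS' if passed else 'FAIL'}")
        return passed

# The same validated component serves design verification and
# production test, instead of each team writing its own ad-hoc check.
supply_rail = LimitCheck("3V3 rail", low=3.135, high=3.465)
supply_rail.run(3.29)   # unit-level bench verification
supply_rail.run(3.41)   # reused unchanged in the system-level sequence
```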

They formalised a test tools engineering team, whose role is to develop and maintain test software applications and automated test equipment for use throughout the development process. This meant that product design and verification and validation (V&V) engineers could stop worrying about developing ad-hoc test systems and focus instead on developing new scenarios to increase test coverage. The team elected to use an integrated hardware and software approach to automate measurements and reduce manual testing time.

By using test components earlier in the design process, defects are tracked down and eliminated sooner. The oft-quoted “rule of ten” states that a software bug costs ten times as much to detect and fix at the system test stage as at the requirements stage, and ten times as much again if discovered post-release. Indeed, a study of 63 software development projects at companies like IBM, GTE and TRW showed that finding a product defect during production was 21 to 78 times more expensive than finding it during design. The result of the cross-functional team at Philips HHS was an 88% reduction in embedded software defect capture cost, a 316% ROI and a payback period of just three months.
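The escalation behind the rule of ten is easy to see with a little arithmetic; the $100 base cost below is invented purely for illustration.

```python
# Back-of-envelope arithmetic for the "rule of ten" quoted above.
# The $100 base cost is invented purely for illustration.
base_cost = 100
for stage, multiplier in [("requirements", 1),
                          ("system test", 10),
                          ("post-release", 100)]:
    print(f"{stage:>12}: ${base_cost * multiplier:,}")
# requirements: $100 / system test: $1,000 / post-release: $10,000
```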

Scaling Test Throughput

The second example of a test strategy that can help boost the value of a test organisation is to implement parallel multi-unit test. In situations where product demand, and hence volume, is growing, moving to a parallel testing methodology can have a significant impact on test throughput and TCO per unit tested.
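Before looking at a real example, a rough sketch shows why parallelism pays off. The snippet below runs a stand-in two-second test sequence on eight units across four sockets; the socket count and timings are assumptions, not measured figures.

```python
# A sketch of parallel multi-unit test, assuming four independent test
# sockets; the two-second test body is a stand-in for a real sequence.
import time
from concurrent.futures import ThreadPoolExecutor

def test_unit(serial: str) -> str:
    """Stand-in for one complete per-unit test sequence."""
    time.sleep(2.0)          # pretend the sequence takes two seconds
    return f"{serial}: PASS"

serials = [f"DUT-{n:03d}" for n in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:   # four sockets at once
    for result in pool.map(test_unit, serials):
        print(result)
print(f"8 units in {time.perf_counter() - start:.1f} s "
      "(about 16 s if tested one at a time)")
```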

Harris, an international supplier of communications and information technology to governmental and commercial organisations, made significant savings by moving to a parallel approach. As demand for the Falcon line of military radios increased, they needed to re-evaluate the testing methodology for the Falcon III to meet growing volume, whilst maintaining quality. Using NI TestStand, LabVIEW and PXI, they were able to reduce the cost of test by 74%, leading to an ROI of 185% and a payback period of less than three months.

Another example comes from Qualcomm Atheros, who scaled test throughput not by testing multiple units in parallel, but by combining and synchronising multiple test functions – namely RF measurements and Device Under Test (DUT) control. In this case, the device being tested was a three-radio Multiple Input Multiple Output (MIMO) transceiver for the 802.11ac Wi-Fi standard. As new wireless standards become more complex, the number of operational modes of the devices increases exponentially.

Image: Test hardware development

Consequently, devices must support new modulation schemes, more channels, more bandwidth settings and additional spatial streams. On top of this, characterising WLAN transceivers is particularly challenging when faced with thousands of independent operational gain settings. For a single mode of operation there can be hundreds of thousands of data points to measure, which poses a major challenge in keeping test times low and manageable.
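A back-of-envelope multiplication shows how quickly those dimensions compound. The breakdown below is invented, but it lands in the same order of magnitude as the 300,000-point sweep described later.

```python
# Invented breakdown showing how measurement dimensions compound for a
# modern WLAN transceiver; only the order of magnitude matters here.
modulation_schemes = 10   # e.g. MCS indices
channels           = 25
bandwidth_settings = 4
spatial_streams    = 3
gain_settings      = 100  # independent operational gain settings

points = (modulation_schemes * channels * bandwidth_settings
          * spatial_streams * gain_settings)
print(f"{points:,} data points for a single device")   # 300,000
```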

Image: Modern tools allow much more data to be collected

Doug Johnson, of Qualcomm Atheros, explained: “Traditional rack-and-stack measurements are limited to best estimate gain table selections, a slow process that produced approximately 40 meaningful data points per iteration.”

By switching to modular, PXI-based instrumentation, Qualcomm Atheros were able to utilise the NI PXIe-5644R Vector Signal Transceiver (VST), a module that comprises a vector signal analyser, vector signal generator, high-speed digital I/O and a user-programmable FPGA. This meant that the digital interface to the chip could be controlled simultaneously with RF stimuli and measurements.

Johnson continued, “After switching to the NI PXI vector signal transceiver, we could perform full gain table sweeps instead of using the iterative approach, because of the test time improvements. The team could then characterise the entire range of radio operation in one test sweep per device, acquiring all 300,000 data points for better determination of the optimal operational settings empirically. The availability of this data gave us a view of the device operation we had never seen before so that the team could explore operational regimes not previously considered. By synchronising the timing of digital control directly with the RF front end of the instrument, we have seen test times improve by more than 20x over our previous PXI solution and up to 200x over the original solution that used traditional instruments.”

Production Test Standardisation

The third approach is to standardise test systems across a company. Separate business units (BUs) or product lines within a large organisation often have a degree of autonomy, allowing them to own their product development and manufacturing resources. Whilst this can be beneficial in some cases, it usually leads to each BU developing testers specific to its own products, resulting in a mix of test equipment across different product lines.

Often many of the specific tests, or even the instruments used, will be similar across product lines, and by focusing on these commonalities, organisations can find ways to standardise the hardware and software they use. On the software side, if all of the test systems need, for example, test sequencing and logging of reports to a database, it makes little sense for each BU to task a developer with building this separately; a sketch of such a shared module follows below. On the hardware side, BUs are highly likely to be using similar instrument classes – DMMs, scopes, arbitrary waveform generators and so on – across different test systems. By standardising on the same equipment across BUs, capital costs can be reduced through economies of scale, and operating and maintenance costs also decrease because staff need to be trained on only a single type of system.
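As a sketch of the software argument, the snippet below shows a single sequencing-and-logging engine that can be maintained centrally and shared by every BU. All names here are invented, and the engine stands in for what a framework such as NI TestStand provides off the shelf.

```python
# Invented sketch of a shared sequencing-and-logging engine: one central
# module serves every BU rather than each rewriting its own.
import sqlite3
from typing import Callable

def run_sequence(dut_id: str,
                 steps: list[tuple[str, Callable[[], bool]]],
                 db_path: str = "results.db") -> None:
    """Run named test steps in order, logging each result to a database."""
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS results "
                   "(dut TEXT, step TEXT, passed INTEGER)")
        for name, step in steps:
            db.execute("INSERT INTO results VALUES (?, ?, ?)",
                       (dut_id, name, int(step())))

# Each BU plugs its product-specific steps into the same shared engine.
run_sequence("DUT-001", [("power_on", lambda: True),
                         ("rf_level", lambda: True)])
```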

Hella KGaA Hueck & Co., a developer and manufacturer of lighting and electronics for the automotive industry, reaped the benefits of standardisation by aligning multiple product lines and creating what they termed the SUT (Standard Universal Tester) platform. As the company expanded globally, the mix of test equipment used across various sites led to an inefficient use of capital and personnel resources. By standardising on the SUT platform, based on National Instruments software and hardware, they were able to form a global competency network of SUT-trained engineers. Executive Vice President of Electronic Operations, Michael Follmann, stated that “global production test standardisation allows Hella to maintain its high product quality in a cost-effective and scalable manner. National Instruments was an integral partner in this effort, helping us realise a 46 percent reduction in operational test cost and savings of an additional investment of a million euros every year”. Additionally, the SUT platform led to a 57% increase in test throughput, resulting in a payback period of just eight months.


Each of the three approaches discussed offers a method of increasing the value test delivers back to an organisation. Whether it’s through reusing test code throughout the development cycle to catch bugs sooner, implementing parallel testing to scale throughput as demand increases, or standardising test equipment across sites to reduce capital, operational and training costs, these approaches can help save money and time.

The key point, however, is that by fully modelling the total cost of ownership of test systems, it is then possible to understand the positive fiscal implications of testing. It becomes more straightforward to convince executives of the benefits of new and innovative test approaches if the payback period can be measured. It becomes easier to justify spending money when the return on investment is clearly shown to outweigh the expenditure. These metrics hold the key to elevating an organisation’s view of test from a resource-draining necessity to a revenue-driving advantage.



About the author

Jeremy Twaits is Automated Test Product Manager at National Instruments UK & Ireland. His focus is on graphical software and modular PXI hardware for RF and automated test systems. Jeremy graduated from the University of Bath with a Master’s degree in Physics, and worked for MBDA before joining National Instruments as an applications engineer in 2008.

Since 1976, National Instruments has equipped engineers and scientists with tools that accelerate productivity, innovation and discovery. NI’s graphical system design approach provides an integrated software and hardware platform that simplifies the development of any system that needs measurement and control, with hardware that provides the best in test, measurement and data acquisition.
