Saving time and money on testing embedded hardware - putting design and production tests in their place
Recent high-profile product recalls and failures have shown that even the most experienced and well-funded company can fall down when it comes to product testing.
A product goes through two fundamental types of test. The first is of the design itself and the second is production testing of the product. These two categories have different aims and this article sets out to explore those differences.
Increasing pressure on timescales and the relentless pace of new technology introductions can push companies to cut design testing to a minimum. Conversely, production testing can be over-engineered to no great effect, increasing production costs and creating a false sense of security: the design itself ends up being tested in production, rather than the quality of repetition of that design.
The aim of design testing is primarily to check that the design meets the requirements specification. Typically an engineer will check each feature of the design one at a time and then follow up with some form of system testing. The software team will then take over the design and report back any bugs that they find whilst they develop their code. Frequently large portions of hardware can't be tested without software support, so validating the hardware design requires a combined effort. An engineer might only have a matter of weeks to test a design that may be in service for years and manufactured in large quantities. Within this testing window there are aspects of the design that need to be validated to build confidence, for instance:
1. Temperature and humidity range tests.
2. Voltage input range and quality: Does the device work across the full voltage range it can be exposed to?
3. EMC (ESD, immunity, emissions): This is a requirement for CE marking but is more than just box-ticking. It can reveal design weaknesses, particularly in immunity and ESD, that could cause the type of random field errors that engineers hate.
4. Mechanical fit: As well as the first PCB fitting in the case, will the production versions all fit when tolerances of case and PCB are taken into account?
5. Signal integrity: Searching a board for signals that look wrong may seem like a scattergun approach, but it can reveal mid-level voltages or suspect clock waveforms that work in most cases, but will cause problems under stress.
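As a rough sketch, the corner testing implied by items 1 and 2 above lends itself to automation. In the example below, `psu_set_voltage`, `chamber_set_temp` and `dut_self_test` are hypothetical stand-ins for whatever bench-instrument and device-under-test interfaces are actually available; the voltage and temperature values are illustrative only.

```python
# Sketch of automated design-validation corner testing: exercise the
# device at every combination of supply voltage and temperature.
# The three callables passed in are hypothetical instrument/DUT hooks.
from itertools import product

VOLTAGES_V = [4.5, 5.0, 5.5]   # e.g. a nominal 5 V rail +/- 10 %
TEMPS_C = [-20, 25, 70]        # e.g. the specified operating range

def run_corner_tests(psu_set_voltage, chamber_set_temp, dut_self_test):
    """Run the DUT self-test at every voltage/temperature corner.

    Returns a dict mapping (temp_c, volts) -> pass/fail.
    """
    results = {}
    for temp_c, volts in product(TEMPS_C, VOLTAGES_V):
        chamber_set_temp(temp_c)     # allow soak time in real use
        psu_set_voltage(volts)
        results[(temp_c, volts)] = dut_self_test()  # True = pass
    return results
```

A real rig would add soak times, margining beyond the specified limits, and logging, but even this simple loop catches the corner cases a bench check at room temperature misses.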
Thorough design testing should yield a design that will be robust across the full range of component tolerances it can be built with, and in environments that the device is expected to inhabit. This design can then be put into production, where a production test regime is required.
Production testing aims to test that each produced board is the same as the original “gold standard” board. It differs from design testing in that production testing assumes the design is good. Ideally, production testing is fast with high coverage. There are a variety of tools used for production testing, including:
1. Automated optical inspection (AOI).
2. In-circuit test (ICT) or flying probe.
3. Boundary scan (JTAG).
4. Functional test.
5. Burn-in.
Some of these test methods complement each other and are designed for different purposes: AOI is rarely used as a sole test but works well in conjunction with another method, whereas burn-in is only ever an add-on, not a replacement for any other test method. Some form of functional test is normally carried out on the assembled product, even if every component has already been tested in another way.
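A functional test is typically a fixed sequence of measurements compared against limits derived from the known-good board. The sketch below illustrates the idea; the measurement names and limits are invented for the example, and `measure` stands in for whatever instrument-reading function a real test fixture provides.

```python
# Minimal sketch of a production functional test: each measurement is
# checked against limits characterised from the "gold standard" board.
# Measurement names and limit values here are illustrative assumptions.

LIMITS = {
    "rail_3v3": (3.2, 3.4),        # volts
    "idle_current_ma": (10, 50),   # milliamps
    "osc_freq_mhz": (11.9, 12.1),  # megahertz
}

def functional_test(measure):
    """Run all checks; measure(name) returns a reading.

    Returns (overall_pass, {name: (reading, passed)}).
    """
    detail = {}
    for name, (lo, hi) in LIMITS.items():
        reading = measure(name)
        detail[name] = (reading, lo <= reading <= hi)
    return all(ok for _, ok in detail.values()), detail
```

Keeping the limits in a table like this, rather than buried in code, makes it easy to review what the test actually covers and to tighten or relax limits as production data accumulates.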
Production testing can be slowed by the retesting of components that have already been 100% tested by the supplier. If the manufacturer has confidence in the supplier, these tests are effectively redundant, provided that damage in shipping can be ruled out. Likewise, production testing should not seek to prove that the design is good, for example by testing all the limits of voltage and temperature that should already have been covered by design testing. Such extra testing may unduly stress the board and shorten product lifetime, as well as slowing the testing down.
A matrix should be drawn up examining what is tested by each method to minimise overlaps and thus reduce test time, whilst ensuring that coverage is high. As knowledge of the product and suppliers grows, the testing can be streamlined to accentuate testing of problematic areas and to reduce double testing (e.g. retesting components from suppliers).
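Such a matrix can be as simple as a table of failure modes against the test methods that catch them. The sketch below shows one way to represent it and to flag gaps and overlaps automatically; the failure modes and method assignments are illustrative, not a recommendation for any particular product.

```python
# Sketch of a test-coverage matrix: each failure mode maps to the set of
# production test methods that would catch it. The entries below are
# illustrative examples only.
COVERAGE = {
    "missing component": {"AOI"},
    "wrong component":   {"AOI", "functional"},
    "solder bridge":     {"AOI", "in-circuit"},
    "dead component":    {"in-circuit", "functional"},
    "infant mortality":  {"burn-in"},
}

def analyse(coverage):
    """Return (failure modes with no coverage, modes tested more than once)."""
    uncovered = [mode for mode, tests in coverage.items() if not tests]
    overlaps = [mode for mode, tests in coverage.items() if len(tests) > 1]
    return uncovered, overlaps
```

The `uncovered` list shows where coverage must be added; the `overlaps` list shows candidates for streamlining once confidence in the product and suppliers has grown.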
In conclusion, production and design testing are both required but have different objectives. Design testing needs to be thorough enough to ensure that all produced boards work reliably, whilst production testing needs to ensure that each board is an exact copy of the original.
Dunstan Power is a chartered electronics engineer providing design, production and support in electronics to all of ByteSnap Design's clients. Having graduated with a degree in engineering from Cambridge University, Dunstan has been working in the electronics industry since 1992 and in 2004 founded Diglis Design Ltd, an electronic design consultancy, where he developed many successful electronic board and FPGA designs.
In 2008, Dunstan teamed up with his former colleague Graeme Wintle to establish a company that would supply its clients with integrated software development and embedded design services, and ByteSnap Design was born.