
The ECN Roundtable - Test and Design

Tue, 11/02/2010 - 10:12am
Edited by Alix Paultre


We received a great response to our November Roundtable question: “What facets of testing are most often overlooked by design professionals?” The answers underscore not only the importance of test in design, but also the many ways test can help the engineer.

 

Dave Kresse, CEO, Mu Dynamics (www.mudynamics.com)

Several issues commonly overlooked when testing devices can have a significant impact on the success of a device or product once it is in use.

Designers are first and foremost focused on architecting and developing their product, typically with aggressive time-to-market pressures dictating release dates. With the ever-present “feature creep” and the reality of building complex systems by large, distributed teams, the time allocated to testing invariably becomes an expendable item as the launch date fast approaches.  

The time “saved” by cutting corners on testing, however, will come back to haunt you. There are simply too many usage paths and data permutations in today’s products; without a rigorous and highly automated approach to testing, the result will be unacceptable – faults, crashes, security breaches, loss of data or even full system meltdowns. As such, any product, no matter how rushed, must be tested prior to deployment.
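
A toy sketch of what such automated permutation testing can look like. Everything here is invented for illustration (the field values and the parse_message stand-in are not from any real product); the point is that a harness can enumerate input combinations and flag crashes rather than relying on hand-picked cases:

```python
import itertools

# Illustrative field values a protocol handler should survive; a real
# test tool would generate a far larger space of permutations.
versions = [0, 1, 2, 255]
lengths = [0, 1, 64, 65535]
payloads = [b"", b"\x00" * 64, b"A" * 70000]

def parse_message(version, length, payload):
    """Stand-in for the device or parser under test (hypothetical)."""
    if length != len(payload):
        raise ValueError("length mismatch")  # controlled rejection
    return (version, payload[:length])

failures = []
for version, length, payload in itertools.product(versions, lengths, payloads):
    try:
        parse_message(version, length, payload)
    except ValueError:
        pass  # clean rejection of malformed input is acceptable
    except Exception as exc:  # anything else is a defect worth logging
        failures.append((version, length, len(payload), exc))

total = len(versions) * len(lengths) * len(payloads)
print(f"{len(failures)} unexpected failures out of {total} permutations")
```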

Testing the user experience is extremely difficult, mostly due to the ever-evolving interactions with new devices, applications and network paths (such as IPv6) that require completely new test scenarios built from the ground up. Finding a testing solution that can quickly and efficiently emulate these conditions (without requiring that arduous test scenarios be built by hand) is the best way to ensure these products and services work as expected, with the desired level of functionality, scalability, security and resilience.

A mobile phone, for example, may perform flawlessly in a simple laboratory setting with its basic functionality. But what will happen “in the wild” on a shared network with millions of other similar products and thousands of mobile apps? Will it still work as expected?   

Testing approaches now exist that can create test cases replicating the real-world environment where devices will operate.  Testers now have the choice to accurately represent real-world scenarios quickly and effectively.   In today’s competitive environment, can anyone afford not to?

 

Chirag Kapoor, Product Support Engineer, Xeltek (www.xeltek.com)

Testability is the means by which the functionality of any electronic product, circuit or component can be determined to a desired level of accuracy. A test plan should be established prior to PCB layout and design so that efficient testability features can be built into the product.

Apart from detecting a failure, one should be able to isolate its cause by recording diagnostic information.

The following points should be kept in mind during the PCB layout procedure:

-Test pads should be 0.015 to 0.040 inch in diameter, large enough for the tip of an SMD test probe to make reliable contact. They should not be crowded, and at least 0.018 inch of unpopulated area should be left around each test pad.

-Test pads should be coated with a conductive, non-oxidizing material such as gold for better contact with the test probes.

-Probing should be done at the test pad, not at the component itself, since a cold solder joint may appear good under the pressure the probe applies to the component.

-Avoid tall components where possible; boards with tall components are difficult to probe, and test pads should be kept well away from them.

-When dealing with Flash EPROMs, do not program the protection bit until testing is complete.

In addition to the methods stated above, modern approaches include boundary scan designs for testing interconnects on the PCB, automatic test instruments and their programming, and analog testing of digital outputs.
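
Layout rules like these lend themselves to automated checking. Below is a minimal sketch, assuming a simple pad list (name, x, y, diameter, in inches) exported from a CAD tool; the thresholds mirror the guidelines above, and all values are illustrative only:

```python
import itertools
import math

# Hypothetical pad list exported from a CAD tool: (name, x, y, diameter).
pads = [
    ("TP1", 0.100, 0.100, 0.030),
    ("TP2", 0.300, 0.100, 0.010),   # diameter below 0.015 inch
    ("TP3", 0.140, 0.100, 0.030),   # too close to TP1
]

MIN_DIA, MAX_DIA, MIN_CLEARANCE = 0.015, 0.040, 0.018

# Rule 1: pad diameter must fall within the probe-friendly range.
for name, x, y, dia in pads:
    if not MIN_DIA <= dia <= MAX_DIA:
        print(f"{name}: diameter {dia:.3f} in is outside 0.015-0.040 in")

# Rule 2: at least 0.018 inch of clear area between neighboring pads.
for (n1, x1, y1, d1), (n2, x2, y2, d2) in itertools.combinations(pads, 2):
    edge_gap = math.hypot(x2 - x1, y2 - y1) - (d1 + d2) / 2
    if edge_gap < MIN_CLEARANCE:
        print(f"{n1}/{n2}: only {edge_gap:.3f} in between pad edges")
```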

 

Kristin Sullivan, Data Translation (www.datatranslation.com)

When measuring signals, users often assume that the grounds of their signals and their measurement system are at the same potential. If the difference in ground potential is large enough, current flows between the signal source and the measurement system; this is called a ground loop. Ground loops contribute noise that can greatly affect measurement accuracy, especially when trying to measure low-level signals precisely.
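
A back-of-the-envelope illustration of the effect (all values below are assumptions for the example, not product specifications): a modest ground potential difference drives current through the cable return resistance, and the resulting voltage drop adds directly to the measured signal.

```python
# Assumed values for illustration only.
ground_potential_diff = 0.5    # volts between the two "grounds"
return_path_resistance = 2.0   # ohms of cable/shield return resistance
series_resistance = 50.0       # ohms elsewhere limiting the loop current

# Ohm's law: the loop current through the shared return path
# produces an error voltage in series with the signal.
loop_current = ground_potential_diff / (return_path_resistance + series_resistance)
error_voltage = loop_current * return_path_resistance

print(f"loop current: {loop_current * 1e3:.1f} mA")
print(f"error added to the signal: {error_voltage * 1e3:.1f} mV")
# ~19 mV of error would swamp a millivolt-level thermocouple signal.
```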

Many measurement instruments on the market today provide multiplexed architectures, where one A/D is used to measure multiple channels. In this kind of architecture, if one channel goes down, all channels go down. ISO-Channel technology, on the other hand, uses a simultaneous architecture, where each channel has its own dedicated 24-bit Delta-Sigma A/D.

ISO-Channel technology eliminates ground loop problems by using a differential, isolated, floating front end. To measure floating signal sources, ISO-Channel technology uses differential analog input signals, a 24-bit Delta-Sigma A/D converter for each channel, and channel-to-channel isolation.

ISO-Channel uses galvanic isolation methods to guarantee 1000V of isolation between any two input channels and ±500V to earth ground. Common mode noise and ground loop problems are eliminated with ISO-Channel, since sensors at different ground reference levels are easily accommodated, even if those levels differ by hundreds of volts, or by transients of thousands of volts. The result is that accuracy is preserved for all sensor inputs. This is especially useful when conditions in the electrical environment change due to motor current surges, electromagnetic radiation, or noisy industrial equipment turning on and off.

 

 

Matt Friedman, National Instruments (www.ni.com)

Engineers often identify design verification as the largest bottleneck in the design process. Much of this is attributed to the manually intensive process of performing benchtop measurements in the lab. Turning knobs and dials, engineers manually set up their instruments for each measurement and then transfer that data by disk or memory stick to their computer for additional analysis. With data analysis often separate from acquisition, engineers commonly find measurement and design errors late in the process, often requiring them to retake measurements.

By connecting instruments to the PC, engineers can automate complex measurements like frequency sweeps and limit testing to save hours of lab time. Additionally, with on-the-fly analysis, engineers can quickly view the results that matter and reduce the probability of having to retake measurements. Finally, the ability to save and export measurements simplifies documentation and integration with EDA tools and enterprise data management software.
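
A minimal sketch of that kind of automation using the PyVISA library: sweep a signal generator, read an AC voltmeter at each point, and apply a limit test on the fly. The VISA addresses and SCPI commands below are assumptions; exact syntax varies by instrument.

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical VISA addresses; substitute your own instruments'.
gen = rm.open_resource("GPIB0::10::INSTR")   # signal generator
dmm = rm.open_resource("GPIB0::22::INSTR")   # digital multimeter

LIMIT_VRMS = 0.9   # assumed pass/fail threshold
results = []

for freq_hz in (100, 1_000, 10_000, 100_000):
    gen.write(f"FREQ {freq_hz}")               # SCPI syntax varies by model
    reading = float(dmm.query("MEAS:VOLT:AC?"))
    results.append((freq_hz, reading, reading >= LIMIT_VRMS))

# On-the-fly reporting plus an export-ready record for documentation.
for freq_hz, vrms, passed in results:
    print(f"{freq_hz:>7} Hz: {vrms:.4f} Vrms  {'PASS' if passed else 'FAIL'}")
```
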
Involving test early in the design process

In many product development efforts, test engineers are given key product specs and a set of test requirements only after the product has been designed. This “throwing it over the wall” test-development strategy is widely criticized but still commonly used. When test engineering is brought into the design process late, it almost guarantees that release dates will slip, budgets will be overrun and corners will be cut, affecting overall product quality.
Many leading manufacturers are beginning to understand the need to involve test engineering early in the design process.

This earlier alignment of test and design resources often leads to significant time and money savings on the back end. For example, teams can identify the tests required at each phase of the design cycle and also influence designs to improve access to test points. This earlier integration can also improve quality, as test engineers can provide additional insight into how certain designs and components will pass or fail in the manufacturing process.
Increasing use of PXI and user-configurable FPGAs to test next-generation protocols and standards

Over the last few years there has been an explosion of devices communicating over new digital and RF protocols. This creates challenges for engineers, who must figure out how to validate devices that require custom algorithms and new levels of processing. For example, testing a modern RF receiver often requires coding/decoding, modulation/demodulation, packing/unpacking, and other data-intensive tasks to occur within a single clock cycle of the device under test.

To meet these needs, design engineers are turning to PC-based, modular instrument platforms that enable new levels of customizability and performance. PXI, the proven standard for automated test, now provides user-accessible FPGAs that let engineers tackle these new testing requirements. FPGAs are well suited to these tests thanks to their user configurability and hardware-timed execution, which provides a high level of determinism and reliability. With a combination of FPGAs and multicore processors, engineers can develop a true software-defined test system that meets their current testing needs and grows as their products advance and testing needs change.

 

Erik Reynolds, Test Engineering Manager, Logic PD (www.logicpd.com)

When it comes to embedded systems and the products designed around them, the production test solution is a common oversight for many design professionals. How the production test solution is implemented can significantly impact the manufacturing cost of a product. An efficient, well-defined test solution executed in collaboration with the product design team will result in a smooth transition from design to manufacturing, a reduced product development cost, and an optimized per unit production cost.

To collaborate on an efficient test solution, design engineers need to know the aspects of production test that most dramatically impact cost: test time and first pass yield. The ability of a contract manufacturer (CM) to quickly and precisely detect defects in the printed circuit board assembly (PCBA) is the front line for reducing test time and increasing first pass yield for a product. To assist the CM, hardware designers should consider the approach for PCBA-level testing early in the design process, whether that approach is in-circuit test (ICT) or flying probe, and then design the PCB with easy test access to components and nodes.
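
A quick way to see why those two variables dominate: per-unit test cost scales directly with test time and, through retest, inversely with first pass yield. A sketch with invented figures:

```python
# All figures are assumptions for illustration.
test_time_s = 90           # per-unit test time
labor_rate_per_hr = 45.0   # burdened CM labor rate, dollars
first_pass_yield = 0.92    # fraction of units passing on the first run

cost_per_run = test_time_s / 3600 * labor_rate_per_hr
# Simplified model: each failing unit gets one retest after rework.
expected_runs = 1 + (1 - first_pass_yield)

print(f"cost per test run: ${cost_per_run:.2f}")
print(f"expected cost per unit: ${cost_per_run * expected_runs:.2f} "
      f"at {first_pass_yield:.0%} first pass yield")
```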

Another common oversight by design professionals is accurately defining the scope and cost of the production test solution. The test solution cost can easily approach the product development cost for many products in volume production where the major cost contributors are the level of test automation and the depth of testing required. A higher level of automation may require a greater up-front cost to develop, but generally results in reduced test time and a lower per unit labor cost from the CM. Conversely, a less complex test solution may require less up-front cost to develop, but generally results in increased test time and a higher production cost over the life of the product. Analyzing the return on investment (ROI) of the various strategies, and choosing the one that best aligns with anticipated volumes, is essential to determining the appropriate production test solution.
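
That ROI analysis reduces to straightforward arithmetic: the break-even volume is the difference in up-front cost divided by the per-unit savings. A sketch under invented cost assumptions:

```python
# Invented costs for two candidate test strategies.
AUTOMATED = {"upfront": 80_000.0, "per_unit": 1.50}   # fixture + software
MANUAL = {"upfront": 15_000.0, "per_unit": 6.00}      # operator-driven

def total_cost(strategy, volume):
    """Lifetime test cost: up-front development plus per-unit labor."""
    return strategy["upfront"] + strategy["per_unit"] * volume

break_even = (AUTOMATED["upfront"] - MANUAL["upfront"]) / (
    MANUAL["per_unit"] - AUTOMATED["per_unit"]
)
print(f"automation pays for itself above ~{break_even:,.0f} units")

for volume in (5_000, 20_000, 100_000):
    winner = ("automated"
              if total_cost(AUTOMATED, volume) < total_cost(MANUAL, volume)
              else "manual")
    print(f"{volume:>7,} units: {winner} solution is cheaper")
```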

The test solution associated with any manufactured product has a considerable impact on the per unit cost of that product and should be considered as part of the requirements and cost for bringing that product to market. If this consideration becomes habit, then the alpha stage of any product design should find the engineers asking: “How are we going to test this product?” 

 

Heiko Ehrenberg, GOEPEL Electronics (www.goepelusa.com)

Design for Testability (DFT) to support embedded test capabilities, such as JTAG / boundary scan, is an important issue. We still see PCB designs that include integrated circuits (ICs) that support JTAG and are IEEE 1149.1 compliant, yet the designers don’t make use of those test capabilities and sometimes even inhibit the use of JTAG for manufacturing test. This can be a big handicap, especially considering the wide use of BGA components and the test access limitations associated with such packages.

Design for JTAG / boundary scan test requires not only the proper design of a scan chain and a means to access the board-level or system-level test access port (TAP), but also the controllability of compliance enable pins at the board level. Compliance enable pins are provided on a good percentage of boundary scan devices and are used to put the pins used for TAP access, and the test capabilities of the respective device, into IEEE 1149.1 compliant mode.
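
As a sketch of what board-level TAP access enables, the fragment below bit-bangs the IEEE 1149.1 TAP state machine to read the 32-bit IDCODE that compliant devices load into their data register after reset. The jtag_clock helper is hypothetical; a real implementation would drive the TAP pins through whatever GPIO or adapter the test setup provides.

```python
def jtag_clock(tms: int, tdi: int) -> int:
    """Hypothetical helper: drive TMS/TDI, pulse TCK once, return TDO."""
    raise NotImplementedError("implement against your test hardware")

def read_idcode() -> int:
    # Five clocks with TMS=1 force the TAP into Test-Logic-Reset, where
    # 1149.1-compliant parts select IDCODE (or BYPASS) as the active register.
    for _ in range(5):
        jtag_clock(tms=1, tdi=0)
    # Test-Logic-Reset -> Run-Test/Idle -> Select-DR -> Capture-DR -> Shift-DR
    for tms in (0, 1, 0, 0):
        jtag_clock(tms=tms, tdi=0)
    # Shift out 32 bits, least significant bit first; TMS=1 on the final
    # bit exits to Exit1-DR. An IDCODE always has 1 as its LSB.
    idcode = 0
    for bit in range(32):
        tdo = jtag_clock(tms=1 if bit == 31 else 0, tdi=0)
        idcode |= (tdo & 1) << bit
    return idcode
```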

Furthermore, proper test bus termination needs to be considered to ensure that any JTAG test and in-system programming pattern can be applied to the unit under test at the highest possible clock speed. This is especially important for complex PCB designs with several boundary scan devices, and when in-system programming (ISP) of Flash and serial EEPROM devices is among the target applications.

JTAG / boundary scan not only provides benefits for manufacturing test in terms of regaining test access and delivering detailed, accurate diagnostics. This test technology can also be indispensable for design verification in the new product introduction phase and for repair and troubleshooting activities. Even beyond the prototyping and manufacturing stages, JTAG / boundary scan can prove very useful for testing and reconfiguring printed circuit assemblies in the field, offering the potential to reduce no-defect-found problems and to cut downtime and service-related costs.

Paul Anderson, GrammaTech (www.grammatech.com)

New security vulnerabilities in critical software are reported almost daily. High-profile stories, such as attacks on prominent web sites, make the news most often. Most of these are enabled by programming errors in operating system or desktop software running on home computers.

However, other kinds of systems are not immune, and are increasingly being seen as juicy targets. Attacks on mobile phones are increasing rapidly. Researchers recently demonstrated successful attacks on an automobile computer, allowing them to control the brakes, disable power steering, and falsify dashboard data. Even medical devices such as implantable defibrillators have exploitable security weaknesses.

Clearly, testing for software security vulnerabilities is being overlooked. Part of the problem is a lack of awareness. Many designers have little knowledge of the myriad ways in which a system might be attacked, so it is difficult for them to understand how to test for vulnerabilities. Consequently, a system might be well tested for expected inputs and casual error cases, but not for malicious inputs generated by a hostile attacker.
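
A sketch of the distinction: alongside nominal cases, a test suite should deliberately feed adversarial inputs (truncated headers, lying length fields, oversized payloads) and treat any uncontrolled crash as a security finding. The parse_packet function and its [type][length][payload] format are invented for the example:

```python
nominal_inputs = [b"\x01\x05hello"]
adversarial_inputs = [
    b"",                           # empty input
    b"\x01\xff" + b"A" * 10,       # declared length exceeds actual payload
    b"\x01\x05he\x00lo",           # embedded NUL byte
    b"\x01\x05" + b"A" * 10**6,    # oversized payload
]

def parse_packet(data: bytes) -> bytes:
    """Hypothetical parser under test: [type][length][payload]."""
    if len(data) < 2:
        raise ValueError("truncated header")
    length = data[1]
    if len(data) - 2 < length:
        raise ValueError("payload shorter than declared length")
    return data[2:2 + length]

for data in nominal_inputs + adversarial_inputs:
    try:
        parse_packet(data)
    except ValueError:
        pass  # controlled rejection is the desired outcome
    except Exception as exc:
        print(f"potential vulnerability: {type(exc).__name__} on {data[:16]!r}")
```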

Resources are available to help designers and testers. The “CWE/SANS Top 25 Most Dangerous Software Errors” (http://cwe.mitre.org/top25/) lists the most widespread code-level weaknesses that can lead to serious security vulnerabilities. Traditional testing aimed at these errors can help, but test engineers need to understand how to write test cases that check for such defects. Advanced static-analysis tools can be more useful because they encapsulate expert knowledge of security vulnerabilities, and yield actionable results without test cases or even the need to run the code.
