Automated Test Outlook - Peer-To-Peer Computing

Tue, 08/31/2010 - 8:17am
by Matt Friedman, National Instruments (www.ni.com)

For years, the computing industry has implemented distributed architectures by dividing computing among multiple processing nodes. For example, Google search queries are not run on a single supercomputer but on a network of more than 450,000 interconnected servers. Personal computing devices run multicore processors and use specialized graphics processing units (GPUs) to handle high-definition graphics processing. With increasingly complex testing requirements and exponential growth in acquired data, automated test systems will need to evolve to work smarter, not just harder.

Peer-to-peer is a type of computing that uses a decentralized architecture to distribute processing and resources among multiple processing nodes. This is in contrast to traditional systems, which feature a central hub responsible for transferring data and managing processing. In automated test systems, peer-to-peer computing may take the form of acquiring data directly from an instrument like a digitizer and streaming it to an available field-programmable gate array (FPGA) for inline signal processing. Other systems use it to offload processing to other test systems or high-performance computers.
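
To picture this data path, consider the minimal Python sketch below. It is purely an illustrative analogy, not an NI or PXI API: the queue stands in for a point-to-point stream between peers, the producer for a digitizer, and the consumer for an FPGA applying an inline filter, with no central hub touching the data in between.

```python
# Illustrative analogy only: in a real peer-to-peer test system the queue would
# be a direct low-latency stream (e.g., PCI Express DMA) from a digitizer to an
# FPGA peer, with no host in the data path.
import threading
import queue
import numpy as np

stream = queue.Queue(maxsize=8)   # stand-in for the point-to-point link

def digitizer(n_blocks=16, block_size=4096):
    """Producer peer: 'acquires' blocks of samples and streams them out."""
    rng = np.random.default_rng(0)
    for _ in range(n_blocks):
        stream.put(rng.standard_normal(block_size))   # simulated acquisition
    stream.put(None)                                  # end-of-stream marker

def fpga_peer():
    """Consumer peer: applies inline processing (here, a simple moving average)."""
    kernel = np.ones(8) / 8.0
    while (block := stream.get()) is not None:
        filtered = np.convolve(block, kernel, mode="same")
        print(f"processed block, peak = {filtered.max():.3f}")

producer = threading.Thread(target=digitizer)
producer.start()
fpga_peer()
producer.join()
```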

The trend toward software-defined instrumentation is giving engineers unprecedented control over their automated test systems and opening up entirely new types of applications. Much of this stems from engineers’ ability to access the raw measurement data, which they can analyze and process for their exact needs. With higher digitization rates and channel counts, the amount of available data is growing at exponential rates. Within five years, some high-performance test systems will be processing petabytes (thousands of terabytes) of data per day.
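
To put that forecast in perspective, a back-of-the-envelope calculation (using the decimal definition of a petabyte, 10^15 bytes) shows the sustained transfer rate a one-petabyte-per-day system must maintain around the clock:

```python
# What sustained data rate does 1 PB/day imply?
petabyte = 1e15                    # bytes (decimal definition)
seconds_per_day = 24 * 60 * 60

rate = petabyte / seconds_per_day  # bytes per second
print(f"{rate / 1e9:.1f} GB/s sustained")   # ~11.6 GB/s, around the clock
```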

Beyond the sheer amount of data being acquired, much of it will need to be processed in real time. Applications such as RF signal processing benefit immensely when demodulation, filtering, and fast Fourier transforms (FFTs) can be applied on the fly. For example, engineers can move beyond power-level triggering in RF applications and create custom triggers based on the frequency domain of the signal.
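
As a sketch of what such a frequency-domain trigger might look like in software (an illustrative example of the general idea, not a vendor API), the function below fires when the FFT of an acquired block shows in-band energy above a threshold:

```python
import numpy as np

def frequency_domain_trigger(block, fs, band, threshold_db):
    """Return True if spectral magnitude inside `band` (Hz) exceeds threshold_db.

    block: 1-D array of real samples; fs: sample rate in Hz; band: (low, high) in Hz.
    """
    windowed = block * np.hanning(len(block))          # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return magnitude_db[in_band].max() > threshold_db

# Example: fire on a 10 MHz tone in a 100 MS/s acquisition
fs = 100e6
t = np.arange(4096) / fs
block = 0.5 * np.sin(2 * np.pi * 10e6 * t) + 0.01 * np.random.randn(t.size)
print(frequency_domain_trigger(block, fs, band=(9e6, 11e6), threshold_db=20))  # True
```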

New high-performance, distributed architectures are required to transfer and process all of this data. These new high-performance architectures will share three key characteristics:

1. High-throughput, point-to-point topologies – The architecture must be able to handle the transfer of many gigabytes of data per second while allowing nodes to communicate with each other without passing data through a centralized hub.
2. Low latency – Data will need to be acquired and often acted upon in fractions of a second. There cannot be a large delay between when the data is acquired and when it reaches a processing node.
3. User-customizable processing nodes – The processing nodes must be user-programmable so that analysis and processing can meet the user’s exact test system needs.

Very few distributed architectures have been able to meet all three of these criteria. For example, Ethernet provides an effective point-to-point topology with a diverse set of processing nodes, but its relatively high latency and modest throughput make it poorly suited to inline signal processing and analysis. The architecture that has seen the most initial success, and holds the most future promise, in meeting these criteria is PCI Express. Having formed the core architecture of virtually every PC and laptop for much of the last decade, PCI Express was designed specifically for high-throughput, low-latency transfers. It provides throughput of up to 16 GB/s (soon to be 32 GB/s) with latencies of less than a microsecond.
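
As a rough illustration of that headroom (the digitizer specifications here are hypothetical numbers chosen for the example, not a specific product), a quick check of whether a multichannel instrument's stream fits on the bus:

```python
# Hypothetical digitizer: 4 channels, 16-bit samples, 1 GS/s per channel
channels, bits, sample_rate = 4, 16, 1e9
stream_rate = channels * (bits / 8) * sample_rate   # bytes per second

pcie_throughput = 16e9                              # ~16 GB/s, per the figure above

print(f"digitizer stream: {stream_rate / 1e9:.1f} GB/s")                      # 8.0 GB/s
print(f"bus headroom:     {(pcie_throughput - stream_rate) / 1e9:.1f} GB/s")  # 8.0 GB/s
```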

One place where PCI Express is already seeing use as a distributed architecture is in military and aerospace applications. In defining its next-generation test systems, the U.S. Department of Defense Synthetic Instrument Working Group identified PCI Express as the only bus capable of providing the data throughput and latency required for user-customizable instrumentation. This architecture is now seen in BAE Systems’ synthetic instruments that use PCI Express to stream downconverted and digitized RF data directly to separate FPGA processing modules for inline signal processing.

Enhancements are still being made to PCI Express to further its capabilities in peer-to-peer applications. These include cabled solutions that bring PCI Express out of the computer and enable low-latency communication over distances of up to 100 meters. The industry is also working to ensure that these high-performance distributed systems are compatible with components from multiple vendors. In September 2009, the PXI Systems Alliance released the PXI MultiComputing Specification, which defines the hardware and software interfaces needed to realize this goal.

Beyond just the physical architecture of these systems, peer-to-peer computing will change how engineers configure and program their test systems. With many disparate processing nodes, engineers will need new tools to visualize and direct the flow of data. Software development environments will also need to evolve to abstract away the intricacies of programming FPGAs, GPUs, x86 processors, and more.

Peer-to-peer computing using high-performance, distributed architectures is still early in its application, and there are many innovations still to come. With exponentially growing amounts of data and increasingly complex testing requirements, test engineers will need to learn how best to apply these new technologies to create smarter test systems.
