PXImc - PXI MultiComputing Tutorial

PXImc is the PXI MultiComputing standard, defined in specifications PXI-7 and PXI-8, which enables faster, distributed processing in test and data acquisition systems.




There has been a significant increase in the requirements for test systems as technology has advanced.

While PXI has been a very successful test standard, there has been the need to move it forwards to meet the increasing needs of testing and data acquisition.

To achieve this, the PXImc, or PXI MultiComputing standard was introduced with the aims of providing high performance, scalability, vendor interoperability and high levels of flexibility.

Computing has become increasingly reliant on transferring large amounts of data to achieve the required levels of performance, and this data transfer is one of the key issues that PXImc addresses.

PXImc background

The PXImc, or PXI MultiComputing, standard is detailed in the PXI Systems Alliance specifications PXI-7 for the hardware and PXI-8 for the software, which were announced in November 2009.

Together these specifications define the hardware and software requirements that enable multiple systems based around a PCI or PCI Express interface to communicate using a Non-Transparent Bridge. This allows data exchanges of several gigabytes per second to be achieved, with latencies in the microsecond range in many cases.

The advent of PXImc gives users an off-the-shelf solution for a multiple-controller system. Prior to this, there were two options for developing multi-chassis PXI systems. The first was to use MXI-Express, where the system communicated over a high-bandwidth, low-latency cabled PCI Express link. However, this approach was limited to a single system controller.

The second alternative was to use an Ethernet link to connect the multiple PXI systems. This approach enabled each PXI chassis to have its own controller, but the disadvantage was that it reduced the available bandwidth and increased the latency.

The use of PXImc allows a far higher performance system to be created. It allows PXI systems, each with its own system controller, to communicate over cabled PCI Express. The standard also supports communication between a variety of computers and stand-alone instruments. A further advantage is that PXImc supports the use of multiple processing modules inside a single chassis. This is a distinct advantage because it allows the processing power to be scaled according to the requirements, and in turn this enables additional PXI features to be included, one of which could be enhanced redundancy for systems where highly reliable performance is key.

PXImc basics

As outlined, the aim of PXImc was to provide a high bandwidth, low latency interconnect between computing devices whether they are used in a modular format or as standalone devices.

In this way PXImc can be thought of as a local network for PXI and PXI Express.

Using PXImc, multiple PXI systems can be connected provided that they are all PXImc enabled. This enables several systems to work in conjunction while remaining separate, providing the power of several processors operating in parallel.

When using traditional PXI or PXIe technology it is not possible to connect two or more systems together because of various contentions between their PCI domains. Issues such as bus ownership and endpoint resource allocation prevent this from being possible.

Connecting two PCI systems (on which PXI is based) for multicomputing requires the two systems to remain logically separate while still allowing PCI traffic to flow between the two memory spaces.
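As a rough illustration of this idea, the sketch below (in C, with entirely hypothetical names, sizes and addresses) models how a non-transparent bridge forwards accesses: anything that lands inside the local aperture is redirected to a translated base address in the peer's memory space, so the two domains can exchange traffic without ever sharing a single address map.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical translation window, modelled on how a non-transparent
 * bridge redirects accesses: anything that falls inside the local
 * aperture is forwarded to a translated base address in the peer's
 * memory space. Names and values are illustrative, not from the spec. */
typedef struct {
    uint64_t local_aperture_base;   /* BAR window as seen by this domain */
    uint64_t aperture_size;         /* size of the window                */
    uint64_t peer_translated_base;  /* where it lands in the peer domain */
} ntb_window_t;

/* Translate a local address into the peer domain, or return 0 if it
 * falls outside the window (the bridge would not forward it).          */
static uint64_t ntb_translate(const ntb_window_t *w, uint64_t local_addr)
{
    if (local_addr < w->local_aperture_base ||
        local_addr >= w->local_aperture_base + w->aperture_size)
        return 0;   /* outside the aperture: not forwarded */
    return w->peer_translated_base + (local_addr - w->local_aperture_base);
}

int main(void)
{
    ntb_window_t win = {
        .local_aperture_base  = 0x90000000ULL,  /* example BAR address */
        .aperture_size        = 0x00100000ULL,  /* 1 MiB window        */
        .peer_translated_base = 0x40000000ULL,  /* example peer memory */
    };

    uint64_t local = 0x90001000ULL;             /* a write in the window */
    printf("local 0x%llx -> peer 0x%llx\n",
           (unsigned long long)local,
           (unsigned long long)ntb_translate(&win, local));
    return 0;
}
```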

Basic PXImc system architecture, showing the bridge elements and the endpoints within the Primary System Host and the PXImc Device

There are several key elements in the PXImc system:

  • Non-Transparent Bridge, NTB:   The key to the operation of PXImc is a new element known as a Non-Transparent Bridge or NTB. One of the endpoints connecting the two systems together contains the NTB. Each side of this bridge is located in a different hierarchy or system, but it can pass data between the domains.

    Each side of the non-transparent bridge appears as an endpoint, and neither system is aware of anything beyond its near endpoint. Therefore, the bridge acts in a non-transparent fashion. This keeps the systems separate while still allowing them to communicate quickly enough for the multicomputing elements to work together effectively.

    In other words, the NTB addresses the contentions by logically separating the two communicating PCI domains. It also provides a mechanism for translating some PCI transactions in one of the domains into corresponding transactions in the other. In this way, multiple domains are able to communicate without being directly linked; a sketch of such an exchange is given after this list.
  • Primary System Host:   The primary controller in a system is known as the Primary System Host. When a Non-Transparent Bridge is used, the host controller cannot see the instrument cards on the far side of the bridge; it sees only the PXImc device and the instruments connected directly to it. The PXImc device and the instruments behind it appear as a single PXI node to the Primary System Host.

    The specification for the Primary System Host is flexible in a number of respects: it is not required to be of any particular form factor, so it may be a PXI System Module in a PXI Chassis, a PXI-Express System Module in a PXI-Express Chassis, a PC, or another computing system. It is the function described above that defines it.
  • PXImc device:   This is a PXImc sub-system that contains a number of elements:

    • Local processor
    • Memory
    • PCI Host Bus Bridge or PCI-Express Root Complex
    • PXImc Logic Block
    The PXImc Logic Block consists of the logical PCI non-transparent bridge and the associated circuitry needed to work with clocking references from both the Primary System Host and the subsystem of the PXImc Device. Communication between the Primary System Host and the PXImc Device occurs via the NTB. External subsystems, including stand-alone instruments, normally connect to the system via a Primary System Host using a PCI Express Cabled Interface.
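To give a feel for how a host might exchange data with a PXImc Device through the NTB, the sketch below outlines one plausible flow on a Linux host: map the bridge's translated memory window, copy a payload into it, and write a doorbell register to notify the peer processor. The sysfs paths, window size and register offset are assumptions made purely for illustration; real values depend on the particular device and its driver.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical resource paths and register offsets: real values depend
 * on the particular PXImc device and its driver. The flow, however, is
 * typical of NTB-style communication: copy the payload into the memory
 * window that the bridge translates into the peer's memory, then write
 * a doorbell register to tell the peer's processor that data arrived. */
#define NTB_WINDOW_RES   "/sys/bus/pci/devices/0000:05:00.0/resource2"
#define NTB_REGS_RES     "/sys/bus/pci/devices/0000:05:00.0/resource0"
#define WINDOW_SIZE      (1 << 20)      /* assumed 1 MiB data window   */
#define REGS_SIZE        4096
#define DOORBELL_OFFSET  0x40           /* assumed doorbell register   */

int main(void)
{
    int win_fd = open(NTB_WINDOW_RES, O_RDWR | O_SYNC);
    int reg_fd = open(NTB_REGS_RES,   O_RDWR | O_SYNC);
    if (win_fd < 0 || reg_fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the translated data window and the bridge's register block. */
    void *win_map = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, win_fd, 0);
    void *reg_map = mmap(NULL, REGS_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, reg_fd, 0);
    if (win_map == MAP_FAILED || reg_map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    uint8_t *window = win_map;
    volatile uint32_t *regs = reg_map;

    /* Copy the payload; the bridge forwards these writes into the
     * PXImc Device's local memory.                                    */
    const char msg[] = "block of acquisition data";
    memcpy(window, msg, sizeof msg);

    /* Ring the doorbell so the peer processor knows data has arrived. */
    regs[DOORBELL_OFFSET / sizeof(uint32_t)] = 1;

    munmap(win_map, WINDOW_SIZE);
    munmap(reg_map, REGS_SIZE);
    close(win_fd);
    close(reg_fd);
    return 0;
}
```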

PXImc topologies

There are many possible system topologies that could be used to expand a system generically. The topology supported by the PXImc specification is a tree or star configuration, in which all PXImc devices connect to a central Primary System Host.

The Primary System Host contains a computing device and external expansion switches to expand the PCI or PCI-Express based connectivity.

There are a number of ways in which this star-based connectivity can be implemented in practice.

Slightly different topologies are required for PCI and PCI-Express based systems, i.e. PXI and PXI-Express systems.

PXImc topology for PCI / PXI based systems
PXImc topology for PCI Express / PXI Express based systems

These diagrams show the basic logical connection topologies in which PXImc Devices are linked to the Primary System Host. Within these topologies, each PXImc Device acts as an endpoint to the Primary System Host.
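The star arrangement can be pictured with a small model: each PXImc Device is an endpoint of the Primary System Host, so one simple pattern for moving data between two devices is to stage it through the host. The sketch below is purely illustrative; the structure names and the relay pattern are assumptions for the example, not part of the specification.

```c
#include <stdio.h>
#include <string.h>

/* A minimal, purely illustrative model of the star topology: every
 * PXImc Device is an endpoint of the Primary System Host, so in this
 * sketch data moving between two devices is staged through the host
 * rather than flowing directly between them.                         */
#define MAX_DEVICES 8
#define BUF_SIZE    256

typedef struct {
    char name[32];
    char memory[BUF_SIZE];   /* stands in for the device's local memory */
} pximc_device;

typedef struct {
    pximc_device devices[MAX_DEVICES];
    int          count;
    char         staging[BUF_SIZE];  /* host buffer used to relay data */
} primary_system_host;

/* Relay data from one endpoint to another via the central host. */
static void host_relay(primary_system_host *host, int src, int dst)
{
    memcpy(host->staging, host->devices[src].memory, BUF_SIZE);
    memcpy(host->devices[dst].memory, host->staging, BUF_SIZE);
}

int main(void)
{
    primary_system_host host = { .count = 2 };
    strcpy(host.devices[0].name, "digitizer-processor");
    strcpy(host.devices[1].name, "analysis-processor");
    strcpy(host.devices[0].memory, "captured waveform block");

    host_relay(&host, 0, 1);
    printf("%s received: %s\n",
           host.devices[1].name, host.devices[1].memory);
    return 0;
}
```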

PXImc applications

Although it is possible to increase the speed of various PXI applications using techniques such as FPGAs in the PXI modules to improve both speed and flexibility, there are applications where the power of a multicomputing platform is of significant benefit. These are applications where it is necessary to transfer large amounts of data swiftly and with very low levels of latency.

Examples of where a multicomputing platform may be needed include signal intelligence applications and advanced simulations in which high-throughput data links are required between intelligent systems. Here, large volumes of data may be collected and may require very fast processing.
