
Design Talk - Communications Technology

Tue, 07/21/2009 - 6:57am

Editor's Note (Alix Paultre): Communications are a keystone of our information-based economy and society. The next-gen world of ubiquitous computing and cloud data services will rely on high-quality communications, in the user interface, over RF, and on the wire, to truly exist. Here are some design articles to help you in your communications design tasks.


The Importance of Matching RF Output Impedance throughout the Design Cycle

By Bruce Webber, ANADIGICS, www.anadigics.com

Cost pressure is as intense in RF design as anywhere else in the engineering community, and reducing component count is one way that RF designers can keep their BOM cost under control.  Suppliers support the drive to a lower parts count by integrating components like couplers and decoupling capacitors, but there is still pressure to eliminate optional components early in the RF device design cycle to reduce cost and minimize board area.

Despite this pressure to reduce component counts and board area to a bare minimum early in the design cycle, RF engineers need to remember to give themselves options for matching RF output impedance throughout the design cycle.  A simple 2-component match between the power amplifier output and other components in the RF path can, theoretically, give good performance over a narrow band of frequencies.  Good performance over wider bandwidths, and compensating for expected variation in component characteristics due to production tolerances and changes in temperature or voltage usually requires an additional element or two.
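As a concrete illustration of that extra margin, here is a minimal sketch of the classic lowpass L-network design equations, assuming a purely resistive load; the function name and component values are illustrative, not drawn from any ANADIGICS design.

```python
import math

def l_match(r_source, r_load, f_hz):
    """Lowpass L-network (series L at the load, shunt C at the source)
    matching a purely resistive load up to a larger source resistance.
    Returns (inductance_H, capacitance_F)."""
    if not 0 < r_load < r_source:
        raise ValueError("this topology assumes 0 < r_load < r_source")
    w = 2 * math.pi * f_hz
    q = math.sqrt(r_source / r_load - 1)  # loaded Q is fixed by the two resistances
    return q * r_load / w, q / (r_source * w)

# Example: match a 10-ohm PA output up to 50 ohms at 1.9 GHz
L, C = l_match(50.0, 10.0, 1.9e9)
```

Note that the loaded Q, and hence the bandwidth, is dictated entirely by the impedance ratio; only a third element (a pi or T network) lets the designer choose Q independently, which is exactly why keeping those extra pads matters.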

Keeping options for additional matching components avoids being trapped in a “forbidden” zone on the Smith chart.  For each type of 2-element matching circuit, there is a region of the Smith chart from which it is impossible to transform, no matter what values are chosen for the elements.  Eliminating pads and restricting the network to only capacitive elements can prevent the designer from meeting performance goals that are critical to the target market.

Keeping options for additional matching components is also insurance against changes in performance as the design progresses from paper to production.  Changes in board layout, RF components, or new system requirements can require new matching circuitry.  It’s good to have the option for component placement, even after the device goes to production.  Pads are easy to take off from one spin to another, but hard to put back when you find that you need them in a later version.  Keep your matching options open for as long as possible. 

It’s OK being non-PC!

By Gordon McNab, Future Devices, www.ftdichip.com

USB is such an enabling technology that it continues to be integrated into more diverse peripherals and more sophisticated user interfaces that are universally supported.
One thing that doesn’t differ throughout all these applications is the need for a USB host, sometimes referred to as the USB master. This is the controller which enumerates the slave USB peripherals and allows them to communicate with the master device.

Today this is normally a PC. Indeed, the USB interface has become so pervasive in PCs that supporting it now differentiates some manufacturers’ products, with more USB ports integrated in every generation of laptop or desktop.

It’s hard to believe that the adoption of USB was inhibited in the early days by a lack of operating system support. Today, for any desktop or laptop OS to become and remain ‘mainstream’, USB support is a fundamental requirement. While many high-end embedded operating systems now support USB host controller functions and USB classes, they are often real-time operating systems targeting larger 32-bit microprocessors and their extensive resources.

Thanks to continued improvements in processor architectures and clock speeds, the vendors of some low-end microcontrollers can now provide support for a USB slave interface. Implementing a USB host controller on such a device, while technically possible, may be limited to supporting only a small subset of all possible USB peripheral classes, a limitation imposed by the available processing power or memory.
By moving the USB host controller out of the software domain and into the hardware domain, native USB host support can be added without more complicated software, and the CPU limitations are avoided altogether.

As a bus, USB allows multiple connections to be controlled by a single host using a rugged protocol that can sustain high data throughput. But supporting it is commensurately more difficult, which is why its use in embedded devices is often limited to peripherals. FTDI recognised early on that if the connectivity offered by USB could be extended beyond the desktop PC domain, it would also extend the same level of freedom and flexibility it provides PCs to embedded devices. This would allow them to benefit equally from USB. By abstracting the complexity of USB, FTDI’s solutions provide that freedom and flexibility.

Fundamental to delivering this is FTDI’s Vinculum family of embedded USB host controller ICs, which target low-speed and full-speed USB 2.0 implementations, with support for legacy USB 1.0 devices. The first member of the family, the VNC1L, offers two USB ports, each of which can be configured as a host or slave. The VNC1L also provides up to 28 general-purpose I/O pins, and can be configured to support UART, SPI or parallel FIFO interfaces for communicating with an application processor.

At the heart of the VNC1L is a proprietary 8-bit embedded processor, which employs a Harvard architecture and a CISC instruction set, enabling it to deliver RISC-like performance with the additional benefit of excellent code density. Complementing this is a 32-bit numerical co-processor used for complex arithmetic functions, such as the calculations required by the FAT file system used in mass storage devices. The architecture also has its own bus structure and memory, including 64 kB of embedded Flash used to store the firmware (see Figure 1). Controlling its operation is made simpler through the use of high-level commands, enabled by the supporting firmware provided by FTDI.
It is the combination of this powerful embedded processor architecture and the optimised firmware that enables the VNC1L to deliver full USB host functionality with very little engineering overhead.
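As a flavour of the arithmetic such a co-processor handles for the FAT file system, here is a minimal sketch of cluster-to-sector mapping; the volume geometry shown is illustrative, not taken from any real VNC1L configuration.

```python
def cluster_to_lba(cluster, data_start_lba, sectors_per_cluster):
    """Map a FAT cluster number to its first logical block address.
    FAT numbers clusters from 2, so cluster 2 is the first cluster
    of the data area."""
    if cluster < 2:
        raise ValueError("FAT cluster numbers start at 2")
    return data_start_lba + (cluster - 2) * sectors_per_cluster

# Example: a volume with 8 sectors per cluster, data area at LBA 2048
assert cluster_to_lba(2, 2048, 8) == 2048
assert cluster_to_lba(5, 2048, 8) == 2048 + 3 * 8
```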

Although it can be viewed as a sub-system, the VNC1L is essentially a ‘smart peripheral’ and as such must be controlled via a serial or parallel monitor interface. This is typically achieved using the microprocessor or microcontroller responsible for the main application; however, because all the ‘heavy lifting’ needed to deliver USB connectivity is handled by the VNC1L itself, this could be as simple as an 8-bit PIC microcontroller connected via the UART or SPI interface. Commands and data are passed between the VNC1L command monitor and the application processor via the monitor interface. This simple interface requires only a few commands in order to take full advantage of the USB connection.
This abstraction of the complexity behind implementing a USB connection is the key to VNC1L’s value; it enables any embedded device to act like a PC for the purposes of interfacing to a USB peripheral.

While the value of VNC1L is in its simplicity, the real intelligence is in its support for different USB classes without imposing any software overhead on the application processor. This is achieved, as mentioned earlier, through the optimised firmware developed by FTDI’s engineering team and targeting the VNC1L proprietary 8-bit CISC processor.
Each VNC1L is manufactured with no firmware loaded. An application must first down-load the required firmware to the on-board Flash via the UART interface. There are several compiled firmware builds available for the VNC1L, supporting slave peripheral devices including USB Flash disks, as well as devices that are compatible with Printer and Human Interface Device (HID) classes. Communications Device Class (CDC) used to communicate with, for example, a mobile phone, is also supported by a specific firmware configuration.
Communication with any supported USB device is achieved by sending simple instructions to the VNC1L control monitor. Implementing this command set in a low-cost microcontroller requires few resources, as instructions can be encoded in either ASCII or binary/hex, using the extended- or short-instruction format respectively. When a USB device is first connected to a host controller, the controller must enumerate the device, assessing its class and its functionality. Again, this is handled by the VNC1L and its firmware and requires no system interaction.

As an example, an instruction to list the contents of a directory would require sending DIR (or a short-instruction of 01 0D in Hex) from the application processor to the VNC1L controller. When the mass storage device is first connected, VNC1L enumerates it and invokes the correct class driver from firmware. Once this is completed, issuing the DIR instruction would return the content listing of the currently selected directory in the (externally connected USB based) mass storage device.
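The two command formats can be pictured as a small framing helper. The DIR mnemonic and the 01 0D opcode are taken from the example above; anything beyond that should be checked against FTDI's Vinculum firmware documentation.

```python
def frame_command(ascii_cmd, short_code=None, use_short=False):
    """Build the byte sequence for a Vinculum monitor command.
    Extended (ASCII) commands are the mnemonic followed by a carriage
    return; short commands are a one-byte opcode followed by 0x0D.
    Only the DIR example from the text is assumed here."""
    if use_short:
        if short_code is None:
            raise ValueError("short-instruction form needs an opcode")
        return bytes([short_code, 0x0D])
    return ascii_cmd.encode("ascii") + b"\r"

assert frame_command("DIR") == b"DIR\r"
assert frame_command("DIR", short_code=0x01, use_short=True) == b"\x01\x0d"
```

In a real design these bytes would simply be written to the UART or SPI channel connected to the VNC1L, and the monitor's response parsed in the same way.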

Similarly, FTDI provides a firmware build that will stream MP3 encoded music files directly to an external decoder, while relaying track information to the application processor, to be displayed for example on an LCD. Another firmware build, VCDC, can be used to send AT commands to a mobile phone connected to the VNC1L device, allowing a low-cost embedded application to send and receive data across a 3G network. This example could be coupled with a GPS receiver to create a cost-effective asset tracking device.

The prevalence of USB, and perhaps more significantly of USB-based peripherals, makes it a very powerful bus architecture, providing a highly accessible framework for PCs. Now, thanks to products like Vinculum, non-PC embedded systems can also benefit from this expanding ecosystem.

The CPU Cycle per TCP Bit Revisited

By Steve Pope, Solarflare Communications, www.solarflare.com 

There is a commonly held engineering rule of thumb that states: for TCP network processing, one processor cycle, on average, is required to process one bit of data. The ramification is that a 10 Gigabit per second (10 Gb/s) capable system should either have the equivalent of 10 GHz of single-core CPU cycles available, or else employ some specialized hardware offload architecture to assist network processing.

For history and reference, the seminal paper on TCP protocol performance (“An Analysis of TCP Processing Overhead,” Clark, Jacobson, et al., IEEE Communications, 1989, Volume 27, Number 6) is a good starting point. In this paper, the authors show that, for a contemporary system, the cost of per-byte operations swamped protocol processing costs, and they highlight the overhead of a double memory copy on top of TCP/IP checksum processing. The authors, however, advised that “putting protocols in hardware was not required” to achieve good performance, and demonstrated impressive protocol-processing efficiency (around 200 instructions for a 1460-byte packet) through TCP header prediction and interleaved checksum/copy techniques.

From this high point of RISC (reduced instruction set computer) workstation efficiency, most of the 1990s revolved around the performance of commodity x86 architectures. These platforms were initially unsuited to high-performance I/O, and the experiences of engineers on these platforms, especially around the mid 1990s, led to the genesis of the cycle-per-bit argument.
 
However, by the early to mid-2000s, the efficiency of commodity x86 hardware and operating systems had significantly improved, particularly with the advent of core scaling and improved interrupt handling techniques. For example, measurements in 2009 using the Solarflare Ethernet controller with TCP/IP checksum offload enabled showed that 10 Gb/s TCP/IP is achievable using approximately 10% of a 3 GHz Nehalem x86 architecture or approximately 50% of a 1.2 GHz low-power Intel IOP342 XScale CPU.

Given that both these CPUs are multi-core, performance measurements should be normalized against the number of cores available in the system. The IOP342 is a dual-core CPU, and in the experiment all network processing was constrained to a single core. Simplistically, then, 10 Gb/s here consumed 1.2 GHz of single-core-equivalent cycles.

Similarly, the Nehalem x86 is a four-core CPU. Although processing was not constrained in that experiment, a valid interpretation is that 10 Gb/s likewise consumed 1.2 GHz of single-core-equivalent cycles. Again, this is almost an order of magnitude better than the rule of thumb.
 
For both of these systems, the rule of thumb should be corrected, perhaps to, “For TCP network processing, one processor cycle, on average, is required to process one byte of data.”
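The normalization behind the corrected rule can be checked with a few lines of arithmetic, using only the figures quoted above.

```python
# Worked check of the corrected rule of thumb against a 10 Gb/s TCP stream.
throughput_bps = 10e9
bytes_per_s = throughput_bps / 8              # 1.25e9 bytes/s

# Single-core-equivalent cycles consumed, per the measurements above
nehalem_hz = 0.10 * 4 * 3.0e9                 # 10% of four 3 GHz cores -> 1.2 GHz
xscale_hz = 0.50 * 2 * 1.2e9                  # 50% of two 1.2 GHz cores -> 1.2 GHz

cycles_per_byte = nehalem_hz / bytes_per_s    # ~0.96: about one cycle per byte
cycles_per_bit = nehalem_hz / throughput_bps  # ~0.12: far below one cycle per bit
```

Both platforms land at roughly one cycle per byte, which is the corrected rule stated above.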

The ramifications of this data are not simply academic. Where embedded systems with high network I/O requirements are being built, in storage target applications for example, a commodity architecture may be amply capable of meeting the system price/power/performance targets. Compared with a specialized hardware offload architecture, product developments based on a commodity solution enjoy significant savings in life-cycle cost and time to market.

Customer Guidance Gets Great GUIs

By Mike Juran, Altia, www.altia.com

When viewing Graphical User Interfaces (GUIs) developed by teams that are new to the process, the quote, "If I had more time, I would have written a shorter letter"  often springs to mind.  Inexperienced teams tend to design their software interfaces by making every feature available with as few clicks as possible.  Although that solution is not always wrong, it can often prove to be ineffective.  Because the team does not know which features to emphasize and which features to remove, they add them all – and wind up with a crowded, complicated GUI that frustrates their customers.  Other companies, like Apple, Sony, Bose and Honda, are great at determining just the right combination of features for their product displays, putting out sleek and sophisticated products that quickly take a lead in the market.  How do they do it?  More importantly, how can you do it? 

How do you know which features are important to your audience – and, therefore, which to include on your GUI?  The secret is to know your audience – or, more importantly, to forget what you know and ask them!  The trick is to ask the right way.  Surveys and questionnaires just do not cut it in the world of interface design.  What is described is seldom what the participant envisions.

The best, most actionable feedback comes from running customers through simulations, virtual prototypes and slideshows: the more fidelity, the better.  Let customers test your design and find the hiccups early.  Discover which common tasks simply confound them.  And then repeat this process as often as your development schedule can bear.  Your customers might give you a detailed list of their wants for a product, but once they test it they will undoubtedly change their minds.  The more iterations you can provide for your test audience, the better your end design will be.
System designers too often make critical assumptions about which tasks are frequent or which methodologies make sense, without a single test case.  Getting the proposed solution in front of customers removes ambiguity and provides priceless data that you can act upon.  With an actual hands-on simulation of the proposed feature, you tap into years of ingrained habits in your target audience.  Accurate, interactive models will get the most out of your sessions, removing the ambiguity that text or static drawings often introduce.  You will need far fewer passes to get to meaningful data, with as few as five or ten subjects.  The patterns will emerge.


Once they do, the designer’s job is much easier.  Aspects of the interface previously thought to be critical can be moved to secondary screens or combined with other functionality.  Getting at this unspoken mindset in your audience is how simple – yet revolutionary – product innovations are born.
It all starts with building a working model first and getting it in front of customers before you engineer it.  Take more time to listen to what your customers want out of your product – and deliver a product your customers will want to buy.

Comparing Wireless Standards and Means of Wireless Communication

By Daryl R. Miller, Lantronix, www.lantronix.com

Over the past decade, wireless technology has revolutionized access to information. From the office to the home, wireless connectivity for most computing devices has become readily available, but the appeal of wireless extends well beyond these applications. Increasingly, users and manufacturers in physical security, healthcare, fleet management, retail, industrial automation and other businesses seek to improve the value of their products and services by adding untethered network connectivity.

Designers who want to capitalize on the growing acceptance of wireless in their product must consider factors including certification and regulatory requirements, power usage, data throughput, data security, physical size, and perhaps most critically, which wireless standards to adopt.

Wireless standards (see Table 1)

Table 1. Short-range wireless standards compared

                     802.11 (WLAN)        802.15.4 (WPAN)       802.15.1 (WPAN)
                     “Wi-Fi”              “ZigBee, others”      “Bluetooth”
  Frequency bands    2.4 & 5 GHz          2.4 GHz worldwide;    2.4 GHz
                                          915 MHz U.S.;
                                          868 MHz Europe
  Range              100 meters           10-20+ meters         10 meters
  Rate               11, 54, 600 Mbit/s   20, 40, 250 kbit/s    1-3 Mbit/s
  Battery life*      Poor to Good         Excellent             Very Good
  Cost per node      $$$                  $                     $$

  * implementation dependent

IEEE 802.11 is a set of standards defining wireless local area network (WLAN) data communications in the 2.4 and 5 GHz frequency bands. The most popular are those defined by the 802.11b and 802.11g standards, which provide reasonable range and bandwidth.  The emerging IEEE 802.11n draft standard boasts throughput up to a whopping 600 Mbit/s, operates in the 5 GHz frequency band, and is also backwards compatible with 802.11a.  Why is a second frequency band important?  In many venues, the 2.4 GHz band is saturated.  Hospitals, for example, look to segment their wireless traffic between these two bands depending on the application (computers on one, medical equipment on the other).

IEEE 802.15.4 is a standard which specifies the physical layer and media access control for low-rate wireless personal area networks (WPANs), focusing on very low cost, low speed (below 250 kbit/s), and very low power consumption.  ZigBee, WirelessHART, MiWi, and 6LoWPAN are built on top of 802.15.4, each providing a complete networking solution.

IEEE 802.15.1 (aka Bluetooth) is another WPAN protocol, found deployed in devices that are transient members of small networks.  It is ubiquitous in cell phones and earpieces, and is also found in some HID (human interface device) products like keyboards, mice, and gaming controllers.  It typically consumes more power than 802.15.4 devices and costs more per node, but it supports higher data throughput (1 to 3 Mbit/s) and wider range.
For those needing longer-range communications, there are additional standards for cellular, satellite, and point-to-point data communications.
Wireless standards have driven interoperability between products from various vendors, and initiatives like the Wi-Fi Alliance have advanced this cause further.  Module vendors have made wireless enablement easier to implement, and advancing wireless technology standards are driving improved range, reduced power consumption, and improved network integrity.  Now is a great time to add wireless to an embedded product.
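As a sketch of how these trade-offs might be screened programmatically, the following encodes rough figures from Table 1. The numbers are illustrative simplifications; real designs should consult the current IEEE specifications and vendor datasheets.

```python
# Rough per-standard figures transcribed from Table 1 (illustrative only).
STANDARDS = {
    "802.11 (Wi-Fi)":       {"range_m": 100, "rate_kbps": 54000, "cost": 3},
    "802.15.4 (ZigBee)":    {"range_m": 20,  "rate_kbps": 250,   "cost": 1},
    "802.15.1 (Bluetooth)": {"range_m": 10,  "rate_kbps": 3000,  "cost": 2},
}

def candidates(min_range_m, min_rate_kbps):
    """Standards meeting minimum range and throughput, cheapest first."""
    fits = [(name, s) for name, s in STANDARDS.items()
            if s["range_m"] >= min_range_m and s["rate_kbps"] >= min_rate_kbps]
    return [name for name, s in sorted(fits, key=lambda kv: kv[1]["cost"])]

# A 15 m sensor link at 100 kbit/s fits ZigBee first; a 50 m high-rate
# link leaves only Wi-Fi.
```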

Securing Non-PC Cellular Data Communications - Are You Naked on the Internet?

By Kevin J. Weaver, Proxicast, www.proxicast.com

Cellular data services offered by wireless network operators provide a direct link to the public Internet.  The cellular company is your ISP, and your connection is subject to the same threats present on the traditional “wired” Internet: hackers, denial-of-service attacks, spyware, and so on.

Increasingly, cellular data networks are being used to provide connectivity for equipment like data loggers, cameras, PLCs, and sensors.  Protecting non-PC equipment is inherently more difficult due to its “closed” and often proprietary nature.

Simply connecting a modem to your equipment brings the Internet “inside” your device, making it nearly impossible to implement strong security.

Deploying a cellular data connection without implementing additional security solutions leaves your equipment unprotected and Naked on the Internet.

Common Security Misconceptions

A common belief is that “no one cares about my data”.

It’s probably true that your data itself is not interesting to hackers.  But hackers are interested in controlling your devices for their own purposes.  Creating spam-bots and back doors into your corporate network are often their true objectives.  Your current data might not be interesting, but the cellular link might someday be used for data that is.

There is also a perception that hackers focus on “high value targets” like bank financial systems, credit cards, and military sites, but most hacking involves indiscriminate “port-scanning” to find vulnerable sites. Hackers don’t know and don’t care what’s on the other end, as long as they can compromise it.

Non-PC devices are not immune to threats.  Industrial devices are typically not tested as thoroughly for security holes as desktop systems are, nor are they routinely patched or updated.  Linux, Windows CE and Windows Embedded are popular on newer, higher-function devices and carry the same risks as their desktop counterparts.  Vulnerabilities in proprietary systems may not be known until they are compromised.

Cellular Network Security

Carriers tout the security of their wireless networks – the networks are indeed secure.  But carriers are protecting their network, not your data and devices.  Encrypted radio links and device authentication do nothing to protect your connection from security threats.

Some carriers offer firewalls on their networks – others are “wide-open” and put the security burden on you.  Hackers know that most cellular data connections are not well protected.

Equipment is also vulnerable to “behind-the-firewall” attacks.  For example, an infected laptop somewhere else on the cellular network could attempt to access your devices.

Some carriers offer “private IP” solutions that don’t send your traffic across the Internet but require both ends of your connection to be part of the cellular network, increasing communications costs and creating a potential bandwidth bottleneck.  And you are still vulnerable to attacks from peer devices that are outside of your control.

Cellular Data Security Best Practices

The use of CDMA and GSM networks for remote data communications continues to grow dramatically; however, there is not yet a widely accepted set of security “best practices” (see sidebar).

The strongest security solution incorporates a Virtual Private Network (VPN) function into a complete cellular communications security “gateway” appliance that also provides firewall, routing, network-address translation, and other security functions (see Figure 1). 

A hardware-based VPN gateway (such as Proxicast’s LAN-Cell 2) offers the most secure means of communicating over the cellular Internet and easily enables devices such as PLCs, sensors and other non-PC equipment to communicate via the VPN with no additional hardware, software or configuration changes.

Best Practices for Cellular Data Security

• Physical Security
• NAT Router
• Strong Passwords
• Full-Featured Firewall
• Close Unused Ports
• Disable Unused Services
• Access Control Lists
• Log & Monitor Security Events
• VPN with Strong Encryption
• Digital Certificates

The best practices for securing remote cellular data communications involve a “layered” approach similar to strategies for protecting wired devices. 

Physical device security is a given. A NAT Router gives you a good first-line of defense but does not protect against all threats.  Strong passwords should be used everywhere passwords are required.

Couple the NAT Router with a full featured stateful packet inspection firewall and create rules that close all unnecessary ports and disable all unused services on your remote equipment.  Implement access control lists to limit who can access what and when.  Remotely log security events to monitor threats.  Implement a VPN with strong encryption and digital certificates for maximum protection.
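The "close all unnecessary ports" layer can be pictured as first-match rule evaluation with a default-deny policy. A minimal sketch follows; the addresses, ports, and rule set are illustrative, not from any particular firewall product.

```python
import ipaddress

# Illustrative rule table: checked in order, first match wins,
# anything unmatched falls through to a default-deny policy.
RULES = [
    ("allow", "10.0.0.0/8", 443),   # management/VPN traffic from the private WAN only
    ("deny",  "0.0.0.0/0",  23),    # telnet closed everywhere
]

def decide(src_ip, dst_port, rules=RULES, default="deny"):
    """Return the first matching rule's action, or the default policy."""
    for action, cidr, port in rules:
        if dst_port == port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return default

assert decide("10.1.2.3", 443) == "allow"
assert decide("8.8.8.8", 443) == "deny"   # default-deny catches unmatched traffic
```

The same default-deny mindset applies to each layer in the list above: nothing is reachable unless a rule explicitly permits it.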

By implementing these security best practices, you can go from being "Naked on the Internet" to being well defended against the most common threats.

To see the presentation “Naked on the Internet: Securing Remote Cellular Data Communications”, visit:
http://www.proxicast.com/slides/Naked_on_the_Internet.pdf
