
Design Talk: Elegant Design

Mon, 06/16/2008 - 5:04am

Adaptive Tune Addresses Control Needs
by Dave Meyer, Watlow, www.watlow.com
To begin to understand adaptive tune control, it is important to discuss what adaptive tune is not. When "auto tune" capabilities are mentioned, what is usually meant is "predictive tune," where the algorithm calculates what the proportional, integral and derivative values should be for the process loop being controlled. Several questions frame that calculation: How dynamic is the process? Are there overshoot problems? Are tighter control and increased accuracy important? Can consistent control help reduce scrap? Once the values have been set, control of the process variable is achieved by varying only the process output percentage. As long as the process is stable, this works reasonably well.
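As a rough illustration of that fixed-gain approach, the Python sketch below runs a PID loop whose gains never change once set; only the output percentage varies. The gains and limits are illustrative assumptions, not values from any vendor's controller:

class PID:
    """Fixed-gain PID loop: a sketch of "predictive tune" behavior."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def output(self, setpoint, pv):
        # Classic PID terms; the gains stay fixed for the life of the loop.
        error = setpoint - pv
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        raw = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(100.0, raw))  # clamp to 0-100% output power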

Adaptive Tune as Solution
Adaptive tune, as the name indicates, adapts to the dynamics of the process and will tune "on the fly," responding to certain process criteria as determined by the specifics of the adaptive algorithm being used. It changes the proportional, integral and derivative (PID) values to respond to changes in the process. When applied properly, it is of great value in taming hard-to-tune process loops, and it will also tune a typical process loop more precisely. Adaptive control algorithms can improve tuning in virtually any process because the user no longer needs to be a tuning expert, nor call one in. Even an expert cannot feasibly tune some processes, because they require re-tuning as conditions change. This is true for processes operated across a wide range of set points, such that the PID parameter values must differ at different set points. It is also true for processes that routinely undergo load changes, such as an exothermic chemical reaction or the shear heat that results from a plastic extrusion process. For such processes, adaptive control provides a better match of PID parameters that are automatically optimized. The question often arises whether adaptive tune will over-tune an application. Most adaptive algorithms will not over-tune a loop; however, the issue may merit asking the provider how that particular adaptive algorithm tunes. If the provider can't explain how it works, or is vague, dig further to ensure it provides what you are looking for. The ideal situation is an algorithm that continuously monitors process performance and adjusts the tune only when needed.

Built-in Expertise Improves Tune
Adaptive tune is ideal where a tuning expert is not available because it applies "built-in" expertise. All the operator must do is configure the sensor type and output type (such as time proportioning or burst fire), enter a set point, and set the control mode to tune; then the algorithm takes charge. Most applications are not so dynamic that they require adaptive tune, but virtually any process can be better tuned. The resulting PID settings (proportional band, integral reset, and derivative/rate) will better reflect the thermal characteristics of the process. When a process is well tuned, processed materials are kept closer to the target setting, which improves yield and reduces scrap and rework of mis-processed material. In addition, when the process variable tracks the set point better, the process spends less time warming up and stabilizing, so it is available and productive more of the time, which helps save capital and energy costs. Most adaptive algorithms work well across a range of process types, from faster- to slower-responding loops. A faster-responding process often calls for a higher proportional value, a lower integral value, and in some cases even setting the derivative to zero, whereas a slow-responding loop will typically call for a lower proportional value and a higher integral value. Adaptive tune automatically compensates for these differences in requirements. It takes the experience of control experts and packages it in the algorithm, making it straightforward for the user to implement.

The Difference is in the Algorithms
While there are similarities among the different adaptive tune algorithms, each has its differences. The following shares some insight into how Watlow implements its adaptive tune, called TRU-TUNE+. We will look at two features, "Tune Band" and "Tune Gain," what they mean, and what they do for users. "Tune Band" defines a band around the set point. When the process variable is within this band, TRU-TUNE+ adaptively tunes the PID parameters; when it is outside the band, no tuning is performed. This prevents undesirable de-tuning of the PID parameters. "Tune Gain" is the parameter that determines how responsive the algorithm will be to deviations from set point and to set point changes. Since responsiveness is actually a user preference, dependent on the relative importance of preventing overshoot versus minimizing time-to-setpoint, this parameter is not set automatically and may be changed by the operator. There are six settings, ranging from 1, with the least aggressive response and least potential overshoot (lowest gain), to 6, with the most aggressive response and most potential for overshoot (highest gain).
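A minimal sketch of the tune-band gating idea, building on the PID class above. This illustrates only the concept of adapting inside a band around the set point; adapt_gains is a hypothetical placeholder and does not represent the TRU-TUNE+ algorithm itself:

TUNE_BAND = 5.0  # width around the set point within which tuning runs (assumed)

def adapt_gains(pid, setpoint, pv):
    # Placeholder: a real adaptive algorithm would adjust kp/ki/kd here
    # based on the observed process response.
    pass

def control_step(pid, setpoint, pv):
    # Adapt the PID gains only while the process variable is inside the
    # tune band; outside it, hold the gains to prevent de-tuning.
    if abs(setpoint - pv) <= TUNE_BAND:
        adapt_gains(pid, setpoint, pv)  # hypothetical adaptive-tune routine
    return pid.output(setpoint, pv)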

A Case Study
A leading manufacturer of trailer-mounted portable decontamination systems, including heated showers, needed precise boiler temperature control for water used to decontaminate large numbers of people quickly in response to Hurricane Katrina in 2005. For this application, it was critical that the shower water temperature be maintained at precisely 92°. If the water was too cool, the hazardous material might not be successfully removed. If the water was too hot, people could be scalded, or the pores of their skin could open, increasing their exposure to the very chemicals the process was designed to remove. The company developed three- and four-boiler trailers to increase decontamination capacity, but controlling this number of boilers proved difficult: earlier systems with more than two boilers experienced unacceptable water temperature fluctuations when showers were turned on or off, and prior attempts with other products failed to control this very dynamic system at a test facility. A multi-loop temperature controller with an adaptive algorithm was able to tune the loops automatically, minimizing setup time and effort. It also provided optimal performance by fine-tuning loops more precisely than conventional auto tune features could, and it delivered stable control through set point and load changes. When deployed, the decontamination equipment has to provide precise water temperature control whether the emergency happens in the blazing heat of New Orleans or the frigid cold of Minnesota.


Don’t Get Burned in DC/DC Thermal Management
by Carl Schramm, RECOM, www.recom-international.com
The efficiency of any energy conversion process is always less than 100 percent. Waste is a fact of life: part of the energy used in any system goes astray and is converted into heat, and ultimately this waste heat must be removed. Since the laws of thermodynamics state that heat energy can only flow from a warmer to a colder environment, the ambient temperature must be lower than the internal temperature if the internal heat of a device is to be dissipated. The smaller this difference is, the less heat energy can be dissipated.

Temperature Terms
Which temperature specifications should you consult for your calculations? RECOM declares two values in its datasheets: the operating temperature range, with or without derating, and the maximum case temperature. Some manufacturers treat these two values as the same. The maximum case (surface) temperature of many DC/DC modules is typically given as +100°C or +105°C. This value appears at first to be very high; however, the figure includes not only the self-heating from internal losses but also the ambient temperature itself. Remember that the smaller the difference between case surface and ambient, the less heat can be shed. A converter with higher internal losses is therefore more affected by a small temperature difference than a converter with lower internal losses. The internal losses arise mainly from switching losses in the transistors, rectification losses, core losses in the transformer, and resistive losses in the windings and tracks. The maximum allowable internal temperature is determined by the Curie temperature of the transformer core material, the maximum junction temperature of the switching transistors and rectification diodes, and the maximum operating temperature of the capacitors.
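A short worked example makes the case-temperature arithmetic concrete. All of the figures below are assumptions for illustration, not values from a RECOM datasheet: a module delivering 10 W at 85 percent efficiency dissipates about 1.76 W internally, and with an assumed 15 K/W case-to-ambient thermal impedance the case runs roughly 26 K above ambient, capping the usable ambient near +79°C for a +105°C case limit:

# Worked case-temperature example; every number here is an assumption.
P_OUT = 10.0         # output power, W
EFFICIENCY = 0.85    # converter efficiency (assumed)
RTH_CASE_AMB = 15.0  # case-to-ambient thermal impedance, K/W (assumed)
T_CASE_MAX = 105.0   # datasheet-style case limit, deg C

p_loss = P_OUT * (1.0 / EFFICIENCY - 1.0)  # ~1.76 W of internal losses
delta_t = p_loss * RTH_CASE_AMB            # ~26.5 K of self-heating
t_amb_max = T_CASE_MAX - delta_t           # ~78.5 deg C maximum ambient
print(f"Maximum ambient: {t_amb_max:.1f} degC")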

Exploring Solutions
If the thermal dissipation calculations reveal that the DC/DC module will overheat at the desired ambient operating temperature, a number of options are still available. One option is to derate the converter, i.e., use a higher-power converter running at less than full load. The derating diagrams in the datasheets essentially define the maximum load at any given temperature within the operating temperature range. In reality the derating curves are not as linear as most datasheets depict them; however, reliable manufacturers always err on the safe side, so the values given can be safely relied on in practice. If the converter has a plastic case, the next larger case size with the same power rating could be chosen to increase the available surface area. However, care must be taken not to compromise on efficiency; otherwise no net gain will be made. If the converter has a metal case, adding a heat sink can be very effective, particularly in conjunction with a forced-air cooling system. If a heat sink is used with fan cooling, the thermal resistance equation becomes:

RTH(case-ambient) = RTH(case-HS) + RTH(HS-ambient)

where:
RTH(case-ambient) is the thermal impedance from the case to the ambient surroundings,
RTH(case-HS) is the thermal impedance from the case to the heat sink, and
RTH(HS-ambient) is the thermal impedance from the heat sink to ambient.

The value of RTH(HS-ambient) incorporates the thermal resistance of the heat sink as well as the thermal resistance of any thermally conductive paste or silicone pads used to improve thermal contact with the case. If these aids are not applied, then a value of approximately 0.2 K/W must be added to the thermal resistance of the heat sink alone. When establishing the value of RTH(HS-ambient), it is also necessary to know how much air is being blown across the heat sink fins. These values are most often given in lfm (linear feet per minute) and declared by the fan manufacturer; the conversion is approximately 100 lfm = 0.5 m/s. If the results of your calculations or measurements are borderline, then the issue must be examined in more depth. Sometimes a simple fix is enough, but bear in mind that there is a difference in thermal performance between vertically and horizontally mounted modules, and between static air, moving air, and air at low atmospheric pressures.
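To sketch the heat-sink budget with assumed numbers (again, not datasheet values): a module dissipating 4 W whose case may rise at most 40 K above ambient needs a total case-to-ambient impedance of no more than 10 K/W; subtracting an assumed 4 K/W case-to-heat-sink path and the 0.2 K/W penalty for omitted thermal paste leaves the maximum heat-sink rating:

# Heat-sink selection sketch; all values are illustrative assumptions.
P_LOSS = 4.0         # internal losses to remove, W
DELTA_T_MAX = 40.0   # allowed case rise above ambient, K
RTH_CASE_HS = 4.0    # case-to-heat-sink impedance, K/W (assumed)
PASTE_PENALTY = 0.2  # added when no paste or pad is used, K/W

rth_budget = DELTA_T_MAX / P_LOSS                     # 10 K/W case-to-ambient
rth_hs_max = rth_budget - RTH_CASE_HS - PASTE_PENALTY
print(f"Choose a heat sink rated <= {rth_hs_max:.1f} K/W")  # 5.8 K/W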



Street Talk: Looking for that Elegant Solution
by John Ardizzoni, Analog Devices
I remember, years ago, a co-worker having a pottery jar on his desk with a label that read "Elegant Solutions." I peeked inside the jar; it was empty. It's tough to contain elegant solutions: once created, they tend to fly out of that jar exponentially faster than they entered it. Elegant solutions stem from creativity, necessity and ingenuity, not pure technical know-how. It's a challenge, a quest: accomplish a design with the fewest components, processes or lines of code that meets or exceeds the design requirements. Even though I hate to use the cliché "think outside of the box," frequently that is exactly what is called for. It's easy to "Rube Goldberg" a process or a device into overcomplication. How about a microprocessor-based toaster that provides 4,096 shades of brown? That's the easy way out; it's much more rewarding to design simply, rather than simply designing.

Street Talk: LED Design Challenges
by Tony Toniolo, Data Display Products, www.ddp-leds.com
Until recently, LEDs were typically used in lower-level lighting applications such as status indication and backlighting. With the development of high-power LEDs in white and monochromatic colors, LEDs can now be used in general illumination applications. However, the increase in power used to drive the new LED technology creates power and thermal challenges that must be addressed in the circuit and mechanical design. Most lighting designs operate from a high-voltage AC power source. Since LEDs are DC current-driven, employing a dedicated AC-to-DC power supply to provide a DC source voltage is often the most cost-efficient and reliable LED lighting solution. To ensure efficient LED operation, DC-to-DC LED driver circuitry may also be required in conjunction with the primary power supply. LED circuit designs should be tailored to the specifics of the application, because mechanical and economic constraints make it difficult to design a "catch-all" circuit. Generally, the greater the volume of light required, the more high-power LEDs are needed, leading to more complex circuitry, packaging challenges and more heat. While the drive circuitry maintains optimum light output, dissipated power in the design manifests itself as either heat or light. Heat that is not managed will reduce the allowable forward current and the light output. Ideally, high-power LEDs with low thermal resistance should be used, and excess heat should be dissipated through thermally conductive circuit boards in conjunction with enclosures that have heat-sinking properties.
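As a simple illustration of why dedicated driver circuitry earns its keep, consider the crudest DC drive: a resistor-ballasted string. The figures below (rail voltage, forward drop, current) are assumptions, not values from any particular LED or product; the point is that the ballast burns real power that a switching constant-current driver would largely avoid:

# Resistor-ballasted LED string sketch; all electrical values are assumed.
V_SUPPLY = 12.0  # DC rail from the AC-DC supply, V
V_F = 3.3        # forward voltage per LED (assumed), V
N_LEDS = 3       # LEDs in series
I_F = 0.350      # target forward current, A

v_drop = V_SUPPLY - N_LEDS * V_F  # 2.1 V left across the ballast resistor
r_ballast = v_drop / I_F          # 6.0 ohm
p_wasted = v_drop * I_F           # ~0.74 W lost as heat in the resistor
print(f"R = {r_ballast:.1f} ohm, wasted power = {p_wasted:.2f} W")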

Street Talk: Designing “Green” Magnetics
by Ariel General and Jim Daniels, Datatronics, www.datatronics.com
A common problem facing power supply designers, and consequently the magnetics engineer, is that they get locked very early into circuit-board layout space limitations that hinder the most power-efficient design. With increasing environmental demands and legislation requiring higher-efficiency power supplies, it becomes even more important to involve the magnetics engineer from the earliest design stage of the power supply. Some magnetic-core manufacturers have made significant improvements in lowering the losses of their materials in the recent past, often without even changing part numbers. There are also newer materials, such as nanocrystalline soft magnetics, that have been effective in increasing efficiency with minimal cost impact. Switch-mode power supply transformers can be designed with an efficiency as high as 98 percent given the right materials. To achieve such performance levels, however, the designer must weigh the trade-offs among size, geometry and cost to reach the highest efficiency within those constraints. For example, one of our design engineers was faced with a design goal of 94 percent efficiency for an SMPS transformer. Due to pre-set size constraints, he was forced to build a transformer that afforded only 90 percent efficiency. With earlier planning from a system-design perspective, he could have achieved the 94 percent goal with a different approach.

Powering Gbit I/O
by William Troutman and Chris Romano, Enpirion, www.enpirion.com
The era of ultra-fast communications, for both long haul and backplane, has put intense pressure on I/O circuit designers, and serial data rates of 1 Gbit/s to 10 Gbit/s are now common. The challenge is that I/O circuits that can manage and manipulate these data rates with very low error rates are extremely complex, generally involving components such as PLLs (phase-locked loops) and VCOs (voltage-controlled oscillators). Until recently, powering Gbit I/O has generally been done with linear regulators, largely because they have a quiet output. But in the past year, a set of detailed measurements has been carried out demonstrating that the output of switched converters can be just as quiet as the output of linear regulators, making them ideal for powering Gbit I/O. In addition to low noise, there are also substantial power savings with switched converters, as they are significantly more efficient than the various families of linear regulators. Generally, switching regulators support a wide variety of loads, from 300 mA through about 12 A. In a typical full-rate Gbit transceiver, circuits such as the PLL, the VCO, and the ultra-high-speed buffers are sensitive to various sources of noise. Transmitter jitter has a number of sources, but intrinsic device noise and substrate crosstalk are generally out of the control of the circuit/system designer, and together they set the limit on the absolute minimum transceiver jitter. Starting from a reference of Tj = 78 ps with the power rails sourced by LDOs, high-speed switchers measured with and without second-stage post-filtering yielded Tj = 77.2 ps and Tj = 78.3 ps, respectively. These results show that post-filtering is not required to match the noise-related performance of an LDO. Switching regulators can provide a near noise-free solution for modern silicon chips, i.e., for technology generations of 90 nm, 65 nm, 45 nm, and beyond. In addition, switching regulators save substantial amounts of power, which can be critically important when powering today's power-hungry chips.
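To put a rough number on those power savings (the operating point below is an assumption for illustration, not an Enpirion measurement): delivering 1.2 V at 2 A from a 3.3 V rail, a linear regulator must burn the entire headroom as heat, while a switcher at an assumed 90 percent efficiency loses only a fraction of a watt:

# LDO vs. switching regulator dissipation; operating point is assumed.
V_IN, V_OUT, I_LOAD = 3.3, 1.2, 2.0
p_out = V_OUT * I_LOAD                 # 2.4 W delivered to the load

p_ldo = (V_IN - V_OUT) * I_LOAD        # 4.2 W burned across the pass element
eff_ldo = p_out / (p_out + p_ldo)      # ~36 percent efficient

EFF_SWITCHER = 0.90                    # assumed switcher efficiency
p_switcher = p_out * (1.0 / EFF_SWITCHER - 1.0)  # ~0.27 W lost
print(f"LDO loss: {p_ldo:.1f} W, switcher loss: {p_switcher:.2f} W")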

Street Talk: Leveraging FPGAs for Low Power
by Gary Sugita, Actel, www.actel.com
Today's designs require integrated, flexible components that deliver both cost and power efficiency. Programmable logic devices provide customizable platforms enabling the integration of several discrete components onto a single chip. Designers should look for programmable solutions that eliminate the need for configuration PROM, brownout detection, clock manager or supply-sequencing chips while integrating functions such as FET control, clocking resources, voltage regulators and analog-to-digital converters. This integration of functions onto a single device reduces cost as well as power consumption. With power consumption that is orders of magnitude lower than SRAM-based solutions, today's nonvolatile flash-based FPGAs offer static power draw as low as 5 µW. Designers should pay close attention to both static and dynamic power, and should leverage devices that offer reduced core voltages and flexible power-saving modes to reduce total system power consumption. Due to their live-at-power-up capability, flash FPGAs can quickly power up and restore system state, enabling easier entry to and exit from low-power sleep modes. Designers can also limit dynamic power consumption with power-friendly FPGA architectures that allow the use of segmented clocks and flip-flop clock enables. Low-power, high-performance serial connections such as low-voltage differential signaling (LVDS) with double data-rate (DDR) registers also help to minimize power consumption. Combining low power with flexibility and integration, flash FPGAs enable designers to optimize their designs and leverage programmable logic in ways previously not possible.
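The dynamic-power levers mentioned here follow the standard CMOS relation P = alpha x C x V^2 x f, so a lower core voltage pays off quadratically, and gating a clock segment zeroes its activity term. A short sketch with assumed values:

# CMOS dynamic power, P = alpha * C * V^2 * f; all figures are assumed.
ALPHA = 0.15   # average switching activity factor
C_EFF = 2e-9   # effective switched capacitance, F
FREQ = 100e6   # clock frequency, Hz

def p_dynamic(v_core):
    return ALPHA * C_EFF * v_core ** 2 * FREQ

print(f"1.5 V core: {p_dynamic(1.5) * 1e3:.1f} mW")  # 67.5 mW
print(f"1.2 V core: {p_dynamic(1.2) * 1e3:.1f} mW")  # 43.2 mW, 36% lower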

Street Talk: Don’t Forget the Algorithm
by Marc Barberis, Agility Design Solutions, www.agilityds.com
Most electronic systems contain critical algorithms that are core to their functionality and performance. A focus on good algorithm design can return significant dividends for the overall design, because the algorithm level is where the most significant gains are available: start with an inefficient algorithm, and no amount of tuning the implementation can save the day. When picking an algorithm, you should be aware of the implementation constraints:
What are the real-time constraints on the system?
How will the hardware deal with complex operations (for example, a large SVD or a matrix inversion)?
Are there simpler algorithms that would achieve roughly the same performance?
However, this is only the beginning of the process. A good design also requires a good design flow, one that will withstand late revisions, new versions of the product, or attrition in the development team. Automation when transitioning between the various representations can shorten not only the initial design but, more importantly, later iterations. Finally, there is no good design without great debugging and observation capability. Nobody is under the illusion that any design will work the first time through. Designers do, however, vary in how much time they spend up front thinking about how best to verify the design all the way through the transformation chain, from paper to algorithm, to code, to hardware. Those who spend the most time up front are rarely the last to achieve a successful design.
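As a small, concrete instance of choosing the simpler algorithm, take the matrix-inversion question from the list above: solving Ax = b with a direct solver is cheaper and better conditioned than forming an explicit inverse. A minimal NumPy sketch (illustrative, not from the article):

import numpy as np

# Prefer a direct solve of Ax = b over an explicit matrix inverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

x_inverse = np.linalg.inv(A) @ b   # costlier and numerically worse
x_solve = np.linalg.solve(A, b)    # one factorization, better conditioned

assert np.allclose(x_inverse, x_solve, atol=1e-8)
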
Street Talk: A Modular Approach to Design Re-use
by Scott Brown, Catalyst Semiconductor, www.catsemi.com
All designers march to the same orders: get to market faster at a lower cost. This is easier said than done if you are developing a complete line of products with an increasing range of features. For example, say you have to create a line of five printers differentiated by the number of functions, with each feature set controlled by an array of buttons and LED indicators. The traditional architecture would use a low-cost (and low pin-count) microcontroller to interface with the required features on the low-end printers in the line. For the higher-end, more full-featured printers, you would have to use higher pin-count MCUs, with an I/O pin for each additional feature. Obviously, this increases the design time and the cost of re-laying out the controller board for each product. The simplest and most cost-effective solution for managing the range of features across a line of products is a modular approach, where you re-use the same MCU in each product iteration (reducing costs and increasing your profit margin) and control the various features and functions with a multi-channel I/O expander. I/O expanders such as the CAT9554 and CAT9555 interface to the system MCU via a two-wire I2C bus to provide eight or 16 channels of GPIO; an application-specific variation adds embedded LED dimming control to the 16 I/O ports. A modular approach to design re-use keeps things simple and reduces costs and time to market.
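A minimal sketch of talking to such an expander from a Linux-hosted prototype, in Python with the smbus2 package. The register offsets follow the widely used PCA9554-style map (output register 0x01, configuration register 0x03), and the 0x20 device address is a typical default; treat both as assumptions to be verified against the CAT9554 datasheet:

from smbus2 import SMBus

# 8-bit I2C I/O expander sketch; address and register map are assumed
# (PCA9554-style convention), so verify them against the datasheet.
ADDR = 0x20        # typical address with the address pins grounded
REG_OUTPUT = 0x01  # output port register
REG_CONFIG = 0x03  # direction register: 1 = input, 0 = output

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, REG_CONFIG, 0x00)        # all pins as outputs
    bus.write_byte_data(ADDR, REG_OUTPUT, 0b00000101)  # drive LEDs on P0, P2
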

Memory Design is Important
by Noam Wasersprung, Jungo Software Technologies, www.jungo.com
Designing for embedded systems is more often than not an exercise in satisfying constraints. While this is true for any software design, the embedded environment typically puts the tightest constraints on system design. As feature requirements climb more quickly than hardware costs decline, a tight memory design is of paramount importance. For this reason, recursion should usually be avoided when designing for an embedded system; if used, a recursive procedure must have a well-defined, hard-coded depth limit. For similar reasons, close attention should be paid to stack usage: functions should keep to a small number of parameters and should be implemented with a minimal number of local variables. Physical resources are not the only constraints on an embedded software design. Another factor to be mindful of is the user experience. Embedded software should be designed to allow continuous usability; depending on the user interface available in the system, the software should clearly indicate its status and have error-correction and status-monitoring procedures built in to facilitate error recovery. Embedded systems are specialized applications. As such, a good software design should, at the end of the day, be a good solution to a specific, rather than general, problem.
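A minimal sketch of the bounded-recursion rule (illustrative Python, not Jungo code): the recursive tree walker enforces a hard-coded depth limit and fails loudly, while the iterative version replaces call-stack growth with an explicit worklist:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)

MAX_DEPTH = 16  # hard-coded recursion limit (illustrative value)

def walk_recursive(node, depth=0):
    # Bounded recursion: raise instead of silently overflowing the stack.
    if depth > MAX_DEPTH:
        raise RuntimeError("structure deeper than the design allows")
    for child in node.children:
        walk_recursive(child, depth + 1)

def walk_iterative(root):
    # Preferred on tight stacks: an explicit worklist keeps frame usage flat.
    pending = [root]
    while pending:
        node = pending.pop()
        pending.extend(node.children)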
