
Power Tips: How to use Nyquist plots to assess system stability


A Bode plot is a very popular way to determine a dynamic system’s stability. However, there are times when a Bode plot is not a straightforward stability indicator.

Figure 1 shows a Bode plot of TI’s TPS40425 synchronous buck converter. In this application, a π filter is used at the output of the buck converter.

Figure 1: Bode plot of a buck converter with an output π filter

Because the ferrite bead used in the π filter has varying inductance over the load current, the Bode plots measured at different load conditions vary dramatically. At idle, the measured Bode plot has multiple 0dB gain crossovers, so it is difficult to apply the phase-margin and gain-margin criteria in this case. Instead, I used the Nyquist stability criterion.

The Nyquist stability criterion looks at the Nyquist plot of the open-loop system in Cartesian coordinates, with s = jω. Assuming that the open-loop transfer function is F(s), the Nyquist plot is a plot of F(jω), with ω from -∞ to +∞. Stability is determined by the number of encirclements of the point (-1,0j). If the number of counterclockwise encirclements of (-1,0j) by F(jω) equals the number of right-half-plane poles of F(s), then the system is stable. The buck converter in this example does not have right-half-plane poles, so the system is stable if the Nyquist plot does not encircle (-1,0j).

You can derive a Nyquist plot from the measured Bode plot. I saved the data first. The analyzer I use provides the data with magnitude in decibels and phase in degrees, as shown in Figure 2. Different frequency analyzers provide different formats; some provide the data directly as complex numbers.

Figure 2: Data saved by a frequency analyzer from a Bode-plot measurement

Equation 1 converts magnitude and phase into complex numbers:

F(j2πfn) = 10^(Mn/20) × [cos(θn) + j·sin(θn)]                (1)

where Mn is the magnitude in decibels and θn is the phase in degrees of the measured F(s), with s = j2πfn.

After applying Equation 1, I plotted the complex numbers in Cartesian coordinates. The plot shown in Figure 3 covers frequencies from 100Hz to 1MHz, which is a good approximation of the plot from 0Hz to +∞Hz. The plot from -∞Hz to 0Hz is the plot from 100Hz to 1MHz mirrored about the real axis. I added that mirrored portion to Figure 3 to form Figure 4.

Figure 3: A Nyquist plot derived from a Bode plot of frequencies from 100Hz to 1MHz


Figure 4: A Nyquist plot derived from a Bode plot from -1MHz to -100Hz and 100Hz to +1MHz
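If you want to script the conversion and mirroring just described, here is a minimal Python sketch. The file name and column layout are assumptions based on the analyzer export shown in Figure 2; adjust them to match your analyzer's format:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed CSV columns: frequency (Hz), gain (dB), phase (degrees),
# similar to the analyzer export shown in Figure 2.
freq, gain_db, phase_deg = np.loadtxt("bode_data.csv", delimiter=",",
                                      skiprows=1, unpack=True)

# Equation 1: convert magnitude (dB) and phase (degrees) into complex numbers.
f_jw = 10.0 ** (gain_db / 20.0) * np.exp(1j * np.deg2rad(phase_deg))

# Positive-frequency branch (Figure 3) plus its mirror image about the
# real axis for the negative frequencies (Figure 4).
plt.plot(f_jw.real, f_jw.imag, label="0 to +inf")
plt.plot(f_jw.real, -f_jw.imag, "--", label="-inf to 0 (mirrored)")
plt.plot(-1, 0, "rx", label="(-1, 0j)")
plt.xlabel("Real")
plt.ylabel("Imaginary")
plt.legend()
plt.grid(True)
plt.show()
```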

It is the path around the unity-gain circle that is most relevant to system stability, so I zoomed into the area close to the unity-gain circle. Since this system is a voltage-mode buck converter, I know the DC gain is over 90dB, with a phase starting from 0 degrees. I can therefore approximate the low-frequency portion of the complete Nyquist plot, as shown in Figure 5.

Figure 5: Approximate Nyquist plot from -∞Hz to +∞Hz when the system is idling

Following the arrows from -∞Hz, you can see that the Nyquist plot does not encircle the (-1,0j) point, so the system is stable. I plotted a blue circle with a radius of 0.766 centered at (-1,0j). If the Nyquist plot does not enter this blue circle, the phase margin is greater than 45 degrees and the gain margin should be greater than 12dB.
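Continuing from the f_jw array in the previous sketch, you can apply this "safety circle" check numerically; the margin bounds follow from simple geometry (2·arcsin(r/2) for the phase margin, 20·log10(1/(1-r)) for the gain margin):

```python
import numpy as np

r = 0.766                          # radius of the safety circle around (-1, 0j)
dist = np.abs(f_jw - (-1 + 0j))    # distance of each measured point from (-1, 0j)

print(f"Closest approach to (-1, 0j): {dist.min():.3f}")
if dist.min() > r:
    # Staying outside the circle of radius r around (-1, 0j) guarantees:
    pm = 2 * np.degrees(np.arcsin(r / 2))   # phase margin > ~45 degrees
    gm = 20 * np.log10(1 / (1 - r))         # gain margin  > ~12.6 dB
    print(f"Phase margin > {pm:.0f} deg, gain margin > {gm:.1f} dB")
```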

Figure 6 shows a Nyquist plot at full load, derived with the same procedure. Following the arrows, you can see that this plot doesn’t encircle the (-1,0j) point either, and it stays outside the safety circle defined earlier.

Figure 6: Approximate Nyquist plot from -∞Hz to +∞Hz when the system is at full load

When Bode plots fail to provide a straightforward indication of stability, consider using a Nyquist plot. In this post, I’ve shown how to convert measured Bode plots into a Nyquist plot and how to use the Nyquist stability criterion to judge system stability.


Make your PSE system smarter and more efficient


Power over Ethernet (PoE) enables Ethernet cables to carry electrical power as well as data. For example, older Internet protocol (IP) phones usually needed a DC power supply and an Ethernet cable to deliver power and data, respectively. With PoE implemented in an Ethernet switch, power is delivered through the Ethernet cable to the IP phone, eliminating the need for a separate power supply. See Figure 1.

Figure 1: Old & new IP phone data/power path

There are two types of devices, one on each side of an Ethernet cable: power sourcing equipment (PSE) and the powered device (PD). On the sourcing side, PSE devices are usually installed in Ethernet switches, routers, gateways and wireless backhauls. PDs manage and protect the PoE system at the load side; they are usually installed in IP phones, security cameras and access points.

In this post, I will explain when you need system software to control the PSE in order to implement more functionality than what’s defined in IEEE 802.3at (the Institute of Electrical and Electronics Engineers PoE standard), and how to get started with the TPS23861 PoE MSP430™ microcontroller (MCU) reference code to develop your own system software.

The TPS23861, a PoE PSE controller, is one of the most popular PSE devices available for mass-market applications, designed into products such as surveillance network video recorders (NVRs), Ethernet switches and wireless access points. It comes with three modes: auto mode, semi-auto mode and manual mode. In auto mode, host control is not necessary and the TPS23861 can operate by itself (including detection, classification, power on and fault handling). This mode is usually used in standard low-port-count PSE systems. In semi-auto mode, the port automatically performs detection and classification as long as they are enabled (register 0x14), but a push-button command (register 0x19) is required to power on the port. Semi-auto mode is usually used in high-port-count PSE systems, where designers can implement multiport power management. Manual mode provides the most flexibility; it is used in nonstandard PoE applications such as high-power PoE PDs and non-PoE loads.

When operating in semi-auto or manual mode, systems meeting any of these criteria will need an external MCU to control the PSE:

  • The system has a high port count (more than eight ports).
  • The system needs to connect to nonstandard PDs such as high-power PoE PDs.
  • The power supply is not able to provide power to all ports at full load, so a multiport power-management module is necessary.

Once you have determined that your system requires an external MCU, a good resource for developing your own software is the TPS23861 PoE MSP430 MCU reference code. This system software supports:

  • Full compliance to the IEEE802.3at PoE specification.
  • Device detection, classification and power on.
  • Fault reporting (overcurrent, overtemperature, DC disconnect, etc.).
  • Multiport power management.

Multiport power management

Multiport power management manages the distribution and prioritization of power among PDs. The IEEE specification does not define power management itself; rather, power management is a feature built on top of the PoE specification’s definitions of port and system power.

The goals of multiport power management in a PoE-enabled system are twofold: power as many PoE PDs as possible and limit the power cycling of PoE PDs.

The maximum system power available limits the total number of powerable ports. For example, each PoE PD can draw a maximum of 30W, so a 48-port system can draw up to 1,440W of total system power. If the maximum system power available is less than 1,440W, multiport power management becomes necessary so that the available system power is used most efficiently while meeting these goals.

In the TPS23861 PoE MSP430 MCU reference code, the multiport power-management module is implemented in the semi-auto mode code.

There are two approaches to implementing the power-management feature:

  • Powering on each port without checking the remaining power and turning off ports if the total system power exceeds the power budget.
  • Before powering on each port, calculating the total system power and checking if the remaining power is enough to power on the port or not.

The TPS23861 PoE MSP430 MCU reference code implements the second approach, which handles the more severe case in which the margin between the software power budget and the actual capability of the power supply is not enough to turn on one extra port.

The multiport power-management module is inserted after the PSE discovers a valid PoE PD. (When a PD connects to a PSE port, the PSE generates an interrupt, so the host knows which port has a PD connected.) My initial thought was to calculate how much power was left and compare it to the power that the current port is requesting (estimated from the class result after classification). If the remaining power is enough to turn on the current port, the host then issues the power-on command; otherwise, the system software or host checks whether any powered-on port has a lower priority.

If there is a lower-priority port, the host will power off the port in order to have enough power to turn on the current port.

Thinking about it more deeply, you will find some corner cases that the above logic doesn’t consider:

  • What if after turning off all lower-priority ports, the remaining power is still not enough to turn on the current port?
  • Since system power is only calculated when a port is inserted and finishes classification, what if the PD has a sleep mode or doesn’t pull a full load after power on, and at a certain time the load suddenly increases?

Taking these two corner cases into consideration, we optimized the multiport power-management algorithm in two ways (a simplified sketch of the resulting logic follows this list):

  • Instead of turning off a lower-priority port after recognizing insufficient power, we first check whether the remaining power is sufficient after turning off all ports that have lower priority than the current port. If the power is still not sufficient, we just leave the current port waiting. Otherwise we turn off the lowest-priority port in each loop.
  • To avoid having a load-step change damage the power supply, we added a module running in a timer-triggered interrupt that monitors the total consumed system power. If it exceeds the power budget, it will turn off the lowest-priority port.
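Below is that sketch in Python. The actual reference code is written in C for the MSP430; the Port class, its attribute names and the use of the classification result as the consumed-power estimate are all simplifications for illustration:

```python
from dataclasses import dataclass

@dataclass
class Port:
    priority: int            # higher value = higher priority (assumption)
    requested_power: float   # watts, estimated from the classification result
    is_on: bool = False

    def power(self):
        return self.requested_power if self.is_on else 0.0


def try_power_on(port, ports, budget):
    """Power-on decision: shed lower-priority ports only if doing so
    actually frees enough power for the requesting port."""
    used = sum(p.power() for p in ports if p.is_on)
    if budget - used >= port.requested_power:
        port.is_on = True
        return True

    lower = [p for p in ports if p.is_on and p.priority < port.priority]
    if budget - (used - sum(p.power() for p in lower)) < port.requested_power:
        return False  # even shedding every lower-priority port is not enough

    for p in sorted(lower, key=lambda p: p.priority):  # lowest priority first
        p.is_on = False
        used -= p.requested_power
        if budget - used >= port.requested_power:
            port.is_on = True
            return True
    return False


def monitor_system_power(ports, budget):
    """Timer-triggered check: while the total consumed power exceeds the
    budget (for example after a load step), shed the lowest-priority port."""
    while sum(p.power() for p in ports if p.is_on) > budget:
        on_ports = [p for p in ports if p.is_on]
        min(on_ports, key=lambda p: p.priority).is_on = False
```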

Figure 2 shows the power-on decision flow chart and Figure 3 shows the system power monitor flow chart.

Figure 2: Power-on decision flow chart

 

Figure 3: System power monitor flow chart

TI provides TPS23861 PoE MSP430 MCU reference code to help designers ramp up quickly and develop software without having to develop it from scratch. Customers can take this code to an MSP430 MCU LaunchPad™ development kit and run it with the TPS23861EVM. For more information on the TPS23861EVM, see the EVM user guide for instructions and software architecture.


Four-switch buck-boost layout tip No. 1: identifying the critical parts for layout


Layout is critical to the successful operation of a buck-boost converter. A good layout begins by identifying these critical components, as shown in Figure 1:

  • High di/dt loops or hot loops.
  • High dv/dt nodes.
  • Sensitive traces.

Figure 1: Identifying high di/dt loops, high dv/dt nodes and sensitive traces

Figure 1 shows the high di/dt paths in the LM5175 four-switch buck-boost converter. The most dominant high di/dt loops are the input-switching current loop and output-switching current loop. The input loop consists of an input capacitor (CIN), MOSFETs (QH1 and QL1), and a sense resistor (Rs). The output loop consists of an output capacitor (COUT), MOSFETs (QH2 and QL2), and a sense resistor (Rs).

The high dv/dt nodes are those with fast voltage transition. These nodes are switch nodes (SW1 and SW2), boot nodes (BOOT1 and BOOT2), and gate-drive traces (HDRV1, LDRV1, HDRV2 and LDRV2), along with their return paths.

The current-sense traces from resistor Rs to the integrated circuit (IC) pins (CS and CSG), the input and output sense traces (VISNS, VOSNS, FB), and the controller components (SLOPE, Rc1, Cc1, Cc2) form the noise-sensitive traces. They are shown in blue in Figure 1.

For good layout performance, minimize the loop areas of high di/dt paths, minimize the surface areas of high dv/dt nodes, and keep the noise-sensitive traces away from the noisy (high di/dt and high dv/dt) portions of the circuit. In the other two installments of this series, I’ll look at each of these in detail in the context of the four-switch buck-boost converter. My next topic will include an example for optimizing hot loops.


Four-switch buck-boost layout tip No. 2: optimizing hot loops in the power stage


Once you have identified the critical parts of your DC/DC converter design, your next task is to minimize any sources of noise and unwanted parasitics. Minimizing hot loops is a major first step in this direction. Figure 1 shows the hot loops or high di/dt loops in a four-switch buck-boost converter. Figure 1 also highlights the hot loops formed by the gate drives and their return paths, in addition to the input and output switching loops (Nos. 1 to 6).

Figure 1: Hot loops in a four-switch buck-boost converter

Since the power-stage hot loops (in red) contain the largest switching currents, optimize these first. The input loop (No. 1) carries the switching current during buck cycles, and the output loop (No. 2) carries the switching current during boost cycles. In my experience, I’ve achieved the lowest loop area and the most compact design when optimizing both loops using a symmetric layout.

Figures 2 and 3 are examples of good power-stage layouts. The layout example shown in Figure 2a provides a better thermal path for the heat generated in the sense resistors and the FETs to spread. Consider following the layout example shown in Figure 2b to create higher-density designs, as it packs the power-stage components closer together.

Figure 2: A symmetrical power-stage layout minimizes both the input and output power loops in a four-switch buck-boost converter, (a) medium density design, (b) high density design

There is a trade-off among size, thermal robustness and noise performance of the power stage. Smaller di/dt loops and smaller dv/dt nodes have lower parasitics and also radiate less. They are also more robust in the presence of external noise, as smaller loop areas couple less noise. Smaller designs are more constrained thermally, however, because there isn’t much copper directly connected to the heat-dissipating elements, which include the MOSFETs, sense resistors and the inductor. For relatively higher-power designs, you may need extra copper area at the switch nodes to limit the temperature.

Figure 3 shows a design that is capable of handling higher currents and allows for the paralleling of FETs. The heat is distributed between the FETs and can then spread to adjacent copper planes, avoiding excessive temperature increases or the formation of hot spots.

Figure 3: An example layout with parallel FETs and larger copper areas for higher-power designs

In the next installment of this series, I will discuss how to optimally route sense connections.

Four-switch buck-boost layout tip No. 3: separating differential sense lines from power planes


In my last blog post, I provided tips for optimizing hot loops in a buck-boost converter. I decided to add this tip as a separate topic after finding the same issue in almost all of the layouts I reviewed late last year. The most frequently encountered layout issue is the incorrect routing of differential sense signals from the sense resistor to TI’s LM5175 integrated circuit (IC) pins (the CS-CSG pair). An example of the sense connection is shown in Figure 1.

Figure 1: An LM5175 schematic showing differential sense connection from power stage to the controller pins.

In some cases, designers make this error because one of the sense nodes (the lower side of the sense resistor, marked as node “N” in the yellow circle) is electrically the same as the circuit ground (GND). Thus, the need to differentially route the CS-CSG pair – which carries a small signal (tens of millivolts) – is not clear to the layout engineer. Figure 2 shows this common error.

Figure 2: (a) Correct differential current sense routing and (b) a common mistake when routing differential sense signals.

In other cases, the designer does recognize the need to differentially route the current-sense signals, but while finishing the board, the negative trace gets connected to a plane or a copper pour because the layout tool treats the net as a ground (GND) net. This unintended connection can happen anywhere along the trace, as shown in Figure 3. In the next paragraphs, I will describe some common practices to avoid this.

Figure 3: An example of unintentional connection of differential sense signal with power ground plane.

Net ties

A net tie allows an artificial separation of net names in the schematic (Figure 4). This allows the layout tool to treat N1 and N2 as separate nodes and protects the bulk of the differential trace (N2) from accidental connections to the ground plane or pours. The downside is that the N1 section is technically a GND net, and therefore still needs to be separated from the GND plane or copper pours (Figure 4).

Figure 4: An example of using Net-Tie to prevent unintentional connection of sense signals to copper planes or pours.

Polygon cutouts or keep outs

Many layout tools provide a feature called polygon cutouts or polygon keep outs. A polygon keep out creates a boundary that keeps polygons or copper pours from entering. A polygon keep-out layer must follow the sense trace from beginning to end. You must take additional care when the sense trace changes layers through vias; in such cases, use polygon keep outs on all layers around the via. Figure 5 shows an example.

Figure 5: Correct use of polygon cut-outs to separate sense traces from power planes.

The incorrect routing of sense traces can spoil an otherwise good design. Recognizing the sense traces – particularly those that share the net names with a copper area, plane or pour – is essential. These traces must be isolated using net ties or polygon keep outs during printed circuit board (PCB) design to prevent an inadvertent connection to the copper planes.


LDO basics: power supply rejection ratio


One of the most touted benefits of low-dropout linear regulators (LDOs) is their ability to attenuate voltage ripple generated by switched-mode power supplies. This is especially important for signal-conditioning devices like data converters, phase-locked loops (PLLs) and clocks, where noisy supply voltages can compromise performance. My colleague Xavier Ramus covered the detrimental effect noise has on signal-conditioning devices in the blog post Reducing high-speed signal chain power supply issues. Yet power supply rejection ratio (PSRR) is still commonly mistaken for a single, static value. In this post, I’ll attempt to illustrate what PSRR is and the variables that affect it.

Just what is PSRR?

PSRR is a common specification found in many LDO data sheets. It specifies the degree to which an AC element of a certain frequency is attenuated from the input to the output of the LDO. Equation 1 expresses PSRR as:

PSRR (dB) = 20 × log10(Vripple(in) / Vripple(out))                (1)

This equation tells you that the higher the attenuation, the higher the PSRR value in units of decibels. (It should be noted that some vendors apply a negative sign to indicate attenuation. Most vendors, including Texas Instruments, do not.)

It’s not uncommon to find PSRR specified in the electrical characteristics table of a data sheet at a frequency of 120Hz or 1kHz. However, this specification alone might not be so helpful in determining if a given LDO meets your filtering requirements. Let’s examine why.

Determining PSRR for your application

Figure 1 shows a DC/DC converter regulating 4.3V from a 12V rail. It’s followed by the TPS717, a high-PSRR LDO, regulating a 3.3V rail. The ripple generated from switching amounts to ±50mV on the 4.3V rail. The PSRR of the LDO will determine the amount of ripple remaining at the output of the TPS717.

 Figure 1: Using an LDO to filter switching noise

In order to determine the degree of attenuation, you must first know at which frequency the ripple is occurring. Let’s assume 1MHz for this example, as it is right in the middle of the range of common switching frequencies. You can see that the PSRR value specified at 120Hz or 1kHz will not help with this analysis. Instead, you must consult the PSRR plot in Figure 2.

Figure 2: PSRR curve for the TPS717 with VIN– VOUT = 1V

The PSRR at 1MHz is specified at 45dB under the following conditions:

  • IOUT = 150mA
  • VIN– VOUT = 1V
  • COUT = 1μF

Assume that these conditions match your own. In this case, 45dB equates to an attenuation factor of 178. You can expect your ±50mV ripple at the input to be squashed to ±281μV at the output.
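The arithmetic is easy to verify; for instance, in Python:

```python
psrr_db = 45.0       # PSRR at 1MHz from the data-sheet curve
ripple_in = 50e-3    # +/-50mV of input ripple

attenuation = 10 ** (psrr_db / 20)    # ~178
ripple_out = ripple_in / attenuation  # ~281uV

print(f"Attenuation factor: {attenuation:.0f}")
print(f"Output ripple: +/-{ripple_out * 1e6:.0f} uV")
```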

Altering the conditions

But let’s say that you changed the conditions and decided to reduce your VIN– VOUT delta to 250mV in order to regulate more efficiently. You would then need to consult the curve in Figure 3.

Figure 3: PSRR curve for the TPS717 with VIN– VOUT = 0.25V

You can see that holding all other conditions constant, the PSRR at 1MHz is reduced to 23dB, or an attenuation factor of 14. This is due to the CMOS pass element entering the triode (or linear) region; that is, as the VIN– VOUT delta approaches the dropout voltage, PSRR begins to degrade. (Bear in mind that dropout voltage is a function of output current, among other factors. Hence, a lower output current decreases the dropout voltage and helps improve PSRR.)

Changing the output capacitor will have implications as well, as shown in Figure 4.

Figure 4: PSRR curve for the TPS717 with VIN– VOUT = 0.25V, COUT = 10μF

By sizing up the output capacitor from 1μF to 10μF, the PSRR at 1MHz increases to 42dB despite the VIN– VOUT delta remaining at 250mV. The high-frequency hump in the curve has shifted to the left. This is due to the impedance characteristics of the output capacitor(s). By sizing the output capacitor appropriately, you can tune, or increase, the attenuation to coincide with the particular switching noise frequency.

Turning all the knobs

Just by adjusting VIN – VOUT and the output capacitance, you can improve PSRR for a particular application. These are by no means the only variables affecting PSRR, though. Table 1 outlines the various factors.

Parameter                        | Low frequency (<1kHz) | Mid frequency (1kHz – 100kHz) | High frequency (>100kHz)
VIN – VOUT                       | +++                   | +++                           | ++
Output capacitor (COUT)          | No effect             | +                             | +++
Noise-reduction capacitor (CNR)  | +++                   | +                             | No effect
Feed-forward capacitor (CFF)     | ++                    | +++                           | +
PCB layout                       | +                     | +                             | +++

Table 1: Variables affecting PSRR

I will discuss these other factors in a future post. But for now, I hope that you are more familiar with the various tools at your disposal that can help you design an effective LDO filter. For more information on LDO PSRR, read the application note, LDO PSRR Measurement Simplified.

Power Tips: PWM-controlled output adjustment for USB Type-C™ applications


Consumer applications often require power supplies to support an adjustable output voltage for different operating conditions, as in USB Type-C™ applications. This demand creates the need for a simple and effective method to tune the output voltage. There are many ways to interact with the feedback (FB) pin on the integrated circuit (IC) to set the desired output. One way is to add a trim resistor at the FB pin and apply a voltage to source or sink additional current into the FB pin’s resistive divider. Another approach is to use an I2C bus to program signals that interact with the FB pin. But what if a variable voltage source or I2C bus is not available? In this post, I will show you how to use a simple resistor-capacitor (RC) low-pass filter, a trim resistor and a pulse-width modulation (PWM) signal from a microcontroller unit (MCU) to tune the output voltage.

Figure 1 shows the circuit illustration of this approach.

Figure 1: PWM injection circuit

The RC low-pass filter will average the PWM signal based on the duty cycle. The Thevenin equivalent of the circuit in Figure 2 and the filter capacitor will create the time constant, which determines the slew rate of the signal injected into the FB pin. Equation 1 shows the calculation:

Figure 2: Thevenin equivalent

Because the added RC filter introduces a pole and zero pair into the overall control loop, you will need to take care when selecting the RC filter. Looking again at Figure 1, at low frequency, when Clowpass is open, the sum of Rinject and Rlowpass is in parallel with Rfbb. When Clowpass shorts at high frequency, only Rinject is in parallel with Rfbb. Therefore, selecting Rlowpass to be much smaller than Rinject will ensure that the pole and zero pair is close to each other and will minimize the effect on the controller’s control loop.

Equations 2, 3 and 4 calculate how to best select the injection, top and bottom FB resistors, respectively:

Equation 3 corresponds to the minimum output voltage, and Equation 4 corresponds to the maximum output voltage.

For example, if the available 3.3V PWM signal’s duty cycle varies from 6% to 94%, you would select a 49.9kΩ top FB resistor and a 1kΩ Rlowpass to achieve a 1V-to-10V output, and your controller’s FB voltage would be 0.8V. Equations 5 and 6 show the calculations of Rfbb and Rinject:

Setting Equations 5 and 6 equal, Rinject yields 16.47kΩ and Rfbb yields 5.41kΩ; selecting standard values gives 15.4kΩ and 5.36kΩ for Rinject and Rfbb, respectively.

The controller’s duty cycle will be perturbed if too much ripple is injected into the FB pin; therefore, you will need to take some care when selecting the low-pass filter’s capacitance. As a good design practice, keep the ripple on the FB pin voltage well under 1%. For example, with a switching frequency (Fsw) of 200kHz, use a PWM frequency of 1MHz with an RC time constant of 1ms. This will minimize any beat-frequency components appearing on the output voltage. Rlowpass and Clowpass will dominate the time constant, since the resistors on the FB divider side have a much higher impedance than Rlowpass.
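As a rough sanity check on that guideline, the sketch below estimates how much a 1ms RC time constant attenuates the 1MHz PWM fundamental. This is a first-order approximation; the ripple actually reaching the FB pin is smaller still, because the filtered signal is further divided down through Rinject:

```python
import math

f_pwm = 1e6    # PWM frequency, Hz
tau = 1e-3     # RC time constant, s (dominated by Rlowpass * Clowpass)
v_pwm = 3.3    # PWM logic level, V

# First-order low-pass attenuation at the PWM fundamental frequency.
atten = 1 / math.sqrt(1 + (2 * math.pi * f_pwm * tau) ** 2)

# Worst-case fundamental of a 3.3V square wave (50% duty) is 2*Vpk/pi.
ripple_peak = (2 * v_pwm / math.pi) * atten
print(f"Attenuation at 1MHz: {atten:.1e}")
print(f"Residual ripple at the filter output: ~{ripple_peak * 1e6:.0f} uV peak")
```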

Following the design methodologies described in this post will reduce development time and circuit complexity for applications requiring multiple outputs.


Tips and tricks for optimizing your voltage supervisor


Voltage supervisors have provided analog voltage monitoring to digital circuits for decades. Texas Instruments released the original TL7705 in 1983; it consumed 1.8mA, came in a plastic dual-inline package (PDIP) and you can still purchase it today. Newer supervisors come with a wide range of options, from ultra-low current (TPS3839), tiny packages (TPS3831), dual channel (TPS3779/80) and high accuracy (TPS3702) to multichannel, feature-rich power monitors (TPS38600). In addition to choosing from these options, there are some simple circuit additions that you can make to help optimize the voltage-supervisor function; here are a few of those additions.

Add a resistor to increase hysteresis

Some applications require a wider-voltage hysteresis than what is typically available with standard supervisors. One way to increase hysteresis on an adjustable supervisor is to add an additional resistor between the output pin and the input resistor divider.

In the normal configuration shown in Figure 1, R1 and R2 set the threshold voltage and R4 is a pull-up resistor. Adding R3 gives a feedback path from the output (VOUT) to the divider voltage, allowing for adjustable hysteresis with proper resistor selection.

Figure 1: TPS3710 with resistors added for hysteresis

Equations 1 and 2 calculate the rising and falling thresholds for the circuit in Figure 1:
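The article's equations aren't reproduced here, but you can estimate both thresholds by superposition at the sense node. The sketch below makes two assumptions explicit: the output is open-drain (near GND when asserted, pulled to VPULLUP through R4 when released), and the device compares the sense node against an internal threshold voltage Vit taken from the data sheet. All numeric values in the example are placeholders, not values from the article:

```python
def thresholds(R1, R2, R3, R4, Vit, Vpullup):
    """Approximate rising/falling thresholds for the hysteresis circuit in
    Figure 1, by superposition at the sense node.
    Assumes an open-drain output: near GND when asserted, pulled to
    Vpullup through R4 when released. Vit is the supervisor's internal
    threshold voltage (see the data sheet)."""
    G1, G2 = 1 / R1, 1 / R2

    # Output asserted (low): R3 pulls the sense node toward GND,
    # so a higher monitored voltage is needed to reach Vit.
    G3 = 1 / R3
    v_rising = Vit * (G1 + G2 + G3) / G1

    # Output released (high): R3 + R4 source current into the sense node,
    # so the monitored voltage can fall further before crossing Vit.
    G3 = 1 / (R3 + R4)
    v_falling = (Vit * (G1 + G2 + G3) - Vpullup * G3) / G1
    return v_rising, v_falling


# Example with placeholder values (not from the article):
vr, vf = thresholds(R1=1e6, R2=100e3, R3=10e6, R4=100e3,
                    Vit=0.4, Vpullup=3.3)
print(f"Rising threshold ~{vr:.2f} V, falling threshold ~{vf:.2f} V")
```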

Sense a negative voltage

Monitoring a negative voltage can be tricky because most systems have ground-referenced logic signals, requiring level shifting to enable communication. One way to accomplish the necessary level shifting is to use open-drain outputs. Figure 2’s schematic shows how to use the TPS3700 on a negative rail with the outputs level-shifted up to give positive logic.

  

Figure 2:  TPS3700/1 configured for negative voltage sensing

In Figure 2, the monitored voltage (VMON) is a negative voltage relative to ground. You can program the overvoltage and undervoltage limits with R1, R2 and R3 in the same way as for the positive voltage (see product data sheets for more information). The open-drain outputs of the TPS3700 or TPS3701 are independent of VDD, which means that VPULLUP can be a positive voltage that enables a positive ground-referenced logic voltage to interface with any microcontrollers or processors.

Sensing a negative voltage using the previously described method requires additional diodes and resistors on the output. Another trick for sensing a negative voltage that will have fewer additional components is to use a positive voltage to shift up the resistor-divider voltages so that the divided threshold voltage is positive relative to ground. The four-channel TPS386000 supervisor makes this easy by providing a reference voltage to which you can connect the resistor chain. See Figure 3.

  

Figure 3: Using an external voltage reference to sense a negative voltage

In Figure 3, the VMON(4,NEG) node represents the negative monitored voltage and VMON(4,POS) the positive monitored voltage. Negative monitoring is possible because the resistor divider is referenced to the VREF pin (a 1.2V output) instead of ground-referenced, as in the positive channel. The RESET output will go high in Figure 3 when the negative channel falls below -14.92V and the positive channel rises above 15.04V, nominally.

Add a P-type JFET to remove false low-voltage output signals

Most supervisors require a certain amount of voltage on VDD before the device can give an accurate output. This voltage is typically around 800mV – below this voltage, the supervisor has no way to control the internal circuitry that’s pulling the output low or high. As a result, the output will rise with the pull-up voltage until there is enough headroom for the device to pull it back down. Many times you can ignore this; however, in cases where you cannot, you can add a P-channel junction field-effect transistor (JFET) to ensure that the output stays low even when VDD is not sufficient to power the supervisor. Figure 4 shows an example.

Figure 4: Adding a JFET to remove output voltage rise with low VDD 

In Figure 4, the normal output of the TPS3890 is represented as VG. When VMON (the monitored voltage) rises, the voltage at VG also rises briefly, to around 0.5V. By adding a standard JFET in a source-follower configuration, the voltage at the source (labeled VOUT) will track the voltage at VG minus the threshold voltage of the JFET. This results in an approximately 1V drop between VG and VOUT, and eliminates the 0.5V rise on VG. Figure 5 shows the effect of using a JFET on the output of the TPS3890.

Figure 5: TPS3890 startup with and without a JFET on the output

Supervisors are necessary in a wide range of applications and systems. While most standard configurations don’t require any additional components beyond a resistor or two, some applications require additional functionality. Hopefully this blog post gives you a few ideas on how to solve these unique cases. For more information on the circuits mentioned here, visit ti.com.



Innovate, design and learn with TI power experts at APEC


Reviewing this year’s program of events for APEC in Tampa, Florida, on March 26-30 provoked a few nostalgic memories. Back in 1994, I attended my first APEC conference at Disney World in Orlando, Florida, and was impressed to see so many power electronics experts at one event. I actually had the pleasure to meet Ned Mohan, the author of my undergraduate power electronics textbook. Unfortunately, I hadn’t brought it along for an autograph.

That event, as in the previous years, had many papers and speakers discussing the latest innovations in power-supply design, including soft switching inverters, which helped me with my graduate research. To interact with the authors of the papers I studied and to listen to their presentations was a learning experience that I will never forget.

I also enjoyed the product demonstrations and discussions with other engineers at the exhibitor booths. In graduate school, I had only read about the specifications of “hockey puck” diodes and insulated gate bipolar transistor (IGBT) half bridges. Seeing the products and asking questions was also very beneficial for my research project.

From educational seminars to live in-booth demos, APEC’s tradition of expanding your knowledge of power electronics continues at this year’s show. Stop by TI’s booth, No. 701, to get your questions answered by TI experts; meet Bob Mammano and enter for a chance to win a copy of his new book, “Fundamentals of Power Supply Design” (Figure 1); get to see our newest products; and witness the unveiling of end-to-end power-management system solutions including hardware, software and reference designs that help you get to market quickly.

Figure 1: Bob Mammano’s book, “Fundamentals of Power Supply Design”

If you are a returning attendee or this is your first time at the show, I hope you enjoy the conference as much as I have. Perhaps Bob Mammano will agree to autograph a couple of Power Supply Design Seminar books that I cherish – if I remember to bring them along.

What you’ll find in TI’s booth

At APEC, TI will give designers the power to:

  • Innovate:
    • See how the industry’s first end-to-end gallium nitride (GaN) solution improves data center AC-to-processor power density by over 3x, with TI’s GaN technology clocking in at over 6 million device reliability hours.
    • Experience the design simplicity that power modules provide as TI takes the wraps off its new high-current PMBus module with telemetry for communications applications (Figure 2). 
    • Watch two new low quiescent current (IQ) step-down DC/DC converters power the next evolution of automotive USB Type-C™ and industrial heating, ventilation and air conditioning (HVAC) applications.
    • See the industry’s first multiphase bidirectional current controller efficiently transfer power between 48-V and 12-V automotive dual-battery systems.

      Figure 2: High current PMBus power module with telemetry for communication applications

  • Design:
    • Analyze and interpret designs with new WEBENCH® Power Design tools. Go beyond the traditional concerns of performance, footprint and cost with tools that mitigate electromagnetic interference (EMI) noise and handle noise-sensitive loads.
    • Check out the new TI Power Management Lab Kit (TI-PMLK) buck-boost board (Figure 3) and experiment book to better understand the trade-offs related to common power-supply parameters such as power losses, converter efficiency, stability, load and line regulation, and more.

Figure 3: TI-PMLK buck-boost board

  • Learn:
    • Attend the APEC plenary “Power Semiconductor Technology – Flexibility for Tomorrow’s Solutions” by Ahmad Bahai, Texas Instruments chief technologist and senior vice president, from 1:30 to 2 p.m. Monday, March 27. Bahai will show how the demand for smart, high-efficiency power management is outpacing the average growth rate of the semiconductor industry.
    • Join TI power experts at more than 30 technical sessions covering topics such as renewable energy systems, wireless technology and GaN device reliability validation.
    • Meet Bob Mammano, author of “Fundamentals of Power Supply Design,” daily in the TI booth; his book is your new indispensable reference for all things power. Enter for a chance to win an exclusive autographed pre-release copy.
    • Grab TI’s new month-to-month power-topology guide in the TI booth while they last.

Stay connected with TI at APEC 2017: see ti.com/apec2017 and follow us on Twitter and Facebook. Tune into Facebook Live discussions throughout APEC to watch demonstrations, hear more about TI’s latest news in power innovation, and get your questions answered by TI experts in real time.

Find out more about TI’s power-management portfolio

LDO basics: dropout


The quintessential characteristic of a low-dropout (LDO) linear voltage regulator has to be dropout. After all, that is the source of its name and acronym.

At the most basic level, dropout describes the minimum delta between VIN and VOUT required for proper regulation. However, it quickly becomes more nuanced once you incorporate other variables. Understanding dropout, as you’ll see, is essential to obtaining efficient operation and generating voltage rails with limited headroom.

What is dropout?

Dropout voltage, VDO, refers to the minimum voltage differential that the input voltage, VIN, must maintain above the desired output voltage, VOUT(nom), for proper regulation. See Equation 1:

VIN ≥ VOUT(nom) + VDO                (1)

Should VIN fall below this value, the linear regulator will enter dropout operation and no longer regulate the desired output voltage. In this case, the output voltage, VOUT(dropout), will track VIN minus the dropout voltage (Equation 2):

VOUT(dropout) = VIN – VDO                (2)

As an example, consider an LDO like the TPS799 regulating 3.3V. When sourcing 200mA, the TPS799’s maximum dropout voltage is specified at 175mV. As long as the input voltage is 3.475V or greater, regulation is not affected. However, dropping the input voltage to 3.375V will cause the LDO to enter dropout operation and cease regulation, as shown in Figure 1.

Figure 1: The TPS799 operating in dropout

Although it’s supposed to regulate 3.3V, the TPS799 does not have the headroom required to maintain regulation. As a result, the output voltage begins to track the input voltage.
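The same check takes only a couple of lines of Python, using the TPS799 numbers from this example:

```python
v_out_nom = 3.3   # desired regulated output, V
v_do = 0.175      # TPS799 maximum dropout at 200mA, V

def output_voltage(v_in):
    """Equations 1 and 2: regulate while there is enough headroom,
    otherwise the output tracks VIN minus the dropout voltage."""
    return v_out_nom if v_in >= v_out_nom + v_do else v_in - v_do

print(output_voltage(3.475))  # 3.3 -> still regulating
print(output_voltage(3.375))  # 3.2 -> in dropout, tracking VIN - VDO
```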

What determines dropout?

The architecture of the LDO primarily determines dropout. To see why, let’s look at PMOS and NMOS LDOs and compare their operation.

PMOS LDO

Figure 2 shows a PMOS LDO architecture. In order to regulate the desired output voltage, the feedback loop controls the drain-to-source resistance, or RDS. As VIN approaches VOUT(nom), the error amplifier will drive the gate-to-source voltage, or VGS, more negative in order to lower RDS and maintain regulation.

Figure 2: A PMOS LDO

At a certain point, however, the error-amplifier output will saturate at ground and cannot drive VGS any more negative; RDS has reached its minimum value. Multiplying this minimum RDS by the output current, IOUT, yields the dropout voltage.

Bear in mind that the more negative the value of VGS, the lower the RDS achieved. By increasing the input voltage, you can achieve a more negative VGS. Therefore, PMOS architectures will have lower dropout at higher output voltages. Figure 3 illustrates this behavior.

Figure 3: Dropout voltage vs. input voltage for the TPS799

As shown in Figure 3, the TPS799 has a lower dropout voltage as the input voltage (and output voltage, for that matter) increases. That is because a higher input voltage yields a more negative VGS.

NMOS LDO

In the case of an NMOS architecture, as shown in Figure 4, the feedback loop still controls RDS. As VIN approaches VOUT(nom), however, the error amplifier will increase VGS in order to lower the RDS and maintain regulation.

Figure 4: An NMOS LDO

At a certain point, VGS cannot increase any further, since the error-amplifier output saturates at the supply voltage, VIN. When this condition is met, RDS is at its minimum value. Multiplying this value by the output current, IOUT, yields the dropout voltage.

This presents a problem though, because as VIN approaches VOUT(nom), VGS will also decrease, since the error-amplifier output saturates at VIN. This prevents ultra-low dropout.

Biasing the LDO

Many NMOS LDOs employ an auxiliary rail known as a bias voltage, or VBIAS, as shown in Figure 5.

Figure 5: An NMOS LDO with a bias rail

This rail serves as the positive supply rail for the error amplifier and allows its output to swing all the way up to VBIAS, which is higher than VIN. This type of configuration enables the LDO to maintain a high VGS, and therefore achieve ultra-low dropout at low output voltages.

Sometimes an auxiliary rail is not available but low dropout at a low output voltage is still desired. In such a situation, an internal charge pump can be substituted in place of VBIAS, as shown in Figure 6.

Figure 6: An NMOS LDO with an internal charge pump

The charge pump will boost VIN so that the error amplifier may generate a larger VGS value despite the lack of an external VBIAS rail.

Other variables

In addition to architecture, dropout is also affected by a few other variables, as outlined in Table 1.

Table 1: Variables affecting dropout

It’s clear that dropout is not a static value. Rather than just complicating your LDO choice, though, these variables should help you choose the optimal LDO for your specific set of conditions. Learn more about LDO dropout in the application note, Understanding LDO Dropout.


Selecting a bidirectional converter control scheme


The 48V-12V dual-battery power system is becoming popular for mild hybrid electric vehicles. The vehicle’s dynamic operating conditions may require the transfer of electric power as high as 10kW back and forth between the two battery rails. Because of the wide range of operating scenarios in a moving vehicle, controlling the power flow in one direction or the other in real time can be a rather intricate task that demands the intelligence of a digital control scheme. So when leading automakers and tier-1 suppliers started developing 48V-12V bidirectional power converters, most of them took a fully digital approach.

Fully digital solutions are costly because they require many discrete analog circuits. These analog circuits include precision current-sense amplifiers, power MOSFET gate drivers, monitor and protection circuits, and so on. Discrete solutions are also bulky and less reliable due to the large device count on the circuit board. To reduce solution size and lower cost while improving performance and system-level reliability, some tier-1 suppliers are looking into a mixed architecture in which a microcontroller handles higher-level intelligent management and highly integrated analog controllers implement the power-converter stage. In this post, I’ll discuss how to identify the most suitable control scheme for such an analog controller.

Table 1 summarizes the advantages and disadvantages of different control schemes.

Table 1: Control scheme comparisons

A 48V-12V bidirectional converter normally must have highly accurate current regulation (better than 3%) in order to precisely control the amount of power delivered from one battery rail to the other. Owing to the high power, the system usually requires multiple phases in an interleaved parallel operation to share the total load, and the sharing should be evenly balanced among the individual phases. Therefore, voltage-mode control is not suitable because of its inability to achieve multiphase current sharing.

The peak current-mode control scheme, which generates a pulse-width modulation (PWM) signal based on the peak inductor current, can achieve multiphase sharing. However, the sharing balance is largely affected by the power inductor tolerances. The power inductor usually has a tolerance of ±10%, leading to significant sharing errors and thus unbalanced power dissipation among phases. What’s worse, the peak inductor current has an inherent error relative to the DC current, leading to less accurate current regulation and thus less accurate power delivery.

The conventional average current-mode control scheme solves the current-error problem of peak current-mode control because it regulates the averaged inductor current and eliminates the effects of inductor tolerances on current regulation. However, the power plant transfer function varies with the operating voltage and current conditions, and bidirectional operation requires two different loop compensations.

In order to overcome the challenges with a conventional average current-mode control scheme and simplify real circuit implementation, TI developed an innovative average current-mode control scheme for 48V-12V bidirectional converter operation, as shown in Figure 1 and Table 1. The power stage consists of:

  • A high-side FET (Q1).
  • A low-side FET (Q2).
  • A power inductor (Lm).
  • A current-sense resistor (Rcs).
  • Two batteries, one at the HV-Port and the other at the LV-Port.

The control circuit comprises:

  • A gain-of-50 current-sense amplifier with direction steering by the direction command DIR (“0” or “1”).
  • A transconductance amplifier serving as the current-loop error amplifier, with a reference signal (ISET) applied at the noninverting pin to set the phase DC current regulation value.
  • A PWM comparator.
  • A ramp signal generated in proportion to the HV-Port voltage.
  • A steering circuit controlled by DIR to apply the PWM signal to control either Q1 or Q2 as the main switch.
  • A loop compensation network at the COMP node.

Rcs senses the inductor current, and the signal is amplified by a factor of 50. The amplified signal is sent to the inverting input of the transconductance amplifier, resulting in an error signal at the COMP node, which is also the noninverting input of the PWM comparator. Comparing the error signal with the ramp signal produces the PWM signal. Steered by the DIR command, the PWM signal can control Q1 for buck-mode operation, forcing current to flow from the HV-Port to the LV-Port, or, when sent to Q2, reverse the direction of current flow.

Figure 1: TI’s proprietary average current-mode control scheme for a bidirectional current converter


Table 2: Converter power plant transfer function (KFF is the ramp generator coefficient, Vramp = KFF x VHV-Port. Rs is the effective total resistance along the power-flow path, excluding Rcs.)

 

Table 2 shows the advantages of the new control scheme. The power plant transfer function is the same for bidirectional operation, and it is a first-order system. In addition, the transfer function is independent of the operating conditions such as the port voltage and load current levels. Therefore, applying a single Type-2 compensation network will always stabilize the bidirectional converter under all operating conditions, greatly simplifying real circuit implementation and enhancing performance.

TI’s proprietary average current-mode control scheme is suitable for an automotive 48V-12V bidirectional current controller. It requires only a single Type-2 compensation network to cover bidirectional operation regardless of operating conditions. Features such as accurate current regulation despite inductor tolerances and natural multiphase parallel operation to evenly share high power greatly simplify the design of a high-performance bidirectional converter. TI has implemented this control scheme in the LM5170-Q1 multiphase bidirectional current controller. Read the blog post, “Interconnecting automotive 48V and 12V rails in dual battery systems,” to learn how to overcome the challenges of designing a power supply for hybrid electric vehicles.

Know your gate driver


I hope you’ve had a chance to watch our video series, “Know Your Gate Driver.” Although often overlooked, gate drivers are responsible for a lot of the heavy lifting in systems like power supplies and motor drives. I like to think of a gate driver as a muscle! The video series explains why, while highlighting key gate-driver parameters. Topics range from negative-voltage handling and delay matching to wide VDD and operating temperature ranges. In this blog post, I’ll explain these topics a bit more fully.

Negative voltage

Negative-voltage handling in a gate driver is the ability to withstand negative voltages at its input and output pins. These unwanted voltages can result from parasitic inductances excited by switching transitions, from leakage, or even from poor layout. A gate driver’s ability to survive negative voltages is critical for a robust, reliable solution.

Figure 1 shows how TI gate drivers survive negative voltages resulting from large amounts of undershoot and overshoot, with no damage to the integrated circuit.

Figure 1: Negative reverse voltage handling

Delay matching

Delay matching is a specification that shows how accurately the internal propagation delays of the channels are matched. If a signal is applied simultaneously to the inputs of two channels, the difference in delay between the two outputs is the delay-matching figure. The smaller the delay-matching specification, the better the performance the gate driver can achieve.

Delay matching has two main benefits:

  • It ensures the driving of paralleled MOSFETs simultaneously with a minimal turn-on delay difference.
  • It eases the paralleling of gate-driver outputs to effectively double current capability and eases the driving of parallel power switches.

TI’s UCC27524A has extremely accurate 1ns (typical) delay matching, which can increase the drive current from 5A to 10A. Figure 2 shows the UCC27524A’s A and B channels combined into one driver. The INA and INB inputs are connected together, as are OUTA and OUTB. One signal controls the paralleled combination.

 

Figure 2: The UCC27524A with paralleled outputs to double drive-current capability

One result of accurate delay matching is an increase in power density. The need for higher power densities is a trend in applications like power factor correction (PFC) and synchronous rectification blocks of isolated power supplies, DC/DC bricks and solar inverters, where designers are restricted to the same size (or smaller!) for the same amount of output power.

Wide VDD range

When I refer to having a “wide VDD range,” I am referring to the gate driver’s positive supply voltage, which sets the amplitude of the drive signal applied to the gate of the MOSFET or insulated gate bipolar transistor (IGBT). For gate drivers, VDD defines the range of the drive output.

There are three benefits of having a wide VDD range:

  • It provides the flexibility in your system design to use the same driver with different operating voltages and different types of power switches.
  • It offers robustness in noisy environments or when using low-quality power supplies, effectively preventing your system from being damaged by overshoot or undershoot.
  • Drivers with a wide VDD range can be used in split-rail systems, such as driving IGBTs with both positive and negative supplies.

Overall, a wide VDD range gives you flexibility in your system design and robustness in extreme conditions.

Operating temperature range

An often-overlooked detail is the conditions under which the gate driver was tested. You will typically see gate drivers tested at room temperature, which is usually 25°C. The minimum and maximum specifications are characterized over the entire operating temperature range, with added statistical guardband.

Do not ignore the operating temperature range, because the minimum and maximum specifications can vary significantly when they are guaranteed over -40°C to 125°C versus only at room temperature. Most TI gate-driver devices are specified from -40°C to 125°C or even 140°C. This provides you with consistent performance and robustness at extreme temperatures. Taking this detail from the data sheet into account makes it easier for you to choose the correct gate driver.

As you can see, there are many parameters to consider when choosing your system’s gate driver. Be sure to watch our video series and ask any questions in our E2E community.


Rethink compute density with GaN


According to a report published by the Lawrence Berkeley National Laboratory in 2016, data centers across the United States consumed a whopping 70 billion kWh of energy in 2014. It comes as no surprise that the industry is continuously looking for ways to increase power efficiency and density. Higher efficiency saves both electrical and major operational expenses such as cooling. It also enables operators to increase rack density and available computing space, and to meet increasing demands more cost-effectively.

The advent of wide-bandgap devices such as gallium nitride (GaN) is enabling a new generation of power-conversion designs not previously possible with silicon metal-oxide semiconductor field-effect transistors (MOSFETs). These designs enable systems to reach new levels of power density and efficiency. GaN-based solutions can be incorporated into power supplies throughout data centers, from the AC mains to the individual points of load (POLs). GaN also enables new architectures such as high-voltage DC distribution systems.

Figure 1 illustrates the major blocks of power supplies in today’s systems.

Figure 1: Major power-supply blocks in today’s data center systems

Electric utilities require a power-factor-correction (PFC) stage in order to optimize power-grid efficiency. PFC operates as a boost converter, and typically provides a DC output voltage of 380V. This voltage needs to be further stepped down to provide a DC bus supply for the system. Various topologies are used for this stage, but inductor-inductor-capacitor (LLC) and phase-shifted full bridge are commonly used to generate a bus voltage of 12V or 48V. This bus voltage is routed throughout the system, and undergoes multiple conversion steps to power various POLs such as processors, field-programmable gate arrays (FPGAs), memory and storage.

GaN-based solutions, shown in Figure 2, fundamentally change both the architecture and density of the entire power system, from the AC to the processors. Let’s break it down.

  • PFC: By enabling a totem-pole topology, GaN devices such as the integrated Texas Instruments LMG3410 reduce the number of active power switches and filter inductors by 50%. A four- to tenfold increase in switching frequency (continuous conduction mode [CCM] or critical conduction mode [CRM] operation) significantly reduces the size of magnetics while improving overall efficiency to over 99%, versus 96% for today’s titanium-grade power supplies. Interleaved solutions further enable designers to scale the power stage to meet system demand.
  • LLC: The DC/DC stage takes advantage of GaN’s superior switching characteristics to push the resonant converter to over 1MHz. The high frequency reduces the magnetics while improving power density and efficiency. The smaller form factor further enables the emerging high-voltage distribution systems in data centers for 380V-to-48V converters.
  • POL DC/DC: GaN has a major impact on these converters. First, it enables a single-step conversion from 48V to power processors, memories and other loads directly, reducing component counts on precious printed circuit board (PCB) real estate by as much as 50% while reducing footprints by 75%. Second, half-bridge current-doubler topologies using the LMG5200 enable designers to easily stack power stages for different load demands and to place them close to the load for the best transient performance.

Figure 2: GaN-based power architecture from the AC input to the processors

The days of GaN being viewed as a future technology are over. GaN is here now, enabling designers to do what was once unreachable: design new power systems that are substantially smaller, switch faster and run cooler than ever before.

See a live demonstration of a TI GaN-enabled AC-to-processor power solution in TI’s booth (No. 701) at the Applied Power Electronics Conference (APEC), March 26-30 in Tampa, Florida. Follow TI at www.ti.com/apec2017.


Save PCB space and overcome point-of-load design complexity with PMBus modules


Engineers face many difficult challenges when designing power supplies for industrial and communication systems. A typical system can include one or more field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), systems-on-chip (SoC), networking and communication processors, or other types of processors. Each processor typically requires complex power management of multiple rails (four, 10 or more) for proper operation. The core voltage rail of the processor can often require 20A or more of current. Managing the core rail and all of the auxiliary voltage rails is a huge challenge. In addition, printed circuit board (PCB) space is limited due to shrinking system form factors, so engineers must use high-density solutions.

PMBus-enabled DC/DC converters are becoming a popular option over traditional analog DC/DC converters to solve these challenges. But because PMBus adds an extra layer of design complexity, many engineers are seeking solutions that make implementing PMBus in a system a bit easier.

PMBus is a digital communication standard that enables digital control of a DC/DC converter. PMBus is available in different versions, and manufacturers have flexibility in selecting the supported commands. A PMBus-enabled device may offer anything from the ability to set the output voltage to a complete command set that enables margining, output-current readings, adaptive voltage scaling (AVS) and adjustable device functionality. PMBus eliminates the external components typically used to program these features, which reduces your bill of materials. In newer versions of PMBus, AVS can dynamically adjust the output voltage based on processor power demand to boost efficiency.

DC/DC modules with PMBus make designs easier and reduce the power-supply footprint. DC/DC modules help designers get to market faster by simplifying power management; you do not need to be a power expert to design a small solution that meets critical specifications. Traditional PMBus modules solved these challenges but were often much bulkier than a custom design using a DC/DC converter. Some of the new module offerings are smaller than the custom-designed solutions.

Shown in Figure 1, the new TPSM846C23 35A PMBus module simplifies the compensation design, eliminates inductor selection and guarantees performance compared to the TPS546C23 DC/DC converter shown in Figure 2. The TPSM846C23 achieves a smaller footprint by hiding the IC and passives underneath the inductor in a 15mm-by-16mm-by-5.8mm package, delivering an incredible density of 146mA/mm2. Two of these modules, when sharing current, deliver as much as 70A of output current. The TPSM846C23 is ideal if space and time savings are critical; the TPS546C23 is ideal if cost or design optimization is critical.

Figure 1: The 4.5V-15V, 35A TPSM846C23 PMBus stackable synchronous buck module


Figure 2: The 4.5V-18V, 35A TPS546C23 PMBus stackable synchronous buck converter

The TPSM846C23 uses the PMBus v1.3 command set with telemetry features for maximum configurability and design flexibility. With this command set, you can program sequencing, customize device performance and protection, read output current and operating temperature, and implement AVS (Figure 3).

Figure 3: TPSM846C23 PMBus commands supported
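
As a rough illustration of how a host might exercise part of this command set, the sketch below uses the smbus2 Python library to read output-voltage, current and temperature telemetry over PMBus. The bus number, device address and exponent handling here are illustrative assumptions showing the generic PMBus pattern; consult the TPSM846C23 data sheet and the PMBus specification for the exact encodings your device reports.

```python
# Hedged sketch: generic PMBus telemetry read over SMBus (assumed address and bus number).
from smbus2 import SMBus

ADDR = 0x24          # assumed PMBus address; the real device sets this by pin-strap
VOUT_MODE  = 0x20    # standard PMBus command codes
READ_VOUT  = 0x8B
READ_IOUT  = 0x8C
READ_TEMP1 = 0x8D

def linear11_to_float(raw):
    """Decode the PMBus LINEAR11 format (5-bit exponent, 11-bit mantissa, both signed)."""
    exp = raw >> 11
    mant = raw & 0x7FF
    if exp > 15:            # sign-extend the 5-bit exponent
        exp -= 32
    if mant > 1023:         # sign-extend the 11-bit mantissa
        mant -= 2048
    return mant * (2.0 ** exp)

with SMBus(1) as bus:                                  # assumed I2C bus number
    mode = bus.read_byte_data(ADDR, VOUT_MODE)
    vout_exp = mode & 0x1F                             # LINEAR16 exponent lives in VOUT_MODE
    if vout_exp > 15:
        vout_exp -= 32
    vout = bus.read_word_data(ADDR, READ_VOUT) * (2.0 ** vout_exp)
    iout = linear11_to_float(bus.read_word_data(ADDR, READ_IOUT))
    temp = linear11_to_float(bus.read_word_data(ADDR, READ_TEMP1))
    print(f"VOUT = {vout:.3f} V, IOUT = {iout:.2f} A, TEMP = {temp:.1f} C")
```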

To ease the design process, TI’s implementation of the TPSM846C23 in WEBENCH supports AVS, output-voltage transition rate, undervoltage lockout and soft start through PMBus. For full prototyping of PMBus, the TPSM846C23 evaluation module is supported by the Fusion graphical user interface (GUI), which configures and monitors the TPSM846C23 using the PMBus protocol over a serial bus by way of a TI USB adapter.

You can see a live demonstration of the TPSM846C23 high-current PMBus power module with telemetry in TI’s booth, No. 701, at the Applied Power Electronics Conference (APEC) March 26-30 in Tampa, Florida.

In the demonstration, you’ll be able to see how the module’s PMBus compatibility enables output-voltage, current and junction-temperature telemetry, each displayed in real time on the computer interface. The module is controllable through the Fusion GUI, which lets users enable or disable the device and perform soft start and soft stop. The solution primarily targets enterprise and communication applications.

Additional resources

 

How to tame EMI noise in your power-supply design


Let’s say you are designing a buck power supply for a combustion engine application (a lawn mower, chain saw or automobile). For this application, you know you will need to meet a Comité International Spécial des Perturbations Radioélectriques (CISPR) (or Federal Communications Commission [FCC]) electromagnetic interference (EMI) specification. There are multiple approaches to mitigating EMI, including identifying the significant EMI offenders, figuring out any coupling paths, carefully engineering your circuit layout to mitigate the interference, and adding filters and snubbers. Each of these steps takes time and is difficult to accomplish without trial and error. Furthermore, you need specialized equipment and environments to test for EMI. But for all of your troubles, there are benefits beyond passing the CISPR specification. You will be protecting any sensitive circuits from excessive input or output-voltage ripples that can compromise the operation of your supply or load.

Let’s discuss one EMI mitigation method that addresses conducted EMI noise: the use of an input filter. A power-supply input filter keeps the high-frequency switching noise generated by the power converter from being conducted back onto the power line. You must carefully design the filter to be stable, avoid large inrush current and avoid degrading the loop response, all while achieving a good compromise between efficiency, size and cost. The TPS54360 in a buck topology runs from a 10VDC-14VDC input and delivers 3.3V at the output at 2A. The circuit shown in Figure 1, created using WEBENCH® Power Designer, is the design I’ll explore in this post.

Figure 1: TPS54360 buck power supply circuit example

Who knows, you may be lucky and not require a filter. You won’t know for sure until you do some calculations. Spoiler alert! In this example, we’re not lucky.

Here are the steps to creating the input filter:

  • Identify the highest noise component and the frequency at which it occurs.
  • Determine how much attenuation you need to meet the noise specification.
  • Select a filter type, such as undamped inductor-capacitor (LC), parallel damped or series damped.
  • Determine the output impedance and the transfer-function equations for the filter type.
  • Select filter components (inductors, capacitors, resistors) that accomplish these goals:
    • Provide the necessary noise attenuation, particularly at the dominant frequency.
    • Enable the power supply to remain stable.
    • Do not significantly impact the loop compensation.
    • Are available off the shelf and meet cost and size targets.
  • Measure the prototype for conducted noise and verify that the circuit meets the specification across all frequencies. (You know how this works … if the measurements show that the circuit doesn’t meet the specification, it’s rinse, wash and repeat, starting again at selecting the filter and components.)

To make things easier, WEBENCH Power Designer has an input filter tool that does the calculations and selects the filter type and components to meet your criteria. You can follow along by copying this exact example design in Power Designer. Once the design is open, click the Input Filter icon at the top of the page to access the input filter tool.

Assume that the highest noise component will typically be at the switching frequency of the switch-mode power supply (SMPS). The power circuit using the TPS54360 has a switching frequency of 918kHz. You can estimate the noise or obtain the baseline noise figure at the switching frequency through measurement under worst-case operation or the highest input current, which in this case is 2.3A. See the application report, “AN-2162 Simple Success with Conducted EMI from DC-DC Converters,” for more details on estimating attenuation and measurement of EMI.

It is important to know the unfiltered circuit’s phase margin and gain crossover frequency: 60.5 degrees, with a crossover at 58.8kHz.

The circuit has a whopping 90dBµV at 918kHz. At this frequency, the CISPR 25 Class 5 limit is 54dBµV, so you need a filter that attenuates by 36dB or more. You can also expect additional harmonics to be well above the specification, so you will need to verify that you meet the specification across the full frequency range of the standard and mitigate any noise above the limit across the entire range. To do so, you need to choose a filter topology. I’ll use the parallel damped filter topology, since it provides the attenuation, helps maintain a stable supply and meets the impedance requirements.
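
As a back-of-the-envelope check before reaching for the tool, the sketch below estimates the filter cutoff frequency needed for a given attenuation, assuming an ideal second-order LC filter rolling off at 40dB/decade. The 6dB of extra margin is an assumption for illustration, not a WEBENCH output.

```python
import math

f_sw = 918e3         # switching frequency, Hz
noise_dBuV = 90.0    # unfiltered noise at f_sw
limit_dBuV = 54.0    # CISPR 25 Class 5 limit at f_sw
margin_dB = 6.0      # assumed design margin

atten_dB = noise_dBuV - limit_dBuV + margin_dB        # attenuation target in dB
# A second-order LC filter attenuates roughly 40 dB/decade above its cutoff fc:
#   atten_dB ~= 40 * log10(f_sw / fc)  =>  fc = f_sw / 10**(atten_dB / 40)
fc = f_sw / 10 ** (atten_dB / 40)
print(f"required attenuation: {atten_dB:.0f} dB, target cutoff: {fc/1e3:.0f} kHz")
# -> roughly 42 dB of attenuation and a cutoff near 82 kHz for this example
```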

First choose an inductance, Lf_inpflt, and then calculate the corresponding filter capacitance, Cf_inpflt. Be careful in your choices, because large values of Lf_inpflt and small values of capacitance can lead to input instability and interfere with the operation of the power supply. To avoid this, add a damping branch, Cb_inpflt in series with Rd_inpflt, to reduce the filter’s peak output impedance at the cutoff frequency. See the application note, “Input Filter Design for Switching Power Supplies,” for more on the calculation and design of parallel damped filters.
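
To see roughly where the component values come from, here is a minimal sketch of one common parallel-damped design procedure: pick Lf, size Cf for the cutoff, then add a damping branch with Cb ≈ 4·Cf and Rd ≈ sqrt(Lf/Cf). The 1µH starting inductance and the damping rules of thumb are illustrative assumptions; WEBENCH’s algorithm may choose differently.

```python
import math

fc = 82e3      # target cutoff from the attenuation estimate, Hz
Lf = 1.0e-6    # assumed filter inductance, H

# Second-order cutoff: fc = 1 / (2*pi*sqrt(Lf*Cf))  =>  Cf = 1 / ((2*pi*fc)**2 * Lf)
Cf = 1.0 / ((2 * math.pi * fc) ** 2 * Lf)

# Common rules of thumb for a series R-C damping branch placed across Cf:
Cb = 4.0 * Cf                    # damping (blocking) capacitor
Rd = math.sqrt(Lf / Cf)          # damping resistor near the filter's characteristic impedance

print(f"Cf = {Cf*1e6:.1f} uF, Cb = {Cb*1e6:.1f} uF, Rd = {Rd:.2f} ohm")
# -> roughly Cf ~ 3.8 uF, Cb ~ 15 uF, Rd ~ 0.5 ohm for these assumptions
```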

 

Figure 2: Parallel damped filter configuration

WEBENCH Power Designer’s input filter design tool makes these calculations fast and easy while addressing stability and loop-compensation concerns. Using the Fourier series of the input current waveform and the impedance of Cin, Power Designer calculated the values for the input filter shown in Figure 3.

Figure 3: WEBENCH Power Designer’s calculated filter component values

The tool lets you adjust the input line impedance, Rz and Lz, to tune the filter to your board layout and circuit conditions. Figure 4 shows the filtered circuit with its estimated line impedances.

Figure 4: WEBENCH Power Designer calculated damped input filter circuit

Figure 5 gives you the full power supply circuit example with the added input filter. Power Designer will let you add or remove the input filter and explore the circuit and filter behavior to make any adjustments you desire.  

Figure 5: Final TPS54360 buck power supply circuit with input filter

The crossover frequency and phase margin are unchanged by the addition of the filter. The conducted EMI noise is now below the CISPR limit across all frequencies. In Figure 6, the orange line is the CISPR limit, the green line is the calculated noise before adding the filter, and the blue line is the calculated noise after adding the filter.

 

Figure 6: Ripple magnitude vs. frequency CISPR 25, before and after input filter

As you can see, WEBENCH Power Designer does more than help you select and design your power-supply circuit and explore circuit performance. It also provides advanced features like EMI input-filter design to help you address issues like conducted EMI noise, and low-dropout regulator (LDO) attach to reduce switcher-induced noise in your power system.

You can see a live demonstration of the input filter design for EMI with WEBENCH Power Designer in TI’s booth, No. 701, at the Applied Power Electronics Conference (APEC) March 26-30 in Tampa, Florida.

Additional resources


It's a wrap at APEC – power trends, friends and automobiles


The 2017 Applied Power Electronics Conference (APEC) did not disappoint. The annual pilgrimage to the premier global event in power electronics was full of ideas, implementations and illuminations. Let me share some of the key takeaways from the energizing week.

Data center power delivery could see some major restructuring. Two plenary talks (one by Google, another by TI – “Power Semiconductor Technology – Flexibility for Tomorrow’s Solutions”) highlighted the benefits of 48V bus architectures to improve efficiency and power density. Several vendors demonstrated 48V solutions in the exhibit hall too. However, as Shuai Jiang from Google pointed out in his plenary, there are still open questions about the best approach. Are transformer-based converters the way to go or could you use hybrid converter topologies? Should there be one conversion stage or two stages? What are the cost, efficiency, performance, design and manufacturing implications? I imagine we’ll be hearing more about this topic for the next several years as these issues are debated and addressed. For now, you can see TI’s AC-to-processor demo in Figure 1 below.

Figure 1: Powering Tomorrow’s Datacenters from AC input to processors

Another recurring theme was wide-bandgap semiconductors like gallium nitride (GaN) and silicon carbide (SiC). What was evident this year was the move toward further levels of integration and solving system-level challenges. It’s no longer enough just to show a new switch. Integrating gate drivers and protection features into the devices is important, and providing easy-to-use solutions that solve real problems of size, efficiency and performance is necessary. You could sense this shift in the way people talked about wide-bandgap devices: there was less discussion about intrinsic capabilities and parameters and more demonstration of the application challenges that the technology overcomes. Re-architecting the converter design may be an important step to getting the most out of wide-bandgap devices. Otherwise, you are using a sports-car engine to drive a station wagon.

Automotive electronics was also a highlight of the conference. Everything from electric vehicle (EV) and hybrid electric vehicle (HEV) systems to USB Type-C™ and infotainment got attention. An overview of power conversion needs for future cars is shown in Figure 2. The case was made for 48V and 12V buses in vehicles. The idea of a mid-voltage bus (42V or 48V) has been around for some time, but the convergence of mild hybrids, start-stop functionality and more electrically driven loads may bring the approach center stage. Add on top of it the growing compute power for advanced driver assistance systems (ADAS) and infotainment, and the standard power trees may not be able to handle future vehicle electronics needs.

Figure 2: Power conversion in future cars

The power electronics community is always trying to get to the next level of performance, size and efficiency. Every year at APEC there are many conversations about converter topologies, control techniques and devices that could help provide that competitive edge. Bob Mammano and other luminaries bantered about “Power Electronic Topologies: Do We Need More or Any Benefit to New Ones?” in one of the rap sessions (get Bob’s new book, “Fundamentals of Power Supply Design,” from Amazon). TI showed off the TPSM84A21/A22, which leverages a novel series capacitor buck converter to achieve high-efficiency voltage step down in a very small form factor. Innovations in power conversion really energize the conference attendees. Perhaps that is why so many people make their yearly pilgrimage to APEC.

Watch the video of TI’s booth demonstrations at APEC 2017 or download the quick guide detailing all of TI's demonstrations at the show.

Be sure to get more information about all of TI’s efforts at APEC. See you at APEC next year, March 4-8, 2018, in San Antonio.

Managing power scaling and sequencing the analog way


With the emergence of Internet of Things (IoT) applications, more and more application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) are being deployed into cloud-computing and server-system applications. These high-performance end applications demand higher processing power and ever-increasing clock speeds.

Voltage and power-scaling technologies have enabled very powerful and versatile power-management systems on smaller and denser integrated circuits, significantly reducing power losses and optimizing device junction temperatures while fulfilling the high-performance requirements of the system. Adaptive voltage-scaling optimization (AVSO) is one such voltage-scaling implementation.

There are different ways to implement AVSO, via digital or analog means. The most common is through the PMBus or I2C interface. You can program the initial supply voltage to power up the ASICs, FPGAs or microprocessor with reasonable accuracy. Once the processor completes its boot-up sequence, it communicates with the voltage regulator via a PMBus or I2C command set. The purpose of this communication is to tell the regulator to adjust the output-voltage level based on the performance demand of the host. The most popular standard commands are VOUT_COMMAND and VOUT_MARGIN. To facilitate such dialogs, both the host and the voltage regulator need to implement the same digital communication protocol.

Sometimes a digital implementation is just not a viable option due to the lack of hardware, software or firmware in the end system. In systems where there is still a need to optimize power dissipation, consider an analog voltage regulator that uses a reference input (REFIN). One such example is TI’s TPS548D21 (see Figure 1), a fully integrated 40A high-performance synchronous step-down converter supporting AVSO and full differential sensing in a compact 5mm-by-7mm stacked-clip quad flat no-lead (QFN) package.

Figure 1: TPS548D21 pinout diagram and bottom view of the package

Managing voltage scaling and sequencing is very easy using the TPS548D21. The TPS548D21 device can operate in either tracking (voltage scaling) or sequencing mode through MODE pin-strap configurations.

For voltage scaling/tracking, the reference tracking input (REFIN_TRK) allows an external reference voltage source to set the reference voltage of the TPS548D21 device (Figure 2). This voltage source can be anywhere between 0V and 1.25V, although its source impedance must be much less than 100kΩ. When the external voltage source transitions up or down between any two voltage levels (between 0.5V and 1.25V), the slew rate must be limited to no more than 1mV/µs. A tracking accuracy of better than 1% is possible between 0.5V and 1.25V.

The external sequencing feature of the TPS548D21 device allows ratiometric sequencing of multiple converters during startup and shutdown by applying the same voltage source to the REFIN_TRK pin of each TPS548D21 (Figure 3). When programming the TPS548D21 device to perform external tracking (sequencing), the REFIN_TRK voltage must start from 0V and the externally applied ramp must begin upon completion of the power-on delay. The ramp must also be longer than 1ms in duration.
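
As a quick sanity check on the slew-rate limit, the sketch below computes the minimum transition time for a reference step and the RC filter value that keeps the initial slew of an abrupt step under 1mV/µs. The step levels and the RC approach are illustrative assumptions, not values from the data sheet.

```python
# Minimal sketch: checking the REFIN_TRK slew-rate requirement (assumed example values).
v_start = 0.60             # V, assumed initial reference level
v_end = 1.00               # V, assumed final reference level
slew_limit = 1e-3 / 1e-6   # 1 mV/us expressed in V/s

# Minimum time for a linear ramp between the two levels:
t_min = abs(v_end - v_start) / slew_limit
print(f"minimum ramp time: {t_min*1e6:.0f} us")   # 400 us for a 0.4 V step

# If the step comes from a DAC and is smoothed by an RC filter, the worst-case
# slew occurs at t = 0: dV/dt = (v_end - v_start) / (R*C).  Solve for the minimum RC:
rc_min = abs(v_end - v_start) / slew_limit
print(f"minimum RC time constant: {rc_min*1e6:.0f} us")
```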

Figure 2: Tracking waveform

Figure 3: Sequencing waveform

An additional benefit of analog tracking is that reference changes take effect on the fly, with no latency between the reference input and the output response, which improves system response and reduces power loss.

The next time you are powering an ASIC or FPGA consider a full analog approach to managing voltage scaling and sequencing. See the TPS548D21 data sheet and get started now.

 

A handy tool for power-supply designs


Versatility is a good thing. You probably have one of those tools in your toolbox – probably not advertised as such, but something that just ended up being good for everything from prying and bending, to loosening stuff. You reach for that tool repeatedly because you’re comfortable with it and you know that it’s going to do the job.

Design engineers are no different when designing circuits. They all have their favorite operational amplifier (op amp), low-dropout regulator (LDO), buffer, etc., that they go to time and time again. In this post, I’d like to propose a new versatile tool that is a high-accuracy, high-voltage, high-bandwidth current-sense amplifier with enhanced pulse-width modulation (PWM) rejection for measuring currents across several different circuit placements.

When choosing which tool to grab from your toolbox, you need to know its features before you decide that it’s your go-to solution. Table 1 summarizes the most important aspects of the INA240 that make it versatile, while analogously showing that every aspect of the device has usefulness for a specific task.

Table 1: The Versatile INA240

Figure 1 is a block diagram of a generic DC/DC power supply depicting various locations where current sensing is typically performed. Of course, not all current-sensing locations may be used all at once. The first case I show is on the high side at the input. The high common-mode range here is vital, especially if 48V is used as the input voltage. The 80V specification of the INA240 is more than sufficient, even if transients are present on the power rail.

With a ±25μV offset and 0.20% gain error, the measurements will be the most accurate for this class of current-sense amplifier. Additionally, the drift specifications enable ultra-stable current measurements over temperature.

Figure 1: Generic DC/DC power-supply block diagram

Another implementation used by many designers (also shown in Figure 1) is just after the inductor storage element. At this location, common-mode voltage and bandwidth are important. The INA240 has a bandwidth of 400kHz, which is sufficient for many designs. Couple this with great CMRR performance (both AC and DC), and the accuracy level is quite high for a high-voltage current-sense amplifier.

A discrete implementation using op amps and external gain resistors may not be possible, since an op amp’s common-mode range is typically tied to its supply voltage; this limits it to low-side current sensing, where the common-mode voltage is ~0V. That’s the beauty of a current-sense amplifier: the common-mode range is independent of the power-supply voltage. It’s like getting a bonus tool beyond what’s listed in Table 1.

Current-sense amplifiers are also used on the low side of the load – shown on the far right of Figure 1. A very important feature that impacts this implementation is offset voltage. The sense-resistor value needs to be as small as possible to ensure minimal power dissipation through the resistor, but it is even more necessary in low-side current sensing to ensure that the ground voltage experienced at the load is as close to “real” ground as possible. A low-input offset voltage enables the use of a small resistor. A smaller sense resistor, in turn, allows for a more efficient power-supply design.
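
To make the trade-off concrete, here is a minimal sketch that sizes a low-side shunt from an offset-error budget and then checks its power dissipation and ground lift. The 20A full-scale current and the 1% error budget are illustrative assumptions; only the 25µV offset comes from the amplifier spec quoted above.

```python
# Minimal sketch: sizing a low-side shunt from an offset-error budget (assumed numbers).
v_os = 25e-6         # V, input offset (from the amplifier spec quoted above)
i_load = 20.0        # A, assumed full-scale load current
err_budget = 0.01    # assumed 1% error contribution allowed from offset at full scale

# Offset error referred to the measurement: err = v_os / (i_load * r_sense)
# Solve for the smallest shunt that keeps the offset error inside the budget:
r_sense_min = v_os / (i_load * err_budget)

p_diss = i_load ** 2 * r_sense_min        # shunt power dissipation at full load
v_ground_lift = i_load * r_sense_min      # how far the load's "ground" sits above real ground

print(f"Rsense >= {r_sense_min*1e3:.3f} mOhm")
print(f"P(shunt) = {p_diss:.2f} W, ground lift = {v_ground_lift*1e3:.1f} mV at {i_load:.0f} A")
```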

There are various other locations in power-supply designs that make full use of all of the INA240’s capabilities. One worth mentioning is functionality in an AC/DC or DC/DC design that enables bi-directional current measurement during battery charging or discharging.

The INA240 tool enables design engineers to measure current accurately and in multiple locations, providing for more efficient power-supply designs. Furthermore, having a single device to purchase makes it a lot easier for the procurement team.

Additional resources

The effects of gate-driver strength in synchronous buck converters


The peak voltage at the phase node, VPH, in a synchronous buck converter is one of the main specifications that determine converter reliability. Designers usually allow phase-node ringing to reach as much as 85% to 90% of the MOSFET data sheet’s absolute maximum rating. This margin is necessary for the long-term reliability of the converter, since the circuit needs to operate safely over a wide ambient-temperature range (-40°C to +85°C).

From the driver side, the main factor contributing to phase-node ringing is the gate-driver strength during the turn-on process of the upper MOSFET, FETUPPER. Let’s analyze its effects in a converter with different gate-driver resistance values.

Figure 1 shows the top level of a synchronous buck converter with the upper MOSFET gate-driver section. FETUPPER requires charge to turn on, and this charge comes from the boot capacitor, CBOOT. The charging path runs from CBOOT, through RBOOT and the pull-up driver P-MOSFET (DUP), into the FETUPPER input capacitance and back to CBOOT.

Figure 1: Top-level synchronous buck converter

To simplify the comparison, treat RBOOT as a short and assume that MOSFET DUP behaves as a linear resistance during the turn-on period of FETUPPER. A higher DUP resistance value gives a lower peak ringing voltage but lower converter efficiency due to higher switching power losses. A lower resistance value gives both a higher peak ringing voltage and better efficiency.

Figure 2 shows the rising edge of phase-node ringing with different gate-driver strength values. The waveforms are from the TPS543C20 evaluation board with VIN = 12V, VOUT = 1V, FSW = 500kHz and ILOAD = 40A. The peak ringing voltage of the 6Ω DUP is about 6V higher than that of the 8Ω DUP. The resistance value of DUP inside the TPS543C20 can be programmed via an external communication interface such as an I2C protocol. The waveforms are from the same device and the same evaluation board to minimize variation from other components.

Now, let’s compare a 6Ω DUP with different boot-resistor values against an 8Ω DUP. From a first-order circuit analysis, a 6Ω DUP plus a 2Ω boot resistor should have the same peak ringing voltage as the 8Ω DUP. Figure 2 also compares the 6Ω DUP with 1Ω, 3Ω and 5Ω boot resistors. The peak ringing voltages of these configurations are higher than for the 8Ω DUP.

You might ask why the 6Ω DUP with a 2Ω boot resistor and the 8Ω DUP do not show the same peak ringing voltage. This is because DUP behaves as a dynamic MOSFET that takes time to turn on, unlike the pure resistance of RBOOT. As a result, the phase-node rising slopes with a 6Ω DUP alone and with a 6Ω DUP plus boot resistors scale by the same ratio and are faster than the slope with an 8Ω DUP.
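
As a rough first-order way to think about these trade-offs, the sketch below estimates the effective gate-charging resistance for each configuration and also computes the allowed ringing from the 85%-90% derating rule mentioned earlier. The 5V drive voltage, 1Ω internal gate resistance and 25V MOSFET absolute maximum are illustrative assumptions, not TPS543C20 characterization data.

```python
# First-order sketch: effective gate-drive resistance and phase-node ring budget.
# All numbers below are assumptions for illustration only.
v_drive = 5.0        # V, assumed gate-drive (boot) voltage
r_gate_int = 1.0     # ohm, assumed internal gate resistance of FET_UPPER

def effective_drive(r_dup, r_boot=0.0):
    """Total series resistance in the turn-on path and the initial gate current."""
    r_total = r_dup + r_boot + r_gate_int
    i_gate0 = v_drive / r_total          # peak gate current at the start of turn-on
    return r_total, i_gate0

for r_dup, r_boot in [(6.0, 0.0), (6.0, 2.0), (8.0, 0.0)]:
    r_tot, i0 = effective_drive(r_dup, r_boot)
    print(f"DUP={r_dup:.0f} ohm, Rboot={r_boot:.0f} ohm -> Rtotal={r_tot:.0f} ohm, "
          f"initial gate current={i0:.2f} A")

# Ring budget: allow VPH to reach only 85%-90% of the FET absolute-maximum rating.
v_ds_abs_max = 25.0                      # V, assumed MOSFET rating
print(f"allowed peak VPH: {0.85*v_ds_abs_max:.1f} V to {0.90*v_ds_abs_max:.1f} V")
```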

Figure 2: Phase-node ringing on the TPS543C20 device

Figure 3 compares the efficiency of all configurations. The results correlate clearly with the earlier assessment: the 6Ω DUP gives the highest efficiency and the highest peak ringing voltage, while the 8Ω DUP gives the lowest efficiency and the lowest peak ringing voltage.

Figure 3: Efficiency comparison with gate-driver strength variation

It’s critical to optimize the gate-driver strength with the main power-stage MOSFETs to ensure reliability and obtain the highest converter efficiency. A small variation in gate-driver strength can lead to wide variations in converter performance. Consider the TPS543C20, a fixed-frequency, stackable synchronous buck converter that requires no external compensation, for your next design.

Design advantage of D-CAP control topology


Texas Instruments first implemented the D-CAP™ control topology in 2004 on a double-data-rate (DDR) controller, the TPS51116. TI developed D-CAP technology as a form of current-mode control using the output capacitor equivalent series resistance (ESR) as the sense element. The name “D-CAP” refers to the current information that’s directly sensed across the output capacitors.

The first D-CAP controller was realized using an adaptive constant on-time (COT) modulator. Today, TI has a family of products using different modulators and evolving derivatives of the D-CAP control topology, such as the D-CAP2™ control scheme and D-CAP3 control mode. Figure 1 shows the evolution of the D-CAP family.

Figure 1: D-CAP family evolution chart

After several iterations, each improving upon the last, TI introduced the D-CAP3 control mode as an optimized D-CAP control loop with adaptive internal ramp compensation to support a wide range of external power-stage designs. In addition, the D-CAP3 control mode implements a ramp offset cancellation technique to improve VOUT setpoint accuracy. Unlike the original D-CAP control topology, D-CAP3 control mode can stabilize almost any power-stage design using all multilayered ceramic capacitors (MLCCs), MLCCs plus bulk capacitors (with significant ESR), or bulk capacitor combinations. Table 1 below shows the general power-supply considerations for the D-CAP family of modulators.

Table 1: Power-supply design considerations for D-CAP family modulators

The TPS548D22 works well with any output capacitor combination. In a typical TPS548D22 converter design, there are three primary considerations for selecting the value of the output capacitance: transient requirements, output voltage ripple and stability.

The TPS548D22 is a 40A converter, so for any properly constrained design, the transient requirements generally dominate the output-stage (output inductor and capacitor bank) design. A tight ripple specification (along with a tighter DC specification) is becoming critical in some high-end, high-performance application-specific integrated circuit (ASIC) and field-programmable gate array (FPGA) designs. In applications where accuracy is critical, you will need to design the output power stage to meet both the ripple and DC criteria. There is also a minimum capacitance requirement for small-signal stability, in place to prevent subharmonic multiple-pulsing behavior in a COT modulator.

Generally speaking, if the converter is properly designed for both the transient and ripple requirements, the minimum capacitance requirement is considered satisfied, but you should still double-check the minimum capacitance requirement by using Equation 6 in the TPS548D22 data sheet.
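
The sketch below works through the two dominant sizing checks with standard textbook approximations: a load-step check based on the loop crossover frequency and a ripple check that includes capacitor ESR. The numbers and formulas are generic illustrative assumptions, not Equation 6 from the TPS548D22 data sheet, so the data-sheet equation remains the final stability check.

```python
import math

# Assumed design targets for illustration only.
f_sw = 650e3          # Hz, assumed switching frequency
f_co = 60e3           # Hz, assumed control-loop crossover frequency
delta_i_step = 20.0   # A, assumed load-current step
dv_transient = 0.030  # V, allowed transient deviation
delta_i_l = 10.0      # A, assumed inductor ripple current (peak-to-peak)
dv_ripple = 0.010     # V, allowed output ripple (peak-to-peak)
esr = 0.5e-3          # ohm, assumed effective ESR of the capacitor bank

# Transient estimate: below crossover the loop cannot respond instantly, so
#   C_out >= delta_i_step / (2*pi*f_co*dv_transient)
c_transient = delta_i_step / (2 * math.pi * f_co * dv_transient)

# Ripple estimate: dv_ripple ~= delta_i_l*esr + delta_i_l/(8*f_sw*C_out)
#   => the capacitive share of the budget sets C_out
dv_cap = dv_ripple - delta_i_l * esr
c_ripple = delta_i_l / (8 * f_sw * dv_cap) if dv_cap > 0 else float("inf")

print(f"C_out for transient: {c_transient*1e6:.0f} uF")
print(f"C_out for ripple:    {c_ripple*1e6:.0f} uF")
print("design to the larger of the two, then verify the data-sheet minimum")
```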

In summary, the D-CAP-based control topology brings convenience to power-supply designers working on high-performance POL converters. Consider TI’s SWIFT™ TPS548D22 synchronous step-down converter for your next design.

 
