Channel: Power management

Would you like a logo to go with your PoE system?


In order to reach their true potential, all “standards-based” technologies – such as USB, or anything carrying an Underwriters Laboratories (UL) mark – require an easy way to identify compliant solutions. Otherwise, a promising solution is held back by confusion and/or fear about what is really inside the box. Is it “as promoted” on the data sheet? How can you be sure? And what do you do when real-life experiences lead to questioning the standard itself?

Such was the case for Power over Ethernet (PoE) – until Jan. 16, 2018.

Following a worldwide press release by the Ethernet Alliance (EA), PoE technology has entered a new era in ease of adoption: end users (or anyone in between) now have an easy way to determine whether a design is compliant with the current standard – and at what power level. The new certification program also allows for faster debugging/troubleshooting when PoE systems exhibit interoperability issues between the PSE and PD ends of the CAT-5 cable.

As Figure 1 shows, there are essentially two categories of certification marks: one for use with powered devices (PDs), also known as “loads,” and one for power sourcing equipment (PSE), also known as “power sources.” The arrows indicate the directional flow of current in the system. In the case of PDs, the arrow points into the logo, indicating that the equipment is capable of receiving power. In the case of PSEs, the arrow points out from the logo, indicating that the equipment is capable of sending power. The number inside each box indicates the class-level certification granted (see Table 1). For those unfamiliar with PoE, these numbers directly correspond to the maximum power a PoE-enabled design can send or receive.

Figure 1: EA certification marks (logos)*


Table 1: PoE class levels

This initial rollout was limited in scope to the current Institute of Electrical and Electronics Engineers (IEEE) 802.3 standard (and to classes 1-4), but work is well underway to define the test suite for verifying future designs under the planned IEEE 802.3bt standard (which will add classes 5-8).

TI believes that end equipment with a PoE logo provides a competitive advantage, and we want to enable our customers to achieve this. TI is putting our popular EVMs and reference designs through EA certification so that end-equipment design engineers can start from a verified solution and have higher confidence that their design will pass industry compliance testing.

 

Start your new PoE design with increased confidence by looking for the logo.

* EA CERTIFIED & PD Mark™ and EA CERTIFIED & PSE Mark™ and EA CERTIFIED™ are certification marks of The Ethernet Alliance in the United States and other countries. Used here under license. Unauthorized use strictly prohibited.

 



How can a load switch extend your device’s battery life?


Let’s say that you’re a power electronics engineer and your boss has asked you to extend the battery life of your product. After optimizing the front-end power path (battery charger) and mid-rail converters (DC/DCs and low-dropout regulators [LDOs]), you believe there’s no room left to squeeze out a few more hours or days of battery life. You’re almost ready to report back that it’s impossible, but after taking a look in your toolbox, you find the solution: load switches.

One of the many ways that you can use a load switch is to reduce the shutdown current of any load. Do you have a Bluetooth® or Wi-Fi® module that consumes over 10µA in deep sleep or hibernation mode? Try adding a load switch like the TPS22916 in Figure 1, which can reduce the shutdown current to just 10nA.

Figure 1: Reduce shutdown current by adding a low-leakage load switch between the supply and load.

In some applications, such as wearables, building automation or medical devices, there can be several sensors and wireless transmitters in a single product that load switches can disable. Figure 2 shows the system block diagram of a typical smartwatch and where load switches would be used in the power path.

Figure 2: Smartwatch system block diagram

Take an example of a smartwatch using a standard 3.7V lithium polymer battery with 65mAh of capacity. In order to last at least five days before charging (a typical work week), you need to leave certain sensors powered (like the step counter), but you can shut down other areas of the board, like the Bluetooth® module. If the Bluetooth® module draws 10µA when disabled, it is contributing to at least 10µA × 24 hours × 5 days = 1.2mAh of the 65mAh budget. In other words, this one module is contributing at least 1.8% (1.2mAh/65mAh) to overall battery-life loss. Plus, this very conservative estimate does not take into account minimum/maximum specs over temperature, nor efficiency loss through the DC/DC converters. If you have several modules leaking current, the situation can multiply very quickly.

How do you combat this? Using the TPS22916 will cut that shutdown leakage current to just 10nA. This means that you can continue using your favorite Bluetooth module and it will have virtually no effect (0.0018%) on battery life when disabled, thanks to your new favorite load switch.
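As a quick sanity check, the arithmetic above can be scripted (a sketch; the 65mAh capacity and the 10µA/10nA currents are the figures from this example, not datasheet limits):

```python
# Back-of-the-envelope battery-drain math from the smartwatch example above.
def leakage_mah(current_a, hours):
    """Charge consumed by a leakage current over a period, in mAh."""
    return current_a * 1e3 * hours

BATTERY_MAH = 65.0     # 3.7V lithium-polymer battery capacity
HOURS = 24 * 5         # five-day work week

sleeping = leakage_mah(10e-6, HOURS)   # Bluetooth module leaking 10 µA
switched = leakage_mah(10e-9, HOURS)   # behind a TPS22916 at 10 nA

print(f"without load switch: {sleeping:.4f} mAh ({sleeping / BATTERY_MAH:.2%})")
print(f"with load switch:    {switched:.6f} mAh ({switched / BATTERY_MAH:.6%})")
```

The first line reproduces the 1.2mAh (about 1.8%) figure; the second shows the 10nA case shrinking to roughly 0.002% of the budget.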

To learn more about load switches and the features they can offer, be sure to check out ti.com/loadswitch.

 

The value of a charger IC over a discrete solution for vacuum robots


As technology continues to boom, interconnectivity between devices is enabling the rapid growth of the automated home. The vacuum robot came about from the convenience of controlling home equipment through wireless connectivity and remote access.

A vacuum robot eliminates the need for manual floor cleaning by setting schedules to ensure a consistently clean home. In addition to convenience and time savings, these compact vacuum robots – unlike traditional, hefty vacuums – relieve the strain of running a vacuum by easily reaching tough nooks and crannies associated with furniture, walls and corners.

To maximize the floor space cleaned, a vacuum robot must optimize its running time. Thus, a battery-charging solution requires astute consideration in vacuum robot design. Taking an input voltage of approximately 19-20V from a charging dock, most vacuum robots on the market employ rechargeable four-cell lithium-ion (Li-ion) batteries to power their systems. Extending the run time of these batteries and cutting battery costs requires efficient utilization of the battery pack to its fullest capacity. Figure 1 illustrates the vacuum robot system and highlights its battery-management solutions.

Figure 1: Vacuum robot system power diagrams

You can implement the charging circuit in several ways. The discrete solution uses a simple DC/DC converter to charge the battery. The system’s microcontroller (MCU) mimics a constant-current, constant-voltage (CC-CV) charging curve by controlling the on/off switching of the MOSFETs. While the discrete solution may be inexpensive, its inaccurate charge voltage, low switching frequency and lack of built-in battery protections contribute to additional costs and poorer performance than a charger integrated circuit (IC).

Alternatively, charger IC solutions offer high charge voltage accuracies, high switching frequencies and enhanced battery protections. While some designers may choose a discrete solution with a lower cost than the charger IC solution, the benefits of a charger IC impressively outweigh the price difference.

Vacuum robots on the market vary in run time, ranging from about 60 to 150 minutes. Maximizing the amount of cleaned surface area relies on the robot’s operational period, making optimal run time a key selling point for consumers; just several extra minutes could make the difference between an entirely clean and partially dirty home.

A TI charger IC such as the bq24725A or bq24610 offers a high-accuracy charge voltage of ±0.5%, compared to a low-cost DC/DC converter’s charge voltage accuracy of ±5%. Because of this small ±0.5% charge voltage accuracy, a charger IC maximizes battery capacity, which ultimately extends the robot’s run time.

Figure 2 shows the battery voltage versus the depth of discharge (DOD) of a 4.2V Li-ion battery at room temperature. Based on the charge voltage accuracies of the charger IC and several discrete solutions, the data maps several DOD points to battery voltages, which translate to run time. As shown in Figure 2 and its associated data in Table 1, the TI charger IC solution significantly maximizes capacity over discrete charging solutions.

Figure 2: Li-ion DOD versus battery voltage


Table 1: Charge voltage accuracy mapped to battery capacity

Battery capacity ultimately translates into device run time – and into the cost of batteries needed to hit a given run-time goal. For example, take a vacuum robot that uses a TI charger IC solution with ±0.5% charge voltage accuracy and runs for 120 minutes. The same vacuum robot using a DC/DC converter as a discrete solution with a charge voltage accuracy of ±5% would only run for 55 minutes. The loss in capacity due to a less accurate charge voltage, as shown in Table 1, significantly diminishes the run time of the robot.

From a monetary standpoint, a battery pack for this application costs about $20. The 56% capacity lost in this example requires you to purchase $11 more in capacity. This extra 65 minutes of run time would allow the robot to clean several additional rooms, and the cost savings due to maximized capacity quantifies the value of using a charger IC solution.
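The cost argument reduces to simple arithmetic (a sketch using only the figures quoted in this post; the 56% capacity-loss figure comes from Table 1):

```python
# Run-time and cost impact of charge-voltage accuracy, per the example above.
RUN_IC_MIN = 120        # run time with a ±0.5% charger IC
RUN_DISCRETE_MIN = 55   # run time with a ±5% discrete charging solution
PACK_COST_USD = 20.0    # approximate battery-pack cost
CAPACITY_LOST = 0.56    # fraction of capacity lost (from Table 1)

extra_cost = CAPACITY_LOST * PACK_COST_USD
print(f"lost run time: {RUN_IC_MIN - RUN_DISCRETE_MIN} minutes")
print(f"extra battery cost to make it up: ${extra_cost:.2f}")
```

This reproduces the roughly $11 of additional capacity and the 65 extra minutes of run time cited above.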

Charger IC solutions switch at high frequencies and in turn require small, low-cost inductors. For example, TI’s bq24725A, with a switching frequency of 750kHz, typically uses a 4.7µH inductor sized at 28mm². Alternatively, a discrete solution with a switching frequency of only 50kHz requires a larger, 75µH or greater inductor covering about 113mm² of board space. Along with shrinking the solution size, the charger IC’s inductor is roughly half the price of the discrete solution’s inductor, depending on inductor choice.

From a design perspective, a charger IC advantageously offers a complete, sophisticated suite of battery safety features, including input overcurrent, charge overcurrent, battery overvoltage, thermal shutdown, battery shorted to ground, inductor short and field-effect transistor (FET) short protections. On the other hand, a discrete solution would have to use its MCU to implement battery protections, and damage could occur to the battery by the time the MCU detects faults due to its slow response times. Therefore, a charger IC protects the battery in any worst-case scenarios while eliminating the need for you to create your own battery protections.

For additional design flexibility, the TI multicell switching charger portfolio provides options for stand-alone and host-controlled topologies. A stand-alone charger such as the bq24610 controls voltage and current limits with external circuitry elements, facilitating simple implementation. A host-controlled charger such as the bq24725A or bq24773 uses I2C or SMBus to program the limits, saving bill-of-materials cost by employing the calculating power of the system’s already-existing MCU.

A charger IC offers a myriad of advantages over common discrete charging solutions for vacuum robots. Although a discrete solution may be more economical initially, a charger IC solution provides a significantly longer run time, system cost savings and a simpler design implementation than the discrete alternative. Ultimately, the benefits of the complete charger IC solution outweigh the initial cost savings of the discrete charging solution.


How to minimize time to market in your USB Type-C™ and USB Power Delivery designs


Many electronic system designers are interested in implementing USB Type-C™ and USB Power Delivery (PD) while getting their products to market as quickly as possible. Most USB Type-C applications require a microcontroller (MCU) because of the need for firmware configuration via the I2C, Serial Peripheral Interface (SPI) and/or Universal Asynchronous Receiver Transmitter (UART) data communication protocols.

But what if you could implement USB Type-C and USB PD without any firmware configuration or an external MCU and get your product to market fast?

Let’s say that you are designing an AC/DC power adaptor with the UCC28740 flyback controller using opto-coupled feedback and the TPS25740B USB Type-C and USB PD source controller, as shown in Figure 1.

Figure 1: AC/DC adapter simplified schematic using USB Type-C and USB PD

The TPS25740B has three control pins (CTL1, CTL2, and CTL3) which adjust the output voltage of the power supply based on the voltage requested by the attached sink. In other words, the CTL pins adjust the resistive feedback network of the optocoupler transmitting to the UCC28740 in order to output the desired voltages on the VBUS line in real time. This is what enables effortless USB Type-C PD adoption without the need for any firmware implementation.

Table 1 below shows the TPS25740B CTL pin states as a function of the target voltage on VBUS.

Table 1: TPS25740B CTL states as a function of target voltage on VBUS

The voltages that are advertised depend on the USB PD source controller and how the device is configured. There are a variety of USB PD source controllers on the market today that can be configured to advertise a range of commonly desired voltages. One family of these devices and their voltage offerings can be seen below in Table 2.

Table 2: TPS25740 and TPS25740X device comparison table

In conclusion, there are a variety of USB Type-C and PD source controllers available today that can be used to reduce time to market. So, consider using a device without the need for firmware or an MCU for your USB Type-C and PD design and get your product to market fast.


Accurately measure vital signs with low Iq and a small form factor


These days, it feels like the “portable future” is right around the corner. Devices that used to be cumbersome and bulky have become light and portable. I saw this firsthand in personal electronics: cellphones were once heavy and slow; they’re now slim and fast, with ever-longer battery life.

I’ve also witnessed this trend in personal health care. It is now possible to check vital signs without having to go to the doctor, partly because of the increasingly small solution size and low power of devices such as blood glucose monitors that fit in the palm of your hand. Blood glucose monitors are trending toward lower power and longer battery life, giving users a responsive vital-signs measurement device.

Blood glucose monitors are extremely low-power devices that push quiescent current (Iq) to the lowest limits possible, because they must support at least 1,000 tests on the same battery – typically a lightweight 3V button cell. Reaching a battery life that can handle 1,000 tests has become harder as blood glucose monitors add Bluetooth® and other wireless connectivity (as shown in Figure 1). Wireless connectivity increases power consumption and lowers battery life, driving the need for multiple coin cells (220mAh) or even AAA batteries (1,000mAh), which add size and weight.
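The 1,000-test requirement implies a hard per-test charge budget, which is easy to work out from the capacities above (illustrative arithmetic only):

```python
# Average charge available per test for the battery capacities quoted above,
# assuming the full capacity is spread evenly across 1,000 tests.
def charge_budget_uah(capacity_mah, tests=1000):
    """Average charge available per test, in µAh."""
    return capacity_mah * 1000.0 / tests

print(charge_budget_uah(220))    # 3V coin cell: 220 µAh per test
print(charge_budget_uah(1000))   # AAA battery: 1000 µAh per test
```

Every microamp of standby or reference current eats directly into this budget, which is why Iq dominates the design conversation.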

Figure 1: Blood glucose monitor system diagram

Unlocking higher accuracy

As system power decreases, maintaining high accuracy can become a challenge, so it is important to make sure that system accuracy remains high. One way to increase accuracy is to use an external voltage reference. An external voltage reference is often practical because several techniques – such as oversampling on an analog-to-digital converter (ADC) – raise the requirements on the reference voltage well above what a typical internal voltage reference can provide. These increased requirements can be a combination of better initial accuracy, a lower temperature coefficient, lower noise or even a different reference voltage level. Such requirements are typically difficult to achieve in a low-power application, but the REF3320 from the REF33xx family of voltage references solves these issues by providing high accuracy and a low temperature coefficient at a low Iq.

A low Iq voltage reference

The REF3320 is one of TI’s low-power precision voltage references. The largest advantage of the REF33xx family is its typical 3.9µA low supply current requirement and its ability to source and sink up to 5mA for an ADC or digital-to-analog converter (DAC) while active. This allows the REF3320 to have a very minor impact on overall system power while the system is sampling, as shown in Figure 2.
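To put that “very minor impact” in perspective, here is a quick estimate. The 1mA active system current is an assumed figure for illustration; only the 3.9µA typical supply current comes from this post:

```python
# Share of the active power budget taken by the voltage reference while
# the system is sampling.
REF_IQ_A = 3.9e-6         # REF3320 typical supply current
SYSTEM_ACTIVE_A = 1e-3    # assumed total system current while sampling

overhead = REF_IQ_A / (SYSTEM_ACTIVE_A + REF_IQ_A)
print(f"reference share of active current ≈ {overhead:.2%}")
```

Under this assumption the reference accounts for well under 1% of the active current, consistent with Figure 2.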

Figure 2: Total power consumption overhead example (estimated percentage)

The REF33xx family also offers low-voltage options between 1.25V (the REF3312) and 3.3V (the REF3333) to benefit applications that use coin-cell batteries. These output voltages offer you the flexibility to get the most out of your ADC by selecting an adequate voltage reference to take advantage of the complete full-scale signal. Higher output voltages also give you the flexibility to power ultra-low-power ADCs such as the ADS7042 that rely on using AVDD as the voltage reference.

Low Iq is possible in more ways than one

TI’s large, low-power voltage reference portfolio includes several options aside from the REF33xx family. For certain applications that need the absolute lowest power for regulation, the 1.25V REF1112 has a typical power consumption of 1µA in a small package, making it TI’s lowest-power voltage reference.

But there are more ways to save power. One is to use an enable feature to limit power consumption while the reference is not in use. TI’s REF3425, a high-precision voltage reference, offers this feature and can achieve 2.5µA of Iq in shutdown mode. It is also possible to use a load switch to turn off sections of the system, further lowering the standby current of a blood glucose monitor. Figure 3 shows the power consumption of several popular low-power voltage references.

Figure 3: Typical Iq of several TI voltage references

1.5mm-by-1.5mm UQFN voltage reference

The REF3320 and REF1112 also shine in continuous-sampling low-power monitors such as gas analyzers, personal radiation detectors and smoke detectors. These low-power applications sample continuously every minute (up to several hundred samples) while still maintaining a small battery and form factor. For the harsh temperature conditions in radiation detectors, for example, the REF3320’s low temperature drift of 30ppm/°C from -40°C to 125°C ensures an accurate reference across temperature. In addition, the REF3320 is available in a small-form-factor 1.5mm-by-1.5mm UQFN package, as shown in Figure 4. This small form factor also gives you the flexibility to add passives for additional noise filtering and still be smaller than a typical SOT23-3 package.

Figure 4: The REF33xx family in a UQFN package

Don’t force yourself to choose between low system power and high system accuracy, as it is possible to increase ADC and DAC accuracy by adding a low-power voltage reference like the REF3320 or REF1112. No matter which application you are designing for, TI offers a large portfolio of voltage references that can help you unlock more accuracy in your system.


TI at the Auto Lamp Exhibition 2018 in Shanghai


TI exhibited the newest automotive exterior lighting solutions at the 13th Auto Lamp Industry Development Technical Forum and Fourth Shanghai International Auto Lamp Exhibition (ALE) on March 28-29 in Shanghai. ALE Shanghai is an annual automotive lamp show at which the top innovators and companies share the most updated technologies around automotive exterior lighting.

TI had three demos at ALE: an automotive rear combination lamp (RCL) taillight demo featuring the TPS92830-Q1 LED controller, a sequential lighting/ambient lighting demo using the TLC6C5712-Q1 LED driver and a next-generation adaptive frontlight system (AFS) light-emitting diode (LED) matrix headlight.

As automotive exterior lighting becomes more sophisticated, most of the new requirements are focused around supporting animations, styling or pixel-brightness control to enhance road safety. Our demos showed the capabilities of TI devices to help customers build on required features in both headlights and RCLs.

As shown in Figure 1, the automotive RCL taillight demo featuring the TPS92830-Q1 LED controller showed different kinds of signal functions that the linear-based TPS92830-Q1 device can support (such as turn, tail and stop), with both analog dimming and pulse-width modulation (PWM) dimming. The demo also highlighted the TPS92830-Q1’s integrated diagnostic and thermal-protection features.

Figure 1: Automotive RCL taillight demo featuring the TPS92830-Q1 LED controller

The sequential lighting/ambient lighting demo using the TLC6C5712-Q1 LED driver exhibited the ability to achieve differentiated styling with red-green-blue (RGB) color mixing (Figure 2). The TLC6C5712 supports diversified automotive applications with independently controlled brightness and color for each LED or lighting bar (through either analog or PWM dimming), and a 12-bit constant-current sink architecture. The device has built-in diagnostic features to improve robustness of the lighting system.

Figure 2: Sequential/ambient lighting using the TLC6C5712-Q1 LED driver

Finally, as shown in Figure 3, the next-generation AFS LED matrix headlight demo enabled intelligent headlight functions such as dynamic beam shaping, glare-free high beams, light customization and animation for welcome lights and/or sequential turn lights. It showcased the TI-designed lighting control unit (LCU) as a complete LED driver and lighting-management solution for headlights, with the TPS92518HV-Q1 as the constant-current source driving the LEDs and the TPS92662-Q1 as the LED matrix manager controlling an individual pixel’s brightness through shunt field-effect transistor (FET) PWM dimming.

Figure 3: A next-generation AFS LED matrix headlight using the TPS92518HV-Q1 and TPS92662-Q1

TI has many other devices to help build innovative, robust and high-performance solutions for automotive exterior lighting applications. ALE Shanghai is definitely one of the most important events connecting TI and automotive lighting industry experts. TI continues to define, design and make products that adapt to the challenging standards for automotive exterior lighting.

For more information on TI's LED portfolio, visit ti.com/LED.

Create a cost-effective boost power supply with true load-disconnect functionality


The fast-growing consumer electronics market brings opportunities as well as challenges for boost converters. The huge volume drives the market to be very cost-sensitive, so you will need to make trade-offs between solution cost and performance.

The need for true load-disconnect functionality

A boost converter does not have a native mechanism for load disconnection. The rectifier diode or the body diode of the synchronous field-effect transistor (FET) passes the battery voltage to the load even when the boost converter is not operating. This leads to continuous battery drainage, even though the leakage current may be small.

Many applications require the complete load disconnection and elimination of battery-current leakage when the circuit is not operating. For instance, a boost converter in an electric shaver only needs to operate when it is in use, to boost the battery voltage up for the LED backlight and the motor. Because the electric shaver is off most of the time, disconnecting the loads (the LED backlight and motor in this example) can avoid draining the battery and extend the shaver’s service time between charging or battery replacement.

Obviously, a boost converter with an integrated load-disconnect function is a solution, but the cost of such a device is much higher than a converter without one. This is because a true load disconnect has to be implemented with two high-side power metal-oxide semiconductor field-effect transistors (MOSFETs) in a back-to-back connection, or with a single power MOSFET that can turn off the FET’s body diode. Both implementations can greatly increase the cost of the boost converter integrated circuit (IC). In contrast, a boost converter like the TPS61322xx with an external load disconnect switch offers a cost-effective solution.

True load-disconnect configuration options

You can implement a load-disconnect function by placing an external switch Q1 in the input power path of the boost converter, as shown in Figure 1. A mechanical switch, p-channel FET, p-channel n-channel p-channel (PNP) transistor, n-channel FET or n-channel p-channel n-channel (NPN) transistor can serve as the disconnect switch.

A popular choice is a mechanical switch, which doesn’t require any extra control-logic circuitry but gives up the ability to be controlled by the system microcontroller (MCU). Solid-state semiconductor devices are more controllable and robust, though. P-channel or n-channel FETs are usually preferable to PNP or NPN transistors because the latter two require continuous base current to drive.

Between an n-channel and a p-channel FET of the same size, the n-channel FET’s RDS(on) is less than half that of the p-channel FET, and the n-channel FET is cheaper as well. However, it is challenging to design the printed circuit board (PCB) layout for an n-channel FET because you have to use it to break the ground path. Pay careful attention to the ground routing in these systems. Some applications prohibit interrupting the ground routing, because a broken ground leaves the load circuit energized, which can be a big risk.

Figure 1: Load disconnection by mechanical switch

Another popular solution is to use a p-channel FET on the high side of the power path, which doesn’t interrupt the system ground routing. Figure 2 shows a circuit configuration when the supply voltage of the MCU is higher than the battery voltage, and where the MCU directly controls main switch Q1. If the MCU’s supply voltage is lower than the battery voltage, the general-purpose input/output (GPIO) voltage will not be high enough to turn off Q1 successfully. The solution is to use the configuration shown in Figure 3, where introducing a small n-channel FET or NPN transistor (Q2) helps control Q1.

Figure 2: Load disconnection by p-channel FET, V_MCU > VBattery

Figure 3: Load disconnection by p-channel FET, V_MCU < VBattery

If the ground routing is not a concern, an n-channel FET or NPN transistor can fulfill the load-disconnect function, as shown in Figure 4. This approach is simpler than the p-channel FET configuration, and the MCU controls Q1 directly.

Figure 4: Load disconnection by n-channel FET

Switch considerations

It is very important to select a suitable load-disconnect switch. Unlike the MOSFETs of DC/DC converters, the load-disconnect switch is either on or off without frequent switching. Therefore, the gate charge Qg and the parasitic capacitances of the disconnect switch are not a main concern in component selection. Do pay attention to two things, however:

  • The rated voltage of the switch should be higher than the battery voltage.
  • Assess the RDS(on) of Q1 by allowing about 1% total solution efficiency loss under the maximum load. Use Equations 1 and 2 to calculate the power loss of the disconnect switch:

IINRMS ≈ (VOUT × IOUT)/(VIN × η)    (1)

PQ1 = IINRMS² × RDS(on)    (2)

where IINRMS is the root-mean-square (RMS) value of the input current, η is the converter efficiency and RDS(on) is the on-resistance of the switch.

As an example, selecting a MOSFET with an RDS(on) < 25mΩ for 3V-to-5V conversion under a 500mA load causes about 1% loss in overall efficiency. The gate threshold should be less than the minimum battery voltage in order to guarantee operation across the entire battery-voltage range. Q1 should also support the inrush current during startup, when the battery charges both the input and output capacitors and little limits the current except the RDS(on) of Q1 and the boost converter’s internal synchronous FET. This inrush current depends not only on the slew rate of Q1’s gate voltage, but also on the input and output capacitors.
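The 1% guideline can be sanity-checked numerically for the 3V-to-5V, 500mA example (a sketch; the 90% converter efficiency is an assumed figure for illustration, not a TPS613222A datasheet value):

```python
# Conduction loss of the disconnect switch Q1 for a 3 V to 5 V, 500 mA boost.
V_IN, V_OUT, I_OUT = 3.0, 5.0, 0.5
RDS_ON = 0.025          # 25 mΩ disconnect switch
EFFICIENCY = 0.90       # assumed converter efficiency

i_in_rms = V_OUT * I_OUT / (V_IN * EFFICIENCY)   # input RMS current, ≈ 0.93 A
p_switch = i_in_rms ** 2 * RDS_ON                # I²R conduction loss in Q1
loss_pct = p_switch / (V_OUT * I_OUT) * 100      # relative to output power

print(f"input current ≈ {i_in_rms:.2f} A, switch loss ≈ {p_switch * 1e3:.1f} mW")
print(f"efficiency impact ≈ {loss_pct:.2f}%")
```

Under these assumptions the loss works out to just under 1% of the output power, matching the guideline.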

Test results

I conducted a test with the TPS613222ADBVR, a fixed 5.0V output voltage boost converter. The conditions were VIN = 1.8V, 2.7V, 3.6V, 4.2V, L = 2.2µH.

As Figures 5, 6 and 7 show, the efficiency differences are very small between the circuit with and without a load-disconnect switch. The worst case is at a heavy load under a low VIN condition, because when VIN is as low as 1.8V, Q1 does not have an adequate gate voltage to fully drive the FET; its RDS(on) increases and hurts the efficiency.

Figure 5: TPS613222A efficiency without a load-disconnect switch


Figure 6: TPS613222A efficiency with n-channel FET disconnect switch (FDN337N, RDS(on) = 82mΩ at VGS = 2.5V)


Figure 7: TPS613222A efficiency with p-channel FET disconnect switch (FDN306P, RDS(on) = 50mΩ at VGS = -2.5V)

Conclusion

A device like the TPS613222A provides a cost-effective solution for applications requiring a true load-disconnect function. You can add the appropriate type of switch according to your end-application requirements. True load disconnection is easy to achieve, and the total cost can still remain very competitive. For further information, read the blog post, “How to Select a MOSFET – Load Switching.”

 

 

Why are there so many control modes for step-down DC/DC converters and controllers?


One of the questions I receive frequently is why there are so many control modes for step-down DC/DC converters and controllers. Whether hysteretic, voltage mode, current mode, constant on time or D-CAP™ mode control (and all of their derivatives), it seems a new one comes out just as we’ve gotten comfortable with the last one.

A few months ago, TI released a new control mode called internally compensated advanced current mode (ACM), which is used in the TPS543B20. This 18V input, 25A DC/DC converter operates at a programmable fixed frequency like current-mode control and does not require loop-compensation components, but employs a feature called asynchronous pulse injection (API) to enable the fast transient behavior of constant on-time control with less output capacitance.

Ultimately, the best control-mode choice depends on the design problems, and that is the answer to the question that I posed in the title. TI is active in the development of leading-edge control circuits to help engineers address tomorrow’s design challenges.  Figure 1 shows 12 different control modes used by nonisolated DC/DC converters and controllers from Texas Instruments.

Figure 1: Control modes for nonisolated step-down controllers and converters

In 2011, I was nominated to monitor control modes of nonisolated DC/DC converters and controllers for TI, and it has become an interesting hobby! At the time, there were more than 10 different control modes, including control modes from National Semiconductor. Six years later, there are several new ones, and I maintain a short training presentation and quick reference guide to help differentiate one control mode from another. Each control mode can take hours to present effectively, so the quick reference guide provides useful links to more technical documentation on the TI website. To find products with a particular control mode more easily, our parametric search for step-down converters features a control-mode parameter.

My other role is to act as the control-mode institutional memory, and my journey started in 1999 when TI released a hysteretic controller for the server market, powering the main motherboard processor. The hysteretic controller improved the transient response time with less output capacitance than current- or voltage-mode control, saving board space and cost. But some designers were apprehensive about using a nonlinear hysteretic controller for the first time in their design. A few years later, derivatives of hysteretic control such as constant on-time and adaptive on-time became available. Designers no longer had to spend time taking Bode plots or compensating feedback loops with external compensation components, as they did with current mode and voltage mode. We were gaining traction; even designers who had used simpler, internally compensated current- or voltage-mode converters were satisfied with the constant on-time control modes. The restrictions on inductor and output-capacitor selection were undesirable, however, as these control modes provided no means to adjust the loop. When higher-value ceramic capacitors like 47µF and 100µF became more available at lower costs, new derivatives of the nonlinear control modes came out to support the low equivalent series resistance (ESR) of ceramic capacitors and provide the tighter reference-voltage accuracy that processors require.

On the other hand, many designers still preferred a linear, predictable, fixed-frequency control, since their applications used high-speed clocks, data converters and noise-sensitive analog circuitry such as those found in the industrial and communications markets. Over the years, several derivatives to these linear control modes were released to allow low conversion ratios with a high input voltage and varying line input voltages.

Again, the best control-mode choice depends on the design problems at hand. Check out our quick reference guide and watch our short training presentation and let us know about your favorite control mode.


How to select a MOSFET - Hot Swap


Over the course of this series of blogs, I have covered a number of different FET applications, from motor control to power supply. Alas, I have arrived at the final topic – selecting a MOSFET for hot swap.

When a power supply is abruptly disconnected from its load, large current swings across the circuit’s parasitic inductive elements can generate dramatic voltage spikes that can be detrimental to the electrical components on the circuit. Similar to a battery-protection application, the MOSFET here acts to isolate the input power supply from the rest of the circuitry. In this case, however, the FET is not meant to immediately sever the connection between input and output, but rather to limit the severity of those destructive current surges. This is accomplished via a controller regulating the gate-to-source bias on a MOSFET placed between the input power supply (VIN) and the output voltage (VOUT), forcing the MOSFET to operate in the saturation region and thus limiting the amount of current that can pass through (see Figure 1).

Figure 1: Simplified hot swap circuit

Before anything else, your first consideration for this FET should be selecting an appropriate breakdown voltage, which is generally 1.5 to 2x the maximum input voltage. For instance, 12V systems tend to implement 25V or 30V FETs, and 48V systems tend to implement 100V or in some cases 150V FETs. The next consideration should be the MOSFET’s safe operating area (SOA) – a curve provided in the datasheet that indicates how susceptible a MOSFET is to thermal runaway during short power surges, not unlike those it must absorb in hot-swap applications. I wrote a post last year about how we at TI go about measuring and then generating the SOA of a MOSFET as it appears on the device’s datasheet. If you haven’t read that, consider giving it a skim, because for this application the SOA is the most crucial criterion for making an appropriate selection.

The critical questions for you, the designer, to ask are: what is the maximum current surge the FET might see (or be expected to limit to the output), and how long will that surge last? Once you know this, it is relatively simple to look up the corresponding current and voltage differential on the SOA figure in the device datasheet.

For instance, if your design has a 48V input and you want to limit the current to the output to no more than 2A for 8ms, you could refer to the 10ms curves of the CSD19532KTT, CSD19535KTT and CSD19536KTT SOAs (Figure 2) and deduce that the latter two devices might work, whereas the CSD19532KTT would be insufficient. But since the CSD19535KTT is good enough with some margin, the performance of the more costly CSD19536KTT may be overkill for this application.

Figure 2: The SOA of three different 100V D2PAK MOSFETs

In the example above, I assumed an ambient temperature of 25˚C, the same condition at which the SOA was measured on the datasheet. But if the end application could be exposed to a much hotter environment, you must derate the SOA in proportion to how close the higher ambient temperature is to the FET’s maximum junction temperature. Let’s say, for instance, that the maximum ambient temperature of the end system is 70˚C; you would derate the curves of the SOA using Equation 1:

I_SOA(derated) = I_SOA(25˚C) × (TJ(max) – TA) / (TJ(max) – 25˚C)     (Equation 1)

In this case, with a maximum junction temperature of 175˚C, the CSD19535KTT’s 10ms, 48V capability would decrease from ~2.5A to ~1.8A. You would then deduce that this particular FET would probably no longer be capable enough for this application and instead select the CSD19536KTT.

It’s worth noting this method of derating assumes that the MOSFET will fail at exactly the max junction temperature, which is generally not the case. Say the failure points measured in the SOA testing actually occur at 200˚C or some other arbitrary higher value; the calculated derating will be closer to unity. That is to say, this derating methodology errs on the conservative side.
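The derating arithmetic above is simple enough to capture in a short Python helper. This is an illustrative sketch, not a TI tool: the 175˚C maximum junction temperature is an assumption consistent with the ~2.5A-to-~1.8A example, and the 2.5A input is the CSD19535KTT’s 10ms, 48V SOA reading taken from Figure 2.

```python
def derate_soa_current(i_soa_25c, t_ambient_c, tj_max_c=175.0):
    """Linearly derate a 25C SOA current rating for a hotter ambient.

    Conservatively assumes the FET fails exactly at tj_max_c, as discussed
    in the text. tj_max_c=175C is an assumed maximum junction temperature.
    """
    return i_soa_25c * (tj_max_c - t_ambient_c) / (tj_max_c - 25.0)

# CSD19535KTT, 10 ms pulse at 48 V: ~2.5 A rating at a 25C ambient (Figure 2)
i_derated = derate_soa_current(2.5, 70.0)   # ~1.75 A at a 70C ambient
```

At a 70˚C ambient the 2.5A reading derates to roughly 1.8A, below the 2A requirement, which is why the text moves up to the CSD19536KTT.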

The SOA will also dictate the type of MOSFET package you select. D2PAK packages can house large silicon die, so they are very popular for higher-power applications. Smaller 5mm-by-6mm and 3.3mm-by-3.3mm quad flat no-lead (QFN) packages are preferable for lower-power applications. For current surges less than 5 – 10A, the FET is most often integrated with the controller.

A few final caveats:

  • While I was speaking specifically to hot-swap applications here, you could apply the same SOA selection process to any situation where the FET operates in the saturation region. You could even use the same method for selecting a FET for OR-ing applications, power over Ethernet (PoE), or even slow switching applications like motor control, where there is going to be substantial high VDS and IDS overlap during MOSFET turn off.
  • Hot swap is an application that tends to use surface-mount FETs as opposed to through-hole FETs (like TO-220s or I-PAK packages). The reason is that the heating that takes place for short pulse durations and thermal runaway events are very localized. In other words, the capacitive thermal impedance elements from the silicon junction to case prevent heat from dissipating into a board or heat sink fast enough to cool the junction. Junction-to-case thermal impedance (RθJC), a function of die size, is important, but junction-to-ambient thermal impedance (RθJA), a function of package, board and system thermal environment, is much less so. For that same reason, it is pretty rare to see heat sinks used for these applications.
  • Designers often assume that the lowest-resistance MOSFET in a catalog will have the most capable SOA. There is some logic to this – lower resistance within the same silicon generation is usually indicative of a larger silicon die inside the package, which does yield greater SOA capability and lower junction-to-case thermal impedance. However, as silicon generations improve in resistance per unit area (RSP), they tend to increase in cell density. The denser the cell structure inside the silicon die, the more susceptible the die tends to be to thermal runaway. That is why older-generation FETs with much higher resistance sometimes also have much better SOA performance. The takeaway is that it always pays to investigate and compare the SOA.
  • I would be remiss if I did not remind you (as I did in my blog post last year) not all datasheet SOAs are created equal, and you should not take all vendors’ SOAs at face value. Although TI certainly stands by the SOAs in its datasheets, it’s always best to have real data when possible.

TI has a number of hot swap controllers that you can learn more about here. For reference, Tables 1-3 at the end of this post highlight some devices we commonly recommend for hot swap, with easy-to-look-up SOA capability values.

That brings me to the conclusion of this MOSFET selection blog series. Thank you for reading, and I hope you learned something in the process. If you have any questions about this or any other post, feel free to post a comment below, or ask a question in TI’s E2E™ Community forum here.

Table 1: MOSFETs for 12V hot swap (typical RDS(ON) at VGS = 10V; SOA current ratings at VDS = 14V)

| MOSFET | VDS (V) | Package | Typ RDS(ON) (mΩ) | SOA current (A), 1ms | SOA current (A), 10ms |
|---|---|---|---|---|---|
| CSD17575Q3 | 30 | SON3.3x3.3 | 1.9 | 4.5 | 2 |
| CSD17573Q5B | 30 | SON5x6 | 0.84 | 8 | 4.5 |
| CSD17576Q5B | 30 | SON5x6 | 1.7 | 8 | 4 |
| CSD16556Q5B | 25 | SON5x6 | 0.9 | 25 | 6 |
| CSD17559Q5 | 30 | SON5x6 | 0.95 | 30 | 14 |
| CSD17556Q5B | 30 | SON5x6 | 1.2 | 35 | 12 |
| CSD16401Q5 | 25 | SON5x6 | 1.3 | 100 | 15 |
| CSD16415Q5 | 25 | SON5x6 | 0.99 | 100 | 15 |

Table 2: MOSFETs for 24V hot swap (typical RDS(ON) at VGS = 10V; SOA current ratings at VDS = 30V)

| MOSFET | VDS (V) | Package | Typ RDS(ON) (mΩ) | SOA (A), 0.1ms | SOA (A), 1ms | SOA (A), 10ms | SOA (A), 100ms |
|---|---|---|---|---|---|---|---|
| CSD18531Q5A | 60 | SON5x6 | 3.5 | 28 | 9 | 3.8 | 0.9 |
| CSD19502Q5B | 80 | SON5x6 | 3.4 | 30 | 9 | 3.2 | 1 |
| CSD18532NQ5B | 60 | SON5x6 | 2.7 | 100 | 8.6 | 3 | 1.9 |
| CSD18540Q5B | 60 | SON5x6 | 1.8 | 105 | 13 | 4.9 | 2.2 |
| CSD19535KTT | 100 | D2PAK | 2.8 | 130 | 18 | 5.1 | 3 |
| CSD19505KTT | 80 | D2PAK | 2.6 | 200 | 18.5 | 5.3 | 3.4 |
| CSD18535KTT | 60 | D2PAK | 1.6 | 220 | 21 | 6.1 | 4.1 |
| CSD18536KTT | 60 | D2PAK | 1.3 | 220 | 31 | 9.5 | 5 |
| CSD19506KTT | 80 | D2PAK | 2.0 | 310 | 29 | 10 | 5.3 |
| CSD19536KTT | 100 | D2PAK | 2.0 | 400 | 34 | 10.5 | 5.4 |

Table 3: MOSFETs for 48V hot swap (typical RDS(ON) at VGS = 10V; SOA current ratings at VDS = 60V)

| MOSFET | VDS (V) | Package | Typ RDS(ON) (mΩ) | SOA (A), 0.1ms | SOA (A), 1ms | SOA (A), 10ms | SOA (A), 100ms |
|---|---|---|---|---|---|---|---|
| CSD19531Q5A | 100 | SON5x6 | 5.3 | 10 | 2.7 | 0.85 | 0.27 |
| CSD19532Q5B | 100 | SON5x6 | 4.0 | 9.5 | 3 | 1 | 0.33 |
| CSD19532KTT | 100 | D2PAK | 4.6 | 41 | 3.3 | 0.8 | 0.5 |
| CSD19535KTT | 100 | D2PAK | 2.8 | 46 | 6.1 | 1.9 | 1 |
| CSD19536KTT | 100 | D2PAK | 2.0 | 120 | 11 | 3.7 | 1.9 |

Converting from USB Type-A to USB Type-C™ brings greater energy efficiency


USB Type-C™ adoption into the personal electronic space is ramping up. Many system and product definers are asking if they should convert USB Type-A ports into USB Type-C ports, and what that entails.

There are benefits and costs to making the switch. Besides the obvious advantage of having the new USB Type-C connector, which is easier for customers to use and has a smaller form factor, the system also becomes more energy-efficient. TI has a family of products that make the conversion very straightforward.

In this post, I want to focus on USB 2.0 and USB 3.1 data systems that are 5V only. The conversion is simplest for a USB 2.0 system. Figure 1 shows a typical schematic for a USB 2.0 Type-A system. Converting this system to USB Type-C is just a matter of replacing the USB power switch, as shown in Figure 2. TI has a large portfolio of USB power switches for Type-A ports, so this post is meant to help those using power switches such as the TPS20xxC family convert to Type-C ports.

Figure 1: Traditional USB 2.0 Type-A system using a TPS2051C USB Power Switch

Figure 2: USB 2.0 Type-C system

The first thing to notice about the USB Type-C power switch is that it has more pins and more features. The configuration channel (CC) pins CC1 and CC2 perform the USB Type-C functionality. USB Type-C allows the source to change its advertisement dynamically, so TI’s TPS25821 advertises either 0.5A or 1.5A, for example, based on the state of the CHG pin.

USB Type-C hosts are required to disable VBUS when nothing is attached. This cold-socket requirement has the great advantage of allowing systems to reduce quiescent current. For example, the TPS2051C in Figure 1 must remain enabled even when there is no cable attached – and will be consuming 80µA. In contrast, the TPS25821 will not apply voltage to the OUT pin when the receptacle is empty and will consume only 1µA. The TPS25821 also indicates whether the receptacle is empty via the  pin, which the system may use to reduce quiescent current.
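To put the quiescent-current difference in perspective, here is a rough back-of-the-envelope calculation. The 5V supply rail and the always-empty-port assumption are mine for illustration; the 80µA and 1µA figures come from the comparison above.

```python
V_SUPPLY = 5.0          # V, assumed rail powering the USB switch
IQ_TYPE_A = 80e-6       # A, TPS2051C enabled with no cable attached (from the post)
IQ_TYPE_C = 1e-6        # A, TPS25821 with an empty receptacle (from the post)
HOURS_PER_YEAR = 24 * 365

def standby_wh_per_year(iq_a=IQ_TYPE_A, iq_c=IQ_TYPE_C, v=V_SUPPLY):
    """Energy saved per empty port per year by the Type-C cold-socket behavior."""
    return (iq_a - iq_c) * v * HOURS_PER_YEAR

savings = standby_wh_per_year()   # roughly 3.5 Wh per port per year
```

A few watt-hours per port per year is small for one device, but it adds up across the millions of ports that sit empty most of the time.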

A USB Type-C receptacle has two pairs of USB 2.0 data contacts, but they are shorted together because the Type-C cable only has one pair of USB 2.0 data wires.

Before moving on to USB 3.1 systems, it may be a good idea for me to pause here and explain why it is not a good idea to try to implement a discrete solution. It is important that VBUS be low when the USB Type-C receptacle is empty. A USB Type-C system can connect to a USB Type-A port through an A-to-C cable. Since the USB Type-A port VBUS will always be about 5V, if the connection happens while the USB Type-C port is not cold-socketed, the port with the higher VBUS voltage will drive current into the other port. Many USB Type-A port power switches do not have reverse-current protection, so they could be damaged if driven to a higher voltage.

Figure 3 shows a discrete USB Type-C implementation that implements the cold-socket requirement. It is much simpler to just use a USB Type-C power switch!

Figure 3: Discrete USB 2.0 Type-C system

In Figure 2, the  pin on the USB power switch is unused because it is a USB 2.0 system. When converting to a USB 3.1 system, however, the  pin becomes very important because the flippability of the USB Type-C plug requires a SuperSpeed multiplexer. You cannot use the USB 2.0 stubbing trick for USB 3.1 systems because there are eight SuperSpeed data wires in a USB Type-C cable, not just the four being used. Stubbing signals together would cause the signal to travel back through the cable again, and at the short wavelengths of USB 3.1 signals, the reflections would severely degrade the signal.

There is another major change in the USB Type-C ecosystem that affects the requirements for a USB Type-A to USB Type-C conversion. Since a USB 3.1 Type-C cable may be connected in systems where VBUS reaches 20V, the VBUS wire can no longer directly power the active components in the cable (e.g., a signal redriver such as the TUSB544). Therefore, the electronics in the plug need a low-voltage supply from the USB Type-C host. USB Type-C hosts are required to provide this low-voltage supply onto the plug’s VCONN pin, which will be attached to either the CC1 or CC2 pin in the receptacle depending on plug orientation. The TPS25821 used in Figure 2 for USB 2.0 does not have VCONN capability, but the TPS25820 used in Figure 4 will automatically apply voltage to and discharge the VCONN pin of the cable according to USB Type-C specification requirements.

Figure 4 shows the simplest implementation for converting a USB 3.1 Type-A system (omitting the electrostatic discharge [ESD] protection on the SuperSpeed lines for simplicity). The TPS25820 pin controls the multiplexer so that the correct signals get routed to the USB 3.0 data system, but the addition of a multiplexer is a new burden for USB 3.1 systems.

Figure 4: USB 3.1 Type-C system

Finally, this post mainly targets USB systems that only provide the USB standard current.  However, USB Type-C provides a simple way to increase the amount of current advertised without overloading the D+/D- wires as in the CDP mode of Battery Charging (BC) 1.2. For example, you can configure the TPS25820 and TPS25821 to offer either the USB standard current or 1.5A by pulling the CHG pin high or low, while the TPS25810 can offer up to 3A.
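The CHG-pin behavior described above can be summarized in a tiny lookup helper. This is purely an illustrative sketch: which logic level selects which advertisement is an assumption here (confirm against the device datasheets), and 0.5A stands in for “USB standard current.”

```python
def advertised_current_a(part, chg_pin_high):
    """Source-current advertisement for the TI Type-C switches named in the post.

    Assumption: a high CHG pin selects the 1.5 A advertisement; the actual
    pin polarity should be confirmed against the datasheet.
    """
    if part == "TPS25810":
        return 3.0  # the post notes the TPS25810 can offer up to 3 A
    if part in ("TPS25820", "TPS25821"):
        # 0.5 A stands in for "USB standard current" here
        return 1.5 if chg_pin_high else 0.5
    raise ValueError("unknown part: " + part)
```

A real design would of course strap CHG in hardware; the function is just a compact restatement of the advertisement options.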

In summary, converting from USB Type-A to USB Type-C not only keeps you current with the latest technology trend; it also makes the system more energy-efficient and makes sourcing higher power easy.

Additional resources

 

How to charge and discharge battery test equipment


A battery test system (BTS) offers high voltage and current control accuracy to charge and discharge a battery. It is mainly used in manufacturing during production of the battery. Battery test equipment can also be used in R&D departments to study battery performance.

One typical application of a BTS is to charge and discharge a one-cell lithium-ion battery. Considering the voltage drop in the cable, the voltage required to do this is 0V to 5V. When the battery is charging, the power bus voltage is typically 12V in order to obtain good efficiency in voltage conversion. The bus voltage increases to 14V when the battery energy discharges back to the power bus. So the BTS requires a bidirectional converter that supports a 12V-to-14V input voltage and a 0V-to-5V output voltage. The converter operates in buck mode when charging the battery and in boost mode when discharging the battery.

The LM5170-Q1 bidirectional current controller is a good fit for this application. This device regulates the average current flowing between the high-voltage (HV) and low-voltage (LV) ports in the direction designated by the DIR pin. Figure 1 is a simplified schematic of the LM5170-Q1. The two-phase interleaved topology offers up to 100A of charge or discharge current. Its integrated 5A driver helps optimize voltage-conversion efficiency, which means small power losses and good thermal performance for the system. The diode-emulation feature of the LM5170-Q1 prevents current from flowing opposite to the direction selected by the DIR pin. This feature helps protect the battery from overcharge or overdischarge during transient periods.

Figure 1: LM5170-Q1 simplified schematic

One critical requirement of the BTS is the current accuracy, which should be better than 0.05% of full scale. Because the LM5170-Q1 integrated current amplifier can’t meet this requirement, an external current-sensing resistor and amplifier are required to improve the current accuracy.

To demonstrate the feasibility of the LM5170-Q1 in achieving this current accuracy, TI released the 50A, 0.05% Current Accuracy Power Reference Design for Battery Test Systems to target applications that charge or discharge one-cell lithium-ion batteries with high current accuracy.

Figure 2 is a block diagram of the reference design. The lithium-ion battery is connected to the LV port of the LM5170EVM through a 1mΩ resistor. The HV port is connected to the 12V-14V bus. The current through the battery is sensed via the voltage drop across the 1mΩ resistor using the INA188. The output of the INA188 feeds into the OP07C operational amplifier, which generates an integrating signal from the difference between the INA188 output and the reference voltage. The reference voltage is provided by a microcontroller not included in this reference design.

The output of the OP07C connects to the ISETA pin of the LM5170-Q1, which regulates the current flowing through the 1mΩ resistor to the current reference. The battery voltage is also sensed and regulated based on the voltage reference. While the battery voltage has not yet reached the voltage reference, the battery current is controlled at the current reference. When the battery does reach the voltage reference, the voltage loop overrides the current loop and the battery current decreases to zero. The direction of the current to charge or discharge the battery is controlled by a logic signal (indicated as “Direction” in Figure 2). This logic signal is connected to the DIR pin of the LM5170-Q1. It also controls the INA188 input signal direction through analog multiplexers.

Figure 2: 50A, 0.05% Current Accuracy Power Reference Design for Battery Test Systems block diagram
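The numbers above imply a very tight error budget at the sense resistor; this short sketch works it out (my arithmetic, using the 50A full scale, 0.05% target and 1mΩ shunt from the design).

```python
R_SENSE = 1e-3            # ohms, shunt between the LM5170EVM LV port and the battery
FULL_SCALE_A = 50.0       # A, full-scale charge/discharge current
ACCURACY_TARGET = 0.0005  # 0.05% of full scale

# Maximum allowed current error, and the equivalent voltage error at the shunt
max_current_error_a = ACCURACY_TARGET * FULL_SCALE_A   # 25 mA
max_sense_error_v = max_current_error_a * R_SENSE      # 25 uV referred to the INA188 input
```

A total budget of only about 25µV referred to the shunt is why the design bypasses the integrated current amplifier in favor of a precision external amplifier.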

Figure 3 shows the current accuracy when charging the battery from 1A to 50A. The test data shows that the current accuracy is much better than 0.05%, demonstrating the capability of the LM5170-Q1 in a BTS application.

Figure 3: Battery Charging Current Accuracy

Table 1 shows the output voltage at different reference-voltage conditions, with the voltage gap less than 0.5mV across the 0.05V-5V reference-voltage range, which is also good for BTS applications.

| Reference voltage (V) | Actual voltage (V) |
|---|---|
| 0.04968 | 0.04969 |
| 0.4988 | 0.49882 |
| 0.99788 | 0.99788 |
| 1.99434 | 1.99433 |
| 2.9912 | 2.99116 |
| 3.98935 | 3.98935 |
| 4.98531 | 4.98526 |

Table 1: Voltage Control Accuracy

The LM5170-Q1 is a good fit for applications that require an energy exchange between two power sources. Its high driver current, diode emulation and cycle-by-cycle peak-current limit features greatly improve efficiency in voltage conversion and also provide good protection during stable and transient periods. Get more information on TI’s products for battery test equipment.

In automotive lighting, it’s time to say goodbye to discrete solutions


Auto manufacturers are using light-emitting diodes (LEDs) beyond the traditional front, rear, day-running, stop and turn lights to make their vehicles stand out in the market. LEDs are now appearing in side markers, license plates, brand logos, welcome lights, and ambient lights.

To drive these LEDs, you must consider:

  • The accuracy of the drive current, since better accuracy improves brightness homogeneity across LEDs.
  • How to vary the brightness of the LEDs, which requires some form of dimming.
  • Diagnostics and protection for LED open/short circuits, as well as thermal protection, since safety is always a key concern in automobiles.
  • How to improve energy efficiency.

LEDs have traditionally been driven by discrete solutions. Figure 1 shows three typical options: an operational amplifier (op amp) (option 1), a bipolar configuration with the power directly connected to the car battery (option 2) or some type of shunt regulation (option 3).

Figure 1: Typical discrete solutions to drive LED

Let’s take a look at option 1 first, an op amp driving the FET from the low side. An op amp enables you to achieve relatively high accuracy (<10%), and dimmable LEDs are also possible. With this solution, however, it’s difficult to implement diagnostics for LED open and short circuits. And the dropout voltage is as high as 1V, which is not very energy-efficient.

Option 2 is also popular and consists of diodes and an NPN bipolar transistor. This solution is simple and cost-effective, but the accuracy is only about 20%, which is far from sufficient. The dropout voltage can be as high as 1.2V, making this solution’s efficiency even lower than option 1’s. Again, no diagnostics are available for LED open and short circuits, and no pulse-width modulation (PWM) dimming is available. This solution is really too old for today’s designs.

Option 3 is popular in applications that require very high output accuracy (<5%). The dropout voltage is extremely high (up to 3V), and no diagnostics or PWM dimming are available; thus, adoption of this option is narrow and the trade-offs are significant.
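To see why the dropout voltages matter for efficiency, consider the conduction loss V_dropout × I_LED in each discrete option. The dropout voltages (1V, 1.2V and 3V) are the ones quoted above; the 100mA string current is an arbitrary assumption for illustration.

```python
def dropout_loss_w(v_dropout, i_led_a):
    """Power burned in the linear pass element: P = V_dropout * I_LED."""
    return v_dropout * i_led_a

I_LED = 0.1   # A, assumed LED string current for illustration
losses = {
    "option 1 (op amp, 1.0 V dropout)": dropout_loss_w(1.0, I_LED),
    "option 2 (NPN, 1.2 V dropout)":    dropout_loss_w(1.2, I_LED),
    "option 3 (shunt, 3.0 V dropout)":  dropout_loss_w(3.0, I_LED),
}
```

At the same string current, option 3 dissipates roughly three times the power of option 1, all of it as heat in the pass element.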

Every solution has its advantages and disadvantages, but compared to discrete solutions, a low-cost monolithic solution lowers system-level component counts and significantly improves current accuracy and reliability. The TPS9261x-Q1 family shown in Figure 2 is designed just for this purpose.

Figure 2: TPS9261x-Q1 simplified circuit

The TPS92610-Q1 device is a single-channel high-side LED driver operating from the automotive car battery. It is a simple and elegant solution that delivers constant current for a single LED string with full LED diagnostics. The current accuracy is within 7%, which is adequate for most applications.

The TPS9261x-Q1 has different package options, from small outline transistor (SOT)-23, MSOP-8 up to thermally enhanced thin shrink small-outline package (HTSSOP)-14 to support different output power levels. With this family of single-channel automotive LED drivers, you can now say goodbye to discrete solutions.

 

Additional Resources:

How to achieve higher system robustness in DC drives, part 1: negative voltage


I recently moved to a new apartment, so I decided to go shopping for a vacuum cleaner. As I walked around the appliances section, I noticed how many of the vacuums were cordless. The manufacturers also seemed to be advertising more suction power, longer battery life and extended warranties.

One takeaway, as a customer, was that robust and reliable solutions are facilitating longer lifetimes, which in return improve a product’s reputation. This is relevant for applications like small home appliances, garden and power tools, and residential air conditioners that use electric motors. Most of these systems use DC drives such as brushed-DC (BDC) motors, three-phase brushless-DC (BLDC) motors or stepper drives. Given their high efficiency, lower audible noise and longer life spans, BLDC motors are widely used in appliances in order to achieve longer battery life, reduce cooling efforts and enable reliable operation.

Electronic motor control represents one of the main applications for MOSFET drivers. When selecting a gate driver for your DC drives, there are some design considerations to keep in mind in order to achieve higher system robustness. Part 1 of this blog series will cover negative voltage handling.

Negative voltage, in relation to a gate driver, is the ability to withstand negative voltages seen at the input and output pins. Negative voltages result from parasitic inductances excited by switching transitions, leakage or poor layout. These unwanted voltages commonly appear in applications like motor drives, appliances and switch-mode power supplies.

Of all the issues resulting from parasitic inductances, one of the main problems for motor control is the tendency of the switch node (HS) to undershoot ground following switching transitions. Figure 1 shows the internal parasitic inductances and board-layout inductances that exist in any design.

Figure 1 also shows that during the high-side turn-off, with continuous current flowing in the inductor, the low-side current rises as the high-side current falls. In most cases, this current flows in the body diode for a short period of time. The current in the parasitic inductance generates a negative voltage relative to the MOSFET channel and body diode. The result is a negative voltage spike on HS, which can cause gate-driver malfunction, DBOOT diode overcurrent or VHB-HS overvoltage.

Figure 1: Negative voltage on HS: di/dt effect on a gate driver
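A quick way to gauge how much negative excursion to expect is the familiar v = −L·di/dt relationship. Both numbers below are hypothetical placeholders chosen for illustration, not measured values from any board.

```python
def hs_undershoot_v(l_parasitic_h, di_dt_a_per_s):
    """Negative switch-node excursion from current slewing through parasitic inductance."""
    return -l_parasitic_h * di_dt_a_per_s

# Assumed values: 10 nH of loop inductance and a 1 A/ns turn-off current slew
v_spike = hs_undershoot_v(10e-9, 1e9)   # about -10 V
```

Even these modest assumed values put the spike near the UCC27710’s -11VDC capability, which is why the layout-driven inductance reduction discussed next matters so much.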

It’s important for gate drivers to have significant HS-negative voltage capabilities in order to improve robustness in your designs. For example, the UCC27710 600V driver maintains the correct output state within the voltage and time conditions shown in Figure 2. With a negative voltage capability of -11VDC across temperature, this solution offers robust operation under these conditions, which is critical for a reliable solution.

Figure 2: UCC27710 HS-negative voltage capability

Now, let’s discuss how to reduce those unwanted negative spikes. The parasitic inductances mainly come from the board layout. The layout of half-bridge power devices can be relatively tight, but what about the long trace from the FET to the bulk input capacitor?

Figure 3 shows a typical half-bridge driver and power-train layout. You can see that the MOSFETs are relatively close together, but due to PCB size constraints, the bulk capacitor is often placed farther from the FETs. This layout path results in source-to-capacitor parasitic inductance, which can produce large negative HS spikes.

Figure 3: Board layout path resulting in parasitic inductance

Figure 4 shows the bottom layer of the board layout. If you add high-voltage ceramic capacitors, you can place them very close to the power MOSFETs. Now the path from the low-side FET source to the capacitor is significantly shorter. Assuming that the parasitic inductance is proportional to the path length, you can reduce the negative spike, as Figure 4 illustrates.

Figure 4: Improved board layout resulting in a reduced negative spike

As you can see, negative voltage handling is a critical function for gate drivers. Keep this in mind to achieve higher system robustness when designing motor-drive applications.

Additional resources

Shrink module size with Flip Chip On Lead (FCOL) package technology


Power modules are becoming increasingly popular in many market segments – the ability to buy a power supply that already includes the switching inductor is a big advantage. The main reasons engineers are choosing power modules as opposed to a discrete alternative are:

  • Reduced solution size. Power modules are typically smaller than what designers can easily develop on their own. The ability to integrate active circuitry under the inductor can significantly reduce solution size and enables you to put more functions in a smaller space.
  • Reduced development time. Power-module designers are power experts; they put tremendous effort into ensuring the components used in the module are reliable and of the right value to ensure excellent performance. They choose the optimal control topology and confirm the layout is of high quality. The result is a solution that is robust, high-performance and easy to use.

In order for the power module market to continue to expand, the modules need to continue to get smaller and include more compelling features.

Power modules are built with several different manufacturing techniques, each with pros and cons. Figure 1 highlights some of the different approaches, including embedding the integrated circuit (IC) in a laminate, putting components onto a laminate and overmolding (or not), or putting components onto a metal leadframe and overmolding.

Figure 1: Example of various package types offered by Texas Instruments

All of these approaches can be designed with the inductor over the active circuit. In the embedded approach, the IC is integrated into the laminate and the inductor is placed on top (as in the TI TPS82130). For higher-current, laminate and leadframe-based designs (see the TPSM846C24), the active circuitry is placed under a stilted inductor. While in theory it is possible to implement these same techniques with a discrete design, in practice it is complicated to achieve in volume production.

The embedded approach enables the smallest module size for a given power level, but thermal limitations of the power IC in the laminate reduce the amount of power that this packaging technique can handle. Using a stilted inductor is also not a good approach when you need lower-current small-chip inductors.

Another factor to consider when developing a power module is how the IC is packaged. An unpackaged IC is best, as it is smaller and costs less than a packaged alternative. But it is more difficult to handle and test in production, and until now has not been a common approach. Most power modules either embed the silicon in the laminate and place the inductor over the top, or use a packaged IC mounted onto a leadframe or laminate.

In order to address the need for small size and good thermals, TI has added a new package approach to its portfolio. Called flip chip on leadframe (FCOL), a bumped die is mounted onto a leadframe along with passive components and then overmolded. TI has just released its first product with this new technology, the TPSM84209. (See Figure 2 for a look at how the module looks from the inside and outside.)

Figure 2: A more detailed look at the inside and outside of the TPSM84209 power module

Here are the advantages:

Smaller package size. The TPSM84209 comes in a 4x4.5x2mm package; it is the smallest 28V/2.5A power module on the market. With FCOL technology, the solution is smaller than if a discrete solution were used to implement the same circuit.

Overmolded package. Some engineers prefer an overmolded package; in select applications it is preferable to have no exposed active circuitry. A plastic overmold also improves thermals.

Improved thermals. The ΘJA of the TPSM84209 is just 32.7°C/W vs. 46.1°C/W for the TPS82130 (a 17V/3A embedded MicroSiP™ module). This allows higher output currents at higher temperatures (see Figures 4 and 5).

Good EMI. Because the IC is mounted on a metal leadframe that is soldered directly to the printed circuit board (PCB), it is possible to design very small products that meet Comité International Spécial des Perturbations Radioélectriques (CISPR) 11 specifications. The evaluation module (EVM) for the TPSM84209 has gone through electromagnetic interference (EMI) testing at an approved Underwriters Laboratories (UL) lab and it passes CISPR11 up to Class B.

Excellent reliability. Like all of TI’s modules, the TPSM84209 has gone through an extensive manufacturing qualification process which includes tests such as high temperature storage, life test, biased humidity and board-level reliability. The package/device is specified to moisture sensitivity level (MSL) 3 and 260°C/3X reflow. 

Figure 4: TPSM84209 Efficiency vs Output Current


Figure 5: Safe Operating Area 12Vin, 5Vout

There is no single package technology that is suitable for every product or application requirement. TI uses several different technologies to enable optimal module performance at different power levels. The new FCOL package is a good option when size is a key concern. Get more information on TI’s DC/DC power module portfolio.

 

What is the storage temperature rating, and why does it matter?


One key metric in determining power-module reliability is the storage temperature rating. When selecting a module, be sure that the module manufacturer independently verifies the stand-alone inductor component used in the power module. Inductor qualification tests are not standardized across companies, and the ratings that appear on data sheets can be inconsistent.

High-temperature storage (HTS) testing involves exposing the device to its rated storage temperature for 1,000 hours, and then checking for consistent characteristics before and after the exposure. All TI power modules are rated to a storage temperature of 125°C or above, meaning that the devices are guaranteed to maintain performance (efficiency) even after exposure to high temperatures.

In addition to this test in the module qualification regimen, module inductor components are pre-screened to ensure that they meet a 150°C storage temperature requirement as stand-alone components. Although inductor data sheets often cite a storage temperature, TI's power-module team has found that some inductors rated to a specified storage temperature on the data sheet may not be suitable for a particular switching-converter application. Without qualifying the inductors before prototyping and production, frustration and redesigns may occur.

The reason for this unsuitability is that thermal aging occurs in the magnetic material of some inductors after exposure to high temperatures. This well-known phenomenon may not have been much of a concern if you use your devices below 65°C, but it has a significant effect on devices made to operate at temperatures above 85°C. You can observe the change in the material by thermally measuring the losses of the inductor under test currents, or by using the inductor in a switching converter and noting a drop in efficiency, especially in wide-VIN or high-output-current devices. Although these two test methods can be complex, you can also detect thermal aging with two straightforward measurements, which TI uses to qualify and select inductor components.

Figure 1 shows two simple measurements that indicate HTS failure. Samples from four different inductor vendors are evaluated at 150°C with measurements every 168 hours, showing significant drops in both quality factor Q at the switching frequency, and shunt or core resistance measured without biasing the device. The inductors that fail this test demonstrate high losses post-exposure when used in a buck-converter application, in spite of a lack of any visually observable changes, or change to direct current resistance (DCR) or L0. This impact would be more significant in a wide Vin device and/or a device with a high output current simply due to the increased leakage current flowing through the now-reduced shunt resistance of the inductor.
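The screening logic described above can be sketched as a simple pass/fail check on the measured quality factor over the exposure intervals. This is a loose illustration only: the 20% drop threshold and the sample data are hypothetical, not TI's actual qualification criteria.

```python
# Hedged sketch of an HTS pre-screen: quality factor Q (at the switching
# frequency) is measured before exposure and after each 168-hour interval
# at 150C. The 20% drop threshold and sample data are hypothetical.
def passes_hts(q_initial, q_measurements, max_drop=0.20):
    """Fail the part if Q falls more than max_drop below its initial value."""
    return all(q >= q_initial * (1 - max_drop) for q in q_measurements)

# Example: one material stays stable, the other shows thermal aging.
print(passes_hts(30.0, [29.5, 29.0, 28.8]))  # stable material -> True
print(passes_hts(30.0, [22.0, 15.0, 9.0]))   # thermal aging -> False
```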

In the evaluation shown in Figure 1, feedback from TI enabled one vendor to sample a second device, using an improved magnetic material that did not demonstrate thermal aging (vendor A2 in Figure 1), which was then chosen for the TI module. The details of the exact revision made to the device are generally proprietary, but it is likely that the revised device had a different material formula than the first device.

Figure 1: Quality factor and core resistance as a function of test time for several inductor devices

Conclusion

TI has developed an independent inductor testing regimen for every inductor component selected for use in a power module. The HTS test is a critical aspect of TI reliability evaluation, enabling us to evaluate whether an inductor may exhibit significant core losses once exposed to high temperatures for an extended period.

This test is significant, since the deficiency may not be obvious when measuring only the inductance and DCR before and after exposure. Qualifying the inductor component by itself is critical to ensuring a reliable switching converter without revisions and reliability issues later in the design cycle.


How to design a simple constant-current/constant-power/constant-voltage-regulating buck converter


Introduction

In a previous blog titled "How to design a simple constant current/constant voltage buck converter," I discussed how to design a constant-current/constant-voltage (CC/CV) converter. With the addition of a simple modification, the functionality can be extended to regulate the output power and operate within a constant-power (CP) limit as the output voltage varies in the CC mode of operation. This blog discusses the simple modification needed to design a CC/CP/CV converter using the current monitor (CM) feature of the LM5117 buck controller.

Application example

In the CC mode of operation, the output voltage increases as the load resistance increases; as a result, the output power increases linearly as the current is regulated into the load. However, some applications require the output current to be reduced as the output voltage increases, thereby limiting the power delivered to the load. A relatively flat power limit over a wide output-voltage range can be achieved with a simple modification to a CC/CV converter that uses the LM5117.

Method of implementing CC/CP/CV

Figure 1 shows a typical discrete implementation of a CC/CP/CV converter.  The difference between the CC/CV converter and the CC/CP/CV converter is the addition of a feedforward resistor (Rff).  

Figure 1: A typical discrete implementation of a CC/CP/CV converter

Output current regulation

The LM5117 has a CM pin. When the converter is operating in continuous conduction mode, the voltage on the CM pin (VCMAVE) is proportional to the output current. By using a resistor divider from the VCMAVE to ground and connecting the divider tap point to the feedback node of the LM5117, you can control the output current. Equation 1 expresses the relationship between the voltage on the CM pin and the output current:

Where Rs is the current sense resistor and As is the internal current sense amplifier gain.

The relationship between the voltage on the CM pin (VCMAVE) and the voltage at the feedback node of the LM5117 (Vfb) is set by the resistor divider network and is expressed in equation 2.

Combining Equations 1 and 2 and rearranging for output current we can see how the output current is controlled by the feedback resistors from the CM to Vfb.  This is shown in Equation 3.

CP Programming

Rff offsets VCMAVE as Vout varies by injecting a current into Vfb. At higher output voltages, more current is injected into Vfb through Rff, which reduces VCMAVE. As Equation 1 shows, VCMAVE controls the output current (Iout); by reducing VCMAVE, we reduce the regulated output current as the output voltage increases. Equation 4 calculates the voltage reduction (VoffCM) at VCMAVE:

The current through Rff has a linear relationship with the output voltage, and the output voltage in turn sets Iout. Because we are in effect summing into Vfb, the power limit over a given output-voltage range will be nonlinear.

As an example, given the requirements of a 60W power limit and an output-voltage range of 6V to 12V, we can calculate the power-stage components. It is suggested that the power-stage components be selected for the highest Iout of 10A, corresponding to the lowest output voltage of 6V with a 60W power limit. From our calculations, we determine Rs = 12mΩ and As = 8.5. Note that the As of the LM5117 is reduced by external 200Ω series resistors at CS and CSG; refer to the LM5117 data sheet for details on how series resistors connected to these pins reduce the current-sense gain.

As a starting point, select values for RTopCM and RBotCM that will yield a maximum regulated output current 1.4 times greater than the specified regulated current of 10A, which occurs at the minimum output voltage. This suggestion is based on the fact that Rff will reduce VCMAVE as described earlier and therefore reduce Iout.

For example, for a 60W power limit at a 6V output, multiply 10A by 1.4, which yields 14A of regulated output current. Select 10kΩ for RBotCM and rearrange Equation 3 to calculate ~25kΩ for RtopCM. Select a standard value of 25.5kΩ for RtopCM.

Equation 5 calculates the amount of error introduced at the minimum Vout and serves as a good starting value for Rff:

With the addition of Rff at the minimum Vout, ensure that this error is subtracted from VCMAVE by making VCMerror equal to VoffCM. Set Equation 5 equal to Equation 4 and rearrange for Rff, as shown in Equation 6:

Evaluating Equation 6 yields Rff = 167kΩ. Select a standard value of 155kΩ for Rff. Using Equation 4, calculate VoffCM = 0.855V at a 6V output.

Equation 7 shows the resulting VCMAVE, which is determined by subtracting Equation 4 from Equation 2:

For the example given here, VCMAVE = 1.985V.
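The arithmetic can be checked numerically. The sketch below assumes the LM5117 regulates its feedback node to a 0.8V reference (verify against the data sheet); the divider and offset values come from the example above.

```python
# Reproduce VCMAVE for the 6V-output example above.
# Assumes the LM5117's 0.8V feedback reference (check the data sheet);
# resistor and offset values are from the text.
VFB = 0.8            # V, LM5117 feedback reference (assumed)
R_TOP_CM = 25.5e3    # ohms, RtopCM
R_BOT_CM = 10e3      # ohms, RBotCM
V_OFF_CM = 0.855     # V, offset injected through Rff at Vout = 6V (Equation 4)

# Divider relation (Equation 2) solved for the CM-pin voltage, minus the
# Rff-induced offset (Equation 7).
v_cm_ave = VFB * (R_TOP_CM + R_BOT_CM) / R_BOT_CM - V_OFF_CM
print(round(v_cm_ave, 3))  # ~1.985 V, matching the text
```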

VCMAVE controls the output current and this voltage decreases as the output voltage increases.  Equation 8 shows the relationship between the regulated output current (Ioutadj), VCMAVE and VoffCM.

Evaluating Equation 8 yields Ioutadj = 10.053A at a 6V output.

Multiplying Equation 8 by the output voltage gives the output power, shown in Equation 9:

Where

Evaluating Equation 10 yields Pout = 60.13W.

I recommend using Equations 7, 8 and 9 to check the power limit for a given output-voltage range to ensure that the power-limit profile suits the needs of your particular application. You can adjust Rff and RTopCM to modify the power-limit profile for a given output-voltage range.

Figure 2 shows a plot of the power limit as voltage increases using the values from the example. As previously stated, the CP regulation is not precisely constant, but the power variation in this example is less than ±7% over the full operating range.

Figure 2: Power-limit curve as a function of output voltage

CV programming

Looking at Figure 1, the voltage-clamp circuit uses the LM4041 voltage reference. The LM4041 is unique among references in that its reference voltage is set between its cathode and reference pins, as opposed to the anode and reference pins (cathode-referenced vs. anode-referenced voltage references, respectively). This is advantageous for this particular application because it minimizes the interaction between Rff and R1 during the transition from CP to CV control.

Being cathode-referenced, the LM4041 allows the voltage reference to turn on without introducing errors during the transition. By using the LM4041, the end result is a small change in the output voltage as the converter changes from CP to CV regulation, which is solely based on the values selected for RTopVC and RBotVC.

Equation 11 sets the voltage clamp set point:

CC/CP/CV design of the LM5117

The design approach for the power stage of a CC/CP/CV converter using the LM5117 is the same as it is for a basic buck converter. I suggest carrying out the design at the highest output power level using either WEBENCH® Designer or the LM5117 Quick Start Calculator. You can also look at the LM5117 data sheet for guidance on the design of the buck power stage.

Example schematic

Figure 3 shows a 48V-input LM5117 design with a 60W power limit from 6V to 12V and an output-voltage limiter (Vclamp) set to engage at 12.5V.

Figure 3: Example schematic for a 6V to 12V power limit of 60W and a voltage clamp set at 12.5V

Figure 4 shows the measured efficiency as a function of Iout.

Figure 4: Efficiency of the buck converter based on Figure 3

Figure 5 shows the measured power-limit profile as a function of the output voltage.

Figure 5: Power-limit profile

(I further reduced Rff to keep the power limit below 60W, selecting a final value of 150kΩ for Rff.)

Figure 6 shows the measured output voltage as a function of Iout. The converter’s sense resistor (Rs) sets the maximum current at the right side of the graph. Note that the value of Rs will affect the power-regulation variation.

Figure 6: Output-voltage variation as a function of Iout

Figure 7 shows the measured switch node (CH3), Vout ripple (CH1) and output current (CH4) at 48Vin, 6.6Vout at Iout > 8A.

Figure 7: Steady-state scope shots

Figure 8 shows the measured transient response at a 48V input voltage, with Vout (CH1), Iout (CH4) and a load step from 3.8A to 5.7A (0.1A/µs).

Figure 8: Transient response scope shots

Conclusion

The LM5117 configured as a CC/CP/CV converter provides a power-limit profile for an output-voltage range with the addition of an Rff resistor to a CC/CV converter. This design approach is relatively simple and provides advantages of reduced size, cost and power losses.

Look at the details when designing an industrial PC – 1% is not always 1%


With semiconductor process geometries shrinking to 10nm and below, designing the power supply for an industrial PC or single-board computer is more challenging than ever. In addition, the Industrial Internet of Things (IIoT) and Industry 4.0 are creating a stronger push toward smaller computing systems with increased performance. Figure 1 shows examples of supply requirements for selected FPGAs with very low core voltages and very tight tolerances.

Figure 1: Supply voltage requirements for selected FPGAs (source: FPGA datasheets) 

Processors, field-programmable gate arrays (FPGAs) or system on chips (SoC)/application-specific integrated circuits (ASICs) using ultra-small process geometries offer very high functional integration and extreme performance levels, but they also require very high accuracy on their power-supply rails. Designing for 1.0V core voltages and below requires careful calculations of all DC specifications, as well as AC transients of corner spreads and process variations, to avoid false resets, unreliable operation or malfunction in single-board computer or industrial PCs.

Using 0.1% resistors to set the output voltage and adding multiple output capacitors can help fulfill the typical ±5% supply accuracy requirement even at very low voltages. But these resistors add cost and take up board space. Choosing a power supply with a 1% feedback voltage accuracy specification gives you more flexibility and potential cost reductions when selecting the output-voltage-setting resistor divider and output capacitors.

Figure 2 shows an example of the tolerance stack-up with a 1% reference voltage and 1% resistor accuracy, summing up to ±1.8% DC variations.

Figure 2: Target specification with 5% variation at a 1.0V core supply (source: Texas Instruments)
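The stack-up in Figure 2 can be reproduced with a simple worst-case calculation over all tolerance corners. The 0.6V reference and 1.0V nominal output below are illustrative assumptions, not values from the figure; with a ±1% reference and ±1% divider resistors, the worst-case DC error lands near ±1.8%.

```python
# Worst-case DC set-point error for Vout = Vref * (1 + Rtop/Rbot).
# The 0.6V reference and 1.0V target are illustrative assumptions.
V_REF, TOL_REF = 0.6, 0.01   # +/-1% reference
V_OUT_NOM = 1.0
TOL_R = 0.01                 # +/-1% resistors
ratio = V_OUT_NOM / V_REF - 1  # nominal Rtop/Rbot

# Sweep every tolerance corner and keep the largest relative error.
worst = 0.0
for s_ref in (-1, 1):
    for s_top in (-1, 1):
        for s_bot in (-1, 1):
            vout = (V_REF * (1 + s_ref * TOL_REF)
                    * (1 + ratio * (1 + s_top * TOL_R) / (1 + s_bot * TOL_R)))
            worst = max(worst, abs(vout - V_OUT_NOM) / V_OUT_NOM)

print(f"worst-case error: +/-{worst * 100:.2f}%")  # close to +/-1.8%
```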

 

But you need to look at the details: not all 1% accuracy specifications are equal. Temperature variation and input-voltage dependency are common variables whose influences are sometimes not included in the 1% value. Some semiconductor suppliers show 1% accuracy on the first page of the data sheet, but this critical electrical parameter is often defined at only one temperature and one input-voltage point (for example, 25°C room temperature and a 3.6V input voltage).

TI’s new low-power TPS62147/TPS62148/TPS62135/TPS62136 (for 12V supply rails) and TPS62821/TPS62822/TPS62823/TPS62825/TPS62826 (for 5V supply rails) DC/DC buck converters can help solve these challenges with a 1% feedback-voltage accuracy (or output-voltage accuracy for fixed output-voltage options) specified over the full junction-temperature range of -40°C to +125°C and over the full input-voltage range of 3V to 17V or 2.4V to 5.5V, respectively. These DC/DC converters also offer a very small solution size; for example, a 1.5x1.5mm QFN for 2A or 3A (5V supply), or a 2x3mm QFN for 2A or 4A (up to 17V supply).

For more information, visit TI’s new Power Management for FPGAs and Processors web page.

Additional resources

A PMBus power solution for NXP QorIQ processors


NXP QorIQ processors are high-performance 64-bit Arm® multicore processors for cloud networking and storage applications. Two of the high-end QorIQ variants are the LS2085A and LS2088A.

There are several power-supply rails required on an LS2085A/2088A board: the most important are the core rail and the SerDes logic and receiver rail. Using the PMBus interface eases the programming and monitoring of these high-current rails.

TI offers the PMBus Voltage Regulator Reference Design for NXP QorIQ LS2085A/88A Processors, which fully meets LS2085A/88A tolerance specifications while requiring only 50% of the output capacitance necessary on NXP evaluation boards. The reference design consists of the TPS53681 dual-output PMBus controller and TI’s CSD95490Q5MC smart power stages, as shown in Figure 1.

Figure 1: NXP LS2085A/88A QorIQ power design

TI fully tested this reference design, and the board includes onboard load-transient generator circuits to test load transients and thus measure the AC output-voltage tolerance. The reference design board has all of the connections for complete power-supply testing, as well as programming of the TPS53681 and monitoring of the two voltage rails through the PMBus interface using TI’s Fusion Digital Power™ Designer (Figure 2).

Figure 2: NXP LS2085A/88A QorIQ power design evaluation board

Among the many tests are the important load-step and load-release transient tests, as shown in Figures 3 and 4. In both cases, the reference design meets the NXP tolerance specifications.

Figure 3: NXP LS2085A/88A QorIQ load-step test

Figure 4: NXP LS2085A/88A QorIQ load-release test

The design even includes a Nyquist plot on top of the Bode plot to prove design stability, as shown in Figure 5. The Nyquist plot indicates stability if the curve makes no encirclements of the point (–1, 0) as the test frequency is swept.

Figure 5: NXP LS2085A/88A QorIQ GVDD rail Nyquist plot
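The encirclement criterion can be illustrated with a toy winding-number check. This is only a sketch of the idea, not how the Fusion Digital Power tool evaluates stability: sum the signed angle changes of the sampled loop response around the curve and see whether it wraps the point (−1, 0).

```python
import cmath

def encircles_minus_one(points):
    """Return True if a closed curve (list of complex samples) winds around
    -1+0j. Assumes consecutive samples are close (angular step < pi)."""
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        total += cmath.phase((b - (-1)) / (a - (-1)))
    return round(total / (2 * cmath.pi)) != 0

# Two synthetic loop responses: a small circle that stays clear of (-1, 0)
# and a large one that wraps it.
n = 360
small = [0.5 * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
big = [2.0 * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
print(encircles_minus_one(small), encircles_minus_one(big))  # False True
```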

Finally, the design also offers thermal performance data of both VDD and GVDD power supplies, as shown in Figure 6.

Figure 6: VDD rail power-stage temperatures, 60A load (left); GVDD power-stage temperatures, 30A load (right)

NXP processors are now part of TI’s power for processors portal, which will be updated with TI power solutions and designs for additional NXP networking processors. So if you are designing with NXP QorIQ processors and don’t know where to start, download the reference design now.

Additional resources

Less (standby) power and more light


With the highly anticipated ratification of the latest Power over Ethernet (PoE) standard, Institute of Electrical and Electronics Engineers (IEEE) 802.3bt, an increasing number of applications are considering PoE as their power-delivery technology of choice. Historically, on the load side of the cable (also referred to as the powered device, or PD), PoE has been very popular in Internet protocol (IP) surveillance cameras, wireless access points, IP phones, and small home and office routers. A new application, PoE connected lighting, is garnering a lot of attention and consideration.

PoE is not currently an option in today’s light-emitting diode (LED) lighting ballasts for two reasons.

First, given the absolute power limit of the current IEEE 802.3at standard, PoE can deliver only 25.5W at the end of 100m of standard Ethernet cable, limiting the lumens generated from a PoE-based system and prohibiting its use in smart-building use cases. The upcoming IEEE 802.3bt standard – expected to publish in late summer 2018 – eliminates this issue. With the new standard, power delivered to the PD can reach 71W, which will meet the demands of many of the most popular LED ballast models available today.

Second, standby power is a critical consideration for connected lighting. Unlike a traditional luminaire with a hardware switch, intelligent LED ballasts are usually powered even when the light is off. Keeping the sensors and other intelligence on while the luminaire is off enables fast turn-on.

Reducing overall power consumption is a major selling point for intelligent LED ballasts, making it critical to keep standby power as low as possible. Today, the minimum maintain power signature (MPS) overhead budget is ~115mW (calculated as 50V × 10mA × 75ms/(75ms + 250ms)) at a VIN of 50V. Most designs will need to add margin to the MPS current and duty cycle to compensate for integrated-circuit tolerance and noise. Thus, a typical design following the IEEE 802.3at standard will likely consume 150mW-200mW of power. This may not seem like a big deal, but larger installations have hundreds or even thousands of lighting fixtures. It adds up fast!
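The ~115mW MPS overhead quoted above is just the supply voltage times the MPS current, duty-cycled by the on-time fraction. A quick check of the arithmetic:

```python
# Reproduce the ~115mW MPS overhead budget from the text:
# 50V supply, 10mA MPS current pulses, 75ms on out of a 325ms period.
V_IN = 50.0     # V
I_MPS = 0.010   # A
T_ON = 0.075    # s
T_OFF = 0.250   # s

p_mps = V_IN * I_MPS * T_ON / (T_ON + T_OFF)
print(f"{p_mps * 1000:.0f} mW")  # 115 mW
```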

The new IEEE 802.3bt standard can achieve significant improvements in standby power consumption. Using the Connected LED Lighting IEEE802.3bt Power Over Ethernet (PoE) Reference Design, the entire standby power consumption is just 181mW (see Figure 1). This includes all of the operation required during standby (powered device, microcontrollers [MCU], physical layer [PHY], media access control [MAC], etc.). It’s easy to see why lighting manufacturers are excited about the new MPS specification in the upcoming IEEE 802.3bt standard. When combined with an optimized design, the MPS specification significantly helps minimize standby consumption and reduce the carbon footprint.

Figure 1: Input power consumption in standby (lights off)

Of course, some may question the value of PoE-connected lighting beyond these straightforward power-demand and overall system-efficiency calculations. What about cost? According to one lighting solution provider, installation costs can be reduced by as much as 25% and commissioning costs by as much as 50% when installing PoE-based networks.

As for end-user value, connected lighting brings many exciting new features, including customized brightness and color control; activity tracking/people counting; and visible light communication (VLC), which ultimately enables indoor positioning systems.

Early feedback on adding PoE to connected LED lighting ballasts is positive and gaining momentum. TI’s TPS2372-3 and TPS2372-4 PoE powered devices were developed specifically for such applications.

Additional resources

How to approach a power-supply design – part 5


In this final installment of the topology blog series, I’ll introduce the inverting buck-boost converter and Ćuk converter. Both topologies allow you to generate a negative output voltage from a positive input voltage.

Inverting buck-boost converters

The inverting buck-boost topology can step up and step down its input voltage while the output voltage is negative. The energy transfers from the input to the output when switch Q1 is not conducting. Figure 1 shows the schematic of a nonsynchronous inverting buck-boost converter.

Figure 1: Schematic of a nonsynchronous inverting buck-boost converter

Equation 1 calculates the duty cycle in continuous conduction mode (CCM) as:

Equation 2 calculates the maximum metal-oxide semiconductor field-effect transistor (MOSFET) stress as:

Equation 3 gives the maximum diode stress as:

where Vin is the input voltage, Vout is the output voltage and Vf is the diode forward voltage. The value for Vout needs to be negative in all three equations.

Since there is no inductor-capacitor (LC) filter facing either the input or the output of the inverting buck-boost converter, this topology has pulsed currents at both converter ends, leading to rather high voltage ripple. For electromagnetic interference (EMI) compliance, additional input filtering might be necessary. If the converter needs to supply a very sensitive load, a second-stage filter at the output might not provide enough damping of the output-voltage ripple. In such cases, I recommend using a Ćuk converter instead.

You can build an inverting buck-boost converter using a buck controller or converter, since the topology requires a P-channel MOSFET or a high-side MOSFET driver. However, the ground terminal of the controller or converter integrated circuit (IC) needs to be connected to the negative output voltage. The IC then regulates the ground signal with respect to the negative output voltage.

The right half-plane zero (RHPZ) is the limiting factor for the inverting buck-boost converter’s achievable regulation bandwidth. The maximum bandwidth is roughly one-fifth the RHPZ frequency. Equation 4 estimates the single RHPZ frequency of the inverting buck-boost converter’s transfer function:

where Vout is the output voltage, D is the duty cycle, Iout is the output current and L1 is the inductance of inductor L1. The values for both Vout and Iout need to be negative.
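As a numeric illustration of Equations 1 and 4, the sketch below uses the ideal CCM duty-cycle relation (diode drop neglected) and the buck-boost RHPZ expression with the sign convention from the text (Vout and Iout negative). The 12V-to-−5V, 2A, 10µH operating point is a hypothetical example, not from the article.

```python
import math

# Ideal CCM duty cycle and RHPZ estimate for an inverting buck-boost.
# Vout and Iout are negative per the text's sign convention; the diode
# forward drop is neglected in this sketch. Operating point is hypothetical.
V_IN = 12.0      # V
V_OUT = -5.0     # V (negative output)
I_OUT = -2.0     # A
L1 = 10e-6       # H

duty = -V_OUT / (V_IN - V_OUT)                      # = |Vout| / (Vin + |Vout|)
f_rhpz = V_OUT * (1 - duty) ** 2 / (2 * math.pi * duty * I_OUT * L1)
f_max = f_rhpz / 5    # rule-of-thumb maximum regulation bandwidth from the text

print(f"D = {duty:.3f}, RHPZ = {f_rhpz / 1e3:.1f} kHz, max BW = {f_max / 1e3:.1f} kHz")
```

Note that the two negative quantities (Vout and Iout) cancel, so the RHPZ frequency comes out positive, around 67kHz here, limiting the usable crossover to roughly 13kHz.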

Figures 2 through 7 show voltage and current waveforms in CCM for the FET Q1, inductor L1 and diode D1 in a nonsynchronous inverting buck-boost converter.

Figure 2: Inverting buck-boost FET Q1 voltage waveform in CCM

Figure 3: Inverting buck-boost FET Q1 current waveform in CCM

Figure 4: Inverting buck-boost inductor L1 voltage waveform in CCM

Figure 5: Inverting buck-boost inductor L1 current waveform in CCM

Figure 6: Inverting buck-boost diode D1 voltage waveform in CCM

Figure 7: Inverting buck-boost diode D1 current waveform in CCM

Ćuk converters

The Ćuk topology can step up and step down its input voltage while the output voltage is negative. The energy transfers from the input to the output when switch Q1 is not conducting. Figure 8 shows the schematic of a nonsynchronous Ćuk converter.

Figure 8: Schematic of a nonsynchronous Ćuk converter

Equation 5 calculates the duty cycle in CCM as:

Equation 6 calculates the maximum MOSFET stress as:

Equation 7 gives the maximum diode stress as:

where Vin is the input voltage, Vout is the output voltage, Vf is the diode forward voltage and VC1,ripple is the voltage ripple across the coupling capacitor C1. The value for Vout needs to be negative for these three equations.
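To make Equations 5 and 6 concrete, here is a minimal sketch with the same sign convention (Vout negative), neglecting the diode drop Vf and the VC1 ripple term, so the stress value is a lower bound on Equation 6. The 12V-to-−12V operating point is hypothetical.

```python
# Ideal CCM duty cycle and MOSFET voltage stress for a Cuk converter.
# Vf and the VC1 ripple term are neglected in this simplified sketch;
# the operating point is a hypothetical example.
V_IN = 12.0    # V
V_OUT = -12.0  # V (negative output)

duty = -V_OUT / (V_IN - V_OUT)   # = |Vout| / (Vin + |Vout|)
v_q1_stress = V_IN - V_OUT       # = Vin + |Vout|, across Q1 when it is off

print(duty, v_q1_stress)  # 0.5 24.0
```

A 1:1 inverting Ćuk converter thus runs at a 50% duty cycle, and Q1 must block the sum of the input and output magnitudes, 24V here, plus the diode drop and half the coupling-capacitor ripple in the full Equation 6.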

The LC filter L2/Co in a Ćuk converter faces the output. As a result, the output ripple is quite small, because the output current is continuous. Looking at the input, there is another LC filter, L1/Ci, so the input current is continuous as well, resulting in very small input ripple, too. This makes the Ćuk converter a perfect fit for applications that require a negative output voltage while being very sensitive at both the input and output, such as power supplies for telecommunications.

You can easily build a Ćuk converter by using a boost controller, because MOSFET Q1 needs to be driven on the low side. The boost converter or controller IC will typically only accept a positive feedback voltage at its feedback pin. You can translate the negative output voltage into a positive voltage signal by using a simple inverting operational amplifier circuit.

The Ćuk converter has an RHPZ as well. The power stage cannot immediately react to changes at the output, because the energy transfers to the output when switch Q1 is off. The maximum achievable crossover frequency is again one-fifth the RHPZ frequency. Note that a Ćuk converter has more than one RHPZ. Equation 8 estimates one of the Ćuk converter’s RHPZs as:

where D is the duty cycle, L1 is the inductance of inductor L1 and C1 is the capacitance of coupling capacitor C1.

Figures 9 through 18 show voltage and current waveforms in CCM for the FET Q1, inductor L1, coupling capacitor C1, diode D1 and inductor L2 in a nonsynchronous Ćuk converter.

Figure 9: Ćuk FET Q1 voltage waveform in CCM

Figure 10: Ćuk FET Q1 current waveform in CCM

Figure 11: Ćuk inductor L1 voltage waveform in CCM

Figure 12: Ćuk inductor L1 current waveform in CCM

Figure 13: Ćuk coupling capacitor C1 voltage waveform in CCM

Figure 14: Ćuk coupling capacitor C1 current waveform in CCM

Figure 15: Ćuk diode D1 voltage waveform in CCM

Figure 16: Ćuk diode D1 current waveform in CCM

Figure 17: Ćuk inductor L2 voltage waveform in CCM

Figure 18: Ćuk inductor L2 current waveform in CCM

Much like for a single-ended primary inductance converter (SEPIC) or Zeta converter, it can also make sense to use coupled inductors for a Ćuk converter instead of two separate inductors. Coupled inductors offer two advantages: first, only half the inductance of a two-inductor solution is required for similar current ripple, because coupling the windings leads to ripple cancellation; second, you can eliminate the resonance in the transfer function caused by the two inductors and the coupling capacitor. This resonance usually needs to be damped by a resistor-capacitor (RC) network in parallel with coupling capacitor C1.

One drawback to using coupled inductors is that you need to use the same inductance value for both inductors. Another limitation of coupled inductors is typically their current rating. For applications with high output currents, you might need to use single inductors.

You can also configure both inverting buck-boost and Ćuk converters as a converter with synchronous rectification if your application requires output currents greater than 3A. If you implement synchronous rectification for a Ćuk converter, you need to AC-couple the high-side gate-drive signal, because many controllers require you to connect them to the switch node. The Ćuk converter has two switch nodes, so take care to avoid negative voltage rating violations on the SW pin.

Additional resources
