
How to get the most benefits out of WCSPs for portable electronics


Wafer chip-scale packages (WCSPs) are becoming more popular in portable electronics due to their smaller footprint, lighter weight, better electrical parameters and lower manufacturing costs. WCSPs enable a direct interconnection between the silicon chip and the printed circuit board (PCB). Their minimal footprint frees up a lot of space on the PCB for other components. Figure 1 illustrates a WCSP.

Figure 1: WCSP illustration

A WCSP also reduces parasitic resistance, inductance and weight by eliminating the extra bond wires, lead frames and encapsulation found in other packages. The lowered resistance in the power-conversion integrated circuit (IC) results in higher efficiency. As Figure 2 shows, the bq25898x in a WCSP has lower power loss than a competing IC family while providing 13% more charging current for the same 1W power-loss budget. This is very significant for maintaining the same equipment case temperature at a higher charging current, as discussed in the blog post, “How 1.2% more efficiency can help you charge faster and cooler.”


Figure 2: Power loss of the bq25898 in a WCSP

For the same power loss, however, the IC temperature increase is usually higher in WCSPs due to their higher junction-to-ambient thermal resistance (RθJA). The smaller footprint also leaves little room for component placement and wire routing. Optimizing PCB layout and maintaining good thermal performance are therefore big challenges.
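To see why RθJA matters, here is a minimal back-of-the-envelope sketch (in Python) of the steady-state junction temperature. The RθJA values are placeholders for illustration, not bq25898 specifications:

# Rough junction-temperature estimate: Tj = Ta + P x RthetaJA.
# The RthetaJA numbers below are hypothetical -- use the value from your
# own package and board thermal data.
def junction_temp(power_loss_w, r_theta_ja, ambient_c=25.0):
    return ambient_c + power_loss_w * r_theta_ja

for label, rja in [("well-spread layout", 40.0), ("poor layout", 55.0)]:
    print(f"{label}: Tj ≈ {junction_temp(1.0, rja):.1f} °C at 1 W loss")

Even a few degrees per watt of difference in effective RθJA translates directly into the case-temperature margin discussed above, which is why PCB layout matters so much for WCSPs.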

Figure 3a is an example layout of the bq25898. All critical components, namely the input (PMID and VBUS), system (SYS) and battery (BAT) capacitors, are placed next to the device pins, keeping the high-frequency current loop to a minimum. All power lines use the maximum copper pours; almost the entire PCB is covered by copper. A large ground copper pour is created at the device GND and connected solidly to the other grounds.

Figure 3b is a thermal image of the bq25898 taken at a 3A charging current with a 9V input and a 3.9V battery. As you can see, heat is evenly distributed across the IC, with no excessive heat buildup or hot spots.

This experiment shows that a PCB layout like that shown in Figure 3 can reduce the IC temperature from 55°C to 52°C compared to a “normally” laid-out PCB. For more details about layout optimization, see the application note, “Achieving the Optimal Thermal Performance for Chip Scale Package.”

 


Figure 3: PCB top-layer layout (a); and thermal image (b)

As you can see, an optimized PCB layout can significantly improve thermal performance and prevent hot spots. Done properly, a WCSP enables a lighter, faster and cooler charger, with a better customer experience.

 

Additional resources:


Drive a piezoelectric buzzer with a simple boost converter


A piezoelectric buzzer is a loudspeaker that uses the piezoelectric effect to generate sound. Its light weight, simple construction and low price make it suitable for many applications, such as home appliances, shared bikes and alarm devices. A typical piezoelectric buzzer is made of piezoelectric ceramic adhered to a metal plate.

Piezoelectric ceramic is a man-made material that harnesses the piezoelectric effect discovered in 1880 by Jacques and Pierre Curie. Applying an oscillating electrical field causes the ceramic material to stretch or shrink. The deformation of the ceramic causes the metal plate to bend in two directions and generate sound waves in the air, as shown in Figure 1. Assembling the plate into a plastic case improves the sound pressure of the buzzer.

Figure 1: Operating principle of a piezoelectric buzzer

A piezoelectric buzzer has the highest sound pressure when driven with a frequency equal to its resonant frequency. Thus, it is normally used in applications where volume and high-pitch requirements are more important than sound quality. In a shared bike application, a piezoelectric buzzer is used in the smart lock to indicate lock status clearly despite conditions like a noisy street. The function of the buzzer as an alarm device or in a home appliance is identical.

When driven at its resonant frequency, a piezoelectric buzzer's sound pressure mainly depends on the driving voltage amplitude: the higher the voltage, the higher the sound pressure. Taking Murata’s PKM13EPYH4002 as an example, the maximum driving voltage amplitude, 30V, generates the highest sound pressure. But the battery in a shared-bike smart lock or alarm device is normally a single-cell lithium-ion battery or alkaline batteries in series, so the battery voltage is no higher than 4.2V – much lower than the voltage requirement. Thus, a boost topology is required to convert the voltage. In addition, the boost circuit should have a small shutdown current in order to extend battery life, since a piezoelectric buzzer is off most of the time and only powers on when necessary.

The input voltage of TI’s TLV61046A ranges from 1.8V to 5.5V, which covers two alkaline batteries in series, a single-cell lithium-ion battery, and 3.3V or 5V bus power systems. The maximum output voltage is 28V with 29V of overvoltage protection (OVP). The OVP function protects the boost converter and load from damage if the feedback resistor divider is incorrectly soldered during manufacturing.

The input peak current of the TLV61046A is set to 980mA, which is high enough to drive a piezoelectric buzzer. The cycle-by-cycle current-limit and short-circuit protection functions protect the converter and power supply during short-circuit conditions.

The TLV61046A also integrates a power rectifier diode and an isolation metal-oxide semiconductor field-effect transistor (MOSFET). The rectifier diode eliminates the external Schottky diode found in traditional high-voltage boost converters, reducing cost and solution size. The isolation MOSFET disconnects the current path from the power supply to the load when the device is disabled in order to reduce current consumption. Because of its high level of integration, implementing a TLV61046A boost solution is simple, as shown in Figure 2.

Figure 2: TLV61046A schematic

As I mentioned above, an oscillating voltage is required to drive a piezoelectric buzzer. An NPN bipolar transistor can generate the oscillating voltage, as shown in Figure 3. The output voltage of the TLV61046A is set to a maximum of 28V. LS is the piezoelectric buzzer. The pulse-width modulation (PWM) signal comes from an input/output (I/O) pin of a microcontroller; its frequency should equal the buzzer's resonant frequency for the highest sound pressure. R4 limits the output current of the I/O pin. When the PWM signal is higher than 0.7V, Q1 turns on and LS charges to 28V. When the PWM signal is zero, Q1 turns off and R3 discharges LS. The voltage across LS is a rectangular waveform during operation, as shown in Figure 4.

Figure 3: Simplified schematic of a 28V piezoelectric buzzer driver


Figure 4: Operating waveform of a 28V piezoelectric buzzer driver
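Because R3 alone has to discharge the buzzer every half cycle, it is worth checking the RC time constant against the drive period. Here is a minimal sketch of that check in Python; the buzzer capacitance, resonant frequency and R3 value are hypothetical placeholders, not values from the reference design:

# Sanity check that R3 can discharge the piezo element between PWM edges.
# All numbers are illustrative -- substitute your buzzer's datasheet
# capacitance, its resonant frequency and your chosen R3.
f_res_hz = 4000.0      # PWM frequency = buzzer resonant frequency
c_buzzer_f = 8e-9      # piezo element modeled as a capacitor
r3_ohm = 2.2e3         # discharge resistor across the buzzer

half_period_s = 1.0 / (2 * f_res_hz)   # time Q1 is off with a 50% duty cycle
tau_s = r3_ohm * c_buzzer_f            # RC discharge time constant

print(f"half period {half_period_s*1e6:.0f} us, discharge tau {tau_s*1e6:.1f} us")
# Allow several time constants inside the half period so the buzzer
# voltage settles before the next edge.
print("R3 fast enough:", 5 * tau_s < half_period_s)

If the check fails, reducing R3 speeds up the discharge, at the cost of more current drawn from the 28V rail while Q1 is on (assuming R3 sits in parallel with the buzzer).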

Driving the piezoelectric buzzer with a voltage amplitude of ±15V or higher is also possible with the TLV61046A; Figure 5 shows the simplified schematic used to generate this voltage. The output voltage is set to 15V, and a simple diode-based charge-pump circuit produces 30V. Q1 and Q2 form a totem-pole circuit to drive the negative side of LS to approximately 30V or 0V. Thus, the voltage across LS is a ±15V rectangular waveform.

Figure 6 shows the operating waveform of this circuit. By setting the output voltage of the TLV61046A to 28V, the driving voltage of the buzzer can be as high as ±28V, which greatly increases the sound pressure.

Figure 5: Simplified schematic of a ±15V piezoelectric buzzer driver

Figure 6: Operating waveform of a ±15V piezoelectric buzzer driver

The TLV61046A is a highly integrated boost converter with a wide input-voltage range and up to 28V of output voltage. Its rich feature set makes it easy to use not only in piezoelectric buzzer drivers, but also in passive-matrix organic light-emitting diode (PMOLED) displays, high-side MOSFET drivers and liquid-crystal display (LCD) panels.

For more details about the circuit and test bench data discussed in this post, check out the High Voltage Cost-Effective Piezoelectric Sounder Driver Reference Design.

What is a low noise inverting buck converter?


Do you need a negative voltage to power your data converter or amplifier? These types of systems usually require a low-noise supply for such sensitive analog circuitry. While a low-dropout linear regulator (LDO) is frequently used to achieve low noise, what if the switching DC/DC converter could provide low noise by itself? If you could use only a DC/DC converter, you could save significant bill-of-materials (BOM) cost, reduce your printed circuit board (PCB) space and remove additional power loss from your system.

How does a low-noise inverting buck converter work? Figure 1 shows the basic schematic and typical waveforms from the TPS63710. The CCP capacitor, between the CP and SW pins, inverts the supply voltage, which is then further downregulated. The low-side (LSD) and rectifying (RECT) MOSFETs provide synchronous rectification for high efficiency. To begin a switching cycle, the high-side (HSD) and RECT MOSFETs turn on to charge CCP to VIN. Once charged, both the HSD and RECT MOSFETs open and the LSD MOSFET closes to deliver CCP’s stored energy to the output. The duty cycle is regulated to provide a low-noise and regulated output voltage.

Figure 1: Simplified schematic of a low-noise inverting buck converter, TPS63710

The low-noise inverting buck converter is a voltage inverter – it creates a negative voltage. Because it is an inverting buck converter and not an inverting buck-boost converter, the output voltage (in absolute value) must always be below the input voltage for proper regulation.

Let’s see how the entire solution benefits from the combination of all of its traits. Figure 2 shows the full schematic of the TPS63710 low-noise inverting buck converter.

Figure 2: The low-noise inverting buck converter TPS63710

Low noise is the most important characteristic when trying to achieve critical system performance parameters such as signal resolution and noise floor. The power supply, whether a switching DC/DC converter or otherwise, should not add noise to the system, because this noise can show up on the measured signal and result in distortions. The topology’s continuous output current and the device’s CAP pin enable the TPS63710 to achieve very low noise.

Continuous output current refers to the inductor being in series with the output stage and load. This is the same as in a typical step-down (buck) converter and supports a low output-voltage ripple from the switching action. Specifically, this topology keeps the output voltage free from most high-frequency noise that would otherwise be created by having a discontinuous output current across the output capacitor’s equivalent series resistance (ESR) and equivalent series inductance (ESL), as found in the traditional inverting buck-boost topology. While some high-frequency noise above 100MHz still couples through the inductor’s windings, you can easily filter this noise with a ferrite bead. The inverting buck topology creates a lower magnitude of switching ripple at its 1.5MHz switching frequency than the inverting buck-boost topology with the same output filter.

Additionally, the CAP pin reduces the self-generated, low-frequency 1/f noise that typically comes from the voltage reference (bandgap) inside the DC/DC converter. The external capacitor (CCAP) forms a low-pass filter with an internal resistor to achieve extremely low noise below 100kHz, as shown in Figure 3. This noise-reduction function is the same as the one found on a high-performance LDO’s noise-reduction and soft-start (NR/SS) pin, such as on the TPS7A8300. Low-frequency 1/f noise is the biggest concern in many telecommunications systems because it falls within the bandwidth of the transmission channel and is difficult to filter out; including this 1/f-noise filter inside the DC/DC converter reduces it to acceptable levels.

Figure 3: Output noise density of a low-noise inverting buck converter
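As a rough illustration of how that filter works, the sketch below computes the corner frequency of a first-order RC low-pass filter. The internal resistance value is an assumption made for illustration only, not a TPS63710 datasheet number:

import math

# First-order low-pass corner formed by the external CCAP and the device's
# internal series resistance. R_INT is a placeholder assumption -- check
# the datasheet for the real value before relying on this number.
R_INT = 250e3    # assumed internal filter resistance, ohms
C_CAP = 1e-6     # external CAP-pin capacitor, farads

f_corner_hz = 1.0 / (2 * math.pi * R_INT * C_CAP)
print(f"reference-noise filter corner ≈ {f_corner_hz:.2f} Hz")
# Reference noise above this corner (including the 1/f region up to
# ~100 kHz mentioned above) is attenuated before it reaches the error amp.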

At last, there is a switching DC/DC converter with low noise to create that cumbersome negative rail in a single inverting step. Which part of your analog signal chain will it power?

Additional resources

FPGA power made simple: system architecture


Field-programmable gate arrays (FPGAs) are used in applications ranging from medical devices, to wired communications, to aerospace and defense. FPGAs simplify the design process by providing a reprogrammable circuit; this ability to repeatedly reprogram enables quick prototyping and eliminates the need to create a custom application-specific integrated circuit (ASIC). FPGAs are a relatively inexpensive solution even in small quantities, making them popular at both small and large companies. However, because of the multiple rails required to power the FPGA (as shown in Figure 1), it can be confusing to design the power circuitry.

Figure 1: Basic FPGA schematic

Each rail will have different requirements for current, accuracy, voltage ripple, load transients and sequencing. This means that the power design will require multiple power supplies to meet all of the different rail requirements. In this four-part series, I will break down the basic design considerations of designing FPGA power circuitry, starting with my first topic: system architecture.

The requirements of your application will inform your system architecture. Typically, as shown in Figure 2, designers will use a DC/DC converter to step down from the power source to an intermediate rail. Additional power supplies will then step the intermediate voltage down to the point-of-load (POL) power required.

Figure 2: Typical system architectures

One of the first steps is deciding what voltage to use for the intermediate rail. The most common intermediate rail voltages are 12V, 5V and 3.3V. Typically, the lower the intermediate rail voltage, the higher the efficiency of the conversion down to the POL level. However, a lower intermediate rail voltage requires a higher input current, and depending on how high the power-source voltage is, fewer devices are available that can step it down directly to 3.3V. Table 1 summarizes the trade-offs.

Table 1: Intermediate rail voltage trade-offs
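To make the current trade-off concrete, here is a minimal sketch that compares the current an intermediate rail must carry for a fixed point-of-load power budget. The power level and POL-stage efficiencies are hypothetical numbers, not values from any vendor spreadsheet:

# Current carried by the intermediate rail for a fixed POL power budget.
# The 30 W budget and the per-rail efficiencies are illustrative only.
pol_power_w = 30.0
assumed_pol_efficiency = {12.0: 0.92, 5.0: 0.94, 3.3: 0.95}

for v_rail, eta in assumed_pol_efficiency.items():
    input_power_w = pol_power_w / eta
    rail_current_a = input_power_w / v_rail
    print(f"{v_rail:>4.1f} V rail: {rail_current_a:5.2f} A, {input_power_w:.1f} W drawn")

The lower rails must carry markedly more current, which is exactly the trade-off Table 1 captures: better downstream conversion efficiency versus heavier current on the intermediate distribution.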

Defining the system architecture will determine what devices and how much power you need to feed your design. After selecting the architecture, you can then move on to the next step: determining the current level. To determine the current requirements, I recommend using the spreadsheets provided by your FPGA vendor. In these spreadsheets, enter the specific FPGA you are using and other details of your design (like clock frequency and temperature), and the tool will calculate the voltage and current requirements of each rail.

Once you define the system architecture and have estimated the current requirements, it is time to start looking at the requirements of individual rails, which I’ll cover in the next blog post. In the meantime, get more information on TI’s solutions for Xilinx and Altera FPGAs.

How to design a simple two-phase current-sharing synchronous buck regulator using a voltage-mode controller


I’m often asked, “Can you tie the outputs of two voltage-mode buck controllers together, and will the currents in each phase share evenly?” Many experienced designers know that it’s often easier to implement current sharing with a peak current-mode controller by simply tying the compensation pins together. You can achieve reasonable accuracy this way because the compensation voltage on a current-mode controller is proportional to the peak inductor current, which is related to the output current. Tying the compensation pins together ensures even distribution of the currents between the phases.

Things are not so easy and straightforward when using two voltage-mode controllers, however. So in this blog post, I’ll discuss a method of getting two voltage-mode controllers to share evenly.

Voltage-mode control vs. current-mode control

The advantages and disadvantages of voltage-mode control (VMC) vs. current-mode control (CMC) are popular discussion topics. VMC has a couple of advantages over CMC. With CMC, high-di/dt cycle-by-cycle current information is injected into the feedback loop to generate a ramp voltage presented to the pulse-width modulation (PWM) comparator, where it is compared to the error voltage. VMC generates its ramp internally and, as a result, is less prone to noise and duty-cycle jitter.

The need to feed the cycle-by-cycle current into the feedback loop to generate a ramp requires filtering of the current information, a process known as leading-edge blanking. Leading-edge blanking affects the minimum controllable on-time and is specified in data sheets as Ton min. The Ton min (max) is a specification often scrutinized in high step-down ratio designs because it is the minimum controllable on-time that the converter power stage must exceed. VMC typically enables very small duty-cycle control without the impact of relatively large leading-edge blanking times.

Current imbalances

So, addressing the original question, “Can you tie the outputs of two voltage-mode buck controllers together?”: one of the two phases will likely hit a current limit, as the output of one phase sources current into the other because of the different voltage setpoints of each phase. Even if the output setpoints were identical, the phases would be unlikely to share evenly; the imbalance would cause one phase to deliver more current than the other and run hotter. In order to get the phases to share evenly, you must first measure the current.

DCR current sensing

Direct current resistance (DCR) sensing is a method of measuring the current in each phase that does not add additional losses or costs. Using a resistor-capacitor (RC) network across the inductor, as shown in Figure 1, the voltage sensed across C1 (VCs) is proportional to the voltage across the inductor winding’s DC resistance (LDCR), and therefore to the output current.

Figure 1: DCR current sensing
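A common rule for DCR sensing is to match the RC time constant to the inductor's own L/LDCR time constant so that the capacitor voltage tracks the inductor current times LDCR. The sketch below shows that selection; the inductance and sense-capacitor values are placeholders, not the Figure 1 bill of materials (only the 12mΩ LDCR comes from the worked example later in this post):

# DCR sensing: pick R so that R * C1 = L / LDCR, making V(C1) track
# I_L * LDCR. L and C1 below are placeholder values.
L_h = 4.7e-6       # inductance, H
l_dcr_ohm = 12e-3  # winding resistance (12 mOhm, as in the worked example)
c1_f = 100e-9      # chosen sense capacitor, F

r_sense_ohm = L_h / (l_dcr_ohm * c1_f)
print(f"matching R ≈ {r_sense_ohm:.0f} ohms")              # roughly 3.92 kOhm
print(f"V(C1) at 10 A ≈ {10 * l_dcr_ohm * 1e3:.0f} mV")    # 120 mV sense signal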

Current sharing

Figure 2 shows the current-sharing circuit of a two-phase VMC synchronous buck controller. The difference amplifier ensures that the voltages at VCsmstr and VCsslv are equal. For example, should the current in the master phase be greater than in the slave phase, the voltage at the negative input of the amplifier will cause the output of the amplifier (Vinj) to fall. A falling voltage at the injection resistor (Rinj) results in an increase in the slave phase’s output voltage. Increased current in the slave phase will in turn decrease the current in the master phase. I recommend selecting Rin and Rfb such that, if both phases are well matched and evenly loaded, the voltage presented at the output of the amplifier (Vinj) equals the controller’s reference voltage, so that no current is sourced into or sunk from the feedback node.

The current-sharing circuit does not account for mismatching the LDCRs of the two phases. If there is a mismatch in initial LDCR accuracy, there will be a mismatch in current between the phases. This mismatch will self-balance to some degree, however. The current in the lower LDCR phase will be higher, and as a result of higher temperatures, the LDCR of the offending phase will increase and adjust the sharing in the right direction. Should you require superior matching, you can use current-sense resistors instead of DCR sensing.

Figure 2: Current sharing with DCR current sensing

Current-sense amplifier example

Looking again at Figure 2, consider a two-phase 12V output where the output currents of the two phases are balanced at 10A each and LDCR is 12mΩ.

Assume that the voltages across C1 and C2 are equal to the current multiplied by the LDCR of each channel, respectively. See Equation 1:

Equation 2 expresses the voltage at the positive input to the amplifier as:

Equation 3 assumes an ideal op amp where the voltage on its inputs are equal:

Equation 4 expresses the current flowing through Rfb as:

where, under balanced loads, V(Csslv) = V(Csmstr) and IRfb = 20.2µA.

In order to have no current presented to the feedback node under balanced conditions, the output of the amplifier must be 0.8V, which happens to be the reference voltage of the TI LM5145 buck controller. So Equations 5 and 6 are:

Should there be an imbalance between the phases, the amplifier output will adjust above or below the 0.8V feedback voltage. If the amplifier output is less than the feedback voltage – say, 0.7V (due to a greater amount of current flowing in the master than in the slave) – the amplifier will sink current from the feedback node, and the output voltage of the slave will adjust above the setpoint of the feedback resistors according to Equation 7:

The output voltage of the slave will increase from 12V to 12.2V, rebalancing the currents in each phase.
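To make the mechanism tangible, here is a small sketch of the Kirchhoff's-current-law balance at the slave's feedback node. The divider and injection resistor values are hypothetical, chosen only so the numbers reproduce the 12V-to-12.2V shift described above; they are not the resistor values from Figures 2 and 3:

# KCL at the slave feedback node, which regulation holds at V_REF.
# R_TOP/R_BOT set the nominal 12 V output; the share amplifier sinks or
# sources current through R_INJ. All resistor values are hypothetical.
V_REF = 0.8                 # LM5145 reference voltage
R_TOP, R_BOT = 14e3, 1e3    # feedback divider for a 12 V setpoint
R_INJ = 7e3                 # injection resistor from the amplifier output

def slave_vout(v_inj):
    i_bot = V_REF / R_BOT              # current pulled to ground by R_BOT
    i_inj = (V_REF - v_inj) / R_INJ    # current the amplifier sinks from the node
    return V_REF + R_TOP * (i_bot + i_inj)

print(f"balanced phases (Vinj = 0.8 V): {slave_vout(0.8):.1f} V")
print(f"master carrying more (Vinj = 0.7 V): {slave_vout(0.7):.1f} V")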

The current-sense amplifier requires high-frequency attenuation; a capacitor is placed in parallel with the feedback resistor, Rfb.

Figure 3 shows the actual design of the current-sense amplifier using the nearest preferred values.

Figure 3: Current-sense amplifier design

Putting it all together – the LM5145 controller in an interleaved buck application

Figures 4 and 5 show two LM5145 voltage-mode synchronous buck controllers configured in an interleaved application with current balancing. As I discussed, a difference amplifier that senses the voltage drop across LDCR achieves current balancing. Some of the features of the LM5145 enable easy implementation of a two-phase interleaved buck. The LM5145 implements diode emulation at startup and ensures that currents in one phase do not sink into the other. In the event of any current imbalance between phases, the current-sense amplifier will adjust the slave voltage above or below its setpoint to ensure evenly loaded phases.

Another benefit of the LM5145 is the soft-start and tracking input pin. Soft start slows the startup time and minimizes current mismatch during startup. In addition to soft start, you can use the tracking feature to precisely sequence the startup and settling times between the two phases. The LM5145 also offers sync-in and sync-out functionality. Sync in ensures that the two phases are synchronized, while sync out provides a 180-degree phase shift between the clocks, resulting in root-mean-square (RMS) ripple-current cancellation of the input currents to the buck stages and significantly reducing the required RMS ripple-current rating of the input capacitors.

Figure 4: LM5145 master phase


Figure 5: LM5145 and current-sense amplifier slave phase

Results

Figure 6 shows measured results of the example discussed above.  

Figure 6: Curve showing peak efficiency of almost 97% at 24Vin, 12Vout at 11A

Figure 7 shows the measured current sharing accuracy between two phases.


Figure 7: Curve showing current sharing: master at -2.5% and slave at +2.6% of target

 Figure 8 shows the switch node and inductor current of each phase in steady state conditions.

Figure 8: Steady-state performance at 48VIN: switch node and inductor current at a 20A load (channel 1 = Vswitch master, channel 2 = Vswitch slave, channel 3 = inductor current master, channel 4 = inductor current slave)

Figure 9 shows the dynamic performance during a load step and how the inductor current shares between the phases during this transient condition.

Figure 9: Dynamic performance: 48Vin transient response from 5A to 15A (1A/µs) (channel 2 = Vout, channel 3 = inductor current master, channel 4 = inductor current slave), Vout perturbation = 100mV

Conclusion

This blog post describes how to design with the LM5145 synchronous buck controller in a dual-phase application where higher currents and higher efficiencies are required. Voltage-mode controllers with specific features such as pre-biased startup (diode emulation) and sync in/out make for a relatively simple implementation. VMC has the advantage over CMC of jitter-free, high-current performance and the ability to convert voltages with higher step-down ratios. This implementation featuring the LM5145 does not add extra complexity, cost or power loss compared to solutions using current-mode control. For further information on the power stage of the buck converter, please see the application information in the LM5145 data sheet.

 

 

 

FPGA power made simple: rail requirements


In the first installment of this series, I reviewed the system architecture choices for a field-programmable gate array (FPGA) power design and how to estimate the power requirements. Now that you have a good idea from the vendor’s spreadsheet what the voltage and current requirement of each individual rail is, you need to look at the requirements of each individual rail before selecting parts. In this installment, I will focus on the four basic types of rails: core, transceiver, auxiliary and input/output (I/O) rails. This is not an all-inclusive list of the rails that your specific FPGA may have, but they are the most common and each has distinct requirements. Table 1 summarizes the requirements for each rail.

Table 1: FPGA rail requirements

Let’s start by looking at the core power rail. Typically, the core rail has a low voltage requirement but requires a high amount of current. Sequencing is also an important concern for this rail. Every FPGA requires its rails to turn on and off in a certain order, and the core rail is typically the first rail on and the last rail off, so you should use a dedicated power supply for it. I will go into more detail on sequencing techniques in the next blog post. Finally, the core rail typically has a tight output-voltage tolerance requirement: it must have an accuracy of at least 3% (some FPGA families may be okay with 5% accuracy for the core rail) and be able to handle a 50% load step at <1A/µs.

The transceiver rail has the strictest requirements of all the rails on the FPGA. It typically has the tightest requirements for tolerance and requires an accuracy of 2.5-3%. This rail has strict noise requirements, requiring a voltage ripple of 10mV peak-peak or less over a wide range of frequencies. Thus, you may need a dedicated power supply for this rail, even if it has the same voltage requirements as another rail. Make sure to design the power supply for low noise, or simply choose a power module with guaranteed electromagnetic interference (EMI) performance. The layout of the power supply is also very important to help achieve low-noise targets. Make sure that your layout is tight, with capacitors close to the device pins.

The auxiliary and I/O rails typically have similar requirements, so I’ll discuss them together. Often, the same devices may power both rails. The current requirements of the I/O rail will vary depending on how many I/O banks you are using in your application, but typically the current requirements are lower than the core rail. The auxiliary and I/O rails have a looser tolerance requirement and can typically use devices with up to 5% accuracy.

Several times in this post, I’ve mentioned the importance of output-voltage tolerance. It is important to consider the tolerance in two states: static and dynamic. As you can see in Figure 1, in the static state (when only fixed or gradual changes are occurring), the tolerance is made up of the voltage ripple and the power-supply regulation; typically it is 1% to 1.5%. Next, you need to consider the tolerance in the dynamic state (when quick changes are occurring). The dynamic state is primarily made up of transient droop and DC loss.

Figure 1, using the LMZ31520 as an example, shows all of the factors that add up to create a static output voltage tolerance of 1.65%. This leaves about 1.35% of space to cover dynamic changes.

Figure 1: Output voltage tolerance
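A quick budget calculation makes the split concrete. The sketch below uses the 3% total and 1.65% static figures from this example; the 0.85V rail voltage is a placeholder, not a requirement of any particular FPGA:

# Tolerance-budget bookkeeping for one rail: whatever the static terms
# don't consume is left for transient droop and DC loss.
v_rail = 0.85          # hypothetical core-rail voltage
total_tol = 0.03       # +/-3% allowed by the FPGA
static_tol = 0.0165    # ripple + setpoint accuracy + regulation (Figure 1)

dynamic_budget = total_tol - static_tol
print(f"dynamic budget: {dynamic_budget*100:.2f}% "
      f"= {dynamic_budget * v_rail * 1e3:.1f} mV of allowed droop")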

There are many ways to improve the tolerance. In the static state, you can take measures to improve the power-supply regulation. Selecting a feedback resistor with a tight tolerance can help improve the overall tolerance. Additionally, you can reduce the output-voltage ripple by increasing the switching frequency you are using and adding additional ceramic output capacitors.

You can take steps to improve tolerance in the dynamic state. Transient droop occurs when the power supply changes state. The load-step size, load-step speed and output capacitance all affect the amount of transient droop. If the load step is small, the droop will be small. If the load step is large but the speed of the change is slow, the power supply can handle the change more easily and the droop will be small.

Even if the load step is large, you can still make quick improvements by adjusting the amount of output capacitance. Place bypass capacitors directly at the FPGA pins. Typically, the FPGA vendor will provide recommendations for the amount of capacitance needed. You can also use bulk capacitors to support load steps during power up or during changes in the processor state. Make sure to choose high-quality capacitors with low equivalent series resistance (ESR), like ceramic X5R or X7R dielectric capacitors. Adding different types of capacitors can also help. Bulk capacitors typically are better at filtering out low frequencies, while ceramic capacitors are better at filtering out high frequencies. Figure 2 depicts these recommendations.

Figure 2: Output capacitor network

You can improve DC loss through improved layout techniques. It is important to use wide and thick copper traces and to place the power supply as close to the FPGA as possible. Finally, if the power supply has a remote-sense feature, you can improve regulation by connecting it to VOUT at the load. This allows it to compensate for an I-R voltage drop between the output pins and the load.

In the next installment, I’ll cover the final topic before part selection – sequencing – and give an overview of several common sequencing strategies.

Additional resources

How to ensure precision in automated processes


Precision in factory automation applications depends on many factors, such as accuracy, temperature drift and noise. But there are two more key factors that you may have overlooked when initially designing a system: safety and repeatability. In order to ensure that every measurement from an analog-to-digital converter (ADC) or signal from a digital-to-analog converter (DAC) connected to your field sensor is consistent and precise at startup, it is important to have a precise reference voltage. The system also needs a proper startup cycle in case it experiences a large amount of current. This current, known as in-rush current, is the maximum current drawn from the power supply at startup. In-rush current can be much larger than the typical current the system was designed for because of the capacitive loads typically found on the power rails. This can be an issue because in-rush current during startup can cause low-dropout regulators (LDOs) and series voltage references to go out of regulation and negatively impact system precision.

In-rush current is not only applicable to factory automation systems such as programmable logic controllers (PLCs) and field transmitters; it is also seen in other industrial systems and 4-20mA current-loop applications that require isolation, because these systems typically turn on and off periodically for measurements and automation. In a system with no in-rush current protection, there is very little impedance to limit the in-rush current during startup: power supplies typically have a small series impedance and any LDOs have a low on-resistance, so there is not enough impedance to limit the current flowing into the capacitors. This current can damage board traces and devices that are not built to handle it.

Figure 1 is a typical example of a connection from a voltage source to a voltage reference. The in-rush current from VCC is caused by the capacitors CS and CLoad, which can both be several microfarads for stability. The typical total value of the series impedance limiting the current is several ohms or lower; as a result, the in-rush current can reach several hundred milliamps and cause the voltage reference to go out of regulation, or damage the device or supply.

Figure 1: A precision voltage reference lacking in-rush current protection

A common solution is to use Rs to limit the peak current, as shown in Figure 1. In certain applications, this resistance can be as high as hundreds of ohms to limit the peak current. The current-limiting resistor sets an upper limit without controlling the slope of the voltage and current.

One problem that can result is that if Cs after Rs is small, the capacitor cannot supply enough current to the reference at startup. The lack of current can cause a very slow ramp-up, which might affect the internal power-up initialization of the device. A second problem is that Rs wastes power and drops voltage from the original supply because of the continuous current through the resistor. In power-efficient applications where the power consumption of the voltage reference is a key parameter, the power consumed by Rs can be significant. You can see the large dip in Figure 2 caused by the large series resistance Rs.

Figure 2: Precision voltage reference with Rs for in-rush current limiting
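The trade-off is easy to quantify. This sketch estimates the peak in-rush limit a series resistor provides against the headroom and power it costs in steady state; all numbers are illustrative placeholders rather than values from any REF-series datasheet:

# Series-resistor in-rush limiting: peak current is capped at roughly
# VCC / Rs, but Rs then drops voltage and burns power continuously.
VCC = 5.0
RS = 200.0          # series limiting resistor, ohms
I_SUPPLY = 2e-3     # steady-state current into the reference and load, A

peak_inrush_a = VCC / RS            # worst case with capacitors fully discharged
steady_drop_v = I_SUPPLY * RS       # headroom lost at the reference input
steady_loss_w = I_SUPPLY**2 * RS    # continuous dissipation in Rs

print(f"in-rush limited to ~{peak_inrush_a*1e3:.0f} mA")
print(f"costs {steady_drop_v*1e3:.0f} mV of headroom and {steady_loss_w*1e3:.2f} mW")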

A clean start for precision

Although there are external discrete solutions to in-rush current protection in voltage references, they can often be bulky or inefficient. The REF2125 voltage reference introduces a new feature called “Clean Start,” which is similar to the soft-start feature on other devices that protects against in-rush currents. Both clean start and soft start prevent the device from entering undesirable states that might affect the regulation voltage by regulating the current going into the device.

The difference between a typical device’s soft start and the REF2125’s clean start lies in the mechanics. Soft start works by connecting an external capacitor, CSS, to a soft-start pin; the output charging rate of the device is then made proportional to the ramp rate of CSS. While soft start does provide in-rush current protection, this feature is not common in voltage references.

Clean start offers programmable in-rush current protection for the REF2125, with an additional level of programmability that soft start lacks. Clean start limits the output current in proportion to the voltage at the CS pin of the voltage reference, which allows the ramp-up rate to be as gradual or as fast as necessary using resistor RCS and capacitor CCS. When the CS pin of the REF2125 has only RCS, the input current of the device is limited to Iin,peak. In this state, the device startup looks similar to soft start, as the current rises linearly with the resistor. Equation 1 calculates Iin,peak based on RCS.

Iin,peak ≈ 466µA + 13.54µA × RCS          (1)

A benefit of clean start is the additional programmability that comes in the form of adding capacitor CCS. In addition to the resistor on the CS pin, this capacitor allows the ramp of the output current to increase according to the RC time constant of CCS and RCS, as shown in Figure 3.

Figure 3: REF2125 clean-start example with RCS and CCS

If the programmability of the CS pin is not needed but the in-rush current protection is, the REF3425 can handle this. In the REF3425, the ramp-up current during start up is linear with a fixed turn-on time that ensures that the device always turns on as expected.

In-rush current protection is needed for factory automation in PLCs and field transmitters; it is necessary for automated systems especially since each measurement and analog input must be precise and reliable. While discrete solutions can help limit in-rush current, there is no substitute for a built-in solution. The REF2125 and the REF3425 bring built-in in-rush current protection into the voltage reference space for factory automation.

 

Additional resources

An introduction to the D-CAP+™ modulator and its real world performance


The D-CAP+™ control architecture is optimized for multiphase regulators. Like the standard D-CAP™ control scheme, the D-CAP+ architecture is also a constant on-time architecture, but it’s implemented using a true current-mode design. This enables it to properly sense and balance inductor current between multiple phases of a switching regulator. The controller’s on time is also adaptive to operating conditions such as input voltage, output voltage, and load current, resulting in a constant on-time and fixed frequency in steady-state conditions. Some of its many benefits include:

  • High loop bandwidth and phase margin for fast transient response.
  • Stability that is insensitive to load current, input voltage and number of active phases.
  • Dynamic phase add/drop capabilities to keep performance and efficiency at peak levels.
  • Accurate current sharing to avoid stressing the components of any one phase and maintain regulation.

Figure 1 gives a basic overview of the D-CAP+ architecture. The pulse-width modulation (PWM) comparator and adaptive on-time circuitry form the heart of the modulator, with inputs from the voltage and current loops outlined in green and red, respectively. Additional phase-management circuitry turns the individual phases on and off.

Figure 1: D-CAP+ Architecture Block Diagram

Figure 2 shows the basics of how the modulator operates in steady state. The sensed inductor current waveform of all phases, ISUM, is compared against the output of the error amplifier, EA. The intersection of ISUM and EA generates a constant on-time PWM pulse to start a switching cycle. The phase-management circuit fires each phase sequentially in order to keep the phase currents balanced. For a fixed VIN, VOUT and IOUT, the switching frequency of the regulator will remain constant.

Figure 2: D-CAP+ operation in steady state

During a transient event, the D-CAP+ modulator maintains output regulation by keeping a relatively fixed on time while increasing or decreasing the switching frequency by adjusting the off time as needed. Figure 3 shows this behavior.

Figure 3: D-CAP+ transient operation

TI benchmarked the transient response of a competitor’s digital current-mode modulation scheme against the TPS53679 D-CAP+ multiphase regulator. The conditions of the benchmarking were:

  • VIN = 12V, VOUT = 1V, fSW = 400kHz, six phases.
  • IOUT = 0A to 150A, 1kHz, D = 5% to 30%.
  • Matched output filters.
  • Optimally adjusted compensation.

Figures 4, 5, and 6 show the scope shots of the testing, while Table 1 summarizes the measured overshoot, undershoot and settling times. In all of the scenarios studied, the D-CAP+ topology offered lower over/undershoot with comparable settling times to the competitor and emerged as the clear victor.

Figure 4: Transient Benchmarking – Undershoot

Figure 5: Transient Benchmarking – Overshoot


Figure 6: Transient Benchmarking – Duty-Cycle Sweep


Table 1: Transient Results Summary

TI also looked at the phase-firing behavior of each solution on the bench; see Figure 7. D5 = PWM1 for the competitor’s part, while D0 = PWM1 for the TPS53679. As expected, the TPS53679 and D-CAP+ modulator showed a constant on time with a shortened off time during the load step. The competitor’s controller instead overlaps active phases by adjusting the PWM on time. Even with four of six phases overlapped, the competing device still cannot beat TI’s D-CAP+ control scheme.

Figure 7: Phase-Management Comparison During Load Step

The results are in, and when it comes to your next high-powered design, the choice is clear. Take your next ASIC or processor design to the next level with one of the many D-CAP+ switching regulators from TI. Stay on the lookout for more Power House blogs exploring other benefits of the D-CAP+ modulator.

Additional resources

 


Improve the user experience with an LED ring


Products with a good user experience tend to receive positive feedback in the market. You can use audible, visual or tactile methods to improve the user experience of consumer electronics or smart home products like speakers, locks, smoke detectors and wearables. Visual feedback is one of the most direct ways – and also one of the easiest if you add a light-emitting diode (LED) ring. A good LED ring offers various eye-catching effects, including smooth breathing, elegant chasing and rhythmic beating; Figure 1 illustrates an example. It gives a product a new soul!

Figure 1: LED ring in a rotary knob and button

The secret behind an LED ring is a very smart LED driver, which controls the current and pulse-width modulation (PWM) duty cycle of every red-green-blue (RGB) LED to achieve different colors and brightness levels. The PWM resolution of the LED driver is very important in order to achieve smooth color and brightness changes. Most LED drivers on the market offer 8-bit 256-step PWM control, which is not good enough to achieve an ultra-smooth effect, as the human eye can observe color and brightness jumps under low gray-scale conditions. TI’s TLC5955 48-channel LED driver resolves this problem; it has 16-bit 65,536-step resolution PWM control and 7-bit dot correction for every channel. An ultra-high 2% channel-to-channel current accuracy helps achieve perfect color and brightness consistency between different RGB LEDs.
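The difference between 8-bit and 16-bit dimming is easiest to see at the bottom of the brightness range, where the eye is most sensitive to steps. Here is a minimal comparison (gamma correction and eye response are ignored for simplicity):

# Relative brightness jump of one PWM code near the dim end of the range.
for bits in (8, 16):
    steps = 2**bits
    dim_code = max(1, steps // 256)      # the same physical brightness point
    step_change = (dim_code + 1) / dim_code - 1
    print(f"{bits}-bit PWM: {steps} steps, "
          f"one step near the dim end ≈ {step_change*100:.1f}% brightness change")

With 8-bit control, the next code doubles the brightness at the low end, which the eye sees as a jump; with 16-bit control the same step is a fraction of a percent, which is why the transitions look continuous.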

To achieve sophisticated LED ring effects, the microcontroller (MCU) needs to continuously refresh the LED driver’s internal registers to control color and brightness. In some systems, the MCU does not have the processing headroom to handle this. TI’s LP5569 is equipped with internal static random access memory (SRAM) for user-programmed sequences along with three programmable LED engines, which enables operation without processor control. Autonomous operation reduces system power consumption because you can put the processor into sleep mode, as shown in Figure 2.

Figure 2: An LED driver with engine control helps offload the MCU

With more and more products integrating LED ring functionality, TI’s large RGB LED driver portfolio is expanding, adding new products to help you design systems more easily.

Additional Resources

FPGA power made simple: sequencing


I hope you enjoyed the last blog post on rail requirements. Now we move on to sequencing, another important topic in field-programmable gate array (FPGA) power design. While powering the FPGA on and off, the power supplies need to turn on in a particular order. The exact sequence will vary, but typically the core rail is the first on and the last off. The power-supply turn-on also has to be monotonic, meaning the output voltage powering the FPGA rises continuously. Finally, there are requirements for how quickly everything needs to fully power up from start to finish: typically, all rails need to reach 95% within 40 or 50ms. In this blog post, I will review three common techniques for sequencing.

The first and simplest technique is resistor-capacitor (RC) sequencing, depicted in Figure 1. The RC time constant determines when each device turns on, so you can simply adjust the RC values for each power supply to stagger the turn-on. As you can see in Figure 1, the power supply with the lower RC time constant turns on first. A similar technique is adjusting the soft start so that some power supplies ramp up more slowly. However, these techniques cannot support power-down sequencing and can be unreliable over temperature variations, or if faults cause incomplete or repeated power cycles.

Figure 1: RC sequencing
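Assuming the RC network delays each converter's enable pin, the stagger is just the time the RC charge takes to reach the enable threshold. The sketch below works through two example rails; the threshold and RC values are placeholders, not figures from this post:

import math

# Delay for an RC-filtered enable pin to reach the converter's enable
# threshold: t = R * C * ln(Vin / (Vin - Vth)). Values are illustrative.
V_IN = 3.3
V_EN_TH = 1.2    # assumed enable threshold of the downstream converter

def enable_delay_s(r_ohm, c_f):
    return r_ohm * c_f * math.log(V_IN / (V_IN - V_EN_TH))

print(f"rail 1: {enable_delay_s(10e3, 1e-6)*1e3:.2f} ms")   # smaller RC, turns on first
print(f"rail 2: {enable_delay_s(47e3, 1e-6)*1e3:.2f} ms")   # larger RC, staggered later

As noted above, the actual delays drift with temperature and component tolerance, which is why this approach suits only undemanding sequencing requirements.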

A much more effective and reliable technique is cascading power supplies, as seen in Figure 2. In this technique, you connect the first device’s power-good output to the enable pin of the next power supply. When the first device’s power good goes high, it triggers the next device to turn on. This technique is more reliable than basic RC sequencing; however, it still cannot support a power-down sequence.

Figure 2: Cascading power supplies

The final sequencing technique, shown in Figure 3, uses a microcontroller (MCU) for sequencing. This method requires software and uses a timer, general-purpose input/outputs (GPIOs) and some of the MCU’s bandwidth. The MCU acts as a brain that senses the output voltage of each power supply and then tells the next device to turn on (or off) based on the programmed sequence. This is the most complex technique, but it provides the most control and gives you the ability to program both the power-up and power-down sequences.

Figure 3: MCU sequencing

Another potential solution is TI’s LM3880 simple sequencer. The LM3880 has three rails to turn on or turn off three devices by manipulating the values of R1, R2 and R3, as shown in Figure 4. This provides a simple way to sequence if you need something more reliable than the first two techniques and don’t have the MCU bandwidth to implement the third technique. You can daisy-chain the LM3880 to support the sequencing of more than three devices if necessary.

Figure 4: LM3880 simple sequencer

After choosing a system architecture, understanding the rail requirements and choosing a sequencing strategy, you should now have a better understanding of what key features and performance metrics you need for your power supplies and can start selecting parts. I’ll cover this in the next and final installment of this series on FPGA power.

Additional resources

How to minimize undershoot when the pre-boost converter wakes up

$
0
0

 The start-stop system in hybrid electric vehicles (HEVs) helps reduce fuel consumption and emissions by stopping the engine during idling, but the battery voltage drops whenever the engine restarts. To provide the minimum required voltage to the loads during the battery voltage drop, pre-boost converters are widely used in automobiles.

Table 1 shows the typical requirements for a pre-boost converter.

 

  • Input voltage: peak 40V, typical 12V, minimum 2.5V.
  • Output voltage: maximum 40V, minimum 8.5V.
  • Output current: 2A to 4A.

Table 1: Pre-boost converter specifications

In this blog post, I’ll explain the key parameters which affect the output-voltage undershoot of the pre-boost converter when the battery voltage drops.

Small-signal analysis

According to the traditional small-signal analysis, there are two important parameters which affect the output undershoot: the loop response of the converter and the output impedance of the power stage. Table 2 shows the factors that optimize the loop response and output impedance.

Fast loop response:

  • High crossover frequency.
  • Fast switching.
  • Low-value inductor/high right-half-plane (RHP) zero frequency.

Low output impedance:

  • High-value output capacitor.
  • High crossover frequency.

Table 2: Factors that improve the loop response and lower the output impedance

Large-signal analysis

In addition to the two parameters in small-signal analysis, three more parameters in large-signal analysis affect the output undershoot: the error amplifier to pulse-width modulator (PWM) comparator offset, the error amplifier sourcing capability and the wake-up delay. Because all three parameters are device-dependent and none are user-adjustable or user-programmable, device selection is very important when designing a pre-boost converter.

Error amplifier to PWM comparator offset

Traditional boost devices have about a 1.2V offset from the error-amplifier output to the input of the PWM comparator (see Figure 1). Because the device cannot start switching until the error-amplifier output is greater than the offset voltage, minimizing this offset is a key factor in improving undershoot during the battery voltage drop.

Figure 1: Controller block of a boost device

Error amplifier sourcing capability

Sometimes the undershoot increases due to the limitation of the error amplifier. Ideally, the gain of the error amplifier should be constant over the typical error amplifier operating range, but the gain drops if the error amplifier sourcing capability is insufficient.

Wake-up delay

Because a pre-boost converter usually falls into a low quiescent current (IQ) standby mode to minimize battery drain when the battery voltage is in normal range, it takes some time to wake up the device from a low-IQ standby mode. Because the device cannot start switching until it wakes up, excessive undershoot can occur during a long wake-up delay.
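To get a feel for why the wake-up delay matters, the sketch below estimates the extra undershoot while the converter is still asleep and only the output capacitor supports the load (it ignores the converter's own response once switching resumes). The load current and output capacitance are illustrative, not LM5150-Q1 test conditions:

# Extra undershoot during the wake-up delay: dV ≈ I_load * t_wake / C_out.
I_LOAD_A = 3.0       # current drawn by the loads during the crank
C_OUT_F = 200e-6     # output capacitance on the boosted rail

for t_wake_s in (6e-6, 50e-6):   # fast vs. slow wake-up from low-IQ standby
    droop_v = I_LOAD_A * t_wake_s / C_OUT_F
    print(f"wake-up {t_wake_s*1e6:.0f} us -> extra undershoot ≈ {droop_v*1e3:.0f} mV")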

LM5150-Q1 automotive boost controller

The LM5150-Q1 is a 2.2MHz automotive boost controller that features ultra-low IQ in standby mode. Specifically designed for use in start-stop systems as a pre-boost converter, the device has only 0.3V offset and a strong transconductance error amplifier. The wake-up delay is less than 6µs, which is among the fastest in the industry.

Figure 2 is a comparison between a traditional boost converter and the LM5150-Q1. While the traditional boost converter’s output undershoot is large, driven by its long wake-up time, large offset and limited error-amplifier sourcing capability, the LM5150-Q1’s output undershoot is small. The traditional boost converter’s parameters are set to a 1.2V offset, a 100µA source-current limit, a 2mA/V error-amplifier transconductance and a 50µs wake-up delay.

Figure 2: Pre-boost output voltage during battery voltage drop (a traditional boost converter shown in green, the LM5150-Q1 shown in red)

Conclusion

A pre-boost converter’s output undershoot is strongly affected by device selection. The output undershoot using the LM5150-Q1 is minimized by its small offset, large error-amplifier sourcing capability and quick wake-up time.

 

Additional resources

The D-CAP+™ Modulator: Evaluating the small-signal robustness


In a previous Power House blog post, I introduced TI’s D-CAP+ multiphase regulator control topology and covered the basics of its architecture, steady-state operation and transient performance versus a competitor’s part. When listing the features of the D-CAP+ control topology, I also made a claim that at first glance seems like pure marketing: that overall regulator stability is insensitive to load current, input voltage and number of phases. The math behind the modulator checks out, so I went to the lab to take some Bode plots and evaluate the D-CAP+ control topology in the real world.

While holding all other parameters constant and changing only VIN, IOUT or the number of phases, I used a network analyzer to generate Bode plots across a wide range of conditions. For my testing, I used the TPS53679 multiphase controller under these conditions:

  • VIN = 8V, 12V, 16V.
  • VOUT = 1V, fSW = 400kHz.
  • Number of phases = two, four, six.
  • IOUT = 50A, 100A, 150A.

In Figure 1, I took plots at VIN = 8V, 12V and 16V. The DC gain and dominant pole frequency were identical for all three curves, and there was little variation in crossover frequency and phase margin. All three curves showed a stable system; changing the input voltage did not affect the performance of the D-CAP+ modulator.

Figure 1: Loop Response vs. Converter Input Voltage

In Figure 2, I changed the number of active phases from two to four to six while holding everything else in the design constant. Once again, there was very little variation in any of the Bode plots and no instabilities. D-CAP+ stability is not dependent on phase number. The modulator was two for three, with one test to go.

Figure 2: Loop Response vs. Converter Phase Number

For the last test, I took Bode plots at 50A, 100A and 150A, and for the third time, the D-CAP+ control topology proved to be insensitive to changes in operating conditions. The plots in Figure 3 all came out stable and identical to one another.

Figure 3: Loop Response vs. Converter Load Current

Table 1 summarizes the crossover (unity-gain) frequencies and phase margins of all of the Bode plots I took during my time in the lab. Under all conditions, there’s more than adequate phase margin (>50 degrees) at the unity-gain frequency, making stability a non-issue. With an easily achievable loop bandwidth around a quarter of the switching frequency for most conditions, the reason for the D-CAP+ control topology’s excellent transient response becomes apparent. A higher bandwidth gives the controller a much quicker response to load transients and, as a result, less over- and undershoot on VOUT. With stability practically guaranteed over a wide range of operating conditions, designing with the D-CAP+ modulator becomes even easier.

Table 1: Measured Regulator Crossover Frequencies and Phase Margin

I’d like to note that I did not once touch any compensation parameters during testing. The D-CAP+ control topology allows you to dial in the compensation for your chosen output filter and not worry about stability over your expected operating corners. Lowering the loop bandwidth to ensure stability and eke out that last bit of phase margin while sacrificing transient response is a thing of the past with this modulator. The performance is really there.

Additional resources

“Peak current-mode control” in an LLC converter


In 1978, while working on a push-pull converter, Cecil Deisch faced the problem of how to balance the flux in the transformer and keep the core from walking into saturation due to slightly asymmetrical pulse-width modulation (PWM) waveforms. His solution was to add an inner current loop to the voltage loop and let the switch turn off when the switching current reached an adjustable threshold. This is the origin of peak current-mode control.

Since then, the peak current-mode control technique has been widely employed in PWM converters. Compared to conventional voltage-mode control, peak current-mode control brings many advantages. For example, it changes the system from second order to first order, simplifying compensation design and achieving high loop bandwidth with much better load-transient response. Other advantages include inherent input-voltage feedforward with excellent line-transient response, inherent cycle-by-cycle current protection, and easy, accurate current sharing in high-current multiphase designs.

However, when it comes to inductor-inductor-capacitor (LLC) converters, peak current-mode control becomes infeasible. The reason is obvious: because the resonant current in LLC is sinusoidal, the current is not at its peak when the switch turns off. Turning off the switch at the peak-current instant will cause the duty cycle to be far away from the required 50% for LLC.

Because of this, while peak current-mode control is already widely used in other topologies, voltage-mode control is still dominant in LLC applications. While power engineers enjoy the high efficiency of LLC, they also experience the poor transient performance caused by conventional voltage-loop control. Because the LLC is a highly nonlinear system, its characteristics vary with operating conditions. It is therefore very difficult to design an optimized compensation, the loop bandwidth is usually limited, and the load/line transient response may not be able to meet a strict specification.

Is there any way to employ peak current-mode control in an LLC? Let’s take a closer look at how peak current-mode control works in PWM converters. In a PWM converter, the switching current is sensed, typically through a current transformer (CT), and compared to a threshold to determine the PWM turn-off instant. The CT output is a sawtooth waveform, and the input charge is proportional to the magnitude of this sawtooth wave. This means you are actually controlling the charge going into the power stage. Since the input charge per cycle represents the input power, and input power equals output power (assuming 100% efficiency), peak current-mode control regulates the output power by controlling how much charge goes into the power stage in each switching cycle.

So can you use the same concept in an LLC converter? The answer is yes. One intuitive way is to integrate the input current over each half switching cycle; this can be done by connecting the CT output to a capacitor, where the capacitor voltage represents the integral of the input current. Luckily, there is already an integration element in the LLC circuit: the resonant capacitor. When the top switch turns on, the input current charges the resonant capacitor, causing its voltage to increase. The voltage variation over this half period represents the net charge delivered to the resonant capacitor. By controlling the voltage variation on the resonant capacitor, you control how much charge enters the resonant tank and thus the output power.
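To make this concrete, the charge balance can be written out explicitly under the idealization that all of the input current flows through the resonant capacitor while the high-side switch is on (the symbols below are introduced only for this illustration):

$$Q_{in} = \int_{t_{on}} i_{in}(t)\,dt = C_r\,\Delta V_{Cr}, \qquad P_{in} \approx V_{in}\,Q_{in}\,f_{sw}$$

Regulating the resonant-capacitor voltage swing ΔVCr per half cycle therefore regulates the input power directly.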

The UCC256301 adopts this charge-control concept through a novel control scheme called hybrid hysteretic control (HHC), which combines charge control with traditional frequency control – it is charge control with an added frequency-compensation ramp, much like conventional peak current-mode control with slope compensation.

Figure 1 shows the details of HHC. There is still a voltage loop; however, instead of setting the switching frequency, its output sets the comparator thresholds VTH and VTL. A capacitor divider (C1 and C2 in Figure 1) senses the resonant capacitor voltage, and an internal current source (ICOMP) charges (when the high-side gate is on) or discharges (when the low-side gate is on) the capacitor divider. Comparing the sensed voltage signal (VCR) with VTH and VTL determines the gate-drive waveform.

 

Figure 1: HHC in the UCC256301

Figure 2 shows how the gate waveform is generated. When VCR drops below VTL, the low-side gate turns off; after some dead time, the high-side gate turns on. When VCR reaches VTH, the high-side gate turns off; after the dead time, the low-side gate turns on.

Figure 2: Gate waveform in the UCC256301
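As a rough illustration of this gate-generation logic, the short sketch below models the comparator decisions in Python. The state names, threshold handling and dead-time comment are purely illustrative – they are not the UCC256301's internal implementation.

```python
def hhc_gate_step(v_cr, state, v_th, v_tl):
    """One evaluation of the hysteretic gate logic described above.

    v_cr  : sensed resonant-capacitor voltage (VCR)
    state : 'HS_ON' (high-side switch on) or 'LS_ON' (low-side switch on)
    Returns the next state; a real controller inserts dead time between
    turning one switch off and turning the other on.
    """
    if state == 'HS_ON' and v_cr >= v_th:
        return 'LS_ON'   # VCR reached VTH: high side off, then low side on
    if state == 'LS_ON' and v_cr <= v_tl:
        return 'HS_ON'   # VCR fell to VTL: low side off, then high side on
    return state         # otherwise keep the present switch conducting
```

Because the voltage loop moves VTH and VTL rather than the switching frequency directly, the per-cycle change in VCR – and therefore the charge delivered to the tank – is what gets regulated.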

Just like peak current-mode control in PWM converters, HHC in the UCC256301 offers excellent transient performance by changing the LLC power stage into a single-pole system that simplifies compensation design and achieves higher bandwidth.

Figures 3 and 4 compare the load-transient response with HHC and with conventional voltage-mode control, respectively. For the same load transient, the voltage deviation with HHC is much smaller than with conventional voltage-mode control.

Figure 3: Load transient with HHC control


With such superior transient performance, you can reduce the output capacitance while still meeting a given voltage regulation requirement, allowing for a reduced bill of materials count and a smaller solution size.

Additional Resources:

 

Essential linear charger features


As electronic devices become smarter with more built-in features, they become more attractive but also more power-hungry, making rechargeable batteries an obvious economical choice. Charger requirements have evolved in recent years, with innovative applications, emerging technologies and new battery chemistries. For example, new applications in the wearables space such as smart bank cards, smart clothing and medical patches are driving smaller and cheaper solutions, and smaller and higher-power-density batteries.

Typical questions or concerns from designers include, “How can I maximize battery run time?” “How do I extend my products’ shelf life?” “Is it possible to over discharge the battery?” “What happens if the battery is missing or defective?” “How do I get my product to work with a weak adapter?” and “Can I use the same charger for different designs and different batteries?” In this blog post, I will discuss how different linear charger features can help resolve these issues.

Power path

A power-path function adds one more switch inside the charger, enabling a separate output to power the system and charge the battery. This architecture leads to quite a few other features, but let’s start by looking at Figure 1, which shows a simple linear charger without a power path. The system input and battery terminal connect to the same charger-output node. This non-power-path architecture is popular because of its simplicity and small solution size; examples include the bq24040 and bq25100 battery chargers. Figure 2 shows the bq25100’s evaluation module (EVM).

Figure 1: Simple non-power-path linear charger diagram

Figure 2: bq25100 EVM

This architecture has a few limitations, however. The charger output, battery terminal and system input all connect at the same point. With a deeply discharged or even defective battery, it may not be possible to power up the system even when an external power source is connected, because the battery must first charge to a certain voltage level. Thus, if a product has a deeply discharged battery, the end user may plug in the adapter, see nothing happen and conclude that the entire product is dead. If the product actually has a defective battery, the system may never power up. For products with removable batteries, it may be possible to replace a defective battery with a different pack; but if the battery is embedded within the equipment, a defective battery renders the entire product useless.

Another issue with this architecture is that the charger only detects the total current going into both the battery and the system. If the system is operating, how can a charger determine if the battery current has reached a termination level?

The solution to all of these concerns is fairly straightforward – simply add another switch between the system input and the battery terminal. Figure 3 shows the power-path linear charger architecture. In addition to the Q1 field-effect transistor (FET), which routes current in from the external source, the added switch (Q2) decouples the battery from the system when necessary. The system always has priority for the input power and can turn on instantly when the end user plugs in the adapter. If the adapter has power left over after supporting the system load, the battery can charge.

Figure 3: Power-path linear charger diagram

This approach also allows the charger to independently monitor the charge current into the battery (as opposed to the total current drawn from the adapter) to allow proper termination and check for any fault conditions.

The bq24072 device family comprises stand-alone power-path linear chargers. The bq25120A is a newer device with more integration: in addition to the power-path linear charger, it integrates a DC/DC buck converter, an LDO, a pushbutton controller and an I2C interface for customer programmability.

Ship mode

Ship mode is typically the lowest quiescent-current state of a device. Manufacturers enable this state right before a product leaves the factory in order to maximize shelf life, so that the battery is not dead when the end user receives the product. The ship-mode circuit essentially disconnects the battery to prevent battery leakage into the system by turning off Q2 in a power-path charger. When the end user turns the product on for the first time, Q2 turns back on and the battery connects to the system.

Dynamic power-path management (DPPM)

DPPM is another feature of power-path devices. It monitors the charger output (the system rail) and automatically prioritizes the system when the adapter cannot support the full load. Because the input source current is shared between the system load and battery charging, this feature reduces the charging current as the system load increases. If the system voltage still drops to a certain threshold, charging stops and the battery instead discharges to supplement the system current requirement. This prevents the system from crashing.

Input voltage dynamic power management (VIN-DPM)

Another feature usually confused with DPPM is VIN-DPM. The mechanism may sound very similar, but the focus is quite different. Input power sources or adapters have power ratings. There are situations where the input power source does not have enough power to supply what the device demands. Designers are seeing this more commonly now with different USB standards. The device that’s charging may need to work with various (and even unknown) adapter types. If the input source is overloaded and causes the input voltage to fall below the undervoltage lockout (UVLO) threshold, the device will shut down and stop charging. The power-supply load goes away and the adapter recovers. Its voltage goes back up to above UVLO and charging restarts, but the adapter is immediately overloaded again and crashes. This undesirable situation is called “hiccup mode.” See Figure 4.

Figure 4: Hiccup mode

The VIN-DPM feature resolves this problem by continuously monitoring the input voltage to the charger. If the input voltage falls below a certain threshold, VIN-DPM regulates the charger to reduce the input current loading and thus prevents the adapter from crashing.

Now you can see that VIN-DPM and DPPM are in fact two very different features. VIN-DPM monitors the adapter’s output (or the charger’s input) and keeps it at a certain level. DPPM monitors the charger output (or the system rail) and maintains it at a minimum predetermined level. These two features can work very well together to allow smooth operation across different operating conditions. Not all chargers have both features. You can implement VIN-DPM on non-power-path chargers as well.
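As a conceptual illustration of the difference, here is a simplified decision loop in Python. The threshold values, step size and variable names are made up for the example and do not correspond to any particular charger's registers or analog loops.

```python
def charger_loop_step(v_in, v_sys, i_chg, i_chg_max,
                      v_in_dpm=4.4, v_sys_dppm=3.5, step=0.05):
    """One iteration of a simplified VIN-DPM + DPPM decision.

    VIN-DPM watches the charger *input* (the adapter) and backs off the input
    loading when the adapter voltage sags; DPPM watches the charger *output*
    (the system rail) and gives the system priority over charging.
    Returns (new charge current, battery_supplement flag).
    """
    supplement = False
    if v_in < v_in_dpm:
        # Adapter is overloaded: reduce input current draw so it does not collapse (no hiccup).
        i_chg = max(i_chg - step, 0.0)
    elif v_sys < v_sys_dppm:
        # System rail is sagging: reduce charge current; if already zero,
        # let the battery discharge to supplement the system load.
        if i_chg > 0.0:
            i_chg = max(i_chg - step, 0.0)
        else:
            supplement = True
    else:
        # Both rails healthy: ramp charge current back toward its target.
        i_chg = min(i_chg + step, i_chg_max)
    return i_chg, supplement
```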

I used the linear charger topology as an example to illustrate some essential features related to power-path functionality, but switching chargers can certainly have this functionality as well. For more information on battery charger selection, see the Battery Charger Solutions page.

Additional resources

FPGA power made simple: design steps


Over the last three installments of this series, I have gone over some basic design considerations for creating a power supply for field-programmable gate arrays (FPGAs). Now that you have determined the power requirements and identified some necessary feature and performance specifications, you are ready to pick parts!

One way to simplify FPGA power if you are a new designer (or strapped for time) is to choose modules as your power supplies. Modules integrate the inductor as well as other passive components to create an easy solution with minimal design. Many of our modules require only three components: the input capacitor, output capacitor and a resistor to set the output voltage. This helps create a small, compact footprint without needing expertise in power-supply layout.

Fewer components not only simplify the solution and reduce the number of hours needed to design and debug, but also increase reliability, since using a minimal number of components reduces the risk of faulty components or design mistakes. TI guarantees many performance parameters in its data sheets, including electromagnetic interference (EMI) performance, thermal performance and efficiency. This means that you can focus less on designing the power supply and more on adding value to the end product or getting to market more quickly.

The disadvantage of a module is that there is less flexibility to optimize the solution through inductor or passive component selection. Modules are typically designed to work with common system architectures, so they are a good option unless you have a particularly stringent performance requirement. Modules can provide good performance and compact solution sizes for most power designs and can be an excellent option, especially for space-constrained, time-constrained or beginner power designers.

Table 1 lists a subset of devices from TI’s power module portfolio that meet the requirements for FPGA rails.

Table 1: Modules recommended for FPGA power

For rails that require large amounts of current, like the core rail, I recommend the LMZ31530/20 or LMZ31710/07/04, which are rated for 30A/20A or 10A/7A/4A, respectively, and meet the 3% tolerance requirement. These devices also offer extra features: remote sense to improve load regulation, frequency synchronization to help reduce noise, and power good for easy sequencing.

For the auxiliary and input/output (I/O) supplies, I recommend TI's LMZ21700/1 or LMZ20502/1 Nano Modules for the auxiliary or general-purpose I/O (GPIO) rails, or the LMZ31704/7/10 if you need higher current. Nano Modules also offer a size benefit: as Table 2 illustrates, their small 3mm-by-3mm package and minimal external component count provide a very tiny solution size, making it easy to save space.

Table 2: Smallest Solutions for Powering I/O & AUX Rails

The transceiver rails are often the trickiest to design for because of their tight noise requirements. Fortunately, all TI modules use shielded inductors and are tested to the Comité International Spécial des Perturbations Radioélectriques (CISPR) 22 Class B standard, which guarantees that the modules meet low-noise requirements. Some TI modules also offer a frequency-synchronization feature to further reduce noise for extremely noise-sensitive applications such as medical equipment.

TI provides resources such as reference designs, which offer a ready-made solution that customers can use as a starting point. There is also a tool in WEBENCH® called FPGA Power Architect that can recommend parts and create a full design for you based on your FPGA and power requirements.

To summarize, when starting a design I recommend starting with these basic steps:

  1. Use vendor tools to determine your current requirements.
  2. Check TI reference designs for a ready-made solution.
  3. Use WEBENCH FPGA Power Architect to create a design.

Following these steps will give you an excellent starting point; you can then continue to optimize the design to meet your exact requirements.

For more information on this topic, check out these resources on FPGA power designs:


Why a wide VIN DC/DC converter is a good fit for high-cell-count battery-powered drones


More and more drone applications require high-cell-count battery packs to support longer flying distances and flight times. For example, consider a 14-series lithium-ion (Li-ion) battery pack architecture where the working voltage is 50V to 60V. When designing a DC/DC power supply for such a system, one of the challenges is how to select the maximum input voltage rating. Some engineers see an outsized voltage excursion at the node designated VM in Figure 1, but may not be aware of its origin or how to deal with it.

Figure 1: Drone system block diagram

First, let me explain the modes of operation of a motor driver. As shown schematically in Figure 2, the battery stack powers a brushed-DC (BDC) motor, M1, through the forward current path designated as loop 1, and electrical power is converted into the motor's rotational kinetic energy during this period. Conversely, when the motor decelerates or changes its direction of rotation, it acts as a generator; the resulting back electromotive force (EMF) returns energy to the input through the driver via current loop 2.

Although this action may seem advantageous in terms of improving overall system efficiency, the regenerative behavior can result in a large reverse current and consequent voltage overshoot at the supply input.

Figure 2: Forward current paths of a BDC motor H-bridge driver

Table 1 outlines typical voltage-rating margins for different motor types. The overshoot voltage range (relative to the nominal battery operating voltage) also depends on the drone’s flight dynamics and control algorithm for the thrust and direction change of each propeller.

Table 1: Motor driver voltage rating requirements

In order to manage this voltage overshoot and ensure that the system runs safely, you can use an electrolytic bulk capacitor for C1 to absorb the energy or, alternatively, add a transient voltage suppressor (TVS) diode to clamp the voltage to a safe range.
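For a feel of what "absorbing the energy" implies, the short calculation below sizes a bulk capacitor for an assumed amount of regenerated energy and an assumed allowable overshoot; both numbers are illustrative, not measured drone data.

```python
# Energy absorbed by a capacitor charging from V_nom to V_max is 0.5*C*(V_max^2 - V_nom^2),
# so the minimum bulk capacitance is C >= 2*E_regen / (V_max^2 - V_nom^2).
E_regen = 2.0    # joules returned by the motor during braking (assumed value)
V_nom = 60.0     # nominal 14-series pack voltage in volts
V_max = 80.0     # maximum voltage the downstream electronics tolerate (assumed value)

C_min = 2.0 * E_regen / (V_max**2 - V_nom**2)
print(f"Minimum bulk capacitance: {C_min * 1e6:.0f} uF")   # ~1430 uF for these numbers
```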

Take the Rubycon 2200µF/63V electrolytic capacitor, for example. Its diameter and height are 18mm and 33mm, respectively – quite large for a drone implementation where footprint and profile are important constraints. The 1,000-unit price of this capacitor is more than $1.00 from a distributor such as Digi-Key. More importantly, this electrolytic capacitor, with its finite rated lifetime, represents an acknowledged limitation in terms of system reliability and robustness. A TVS clamp likewise creates space, cost and reliability concerns for the whole system.

Another option is to use a DC/DC converter solution with a wide input voltage range and high-line transient immunity to accommodate the full voltage excursion during the motor’s regenerative action.

Selecting a converter with a wide VIN range, such as TI's LM5161 100V, 1A synchronous buck converter (see Figure 3), enables you to eliminate the bulk energy storage or TVS clamp, saving time, cost and board space. Moreover, the LM5161 offers a large degree of flexibility in terms of platform design. Not only does it support a non-isolated output, but it can also deliver one or more isolated outputs – using a Fly-Buck™ circuit implementation – if it's necessary to break a ground loop or decouple different voltage domains in the drone system. If a VCC bias rail between 9V and 13V is available, the LM5161's input quiescent current drops to 325µA at a 50V input, preserving battery life during standby conditions.


 

Figure 3: LM5161 step-down converter schematic

 

Summary

Amid a continual focus on high reliability, small size and low overall bill-of-materials cost, a wide-VIN synchronous buck converter dovetails seamlessly into a variety of power-management circuits for drone applications. The proposed DC/DC converter provides high efficiency, topology flexibility and increased circuit robustness during transient voltage events, when mechanical energy from the motor cycles back to the input supply.

Additional resources

ESD fundamentals, part 1: What is ESD protection?


If you’ve seen a lightning bolt or been unpleasantly shocked by a doorknob, you’ve been exposed to the phenomenon known as electrostatic discharge (ESD). ESD is the sudden flow of electricity between two charged objects that come into close proximity. Objects that are in contact sometimes cause a discharge of electricity to go directly from one object to the other. Other times, the voltage potential between the objects can be so great that the dielectric medium (usually air) between them breaks down – contact isn’t even necessary for the electricity to flow (Figure 1).


Figure 1: Electrons transferring between two charged objects result in ESD

These shocks happen all the time, but humans can’t feel most of them because the shock voltage is too low to be noticeable. Most people actually do not start feeling the shock until the discharge exceeds 2,000-3,000V! While ESD in the 1-10kV range is typically harmless to humans, it can cause catastrophic electrical overstress failures for semiconductors and integrated circuits (ICs).

ESD suppressors or diodes placed in parallel between the source of ESD (typically an interface connector to the outside world) and the component (Figure 2) can protect system circuitry from electrical overstress failures.

Figure 2: An ESD strike will damage an IC without ESD protection (left); ESD protection will shunt current to protect the downstream IC (right)

Without ESD protection, all of the current from an ESD strike would flow directly into the system circuitry and destroy components. But if an ESD protection diode is present, a high-voltage ESD strike will cause the diode to break down and provide a low-impedance path that redirects the current to ground, thus protecting the circuitry downstream.
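To put rough numbers on this, the sketch below estimates the clamping voltage a protected pin would see during an IEC 61000-4-2 contact discharge. The diode breakdown voltage and dynamic resistance are hypothetical example values, not parameters of any specific device.

```python
# An IEC 61000-4-2 contact discharge produces a first current peak of roughly 3.75 A per kV.
esd_level_kv = 8.0
i_peak = 3.75 * esd_level_kv          # ~30 A peak shunted through the protection diode

v_breakdown = 6.0                     # diode breakdown voltage in volts (hypothetical)
r_dynamic = 0.5                       # diode dynamic resistance in ohms (hypothetical)

v_clamp = v_breakdown + r_dynamic * i_peak
print(f"Approximate clamping voltage at the protected pin: {v_clamp:.0f} V")   # ~21 V
```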

Many circuit components include device-level ESD protection, which has led some to question the need for external ESD protection components. However, device-level ESD protection is nowhere near robust enough to survive ESD strikes discharged onto real-world end equipment. And as chipsets shrink due to process innovations, their susceptibility to ESD damage actually increases, making discrete ESD protection a necessity for every circuit designer.

In the next installments of this five-part series, I will go over the different features and requirements for selecting the proper ESD protection diode.  In the meantime, get more information on TI’s comprehensive portfolio of protection solutions for ESD and surge events.

Additional Resources:

  • Learn more about TI's ESD products here.

 

 

How to use load switches and eFuses in your set-top box design


Sitting in front of a TV is easy. Changing the channel is easy. Recording four shows at once while watching one show on your TV and streaming another show to your tablet is excessive – but also easy! It’s all thanks to the power of set-top boxes (STBs), such as the one shown in Figure 1.

Figure 1: A typical STB

STBs take in a cable/satellite signal; translate that signal into video; and then transmit that video to a TV, hard drive or wireless device. Designing these systems can be complicated but designing the power distribution can be easy, especially when you use load switches and eFuses.

Why bother turning different loads on and off?

STB designers usually follow standby power requirements so that they can improve the system's power efficiency. These requirements limit the amount of power the STB can draw when it is inactive, so different subsystems need to be off in order to draw a minimal amount of power. Some regions even have specific power requirements, such as Energy Star. Figure 2 shows the common STB subsystems that can be controlled to improve standby power.

Figure 2: Load switch and eFuse applications in STBs

Now let’s take a look at some of the subsystems you can switch on and off.

  • Front end/tuner. This subsystem takes the input signal (cable or satellite) and converts the signal into video. One tuner is responsible for a single video output, so if an STB can record five shows at once, that means that there are five different tuners dedicated to recording. Likewise, there is a tuner for the output video port and for connecting additional devices through Wi-Fi®. Switching the tuners off when they are not being used can reduce shutdown power.
  • Hard drive. A hard drive that is not recording a show or playing a previously recorded show does not need to be active. This is also the case when the STB is just outputting the cable signal.
  • Wi-Fi. Wi-Fi connects additional devices to the STB such as tablets or computers, or connects smaller STBs within the same household.

Load switches can control power to each of these subsystems, and both the TPS22918 and TPS22975 can be used depending on the current load. Both are plastic devices that come with an adjustable output rise time suitable for different capacitive loads. The TPS22918 can support loads up to 2A and the TPS22975 can support loads up to 6A.

What about using switches for additional features?

Aside from power savings, several other features require a switch:

  • Power sequencing. The system on chip (SoC) or microcontroller that controls the STB has a specific power-on sequence for its different voltage rails. For optimal performance, devices like the TPS22918 or TPS22975 can turn on each of the rails in order.
  • Input protection. Voltage and current transients can occur when the 12V adapter is plugged into the STB. Placing the TPS2595 eFuse at the STB input can protect the rest of the system from hot-plug events.
  • SD card. If an STB uses an SD card, the option exists to power it with 3.3V or 1.8V. Using the TPS22910A and TPS22912C load switches enables you to choose the appropriate rail.
  • HDMI. The HDMI port is powered with 5V when in use, and the current needs to be limited for user protection. The TPS22945 load switch has a low current limit of 100mA.

All of these applications are modeled in the block diagram in Figure 3.

Figure 3: Recommended load switches and eFuses in STBs

Where can you get started?

The Power Switching Reference Design for Set Top Box shows all of the different load switches and eFuses used for each subsystem. With the added DC/DCs, the design helps create a complete solution for STB power delivery.

Figure 4: Power Switching Reference Design for Set Top Box

With the trend in STBs moving toward smaller form factors, it's easy to see why designers are looking for more integration in their systems. So keep your STB power design small and easy, and use load switches and eFuses to meet your varying power-switching needs!

Additional resources

 

 

 

 

How to design a simple constant current/constant voltage buck converter


Introduction

A DC-to-DC converter is typically implemented as a constant voltage (CV) regulator. The control loop will adjust the duty cycle in order to maintain a constant output voltage regardless of changes to the input voltage and load current.

A constant current (CC) converter regulates current the same way: the control loop adjusts the duty cycle to maintain a constant output current regardless of changes to the input voltage and output resistance. Because the current is held constant, the output voltage adjusts as the load resistance varies; the higher the output resistance, the greater the output voltage.

A CC/CV converter will regulate both current and voltage depending on the output resistance level.
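One compact way to express this behavior – using a generic regulated current ICC, clamp voltage VCV and load resistance Rload, introduced here only for illustration – is:

$$V_{out} = \min\left(I_{CC}\,R_{load},\; V_{CV}\right), \qquad I_{out} = \min\left(I_{CC},\; \frac{V_{CV}}{R_{load}}\right)$$

At low load resistance the converter operates in CC mode; once the product ICC × Rload would exceed VCV, it transitions to CV mode.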

Application examples

Many applications must limit the maximum output resistance and the resulting output voltage so that components connected to the output won't be damaged – this is where constant voltage regulation engages. Examples of CC/CV converter uses include driving a light-emitting diode (LED) and charging batteries or supercapacitors. The current is regulated over a range of output resistances; should the resistance increase beyond a certain level, the voltage is regulated, or "clamped."

Output-voltage accuracy may be crucial, particularly in battery applications and supercapacitor chargers. Precise voltage regulation enables more energy storage because you can set the voltage regulation point as close as possible to the maximum safe operating voltage rating of the storage device.

Traditional methods of implementing CC/CV

Figure 1 outlines a typical discrete implementation of a CC/CV converter. The converter requires a sense resistor (Rsense), an amplifier and a voltage regulation circuit (Vz). The current flowing through Rsense sets the voltage across RFB, which is the feedback voltage of a controller. In this way, the current is regulated. As Rout increases, the voltage on the output rises to a point where the Zener diode conducts, and the device transitions from a CC converter to a CV converter.

Figure 1: CC/CV implementation with a buck topology

As I mentioned, the current through Rsense sets the feedback voltage, which regulates the output current. Equation 1 expresses the relationship between the output current and VFB:

Assuming a resistive load, Equation 2 governs the voltage at the output:

Equation 3 sets the voltage regulation level:

As you can see in Figure 1, a Zener diode regulates the voltage in CV mode. Using a Zener as a voltage clamp yields relatively poor voltage accuracy because of the variation in Zener voltages from device to device. Two Zener diodes are sometimes used in series to prevent leakage current from flowing from cathode to anode, which, if present, would cause errors in the current regulation loop.

Drawbacks of the traditional method

The traditional method requires the use of a sense resistor in series with the output in order to sense current. As a result, resistive losses will impact efficiency; Equation 4 shows the losses in the sense resistor:
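For a sense resistor carrying the full output current, this loss presumably takes the familiar conduction-loss form:

$$P_{R_{sense}} = I_{out}^{2}\,R_{sense}$$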

Higher losses increase the operating temperature and reduce system efficiency, because all of the output current flows through the resistor. Cost also increases, because low-milliohm current-sense resistors are relatively expensive compared to small-signal resistors.

The common-mode voltage range of the amplifier needs to be rated to the maximum output voltage. A high output voltage might increase the cost of the amplifier. To help save costs, you could use a floating bias supply to reduce the common-mode voltage range requirement, but that will increase the component count.

The solution presented in Figure 1 has many disadvantages, including added design complexity, board real estate required, cost and impact on system efficiency.

A simple CC/CV method using the LM5117

The LM5117 is an emulated peak current-mode synchronous buck controller suitable for high-current, wide step-down conversions. The major benefit of using the LM5117 in a CC/CV application is its current monitor (CM) feature. The CM pin provides an accurate voltage that is proportional to the output current of the buck power stage. You can use the CM pin as your current-loop feedback, saving the additional current-sense circuitry that the traditional method requires. The voltage on the CM pin is accurate to ±2%, provided that the converter is set to forced pulse-width modulation (FPWM) or is in continuous conduction mode. Figure 2 shows a basic CC/CV regulator implementation using the LM5117.

Figure 2: LM5117 CC/CV basic implementation

CC programming

Equation 5 describes the relationship between the CM voltage and Iout:

Equation 6 simplifies Equation 5:

As you can see, the CM pin enables you to omit the series power dissipative current-sense resistor at the output. Rs is the current-sense resistor of the power stage used to generate a ramp for the current-mode pulse-width modulation (PWM) loop. As is the current-sense amplifier gain of the LM5117, which has a typical value of As = 10.

For example, assume that Iout = 10A and Rs = 10mΩ. Using Equation 6:

Setting the resistor divider network from the CM pin to ground and connecting the divider node to the feedback pin sets the current regulation point. With 2V at the CM pin, selecting the proper resistor-divider ratio will set the current regulation level. To set the resistor-divider values for 10A current regulation, select RFBb = 10kΩ and calculate RFBt using Equation 7:

This yields an RFBt value of 15kΩ.
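As a quick numeric check of this example in Python: the CM-pin expression below (VCM ≈ 2 · As · Rs · Iout) is inferred from the values quoted in this post – 2V at the CM pin for 10A, 10mΩ and As = 10 – so confirm the exact relationship against Equations 5 and 6 in the data sheet before relying on it.

```python
As_gain = 10       # LM5117 current-sense amplifier gain (typical, per the text)
Rs = 0.010         # power-stage current-sense resistor in ohms
Iout = 10.0        # desired regulated output current in amps
Vfb = 0.8          # LM5117 feedback reference voltage in volts (per the text)

Vcm = 2 * As_gain * Rs * Iout        # inferred CM-pin voltage: 2.0 V for this example
Rfbb = 10e3                          # chosen bottom feedback-divider resistor
Rfbt = Rfbb * (Vcm / Vfb - 1)        # top divider resistor: 15 kOhm, matching the result above
print(f"Vcm = {Vcm:.2f} V, RFBt = {Rfbt / 1e3:.1f} kOhm")
```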

Remember to account for the reduction in As caused by the resistors placed in series with the current-sense pins (from Rs to CS and from ground to CSG). Refer to the LM5117 data sheet for more details on how resistors in series with the current-sense pins affect the gain of the internal current-sense amplifier.

CV programming

CV programming is achieved by using an LMV431 as a voltage clamp. Assume a voltage-clamp level of 12V. The forward voltage drop, Vfwd, across the BAT54 is 0.5V and the FB voltage of the LM5117 is 0.8V. The voltage clamp engages when the voltage across R1 is equal to the voltage calculated using Equation 8:

Therefore, VR1 = 1.3V.

The voltage at the reference pin of the LMV431 will need to be a reference voltage above VR1. The LMV431 has a reference voltage of 1.24V, so the voltage at the reference pin of the LMV431 is equal to the voltage calculated using Equation 9:

Therefore, a Vref = 2.54V is required for the LMV431 to conduct current from its cathode to anode.

Select RBotVC = 10kΩ and calculate RTopVC using Equation 10:
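The CV set point can be sketched numerically the same way. Since Figure 2 and Equation 10 are not reproduced here, this assumes a simple RTopVC/RBotVC divider from the output to ground feeding the LMV431 reference pin; treat the result as illustrative and verify it against the actual schematic and Equation 10.

```python
Vclamp = 12.0      # desired output-voltage clamp level in volts
Vfwd = 0.5         # BAT54 forward drop in volts (per the text)
Vfb = 0.8          # LM5117 FB voltage in volts (per the text)
Vref_431 = 1.24    # LMV431 reference voltage in volts (per the text)

V_r1 = Vfwd + Vfb            # Equation 8: 1.3 V across R1
V_ref = V_r1 + Vref_431      # Equation 9: 2.54 V required at the LMV431 reference pin

RBotVC = 10e3
RTopVC = RBotVC * (Vclamp / V_ref - 1)   # assumed divider form: ~37 kOhm
print(f"RTopVC = {RTopVC / 1e3:.1f} kOhm")
```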

Power design of the LM5117

The design approach for the power section of a CC/CV converter using the LM5117 is the same as it is for a basic buck converter. I suggest carrying out the design at the highest output power level (which is the highest output resistance) using either WEBENCH® Designer or the LM5117 Quick Start Calculator. You can also refer to the LM5117 data sheet for guidance on the design of the buck power stage.

Example schematic

Figure 3 shows a 30V-to-54V input, 27V-at-6A output CC/CV implementation using the LM5117.

 Figure 3: A 30V-to-54V input, 27V-at-6A output CC/CV converter using the LM5117

Results

Figure 4 shows the efficiency results with increasing output resistance.

Figure 4: Efficiency at 30Vin

Figure 5 shows load regulation and the voltage set point with increasing output resistance.

Figure 5: Load regulation at 30Vin with increasing Rload (Vout/Iout)

Figure 6 shows the switch node (CH3), Vout ripple (CH1) and output current (CH4) at 30Vin, 25Vout at 6A.

Figure 6: Steady-state waveforms

Figure 7 shows the load-transient performance Vout (CH1) and output current (CH4) when stepping a constant resistive load from 60Ω to 120Ω.

Figure 7: Load-transient performance

Summary

The LM5117 configured as a CC/CV converter provides accurate current regulation, while offering many advantages over the traditional implementation. The design approach is relatively simple, and enables significant reductions in size, cost and power losses. Start a power supply design now with the LM5117 using WEBENCH® Designer.

Make a boost converter quieter


In order to minimize the power loss of a boost converter under light- or no-load conditions, designers usually use pulse frequency modulation (PFM) to lower the switching frequency and hence the associated switching losses. In PFM, more and more switching pulses are skipped as the load decreases, as shown in Figure 1. These scattered switching pulse trains carry a subharmonic frequency that varies with load. Depending on the duration of the dead band between the switching pulse trains, the subharmonics may appear as radio-frequency (RF) noise or as audible noise. RF noise can cause unwanted interference with the performance of the entire system, while audible noise is not only unpleasant but also risks compromising the mechanical integrity of the system. Therefore, these noise issues should be resolved.

Figure 1 – Inductor Current at Various Loads

1. Method to Prevent the Audible Noise Issue

In a DC/DC converter, including the boost, audible noise can be produced by both the power inductors and the multilayer ceramic capacitors. However, the power inductors in personal electronics (PE) applications are mostly molded, so they are not a big concern; the multilayer ceramic capacitors are the main source of audible noise.

Multilayer ceramic capacitors offer a nice combination of low equivalent series resistance (ESR), low equivalent series inductance (ESL) and small size. However, they suffer from the piezoelectric effect: the voltage applied to their terminals induces mechanical stress. Figure 2 shows a ceramic capacitor soldered onto the PCB. Its dielectric is stretched or compressed as the applied voltage varies. If the applied voltage carries a subharmonic component that falls within the 20Hz-to-20kHz audible frequency range, it will produce audible noise; the resulting sound pressure level versus frequency is shown in Figure 3.

Figure 2 – Mechanical Stress on the Ceramic Capacitor


Figure 3 – Sound pressure level vs. frequency (equal-loudness contours from the ISO 226:2003 revision shown in red; the original ISO standard for 40 phons shown in blue)

To minimize audible noise, both mechanical and electrical methods can be employed. The mechanical method is essentially PCB layout optimization, but it can be a difficult and complex task and can easily increase manufacturing costs. The preferred, electrical method resolves the issue by controlling the circuit operation. One effective approach is to employ a non-audible PFM scheme at light- or no-load conditions. A good example is the TPS61253A boost converter, which includes a unique control technique that keeps the PFM frequency above the audible band.

Figure 4 – TPS61253A Typical Application Circuit

As shown in Figure 4, the TPS61253A can be configured by the MODE pin for different operation modes.  There are three modes to choose from.  When the MODE pin is pulled low, it operates in auto PFM mode.  When it is pulled high, it is in the forced PWM mode.  When it is left open or floating, it is set to the ultrasonic mode.  These three modes can be selected dynamically during operation by externally reassigning the MODE pin condition.  

When the TPS61253A is configured in ultrasonic mode, it automatically enters PFM mode as soon as the valley current of the power inductor crosses zero. As the load decreases further, the valley current limit goes negative to reduce the number of skipped switching pulses, which effectively prevents the subharmonic frequency from falling into the audible band. Bench tests show that the subharmonic frequency in ultrasonic mode is typically 53kHz at no load; that is, it always stays above the audible band over the entire load range.

Figure 5 shows the mode transition from PWM to PFM and then to ultrasonic mode as the load becomes progressively lighter.

Figure 5 – TPS61253A Operation Mode with Various Loads

Test Condition: VIN = 3.6 V, VOUT = 5 V, Load = 0 A, L = 0.56 µH, XEL3515-561MEB, COUT = 7 µF, Ultrasonic Mode

Figure 6: Steady State Waveforms of USM at No Load

2. Method to Cope with the RF Noise Issue

The second issue is RF-related noise, which is more critical in applications like near-field communication (NFC). NFC load modulation with a subcarrier produces two sidebands around the 13.56MHz carrier: the upper sideband at about 14.4MHz and the lower one at about 12.7MHz.

The subcarrier frequency fs is 1/16th of the carrier frequency fc, namely fs = fc/16. Referred to the sidebands above, the frequency band of concern therefore spans roughly 794.5kHz to 900.5kHz.

To avoid potential interference with NFC, the power converter should not generate noise in the subcarrier frequency band. Otherwise, workarounds such as PCB layout optimization or shielding must be implemented, which undoubtedly raises the overall cost.

Figure 7 shows the load-modulation scheme with a subcarrier, with the modulation sidebands spanning 12.7MHz to 14.4MHz around the carrier.

 

Figure 7 - NFC Modulation Products Using Load Modulation with a Subcarrier

The proposed solution is to keep the switching frequency of the boost converter above the subcarrier frequency band. This is easily achieved with the TPS61253A: configuring the MODE pin for forced PWM mode sets its switching frequency at a typical 3.8MHz, always higher than the subcarrier frequency band.
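The relevant frequencies are easy to check numerically; the carrier value is the standard 13.56MHz NFC carrier, and the rest follows from the fs = fc/16 relationship above.

```python
fc = 13.56e6                     # NFC carrier frequency in Hz
fs = fc / 16                     # subcarrier: ~847.5 kHz
sidebands = (fc - fs, fc + fs)   # ~12.71 MHz and ~14.41 MHz, as cited above

# Band of concern for converter noise, i.e. the sideband frequencies referred down by 16:
band = (12.7e6 / 16, 14.4e6 / 16)    # ~794 kHz to ~900 kHz

fsw = 3.8e6   # TPS61253A typical forced-PWM switching frequency
print(sidebands, band, fsw > band[1])   # True: 3.8 MHz stays well above the band of concern
```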

Because the TPS61253A supports dynamic MODE programming during operation, it offers valuable flexibility and programmability for a boost converter across different application environments. Figure 8 shows the efficiency of a typical design in the different operating modes, including ultrasonic or forced PWM for low noise and auto PFM for enhanced light-load efficiency.

The TPS61253A efficiency measurements were taken under the following conditions:

VOUT = 5 V, Load = 100 µA – 200 mA, L = 0.56 µH (XEL3515-561MEB), COUT = 7 µF (effective at 5 V)

Figure 8 - TPS61253A Efficiency at Auto PFM / Forced PWM / USM

Conclusion

Audible noise or RF noise generated by a boost converter can severely affect system performance. Conventional mechanical solutions are normally complex and costly. To resolve the issue at its root cause, this blog discussed more elegant approaches: PFM control schemes that prevent audible and RF noise altogether. The TPS61253A is an easy-to-use solution for a variety of applications, featuring an ultrasonic PFM mode, an auto PFM mode and a forced PWM mode in one device. Its ultrasonic mode eliminates audible noise, the forced PWM mode prevents RF noise from affecting NFC operation, and the auto PFM mode delivers the best efficiency at light load. Test results with the TPS61253A verify the concepts discussed in this blog. The PFM schemes discussed here can also be applied to other DC/DC converter topologies, improving overall system performance without additional cost. Start a power supply design in WEBENCH® Designer with the TPS61253A now.
