Energy Storage Project, part 2: Inverter details

Introduction

When you hear the words "DIY mains inverter", what do you think?

How about "DIY BMS"? (BMS = battery management system)

I bet if you're an electrical engineer like me, it sends a shiver down your spine. You probably think of some cobbled-together project someone did in their garage, hacked together with wires, home-etched PCBs, and goodness knows what else. Or you think of something people buy off AliExpress, sold at 3x the power rating it can safely handle.

But it can be done better than that.

Aims

Yes, I built my own inverter, from component level. If you're wondering why, look at my previous post on this topic. But in short, my own inverter, with my own firmware, and my own BMS, allows me to use LiFePO4-based energy storage in a cost-effective and customised manner, and allows me to fully 'own' the system, which I don't feel I would with any of the existing off-the-shelf options.

Hardware Design Approach

Here are some details about broad design choices I made, and why. Firstly, I used a high-frequency design, i.e. no 50Hz transformer. That means using a DC-DC converter to step 72V up to about 350V, and then an output inverter stage to produce a 50Hz AC sine wave at mains voltage. This reduces weight a lot by eliminating the big transformer. It also works well with my idea of simultaneous mains charging, because of the ability to customise the topology. My design actually has a second output inverter stage, except that this one is controlled to act like a PFC input stage, and it can draw power from the grid (or export to the grid - but that's illegal without permits which require certification, so the control system is set up to prevent that).

Many people tout the benefits of using a big 50Hz transformer, such as output protection being easier and overload capability being better. However, I've not seen this to be true. I have implemented output protections that have worked well in my testing so far while still being able to start high-inrush loads. Besides size and weight, the other disadvantage of a big 50Hz transformer, especially an iron core one, is the magnetising current. At this power level, I could be losing 50-100W at idle in the transformer alone! Many of the DIY inverter communities go to toroidal transformers instead for that reason, and appear to get fairly low idle power losses of say 20W for a similarly sized inverter. While that's impressive, it still doesn't give you a high voltage (400V) DC link, thus it wouldn't provide the flexibility I need in terms of high voltage inputs and outputs, plus the possibility to interface directly with external 400V DC systems in the future.

Aside from all of that, I also just find high frequency inverters more interesting. And yes, that matters. It's my flipping project after all!

In terms of semiconductor choices, I am using regular Silicon-based MOSFETs on the low voltage side, but I chose fast ones. I'm using Silicon Carbide (SiC) MOSFETs on the high voltage side because at 300V+, they offer some good benefits over regular Silicon MOSFETs. Most inverters actually use Silicon IGBTs for the high voltage side, but SiC MOSFETs are also a valid choice, and may achieve lower losses in this application. They also conduct bidirectionally, which is important for this project. That means I can attach solar microinverters to the output of the inverter, and with correct control, handle reverse power flow to charge the batteries.

Control

On the topic of control, another way that my project differs from most of the DIY inverter builds on the internet is that I rolled my own control system instead of using an off-the-shelf inverter control IC like the EG8010 (which honestly looks pretty flipping dodgy to me!). I used digital control, running on one of the venerable Texas Instruments C2000 series microcontrollers. These are widely used to control power electronics, in both academia and industry. The particular microcontroller I'm using is the TMS320F28379D, which is a dual core version - ideal for controlling a system such as mine which includes multiple converters. There is also a nice development board available, the TI LaunchXL-F28379D. Since the chip itself is a 337 pin BGA and I don't want to solder that(!), I just bought the dev board and plugged it into an appropriate footprint on my inverter board.

I am coding everything from scratch using C. The only functions provided are those in the TI DriverLib, which simplifies setting various registers on the device. I avoided assembly mostly, but did have to deal with a few lower level things like memory maps - there's no avoiding the detail when you want to make something customised.

And no... I didn't use AI to write the code.

Protection and safety

Now of course, the first criticism that this sort of project usually gets is regarding safety, and I'm sure I won't hear the end of it. But here's my take, in case it soothes anyone's concerns.

High voltages are dangerous, and the grid can deliver a lot of energy very quickly, even with circuit breakers employed. There's also a heck of a lot of energy stored in the batteries I'm using. So the stakes are high. In the time before I did this project, I have had mishaps, including shorting various large batteries and touching various unsafe DC and AC voltages, including the mains. So I am under no illusions about how dangerous this stuff is.

When you buy an inverter, you trust that it has safety features to keep it safe and keep you safe, and, preferably, make it reliable. If you build one, you need to know what the safety features are, how to implement them, and how to verify they work. The problem, and the reason these types of projects are discouraged, is that it's very possible to know how to make the device work, but not know what safety features are required to make it safe, or just to be lazy and not implement them. Since inverters in particular are connected to large energy sources, this matters.

Trust me on one thing - I'm not out to give the naysayers any ammo whatsoever, to fuel their arguments that individuals shouldn't do these sorts of projects. I am pretty serious about not blowing myself up here. But also, a disclaimer is a must. Please do not consider any descriptions of safety features on this site as 'complete' or 'all inclusive', either in terms of what is required to make a safe inverter or what I have actually implemented. If you build this kind of stuff, the only thing keeping you safe is YOU. Take the responsibility seriously!

Don't assume you'll be safe because you followed someone else's instructions. Especially not mine. My intention in writing about this is not to prompt others to do unsafe things; it's to document insights that might help and inspire others to build the confidence to do these sorts of things safely themselves.

So here are some important points.

The last line of defence is always fuses or breakers. In this case, I am using an appropriately rated DC fuse on the low voltage side, and another onboard fuse on the grid side, plus an external breaker on the grid side as well. I am using HRC (high rupture capacity) fuses, to minimise the potential for shrapnel if a fuse blows. I am also using a breaker with a 'Z' tripping curve, meaning it trips quickly (100ms) at 3x the rated current. That sounds like a lot, but the 'C' type breakers often used in household circuits only trip quickly at 5-10x the rated load - that's a lot more. Reducing the current required for fast tripping reduces the energy dissipated if a fault occurs, so it reduces the chances of setting something on fire if the worst happens. I have tripped this breaker during control system malfunctions, and I believe the fact that I chose a Z-type breaker is one of the reasons I didn't have any big mishaps (e.g. explosions) during the development of this system.

The device will also be housed in a grounded metal enclosure. While this doesn't make it 'fire proof', the design approach of many devices appears to rely on a combination of circuit protection and a lack of easily flammable material, so that in a worst-case scenario protective devices will cut power before anything gets hot enough to start a fire. In a commercial design, this needs to be optimised and verified experimentally, possibly by a third party. In my case, I'm not afraid of over-engineering things a bit to save verification steps, because I'm not planning to blow anything up. Done correctly, a grounded metal enclosure is also a good thing in terms of protecting users from electric shocks - but the devil is in the detail. The integrity of the earthing method (e.g. screw/lug terminal) for the case is important, because if that fails, a conductive case can quickly become dangerous.

Your development approach is also important, both for safety and for avoiding hardware damage from early mishaps. I tested everything initially using low voltage power supplies and loads, where the voltages and currents were too low to cause hardware damage or safety hazards when mishaps occurred. This allowed me to prove the control systems and the hardware before scaling things up. Scaling up should ideally also be done in steps, because it will reveal problems that weren't apparent at low power levels. If you scale up slowly and pay close attention, most problems can be caught before damage is done and smoke is released. I only killed three MOSFETs in the development of this system (out of a total of 16 on the board), and none of them went 'bang' - one was damaged by incorrect mounting and another two by an overcurrent due to a control signal fault, before I'd implemented desat protection.

What is desat protection? Here goes. The second-to-last line of defence is circuit-based hardware protection features. Fuses are there for safety, but they are nowhere near fast or precise enough to protect semiconductors from damage. What can protect semiconductors - namely, all the power MOSFETs - from damage, is fast circuits that sense overcurrents and either limit them or shut down the system. Some may argue that this can be done with software, but I don't like that idea. Not only would I be relying on my code being good, I'd also be relying on the microcontroller itself being functional at any given time, including all of its peripherals and power supply circuits, etc. I would much rather have a dedicated hardware circuit for every single MOSFET that can shut it down if there is a problem, so that is what I have done. The other benefit of this approach is speed.

These circuits are often referred to as 'desat' circuits, or just overcurrent protection circuits. They work by sensing the on-state voltage of the MOSFET. A MOSFET can work like a shunt resistor, with its RDS(on) being the shunt resistance. Yes, RDS(on) varies with temperature, increasing as the device heats up, but if the shunt is used for protection rather than measurement, that doesn't matter: these circuits are designed to trip at currents far above the nominal operating current of the device. If the MOSFET gets hot enough to trip the protection circuit at a lower than expected current, it was overheating anyway, so I'm fine with it tripping!

These circuits can alternatively sense when the MOSFET exits its resistive region ('desat' behaviour), which is particularly relevant with SiC devices. In that case, the voltage across the device no longer corresponds linearly to the current. No matter the cause, the detection method is the same: when the on-state voltage is too high for too long, the device is shut down. There is a good TI reference design which explains more about how this works. Like that design, my circuits have a delay period of 1 microsecond and then a two-level turn-off, which may somewhat suppress ringing due to the current change at turn-off. The device is completely turned off after 2 microseconds, assuming it hasn't melted. Many SiC MOSFETs specify a short circuit withstand time of 3-5 microseconds, making it at least somewhat feasible to protect the device in time.

And from my experience so far, yes, they do work: I've encountered several scenarios where a fault has occurred and I was very relieved to discover a MOSFET was turned off in time and didn't blow. I've even seen them protect devices during shoot-through - and that was with a very low-impedance DC bus, albeit at 80V on the low voltage side.

Just because I have all that hardware protection, doesn't mean I'm planning to use it as part of the normal operation of the device. The first line of defence for protection from grid or inverter output overcurrents is actually my software control, which acts to limit the overcurrent in time so that the hardware protection doesn't have to activate. This works because the threshold for software-triggered current limiting is much lower than that for hardware protection. Under all normal circumstances, the software is what actually keeps things under control. The hardware protection is there in case something unforeseen happens that the software can't handle. This is only possible because I'm using a ridiculously high sampling rate (50-100kHz), so the maximum change of current through the output inductor between samples is non-destructive.

Grid tie and islanding

One important note is that, while the device connects to the grid, this is for charging purposes only. That is, power flows downstream, not upstream, so to speak. Output power is used entirely via a separate inverter output. This means that it isn't officially a grid tie inverter. In reality, it's simply a grid tie inverter operated in the correct quadrant to only draw power from the grid - but this setting means there is no possibility for islanding to occur if the grid goes down, so the associated safety considerations are simplified. Islanding is a whole different kettle of fish to the preceding safety concerns because it has some potential to make upstream circuits live when they're not supposed to be, and thus present an unexpected danger to people in a completely different location. The circumstances required to achieve this are limited and often overplayed in my opinion, but the risk is real. So, again, don't do what I do just because I did it. If you build something that connects to the grid, you take full responsibility for its safety. You should think about what will happen if the grid goes down unexpectedly when it's operating, as well as how it will handle all the different grid conditions that can occur, such as momentary voltage sags, glitches, etc.

BMS

For the BMS, I've used a modular design, which means individual boards for each cell in the battery, which will all be wired to the main system via an isolated CAN bus for communications. Each cell board has a basic microcontroller (ATmega328PB!), a dissipative balancer circuit, and temperature and voltage sensing, plus an isolated CAN transceiver. The data from the cell boards will allow the charging process to operate safely, and the cell boards balance the cells by discharging high cells during the constant voltage phase of charging. They also have onboard LEDs to easily visually identify which cells are being discharged, and indicate any cell problems. I'm currently charging by manually monitoring these LEDs (i.e. always being present in the room while charging, which is a good idea for any new system), but full CAN bus communications will enable the system to pick up issues automatically, which is a must for anything beyond attended charging.

Next

Anyone who reads all this will have a fair idea of what I'm doing, but they'll also be pretty bored. Fortunately, the next post will have pictures, results, and data to look at!! I'm also not stopping here: this energy storage project is part of something larger which I will post about in due course, but long story short it involves things that move on wheels using electric power and it's pretty exciting so watch this space.
