Steve Pullins, Vice President, Energy Solutions
This is not a short read, because real problems are not understood and fixed in an elevator speech or an executive summary. So, if you want some reality, please read on. If you’re looking for the elevator speech, there is no need to read beyond the first two sentences.
Solving the Right Problem
Engineers learn early to first understand what the problem is before setting out to solve it. In the grid industry, the majority of effort spent on reliability is focused on the wrong problem.
A Spring 2016 EnergyBiz article, “Spare Transformers: The Answer to Extreme Weather Risks?,” quoting a 2015 study by Lawrence Berkeley National Laboratory and Stanford University, reported a 260% increase in storm outage duration over the last decade, to 370 minutes per customer. The report provided several examples of greatly extended outages on distribution systems, as well as state-level grid modernization efforts. But the industry is still focused on transmission lines, reserve capacity at the central generation level, and spare power transformers through FERC rules.
Okay – the wrong problem. The transmission and generation (bulk power) system only contributes 10% of the events that lead to customer outages, so massive investment in improving reliability at the bulk power system can only have minimal effect on the reliability felt by customers. It would seem more helpful to attack the 90% problem – distribution system reliability.
There is a difference between how reliability is viewed, and how metrics are structured, at the bulk power system level and the distribution level.
The bulk power system uses grid architecture-based metrics to judge reliability, such as redundancy, reserve margins, and N-1 contingencies. However, these architectural metrics do not demonstrate reliable performance from the customer’s perspective.
At the distribution level, reliability is measured as a performance-based metric. Okay, better. However, the industry uses a reliability metric standard (IEEE 1366) that specifically excludes the largest (and most rapidly growing) cause of customer outages: storms and other Acts of God. Figure 1 suggests the industry should pay more attention to the impact of major storms on customers.
For more than 15 years, the industry has said that storms are not the utility’s fault, and that is true. However, the cause of a grid outage is far less important to the customer than the fact that the customer is without power, losing business, damaging product, or failing to deliver important life functions.
The point is that, from the customer’s perspective, the grid is not required to operate at the time it is most needed: in the face of storms. This is not reliability, nor is it resilience. The industry metrics do not even measure this most critical element of reliability and resilience.
Storms are nasty. Just ask Mississippi Power about Hurricane Katrina, which damaged all but 3 of their transmission lines and 65% of their distribution infrastructure. All of this damage greatly affected the customers of Mississippi Power, but none of this damage counted as a reliability performance metric.
That same LBNL and Stanford study mentioned above puts the business loss price tag of storm outages in the US at $18B to $33B/year felt by commercial and industrial businesses. Studies at LBNL and EPRI for the last 17 years show the business loss price tag for commercial and industrial businesses for grid reliability (non-storm) at $79B+.
This says that the storm-related grid outage impact on business is significant enough, and growing fast enough, to become part of the grid reliability discussion. So, would changing the basis of how reliability is measured help the customer see improved reliability? Yes, but is the cost too great a burden for customers to shoulder?
What Performance We Track Today
The following Figures 2 and 3, from a Heidemarie C. Caswell article in T&D World Magazine, November 2012, show trends in distribution system reliability from the IEEE Distribution Reliability Working Group. The trend in non-storm outage durations is up slightly, but the figures still suggest that customers are being delivered a reliable grid service at 99.97% uptime.
(A SAIDI of 170 minutes per customer per year corresponds to an uptime of 99.97%; that is, grid service is available for 99.97% of the minutes in a year.)
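As a quick sanity check on that conversion, a minimal Python sketch (the helper name is mine, for illustration only, not an industry tool):

```python
# Convert a SAIDI value to an uptime percentage, as described above.
# SAIDI is the average outage duration per customer per year, in minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def uptime_percent(saidi_min_per_year: float) -> float:
    """Uptime (%) implied by a SAIDI figure in minutes/year/customer."""
    return 100.0 * (1.0 - saidi_min_per_year / MINUTES_PER_YEAR)

# The article's 170 min/yr/customer figure:
print(round(uptime_percent(170), 2))  # prints 99.97
```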
This is an A+ in school. This is fine for most residential applications, but today’s commercial and industrial customers have a growing digital footprint in their processes, point of sales, and overall operations. They require more reliability for business continuity.
However, reviewing where the quartiles reside nationally (Figure 3) shows that the Northeast and Mid-Atlantic states account for nearly all of the 4th-quartile reliability performance, and this does not include storms, which also hit the Northeast and Mid-Atlantic states heavily.
As utilities and regulators move to decouple how much energy a utility produces and delivers from the rates and fees it charges customers, the belief is that utility distribution system investments can be maintained or increased in the face of flat or declining customer energy consumption. On the surface this sounds prudent, but the reality does not seem to bear it out.
It seems the unintended consequences are (1) deterring innovative solutions and (2) higher rates for customers, even those who conserve more energy. Left unchecked, the results can be unfathomable. One industrial customer in Connecticut, with flat consumption, saw their energy bill increase over the last 14 years from $450,000/month to $1,100,000/month, while the energy consumption portion of that bill only grew from $300,000/month to $350,000/month over the same 14-year period. The non-consumption portion of their bill grew from $150,000/month to $750,000/month, unchecked. At the same time, this customer saw a decrease in the reliability and resilience of their electric service.
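The arithmetic behind that example is worth making explicit; a minimal sketch using only the figures quoted above:

```python
# Decompose the Connecticut customer's bill figures (all $/month).
bill_start, bill_end = 450_000, 1_100_000        # total monthly bill
energy_start, energy_end = 300_000, 350_000      # consumption portion

# The non-consumption portion is whatever remains of the total bill.
non_consumption_start = bill_start - energy_start  # 150,000
non_consumption_end = bill_end - energy_end        # 750,000

growth = non_consumption_end / non_consumption_start
print(f"Non-consumption charges grew {growth:.0f}x over 14 years")
# prints: Non-consumption charges grew 5x over 14 years
```

A fivefold increase in the non-consumption portion against near-flat energy use is the "unchecked" growth the example describes.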
The industry needs to change two assumptions: (1) that the only solution is more of the same, and (2) that the customer will pay for all of it.
To reinforce the point about customer impact from storms, one utility in the Northeast reported to their regulator that their 2012 non-storm SAIDI was typical for the Northeast. They also stated that it represented 26% of the total (storm-related and non-storm-related) outage numbers. This means that 74% of all outages were storm-related, nearly three times the non-storm numbers. Granted, this included Superstorm Sandy; but it makes the point that storm-related outages are important for the industry to incorporate into its thinking and metrics about reliability and resilience. More of the same will not change this. Neither will making the customer pay for it (all).
A Proper Solution Will Take Time
Even if you believe that utilities and regulators are starting to understand the problem and taking action to address it, which many are, historical evidence of significant change in the electric industry suggests it will take 10 to 15 years to see broad, measurable improvement.
Commercial and industrial customers then must determine if they can wait for this improvement, or seek another course. All too often, one of the options chosen is to move the business to lower energy cost regions or countries.
Is there another way to achieve significantly better reliability performance for the customer?
There is an answer. Not in all cases, but in many communities and campuses (commercial, industrial, university, hospital, etc.), a Microgrid solution can deliver concurrent improvements in reliability, resilience, and cost savings (and/or cost containment).
Customers have more options for energy service today. It would seem prudent for distribution utilities to see Microgrids and distributed energy resource (DER) solutions as additional tools in their toolbox to better serve customers.