musings
from my mind to yours...
05/04/09
The Evolution of Electricity Markets
Filed under: Politics and Economics, Technology and the Law
Posted by: site admin @ 6:59 pm

There has been a perception that electricity generation and distribution is a natural monopoly ever since Samuel Insull employed the Wright demand meter and the rotary converter to build a central-station, distributed-substation style of electricity generation and distribution system in Chicago. The rotary converter provided the technical capability to build a combined AC-DC system in which central generation with distributed substations proved economically feasible. The Wright demand meter provided the economic tool to apportion fixed and variable costs between large and small customers in a way that kept large customers, who could otherwise have built their own generators, on the system, and attracted small customers by making their incremental cost lower than that of alternative power sources. This approach, coupled with control of several key patents, made it possible for Insull to develop a “complex of holding companies that exercised control over most of the electric utilities in the United States.”
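To see the economic logic of demand metering, here is a toy two-part tariff in Python. The charges are invented, but the structure, a demand charge on each customer's peak load plus a low per-kWh energy charge, is the point:

    # Illustrative sketch of Insull-style demand-metered pricing (hypothetical numbers).
    # Fixed plant costs are recovered through a demand charge on peak load, while
    # variable costs are recovered through a low per-kWh energy charge.

    DEMAND_CHARGE_PER_KW = 4.00   # $/kW of peak demand per month (assumed)
    ENERGY_CHARGE_PER_KWH = 0.02  # $/kWh consumed (assumed)

    def monthly_bill(peak_kw: float, kwh_used: float) -> float:
        """Two-part tariff: peak demand covers fixed plant, kWh covers fuel."""
        return DEMAND_CHARGE_PER_KW * peak_kw + ENERGY_CHARGE_PER_KWH * kwh_used

    # A large factory with high, steady load sees a low average price per kWh,
    # undercutting the cost of running its own generator...
    factory = monthly_bill(peak_kw=500, kwh_used=300_000)
    # ...while a small household with a short evening peak pays a higher average
    # price, yet still less than any alternative power source of the era.
    household = monthly_bill(peak_kw=3, kwh_used=250)

    print(f"factory:   ${factory:,.2f} ({factory / 300_000:.4f} $/kWh average)")
    print(f"household: ${household:,.2f} ({household / 250:.4f} $/kWh average)")

On these numbers the factory averages about $0.027/kWh and the household about $0.068/kWh: both groups stay on the system, and the fixed costs get spread across everyone.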

From that time until now, electricity providers have been regulated as public utilities: businesses granted legalized monopolies with a concomitant duty to serve. This has not kept the US free from blackouts or (perceived) high prices. Retail electricity prices that vary widely across the country, along with the realization that the natural monopoly really exists only in the system that delivers power from producer to consumer, have motivated regulators to reconsider whether the legalized monopolies were “natural” or even desirable.

In fact, even before the recent attempts at retail competition in electricity, regulators began finding ways to insert competition into the mix. In Otter Tail Power Co. v. United States, 410 U.S. 366 (1973), the Court held that distributors are required to make their distribution networks available to all electricity generators, as long as it is “necessary or appropriate in the public interest”. This interconnection for the purpose of “wheeling” power from unaffiliated generators had the effect of keeping wholesale prices low. It was followed by the Public Utility Regulatory Policies Act (PURPA) of 1978, which included rules to encourage cogeneration by authorizing FERC to require utilities to purchase electricity from, or sell electricity to, qualifying facilities (QFs). This was combined with a regulatory quid pro quo whereby FERC required open access transmission in exchange for the approval of market-based rates or utility mergers, as a way to introduce more competition into electricity markets.

As competition grew, market-based rates and incentive rates were introduced as a way to drive down retail costs. Market-based rates allow electricity suppliers that can show they possess no significant market power over buyers to set their rates at whatever the market will bear. Incentive rates, on the other hand, encourage dominant suppliers to find cheaper methods of generation, distribution, and/or transmission by allowing the dominant utility to keep a portion of the savings in the form of a higher rate of return for shareholders.
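The shared-savings logic of incentive rates can be sketched in a few lines; the 50/50 split and the dollar figures here are assumptions, purely for illustration:

    # Hypothetical sketch of an incentive-rate ("shared savings") mechanism:
    # the utility keeps a fraction of any verified cost reduction as extra
    # return, and ratepayers get the rest. All numbers are assumed.

    def allowed_revenue(old_cost: float, new_cost: float, utility_share: float = 0.5) -> float:
        """Split verified savings between shareholders and ratepayers."""
        savings = old_cost - new_cost
        return new_cost + utility_share * savings  # utility keeps its share

    # A utility with $100M of costs that finds $10M of savings may charge $95M:
    # customers save $5M relative to the old $100M, and the utility earns $5M
    # above its new cost, raising its effective rate of return.
    print(allowed_revenue(100e6, 90e6))  # 95000000.0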

The “final” step in this evolution was put into motion in 1996 with FERC Order 888, which effectively split utilities into generation, distribution, and transmission units. This required utilities to separate those functions for the purpose of determining costs, whether or not the functional unbundling involved actual legal restructuring. The goal of this regulation was to make it clearly possible for independent power generators to compete at the wholesale level. Subsequently, FERC Order 889 required transmission utilities to participate in OASIS (the Open Access Same-Time Information System), through which available transmission capacity is posted and reserved in real time; this provided a basic market mechanism to effect the intent of Order 888.

The effect of these market mechanisms has not been what was originally envisioned. A flourishing wholesale spot market in electricity developed in 1998, only to produce fluctuating prices that peaked more than two orders of magnitude above the previous average wholesale price and, in the case of California, bankrupted Pacific Gas and Electric. In addition, grid reliability did not improve, as evidenced by the August 2003 blackout.

These problems can be traced to a realization Samuel Insull made when he first began to put together his network of electric utilities: because electricity could not be stored, electricity supply and demand had to match each other on a real-time basis. Until there is technology that can create appropriate storage buffers for electricity (just as we have for water or gas), electricity production must match demand on a near-instantaneous basis. Since demand levels can fluctuate significantly, any system built to be reliable must be able to handle peak demand, whatever that demand is and whenever it occurs, regardless of how much it differs from the average.

Peak demand presents particular problems in a market-driven industry. Efficiency concerns mean that no individual generating company has an incentive to keep more capacity available than it can reasonably hope to sell. Because peaks and averages can differ widely, ensuring that peak capacity is available at all times means that some facilities will be underutilized, perhaps significantly. This can result in a game of investment musical chairs, where some investors are left without customers. Coupled with a requirement to sell on the spot market, the lack of long-term contracts could easily scare off investors in incremental capacity.
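To see how stark the gap between peak and average can be, here is a toy daily load profile in Python; the megawatt numbers are invented, but the load-factor arithmetic is standard:

    # Rough sketch of the peak-capacity problem with made-up load numbers.
    # Reliable service requires building to the peak, so average utilization
    # of that capacity (the "load factor") can be low.

    hourly_load_mw = [600] * 16 + [1500] * 4 + [600] * 4  # assumed 24-hour profile

    peak = max(hourly_load_mw)
    average = sum(hourly_load_mw) / len(hourly_load_mw)
    load_factor = average / peak

    print(f"peak: {peak} MW, average: {average:.0f} MW, load factor: {load_factor:.0%}")
    # With this profile, capacity sized for the 1500 MW peak sits half idle on
    # average, yet no single competitive generator wants to own the idle half.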

Currently, attempts to mitigate peaks in demand have focused on demand management, which can be done in several ways. Demand can be managed passively through devices that automatically respond to fluctuations in incoming power by adjusting the load they place on the supply. Such devices are not widespread yet (and may never be, because they require adding supply-management circuitry to each device). “Smart Grid” approaches instead use out-of-band signaling to deliver real-time pricing information to consumer endpoints, in the hope that consumers will modify their consumption based on this information. So far, this approach hasn’t had the desired results.
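As a sketch of what such a price-responsive endpoint might do (the price thresholds and the signal format are my own assumptions, not any actual utility protocol):

    # Minimal sketch of a price-responsive endpoint of the kind the "Smart
    # Grid" model envisions. Thresholds and behavior are invented.

    def choose_setpoint(price_per_kwh: float) -> str:
        """Shed discretionary load as the broadcast real-time price rises."""
        if price_per_kwh < 0.10:
            return "run normally (charge, heat water, cool the house)"
        elif price_per_kwh < 0.30:
            return "defer discretionary loads (delay charging, raise thermostat)"
        else:
            return "shed all non-essential load until the price falls"

    for price in (0.05, 0.20, 0.55):
        print(f"${price:.2f}/kWh -> {choose_setpoint(price)}")

The gamble, as noted above, is that consumers actually act on the signal; automating the response is exactly what the passive devices attempt.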

There are other solutions, however. Current work on large-scale energy storage facilities can provide the needed buffer by storing energy during times of low demand and feeding it back into the grid during periods of high demand. These approaches are currently being tried at the wholesale level as a way to mitigate the rapid fluctuations that can occur in wind power generation; in that case, the storage deals with fluctuating supply, the complement of the fluctuating-demand problem. They are also being tried at the retail level by “off grid” homeowners; a quick search on the ‘Net reveals a number of companies that market battery packs sized for an entire home. In both cases, energy generated in its original form (be that coal, gas, solar, wind, etc.) is converted into another (chemical energy in a battery).

In Insull’s time, significant energy storage mechanisms (other than pumped storage) didn’t exist. As a result, it made sense to create monopolies as a way of amortizing the cost of excess capacity across the largest possible customer base. If we are to move successfully away from the regulated monopoly model of electricity generation and delivery, we must develop energy storage mechanisms significant enough to decouple energy production from demand. For this decoupling to be economically feasible, the amortized storage cost must be significantly lower than the average original generation cost.

Further, the location of the storage facility can determine the level of competitiveness on the supply side. The more closely the storage facilities are sited to the ultimate customer, the greater that customer’s choices as to the original source of power. The storage facility can even be at the retail destination, allowing the retail customer to perform energy arbitrage: purchasing cheap power at times of low demand and reselling it back onto the grid at times of high demand. This of course requires demand-based pricing, with variations in price significant enough to justify the investment in storage technology. That, however, is exactly the end state the Smart Grid aims for.
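A back-of-the-envelope sketch of that retail arbitrage, with assumed prices and battery characteristics, shows why the size of the price spread matters:

    # Back-of-the-envelope energy arbitrage for a behind-the-meter battery.
    # Prices, capacity, and efficiency are illustrative assumptions.

    ROUND_TRIP_EFFICIENCY = 0.85  # fraction of stored energy recovered (assumed)
    CAPACITY_KWH = 10.0           # home battery size (assumed)

    def daily_arbitrage_profit(buy_price: float, sell_price: float) -> float:
        """Buy a full charge off-peak, sell the recoverable energy on-peak."""
        cost = CAPACITY_KWH * buy_price
        revenue = CAPACITY_KWH * ROUND_TRIP_EFFICIENCY * sell_price
        return revenue - cost

    profit = daily_arbitrage_profit(buy_price=0.05, sell_price=0.30)
    print(f"${profit:.2f}/day")  # $2.05/day on these numbers

    # The spread must cover not just the round-trip losses but the amortized
    # cost of the battery itself, which is why price variations have to be
    # "significant enough to justify the investment".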

4 comments
04/12/09
Is a Smart Grid Really That Smart?
Filed under: Politics and Economics, Technology and the Law
Posted by: site admin @ 9:55 am

If you want to consider the security future of the smart grid, you need to consider the success of Digital Rights Management (DRM) in music. More specifically, the lack of success.

I realize this may seem completely unrelated, but it’s not, for a fundamental reason: secure protocols are about enabling A and B to communicate while keeping C from knowing what they are communicating. They can never completely resolve the problem of A communicating with B while simultaneously controlling B’s access to the information communicated. Vendors of copyrighted material (music, video, books) have watched repeatedly as one encryption scheme after another has been broken, with the result that purchasers of DRM’d material have been able to copy the end product at will in an unprotected state.

What does this mean for the smart grid? There is no way to prevent those who would attack the grid from becoming part of the grid and attacking it from the inside. No matter how much encryption is used, someone will be able to break the encryption scheme, because the destination endpoint must be given both the ciphertext and the key at some point. Intercepting this transmission at the appropriate point isn’t difficult. Once that is done, malicious smart grid endpoints will be able to send false information back into the grid: creating rapidly fluctuating demand signals, making false responses to received commands, and so on. Depending on what the endpoints are instructed to do and how they are coordinated, this could create some very interesting problems.

There’s another problem with the smart grid: emergent behavior and the inherent weakness of complex systems. As systems become more complex, they begin to exhibit reliability problems and other inherent weaknesses. Attempting to correct this by adding additional checks and counter-checks only makes the resulting system even more complex, which creates more potential points of failure. As the number of places that can fail increases, we move inexorably toward a point where the probability that something has failed at any given time approaches 1. On top of this, complex systems also exhibit emergent behavior, where the behavior of the whole system is greater than the sum of its parts, in ways that have not been predicted or planned for. And all of this occurs even if there’s nobody malicious out there attempting to exploit the system.
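The arithmetic behind "the probability that something has failed approaches 1" is easy to make concrete. Treating components as independent (an idealization) and individually very reliable, failures still compound relentlessly with scale:

    # P(at least one of n components is failed) = 1 - (1 - p)^n.
    # Even with a one-in-a-million failure rate per component, a large enough
    # system is almost certainly in a partially failed state at any moment.

    def p_any_failed(n_components: int, p_component: float) -> float:
        return 1 - (1 - p_component) ** n_components

    for n in (10, 1_000, 100_000, 10_000_000):
        print(f"n={n:>10,}: {p_any_failed(n, 1e-6):.6f}")
    # n=        10: 0.000010
    # n=     1,000: 0.001000 (approximately)
    # n=   100,000: 0.095163
    # n=10,000,000: 0.999955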

The history of the Internet is instructive here. The origins of the Internet go back to the late ’60s, when ARPAnet first came online. It is now 2009 and we still run into problems with things like denial of service attacks, 40 years later. While we’ve obviously learned some things during this time and can avoid many of the problems of the past, we should also have learned that getting things right the first time is probably impossible. The difference between failure on the Internet and failure in the power grid, however, is that we have backup systems if the Internet fails: we can use the telephone (provided it isn’t VoIP) or even snail mail. If the power grid fails, we have no system-wide backup plan that enables those at the endpoints to continue functioning while the grid comes back online.

Further, the goal of the smart grid is efficiency. Private enterprise, and the shareholders that fund it, desire efficiency because it means a better return on assets; individual investors don’t want capital tied up in non-income-producing assets. The government also wants efficiency, because wasted energy production contributes to greenhouse gas emissions and other environmental problems. The problem with efficiency is that it means operating close to capacity on a continual basis. When capacity drops suddenly, systemic failures occur, and it takes only a small change in the relationship between supply and demand to trigger them. Once the problem occurs, demand may have to drop not just to prior levels but significantly below them in order to clear out the congestion. One need only look at a traffic jam to see a common example of this dynamic.
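A standard queueing-theory toy model (an M/M/1 queue, my illustration rather than anything in the post) captures the traffic-jam dynamic: delay explodes as utilization approaches capacity.

    # In an M/M/1 queue, mean time in the system is W = service_time / (1 - rho),
    # where rho is utilization. Small demand shifts near capacity have outsized
    # effects, which is the fragility of running "efficiently".

    def mean_wait(utilization: float, service_time: float = 1.0) -> float:
        if utilization >= 1.0:
            return float("inf")  # demand exceeds capacity: congestion never clears
        return service_time / (1 - utilization)

    for rho in (0.50, 0.90, 0.95, 0.99):
        print(f"utilization {rho:.0%}: mean delay {mean_wait(rho):.0f}x service time")
    # Going from 90% to 99% utilization multiplies delay tenfold.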

In this era, when a successful exploit can be distributed worldwide in minutes but can take months to fix, the asymmetrical nature of the threat dictates a radical response. Eventually, proponents of the smart grid are going to realize this. When they do, they’ll realize that the smartest thing for the grid is no grid at all: distributed generation and islands of power. Physical isolation is ultimately the simplest way to protect any grid you intend to make smart. Of course, if you keep it dumb, this isn’t a problem.

1 comment
03/04/09
“Public Use” of Private Property - Is a Monopoly a Bad Thing?
Filed under: Politics and Economics, Technology and the Law
Posted by: site admin @ 7:06 pm

Typically, the concept of “public use” of private property is used as a justification for regulating monopolies - specifically, monopolies that affect the prices consumers pay. The regulation typically takes one of several forms: price regulation, quantity and quality of service requirements, or both.

Monopolies result when fixed costs are high and variable costs are low. In particular, when variable costs are so low that the average cost of goods continues to fall as quantity increases across the entire demand curve, the dominant seller in the market has a natural ability to lower prices below the average cost of other sellers and still make a profit. When this can happen, the fear in this country is that the dominant seller will lower prices temporarily to drive competitors out of business, then raise prices after the competition has disappeared. Naturally, consumers of the product, having become accustomed to low prices, don’t like the idea (or reality) of having to pay higher prices for the same good at a later time, particularly when they have no apparent alternative or negotiating leverage over the price.
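That cost structure is easy to make concrete. With assumed numbers for fixed and variable costs, average cost falls across the whole output range, so the biggest producer can always undercut smaller rivals:

    # The cost structure behind a natural monopoly, with assumed numbers:
    # high fixed cost, low constant variable cost, so average cost falls
    # over the entire relevant range of output.

    FIXED_COST = 1_000_000_000   # e.g., plant and wires ($, assumed)
    VARIABLE_COST_PER_UNIT = 10  # e.g., fuel per unit ($, assumed)

    def average_cost(quantity: float) -> float:
        return FIXED_COST / quantity + VARIABLE_COST_PER_UNIT

    for q in (1_000_000, 10_000_000, 100_000_000):
        print(f"q={q:>11,}: average cost ${average_cost(q):,.2f}/unit")
    # q=  1,000,000: average cost $1,010.00/unit
    # q= 10,000,000: average cost $110.00/unit
    # q=100,000,000: average cost $20.00/unit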

The flourishing of various natural and legislated monopolies (such as the Charles River Bridge) from the colonial period through the late 1800s ultimately led to the populist reaction illustrated in Munn v. Illinois. By the end of the nineteenth century, courts generally perceived that monopolistic control of some resource of significance implied an obligation to the public, usually in the form of a requirement to furnish an “adequate supply or service without discrimination” (Haar and Fessler, “The Wrong Side of the Tracks”). Courts used four different rationales to justify this requirement:
1)    Imposition of a right of common access based on the concept of a “public calling, essential to individual survival within the community”;
2)    The duty to serve all equally as an outgrowth of natural monopoly power;
3)    The duty to serve all parties alike as a consequence of a grant of the power of eminent domain; and
4)    The duty to serve all equally, flowing from consent express or implied.

The fundamental assumption behind all of these is built into the definition of “adequate supply”. Realistically, this means supply at a price significantly lower than the optimal price a monopolist would charge, because such a price would result in reduced consumption: in effect, price-driven rationing. For goods such as energy, which have a significant impact on society, this rationing was perceived to be unacceptable due to collateral societal impacts; the poor, for example, might not be able to afford to heat their homes in the winter, and some could freeze to death.

In order to forestall such unacceptable scenarios, state and federal governments, if they were to regulate prices, had to determine a price regulation scheme that would minimize consumer prices to the extent practicable while still incentivizing private industry to invest in the regulated markets. In electricity markets, the scheme ultimately settled on is formulated as:

    R = B × r + O

where
R is the monopoly revenue requirement (the total amount needed to recover costs and return a profit sufficient to incentivize investors),
B is the rate base, which reflects capital investment in plant and other assets,
r is the allowed rate of return, and
O is operating expenses / variable costs, such as fuel and labor.

Once R has been determined, price is simply

    P = R / V

where
P is the price per unit volume, and
V is the volume of units expected to be sold.
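To make the formula concrete, here is a small worked example; the dollar figures and volume are hypothetical, chosen only to show how the regulated price falls out of the rate base, allowed return, and expected sales:

    # Worked example of the rate formula above. All numbers are hypothetical.
    B = 500_000_000    # rate base: capital invested in plant and other assets ($)
    r = 0.08           # allowed rate of return
    O = 160_000_000    # operating expenses: fuel, labor, etc. ($)
    V = 2_000_000_000  # expected sales volume (kWh)

    R = B * r + O      # revenue requirement
    P = R / V          # regulated price per unit volume

    print(f"R = ${R:,.0f}, P = ${P:.3f}/kWh")  # R = $200,000,000, P = $0.100/kWh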

This formulation, while apparently simple, has some subtle implications and complexities. The most obvious implication is a strong incentive to invest in capital equipment rather than labor or other variable costs. This incentive has led in some cases to abuses, which have, in turn, led to a need to monitor closely what can actually be included in B, the rate base.

This eventually led to the rule that, in order for capital to be included in the rate base, it must be deemed “used and useful” in supplying consumers. Costs for capital not meeting that requirement could still be recovered by treating the ongoing capital expense as part of the operating expense O. This kept investors from losing money, but it did lower their mean rate of return.

It should be noted that regulation has not been uniformly successful, largely because regulators have failed to recognize occasions when legal or market forces no longer give the regulated entity a monopolistic advantage. In Market Street Railway Co. v. Railroad Commission of California, Market Street Railway continued to face price regulation even though it competed with a municipal railroad as well as rising automobile traffic and, after years of declining service and revenue, went bankrupt. It isn’t clear whether Market Street would have survived had it been unregulated, but the fact that its ridership and revenues continued to decline should have been a signal that it no longer possessed monopoly power, assuming it ever did.

This illustrates the core problem with government regulation: once started, it can be very hard to stop. Technology changes and monopoly power changes with it. Canals had lucrative monopolies until railroads came along. Railroads had monopoly power until cars became affordable.  Failure to recognize this change can create systemic problems that are never allowed to heal.

In fact, regulation could well thwart rapid innovation. Because regulation keeps prices (artificially) low, it can delay the introduction of new technologies that could undermine the existing monopoly. This is particularly true in the field of energy. By regulating retail prices, we lessen the incentive to develop conservation technologies. While high prices can be painful in the short term, that pain is exactly the incentive that motivates investment in cheaper alternatives.

1 comment
07/03/06
On Smoking Bans
Filed under: Technology and the Law
Posted by: site admin @ 8:51 am

We’ve had smoking bans in place where I live for a while now. Currently, they’re county wide - but there’s some talk of a statewide ban just to level the playing field. While I’m interested in health, I’m not convinced this is the best approach.

The reason is this: any time you legislate a particular process rather than against an outcome, you run the risk of enabling that outcome via a different, previously unimagined path. And because technology always moves faster than the law (at least it does in this century), this can mean that a lot of harm occurs along that unintended path before the law catches up.

Let’s use smoking bans as an example. What’s the real goal here? Smoking bans are based on a public health issue. We know that smoke, either first hand or second hand, is a significant agent in the instigation of many diseases. Because of that, we want to improve public health by limiting people’s exposure to smoke. Until we legislate smoking out of existence (which is very unlikely), we must resort to other means to limit the health effects of smoking on the population.

The approach where I live is to ban smoking in a whole list of public places, including restaurants and bars. What this does is drive the smokers outside (flashback to the high school “smokehole”). This is an example of legislating a particular process. It doesn’t eliminate second-hand smoke from outdoor spaces, though - we just assume the pollution will waft away and be diluted in the atmosphere.

A better, more complete approach would be simply to legislate the desired outcome - that enclosed spaces can contain no more than so many parts per billion of various pollutants - and then leave it to the owners/operators of those spaces to figure out how to achieve that goal. They might choose to ban smoking on their premises. Or they might choose to use clean room technology1 to filter the air; similar to the way a kitchen stove-top hood works, this technology can vacuum up the pollutants before they circulate. The advantage of the second option is that, while it is more expensive, it keeps smokers from being pushed out into public spaces where the pollution they create isn’t filtered.
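As a sketch of how outcome-based regulation reads when reduced to a rule (the pollutant names and ceilings are invented for illustration):

    # Outcome-based rule: the statute fixes pollutant ceilings for enclosed
    # spaces and stays silent about the process used to meet them.
    # Limits below are hypothetical, not from any actual air-quality standard.

    LIMITS_PPB = {"particulates": 50, "carbon_monoxide": 9_000, "benzene": 1}

    def is_compliant(measured_ppb: dict) -> bool:
        """A space complies if every measured level is under its ceiling."""
        return all(measured_ppb.get(p, 0) <= limit for p, limit in LIMITS_PPB.items())

    # The same test applies whether the owner bans smoking, installs
    # filtration, or faces some future nicotine device nobody has imagined.
    print(is_compliant({"particulates": 12, "carbon_monoxide": 800, "benzene": 0}))  # True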

Without an approach that looks to the real issue of airborne pollutants, we could easily end up with other nicotine delivery devices that don’t involve “smoking”, and thus are not banned under current law, yet still create just as much of a health hazard because they create just as many airborne pollutants.

It’s easy to pass laws that address specific problems. It’s harder to really think through the issues and determine what the real problem is. But this extra work will pay off in the long run, because it will result in laws that truly express their underlying philosophy and, as a result, are durable in the face of change.

NOTES:
1. The technology used to keep semiconductor fabs clean enough for chip manufacturing.

comments (0)
04/30/06
On Child Pornography, the Law and Technology
Filed under: Technology and the Law
Posted by: site admin @ 4:15 pm

This link points to an article describing a bill to be introduced by US Rep. Diana DeGette (D-CO). As much as I hate child pornography, I cannot tolerate the invasion of privacy described in this article. As described, the concept is not limited to retaining records of people visiting child pornography web sites. In fact, it doesn’t appear to be limited at all.

I see this as another example of well-intentioned people without any real understanding of technology crafting a solution that forces a conflict between values important to this country. What appears to have happened in this case, as in so many others, is that those involved in public policy have failed to clearly delineate the problem they are trying to solve. Further, they have failed to put enough effort into finding a solution and, as a result, have come up with one that does not solve the problem but does create other ones.

Keeping track of everything everyone sees on the Internet won’t stop child pornography. But it may well get people in trouble when they stumble onto a site unintentionally. And it reduces this country to a place little different from China in terms of the level of government watchers looking over our shoulders.

Let’s keep in mind what the real goal is here: to stop the distribution of this filth. Prosecuting users after the fact is limited by the resources required for prosecution. So keeping records, even to aid prosecution, is at best imperfect.

A more appropriate solution would be to create a national registry that tracks child porn sites and require all ISPs to block those sites. The list could easily be distributed using an RSS feed (that’s not the only way to do it, but it’s an example most people can understand). And since every ISP already has the technology to block requests geographically - this has already been demonstrated in such cases as Yahoo and the sale of Nazi-related items1 - it wouldn’t be difficult for them to check the list before honoring a request. Keep in mind that the producers of child porn are far fewer in number than the consumers2. Blocking access to those sites would make it more difficult for people to consume, which is really what we want. Further, this can be done under the auspices of any law that outlaws the distribution of child pornography (since distribution is exactly what happens when an http request is honored).
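As a sketch of the mechanism (the feed contents and hostnames are placeholders; no real ISP software or registry is being described):

    # Minimal sketch of the registry idea: the ISP periodically pulls a
    # published blocklist and refuses to forward requests to listed hosts.

    from urllib.parse import urlparse

    blocked_hosts = set()  # refreshed periodically from the registry's feed

    def refresh_blocklist(entries):
        """Replace the in-memory list with the latest published registry."""
        blocked_hosts.clear()
        blocked_hosts.update(entries)

    def should_forward(request_url: str) -> bool:
        """Check the destination host against the registry before honoring it."""
        return urlparse(request_url).hostname not in blocked_hosts

    refresh_blocklist(["blocked.example.org"])          # placeholder entry
    print(should_forward("http://blocked.example.org/page"))  # False
    print(should_forward("http://example.com/"))              # True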

Granted, blocking access in this country won’t necessarily eliminate all access to child porn. However, since we aren’t talking about interdicting just hosting, but also blocking http requests, the effect would be to block child porn hosted not only in this country but ANYWHERE. While this isn’t a complete solution, it raises the level of effort required to get around the ban, which in turn limits the number of people who will attempt to do so3. Moreover, since child porn is illegal in virtually every country, it would be relatively simple to convince other governments to follow our lead, and even use our database. As more and more countries adopted this approach, it would become harder for child pornographers to find a place to hide.

We should also keep in mind that this same mechanism can be used to block ANYTHING. That means we must be careful about what gets blacklisted4. Our country is vibrant because we constantly seek to limit the limits on freedom. To the degree that we can preclude things with no positive value while limiting the impact on things such as free speech, to that degree we can flourish in freedom. If we fail to find that balance, we can easily create a chilling effect that causes people to censor everything in an attempt to censor the bad and the marginal - and destroy the creativity that makes the country great. We need only look at the former Soviet Union to see what happens when people live in a censored society.

NOTES:
1. eBay, Amazon avoid French knot
2. Statistics on Porn & Sex Addiction
3. While I could outline several ways to get around this, I see no reason to make it easier for anyone.
4. The Chinese would just as soon use it to block access to websites dealing with freedom, democracy and the like (Oh wait - they already do that…).

comments (0)