Friday, January 21, 2011

Describing Performance Improvements (Beware of Ratios)

Recently, I received into my Spam folder an ad claiming that a product could “...improve performance 1000%.” Claims in that format have bugged me for a long time, at least as far back as the 1990s, when some of the most popular Oracle “tips & techniques” books of the era used that format a lot to state claims.

Beware of claims worded like that.

Whenever I see “...improve performance 1000%,” I have to do extra work to decode what the author has encoded in his tidy numerical package with a percent-sign bow. The two performance improvement formulas that make sense to me are these:
  1. Improvement = (b − a)/b, where b is the response time of the task before repair, and a is the response time of the task after repair. This formula expresses the proportion (or percentage, if you multiply by 100%) of the original response time that you have eliminated. It can’t be bigger than 1 (or 100%) without invoking reverse time travel.
  2. Improvement = b/a, where b and a are defined exactly as above. This formula expresses how many times faster the after response time is than the before one.
Since 1000% is bigger than 100%, it can’t have been calculated using formula #1. I assume, then, that when someone says “...improve performance 1000%,” he means that b/a = 10, which, expressed as a percentage, is 1000%. What I really want to know, though, is what were b and a? Were they 1000 and 1? 1 and .001? 6 and .4? (...In which case, I would have to search for a new formula #3.) Why won’t you tell me?
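Here’s a minimal sketch of the two formulas in Python (the 10-and-1 values are just the decoding of the “1000%” claim discussed above):

```python
def improvement_proportion(b, a):
    """Formula #1: fraction of the original response time eliminated.
    Cannot exceed 1 (100%) when a is nonnegative."""
    return (b - a) / b

def improvement_factor(b, a):
    """Formula #2: how many times faster the task is after the repair."""
    return b / a

# A "1000%" claim only makes sense under formula #2: b/a = 10.
print(improvement_factor(10, 1))      # 10.0, i.e. "1000%"
print(improvement_proportion(10, 1))  # 0.9, i.e. a 90% reduction
```

Note that the same before/after pair reads as “1000%” under one formula and “90%” under the other — which is exactly why the bare percentage tells you so little.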

Any time you see a ‘%’ character, beware: you’re looking at a ratio. The principal benefit of ratios is also their biggest flaw. A ratio conceals its denominator. That, of course, is exactly what ratios are meant to do—it’s called normalization—but it’s not always good to normalize. Here’s an example. Imagine two SQL queries A and B that return the exact same result set. What’s better: query A, with a 90% hit ratio on the database buffer cache? or query B, with a 99% hit ratio?

Query   Cache hit ratio
A       90%
B       99%

As tempting as it might be to choose the query with the higher cache hit ratio, the correct answer is...
There’s not enough information given in the problem to answer. It could be either A or B, depending on information that has not yet been revealed.
Here’s why. Consider the two distinct situations listed below. Each situation matches the problem statement. For situation 1, the answer is: query B is better. But for situation 2, the answer is: query A is better, because it does far less overall work. Without knowing more about the situation than just the ratio, you can’t answer the question.

Situation 1 (illustrative figures)

Query   Cache lookups   Cache hits   Cache hit ratio
A       100             90           90%
B       100             99           99%

Situation 2 (illustrative figures)

Query   Cache lookups   Cache hits   Cache hit ratio
A       100             90           90%
B       10,000,000      9,900,000    99%
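A quick sketch makes the two situations concrete. The lookup counts below are illustrative assumptions; only the 90% and 99% hit ratios are fixed by the problem:

```python
def misses(lookups, hit_ratio):
    """Cache lookups that fall through to (much slower) physical I/O."""
    return round(lookups * (1 - hit_ratio))

# Situation 1: same number of lookups, so the higher ratio wins.
print(misses(100, 0.90))         # query A: 10 misses
print(misses(100, 0.99))         # query B: 1 miss

# Situation 2: query B does vastly more work despite its better ratio.
print(misses(100, 0.90))         # query A: 10 misses
print(misses(10_000_000, 0.99))  # query B: 100,000 misses
```

The ratio is identical in both situations; it’s the hidden denominator (the lookup count) that decides which query is actually better.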

Because a ratio hides its denominator, it’s insufficient for explaining your performance results to people (unless your aim is intentionally to hide information, which I’ll suggest is not a sustainable success strategy). It is still useful to show a normalized measure of your result, and a ratio is good for that. I didn’t say you shouldn’t use them. I just said they’re insufficient. You need something more.

The best way to think clearly about performance improvements is with the ratio as a parenthetical additional interesting bit of information, as in:
  • I improved response time of T from 10s to .1s (99% reduction).
  • I improved throughput of T from 42t/s to 420t/s (10-fold increase).
There are three critical pieces of information you need to include here: the before measurement (b), the after measurement (a), and the name of the task (here, T) that you made faster. I’ve talked about b and a before, but I’ve slipped this T thing in on you all of a sudden, haven’t I!

Even authors who give you b and a have a nasty habit of leaving off the T, which is even worse than leaving off the before and after numbers, because it implies that using their magic has improved the performance of every task on the system by exactly the same proportion (either p% or n-fold), which is almost never true. That is because it’s rare for any two tasks on a given system to have “similar” response time profiles (defining similar in the proportional sense). For example, imagine the following two quite dissimilar profiles:

Task A
Response time   Resource
90%             CPU
10%             Disk I/O

Task B
Response time   Resource
10%             CPU
90%             Disk I/O

No single component upgrade can have equal performance improvement effects upon both these tasks. Making CPU processing 2× faster will speed up task A by 45% and task B by 5%. Likewise, making Disk I/O processing 10× faster will speed up task A by 9% and task B by 81%.
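The arithmetic behind those percentages is a simple weighted-savings calculation in the style of Amdahl’s law; here’s a minimal sketch, using the 90/10 response time splits from the profiles above:

```python
def overall_speedup_pct(component_fraction, component_speedup):
    """Percentage reduction in total response time when one component
    of the profile becomes `component_speedup` times faster."""
    time_saved = component_fraction * (1 - 1 / component_speedup)
    return 100 * time_saved

# CPU made 2x faster:
print(overall_speedup_pct(0.90, 2))   # task A (90% CPU): 45% faster
print(overall_speedup_pct(0.10, 2))   # task B (10% CPU): 5% faster
# Disk I/O made 10x faster:
print(overall_speedup_pct(0.10, 10))  # task A (10% disk): 9% faster
print(overall_speedup_pct(0.90, 10))  # task B (90% disk): 81% faster
```

The improvement any upgrade delivers depends entirely on how big a fraction of the task’s response time the upgraded component was to begin with — which is why no single upgrade can improve dissimilar tasks by the same proportion.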

For a vendor to claim any noticeable, homogeneous improvement across the board on any computer system containing tasks A and B would be an outright lie.


Joel Garry said...

I plugged that 1000% claim into google, and found a site that seemed to match your spam. Below the performance claim it had a quote that said records per second, so I guess that's the T. It still seemed silly, not near enough context to evaluate truth. But that isn't its purpose, the purpose is to make people register to see how great the product is. I would guess your spam left off the quote, if it is the same product.

I think hiding information is not only a sustainable strategy, but enhances information value. Gartner reports may look useless to us, but we have different criteria than those who buy them. Open Source may appear to be not hiding information, but it is - there is high opportunity cost just to get to the skill level to seek the information, and more cost yet to use it - security by obscurity may not work, but it may be expensive to bypass. And how much would you give me for the secret recipe for Coca-Cola (patent pending)?

Yes, vendors use fluff, outright lies and many other tricks to get money from businesses, maximizing profit while minimizing cost. Caveat emptor.

Spotlighting false claims is the good fight.


Steve Howard said...

Anyone who has ever watched a query do 0 physical reads and yet sit on a processor should be able to answer the buffer cache hit ratio one.

I know that horse has been beaten, resurrected, and beaten to death too many times to count. However, watching the CPU spin on a query that gets all of its blocks in memory is really something to behold, and makes complete sense when you think about it.

UK said...

Hello Cary,

Every time I read about reluctance toward ratios, I wonder why Oracle promotes them in its AWR advisories. I agree that there is no silver bullet, but I look at ratios as one of several good indicators of performance. Though a DBA should not be totally dependent on them, I feel they should include different ratios in their health check monitoring.

I have written an in-house monitoring program in which I have included ratios for parsing, library cache, DB cache, etc., calculated for a snapshot every 10 seconds.

Please advise whether this is a good idea, or else what in your view would be good to monitor every 10 seconds to give the DBA a quick health check of the system. Let me know your insights.

Cary Millsap said...


Remember, the DNA of the AWR advisories goes all the way back to the old bstat.sql and estat.sql utility pair from Oracle version 6. The ratios shown there are an integral part of thousands of DBAs' daily routines.

I'm not trying to claim that all ratios are bad, but all ratios do have the potential to trap you. If you want to see a really interesting case, read a little bit about Simpson's paradox, which I find fascinating.

Ratios are Instruments Of Good when you use them right. For example, my company's Method R Profiler software reports ratios like statement and call response times as proportions of total response time, mean duration per call, total calls per execution, average library cache miss count by execution, and average database buffer cache access count by row. The Profiler highlights information based on ratios, too. For example, the prepare (PARSE) call count gets highlighted if it's greater than 50% of the execute (EXEC) call count.

So, certainly ratios are useful. Just realize that a ratio's potential for causing faulty decision-making increases dramatically if you cannot see its numerator and denominator.

UK said...

Thanks for your quick response, Cary. I get your point: a holistic approach and the selection of the correct ratios are very critical.

Indeed, your Method R is one of the best approaches to performance tuning, which I read about in your "Optimizing Oracle Performance" book; it makes the DBA look at the database as a whole instead of at specific areas, which makes a lot of difference.

From a proactive perspective, I thought it would be helpful to have a monitoring program, which does not have to be fancy with different colours but should highlight critical elements indicating the overall health of the system. As mentioned earlier, I have written a script which runs every 10 seconds to provide information such as active sessions, hard parses, logons, consistent gets, physical gets, etc. from the v$sysstat and v$system_event views. What I feel is missing is a good set of ratios that could serve as indicators, but I am not sure which would be good ones to include. Can you please share your insights on the following:
Do you think it is a good idea to have such a 10-second interval monitor?
Would ratios make any sense to include at this 10-second interval?
If yes, what ratios do you think are good candidates for this monitoring?

Cary Millsap said...


I'm having a difficult time coming up with a productive answer to your question. It's a little bit like being asked if it makes sense to use oxygen molecules in a cake :-).

At any rate, my answer is yes. I think that some ratios like commits/transaction, rollbacks/transaction, parse calls/transaction, maybe parses/execution, and so on would probably make sense for you. But as I said before: I'd recommend not concealing the numerators and denominators.

I'm not a big fan of most tools that poll for their data. One that I've seen that I believe to be quite useful is Snapper, by Tanel Põder. You'll find Tanel's URL on my blog roll. If you haven't seen it, you should have a look.

JD said...

If you know what cache hit ratio is, and you know that a higher ratio is *always* better, then you actually do have enough info.

Your example said that doing less work was what made it better, and obviously cache hit ratio is not sufficient to describe the amount of work done, because it's an entirely different thing. Your point is valid that this can be used to mislead, but it isn't *always* insufficient or misleading to use a ratio.

For example, a 99% customer satisfaction ratio is *always* better, in the sense that it's a better ratio for this isolated unit of measurement, than a 90% customer satisfaction ratio. Other related numbers, such as overall number of customers, gross revenue, net profit, etc., are not at issue here, even though you might find that having a lower customer satisfaction ratio is somehow related to higher profits (like if your business has a monopoly and charges absurd prices.)

Cary Millsap said...


Thank you for your comment.

The customer satisfaction ratio number can trick you in a much more fundamental way than having to imagine a monopoly situation.

Imagine that you have 100 customers labeled 1 through 100. Customer 1 is responsible for 90.1% of your revenue. Customers 2, 3, ..., 100 are each responsible for .1% of your revenue, collectively comprising the other 9.9%.

In this case, you can’t know that a 99% customer satisfaction rate is better than a 90% customer satisfaction rate, until you learn more detail. If customer 1 is your only dissatisfied customer, then you have a 99% satisfaction rate, but you jeopardize 90.1% of your revenue. If customers 91–100 are dissatisfied, then you have only a 90% satisfaction rate, but you jeopardize only 1% of your revenue.
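A quick sketch, using exactly the revenue split described above, verifies the arithmetic:

```python
# Customer 1 carries 90.1% of revenue; customers 2..100 carry 0.1% each.
revenue = {1: 90.1, **{i: 0.1 for i in range(2, 101)}}

def revenue_at_risk(dissatisfied):
    """Percentage of revenue carried by the dissatisfied customers."""
    return round(sum(revenue[c] for c in dissatisfied), 6)

# 99% satisfaction, but the one unhappy customer is customer 1:
print(revenue_at_risk({1}))                  # 90.1% of revenue in jeopardy
# Only 90% satisfaction, but the unhappy customers are 91..100:
print(revenue_at_risk(set(range(91, 101))))  # 1.0% of revenue in jeopardy
```

The satisfaction ratio weights every customer equally, while the revenue at risk weights them by the hidden denominator — which customers are unhappy — and that is what actually matters.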

I agree that ratios aren’t always insufficient or misleading, but my point is that ratios usually—and I would in fact argue “almost always”—conceal hidden treachery that, unless you’re in the habit of digging for it, can trip you up.