His email looked like this:
It’s a screen shot of frame 3:12 from my November 2014 video called “Why you need a profiler for Oracle.” At frame 3:12, I am answering the question of how you can know when you’re finished optimizing a given application function. Gummi’s question is, «Oi! What happened to “when the application is fast enough to meet users’ requirements?”»
Gummi noticed (the good ones will do that) that the video says something different from what he had heard me say for years. It’s a fair question: why, in the video, did I say this new thing? It was not an accident.
When are you finished optimizing?
The question in focus is, “When are you finished optimizing?” Since 2003, I have actually used three different answers:

- A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today. (Source: Optimizing Oracle Performance (2003), pages 302–304.)
- B. When the application is fast enough to meet your users’ requirements. (Source: I have taught this in various courses, conferences, and consulting calls since 1999 or so.)
- C. When there are no unnecessary calls, and the calls that remain run at hardware speed. (Source: “Why you need a profiler for Oracle” (2014), frames 2:51–3:20.)

My motive behind answers A and B was the idea that optimizing beyond what your business needs can be wasteful. I created these answers to deter people from misdirecting time and money toward perfecting something when those resources might be better invested improving something else. This idea was important, and it still is.
So, then, where did C come from? I’ll begin with a picture. The following figure allows you to plot the response time for a single application function, whatever “given function” you’re looking at. You could draw a similar figure for every application function on your system (although I wouldn’t suggest it).
Somewhere on this response time axis for your given function is the function’s actual response time. I haven’t marked that response time’s location specifically, but I know it’s in the blue zone, because at the bottom of the blue zone is the special response time RT. This value RT is the function’s top speed on the hardware you own today. Your function can’t go faster than this without upgrading something.
It so happens that this top speed is the speed at which your function will run if and only if (i) it contains no unnecessary calls and (ii) the calls that remain run at hardware speed. ...Which, of course, is the idea behind this new answer C.
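To make that concrete, here is a minimal sketch of the arithmetic for estimating RT from profile data: keep only the calls the function actually needs, and price each one at the best per-call latency your hardware can deliver. The call names, counts, and latencies below are illustrative assumptions, not output from any particular profiler.

```python
# A sketch of estimating top speed RT from a response time profile.
# All names and numbers here are illustrative assumptions.

# (call name, call count, total seconds measured)
profile = [
    ("CPU service",             1,    0.40),
    ("db file sequential read", 900,  1.80),   # ~2 ms per read
    ("SQL*Net round-trip",      3214, 1.61),   # one trip per row fetched
]

# How many of each call the function actually needs...
necessary_calls = {
    "CPU service":             1,
    "db file sequential read": 900,
    "SQL*Net round-trip":      33,    # 3,214 rows fetched 100 at a time
}

# ...and the best-case seconds per call on the hardware you own today.
hardware_latency = {
    "CPU service":             0.40,
    "db file sequential read": 0.002,
    "SQL*Net round-trip":      0.0005,
}

rt = sum(necessary_calls[name] * hardware_latency[name] for name, _, _ in profile)
actual = sum(seconds for _, _, seconds in profile)
print(f"actual response time: {actual:.2f} s; top speed RT: {rt:.2f} s")
```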
Where, exactly, is your “requirement”?
Answer B (“When the application is fast enough to meet your users’ requirements”) requires that you know the users’ response time requirement for your function, so, next, let’s locate that value on our response time axis.

This is where the trouble begins. Most DBAs don’t know what their users’ response time requirements really are. Don’t despair, though; most users don’t either.
At banks, airlines, hospitals, telcos, and nuclear plants, you need strict service level agreements, so those businesses invest in quantifying them. But realize: quantifying all your functions’ response time requirements isn’t about a bunch of users sitting in a room arguing over which subjective speed limits sound best. It’s about knowing your technological speed limits and understanding how close to those values your business needs to pay to be. It’s an expensive process. At some companies, it’s worth the effort; at most companies, it’s just not.
How about using “well, nobody complains about it” as all the evidence you need that a given function is meeting your users’ requirement? That’s how a lot of people do it. You might be able to get away with it if your systems weren’t growing. But systems do grow. More data, more users, more application functions: these are all forms of growth, and you can probably measure every one of them happening where you’re sitting right now. All these forms of growth put you on a collision course with failing to meet your users’ response time requirements, whether you and your users know exactly what those requirements are or not.
In any event, if you don’t know exactly what your users’ response time requirements are, then you won’t be able to use “meets your users’ requirement” as your finish line that tells you when to stop optimizing. This very practical problem is the demise of answer B for most people.
Knowing your top speed
Even if you do know exactly what your users’ requirements are, it’s not enough. You need to know something more.

Imagine for a minute that you do know your users’ response time requirement for a given function, and let’s say that it’s this: “95% of executions of this function must complete within 5 seconds.” Now imagine that this morning when you started looking at the function, it would typically run for 10 seconds in your Oracle SQL Developer worksheet, but now, after spending an hour or so with it, you have it down to where it runs pretty much every time in just 4 seconds. So, you’ve eliminated 60% of the function’s response time. That’s a pretty good day’s work, right? The question is, are you done? Or do you keep going?
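Before going on, note that the requirement itself is mechanical to check once you have measured response times. Here is a small sketch, with hypothetical timings, of testing a “95% within 5 seconds” requirement:

```python
# A sketch of checking "95% of executions must complete within 5 seconds"
# against measured response times. The samples here are hypothetical.
import math

def percentile(samples, p):
    """Return the p-th percentile of a list of response times, in seconds."""
    ordered = sorted(samples)
    k = max(math.ceil(p / 100 * len(ordered)) - 1, 0)
    return ordered[k]

response_times = [3.7, 3.8, 3.9, 4.0, 4.0, 4.0, 4.1, 4.1, 4.2, 4.3]
p95 = percentile(response_times, 95)
print(f"95th percentile: {p95:.1f} s; requirement met: {p95 <= 5.0}")
```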
Here is the reason that answer C is so important. You cannot responsibly answer whether you’re done without knowing that function’s top speed. Even if you know how fast people want it to run, you can’t know whether you’re finished without knowing how fast it can run.
Why? Imagine that 85% of those 4 seconds are consumed by Oracle enqueue, or latch, or log file sync calls, or by hundreds of parse calls, or 3,214 network round-trips to return 3,214 rows. If any of these things is the case, then no, you’re absolutely not done yet. If you were to allow some ridiculous code path like that to survive on a production system, you’d be diminishing the whole system’s effectiveness for everybody (even people who are running functions other than the one you’re fixing).
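The 3,214 round-trips case deserves a back-of-the-envelope illustration. Assuming a 0.5 ms network round-trip (your number will differ; measure it), row-at-a-time fetching spends over a second and a half in network latency alone, while fetching the same rows 100 at a time needs only 33 trips:

```python
# Back-of-the-envelope: the network cost of row-at-a-time fetching versus
# array fetching. The 0.5 ms round-trip latency is an assumed figure.
rows = 3214
latency = 0.0005                  # seconds per network round-trip (assumed)

row_at_a_time = rows * latency    # one round-trip per row
array_size = 100                  # rows returned per round-trip
trips = -(-rows // array_size)    # ceiling division -> 33 trips
array_fetch = trips * latency

print(f"row-at-a-time: {rows} trips, {row_at_a_time:.2f} s of latency")
print(f"array fetch:   {trips} trips, {array_fetch:.4f} s of latency")
```

Most Oracle client interfaces expose this as an array fetch size setting (for example, a cursor’s arraysize in python-oracledb).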
Now, sure, if there’s something else on the system that has a higher priority than finishing the fix on this function, then you should jump to it. But you should at least leave this function on your to-do list. Your analysis of the higher priority function might even reveal that this function’s inefficiencies are causing the higher-priority function’s problems. Such can be the nature of inefficient code under conditions of high load.
On the other hand, if your function is running in 4 seconds and (i) its profile shows no unnecessary calls, and (ii) the calls that remain are running at hardware speed, then you’ve reached a milestone, summarized in the sketch after this list:
- if your code meets your users’ requirement, then you’re done;
- otherwise, either you’ll have to reimagine how to implement the function, or you’ll have to upgrade your hardware (or both).
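Here is that milestone logic as a tiny sketch; profile_is_clean stands in for the two conditions above, and all the names are hypothetical:

```python
# A sketch of the milestone decision. "profile_is_clean" means: no
# unnecessary calls, and the calls that remain run at hardware speed.
def next_step(profile_is_clean: bool, meets_requirement: bool) -> str:
    if not profile_is_clean:
        return "keep optimizing: eliminate waste, then re-measure"
    if meets_requirement:
        return "done"
    return "reimplement the function, upgrade the hardware, or both"

print(next_step(profile_is_clean=True, meets_requirement=False))
```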
So what do most people actually do? They get their functions’ response times reasonably close to their top speeds (which, with good people, isn’t usually as expensive as it sounds), and then they worry about requirements only if those requirements are so important that it’s worth a project to quantify them. A requirement is usually considered that important if it’s close to your top speed, or if violating it carries an expensive service level penalty.
This strategy works reasonably well.
It is interesting to note here that knowing a function’s top speed is actually more important than knowing your users’ requirements for that function. A lot of companies can work just fine without knowing their users’ requirements, but without knowing your top speeds, you really are in the dark. A second observation that I find particularly amusing is this: not only is your top speed more important to know, it is also easier to compute than your users’ requirement (…if you have a profiler, which was my point in the video).
Better and easier is a good combination.
Tomorrow is important, too
When are you finished optimizing?

- A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
- B. When the application is fast enough to meet your users’ requirements.
- C. When there are no unnecessary calls, and the calls that remain run at hardware speed.

Answer A is still a pretty strong answer. Notice that it actually maps closely to answer C. Answer C’s prescription for “no unnecessary calls” yields answer A’s goal of call reduction, and answer C’s prescription for “calls that remain run at hardware speed” yields answer A’s goal of latency reduction. So, in a way, C is a more action-oriented version of A, but A goes further to combat the perfectionism trap with its emphasis on the cost of action versus the cost of inaction.
One thing I’ve grown to dislike about answer A, though, is its emphasis on today in “…exceeds the cost of the performance you’re getting today.” After years of experience with the question of when optimization is complete, I think that answer A under-emphasizes the importance of tomorrow. Unplanned tomorrows can quickly become ugly todays, and as important as tomorrow is to businesses and the people who run them, it’s even more important to another community: database application developers.
Subjective goals are treacherous for developers
Many developers have no way to test, today, the true production response time behavior of their code, which they won’t learn until tomorrow. ...And perhaps not until some remote, distant tomorrow.

Imagine you’re a developer using 100-row tables on your desktop to test code that will access 100,000,000,000-row tables on your production server. Or maybe you’re testing your code’s performance only in isolation from other workload. Both of these are procedural mistakes, but they are everyday real life for many developers. When this is how you develop, telling you that “your users’ response time requirement is n seconds” accidentally implies that you are finished optimizing when your query finishes in less than n seconds on your no-load system of 100-row test tables.
If you are a developer writing high-risk code—and any code that will touch huge database segments in production is high-risk code—then of course you must aim for the “no unnecessary calls” part of the top speed target. And you must aim for the “and the calls that remain run at hardware speed” part, too, but you won’t be able to measure your progress against that goal until you have access to full data volumes and full user workloads.
Notice that to do both of these things, you must have access to full data volumes and full user workloads in your development environment. To build high-performance applications, you must do full data volume testing and full user workload testing in each of your functional development iterations.
This is where agile development methods yield a huge advantage: agile methods provide a project structure that encourages full performance testing for each new product function as it is developed. Contrast this with the terrible project planning approach of putting all your performance testing at the end of your project, when it’s too late to actually fix anything (if there’s even enough budget left over by then to do any testing at all). If you want a high-performance application with great performance diagnostics, then performance instrumentation should be an important part of your feedback for each development iteration of each new function you create.
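What might such per-function instrumentation look like? Here is a minimal sketch using only Python’s standard library; a real project would persist these measurements as baselines rather than just logging them, and the function below is hypothetical:

```python
# A minimal sketch of per-function performance instrumentation.
# A real project would record these timings durably as baselines;
# this version just logs elapsed wall-clock time per call.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def instrumented(func):
    """Log each call's elapsed time as development feedback."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logging.info("%s elapsed=%.6f s", func.__name__, elapsed)
    return wrapper

@instrumented
def lookup_order(order_id):
    time.sleep(0.01)          # stand-in for the real database work
    return {"order_id": order_id}

lookup_order(42)
```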
My answer
So, when are you finished optimizing?

- A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
- B. When the application is fast enough to meet your users’ requirements.
- C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
Answer C is usually a tougher standard than answer A or B, and when it’s not, it is the best possible standard you can meet without upgrading or redesigning something. In light of this “tougher standard” kind of talk, it is still important to understand that what is optimal from a software engineering perspective is not always optimal from a business perspective. The term optimized must ultimately be judged within the constraints of what the business chooses to pay for. In the spirit of answer A, you can still make the decision not to optimize all your code to the last picosecond of its potential. How perfect you make your code should be a business decision. That decision should be informed by facts, and these facts should include knowledge of your code’s top speed.
Thank you, Guðmundur Jósepsson, of Iceland, for your question. And thank you for waiting patiently for several weeks while I struggled to put these thoughts into words.
4 comments:
Hi Cary
Is the function's top speed a metric that can be realistically calculated early on in the development life cycle?
Or is it to be calculated pre go-live to be used as a benchmark for production level performance - in order to assess if further optimization is required?
Calculating a meaningful value for RT early on assumes that the perfect solution (code, infrastructure, architecture) is known, and during development this might not always be the case.
Thoughts?
Gary,
I think it’s very important to measure and record the speed of your code as you’re writing it. Having the baseline available later when you’re in production is incredibly useful. When production performance is different from the baselines you have recorded, it is an invaluable learning experience to explore why. It is important feedback that makes developers better.
A mistake I think too many people make is that they wait until long after the code is originally written to begin thinking about performance. Developers should think about performance as they’re writing their code, and the DBAs on the development team (the people who are making sure that they have realistic data volumes and competing workloads) should be helping them during the development process.
“Infrastructure” and “architecture” are words with a thousand facets in their meanings. They refer to physical data design, hardware selection, parameter settings, and hundreds of other things. Let’s talk about hardware selection for a minute. Imagine that you’re writing code for an Oracle Exadata deployment. Optimizing a function for use on Exadata is, in fact, different from optimizing it to run on another platform. The Smart Scan feature set actually changes the way you’ll want to write some queries. (N.B.: I hope everyone who reads this understands that running on Exadata does not mean that performance always just magically takes care of itself.)
To me, this example is an argument that you need to push your deployment platform information back earlier into your software development work, instead of springing the platform change upon the team at go-live time. No matter when you deploy on a given platform, you’ll have to spend some time adapting your application to that platform. It’s better to do this sooner rather than later, because the cost of changing your code is far lower earlier in the life cycle.
Now, even if you don’t have your production hardware available at development time, you can still write “top speed” code, even if you might have a hard time predicting how many seconds per execution your top speed will actually be. If you write your code so it doesn’t execute unnecessary code path (continually measuring your code path with profiling tools so that you don’t optimize irrelevant parts of your code, as Knuth prescribed in 1974), then you maximize your chances of having efficient code that runs fast and scales well. Your goal is efficient code that takes advantage of the distinctive benefits of your platform and avoids its distinctive pitfalls.
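For example, here is a minimal sketch of that measure-first discipline using Python’s built-in profiler. The functions are hypothetical; the point is that the profile, not intuition, tells you which code path is worth fixing:

```python
# A sketch of measuring code path before optimizing, with Python's
# built-in profiler. Both functions below are hypothetical stand-ins.
import cProfile

def necessary_work():
    return sum(i * i for i in range(200_000))

def unnecessary_work():
    # Wasted code path: recomputes the same values many times over.
    return [sum(range(n)) for n in range(2_000)]

def application_function():
    necessary_work()
    unnecessary_work()

# The report ranks calls by time, pointing you at the path worth fixing.
cProfile.run("application_function()", sort="cumulative")
```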
Dear Cary Millsap
I have questions regarding changing my career to Oracle DBA. Can I send my questions to you? If so, where can I send them? I need your email address.

Thanks,
Nagi Yahia
Nagi,
I'll be happy to help you if I can. I encourage you to post your questions at a forum like http://www.quora.com. That way, you can get help from more people than just me, and the discussions you inspire might help more people than just you.
—Cary