Sunday, December 4, 2011

Gwen Shapira on SSD

If you haven’t seen Gwen Shapira’s article about de-confusing SSD, I recommend that you read it soon.

One statement stood out as an idea on which I wanted to comment:
If you don’t see a significant number of physical reads and sequential read wait events in your AWR report, you won’t notice much performance improvement from using SSD.
I wanted to remind you that you can do better. Even if you do notice a significant number of physical reads and sequential read wait events in your AWR report, it’s still not certain that SSD will improve the performance of the task whose performance you’re hoping to improve. You don’t have to guess about the effect that SSD will have upon any business task you care about. In 2009, I wrote a blog post that explains how.
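The idea in that post boils down to simple arithmetic: trace the task you care about, measure how much of its response time goes to physical I/O calls, and you have a bound on what SSD could possibly do for that task. Here is a minimal Python sketch of that bounding arithmetic (the 20× device speedup is a hypothetical value, not a measurement):

    def best_case_ssd_benefit(total_response, disk_io_time, device_speedup=20.0):
        """Bound the improvement SSD can give one task.

        total_response: the task's response time (measured, e.g., from trace data)
        disk_io_time:   the portion of that time spent in physical I/O (measured)
        device_speedup: hypothetical factor by which SSD accelerates each I/O
        """
        time_saved = disk_io_time * (1 - 1 / device_speedup)
        return time_saved / total_response  # proportion of response time eliminated

    # A task that spends 2s of its 10s response time on physical reads can improve
    # by at most 20%, no matter how fast the new device is.
    print(best_case_ssd_benefit(10.0, 2.0))  # 0.19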

Friday, November 18, 2011

I Can Help You Trace It

The first product I ever created after leaving Oracle Corporation in 1999 was a 3-day course about optimizing Oracle performance. The experiences of teaching this course from 2000 through 2003 (heavily revising the material each time I taught it) added up to the knowledge that Jeff Holt and I needed to write Optimizing Oracle Performance (2003).

Between 2000 and 2006, I spent many weeks on the road teaching this 3-day course. I stopped teaching it in 2006. An opportunity to take or teach a course ought to be a joyous experience, and this one had become too much of a grind. I didn’t figure out how to fix it until this year. How I fixed it is the story I’d like to tell you.

The Problem

The problem was simply inefficiency. The inefficiency began with the structure of the course: the 3-day lecture marathon. Realize, 3 days × 6 hours = 18 hours of sitting in a chair, listening attentively to a single voice (my voice), is the equivalent of a 6-week university term of a 3-credit-hour course, taught straight through in three days. No hour-plus homework assignment after each hour of lecture to reinforce the lessons; just a full semester’s worth of listening to one voice, straight through, for three days. What retention rate would you expect from a university course compressed into just 3 days?

So, I optimized. I have created a new course that lasts one day (not even an exhausting full day at that). But how can a student possibly learn as much in 1 day as we used to teach in 3 days? Isn’t a 1-day event bound to be a significantly reduced-value experience?

On the contrary, I believe our students benefit even more now than they used to. Here are the big differences, so you can see why.

The Time Savings

In the 3-day course, I would spend half a day explaining why people should abandon their old system-wide-ratio-based ways of managing system performance. In the new 1-day course, I spend less than an hour explaining the Method R approach to thinking about performance. The point of the new course is not to convince people to abandon anything they’re already doing; it’s to show students the tremendous additional opportunities that are available to them if they’ll just look at what Oracle trace files have to offer. Time savings: 2 hours.

In the 3-day course, I would spend a full day explaining how to interpret trace data. By hand. There were a few little lab exercises, about an hour’s worth. Students would enter dozens of numbers from trace files into laptops or pocket calculators and write results on worksheets. In the new 1-day course, the software tools that a student needs to interpret files of any size—or even directories full of files—are included in the price of the course. Time savings: 5 hours.

In the 3-day course, I would spend half a day explaining how to collect trace data. In the new 1-day course, the software tools that a student needs to get started collecting trace files are included in the price of the course. For software architectures that require more work than our software can do for you, there’s detailed instruction in the course book. Time savings: 3 hours.

In the 3-day course, I would spend half a day working through about five example cases using a software tool to which students would have access for 30 days after they had gone home. In the new 1-day course, I spend one hour working through about eight example cases using software tools that every student will take home and keep forever. I can spend less time per case yet teach more because the cases are thoroughly documented in the course book. So, in class, we focus on the high-level decision making instead of the gnarly technical details you’ll want to look up later anyway. Time savings: 3 hours.

...That’s 13 classroom hours we’ve eliminated from the old 3-day experience. I believe that in these 13 hours, I was teaching material that students weren’t retaining to begin with.

The Book

The next big difference: the book.

In the old 3-day course, I distributed two books: (1) the “Course Notebook,” which was a black-and-white listing of the course PowerPoint slides, and (2) a copy of Optimizing Oracle Performance (O’Reilly 2003). The O’Reilly book was great, because it contained a lot of detail that you would want to look up after the course. But of course it doesn’t contain any new knowledge we’ve learned since 2003. The Course Notebook, in my opinion, was never worth much to begin with. (In my opinion, no PowerPoint slide printout is worth much to begin with.)

The Mastering Oracle Trace Data (MOTD) book we give each student in my new 1-day course is a full-color, perfect-bound book that explains the course material and far more in deep detail. It is full-color for an important reason. It’s not gratuitous or decorative; it’s because I’ve been studying Edward Tufte. I use color throughout the book to communicate detailed, high-resolution information faster to your brain.

Color in the book helps to reduce student workload and deliver value long after a student has left the classroom. In this class, there is no collection of slide printouts like you’ve archived after every Oracle class you’ve been to since the 1980s. The MOTD book is way better than any other material I’ve ever distributed in my career. I’ve heard students tell their friends that you have to see it to believe it.
“A paper record tells your audience that you are serious, responsible, exact, credible. For deep analysis of evidence and reasoning about complex matters, permanent high-resolution displays [that is, paper] are an excellent start.” —Edward Tufte

The Software

So, where does a student recoup all the time we used to spend going through trace files, and studying how to collect trace data on half a dozen different software architectures? In the thousands of man-hours we’ve invested into the software that we give you when you come to the course. Instead of explaining every little detail about quirks in Oracle trace data that change between Oracle versions 10.1, 10.2, and 11.2, or between 11.2.0.2 and 11.2.0.4, the software does the work for you. Instead of having to explain all the detail work, we have time to explain how to use the results of our software to make decisions about your data.

What’s the catch? Of course, we hope you’ll love our software and want to buy it. The software we give you is completely full-featured and yours to keep forever, but the license limits you to using it only with one login id, and it doesn’t include patches and upgrades, which we release a few times each year. We hope you’ll love our software so much that you’ll want to buy a license that lets you use it on any of your systems and that includes the right to upgrade as we fix bugs and add features. We hope you’ll love it so much that you encourage your colleagues to buy it.

But there’s really no catch. You get software and a course (and a book and a shirt) for less than the daily rate that we used to charge for just a course.

A Shirt?

MOTD London 2011-09-08: “I can help you trace it.”
Yes, a shirt. Each student receives a Method R T-shirt that says, “I can help you trace it.” We don’t give these things away to anyone except for students in my MOTD course. So if you see one, the person wearing it can, in actual fact, Help You Trace It.

The Net Result

The net result of all this optimization is benefits on several fronts:
  • The course costs a lot less than it used to. The fee is presently only about 25% of the 3-day course’s price, and the whole experience requires less than a third of the time away from work that the original course did.
  • In the new course, our students don’t have to work so hard to make productive use of the course material. The book and the software take so much of the pressure off. We do talk about what the fields in raw trace data mean—I think it’s necessary to know that in order to use the data properly, and have productive debates with your sys/SAN/net/etc. administration colleagues. But we don’t spend your time doing exercises to untangle nested (recursive) calls by hand. The software you take home does that for you. That’s why it is so much easier for a student to put this course to work right away.
  • Since the course duration is only one day, I can visit far more cities and meet far more students each year. That’s good for students who want to participate, and it’s great for me, because I get to meet more people.

Plans

The only thing missing from our Mastering Oracle Trace Data course right now is you. I have taught the event now in Southlake, Texas (our home town), in Copenhagen, and in London. It’s field-tested and ready to roll. We have several cities on my schedule right now. I’ll be teaching the course in Birmingham UK on the day after UKOUG wraps up, December 8. I’ll be doing Orlando and Tampa in mid-December. I’ll teach two courses this coming January in Manhattan and Long Island. There’s Billund (Legoland) DK in April. We have more plans in the works for Seattle, Portland, Dallas, and Cleveland, and we’re looking for more opportunities.

Share the word by linking the official MOTD sticker to http://method-r.com/.
My wish is for you to help me book more cities in North America and Europe (I hope to expand beyond that soon). If you are part of a company or a user group with colleagues who would be interested in attending the course, I would love to hear from you. Registering en masse saves you money. The magic number for discounting is 10 students on a single registration from one company or user group.

I can help you trace it.

Friday, June 17, 2011

Using Agile Practices to Create an Agile Presentation

What’s the best way to make a presentation on Agile practices? Practice Agile practices.

You could write a presentation “big bang” style, delivering version 1.0 in front of your big audience of 200+ people at Kscope 2011 before anybody has seen it. Of course, if you do it that way, you build a lot of risk into your product. But what else can you do?

You can execute the Agile practices of releasing early and often, allowing the reception of your product to guide its design. Whenever you find an aspect of your product that doesn’t get the enthusiastic reception you had hoped for, you fix it for the next release.

That’s one of the reasons that my release schedule for “My Case for Agile Methods” includes a little online webinar hosted by Red Gate Software next week. My release schedule is actually a lot more complicated than just one little pre-ODTUG webinar:

2011-04-15  Show key conceptual graphics to son (age 13)
2011-04-29  Review #1 of paper with employee #1
2011-05-18  Review #2 of paper with customer
2011-05-14  Review #3 of paper with employee #1
2011-05-18  Review #4 of paper with employee #2
2011-05-26  Review #5 of paper with employee #3
2011-06-01  Submit paper to ODTUG web site
2011-06-02  Review #6 of paper with employee #1
2011-06-06  Review #7 of paper with employee #3
2011-06-10  Submit revised paper to ODTUG web site
2011-06-13  Present “My Case for Agile Methods” to twelve people in an on-site customer meeting
2011-06-22  Present “My Case for Agile Methods” in an online webinar hosted by Red Gate Software
2011-06-27  Present “My Case for Agile Methods” at ODTUG Kscope 2011 in Long Beach, California

(By the way, the vast majority of the work here is done in Pages, not Keynote. I think using a word processor, not an operating system for slide projectors.)

Two Agile practices are key to everything I’ve ever done well: incremental design and rapid iteration. Release early, release often, and incorporate what you learn from real world use back into the product. The magic comes from learning how to choose wisely in two dimensions:
  1. Which feature do you include next?
  2. To whom do you release next?
The key is to show your work to other people. Yes, there’s tremendous value in practicing a presentation, but practicing without an audience merely reinforces; it doesn’t inform. What you need while you design something is information—specifically, the kind of information called feedback. Some of the feedback I receive generates some pretty energetic arguing. I need that to fortify my understanding of my own arguments so that I’ll be more likely to survive a good Q&A session on stage.

To lots of people who have seen teams run projects into the ground using what they call “Agile,” the word “Agile” has become a synonym for sloppy, irresponsible work habits. When you hear me talk about Agile, you’ll hear about practices that are highly disciplined and that actually require a lot of focus, dedication, commitment, practice, and plain old hard work to execute.

Agile, to me, is about injecting discipline into a process that is inevitably rife with unpredictable change.

Friday, June 3, 2011

Why Kscope?

Early this year, my friend Mike Riley from ODTUG asked me to write a little essay in response to the question, “Why Kscope?” that he could post on the ODTUG blog. He agreed that cross-posting would help the group reach more people, so I’ve reproduced my response to that question here. I’ll hope to see you at Kscope11 in Long Beach June 26–30. If you develop applications for Oracle systems, you need to be there.

MR: Why Kscope?

CM: Most people in the Oracle world who know my name probably think of me as a database administrator. In my heart, I am a software designer and developer. Before my career with Oracle, I worked in the semiconductor industry as a language designer. I wrote compilers for a living. Designing and writing software has always been my professional true love. I’ve never strayed too far away from it; I’ve always found a reason to write software, no matter what my job has been. [Ed: Examples include the Oracle*APS suite and a compiler design project he did for Great West Life in the 1990s, the queueing theory models he worked on in the late 1990s, the Method R Profiler software (Cary wrote all the XSLT code), and finally today, he spends about half of his time designing and writing the MR Tools suite.]

My career as an Oracle performance specialist is really a natural extension of my software development background. It is still really weird to me that in the Oracle market, performance is regarded as a job done primarily by operations people instead of by development people. Developers control at least 90% of the leverage over how fast an application will be able to run. I think that performance became a DBA responsibility in the formative years of our Oracle world because so many early Oracle projects had DBA teams but no professional development teams.

Most of those big projects were people implementing big off-the-shelf applications like Oracle Financial and Manufacturing Applications (which grew into the Oracle E-Business Suite). The only developers that most of those implementation teams had were what I would call nonprofessional developers. Now, I don’t mean people who were in any way unprofessional. I mean they were predominantly businesspeople who had never been educated as software developers, but who’d been told that of course anybody could write computer programs in this new “fourth-generation language” called SQL.

Just about any time you implement a vendor’s highly customizable new application with 20,000+ database objects underneath it, you’re going to run into performance problems. Someone had to attend to those problems, and the DBAs and sysadmins were the only technical people anywhere near the project who could do it. Those DBAs and Oracle sysadmins were also the people who organized the early Oracle conferences, and I think this is where the topic of “performance tuning” became embedded into the DBA track.

The resulting problem that I still see today is that the topic became dominated by “tips and techniques”—lists of tricks that operational people could try to maybe make their systems go a little bit faster. The word “tuning” says it all. I almost never use the word except facetiously, because it’s a cheap imitation of what systems really need, which is performance optimization, which is what designers and developers of software are supposed to do. Even the evolution of Oracle tools for the performance analyst mirrors this post-production tips-and-techniques “tuning” mentality. That’s why most performance management tools you see today are predominantly oriented toward viewing performance from a system resource perspective (the DBA’s perspective), rather than the code path perspective (the developer’s perspective).

The whole key to performance is the application design and development team, especially when you realize that the performance of an application is not just its code path speed, but its overall interaction with the person using it. So many of the performance problems that I’ve found are caused by applications that are just stupid in how they’re designed to interact with me. For example, if you’ve seen my “Messed-up apps” presentation before, you might remember the self-service bus ticket kiosk that made me wait for over a minute while the application tallied the more-than-2,000 different bus trips for which I might want to buy a ticket. That’s an app with a broken specification. There’s nothing that a run-time operations team can do to make that application any fun to use (short of sending it back for redesign).

My goal as a software designer is not just to make software that runs quickly. My goal is also to make applications that are delightful to use. It’s the difference between an application that you use because you must and one that feels like it’s a necessary part of who you are. Making software like that is the kind of thing that a designer learns from studying Don Norman, Edward Tufte, Christopher Alexander, and Jonathan Ive. It’s a level of performance that just isn’t on the menu for operational run-time support staff to even think about, because it’s beyond their control.

So: why Kscope? The ODTUG conferences are the best places I can go in the Oracle market to be with people who think and talk about these things. …Or, for that matter, who understand that these ideas even exist and deserve to be studied. Kscope is just the right place for me to be.

Monday, February 14, 2011

It’s Conference Season!

My favorite mode of life is being busy doing something that I enjoy and that I know, beyond a doubt, is the Right Thing to be doing. Any hour I get to spend in that zone is a precious gift.

I’ve been in that zone nearly continuously for the past three weeks. I’ve been doing two of my favorite things: lots of consulting work (helping, earning, and learning), and lots of software development work (which helps me help, earn, and learn even faster).

I’m looking forward to the next four weeks, too, because another Right Thing that I love to do is talk with people about software performance, and three of my favorite events where I can do that are coming right up:
  • RMOUG Training Days, Denver CO — I leave tomorrow. I’m looking forward to reuniting with lots of good friends. My stage time will be Wednesday, February 16th, when I’ll talk about material from my new “Mastering Performance with Extended SQL Trace” paper. 
  • NoCOUG Winter Conference, Pleasanton CA — I’ll be in the east Bay Area on Thursday, February 24th, presenting the keynote address, in which I’ll discuss whether Exadata means never having to “tune” again, and then spending two hours helping people think clearly about performance.
  • Hotsos Symposium, Irving TX — I’ll present “Thinking Clearly about Performance” on Monday, March 7th. I love the agenda at this event. It’s a high quality lineup that is dedicated purely to Oracle software performance. This is one of the very few conferences where I can enjoy sitting and just watching for whole days at a time. If you are interested in Oracle system performance, do not miss this. 
Happy Valentine’s Day. I shall hope to see you soon.

Friday, January 21, 2011

Describing Performance Improvements (Beware of Ratios)

Recently, I received into my Spam folder an ad claiming that a product could “...improve performance 1000%.” Claims in that format have bugged me for a long time, at least as far back as the 1990s, when some of the most popular Oracle “tips & techniques” books of the era used that format a lot to state claims.

Beware of claims worded like that.

Whenever I see “...improve performance 1000%,” I have to do extra work to decode what the author has encoded in his tidy numerical package with a percent-sign bow. The two performance improvement formulas that make sense to me are these:
  1. Improvement = (b − a)/b, where b is the response time of the task before repair, and a is the response time of the task after repair. This formula expresses the proportion (or percentage, if you multiply by 100%) of the original response time that you have eliminated. It can’t be bigger than 1 (or 100%) without invoking reverse time travel.
  2. Improvement = b/a, where b and a are defined exactly as above. This formula expresses how many times faster the after response time is than the before one.
Since 1000% is bigger than 100%, it can’t have been calculated using formula #1. I assume, then, that when someone says “...improve performance 1000%,” he means that b/a = 10, which, expressed as a percentage, is 1000%. What I really want to know, though, is what were b and a? Were they 10 and 1? 1 and .1? 6 and .4? (...In which case, I would have to search for a new formula #3.) Why won’t you tell me?
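In case it helps to see the two formulas side by side, here they are as a quick Python sketch, with hypothetical before and after values:

    def improvement(b, a):
        """Formula 1: proportion of the original response time eliminated."""
        return (b - a) / b

    def speedup(b, a):
        """Formula 2: how many times faster the task is after repair."""
        return b / a

    b, a = 10.0, 1.0          # hypothetical response times, in seconds
    print(improvement(b, a))  # 0.9  -> a 90% reduction in response time
    print(speedup(b, a))      # 10.0 -> "1000%" in the ad's vocabulary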

Any time you see a ‘%’ character, beware: you’re looking at a ratio. The principal benefit of ratios is also their biggest flaw. A ratio conceals its denominator. That, of course, is exactly what ratios are meant to do—it’s called normalization—but it’s not always good to normalize. Here’s an example. Imagine two SQL queries A and B that return the exact same result set. What’s better: query A, with a 90% hit ratio on the database buffer cache? or query B, with a 99% hit ratio?

Query    Cache hit ratio
A        90%
B        99%

As tempting as it might be to choose the query with the higher cache hit ratio, the correct answer is...
There’s not enough information given in the problem to answer. It could be either A or B, depending on information that has not yet been revealed.
Here’s why. Consider the two distinct situations listed below. Each situation matches the problem statement. For situation 1, the answer is: query B is better. But for situation 2, the answer is: query A is better, because it does far less overall work. Without knowing more about the situation than just the ratio, you can’t answer the question.

Situation 1

Query    Cache lookups    Cache hits    Cache hit ratio
A        100              90            90%
B        100              99            99%

Situation 2

Query    Cache lookups    Cache hits    Cache hit ratio
A        10               9             90%
B        100              99            99%
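Here is Situation 2 from the tables above, worked in a few lines of Python; the point is that the hit ratio alone can never reveal the lookup counts:

    situation_2 = {
        "A": {"lookups": 10,  "hits": 9},
        "B": {"lookups": 100, "hits": 99},
    }
    for name, q in situation_2.items():
        ratio = q["hits"] / q["lookups"]
        print(f"query {name}: hit ratio {ratio:.0%}, total work {q['lookups']} lookups")
    # query A: hit ratio 90%, total work 10 lookups   <- far less work
    # query B: hit ratio 99%, total work 100 lookups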

Because a ratio hides its denominator, it’s insufficient for explaining your performance results to people (unless your aim is intentionally to hide information, which I’ll suggest is not a sustainable success strategy). It is still useful to show a normalized measure of your result, and a ratio is good for that. I didn’t say you shouldn’t use them. I just said they’re insufficient. You need something more.

The best way to think clearly about performance improvements is with the ratio as a parenthetical additional interesting bit of information, as in:
  • I improved response time of T from 10s to .1s (99% reduction).
  • I improved throughput of T from 42t/s to 420t/s (10-fold increase).
There are three critical pieces of information you need to include here: the before measurement (b), the after measurement (a), and the name of the task (here, T) that you made faster. I’ve talked about b and a before, but I’ve slipped this T thing in on you all of a sudden, haven’t I!

Even authors who give you b and a have a nasty habit of leaving off the T, which is far worse even than leaving off the before and after numbers, because it implies that using their magic has improved the performance of every task on the system by exactly the same proportion (either p% or n-fold), which is almost never true. That is because it’s rare for any two tasks on a given system to have “similar” response time profiles (defining similar in the proportional sense). For example, imagine the following two quite dissimilar profiles:

Task A

Response time    Resource
100%             Total
90%              CPU
10%              Disk I/O

Task B

Response time    Resource
100%             Total
90%              Disk I/O
10%              CPU

No single component upgrade can have equal performance improvement effects upon both these tasks. Making CPU processing 2× faster will speed up task A by 45% and task B by 5%. Likewise, making Disk I/O processing 10× faster will speed up task A by 9% and task B by 81%.
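The arithmetic behind those figures is simple: a component that consumes fraction f of response time and becomes k times faster saves f × (1 − 1/k) of the response time. A quick Python check of the claims above:

    def time_saved(fraction, k):
        """Share of response time saved when a component consuming
        `fraction` of response time becomes k times faster."""
        return fraction * (1 - 1 / k)

    print(time_saved(0.90, 2))   # 0.45 -> CPU 2x faster speeds task A up 45%
    print(time_saved(0.10, 2))   # 0.05 -> ...and task B up 5%
    print(time_saved(0.10, 10))  # 0.09 -> disk 10x faster speeds task A up 9%
    print(time_saved(0.90, 10))  # 0.81 -> ...and task B up 81%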

For a vendor to claim any noticeable, homogeneous improvement across the board on any computer system containing tasks A and B would be an outright lie.

Friday, January 14, 2011

An Axiomatic Approach to Algebra and Other Aspects of Life

Not many days pass that I don’t think a time or two about James R. Harkey. Mr. Harkey was my high school mathematics teacher. He taught me algebra, geometry, analytic geometry, trigonometry, and calculus. What I learned from Mr. Harkey influences—to this day—how I write, how I teach, how I plead a case, how I troubleshoot, .... These are the skills I’ve used to earn everything I own.

Prior to Mr. Harkey’s algebra class, algebra for me was just a morass of tricks to memorize: “Take the constant to the other side...”; “Cancel the common factors...”; “Flip the fraction and multiply...” I could practice for a while and then solve problems just like the ones I had been practicing, by applying memorized transformations to superficial patterns that I recognized, but I didn’t understand what I had been taught to do. Without continual practice, the rules I had memorized would evaporate, and then once more I’d be able to solve only those problems for which I could intuit the answer: “7x + 6 = 20” would have been easy, but “7/x – 6 = 20” would have stumped me. This made, for example, studying for final exams quite difficult.

On the first day of Mr. Harkey’s class, he gave us his rules. First, his strict rules of conduct in the classroom lived up to his quite sinister reputation, which was important. Our studies began with a single 8.5" × 14" sheet of paper that he apparently asked us to label “Properties A” (because that’s what I wrote in the upper right-hand corner; and yes, I still have it). He told us that we could consult this sheet of paper on every homework assignment and every exam he’d give. And here’s how we were to use it: every problem would be executed one step at a time; every step would be written down; and beside every step we would write the name of the rule from Properties A that we invoked to perform that step.

You can still hear us now: Holy cow, that’s going to be a lot of extra work.

Well, that’s how it was going to be. Here’s what each homework and test problem had to look like:
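Using the 7x + 6 = 20 example from earlier, a problem worked in that style would look something like this (the rule names below are the standard textbook axioms; they may not match Mr. Harkey’s Properties A word for word):

    7x + 6 = 20                     Given
    (7x + 6) − 6 = 20 − 6           Addition property of equality
    7x + (6 − 6) = 14               Associative property of addition
    7x + 0 = 14                     Additive inverse property
    7x = 14                         Additive identity property
    (1/7)(7x) = (1/7)(14)           Multiplication property of equality
    ((1/7)·7)x = 2                  Associative property of multiplication
    1·x = 2                         Multiplicative inverse property
    x = 2                           Multiplicative identity property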


The first few days of class, we spent time reviewing every single item on Properties A. Mr. Harkey made sure we all agreed that each axiom and property was true before we moved on to the real work. He was filling our toolbox.

And then we worked problem after problem after problem.

Throughout the year, we did get to shift gears a few times. Not every ax + b = c problem required fourteen steps all year long. After some sequence of accomplishments (I don’t remember what it was—maybe some set number of ‘A’ grades on homework?), I remember being allowed to write the number of the rule instead of the whole name. (When did you first learn about foreign keys? ☺) Some accomplishments after that, we’d be allowed to combine steps like 3, 4, and 5 into one. But we had to demonstrate a pattern of consistent mastery to earn a privilege like that.

Mr. Harkey taught algebra as most teachers teach geometry or predicate logic. Every problem was a proof, documented one logical step at a time. In Mr. Harkey’s algebra class, your “answer” to a homework problem or test question wasn’t the number that x equals, it was the whole proof of how you arrived at the value of x in your answer. Mr. Harkey wasn’t interested in grading your answers. He was going to grade how you got your answers.

The result? After a whole semester of this, I understood algebra, and I mean thoroughly. You couldn’t make a good grade in Mr. Harkey’s algebra class without creating an intimate comprehension of why algebra works the way it does. Learning that way supplies you for a whole lifetime: I still understand it. I can make dimensioned drawings of the things I’m going to build in my shop. I can calculate the tax implications of my business decisions. I can predict the response time behavior of computer software. I can even help my children with their algebra. Nothing about algebra scares me, because I still understand all the rules.

When I help my boys with their homework, I make them use Mr. Harkey’s axiomatic approach with my own Properties A that I made for them. (I rearranged Mr. Harkey’s rules to better illuminate the symmetries among them. If Mr. Harkey had been handy with the laptop computer, which didn’t exist when I was in school, I imagine he’d have done the same thing.)

Invariably, when one of my boys misses a math problem, it’s for the same stupid reason that I make mistakes in my shop or at work. It’s because he’s tried to do steps in his head instead of writing them all down, and of course he’s accidentally integrated an assumption into his work that’s not true. When you don’t have a neat and orderly audit trail to debug, the only way you can fix your work is to start over, which takes more time (which itself increases frustration levels and degrades learning) and which bypasses perhaps the most important technical skill in all of Life today: the ability to troubleshoot.
Theory: Redoing an n-step math problem instead of learning how to propagate a correction to an error made in step n − k through step n is how we get to a society in which our support analysts know only two solutions to any problem: (a) reboot, and (b) reinstall.
It’s difficult to teach people the value of mastering the basics. It’s difficult enough with children, and it’s even worse with adults, but great teachers and great coaches understand how important it is. I’m grateful to have met my share, and I love meeting new ones. Actually, I believe my 11-year-old son has a baseball practice with one tomorrow. We’ll have to check his blog in about 30 years.

Thursday, January 13, 2011

New paper "Mastering Performance with Extended SQL Trace"

Happy New Year.

It’s been a busy few weeks. I finally have something tangible to show for it: “Mastering Performance with Extended SQL Trace” is the new paper I’ve written for this year’s RMOUG conference. Think of it as a 15-page update to chapter 5 of Optimizing Oracle Performance.

There’s lots of new detail in there. Some highlights:
  • How to enable and disable traces, even in uncooperative applications.
  • How to instrument your application so that tracing the right code path during production operation of your application becomes dead simple.
  • How to make that instrumentation highly scalable (think 100,000+ tps).
  • How timestamps since 10.2 allow you to know your recursive call relationships without guessing.
  • How to create response time profiles for calls and groups of calls, with examples (a toy illustration follows this list).
  • Why you don’t want to be on Oracle 11g prior to 11.2.0.2.0.
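To give a flavor of what a response time profile from raw trace data involves, here’s a toy sketch in Python (emphatically not our software) that sums the elapsed time reported on a trace file’s WAIT lines by event name. A real profile must also account for CPU time reported on PARSE, EXEC, and FETCH lines and for the time between calls; the trace file name below is hypothetical.

    import re
    from collections import defaultdict

    # Raw trace WAIT lines look like:
    #   WAIT #3: nam='db file sequential read' ela= 11931 file#=4 ... tim=...
    # where ela= is elapsed time in microseconds (Oracle 9 and later).
    WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela= (\d+)")

    def wait_summary(path):
        """Sum elapsed time by wait event name for one trace file."""
        totals = defaultdict(int)
        with open(path) as trace:
            for line in trace:
                m = WAIT_RE.match(line)
                if m:
                    totals[m.group(1)] += int(m.group(2))
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    for event, usec in wait_summary("ora_12345.trc"):  # hypothetical file name
        print(f"{usec / 1e6:10.3f}s  {event}")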
I hope you’ll be able to make productive use of it.