My web log for things I’m interested in, including design, software development, performance analysis, learning, and running a business.
Wednesday, January 7, 2009
New whitepaper: "Making Friends..."
I've just posted a new 18-page whitepaper called "For Developers: Making Friends with the Oracle Database" at our method-r.com papers and articles downloads page. Let me know what you think...
14 comments:
Thanks. I just browsed it. Initial thoughts: the reference to the ILO design is great for any development language. When half the response time is in the DB and half in the application, we need an application trace in addition to the ILO DB trace. Integrating both is key. Yet it is tough to extend ILO outside the DB and into other languages. Great discussion topic!
fauw, I agree absolutely. We're gathering momentum on the other half, too ;-).
Cary
I am interested in helping on the application side. I know the ILO design in detail. How can I help?
Mike
PS: My stupid fuaw Blogger handle; it must be the Google toolbar.
Thanks, Cary, for this paper.
I can spend several weeks using Oracle in my daily work without learning anything new. Reading this paper, I can say I learned at least one new thing: what “SQL*Net more data to client” actually means. Thanks for that.
As regards the log file sync wait and the XCTEND line in the raw trace, I totally agree with you about the performance degradation that commit abuse can cause. It not only degrades performance but is also one of the main causes of ORA-01555 (rollback segment snapshot too old), particularly when committing across fetches.
This is why, when the situation allows it, I commit only once. And when that is not possible (ORA-01652 appears), I make sure to do two things, as sketched below:
1. Commit outside the loop
2. Make my process restartable
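In JDBC terms, a minimal sketch of those two points could look like the following. The connection string and the ORDERS table (with its STATUS column) are hypothetical placeholders, not anything from Mohamed's comment; the only point being illustrated is a single commit after the loop plus a restartable selection criterion.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CommitOutsideLoop {
    public static void main(String[] args) throws SQLException {
        // Connection string, table, and column names are illustrative only.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {
            con.setAutoCommit(false);  // take control of the transaction boundary

            // Selecting only unprocessed rows makes the job restartable:
            // a rerun after a failure simply picks up whatever is left.
            String pick = "SELECT order_id FROM orders WHERE status = 'NEW'";
            String mark = "UPDATE orders SET status = 'PROCESSED' WHERE order_id = ?";

            try (Statement qry = con.createStatement();
                 ResultSet rs = qry.executeQuery(pick);
                 PreparedStatement upd = con.prepareStatement(mark)) {
                while (rs.next()) {
                    upd.setLong(1, rs.getLong(1));
                    upd.executeUpdate();   // no commit inside the loop
                }
                con.commit();              // one commit, after the loop
            } catch (SQLException e) {
                con.rollback();            // nothing is left half-done; safe to rerun
                throw e;
            }
        }
    }
}

Committing once after the loop avoids both the log file sync cost of frequent commits and the commit-across-fetch pattern that invites ORA-01555.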
Thanks, Mohamed.
Cary
Excellent paper, Cary - I hope you don't mind me using it in discussions with my Java-coding colleagues.
I fear it is a losing battle, though. What do you answer in response to "I didn't write that SQL; I just delivered my object to the persistence layer (e.g., Hibernate)"?
I took your example a step further. I would like developers to access data through an API, so as a test I wrote a simple PL/SQL procedure that returns a result set as a ref cursor.
My Java code just calls this procedure with a CallableStatement.
My cursor returned 667 rows.
Setting Statement.setFetchSize did not change anything, as expected, but setting ResultSet.setFetchSize did have an effect, though not quite the one I expected.
Here is what I got:
ResultSet rs = (ResultSet)stmt.getObject(1);
rs.setFetchSize(1024);
From the trace (I had to shorten the lines a bit):
EXEC #2:c=0,e=11832,p=0,cr=0,cu=0,mis=0,r=1
WAIT #2: nam='SQL*Net message from client'
WAIT #3: nam='SQL*Net message to client'
WAIT #3: nam='SQL*Net message from client'
WAIT #3: nam='SQL*Net message to client'
FETCH #3:c=0,e=180,p=0,cr=4,cu=0,mis=0,r=10
WAIT #3: nam='SQL*Net message from client'
WAIT #3: nam='SQL*Net message to client'
WAIT #3: nam='SQL*Net more data to client'
WAIT #3: nam='SQL*Net more data to client'
WAIT #3: nam='SQL*Net more data to client'
WAIT #3: nam='SQL*Net more data to client'
WAIT #3: nam='SQL*Net more data to client'
WAIT #3: nam='SQL*Net more data to client'
FETCH #3:c=10000,e=1792,p=0,cr=28,cu=0,mis=0,r=657
The first fetch takes 10 rows and the next fetch takes the remaining 657, but there seem to be more round trips in between?
Kurt
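For readers who want to try Kurt's experiment, his snippet above expands into a small end-to-end sketch like the one below. The procedure name (get_rows), the connection details, and the call that turns on extended SQL trace are assumptions added for illustration; only the CallableStatement, ref cursor, and ResultSet.setFetchSize mechanics come from his comment.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import oracle.jdbc.OracleTypes;  // Oracle JDBC driver on the classpath

public class RefCursorFetch {
    public static void main(String[] args) throws SQLException {
        // Connection string is a placeholder.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {

            // Turn on extended SQL trace for this session so the FETCH calls
            // and SQL*Net wait events appear in the trace file.
            try (Statement s = con.createStatement()) {
                s.execute("BEGIN dbms_monitor.session_trace_enable("
                        + "waits => TRUE, binds => FALSE); END;");
            }

            // Hypothetical API procedure:
            //   PROCEDURE get_rows(p_result OUT SYS_REFCURSOR)
            try (CallableStatement cs = con.prepareCall("{ call get_rows(?) }")) {
                cs.registerOutParameter(1, OracleTypes.CURSOR);
                cs.execute();

                try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                    // Setting the fetch size on the CallableStatement changed
                    // nothing; setting it on the ResultSet that wraps the ref
                    // cursor is what produced Kurt's result: a first fetch of
                    // 10 rows, then one large fetch for the remaining 657.
                    rs.setFetchSize(1024);
                    int rows = 0;
                    while (rs.next()) {
                        rows++;
                    }
                    System.out.println("Fetched " + rows + " rows");
                }
            }
        }
    }
}

The wait lines and row counts in Kurt's excerpt fit this pattern: the rows that don't fit in the first network frame arrive through the "SQL*Net more data to client" calls Cary describes in his reply.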
Kurt,
I like where you went. If I understand you correctly, the result is exactly what happened in the paper. You can't fit your 657 rows in one network frame, so it took a sequence of "SQL*Net more data to client" calls to transfer the remainder of the data that wouldn't fit into the first packet.
In reference to your persistence layer question, an organization can't afford to have any gaps in it where a performance problem becomes "structurally untouchable." An application owner has to be able to exert control over whatever part of the application is causing speed or scalability problems to its users. If a persistence layer creates an "unsolvable" performance problem, then I think you have to consider that a tangible disadvantage to be weighed against all the advantages afforded by having the abstraction.
That's a fancy way of saying if it doesn't meet all your requirements (including your performance ones), then maybe it's not the right tool. But I don't know enough about Hibernate to tell you that the kind of problem you're talking about is "unsolvable." There may well be a perfectly elegant solution.
Cary
Hi Cary, I just read your paper, and I must say I think it is fantastic. Developers? This should be required reading for DBAs as well.
Thanks,
Jerry
Thank you, Jerry.
Excellent paper (with Java examples, even). I'm impressed!
Cary,
Excellent paper. I appreciate the way you explain how to read the trace files and provide the needed tools. I can think of many developers I work with who would benefit from your paper.
Thanks again.
Ittichai
Ittichai,
Thank you; and thank you for helping spread the word about it at http://oraexplorer.com/.
Cary
The paper is OK. The only thing I'm missing is batch execution (addBatch and executeBatch; see also my blog entry about this from a while ago at http://technology.amis.nl/blog/2652/sql-in-javal-to-batch-or-not-to-batch-with-oracle).
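For anyone who hasn't used it, the JDBC batching Emiel refers to looks roughly like this; the connection string, table, and columns are placeholders. The idea is that addBatch queues bind values on the client and executeBatch sends the accumulated set in one call instead of one execute per row.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {
    public static void main(String[] args) throws SQLException {
        // Connection string, table, and columns are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {
            con.setAutoCommit(false);

            String sql = "INSERT INTO emp_copy (empno, ename) VALUES (?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (int i = 1; i <= 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "NAME_" + i);
                    ps.addBatch();             // queue this row on the client
                    if (i % 100 == 0) {
                        ps.executeBatch();     // ship 100 rows per database call
                    }
                }
                ps.executeBatch();             // flush any remaining rows
                con.commit();
            }
        }
    }
}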
Emiel,
Thank you. It's a big field for lots more research and writing. I appreciate your reference.
—Cary