After the UKOUG Tech 2011 in Birmingham, I really wanted to go to another UKOUG Tech conference. And this year I am back, now in Liverpool. On Saturday, after landing and a slow bus ride to the city centre, I had time to visit The Beatles Story and got to see how Beatlemania and the British Invasion of the USA emerged. In a shaking and twisting mood (on the inside) I went back to the hotel. I had a really good dinner at the Salt House, a tapas restaurant. I’m not going to blog about every place where I eat, but this place catered to my highly sophisticated tastes 🙂
When I woke up on “Super Sunday” (that’s how the UKOUG named it) and looked out of the window, I saw two Santas. Then two more Santas, and a mother Santa and a stroller and three little Santas running around her. After breakfast, I saw even more Santas. So I followed the Santas and found there was some kind of running event where everyone was dressed as Santa Claus (some in blue). It was the Liverpool Santa Dash.
After that and some hail and rain here and there, it was time for the opening of UKOUG Tech 2014. It was sometimes hard to choose, but these are the sessions I attended:
Profiling the Database Writer and Log Writer – Frits Hoogland
Frits’ presentation was very, very technical and he managed to lose me in a new record time. Possibly that says something about me. But I usually remember his punchlines. Like that in 12c the log writer now has slave processes (if you really must, you can disable that with _use_single_log_writer. If you must.) And that with non-asynchronous I/O, there is a bug that makes parallel wait events miss I/O time.
And very interesting for me: if you want to change the threshold at which the log writer trace shows log writer waits, you can use _side_channel_batch_timeout=<amount of ms>.
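From my notes, setting it would look roughly like this. This is a sketch, not something I have tried yet: the threshold value of 750 ms is a made-up example, and _side_channel_batch_timeout is an undocumented underscore parameter, so only touch it on advice of Oracle Support.

```sql
-- Example only: raise the log writer trace threshold to 750 ms.
-- "_side_channel_batch_timeout" is an undocumented parameter;
-- change it only on advice of Oracle Support.
alter system set "_side_channel_batch_timeout" = 750 scope = both;
```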
Calculating Selectivity – Jonathan Lewis
Jonathan Lewis hardly needs an introduction. He has done many sessions about the Oracle optimizer and written a number of books.
At quite a brisk pace he showed how to calculate selectivity and how Oracle calculates selectivity. And as quickly became apparent, Oracle sometimes just guesses when creating a plan. There were many examples in his presentation. One I liked was a query on 1 million rows with a column holding just 3 values: 1, 0 and -1. He showed how the optimizer guessed that about 10000 records would be returned for each value. Somehow the other 970000 rows were lost in the equation.
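The arithmetic of the mismatch is easy to check. This little sketch only verifies the numbers as I noted them down, it says nothing about why the optimizer arrived at that guess:

```python
# Quick check of the numbers in Jonathan's example: 1 million rows,
# three distinct values, and an optimizer guess of ~10000 rows per value.
num_rows = 1_000_000
distinct_values = 3
guessed_rows_per_value = 10_000

accounted_for = distinct_values * guessed_rows_per_value
print(accounted_for)             # 30000 rows covered by the guesses
print(num_rows - accounted_for)  # 970000 rows "lost in the equation"
```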
There were many other interesting examples where Oracle got it wrong, but I have to review Jonathan’s presentation first before I can write anything sensible about it.
Advanced Diagnostics Revisited – Julian Dyke
This is one of those sessions that I will remember, because I learned so much useful stuff in so little time. Julian’s session was full of interesting diagnostic tips. Even though his slides were quite clear, I made lots of notes. Because I want to try this stuff as soon as possible.
Like that you can set a trace file id for somebody else’s session with oradebug. I haven’t downloaded his presentation yet, but during the session I jotted down ‘oradebug settracefileid <traceidentifier>’. Names of trace files of your running traces can be found in v$diag_info.
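Reconstructed from my notes, and hedged accordingly: attaching to the other session first with setospid is my assumption about the workflow, the PID 12345 and the identifier my_trace are made-up examples.

```sql
-- Attach oradebug to another session's server process (example PID),
-- then tag that session's trace file with an identifier of your choice.
oradebug setospid 12345
oradebug settracefileid my_trace

-- Find the trace file of your own session:
select value
from   v$diag_info
where  name = 'Default Trace File';
```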
v$sql_hint not only shows you what hints are possible, but the inverse column shows you the inverse of that hint (if it exists).
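A quick way to list only the hints that actually have an inverse could look like this (a sketch, assuming the NAME and INVERSE columns of v$sql_hint):

```sql
-- Show each hint together with its inverse, where one exists.
select name, inverse
from   v$sql_hint
where  inverse is not null
order  by name;
```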
You can now make a 10053 trace for a specific SQL statement with dbms_sqldiag.dump_trace. And event 10046 has, for a while now, had level 16 for plan_stat=all_executions. So if you’re tracing a session and some SQL already has an execution plan, normally you would not get to see that plan in the trace. Now you can force those plans to be written to the trace anyway. Level 64 would even enable you to write trace data only for SQL that took longer than a certain amount of time (60 seconds, I believe), although Julian wasn’t able to test that yet. I really would like to try that out soon.
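A sketch of both, as I understood them; the sql_id, child number and trace file id below are made-up examples, and I still have to try this myself:

```sql
-- 10053 optimizer trace for one statement (sql_id is a made-up example):
begin
  dbms_sqldiag.dump_trace(
    p_sql_id       => 'abcd1234efgh5',
    p_child_number => 0,
    p_component    => 'Compiler',
    p_file_id      => 'my_10053');
end;
/

-- 10046 at level 16: also dump the plan of SQL that was already
-- parsed before tracing started (plan_stat=all_executions).
alter session set events '10046 trace name context forever, level 16';
```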
Oradebug was one of the main topics in Julian’s presentation. Here is a quick way to see the current SQL of a top process:
1. top (note the PID)
2. oradebug setospid <PID>
3. oradebug current_sql
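In a sqlplus session that would look roughly like this (PID 12345 is a made-up example; oradebug attaches to the OS process id with setospid):

```sql
-- Attach to the OS process you spotted in top (12345 is an example PID)
oradebug setospid 12345
-- Show the SQL that process is currently executing
oradebug current_sql
```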
Testing Jumbo Frames for RAC – Neil Johnson
I didn’t know much about jumbo frames, so I decided to go to Neil Johnson’s session. Soon, when the next discussion about jumbo frames starts, I can join in, because Neil did a good job explaining the matter. It actually isn’t such a difficult topic: it’s an OS thing. Without fancy frame options, long streams of data are broken into packets carrying 1500 bytes of data. With jumbo frames the limit goes up to 9000 bytes.
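To get a feel for why this matters for an interconnect shipping 8 KB database blocks, here is a back-of-the-envelope sketch. The numbers are mine, not Neil’s, and I assume UDP over IPv4, so 28 bytes of IP+UDP headers per packet:

```python
import math

def packets_needed(payload_bytes: int, mtu: int, headers: int = 28) -> int:
    """Packets needed to ship payload_bytes when each packet carries
    at most (mtu - headers) bytes of payload. Assumes UDP over IPv4."""
    return math.ceil(payload_bytes / (mtu - headers))

block = 8192  # one 8 KB database block shipped over the interconnect
print(packets_needed(block, mtu=1500))  # 6 packets with standard frames
print(packets_needed(block, mtu=9000))  # 1 packet with jumbo frames
```

Fewer packets means fewer trips through the network stack per block, which is where the saved CPU cycles come from.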
When Neil mentioned that in RAC this could be useful for the LMS process, I sat right up, because we have databases whose LMS processes use quite a bit of CPU.
Neil showed how he did his research. Along the way the audience could see how strace can be used, so here too I was all ears. In the end, in Neil’s story, jumbo frames saved precious CPU cycles, as he managed to demonstrate with strace, tcpdump and perf.