How can we sell security? (Part 2)

At the Security Roundtable at UKOUG Tech 2014, an important topic was how we can convince organizations to work on improving security. I originally envisioned this as a keynote-like presentation. But seeing that UKOUG Tech 2015 was still a year away, and that people tend to favour technical presentations, I decided to write this as a multipart series on how to get organizations interested in doing IT security properly.

 

Part 2: Brush up your presentation techniques and people will listen.

Boring beginnings

About ten years ago I got interested in presentation techniques. I once gave a one-day course about Method R, the performance method Cary Millsap described in his book Optimizing Oracle Performance. It was very interesting material. Unfortunately, I wasn’t that interesting. After lunch I saw people’s eyes fall shut. I also got low scores, and those people were right: I was boring that day, on a topic that had so much potential. Continue reading

Posted in Oracle security | Tagged , , , , , , , , , | Leave a comment

How can we sell security? (Part 1)

I like to say that projects in which you try to improve security on existing systems are like losing weight: everybody wants to, but not everybody does. If you asked management whether they consider security important, they would probably say “yes” (just like losing weight is important). So, does that mean you can spend time and resources on improving security? Hmm. That’s a problem.

Why? Patching and hardening a database or application, or even changing passwords that have stayed the same for a long time, is risky business. When people are responsible for the availability of an IT infrastructure and have to choose between the risk of possible application issues now, due to security improvements, and a vague risk of getting hacked later, they tend to choose availability and stability now. Continue reading

Posted in Oracle security | Tagged , , , , , , | Leave a comment

A review of UKOUG Tech 2014, day 3

Oracle RAC, Data Guard & Pluggable Databases. When MAA meets Multitenant – Ludovico Caldara

Of multitenant databases I know only the basics, but I decided to go to Ludovico Caldara’s session anyway. Luckily he spent a few minutes introducing why you would want to combine RAC and pluggable databases. One reason is that it’s easier to move databases that at a certain point use more resources and need to be relocated. It’s also easier to do parameter changes (like processes or the SGA).

One of the things he warned about several times is that container databases can have a maximum of only 512 (singleton?) services, and in RAC this limit can become a real barrier at some point. Continue reading

Posted in Conferences | Tagged | Leave a comment

A review of the UKOUG Tech 2014, day 2

With much enthusiasm I started day 2 of the UKOUG Tech 2014.

Sunrise from my hotel in Liverpool


Continue reading

Posted in Conferences | Tagged | 1 Comment

UKOUG Tech 2014, day 1

Storage Replication is For Losers – James Morle

I was a bit late out of bed, so I had to hurry to get to the right room just in time. I had decided to follow James Morle’s session about data replication. James had just become a father again, so we were treated to some pictures here and there in the presentation.

What it really was about, though, was that the usual storage replication does things in the wrong order and can make the wrong things wait. James warned us about consistency groups: storage groupings of database files that force data to replicate in order, even though you want the redo to go first. What you get is longer log file parallel write times, sometimes in seconds. (Mental note: check if we have these problems. Then visit the storage team.) Continue reading
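
That mental note can be checked quickly. A minimal sketch (assuming you have access to the V$ views): the wait-time histogram for the log writer’s I/O shows whether waits ever reach the multi-second buckets James warned about.

-- Wait-time distribution for log writer I/O since instance startup.
-- Counts in the buckets of 1000 ms and up would confirm the
-- multi-second log file parallel write times mentioned above.
select event,
       wait_time_milli,   -- upper bound of the bucket in milliseconds
       wait_count
from   v$event_histogram
where  event = 'log file parallel write'
order  by wait_time_milli;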

Posted in Conferences | Tagged | Leave a comment

UKOUG Tech 2014 – Super Sunday

After UKOUG Tech 2011 in Birmingham, I really wanted to go to another UKOUG Tech conference. And this year I am back, now in Liverpool. On Saturday, after landing and a slow bus ride to the city centre, I had time to visit The Beatles Story and got to see how Beatlemania and the British Invasion of the USA emerged. In a shaking and twisting mood (on the inside) I went back to the hotel. I had a really good dinner at the Salt House, a tapas restaurant. I’m not going to blog about every place where I eat, but this place catered to my highly sophisticated tastes 🙂

When I woke up on “Super Sunday” (that’s how the UKOUG named it) and looked out of the window, I saw two Santas. Then two more Santas, and a mother Santa and a stroller and three little Santas running around her. After breakfast, I saw even more Santas. So I followed the Santas and found there was some kind of running event where everyone was dressed as Santa Claus (some in blue). It was the Liverpool Santa Dash. Continue reading

Posted in Uncategorized | Tagged | Leave a comment

Parallel query performance consistency

A colleague just asked me: “Didn’t you have a query to see what SQL has run in parallel?” I used this months ago and had completely forgotten about it, but it might come in handy again some day, so I’ll put it here.

col BEGIN_INTERVAL_TIME for a40
col FORCE_MATCHING_SIGNATURE for 9999999999999999999999

-- Parallel SQL from AWR over the last day, with the average elapsed
-- time per execution (the DECODE avoids division by zero when a
-- statement had no completed executions in a snapshot interval).
select a.BEGIN_INTERVAL_TIME
,      a.INSTANCE_NUMBER
,      b.SQL_ID
,      b.PLAN_HASH_VALUE
,      b.EXECUTIONS_DELTA EXEC_DELTA
,      round(b.ELAPSED_TIME_DELTA/decode(b.EXECUTIONS_DELTA,0,1,b.EXECUTIONS_DELTA)/1000000,2) "Elapsed time (sec.)"
,      b.PX_SERVERS_EXECS_TOTAL PX_SERV_TOT
,      b.PX_SERVERS_EXECS_DELTA PX_SERV_DELTA
from   dba_hist_snapshot a,
       dba_hist_sqlstat  b
where  a.snap_id = b.snap_id
and    a.begin_interval_time > sysdate-1
and    a.instance_number = b.instance_number
and    b.PX_SERVERS_EXECS_TOTAL > 0
and    round(b.ELAPSED_TIME_DELTA/decode(b.EXECUTIONS_DELTA,0,1,b.EXECUTIONS_DELTA)/1000000,2) > 1
order by a.BEGIN_INTERVAL_TIME, PX_SERV_TOT desc
/
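
Once this surfaces a suspect statement, a possible next step is to pull its recorded plans from AWR with DBMS_XPLAN (a sketch; the SQL_ID shown is a placeholder, substitute one from the query above):

-- Show the execution plan(s) captured in AWR for one statement.
-- 'abcd1234efgh5' is a hypothetical SQL_ID.
select * from table(dbms_xplan.display_awr('abcd1234efgh5'));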

Continue reading

Posted in Oracle performance tuning | Tagged , , , , , | 2 Comments

We’re hiring a DBA, Fusion Middleware specialist, EM specialist and Linux admin

In our team we need a couple of seasoned Oracle specialists, and we’re also looking for a junior Oracle Database Administrator.

To be specific, we’re looking for people with the following skills who want to become employed at Rabobank Netherlands.

An Oracle Database Administrator

A Junior Oracle Database Administrator (Young Professionals Programme IT)

(This is actually a great opportunity to learn to become a professional Oracle DBA and to grow quite a lot. And to work with Oracle Exadata right away? That’s not a bad start in my book. Although you’ll be spoiled.)

An Oracle Specialist Enterprise Monitoring

A Linux Administrator with Oracle affinity

(Oracle affinity because the platform is Oracle Exadata of course)

An Oracle Fusion Middleware Specialist

An Oracle Business Process Management Specialist

(The vacancies are written in Dutch)

There are lots of interesting developments at our department at the moment. We’re going to migrate to Oracle Exadata machines, and for critical databases Oracle Database Vault and Transparent Data Encryption will be implemented.

I’ve written before about our team. We’re a self-organizing team, which basically means you will have more influence than usual.

A couple of practical things: our location is Zeist, the Netherlands (though, depending on your role, you might end up in the headquarters of Rabobank Nederland in Utrecht a couple of times per week). The official language for Rabobank documents is English, but you’ll find that a lot of meetings will be in either Dutch or English.

If you are interested, send your resume to newjobatrabo@marcel-jan.eu. I will make sure it finds its way to my manager.

Posted in IT Operations | Tagged , , , , , , | Leave a comment

Review: The Phoenix Project

A while ago my colleague Martijn Ten Heuvel gave me a copy of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr and George Spafford. He asked if I could return it after a week or two, but I don’t read that fast. So I bought my own (digital) copy, and read the book in record time anyway.

The book starts as your ultimate IT Operations horror scenario; in fact it’s quite a thriller. If I picture the worst events I’ve encountered in 17 years of IT Operations combined, that is what protagonist Bill Palmer encounters as the new VP of IT Operations: fragile old applications that fall over at the drop of a hat, deployments of key applications that basically require lots of rework to get them working, and only one guy who knows how to solve the issues every time. Many weekends and nights are spent getting things working. On top of that, the site of Bill’s employer, Parts Unlimited, at some point even leaks credit card data.

Although it’s your ultimate nightmare scenario, there are many recognisable events, like the way IT Operations juggles changes, business projects, internal IT projects and unplanned work. Also recognisable is how IT Operations gets the blame for lots of issues that have their origins in application updates. Eventually Bill meets Eric, who becomes a member of the board and who tells Bill to look at IT as a manufacturing process, not unlike the manufacturing done by Parts Unlimited itself. On the factory floor they changed course more than a decade ago: they identified their bottlenecks, found ways to work with them efficiently, and started working according to methods also used at Toyota, like kanban boards.

In the second half of the book, Bill and his team get a better and better grip on work in progress, unplanned work and projects. Bill and the manager of the development team, Chris, conclude that the chaotic and buggy delivery of the last Phoenix update is not to be repeated. So they start to collaborate.

A fun part is where Bill decides to ask the heads of the business what they want most from IT. And when they add things up, Bill and Chris find out they can deliver much of it far faster outside the Phoenix project. In fact, nobody specifically seems to have asked for Phoenix at all.

All in all, it’s quite a readable book that gives you an idea of what it means to do things like DevOps. I definitely have to read it again some time.

Posted in IT Operations | Tagged , , , | Leave a comment

Function based indexes sans the NULLs

A colleague asked for my help with a performance issue. He was trying to tune a query that ran on a 6 TB table. The query had to return rows for a couple of specific statuses, and for those it should return only about 143,000 rows. For this a function-based index had been created on the following function:

DECODE(pt.STATUS, 'Awaiting', 'Awaiting','OnHold','OnHold',NULL)

But the optimizer refused to use the index. Instead it did a full table scan on another large table (400 GB), and the whole thing took hours. With an index hint, however, the response time was a matter of seconds.

My colleague had already prepared an event 10053 trace for the query without the hint, and I made one for the query with the hint. I used Beyond Compare 3 to see where the differences were. This is the part of the trace of the unhinted execution plan where Oracle determined the access path for the 400 GB table:

Access Path: TableScan
Cost: 4850857.52 Resp: 4850857.52 Degree: 0
Cost_io: 4821117.00 Cost_cpu: 264001190158
Resp_io: 4821117.00 Resp_cpu: 264001190158
****** trying bitmap/domain indexes ******
****** finished trying bitmap/domain indexes ******
Best:: AccessPath: TableScan
Cost: 4850857.52 Degree: 1 Resp: 4850857.52 Card: 849590.75 Bytes: 0

And here is the same part for the hinted query:

Access Path: index (FullScan)
Index: I_INSTR_PK_INSTRKEY
resc_io: 88744279.00 resc_cpu: 1419200639011
ix_sel: 1.000000 ix_sel_with_filters: 1.000000 
Cost: 88904156.20 Resp: 24695598.94 Degree: 4
Best:: AccessPath: IndexRange
Index: I_INSTR_PK_INSTRKEY
Cost: 88904156.20 Degree: 4 Resp: 24695598.94 Card: 849590.75 Bytes: 0

Well, one thing I noticed was that the calculated cost of the function-based index was for some reason about twice as high as that of the full table scan.

But then my colleague asked why the index was so small (100 MB): a full index scan resulted in only 143,000 rows. The same expression concatenated with a second column gave a much larger index: 50 GB.

So I started thinking. NULLs are not stored in an index (unless you, for example, add a constant as a concatenated column). What about this function? Let’s look at the function once more:

DECODE(pt.STATUS, 'Awaiting', 'Awaiting','OnHold','OnHold',NULL)

So Awaiting and OnHold are rarely occurring statuses, and this DECODE says: IF not rarely occurring status 1 AND not rarely occurring status 2, THEN NULL. I’ve read the documentation, and no: these NULL values are not stored. Concatenated with another column, however, you can store these NULLs.

Depending on what you want to accomplish, you need to think twice about using functions that can return NULL in function-based indexes.
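
To illustrate both variants (a sketch with hypothetical table and index names; the DECODE expression is the one from the example above):

-- Rows where the DECODE evaluates to NULL get no entry in this index,
-- which keeps it tiny (only the rare statuses are indexed):
create index pt_status_fbi on pt
  (decode(status, 'Awaiting', 'Awaiting', 'OnHold', 'OnHold', null));

-- Appending a constant makes the index key never entirely NULL, so
-- every row gets an entry, at the price of a much larger index:
create index pt_status_fbi2 on pt
  (decode(status, 'Awaiting', 'Awaiting', 'OnHold', 'OnHold', null), 0);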

Posted in Oracle performance tuning | Tagged , , , , | Leave a comment