Oracle RAC, Data Guard & Pluggable Databases. When MAA meets Multitenant – Ludovico Caldara
Of multitenant databases I know only the basics, but I decided to go to Ludovico Caldara's session anyway. Luckily he had a few minutes of introduction about why you would want to combine RAC and pluggable databases. One reason is that it's easier to move databases that, at some point, use more resources and need to be relocated. It's also easier to do parameter changes (like PROCESSES or the SGA).
One of the things he warned about several times is that a container database can have a maximum of only 512 (singleton?) services, and in RAC this limit can become a real barrier at some point.
When doing Data Guard on active pluggable databases you need to use the STANDBYS=NONE clause to make sure you don't need an Active Data Guard license. Since 12.1.0.2 the password file is automatically sent to the standby side.
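The STANDBYS clause goes on the CREATE PLUGGABLE DATABASE statement. A minimal sketch (the PDB and admin user names are made up for illustration):

```sql
-- On the primary CDB: create a PDB that is excluded from recovery on
-- all standby databases, so the standby does not need the new PDB's
-- datafiles shipped and applied.
CREATE PLUGGABLE DATABASE pdb_sales
  ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_1"
  STANDBYS=NONE;
```

The excluded PDB shows up on the standby with its datafiles offline; you can enable recovery for it later by copying the datafiles over.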
SQL Injection in APEX – More Attacks (& Defences) – Tim Austwick
Now security sessions are nice, but it's even nicer to follow a hacking session once in a while. Tim Austwick did a session about SQL injection in APEX. I haven't done much with APEX myself, but it wasn't difficult to follow this session.
So APEX applications are rather SQL-heavy and, as we learned, APEX applications are therefore dynamic-SQL-heavy. Other programming languages usually have frameworks and you don't see dynamic SQL there as often. You need at least APEX 4.2.1 (or 4.2.6) to have a version without many known SQL injection issues already in it. Also, you don't need APEX Builder in production, for some reason. :-)
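The classic dynamic-SQL mistake looks the same in APEX as anywhere else in PL/SQL. A small sketch (the EMP table and the p_name value are hypothetical, not from Tim's demo):

```sql
DECLARE
  p_name   VARCHAR2(100) := 'KING';  -- imagine this comes from an APEX page item
  v_salary NUMBER;
BEGIN
  -- Vulnerable: the input is concatenated into the statement text, so a
  -- crafted value can change the meaning of the query.
  EXECUTE IMMEDIATE
    'SELECT sal FROM emp WHERE ename = ''' || p_name || ''''
    INTO v_salary;

  -- Safer: a bind variable keeps the input as pure data, never as SQL text.
  EXECUTE IMMEDIATE
    'SELECT sal FROM emp WHERE ename = :name'
    INTO v_salary USING p_name;
END;
/
```

In APEX terms this is the difference between substituting a page item into the SQL source and referencing it as a bind variable.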
I've learned that in APEX there is a table called apex040200.wwv_flow_fnd_user which contains usernames and hashed passwords. Tim used "John the Ripper" as a password-cracking tool, and in the demo he could find passwords within minutes when they weren't complex.
If you want to quote strings that themselves contain quotes, you can basically ignore DBMS_ASSERT.ENQUOTE_LITERAL, because it doesn't handle strings like "O'Reilly" or "'s Gravenhage" (the more official Dutch name for The Hague) very well. You can use DBMS_ASSERT for view and table names though (SIMPLE_SQL_NAME, I believe the function was called).
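A quick sketch of both behaviours (run from SQL*Plus or similar; the exact error raised may vary by version):

```sql
-- ENQUOTE_LITERAL rejects input whose single quotes are not already
-- doubled, so a raw value like O'Reilly raises an error instead of
-- being safely quoted for you:
SELECT DBMS_ASSERT.ENQUOTE_LITERAL('O' || CHR(39) || 'Reilly') FROM dual;

-- For object names it works well: SIMPLE_SQL_NAME validates an
-- identifier before you concatenate it into dynamic SQL, and raises
-- ORA-44003 for anything that is not a simple SQL name.
SELECT DBMS_ASSERT.SIMPLE_SQL_NAME('employees') FROM dual;
```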
An interesting quote from Tim at the end of the session: "50% of APEX applications have a SQL injection bug somewhere in there".
Data Guard 12c New Features in Action – Uwe Hesse
Uwe Hesse was so confident in the stability of Data Guard in 12c that he did a live demo. He talked about the new features in 12c, mainly Far Sync and Cascading Standbys.
I had already heard about Far Sync. A Far Sync instance only receives redo (synchronously), archives it, and forwards it (asynchronously). It doesn't have datafiles and you can't fail over to it. It's just an in-between station for remote standby databases, and Far Sync allows you to do this without a performance impact. When you switch over, you also need a Far Sync instance on the other side.
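A rough sketch of the setup (the service names cdb_fs and cdb_sby are hypothetical, and parameters are simplified; a real configuration would usually be managed through the broker):

```sql
-- On the primary: generate the special control file the far sync
-- instance is mounted with (it has no datafiles of its own).
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/cdb_fs.ctl';

-- The primary ships redo SYNChronously to the nearby far sync instance:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=cdb_fs SYNC AFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cdb_fs';

-- On the far sync instance: forward the redo ASYNChronously to the
-- remote standby, so the WAN latency never hits the primary's commits.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=cdb_sby ASYNC
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cdb_sby';
```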
Cascading Standbys allow you to have in-between standby databases that relay redo to other standby databases. There is a bug though: when you want to change the protection mode to Maximum Performance, the primary database goes down. It's a bug that also exists in 11g R2.
Defining Custom Compliance Rules Using 12c EM – Philip Brown
This was a session about how you can define (security) compliance rules in EM12c using the Lifecycle Management Pack. This pack contains many features, like the ones previously in the Provisioning Pack. With that you can also use BI Publisher (on the EM repository, if I'm not mistaken). I found this interesting, because at our site we're building our own application for this.
Setting up the custom compliance rules is divided into two parts: the 12c Config Extensions, which query your database, and the Compliance Rules, which check for compliance in the data gathered by the Config Extensions. A Config Extension can contain many SQL queries that perform checks, and Philip advises creating only a couple of Config Extensions with a lot of SQL checks each.
The advice Philip gave for creating the checks sounded familiar from my own work on our compliance application: make sure a check returns only one value, and make sure it always returns a value, because Oracle doesn't understand a check when no rows are returned; it simply assumes the result is compliant. For example, when you check which users have the SELECT ANY DICTIONARY privilege, it's quite possible that the query returns no rows, and for the compliance check that doesn't work well.
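The usual trick is to aggregate, so the check always produces exactly one row. A sketch of the pattern (the whitelist of grantees is hypothetical, not from Philip's slides):

```sql
-- Always returns exactly one row; 0 means compliant, anything else is
-- a violation. COUNT(*) never returns "no rows", unlike a plain SELECT.
SELECT COUNT(*) AS violation_count
FROM   dba_sys_privs
WHERE  privilege = 'SELECT ANY DICTIONARY'
AND    grantee NOT IN ('SYS', 'DBA');  -- hypothetical whitelist
```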
You can then define what a violation is and assign a weight to it. This is used to calculate how compliant you are. If you choose to assign a low weight, a couple of violations will still result in a high compliance score, and vice versa. OS Config Extensions are a different beast, by the way. And since 12.1.0.4 there are Agent Rules and Manual Rules. Manual Rules are checks that you perform manually, after which you enter the result in EM12c. Philip also noted that this part of the LCMP is pretty stable.
Behind the Mask, Oracle Data Masking – Niall Litchfield
The Data Masking Pack is one of those options you rarely see used. Strange really, because just about every company I have known used production data for testing. Are there no policies for obfuscating data in test environments? According to Niall, there certainly are regulations in the EU and in the USA. Still, it's rare to see the Data Masking Pack in the wild.
Data Masking is, however, not a Next, Next, Next, Implement affair. First you have to create an Application Data Model with a developer (if you're a DBA). With this in hand (an XML file), choices have to be made about what to do with the data. There are rules you can use for well-known sensitive data, like social security numbers, credit card data and... ISBN numbers (for some reason).
With the Data Masking Pack you can then generate SQL and PL/SQL and run that on an environment with production data (but not on production itself, of course). Despite efforts at parallelisation, executing the data masking step can actually take quite some time: a couple of million rows could take you hours. Someone in the audience told of jobs that took 24 hours to finish.
AWR and ASH, Deep Dive With EM12c and Beyond – Kellyn Pot’Vin
A great thing about conferences like this one is that you can meet people you only know from Twitter, blogs and books, and exchange ideas and experiences. For the last session I went to see Kellyn Pot’Vin (actually Pot’Vin-Gorman now), whom I know from her books and Twitter. She talked about AWR and ASH.
To get the data I need to solve performance issues, I had already done my own research on AWR and ASH, so the first half was not new to me. What was fairly new to me was the AWR Warehouse. This feature offloads all your AWR data to a central warehouse. It runs on EM 12.1.0.4, the warehouse repository should be 12.1.0.2 or higher, and your target databases can be anything from 10.2.0.4 and up.
It's a limited Enterprise Edition license, and it comes free with the Diagnostics Pack (and Tuning Pack?), as long as you don't change the AWR Warehouse into a RAC database or something with Data Guard. The tables are partitioned, by the way, so you can stuff your warehouse with lots of AWR data and still get good performance.
You can use Enterprise Manager to look into the data, or you can query the warehouse yourself. Kellyn promises that the queries you now use against the AWR repository in your own database won't have to change much. This opens up interesting possibilities: you can look at the database-related load on a node and see which database caused the most load.
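Because the warehouse holds snapshots from many source databases, the DBID column in the familiar DBA_HIST views becomes the key that tells them apart. A sketch of the idea (not a query from the session):

```sql
-- Top wait events across all consolidated databases; in the AWR
-- Warehouse, DBID distinguishes the source database each row came from.
SELECT dbid,
       event_name,
       SUM(time_waited_micro) / 1e6 AS seconds_waited
FROM   dba_hist_system_event
GROUP  BY dbid, event_name
ORDER  BY seconds_waited DESC
FETCH FIRST 10 ROWS ONLY;
```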
Kellyn also told us a little about what to expect in later versions. It seems OS Watcher data will go into the warehouse as well. Sounds interesting.
And that was the end of UKOUG Tech 2014. After the last session everyone quickly went his or her own way. I'm really going to miss this. I vow here and now that I will try to return next time as a speaker; I already have some ideas I cooked up in my hotel room last night.