I like to say that projects to improve security on existing systems are like losing weight: everybody wants to, but not everybody does it. If you ask management whether they consider security important, they will probably say “yes” (just as losing weight is important). So does that mean you can spend time and resources on improving security? Hmm. That’s a problem.
Why? Patching and hardening a database or application, or even changing passwords that have stayed the same for a long time, is risky business. When people are responsible for the availability of an IT infrastructure and they have to choose between the risk of possible application issues now, caused by a security improvement, and a very unknown risk of getting hacked later, they tend to choose availability and stability now.
But you, as a database (or OS, middleware, or application) specialist, might see big security risks now. Maybe excessive privileges have been granted. Maybe important database users have very simple passwords that you are not allowed to change. Maybe the application has dangerous SQL injection vulnerabilities. And maybe the software hasn’t been patched for years. You see the risks, but you don’t get permission or time to do something about them. How can you change things?
In his session at the UKOUG Tech 2014 conference, Pete Finnigan said that to start security improvements, you need a security standard. In the absence of one, you can describe what you consider to be a secure database (or OS, middleware stack, or application, I guess). That’s a great step to begin with. Next, make a list of everything that deviates from that standard, and order the list by risk into a top 10 or 20 of the biggest issues for your organization.
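The steps above can be sketched in a few lines of code. This is only an illustration of the idea (the deviations and the likelihood/impact scores below are hypothetical examples, not from Pete Finnigan’s talk): score each deviation from your standard, then sort to get your top list.

```python
# Hypothetical deviations from a security standard, each with a rough
# qualitative likelihood and impact score (1 = low, 5 = high).
deviations = [
    {"issue": "DBA role granted to application user", "likelihood": 4, "impact": 5},
    {"issue": "Default password on schema owner",      "likelihood": 5, "impact": 5},
    {"issue": "Database unpatched for two years",      "likelihood": 3, "impact": 4},
]

# A simple, commonly used qualitative risk score: likelihood x impact.
for d in deviations:
    d["risk"] = d["likelihood"] * d["impact"]

# Order the list by risk, highest first: your "top 10/20" for management.
top = sorted(deviations, key=lambda d: d["risk"], reverse=True)
for rank, d in enumerate(top, start=1):
    print(f'{rank}. [risk {d["risk"]:>2}] {d["issue"]}')
```

However crude the scoring, a ranked list like this turns a vague sense of “we have issues” into something a manager can approve item by item.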
Now you still might not get approval or time to start working on these issues. Your manager might see risks, but what he or she forgets is that there is a big difference between the downtime caused by work on security issues (or by unforeseen problems resulting from that work) and the risk of actually being hacked: security changes can be tested and rolled back. Being hacked cannot.
When you change the password of a schema owner, and an application can’t connect anymore, you can change the password back (or solve the issue). If revoking the DBA role from a user and replacing it with minimal necessary privileges causes unforeseen issues, you can give the DBA role back (or solve the issue). Database patches can be rolled back, backups can be restored, application changes can be rolled back. Work on security issues doesn’t have to result in hours of downtime, even in badly documented infrastructures.
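To make the reversibility concrete, here is an illustrative Oracle-style sketch (all user, role, and object names are hypothetical). Each change has an obvious rollback, which is exactly the point:

```sql
-- Change a schema owner's password...
ALTER USER app_owner IDENTIFIED BY "New#Strong1Pw";
-- ...and if the application can't connect anymore, change it back
-- (or fix the application's stored credentials):
ALTER USER app_owner IDENTIFIED BY "Old#Pw";

-- Replace the DBA role with minimal necessary privileges...
REVOKE DBA FROM report_user;
GRANT CREATE SESSION TO report_user;
GRANT SELECT ON app_owner.orders TO report_user;
-- ...and if that causes unforeseen issues, grant the role back
-- while you investigate:
GRANT DBA TO report_user;
```

None of these statements needs hours of downtime, and each one can be undone in seconds if something breaks.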
But if you get hacked, you get hacked. That’s permanent.
It’s important for an organization to accurately weigh the risks involved in taking on a security project. The next time you discuss the risks of security improvements with your management, don’t forget to mention the risks of doing nothing.
You can also listen to this blog post on SoundCloud.