Introduction
Oracle auditing is a large, complex, and often confusing subject with many technology options. We aim to demystify those options and help you make educated technology choices, guiding you to a solution that works for you. From capturing data to getting value from it, and from DIY to high-end solutions, let's explore Oracle auditing.

Capture
Capture is the raw data collection. Auditing always starts with gathering information. After that, you need to process that data, store it, report on it, etc. Those will be addressed later in the Getting Value section.
While getting value out of the information is challenging, getting the information in the first place is even more challenging, with far-reaching implications for what is available to process.
Challenges
Why is it complicated to capture audit data? It seems like it should be easy to do.
The underlying problem is that Oracle runs massive amounts of activity at incredible speeds. Any interference with this highly tuned machine can cripple performance and slow down the database to a crawl. So, performance impact is the biggest challenge for capturing activity.
A second obstacle is that we typically must audit DBAs. However, these privileged accounts control the capture technologies built into the database, rendering them ineffective in monitoring privileged activity.
The SYS account in Oracle is especially problematic since many auditing facilities don’t capture its activity. That is because SYS is a built-in database account, and it is also used for internal database activity.
Another difficulty is internal database activity. Oracle has many capabilities that run activity internally and are not visible from the outside, such as PL/SQL anonymous blocks, stored procedures, triggers, internal Java programs, and scheduled jobs. Some capture technologies (specifically packet inspection) can only see activity entering the database from outside and won't see the SQLs, tables, and columns of internal activity.
We’ll discuss these limitations as we review each technology.

What data could you collect?
When auditing an Oracle database, you have a few different types of information you might need to collect.
The most basic requirement is to capture Logons and Failed Logons. That means monitoring connections to the database (or sessions) and where they originate. That includes data like login time, logout time, user name, program, IP address, etc.
Another common requirement is SQLs. SQLs are the commands executed in the sessions. They break down into three categories: DDL, DML, and Queries (select). This breakdown is significant because queries are not easy to capture, which limits the capture technologies we can use. Query capture is critical for detecting data theft, DML capture identifies unauthorized changes, and DDLs relate to change control. If data theft is a risk, you need to capture queries, and any technology that can capture queries can capture every other type of SQL as well.
Selective capture is a way to reduce the overhead by not capturing all the SQLs. It also simplifies processing since there is far less data. It is not ideal because you will miss a lot of information, but it is the only way with built-in Oracle auditing capabilities. Even if you capture everything, you should consider what you will do with the captured data. Selective capture might be a good option if you later discard most of your captured data and only record a few SQLs (see more in the getting value section below).
Capturing before and after values is not a frequent requirement outside a few industries where compliance demands it (especially banking and finance). Before and after values means auditing the values that change in the database. If someone, for example, doubles salaries using salary=salary*2, this type of capture will record all the original salaries and the salaries after the change. In this example, there are no values in the SQL, which explains why SQL capture, even with bind variables, doesn't address this need.
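To make that concrete, here is the example as SQL (a hypothetical employees table and illustrative values):

UPDATE employees SET salary = salary * 2;
-- The SQL text contains no literal values, so capturing the statement (even with
-- bind variables) cannot tell you what changed. A before/after capture must record,
-- for every affected row, something like:
--   employee_id=101, old_salary=5000, new_salary=10000
--   employee_id=102, old_salary=7000, new_salary=14000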
Another capability that isn’t auditing but is related to the capture technology is blocking. Blocking allows you to prevent certain SQLs from executing. It is a powerful preventive capability that, in some solutions, is embedded into the capture engine. Finally, there’s a simplified form of auditing leveraging metadata snapshots. For example, you can identify changes between the list of users yesterday and today. It is a type of change control monitoring and can be implemented for configuration, users, permissions, and objects (e.g., tables, views, etc.)
Available technologies
This summary table shows which auditing technology is relevant to which data capture. As you can see, almost nothing is perfect. These technologies also have many limitations and downsides, and several of them are related, so read the details below.
Technology | Logins | DDL | DML | Queries | Before & After Values | Metadata Snapshot | Blocking
--- | --- | --- | --- | --- | --- | --- | ---
Do-it-yourself | | | | | | ✓ |
Traditional Auditing | ✓ | ✓ | ✓ | ✓ | | |
FGA | | | ✓ | ✓ | | |
SYS auditing | ✓ | ✓ | ✓ | ✓ | | |
Unified Auditing | ✓ | ✓ | ✓ | ✓ | | |
Triggers | ✓ | | | | ✓ | |
Logminer | | | | | ✓ | |
Packet inspection | ✓ | ✓ | ✓ | ✓ | | | ✓
Full Capture | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Do It Yourself
Do-it-yourself (DIY) is a simple solution you can build in-house without complex technology or processing. For example, you can take a daily snapshot of the users in the database and identify changes from the previous day. It’s not strong security, but it will help validate change control. Aside from its limited capabilities, the main downside of this and any DIY project is the time and effort your DBA team will need to invest in implementation, maintenance, and operation.
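For instance, a minimal sketch of such a snapshot using the DBA_USERS view (the snapshot table name is hypothetical):

-- Day 1: take a snapshot of the current database accounts
CREATE TABLE users_snapshot AS
  SELECT username, account_status, profile FROM dba_users;

-- Day 2: accounts that appeared since the snapshot was taken
SELECT username FROM dba_users
MINUS
SELECT username FROM users_snapshot;
-- (reverse the MINUS to find dropped accounts, then refresh the snapshot)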
Traditional Auditing
Traditional Auditing (using the AUDIT statement) offers selective Oracle auditing. You can choose to audit specific statement types, access to particular tables, or activity by specific users (and some combinations of these). The audit_trail configuration parameter controls how the data is recorded, with options of os, db, and xml, plus an extended variant of the last two.
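As a rough sketch (the audited table is hypothetical; audit_trail is a static parameter, so it only takes effect after a restart):

ALTER SYSTEM SET audit_trail = 'DB,EXTENDED' SCOPE = SPFILE;  -- record audit data plus SQL text in the database
-- After restarting, choose what to audit:
AUDIT SESSION;                                                -- logons and failed logons
AUDIT TABLE;                                                  -- CREATE/DROP/TRUNCATE TABLE statements
AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.salaries BY ACCESS;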
This is the primary method for native Oracle auditing. Oracle is phasing it out and replacing it with the more comprehensive Unified Auditing. The main limitations of traditional auditing are:
- The audit records mostly contain the time, database account, OS account, machine, and table name. Adding the extended option will also log the SQL. However, this is not the right solution if you need additional information, such as the IP address.
- It does not capture SYS activity. SYS can even purge the audit data without any message. That is a crippling limitation that requires another Oracle feature (SYS auditing) to compensate. It is also the main advantage of the newer Unified Auditing.
- Its ability to control DBAs is limited since they administer it. The log contains actions that manipulate auditing and purge data, but these can be easily excused.
- Performance impact is the biggest problem, with a base impact of over 1,000% for short SQLs (audit_trail=db). When you also record the SQL text, the impact is 2,000% (audit_trail=db, extended). When writing to the OS the overhead is over 800% (audit_trail=os). The XML option is irrelevant, with an overhead of over 80,000%.
Even just turning on audit_trail=db instead of audit_trail=none incurs an overhead of 20%. That's without auditing anything. The reason is that once the auditing system is on, Oracle evaluates the auditing rules, and that evaluation process creates an overhead even when there are no auditing rules.
- You can only record a limited volume of activity. Overheads over 5% are considered unacceptable in Oracle environments, not to mention the resulting disk footprint. The high overhead of traditional auditing means it is only practical if scoped down to a small number of SQLs.
- The administration complexity, like creating the auditing rules and managing the audit data, is also significant. That means the DBA time required to manage native Oracle auditing.
- Other Oracle utilities, like RMAN, SQL*Loader, and Data Pump, can also access the data. These utilities have separate auditing facilities and are not covered by Traditional Auditing.

Fine-Grained Auditing (FGA)
Fine-grained auditing (FGA) is an Oracle auditing feature aiming to enhance the functionality of traditional auditing by being even more selective. FGA can limit auditing to specific column access or only audit access to particular rows (like department=5 or salary>1000). FGA aims to address performance problems when auditing tables with a high activity volume.
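For example, a sketch of an FGA policy on a hypothetical HR.EMPLOYEES table that audits salary access only for rows with salary > 1000:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'audit_salaries',
    audit_condition => 'salary > 1000',     -- only rows matching this predicate
    audit_column    => 'SALARY',            -- only when this column is referenced
    statement_types => 'SELECT, UPDATE');
END;
/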
However, limiting the rows and columns is a poor solution, and FGA is rarely used. Customers who encounter such problems usually opt for other auditing methods that don’t require you to be so selective. For example, with Core Audit, you can audit 1 billion SQLs with less than 3% overhead using 32GB of disk space.
SYS Auditing
SYS auditing refers to the Oracle configuration parameter AUDIT_SYS_OPERATIONS. When set to true, it records all the activity by the SYS user. It is vital because SYS is a built-in database account that DBAs can always access and cannot be audited by traditional auditing. SYS users can also delete information from the audit data without a trace. SYS is a complicated user to audit because the database uses it internally.
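Enabling it is a one-line change, sketched below (it is a static parameter, so it only takes effect after a restart):

ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;
-- Records are typically written to OS files under audit_file_dest (or the Windows event log).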
SYS auditing has significant limitations:
- It does not record internal database activity, only external activity. That makes it relatively easy to circumvent.
- It cannot be selective and automatically records all the SYS activity.
- If DBAs can access the operating system, they can delete or manipulate the audit data.
- It has an overhead of over 1,200%. When measuring with network round trip (which is more appropriate for external transactions), the overhead is over 90%.
- The audit data usually contains a lot of noise from various activities and internal database operations. It is next to impossible to locate meaningful information inside it if you don’t know what you’re looking for.
Unified Auditing
Unified Auditing (using an AUDIT POLICY) is the new built-in Oracle auditing mechanism that combines features from traditional auditing, FGA, and SYS auditing. It has better coverage, but the overhead is far worse.
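For reference, a minimal policy sketch (the table name is hypothetical):

CREATE AUDIT POLICY salary_access_pol
  ACTIONS SELECT ON hr.salaries, UPDATE ON hr.salaries;
AUDIT POLICY salary_access_pol;
-- Audit records are then read from the UNIFIED_AUDIT_TRAIL view.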
On the good side, auditing of SYS is much more comprehensive and works like auditing of any other user: internal SYS activity is recorded, and only what the policy requires is captured (so less noise). Unified Auditing can also record activity from other Oracle tools. This all adds up to much better coverage.
But the limitations are:
- The audit records mostly contain the time, database account, OS account, machine, application, table name, and SQL text. This is not the right solution if you need additional information, such as the IP address.
- DBAs administer it so it cannot effectively control them. While there is a log of actions that stop auditing and purge data, these can be easily excused.
- Performance impact is the biggest problem, with an overhead of about 4,000% for short SQLs.
- You can only audit a limited volume of activity. Any overhead over 5% is unacceptable in Oracle environments, not to mention the even larger disk footprint. Unified Auditing is not practical unless limited to a small number of SQLs.
- The administration complexity, like creating the auditing rules and managing the audit data, is significant. That means DBA time that is required to manage Oracle Unified Auditing.
The bottom line is that Unified Auditing can be a good option for low-volume capture to satisfy modest compliance requirements (see more later). Capturing activity on busy tables, or aiming to use the data for analysis such as behavioral profiles and anomalies, is not realistic.

Triggers
Triggers are small code fragments the database can execute in response to activity.
There are three kinds of triggers. Logon triggers fire when someone logs in, so you can use them to audit login activity; while possible, it is an uncommon auditing approach for sessions. There are also DDL and DML triggers, but the SQL text is only visible to them in a limited number of cases.
DML triggers, however, can audit before and after values, and that is a realistic method for capturing value changes.
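A sketch of such a trigger on a hypothetical HR.SALARIES table, writing the old and new values to a pre-created salary_audit table:

CREATE OR REPLACE TRIGGER trg_salary_audit
AFTER UPDATE OF salary ON hr.salaries
FOR EACH ROW
BEGIN
  -- salary_audit is a hypothetical audit table created beforehand
  INSERT INTO salary_audit (changed_at, changed_by, employee_id, old_salary, new_salary)
  VALUES (SYSDATE, SYS_CONTEXT('USERENV', 'SESSION_USER'),
          :OLD.employee_id, :OLD.salary, :NEW.salary);
END;
/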
The problem with triggers is that they run as part of the SQL and increase the time it takes to finish. Because auditing triggers usually record information to other tables, they tend to cause a high overhead by adding another DML to the current DML. In other words, DML triggers on highly transactional tables cause a significant performance impact.
Beyond overhead, the downside of triggers is that they are controlled by DBAs and others with appropriate permissions and can be disabled by anyone who wishes to bypass them.
It is also important to note that there are no triggers for queries. Triggers are irrelevant when it comes to data theft risk.
Logminer
Logminer is an Oracle feature that can extract information from the Oracle redo logs. The redo logs are part of how Oracle works. They contain every change in the database and are used for crash recovery. Unrelated to their primary function in crash recovery, you can process the redo logs to extract all the before and after values. That is useful for security, replication, and other purposes.
Using the redo logs is beneficial because processing is performed after the database activity finishes. While the processing takes resources from the machine, it happens in parallel and does not slow down database transactions.
While you can always process the redo logs with Logminer, you can extract better-quality information if you ask Oracle to capture additional information. You can do that with a SQL like “ALTER DATABASE ADD SUPPLEMENTAL LOG DATA”. Note that you can record different types of supplemental log data.
Oracle redo logs also cycle: older redo logs are copied to a separate location and referred to as archive logs. Production databases always have archive logging set up, but the retention time depends on the environment. Logminer can mine both the archive logs and the online redo logs; you only need the logs to be available somewhere to mine them.
You can access Logminer with the assistance of a DBA or using a solution like Core Audit.
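Roughly, a manual Logminer session looks like this sketch (the log file path and table name are hypothetical):

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/arch/arch_0001.log');
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
-- Before and after values appear as reconstructed SQL:
SELECT username, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  table_name = 'SALARIES';

EXEC DBMS_LOGMNR.END_LOGMNR;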
Packet Inspection
Packet inspection technology deciphers communication going in and out of the database to identify the SQLs. There are two implementations: as a network sniffer outside the machine or with a kernel agent inside it.
As a network sniffer, this technology doesn’t impact database performance but can’t see local communication. That’s usually unacceptable. Most deployments use the local agent. That allows the solution to capture local communication, but it generates a high network overhead.
Both deployment methods have challenges with network encryption, but one of the most significant limitations is the lack of visibility inside the database. You can try to infer what happens inside the database based on the SQL, but that's not always possible, because databases can run procedures, triggers, Java programs, and anonymous blocks. These are all pieces of code that can generate SQL inside the database, and those SQLs are invisible to a solution based on packet inspection.
Some solutions based on packet inspection can also block unwanted activity. However, there is usually a timing problem that allows offensive requests to go through.
Full Capture
Core Audit Full Capture is a next-generation auditing technology that connects directly to the SQL engine. It was designed from the ground up to address the unique challenges of security and auditing Oracle activity. The information quality is superior to any other method with negligible overhead, and DBAs cannot bypass it.
Full Capture does what you’d expect from a capture technology: see everything without impacting the database. That gives you 100% visibility with less than 3% overhead.
It can also capture the before and after values by integrating with low-overhead triggers and Logminer. So you have both alternatives with different pros and cons.
Finally, Full Capture has blocking capabilities that can return errors or warnings for offensive SQLs. Unlike packet inspection, Full Capture has no timing or other security holes.

Capture Conclusion
The most challenging part of capture is queries. Usually, it is also the most important since it relates to data theft. Once you can capture queries, you can capture most other information. If you also need before and after values, you can achieve that with native Oracle features (without a user interface) or as part of a solution that supports it.
That means you have four basic options:
- Do-it-yourself (DIY) – Use Unified Auditing to capture activity and process it yourself (see below). You can also add triggers to record before and after values or extract the information from Logminer. There are no licensing costs, but it is very time-consuming to build and maintain. We do not recommend it as it tends to be a never-ending project with unsatisfactory results.
- Low-cost solution – solutions that use one of the Oracle native auditing features like traditional or unified auditing. These solutions take care of storage and reporting but rely on the database to perform the capture. DBA auditing is ineffective due to the capture method, and these solutions cannot scale to record a large volume of SQLs. This may be passable for some compliance requirements but nothing more: quality security requires effective DBA monitoring, higher-volume recording of sensitive data access, and the anomaly detection needed for application control.
- Packet inspection solution – a high-end auditing solution that looks at database packets. It can handle much more volume than the low-cost solutions but provides reports along the same lines. In some ways, these are worse than low-cost solutions that rely on Unified Auditing since packet inspectors cannot see internal database activity. These solutions are reasonable for compliance but deliver limited security.
- Core Audit – a high-end solution that can capture everything you need and provide powerful reports, alerts, and analysis. It has no limitations or security holes and delivers the most comprehensive security.
Getting Value
Capturing audit data is only the first half of the problem. Once you have the data, you must convert it into meaningful information. A repository with billions of SQLs is useless by itself. This section briefly discusses what can be done to gain value from all this data.
Compliance Reporting. A basic expectation for database auditing is to achieve compliance. To do that, you need reports on logons, failed logons, privileged activity, access to sensitive data, and more. The challenge is deciding what to record and how to create meaningful reports from the data. Effective reporting is based on three basic principles:
- The subject of the report must be realistic in your environment. This type of reporting works for subsets of activity that are high-risk and low-volume, like DDLs, DMLs run by DBAs, etc.
- Find the correct way to report on it, often in aggregate form, like counting the number of logins by each user instead of listing them individually (see the sketch after this list).
- Tune the reports without losing security value. For example, exclude activity from a monitoring script but ensure what you exclude can’t pose a security risk.
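For example, if Unified Auditing is the capture method, an aggregate daily logon report could look like this sketch (columns per the UNIFIED_AUDIT_TRAIL view):

SELECT dbusername,
       COUNT(*)                                          AS total_logons,
       SUM(CASE WHEN return_code <> 0 THEN 1 ELSE 0 END) AS failed_logons
FROM   unified_audit_trail
WHERE  action_name = 'LOGON'
AND    event_timestamp > SYSTIMESTAMP - INTERVAL '1' DAY
GROUP  BY dbusername
ORDER  BY total_logons DESC;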
Compliance reporting isn’t technically challenging once you have the captured data. It only requires a repository, a reporting engine, and a little time. In Core Audit, you have a highly scalable repository, an easy-to-use reporting engine, and wizards with built-in reports to get you started. More importantly, you have proactive forensics that saves you time by helping you figure out what to put in the reports.
Anomaly Analysis. Most database activity doesn’t lend itself to compliance reporting and requires something that can scale to massive volumes. That requires a special type of repository and automation that can locate unusual activity in it. These only exist in high-end solutions.
Core Audit can alert on new active users, users connecting from different applications or IPs, activity at unusual times, unusual SQLs accessing sensitive information, unusual SQL volumes, potential SQL injection attacks, and much more. There are many ways to slice, dice, compare, and contrast the activity to find anomalous behavior.
Reactive Forensics. Reactive forensics means obtaining additional details about security events. These events can be a suspicious line in a report, a potential breach, an indication from another source, and more. Whatever the event, you always need to know more about what happened.
The challenge is having a repository with the information. In most solutions, you are limited to the events you choose to record. In Core Audit, aside from the compliance repository, you also have the security repository that automatically stores information about every SQL in your database. While the security repository is less detailed, it ensures you’ll always be able to know what happened.
Proactive Forensics. This type of forensics gives you visibility into all the database activity. It has multiple objectives, but the most important is developing controls. You cannot design effective compliance reports or anomaly alerts without a solid understanding of what’s happening in the database. You cannot even understand the threats you’re up against.
You cannot secure what you cannot see, and proactive forensics is a highly recommended first step to protect your Oracle database. But it’s also central to identifying poor security practices, attacks and breaches, changes in behavioral patterns, gaps in the controls you deployed, and more.
Core Audit Proactive Forensics includes graphical analysis tools to visualize activity profiles from various dimensions. For example, it has time-based stacks, tree graphs, sunburst charts, Sankey diagrams, network graphs, and more.

Oracle Solutions
In addition to capture technologies, Oracle offers two security solutions. These are not part of the database licensing and incur significant additional costs, not to mention setup, administration, etc.
Audit Vault and Database Firewall (AVDF)
Oracle Audit Vault and Database Firewall is a packet inspection solution that can supplement its data from Oracle native auditing features like Traditional Auditing, FGA, SYS auditing, and Unified Auditing. Despite the confusing name, it is a single solution that contains auditing and preventive features.
This mesh of technologies (packet inspection and native auditing) aims to compensate for the limitations of each capture method. Packet inspection solutions that are only network-based, like AVDF, cannot see local or internal database activity. Local machine activity is critical as it contains privileged activity from inside the machine and, in some cases, application activity when the application runs on the database server. On the native auditing side, each feature has different limitations, but they are all crippled by the volume of activity they can handle.
The combined solution can capture all the external activity and some local and internal activity, as long as the volume isn't high. However, AVDF cannot merge the data sources, so it stores them independently. That means there's significant overlap between the activity captured by native auditing and the activity captured by packet inspection.
AVDF also contains features like anomaly detection and blocking for the network traffic it captures. While it doesn't have a repository of all the activity for full forensics, it can identify, record, and block some anomalous activity on the fly.
Database Vault (DV)
Database Vault is a preventive measure, not an auditing solution. It aims to limit the access of system DBAs and prevent them from accessing the data schema.
DV redefines database administration roles and isolates them into multiple categories. However, the administration of the data schema remains the same and is probably assigned to one or more DBAs. So, while it has some benefits, it doesn't solve the problem.
DV is an incredibly complicated and intrusive solution that ultimately fails to deliver much security. There are alternatives, like Core Audit, that are more effective and far less intrusive to the organization.
Best Practices
Here are some basic best practice steps to help you get started:
- Define your drivers. Are you looking to comply with a particular regulation or secure your database against some threats? Why are you doing this project, and what do you aim to achieve? It’s hard to succeed without objectives.
- Identify available resources and define the frame. Is there a timeline for completion? Is there a budget you can use? Who will be part of this project, and how familiar are they with Oracle? Knowing your resources lets you define realistic goals.
- What are your capture requirements? Is data theft a risk, and do you need to capture queries? Are you worried about unauthorized changes to the data? Do you need to capture before and after values? Is privilege abuse a concern? This stems from your risks, security needs, and compliance requirements.
- What value do you expect to gain from the data? Do you need compliance reports or anomaly alerts? Are you worried about SQL injection? This depends on the level of security and compliance you aim to achieve.
- Choose an approach. Your resources and objectives will quickly guide you between a DIY, a low-cost product, or a high-end solution. Contact a few vendors to get demos and quotes to quantify costs and gauge potential value. Contact us, and we’ll provide you with more information and assistance for free.
- If this is a DIY project or if you’re buying a low-cost solution, you’ll need to experiment on production. That is to ensure the performance impact is acceptable and determine the filters you need for a reasonable storage footprint.
- When using Core Audit, start with Proactive Forensics to understand your database activity profile and design your controls.
- Audit local access, DBAs, and privileged accounts. Local access is a popular attack vector in data theft and ransomware attacks. DBA accounts are also a high-risk vector for credential theft and privilege abuse.
- With Core Audit, control the application account. Application accounts are the target of SQL injection and other application vulnerabilities. Auditing them requires high-end capture and anomaly analysis.
- Look at other accounts you need to control. Consider new, unapproved, and dormant accounts.
- Improve control effectiveness. Perform a gap analysis to determine the effectiveness of your controls. That includes estimating false negatives (undetected events) and unknown attack potential (like zero-day). There are several methods to do that.
Final thoughts
There are multiple technologies and solutions to secure an Oracle database. They range from DIY to low-cost products and high-end solutions. It all depends on your requirements and available resources. Contact us, and we’ll help you find the best approach for your particular case.