Blue Core Research
How to audit a SQL Server database?
Learn about the different technology options for auditing SQL Server and how to gain value from the data you collect.

Introduction

SQL Server auditing is a large and complex subject with many technology options. We aim to demystify them and help you make educated technology choices, guiding you to a solution that works for you. From capturing data to getting value from it, and from DIY to high-end solutions, let’s explore the world of SQL Server auditing.

Capture

Capture is the raw data collection. All auditing starts with gathering information. After that, you need to process that data, store it, report on it, etc. Those will be addressed later in the Getting Value section.

While getting value out of the information is challenging, capturing that information in the first place is even harder, with far-reaching implications for what is available to process.

Challenges

Why is it complicated to capture audit data? It seems like it should be easy to do.

The underlying problem is that databases run massive amounts of activity at incredible speeds. Any interference with this highly tuned machine can cripple performance and slow down the database to a crawl. So, performance impact is the biggest challenge for capturing activity.

A second obstacle is that we typically must audit DBAs and the ‘sa’ account. However, these privileged accounts control the capture technologies built into the database, rendering them ineffective in monitoring privileged activity.

Another difficulty is that procedures are commonly used in SQL Server. Many applications in SQL Server don’t run SQL against the database but execute procedures. These procedures execute SQL inside the database engine. It’s important because some capture technologies (packet inspection) can only see activity outside the database, and to see the SQLs, tables, and columns, you need to capture inside the database.

We’ll discuss these limitations as we review each technology.

What data could you collect?

When auditing a SQL Server database, you have a few different types of information you might need to collect.

The most basic requirement is to capture Logins and Failed Logins. That means monitoring connections to the database (or sessions) and where they originate. That includes data like login time, logout time, user name, program, IP address, etc.

Another common requirement is SQLs. SQLs are the commands executed in the sessions. They break down into three categories: DDL, DML, and Queries (select). This breakdown is significant because Queries are not easy to capture, which limits the capture technologies we can use. Query capture is critical for detecting data theft, DML capture helps identify unauthorized changes, and DDL capture supports change control. If data theft is a risk, you need to capture queries, and any technology that can capture queries will also capture the other types of SQL.

Selective capture is a way to reduce the auditing complexity by not capturing all the SQLs. It reduces overhead and simplifies processing since there is far less data. It is far from ideal because you will miss a lot of information, but sometimes, it is the only practical way. Even if you capture everything, you should consider what you will do with the captured data. Selective capture might be a good option if you intend to discard most of it and only record a few SQLs (see more in the getting value section below).

Capturing before and after values is not a frequent requirement, except in some industries where compliance demands it (especially banking and finance). Before and after values means auditing the values that change in the database. If someone, for example, updates salaries using salary=salary*2, this type of capture will record all the original salaries and the salaries after the change. In this example, there are no values in the SQL, which explains why SQL capture doesn’t address this need.

Another capability that isn’t auditing but is related to the capture technology is blocking. Blocking allows you to prevent certain SQLs from executing. It is a powerful preventive capability that, in some solutions, is embedded into the capture engine.

Finally, there’s a simplified type of auditing using metadata snapshots. For example, you can identify changes between the list of users yesterday and today. It is a type of change control monitoring and can be implemented for configuration, users, permissions, and objects (e.g., tables and views).

Available technologies

This summary table shows which auditing technology is relevant to what data capture. As you can see, almost nothing is perfect. These technologies also have limitations and downsides, so read the details below.

Technology         | Logins | SQLs: DDL | SQLs: DML | SQLs: Queries | Before & After Values | Metadata Snapshot | Blocking
-------------------|--------|-----------|-----------|---------------|-----------------------|-------------------|---------
Do-it-yourself     |        |           |           |               |                       | ✓                 |
C2 Audit mode      | ✓      | ✓         | ✓         | ✓             |                       |                   |
SQL Server Audit   | ✓      | ✓         | ✓         | ✓             |                       |                   |
Profiler/Trace     | ✓      | ✓         | ✓         | ✓             |                       |                   |
Extended Events    | ✓      | ✓         | ✓         | ✓             |                       |                   |
Triggers           | ✓      | ✓         |           |               | ✓                     |                   |
Transaction logs   |        |           |           |               | ✓                     |                   |
Packet inspection  | ✓      | ✓         | ✓         | ✓             |                       |                   | ✓
Full Capture       | ✓      | ✓         | ✓         | ✓             | ✓                     |                   | ✓

Do It Yourself

Do-it-yourself (DIY) is a simple solution you can build in-house without complex technology or processing. For example, you can take a daily snapshot of the users in the database and identify changes from the previous day. It’s not strong security, but it will help validate change control. Aside from its limited capabilities, the main downside of this and any DIY project is the time and effort your DBA team will need to invest in implementation, maintenance, and operation.
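As a concrete sketch, a DIY user snapshot can be as simple as copying sys.database_principals into a table each day and diffing it against the previous day. The snapshot table and the audit_db database name here are assumptions for illustration:

```sql
-- DIY sketch: snapshot database users daily, then diff against yesterday.
-- audit_db and dbo.user_snapshot are hypothetical names.
INSERT INTO audit_db.dbo.user_snapshot (snapshot_date, user_name, user_type)
SELECT CAST(GETDATE() AS date), name, type_desc
FROM sys.database_principals
WHERE type IN ('S', 'U', 'G');  -- SQL users, Windows users, Windows groups

-- Users that exist today but did not exist yesterday:
SELECT user_name
FROM audit_db.dbo.user_snapshot
WHERE snapshot_date = CAST(GETDATE() AS date)
EXCEPT
SELECT user_name
FROM audit_db.dbo.user_snapshot
WHERE snapshot_date = DATEADD(day, -1, CAST(GETDATE() AS date));
```

Running the second query with the snapshot dates swapped lists dropped users the same way.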

SQL Server Auditing

Several technologies are associated with SQL Server auditing, and many are related to one another. We list them all to help clarify the landscape.

SQL Server C2 Audit is a configuration parameter in SQL Server (sp_configure ‘c2 audit mode’) that automatically records all the database activity. It has an extremely high overhead and creates massive files. It is not a realistic option for auditing SQL Server.
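For reference, C2 audit mode is toggled through sp_configure. It is an advanced option, and a restart of the SQL Server service is required for it to take effect:

```sql
-- Enabling C2 audit mode; not recommended in practice due to the
-- overhead and massive audit files described above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;
-- Restart the SQL Server service for the change to take effect.
```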

SQL Server Audit uses Extended Events under the hood. It is a wrapper over the same mechanism with similar pros and cons.
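A minimal SQL Server Audit setup looks like the following sketch; the audit name and file path are assumptions for illustration:

```sql
-- Create a server audit writing to files, plus a specification
-- that records successful and failed logins.
CREATE SERVER AUDIT [logins_audit]
TO FILE (FILEPATH = N'C:\audit\');
GO
CREATE SERVER AUDIT SPECIFICATION [logins_audit_spec]
FOR SERVER AUDIT [logins_audit]
ADD (SUCCESSFUL_LOGIN_GROUP),
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT [logins_audit] WITH (STATE = ON);
```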

SQL Server Profiler or Trace is an older feature in SQL Server that was replaced by Extended Events. SQL Server trace has a higher performance impact and is deprecated (which means Microsoft will stop supporting it at some point). There is no reason to use it nowadays.

Extended Events is a built-in SQL Server capture capability. It is the best capture option that does not incur additional licensing costs. It is common in low-cost solutions (see more later), and you can use it in a DIY project. However, it has limitations that include:

  • It does not capture the client’s IP address. If IP addresses are a requirement, this is not the solution.
  • DBAs and privileged accounts can disable it, so it has limited effectiveness in monitoring them.
  • The overhead can be high and depends on the database activity profile. Even using only the ring buffer (a high-performance in-memory capture), a worst-case scenario can reach over 1,200%. While such a high overhead is unlikely, a significant one is highly plausible. It is only in low-activity databases that the impact would be insignificant.
  • Recording the activity can require large files. Depending on the type of storage, that can further increase the performance impact.
  • Consider selective auditing. By auditing only certain activities, you can significantly reduce the file sizes. From a performance perspective, this could reduce the overhead by no more than half because checking the filter conditions also has an impact. The downside is that setting up the selective filters is tedious and time-consuming.

The bottom line is that Extended Events can be a good capture solution for low-activity databases. That is especially true when using selective auditing to record a handful of SQLs to comply with minor compliance requirements (see more later).
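For illustration, a selective Extended Events session might record only batches executed by the ‘sa’ account to a file target. The session name, filter, and file path below are assumptions, and any real deployment needs careful overhead testing:

```sql
-- Sketch: selective capture of 'sa' batches via Extended Events.
CREATE EVENT SESSION [audit_sa_batches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.server_principal_name)
    WHERE (sqlserver.server_principal_name = N'sa'))  -- selective filter
ADD TARGET package0.event_file (SET filename = N'C:\audit\sa_batches.xel');
GO
ALTER EVENT SESSION [audit_sa_batches] ON SERVER STATE = START;
```

Note that the available actions include the client host name and program but, as mentioned above, not the client IP address.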

Triggers

Triggers are small code fragments the database can execute in response to activity.

There are three kinds of triggers. Logon triggers fire when someone logs in, and DDL triggers fire when a DDL is executed. You can use those two to audit that type of activity. These events are also infrequent and unlikely to cause a significant performance overhead. While possible, auditing sessions and DDLs this way is uncommon.

The last type of trigger is a DML trigger. It fires when data changes in the database. These triggers do not see the SQL but can capture the before and after values.
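A sketch of a DML trigger recording before and after values, using the salary example from earlier; the table, column, and trigger names are assumptions for illustration:

```sql
-- Hypothetical audit table for salary changes.
CREATE TABLE dbo.salary_audit (
    emp_id     int,
    old_salary money,
    new_salary money,
    changed_by sysname   DEFAULT SUSER_SNAME(),
    changed_at datetime2 DEFAULT SYSDATETIME()
);
GO
CREATE TRIGGER trg_salary_audit ON dbo.employees
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the before values, "inserted" the after values.
    INSERT INTO dbo.salary_audit (emp_id, old_salary, new_salary)
    SELECT d.emp_id, d.salary, i.salary
    FROM deleted d
    JOIN inserted i ON i.emp_id = d.emp_id
    WHERE i.salary <> d.salary;
END;
```

An update like salary=salary*2 would fire this trigger once and record one audit row per changed employee.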

The problem with triggers is that they run as part of the SQL and increase the time it takes to finish. Because DML triggers usually record information to other tables, they tend to cause a high overhead by adding another DML to the current DML. In other words, DML triggers on highly transactional tables cause a significant performance impact.

Beyond overhead, the downside of triggers is that they are controlled by DBAs and others with appropriate permissions and can be disabled by anyone who wishes to bypass them.

It is also important to note that there are no triggers for queries. Triggers are irrelevant when it comes to data theft risk.

Transaction Logs

Transaction logs are part of how SQL Server works. They contain every change in the database and are used for crash recovery. Beyond that primary function, SQL Server can process these logs to extract all the before and after values. That is useful for security, replication, and other purposes.

Three mechanisms in SQL Server process the transaction logs in a way that is relevant for security:

  • Change tracking is the older mechanism; prefer the newer Change Data Capture.
  • Change data capture (CDC) stores information about before and after values. The downside is that it requires some administration and regular maintenance, and the output is not trivial to understand. However, it is probably the best of the three.
  • Temporal tables are a method for tracking table content over time. The idea is to add a time dimension to the table so you can run queries referring to when. For example, what was John’s salary on January 17th, or what did the table look like on a particular day and time? The downside is that depending on the table size and frequency of change, this can require a lot of space and resources.
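As an example of the CDC option, enabling it on a hypothetical dbo.employees table takes two procedure calls (the database and table names are assumptions, and the capture job relies on SQL Server Agent):

```sql
-- Sketch: enable Change Data Capture for one table.
USE MyDatabase;  -- assumed database name
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'employees',
    @role_name     = NULL;  -- no gating role restricting access
-- Changes, including before/after images for updates, then appear in
-- the cdc.dbo_employees_CT change table.
```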

The good thing about leveraging the transaction logs is that processing is performed after the database activity finishes. While the processing takes resources from the machine, it happens in parallel and does not slow down database transactions.

While these are all part of SQL Server and require no additional licensing, they are controlled by DBAs, and you will probably need DBA assistance to access them. The information in any of these mechanisms takes effort to consume and is not currently integrated into your application or any other user interface.

Packet Inspection

Packet inspection technology deciphers communication going in and out of the database to identify the SQLs. There are two implementations: as a network sniffer outside the machine or with a kernel agent inside it.

As a network sniffer, this technology doesn’t impact database performance but can’t see local communication. That’s usually unacceptable. Most deployments use the local agent. That allows the solution to capture local communication, but it generates a high network overhead.

Both deployment methods have challenges with network encryption, but one of the most significant limitations is the lack of visibility inside the database. Since stored procedures are popular in SQL Server, it is common not to know the SQL, table, or column.

The lack of internal visibility is also a challenge because SQL Server receives batches, not SQLs. A batch is a T-SQL block with multiple SQLs or a small program. Breaking down batches and inferring what they do inside the database ranges from challenging to impossible.

Some solutions based on packet inspection can also block unwanted activity. However, there is usually a timing problem that allows offensive requests to go through.

Full Capture

Core Audit Full Capture is a next-generation auditing technology that connects directly to the SQL engine. It was designed from the ground up to address the unique challenges of security and SQL Server auditing. The information quality is similar to the data provided by Extended Events. However, Full Capture does not have the overhead, and DBAs cannot bypass it.

Full Capture does what you’d expect from a capture technology: see everything without impacting the database. That gives you 100% visibility with less than 3% overhead.

Full Capture can also capture the before and after values by integrating with low-overhead triggers or with SQL Server replication (using transaction logs), so you have both alternatives with different pros and cons.

Finally, Full Capture has blocking capabilities that can return errors or warnings for offensive SQLs. Unlike packet inspection, Full Capture has no timing or other security holes.

Capture Conclusion

The most challenging part of capture is queries. Usually, it is also the most important since it relates to data theft. Once you have queries, you have most other information. If you also need before and after values, you can achieve that in SQL Server without a user interface or as part of a solution that supports it.

That means you have four basic options:

  • Do-it-yourself (DIY) – Use Extended Events to capture activity and process it yourself (see below). You can also add triggers or CDC to record before and after values. There are no licensing costs, but it is very time-consuming to build and maintain. We do not recommend it as it tends to be a never-ending project with unsatisfactory results.
  • Low-cost solution – A solution that uses Extended Events or the Profiler. These solutions take care of storage and reporting but rely on the database to perform the capture. DBA auditing is ineffective due to the capture method, and these solutions cannot scale to record a large volume of SQLs. That may be passable for some compliance requirements but nothing more: quality security requires effective DBA monitoring, higher-volume recording of sensitive data access, and the anomaly detection needed for application control.
  • Packet inspection solution – A high-end auditing solution that looks at database packets. It can handle much more volume than the low-cost solutions but provides reports along the same lines. In some ways, these are worse than the low-cost solutions since they cannot see internal database activity. These solutions are reasonable for compliance but deliver limited security.
  • Core Audit – A high-end solution that can capture everything you need and provide powerful reports, alerts, and analysis. There are no limitations or security holes, and it delivers the most comprehensive security.

Getting Value

Capturing audit data is only the first half of the problem. Once you have the data, you must convert it into meaningful information. A repository with billions of SQLs is useless by itself. This section briefly discusses what can be done to gain value from all this data.

Compliance Reporting. A basic expectation for database auditing is to achieve compliance. To do that, you need reports on logons, failed logons, privileged activity, access to sensitive data, and more. The challenge is deciding what to record and how to create meaningful reports from the data. Effective reporting is based on three basic principles:

  1. The subject of the report must be realistic in your environment. This type of reporting works for subsets of activity that are high-risk and low-volume, like DDLs or DBAs running DMLs.
  2. Find the correct way to report on it, often in aggregate form, like counting the number of logins by each user instead of listing them.
  3. Tune the reports without losing security value. For example, exclude activity from a monitoring script, but ensure what you exclude can’t pose a security risk.
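As an example of the aggregate principle, a DIY repository could count logins per user per day rather than listing every session. The repository table name is hypothetical:

```sql
-- Aggregate report: logins per user per day.
SELECT user_name,
       CAST(login_time AS date) AS login_date,
       COUNT(*) AS login_count
FROM audit_repo.dbo.login_events  -- hypothetical repository table
GROUP BY user_name, CAST(login_time AS date)
ORDER BY login_date, user_name;
```

A sudden jump in a user’s daily count, or a new user appearing in the output, is what reviewers scan for.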

Compliance reporting isn’t technically challenging. It just requires a repository, a reporting engine, and a little time. In Core Audit, you have a highly scalable repository, an easy-to-use reporting engine, and wizards with built-in reports to get you started. More importantly, you have proactive forensics that saves you time by helping you figure out what to put in the reports.

Anomaly Analysis. Most database activity doesn’t lend itself to compliance reporting and requires something that can scale to massive volumes. That requires a special type of repository and automation that can locate unusual activity in it. These only exist in high-end solutions.

Core Audit can alert on new active users; on users connecting from different applications or IPs, or active at unusual times; on unusual SQLs accessing sensitive information; on unusual SQL volumes; on potential SQL injection attacks; and much more. There are many ways to slice, dice, compare, and contrast the activity to find anomalous behavior.

Reactive Forensics. Reactive forensics means obtaining additional details about security events. These events can be a suspicious line in a report, a potential breach, an indication from another source, and more. Whatever the event, you always need to know more about what happened.

The challenge is having a repository with the information. In most solutions, you are limited to the events you choose to record. In Core Audit, aside from the compliance repository, you also have the security repository that automatically stores information about every SQL in your database. While the security repository is less detailed, it ensures you’ll always be able to know what happened.

Proactive Forensics. This type of forensics gives you visibility into all the database activity. It has multiple objectives, but the most important is developing controls. You cannot design effective compliance reports or anomaly alerts without a solid understanding of what’s happening in the database. You cannot even understand the threats you’re up against.

You cannot secure what you cannot see, and proactive forensics is a highly recommended first step to protect your SQL Server database. But it’s also central to identifying poor security practices, attacks and breaches, changes in behavioral patterns, gaps in the controls you deployed, and more.

Core Audit Proactive Forensics includes graphical analysis tools to visualize the activity profiles from various dimensions. For example, it has time-based stacks, tree graphs, Sunburst charts, Sankey diagrams, network graphs, and more.

Best Practices

Here are some basic best practice steps to help you get started:

  1. Define your drivers. Are you looking to comply with a particular regulation or secure your database against some threats? Why are you doing this project, and what do you aim to achieve? It’s hard to succeed without objectives.
  2. Identify available resources and define the frame. Is there a timeline for completion? Is there a budget you can use? Who will be part of this project, and how familiar are they with SQL Server? Knowing your resources lets you define realistic goals.
  3. What are your capture requirements? Is data theft a risk, and do you need to capture queries? Are you worried about unauthorized changes to the data? Do you need to capture before and after values? Is privilege abuse a concern? This stems from your risks, security needs, and compliance requirements.
  4. What value do you expect to gain from the data? Do you need compliance reports or anomaly alerts? Are you worried about SQL injection? This depends on the level of security and compliance you aim to achieve.
  5. Choose an approach. Your resources and objectives will quickly guide you between a DIY, a low-cost product, or a high-end solution. Contact a few vendors to get demos and quotes to quantify costs and gauge potential value. Contact us, and we’ll provide you with more information and assistance for free.
  6. If this is a DIY project or if you’re buying a low-cost solution, you’ll need to experiment on production. That is to ensure the performance impact is acceptable and determine the filters you need for a reasonable storage footprint.
    When using Core Audit, start with Proactive Forensics to understand your database activity profile and design your controls.
  7. Audit local access, DBAs, and privileged accounts. Local access is a popular attack vector in data theft and ransomware attacks. DBA accounts are also a high-risk vector for credential theft and privilege abuse.
  8. With Core Audit, control the application account. Application accounts are the target of SQL injection and other application vulnerabilities. Auditing them requires high-end capture and anomaly analysis.
  9. Look at other accounts you need to control. Consider new, unapproved, and dormant accounts.
  10. Improve control effectiveness. Perform a gap analysis to determine the effectiveness of your controls. That includes estimating false negatives (undetected events) and unknown attack potential (like zero-day). There are several methods to do that.

Final thoughts

There are multiple technologies and solutions to secure a SQL Server database. They range from DIY to low-cost products and high-end solutions. It all depends on your requirements and available resources. Contact us, and we’ll help you find the best approach for your particular case.

Ask a Question

If you have a question or a comment, please let us know. We’ll be happy to hear from you.