Database Security Technology: Don’t Stay Behind the Curve
Learn about the different technology generations in database security to understand the protections you have and determine the ones you need. What are the limitations of each technology, from blind spots to performance problems?

We spend millions on firewalls and EDR protecting the perimeter and endpoints. But the actual prize, your data, often sits in a vault protected by a 25-year-old padlock. Our information is what attackers are after, yet it is what we protect the least. In a house made of doors, we obsess over the doors, neglecting the vault – the very thing the intruders are coming for.

If you have better visibility into employees’ phishing emails than the SQL commands hitting your most sensitive tables, you should consider realigning your security priorities. The attention you are pouring into perimeter and endpoint protection is better spent on your data, housed in databases protected by outdated, partial, and overpriced solutions.

Have you recently reviewed the technology used to protect your databases? You may be far more aware of the latest antivirus than the latest advancements in database protection. However, staying ahead of the threats means you cannot remain behind the technological curve. Let’s do a quick checkup. You can compare the state of your defenses with current database security tech.

You Can’t Protect What You Can’t See

Visibility is the bedrock of security. Many teams build their defenses on theory rather than reality, crafting alerts for what they imagine is happening while unaware of what is actually occurring inside their systems.

Test your posture: Do you know the users and programs that connected to your database in the past week? Do you know what they did? If you don’t, do you think that is critical information you should have? Without visibility, you are likely to have activity without protection (blind spots) and defenses against imaginary activity that doesn’t exist. How would you rate the effectiveness of your security between a solid 10 and a house of cards?

Data Capture Drives Our Defenses

Beyond the visibility requirements, data drives everything in a security solution. It drives reports, alerts, blocking actions, and more. Everything in an activity control solution relies on knowing the activity. You cannot alert on activity you haven’t seen, nor can you block it.

While providing historical visibility to everything that ever happened is more technologically challenging (see below), some of the core technological requirements are the same.

Technological Data Barriers

Achieving 100% visibility is a monumental technical challenge that older solutions consistently fail to deliver. The capture technology is the foundation for visibility and is the most expensive technology in activity control solutions. That’s also where the age of the product truly shows. Legacy vendors never replace their core capture engines; they only change the UI over their 25-year-old foundations. Those solutions still use, today, capture tech from a quarter of a century ago.

Nowadays, we divide database activity control into three generations:

  • 1st gen: Relies on built-in database auditing capabilities for captured data.
  • 2nd gen: Captures communication packets. Developed in response to the performance overhead of native auditing.
  • 3rd gen: Captures activity from inside the SQL engine. Developed in response to the lack of visibility and ineffective security.

All of these technology generations must overcome three primary hurdles to deliver effective security. We’ll start with a table and general explanations:

| Feature | 1st generation: Native Auditing | 2nd generation: Packet Capture | 3rd generation: SQL Engine Capture |
| --- | --- | --- | --- |
| Performance Overhead | High | High network load | Low |
| Visibility | Extremely limited (by performance) | Limited (by capture tech) | Complete and airtight |
| Storage | Extremely limited (by performance) | Limited and expensive (by storage tech) | Dual repository: efficient primary + 360° alternate |
| Usage & Objective | Low-end solutions. Pass basic audits. | Mostly compliance (compliance checkbox) | Compliance + Breach Detection & Response |

Here are some more details; the deep technical dive is toward the end of the article:

  1. Performance Impact: Databases are highly tuned machines, and capturing the billions of SQLs they execute without impacting their performance is a massive challenge.

    Traditional auditing solutions (1st gen) rely on built-in database capabilities that are notorious for a crippling performance impact. We cannot make a trade-off between “fast” and “safe” – we must have both. Therefore, all future generations are driven by the need to capture the activity without impacting performance. Both 2nd and 3rd generation technologies can monitor at scale without a significant footprint, but the 3rd generation fares better.
  2. Blind Spots: Databases are complex machines offering many pathways to connect and run activities. Since we cannot predict the path of an attack, we must see everything that happens everywhere. This becomes even more vital since smart attacks deliberately seek the shadows.

    1st gen products could, in theory, see everything. However, the performance impact meant seeing everything was not realistic, and we had to settle for a minuscule subset of the activity. Moving from the 1st to the 2nd generation, we gained performance but lost visibility. 2nd-gen solutions have significant limitations with local activity (inside the box), encrypted traffic, and internal SQLs (like anonymous blocks, stored procedures, and triggers). We’ll explain the details later, but in the world of cybercrime, a blind spot is an invitation, and 3rd-gen solutions can always see everything, closing that door. Solving the visibility problem without the performance hit allows 3rd-gen solutions to offer continuous full capture with negligible database impact. That is the primary reason for the evolution of this technology.
  3. Storage: Moving beyond capture to visibility and advanced analysis requires keeping a record of everything that was captured (everything that ever happened). Capturing activity without recording it is of limited use.

    1st and 2nd gen solutions offer only one option: recording individual SQLs. These repositories also require significant disk space, which necessitates rules that allow you to pick and choose what to record. 3rd gen solutions come with a dual repository strategy: a primary repository that records individual SQLs (but more efficiently) and an alternate repository with aggregate data about all the activity. The primary repository is more efficient because it eliminates duplicate texts, reaching a capacity density of 1 billion SQLs in 32 GB of disk space. However, the alternate aggregate repository is the big differentiator. For a few megabytes per day, this secondary repository ensures you can always know what happened. It drives capabilities that deliver remarkable visibility, drill-downs, and analysis. The dual repository isn’t just a way to save disk space; it completely transforms the way we do security.

    If a breach happened 6 months ago, this tech is your only way to “time travel” and know what the application did on a Tuesday at 2 AM. Not to mention that it’s the only tech that can alert you in time, so that you don’t find out only 6 months later.
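The dual-repository idea can be illustrated with a toy sketch. This is a conceptual illustration only, not Core Audit's actual implementation: store each distinct SQL text once, keep a detailed execution log that references it, and maintain cheap aggregate counters on the side.

```python
# Conceptual sketch (not Core Audit's actual implementation) of a "dual
# repository": distinct SQL texts are stored once, and a small aggregate
# repository summarizes all activity per hour.
from collections import defaultdict
from datetime import datetime

class DualRepository:
    def __init__(self):
        self.sql_texts = {}        # digest -> SQL text, stored once (deduplicated)
        self.executions = []       # (timestamp, user, digest) -- primary repository
        self.aggregates = defaultdict(int)  # (hour, user, digest) -> execution count

    def record(self, ts: datetime, user: str, sql: str):
        h = hash(sql)  # illustrative; a real system would use a stable digest
        self.sql_texts.setdefault(h, sql)
        self.executions.append((ts, user, h))
        # Aggregate repo: a few counters per hour instead of one row per SQL
        self.aggregates[(ts.strftime("%Y-%m-%d %H"), user, h)] += 1

repo = DualRepository()
for _ in range(1000):  # the same statement executed 1,000 times...
    repo.record(datetime(2024, 6, 4, 2, 15), "app",
                "SELECT * FROM orders WHERE id = :1")

print(len(repo.sql_texts))   # ...is stored as a single text
print(len(repo.aggregates))  # ...and summarized by a single hourly counter
```

The aggregate side is what enables the "time travel" described above: even if detailed records are eventually purged, a few counters per hour can still answer what an account did on a Tuesday at 2 AM months ago.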

Advanced modern 3rd-gen solutions are the only ones that address every requirement, delivering powerful defenses. However, most products are geared towards compliance with partial and weak security.

3rd-gen SQL Engine capture achieves the best of both worlds without compromise. It is the architecture we chose to build Core Audit upon, as we believe it’s the only way to meet the security requirements of modern enterprises. It offers visibility that 1st-gen could only aspire to, and a negligible performance impact lower than 2nd-gen. Combined with a powerful storage engine that lets us harness all that data, 3rd-gen is the best-in-class solution.

Zero Trust, Always Verify

Having data is only half the battle. We then have to use it to gain security. The first question is what kind of security do we expect to achieve? Do we want to trim down the requirements and ignore large portions of the activity, or do we want to secure everything?

It may seem like a trick question, but let’s explain. Some people believe that with billions of SQLs, they can’t secure everything. They think they must focus on a handful of things, and that even those will be too much to handle. That’s the old perspective. Nowadays, our objective is to secure everything. Not to assume someone or something is safe because none are. This change in core requirements and expectations drives a vastly different approach to activity control.

That is the question of Zero Trust.

The traditional perspective assumes certain authenticated users are who they say they are and that what they do is safe. They implicitly trust both the identity (the username is the person on the other end of the connection) and their behavior (they are “good people”). Modern security assumes the opposite.

In a “Trust Nothing” mindset, we cannot trust the identity or the behavior. It isn’t just about the internal threat, like suspecting your DBAs and application managers. It is about the reality of authentication – we can never be sure who is on the other side of the line. It means that we monitor the account, not just the person. That is the reality of the threat landscape:

  • Prime Targets: DBA and application accounts are the objectives, and attackers aim to impersonate them. These individuals also pose the highest risk of privilege abuse.
  • Credential Theft: Stolen credentials aren’t a ‘maybe’ – they’re a primary attack vector. For example: keyloggers, accessing files used by administrators to store passwords, or retrieving the application password from the application. Obtaining valid credentials to connect to a database is not trivial, but not impossible either. That means you cannot assume that a DBA’s login equals the DBA’s intent. Luckily, stolen credentials usually suggest the connection originates from a different activity source (machine or application), and those you can identify through simple session controls.
  • Compromised Endpoints: A trusted DBA desktop could be under the control of a remote hacker. Attackers can connect from the same machine and application that the DBA uses, rendering them indistinguishable from the real person. This attack vector can only be identified by inspecting the actions taken inside the connection (the SQLs).

Even if you trust your internal users (which you shouldn’t), you cannot trust that a connection to the database originated from that person. A malicious attack would, eventually, manifest through a valid database account. We must, therefore, control every SQL in every account since that’s how we are attacked.

| Subject | Question | Looking at | Control type | Exploited by |
| --- | --- | --- | --- | --- |
| Identity | Who are you? | Username | Authentication | Stolen credentials |
| Context | Where are you coming from? | Remote Machine & Program | Session Control | Abuse or Endpoint compromise |
| Behavior | What are you doing? | SQL | Rules & Anomalies | None |
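To make the "context" control concrete, here is a minimal sketch of a session control. The account names and sources are hypothetical: the idea is simply to flag any connection whose (account, machine, program) combination was never seen during a baseline period.

```python
# Hypothetical sketch of a simple session control: compare each new
# connection against the (account, machine, program) combinations seen
# during a baseline period, and flag anything new.
baseline = {
    ("dba_jane", "dba-ws-01", "sqlplus"),
    ("app_user", "app-srv-07", "java"),
}

def check_session(account: str, machine: str, program: str) -> str:
    if (account, machine, program) in baseline:
        return "ok"
    return f"ALERT: unusual session source for {account}: {machine}/{program}"

# A DBA login from an unfamiliar laptop -- the classic stolen-credentials tell
print(check_session("dba_jane", "unknown-laptop", "sqlplus"))
```

Note that this control catches stolen credentials used from a different source, but not a compromised DBA desktop; that is why the "behavior" row (inspecting the SQLs themselves) is also required.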

Protecting everything seems like a monumental task, but it is the only way to operate security today.

Test your posture: Could you detect an attack if an attacker used a compromised DBA desktop to connect to the database? Would you know if a DBA chose to steal data? Will you get an alert about data exfiltration from inside the database server by someone who penetrated the machine? These are not minor fringe attack vectors; they are core components of the threat landscape. If you can’t see the activity or cannot analyze it well, you are unlikely to stop such attacks or even know they occurred.

Zero Trust and Capture

Beyond the challenge of securing billions of SQLs, Zero Trust poses strong requirements for activity capture:

  • Capture Independence – if capture depends on anyone with database access, then we are implicitly trusting that someone. Specifically, trusting DBAs to manage database activity capture requires us to trust DBAs. This is a major problem with 1st-gen.
  • Blind Spots – a blind spot in capture is exactly what attackers are looking for. It’s like a dark corner not covered by a security camera. Especially when some of the people we shouldn’t trust are skilled database professionals who are intimately familiar with our defenses. Circumventing the capture tech should be impossible, and that is a major problem with 2nd-gen solutions.
  • Constant Flow – capture must ship data off the machine as quickly as possible. The audit server must receive a stream of all database activity in near-realtime. Otherwise, it creates a vulnerability gap that lets attackers intervene or manipulate the data. That is one of the problems with 1st-gen solutions.
  • Airtight Blocking – similarly to the blind spot problem, circumventing blocking should also be impossible. If there’s a policy that prevents a DBA from touching a sensitive table, the DBA should not be able to do that. Period. That is a major concern with both 1st-gen and 2nd-gen solutions.

It may seem obvious, but given the dominant technologies in the market, it’s necessary to emphasize that a solution with obvious and well-known blind spots or weaknesses is not an effective security solution. While not completely useless, such solutions are something you should definitely seek to replace.

The Application Gap

The application is a common attack vector and often proves to be the weakest link. SQL injection remains a top threat, and application flaws are the prime gateway to database penetration.

Securing the application software is a never-ending task. Programming guidelines and code reviews are essential endeavors, but they deliver limited benefits. We must, therefore, protect the application through the database as well. Ignoring 99% of the database activity because it’s “too hard” or because we don’t know how to do it is a strategic blind spot we cannot afford.

Especially because with the right technology, it’s not hard at all. Application Behavioral Analysis can compare the application behavior today with the past months and alert us to any changes in behavior. That is not the only use for anomaly analysis, but it is the most critical one.

Test your posture: Will you be aware of a SQL injection attack from the application? That is one of the most well-known application weaknesses, but not the only one. If you cannot detect a primary attack vector like that, do your defenses even qualify as effective?

Security Posture Tests

So far, we have introduced several tests to evaluate your current level of security. A proper security system should be able to handle all of them. We didn’t even go into complex attacks. These are primary attack vectors that any database defense must address.

You must be able to say with confidence that yes, you know what’s happening in your database. And yes, you will detect an attack from a compromised DBA desktop. And yes, you will detect DBAs abusing their privileges. And yes, you will detect a SQL injection attack.

Making these claims is vital because that’s exactly how attackers get your data. That is not a theoretical exercise. It is the precise battle you’re facing. It is also the reason most breaches are detected by third parties: security personnel are unable to detect most attacks.

If you are looking for a solution that hits these 3rd Gen benchmarks, this is exactly what we focus on at Blue Core Research. Protect your database properly, so you are not easy prey, destined to become another statistic.

Technological Barriers to Secure Everything

But let’s set our sights higher. Not just targeting specific well-known attacks. We don’t want to do the bare minimum – we want to do the maximum. That means we want to secure everything. To control every single SQL in the database.

As you may imagine, setting up controls to secure every SQL in every database connection is not trivial. It requires a combination of methodologies and supporting technologies. But with the right tools, we can protect each of those billions of SQLs.

The exact approach must be tailored to each database, but it generally involves a combination of these methodologies:

  1. Session control: As a general best security practice, it’s good to control the accounts, programs, and IPs that connect to the database. Behavior that strays from regular patterns is an easy red flag that shouldn’t be ignored.
  2. Accounts that don’t require sensitive data access: Some accounts, like DBAs, shouldn’t access sensitive data (or the data schema at all). Alerting or blocking such access is a simple control that can almost eliminate the risk from these accounts.
  3. Anomalous sensitive data access: Most sensitive data access occurs in predictable patterns. It’s always the same SQLs and, usually, from the same source. Identifying a change in sensitive data access behavior is an essential indicator of a potential attack.
  4. Application behavioral analysis: Applications run billions of SQLs, but those SQLs repeat themselves. By observing application behavior, solutions can easily flag changes in activity patterns. Any application attack or attempted attack will, inevitably, trigger this alert.
  5. DDLs: Validating that DDLs went through a change control process is a fundamental control against unauthorized metadata changes. While most DDLs are legitimate, unauthorized ones require investigation.
  6. Row counts and more: Solutions offer a lot more controls, such as monitoring how much data is extracted, by whom, when, using how many SQLs, and more.

The bottom line is that there’s no magic bullet for securing all the SQLs in the database. It’s always a matter of divide and conquer. But we need sufficient technologies to handle every subset of that division.
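As an illustration of methodology 4 above, here is a minimal sketch of application behavioral analysis. The normalization rules are deliberately simplistic: reduce each SQL to its literal-free "shape", learn the shapes the application normally runs, and flag any shape never seen before.

```python
# Illustrative sketch of application behavioral analysis: reduce SQLs to
# their constant-free "shape", learn the shapes the application normally
# runs, and alert on any shape never seen before (e.g., SQL injection).
import re

def normalize(sql: str) -> str:
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> ?
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip().lower()

baseline = {normalize(s) for s in [
    "SELECT name FROM customers WHERE id = 42",
    "SELECT name FROM customers WHERE id = 97",   # same shape as above
    "UPDATE orders SET status = 'shipped' WHERE id = 7",
]}

def is_anomalous(sql: str) -> bool:
    return normalize(sql) not in baseline

print(is_anomalous("SELECT name FROM customers WHERE id = 1234"))        # False
print(is_anomalous("SELECT name FROM customers WHERE id = 1 OR 1 = 1"))  # True
```

Because application SQLs repeat endlessly, the baseline of distinct shapes stays small even across billions of executions, which is what makes this control practical at scale.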

Technology Deep Dive

Now, let’s pop the hood and peek a little inside. Let’s understand the barriers in these technologies and why there are so many of them.

1st Gen Native Auditing

Native auditing is a general term for functionality that comes built into the database for free. Simply put, asking the database to record what happens inside it. For example, Native Oracle or SQL Server Auditing. Native auditing is notorious for having a crippling performance impact, and that impact is a result of 3 converging implementation hurdles:

  • Additional needed data – when databases record activity, they need to record additional information that isn’t usually available in the function that performs the recording. For example, recording an audit line of a SQL execution requires the database to look up other relevant information, like the username and the program.
  • Disk I/O – recording activity means writing to disk. If the records are written to a database table, that also means going through the various ACID mechanisms the database uses to avoid corruption.
  • Inline activity – databases perform all this extra recording activity as part of the transaction. In other words, the database transaction must wait for the database to collect the needed data and write everything to disk. Every recorded transaction suffers a significant delay, and sometimes even transactions that aren’t recorded do too.

Additionally, 1st-gen solutions depend on a DBA to configure and maintain the relevant auditing features in the database. It means that 1st-gen is extremely limited in controlling DBA activity. To audit DBAs, the capture must be independent of them. You cannot effectively control the accounts used to control the capture mechanism.

Bottom line: 1st-gen capture technology can only be used for a small fraction of the activity, and even then, with a measurable performance impact and limited effectiveness. It is also not ideal for controlling DBA accounts. However, the benefit of 1st gen is that it can tap into any database activity, something only the 3rd gen can compete with.

2nd Gen Packet Capture

Database packet capture technology was originally developed in the late 1990’s for database performance monitoring. In the early 2000’s, it was repurposed for database compliance. That was the best database security tech at the time, and it is still sold today. The main current vendors are Imperva and IBM Guardium (both founded in 2002). The original technology was a pure network sniffer, but kernel drivers were quickly bolted onto the database server to “sniff” local activity. The primary technological hurdles are:

  • Encrypted Activity – Looking at network packets between the database client and server is impossible when the activity is encrypted. The challenge is severe because even if a database doesn’t require network encryption, it will, by default, accept requests to encrypt the connections. That means that requesting an encrypted connection allows an attacker to bypass the solution.

    For some databases, the solutions can use a man-in-the-middle approach using the database encryption key. They decrypt and re-encrypt the traffic in both directions. This method has multiple limitations, but the most crippling is that it doesn’t work for local activity (see below).
  • Local Activity – As a network sniffer, a solution cannot see internal traffic inside the database server. Seeing local traffic is vital because DBAs often connect locally and because some applications run locally. Local connections are also a common attack vector for hackers who penetrate the database server. For example, double extortion ransomware attacks breach the database and encrypt the data files. That means the attacks come from within the database server and are invisible to a sniffer.

    To capture local activity, packet capture solutions install a kernel driver on the database server. Beyond the stability concerns, this poses three primary challenges: (1) sending all the local traffic to the solution’s appliance takes significant network bandwidth and often requires a dedicated network card; (2) the kernel driver cannot encrypt/decrypt all the activity, and, therefore, local encrypted activity is invisible to the solution; and, (3) blocking activity is a major challenge (see below).
  • Internal Activity – Databases are complex machines with a whole internal universe. For example, they can internally run anything from short ad-hoc scripts to entire programs. This makes security much harder because it’s easy to run a small program inside the database that looks innocuous, but performs malicious activity. Without visibility into what happens inside the database, it’s impossible to consider it secure. Internal Activity, however, is something packet capture technologies can never see.
  • Blocking Activity – Some customers wish to block, not just report, alert, and investigate. There’s a massive difference in the value propositions, which is the subject of another article. But when it comes to blocking, packet capture solutions pose a unique dilemma to their customers.

    The difficulty is not so much with remote network connections as with local ones. The kernel driver that captures local activity can usually operate in one of two modes: (1) send the activity to the appliance and wait for a response before letting the packet through, which creates impossible latency in the communication; or (2) send the packet through and, if a blocking decision comes back later, disconnect the session. Neither option is good, and both cripple the value customers expect from blocking functionality.

Bottom line: 2nd-gen solutions are not ideal, but they used to be the best we could expect. With a complex myriad of holes and limitations, they offer a minimal level of security and mostly cater to compliance-seeking audiences. However, there is better tech now, and it’s time to upgrade.

3rd Gen SQL Engine Capture

There were several attempts to overcome the visibility wall in the late 2000s (like Hedgehog and IDB). The most successful was the SQL Engine Capture we chose to use in Core Audit. This technology relies on internal database facilities that enable the solution to capture everything that goes through the SQL engine. It does so without native auditing and at less than 3% overhead.

The trick lies in high-speed, in-memory collection of micro fragments of information. These fragments are later assembled on the audit server into complete audit records. This type of capture avoids disk IO and is parallel to the transaction, so it doesn’t slow it down. Collecting available micro fragments eliminates the need to look up additional information inside the database, making everything run blazing fast.
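As a conceptual illustration only (the actual capture internals are proprietary), the fragment-assembly idea looks roughly like this: the database side emits tiny (session, field, value) fragments with no lookups and no disk I/O, and the audit server later stitches fragments sharing a session id into complete audit records.

```python
# Conceptual sketch only -- not the real, proprietary capture mechanism.
# The database side emits tiny fragments; the audit server assembles
# fragments that share a session id into complete audit records.
from collections import defaultdict

fragments = [  # (session_id, field, value) as they might arrive, out of order
    (101, "user", "app_user"),
    (102, "user", "dba_jane"),
    (101, "program", "java"),
    (101, "sql", "SELECT * FROM orders"),
    (102, "sql", "DROP TABLE audit_log"),
    (102, "program", "sqlplus"),
]

def assemble(frags):
    # Group fragments by session id into full audit records
    records = defaultdict(dict)
    for session_id, field, value in frags:
        records[session_id][field] = value
    return dict(records)

audit_records = assemble(fragments)
print(audit_records[102])  # user, program, and SQL reunited into one record
```

The key point of the design is where the work happens: the expensive part (grouping and assembling) runs on the audit server, while the database only hands off fragments it already has in memory.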

3rd-gen tech eliminates the visibility-performance trade-off that bogs down 1st-gen and 2nd-gen. That’s why we use it in Core Audit. It delivers essential capabilities required by the security landscape that 1st and 2nd gen tools cannot reach by design:

  • Visibility – 3rd gen looks at the SQLs as they flow through the SQL engine. In other words, it sees the SQL when the database does. That means it sees encrypted activity, local activity, internal activity, and everything else the SQL engine executes.
  • Performance – Since the database already did the heavy lifting of decrypting the traffic and parsing the packets, this capture only needs to extract the SQL statement. That means a tiny CPU and network footprint. Additionally, the integration doesn’t use native auditing, so the database doesn’t perform any additional work to collect, process, or manage information. The database operates exactly as it normally does in an optimized execution mode, but fires small events that 3rd gen can convert into a full auditing stream.
  • Blocking – by evaluating SQL blocking rules in an extension of the database engine, 3rd gen essentially extends the built-in database security functionality, providing a custom blocking facility in the database engine.

Limitations – since the technology runs on the database machine, it requires installation on the database server. That means that this solution cannot be used in environments where the customer has no control over their own database, such as certain cloud environments. In those environments, 3rd gen must “downgrade” to use other mechanisms, like 2nd gen capture.

The Cloud Security Paradox

Cloud database security, the way it exists today, is an exercise in cognitive dissonance. On the one hand, once you hand over control of your infrastructure, you inevitably give up control over your security. On the other hand, you are legally and morally responsible for the security of your data, as the cloud provider is responsible for everything except that. That’s the paradox – you are responsible for security but without control.

If you use a fully managed DBaaS (like AWS RDS or Azure SQL), you are often “forced” back into whatever the cloud provider lets you do with their database. Currently, that means 1st-gen security. However, the problem with DBaaS is more fundamental than activity control, since the cloud provider has access that you cannot manage or see. From their DBA staff, to storage, backups, and even server access, you cannot expect security to be tight without having visibility and control.

In DBaaS, you don’t have control over your database server, which means you cannot use 3rd-gen security. You could use a partial version of 2nd-gen operating purely as a network sniffer. However, that creates a huge black hole of what happens in the machine, on network paths you don’t control, not to mention what executes internally inside the database. With such limited 2nd-gen coverage, you are effectively forced down to your cloud provider’s 1st-gen capabilities.

While DBaaS (e.g., AWS RDS, Azure SQL) offers operational advantages, it creates a “security black box”. That is part of the cost of moving to a fully managed service. The easiest way to regain control in the cloud is to switch to IaaS (e.g., AWS EC2, Azure VM). That will allow you to use 3rd-gen solutions. For high-sensitivity data, the industry is seeing a return to IaaS or even to on-prem.

The most recent cloud offering is serverless, which, obviously, gives up even more control in exchange for operational advantages. In serverless, you are sharing the database with countless others. Not only do you have no control, but even the cloud vendor has limited isolation between you and others. Isolation that depends purely on security controls within the database software. If there’s a bug or a mistake in those internal database controls, the walls vanish, and companies are exposed to the data of others. It is highly recommended not to store sensitive data in serverless environments.

Using DBaaS (or serverless) is a convenience decision, not a security decision. It may be valid for some data in some organizations, but until cloud providers offer 3rd-gen visibility in DBaaS, the 1st-gen they currently offer is as good as it gets. Moving to a managed cloud remains a calculated risk – one where the more control you give up, the more you trade deep security for operational ease and increase your liability. Hopefully, in time and with customer demand, cloud providers will offer better control, visibility, and transparency into what’s happening in their environments.

Final Thoughts

While newer solutions and technologies offer many more capabilities, this article focused on the most basic requirements: capturing and processing data. Though these requirements are obvious and inescapable, older technologies fail to deliver even these fundamentals.

Information security is about the data. For years, the industry has been obsessed with antivirus and perimeter tools. While those have their place, we have to wonder: how did Information Security become more about firewalls and endpoints than protecting the Information? How do we fail to invest and upgrade old, obsolete technologies that are directly responsible for safeguarding our data?

You may only be looking for a compliance checkbox, and your current tech stack is sufficient to pass an audit. However, consider that auditor requirements evolve, and old tech won’t pass these audits forever. More importantly, remember that compliance is not security, and a breach is a far more devastating event than a failed audit. Wouldn’t it be wise to get actual security for the same price you’re currently paying for a compliance checkbox?

Protecting data is the core requirement of our craft. Staying behind the curve with legacy, overpriced solutions that offer partial visibility was never a good idea. With modern threats, failing to protect the data is no longer an option. The technologies you must have are available right now. You can protect your data today. The audit checkbox is a security blanket that doesn’t even stop the cold. If your defense technology cannot protect you, you aren’t “secure” – you’re just lucky. At least for now. Upgrade to a modern database security solution and control your future. It’s time to catch up.

If you have a question or a comment, please let us know. We’ll be happy to hear from you.