Story of a Cyberattack

How can a day with the potential to be the worst in your career turn out fine, or even pretty good? This fictional story describes realistic events from the day of a breach when things went right. See what it takes to make that happen.

The Attack Begins

The blip of a new email flashed on Cora’s screen. It was yet another alert from Core Audit, and it wasn’t the first one of the day. But a quick glance at the SQLs and adrenaline jolted her awake. It felt like caffeine was pumping directly into her brain. That is not a false positive. It’s an attack. Someone is successfully running an SQL injection attack through the application and into the database.

Cora Blue is a DBA on the security team, and she understood exactly what was happening. That is the first stage of the attack. The intruder is trying to determine whether the application is vulnerable and to gather information about the type of database and the tables inside it. So far, it’s probably just a script collecting information, but depending on how good this attacker is, it will soon graduate to a full-blown data breach. It could take days, sometimes hours, but in the worst case, it can be a matter of minutes before they start to exfiltrate data. There’s no time to waste.

Taking Action

Cora knows what she needs to do. Even though taking the application offline will probably be enough in this case, the policy demands shutting down both the application and the database. As a senior member of the security team, she has the permissions required to do that. Better safe than sorry, and a few minutes later, everything is down and offline. While the systems were going offline, she sent a message to the notification group. That will get to the response team and everyone else that needs to know.

By the time the response team assembled, Cora had a chance to glance through the Core Audit forensic information. It looks like the hackers were still in the discovery phase, and nothing was taken. She found the app server the attack came from and the time it started. From the SQLs, she could even guess the compromised area of the application – the first part of each SQL is the original code, with the table and columns of the original query.

Now that the team is all on Zoom, there are two immediate objectives. Getting the application and database back online safely is important, as the helpdesk is probably already getting calls. It’s a good thing they were part of the notification chain. But more pressing are how the attackers got into the network, whether they are still inside, and what else needs to be done immediately to stop this attack in its tracks.

The small response team includes Cora Blue, who identified the event and is also the team’s database expert; Brandon Reese, the know-it-all sysadmin; Randall Shield, who controls the network; and Audrey Cortez, who represents upper management.

What Happened?

Cora quickly explains what happened: a SQL injection attack hit the database through app server CBR002 starting at 14:35, but the app server and database were shut down before anything was taken. Audrey commends her for her diligence and quick response. It’s not the first time the team has gotten together, and everyone knows Audrey can talk for a while. But they also know she doesn’t mind being interrupted, especially when time is precious. And before Audrey can continue, Brandon politely interrupts, sharing his screen with a grep of the Apache logs.

The logs clearly show the rapid fire of HTTP requests that caused the SQL injection and the IP that sent them. It looks like the attack came from the VPN IP range. As Brandon moved on to the Core Audit application auditing forensic screens, Randall, the network admin, found the DHCP IP assignment showing the attack came from the home computer of Holly Ackerman, the administrative assistant of one of the VPs.

A few seconds later, Brandon also found the attack in the Core Audit forensics, confirming it was Holly’s login in the application. While no one on the team knows Holly well, she doesn’t seem very computer savvy and has probably never heard the term SQL injection. Someone must have taken over her home computer and used it to launch the attack.

Neutralizing the Threat

Either way, the team disables Holly’s account, logging her out of the VPN. The helpdesk team will call her to explain, the security team will later start to investigate her computer, and the Windows team will eventually help her reinstall it. Brandon sends a short email update to the breach notification group, which will kick all that into motion.

The team believes that, at this point, the attack is mostly contained. They need to see if Holly’s account was used anywhere else, get the database online, find the problem in the application, and figure out how to bring it back online safely. They also need to investigate Holly’s PC to figure out how someone got in, though she probably just clicked on the wrong email.

The day is not over, but it looks like they prevented the worst.

How Did They Succeed?

Let’s review some of the key points that made this story a success for the security team.

First of all, despite a perimeter breach that was probably unavoidable, they had a database IDS solution that recognized the anomalous data access and sent an alert. Core Audit was pivotal in triggering the response. Without that alert, this story would have been very different. It would have been months before they found out about the attack, and probably only as a result of a notification from law enforcement. The response team would then have been analyzing a months-long data breach, trying to determine how many systems were compromised and who they should notify that their data was stolen.

Secondly, Cora Blue, a knowledgeable and trained cybersecurity DBA, got the alert. She knew what to look for and recognized the attack. She also had forensic information about the database activity to support immediate action. Without Core Audit, there is no forensic information about every SQL and no way to determine what transpired in the database.

There were also simple policies telling Cora how to respond, and she had the permissions needed to take action and stop the attack in its tracks.

The team assembled quickly, was small enough to operate efficiently, and had knowledgeable people with access to all the systems. The team had met many times before and had a good working relationship. It was also not the first time they had responded to an attack, so everyone knew exactly what to do.

Everything can work out well when you get alerts, have supporting forensic information, and have talented, trained people who know what to do.


Risk/Control Matrix

Using Overlapping Controls

Overlapping serial defenses can significantly reduce the chance of a successful attack. That’s a critical step to avoid a data breach.

We previously discussed data-centric security and the need for airtight defenses. Using IDS and IPS is a first step in that direction, but let’s take things further by creating overlapping controls that will tighten security much more.

The risk/control matrix is at the core of security planning. The matrix maps our risks to the controls we plan to apply. It’s not a revolutionary concept, but doing it can be challenging. To make the discussion less theoretical and more practical, we’ll take a real example from database security and use the Core Audit Risk/Control Matrix.

Core Audit Risk/Control Matrix

(Figure: The Core Audit Risk/Control Matrix)

Risks

Listing the risks is the first step in creating a matrix. To ensure we cover every risk and make it easier to map those into controls, we chose to go by database account.

Databases typically have accounts for DBAs, the application, and others. By enumerating all the database accounts, we can ensure we don’t have any holes on the risk side of the matrix.

There are also risks outside our list of accounts. For example, risks from newly created accounts, database vulnerabilities that don’t require an account, database access from inside the database machine, and data copied out of the database.

Next, we need to determine the possible exploits for each account. Generally, any account, including DBA accounts, can be exploited by their owners (an abuse of privilege), a hacker using stolen credentials, or some other impersonation like compromising the account owner’s desktop.

Application accounts have additional risk factors. The main one is a vulnerability in the application. For example, an SQL injection attack will trick the application into executing an SQL it’s not meant to run. Another risk is an unapproved change that went outside the change control process.
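
To make that concrete, here is a generic, hypothetical illustration (the table and column names are made up, not taken from any particular application):

-- The query the developer intended, with the user input embedded as a literal:
SELECT * FROM orders WHERE customer_name = 'JOHN'

-- The same code after an attacker submits the input X' OR '1'='1 instead of a name:
SELECT * FROM orders WHERE customer_name = 'X' OR '1'='1'

The added OR condition changes the SQL construct, so the query returns every order instead of a single customer’s.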

The added benefit of listing risks by account type is that this is also how we control them. From the database perspective, a DBA session is a DBA session. The database cannot distinguish between the real DBA and a successful impersonation. In that sense, there is little difference between a DBA abuse of privilege and an impersonation – in both cases, the DBA account executed malicious activity.

Controls

Each risk should have controls in response. The trick to tighter security is to have multiple controls that overlap. Let’s look at some examples:

The first level in Core Audit is identifying changes in configuration, users, permissions, objects, etc. Besides validating change control, this type of control will identify some attack side effects: for example, a hacker creating a user as a backdoor or creating a table to stage exfiltrated data.

That is, obviously, not enough, and at Level 2, we can create reports and alerts on specific sessions and SQLs. These controls are efficient for low-volume activity. From a DBA account monitoring perspective, we could monitor for DMLs, DDLs, and access to sensitive tables. Looking at the application account, we can check for sessions originating outside the application server. Reports on other application activity are better left for Level 3, as they’ll be too high-volume to be effective in Level 2. These controls won’t catch every attack, but they’re a good start.

At Level 3, we have Anomaly Analysis, which identifies changes in the behavioral activity profile. It’s an automated analysis that can scale to extremely high activity volumes. That makes it a powerful and effective control over application activity, which tends to be repetitive. It can also eliminate repetitive SQLs from DBAs and point out new ones. Another use is analyzing activity sources to identify individuals connecting from different machines or programs, at unusual times, and more.

Another capability in Level 3 is Proactive Forensics, which gives security personnel visibility into who’s doing what in the database. Personnel can investigate activity sources, the amount of data accessed, the ratio between SQLs and rows, and more.

And that takes us to Level 4, which is preventive. Here, we need to be careful not to block legitimate activity; those are the false positives that are problematic in IPS. But we can still block certain activities, like DBAs accessing sensitive schemas or sensitive tables being accessed from outside the application, and enforce separation of duties.

Overlapping Controls

As you can see, each risk column has many potential controls, each addressing a different aspect of the risks. We can have multiple reports and alerts that address the same threat from a different perspective. We can also have both preventive and detective controls calibrated to different sensitivities.

We also used five different types of controls: change monitoring (Level 1), declarative auditing (Level 2), anomaly analysis (Level 3), proactive forensics (Level 3), and preventive controls (Level 4). Each type has different strengths and weaknesses, like the time investment, the involvement of security personnel, visibility, prevention versus detection, and more.

These overlaps are what make this type of security much tighter. While you probably won’t deploy every possible report and alert, you can plan your deployment based on the false positive rate and how much time you can devote to this security.

Applications

We can also take things further and deploy similar defenses on the application. These are application activity controls independent of the database controls discussed in this article. Declarative auditing, anomaly analysis, proactive forensics, and more – all work on application activity profiles.

Adding controls over the application further reduces the risk posed by an application vulnerability and places additional barriers between an attacker and the data.

If you wish to know more, contact us at info@bluecoreresearch.com

IDS & IPS

Toward Airtight Security

Combining IPS and IDS significantly reduces false negatives. That’s the key to better security since false negatives are how we get breached.

We previously discussed data-centric defenses as the critical last line of defense. As such, one of our requirements is to try and make it as airtight as possible. That’s not a minor or trivial requirement. In this article, we’ll discuss how we can get there.

There are two concepts we’ll need to discuss:

  • False Positives and False Negatives
  • IPS and IDS

False negatives

False negatives, or type 2 errors, are when a security system fails to react to an attack. So, our objective is to reduce or, ideally, eliminate false negatives. The better we do that, the more airtight our security.

False negatives are the dirty little secret of security systems – the subject no one talks about. No one talks about how many false negatives they have, how to measure them, estimate them, and generally, about the effectiveness of security. It’s an important subject, but one that everyone tries to avoid.

To understand how to reduce false negatives, we must first understand false positives.

False positives

A false positive, or type 1 error, is the opposite – when a security system incorrectly classifies legitimate activity as an attack. For example, when a good email gets into the spam folder.

False positives can be annoying, as in the case of misclassified emails. But they can also prevent people from doing their jobs when, for example, they can’t log in to a system they need for work. These are examples of false positives in preventive security. Because they can be debilitating and cause a lot of complaints, IPS is designed and tuned to have low or, ideally, no false positives.

The Balance

As you can imagine, there’s a trade-off between false positives and false negatives. Reducing one tends to increase the other. So, as we reduce false positives so people can do their jobs, we inevitably increase false negatives, and more attacks go undetected.

Another way to look at it is the sensitivity of the security system. A sensitive system will detect more attacks but have a lot of false alerts. A system calibrated to be less sensitive will have fewer false alerts but miss many attacks.

IPS & IDS

That brings up the other subject of IPS and IDS. There’s a common misconception that prevention is more important than detection. The logic behind it is, why detect something when you can prevent it? It sounds like a good idea, but it’s wrong.

As we just said, in IPS, we have to reduce false positives, and that increases false negatives. But IDS isn’t required to have zero false positives. In IDS, we even expect some level of false positives. False positives in IDS are false alerts that security people receive. They can be annoying if they’re too frequent, but we expect them to a certain degree. Being able to accommodate some false positives lets us significantly reduce the false negatives. In other words, we calibrate IDS systems to be more sensitive and detect a much higher number of attacks.

The diagram below shows the ideal calibration of IDS and IPS with no false positives for the preventive and no false negatives for the detective. The area between Preventive and Detective illustrates in Red the attacks that go undetected by the preventive (its false negatives) and in Blue the false alerts of the detective (its false positives).

If there were false positives for the preventive, that would show as Blue crossing right over the preventive line and getting blocked. Similarly, false negatives for the detective would be Red crossing left over the detective line and not being detected.

Using a combination of IDS and IPS is also a simple way to estimate the number of false negatives in the IPS: the attacks detected by the IDS but not blocked by the IPS are the IPS’s false negatives. There are other ways of estimating this, such as static analysis.

So what’s better, IDS or IPS?

The truth is that we need both. Preventive to block as much as possible and detective to identify and alert about the rest. That’s how we get close to airtight.

There are other strategic mechanisms for getting close to airtight, like using serial and partially overlapping defenses. That takes us to the risk-control matrix, which is the subject of another article.

If you want to know more or have a free discussion with one of our experts, please contact us at info@bluecoreresearch.com

Data-centric security

Designing modern security

Balancing security isn’t simple. How can you identify the areas that require attention and invest the “right amount” of resources?

The problem with any issue composed of many unrelated components is that it’s nearly impossible to plan a proper strategy, assign value, and achieve a balanced approach that covers everything. The result is uneven security: some areas are heavily fortified, while others have no coverage.

This article starts putting some order into this chaos, to understand where things belong and how valuable they are.

Perimeter and Data-Centric

First, let’s divide our security universe between measures designed to keep outsiders out and measures aimed at preventing bad people from getting to the data. Perimeter security prevents access to the corporate network, while data-centric security protects the data from malicious access.

A good test to check if a security measure is part of the perimeter or data-centric is asking: “Is this going to protect us from an insider threat?” Insider threats are people who work for the company, so they are already inside the perimeter. If a measure will prevent them from stealing or modifying data, it’s a measure that protects the data.

Perimeter

In general, external attackers need to breach an internal asset before gaining access to data. That is a perimeter breach, a frequent precursor to a data breach.

However, there are a few exceptions to that rule. For example, a publicly-facing application could give data access without breaching anything other than that application. That is an example of data that can be accessed directly from the outside without perimeter protection. Another example is data outside the perimeter, like a laptop or a backup tape outside the building. They also have no perimeter protection and can be stolen without access to internal assets. Obviously, the perimeter also doesn’t protect us from an insider threat that’s already inside the perimeter.

Perimeter security is crucial, but none of these measures are airtight. That means perimeter security only aims to reduce the number of attacks on the data, not prevent them.

For example, email security only intends to reduce spam, not prevent it. The spam we get almost every day is definite proof of that. Even if one perimeter measure were airtight, there are so many other perimeter attack vectors that it’s foolish to pretend we can cover them all perfectly. Therefore, the perimeter aims to reinforce data-centric measures, not replace them. And relying on the perimeter alone ensures we’ll have a data breach.

Data-Centric

While some consider it an exaggeration, we believe data-centric measures should aspire to be airtight. That’s a strong statement in sharp contrast to the perimeter and is not easy to achieve. But it’s not impossible.

Data-centric, as a whole, is usually our last line of defense. We have no other barriers to protect us beyond it. When all the data-centric measures fail, we’ll have a data breach. So, if possible, we should overlay data-centric measures to get additional protection. That’s possible because data-centric measures deploy in serial, unlike perimeter protection, which is in parallel.

As data-centric revolves around data, database and application security are the main pillars. We also have server security along with physical and network security for the data center, encryption of data at rest, and controls against administrators. We’re talking about everything that stands between people and the data.

Summary

We have our first division of security between perimeter and data-centric. We also identified the role of data-centric security in protecting us from internal threats and external threats that penetrate the perimeter.

Finally, we aim to make our data-centric protection airtight. That will be the subject for the next two weeks, looking at IDS vs. IPS and how to leverage the risk/control matrix.

As people increasingly work from home, the perimeter becomes impossible to secure. We have no control over the physical perimeter of people’s homes, their home networks, their personal computers, or all their devices. It’s a lost battle that symbolizes the death of the perimeter, and one more reason why data-centric protection is the primary form of security in the 21st century.


Q&As: Data Masking

Common Data Masking Questions:

Do you copy data out of production? Static data masking is a simple, easy, and effective way to protect it and prevent a breach.

1. Why mask? Because we can’t protect the data outside of production: Imagine copying customer data for testing. How could you protect it after copying it? Without data masking, you will expose all names, addresses, phone numbers, emails, financial information, and more. Static masking replaces these values with good fakes so you can test without jeopardizing your confidential information or that of the people who entrusted it to you.

2. Reverse the masking? Impossible. That’s the point: Unlike encryption, static masking is a one-way transformation. The masked data resembles the original but doesn’t reveal it. This irreversible process ensures your sensitive information isn’t exposed even if the masked data falls into the wrong hands.

3. Data integrity? Is a must. Otherwise, the application won’t function properly in the test environment, or the test will be ineffective. The masking process must preserve data validity, consistency, and referential integrity. It’s like an elaborate disguise: everything looks different but has to work the same way.

4. A single algorithm? Of course not. There are many ways to mask each type of data. Choosing the strategy that fits your requirements will ensure you achieve your security goals while getting the most out of your data.

For example, value manipulation will preserve some aspects of the original data but can potentially offer weaker security. Data generation will provide perfect security but may impair test quality. Data profiling and custom profiles are two other strategies that balance security and test quality.
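
As a rough sketch of what two of these strategies might look like in SQL (the table, column, and mapping-table names are hypothetical, and real masking tools handle this differently):

-- Value manipulation: keep the format but change the value (first digits kept, the rest randomized).
UPDATE customers SET phone = CONCAT(SUBSTRING(phone, 1, 4), LPAD(FLOOR(RAND() * 1000000), 6, '0'));

-- Data generation with a deterministic mapping: each original value always maps to the same fake value,
-- which keeps joins between tables consistent and preserves referential integrity.
UPDATE customers c JOIN name_map m ON m.original = c.last_name SET c.last_name = m.fake;

A mapping table like name_map would be prepared ahead of time with one fake value per original value.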

5. Should I worry about performance? Yes and no: data masking performance isn’t an issue unless it’s so slow that masking is impossible. Let’s explain more:

There’s a common preconception that static data masking is inherently slow and resource-intensive, but that it’s not a big deal since we only have to run it once after copying the data. Some would even say just once, ever.

The Truth

It doesn’t matter if a masking process takes 30 seconds, 5 minutes, or half an hour. It’s not something that runs very often, and it never runs on production systems, so it doesn’t slow down business-critical processes.

However, it’s not entirely true that it doesn’t matter since masking becomes impractical if it takes days or weeks to run. Nor is it true that masking runs only once, as it must run every time you refresh your test data. As masking becomes faster and easier, you can update your test data more frequently, getting more out of your data.

Performance Culprits

Slow masking is usually due to one of these reasons:

  • Product selection: different solutions offer different performance capabilities. Common reasons include code quality, database APIs, transaction size, etc. For example, chatty protocols combined with high latency can result in very slow masking.
  • Database performance: like any database-driven product, masking performance also depends on the performance of the underlying database. Most databases aren’t normally tuned for masking.
  • Triggers: can be one of the most challenging problems as these small pieces of code execute whenever data changes. When updating millions of rows, a trigger will run millions of times, causing the masking process to run forever. However, triggers are often essential for data validity and integrity, and you shouldn’t automatically disable them.

Taming the “performance beast”

Addressing these issues will allow data masking to become an integral component of a dynamic data lifecycle rather than a slow, unusable burden everyone wants to avoid.

Here are some ideas to consider:

Product selection is always challenging. Like all IT purchases, with data masking, you should also test several solutions in your environment using your network, database, and data volume. While trials can be time-consuming, they are the only way to ensure the solutions work well in your environment. Be careful not to rely on brand recognition, market analysts, or friendly advice, as they can backfire and result in a failed project.

Database performance can be improved with a little know-how. Data masking is a very write-intensive process that requires different database tuning since most applications are read-intensive. To temporarily improve database performance during masking, you can, for example, remove indexes and constraints, stop archiving, suspend replication, etc. Pre- and post-masking actions can help automate these actions during masking. Work with your DBAs to maximize your database write speed.
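
For instance, a pre/post-masking script might look something like this (a sketch only; the index and constraint names are hypothetical, and the right statements depend on your database and schema):

-- Pre-masking: remove structures that slow down bulk updates.
ALTER TABLE orders DROP FOREIGN KEY fk_orders_customer;
DROP INDEX idx_customers_email ON customers;

-- ... run the masking process ...

-- Post-masking: restore everything for normal use of the test environment.
CREATE INDEX idx_customers_email ON customers (email);
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id) REFERENCES customers (id);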

Finally, trigger performance issues can be challenging and require time and know-how. First, identify the triggers that run when you mask your data. Determine which are relevant to the data you are masking and disable the rest. Second, convert the necessary triggers into a vertical update procedure and use that procedure during masking instead of the triggers. It works because a single update of all the rows is much faster than millions of small updates. Core Audit can help speed up this process by identifying the SQLs that need to be rewritten as vertical updates.
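
As a simplified illustration (hypothetical trigger logic and column names), suppose a per-row trigger maintains a derived column. Rather than letting it fire once per masked row, disable it during masking and apply the same logic once with a set-based update afterward:

-- Logic the trigger would apply to each row it touches:
--   SET NEW.full_name = CONCAT(NEW.first_name, ' ', NEW.last_name);

-- Equivalent vertical update that runs once over the whole table after masking:
UPDATE customers SET full_name = CONCAT(first_name, ' ', last_name);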

Masking is essential

When looking at the details of performance problems, data integrity, etc., it’s easy to lose sight of the big picture: why is data masking so important?

  • Reduces risk: eliminating sensitive data outside of production will dramatically reduce your exposure and risk profile.
  • Simplifies compliance: Masking is essential to reduce the scope of various compliance and data privacy regulations. Systems that contain masked data aren’t typically subject to compliance.
  • Improves development: Masked data drives development and test environments, improving product quality, shortening development cycles, and accelerating project timelines.

Final Thoughts

Data masking is a critical component in the data lifecycle, enabling us to use our data to drive and improve many aspects of our business. From product development to testing, data analytics, and more, securely using our data outside of production lets us multiply the value we derive from it.

Masking is simple and essential but not trivial. Many projects fail for a variety of reasons, such as using an inappropriate solution, failing to define the right masking policies, performance issues, and more.

Through our experience working with customers, we found customers always succeed when they have the right solution and a support team committed to their success. Missing either component greatly decreases the chances of success, and lacking both guarantees failure.

Contact us today at info@bluecoreresearch.com to learn more about how we can help you mask and secure your data.

Database Visibility: Poll Results

Database Visibility:
What Everyone Is Missing

Recent polls of cybersecurity professionals show most respondents (82%) have partial or no visibility into their databases and need it. Few said they have good visibility (7%) or don’t need it (11%).

The surveys were conducted in various LinkedIn groups in English and Spanish, asking: “Do you have visibility into what’s happening within your database?”. Almost all English respondents claimed to have partial visibility (40%) or have no visibility and need it (53%).

In the Spanish survey, more people said they have good visibility or don’t require it, thereby lowering the demand for visibility from 93% to the 82% average.

The challenge of database visibility

Databases store a lot of sensitive corporate and third-party data, such as financial data, personal information, and much more. This data must not be compromised, leaked, or manipulated, but most of us have no visibility into who accesses it, when, or how.

We know we need visibility. Without it, we cannot design effective controls. It is a generally accepted first step in security: don’t fly blind. You can’t control what you can’t see. So how come database visibility is something most organizations lack?

Databases are challenging because of the massive activity volume and strict performance requirements. How do you get visibility into billions of SQLs? How do you even collect the information without impacting performance, let alone process and consume it? How do you gain control and know when someone is doing something malicious when there is so much noise? Even when limiting the scope to confidential table accesses, there is too much activity to comprehend.

But without visibility, how could you set up controls, reports, or alerts? What do you look out for? How will you know if a data breach occurred? And when one does, how will you know what happened?

How to obtain and improve visibility

As mentioned above, database visibility is challenging due to the high activity and performance requirements. It’s unlikely you’ll get far on your own without the right solution and underlying technologies.

Simple tools rely on built-in database functionality. These often have a high performance impact while providing almost no visibility. Even expensive tools often use older technologies that can’t deliver the desired visibility and value.

In recent years, Core Audit introduced significant advances and innovations in capture, processing, storage, and analysis technologies. The result offers impressive and unparalleled visibility. Talk to us to learn more and see how we can help you protect your data.

Final Thoughts

These days, we need to protect more data and more systems while facing greater risks than ever before. These challenges are significant, and we need the visibility and control provided by the most recent advancements.

These latest polls show that most cybersecurity professionals are aware of the importance of visibility. However, most organizations still lack visibility into their databases.

Contact us today at marketing@bluecoreresearch.com to learn more about how we can help you improve visibility and protect your sensitive information.

Anomaly Analysis

How can you control activity in busy systems like databases? How will you know there’s a malicious SQL among billions of others? Read on to learn more.

Anomaly analysis uses behavioral analysis, helping you save time while expanding your control to vast activity volumes.

These capabilities are made possible by the unique security repository technology in Core Audit. The anomaly analysis engine dynamically creates behavioral profiles based on the activity captured by the security repository, allowing you to compare the present with the past. Contrasting current activity with historical behaviors quickly highlights differences indicative of security problems.

That lets you break the preconception that you can’t control large activity volumes like those of the database or application. Anomaly analysis empowers you to find the needle in the haystack: a single offensive SQL within billions.

How does it work?

As in all IT systems, database activity is repetitive. While values change, patterns persist. While the anomaly analysis engine is capable of more, most analyses center on five aspects:

  1. New activity. Something seen today but not in the past, like a new user, a new program, a new SQL, and more.
  2. High activity volume. Activity that existed in the past but now occurs at a much higher volume.
  3. Time of day. Activity that occurs at a different time now than it did in the past.
  4. Combined dimensions. A change in a single dimension can be a new user. A change in multiple dimensions is a new user + IP combination. Even if the user and IP are each known, that user may have never used that IP.
  5. Filters. Focus different anomalies on different areas of interest, like sensitive tables, the application, particular users, and more. Various subsets of the activity pose distinct risks, but they are also likely to exhibit different behaviors and, therefore, benefit from different types of anomalies.

There are many ways to select the type of anomalies to use. It could be behaviors you expect, patterns you observed in proactive forensics, the Core Audit wizards, the Core Audit Control Guide, or just trial and error. But one of the crucial features of the anomaly engine is that you can immediately test an anomaly and see the results. That makes choosing and tuning the anomalies a relatively simple process.

Benefits

We initially created the anomaly engine to help answer a common customer request. Looking at traditional declarative auditing, some customers said, “I don’t want to tell Core Audit what to report on. I want Core Audit to tell me what to look at.”

As the technology evolved, it helped secure subsets of the activity previously thought impossible to control. For example, to protect the application with its billions of SQLs.

Anomalies have also proven effective in detecting SQL injection and injection attempts. SQL injection, inevitably, causes the application to do something it isn’t supposed to do, thereby creating a new SQL construct that’s easy to identify.

Today, anomalies are a powerful tool to reduce reporting volume, securing huge activity volumes with a relatively low false positive ratio.

Final Thoughts

Anomaly analysis is vital to modern security, allowing you to do more with less. Control higher volumes from more sources with fewer people, less time, and a lower skill set.

Instead of poring over endless reports, anomalies do much of the heavy lifting, quickly pointing you to potential problems.

While they’re not a magic solution that solves everything, anomalies can and should be a key element in any security strategy.

Talk to us to learn more and experience the Core Audit difference.

Proactive Forensics

Visibility is the first step in any security effort. You can’t secure what you can’t see. Learn more about the value of visibility and how to achieve it.

One of the popular myths about security is that you can get it out of the box. Just install something, and voila! You’re magically secured. But that never works.

Regardless of what you’re trying to secure, your first step should always be understanding the activity. You should know how the system is used, by whom, when, etc. Gaining this kind of visibility into a live production system is fundamental to figuring out how to secure it.

“You can’t secure what you can’t see.”

However, gaining visibility into IT systems is not simple. It also gets increasingly challenging when the systems process large activity volumes. Databases can process thousands of SQLs every second. All this activity is nearly impossible to understand without the right tools.

Core Audit is a comprehensive security solution that includes, among its many capabilities, the ability to perform proactive forensic investigations. These will give you insight into what’s happening in your database. That’s the first step we recommend when setting up your security.

Once you know what’s happening in your systems, you can design effective reports, configure alerts that make sense, define practical anomaly analysis, and more.

But proactive forensics is not just about setting up your security. It also allows you to identify gaps in your security measures. As activity evolves, we must adjust the controls to fit the new activity patterns, and it’s impossible to do that without visibility. Otherwise, your controls will gradually become outdated and eventually obsolete.

Proactive forensics also lets you identify poor security practices: people sharing accounts, connecting from insecure locations, dumping tables out of the database, and more. While not breaches themselves, these common bad practices increase your exposure and make a data breach more likely.

There are many motives for regular activity review, but they all share the same underlying reason: we should include people in the security process. No matter the reports, alerts, or automation we create, regular human review can easily flag behaviors and security needs that a machine never will.

Talk to us to learn more and try Core Audit to see the difference.


Fitted Security

Avoid the pitfalls of security trends and design a security strategy that fits your environment, optimizing your posture given the available resources.

Many organizations design their cybersecurity strategy and decide what solutions to purchase based on industry trends and best practices. The outcome is often imbalanced and inappropriate to the organization’s risk profile and security needs.

Best-practice implementations are usually one-size-fits-all and not tailored to the specific environment. Because they are predictable, there are usually tools and guides on the internet that can defeat them. A generic approach of this nature also fails to leverage advantages that are readily available when examining the specifics of the environment.

Many solutions come with out-of-the-box security. They contain built-in policies and signatures designed for a low false-positive rate in any environment. These have difficulty identifying attack variations and can never identify other vectors, resulting in a weaker defense.

This paper is about how to do security differently.

Perimeter vs. Data-Centric

Most security solutions fall into one of two categories: Perimeter security or Data-centric security.

Perimeter security aims to prevent attackers from getting into the network and accessing internal systems. A compromised desktop is not a data breach, but once attackers have this access, they can take the next step and try to gain access to the application or the database. A data breach would only occur if the attackers can navigate from that compromised desktop to the data.

Perimeter security has two strategic limitations:

  • Internal threats – these defenses cannot protect against a threat that is already inside the network and behind the perimeter. The same applies to threats outside the network with VPN access (e.g., partners, consultants, and more).
  • Statistical defense – perimeter protections almost always aim to reduce the number of penetrations but not to prevent them altogether. Regardless of how good our spam filters are, we continue to get spam emails. No matter how good our personnel training is, people still click on those phishing emails.

Data-centric is a methodology that revolves around the data. It starts with where the data is stored – the database – and expands outward to the applications that process that data.

Data-centric security includes database security, application security, IAM, and more. A strong data-centric posture can prevent a data breach regardless of how many attackers penetrated the perimeter. For example, cloud-based applications rely mainly on data-centric measures.

Perimeter security is a parallel defense where attackers only need to breach one of the measures to get in. Attackers that fail to penetrate the firewall can try the mail system, social engineering, etc. Perimeter security is as strong as its weakest link.

Data-centric security is layered, resulting in a serial defense where attackers must penetrate all the layers. A successful application breach that fails to execute an attack against the database is a failed attack. Data-centric security is the last line of defense and aims to be as airtight as possible.

Perimeter and Data-centric defenses are not mutually exclusive, and well-balanced strategies will include both.

Landscape Considerations

When building a security strategy, we must consider the threats we perceive as relevant. For example, are we worried about external threats, internal threats, or both? If internal threats are a significant concern, perimeter security will not be effective against those, and we should beef up the data-centric components.

We must also consider the effectiveness of different measures in the current landscape. For example, as more and more people work remotely, network security and endpoint security are less effective. Once people work from home, they are outside the corporate firewall and use their home computers. With minimal control over all these remote mini-offices with VPN connections, our perimeter defense becomes weaker than expected.

BYOD (Bring-Your-Own-Device) poses a similar challenge where many uncontrolled and insecure phones and tablets roam the corporate network inside our perimeter.

As the modern landscape changes and the effectiveness of perimeter controls diminishes, there’s a growing shift to internal defenses. Part of this shift includes a redefinition of the perimeter line – protecting the data center network is more important than the corporate network.

Asset Considerations

There are several considerations when choosing which systems to focus our defenses on and how to allocate resources:

  • Risk – some systems pose a higher risk than others. For example, a firewall is necessary because, without it, we will have countless attackers inside our network. We need to defend the application because all the people using it or with network access to it form a large surface area. We must also protect our database because it stores all the data, and a breach would be catastrophic.
  • Effectiveness – some systems are easier to secure with better results, while others can be expensive to protect or have limited security benefits. For example, firewalls are an effective measure and are cheap and easy to deploy – that’s an easy choice. Application or database security (depending on the solution used) could be difficult or expensive to deploy but with good results. IAM can be costly to deploy but with limited value.
  • Attack paths – some systems are on a critical path between the attackers and the data. Others are on paths that attackers could bypass. For example, an attacker must communicate with the database to extract the data. An attack is unlikely to succeed without it. However, whether an application is on such a path depends on the environment. Sometimes, the main path to the data is through a single application. In other cases, multiple applications or direct database connections offer alternative paths.

Balancing the risk, effectiveness, and attack paths is a central consideration in cybersecurity strategy, and there are multiple balancing methods. However, it’s important to remember there are many variables. For example, the effectiveness of the security depends on the solution and technology you’re evaluating since different solutions will differ in cost and effectiveness.

IDS and IPS

Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are fundamentally different methodologies in security.

IPS aims to prevent attacks and breaches. These are classic security systems like firewalls, users & passwords, etc. IPS are critical components in any security strategy.

In contrast, IDS aims to detect attacks and alert or report on them. These include measures like auditing, SIEM, etc. IDS are also a critical component in any security strategy as they inform security personnel, allow them to initiate a response, and provide forensic information.

It’s important to understand the core differences between IPS and IDS:

  • Tolerance for false positives.
    False positives in IPS mean the system prevents users from performing legitimate activities. Therefore, IPS cannot tolerate false positives.
    False positives in IDS mean that reports or alerts require investigation by security personnel while users continue doing their jobs. While we don’t want too many false positives, most IDS are designed and calibrated to have a certain level of false positives.
  • Response time.
    IPS must determine whether activities are valid before letting them through. A slow IPS means IT systems have longer response times, and users complain. An IPS is, therefore, always designed to use real-time algorithms capable of making split-second judgment calls.
    IDS, on the other hand, can take their time to report or alert. IDS usually use more complex algorithms and can analyze large volumes of data to identify intrusions. They can refer to historical information or wait for future events to occur.
  • Circumvention.
    When an IPS system prevents an attack, the attacker is inevitably aware of the failed attempt and can try again. Therefore, attackers constantly challenge IPS systems until they find a way to penetrate them.
    IDS inform security personnel of the attack, allowing them to respond in various ways. Responses can include taking systems offline, diverting attacks to honeypots, tracing the attack back to its source, and more. In all cases, attackers don’t get a second chance against an identical defense. IDS systems are, therefore, much more difficult to circumvent.

Consequently, while IPS could prevent an intrusion, IDS is more likely to detect it and less likely to be circumvented. It’s always recommended, when possible, to deploy both types of systems. Deploy an IPS to block most attacks and deploy an IDS to detect the ones that got through.

Tailored Security

Every environment is different. Differences include the application architecture, technology stack, the number of users, administrator access, and more. Other parameters include the user roles, the programs in use, the network segments, the types of data, the activity profiles, etc.

Leveraging such parameters while implementing preventive and detective measures yields more effective results. Such tailored security cannot be out of the box as it requires evaluation, consideration, and planning. However, it’s vital for achieving high effectiveness in attack detection and a low false positive rate.

Customizing security this way takes time and effort, and there’s no magic bullet. But the results are far more likely to withstand an attack and prevent a data breach.

DCSA

Data-Centric Security Assessment (DCSA) is a service offered by Blue Core Research to help security departments quantify the risk to each of their systems and evaluate the effectiveness of different controls and strategies.

DCSA is an interview-based evaluation that draws on your knowledge of your systems. It combines multiple parameters about each system in your environment and yields a risk level for each.

DCSA aims to help you choose the solutions and strategies with the most impact on your security to minimize the risk.

Final thoughts

The quality and strength of your security strategy depend on the time and effort that went into it. You will have a good security posture if you balance your perimeter and data-centric defenses, create layered security, combine IPS and IDS on every system with a proper response strategy, tailor your solutions to each environment, and spread your resources according to the risk, the effectiveness of the controls, and the attack paths attackers are likely to follow.

That sounds like a lot, but it’s not that difficult. However, it means not following trends or buying solutions just because they are in fashion. It means putting effort into implementations and not just looking to follow best practices. That requires effort and careful thought, but it’s the best way to lower the risk of a data breach.


SQL Injection Attack Detection

This is a true story of a SQL injection attack on our website. Learn about the attack and why the Core Audit anomaly analysis database defense is the most effective way to combat this type of threat.

Introduction

We got an alert two days before New Year’s. It was shortly after midnight on December 30, 2021. It was a daily anomaly alert relating to the database backend of an old website, but it was clearly an attack.

As it later turned out, this was the first of several such attack attempts. There were three more in January of 2022, two more in February, and one more in August, October, and November. All in all, nine similar attacks over the course of a year.

Background

We use WordPress (an open-source content management system) on many of our websites, always with a MySQL backend (the most popular setup). The website in question was old, and due to compatibility problems, we hadn’t updated some of the software components in a while. So when I first saw this alert, it seemed highly plausible that a vulnerability had led to a successful attack.

As it later turned out, this was a false positive. The reason for this false positive was that in addition to an old version of WordPress and plugins, this website also used an old version of Core Audit. But more on this later.

None of our websites contain sensitive information, but as this paper will show, protecting the database of any application with Core Audit is a highly effective means of detecting attacks and protecting the application.

The Anomaly

The anomaly alert had 168 lines, and the first line was this:

SELECT * FROM wp_users WHERE user_login = '') UNION ALL SELECT NULL-- HupP'

While each line was different, every line started with one of these:

SELECT * FROM wp_users WHERE user_login =''

SELECT * FROM wp_users INNER JOIN wp_usermeta ON user_id = ID WHERE meta_key = '' AND user_login = ''

What made it clear that this was an attack were the many end variations of the SQLs that looked like these:

;SELECT SLEEP(9)#' LIMIT 9

) AND SLEEP(9) AND (\\''=\\''

WAITFOR DELAY \\'' AND \\''=\\'' LIMIT 9

;SELECT SLEEP(9)#'

);SELECT PG_SLEEP(9)--'

;SELECT PG_SLEEP(9)--' LIMIT 9

);WAITFOR DELAY \\''--'

;WAITFOR DELAY \\''--' LIMIT 9

) AND 9=(SELECT 9 FROM PG_SLEEP(9))

UNION ALL SELECT NULL-- HupP'

) UNION ALL SELECT NULL-- HupP' LIMIT 9

UNION ALL SELECT NULL,NULL,NULL-- iIWs'

UNION ALL SELECT NULL,NULL,NULL,NULL-- ynbe'

Bear in mind that the empty strings ('') and the 9’s are not part of the original SQL. They relate to how the Core Audit security repository operates. This repository automatically collects all the SQLs in the database, so to reduce storage space and eliminate anomalies from embedded literals, it automatically strips all the numbers and strings.
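
As a hypothetical before-and-after illustration of that stripping (judging by the alert lines above, strings collapse to empty quotes and numbers to 9):

-- SQL as executed by the application:
SELECT * FROM wp_users WHERE user_login = 'alice' AND ID = 42

-- The same SQL as stored in the reduced SQL repository:
SELECT * FROM wp_users WHERE user_login = '' AND ID = 9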

The attempts listed above were scanning for a SQL injection vulnerability. They were trying to detect whether SQL injection was possible and discover details that would facilitate an attack:

  • The various sleep/delay statements would only work on particular databases. So a delayed response tells the attacker both that the injection was successful and the type of database used.
  • The various UNION and NULL permutations were trying to determine the number of fields in the query and whether they could append data from other tables.

In addition to many more variations of SLEEP and UNION, there were other colorful expressions like:

) AND (SELECT 9 FROM(SELECT COUNT(),CONCAT(0x7178707671,(SELECT (ELT(9=9,9))),0x7171767171,FLOOR(RAND(9)9))x FROM INFORMATION_SCHEMA.CHARAC

) AND 9=CAST((CHR(9)||CHR(9)||CHR(9)||CHR(9)||CHR(9))||(SELECT (CASE WHEN (9=9) THEN 9 ELSE 9 END))::text||(CHR(9)||CHR(9)||CHR(9)||CHR(9)||C

) AND 9=CAST((CHR(9)||CHR(9)||CHR(9)||CHR(9)||CHR(9))||(SELECT (CASE WHEN (9=9) THEN 9 ELSE 9 END))::text||(CHR(9)||CHR(9)||CHR(9)||CHR(9)||CHR(9)) AS NUMERIC) AND (\\''=\\'')

;SELECT DBMS_PIPE.RECEIVE_MESSAGE(CHR(9)||CHR(9)||CHR(9)||CHR(9),9) FROM DUAL--'

AND 9=CONVERT(INT,(SELECT CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9)+(SELECT (CASE WHEN (9=9) THEN CHAR(9) ELSE CHAR(9) END))+CHAR(9)+CHAR(9)+C

) AND (SELECT 9 FROM(SELECT COUNT(),CONCAT(0x7178707671,(SELECT (ELT(9=9,9))),0x7171767171,FLOOR(RAND(9)9))x FROM INFORMATION_SCHEMA.CHARACTER_SETS GROUP BY x)a) AND (\\''=\\'')

) AND 9=CONVERT(INT,(SELECT CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9)+(SELECT (CASE WHEN (9=9) THEN CHAR(9) ELSE CHAR(9) END))+CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9)+CHAR(9))) AND (\\''=\\''

These SQLs are clearly unusual and seem to attempt to bypass a SQL injection protection system like a WAF.

Forensics

We experienced an attack. The alert left no doubt about that. The two remaining questions were:

  • Was the attack successful?
  • Which part of the application was targeted?

Database Forensics

The first question was easy to answer. I started by locating the queries in the reduced SQL forensic view. The reduced SQL repository has a 5-minute resolution, and all these SQLs executed within the 5-minute window of 8:20 am to 8:25 am.

Once I located the queries, I also saw the good news – they were all successful (no errors), and the number of rows retrieved was zero for all.

Why are successful executions good news? Because this attack was trying to test whether the application was vulnerable to SQL injection. In this scan, most of the queries were supposed to fail, with the few successful ones indicating the method to exploit a vulnerability. For example, the SQLs with PG_SLEEP() could never be successful on our MySQL database since this function only exists in PostgreSQL databases. Therefore, successful executions mean the injection attempt failed to modify the SQL construct.

Additionally, these SQLs didn’t retrieve data, so there was no leak. While most of these SQLs were not attempting to extract anything, it’s comforting to know nothing was retrieved.

In other words – this attack failed to break through the literal boundary. We’ll get back to this subject later and also answer the more interesting question of why we got the anomaly alert in the first place.

Application Forensics

Now that we have more information, it’s easy to search the Apache logs for more details on the attack and what it targeted.

The attack was between 8:20 am and 8:25 am and accessed the wp_users table. Looking at the Apache log, we can see this attack started at exactly 8:20 am:

[29/Dec/2021:08:20:00] "GET /wp-login.php?log=1&pwd=-9696%20UNION%20ALL%20SELECT%2024%2C24--%20ptzf"

This attack was a GET request to the wp-login.php script, which is the WordPress login page. The attack delivered its injection attempt through the password field, which, in this first attempt, included:

-9696 UNION ALL SELECT 24,24-- ptzf

The last attack attempt was using this POST request at 8:21:51 am:

[29/Dec/2021:08:21:51] "POST /wp-login.php?Phws%3D5963%20AND%201%3D1%20UNION%20ALL%20SELECT%201%2CNULL%2C%27%3Cscript%3Ealert%28%22XSS%22%29%3C%2Fscript%3E%27%2Ctable_name%20FROM%20information_schema.tables%20WHERE%202%3E1--%2F%2A%2A%2F%3B%20EXEC%20xp_cmdshell%28%27cat%20..%2F..%2F..%2Fetc%2Fpasswd%27%29%23"

The Apache logs don’t record the POST parameters, but we can see this payload on the request:

Phws=5963 AND 1=1 UNION ALL SELECT 1,NULL,'<script>alert("XSS")',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')#

This payload is a little funny as it contains a SQL injection with a cross-site scripting scan (alert(“XSS”)) and an attempt to have SQL Server execute a shell command (EXEC xp_cmdshell) with a Unix/Linux command printing the content of a Unix/Linux password file (cat …/passwd).

That is a mix of attack fragments that could never work together. And there are several other things wrong with this last request and payload.

That indicates the person running the attack had little understanding of what they were doing. Combined with the fact that the whole scan lasted just under 2 minutes, it suggests this was a script or, more likely, several scripts they downloaded off the internet and executed against various websites.

False Positive

So why did the attack fail, and why did we get an anomaly alert anyway?

A SQL injection attack attempts to modify the SQL construct by breaking through the literal boundaries. In other words, when a SQL contains a literal like this:

SELECT * FROM wp_users WHERE user_login = 'JOHN'

A SQL injection attack attempts to send a string other than “JOHN” so the SQL construct will change. For example, the string “X' or 'Y'='Y” will result in this SQL:

SELECT * FROM wp_users WHERE user_login ='X' or 'Y'='Y'

By changing the SQL construct and adding or 'Y'='Y', the database will run something not intended by the developer who wrote the code. Putting a tag (') in the input broke through the literal boundary and allowed the attacker to alter the SQL construct. Escaping literals before embedding them into a SQL prevents this vulnerability.

In the SQL standard, you can escape tags (') by using double tags (''). When escaped, the above example will yield:

SELECT * FROM wp_users WHERE user_login = 'X'' or ''Y''=''Y'

In this case, the database will compare the user_login field to the entire string, and the word OR is just part of the user name, not part of the SQL construct.

WordPress must have escaped the input correctly in our attack, and the SQL construct was not modified. That’s why the SQLs executed without error, and the attack failed.

But why did we receive an anomaly alert if the attack didn’t break the literal boundary?

Unlike other databases, in MySQL, there are two ways to escape tags in strings. The first is with double tags ('') according to the SQL standard. But there’s a second method of preceding the tag with a backslash (\'). WordPress uses the second method.
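
Using the earlier example, the two escaping styles look like this (an illustration, not SQLs from the actual attack):

-- SQL-standard escaping with double tags:
SELECT * FROM wp_users WHERE user_login = 'X'' or ''Y''=''Y'

-- MySQL backslash escaping, the style WordPress uses:
SELECT * FROM wp_users WHERE user_login = 'X\' or \'Y\'=\'Y'

Both forms compare user_login to the entire string; only the escape character differs.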

That’s where the old version of Core Audit comes into play. That old version was released shortly after the release of MySQL support, and the literal stripping in the Reduced SQL repository did not support the backslash escape method for MySQL databases. As a result, the old version thought the tags were not escaped and didn’t strip the literals correctly.

Once we discovered the unintended consequence of falsely detecting these SQL injection attacks, we purposely kept that old Core Audit version on that website. We wanted to see how many more failed attacks we would experience. As stated earlier, this old website experienced nine attempts over the following year. Once we upgraded the Core Audit version, we stopped receiving these false positive anomaly alerts.

Application attack surface area

You might wonder why someone attempted a SQL injection attack that doesn’t work against WordPress. The attackers, like anyone else, had access to WordPress. So why didn’t they know this attack would fail?

That is an interesting question that highlights the complexities of supply chain attacks in some 3rd party applications, like WordPress.

The first thought that comes to mind is that maybe some versions of WordPress are susceptible to this attack. However, SQL injection is a well-known threat, and the WordPress development team is strict about escaping inputs before embedding literals in SQLs. While using bind variables would be safer, web developers have a penchant for embedding literals.

However, it’s worthwhile noting that multiple versions and patches significantly increase the attack surface area. The reason is that most organizations upgrade and patch applications from time to time and can never be sure which vulnerabilities they eliminated or introduced and at what point.

But since it’s unlikely that any WordPress version was susceptible to a SQL injection on the login screen, that brings up another interesting feature of WordPress – Plugins.

A big part of the power and flexibility of WordPress is the tens of thousands of plugins that can extend it. These plugins are code that can attach to various places in WordPress and modify its behavior in almost any way imaginable.

Plugins offer power and flexibility to the users, but they also pose a security risk. WordPress plugins introduce additional vendors, developers, coding standards, changes to the database data model, new execution paths in the application, and, of course, new vulnerabilities.

The risk in plugins is not only vulnerabilities in 3rd party software but also the risk of supply chain attacks. Such attacks could be initiated by the plugin authors or by a hacker who altered their source code.

It’s unlikely that WordPress was ever susceptible to this attack, but it’s highly plausible that some plugin was. The attacker was probably targeting a plugin that was not installed in our WordPress.

Final thoughts

SQL injections are notoriously hard to detect and, even more so, to prevent. However, Core Audit anomaly analysis was able to easily alert on those attacks.

Not only was anomaly analysis able to detect the attack, but it did it without any attack signatures or support for PHP or WordPress. And, actually, without even looking for a SQL injection attack.

And that is the reason anomaly analysis is so effective against SQL injection – it’s not looking specifically for that. It’s searching for SQLs that are new to the application and are, therefore, suspicious. SQL injection can masquerade in many ways but, by definition, is not part of the SQL vocabulary of the application. Anomaly analysis will, therefore, always flag it as suspicious.