Story of a Cyberattack

How can a day with the potential to be the worst of your career turn out positive or even pretty good? This fictional story describes realistic events from the day of a breach when things went right. See what it takes to make that happen.

The Attack Begins

The blip of a new email flashed on Cora’s screen. It was yet another alert from Core Audit, far from the first one of the day. But a quick glance at the SQLs, and adrenaline jolted her awake. It felt like caffeine pumping directly into her brain. This was not a false positive. It was an attack. Someone was successfully running a SQL injection attack through the application and into the database.

Cora Blue, a DBA on the security team, understood exactly what was happening. This was the first stage of the attack: the intruder was trying to determine whether the application was vulnerable and gather information about the type of database and the tables inside it. So far, it was probably just a script collecting information, but depending on how good this attacker was, it would soon graduate to a full-blown data breach. It could take days, sometimes hours, but in the worst case, only minutes before they started to exfiltrate data. There was no time to waste.

Taking Action

Cora knows what she needs to do. Even though taking the application offline would probably be enough in this case, the policy demands shutting down both the application and the database. As a senior member of the security team, she has the permissions required to do that. Better safe than sorry, and a few minutes later, everything is down and offline. While the systems were going offline, she sent a message to the notification group. That will reach the response team and everyone else who needs to know.

By the time the response team assembled, Cora had a chance to glance through the Core Audit forensic information. It looked like the hackers were still in the discovery phase, and nothing was taken. She found the app server the attack came from and when it started. From the SQLs, she could even guess the compromised area of the application – the first part of each injected SQL is the original application code, with the tables and columns of the original query.
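
To illustrate (a hypothetical example, not taken from the story), this is how a vulnerable application preserves the original query prefix, which is what lets an analyst pinpoint the compromised code. The table, column, and parameter names below are invented:

    # Hypothetical illustration of SQL injection through string concatenation.
    # The application builds its query by pasting user input into the SQL text:
    user_input = "1 UNION SELECT table_name, NULL FROM all_tables --"
    sql = "SELECT first_name, last_name FROM customers WHERE id = " + user_input
    print(sql)
    # SELECT first_name, last_name FROM customers WHERE id = 1
    #   UNION SELECT table_name, NULL FROM all_tables --
    # The prefix is the original application query; the tail is the attacker's
    # addition, so the prefix reveals which part of the application was hit.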

Now that the team is all on Zoom, there are two immediate objectives. Getting the application and database back online safely is important, as the helpdesk is probably already getting calls. It’s a good thing they were part of the notification chain. But more pressing is figuring out how the attackers got into the network, whether they are still inside, and what else must be done immediately to stop this attack in its tracks.

The small response team includes Cora Blue, who identified the event and is also the team’s database expert; Brandon Reese, the know-it-all sysadmin; Randall Shield, who controls the network; and Audrey Cortez, who represents upper management.

What Happened?

Cora quickly explains what happened: a SQL injection attack hit the database through app server CBR002 starting at 14:35, but the app server and database were shut down before anything was taken. Audrey commends her for her diligence and quick response. This isn’t the first time the team has gotten together, and everyone knows Audrey can talk for a while. But they also know she doesn’t mind being interrupted, especially when time is precious. So before Audrey can continue, Brandon politely interrupts, sharing his screen with a grep of the Apache logs.

The logs clearly show the rapid fire of HTTP requests that caused the SQL injection and the IP that sent them. It looks like the attack came from the VPN IP range. As Brandon moved on to the application auditing forensic screens in Core Audit, Randall, the network admin, found the DHCP IP assignment showing the attack came from the home computer of Holly Ackerman, the administrative assistant of one of the VPs.

A few seconds later, Brandon also found the attack in the Core Audit forensics, confirming it was Holly’s login in the application. While no one on the team knows Holly well, she doesn’t seem very computer savvy and has probably never heard the term SQL injection. Someone must have taken over her home computer and used it to launch the attack.

Neutralizing the Threat

Either way, the team disables Holly’s account, logging her out of the VPN. The helpdesk team will call her to explain, the security team will later start investigating her computer, and the Windows team will eventually help her reinstall it. Brandon sends a short email update to the breach notification group, which will kick all of that into motion.

The team believes that, at this point, the attack is mostly contained. They need to check whether Holly’s user was used anywhere else, get the database online, find the problem in the application, and figure out how to bring it online safely. They also need to investigate Holly’s PC to figure out how someone got in, though she probably just clicked on the wrong email.

The day is not over, but it looks like they prevented the worst.

How Did They Succeed?

Let’s review some of the key points that made this story a success for the security team.

First of all, despite a perimeter breach that was probably unavoidable, they had a database IDS solution that recognized the anomalous data access and sent an alert. Core Audit was pivotal in triggering the response. Without that alert, this story would have been very different. It would have been months before they found out about the attack, and probably only as a result of a notification from law enforcement. The response team would then be analyzing a months-long data breach, trying to determine how many systems were compromised and whom they should notify that their data was stolen.

Secondly, Cora Blue, a knowledgeable and trained cybersecurity DBA, got the alert. She knew what to look for and recognized the attack. She also had forensic information about the database activity to support immediate action. Without Core Audit, there would be no forensic information about every SQL and no way to determine what transpired in the database.

There were also simple policies telling Cora how to respond, and she had the permissions needed to take action and stop the attack in its tracks.

The team assembled quickly, was small enough to operate efficiently, and had knowledgeable people with access to all the systems. The team had met many times before and had a good working relationship. It was also not the first time they had practiced responding to an attack, so everyone knew exactly what to do.

Everything can work out well when you get alerts, have supporting forensic information, and have talented, trained people who know what to do.

IDS & IPS

Toward Airtight Security

Combining IPS and IDS significantly reduces false negatives. That’s the key to better security since false negatives are how we get breached.

We previously discussed data-centric defenses as the critical last line of defense. As such, one of our requirements is to make them as airtight as possible. That’s not a minor or trivial requirement. In this article, we’ll discuss how we can get there.

There are two concepts we’ll need to discuss:

  • False Positives and False Negatives
  • IPS and IDS

False Negatives

False negatives, or type 2 errors, occur when a security system fails to react to an attack. So, our objective is to reduce and, ideally, eliminate false negatives. The better we do that, the more airtight our security.

False negatives are the dirty little secret of security systems – the subject no one talks about. No one talks about how many false negatives they have, how to measure or estimate them, or, more generally, about the effectiveness of their security. It’s an important subject, but one that everyone tries to avoid.

To understand how to reduce false negatives, we must first understand false positives.

False Positives

A false positive, or type 1 error, is the opposite – when a security system incorrectly classifies legitimate activity as an attack. For example, when a good email lands in the spam folder.

False positives can be annoying, as in the case of misclassified emails. But they can also prevent people from doing their jobs when, for example, they can’t log in to a system they need for work. These are examples of false positives in preventive security. Because they can be debilitating and cause a lot of complaints, IPS is designed and tuned to have low or, ideally, no false positives.

The Balance

As you can imagine, there’s a trade-off between false positives and false negatives. Reducing one tends to increase the other. So, as we reduce false positives so people can do their jobs, we inevitably increase false negatives, and more attacks go undetected.

Another way to look at it is the sensitivity of the security system. A sensitive system will detect more attacks but have a lot of false alerts. A system calibrated to be less sensitive will have fewer false alerts but miss many attacks.
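
As a minimal sketch of this trade-off, imagine a detector that assigns each event a suspicion score and flags everything above a threshold. The scores below are invented purely for illustration:

    # Minimal sketch: the same detector at two calibrations (invented scores).
    legitimate = [0.1, 0.2, 0.3, 0.4, 0.6]  # suspicion scores of legitimate events
    attacks = [0.5, 0.7, 0.8, 0.9]          # suspicion scores of real attacks

    def errors(threshold):
        false_positives = sum(1 for s in legitimate if s >= threshold)
        false_negatives = sum(1 for s in attacks if s < threshold)
        return false_positives, false_negatives

    print(errors(0.65))  # IPS-style: (0, 1) - no false alerts, one missed attack
    print(errors(0.45))  # IDS-style: (1, 0) - one false alert, no missed attacks

Moving the threshold eliminates one type of error only by increasing the other.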

IPS & IDS

That brings us to the other subject: IPS and IDS. There’s a common misconception that prevention is more important than detection. The logic behind it is: why detect something when you can prevent it? It sounds like a good idea, but it’s wrong.

As we just said, in IPS, we have to reduce false positives, and that increases false negatives. But IDS isn’t required to have zero false positives. In IDS, we even expect some level of false positives. False positives in IDS are false alerts that security people receive. They can be annoying if they’re too frequent, but we expect them to a certain degree. Being able to accommodate some false positives lets us significantly reduce the false negatives. In other words, we calibrate IDS systems to be more sensitive and detect a much higher number of attacks.

The diagram below shows the ideal calibration of IDS and IPS: no false positives for the preventive and no false negatives for the detective. The area between the preventive and detective lines shows, in red, the attacks that go undetected by the preventive (its false negatives) and, in blue, the false alerts of the detective (its false positives).

If the preventive had false positives, they would show as blue crossing right over the preventive line and getting blocked. Similarly, false negatives of the detective would show as red crossing left over the detective line without being detected.

Using a combination of IDS and IPS is a simple way to estimate the number of false negatives in the IPS: attacks the IDS detects but the IPS failed to block are, by definition, false negatives of the IPS. There are other ways of estimating this, such as static analysis.

So what’s better, IDS or IPS?

The truth is that we need both. Preventive to block as much as possible and detective to identify and alert about the rest. That’s how we get close to airtight.

There are other strategic mechanisms for getting close to airtight, like using serial and partially overlapping defenses. That takes us to the risk-control matrix, which is the subject of another article.

If you want to know more or have a free discussion with one of our experts, please contact us at info@bluecoreresearch.com

Data-centric security

Designing modern security

Balancing security isn’t simple. How can you identify the areas that require attention and invest the “right amount” of resources?

The problem with any domain comprising many unrelated components is that it’s nearly impossible to plan a proper strategy, assign value, and achieve a balanced approach that covers everything. The result is uneven security: some areas are heavily fortified, while others have no coverage.

This article starts putting some order into this chaos to understand where things belong and how valuable they are.

Perimeter and Data-Centric

First, let’s divide our security universe between measures designed to keep outsiders out and measures designed to keep bad actors away from the data. Perimeter security prevents access to the corporate network, while data-centric security protects the data from malicious access.

A good test to check if a security measure is part of the perimeter or data-centric is asking: “Is this going to protect us from an insider threat?” Insider threats are people who work for the company, so they are already inside the perimeter. If a measure will prevent them from stealing or modifying data, it’s a measure that protects the data.

Perimeter

External attackers usually need to breach an internal asset before gaining access to data. That perimeter breach is a frequent precursor to a data breach.

However, there are a few exceptions to that rule. For example, a publicly-facing application could give data access without breaching anything other than that application. That is an example of data that can be accessed directly from the outside without perimeter protection. Another example is data outside the perimeter, like a laptop or a backup tape outside the building. They also have no perimeter protection and can be stolen without access to internal assets. Obviously, the perimeter also doesn’t protect us from an insider threat that’s already inside the perimeter.

Perimeter security is crucial, but none of these measures are airtight. That means perimeter security only aims to reduce the number of attacks on the data, not prevent them.

For example, email security only intends to reduce spam, not eliminate it. The spam we get almost every day is definite proof of that. And even if one perimeter measure were airtight, there are so many other perimeter attack vectors that it’s foolish to pretend we can cover them all perfectly. Therefore, the perimeter aims to reinforce data-centric measures, not replace them. Relying on the perimeter alone ensures we’ll have a data breach.

Data-Centric

While some consider it an exaggeration, we believe data-centric measures should aspire to be airtight. That’s a strong statement in sharp contrast to the perimeter and is not easy to achieve. But it’s not impossible.

Data-centric, as a whole, is usually our last line of defense. We have no other barriers to protect us beyond it. When all the data-centric measures fail, we’ll have a data breach. So, if possible, we should overlay data-centric measures to get additional protection. That’s possible because data-centric measures deploy in serial, unlike perimeter protections, which deploy in parallel: an attacker must defeat every serial layer to reach the data, whereas breaching any single parallel perimeter vector is enough to get inside.

Since data-centric security revolves around the data, database and application security are its main pillars. We also have server security, physical and network security for the data center, encryption of data at rest, and controls against administrators. We’re talking about everything that stands between people and the data.

Summary

We have our first division of security between perimeter and data-centric. We also identified the role of data-centric security in protecting us from internal threats and external threats that penetrate the perimeter.

Finally, we aim to make our data-centric protection airtight. That will be the subject for the next two weeks, looking at IDS vs. IPS and how to leverage the risk/control matrix.

As people increasingly work from home, the perimeter becomes impossible to secure. We have no control over the physical perimeter of people at home, their home networks, personal computers, or other devices. It’s a lost battle that symbolizes the death of the perimeter, and one more reason why data-centric protection is the primary form of security in the 21st century.


Webinar
Application Security

Thursday, May 16th

11am PT – 1pm CT – 2pm ET

Free webinar: Protect your applications

Application security is challenging, with many attack vectors to consider, so how can we combat all these threats?

Join our webinar and listen to our experts discussing mitigation strategies for various attacks across multiple application tiers.

Topics:

  • Application tiers: protecting the different areas of the application stack from the database to the end-user.
  • Threats & Attacks: protecting against SQL injection, Cross-site scripting (XSS), Privilege abuse, and more.
  • And most importantly, how to strengthen your application security through detection and prevention technologies.

Don’t miss our event
SIGN UP HERE


Speakers

Ana Houlston

DBlandIT

Christian Carlos

TS4B

Felipe Araya Jaramillo

BT Consulting

Carlos Miquelarena

Blue Core Research

Paulo Camiletti

Blue Core Research

Q&As: Data Masking

Common Data Masking Questions:

Do you copy data out of production? Static data masking is a simple, easy, and effective way to protect it and prevent a breach.

1. Why mask? Because we can’t protect the data outside of production: Imagine copying customer data for testing. How could you protect it after copying it? Without data masking, you will expose all names, addresses, phone numbers, emails, financial information, and more. Static masking replaces these values with good fakes so you can test without jeopardizing your confidential information or that of the people who entrusted it to you.

2. Reverse the masking? Impossible. That’s the point: Unlike encryption, static masking is a one-way transformation. The masked data resembles the original but doesn’t reveal it. This irreversible process ensures your sensitive information isn’t exposed even if the masked data falls into the wrong hands.

3. Data integrity? It’s a must. Otherwise, the application won’t function properly in the test environment, or the test will be ineffective. The masking process must preserve data validity, consistency, and referential integrity. It’s like an elaborate disguise: everything looks different but has to work the same way.

4. A single algorithm? Of course not. There are many ways to mask each type of data. Choosing the strategy that fits your requirements will ensure you achieve your security goals while getting the most out of your data.

For example, value manipulation will preserve some aspects of the original data but can potentially offer weaker security. Data generation will provide perfect security but may impair test quality. Data profiling and custom profiles are two other strategies that balance security and test quality.
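
As a minimal sketch of one such strategy (illustrative only, not how any particular product implements it), mapping each original value to the same fake value everywhere it appears preserves consistency and referential integrity across tables:

    import random

    # Minimal sketch of consistent static masking (illustration only).
    fake_names = ["Alex Green", "Sam Rivers", "Dana Wells", "Kim Stone"]
    mapping = {}  # original value -> fake value, reused everywhere it appears

    def mask(value):
        if value not in mapping:
            mapping[value] = random.choice(fake_names)
        return mapping[value]

    customers = [("C1", "Jane Doe"), ("C2", "John Roe")]
    orders = [("O1", "Jane Doe"), ("O2", "John Roe")]

    masked_customers = [(cid, mask(name)) for cid, name in customers]
    masked_orders = [(oid, mask(name)) for oid, name in orders]
    # Each original name maps to the same fake in both tables, so joins still
    # work. A real solution would also avoid collisions and preserve formats.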

5. Should I worry about performance? Yes and no: data masking performance isn’t an issue unless it’s so slow that masking becomes impossible. Let us explain:

There’s a common preconception that static data masking is inherently slow and resource-intensive, but that it’s not a big deal since it only has to run once, after copying the data. Or so some would say.

The Truth

It doesn’t matter if a masking process takes 30 seconds, 5 minutes, or half an hour. It doesn’t run often, and it never runs on production systems, where it could slow down business-critical processes.

However, it’s not entirely true that it doesn’t matter since masking becomes impractical if it takes days or weeks to run. Nor is it true that masking runs only once, as it must run every time you refresh your test data. As masking becomes faster and easier, you can update your test data more frequently, getting more out of your data.

Performance Culprits

Slow masking is usually due to one of these reasons:

  • Product selection: different solutions offer different performance capabilities. Common reasons include code quality, database APIs, transaction size, etc. For example, chatty protocols combined with high latency can result in very slow masking.
  • Database performance: like any database-driven product, masking performance also depends on the performance of the underlying database. Most databases aren’t normally tuned for masking.
  • Triggers: can be one of the most challenging problems as these small pieces of code execute whenever data changes. When updating millions of rows, a trigger will run millions of times, causing the masking process to run forever. However, triggers are often essential for data validity and integrity, and you shouldn’t automatically disable them.

Taming the “performance beast”

Addressing these issues will allow data masking to become an integral component of a dynamic data lifecycle rather than a slow, unusable burden everyone wants to avoid.

Here are some ideas to consider:

Product selection is always challenging. Like all IT purchases, with data masking, you should also test several solutions in your environment using your network, database, and data volume. While trials can be time-consuming, they are the only way to ensure the solutions work well in your environment. Be careful not to rely on brand recognition, market analysts, or friendly advice, as they can backfire and result in a failed project.

Database performance can be improved with a little know-how. Data masking is a very write-intensive process that requires different database tuning since most applications are read-intensive. To temporarily improve database performance during masking, you can, for example, remove indexes and constraints, stop archiving, suspend replication, etc. Pre- and post-masking actions can help automate these actions during masking. Work with your DBAs to maximize your database write speed.
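
For example, here is a hypothetical sketch of pre- and post-masking actions. The exact statements depend on your database and schema; the syntax is Oracle-like, and the index and constraint names are invented:

    # Hypothetical pre-/post-masking actions (names and syntax are examples).
    pre_masking = [
        "ALTER TABLE orders DISABLE CONSTRAINT fk_orders_customers",
        "DROP INDEX idx_customers_name",
    ]
    post_masking = [
        "CREATE INDEX idx_customers_name ON customers (last_name)",
        "ALTER TABLE orders ENABLE CONSTRAINT fk_orders_customers",
    ]

    def run_all(cursor, statements):
        # Execute each tuning statement before or after the masking job.
        for stmt in statements:
            cursor.execute(stmt)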

Finally, trigger performance issues can be challenging and require time and know-how. First, identify the triggers that run when you mask your data. Determine which are relevant to the data you are masking and disable the rest. Second, convert the necessary triggers into a vertical update procedure and use that procedure during masking instead of the triggers. It works because a single update of all the rows is much faster than millions of small updates. Core Audit can help speed up this process by identifying the SQLs that need to be rewritten as vertical updates.
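
To see why a vertical update is so much faster, here is a minimal sketch (using SQLite purely for illustration) comparing a per-row loop, which would fire an update trigger once per row, with a single set-based update:

    import sqlite3

    # Minimal sketch: per-row updates vs. one vertical UPDATE (SQLite for illustration).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(i, f"555-{i:04d}") for i in range(1000)])

    # Slow: one UPDATE per row. With a trigger, the trigger fires 1,000 times.
    for (cid,) in db.execute("SELECT id FROM customers").fetchall():
        db.execute("UPDATE customers SET phone = '555-0000' WHERE id = ?", (cid,))

    # Fast: a single vertical UPDATE applies the same change in one pass.
    db.execute("UPDATE customers SET phone = '555-0000'")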

Masking is essential

When looking at the details of performance problems, data integrity, etc., it’s easy to lose sight of the big picture: why is data masking so important?

  • Reduces risk: eliminating sensitive data outside of production will dramatically reduce your exposure and risk profile.
  • Simplifies compliance: Masking is essential to reduce the scope of various compliance and data privacy regulations. Systems that contain only masked data typically aren’t subject to them.
  • Improves development: Masked data drives development and test environments, improving product quality, shortening development cycles, and accelerating project timelines.

Final Thoughts

Data masking is a critical component in the data lifecycle, enabling us to use our data to drive and improve many aspects of our business. From product development to testing, data analytics, and more, securely using our data outside of production lets us multiply the value we derive from it.

Masking is simple and essential but not trivial. Many projects fail for a variety of reasons, such as using an inappropriate solution, failing to define the right masking policies, performance issues, and more.

Through our experience working with customers, we found that customers always succeed when they have the right solution and a support team committed to their success. Missing either component greatly decreases the chances of success, and lacking both guarantees failure.

Contact us today at info@bluecoreresearch.com to learn more about how we can help you mask and secure your data.

Database Visibility:
What Everyone Is Missing

Recent polls of cybersecurity professionals show most respondents (82%) have partial or no visibility into their databases and need it. Few said they have good visibility (7%) or don’t need it (11%).

The surveys were conducted in various LinkedIn groups in English and Spanish, asking, “Do you have visibility into what’s happening within your database?” Almost all respondents to the English surveys claimed to have partial visibility (40%) or no visibility and a need for it (53%).

In the Spanish survey, more people said they have good visibility or don’t require it, thereby lowering the demand for visibility from 93% to the 82% average.

The challenge of database visibility

Databases store a lot of sensitive corporate and third-party data, such as financial data, personal information, and much more. This data must not be compromised, leaked, or manipulated, but most of us have no visibility into who accesses it, when, or how.

We know we need visibility. Without it, we cannot design effective controls. It is a generally accepted first step in security: don’t fly blind. You can’t control what you can’t see. So how come database visibility is something most organizations lack?

Databases are challenging because of the massive activity volume and strict performance requirements. How do you get visibility into billions of SQLs? How do you even collect the information without impacting performance, let alone process and consume it? How do you gain control and know when someone is doing something malicious when there is so much noise? Even when limiting the scope to confidential table accesses, there is too much activity to comprehend.

But without visibility, how could you set up controls, reports, or alerts? What do you look out for? How will you know if a data breach occurred? And when one does, how did it happen?

How to obtain and improve visibility

As mentioned above, database visibility is challenging due to the high activity and performance requirements. It’s unlikely you’ll get far on your own without the right solution and underlying technologies.

Simple tools rely on built-in database functionality. These often have a high performance impact while providing almost no visibility. Even expensive tools often use older technologies that can’t deliver the desired visibility and value.

In recent years, Core Audit introduced significant advances and innovations in capture, processing, storage, and analysis technologies. The result offers impressive and unparalleled visibility. Talk to us to learn more and see how we can help you protect your data.

Final Thoughts

These days, we need to protect more data and more systems while facing greater risks than ever before. These challenges are significant, and we need the visibility and control provided by the most recent advancements.

These latest polls show that most cybersecurity professionals are aware of the importance of visibility. However, most organizations still lack visibility into their databases.

Contact us today at marketing@bluecoreresearch.com to learn more about how we can help you improve visibility and protect your sensitive information.

Anomaly Analysis

How can you control activity in busy systems like databases? How will you know there’s a malicious SQL inside billions of those? Read to learn more.

Anomaly analysis applies behavioral profiling, helping you save time while expanding your control to vast activity volumes.

These capabilities are made possible by the unique security repository technology in Core Audit. The anomaly analysis engine dynamically creates behavioral profiles based on the activity captured by the security repository, allowing you to compare the present with the past. Contrasting current activity with historical behaviors quickly highlights differences indicative of security problems.

That breaks the preconception that you can’t control large activity volumes like those of the database or application. Anomaly analysis empowers you to find the needle in the haystack – a single offensive SQL within billions.

How does it work?

Like activity in all IT systems, database activity is repetitive. While values change, patterns persist. The anomaly analysis engine is capable of more, but most analyses center on five aspects (a minimal sketch follows the list):

  1. New activity: something seen today but not in the past, like a new user, a new program, or a new SQL.
  2. High activity volume: activity that exists both now and in the past but occurs much more frequently now.
  3. Time of day: activity that occurs at a different time now than in the past.
  4. Combined dimensions: a change in a single dimension might be a new user; a change in multiple dimensions might be a new user + IP combination. Even if the user and the IP are each known, that user may have never used that IP.
  5. Filters: focusing different anomalies on different areas of interest, like sensitive tables, the application, or particular users. Various subsets of the activity pose distinct risks, but they are also likely to exhibit different behaviors and, therefore, benefit from different types of anomalies.
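
Here is a minimal sketch of the idea behind these analyses (illustrative only, not Core Audit’s engine): summarize historical activity as a profile of observed combinations and flag anything outside it. The profile entries and SQL constructs below are invented:

    # Minimal sketch: flag activity absent from the historical profile.
    history = {
        ("app_user", "10.0.0.5", "SELECT ... FROM orders WHERE id = ?"),
        ("app_user", "10.0.0.5", "SELECT ... FROM customers WHERE id = ?"),
    }

    def is_anomalous(user, ip, sql_construct):
        # Combined dimensions: even a known user and a known IP can form a
        # combination (or run a SQL construct) never seen before.
        return (user, ip, sql_construct) not in history

    print(is_anomalous("app_user", "10.0.0.5",
                       "SELECT ... FROM customers WHERE id = ? UNION SELECT ..."))
    # True: a new SQL construct, exactly what a SQL injection produces.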

There are many ways to select the types of anomalies to use. They could come from behaviors you expect, patterns you observed in proactive forensics, the Core Audit wizards, the Core Audit Control Guide, or just trial and error. One of the crucial features of the anomaly engine is that you can immediately test an anomaly and see the results. That makes choosing and tuning anomalies a relatively simple process.

Benefits

We initially created the anomaly engine to help answer a common customer request. Looking at traditional declarative auditing, some customers said, “I don’t want to tell Core Audit what to report on. I want Core Audit to tell me what to look at.”

As the technology evolved, it helped secure subsets of the activity previously thought impossible to control. For example, to protect the application with its billions of SQLs.

Anomalies have also proven effective in detecting SQL injection and injection attempts. SQL injection, inevitably, causes the application to do something it isn’t supposed to do, thereby creating a new SQL construct that’s easy to identify.

Today, anomalies are a powerful tool to reduce reporting volume, securing huge activity volumes with a relatively low false positive ratio.

Final Thoughts

Anomaly analysis is vital to modern security, allowing you to do more with less. Control higher volumes from more sources with fewer people, less time, and a lower skill set.

Instead of poring over endless reports, anomalies do much of the heavy lifting, quickly pointing you to potential problems.

While they’re not a magic solution that solves everything, anomalies can and should be a key element in any security strategy.

Talk to us to learn more and experience the Core Audit difference.

Proactive Forensics

Visibility is the first step in any security effort. You can’t secure what you can’t see. Learn more about the value of visibility and how to achieve it.

One of the popular myths about security is that you can get it out of the box. Just install something, and voila! You’re magically secured. But that never works.

Regardless of what you’re trying to secure, your first step should always be understanding the activity. You should know how the system is used, by whom, when, etc. Gaining this kind of visibility into a live production system is fundamental to figuring out how to secure it.

“You can’t secure what you can’t see.”

However, gaining visibility into IT systems is not simple. It also gets increasingly challenging when the systems process large activity volumes. Databases can process thousands of SQLs every second. All this activity is nearly impossible to understand without the right tools.

Core Audit is a comprehensive security solution that includes, among its many capabilities, the ability to perform proactive forensic investigations. These will give you insight into what’s happening in your database. That’s the first step we recommend when setting up your security.

Once you know what’s happening in your systems, you can design effective reports, configure alerts that make sense, define practical anomaly analysis, and more.

But proactive forensics is not just about setting up your security. It also allows you to identify gaps in your security measures. As activity evolves, we must adjust the controls to fit the new activity patterns, and it’s impossible to do that without visibility. Otherwise, your controls will gradually become outdated and eventually obsolete.

Proactive forensics also lets you identify poor security practices: people sharing accounts, connecting from insecure locations, dumping tables out of the database, and more. While not breaches themselves, these common bad practices increase your exposure and make a data breach more likely.

There are many motives for regular activity reviews, but they all share the same underlying principle: people should be part of the security process. No matter the reports, alerts, or automation we create, regular human review can easily flag behaviors and security needs that a machine never will.

Talk to us to learn more and try Core Audit to see the difference.