Webinar: May 16, 2024

Webinar
Application Security

Thursday, May 16th

11 am PT – 1 pm CT – 2 pm ET

Free webinar: Protect your applications

Application security is challenging, with many attack vectors to consider. How can we combat all these threats?

Join our webinar and listen to our experts discuss mitigation strategies for various attacks across multiple application tiers.

Topics:

  • Application tiers: protecting the different areas of the application stack from the database to the end-user.
  • Threats & Attacks: protecting against SQL injection, Cross-site scripting (XSS), Privilege abuse, and more.
  • And most importantly, how to strengthen your application security through detection and prevention technologies.

Don’t miss our event
SIGN UP HERE

Speakers

Ana Houlston

DBlandIT

Christian Carlos

TS4B

Felipe Araya Jaramillo

BT Consulting

Carlos Miquelarena

Blue Core Research

Paulo Camiletti

Blue Core Research

Q&As: Data Masking

Common Data Masking Questions:

Do you copy data out of production? Static data masking is a simple and effective way to protect that data and prevent a breach.

1. Why mask? Because we can’t protect the data outside of production: Imagine copying customer data for testing. How could you protect it after copying it? Without data masking, you will expose all names, addresses, phone numbers, emails, financial information, and more. Static masking replaces these values with good fakes so you can test without jeopardizing your confidential information or that of the people who entrusted it to you.

2. Reverse the masking? Impossible. That’s the point: Unlike encryption, static masking is a one-way transformation. The masked data resembles the original but doesn’t reveal it. This irreversible process ensures your sensitive information isn’t exposed even if the masked data falls into the wrong hands.

3. Data integrity? Is a must. Otherwise, the application won’t function properly in the test environment, or the test will be ineffective. The masking process must preserve data validity, consistency, and referential integrity. It’s like an elaborate disguise: everything looks different but has to work the same way.

4. A single algorithm? Of course not. There are many ways to mask each type of data. Choosing the strategy that fits your requirements will ensure you achieve your security goals while getting the most out of your data.

For example, value manipulation will preserve some aspects of the original data but can potentially offer weaker security. Data generation will provide perfect security but may impair test quality. Data profiling and custom profiles are two other strategies that balance security and test quality.
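To make the contrast concrete, here is a minimal Python sketch (illustrative only, not Core Audit functionality; the field values and masking key are assumptions): value manipulation preserves the format of a phone number while deterministically replacing its digits, and data generation produces an entirely synthetic name.

```python
# Illustrative sketch of two masking strategies (not Core Audit functionality).
import hashlib
import random

def manipulate_phone(phone: str, secret: str = "mask-key") -> str:
    """Value manipulation: preserve the format (punctuation, length) but replace
    every digit deterministically, so the same original always maps to the same
    fake -- which also helps keep masked values consistent across tables."""
    digest = hashlib.sha256((secret + phone).encode()).hexdigest()
    digit_stream = [str(int(c, 16) % 10) for c in digest]
    out, i = [], 0
    for ch in phone:
        if ch.isdigit():
            out.append(digit_stream[i])
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def generate_name(rng: random.Random) -> str:
    """Data generation: produce a fully synthetic value unrelated to the
    original -- stronger security, but it loses the original distribution."""
    first = rng.choice(["Alex", "Dana", "Kim", "Lee", "Maria", "Sam"])
    last = rng.choice(["Garcia", "Ng", "Okafor", "Rossi", "Schmidt", "Silva"])
    return f"{first} {last}"

rng = random.Random(42)
print(manipulate_phone("+1 (415) 555-0182"))  # format preserved, digits replaced
print(generate_name(rng))                      # entirely synthetic replacement
```

Note that the deterministic manipulation above is also what keeps repeated values and foreign keys consistent, tying back to the integrity requirement in question 3.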

5. Should I worry about performance? Yes and no: data masking performance isn’t an issue unless it’s so slow that masking becomes impractical. Let’s explain:

There’s a common preconception that static data masking is inherently slow and resource-intensive, but that it doesn’t matter because it only runs once, after copying the data.

The Truth

It doesn’t matter if a masking process takes 30 seconds, 5 minutes, or half an hour. It doesn’t run often, and it never runs on production systems, where it could slow down business-critical processes.

However, it’s not entirely true that it doesn’t matter since masking becomes impractical if it takes days or weeks to run. Nor is it true that masking runs only once, as it must run every time you refresh your test data. As masking becomes faster and easier, you can update your test data more frequently, getting more out of your data.

Performance Culprits

Slow masking is usually due to one of these reasons:

  • Product selection: different solutions offer different performance capabilities. Common reasons include code quality, database APIs, transaction size, etc. For example, chatty protocols combined with high latency can result in very slow masking.
  • Database performance: like any database-driven product, masking performance also depends on the performance of the underlying database. Most databases aren’t normally tuned for masking.
  • Triggers: can be one of the most challenging problems as these small pieces of code execute whenever data changes. When updating millions of rows, a trigger will run millions of times, causing the masking process to run forever. However, triggers are often essential for data validity and integrity, and you shouldn’t automatically disable them.

Taming the “performance beast”

Addressing these issues will allow data masking to become an integral component of a dynamic data lifecycle rather than a slow, unusable burden everyone wants to avoid.

Here are some ideas to consider:

Product selection is always challenging. Like all IT purchases, with data masking, you should also test several solutions in your environment using your network, database, and data volume. While trials can be time-consuming, they are the only way to ensure the solutions work well in your environment. Be careful not to rely on brand recognition, market analysts, or friendly advice, as they can backfire and result in a failed project.

Database performance can be improved with a little know-how. Data masking is a very write-intensive process that requires different database tuning, since most applications are read-intensive. To temporarily improve database performance during masking, you can, for example, remove indexes and constraints, stop archiving, suspend replication, etc. Pre- and post-masking scripts can automate these steps. Work with your DBAs to maximize your database write speed.
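As an illustration, here is a hedged sketch of what pre- and post-masking actions might look like when scripted through a generic Python DB-API cursor. The table, index, and constraint names are hypothetical, and the Oracle-style statements are assumptions; the right set of actions depends on your database and should be agreed with your DBAs.

```python
# Hypothetical pre/post-masking actions driven through a generic DB-API cursor.
# Object names and Oracle-style statements are illustrative assumptions only.

PRE_MASKING = [
    "ALTER INDEX customers_email_idx UNUSABLE",              # skip index maintenance
    "ALTER TABLE orders DISABLE CONSTRAINT fk_orders_cust",  # skip FK checks
    "ALTER TABLE customers NOLOGGING",                       # reduce logging overhead
]

POST_MASKING = [
    "ALTER TABLE customers LOGGING",
    "ALTER TABLE orders ENABLE CONSTRAINT fk_orders_cust",
    "ALTER INDEX customers_email_idx REBUILD",
]

def run_actions(cursor, statements):
    """Execute each tuning statement in order; any failure raises immediately."""
    for stmt in statements:
        cursor.execute(stmt)

# Typical flow (connection handling omitted):
#   run_actions(cur, PRE_MASKING)
#   ... run the masking job ...
#   run_actions(cur, POST_MASKING)
```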

Finally, trigger performance issues can be challenging and require time and know-how. First, identify the triggers that run when you mask your data. Determine which are relevant to the data you are masking and disable the rest. Second, convert the necessary triggers into a vertical update procedure and use that procedure during masking instead of the triggers. It works because a single update of all the rows is much faster than millions of small updates. Core Audit can help speed up this process by identifying the SQLs that need to be rewritten as vertical updates.
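As a hypothetical illustration (the trigger, table, and column names are assumptions, written in Oracle-style syntax), a row-level trigger that keeps a derived column in sync could be disabled during masking and replaced by a single set-based update that reproduces its effect in one pass:

```python
# Hypothetical example of converting a row-level trigger into a single
# "vertical" update during masking. Names and Oracle-style syntax are assumed.

ROW_TRIGGER = """
CREATE OR REPLACE TRIGGER trg_customers_fullname
BEFORE INSERT OR UPDATE ON customers
FOR EACH ROW
BEGIN
  :NEW.full_name := :NEW.first_name || ' ' || :NEW.last_name;
END;
"""

# During masking: disable the trigger, mask the columns, then apply one bulk
# update that reproduces the trigger's effect for every row, and re-enable it.
MASKING_FLOW = [
    "ALTER TRIGGER trg_customers_fullname DISABLE",
    # ... masking updates on first_name / last_name run here ...
    "UPDATE customers SET full_name = first_name || ' ' || last_name",
    "ALTER TRIGGER trg_customers_fullname ENABLE",
]

def apply_flow(cursor, statements):
    """Run each step of the masking flow in order."""
    for stmt in statements:
        cursor.execute(stmt)
```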

Masking is essential

When looking at the details of performance problems, data integrity, etc., it’s easy to lose sight of the big picture: why is data masking so important?

  • Reduces risk: Eliminating sensitive data outside of production dramatically reduces your exposure and risk profile.
  • Simplifies compliance: Masking is essential to reduce the scope of various compliance and data privacy regulations. Systems that contain only masked data typically fall outside that scope.
  • Improves development: Masked data drives development and test environments, improving product quality, shortening development cycles, and accelerating project timelines.

Final Thoughts

Data masking is a critical component in the data lifecycle, enabling us to use our data to drive and improve many aspects of our business. From product development to testing, data analytics, and more, securely using our data outside of production lets us multiply the value we derive from it.

Masking is simple and essential but not trivial. Many projects fail for a variety of reasons, such as using an inappropriate solution, failing to define the right masking policies, performance issues, and more.

Through our experience working with customers, we have found that customers succeed when they have the right solution and a support team committed to their success. Missing either component greatly decreases the chances of success, and lacking both guarantees failure.

Contact us today at info@bluecoreresearch.com to learn more about how we can help you mask and secure your data.

Database Visibility: Poll Results

Database Visibility:
What Everyone Is Missing

Recent polls of cybersecurity professionals show most respondents (82%) have partial or no visibility into their databases and need it. Few said they have good visibility (7%) or don’t need it (11%).

The surveys were conducted in various LinkedIn groups in English and Spanish, asking: “Do you have visibility into what’s happening within your database?” Almost all English respondents claimed to have partial visibility (40%) or to have no visibility and need it (53%).

In the Spanish survey, more people said they have good visibility or don’t require it, thereby lowering the demand for visibility from 93% to the 82% average.

The challenge of database visibility

Databases store a lot of sensitive corporate and third-party data, such as financial data, personal information, and much more. This data must not be compromised, leaked, or manipulated, but most of us have no visibility into who accesses it, when, or how.

We know we need visibility. Without it, we cannot design effective controls. It is a generally accepted first step in security: don’t fly blind. You can’t control what you can’t see. So why do most organizations lack database visibility?

Databases are challenging because of the massive activity volume and strict performance requirements. How do you get visibility into billions of SQLs? How do you even collect the information without impacting performance, let alone process and consume it? How do you gain control and know when someone is doing something malicious when there is so much noise? Even when limiting the scope to confidential table accesses, there is too much activity to comprehend.

But without visibility, how could you set up controls, reports, or alerts? What do you look out for? How will you know if a data breach occurred? And when one does occur, how will you know how it happened?

How to obtain and improve visibility

As mentioned above, database visibility is challenging due to the high activity and performance requirements. It’s unlikely you’ll get far on your own without the right solution and underlying technologies.

Simple tools rely on built-in database functionality. These often have a high performance impact while providing almost no visibility. Even expensive tools often use older technologies that can’t deliver the desired visibility and value.

In recent years, Core Audit introduced significant advances and innovations in capture, processing, storage, and analysis technologies. The result offers impressive and unparalleled visibility. Talk to us to learn more and see how we can help you protect your data.

Final Thoughts

These days, we need to protect more data and more systems while facing greater risks than ever before. These challenges are significant, and we need the visibility and control provided by the most recent advancements.

These latest polls show that most cybersecurity professionals are aware of the importance of visibility. However, most organizations still lack visibility into their databases.

Contact us today at marketing@bluecoreresearch.com to learn more about how we can help you improve visibility and protect your sensitive information.

Anomaly Analysis

How can you control activity in busy systems like databases? How will you know there’s a malicious SQL among billions of others? Read on to learn more.

Anomaly analysis uses behavioral analysis to save you time while expanding your control to vast activity volumes.

These capabilities are made possible by the unique security repository technology in Core Audit. The anomaly analysis engine dynamically creates behavioral profiles based on the activity captured by the security repository, allowing you to compare the present with the past. Contrasting current activity with historical behaviors quickly highlights differences indicative of security problems.

That lets you break the preconception that you can’t control large activity volumes like those of the database or application. Anomaly analysis empowers you to find the needle in the haystack: a single offensive SQL within billions.

How does it work?

Like activity in all IT systems, database activity is repetitive. While values change, patterns persist. While the anomaly analysis engine is capable of more, most analyses center on five aspects (illustrated in the sketch after this list):

  1. New activity. Something seen today but not in the past, like a new user, a new program, a new SQL, and more.
  2. High activity volume. Activity that exists both now and in the past but occurs more often now.
  3. Time of day. Activity that now occurs at a different time than it did in the past.
  4. Combined dimensions. A change in a single dimension can be a new user. A change in multiple dimensions is, for example, a new user + IP combination: even if the user and IP are individually known, that user may have never used that IP.
  5. Filters. Focus different anomalies on different areas of interest, like sensitive tables, the application, particular users, and more. Various subsets of the activity pose distinct risks, but they are also likely to exhibit different behaviors and, therefore, benefit from different types of anomalies.
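To make the idea tangible, here is a minimal Python sketch of a baseline-versus-today comparison across a few of these dimensions. It is only an illustration of the concept, not Core Audit’s engine; the dimensions, thresholds, and data shapes are assumptions.

```python
# Conceptual sketch of anomaly analysis (not Core Audit's engine). Activity is
# modeled as (user, program, sql_construct, hour) tuples; the baseline is a
# comparable historical window (e.g., the same day last week).
from collections import Counter

def build_baseline(history):
    combos = set()        # (user, program, construct) combinations seen before
    user_hours = set()    # hours of day each user was active
    volume = Counter()    # how often each SQL construct ran
    for user, program, construct, hour in history:
        combos.add((user, program, construct))
        user_hours.add((user, hour))
        volume[construct] += 1
    return combos, user_hours, volume

def find_anomalies(today, combos, user_hours, volume, spike_factor=5):
    anomalies = []
    today_volume = Counter(construct for _, _, construct, _ in today)
    for user, program, construct, hour in today:
        if (user, program, construct) not in combos:
            anomalies.append(("new activity", user, program, construct))
        if (user, hour) not in user_hours:
            anomalies.append(("unusual time of day", user, hour))
    for construct, count in today_volume.items():
        if volume[construct] and count > spike_factor * volume[construct]:
            anomalies.append(("high activity volume", construct, count))
    return anomalies
```

Filters and combined dimensions follow the same pattern: restrict the input to a subset of interest (such as sensitive tables) or add more fields to the profiled tuple.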

There are many ways to select the types of anomalies to use. They could be behaviors you expect, patterns you observed in proactive forensics, the Core Audit wizards, the Core Audit Control Guide, or just trial and error. But one of the crucial features of the anomaly engine is that you can immediately test an anomaly and see the results. That makes choosing and tuning anomalies a relatively simple process.

Benefits

We initially created the anomaly engine to help answer a common customer request. Looking at traditional declarative auditing, some customers said, “I don’t want to tell Core Audit what to report on. I want Core Audit to tell me what to look at.”

As the technology evolved, it helped secure subsets of the activity previously thought impossible to control. For example, to protect the application with its billions of SQLs.

Anomalies have also proven effective in detecting SQL injection and injection attempts. SQL injection, inevitably, causes the application to do something it isn’t supposed to do, thereby creating a new SQL construct that’s easy to identify.

Today, anomalies are a powerful tool for reducing reporting volume, securing huge amounts of activity with a relatively low false-positive ratio.

Final Thoughts

Anomaly analysis is vital to modern security, allowing you to do more with less. Control higher volumes from more sources with fewer people, less time, and a lower skill set.

Instead of you poring over endless reports, anomalies do much of the heavy lifting, quickly pointing you to potential problems.

While they’re not a magic solution that solves everything, anomalies can and should be a key element in any security strategy.

Talk to us to learn more and experience the Core Audit difference.

Proactive Forensics

Visibility is the first step in any security effort. You can’t secure what you can’t see. Learn more about the value of visibility and how to achieve it.

One of the popular myths about security is that you can get it out of the box. Just install something, and voila! You’re magically secured. But that never works.

Regardless of what you’re trying to secure, your first step should always be understanding the activity. You should know how the system is used, by whom, when, etc. Gaining this kind of visibility into a live production system is fundamental to figuring out how to secure it.

“You can’t secure what you can’t see.”

However, gaining visibility into IT systems is not simple. It also gets increasingly challenging when the systems process large activity volumes. Databases can process thousands of SQLs every second. All this activity is nearly impossible to understand without the right tools.

Core Audit is a comprehensive security solution that includes, among its many capabilities, the ability to perform proactive forensic investigations. These will give you insight into what’s happening in your database. That’s the first step we recommend when setting up your security.

Once you know what’s happening in your systems, you can design effective reports, configure alerts that make sense, define practical anomaly analysis, and more.

But proactive forensics is not just about setting up your security. It also allows you to identify gaps in your security measures. As activity evolves, we must adjust the controls to fit the new activity patterns, and it’s impossible to do that without visibility. Otherwise, your controls will gradually become outdated and eventually obsolete.

Proactive forensics also lets you identify poor security practices: people sharing accounts, connecting from insecure locations, dumping tables out of the database, and more. While not breaches, these common bad practices increase your exposure and make a data breach more likely.

There are many motives for regular activity reviews, but they all share the same underlying reason: people should be part of the security process. No matter the reports, alerts, or automation we create, regular human review can easily flag behaviors and security needs that a machine never will.

Talk to us to learn more and try Core Audit to see the difference.