Ulf Mattsson, CTO Protegrity: Data Security to Protect PCI Data Flow

July 2014 by Ulf Mattsson, CTO, Protegrity Corporation

There are innumerable ways that data thieves can attack and penetrate your network security. As the saying goes, it’s not if your systems will be breached, but when. Every organization, especially those that handle PCI data, should operate under the assumption that sooner or later, they will be breached.
So what can we do about it? The new best practices to protect sensitive data, and the data flow throughout the enterprise, are designed with this assumption in mind. They are about reducing risk of data loss, and responding quickly to attacks when they occur.

First, minimize the amount of sensitive data you collect and store. Some elements, such as PINs and CVV/CVC codes, are prohibited from being stored, but in general, if you’re not using data, you’re only increasing risk with no return. If you are using it, or planning to, minimize the number of systems that store or process sensitive data. This will make the data easier to protect, as you will have less to defend.

The next step is to implement data security, as required by the PCI DSS. While access controls provide a basic level of protection, they do nothing to protect the data flow, and the PCI Security Standards Council has recognized the need to go beyond them. Data security is applied in one of two ways: coarse-grained security, at the volume or file level, and fine-grained security, at the column or field level.

Coarse-grained security, such as volume or file encryption, provides adequate protection for data at rest, but volume encryption does nothing once the data leaves that volume. File encryption can also protect files in transit, but, as with access controls, it can lead to issues when sensitive and non-sensitive data cohabit in the same file. And as an “all-or-nothing” solution, once a file is decrypted, the entire file is in the clear.
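
To illustrate that “all-or-nothing” behavior, here is a minimal sketch of file-level encryption, assuming the open-source Python cryptography package and a hypothetical export file name: every field in the file, sensitive or not, is unreadable until the entire file is decrypted.

```python
# Minimal sketch of file-level (coarse-grained) encryption.
# Assumes: pip install cryptography; "cardholder_export.csv" is a hypothetical file.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # one symmetric key for the whole file
f = Fernet(key)

with open("cardholder_export.csv", "rb") as src:
    ciphertext = f.encrypt(src.read())   # the entire file becomes a single opaque blob

with open("cardholder_export.csv.enc", "wb") as dst:
    dst.write(ciphertext)

# To read *any* field, sensitive or not, the whole file must be decrypted:
plaintext = f.decrypt(ciphertext)
```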

The highest levels of data flow security and accessibility can be attained through fine-grained data security methods. These methods are commonly implemented using encryption or tokenization, or, for one-way transformation, masking, hashing, or redaction. Sensitive data protected in this way remains secure at rest, in memory, in transit wherever it flows, and in some cases, in use. In addition, non-sensitive data remains completely accessible, even when stored in the same file as sensitive information.
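
As a rough sketch of what field-level protection looks like in practice, the example below (Python standard library only; the field names and salt are hypothetical) applies one-way methods, masking and hashing, to the sensitive fields of a record while leaving the non-sensitive field fully readable.

```python
# Sketch: fine-grained (field-level) protection using one-way methods.
# Field names and the salt are hypothetical and for illustration only.
import hashlib

def mask_pan(pan: str) -> str:
    """Redact all but the last four digits of a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def hash_field(value: str, salt: bytes = b"demo-salt") -> str:
    """One-way transform for fields only used for matching, never read back."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

record = {"pan": "4556737586899855", "email": "jane@example.com", "store_id": "0042"}

protected = {
    "pan": mask_pan(record["pan"]),        # sensitive field, masked
    "email": hash_field(record["email"]),  # sensitive field, hashed
    "store_id": record["store_id"],        # non-sensitive field stays in the clear
}
print(protected)
```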

However, there are significant differences between the types of fine-grained data security. Encryption changes the data into binary ciphertext, which is larger than the original data and completely unreadable to processes and users. This is a positive in terms of security: you don’t want anyone who is not authorized reading sensitive data (especially payment card data). Split knowledge and dual control of cryptographic keys can further improve security by dividing keys between two or more people. However, encryption has drawbacks when storage is at a premium, as the larger ciphertext will fill up your stores faster. And if processes and users need regular access to unencrypted sensitive data for their job functions, field-level encryption can create performance issues.
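
The split knowledge principle can be illustrated with a simple sketch: the data-encryption key is divided into two random shares, so neither custodian alone learns anything about the key. This uses only the Python standard library and is illustrative, not a description of any vendor’s actual key-management scheme.

```python
# Sketch: split knowledge / dual control of a data-encryption key.
# Each custodian holds one share; both shares are needed to rebuild the key.
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(key))                 # custodian A's share (random)
    share_b = bytes(k ^ a for k, a in zip(key, share_a))    # custodian B's share
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)            # 256-bit data-encryption key
a, b = split_key(key)
assert recombine(a, b) == key            # neither share alone reveals the key
```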

Tokenization transforms the data while preserving the data type and length. For example, the output of tokenizing a credit card number can look identical to a real number, even though it has been randomized and protected. This transparency can be extended to let portions of the original number bleed through, for example the first six digits or the last four of a card number. This exposed business intelligence, and the one-to-one relationship with the original data, allow many users and processes to perform job functions on tokenized data rather than detokenizing each time a transaction occurs. The size of the data remains the same, so storage is unaffected, and performance can be nearly equal to that of clear text. In addition, one of the biggest benefits of tokenization is that systems that only process tokens are considered out of scope for PCI DSS compliance audits.
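
A minimal sketch of such a token table is shown below (in-memory only; a real token server would persist and protect it, and handle collisions). It preserves length and digit type, lets the first six and last four digits bleed through, and returns the same token for the same card number so the one-to-one relationship holds.

```python
# Sketch: length- and type-preserving tokenization with bleed-through of
# the first six and last four digits. Purely illustrative; collision
# handling and secure storage of the token table are omitted.
import secrets

token_table: dict[str, str] = {}   # original PAN -> token (one-to-one)

def tokenize(pan: str) -> str:
    if pan in token_table:                       # same input always yields the same token
        return token_table[pan]
    middle_len = len(pan) - 10
    random_middle = "".join(secrets.choice("0123456789") for _ in range(middle_len))
    token = pan[:6] + random_middle + pan[-4:]   # keep BIN and last four for business use
    token_table[pan] = token
    return token

pan = "4556737586899855"
print(tokenize(pan))   # same length and leading/trailing digits as the original, random middle
```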

Just as important as where and how you protect the data is when you protect it. Securing data from the moment it is created or enters the enterprise is key to removing gaps in security and protecting the data flow: wherever the data travels from the point of creation or ingestion, it remains protected. There are numerous scalable solutions, from gateways to ETL process augmentation, that can handle massive amounts of incoming data. It is equally imperative to protect the data through the point of archive or disposal, to prevent data loss.
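
As a sketch of protection at the point of ingestion, the hypothetical gateway function below applies field-level protection before a record is handed to any downstream system; the field names and the stand-in tokenize function are assumptions, not a specific product’s API.

```python
# Sketch: protect sensitive fields as records enter the enterprise,
# so nothing downstream ever sees them in the clear.
def tokenize(pan: str) -> str:
    # stand-in for the chosen fine-grained method (see the earlier tokenization sketch)
    return pan[:6] + "X" * (len(pan) - 10) + pan[-4:]

def ingest(raw_record: dict) -> dict:
    """Called by the ingestion gateway or ETL step."""
    protected = dict(raw_record)
    protected["pan"] = tokenize(raw_record["pan"])
    return protected

print(ingest({"pan": "4556737586899855", "amount": "19.99"}))
```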

Returning to access, you must also define who can see the data in the clear. While granular security allows full access to non-sensitive data, and methods such as tokenization can provide actionable business intelligence from protected sensitive data, some processes and users may require access to sensitive data in the clear. Fine-grained security methods can be defined to allow various levels of access. For instance, one user or process may only be authorized to view one sensitive field and no others; another may be allowed access to all but one sensitive field. Tokenization can even allow authorization of partial fields. When defining these roles, it may be helpful to assign authority by either those with access or those without, whichever group is smaller.
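
One way to express such access levels is a small, centrally managed policy that maps each role to a disclosure rule per field. The sketch below is hypothetical (role names, fields, and rules are assumptions) and defaults to masking anything the policy does not explicitly allow.

```python
# Sketch: role-based disclosure rules for sensitive fields.
POLICY = {
    "fraud_analyst": {"pan": "clear", "email": "clear"},      # full access
    "support_agent": {"pan": "last4", "email": "masked"},     # partial access
    "report_batch":  {"pan": "masked", "email": "masked"},    # no clear-text access
}

def render(field: str, value: str, role: str) -> str:
    rule = POLICY.get(role, {}).get(field, "masked")   # default-deny: mask anything unlisted
    if rule == "clear":
        return value
    if rule == "last4":
        return "*" * (len(value) - 4) + value[-4:]
    return "*" * len(value)

print(render("pan", "4556737586899855", "support_agent"))   # ************9855
```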

Stepping back to a higher level, a data flow, by definition, travels between systems. Even after the number of systems containing or processing sensitive data has been minimized, the remaining systems require a unified security approach. Unless all of these systems share the same keys (or token tables) and data security policy, consistent authorization becomes impossible and gaps in security begin to develop. It’s important to think at this level, especially because your enterprise is elastic, growing and shrinking over time, and your data security should be able to adapt to that varying scale, as well as to the heterogeneous nature of the enterprise IT environment.

The last, but no less important, step is monitoring, so you can respond swiftly to attacks when they occur. Extensive, granular auditing of access attempts can alert you to possible unauthorized data extraction at a very early stage. Typically, external attackers will only be able to steal protected data, which is worthless to them, but it is important to remediate weaknesses in your systems before attackers burrow in and steal keys or high-level credentials. In addition, rogue authorized employees and other users with privileged access (such as consultants) can still view and steal data in the clear; monitoring is your only defense against such insider threats. Auditing daily usage and setting strict parameters for access creates a clear picture of normal operations and allows you to generate alerts when activity deviates from this baseline.
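
A crude sketch of this kind of baseline alerting is shown below; the audit-log format, the baseline counts, and the 3x threshold are all assumptions chosen for illustration.

```python
# Sketch: flag users whose daily detokenize activity deviates from their baseline.
from collections import Counter

BASELINE = {"svc_reporting": 200, "j.doe": 25}    # typical daily detokenize requests per user

def check_daily_usage(audit_events: list[dict]) -> list[str]:
    counts = Counter(e["user"] for e in audit_events if e["action"] == "detokenize")
    alerts = []
    for user, count in counts.items():
        allowed = BASELINE.get(user, 0)
        if count > 3 * allowed:                   # crude rule: more than 3x normal volume
            alerts.append(f"ALERT: {user} made {count} detokenize requests (baseline {allowed})")
    return alerts

events = [{"user": "j.doe", "action": "detokenize"}] * 90
print(check_daily_usage(events))
```
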
Following these new standards in data security can help to ensure your data remains secure throughout your enterprise, not only at rest, but in transit and in use. As always, it is highly recommended that you thoroughly research solutions before implementation, and decide on a method (or methods) that best suit the data type(s), use case, and risk involved in your specific environment.

