The Value of Detective Controls | https://www.netspi.com/blog/technical-blog/adversary-simulation/the-value-of-detective-controls/ | Sun, 22 Sep 2013

For as long as I can remember, security professionals have spent the majority of their time focusing on preventative controls. Things like patching processes, configuration management, and vulnerability testing all fall into this category. The attention is sensible, of course; what better way to mitigate risk than to prevent successful attacks in the first place?

However, this attention has been somewhat to the detriment of detective controls (I’m intentionally overlooking corrective controls). With budget and effort being concentrated on the preventative, there is little left over for the detective. In recent years, though, we have seen a bit of a paradigm shift; as organizations have begun to accept that they cannot prevent every threat agent, they have also begun to realize the value of detective controls.

Some may argue that most organizations have had detective controls implemented for years and, technically speaking, this is probably true. Intrusion detection and prevention systems (IDS/IPS), log aggregation and review, and managed security services responsible for monitoring and correlating events are nothing new. However, in my experience, these processes and technologies are rarely as effective as advertised: IDS/IPS can easily be rendered ineffective by the noise of today’s networks, logs are only worth reviewing if you’re collecting the right data points, and correlation and alerting only work if they are properly configured. On top of that, far too many companies expect plug-and-play ease of use.

Detective controls should be designed and implemented to identify malicious activity on both the network and endpoints. Just like preventative controls, detective controls should be layered to the extent possible. A good way to design detective controls is to look at the steps in a typical attack and then implement controls in such a way that the key steps are identified and trigger alerts.

Below is a simplified example of such an approach:

Attack Step | Key Detective Control
Gain access to restricted network (bypass network access control) | Network access control alerts for unauthorized devices
Discover active systems and services | IDS / IPS / WAF / HIPS; activity on canary systems that should never be accessed or logged into
Enumerate vulnerabilities | IDS / IPS / WAF / HIPS; activity on canary systems that should never be accessed or logged into
Test for common and weak passwords | Correlation of endpoint logs (e.g., failed login attempts, account lockouts); login activity on canary accounts that should never be used
Execute local system exploit | Anti-malware; monitoring of anti-malware service state; file integrity monitoring (FIM) of security-related GPOs and similar settings
Create accounts in sensitive groups | Audit and alert on changes to membership in the local administrator group, domain admin group, and other sensitive local and domain groups
Access sensitive data | Logging of all access to sensitive data stored in SharePoint, databases, and other data repositories
Exfiltrate sensitive data | Data leakage prevention solution; monitoring of network traffic for anomalies, including failed outbound TCP and UDP connections

This example is not intended to be exhaustive but, rather, is meant to illustrate the diversity of detective controls and the various levels and points at which they can be applied.
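
To make one of the cheaper rows in the table concrete, below is a minimal sketch of a canary-account monitor of the sort referenced under password-guessing detection. It is illustrative only: the log path, log format, canary account names, and the alert action are all assumptions, and in practice this logic would live in your SIEM or log aggregation platform rather than in a standalone script.

```python
import re
import time

# Hypothetical settings: adjust the log path, pattern, and canary names for your environment.
AUTH_LOG = "/var/log/auth.log"                  # assumed syslog-style authentication log
CANARY_ACCOUNTS = {"svc_canary", "backup_old"}  # accounts that should never be used to log in
# Matches lines such as: "... sshd[1234]: Accepted password for svc_canary from 10.0.0.5 ..."
LOGIN_PATTERN = re.compile(r"Accepted \S+ for (\S+) from (\S+)")

def alert(account: str, source: str) -> None:
    # Placeholder: in practice this would raise a SIEM event or page the on-call responder.
    print(f"ALERT: canary account '{account}' used from {source}")

def watch(path: str) -> None:
    """Tail the authentication log and alert on any login to a canary account."""
    with open(path, "r", errors="ignore") as log:
        log.seek(0, 2)  # start at end of file; only new activity matters
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            match = LOGIN_PATTERN.search(line)
            if match and match.group(1) in CANARY_ACCOUNTS:
                alert(match.group(1), match.group(2))

if __name__ == "__main__":
    watch(AUTH_LOG)
```

The appeal of a control like this is that it costs almost nothing to deploy and, because the accounts should never be used, any hit is a high-signal alert.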

While every environment is slightly different, the general rules remain the same: implementing controls to detect attacks at common points will greatly increase the efficacy of detective controls while staying within a reasonable budget. The one big caveat in all of this is that, in order to be truly effective, detective controls need to be tuned to the environment; no solution will perform optimally right out of the box. At the end of the day, proper application of detective controls will still cost money and require resources. However, the impact of an attack will be reduced substantially through strong detective controls.

2013 Cyber Threat Forecast Released | https://www.netspi.com/blog/technical-blog/vulnerability-management/2013-cyber-threat-forecast-released/ | Wed, 12 Dec 2012

The Georgia Tech Information Security Center and Georgia Tech Research Institute recently released their 2013 report on emerging cyber threats. Some of these threats are fairly predictable, such as cloud-based botnets, vulnerabilities in mobile browsers and mobile wallets, and obfuscation of malware in order to avoid detection. However, some areas of focus were a bit more surprising, less in a revelatory sense and more simply because the report specifically called them out.

One of these areas is supply chain insecurity. It is hardly news that counterfeit equipment can make its way into corporate and even government supply chains but, in an effort to combat the threat, the United States has redoubled efforts to warn of foreign-produced technology hardware (in particular, Chinese-made networking equipment). However, the report notes that detecting counterfeit and compromised hardware is a difficult undertaking, particularly for companies that are already under the gun to minimize costs in a down economy. Despite the expense, though, the danger of compromise of intellectual property or even critical infrastructure is very real and should not be ignored.

Another interesting focus of the report is healthcare security. The HITECH Act, which was enacted in 2009, provided large incentives for healthcare organizations to move to electronic systems of medical records management. While the intent of this push was to improve interoperability and the level of patient care across the industry, a side effect is increased risk to patient data. The report notes what anyone who has dealt with information security in the healthcare world already knows: healthcare is a challenging industry to secure. The fact that the report calls out threats to healthcare data emphasizes the significance of the challenges in implementing strong controls without impacting efficiency.

Addressing the threats of information manipulation, supply chain insecurity, mobile security, cloud security, malware, and healthcare security, the report is a recommended read for anyone in the information security field. The full report can be found at: https://www.gtsecuritysummit.com/pdf/2013ThreatsReport.pdf

Thoughts on Web Application Firewalls | https://www.netspi.com/blog/technical-blog/web-application-pentesting/thoughts-on-web-application-firewalls/ | Mon, 15 Oct 2012

I recently attended a talk given by an engineer from a top security product company and, while the talk was quite interesting, something that the engineer said has been bugging me a bit. He basically stated that, as a control, deploying a web application firewall was preferable to actually fixing vulnerable code.

Web application firewalls (WAFs) are great in that they provide an additional layer of defense at the application layer. By filtering requests sent to applications, they are able to block certain types of malicious traffic such as cross-site scripting and SQL injection. WAFs rarely produce false positives, meaning that they won’t accidentally block legitimate traffic, and they can be tuned fairly precisely to particular applications. Additionally, WAFs can filter outbound traffic to act as a sort of data leak prevention solution.

But is installing a WAF preferable to writing secure code? Or, put differently, is having a WAF in place reason enough to disregard secure coding standards and remediation processes? I don’t think so.

WAFs, like other security controls, are imperfect and can be bypassed. They require tuning to work properly, and they fall victim to the same issues that any other software does: poor design and poor implementation. While a WAF may catch the majority of injection attacks, for example, a skilled attacker can usually craft a request that bypasses application filters (particularly in the common situation where the WAF hasn’t been completely tuned for the application, which can be an extremely time-consuming activity). We have seen this occur quite often in our penetration tests; the WAF filters automated SQL injection attempts executed by our tools but fails to block manually crafted injections.

I’m not saying that organizations shouldn’t deploy web application firewalls. However, rather than using a WAF in place of good secure application development and testing practices, they should consider WAFs an additional layer in their strategy of defense-in-depth and continue to address application security flaws with code changes and security validation testing.
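
To make the "fix the code" side of the argument concrete, here is a minimal, hypothetical sketch of the kind of remediation a WAF cannot substitute for: replacing string-built SQL with a parameterized query. The schema and data are invented for illustration; the point is that the parameterized version is immune to injection regardless of what a filter in front of it does or does not catch.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # String concatenation: input such as "' OR '1'='1" changes the query itself.
    # A WAF may block obvious payloads, but the underlying flaw remains.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data, not as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
    payload = "' OR '1'='1"
    print(find_user_vulnerable(conn, payload))  # returns every row: injection succeeds
    print(find_user_fixed(conn, payload))       # returns nothing: payload treated as a literal
```

Run against the same malicious input, the string-built query hands back the entire table while the parameterized one returns nothing, and that remains true whether or not a WAF sits in front of the application.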

Web Application Testing: What is the right amount? | https://www.netspi.com/blog/technical-blog/web-application-pentesting/web-application-testing-what-is-the-right-amount/ | Fri, 22 Jun 2012

It is becoming more common these days (though still not common enough) for organizations to have regular vulnerability scans conducted against Internet-facing, and sometimes internal, systems and devices. This is certainly a step in the right direction, as monthly scans at the network and service layer are an important control that can be used to detect missing patches or weak configurations, thereby prompting vulnerability remediation.

Perhaps unsurprisingly, some application security vendors are applying this same principle to web application testing, insisting that scanning a single application numerous times throughout the year is the best way to ensure the security of the application and related components. Does this approach make sense?

In a handful of cases, where ongoing development is taking place and the production version of the application codebase is updated on a frequent basis, it may make sense to scan the application prior to releasing changes (i.e., as part of a pre-deployment security check). Additionally, if an organization is constantly deploying simple websites, such as marketing “brochureware” sites, a simple scan for vulnerabilities may hit the sweet spot in the budget without negatively impacting the enterprise’s risk profile.

However, in most cases, repeated scanning of complex applications is a waste of time and money that offers little value beyond identifying the most basic application weaknesses. Large modern web applications are intricate pieces of software. They are typically updated based on a defined release cycle rather than on a continual basis and, when they are updated, functionality changes can be substantial. Even where updates are relatively small, the impact of those changes on the application’s security posture can still be significant. For these reasons, repeated scans for low-level vulnerabilities simply do not make sense. Rather, comprehensive testing that identifies application-specific weaknesses, such as errors in business logic, is necessary to truly protect against real-world threats.

Your doctor might tell you to check your blood pressure every few weeks, but he would never lead you to believe that doing so is a sufficient way to monitor your health; less frequent but still regular comprehensive checkups are recommended. So why would you trust an application security vendor that tells you that quantity can make up for a lack of quality? There may be a place in the world for these types of vendors, but you shouldn’t be entrusting the security of your critical applications to mere testing for low-hanging fruit. A comprehensive approach that combines multiple automated tools with expert manual testing is the best way to ensure that your web applications are truly secure.
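
As one hypothetical example of the application-specific weaknesses mentioned above, consider a business-logic authorization flaw: the endpoint sketched below returns any invoice to any authenticated user because it never verifies ownership. The framework (Flask), route, and data are illustrative assumptions, not taken from any particular application.

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "example-only"  # illustrative; a real app uses a proper secret

# Toy data store; in a real application this would be a database.
INVOICES = {
    1001: {"owner": "alice", "amount": 250.00},
    1002: {"owner": "bob", "amount": 975.50},
}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id: int):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The business-logic check: only the invoice owner may view it.
    # If this block is missing, every authenticated user can read every invoice,
    # and no signature-based scanner will complain.
    if invoice["owner"] != session.get("username"):
        abort(403)
    return jsonify(invoice)
```

An automated scanner replaying invoice IDs sees well-formed 200 responses whether or not the ownership check exists; only a tester who understands whose data should be visible to whom will notice its absence.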

Enterprise Vulnerability Management | https://www.netspi.com/blog/technical-blog/vulnerability-management/enterprise-vulnerability-management/ | Thu, 24 May 2012

Earlier this month, at the Secure360 conference in St. Paul, Seth Peter (NetSPI’s CTO) and I gave a presentation on enterprise vulnerability management. This talk grew out of a number of discussions about formal vulnerability management programs that we have had both internally at NetSPI and with outside individuals and organizations. While many companies have large and relatively mature security programs, it would not be an exaggeration to say that very few have formalized the process of actively managing the vulnerabilities in their environments.

To accompany our presentation, I created a short white paper on the subject. In it, I briefly address the need for such a formal program, summarize a four-phase approach, and offer some tips and suggestions on making vulnerability management work in your organization. When reading it, keep in mind that the approach I outline is by no means the only way of successfully taking on the challenge of managing your security weaknesses. However, thanks to our unique vantage point as both technical testers and trusted program advisors for many organizations across various industries, we have been able to pull together an approach that incorporates the key elements that allow this sort of program to be successful. Download Ryan’s white paper: An Approach to Enterprise Vulnerability Management

Pentesting the Cloud | https://www.netspi.com/blog/technical-blog/cloud-pentesting/pentesting-the-cloud/ | Mon, 19 Mar 2012

Several months ago, I attended an industry conference where there was much buzz about “The Cloud.” A couple of the talks purportedly addressed penetration testing in the Cloud and the difficulties that could be encountered in this unique environment; I attended enthusiastically, hoping to glean some insight that I could bring back to NetSPI and help to improve our pentesting services. As it turns out, I was sorely disappointed.

In these talks, most time was spent noting that Cloud environments are shared and, in executing a pentest against such an environment, there was a substantially higher risk of impacting other (non-target) environments. For example, if testing a web application hosted by a software-as-a-service (SaaS) provider, one could run the risk of knocking over the application and/or the shared infrastructure and causing a denial of service condition for other customers of the provider in addition to the target application instance. This is certainly a fair concern but it is hardly a revelation. In fact, if your pentesting company doesn’t have a comprehensive risk management plan in place that aims to minimize this sort of event, I recommend looking elsewhere. Also, the speakers noted that getting permission from the Cloud provider to execute such a test can be extremely difficult. This is no doubt due to the previously mentioned risks, as well as the fact that service providers are typically rather hesitant to reveal their true security posture to their customers. (It should be noted that some Cloud providers, such as Amazon, have very reasonable policies on the use of security assessment tools and services.)

In any case, what I really wanted to know was this: is there anything fundamentally different about testing against a Cloud-based environment as compared with testing against a more traditional environment? After much discussion with others in the industry, I have concluded that there really isn’t.

Regardless of the scope of testing (e.g., application, system, network), the underlying technology is basically the same in either situation. In a Cloud environment, some of the components may be virtualized or shared but, from a security standpoint, the same controls still apply. A set of servers and networking devices virtualized and hosted in the Cloud can be tested in the same manner as a physical infrastructure. Sure, there may be a desire to also test the underlying virtualization technology but, with regard to the assets (e.g., databases, web servers, domain controllers), there is no difference. Testing the virtualization and infrastructure platforms (e.g., Amazon Web Services, Vblock, OpenStack) is also no different; these are simply servers, devices, and applications with network-facing services and interfaces. All of these systems and devices, whether virtual or not, require patching, strong configuration, and secure code.

In the end, it seems that penetration testing against Cloud environments is not fundamentally different from testing more conventional environments. The same controls need to exist, and these controls can be omitted or misapplied, thereby creating vulnerabilities. Without a doubt, there are additional components that may need to be considered and tested. Yet, at the end of the day, the same tried and true application, system, and network testing methodologies can be used to test in the Cloud.

The Annual Struggle with Assessing Risk | https://www.netspi.com/blog/technical-blog/vulnerability-management/the-annual-struggle-with-assessing-risk/ | Tue, 07 Feb 2012

In my experience, one of the security management processes that causes the most confusion among security stakeholders is the periodic risk assessment. Most major information security frameworks, such as ISO/IEC 27002:2005, the PCI Data Security Standard, and HIPAA, call for annual or periodic risk assessments, and yet a surprising number of organizations struggle with putting together a risk assessment process.

Fundamentally, the concept of a risk assessment is straightforward: identify the risks to your organization (within some defined scope) and determine how to treat those risks. The devil, of course, is in the details. There are a number of formal risk assessment methodologies that can be followed, such as NIST SP 800-30, OCTAVE, and the risk management framework defined in ISO/IEC 27005, and it makes sense for mature organizations to implement one of these. Additionally, risk assessments at larger companies will often feed into an audit plan. If you’re responsible for conducting a risk assessment for a smaller or less mature company, though, the thought of performing and documenting one may leave you scratching your head.

The first step in any risk assessment is to identify the scope of the assessment, whether that is departments, business processes, systems and applications, or devices. For example, a risk assessment at a financial services company may focus on a particular business unit and the regulated data and systems used by that group. Next, the threats to these workflows, systems, or assets should be identified; threats can include both intentional and unintentional acts and may be electronic or physical. Hackers, power outages, and hurricanes are all possible threats to consider. In some cases, controls for addressing the vulnerabilities associated with these threats may already exist, so they should be taken into account. Quantifying the impact to the organization should one of these threats be realized is the next step in the risk assessment process. In many cases, impact is measured in financial terms because dollars are tangible to most people, but financial impact is not always the only concern. Finally, this potential impact should be combined with the likelihood that such an event will occur in order to quantify the overall risk. Some organizations will be satisfied with rating risk as high, medium, or low, but a more granular approach can certainly be taken.

When it comes to treating risks, the options are fairly well understood. An organization can apply appropriate controls to reduce the risk, avoid the risk by altering business processes or technology such that the risk no longer applies, share the risk with a third party through contracts (including insurance), or knowingly and objectively decide to accept the risk.

At the conclusion of the risk assessment and treatment activities, some sort of documentation needs to be created. This doesn’t need to be a lengthy formal report but, whatever the form, it should summarize the scope of the assessment, the identified threats and risks, and the risk treatment decisions. Results from audit plans can also assist in this documentation process.

Most organizations already assess and treat risks operationally, and wrapping a formal process around the analysis and decision-making involved should not be overwhelming. Of course, different organizations may need more rigor in their risk assessment process based on internal or external requirements, and this is not meant to be a one-size-fits-all guide to risk assessment. Rather, the approach outlined above should provide some guidance, and hopefully inspire some confidence in security stakeholders who are just starting down the road of formal risk management.
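
For those who want something slightly more concrete than "high, medium, or low," the toy sketch below works through the likelihood-and-impact arithmetic described above. The five-point scales, the simple multiplication, and the rating thresholds are illustrative assumptions rather than part of any particular methodology.

```python
# Toy risk scoring: likelihood and impact are rated 1 (lowest) to 5 (highest).
# The scales and thresholds below are illustrative assumptions, not a standard.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into a single 1-25 score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_rating(score: int) -> str:
    """Bucket a score into the high / medium / low scheme mentioned above."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

if __name__ == "__main__":
    threats = [
        ("Stolen laptop holding unencrypted regulated data", 4, 4),
        ("Extended power outage at primary data center", 2, 5),
        ("Brute-force attack against an internal web app", 3, 2),
    ]
    for name, likelihood, impact in threats:
        score = risk_score(likelihood, impact)
        print(f"{name}: score {score}, rating {risk_rating(score)}")
```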

Why I Hate The Cloud | https://www.netspi.com/blog/technical-blog/cloud-pentesting/why-i-hate-the-cloud/ | Wed, 26 Oct 2011

The Cloud is one of the “new big things” in IT and security and I hate it. To be clear, I don’t actually hate the concept of The Cloud (I’ll get to that in a minute) but, rather, I hate the term.

According to Wikipedia, cloud computing is “the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).” What this pretty much amounts to is outsourcing. There are a lot of reasons that people “move to The Cloud” and I’m not really going to dive into them all; suffice it to say that it comes down to cost, and the efficiencies that Cloud providers are able to leverage typically allow them to operate at lower cost than most organizations would spend accomplishing the same task. Who doesn’t like better efficiency and cost savings?

But what is cloud computing really? Some people use the term to refer to infrastructure as a service (IaaS), that is, an environment sitting on someone else’s servers; typically, the environment is virtualized and dynamically scalable (remember that whole efficiency / cost savings thing). A good example of an IaaS provider is Amazon Web Services. Software as a service (SaaS) is also a common and not particularly new concept that leverages The Cloud; there are literally thousands of SaaS providers, but some of the better known ones are Salesforce.com and Google Apps. Platform as a service (PaaS) is a less well-known term, but the concept is familiar: PaaS provides the building blocks for hosted custom applications. Often, PaaS and IaaS solutions are integrated; an example of a PaaS provider is Force.com. The Private Cloud is also generating some buzz with packages such as Vblock and OpenStack; really, these are just virtualized infrastructures.

I’m currently at the Hacker Halted 2011 conference in Miami (a fledgling but well-organized event) and one of the presentation tracks is dedicated to The Cloud. There have been some good presentations, but both presenters and audience members have struggled a bit with defining what they mean by The Cloud. One presenter stated that “if virtualization is involved, it is usually considered to be a cloud.” If we’re already calling it virtualization, why do we also need to call it The Cloud?

To be fair, The Cloud is an appropriate term in some ways because it represents the nebulous boundaries of modern IT environments. No longer is an organization’s IT infrastructure bound by company-owned walls; it is an amalgamation of company-managed and third-party-managed services, networks, and applications. Even so, The Cloud is too much of a vague marketing term for my taste. Rather than lumping every Internet-based service together in a generic bucket, we should say what we really mean. Achieving good security and compliance is already difficult within traditional corporate environments. Let’s at least all agree to speak the same language.

Mobile Devices in Corporate Environments | https://www.netspi.com/blog/technical-blog/mobile-application-pentesting/mobile-devices-in-corporate-environments/ | Wed, 12 Oct 2011

Mobile computing technology is hardly a recent phenomenon but, with the influx of mobile devices such as smartphones and tablet computers into the workplace, the specter of malicious activity being initiated by or through these devices looms large. However, generally speaking, an information security toolkit that includes appropriate controls for addressing the threats presented by corporate laptops should also be able to deal with company-owned smartphones. My recommendations for mitigating the risk of mobile devices in your environment include the following:

  • Establish a Strong Policy
  • Educate Users
  • Implement Local Access Controls
  • Minimize the Mobile Footprint
  • Restrict Connectivity
  • Restrict Web Application Functionality
  • Assess Mobile Applications
  • Encrypt, Encrypt, Encrypt
  • Enable Remote Wipe Functionality
  • Implement a Mobile Device Management System
  • Provide Support for Employee-Owned Devices

For more detailed information, take a look at the white paper that I just put together on the subject: Dealing with Mobile Devices in a Corporate Environment.

Do You Know Where Your Data Is? | https://www.netspi.com/blog/technical-blog/vulnerability-management/do-you-know-where-your-data-is/ | Tue, 04 Oct 2011

When it comes to the application of security controls, many organizations have gotten pretty good at selecting and implementing technologies that create defense-in-depth. Network segmentation, authorization and access control, and vulnerability management are all fairly well understood and generally practiced by companies these days. However, many organizations are still at risk because they can’t answer a simple question: where is their sensitive data? It should go without saying, but if a company can’t identify the locations where sensitive data is stored, processed, or transmitted, it will have a pretty hard time implementing controls that effectively protect that data.

Two effective methods for identifying sensitive data repositories and transmission channels are data flow mapping and automated data discovery. A comprehensive and accurate approach will include both. Note, of course, that both methods assume that you have already defined what types of data are considered sensitive; if this is not the case, you will need to go through a data classification exercise and create a data classification policy.

Data flow mapping is exactly what it sounds like: a table-top exercise to identify how sensitive data enters the organization and where it goes once inside. Data flow mapping is typically interview-centric, as you will need to really dig into the business processes that manipulate, move, and store sensitive data. Depending on the size and complexity of your organization, data flow mapping could be either very straightforward or extremely complicated. However, it is the only reliable way to determine the actual path that sensitive data takes through your organization. As you conduct your interviews, remember that you want to identify all the ways that sensitive data is input into a business process, where it is stored and processed, who handles it and how, and what the outputs are. Make sure that you get multiple perspectives on individual business processes as validation, and match up the outputs of one process with the inputs of another. It is not uncommon for employees in one business unit or area to have misunderstandings about other processes; your goal is to piece together the entire puzzle.

Automated data discovery does a poor job of shedding light on the mechanisms that move sensitive data around an organization, but it can be very valuable for validating assumptions, identifying exceptions, and revealing the true size of certain data repositories. There are a number of free and commercial tools that can be used for data discovery (one of the most popular free tools is Cornell University’s Spider tool), but they all aim to accomplish the same objective: provide you with a list of files and repositories that contain data you have defined as sensitive. Good places to start your discovery include network shares, databases, portal applications, home drives on both servers and workstations, and email inboxes. Be aware that most discovery tools will require that you provide or select a regular expression that matches the format of particular data fields, though some more advanced commercial tools also provide signature learning features.

Ultimately, your data discovery exercise should result in a much improved understanding of how sensitive data passes through your organization and where it is stored. The next step is to determine how to apply controls based on where data is stored, processed, and transmitted. Also, where necessary, business processes may need to be adjusted in order to consolidate data and meet data protection requirements. While identification of sensitive data is only the first phase in a process that will result in better data security and reduced risk, it is an absolutely critical step if the application of security controls is to be effective.
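
To illustrate the automated discovery approach, here is a minimal sketch of a scanner that walks a directory tree and flags files containing patterns that resemble U.S. Social Security numbers or 16-digit card numbers. The patterns, size limit, and starting path are illustrative assumptions; purpose-built tools such as Spider support many more data formats and do far more validation to reduce false positives.

```python
import os
import re

# Illustrative patterns only; real discovery tools apply more formats and validation
# (for example, a Luhn check on candidate card numbers) to cut down false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}
MAX_BYTES = 5 * 1024 * 1024  # skip the tail of very large files in this simple sketch

def scan_file(path: str) -> dict:
    """Return a count of pattern hits per category for one file."""
    hits = {}
    try:
        with open(path, "r", errors="ignore") as handle:
            text = handle.read(MAX_BYTES)
    except OSError:
        return hits
    for name, pattern in PATTERNS.items():
        count = len(pattern.findall(text))
        if count:
            hits[name] = count
    return hits

def scan_tree(root: str) -> None:
    """Walk a directory tree and report files that appear to hold sensitive data."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            hits = scan_file(path)
            if hits:
                print(f"{path}: {hits}")

if __name__ == "__main__":
    scan_tree(".")  # hypothetical starting point; aim this at shares or home drives
```

Output from a sweep like this is best treated as a lead list to validate against your data flow map, not as an authoritative inventory.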
