The Balancing Act of In-House vs Third-Party Penetration Testing https://www.netspi.com/blog/executive-blog/penetration-testing-as-a-service/in-house-vs-third-party-penetration-testing/ Thu, 29 Aug 2024 19:51:16 +0000 https://www.netspi.com/?p=25370 Discover how combining in-house and third-party penetration testing brings a hybrid approach to enhance your cybersecurity strategy.

TL;DR 

Balancing in-house and third-party penetration testing involves weighing control and customization against scalability and specialized skills. In-house teams offer deep organizational knowledge and build a culture of security internally, but they can be costly. Outsourcing pentesting to a third party provides access to expert talent, flexibility, and cost-effectiveness, but may pose quality and dependency risks. Effective pentesting programs often combine both approaches to optimize resources and manage fluctuating demands. Selecting the right provider depends on their quality, engagement process, flexibility, and additional advantages.

Introduction 

Penetration testing is a critical practice for any organization serious about cybersecurity. But I’ve seen the debate between insourcing and outsourcing these crucial efforts go on for years. While many security teams have talented in-house pentesting specialists, I’ve found that the most effective approach often involves both in-house expertise and third-party penetration testers. 

This hybrid model offers the flexibility and scalability necessary to create a robust and dynamic penetration testing program. Here’s why I believe that integrating both in-house and third-party penetration testing can produce superior results. 

The Value of In-House Penetration Testing 

Deep Organizational Knowledge 

In-house penetration testers have the advantage of being deeply embedded within the organization. They possess a thorough understanding of the company’s internal systems, applications, and overall business context. This familiarity enables them to identify vulnerabilities that might be overlooked by external testers who lack this nuanced perspective. 

Consistent Collaboration 

Another benefit of in-house testers is the ability to build strong relationships with various teams within the organization. Frequent interactions with development, network, and cloud teams help security become an embedded part of the organization's culture.

Immediate Availability 

In-house teams are always on standby, ready to address urgent security needs. They can quickly respond to incidents, perform ad-hoc tests, and continuously monitor systems without the delays that might come with scheduling external testers.

The Added Benefits of Third-Party Providers 

Scalability and Flexibility 

One of the primary advantages I’ve found in outsourcing penetration testing is scalability. It’s difficult to predict the demand for testing, which can fluctuate based on system changes and development cycles. Third-party providers can easily scale their services to meet these unpredictable demands, adding testers for short bursts of intensive testing and scaling down during quieter periods. 

Specialized Expertise 

Certain technologies require niche skills that are scarce in the industry. For example, mainframe penetration testers are notoriously difficult to find. Third-party providers often have access to a broader pool of specialized talent that brings deep expertise when needed, without having to hire or train full-time employees for a niche requirement.

Fresh Perspectives 

Third-party testers bring a fresh set of eyes to any security landscape. Continuous internal testing can lead to complacency, but external testers can offer new insights, approach problems differently, and identify vulnerabilities that in-house experts might miss due to familiarity. 

Objectivity and Compliance 

Third-party pentesting vendors play a crucial role in helping organizations meet various compliance requirements. Many regulatory frameworks, such as PCI DSS, HIPAA, and GDPR, necessitate regular security assessments to ensure that sensitive data is adequately protected. By engaging external vendors, organizations can benefit from their specialized expertise and industry knowledge, ensuring that their pentesting processes align with compliance standards.

The Power of a Hybrid Approach 

Balancing Workload 

A hybrid model allows organizations to balance the workload efficiently. In-house teams can handle regular, ongoing tasks while third-party providers tackle overflow work and special projects requiring unique skills. This ensures that all security needs are met without overburdening internal resources. 

Comprehensive Coverage 

By combining in-house and third-party efforts, organizations achieve more comprehensive coverage. Internal testers offer detailed, context-aware insights, while external experts provide objective assessments and uncover hidden threats. 

Quality Assurance 

Having both in-house and third-party testers allows for quality assurance through A/B testing. Organizations can compare the findings of internal and external teams, ensuring that no vulnerabilities are missed and maintaining a high standard of security. 

Selecting a Penetration Testing Provider 

When choosing a third-party penetration testing provider, it’s crucial to consider not only their technical capabilities, but also their engagement process. Look for providers who offer additional benefits like ease of collaboration, flexibility, and access to advanced technologies. 

Factors to consider:  

  1. Quality of Testing: Ensure the provider has a track record of delivering high-quality penetration testing reporting and proven results.  
  2. Engagement Process: The provider should be easy to work with and offer a seamless engagement process. 
  3. Flexibility and Scalability: The ability to scale resources based on your needs is vital. 
  4. Related Services: Look for partners who offer value-added services, like integrated defect tracking systems and real-time access to test results.
  5. Innovation in Tools and Tactics: Vendors that prioritize innovation are better positioned to stay up to date with current threat actor methods, identify weaknesses before they can be exploited, and ensure a proactive approach to safeguarding sensitive information. 
  6. Strong Reputation: Word of mouth speaks volumes when looking for a pentesting provider. Ask around, read reviews, and dig into tough questions when evaluating a vendor to ensure they’ve proven their success. 

Addressing Potential Risks 

Retention Challenges 

In-house teams face risks related to talent retention. Skilled penetration testers are highly sought after, and there’s always a risk of turnover. One way to mitigate this is by investing in continuous learning (give NetSPI’s Hack Responsibly blog a read) and career advancement opportunities to keep the team engaged and motivated. 

Quality Assurance in Outsourcing 

When relying on third-party providers, ensuring the quality of their work is crucial. Organizations should conduct thorough vetting processes and establish clear contracts that outline expectations, quality metrics, and deliverables. Regular feedback loops and performance reviews can help maintain high standards. 

Cost Considerations 

Both insourcing and outsourcing come with financial considerations beyond salaries. In-house teams require ongoing training and resources, while third-party providers’ costs depend on the scope and frequency of their engagements. A hybrid model allows for more predictable budgeting by balancing fixed and variable costs. 

Final Thoughts 

We all know that no single approach fits all. The optimal penetration testing strategy often involves a blend of in-house expertise and third-party specialization. This hybrid model not only enhances flexibility and scalability but also ensures that your organization benefits from diverse expertise and fresh perspectives. 

By strategically combining the strengths of both in-house and outsourced resources, you can build a penetration testing program that is not only robust but also adaptable and capable of meeting the evolving demands of cybersecurity. 

Ready to take your pentesting to the next level? Explore The NetSPI Platform, designed to provide you with unparalleled visibility and flexibility in managing your proactive security testing program. Request a demo today and experience the future of pentesting. 

How Threat Actors Attack AI – and How to Stop Them https://www.netspi.com/blog/executive-blog/adversarial-machine-learning/how-threat-actors-attack-ai-and-how-to-stop-them/ Tue, 16 Jul 2024 14:00:00 +0000 https://www.netspi.com/?p=24911 Learn about common AI attack paths that threat actors use and how you can bolster your own AI security with AI/ML penetration testing. 

It’s not often that I have the chance to speak to a room full of CISOs, but I was especially excited to present when I recently had this opportunity. I spoke on the trending topic of Gen AI and LLMs, specifically what types of AI security testing CISOs should be looking for when implementing these systems.

AI is something that can no longer be ignored. It’s undergoing rapid expansion, and its adoption for various business purposes is evident. Whether it’s being utilized in the back office, enabling customers in the front office, or engaging in intriguing projects like creating AI that could chat with other AI, the possibilities are endless. These developments can range from amusing demonstrations to scenarios that might instill fear about AI’s potential impact.

Despite the concerns, the challenges posed by AI are manageable, as long as you follow sound strategies and solutions to effectively navigate and harness its power.

To kick it off, we polled the audience to get a sense of their readiness for AI. Here’s what they said:

Methodology: Consider these survey results to be a pulse check on CISOs’ use of AI in business today. These survey results are based on an average of 34 responses gathered from polling an audience of CISOs or similar roles during a presentation.

  • The majority of respondents (82%) said they are already using or planning on using AI as part of their business.
  • The majority of respondents (82%) who are implementing AI in their businesses also reported they are training the AI model on their own data. Only 15% of responses said they were not using their own data to train their AI model.
  • When it comes to the respondents’ understanding of the origins and integrity of that training data, 47% indicated they were somewhat aware of the origins, but it needed improvement. And 44% said they were not very confident in the data sources. That’s a total of 91% who were unsure about the origins of their data sources, and conversely, only 9% of respondents felt confident about the data they’re training their AI models on.
  • Lastly, when looking at measures in place for the quality and consistency of the data used to train AI models, the majority of respondents (63%) said they have no specific measures in place.

So what does this all mean?  

Most organizations are trying to use LLMs to enable their businesses to operate at a much faster pace. The key to achieving this is allowing LLMs and generative AI models to learn from their data and leverage it effectively. Clearly, we were all on the same journey here.  

As you can see, many people struggle with the quality of their data and its sources. This is a common challenge for businesses, as data constantly grows. Data classification, specifically email classification, is both a problem and a nuisance. Without a systematic approach, data classification isn’t very effective. 

The quality and consistency of the data used to train models is another area of concern. While you might currently know the source and quality of your data, maintaining this over the long term without systemic controls could be challenging. Some organizations rely on manual reviews, but these are not continuous, leading to data drift depending on the frequency of these manual processes. My hope is that people feel a little less stressed because we can all see we’re up against the same challenges.

The four poll questions (each shown with a results chart in the original post):

  • Are you already using (or planning on using) AI as part of your business?
  • Are you training (or planning on training) the AI model using your own data?
  • How well do you understand the origins and integrity of the data you plan to use for AI training?
  • How do you ensure the quality and consistency of the data used for training your AI models?

Security Testing AI and LLMs

When NetSPI conducts AI/ML penetration testing, we divide our testing capabilities into three categories:

1. Machine learning security assessment

Organizations are building models and seeking help with testing them. In this area, we offer a comprehensive assessment designed to evaluate ML models, including Large Language Models (LLMs), against adversarial attacks, identify vulnerabilities, and provide actionable recommendations to ensure the overall safety of the model, its components, and their interactions with the surrounding environment. 

2. AI/ML web application penetration testing

This is where most of our AI/ML security testing requests come from. Organizations are paying service providers for access to models, which they then deploy on their networks or integrate into applications through APIs or other means. We receive many requests to help organizations understand the implications of their AI model decisions, whether they are purchasing or integrating them into their solutions.

3. Infrastructure security assessment 

This category of AI/ML security testing centers on the infrastructure surrounding your model. Our infrastructure security assessment covers network security, cloud security, API security, and more, ensuring that your company’s deployment adheres to defense in depth security policies and mitigates potential risks.

Testing Methodology in AI Security

At NetSPI, we take a collaborative approach to testing AI models by partnering closely with organizations to understand their models and objective functions. Our goal is to innovate testing techniques that challenge these models to ensure robustness beyond standard benchmarks like OWASP Top 10 for Large Language Model Applications. Here’s how we achieve this:

5 Ways Malicious Actors Attack AI Models

Evasion

One of the most prevalent techniques involves manipulating models designed to detect and decide based on input. By crafting inputs that appear benign to humans but mislead machines, we demonstrate how models can misinterpret data.
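
For illustration, here is a minimal sketch of this kind of evasion against a toy linear classifier. It is a simplified cousin of the fast gradient sign method (FGSM), not NetSPI's actual tooling, and the model, weights, and perturbation budget are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: label is 1 when w . x + b > 0.
w = rng.normal(size=100)
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = rng.normal(size=100)   # a benign input
eps = 0.25                 # small per-feature perturbation budget

# FGSM-style evasion: for a linear model the gradient of the score with
# respect to the input is just w, so nudge each feature slightly in the
# direction that pushes the score across the decision boundary.
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
x_adv = x + eps * direction

print(predict(x), predict(x_adv))  # the small perturbation usually flips the label
```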

Data Poisoning

We scrutinize the training data itself, examining biases and input sources. Understanding how data influences model behavior allows us to detect and mitigate potential biases that could affect model performance.

Data Extraction

Extracting hidden data from models provides insights that may not be readily accessible. This technique helps uncover vulnerabilities and improve model transparency. 

Inference and Inversion

Exploring the model’s ability to infer sensitive information from inputs, we uncover scenarios where trained models inadvertently leak confidential data.

Availability Attacks

Similar to denial-of-service attacks on traditional systems, threat actors can target AI models to disrupt their availability. By understanding the underlying mathematical principles, we explore vulnerabilities in model resilience.

It’s crucial to recognize that AI at its core operates on mathematical principles. Everything discussed here – whether attacks or defenses – is grounded in mathematics and programming. Just as models are built using math, they can also be strategically challenged and compromised using math. 

The limitless potential of AI for business applications is matched only by its potential for adversarial attacks. As your team explores developing or incorporating ML models, ensuring robust security from ideation to implementation is crucial. NetSPI is here to guide you through this process. 

Getting Started with API Security Best Practices  https://www.netspi.com/blog/executive-blog/application-pentesting/get-started-with-api-security-best-practices/ Tue, 13 Jun 2023 14:00:00 +0000 https://www.netspi.com/get-started-with-api-security-best-practices/ API security has become a top priority and NetSPI’s API pentesting can help you get started with API security best practices.

In simple terms, an API (application programming interface) is a piece of software used to talk to other pieces of software. The use of APIs continues to spike with no signs of slowing down. This presents more pathways that have the potential to be exploited, especially if API security isn't prioritized through activities such as application penetration testing. Oftentimes, security for APIs isn't part of the development phase, but rather addressed after launch, if at all.

The growing need for securing APIs over the last five years inspired the Open Web Application Security Project (OWASP) to create the API Security Top 10, a list of the top API vulnerabilities facing developers and DevSecOps today. The 2023 list was just released and concluded that API1:2023 – Broken Object Level Authorization and API2:2023 – Broken Authentication have remained the top security concerns since 2019, showing us more work is needed to address these core vulnerabilities.

Knowing that more and more APIs are being used to build software, security implications need to be top of mind for all IT leaders.

API Security is the Underdog We’re All Rooting For 

Organizations need to understand that API security must be prioritized alongside other security domains. Traditionally, software goes through security testing as a whole, instead of testing the APIs individually. This form of testing leads to missed information and possible vulnerabilities for adversaries to take advantage of.

Typically software includes many APIs, and automated scanning tools aren’t able to provide comprehensive results. Manual testing is needed to fully understand the breadth of security implications — which is a challenge for many organizations due to time, resource, and budget constraints.

API Security versus Application Security 

API security is a subset of application security that is more challenging because APIs are easier to overlook: given how they are developed, and how hard it is to foresee their eventual use cases, securing them is rarely front of mind.

When a developer is building small bits of software, like APIs, they may not be able to foresee how it will ultimately be used, so security can fall by the wayside. Rather, when developers build a larger software application (general applications), security professionals often automatically think of adding security controls such as authentication, input validation, or output encoding. The shift that needs to happen when working with APIs is that those automatic security responses are built into the requirements to become an inherent property of the APIs.

API Security Best Practices 

The traditional pillars of AppSec apply to making APIs more secure: input validation, output encoding, authentication, error handling, and encryption, to name a few. IT security leaders need to think of these pillars and all the different ways in which APIs can be used to build out comprehensive security controls.

In short, organizations need to build secure development frameworks with APIs that take the security considerations out of the developers’ hands – since they often don’t possess a security-first mindset – and build security directly into the APIs themselves. 
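
As a hypothetical illustration of what "built into the APIs themselves" can look like, the sketch below centralizes input validation in a framework decorator so individual developers never have to remember it. The endpoint, schema, and field names are invented for the example.

```python
import re
from functools import wraps

# Centralized validation: handlers declare a schema, and the framework
# enforces it before the handler runs, so validation is an inherent
# property of every endpoint rather than a per-developer responsibility.
def validated(schema):
    def decorator(handler):
        @wraps(handler)
        def wrapper(params):
            for field, pattern in schema.items():
                value = params.get(field, "")
                if not re.fullmatch(pattern, value):
                    return {"status": 400, "error": f"invalid {field}"}
            return handler(params)
        return wrapper
    return decorator

@validated({"account_id": r"[0-9]{1,12}"})
def get_account(params):
    return {"status": 200, "account": params["account_id"]}

print(get_account({"account_id": "42"}))        # {'status': 200, ...}
print(get_account({"account_id": "42; DROP"}))  # {'status': 400, ...}
```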

Go back to the basics. Every CISO can benefit from this practice. Just like with general software security, if you don’t go back to the basics first, you won’t be able to mature the program. Right now, the basics are where organizations are struggling. NetSPI’s 2023 Offensive Security Vision Report had similar findings. These foundational security flaws are ever-present, and we’re still challenged by the basics across attack surfaces.

Questions to Consider Before API Pentesting 

API penetration testing is conducted in a similar manner to traditional web application testing. However, there are several nuances to API pentesting that must be considered during the scoping phase. Overall, consultants require engagement from API developers to ensure that testing is done thoroughly. These questions explore what is specifically needed to maximize API pentesting success – from the very beginning.

1. Production vs Staging: Is it possible to provide testers with an API staging environment? 

NetSPI recommends providing penetration testers with a staging API environment. If testing is done in staging, the testers can use more thorough and invasive/comprehensive attacks. If testing is done in production, then testers will be forced to resort to more conservative attacks to avoid negatively affecting the system and disrupting the end-users.  

2. Rate Limiting: How is rate limiting implemented on the target API? Is rate limit testing in scope for this engagement? 

By leveraging rate limiting flaws, attackers can exploit race condition bugs or rack up costly service hosting bills. (A minimal rate-limiter sketch follows this list of questions.)

3. WAF Disabled: Is it possible to disable the API’s WAF or allow list the penetration tester’s IP range during the testing window? 

If possible, we recommend API WAFs are disabled when testing occurs. If testing is done in production, consider allow listing your testing team's IP range. Read more on how this adds value to API pentesting here.

4. New Features: Are there any new features in scope that we should focus on? 

New features that haven’t been reviewed for security issues are more likely to be vulnerable than hardened code.  

5. Denial of Service (DoS) Testing: During the test, will DoS testing be in scope? 

Denial of Service vulnerabilities of APIs can have a catastrophic impact on software systems.  

6. Source Code Assisted Testing: Will source code be provided to consultants during the test? 

By providing source code, consultants are enabled to test applications more thoroughly without additional cost. For additional information on source code assisted penetration tests, check out our article on “Why You Should Consider a Source Code Assisted Penetration Test.” 

Due to their programmatic nature, APIs require additional customer interaction during the scoping process. Equipped with the information listed above, testers can deliver maximum value during an API penetration test and maximize the return on investment.
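
On the rate limiting question above, here is a minimal token-bucket sketch of the kind of control a tester would probe during an engagement. The rate and capacity values are placeholders, not recommendations.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # placeholder limits
print([bucket.allow() for _ in range(12)])  # burst of 10 allowed, then throttled
```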


Predictions for the Future of API Security

Going forward, we’ll likely see a software development paradigm shift over the next five years that combines features from REST and SOAP security into a superior method – something we’re already starting to see with Adobe and Google. This combination will take security out of the hands of the developers and allow for better “secure by design” adoption. We must enable developers to innovate with confidence.

Additionally, the concept of identity and authentication is changing — we need to move away from the traditional use of usernames and passwords and two-factor authentication, which relies on humans not making any errors. The authentication workflow will shift to what companies like Apple are doing around identity management with innovations like iOS 16 passkeys, and could even impact the OWASP API Security Top 10. This will be developed through APIs.

APIs provide incredible value with connectivity between systems. They are here to stay, making API security a much-needed focus. NetSPI’s Application Penetration Testing gives your team a proactive advantage with identifying, prioritizing, and remediating vulnerabilities in a single platform. Bring proactivity to your API security by requesting a quote today.

Infosecurity Europe 2022: Observations from the ExCel https://www.netspi.com/blog/executive-blog/security-industry-trends/infosecurity-europe-oberservations/ Tue, 05 Jul 2022 13:00:00 +0000 https://www.netspi.com/infosecurity-europe-oberservations/ Learn about three top key observations from Infosecurity Europe you need to know and what they mean.

Boasting over 350 exhibitors and more than 190 expert sessions, Infosecurity Europe is one of the largest gatherings of cybersecurity professionals in Europe. This year, the NetSPI team made an appearance in the exhibitor hall.

During Infosecurity Europe, NetSPI officially announced its expansion into the EMEA region. We’ve experienced growing demand from EMEA organizations, and we feel that NetSPI is well-positioned to deliver in this region. 

Aside from the hustle and bustle of the conference itself, we devoted much of our time to the exhibitor hall – where we noticed a few interesting themes. Continue reading for our three key observations from Infosecurity Europe and our conversations with the EMEA cybersecurity community. 

Automate Where Necessary 

Walking the floor, we found the automation message prevalent among vendor solutions. However, in conversations with end users, the underlying message was that automation needs to serve a purpose – linked, for example, to improving cybersecurity workflows and processes. As Lalit Ahluwalia writes in this Forbes article, the top drivers for automation include the lack of skilled workers, lack of standardization, and the expanded attack surface.

It is also important to understand that technology alone should not be viewed as a “silver bullet.” There is a fundamental need to ensure that skilled humans can triage the data to ensure accurate results and that the information delivered is valuable and actionable.  

Automation should enable humans to do their job better and spend more time on the tasks that matter most (e.g., in penetration testing, looking for critical vulnerabilities that tools cannot find). For more on the importance of people in cybersecurity, read Technology Cannot Solve Our Greatest Cybersecurity Challenges, People Can

Tightening of Venture Capital Funding and Cybersecurity Budgets 

Another heavily discussed topic at Infosecurity Europe centered around funding, budgets, and priorities. 

With the onset of COVID-19, we noticed an over-expansion of cybersecurity vendors – this was evident in the exhibitor space. We attribute this partly to the rise in remote work, increased ransomware attacks in the past year, and companies’ expanding attack surfaces.  

The cause for concern? 

With the current global economic downturn, many vendor solutions are now seen as a “nice to have”, budgets are being squeezed, and end users are prioritizing their investments based on risk.  

We also had conversations with end users who felt that the whole market is becoming a “Noah’s ark” of solutions – i.e., there are a lot of solutions that have been built in the hope that end users see value. We foresee not just a consolidation of the vendors in the market, but also a consolidation of the actual solutions that end users view as critical to their needs.

The reality is that the financial winds of change are blowing. Whether it is customers focusing on maximising the return on their budget or investment dollars looking for a home, a tightening is coming. While our industry is relatively well-placed to withstand these financial pressures, the ability to build trusted relationships with our customers and help them achieve tangible positive outcomes will be a key differentiator.

Emphasis on Business Enablement  

It was refreshing to see many vendors focus less on fear, uncertainty, and doubt and more on business enablement and benefits to the customer.  

Understanding how technology supports initiatives that enable a company to grow is a win-win tactic in our book. This is a positive change and one that will help customers understand which products and services are vital as they mature their security programs.  

The Future of Information Security in EMEA 

There is no doubt that cybersecurity is a vital component of every business, and that was evident at the conference. We’re excited to be a part of the momentum in the EMEA region and support the global cybersecurity community through our platform-driven, human-delivered methodology and our focus on business enablement.

Infosecurity Europe may be over, but that doesn’t mean our conversation has to end. Connect with NetSPI today to learn how we can meet your global offensive security needs.

Addressing Application Security Challenges in the SDLC https://www.netspi.com/blog/executive-blog/application-pentesting/application-security-challenges-sdlc/ Tue, 28 Jun 2022 13:00:00 +0000 https://www.netspi.com/application-security-challenges-sdlc/ Learn how Idan Plotnik, CEO of Apiiro, addresses challenges in application security and tips to help businesses protect against Log4Shell.

In recent years, more organizations have adopted the “shift left” mentality. This concept moves application security testing earlier in the software development life cycle (SDLC) versus the traditional method of implementing security testing after deployment of the application.  

By shifting left, an organization can detect application vulnerabilities early and remediate them, saving time and money, and ultimately not delaying the release of the application.  

But not everything comes wrapped in a beautiful bow. In application security, I witnessed that shifting left comes with its fair share of trouble – two in fact: 

  • Overworked and understaffed teams
  • Friction between application security engineers and development teams 

During his time at Microsoft, Idan Plotnik, co-founder and CEO at Apiiro, experienced these two roadblocks and created an application security testing tool that addressed both. I recently had the opportunity to sit down with him to discuss the concept of shift left and other application security challenges.

Continue reading for highlights from our conversation including contextual pentesting, open-source security, and tips on how a business can better prepare for remote code execution vulnerabilities like Log4Shell. For more, listen to the full episode on the Agent of Influence podcast.  

Why is it important to get more context on how software has changed and apply that to pentesting? 

Idan Plotnik: One of the biggest challenges we are hearing is that organizations want to run pentests more than once throughout the development life cycle but are unsure of what and when to test. You don’t want to spend valuable time on the pentester, the development team, and the application security engineer to run priority or scoping calls in every release. You want to identify the crown jewels that introduce risk to the application. You want to identify these features as early as possible and then alert your pentesting partner so they can start pentesting early on and with the right focus.

It’s a win-win situation.  

On one hand, you reduce the cost of engineers because you’re not bombarding them with questions about what you’ve changed in the current release, when and where it is in the code, and what are the URLs for these APIs, etc.  

On the other hand, you’re reducing the costs of the pentest team because you’re allowing them to focus on the most critical assets in every release.  

Nabil Hannan: The traditional way of pentesting includes a full deep dive test on an application. Typically, the cadence we’ve been seeing is annual testing or annual requirements that are driven by some sort of compliance pressure or regulatory need.  

I think everybody understands why it would be valuable to test an application multiple times, and not just once a year, especially if it’s going through changes multiple times in a year. 

Now, the challenge is doing these tests can often be expensive because of the human element. I think that’s why I want to highlight that contextual testing allows the pentester to hone and focus only on the areas where change has occurred.  

Idan: When you move to agile, you have changes daily. You need to differentiate between changes that are not risky to the organization or to the business, versus the ones that introduce a potential risk to the business. 

It can be an API that exposes PII (Personally Identifiable Information). It can be authorization logic change. It can be a module that is responsible for transferring money in a trading system.  

These are the changes that you need to automatically identify. This is part of the technology that we developed at Apiiro to help the pentester become much more contextual and focused on the risky areas of the code. With the same budget that you have today, you can much more efficiently reduce the risks.  

Learn more about the partnership between NetSPI and Apiiro. 

Why is open-source software risk so important, and how do people need to think about it? 

Idan: You can’t look at open source as one dimension in application security. You must take into consideration the application code, the infrastructure code, the open-source code, and the cloud infrastructure that the application will eventually run on.  

We recently built the Dependency Combobulator. Dependency confusion is one of the most dangerous attack vectors today. Dependency confusion is where you’re using an internal dependency without a proper naming convention and then an attacker goes into a public package manager and uses the same name.  

When you can’t reach your internal artifact repository or package manager, it will automatically fall back and access the package manager on the internet. Then, your computer will fetch or download the malicious dependency with the malicious code, which is a huge problem for organizations.  
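
The flawed resolution logic behind dependency confusion can be sketched in a few lines. This is a hypothetical resolver, not any real package manager's code: if the same name exists in both an internal and a public index and the highest version wins, an attacker's public squat takes precedence.

```python
# Hypothetical resolver illustrating dependency confusion. Both indexes are
# consulted and the highest version wins, wherever it came from.
def resolve(name, indexes):
    candidates = [index[name] for index in indexes if name in index]
    return max(candidates)

internal_index = {"acme-billing": (1, 2, 0)}   # legitimate internal package
public_index   = {"acme-billing": (99, 0, 0)}  # attacker's public squat

print(resolve("acme-billing", [internal_index, public_index]))  # (99, 0, 0)
```

The mitigation amounts to removing that fallback: pin internal names to the internal index so a public package can never satisfy them.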

The researcher who discovered the dependency confusion attack suddenly received HTTP requests from within Microsoft, Apple, Google, and other enterprises because he had found some internal package names while browsing a few websites. He just wanted to play with the concept of publishing packages with the same names to the public repository.

This is why we need to help the community and provide them with an open-source framework that they can extend, so that they can run it from their CLI or CI/CD pipeline for every internal dependency. Contributing to the open-source community is an important initiative.  

What can organizations do to be better prepared for similar vulnerabilities to Log4Shell? 

In December 2021, Log4Shell sent security teams into a frenzy ahead of the holiday season. Idan and I discussed four simple steps organizations can take on to mitigate the next remote code execution (RCE) vulnerability, including: 

  1. Inventory. Inventory and identify where the vulnerable components are.
  2. Protection. Protect yourself or your software from being attacked and exploited by attackers from the outside.
  3. Prevention. Prevent developers from doing something or getting access to the affected software to make additional changes until you know how to deal with the critical issue.
  4. Remediation. If you do not have an initial inventory that is automated and happening systematically across your organization and all the different software being developed, you cannot get to this step.

For the full conversation and additional insights on application security, listen to episode 39 of the Agent of Influence podcast.

Multi-Factor Authentication: The Bare Minimum of IAM https://www.netspi.com/blog/executive-blog/security-industry-trends/multi-factor-authentication-the-bare-minimum-of-iam/ Tue, 10 May 2022 12:00:00 +0000 https://www.netspi.com/multi-factor-authentication-the-bare-minimum-of-iam/ Learn how protecting your organization, employees, and customers starts with multi-factor authentication.

What is the typical authentication setup for personal online accounts? The username and password. 

For too long, we have depended on this legacy form of authentication to protect our personal data. As more people rely on the internet to manage their most important tasks — online banking, applying for loans, running their businesses, communicating with family, you name it — many companies and services still opt for the typical username and password authentication method, often with multi-factor authentication as an option, but not a requirement.  

To combat the sophisticated attacks of hackers today, multi-factor authentication methods must be considered the bare minimum. [For those unfamiliar with the concept, multi-factor authentication, or MFA, requires the user to validate their identity in two or more ways to gain access to an account, resource, application, etc.] Then, starting on that foundation, security leaders must consider what other identity and access management practices can they implement to better protect their customers? 

For more insights on this global challenge, we spoke with authentication expert Jason Soroko, CTO-PKI at Sectigo, during episode 40 of the Agent of Influence podcast to learn more about the future of multi-factor authentication, symmetric and asymmetric secrets, digital certificates, and more. Continue reading for highlights from our discussion or listen to the full episode, The State of Authentication and Best Practices for Digital Certificate Management

Symmetric Secrets vs. Asymmetric Secrets  

The legacy username and password authentication method no longer offers enough protection. Let’s take a deep dive into symmetric secrets and asymmetric secrets to better understand where we can improve our processes. 

Symmetric secrets are an encryption method that use one key for both encrypting and decrypting a piece of data or file. Here’s a fun anecdote that Jason shared during the podcast: “Let’s say you and I want to do business. We agree that I could show up at your door tomorrow and if I knock three times, you will know it’s me. Well, somebody could have overheard us having that conversation to agree to knock three times. It’s the same thing with a username and password. That’s a shared symmetric secret.” 

According to Jason, the issue with this method is that the secret had to be provisioned out to someone or, in today’s context, keyed into memory on a computer. This could be a compromised endpoint on your attack surface. Shared secrets have all kinds of issues, and you only want to utilize them in a network where the number of resources is extremely small. And we should no longer use them for human authentication methods. 

Instead, we need to shift towards asymmetric secrets.   

Asymmetric secrets, which are used to securely send data today, have two keys: private and public. The public key is used for encryption purposes only and cannot be used to decrypt the data or file. Only the private key can do that. 

The private key is never shared; it never leaves a secured place (e.g., Windows 10, Windows 11, trusted platform module (TPM), etc.) and it’s what allows the authentication to occur securely. Not only that, but asymmetric secrets don’t require a cumbersome multi-step authentication flow, improving the user experience overall. The ability for a hacker to guess or steal the asymmetric secret is much more difficult because it is in a secure element, Jason explains.
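
Here is a minimal sketch of that flow using the Python cryptography library: the private key signs a server challenge without ever leaving the device, and verification requires only the public key. The challenge value is a placeholder invented for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Device side: the private key is generated and kept in secure storage.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # only this is ever shared

challenge = b"login-challenge-1234"     # placeholder server nonce
signature = private_key.sign(challenge)

# Server side: verification needs only the public key; there is no shared
# secret to phish, steal, or overhear.
try:
    public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```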

Of course, some organizations have no choice but to stick with ancient legacy systems due to financial reasons. But the opportunity here is to complement that legacy authentication method with other controls so you can enhance your authentication system. 

Pitfalls of SMS Authentication 

If you’re considering SMS authentication, I hate to be the bearer of bad news, but it doesn’t offer comprehensive protection. SMS authentication was never built to be secure, and it was never intended to be used the way it is popularly used today. Now, not only do we have the issue of people using a protocol that’s inherently insecure by design, but hackers can easily intercept authentication messages sent via SMS.

As Jason shared on the podcast, the shocking truth is that SMS redirection is commercially available. It only costs around $16 to persuade the telecommunications company to redirect SMS messages to wherever you want them to go, which shows how easily hackers can obtain messages and data. 

To learn more about telecommunications security, read: Why the Telecoms Industry Should Retire Outdated Security Protocols.

Three Best Practices for Managing Digital Certificates 

Even with the implementation of multi-factor authentication, how do you know if a person or a device is trustworthy to allow inside your network? 

You achieve that with digital certificates, also known as public key certificates. They’re used to share public keys and bind a public key to the person or device that owns it.

With so many people moving to remote work, the number of digital certificates to authenticate each day has only grown. It’s important to manage your digital certificates effectively to mitigate the risk of adversaries trying to access your organization’s network.

For additional reading on the security implications of remote work, check out NetSPI’s related articles on remote workforce security.

To get you started toward better digital certificate management, Jason shared these three best practices: 

  1. Take inventory: Perform a proper discovery of all the certificates that you have (TLS, SSL, etc.) to gain visibility into how many you have. (A minimal discovery sketch follows this list.)
  2. Investigate your certificate profiles: Take into consideration your DevOps certificates, your IoT certificates, etc., and delve into how the certificates were set up, who set them up, how long the bit-length is, and whether a proper, non-deprecated cryptographic algorithm is used.
  3. Adapt to new use cases: Look towards the future to determine if you can adapt to new use cases (e.g., can this be used to authenticate BYOD devices or anything outside the Microsoft stack, how will current cryptographic algorithms differ in the future, what about hybrid quantum resistance, etc.).
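
As a starting point for the inventory step above, here is a minimal sketch that fetches a host’s TLS certificate and reports its expiry using only the Python standard library. The host list is a placeholder.

```python
import socket
import ssl

def cert_expiry_epoch(host, port=443):
    """Return the expiry time (epoch seconds) of a host's TLS certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string like 'Jun  1 12:00:00 2025 GMT'.
    return ssl.cert_time_to_seconds(cert["notAfter"])

for host in ["example.com"]:  # placeholder inventory list
    print(host, cert_expiry_epoch(host))
```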

The Future of Multi-Factor Authentication 

As mentioned at the beginning of this article, multi-factor authentication should be considered the bare minimum, or foundation, for organizations today. For organizations still on the fence about implementing this authentication method, here are three reasons to start requiring it: 

  • A remote workforce requires advanced multi-factor authentication to verify the entities coming into your network.
  • Most cyberattacks stem from hackers stealing people’s usernames and passwords. Multi-factor authentication adds additional layers of security to prevent hackers from accessing an organization’s network.
  • Depending on which method your organization utilizes, multi-factor authentication provides a seamless login experience for employees — sometimes without the need for a username or password if using biometrics or single-use code. 

More organizations are choosing to adopt multi-factor authentication and we can only expect to see more enhancements in this area.  

According to Jason, artificial intelligence (AI) will play an important role. Take convolutional neural networks, for example. This is a type of artificial neural network (ANN) used to analyze images. If we were to apply convolutional neural networks to cybersecurity, we could train them to identify known malicious binaries or patterns quickly and accurately. Of course, this is something to look forward to in the foreseeable future. 

An area we’ve certainly made much progress on, though, is the ability to use machine learning to determine malicious activity in the credit card fraud detection space. 

Multi-Factor Authentication is Only the First Step 

At a bare minimum, every organization should start with multi-factor authentication and build from there. One-time passwords, email verification codes, or verification links are user-friendly and go a long way in effective authentication.  

Cyberwarfare coupled with a remote workforce and government scrutiny should prompt companies everywhere to bolster their cybersecurity defenses. The authentication methods and best practices Jason Soroko shared with me on the Agent of Influence podcast are a step in the right direction toward protecting your organization, employees, and — most importantly — your customers. 

Put your IAM and authentication processes to the test against real attacker techniques. Explore NetSPI’s red team operations.

Application Security: Shifting Left to the Right Degree https://www.netspi.com/blog/executive-blog/application-pentesting/shift-left-secure-software/ Tue, 12 Apr 2022 12:00:00 +0000 https://www.netspi.com/shift-left-secure-software/ Read application security best practices from our cybersecurity podcast discussion with Maty Siman, CTO at Checkmarx.

In application security, DevOps, and DevSecOps, “shift left” is a guiding principle for how organizations should implement security practices into the development process. For this reason, today’s application security testing tools and technologies are built to facilitate a shift left approach, but the term has taken on a new meaning compared to when it first entered the scene years ago.

Over the past decade, software development has drastically changed with the proliferation of impactful technology, such as APIs and open-source code. However, shift left has remained a North Star for organizations seeking to improve application security. Its meaning has become more nuanced for those attempting to achieve a mature application security framework.

I recently sat down with Maty Siman, Founder and CTO at Checkmarx on our Agent of Influence podcast to discuss application security and the concept of shift left. You can listen to the full episode here. Let’s explore four highlights from the discussion:

The “Lego-ization” of Software 

In the past, developers would build their solutions from the ground up, developing unique libraries to carry out any desired functionality within an application. Today, developers leverage a wide range of tools and technologies, such as web services, open-source code, third party solutions and more, creating software that is ultimately composed of a variety of different components.

As Maty alluded to during the Agent of Influence podcast, many in the industry have referred to this practice as the “lego-ization” of software, piecing together different premade, standardized Lego blocks to form a unique, sound structure.

While both traditional and modern, lego-ized methods are forms of software development, they demand a different set of expertise. This is where mature application security frameworks become invaluable. Maty explains that today’s developers are often working around the clock to keep up with the pace of digital transformation; they cannot just focus on code for vulnerabilities. They must also look at how the different components are connected and how they communicate with one another.

Each connection point between these components represents a potential attack surface that must be secured – but addressing this can also become a source of friction and perceived inconvenience for developers.  

The Impact of Today’s Open Source and API Proliferation 

The recent proliferation of software supply chain security threats has made the situation even more complex and dire for software developers, as malicious actors look to sneak malicious code into software as it’s being built.

As Maty explains during our podcast conversation, open source code makes up anywhere from 80 to 90 percent of modern applications. Yet developers often pull these resources from sites like GitHub without checking to see if the developer who created the package is trustworthy. This further exacerbates the security risk posed by the lego-ized development practices we see today, Maty warns.

Additionally, in recent years, there has been an explosive growth in the usage of APIs in software development. Organizations now leverage thousands of APIs to manage both internal and external processes but have not paid enough attention to the challenge of securing these deployments, according to Maty.

However, efforts have been made to set organizations on the right path in securing APIs, such as the OWASP API Security Project – but there is still a lot of work to be done. Check out the OWASP API Top 10 list, co-written by Checkmarx’s Vice President of Security Research, Erez Yalon.

Read: AppSec Experts React to the OWASP Top 10 2021

Many organizations are not aware of which or how many APIs their services take advantage of, which presents an obstacle towards securing them. As a result, Maty explains that the concept of a “software bill of materials,” or SBOM, is beginning to take shape as organizations seek to better understand the task at hand.

With APIs quickly becoming a favored attack vector for cybercriminals, the importance of developers getting a handle on API security cannot be overstated, which is especially crucial for application penetration testing. Simultaneously, the task is an immense one that many developers see as a headache or hindrance to their main goal, which is to deliver new software as quickly as possible. 

Shifting Left in an Evolving Application Development Landscape  

While the trends outlined above certainly present significant challenges when it comes to application security, they are not insurmountable. Maty advises that organizations can and should implement certain changes in their approach to application security to better support developers with appropriate application security testing tools and other resources.

One of the main issues organizations face in modern application security testing, including application penetration testing or secure code review, lies in the effort to shift left. Shift left is sometimes seen as a source of friction in the developer community. It is about finding and managing vulnerabilities as early as possible, which has only become more difficult and complex as development has evolved.

Read: Shifting Left to Move Forward: Five Steps for Building an Effective Secure Code Review Program

The amount of innovation in software development and implementation means that shifting as far left as possible is not always feasible or even the best approach. While detecting vulnerabilities in code as early as possible is a priority in application security, attempting to force developers to do so too early in the development process can exhaust developers and slow software delivery, as Maty advises.

For example, the use of integrated development environment (IDE) plugins can often make developers feel hindered and nagged by security rather than empowered by it. While they represent a shift to the extreme left in terms of security, they are not always a good idea to impose on developers.

No Right Way to Shift Left in Application Security 

Ultimately, the proper way to shift left is going to vary across organizations, depending on the software they are building and what is going into it. It is paramount to take a tailored approach that balances the security responsibilities placed on developers with the need to maintain agility and deliver software quickly.

Application development has changed significantly, and we can expect it to continue to change in the coming years. Creating and maintaining a mature application security framework will depend on maintaining a proper understanding of the tools and technologies developers are using and adjusting the organizational approach to application security accordingly.

For more, listen to episode 32 of Agent of Influence with Maty of Checkmarx: “Shift Left, But Not Too Left”: A Conversation on AppSec and Development Trends.

Why the Telecoms Industry Should Retire Outdated Security Protocols https://www.netspi.com/blog/executive-blog/security-industry-trends/why-telecoms-should-retire-outdated-security-protocols/ Tue, 08 Mar 2022 13:00:00 +0000 https://www.netspi.com/why-telecoms-should-retire-outdated-security-protocols/ Learn how the telecommunications industry can invest in end-to-end encryption to secure user data and prevent breaches.

The Federal Communications Commission (FCC) recently announced its proposal to update data breach laws for telecom carriers.

A key change in the proposal? Eliminating the seven-business-day waiting period required of businesses before notifying customers of a breach.  

Although the proposed FCC change would allow companies to address and mitigate breaches more quickly, it does not solve the greater issue at hand: The sensitive data collected by the telecoms industry is constantly at risk of being exploited by malicious actors.  

The Telecoms Threat Environment 

Protecting data within the telecoms industry is instrumental in ensuring customer privacy and safety.  

When telecom companies experience a data breach, hackers often target customer proprietary network information (CPNI) – “some of the most sensitive personal information that carriers and providers have about their customers,” according to the FCC. This includes call logs, billing account information, as well as the customer’s name, phone number, and more.  

In August 2021, T-Mobile suffered the largest carrier breach on record, with over 50 million current and former customers affected.  

To protect customers from further breaches, the telecoms industry must deploy configurations securely, enable end-to-end encryption, and return to security basics by enabling automation in vulnerability discovery and penetration testing.  

Misconfiguration Risk 

Networks, specifically telecommunications channels, continue to increase in complexity, causing an increased risk for misconfigured interfaces within organizations.  

From these misconfigurations, attackers can stitch together multiple weaknesses and pivot from one system to another across multiple organizations.  

In October 2021, LightBasin, a hacking group with ties to China, compromised telecom companies around the world. LightBasin used multiple tools and malware to exploit weaknesses in systems that were configured with overly permissive settings and neglected to follow the principle of least privilege.  

These hacking tactics are not unique. Had the telecoms industry instituted the proper channels for alerting and blocking on common attack patterns and the known tactics, techniques, and procedures (TTPs) that attackers use widely, it may have been able to prevent the LightBasin attack.

Additionally, to protect against future attacks and data breaches, industries should build proper standards and automation to ensure that configurations are deployed securely and consistently monitored.  

The Need for End-to-End Encryption 

Enabling end-to-end encryption within mobile communication networks could help to combat some of the lateral movement strategies used by LightBasin and similar hacker groups.  

This lateral movement within telecommunications networks can be challenging for the industry to address for multiple reasons. The overarching issue? Telecommunications systems were not originally developed with security in mind and are not secure by design.  

These telecoms systems have flaws that cannot be fixed without major architectural changes, and they have evolved to be used in ways outside the original creators’ intent.  

In particular, these mobile communications networks were not built with a quality of service guarantee or any type of end-to-end encryption to ensure that users’ data is not exposed while in transit.  

WhatsApp, for example, uses the Signal protocol to encrypt messages between users, with keys negotiated asymmetrically. The encrypted messages are then transmitted via a WhatsApp-facilitated server.  

This ensures that only the intended recipient can decrypt the message; anyone else who attempts to do so will fail. Legacy telecoms players should adopt a similar approach for added protection of users’ communications.  
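For illustration, here is a minimal sketch of the underlying idea — asymmetric key agreement plus symmetric encryption — using Python’s cryptography package. This is a toy Diffie-Hellman exchange, not the Signal protocol itself (which adds authentication, session ratcheting, and forward secrecy on top); the message contents and labels are assumptions for the example.

```python
# Minimal end-to-end encryption sketch (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Each party generates a key pair; only public keys are ever exchanged.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Alice derives a shared secret from her private key and Bob's public key,
# then stretches it into a symmetric key.
shared = alice_priv.exchange(bob_priv.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo-e2ee").derive(shared)

# Encrypt: a relay server in the middle only ever sees nonce + ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Bob independently derives the same key from his private half and decrypts.
bob_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo-e2ee").derive(bob_priv.exchange(alice_priv.public_key()))
assert AESGCM(bob_key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```

The server in the middle handles only the nonce and ciphertext; without one of the private keys, it cannot recover the message.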

While end-to-end encryption can protect against lateral movement strategies, it is not infallible. A secure communication channel does not guarantee application security: users are still vulnerable to social engineering attacks and malware, and, as in WhatsApp’s case, the app itself may be vulnerable.  

To truly secure user data, the telecoms industry must invest in holistic security strategies, including application security testing.  

For more on end-to-end encryption, read Why Do People Confuse “End-to-End Encryption” with “Security”? 

Collaboration and Coordination 

As the telecoms industry begins to prioritize security, the organizations that rely on its networks must do the same.  

This includes enforcing multi-factor authentication between users and systems, applying the principle of least privilege, and implementing proper input validation and output encoding.  
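As a small sketch of the last two controls: input validation works best as an allow-list, and output encoding should match the context the data is rendered into. The username pattern and HTML rendering below are illustrative assumptions.

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list pattern

def validate_username(value: str) -> str:
    """Reject anything outside the allow-list rather than trying to strip bad input."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_comment(comment: str) -> str:
    """Encode output for an HTML context so stored text cannot execute as script."""
    return f"<p>{html.escape(comment)}</p>"

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```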

In tandem, the telecoms industry should strive to build automated vulnerability management processes where possible. This ensures continuous checks and balances are in place to secure all deployed systems – both at the software and infrastructure levels.  
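One hedged example of such a continuous check: compare each host’s actually open ports against its approved service list on a schedule. The host address, port list, and approved-service mapping below are placeholders, not a real inventory.

```python
import socket

# Hypothetical continuous check: flag ports that are open but not approved.
APPROVED = {"192.0.2.10": {443}}                    # placeholder inventory
COMMON_PORTS = (21, 22, 23, 25, 80, 443, 3306, 3389)

def unexpected_open_ports(host: str) -> set[int]:
    open_ports = set()
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:     # 0 means connect succeeded
                open_ports.add(port)
    return open_ports - APPROVED.get(host, set())

# Scheduled via cron or a CI job, any non-empty result becomes an alert.
print(unexpected_open_ports("192.0.2.10"))
```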

While hackers have only become more sophisticated in the technology and methods they use to acquire data, the telecoms industry has neglected to keep up.  

Currently, messages and calls can be spoofed, data is not encrypted while in transit, and the quality of service and protection is not guaranteed. We have adopted a network with inherent flaws in its design from a security perspective, and these systems are used by billions of people across the globe.  

The proposed change in FCC guidelines marks significant progress. Given the current threat environment, security efforts in the telecoms industry must be prioritized to ensure billions of people and their data are protected. 

Penetration Testing Services vs. Bug Bounty Programs
Tue, 01 Feb 2022 | https://www.netspi.com/blog/executive-blog/penetration-testing-as-a-service/penetration-testing-services-versus-bug-bounty/

What are the greatest differences between pentesting and bug bounties? We break it down into six components: personnel, payment, vulnerabilities, methodology, time, and strategy.

While in the Kingdom of Saudi Arabia for the @Hack cybersecurity conference, we noticed a disconnect in the understanding of penetration testing. Many of the people we spoke with assumed pentesting and bug bounty programs were one and the same.

Spoiler alert: that assumption is incorrect. While they share a similar goal, pentesting services and bug bounties vary in impact and value.

In an effort to demystify the two vulnerability discovery activities, this blog covers how each is used in practice, outlines key differences, and explains the risks associated with relying solely on bug bounties.

What is a Bug Bounty Program?

Simply put, a bug bounty program consists of ethical hackers exchanging critical vulnerabilities, or bugs, for recognition and compensation. 

The parameters of a bug bounty program may vary from organization to organization. Some may scope out specific applications or networks to test and some may opt for a “free-for-all” approach. Regardless of the parameters, the process remains the same. A hacker finds a vulnerability, shares it with the organization, then, once validated, the organization pays out a bounty to the hacker. 

For a critical vulnerability finding, the average payout rose to $3,000 in 2021. Bounty payments have come a long way since 2013’s ‘t-shirt gate,’ where Yahoo offered hackers a $12.50 company store credit for finding a number of XSS (cross-site scripting) vulnerabilities – yikes.

What is Penetration Testing?

Penetration testing is an offensive security activity in which a team of pentesters, or ethical hackers, are hired to discover and verify vulnerabilities. Pentesters simulate the actions of a skilled adversary to gain privileged access to an IT system or application, such as cloud platforms, IoT devices, mobile applications, and everything in between. 

Pentesting also helps organizations meet security testing requirements set by regulatory bodies and industry standards such as PCI and HIPAA.

Pentesters use a combination of automated vulnerability discovery and manual penetration testing techniques. They work collaboratively to discover and report all vulnerability findings and help organizations with remediation prioritization. Pentesting partners like NetSPI work collaboratively with in-house security teams and are often viewed and treated as an extension of that team.

Penetration testing has evolved dramatically over the past five years with the emergence of penetration testing as a service (PTaaS). PTaaS enables more frequent, transparent, and collaborative testing. It streamlines vulnerability management and introduces interactive, real-time reporting. 

As an industry, we’ve shifted away from traditional pentesting where testers operate behind-the-curtain, then deliver a long PDF list of vulnerabilities for security teams to tackle on their own.

What is Penetration Testing? For a more detailed definition, how it works, and criteria for selecting your penetration testing partner, read our guide.

6 Core Differences Between Pentesting and Bug Bounties

So, what are the greatest differences between pentesting and bug bounties? Let’s break it down into six components: personnel, payment, vulnerabilities, methodology, time, and strategy.

Personnel

Pentesters are typically full-time employees who have been vetted and onboarded to provide consistent results. They often work collaboratively as a team, rather than relying on a single tester. 

Bug bounty hackers operate as independent contractors and are typically crowdsourced from across the globe. Working with crowdsourced hackers can open the door to risk, given you cannot be 100% confident in their intentions and motives. 

Will they sell the intel they gather to a malicious party for additional compensation? Will they insert malicious code during a test? With full-time employees, there are additional guardrails and accountability to ensure the hacking is performed ethically.

Payment

With penetration testing vendors, the payment model can vary. Cost is often influenced by the size of the organization, the complexity of the system or application, vendor experience, the scope, depth, and breadth of the test, among other factors. 

With a bug bounty program, the more severe the vulnerability, the more money a bug bounty hunter makes. Keep in mind that negotiation of the bounty payment is very common with bug bounty programs, so it is important to factor in the time and resources to manage those discussions.

Additionally, one cause for concern with bug bounty payments is that instead of reporting vulnerabilities as they are found, it’s common for hackers to hold on to the most severe vulnerabilities for greater payout and recognition during a bug bounty tournament. 

Vulnerabilities

Because of the pay-per-vulnerability model bug bounty programs follow, it’s no surprise that many are focused solely on finding the highest severity vulnerabilities over the medium and low criticality ones. However, when chained together, lower severity vulnerabilities can expose an organization to significant risk.

This is a gap that penetration testing fills. Penetration testers chain together seemingly low-risk events to verify which vulnerabilities enable unauthorized access. Pentesters do prioritize critical vulnerabilities, but they also examine all vulnerabilities with a business context lens and communicate the risk each could pose to operations if exploited.
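To make the chaining idea concrete, a pentester’s mental model resembles a graph search: each low-severity finding is an edge that grants a new level of access, and the question is whether any path connects the internet to a crown jewel. The findings and access levels below are invented for illustration.

```python
from collections import deque

# Toy attack graph: each low-severity finding is an edge granting new access.
# Nodes and findings are illustrative, not from a real assessment.
FINDINGS = {
    "internet": [("verbose error page", "internal hostname known")],
    "internal hostname known": [("default creds on dev app", "dev app user")],
    "dev app user": [("overly permissive share", "domain user")],
    "domain user": [("cached admin token", "domain admin")],
}

def attack_path(start: str, goal: str):
    """Breadth-first search for a chain of findings from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for finding, nxt in FINDINGS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [finding]))
    return None

print(attack_path("internet", "domain admin"))
```

Individually, each edge might be rated low or informational; chained, they form a complete path to domain admin — exactly the risk a pay-per-bug model tends to miss.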

Vulnerability findings aside, there are also key differences in how the results are delivered. With bug bounties, it’s up to the person who found the vulnerability to decide when to disclose the flaw to the program – or save it for a tournament as mentioned above, or even disclose it publicly without consent.

Modern penetration testing companies like NetSPI operate transparently and report findings in real time as they are discovered. Plus, pentesters validate and retest to confirm the vulnerability exists, evaluate the risk it poses, and determine if it was fixed effectively.

Methodology

The greatest difference in the testing methodology of bug bounty programs and penetration testing services is consistency.

From our discussions with security leaders, the biggest challenge they face with bug bounty programs is that service, quality, project management, and other key methodology factors often lack consistency. Notably, the pool of independent contractors varies in experience and expertise. And the level of effort diminishes as the rewarding, critical vulnerabilities are found and researchers move on to engagements with greater potential for compensation.

Penetration testing is more methodical in nature. Testers follow robust checklists to ensure consistency in the testing process and make certain that they are not missing any notable gaps in coverage. They also hold each other accountable by working on teams. At NetSPI, our pentesters use the workbench in our Resolve PTaaS technology platform to collaborate and maintain consistency.

For any organization with legal, regulatory, or contractual obligations for robust security testing, bug bounties simply cannot meet those requirements. Bug bounty programs are opportunistic. There is no assurance of full-coverage testing, as they do not adhere to a defined methodology or checklists to ensure consistency from assessor to assessor, or assessment to assessment. Some bug bounty programs will use checklists upon request – for a hefty added cost.

Time

While bug bounty programs are evergreen and always-on, traditional penetration testing has been limited by time-boxed assessments.

To address this, first and foremost we recommend organizations provide their pentesting team with access to source code or perform a threat modeling assessment to equip their team with information a malicious hacker could gain access to in the wild. This allows pentesters to accurately emulate real attackers and spend more time finding business critical vulnerabilities.

The pentesting industry is rapidly evolving and is becoming more continuous, thanks to the PTaaS delivery model and attack surface management. Gone are the days of annual pentests that check a compliance box. We see a huge opportunity for integration with attack surface management capabilities to truly offer continuous testing of external assets.

Strategy

Penetration testing is a strategic security activity. On the other hand, bug bounty programs are very tactical and transactional: find a vulnerability, report it, get paid for it, then move on to the next hunt.

As noted earlier, penetration testers are often viewed as an extension of the internal security team and collaborate closely with defensive teams. You can also find pentesting partners that offer strategic program maturity advisory services. Because of this, pentesters deeply understand the systems, networks, applications, and so on, and can assess them holistically. This is particularly beneficial for complex systems and large organizations with massive technology ecosystems.

Furthermore, strategic partnerships between penetration testing vendors and their partners lead to a greater level of trust, institutional knowledge, and free information exchange. In other words, when you work with a team of penetration testers on an ongoing basis, their ability to understand the mechanics of your company and its technologies lends itself to discovering both a greater number and higher quality of vulnerabilities.

Final Thoughts

The way penetration testing has and continues to evolve fills many of the gaps left by bug bounty programs. There is certainly room for both bug bounty programs and penetration testing in the security sector – in many cases the services complement one another. However, it is important to understand the implications and risks associated when deciding where to focus your efforts and budget. 

Best Practices for Software Supply Chain Security
Tue, 18 Jan 2022 | https://www.netspi.com/blog/executive-blog/security-industry-trends/best-practices-software-supply-chain-security/

Take these four steps to improve your software supply chain security, including enforcing security awareness training, enacting policy and standards adherence, and more.

Today’s business environment extends far beyond traditional brick-and-mortar organizations. Due to an increased reliance on digital operations, the frequency and complexity of supply chain cyber attacks — the domain of vendor risk management, or third-party security — are growing exponentially. It’s apparent that business leaders can no longer ignore supply chain security.

Not only did we see an increase in supply chain attacks in 2021, but the entire anatomy of an organization’s attack surface has evolved significantly. With more organizations shifting to a remote or hybrid workforce, we’ve seen a spike in cloud adoption and a heavy reliance on digital collaboration with third-parties.

Over the past few years we’ve introduced many new risks into our software supply chains. So, how do we ensure we don’t become the next SolarWinds or Accellion? In this blog, we reveal four supply chain security best practices to get you started on solid footing.

First, understand where the threats are coming from. 

With so many facets of the supply chain connected through digital products, organizations and security leaders need to understand which sectors are most vulnerable and where hackers can find holes — both internally and externally.

A recent study found that 70% of all breaches are caused by an outside force, and 17% specifically involve malware. This is to be expected: as software development has been outsourced more frequently, the doors have opened to traditional malware attacks and breaches. Businesses need to understand how and where their resources can be accessed, and whether these threats can be exploited. However, malicious code detection is known to be very difficult. Standard code reviews won’t always identify these risks, as malicious code can be inserted into internally built software and mimic the look and feel of regular code. This is one of the biggest trends leaders must be aware of; they need to fully understand which threats could impact their organization.

In addition to malware, hackers have begun attacking business assets across an organization’s supply chain through “island hopping.” We’re seeing 50% of today’s cyber attacks use this technique. Security leaders need to identify and monitor island hopping attacks frequently to stay ahead of the threat. Gone are the days when hackers target an organization itself — instead, adversaries go after an organization’s partners to gain access to the initial organization’s network.

Supply Chain Security Best Practices

How do organizations ensure they don’t become the weakest link in the supply chain? First and foremost, be proactive! Businesses must look at internal and external factors impacting their security protocol and implement these four best practices.

1. Enforce security awareness training.

Ensure you are training your staff not only when they enter the organization, but also on a continuous basis and as new business emerges. Every staff member, regardless of level or job description, should understand the organization’s view of and focus on security, including how to respond to phishing attempts and how to protect data in a remote environment. For example, in a retail environment, all internal employees and third-party partners should understand PCI compliance, while healthcare professionals need a working knowledge of HIPAA. The idea is to get everyone on the same page so they understand the importance of sensitive information within an organization and can help mitigate a threat when it is presented.

2. Enact policy and standards adherence.

Adherence to policies and standards is how a business keeps progressing. But relying on a well-written standard that matches policy is not enough: organizations need to actually adhere to that policy and those standards, otherwise they are meaningless. This is true when working with outside vendors as well. Generally, it’s best to set up a policy that meets an organization where it is and maps back to its business processes – a standard coherence within the organization. Once that’s understood, the policy must mature as the business matures. This will create a higher level of security for your supply chain with fewer gaps.

In the past, we’ve spent a lot of time focusing on policies and recommendations for brick-and-mortar types of servers. With remote work and outsourcing increasing, it’s important to understand how policies transfer over when working with vendors in this new remote setting. 

3. Implement a vendor risk management program.

How we exchange information with people outside of our organization is critical in today’s environment. Cyber attacks through vendor networks are becoming more common, and organizations need to be more selective when choosing their partners.

Once partners are chosen, security teams and business leaders need to ensure all new vendors are assessed with a risk-based vendor management program. The program should address re-testing vendors according to their identified risk level. A well-established, risk-based vendor management program involves vendor tiering — follow this three-tiered approach to get started (a scoring sketch follows the list): 

  • Tier one: Analyze and tier vendors based on business risk so you can focus security resources and ensure due diligence where it matters most. 
  • Tier two: Conduct risk-based assessments. The higher the vendor risk, the more deeply their security program should be assessed to understand where an organization’s supply chain could be vulnerable – organizations need to pay close attention here. Lower-risk vendors can be assessed through automated scoring, medium-risk vendors require a more extensive questionnaire, and high-risk vendors should demonstrate the strength of their security program through penetration testing results. 
  • Tier three: Arguably the most important for long-term vendor security: re-testing. Vendor assessments should be conducted at the start of a partnership, and repeated as that partnership grows, to make sure vendors are adhering to protocol. This helps confirm nothing is slipping through the cracks and that the safety policies and standards in place are constantly being met. 
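As a rough sketch of how tier-one scoring might be automated — the risk factors, weights, and thresholds here are assumptions for illustration, not a prescribed model:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    handles_sensitive_data: bool
    has_network_access: bool
    business_criticality: int  # 1 (low) .. 5 (high)

def tier(vendor: Vendor) -> str:
    """Map illustrative risk factors to an assessment tier."""
    score = vendor.business_criticality
    score += 3 if vendor.handles_sensitive_data else 0
    score += 2 if vendor.has_network_access else 0
    if score >= 8:
        return "high risk: require recent penetration test results"
    if score >= 5:
        return "medium risk: extended security questionnaire"
    return "low risk: automated scoring"

print(tier(Vendor("PayrollCo", True, True, 4)))      # high risk
print(tier(Vendor("SwagPrinter", False, False, 1)))  # low risk
```

A real program would draw these factors from procurement and data-classification records, but even a simple score like this makes the tiering repeatable rather than ad hoc.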

4. Look at the secondary precautions. 

Once security awareness training, policy, and standards are in place, and organizations have established a successful vendor risk management program, they can look at secondary proactive measures to keep supply chain security top of mind. Tactics include, but are not limited to, attack surface management, penetration testing services, and red team exercises. These strategic offensive security activities can help identify where the security gaps exist in your software supply chain.

Now that so many organizations are working with outside vendors, third-party security is more important than ever. No company wants to fall vulnerable due to an attack that starts externally. The best way to prepare and decrease vulnerability is to have a robust security plan that the whole company understands. By implementing these four simple best practices early on, businesses can go into the new year with assurance that they won’t be the weakest link in the supply chain — and that they’re safeguarded from external supplier threats.
