What happens after you accidentally leak secrets to a public code repository

by Paul Price @ shhgit, February 14th, 2021

Too Long; Didn't Read

Fraudsters constantly scan public code repositories for these secrets to gain a foothold into systems. Shhgit finds over 165,000 secrets every single day across public GitHub, GitLab, and Bitbucket repositories. The fallout can be catastrophic in terms of financial loss and reputational damage. We purposely leaked valid Amazon AWS credentials to a public GitHub repository. We chose to leak AWS keys because we know they are highly sought after by fraudsters with all sorts of different motives — espionage, spamming, financial gain or blackmail. But what happens immediately after leaking secrets?


How we find secrets in your code before the bad guys do.

Disclaimer: I am the Founder and CEO of shhgit. We find secrets in your code — before the bad guys do.

Accidentally leaking secrets — usernames and passwords, API tokens, or private keys — in a public code repository is a developer's and security team's worst nightmare. Fraudsters constantly scan public code repositories for these secrets to gain a foothold into systems. Code is more connected than ever, so these secrets often provide access to private and sensitive data — cloud infrastructures, database servers, payment gateways, and file storage systems, to name a few. But what happens after a secret is leaked? And what is the potential fallout?
Fraudsters compromised our cloud infrastructure in less than a minute after leaking AWS keys to a public GitHub repository
One thing is for certain: the fallout can be catastrophic in terms of financial loss and reputational damage. The now-infamous Uber data breach of 2016 was the result of an engineer accidentally uploading credentials to a public GitHub repository. These credentials allowed bad actors to access Uber's backend databases and confidential information. This breach ultimately resulted in a then-record-breaking settlement — ouch.

More recently, the sophisticated hack against IT management software giant SolarWinds may have originated from credentials exposed in a public code repository. This supply-chain attack and the resulting fallout will go down as one of the largest in history, joining the likes of WannaCry. The hack affected over 18,000 SolarWinds customers, including various US federal agencies and tech giants such as Microsoft and FireEye. As news of the hack broke, SolarWinds saw its market cap cut in half, falling by billions of dollars as its share price dipped below pre-IPO levels.
Thankfully, these hard-hitting headlines are the exception and not the rule. We can look to publicly disclosed reports on bug bounty platforms such as HackerOne to get a feel for how often these secrets are being found and reported — at least by the good guys. Each of these three reports from 2020 could easily have become a front-page headline or resulted in huge regulatory fines; these companies got lucky.
  1. One company exposed the administrative credentials of its GitLab Enterprise server on GitHub, allowing access to its private code repositories.
  2. Not for the first time, credentials leaked on GitHub allowed access to an API endpoint exposing the PII of Costa Rican citizens.
  3. A CircleCI API token leaked on GitHub allowed further access to private code repositories and the secrets within them.

Finding these secrets is what we do. Shhgit finds over 165,000 secrets every single day across public GitHub, GitLab, and Bitbucket repositories. More than half of these are valid and live. And our data suggests it is an ever-increasing problem that often goes unnoticed. You can catch a small glimpse of the scale of the problem for yourself.
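At its core, this kind of detection is pattern matching over code at scale. Here is a minimal sketch of that idea — the patterns and file handling are simplified illustrations, not shhgit's actual rule set:

```python
import re
import sys

# A couple of illustrative signatures; real scanners ship hundreds of these.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path):
    """Yield (line_number, label) for every line matching a known pattern."""
    with open(path, errors="ignore") as handle:
        for number, line in enumerate(handle, start=1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    yield number, label

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for number, label in scan_file(target):
            print(f"{target}:{number}: possible {label}")
```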

Code as an attack vector

But what happens immediately after leaking secrets? To find out we purposely leaked valid Amazon AWS credentials to a public GitHub repository. We chose to leak AWS keys because we know they are highly sought after by fraudsters with all sorts of different motives — espionage, spamming, financial gain or blackmail. But we didn't quite realise how quickly it would happen...

We wanted to limit our liabilities as much as possible, even though we used a new AWS account loaded with free credits. We definitely did not want to end up footing a huge bill for this experiment. AWS' Cost Management tool alerts you if you go over a set budget, but it will not stop you from going past it. To be on the safe side, we created a script to automatically destroy any new EC2 instances (servers) shortly after their creation. This gave us enough time to forensically capture each server and analyse what the hackers were doing.
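For illustration, that kill switch amounts to little more than a scheduled loop around the EC2 API. A minimal boto3 sketch, assuming the experiment account's credentials are configured locally — the region and grace period here are assumptions, not our exact script:

```python
import time
from datetime import datetime, timedelta, timezone

import boto3  # assumes credentials for the experiment account are configured locally

GRACE_PERIOD = timedelta(minutes=15)  # long enough to capture the box for forensics
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

while True:
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            age = datetime.now(timezone.utc) - instance["LaunchTime"]
            if age > GRACE_PERIOD:
                # Destroy anything that has outlived the forensic window.
                ec2.terminate_instances(InstanceIds=[instance["InstanceId"]])
    time.sleep(60)
```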

(15:12 UTC) First, we created a new IAM user and attached basic S3 read and EC2 policies. This means the account will only have permissions to access our file storage and spin up new cloud servers. We then published the AWS secret keys to a new GitHub repository.
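In code, that setup looks roughly like the following. The user name and the specific managed policies are assumptions — any combination of S3 read and EC2 permissions reproduces the experiment:

```python
import boto3

iam = boto3.client("iam")

# Create the honeypot user and give it S3 read plus EC2 permissions only.
iam.create_user(UserName="leaky-experiment")  # hypothetical user name
for arn in (
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
):
    iam.attach_user_policy(UserName="leaky-experiment", PolicyArn=arn)

# Generate the access key pair that gets 'accidentally' pushed to GitHub.
key = iam.create_access_key(UserName="leaky-experiment")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```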

(15:16 UTC) Just four minutes later we received an e-mail from the AWS team notifying us of the exposed credentials — neat!

Amazon automatically quarantines exposed keys by applying a special policy called AWSCompromisedKeyQuarantine. This effectively prevents bad actors from using the key. But it somewhat renders our experiment useless if the keys can't be used. We knew this would happen beforehand, so we created a simple script to automatically remove the quarantine policy if found (tip: never do this). Now we just wait for fraudsters to take the bait.
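Our un-quarantine script boiled down to a polling loop that detaches the policy as soon as AWS attaches it. A hedged sketch of that loop — the user name matches the hypothetical one in the setup sketch above, and again, never do this outside a throwaway account:

```python
import time

import boto3

iam = boto3.client("iam")
USER = "leaky-experiment"  # hypothetical name from the setup sketch above
QUARANTINE = "arn:aws:iam::aws:policy/AWSCompromisedKeyQuarantine"

while True:
    attached = iam.list_attached_user_policies(UserName=USER)["AttachedPolicies"]
    for policy in attached:
        if policy["PolicyArn"] == QUARANTINE:
            # Keep the leaked key usable so the experiment can continue.
            iam.detach_user_policy(UserName=USER, PolicyArn=QUARANTINE)
    time.sleep(30)
```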

(15:17 UTC) Shhgit happened to be monitoring all activity by the user who leaked the secrets, so a minute later we received an e-mail alert of the leak.

(15:18 UTC) A minute later we detected a flurry of activity on the key from an IP address based in the Netherlands. Threat intelligence feeds associate the IP address with spamming, exploitation of Windows hosts, and running a TOR exit node.

The first batch of commands helps the attacker understand the lay of the land. Shortly after, the bad actor spun up two c6g.16xlarge EC2 instances — among AWS' most powerful compute instances. Undetected, this would have cost us thousands of dollars a month. Any guesses on motives?

Cryptomining. We analysed the server afterwards: it was a base install of Ubuntu with XMRig, a miner for $XMR (Monero) — nothing overly exciting.
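For context, the whole attack boils down to a handful of API calls once you hold a valid key. A rough boto3 reconstruction of that sequence — every parameter here is an illustrative assumption, not the attacker's exact requests:

```python
import boto3

# A session built directly from the leaked key pair (placeholders here).
session = boto3.Session(
    aws_access_key_id="AKIA................",
    aws_secret_access_key="<leaked secret>",
    region_name="us-east-1",
)

# "Lay of the land": confirm the key works and see what it can reach.
print(session.client("sts").get_caller_identity()["Arn"])
ec2 = session.client("ec2")
print([r["RegionName"] for r in ec2.describe_regions()["Regions"]])

# Then spin up large compute for mining — two c6g.16xlarge, as we observed.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Ubuntu AMI
    InstanceType="c6g.16xlarge",
    MinCount=2,
    MaxCount=2,
)
```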

(15:54 UTC) Shortly after, another actor with an Israeli residential IP address used the secrets to access our S3 buckets (file storage) and download their entire contents (which we had filled with random data). We booby-trapped some of the files with tracking pixels but, unfortunately, none of them triggered.
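Pulling everything out of a readable bucket takes only a few lines. A minimal sketch of what that access looks like — the bucket name comes from the experiment, the rest is illustrative:

```python
import os

import boto3

s3 = boto3.client("s3")  # built from the leaked credentials
BUCKET = "shhgit"        # the bucket name from the experiment

# Walk every object in the bucket and mirror it locally.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):  # skip folder placeholder objects
            continue
        local_path = os.path.join("loot", key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
```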

One attacker who copied the files from our S3 buckets started a live chat conversation through the shhgit.com homepage (the bucket was named shhgit) — a snippet of the conversation below.
Ultimately, this ended in a demand for money to pay him for his services in finding the 'bug'. We politely declined his generous offer. We believe this could quickly have turned into an extortion attempt had the files been of any value.

In total, the leaked secrets were used to access the AWS account 13 times over a 24-hour period. Four of the attackers carried out actions similar to the above. The remaining nine seemingly only verified that the credentials were valid (i.e. checked whether they could log in) — we presume so the keys could either be manually reviewed later or sold on a darknet forum.

6 minutes to compromise

It took just 6 minutes for the malicious actor to find the leaked credentials on GitHub and compromise our AWS account. So why did we say 'in less than a minute' at the beginning of the post? Attacks like this are made possible by GitHub's "real time" feed. Every fork, code commit, comment, issue and pull request is fed into a public events stream. Bad actors watch this stream to identify new code and scan it for secrets. GitHub delays the feed by 5 minutes — presumably for security reasons — making the earliest possible time a bad actor could have captured the secrets 15:17 UTC, 5 minutes after we committed them and less than a minute before we saw the first malicious activity.
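This is not sophisticated tooling: the public events feed is a plain REST endpoint, and a scanner only needs to poll it and pull down the commits each push event references. A minimal, unauthenticated sketch of that loop (real scrapers authenticate, paginate, and respect rate limits):

```python
import time

import requests

seen = set()

while True:
    # GitHub's public events feed; push events reference freshly committed code.
    events = requests.get("https://api.github.com/events", timeout=10).json()
    for event in events:
        if event["type"] != "PushEvent" or event["id"] in seen:
            continue
        seen.add(event["id"])
        repo = event["repo"]["name"]
        for commit in event["payload"]["commits"]:
            # Fetch the commit detail; each changed file carries a 'patch' to scan.
            detail = requests.get(
                f"https://api.github.com/repos/{repo}/commits/{commit['sha']}",
                timeout=10,
            ).json()
            for changed in detail.get("files", []):
                print(repo, commit["sha"], changed["filename"])  # feed a scanner here
    time.sleep(10)
```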

Keeping your secrets, secret

Fraudsters are scanning public code repositories for your secrets. They are counting on you, your team, your suppliers, or your contractors to mess up — and they will take full advantage when it happens. As with all security controls, a defence-in-depth approach is always best, so consider implementing the following:

  1. Don't put your secrets in your code in the first place. Secrets management solutions can help you here — think environment variables crossed with your password manager in your CI/CD pipelines.
  2. Whilst Amazon caught and revoked our AWS keys shortly after they were committed, this isn't true for all secret types (e.g., database connection strings) or platforms (e.g., GitLab and Bitbucket). Strengthen your DevSecOps strategy with automated secrets detection to alert you if shhgit hits the fan. We happen to be really good at that.
  3. Gain visibility of your code and understand where it sits and how it is secured. This is especially true if you have large teams or contractors distributed across many systems. A good first step is to monitor public GitHub, GitLab and Bitbucket streams. We happen to be really good at that too.
  4. Implement the principle of least privilege across your secrets. Only grant them the permissions needed to carry out their specific function and ensure they are valid for as short a time as possible (a minimal example follows this list). Implement a zero-trust model and plan for the worst by taking the position that your secrets will be breached. This will greatly mitigate the impact if any secrets are leaked.
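As an example of point 4, a least-privilege policy that scopes a key to read-only access on a single bucket might look like this — the bucket, user, and policy names are placeholders:

```python
import json

import boto3

iam = boto3.client("iam")

# Read-only access to a single bucket and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-assets",
                "arn:aws:s3:::example-app-assets/*",
            ],
        }
    ],
}

iam.put_user_policy(
    UserName="example-service-user",
    PolicyName="read-only-single-bucket",
    PolicyDocument=json.dumps(policy),
)
```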

