It’s July 2025. We’re in an era of “vibe coding”, bringing incredible AI ideas to life faster than ever before. We’re building agents that can plan, reason, and interact with the world. And yet, somehow, we’re still tripping over the first and most basic hurdle: pushing API keys directly into public GitHub repositories.
With AI gaining so much popularity, I assumed most builders had already internalized the basics. The number one rule on that list? Don’t hardcode credentials in your codebase.
Right now, leaked API keys for AI providers can mostly be used only for regular API calls. Attackers can use them to interact with the LLM providers on your account, exhausting your rate limits or driving up your costs on a pay-as-you-go plan.
But in the near future, the risk gets much bigger. More and more of these services will be connected to external systems: we are heading toward MCP servers, in-service integrations with external tools, and multi-agent systems. That means the API key isn’t just a key to a model anymore; it’s the key to the door of a personal email inbox or a sensitive database.
Since vibe coding makes it so easy to bring ideas to life, I decided to build a dashboard that shows just how bad the problem is.
I call it AI Leak Watch. (You can see it here: https://ai-keys-leaks.begimher.com)

It’s a simple dashboard that does one thing: it scans GitHub for common AI key patterns and shows you the raw, unfiltered numbers. The very first scan found an overwhelming 189,600 potentially exposed keys for major AI providers.
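Under the hood, a scan like this boils down to pattern matching. Here is a rough Python sketch of the idea; the patterns below are simplified illustrations based on well-known provider key prefixes, not the dashboard’s actual rules, and a real scanner would also verify candidates before counting them:

```python
import re

# Illustrative patterns only. Real scanners use far more precise,
# provider-specific rules and validate each candidate key.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "google": re.compile(r"AIza[A-Za-z0-9_-]{35}"),
}

def find_candidate_keys(text: str) -> dict:
    """Return candidate key strings per provider found in a blob of text."""
    hits = {}
    for provider, pattern in KEY_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[provider] = matches
    return hits

sample = 'client = OpenAI(api_key="sk-abc123abc123abc123abc123")'
print(find_candidate_keys(sample))
```

Note that these loose patterns overlap (an Anthropic-style key also matches the generic `sk-` rule), which is exactly why real scanners layer verification on top of the raw regex hits.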
Disclaimer: This dashboard was created for educational purposes. I bet some of the keys it finds are not usable, but I also bet some of them are.
So, how do we fix this? It’s not complicated, but it requires discipline.
Protect Your Secrets at the Source
Never, ever hardcode them. If you’re hacking on something scrappy in a temporary local dev session, that’s one thing, but that code should never, ever see a git push. If you think you might forget, don’t do it in the first place. Use a real secrets management service like HashiCorp Vault or AWS Secrets Manager.
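In code, that means reading credentials from the environment (or a secrets manager SDK) at runtime instead of baking them into source. A minimal Python sketch, assuming the key is exported under a conventional name like `OPENAI_API_KEY`:

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch a credential from the environment; fail loudly if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it in your shell, or keep it "
            "in a .env file that is listed in .gitignore."
        )
    return key
```

Failing loudly at startup beats a hardcoded fallback: there is never a “temporary” literal key sitting in the file waiting to be pushed.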
Build a Safety Net
Assume mistakes will happen and plan for them. Use tools to catch secrets before they get exposed. You can scan your local code before you commit with solutions like my own ASH (Automated Security Helper). You should also rely on built-in platform features like GitHub’s native Secret Scanning and monitor your public environment continuously with open-source giants like TruffleHog.
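A safety net can start as simply as a pre-commit hook that checks staged changes for obvious key shapes before dedicated tools like ASH or TruffleHog run in CI. A rough Python sketch; the single catch-all pattern and the hook wiring are my own simplifications, not how any of those tools work internally:

```python
import re
import subprocess
import sys

# Crude catch-all for "sk-..."-style secrets; dedicated scanners use
# far more precise, provider-specific rules plus verification.
SECRET_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def staged_diff() -> str:
    """Return the diff of what is about to be committed."""
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def check_staged_changes(diff: str) -> int:
    """Scan added lines for secret-like strings; non-zero blocks the commit."""
    added = [line for line in diff.splitlines() if line.startswith("+")]
    leaks = [line for line in added if SECRET_PATTERN.search(line)]
    for line in leaks:
        print(f"Possible secret in staged change: {line}", file=sys.stderr)
    return 1 if leaks else 0
```

Dropped into `.git/hooks/pre-commit` (calling `sys.exit(check_staged_changes(staged_diff()))`), a non-zero exit refuses the commit, so the mistake dies on your machine instead of on GitHub.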
Educate Others
Make this a non-negotiable part of your team’s culture. Teach other developers how to handle secrets properly.
The world we’re building with AI is incredibly powerful. We’re teaching AI to build the future, but we can’t even teach ourselves not to leak our keys.
Let’s work to bring down those numbers. Let’s give the AI we’re building a better example to learn from.