If you’ve been building artificial intelligence (AI) tools lately, you’ve probably noticed something: Your development workflow has become incredibly connected. The Model Context Protocol (MCP) sits at the center of it all, acting as the ‘brain’ that orchestrates how your large language model (LLM)-powered projects talk to databases, cloud services, application programming interfaces (APIs), design tools and messaging platforms.
This hyperconnectivity makes rapid integration possible. We can now spin up a project that uses PostgreSQL, Google AI APIs, Figma and Telegram notifications in minutes.
But here’s what we’ve learned from analyzing 1,785 valid secrets leaked to public GitHub from files associated with AI development, such as mcp.json: This convenience comes with a profound security cost that most of us aren’t prepared for.
MCP configuration files—such as mcp.json—are designed to store all the credentials needed to connect these systems. But these files are also among the riskiest in the entire stack. Developers often hardcode secrets into MCP files for quick setup, following patterns suggested by documentation, AI assistants or pure convenience. Once these files are committed—deliberately or by accident—their secrets aren’t just exposed for that project. They can be copied, shared and persisted in Git history across forks and clones, multiplying the attack surface with every integration.
These files, designed to store the ‘context’ needed for AI tools to function, have become repositories for every credential required in hyperconnected development workflows.
A typical MCP-powered application might simultaneously integrate PostgreSQL databases, Google AI APIs, Perplexity for search, Figma for design assets and Telegram for notifications—each requiring its own set of credentials stored in the same configuration file.
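To make that concrete, here is a hedged sketch of what such a configuration often looks like in the wild. The server entries and package names are illustrative (apart from the PostgreSQL server they are hypothetical, and your client’s exact schema may differ), and every credential shown is a fake placeholder, but the pattern is the one we kept finding: a single file quietly accumulating a plaintext secret for every integration.

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://app_user:NOT_A_REAL_PASSWORD@db.example.com:5432/prod"]
    },
    "google-ai": {
      "command": "npx",
      "args": ["-y", "example-gemini-mcp-server"],
      "env": { "GOOGLE_AI_API_KEY": "AIza-FAKE-EXAMPLE-KEY" }
    },
    "perplexity": {
      "command": "npx",
      "args": ["-y", "example-perplexity-mcp-server"],
      "env": { "PERPLEXITY_API_KEY": "pplx-FAKE-EXAMPLE-KEY" }
    },
    "figma": {
      "command": "npx",
      "args": ["-y", "example-figma-mcp-server"],
      "env": { "FIGMA_ACCESS_TOKEN": "figd_FAKE-EXAMPLE-TOKEN" }
    },
    "telegram": {
      "command": "npx",
      "args": ["-y", "example-telegram-mcp-server"],
      "env": { "TELEGRAM_BOT_TOKEN": "0000000000:FAKE-EXAMPLE-TOKEN" }
    }
  }
}
```

Commit that file once and all five credentials ride along into Git history, forks and clones.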
Security experts have been warning about this exact problem. The OWASP AI Testing Guide specifically calls out the risk of secrets leaking through AI configuration files, and now we’re seeing it happen at scale.
1,785 Valid Secrets Leaked in AI Development Files
We scanned public GitHub for valid secrets in files related to AI code editors: Claude Code, Cursor and Windsurf. What we found in configuration files such as mcp.json or claude_desktop_config.json reveals the scale of the problem (and this is your reminder never to commit working credentials in these files to a Git repository).
Across public GitHub, 1,785 valid secrets were discovered in 2025 alone (as of June 20).
These aren’t false positives or test credentials—these are real, working secrets connected to existing databases and services. The kind that could drain your OpenAI credits overnight or give strangers access to your production database.
The data shows 40 different types of leaked secrets, with database credentials dominating the landscape. PostgreSQL URIs represent 33% of all leaks, while AI API key sprawl accounts for a combined 39.1% of exposures: Google AI (22.6%), Perplexity (16.3%) and various other AI services.
Perhaps most concerning is the cross-platform bleeding effect: the same types of secrets appear consistently across Claude configurations, Cursor IDE files and MCP setups, indicating that this isn’t a tool-specific problem but a systemic issue with how hyperconnected AI development handles credentials.

- Database credentials (such as postgres_uri) represent 35.5% of all leaks—highest risk category
- AI API keys (Google, Perplexity) account for 39.1% combined—significant financial exposure
- 62% of leaks occur in Cursor IDE configuration files—indicates widespread adoption
- 40 different types of secrets exposed—broad attack surface

A month-over-month explosion, from just 21 leaks in January to 601 in June 2025, correlates directly with increased adoption of AI development tools and the expanding ecosystem of services they connect to. What’s especially surprising is that these leaks are happening in technologies and workflows that are relatively new, where we would expect developers to be especially vigilant and security-conscious. Instead, the data demonstrates that even with modern tools, developers are not making a meaningful effort to follow security best practices.
Why AI Development Creates Perfect Conditions for Secret Leaks
The modern AI development workflow creates a perfect storm of convenience over security. Here’s what we’ve observed:
- AI tools suggest an easy path. When developers set up integrations, AI assistants and documentation often recommend putting credentials directly in configuration files ‘for quick setup’. Developers follow these patterns because they work immediately, and who has time to set up proper secret management when you just want to test whether Claude can talk to your database?
- Multiple environments, same bad habits. We see the same risky patterns across Cursor, Claude, VS Code and other environments. This isn’t a tool-specific problem; it’s how we’ve collectively decided to work with AI.
- The ‘local first’ illusion. Developers store secrets in files ending in .local.json or tuck them away in personal directories, thinking ‘this won’t be shared’. But modern development workflows (Git repositories, Docker containers and team collaboration) make ‘local’ a dangerous myth.
- Backup proliferation. The presence of files such as claude/mcp.json.backup and .complete_working_backup shows we’re creating multiple copies of sensitive configurations. Each backup multiplies the exposure points (a sample .gitignore follows this list).
- Documentation as configuration. We found live credentials in reference files, rule documents and even markdown tutorials. Developers are documenting their working setups without sanitizing sensitive data, effectively creating instruction manuals for attackers.
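One cheap guardrail against the ‘local file’ and backup patterns above is to make sure your ignore rules cover them before the first commit. Here is a hedged .gitignore sketch; the paths are examples (adjust them to the clients you actually use), and ignoring these files reduces accidental commits but does not excuse leaving plaintext secrets in them.

```gitignore
# MCP and AI-client config files that tend to hold credentials
.mcp.json
.cursor/mcp.json
claude_desktop_config.json

# 'local' variants and ad-hoc backups that multiply exposure points
*.local.json
*.backup
*.complete_working_backup
```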

The concentration of leaks in specific development environments tells its own story. Cursor IDE alone accounts for 62.3% of the 1,785 secrets we’ve found, indicating widespread adoption and inadequate security defaults.
What These Leaks Actually Cost You
Let’s put this in practical terms. When we see these leaked credentials, here’s what attackers can actually do:
- Your database is completely exposed (35.5% of leaks): PostgreSQL and MongoDB URIs don’t just leak data; they hand over complete database access. An attacker could exfiltrate everything, hold it for ransom or simply wipe it.
- Your AI budget becomes someone else’s playground (39.1% of leaks): Think about those Google AI, OpenAI and Perplexity API keys. Leaked keys can reveal competitive intelligence through API usage patterns, expose proprietary prompts and model interactions, and even cause denial-of-service incidents through quota exhaustion.
- Your design and collaboration tools give up your roadmap (6.7% of leaks): Figma, Notion and Jira tokens enable intellectual property theft and product roadmap exposure. While smaller in percentage, these leaks can give competitors detailed insight into product strategy and development timelines.
- Your infrastructure gets hijacked: Cloud provider keys enable resource abuse, cryptomining operations and massive infrastructure costs, while messaging platform credentials can be weaponized for spam, phishing and brand damage.
Here’s How We Fix This
Developers often store credentials in easily accessible but poorly secured locations, turning our development environments into treasure troves for attackers. This isn’t happening by accident; it’s the predictable result of how modern AI development tools encourage rapid experimentation.
Start with automated guardrails. Implement security scanning, commit hooks and least privilege patterns at every step.
Here’s the thing about guardrails: they don’t slow down the flow; they make it sustainable. They prevent the technical debt and cost overruns that arise from catching issues too late, especially when teams are burning through premium AI service credits.
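What does ‘least privilege, no plaintext secrets’ look like in the configuration file itself? A minimal sketch, assuming your MCP client either supports environment-variable substitution (the ${env:...} syntax below follows VS Code’s convention; other clients differ) or lets the server read its credentials from the launching process’s environment. The variable names are illustrative; the point is that the committed file holds references, not secrets.

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "${env:POSTGRES_URI}"]
    },
    "google-ai": {
      "command": "npx",
      "args": ["-y", "example-gemini-mcp-server"],
      "env": { "GOOGLE_AI_API_KEY": "${env:GOOGLE_AI_API_KEY}" }
    }
  }
}
```

The actual values then live in your shell profile, an untracked .env file or a secret manager, and rotating a key no longer means editing a file that Git is watching.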

Use a pre-commit scanner. The fastest way to catch leaks before they leave the laptop is with tools such as ggshield. It scans your code and config files for secrets every time you commit, preventing accidental pushes of API keys, database URIs and tokens. This one step can stop most leaks at the source.
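A minimal setup sketch, assuming you run GitGuardian’s ggshield from the command line (it needs a GitGuardian account to verify findings, and exact flags may vary by version):

```sh
# Install the scanner and authenticate it against your GitGuardian workspace
pip install ggshield
ggshield auth login

# One-off scan of the existing repository history, to catch anything already committed
ggshield secret scan repo .

# Install the local git pre-commit hook so every future commit is scanned before it lands
ggshield install --mode local
```

If you already use the pre-commit framework, GitGuardian also publishes a hook you can reference from .pre-commit-config.yaml; either way, the commit is blocked on your machine before a key, URI or token ever reaches GitHub.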
The Bottom Line
The AI development community is moving fast, maybe too fast. We’re building incredible things, but we’re also creating a secret sprawl problem that’s growing exponentially. The 1,785 leaked secrets we found represent just what’s visible on public GitHub. The real number is likely much higher.
The good news? This is fixable. It doesn’t require abandoning the tools that make us productive. It just requires treating security as a first-class citizen in our hyperconnected workflows.
Our future selves (and our bills) will thank us.



