Blocking Builds Is Not Being Mean
I want to tell you about a conversation I had with a senior engineer about three months into a role where we'd just rolled out automated security gates in the CI pipeline. She pulled me aside after a standup and said, "Look, I get that security matters. But every time I try to ship a fix, your tool is blocking me for something that has nothing to do with my change. It feels like you're more interested in metrics than in helping us ship."
That conversation stung because she wasn't wrong — at least, not entirely. The gate was firing. It was blocking her. But the rate of false positives was too high, the error messages were useless, and we hadn't done the work to explain to the team why any of this was happening. We had built an enforcement mechanism without building a relationship.
This post is about that gap — and how to close it.
Why Build Gates Exist
Let's start at the beginning. I find that a lot of the cultural friction around security gates exists because developers don't actually understand why the gates are there. Not because they don't care, but because nobody explained it in terms that connected to their reality.
A build gate is a commitment. When you set up a gate that blocks merges on critical CVEs, you're making a commitment to your customers: "We will not knowingly ship code with a known-critical vulnerability when a fix is available." That's it. It's not about the security team's preferences. It's not about compliance checkboxes. It's about not doing something that would be embarrassing and harmful if the press were watching.
The alternative — shipping code with known critical vulnerabilities and planning to fix it later — is a choice. Sometimes it's the right choice (during an incident, when the fix is risky and the vulnerability isn't reachable). But it should be a conscious, documented choice made by people who understand the risk, not a default that happens because nobody looked.
Build gates make the choice explicit. They say: "If you want to merge this, you need to actively decide that the security concern is acceptable." That's not gatekeeping — that's accountability.
The Most Common Pushback (And How to Handle It)
"This vulnerability isn't in my code"
This comes up constantly with SCA findings. A developer ships a feature and the pipeline blocks because a dependency three levels deep has a CVE. "I didn't write that code. That's not my vulnerability."
Technically accurate. Irrelevant to the attacker.
The conversation I have goes something like: "You're right that you didn't write that code. But when that library's vulnerability gets exploited in our production environment, the data that's at risk is our customers' data. The exposure exists because it's in our running application, not because it was written by us."
That said — and this is important — not every transitive dependency vulnerability is worth blocking a merge over. A vulnerability in a development dependency that never runs in production is a different risk level than one in a production runtime. Gate calibration matters enormously here. If your gates are blocking on every informational-severity finding in every test dependency, the developer is right that the gate is miscalibrated and you should fix it.
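What "calibration" means in practice can be sketched in a few lines. This is an illustrative policy, not any particular scanner's schema — the `Finding` fields and the severity threshold are hypothetical:

```python
from dataclasses import dataclass

# Illustrative gate-calibration sketch: block only findings that are both
# high-severity and reachable in production. Field names and thresholds
# are hypothetical, not a real SCA tool's output format.

@dataclass
class Finding:
    cve_id: str
    severity: str   # "CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"
    scope: str      # "runtime" or "dev" (test/build-only dependency)

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def should_block(finding: Finding) -> bool:
    """Dev-only dependencies never block; runtime deps block on CRITICAL/HIGH."""
    if finding.scope != "runtime":
        return False
    return finding.severity in BLOCKING_SEVERITIES

findings = [
    Finding("CVE-2024-1234", "CRITICAL", "runtime"),  # blocks the merge
    Finding("CVE-2024-5678", "CRITICAL", "dev"),      # warn only
    Finding("CVE-2024-9012", "LOW", "runtime"),       # warn only
]
blocked = [f.cve_id for f in findings if should_block(f)]
print(blocked)  # ['CVE-2024-1234']
```

The point of the sketch: the blocking decision is two lines of logic, and both lines encode a judgment call your team should be able to see and argue with.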
"The security team added these gates and never told us"
This one hurts because it's almost always a process failure on the security side. Rolling out build gates without communicating what they are, why they exist, and what developers should do when they fire is a recipe for hostility.
Before you flip a gate from "warn" to "block," do the following:
- Give a heads-up at an engineering all-hands or in the eng newsletter — "Starting March 15, we'll be blocking merges on critical CVE findings. Here's what that means and here's what to do."
- Run in warn-only mode for at least two sprints so developers can see what would have blocked.
- Create documentation. A wiki page, a runbook, whatever your team uses — but there needs to be a place where a developer can go at 11pm when they're on-call and their pipeline just failed.
- Make the error messages actionable. "Security check failed" is useless. "SCA: CVE-2024-1234 (CRITICAL) found in requests==2.28.0. Fix: upgrade to requests>=2.31.0. Docs: [link]" is useful.
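An actionable message is cheap to generate if the gate builds it from structured finding data. A minimal sketch — the function name, parameters, and docs URL are all illustrative:

```python
def format_gate_failure(cve_id: str, severity: str, package: str,
                        installed: str, fixed_in: str, docs_url: str) -> str:
    """Render an actionable gate failure: what fired, why, and what to do next.
    All names and the docs URL are hypothetical examples."""
    return (
        f"SCA: {cve_id} ({severity}) found in {package}=={installed}. "
        f"Fix: upgrade to {package}>={fixed_in}. Docs: {docs_url}"
    )

msg = format_gate_failure("CVE-2024-1234", "CRITICAL", "requests",
                          "2.28.0", "2.31.0", "https://wiki.example.com/sca-gate")
print(msg)
```

The on-call developer at 11pm should never have to translate a tool's raw output into next steps — the gate should do that translation once, for everyone.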
"You're blocking our deploy and there's an incident happening"
This scenario is real and requires a real answer. When there's a production incident and the fix is sitting in a pipeline that's blocked on a security finding, the security gate needs to yield.
Every security gate should have an emergency bypass mechanism. Not a permanent bypass, not a way to disable the gate forever — but a time-boxed, audited, human-approved override for genuine emergencies.
In practice, this looks like:
```yaml
# GitHub Actions: allow bypass with approval
- name: Security Gate - SCA
  if: github.event.inputs.security_bypass != 'true'
  uses: aquasecurity/trivy-action@master
  with:
    exit-code: 1
    severity: CRITICAL
```
Or a pipeline variable that requires an approver to set, with the approval logged and creating an automatic ticket for post-incident review.
The existence of an emergency bypass is not a weakness in your security program. It's a feature. A system without a manual override will eventually be permanently disabled by someone who needed an override and couldn't get one. Better to have a controlled, audited bypass than to have your entire security program burned down because it stood between an engineer and fixing a production outage.
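The "time-boxed, audited" part can be enforced in code. Here's a sketch of a bypass check — the token format, the four-hour TTL, and the audit-log shape are all assumptions for illustration; in a real pipeline the token would be set by an approver as a protected CI variable:

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch of a time-boxed, audited bypass. Token fields, TTL, and the
# audit-record shape are hypothetical, not any CI system's real API.

BYPASS_TTL = timedelta(hours=4)

def bypass_is_valid(token_json: str, now: datetime) -> bool:
    """Honor a bypass only if it names an approver and is inside its time box."""
    token = json.loads(token_json)
    approved_at = datetime.fromisoformat(token["approved_at"])
    return bool(token.get("approver")) and now - approved_at < BYPASS_TTL

def audit_record(token_json: str) -> dict:
    """Every bypass use gets logged for post-incident review."""
    token = json.loads(token_json)
    return {"event": "security_gate_bypass",
            "approver": token["approver"],
            "reason": token.get("reason", "unspecified")}

now = datetime(2024, 3, 15, 12, 0, tzinfo=timezone.utc)
fresh = json.dumps({"approver": "oncall-lead",
                    "approved_at": "2024-03-15T11:30:00+00:00",
                    "reason": "INC-4321 production outage"})
stale = json.dumps({"approver": "oncall-lead",
                    "approved_at": "2024-03-14T11:30:00+00:00"})
print(bypass_is_valid(fresh, now), bypass_is_valid(stale, now))  # True False
```

The expiry is the important part: a bypass that never expires is just a disabled gate with extra steps.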
"Security is slowing us down"
This one requires honest self-examination before you respond. Is it true?
A security gate that adds 45 minutes to every PR build is a problem. A security gate that has a 40% false positive rate is a problem. A security gate with no documentation and no support process is a problem. If developers are saying security is slowing them down and any of those things are true — they're right and you should fix it.
But if the gates are fast, well-tuned, and the complaint is really "we wish we didn't have to think about security" — then the answer is: I hear you, and I understand the frustration, but the cost of not thinking about security is paid by our customers, not by us. Our job is to make it as low-friction as possible, not to make it disappear.
Scenarios From Real Life
Scenario 1: The Log4Shell Moment
December 2021. Log4Shell drops. Every org in the world is scrambling to figure out if they're using Log4j and where. Companies with SCA in their pipelines and a maintained software bill of materials (SBOM) could answer that question in hours. Companies without it were doing manual inventory for weeks.
I was in a platform security role at the time, and we spent three days doing manual grep work across dozens of repositories to understand our exposure. It was painful, it was error-prone, and I swore I'd never be in that position again. The SCA gates that I pushed hard to implement in the following months were a direct result of that experience.
When a developer asks me why we scan dependencies, I tell them this story. Not as a lecture — as a "here's why I personally care about this and here's what it felt like to not have it."
Scenario 2: The Staging Environment Secret
A junior developer was working on a feature that needed access to a staging database. They tested locally, committed their .env file accidentally (happens to everyone), and pushed to a feature branch. Gitleaks caught it in the pipeline within 30 seconds and blocked the merge.
Without the gate, that .env file would have been in version control history, potentially visible to anyone with read access to the repo. It would have needed a git history rewrite, credential rotation, and an incident report.
With the gate, the developer got an immediate, clear failure message, spent five minutes removing the file and rotating the credential, and moved on with their day. Zero incident. No drama.
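The detection itself is conceptually simple: pattern-match the content being committed. This is a toy illustration in the spirit of Gitleaks, not its actual ruleset, which covers hundreds of credential formats:

```python
import re

# Toy secret-scanning sketch. These two patterns are simplified examples
# only; a real tool like Gitleaks ships a much larger, maintained ruleset.

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "env_assignment": re.compile(r"(?m)^(?:DB_PASSWORD|API_KEY|SECRET_KEY)\s*=\s*\S+"),
}

def scan_diff(diff_text: str) -> list:
    """Return the names of any secret patterns found in committed content."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(diff_text)]

leaked_env = "DB_PASSWORD=hunter2\nDEBUG=true\n"
clean_diff = "def handler(event):\n    return event\n"
print(scan_diff(leaked_env))  # ['env_assignment']
print(scan_diff(clean_diff))  # []
```

Because the scan runs on every push, the feedback loop is seconds, not the weeks it can take for a leaked credential to surface some other way.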
I share this story with teams because it illustrates that build gates are not adversarial. That particular gate helped that developer avoid a problem they'd have felt terrible about.
Scenario 3: The Container That Ran as Root
A data pipeline container had been running as root for two years. Nobody had thought about it. A container scanning gate we added flagged it as a misconfiguration. The team pushed back: "We've been running this for two years, nothing has happened, why does it matter now?"
The answer: because running as root means that if the container process is compromised through a vulnerability in the data pipeline code or its dependencies, the attacker has root in the container and a much easier path to container escape. The two years with no incident is survivorship bias, not evidence of safety.
They fixed it. It took about four hours of work to drop privileges and test. Six weeks later, a CVE was published in one of their pipeline dependencies that could lead to arbitrary code execution under certain conditions. With the root fix in place, the blast radius of that CVE was substantially reduced.
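For reference, the core of a fix like that is usually a few Dockerfile lines. This is a hedged sketch — the base image, user IDs, and paths are illustrative, not the team's actual setup:

```dockerfile
# Illustrative fix: run the pipeline as an unprivileged user.
# Base image, UID/GID, and paths are hypothetical examples.
FROM python:3.11-slim

# Create a dedicated non-root user and group.
RUN groupadd --gid 10001 app && \
    useradd --uid 10001 --gid app --no-create-home app

WORKDIR /app
COPY --chown=app:app . .

# Everything after this line runs without root privileges.
USER app
CMD ["python", "pipeline.py"]
```

Most of the four hours goes into the testing, not the Dockerfile change itself: finding the file permissions and port bindings that silently depended on root.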
How to Be a Partner, Not a Gatekeeper
The framing matters more than almost anything. You are not the security team blocking developers from shipping. You are the security team helping developers ship safely. The difference is not semantic — it changes everything about how you show up.
Respond to pipeline failures fast. When a gate blocks a developer and they file a ticket or message you, respond quickly. Every hour you leave them blocked erodes the relationship. Even if the answer is "this finding is real and needs to be fixed," getting that answer quickly is valuable.
Own the false positives. When your tools generate false positives, acknowledge them, suppress them, and track them. Don't make developers feel like they need to fight with you to get a clearly-wrong finding resolved.
Invest in developer education. A developer who understands why a vulnerability class exists is more likely to fix it correctly and less likely to resent the gate. Lunch-and-learns, internal tech talks, Slack answers with context — invest the time.
Celebrate the wins. When a gate catches a real issue, talk about it (appropriately — no public shaming of the developer who introduced it). Make security wins visible so people understand that the gates aren't just theater.
Ask for feedback. Run quarterly surveys or retrospectives with engineering teams about the security tooling. Developers will tell you what's painful, what's useful, and what they wish existed. That feedback is gold.
The Trust Transaction
Here's the core of it: build gates are a trust transaction. Developers are trusting that the gates are well-calibrated, that the error messages are useful, that there's a process for dealing with problems, and that the security team is a fair partner. When that trust is present, build gates work well. When it's absent, they become an obstacle course.
Your job as a security practitioner is to earn and maintain that trust. Not by lowering standards — but by being competent, communicative, and honest about tradeoffs. By fixing your tools when they're wrong. By being available when developers need help. By making it clear that you're all on the same side.
Blocking a build is not being mean. Blocking a build badly — with no context, no support, no empathy for the developer on the other end — that's a problem. The difference is entirely in how you do it.
Takeaway: Build gates are security commitments, not security theater. They work when they're well-calibrated, clearly communicated, and backed by a security team that shows up as a partner. Get the culture right and the technical enforcement follows naturally. Get the culture wrong and you'll spend more time fighting developers than fighting attackers.