Most SAST tools target security compliance auditors. Their goal is to raise an issue for anything even remotely suspicious. There's no fear of false positives for those tools, because the auditors will figure it out; after all, it's the auditors' job to sort the wheat from the chaff and the signal from the noise.
But for years now the rallying cry at SonarSource has been "Kill the noise!" As a developer-first company, we know there's little tolerance among developers for crying wolf. So our guiding principle has been to prefer "reasonable" false negatives to raising false positives. What does that mean in practical terms? Well, let's play with some numbers. Say you have a codebase with 12 Vulnerabilities. That's 12 things that absolutely need fixing. A typical SAST analysis might raise 500 issues in total, and then the auditors will spend x weeks sorting through that to bring you the audit result maybe a month or so after you've moved on to other code. Then they expect you, the developer, to find and fix the true Vulnerabilities.
This scenario's even worse, both for you and for the security of the codebase. Because let's be honest, it won't take many false positives for you to throw up your hands and declare the whole thing a waste of time. Now, nothing gets fixed.
At SonarSource, we're keenly aware of that. That's why we accept reasonable false negatives. Instead of raising 12 real Vulnerabilities that are ultimately lost and ignored in a sea of false positives, we'd rather raise only 10 real Vulnerabilities that actually get fixed and miss the other two.
Don't misunderstand. We're not missing those other two (theoretical!) issues because we're sloppy or lazy. Sometimes, in implementing a rule, you have to strike a balance between catching every single issue … and netting a few False Positives along the way, or tuning the rule's sensitivity down to eliminate False Positives … and missing a few real issues at the same time. SonarSource developer Loic Joly recently gave a talk on striking that delicate balance. As he explained, when we're faced with this choice, we're going to choose false negatives every time.
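To make that trade-off concrete, here's a deliberately tiny sketch (this is not SonarSource's engine, and the rule, the `aggressive` knob, and the sample lines are all invented for illustration): a toy "rule" that flags SQL queries built by string concatenation, with a sensitivity setting. Cranked up, it catches the real injection but also cries wolf on harmless code; tuned down, it stays quiet on the safe line at the risk of missing cleverer data flows.

```python
import re

# Two sample lines of "analyzed" code (hypothetical examples):
SAFE = 'query = "SELECT * FROM users WHERE id = " + str(VALID_IDS[0])'
UNSAFE = 'query = "SELECT * FROM users WHERE id = " + request.args["id"]'

def flag_sql_concat(line: str, aggressive: bool) -> bool:
    """Return True if this toy rule raises an issue on the line."""
    # Does the line concatenate something onto a SELECT string literal?
    if not re.search(r'"SELECT .*"\s*\+', line):
        return False
    if aggressive:
        # High sensitivity: flag every concatenated query.
        # Catches UNSAFE, but also raises a False Positive on SAFE.
        return True
    # Low sensitivity: only flag when the value plainly comes from
    # user input. No noise on SAFE, but a real issue reached through
    # a less obvious data flow would slip by (a False Negative).
    return "request." in line

# Aggressive tuning: both lines flagged -> one real issue, one FP.
print(flag_sql_concat(UNSAFE, aggressive=True),
      flag_sql_concat(SAFE, aggressive=True))    # True True
# Conservative tuning: only the real issue is flagged.
print(flag_sql_concat(UNSAFE, aggressive=False),
      flag_sql_concat(SAFE, aggressive=False))   # True False
```

Neither setting is "free": the aggressive rule buys recall at the cost of noise, and the conservative one buys trust at the cost of coverage. The point of the example is simply that the knob exists in every rule, and someone has to pick a side.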
It's an issue of credibility. As I said earlier, we know that developers don't have patience with False Positives. So we make sure that when we raise an issue, there's something to fix. That doesn't mean we never raise False Positives. We're human too, and if developers were perfect, you wouldn't need us to begin with. But our mission is giving you an accurate SAST analysis, and killing the noise. And it makes all the difference.

Previously published at