Software Developers and Security Tools

I read two things last week that made me think in different ways and planted the seed for this blog post. The first was Dan North's post "McKinsey Developer Productivity Review" about a McKinsey "insight", which attempted to tackle some pretty hard problems about measuring developers and understanding the development process. The second was "Software Supply Chain Vendor Landscape" by Clint Gibler and Francis Odum, which, true to its title, captures a useful overview of security tooling around our software supply chain.

What those two articles got me thinking about was how developers relate to security tools now.

Security Requirements

One thing I took away from Dan North's post was a healthy reminder of the nuance in the art and science of software engineering. Several times he reminded readers that much of a software developer's time should be spent thinking about how to solve the problem, not just continually writing code and running tools. In a successful team, that also means a lot of time mentoring, training and guiding others.

In security, people often oversimplify what they think developers should do: just run this tool, do something with the output, and you'll be more secure! The problem with this point of view is that, in my experience, all AppSec tools produce a huge amount of noise - that is, information I really shouldn't take action on. Not only that, it is often hard to get the tools running in a meaningful way to begin with.

An interesting question to ask is how security can engage with the rich problem-solving part of software development. I used to think about this when we were talking about Rugged Software, before DevOps even existed as a popular concept. But even that was more of a mindset than a practical guide. I think there is really no substitute for having folks think hard about security while defining software requirements. All kinds of important problems only manifest here and not in any of the tools I've seen - things like:

  • software architecture (library and infrastructure)
  • the correct implementation of authorization
  • fraud visibility and identification

None of these get done with tools or automation and all require developers to stop and think.

More importantly, I believe that for all practical purposes they always will. Some security problems are inherently conceptual and harder to detect with a tool. It might be interesting to see if you can train AI to identify common pitfalls, but I don't personally believe such solutions will exist on a near-term horizon relevant to developers.

So committing to training, defining security requirements, budgeting security time into our roadmaps and taking time to think about security as part of software development is an important and worthy cause. Maybe there should be a security jam session once a week.

What about automation?

Security Automation

My general impression is that Application Security practitioners think of security automation as an unequivocal Good Thing™: if you can automate something, then you should. Often, security organizations adopt this view and start with "let's do the security we can automate first". I believe this position is fraught for several reasons.

Of course, this position also plays to vendors and high-growth business models. If something requires human time (e.g. the time spent thinking about a problem to understand it better), then it isn't conducive to a SaaS-based, hockey-stick outcome. The result is that a lot of tools in the landscape focus on problems that are reducible (another concept Dan North touched on) to simpler problems - or at least problems that they claim are reducible. Those that aren't tend to be noisy.

Some good questions to ask (as a developer) when you think about adopting security tooling include:

  • How serious is the typical issue this tool will find?
  • What are my other options for identifying those issues?
  • How will the output of the tool change other things I'm doing (eg. large backlog of unimportant issues)?

In my experience, if you can't mentor your team to do these kinds of things themselves through their understanding of the code, you may be heading toward a hard time when you turn the tool on.

I also advise people to build the automation they can imagine first and then plug in commercially excellent tools, instead of the other way around. Too often, we build crappy automation that development teams won't adopt and pair it with false positives that make the whole thing fall down.

When and Where The Tools Work

Security tools are great for developers where they solve a reducible problem - meaning they make something definite (but maybe otherwise hard or time consuming) very fast. A reducible problem might be as simple as naive software composition analysis (SCA): are you referencing an old vulnerable version of Log4j? That is an easy question to answer quickly in most cases. While I was writing this, I added a license check for LGPL (the only license you probably need to actually worry about) to the open source Crush tool so that we can instantly see if a bad thing is present.
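To make the "reducible problem" idea concrete, here is a minimal sketch of what a naive SCA-style check might look like. The dependency coordinate format and the 2.17.1 fixed-version cutoff are illustrative assumptions, not what Crush actually does:

```python
# Naive SCA-style check: flag Log4j core dependencies older than a fix.
# Coordinates use a Maven-like "group:artifact:version" shape (an assumption
# for this sketch); real SCA tools work from lockfiles and vulnerability feeds.
VULN_ARTIFACT = "org.apache.logging.log4j:log4j-core"
FIXED_VERSION = (2, 17, 1)  # illustrative cutoff for the Log4Shell-era fixes

def parse_version(version: str) -> tuple:
    """Parse a simple dotted numeric version like '2.14.1' into a tuple."""
    return tuple(int(part) for part in version.split("."))

def vulnerable_log4j(dependencies: list) -> list:
    """Return the coordinates of any log4j-core dependency older than the fix."""
    findings = []
    for dep in dependencies:
        group_artifact, _, version = dep.rpartition(":")
        if group_artifact == VULN_ARTIFACT and parse_version(version) < FIXED_VERSION:
            findings.append(dep)
    return findings

deps = [
    "com.google.guava:guava:31.0.1",
    "org.apache.logging.log4j:log4j-core:2.14.1",
]
print(vulnerable_log4j(deps))  # flags only the log4j-core entry
```

The point is how *definite* the question is: a few lines of string comparison answer it, which is exactly what makes it a good fit for automation.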

Security tools also work great for developers when they are built right into the tools they are already using. If my command line tool automatically does (quick) work when I do things, that is a bonus for me. E.g. if I can check for secrets in a pre-commit hook, that is sweet.
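The core of such a pre-commit secrets check could look something like the sketch below. The two patterns are illustrative; dedicated scanners like gitleaks ship far more, and a real hook would run this against the staged diff:

```python
import re

# Illustrative secret patterns only - a real scanner has a much larger set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def find_secrets(text: str) -> list:
    """Return descriptions of lines that match a known secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append(f"line {lineno}: matches {pattern.pattern}")
    return hits

staged = "config = {\n  'key': 'AKIAABCDEFGHIJKLMNOP'\n}"
print(find_secrets(staged))
```

Wired into a git pre-commit hook, a non-empty result would exit non-zero and block the commit - quick, local, and invisible until it matters.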

The more I make this true at an infrastructure level, the better. For example, if I can preferentially pull dependencies from a registry that hosts secure versions, we improve the overall outcome with minimal changes required from the developer. If I can check Terraform with OPA before deploying, that seems like a good thing - as long as it doesn't create a lot of noise.
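As a toy illustration of that kind of pre-deploy policy check (OPA policies are actually written in Rego; this Python stand-in and the simplified plan shape are assumptions for the sketch):

```python
import json

# A toy policy in the spirit of an OPA check: reject plans that create
# publicly readable S3 buckets. The plan JSON shape here is simplified.
def public_buckets(plan_json: str) -> list:
    """Return the names of aws_s3_bucket resources with a public-read ACL."""
    plan = json.loads(plan_json)
    offenders = []
    for resource in plan.get("resources", []):
        if resource.get("type") == "aws_s3_bucket" and resource.get("acl") == "public-read":
            offenders.append(resource.get("name", "<unnamed>"))
    return offenders

plan = json.dumps({"resources": [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "public-read"},
    {"type": "aws_s3_bucket", "name": "data", "acl": "private"},
]})
print(public_buckets(plan))
```

The policy is narrow and deterministic on purpose: it fires only on a condition everyone agrees is bad, which is how an infrastructure-level gate avoids becoming noise.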

Of course, if you have the resources to run a great Application Security team at your company, then these tools can be amazingly useful. You can dig in and really understand the output. You can tune them to work well. You can use them where you want and ignore them where you don't.


Some of the opportunities I see to improve how we think about security and software development are so fundamental, and so against the grain of the industry, that they make me seem like a Luddite or anti-automation (which I'm not). They involve empowering developers and giving them time to think about and solve problems ... which just happen to be security problems.

Some specific opportunities include:

  • Carve out time (more than you think) to work on requirements and understand security.
  • Train developers on security with content that is at a level to engage and challenge them.
  • Encourage developers to use their unstructured, "not typing code into the editor" time to think about security topics.
  • Develop our own registry of libraries we believe are secure. Track hits to libraries we haven't reviewed or tested yet. Then review the ones we use the most.
  • Use a toolchain that allows us to use signed artifacts. Restrict who we accept signed artifacts from.
  • Build your own code review process and tools. We use our Jemurai/crush tool to help us do easy things fast so we can focus on the hard things during code review. While writing this post, I added checks for LGPL to the tool.
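The "track hits to unreviewed libraries" idea from the list above can be sketched in a few lines. The reviewed set and library names are hypothetical; the point is surfacing what to review next by usage:

```python
from collections import Counter

# Hypothetical internal allowlist of libraries we have already reviewed.
REVIEWED = {"requests", "flask"}

def unreviewed_hits(imports_seen: list) -> list:
    """Count uses of libraries not yet reviewed, most-used first."""
    counts = Counter(lib for lib in imports_seen if lib not in REVIEWED)
    return counts.most_common()

# Fed from build logs or registry access logs, this tells us which
# unreviewed library to look at first.
print(unreviewed_hits(["requests", "leftpad", "leftpad", "flask"]))
```

Ranking by hit count turns an open-ended review backlog into a prioritized queue: review the one everyone is actually pulling.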

Matt Konda

Founder and CEO of Jemurai
