Legislating Software Security

March 2, 2023

When I sat down to work today, I had a number of questions waiting in Twitter and Slack about a Wall Street Journal article covering a new national cybersecurity strategy's focus on insecure software. The headline was "Biden National Cyber Strategy Seeks to Hold Software Firms Liable for Insecurity."

As a software security professional, I have a lot of ideas about this, so I thought I would try to capture them quickly here. TL;DR: I am worried that focusing on liability rather than standardization may make this strategy untenable. I argue we should focus on standardization.

Incentives and Liability

I think it is true that software firms don't usually have proper incentives for cybersecurity.

This is partly because there aren't effective market forces that value security, and partly because software firms know they can't afford to expose themselves to that liability.

Some people think this can be solved with clearer penalties, others with smarter cyber insurance. A deep and unaddressed problem with these visions is that there is no common understanding of what the problems are or how to prevent them. What is the standard?

We routinely find security problems in software that is otherwise well built by reputable developers and organizations. Do we think the developers didn't want to build in security? Do we think the firms' project managers actively told developers to skip security work?

Ultimately, the fundamental problem with the proposed approach is that we don't have an industry-standard way to answer the question: how much security is enough?

And let's not forget that most software is built on open source software that people aren't getting paid to write in the first place!

Complexity

It is also true that software construction and delivery are extremely complex, so legislating them may be fraught with issues. Consider:

  • Code gets deployed in different places for different reasons with different security requirements
  • There is no such thing as absolutely secure code, only a spectrum from barely secure to very secure
  • Lots of code is built with third-party libraries (both commercial and open source), which could be used in many different ways and may be maintained by volunteers
  • There is no standard set of definitions for almost anything around how applications are built and deployed
  • There is no standard for quantifying risk
  • There is no standard for quantifying damages (liability)

Given these complexities, I'm worried that it will be impossible to write legislation that works without breaking software development for at least some set of companies, and that the impact could be felt disproportionately by smaller companies that don't have the resources to meet the new requirements. In other words, it may stall innovation.

It will be decades before we know whether the requirements are consistently working.

Based on these complexities, I think that building a framework around penalties and liabilities is unlikely to work well or have the impact expected.

What Would Help

Grants might help. OWASP, the main community working on application security, has an annual budget of about $2M, which is mostly used to run conferences that effectively spread information. Imagine if OWASP were not dependent on industry and could deploy teams of people to really research and iterate on the Top 10, the Cheat Sheets, and open training! I'm a big fan of OWASP (I'm a former Board Member), but it is shocking how under-resourced it is. A very small grant could help advance software security where it lives now. And I'm not saying it all has to go to OWASP; distributing grants to the Apache Software Foundation, SAFECode, and others could be a very effective way to make more resources more widely available.

Developing standards might help. Another problem is that there is no standard understanding of risk or liability. Redefining what software vendors could be liable for is an interesting idea, but we need a much more robust and tested framework to understand what that might look like in practice. Many contracts we see promise a "professional and workmanlike" standard of delivery. What does that mean from a security perspective? Also, most software license agreements already limit liability. How could we define realistic and commensurate liability? At a more detailed level, which software security issues would companies be liable for?

Properly funding existing mechanisms might help. There are some existing processes that could work better if they weren't so under-resourced. Consider FTC investigations into PII leaks and HIPAA fines. These could be effective controls if there were resources to really govern what has become a huge industry.

Wasn't it Marc Andreessen who said "software is eating the world"? Do we think that our software security measures have kept up with that? I definitely don't.

On the other hand, telling software companies that they aren't doing a good enough job with security without telling them what a good job looks like is problematic. And defining what a good job is introduces a lot of complexity. Consider that the software industry doesn't even have well-defined or widely respected certifications or qualifications. That's not because people haven't tried; it's because it is hard. By the time one certification gets implemented, a whole new set of best practices has evolved! I think we all want that evolution to happen, and for better and cooler software to keep being built. So how do we control the process without stifling innovation?


Matt Konda

Founder and CEO of Jemurai

