
Dependency Management for Developers

Matt Konda

I recently got asked about best practices for dependency management by an old colleague.  In response, I wrote the following, which I realized might be useful to a broader audience.


So … things like GitHub’s new notifications for vulnerable dependencies have been automated for a long time – via platforms like CodeClimate, or internally via Jenkins at dev shops – using tools such as:

  • Retire.js (JavaScript)
  • Bundler Audit (Ruby)
  • Safety Check (Python)
  • Dependency Check (Java)

Generally, we write these checks into a build so that the build fails if there are vulnerabilities in libraries.  We have contributed to an open source project called Glue that makes it easy to run tools such as these and then push the results into JIRA or a CSV for easier integration with normal workflows.
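
To make the “fail the build” part concrete, here is a minimal sketch of such a gate in Python, assuming the scanners above are installed on the build agent.  The command names reflect the tools’ usual CLIs, but flags vary by version, so treat the invocations as illustrative:

```python
#!/usr/bin/env python3
"""Fail the build when a dependency scanner reports known-vulnerable libraries.

A minimal sketch: each of these scanners exits non-zero when it finds issues,
so we run whichever ones apply to the repo and propagate the worst result.
Command names and flags are illustrative; adjust for your installed versions.
"""
import shutil
import subprocess
import sys

SCANNERS = [
    (["retire"], "Retire.js (JavaScript)"),
    (["bundle-audit", "check", "--update"], "Bundler Audit (Ruby)"),
    (["safety", "check"], "Safety Check (Python)"),
]

failed = False
for cmd, name in SCANNERS:
    if shutil.which(cmd[0]) is None:
        continue  # scanner not installed; skip the stacks that don't apply
    print(f"Running {name} ...")
    if subprocess.run(cmd).returncode != 0:
        print(f"FAIL: {name} reported vulnerable dependencies")
        failed = True

sys.exit(1 if failed else 0)
```

Run as a build step, any non-zero exit fails the job – which is exactly the behavior we want.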

Note that there are also commercial tools that do this, ranging from Sonatype to Black Duck to Veracode’s Software Composition Analysis.  I generally recommend starting with an open source tool to prove you can build the process around it, and then improving the tooling from there.

At a higher level, I pretty much always recommend that companies adopt a tiered response system such as the following (sketched as code after the list):

  1. Critical – This needs to get fixed ASAP, interrupts current development, and means cutting a new branch off whatever is in Prod.  Typical target turnaround is < 24 hours.  A typical example of a vulnerability in this category is remote code execution in a library – especially if it is weaponized.
  2. Flow – This should get fixed in the next sprint or a near-term unit of work, within the flow of normal development.  These issues might need to get addressed within a week or two.  A typical example is XSS (at non-major sites, anyway – major sites treat XSS like an emergency).
  3. Hygiene – These really aren’t severe issues, but if we don’t step back and handle them, bad things can happen.  If we don’t update libraries when minor updates come out, we fall behind.  The problem with being far behind (e.g. a three-year-old jQuery) is that if an issue arises and the best fix is to update to current, the actual remediation could involve API changes that require substantial development work.  So philosophically, I think of hygiene as keeping ourselves in a position where we could realistically meet our Critical SLA (say 24 hours) for updating to address any given framework bug.
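
To make the tiers concrete, here is a minimal sketch of the policy as data.  The Critical and Flow windows match the targets above; the Hygiene window is a hypothetical placeholder you would negotiate with your teams:

```python
from datetime import datetime, timedelta

# Tier -> SLA mapping. Critical and Flow windows come from the policy above;
# the Hygiene window is a hypothetical placeholder for illustration.
TIER_SLA = {
    "critical": timedelta(hours=24),  # interrupt work, hotfix branch off Prod
    "flow": timedelta(days=14),       # next sprint / normal development flow
    "hygiene": timedelta(days=90),    # routine library upkeep
}

def remediation_due(tier: str, reported: datetime) -> datetime:
    """Deadline implied by the tier's SLA for an issue reported at `reported`."""
    return reported + TIER_SLA[tier]

# Example: a weaponized RCE in a library reported at 9am is due by 9am tomorrow.
print(remediation_due("critical", datetime(2018, 3, 15, 9, 0)))
```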

An important part of this is figuring out how a newly identified issue gets handled in the context of the tiering system.  In other words, when a new issue arises, we want to know who (a person? a team?) gets to determine which tier it should be handled as.  We definitely do not want to be defining the process and figuring this all out while an active major issue looms in the background.

Of course, with all of this and particularly the hygiene part, we need to be pragmatic and have a way to negotiate with dev teams.  We can weigh the cost of major updates to a system against the cost of keeping it running.  Planned retirement of applications can be the right answer for reducing overall risk.

Ultimately, we want to translate this risk into terms stakeholders can understand, so that they don’t balk at the hygiene work and they are prepared when we need to drop current work to accommodate critical updates.

Using the OWASP Top 10 Properly

Matt Konda

I have gone to great lengths to strictly separate my OWASP activities from my Jemurai activities in an effort to honor the open and non-commercial aspects of OWASP to which I have committed so much volunteer time and energy.

Today I want to cross the streams for a very specific reason, not to promote Jemurai but to stop and think about how some of OWASP’s tools are used in the industry and hopefully prevent damage from misuse.

I want to address perspectives reflected in the following statements, which I’ve heard from a few folks:

I want a tool that is OWASP Compliant

And:

I need a tool to test for the OWASP Top 10

And:

I want an OWASP Report

Or alternatively:

Our tool tests for the OWASP Top 10

OWASP Is Generally Awesome

First of all, let’s ground ourselves.  OWASP provides terrific open resources for application security, ranging from the Top 10 to ZAP to ASVS to OpenSAMM to Juice Shop to the Cheat Sheets and many more.  When used as intended, these resources are invaluable to all sorts of folks.

The Top 10 Is Not Intended To Be Automated

Let me be very clear:  there is no tool that can find the OWASP Top 10.  The following items among the Top 10 can rarely, if ever, be identified by tools.

  • #3:  Sensitive Data Exposure
    • Tools can only find some predefined categories of sensitive data exposure – in my experience, a small subset.  One reason is that sensitive data is contextual to a company and system.  Another is that exposure can mean anything from internal employees seeing data to data not being encrypted.
  • #5:  Broken Access Control
    • Tools generally can’t find this at all, because authorization is custom to a business domain.  A tool can’t know which users are supposed to have access to which things, so it has nothing to check against (see the sketch after this list).
  • #6:  Security Misconfiguration
    • Tools can find certain known misconfigurations that are always wrong (e.g. old SSL), but things like which subnets should talk to which other subnets aren’t going to be identified by Burp or ZAP.  We build custom tools that can find them, but they are just that: custom.
  • #10:  Insufficient Logging & Monitoring
    • What does this even mean?  Our team has been delivering custom “security signal” for a while, but this isn’t a binary thing that you either have or you don’t.  No company I have ever seen has comprehensive evidence.  There’s no tool you can plug in and immediately “get it”.
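
To see why tools can’t find broken access control, consider a minimal, hypothetical authorization check – the domain objects and the rule below are invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical domain objects, invented for illustration.
@dataclass
class User:
    customer_id: int
    department: str

@dataclass
class Invoice:
    customer_id: int

def can_view_invoice(user: User, invoice: Invoice) -> bool:
    # Is it correct that anyone in billing can view every customer's invoice?
    # Only the business knows - no scanner can flag this rule as right or wrong.
    return invoice.customer_id == user.customer_id or user.department == "billing"

print(can_view_invoice(User(customer_id=7, department="sales"), Invoice(customer_id=42)))    # False
print(can_view_invoice(User(customer_id=7, department="billing"), Invoice(customer_id=42)))  # True
```

Nothing in that code pattern-matches as a vulnerability; only someone who knows the business rules can say whether the billing exception is intended or a hole.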

Even among the things that can be identified, there is no one tool that can find all of them.

Stepping back, it’s actually a good thing that the Top 10 isn’t easily identified by a tool.  That reflects the thought and expert human opinion that went into it.  It wasn’t just a bunch of canned stuff that got put together.

What The Top 10 is Great For

The Top 10 are a great resource for framing a conversation among developers, serving as the basis for training content, and provoking thought in people actively thinking about the issues at hand.  More than ten items would be too much to absorb.  The Top 10 approximate the best ideas we currently have.  Of course there is always room for improvement, but the Top 10 serve these purposes well.

Please just understand them as a guide and not a strict compliance standard or something a tool can identify.

References

https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project