Commercial Software Using Open Source

Matt Konda
  Application Security, OpenSource

Here’s an interesting, slightly different spin on the otherwise tired debate over whether “Open Source” or “Closed Source” software is more secure!

The topic is inspired by a conversation with a client that is using a whole slew of old open source libraries.  They know they need to update those libraries, but it is very difficult because the libraries ship as part of a commercial product they are paying for.  So, they buy product XYZ and pay good money for it.  XYZ brings in all of these problematic dependencies.  The customer can’t update the libraries because doing so may break the commercial product, and because the product is large and not open source, they can’t see or edit the code to fix issues that arise.  Ultimately, it seems like keeping those dependencies current is the vendor’s responsibility.  But that doesn’t mean the vendor sees it that way.

The obvious avenue for recourse is to tie payment to keeping the libraries up to date.  That isn’t standard contract language … yet.

Another way to hedge in this situation is, of course, to avoid commercial platforms that don’t have a strong track record of updating their dependencies.  It would be interesting to see a scorecard or standardized assessment method to keep track of which vendors and which products do well keeping up with updates in their dependencies.  It seems like it might be relatively easy to ascertain …
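The scorecard idea above could start as something very simple.  Here is a hedged sketch: the library names, dates, and grade thresholds are all made up for illustration, and a real assessment would pull release dates from package registries rather than hardcode them.

```python
from datetime import date

# Hypothetical data: for each library a vendor bundles, the release date of
# the bundled version and the release date of the current latest version.
# Library names and dates are invented for illustration.
BUNDLED = {
    "libfoo": (date(2013, 5, 1), date(2017, 9, 5)),
    "libbar": (date(2017, 1, 10), date(2017, 8, 2)),
}

def lag_days(bundled_release: date, latest_release: date) -> int:
    """How far behind the latest release the bundled version is, in days."""
    return (latest_release - bundled_release).days

def vendor_grade(bundled: dict) -> str:
    """Grade a vendor by the average lag of their bundled dependencies."""
    avg = sum(lag_days(b, l) for b, l in bundled.values()) / len(bundled)
    if avg < 90:
        return "A"
    if avg < 180:
        return "B"
    if avg < 365:
        return "C"
    return "D"
```

With the made-up data above, the vendor averages well over a year behind the latest releases and grades out at “D” – exactly the kind of signal a buyer would want before committing to a platform.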

In this case, we have an organization that went all in on a commercial platform and even has their own code built on top of it.  Now they wonder if they should have built from a lower level base so that they wouldn’t be locked into a platform that they really can’t update.  Interesting design decision, right!?

Tend Your Digital Garden

Matt Konda
  Application Security, Engineering

Something that is really hard about application security is that it isn’t something you can just point a tool at and be finished at some point in time.  It is always going to take ongoing work.  I like to use the analogy of a garden.  Both the plants in the garden and the conditions around them change no matter what we do.  Maintaining a beautiful garden is a labor of love and an ongoing investment in time.  We could think of our applications in the same way.

Unfortunately, many applications look more like an overgrown garden.  The original intent of many applications tends to get bent, expanded or even lost as systems evolve.  In some cases, the original beauty and architecture are lost in the complexity and the difficulty of managing the result.

When we think about application security, we are always looking for ways to make it a habit – something that people naturally think about and take care of.  I’d even go so far as to say that tending our security garden needs to be a labor of love.

So what do we do?  There are many layers to these examples that we can learn from:

  • We get tools to help us: clippers, weed whackers, fertilizer, hoses, wheelbarrows, etc.  We learn how to use the tools.
  • We plan to work in the garden periodically.  If we don’t, we know it is going to take more work dedicated to clean up.
  • We plan the garden and take out the plants that aren’t working.
  • We balance our time around different areas.  One wildly overgrown plant can make the whole garden less pleasant.  We know some plants take more work than others.
  • We aren’t afraid to get dirty.  We know it is work.  We’re satisfied when we’re done.

Unfortunately, with software, outside of the development team, it is often difficult to tell whether the garden looks great and is well tended or if it is a bit of a mess…

That’s one of the key reasons Jemurai is built the way we are – around expert software developers who know good software.  Only very strong developers can look at systems and help make them beautiful, like a well-tended Japanese garden.

 

Top 5.5 AppSec Predictions Sure To Go Wrong

Matt Konda
  Application Security

In keeping with the all too popular industry practice of producing year-end Top 10 lists, at Jemurai we developed a Top 5.5 list of Application Security Trends for 2018.  It is obviously meant to be a little bit of fun, given the “Top 5.5” title, but we tried to capture what we think are the significant things to keep in mind.

#1.  Continued Framework Level Vulnerabilities

  • Expect to see additional massive breaches related to framework level vulnerabilities that were slow to be identified and patched (old and new).
  • Recommendations:
    • Actively stay up to date on libraries
    • Use a mechanism to detect in CI/CD that your libraries are aging
    • Commit to maintenance
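The CI/CD recommendation above can be prototyped in a few lines.  This is a sketch under assumptions: the manifest of pinned dependencies and their release dates is hypothetical (in a real pipeline it would come from a lockfile plus registry metadata), and the 365-day threshold is arbitrary.

```python
from datetime import date, timedelta

MAX_AGE_DAYS = 365  # arbitrary cutoff: flag anything pinned longer than a year ago

# Hypothetical manifest: dependency name -> release date of the pinned version.
PINNED = {
    "requests": date(2017, 8, 15),
    "django": date(2015, 4, 1),
}

def stale_dependencies(pinned, today, max_age_days=MAX_AGE_DAYS):
    """Names of dependencies whose pinned release is older than the cutoff."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, released in pinned.items() if released < cutoff)

# In CI, exit non-zero when stale_dependencies(...) is non-empty to break the build.
```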

#2.  Innovation Applying Artificial Intelligence and Machine Learning to Security

  • Expect to see more threat intelligence, smarter intrusion detection, better malware detection, improved identity – all through these technologies.
  • Recommendations:
    • If you are very mature and have money, look to these tools.
    • If you are not very mature or don’t have money, work on the basics first.
    • If you are a security company, figure out where these fit for your tools.

#3.  Changes to Static Analysis Market

  • Companies will adopt smaller, purpose-built static code analysis tools.
  • Companies will start developing their own tooling to perform checks in a DevOps fashion, especially for their growing cloud environments.
  • Commercial tools will continue to have high false positive rates, be too slow to include in developer workflows and will work well with only a few programming languages.
  • Recommendations:
    • Think twice before adopting a new static tool.
    • Look at the API and make sure it is usable (REST / JSON).
    • Leverage open tools to get the basics done and prove a process.
    • Teach your developers and ops (DevOps folks) ways to think about security.
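As one example of the small, purpose-built checks described above, the following sketch scans Python source lines for a few risky patterns.  The rules shown are illustrative, not a complete or authoritative ruleset – the point is that a focused, homegrown check can run fast enough to live in a developer workflow.

```python
import re

# Illustrative rules: pattern -> message. A real tool would load these from
# configuration and cover many more cases.
RULES = [
    (re.compile(r"\beval\("), "avoid eval() on untrusted input"),
    (re.compile(r"subprocess\.\w+\(.*shell=True"), "shell=True enables injection"),
    (re.compile(r"verify=False"), "TLS verification disabled"),
]

def check_source(text: str):
    """Return (line_number, message) findings for one source file."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```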

#4.  Security Engineering

  • Companies will start to see the value in security libraries for things like:
    • Audit information
    • Application security signal
    • Encryption
    • Honey Data
    • Customized cloud auditing and assurance
  • Recommendations:
    • Look for places where security impacts architecture and consider building reusable components to handle it properly.
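For instance, a reusable audit component might start as little more than a function that emits structured events.  The field names below are our own illustration, not a standard; a real component would also ship records to a log pipeline or SIEM and enforce a schema.

```python
import json
from datetime import datetime, timezone

def audit_event(actor, action, target, **extra):
    """Build one structured audit record as a JSON string."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # who did it
        "action": action,  # what they did
        "target": target,  # what they did it to
    }
    record.update(extra)   # optional context, e.g. source IP
    return json.dumps(record, sort_keys=True)
```

Once every application emits the same record shape, application security signal (failed logins, permission denials, honey-data touches) becomes something you can alert on uniformly.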

#5.  Software for Risk and Security Program Management

  • Just like companies use systems for procurement, recruiting, HR, finance and business flows, companies will start using software to help them manage their risk and security programs.
  • Recommendations:
    • Keep an eye out for these.  Try to identify your best practices and assess if the tools can help keep programs moving.

#5.5  Some Things That Should Not Be Forgotten Will Be Lost

  • Tools are never a panacea but we will increasingly focus on tools.
  • Awesome instructor-led, hands-on training is expensive and hard to find, but worth it.  Computer-based training is widely hated by developers, but it will grow much faster.
  • Authorization is hard and tools don’t find gaps.  No advances will be made.
  • It doesn’t matter what you find, it matters what you fix.  We’ll continue to see a focus on finding problems instead of fixing them.
  • People will reuse passwords.  This will undermine all sorts of other controls but we won’t see substantial change.

Turns Out Policy in Markdown in Github Works!

Matt Konda
  Security Policy

I’ve seen policies from lots of companies, big and small.  Generally, I’m a techie engineer, so I don’t love policy.  I’ve seen a fair number of companies that clearly don’t follow their own policy.  I’ve also seen companies that get certifications like SOC 2 and ISO that are meaningless because they systematically lie and their auditors (not us, we don’t do auditing) never check lots of the basic things we see.  Sometimes the security teams at those companies aren’t lying; they just don’t know the truth about their own company.  I get it, there are all kinds of reasons we can’t always have nice things.

In response to that, we spent a few years at Jemurai trying to write minimal policies that people could understand and follow.  I even published a blog post last summer about it and we tried selling a minimal policy bundle off of our website.  It seemed like a good idea at the time.  I think the philosophy was generally sound in a pure sense.

The problem is, people use policy as a defense against auditors, and without more explicit direction you can’t say you have controls around a variety of things.  You don’t even know that you need to know the answers to questions about data loss prevention or mobile devices on your network.  Inevitably, sooner or later, someone is going to run up against a SIG Lite, a more exhaustive partner checklist, or some other trigger that forces them to articulate a more complicated policy.

To update our position on this, while staying at arm’s length from auditing and full-on policy work, we developed policies in Markdown and published them to our private GitHub repo.  They look nice, and everybody can immediately see what the policies are and who changed them when.  We can also track approvals using pull requests.  For smaller tech companies this makes for a simple, more digestible way to get, use and publish policy, and it keeps policy in a relevant and accessible place.  We can share it with a client’s security point of contact by letting them fork our policy repo in GitHub.  They can subscribe to updates and merge our new best practices in as they evolve.  So far, this seems to be a good direction.
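A nice side effect of policy-in-Git is that policies can be linted in CI like code.  A hypothetical check (the required header fields are our own convention, not a standard) might look like:

```python
# Hypothetical CI check for a policy-in-Markdown repo: every policy document
# must declare an owner and a last-reviewed date so stale policies surface in
# pull requests. The required header fields are invented for this example.
REQUIRED_FIELDS = ("Owner:", "Last reviewed:")

def lint_policy(markdown: str):
    """Return the required header fields missing from one policy document."""
    return [field for field in REQUIRED_FIELDS if field not in markdown]
```

Run against every `.md` file in the repo, a check like this fails the pull request whenever someone adds a policy without an owner or lets a review date go missing.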

 

Your Vulnerability Spreadsheet Says More Than You Think

Matt Konda
  Application Security

More often than I’d care to say, I work on projects where a client has a vulnerability spreadsheet to rule them all.  They’re using the spreadsheet to track all of the open items that were found across all of their projects with different tools and pentests.

One interesting initial point is that these companies don’t seem to consider this data to be particularly sensitive.  The spreadsheets get mailed around to teams across the company, with different tabs for different apps or organizations.  Good thing we just told all of our IT and development staff where the known problems are …

Going a little deeper, I can often tell a lot about a company based on what I see in the spreadsheet.  Maybe a simple Apache web server patch hasn’t been applied in 9 months.  Maybe some teams respond and others don’t.  Maybe it’s hard to find owners or they keep shifting.

Experienced security folks can often map vulnerabilities in a report back to the tools that find them.  You know the X-Frame-Options item came from ZAP or Burp.  The listable directory too.  Content types, password fields that don’t have autocompletion turned off, JS from foreign sites, etc.  You know the drill.

Something that I also find very interesting is what is NOT in the report.  If there aren’t ANY authorization related findings or other items that I wouldn’t expect a tool to find, I can often be quite confident that either the application testing was very time-limited or the testing methodology did not include human-driven testing.  This should be a red flag.

Unless you are trying to pay for only an automated test, look for a vulnerability you know a tool can’t find but ANY tester should.  Maybe even consider adding something you can use as a canary to test the tester.  There are lots of examples but an easy one is a user from one access role or organization being able to access data or a function from another access role or organization – basically an authorization failure.  Tools can’t find these.  If you don’t have any, it might be because your testers are only using tools.
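The authorization canary described above can be modeled in a toy example.  Everything here – the data model, the `Forbidden` error – is illustrative; the point is that the cross-org check is business logic a scanner cannot exercise on its own, so a human tester has to try it.

```python
# Toy model: a user from one organization must never read another
# organization's document.
DOCS = {"doc-1": {"org": "acme", "body": "Q3 numbers"}}

class Forbidden(Exception):
    pass

def get_document(user_org: str, doc_id: str) -> str:
    doc = DOCS[doc_id]
    if doc["org"] != user_org:  # the business-logic check tools cannot find
        raise Forbidden(doc_id)
    return doc["body"]

def canary_passes() -> bool:
    """True if cross-org access is denied; a tester should try exactly this."""
    try:
        get_document("globex", "doc-1")  # wrong org: must be rejected
        return False
    except Forbidden:
        return True
```

If a pentest report never mentions a failure (or success) against a scenario like this, odds are nobody tried it.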

Announcing “Inside Out” Tech Talks

Matt Konda
  Engineering, Startup

As a small, growing and disruptive company we place a major focus on training our employees.

We’ve tried a lot of different things: Capture the Flag games, internal videos, weekly tech talks, etc.  It’s an ongoing challenge and a continually improving process.  In a recent team discussion, we realized there might be interesting value in making some of those tech talks public.  It’s a way for us to provide something valuable to our community while giving our team a platform to present and to cross-train on the technology and software security problems we’re facing.  For example, we’re seeing HashiCorp Vault and Marathon at Client X, or we’re using OWASP Glue with Jenkins at Client Y.

Somehow we came up with the idea of Inside Out Tech Talks, where we take one of our regular tech talks and make it open to the public.

The first will be 12/13 at 1:00 PM CST.

Join us on Zoom:  https://zoom.us/meeting/register/cd9408314686923e7510d14dfea9e911.

The topic is Security Automation.

 

Free Developer Security Training Wed, 11/15 @ 1pm CST

Keely Caldwell
  Application Security

Jemurai is hosting a free developer security training on “3 Open Source Tools for Secrets Management.”  Join us at 1 PM CST on 11/15 and learn from Jemurai CEO and application security expert Matt Konda.

In this training you will learn:

  • Security vulnerabilities that emerge from storing secrets in Git
  • 3 open source tools for managing secrets
  • Solutions in clear language, applicable to both engineering & security

This training is beneficial to the leadership and staff of both engineering and security teams.

Sign up here: https://www.jemurai.com/webinar/3topopensourcetoolsforsecretsmanagement

Thinking About Secrets

Matt Konda
  Application Security, Engineering

Introduction

We have two types of projects that often uncover secrets being shared in ways that aren’t well thought through.

  1. During code review, it is actually rare that we do not find some sort of secret.  Maybe a database password, or maybe an SSH key.  Sometimes it is AWS credentials.  We’ve even built our own code review assist tool to check for all the ones we commonly see.
  2. During security engineering or appsec automation projects we end up wiring static analysis tools to source code and JIRA and this often uncovers plaintext secrets in git.

So, generally, plaintext secrets are everywhere.

Best Practice

I said above that people are sharing secrets in ways that aren’t well thought through.  Let me expand on that.  If all of your developers have access to secrets, that’s a problem.  Of course, most developers aren’t going to do anything nefarious, but they might, especially after they leave.  Most companies have the further challenge that it is very difficult to change system passwords.  So a developer leaves the company, and chances are low that any secrets get changed.  And suppose a laptop with source code on it gets stolen?

The other problem with having secrets around is that it makes it easy for an attacker to pivot and find other things they can target.  Suppose I get code execution on Server 1.  If all of the secrets Server 1 uses are stored in files on the server that the code uses, it makes that pivot to get on Server 2 via SSH or pull data from DB Server 3 trivial.

Testing

Here are two instant ways to look for secrets in your code:

docker run owasp/glue -t sfl https://github.com/Jemurai/triage.git

This runs a check for sensitive files that are often in the source code.

docker run owasp/glue -t trufflehog https://github.com/Jemurai/triage.git

This looks for entropy in the files in a project.  It can take a while to run but is a good way to find things like keys or generated passwords.

Nothing beats a person who is looking carefully and knows what to look for, but grep is your friend for sure.  We try to find these secrets and offer alternatives for storing them.
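The entropy idea behind the trufflehog check can be sketched in a few lines of Python.  The token regex and the 3.5-bits-per-character threshold below are illustrative choices, not trufflehog’s actual values; the insight is simply that random keys look statistically different from English words.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits per character: long random tokens score noticeably higher than words."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# Candidate secret-like strings; the character class and minimum length are
# illustrative choices.
TOKEN = re.compile(r"[A-Za-z0-9+/=_\-]{16,}")

def suspicious_strings(text: str, threshold: float = 3.5):
    """Return the high-entropy tokens in a blob of text."""
    return [t for t in TOKEN.findall(text) if shannon_entropy(t) > threshold]
```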

The Alternatives

Generally, we want to keep secrets in a place where:

  • People can’t just go read them
  • It is easy to change them
  • We know if anyone uses them (audit)

We see a lot of projects adopting Vault and tools like it to store secrets.

Even better is a scenario where credentials don’t exist until they are generated for a particular action, and then they automatically expire.  Ideally, we require MFA as well.  99designs’ aws-vault does this with AWS and its sessions in an elegant way.  This pattern, in general, lets us know that there aren’t standing passwords out there that people could use without us necessarily realizing it.  It also reduces the challenge of responding to a stolen laptop, for example.
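The ephemeral-credential pattern can be illustrated with an in-memory toy.  Real tools like Vault and aws-vault implement this against actual backends with MFA in the loop; the API below is entirely made up to show the shape of the idea.

```python
import secrets
import time

# In-memory toy: a token is minted for a particular action, carries an
# expiry, and is useless afterward.
ISSUED = {}  # token -> expiry timestamp

def issue_token(ttl_seconds: int, now: float = None) -> str:
    """Mint a short-lived random token (the `now` parameter eases testing)."""
    now = time.time() if now is None else now
    token = secrets.token_hex(16)
    ISSUED[token] = now + ttl_seconds
    return token

def is_valid(token: str, now: float = None) -> bool:
    """A token is valid only if it was issued and has not yet expired."""
    now = time.time() if now is None else now
    return ISSUED.get(token, 0) > now
```

Because every credential dies on its own, a stolen laptop or a departed employee stops being a standing secret-rotation emergency.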

References

An older reference from a ThoughtWorker:  https://danielsomerfield.github.io/turtles/

A tool:  https://www.vaultproject.io/

Another tool:  https://github.com/99designs/aws-vault

An upcoming Jemurai Tech Talk:  https://www.jemurai.com/webinar/3topopensourcetoolsforsecretsmanagement

Free Developer Security Training: Improve Your Application Security Wed, 10/11 @ 1 PM CST

Keely Caldwell
  Application Security

 

We have a free developer security training: “3 Practices Your Dev Team Can Adopt Today to Improve Application Security.”  It takes place this Wednesday, October 11, at 1 PM CST.

You will learn:

  • Why it’s important for devs and dev management to add security into the SDLC: architecture, user stories, code review, unit & integration tests, and QA
  • Actionable activities the Dev Team can implement today
  • Solutions described in clear language, applicable to both engineering and security

Sign up here: https://www.jemurai.com/webinar/3itemsdevscanaddtodaytoimproveapplicationsecurity

Jemurai takes an agile, iterative approach to implementing security into our clients’ code & SDLC. This free training will provide tips for doing so in your environment.

Popular Media Coverage of Software and Formal Methods

Matt Konda
  Application Security

It is interesting … in the wake of Equifax and other recent news, The Atlantic has published a couple of articles about software.

I say it is interesting because I am completely torn about both of them.  On the one hand, they are correct: the Equifax breach should not really be a surprise, and the fact that there are coding errors in any system of significant size is something most software developers or security professionals would accept without argument.

On the other hand, complacency or acceptance is the last thing that I would advocate for developers, consumers or companies after the Equifax breach.  I’ve already written about that here.

Furthermore, while formal methods present an interesting direction for software verification, in practice they are limited to very specific use cases.  I’ve never seen them employed professionally for any widely used application.  That doesn’t mean they aren’t or couldn’t be, but if I haven’t seen it, it’s probably not yet real or accessible for common developers.

An interesting side effect of these articles being in The Atlantic is that people who wouldn’t usually ask about these things are asking.  I’ve heard about each of these articles from numerous people at clients and partners.  I suppose that is a benefit of having the discussion – provided people have the attention span to continue the discussion.

The “Saving the World From Code” article also included a general quote which I think probably should have been attributed to Marc Andreessen in The Wall Street Journal in 2011:

It’s been said that software is “eating the world.”

The fact that it is not attributed makes me wonder just a bit about the context the article is written from.  One thing I can’t argue with is the substance of that quote, which, again, dates from 2011.  I would perhaps add to it that software is flawed everywhere.  I just don’t buy that formal systems or rigorous modeling are a realistic near-term solution for that.  Many of our clients are adopting new languages or technology – sometimes with more security issues – even as we work to secure their systems.  The idea of a 4GL, which has been around for almost my whole professional career – assembling a program in an increasingly sophisticated IDE with visual blocks, like the hacking scene in the movie Swordfish – seems unachievable in practice.  If anything, I prefer simpler text editors than ever before.

Ultimately, there is a lot we can do to secure our systems.  Things like threat modeling to identify and then isolate scope, actively working on architecture, building common reliable blocks, teaching developers, building cultures that value security, using tools and smarts to think through scenarios, and teaching practices that make security a first-class part of the SDLC … all of these are real things people in the real world are doing to make software safer.  I doubt there is a silver bullet that somehow avoids people understanding the problem – we have to accept that as a cost, or accept the insecurity of the software we use.  I guess that’s why people hire us to help them secure their software.