Ginkgo for BDD Infrastructure Security Testing

July 29, 2019

Recently I’ve been working on a series of unit tests in Ginkgo (a popular BDD testing framework for Golang) and thought it might make for an interesting point of reference.

The tests ensure that my builds detect security misconfigurations in our AWS / Kubernetes environment, and I write them just like regular test cases.

The upshot is that, starting from this base set of tests, we can drop into your organization and within just a few days supply unit tests that run in your build pipeline and provide assurance about the state of your AWS or Kubernetes environment.

Maybe the best part is that any developer on your team can run and update the tests, which capture and self-document the intended configuration.

Cloud Configuration Checks

If you’ve been tracking what we do at Jemurai, you know we built a tool called JASP to help our clients identify potential security configuration issues. We stopped actively selling and marketing it, but we still use it on projects and it has a ton of value. You can check it out if you want at https://app.jasp.cloud.

But what it struggles with are the things people often assume should be easy. Things like:

  • What accounts should be present in our AWS Organization?
  • What S3 buckets should be public?
  • What ports should be open?
  • What users should exist?

I think of these things as contextual to the application. Some things should always be true: S3 should always be encrypted, MFA should always be turned on. JASP checks for those things perfectly well.

Other things are very specific to a customer’s environment and desired setup, so we can use tests to capture what they intend and confirm that the resulting setup is correct.

Why Go? Why Ginkgo?

I’ve been working more and more in Go (see S3S2 and GAA) for a few reasons. I like the typing. I like the speed of development. I like being able to build cross-platform native executables. I like the concurrency model. I like the robustness of the cloud SDKs. I see a lot of the Kubernetes community looking at it. I guess it’s also because I never really liked JavaScript on the server (don’t tell my team) and I was an early Ruby user before Python made a surge with both security and cloud tools. In any case, I’m enjoying Go.

Ginkgo provides a standard framework for describing expected behavior, which is the root of BDD (Behavior Driven Development). I used to write tests in Cucumber or even RSpec with Ruby, and I liked that model because you are effectively declaring how you want something to work instead of worrying about checking the details of its implementation.

So I write a test that logically says something like:

  1. When I am in the AWS Cloud
  2. Related to S3
  3. In US-EAST-1
  4. I expect a bucket called internal to be non-public
  5. I expect another bucket called testing to be public

So How Does It Work?

In the framework, the logic described above would look like:

Describe("AWS-S3", func() {
    Context("US-EAST-1", func() {
        It("Should have a public 'testing' bucket", func() {
            bucket := GetBucket("testing") // Made up example
            Expect(bucket.Name).To(Equal("testing"))
            Expect(bucket.Visibility).To(Equal("public")) // oversimplification
        })
        It("Should have a private 'internal' bucket", func() {
            bucket := GetBucket("internal") // Made up example
            Expect(bucket.Name).To(Equal("internal"))
            Expect(bucket.Visibility).To(Equal("private")) // oversimplification
        })
    })
})

If there is no public ‘testing’ bucket, the test will fail. If there is an ‘internal’ bucket that is not private, the test will fail. The tests are self-documenting examples of what the configuration should be.

By building a small suite of these tests, we can verify anything that the AWS SDK lets us see… which is almost everything!

CI/CD

One cool thing about this is that once you have the tests built, any CI/CD system that can run and test Go programs can run the tests and fail the build.

Extending This Idea

Having built JASP and now getting deeper into these types of infrastructure tests, one thing I can say is that we don’t want to write every test ourselves or change the test code just because a new S3 bucket is out there.

So we have tests that represent the known-state expectations in JSON but still read like the example above, so that we can capture the configuration and also test and fail a build based on deviations from it. In theory, I can commit a file to GitHub that represents a configuration, and now my tests will actively check that the actual cloud looks like what I say it should look like.

Again, I’m assuming people will run a tool like JASP to get basic and broad security configuration checks.

But then I am recommending that we build on our tests to make the pieces that differ from environment to environment testable.

Areas where we’ve written tests:

  1. Identity (IAM, Orgs)
  2. Storage (S3, DB)
  3. Network (VPC)

Ping us if you’d like to talk more about this.



Matt Konda

Founder and CEO of Jemurai
