
How it Works: TOTP Based MFA

Aaron Bedra

Introduction

Multi-Factor Authentication has become a requirement for any application that values security. In fact, it has become a regulatory requirement in some industries and is being adopted as a requirement in several others. We often encounter misconceptions, or outright misunderstandings, about how MFA works and think this is a topic worth diving into. This particular article will focus on one of the most common second factors, the Time-based One-Time Password, or TOTP.

TOTP authentication uses a combination of a secret and the current time to derive a predictable multi-digit value. The secret is shared between the issuer and the user so that generated values can be compared to determine whether the user does in fact possess the required secret. You may have heard this incorrectly referred to as “Google Authenticator”. While Google had a major part in popularizing this method, it has nothing to do with how TOTP actually works. Any site may create and issue tokens, and any mobile application with a correct implementation of TOTP generation may produce a one-time value. In this article we will implement server-side TOTP token issuing and discuss its security requirements.

To read more about TOTP token generation, please take a look at RFC 6238.

The example code in this article is written in Java. This task can be accomplished in any programming language that supports the underlying cryptographic functions.

Establishing a Seed

The foundation for the security of a TOTP token begins with the seed. This value is used in conjunction with the current time to derive the instance of the token. Because the current time can be derived by anyone, it is not suitable as the only input to our token. Choosing a seed is incredibly important and should not be left up to the user. Seeds should be randomly generated using a Cryptographically Secure Pseudo-Random Number Generator. You can choose the number of bytes you want to use; in this example we are using 64. We will use the SecureRandom implementation provided by the Java language.

static String generateSeed() {
    SecureRandom random = new SecureRandom();
    byte[] randomBytes = new byte[SEED_LENGTH_IN_BYTES];
    random.nextBytes(randomBytes);
    return printHexBinary(randomBytes);
}

In order to consume the token we will return the hex representation of the bytes generated. This allows us to pass the value around more easily.
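
A note on the helpers: printHexBinary is most likely the statically imported javax.xml.bind.DatatypeConverter.printHexBinary, and the later snippets also call a hexToBytes helper that is not shown. A minimal sketch of what that helper might look like, assuming DatatypeConverter is available (it ships with Java 8 and earlier; later JDKs need another hex codec):

// Assumed helper, not part of the snippets shown in this article.
import static javax.xml.bind.DatatypeConverter.parseHexBinary;

static byte[] hexToBytes(String hex) {
    // Inverse of printHexBinary: turn the stored hex seed back into raw bytes
    return parseHexBinary(hex);
}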

Establishing a Counter

The other side of TOTP token generation relies on the current time. We take the current time represented as a long, which is the number of seconds since the epoch. This can be derived using System.currentTimeMillis() / 1000L. Next, we take that value and divide it by our period, the number of seconds the token will be valid before rotating. We will use a value of 30 in our example, which is the recommended setting. Finally, we need to put the value into a byte array. There are a few ways to do this, but the following method is on the conservative side, accounting for non-64-bit longs and possible endianness differences.

private static byte[] counterToBytes(long time) {
    long counter = time / PERIOD;
    byte[] buffer = new byte[Long.SIZE / Byte.SIZE];
    for (int i = 7; i >= 0; i--) {
        buffer[i] = (byte)(counter & 0xff);
        counter = counter >> 8;
    }
    return buffer;
}

Once we have this value, we can execute our HMAC operation to produce the long form of our OTP value.

Generating a Value

Using the seed as a key and the counter as a message, we will derive our long-form OTP value. The value returned from our HMAC operation will be truncated in order to produce the 6-digit value we will compare against in the end. The following code is an HmacSHA1 operation using the standard Java encryption libraries. While RFC 6238 describes the possible options of HmacSHA256 and HmacSHA512, they are not viable when distributing the secret for use on most mobile authenticator applications.

private static byte[] hash(final byte[] key, final byte[] message) {
    try {
        Mac hmac = Mac.getInstance("HmacSHA1");
        SecretKeySpec keySpec = new SecretKeySpec(key, "RAW");
        hmac.init(keySpec);
        return hmac.doFinal(message);
    } catch (NoSuchAlgorithmException | InvalidKeyException e) {
        log.error(e.getMessage(), e);
        return null;
    }
}

With the ability to hash our seed and message we can now derive our TOTP value:

static String generateInstance(final String seed, final byte[] counter) {
    byte[] key = hexToBytes(seed);
    byte[] result = hash(key, counter);

    if (result == null) {
        throw new RuntimeException("Could not produce OTP value");
    }

    int offset = result[result.length - 1] & 0xf;
    int binary = ((result[offset]     & 0x7f) << 24) |
                 ((result[offset + 1] & 0xff) << 16) |
                 ((result[offset + 2] & 0xff) << 8)  |
                 ((result[offset + 3] & 0xff));

    StringBuilder code = new StringBuilder(Integer.toString(binary % POWER));
    while (code.length() < DIGITS) code.insert(0, "0"); 
    return code.toString(); 
}

Using the seed as the key and the counter as the message, we perform the necessary conversions, perform the hash, and truncate the result according to the specification. Finally, we take the result modulo 10 to the power of the expected number of digits (in our case 6) and convert it to a string. If the result is fewer than 6 digits we pad it with leading zeros.
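
The snippets above also reference a few constants (SEED_LENGTH_IN_BYTES, PERIOD, DIGITS, POWER) that are not shown. Their values follow from the text; a minimal sketch of how they might be declared, along with a small convenience method (not part of the original example) that produces the value for the current time:

// Constants assumed by the snippets above; values follow from the article text.
static final int SEED_LENGTH_IN_BYTES = 64;  // seed size used in generateSeed
static final int PERIOD = 30;                // seconds each token remains valid
static final int DIGITS = 6;                 // length of the displayed code
static final int POWER = 1_000_000;          // 10^DIGITS, used to truncate the value

// Convenience wrapper: derive the code for the current time from a hex-encoded seed.
// Assumes it lives in the same class as generateInstance and counterToBytes.
static String generateCurrentInstance(String seed) {
    long time = System.currentTimeMillis() / 1000L;
    return generateInstance(seed, counterToBytes(time));
}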

Providing the Secret to the User

In order to provide the secret to the user, we need to provide a consistent string in a format that allows the user to generate tokens reliably. There are several ways to do this, including simply providing the secret and issuer to the user directly. The most common method is providing a QR code that contains the information. For this example we will use the Zebra Crossing (ZXing) library.

class QrCode {
    private static final int WIDTH = 350;
    private static final int HEIGHT = 350;

    static void generate(String applicationName, String issuer, String path, String secret) {
        try {
            String qrdata = String.format("otpauth://totp/%s?secret=%s&issuer=%s", applicationName, secret, issuer);
            generateQRCodeImage(qrdata, path);
        } catch (WriterException | IOException e) {
            System.out.println("Could not generate QR Code: " + e.getMessage());
        }
    }

    private static void generateQRCodeImage(String text, String filePath) throws WriterException, IOException {
        QRCodeWriter qrCodeWriter = new QRCodeWriter();
        BitMatrix bitMatrix = qrCodeWriter.encode(text, BarcodeFormat.QR_CODE, WIDTH, HEIGHT);
        Path path = FileSystems.getDefault().getPath(filePath);
        MatrixToImageWriter.writeToPath(bitMatrix, "PNG", path);
    }
}

Executing this code will save a PNG with the corresponding QR code. In a real-world situation, you would render this image directly to the user for import by their application of choice.
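
Tying the pieces together, issuing a token for a new user might look something like the following sketch. The Totp class name, application name, issuer, and output path are illustrative only, not part of the example project:

// Hypothetical issuing flow: generate a seed, then hand it to the user as a QR code.
String seed = Totp.generateSeed();                               // Totp is an assumed class name
QrCode.generate("my-application", "Example Issuer", "./totp.png", seed);
// The seed is then encrypted and persisted server side (see "Protecting the Seed" below).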

Consuming the Token

In order to test our implementation we will need a program that can accept an otpauth:// string or a QR code. This can be done in a number of ways. If you want to do this via a mobile device, you can use Google Authenticator or Authy; both of these programs will scan a QR code. If you want to try locally, 1Password provides a way to import a QR image by adding a label to a login and selecting One-Time Password as the type. You can import the created image using the QR code icon and selecting the path to the generated image. Once the secret is imported it will start producing values. You can use these as your entry values into the example program.

Protecting the Seed

It is important to respect the secret for what it is: a secret. With any secret we must do our part to protect it from misuse. How do we do this? Like any other piece of persisted sensitive information, we encrypt it. Because these secrets are not large, we have a number of options at our disposal. The important part is not to manage this step on your own. Take advantage of a system that can encrypt and decrypt for you, and worry only about storing the encrypted secret value. There are cloud-based tools like Amazon KMS, Google Cloud Key Management, and Azure Key Vault, as well as services you can run yourself like Thycotic Secret Server, CyberArk Conjur, and HashiCorp Vault. All of these options require some kind of setup, and some are commercial products.

Setting up Secret Storage

To keep this example both relevant and free of cost to run, we will use HashiCorp Vault. Vault is an open source project with an optional enterprise offering. It’s a wonderful project with capabilities far beyond this example. There are a number of ways to install Vault, but since it is a single binary, the easiest way is to download the binary and run it. Start Vault with the development flag:

λ vault server -dev

During the boot sequence you will be presented with an unseal key and a root token. The example program will expect the VAULT_TOKEN environment variable to be set to the root token provided. Your output will be similar to the following:

λ vault server -dev
==> Vault server configuration:

Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: (not set)
Mlock: supported: false, enabled: false
Storage: inmem
Version: Vault v0.11.2
Version Sha: 2b1a4304374712953ff606c6a925bbe90a4e85dd

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

$ set VAULT_ADDR=http://127.0.0.1:8200

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: mkniY94IlJngQz07gfPZlQnZnvEHMXWQ3/MiFegsfr8=
Root Token: 4uYnD1vVZZcNkbYe03t0cLkh

Development mode should NOT be used in production installations!

Once Vault is booted you will need to enable the Transit Backend. This allows us to create an encryption key inside of Vault and seamlessly encrypt and decrypt information.

λ export VAULT_ADDR=http://127.0.0.1:8200
λ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/

λ vault write -f transit/keys/how_it_works_totp
Success! Data written to: transit/keys/how_it_works_totp

λ echo "my secret" | base64
Im15IHNlY3JldCIgDQo=

λ vault write transit/encrypt/myapp plaintext=Im15IHNlY3JldCIgDQo=
Key Value
--- -----
ciphertext vault:v1:/HeILzBTv+JbxdaYeKLVB9RVH9o/b+Lilrja88VhCuaSSlvUY+IzHp2U

λ vault write -field=plaintext transit/decrypt/myapp ciphertext=vault:v1:/HeILzBTv+JbxdaYeKLVB9RVH9o/b+Lilrja88VhCuaSSlvUY+IzHp2U | base64 -d
"my secret"

We can now encrypt and decrypt our TOTP secrets. The only thing left is to persist those secrets so that they can be referenced on login. For this example we will not be creating a complete user system, but we will set up a database and create an entry with an encrypted seed value to show what the end-to-end process will resemble.

To encrypt and decrypt our seed we can use a Vault library. The following example demonstrates the essential pieces:

String encryptSeed(String seed) throws VaultException {
    final Map<String, Object> entry = new HashMap<>();
    // Vault's transit engine expects base64-encoded plaintext, as in the CLI example above
    entry.put("plaintext", seed);
    final LogicalResponse response = client.logical().write("transit/encrypt/myapp", entry);
    return response.getData().get("ciphertext");
}

String decryptSeed(String ciphertext) throws VaultException {
    final Map<String, Object> entry = new HashMap<>();
    entry.put("ciphertext", ciphertext);
    final LogicalResponse response = client.logical().write("transit/decrypt/myapp", entry);
    // The decrypted plaintext comes back base64 encoded and must be decoded before use
    return response.getData().get("plaintext");
}

Note the lack of error handling. In a production system you would want to handle the negative and null cases appropriately.

Finally, we take the output of the Vault encryption operation and store it in our database. The sample code contains database handling logic, but it is typical boilerplate database code and not directly relevant to explaining the design of a TOTP system.

Drift

By now it should be pretty obvious that time synchronization is of the utmost importance. If the server and client clocks differ by more than the period, the token comparison will fail. The RFC describes methods for determining drift and tolerance for devices that have drifted for too many periods. This example does not address drift or resynchronization, but it is recommended that a production implementation address this issue.
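
As a rough illustration only, and not part of the example project, a validator that tolerates one period of drift in either direction might look like this, reusing the methods defined earlier:

// Accept the token for the current period and one period on either side.
// Requires java.security.MessageDigest and java.nio.charset.StandardCharsets.
// A wider window improves usability but also widens an attacker's guessing window.
static boolean validateWithDrift(String seed, String candidate) {
    long now = System.currentTimeMillis() / 1000L;
    for (int i = -1; i <= 1; i++) {
        String expected = generateInstance(seed, counterToBytes(now + (i * PERIOD)));
        if (MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                                  candidate.getBytes(StandardCharsets.UTF_8))) {
            return true;
        }
    }
    return false;
}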

Running the Example

The source code for this example is available on GitHub. Make sure you have read and executed all of the steps above before attempting to run the example. Additionally, you will need to have PostgreSQL installed and running. If this is your first time running the example, be sure to import the generated token using your preferred application before attempting to type in a value. You should have your MFA token generator application open and the test token selected. You can set up and execute the program by running:

createdb totp
mvn flyway:migrate
mvn compile
# For Unix users
export VAULT_TOKEN=XXX
# For Windows users
set VAULT_TOKEN=XXX
mvn -q exec:java -Dexec.mainClass=com.jemurai.howitworks.totp.Main

You will be prompted to enter your token value. After pressing return, the program will echo the value you entered, the expected token value, and whether the values match. This is the core logic necessary to confirm a TOTP-based MFA authentication sequence. If your token values do not match, make sure to enter your token value with plenty of time to spare on the countdown. Because we have not implemented a solution that accounts for drift, the value must be entered during the same period in which the server generates the expected value. If this is your first time running the example you will need to import the QR code that was generated before the input prompt. If everything was done correctly you will see output similar to the following:

MFA Token:
808973
Entered: 808973 : Generated: 808973 : Match: true

At this point you have successfully implemented server-side TOTP-based MFA and used a client-side token generator to validate the implementation.

Security Pitfalls of TOTP

For a long time TOTP, or really just OTP-based MFA, was the best option. It was popularized by RSA long before smartphones were capable of generating tokens. This method is fundamentally secure but is open to human error. Well-crafted phishing attacks can obtain and replay TOTP-based MFA responses. Several years ago FIDO U2F was introduced, and it is now the “most secure” option available. It comes with benefits and drawbacks and, like all solutions, should be carefully considered before use.

Conclusions

Multi-Factor Authentication is an important part of the security of your information systems. Any system providing access to sensitive information should employ MFA to protect that information. With credential theft via phishing on the rise, this could be one of the most important controls you establish. While this article examines an implementation of TOTP token-based MFA, you should seek an established provider like Okta, OneLogin, Duo, or Auth0 to provide a production-ready solution.

Validating Search Engine Indexers

Aaron Bedra

Introduction

Not all bots are created equal. Some bots are good, some bots are bad, and some bots are not what they appear to be. This article will discuss an aspect of bots that attempt to exploit the sensitive nature of search engine optimization rules.

Fundamentals

We love search engines. If it weren’t for search engines most of our sites would never be discovered. When the search engine bot comes knocking, we best let it in, or we will suffer the consequences of not existing. Let’s face it, if you aren’t on the first page of the search results, you don’t exist. Let’s break down a request made by a search engine bot:

66.249.64.147 – – [24/Sep/2018:07:58:32 -0400] “GET / HTTP/1.1” 200 2947 “-” “Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”

We can identify this request as coming from Google’s indexing bot. We do this by examining the User Agent provided during the request. Unfortunately, this header can be set to any value and cannot be trusted. While we understand this, it is common for site operators to always allow this user agent for fear of vanishing from the Internet.

We are now presented with a problem. How do we know that the actor identifying itself as the Google indexer actually belongs to Google? Luckily, the popular search engines typically provide documentation on how to validate an actor. For example, Google documents this process at https://support.google.com/webmasters/answer/80553?hl=en. At this point you are probably wondering how this pattern applies to other search engine bots. The good news is that while the User Agents and valid domains may change, the process is still the same (a rough Java sketch follows the list). It goes something like this:

  • Does the User Agent match a list of provided user agent strings?
  • If yes, perform a reverse lookup on the IP address of the actor
  • Examine the domain of the lookup and ensure it matches the provided domain(s)
  • If yes, perform a forward lookup of the host and check to see if the forward lookup matches the IP address provided
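
As a rough sketch of that reverse-then-forward lookup in Java, here is what a Googlebot check might look like. The accepted domains differ per search engine, so consult each engine's documentation; this is an illustration, not the module's implementation:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only: verify that an IP claiming to be Googlebot really belongs to Google.
static boolean isValidGooglebot(String ip) {
    try {
        // Reverse lookup: the host name should end in googlebot.com or google.com
        String host = InetAddress.getByName(ip).getCanonicalHostName();
        if (!(host.endsWith(".googlebot.com") || host.endsWith(".google.com"))) {
            return false;
        }
        // Forward lookup: the host name must resolve back to the original IP
        for (InetAddress forward : InetAddress.getAllByName(host)) {
            if (forward.getHostAddress().equals(ip)) {
                return true;
            }
        }
        return false;
    } catch (UnknownHostException e) {
        return false;
    }
}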

If all of these checks pass, the actor is a valid search engine bot. If any of these checks fails, it isn’t. We are now faced with the problem of validating any actor that claims to be a search engine indexer. Blindly allowing these actors introduces a technical weakness that could lead to a loss event. It is interesting to observe this trend in action. You can do so by searching your logs for actors that match a pattern and performing the verification steps manually. You will likely find a number of actors that fail the test. We have observed that even on low traffic sites, there are requests every day that fail validation.

We all know that doing this manually doesn’t scale, and neither does manual IP blocking. As usual, this is a process that can be automated.

NGINX Bot Verifier

It’s important to keep edge processing at the edge. This is a common mistake in application security: it’s typically easiest to take all application logic and put it in the application, which makes deployment and operations easier. The problem with this practice is that it allows requests to reach your application that never should have. This presents an opportunity for an actor to exploit a vulnerability in your application. It’s effectively a lack of input validation at a meta level. In general, if the request doesn’t look right and your application shouldn’t process it, it should be rejected before your application has to try.

Applying this idea leads us to an obvious choice: do it at the web server layer. There are options for this, but NGINX is one of the most widely used web servers available. It also has a nice API for creating custom modules and request handlers. Because this validation process only looks at the User Agent and IP address of the request, it requires very little work to perform the validation and keep invalid actors from hitting our application.

I took some time recently to put this idea into an NGINX module. You can find the code at https://github.com/abedra/ngx_bot_verifier. It’s an open source project and all ideas, pull requests, and issues are welcome. The module handles the validation steps described and works for the following search engines:

  • Google
  • Yahoo!
  • Bing
  • Baidu
  • Yandex

Because the validation happens inline with the request, and the validation requires a response from DNS, it can introduce perceived latency. To minimize this, the module is backed by a Redis cache. All validation results are cached to prevent latency on subsequent requests. Cache expiry is handled by a timeout configured in the module directives; entries simply expire after a period of time since the last validation.
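
The module itself is written in C against the NGINX and Redis APIs, but the caching idea can be sketched in a few lines of Java, here using the Jedis client purely for illustration:

import redis.clients.jedis.Jedis;

// Illustration of the caching idea only, not the module's implementation.
// A verdict is cached per IP so repeat requests skip the DNS round trips,
// and the entry expires after the configured timeout.
static boolean isVerifiedBot(Jedis redis, String ip, int expirySeconds) {
    String cached = redis.get("bot:" + ip);
    if (cached != null) {
        return Boolean.parseBoolean(cached);
    }
    boolean verified = isValidGooglebot(ip); // from the earlier sketch
    redis.setex("bot:" + ip, expirySeconds, String.valueOf(verified));
    return verified;
}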

Why Does it Matter?

You’ve got a lot to do. The point of automating away threats like this is to reduce the noise and increase the signal of more sophisticated attacks. It produces a heightened awareness of threats, allows you to better understand what types of attacks you are seeing day to day, and provides valuable information that could help you understand what attackers are after. Data produced by these types of tools also supports threat models and risk analysis to better support your overall information security program.

Conclusions

This is just one of the many problems you face every day in the world of web application defense. There are many ways to solve this problem, but I encourage you to solve it one way or another. Some search engines make validation difficult. For example, DuckDuckGo uses Amazon EC2 instances that don’t resolve back to DuckDuckGo and thus cannot follow the typical validation process. Other search engines present different challenges. If you decide to give this module a try, find it useful, have questions, or would like to see improvements, let us know. We believe we can make the world a little safer by sharing, and we hope this starts a broader conversation on handling bad robots.

Using the OWASP Top 10 Properly

Matt Konda

I have gone to great lengths to strictly separate my OWASP activities from my Jemurai activities in an effort to honor the open and non-commercial aspects of OWASP to which I have committed so much volunteer time and energy.

Today I want to cross the streams for a very specific reason, not to promote Jemurai but to stop and think about how some of OWASP’s tools are used in the industry and hopefully prevent damage from misuse.

I want to address perspectives reflected in the following statements, which I’ve heard from a few folks:

I want a tool that is OWASP Compliant

And:

I need a tool to test for the OWASP Top 10

And:

I want an OWASP Report

Or alternatively:

Our tool tests for the OWASP Top 10

OWASP Is Generally Awesome

First of all, let’s ground ourselves.  OWASP provides terrific open resources for application security ranging from the Top 10 to ZAP to ASVS to OpenSAMM to Juice Shop to Cheat Sheets and many more.  The resources are invaluable to all sorts of folks and when used as intended are extremely valuable and awesome.

The Top 10 Is Not Intended To Be Automated

Let me be very clear:  there is no tool that can find the OWASP Top 10.  The following are items among the Top 10 that can rarely if ever be identified by tools.

  • #3:  Sensitive Data Exposure
    • Tools can only find some predefined categories of sensitive data exposure.  In my experience, a small subset.  One reason is that sensitive data is contextual to a company and system.  Another is that exposure can mean anything from internal employees seeing data to not encrypting data.
  • #5:  Broken Access Control
    • Tools can’t generally find this at all.  This is because authorization is custom to a business domain.  A tool can’t know which users are supposed to have access to which things to be able to check.
  • #6:  Security Misconfiguration
    • Tools can find certain known misconfigurations that are always wrong (e.g. old SSL), but things like which subnets should talk to other subnets aren’t going to be identified by Burp or ZAP.  We find that custom tools can catch them, but they are just that: custom.
  • #10:  Insufficient Logging & Monitoring
    • What does this even mean?  Our team has been delivering custom “security signal” for a while, but this isn’t a binary thing where you either have it or you don’t.  No company I have ever seen has comprehensive evidence.  There’s no tool you can plug in and immediately “get it”.

Even among the things that can be identified, there is no one tool that can find all of them.

Stepping back, it’s actually a good thing that the Top 10 isn’t easily identified by a tool.  That reflects the thought and human expert opinion that went into it.  It wasn’t just a bunch of canned stuff that got put together.

What The Top 10 is Great For

The Top 10 are a great resource to help frame a conversation among developers, to serve as the basis for training content, and to be thought-provoking for people actively thinking about the issues at hand.  More than 10 items is too much.  These approximate the best ideas we currently have.  Of course there is always room for improvement, but the Top 10 are a great resource when used well.

Please just understand them as a guide and not a strict compliance standard or something a tool can identify.

References

https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

Commercial Software Using Open Source

Matt Konda

Here’s an interesting, slightly different spin on the otherwise tired debate over whether “Open Source” or “Closed Source” is more secure!

The topic is inspired by a conversation with a client that is using a whole slew of old open source libraries.  They know they need to update those libraries, but it is very difficult because they are using them as part of a distribution of a commercial product that they are paying for.  So, they buy XYZ product and pay good money for it.  XYZ product brings in all of these dependencies that are problematic.  The customer can’t update the libraries because it may break the commercial product and because the commercial product is large and not open source, they can’t see and edit the code to fix issues that arise.  Ultimately, it seems like that’s the vendor’s responsibility.  But that doesn’t mean they see it that way.

The obvious avenue for recourse is to tie payment to keeping the libraries up to date.  That isn’t standard contract language … yet.

Another way to hedge in this situation is, of course, to avoid commercial platforms that don’t have a strong track record of updating their dependencies.  It would be interesting to see a scorecard or standardized assessment method to keep track of which vendors and which products do well keeping up with updates in their dependencies.  It seems like it might be relatively easy to ascertain …

In this case, we have an organization that went all in on a commercial platform and even has their own code built on top of it.  Now they wonder if they should have built from a lower level base so that they wouldn’t be locked into a platform that they really can’t update.  Interesting design decision, right!?

Glue 0.9.4 and Scout2

Matt Konda

Glue

We spend a fair amount of time building and using OWASP Glue to improve security automation at clients.  The idea is generally to make it easy to run tools from CI/CD (e.g. Jenkins) and collect results in JIRA.  In a way, Glue is like ThreadFix or other frameworks that collect results from different tools.  Recently, we thought it would be cool to extend some of what we were doing to AWS.  We have our own scripts we use to examine AWS via its APIs, but we realized that Scout2 was probably ahead of us and it would be a good place to start.

Scout2

The fine folks at NCC Group wrote and open sourced a tool for inspecting AWS security called Scout2.  You can use it directly, and we recommend it, based on the description here:  https://github.com/nccgroup/Scout2.  It produces an HTML report summarizing the findings.

For most programmers, running Scout2 is easy.  It just requires a little bit of Python setup and an appropriate AWS profile.  So it wasn’t the barrier to entry that made us want to integrate it into Glue so much as the idea that we could take the results and fold them into the workflow (JIRA) that we are using for findings from other tools.  We thought that having an easy way to pull the results together and publish them from Jenkins would be pretty useful.

What’s Coming with Glue

Glue has been a fun project that we’ve used opportunistically.  The next set of goals with Glue is to clean it up, improve tests and documentation, and prepare for a legitimate 1.0 release.  At that point, we’ll probably also try to get Glue submitted for Lab status as an OWASP project.

Automate All The Things

Matt Konda

Today I gave a talk at a company’s internal security conference about automation.  The slides are on speakerdeck.  A video is on Vimeo.

The point of the talk was threefold:

  1. Explain where automation works well and examples of where we use it with OWASP Glue
  2. Explain newish cool automation like cloud analysis and pre-audit preparation
  3. Talk about how automation can really only get us so far, because we need interaction and communication to fix things

I’d be interested to hear feedback!

 

The 10 OWASP Commandments

Matt Konda

Here at Jemurai, we have at least a few Hamilton fans.  OK, I might be the biggest … but I’m definitely not alone.

At our quarterly meeting in early April, we were talking about our window of opportunity and “not throwing away our shot”, and somehow we started talking about “The Ten Duel Commandments” song and how cool it would be to do a version of it for the OWASP Top 10.

No more than a few days later, one of our key contributors, Corregan Brown, had written lyrics.  A week later we had an audio version.  Now here’s a video to back it up, all written and produced by Corregan.  I enjoy it because it is factual, educational, clever, and fun.  Thanks, Corregan!

Of course, this is just an artistic rendition to draw attention to the great work OWASP and the Top 10 project team has done.

Ten OWASP Commandments from Jemurai on Vimeo.

Glue Update

Matt Konda

There have been several recent improvements to Glue.  It’s been awesome to have more people committing to the project and contributing in different ways.

One is related to ZAP integration, which is finally getting more of the attention it needs.  Another is related to reporting to JIRA.  Still another is a way to fail builds only on certain thresholds of errors.  We have also been working on integrations for Contrast and Burp.  We’ve added a more representative Jenkins Build Pipeline integration example.

We added support for finding secrets, such as passwords, via entropy searches using TruffleHog.

What would you like to see in Glue?  Where do you think we need to be to get to a credible 1.0?

Glue 0.9.3

Matt Konda

Introduction

At Jemurai, we contribute extensively to OWASP Glue and use it on some of our projects where it makes sense to tie together automation around security.  We kept seeing the same types of integration challenges and found that it was useful to have a common starting point to solve them.  It is far from perfect, and we would also refer people to alternatives like ThreadFix and OWTF.

What is Glue?

Glue is basically a Ruby gem (library) that knows how to run a variety of security tools, normalize the output to a set structure, and then push the output to known useful places like Jira.  We package Glue in a Docker image to try to make it easy to set up all the different moving parts (e.g. Java, Python, Ruby, and the tools themselves).  You can get and run Glue from Docker as easily as:

docker run owasp/glue:0.9.3

Or, for a more helpful example, we can run brakeman and get the output as follows:

docker run --rm owasp/glue:0.9.3 -t brakeman https://github.com/Jemurai/triage.git

Architecture

The idea behind Glue is to be able to process different types of files via Mounters, then to analyze with different Tasks, filter with different Filters, and report with different Reporters (CSV, Jira, Pivotal, etc.).  Ultimately, there is a concept of stages that can be easily extended.  The reason I’m writing this post today is that I wanted to add a Bandit task, and it was so easy that all I had to do was add this one file:  https://github.com/OWASP/glue/blob/master/lib/glue/tasks/bandit.rb.

When I was done, you could run bandit from the Glue docker image and push results to Jira or anywhere else:

docker run --rm owasp/glue:0.9.3 -t bandit https://github.com/humphd/have-fun-with-machine-learning.git

Glue’s pipeline runs in stages: files are processed by Mounters, analyzed by Tasks, passed through Filters, and finally sent to Reporters.

Check out Glue or reach out if you want to talk about some of the common challenges in security automation.