Supporting Remote Work Securely

On Friday we wrote a blog post about remote work and security from a worker's perspective, including a checklist. In this post, we want to develop that idea and talk about it more generally from a company and IT strategy perspective. We'll start with some pictures to illustrate the issues.

The content of this post is also available in this Google Slides presentation.

A Basic Network

Consider a basic network for a classic “small” company.

When the laptop or phone at the bottom leaves the network (as when work is not on premise), everything falls apart. Identity won't work. Access to files won't work. Access to internal systems won't work. In short, in a classic pre-cloud IT model without an explicit VPN strategy, many things don't work.

A More Realistic Company Network

Most companies have more of a hybrid network.

In this network:

Tools That May Not Work

Some tools we put in place for security simply will not work the same way without adaptation.


Building a VPN now to restore connectivity to specific internal systems may solve certain problems, but it will come with overhead and will not get you back to where you started in terms of the corp network and full connectivity.

It's a little late to start talking about business continuity strategy, but anywhere that it is possible to leverage cloud-based services using a shared identity (SSO) system is going to be the most resilient to specific cloud or network issues.

Therefore, we advocate that companies bite the bullet and use cloud based resources wherever possible.

Near Term

Medium Term

Long Term


It is time to quickly embrace the cloud and SaaS based services.

Use a risk based approach to prioritize.


Security Culture - Introducing OWASP

In the latest video of our Security Culture series we give a 2 minute overview of OWASP, an amazing resource for developers.

OWASP Resources

OWASP resources include:

Log4J Security Issue

This post is a quick summary of the Log4J security issues happening in December 2021.
It includes a summary, a video, a PDF of slides we presented and extensive references.

The TL;DR is: update Log4J to 2.16.0 and keep watching for subsequent updates.

The 10,000 Foot View Summary of The Issue

Log4J is a widely used Java library.

It has a problem where if it is asked to process a malicious string, it will allow
an attacker to run their own code on a targeted server. This can happen in both
authenticated (where we know the user) and unauthenticated (anonymous) cases depending
on the application.

This issue is being actively probed and attacked.

The simplest fix is to patch. I expect further developments, so I recommend watching
for additional updates.

The Rough Detail

Log4J is a logging library that is used in a wide array of applications. I probably used it in
over half of the projects I’ve worked on in my career.

It is very normal for a developer to want to log something that a user enters. For example:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Logging a user-supplied value -- the common pattern that triggers the issue
String user = getCurrentUser();
String document = request.getParameter("Document");
Logger logger = LogManager.getLogger(Thing.class.getName());
logger.debug("User {} requested document: {}", user, document);

That log statement, where the user-supplied values get written to the log, is where the problem
occurs. One clear problem is that these statements are basically everywhere in code and it
would be nearly impossible to audit all of them.

The fix is basically to use a version of Log4J that doesn't perform the lookup magic on the malicious
string by default. Alternatives are to tell an older version you don't want that feature, or,
in extreme cases, to rip the offending class right out of the log4j library.

There are a variety of ways to scan for the issue, and to identify log4j library versions
locally. Even a simple approach of looking at dependencies could help.
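As one minimal sketch of that dependency-lookup approach, the script below walks a directory tree and flags old log4j-core jars. The Maven-style `log4j-core-<version>.jar` filename convention is an assumption; shaded or fat jars will evade a filename check, so treat this as triage, not proof of absence:

```python
import os
import re

# Hypothetical helper: walk a directory tree and flag log4j-core jars
# older than 2.16.0. The filename convention (log4j-core-<version>.jar)
# is the usual Maven naming; repackaged jars will not match.
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
FIXED = (2, 16, 0)

def find_vulnerable_jars(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = JAR_PATTERN.search(name)
            if m:
                version = tuple(int(g) for g in m.groups())
                if version < FIXED:
                    hits.append(os.path.join(dirpath, name))
    return hits
```

Running this against a deployment directory or a dependency cache gives a quick first-pass list of candidates to patch.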


The vulnerability was disclosed to the Apache Log4J security team on 11/24.

It was released to the public on 12/10.

Patches 2.15.0 and 2.16.0 have been released since then.

I would look in log files from November forward for malicious activity as soon as possible.
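As a starting point for that log review, even a naive indicator search helps triage. The `${jndi:` prefix is the canonical exploit string; real attacks are often obfuscated (e.g. nested `${lower:...}` tricks), so a clean result means "nothing obvious", not "nothing there":

```python
import re

# Deliberately simple indicator check for the canonical exploit prefix.
# Obfuscated variants will not match; this only catches naive attempts.
JNDI_HINT = re.compile(r"\$\{jndi:", re.IGNORECASE)

def suspicious_lines(log_lines):
    return [line for line in log_lines if JNDI_HINT.search(line)]
```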

A Video

I put together this 17 minute video overview that covers:

The Slides

If you want to read and navigate yourself, the slides are here.


We looked at a lot of sources as we navigated this issue. Here are some that we thought were helpful:

Pipeline Security Automation

This post talks about how we approach security automation in BitBucket Pipelines. It also introduces some new open source tools we built and use in the process.

Security In Pipelines

We’ve written before about using GitHub Actions and provided an Action friendly “workflow” with our Crush tool.

At a high level, Pipelines and Actions just do some computing work for you in Atlassian's or GitHub's data center. Often that work is related to source code, testing, or deployment.

Both leverage containers heavily for sandboxing the work that happens in the build and provide some level of abstraction that you can use to build your own pipes or workflows.

On some level, we’re using them to do the same work we used to use Jenkins or CircleCI or Travis to do.

We like automating security in Pipelines or Actions because it makes it easy to build security into your natural development workflows.

Real World

So what are we really talking about? What do we want to have happen?

We want to be able to apply any automated tools flexibly in a controlled environment that already has our code artifacts and knows about the key events.

Triggers for actions and pipelines can be:

The tools we want to run may range from:

Of course, we can run security unit tests and trigger other integrations (eg. SonarCloud) as well.

Just remember:

Tools are necessary but must be used by skilled operators.

I can’t tell you how often I see security tools installed but not used effectively.

OK Show Us Code

Here is an example of how we have a pipeline configured:

    - parallel:
      - step:
          name: Run Crush
          image: golang:1.16
          script:
            - go get -u
            - crush examine --threshold 7 . > crush.json
          artifacts:
            - crush.json
      - step:
          name: Dep Check
          image: openjdk:8
          script:
            - wget
            - unzip
            - rm
            - ./dependency-check/bin/ --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
          artifacts:
            - dependency-check-report.json
    - step:
        name: Report
        image: golang:1.16
        script:
          - go get -u
          - go get -u
          - depcheck2off ./dependency-check-report.json > ./
          - off2jira ./
          - off2jira ./crush.json

Let’s walk through it and talk about what is happening.

First, the branches part tells BitBucket when to run the pipeline. In this case, it will be on any push to a branch under securityautomation.


We like doing this because it helps to isolate your security related changes and ensures that what you are finding doesn’t break other builds. In the long run, we want to have security tooling run more often.

Then we need to understand that there are three steps defined in the pipeline:

Crush and Dependency Check are both code analysis tools, so they can run in parallel. Hence the parallel: before their step: definitions.

To run Crush, we pull a base golang image (image: golang:1.16), install Crush and run it. We drop the output into an artifact, which means it will be available later.

- step:
    name: Run Crush
    image: golang:1.16
    script:
      - go get -u
      - crush examine --threshold 7 . > crush.json
    artifacts:
      - crush.json

Running Dependency Check is similar. You can see that we're pulling a release from GitHub and unzipping it. This is on an openjdk image. Then we invoke Dependency Check and put the report in an artifact.

- step:
    name: Dep Check
    image: openjdk:8
    script:
      - wget
      - unzip
      - rm
      - ./dependency-check/bin/ --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
    artifacts:
      - dependency-check-report.json

The next part “Report” is interesting and we’re going to put it in a whole new section.

Reporting and Rethinking Security Tooling

Once we have Crush and Dependency Check output, we want to do something with it. We could leave it in BitBucket as an artifact and refer to the plain text file. That is better than not running the tools, but we also want to make these visible and integrate them into our normal processes.

Here’s how that looks in the pipeline we defined where we’re pushing the issues identified by Crush and OWASP Dependency Check to JIRA:

- step:
    name: Report
    image: golang:1.16
    script:
      - go get -u
      - go get -u
      - depcheck2off ./dependency-check-report.json > ./
      - off2jira ./
      - off2jira ./crush.json

Here we are installing and using two new golang based tools:

The basic philosophy is to build small tools that do one thing and do it in a simple and predictable way. This goes directly against our own past approach with OWASP Glue which we retired.

With Glue, you could run different tools and push the output to a variety of trackers. The problem was that you ended up with a JVM, Python, Ruby, Node and an ever-growing docker image. That made it hard to incorporate into pipelines efficiently. We also had to maintain everything and keep everything working to get an update pushed. It was a monolith.

With the Jemurai autom8d set of tools, we’re taking more of a classic Unix philosophy and building small purpose built utilities that can be put together in a number of ways.

So far we have:

We already have plans to build:

We also want to adapt and integrate some other code we have that does an inventory and metrics across repositories.

We’d love to hear from you about others that would be useful! We can help with this type of automation while doing so with open tooling you can leverage for the long term.

Leverage Pipelines

The great thing about pipelines (and actions) is that once you understand them, you can effectively push security tooling to a large number of projects quite easily.

Note that there are compute charges associated with running pipelines (or actions).

We have also had good success helping companies who leverage BitBucket or GitHub cloud because we can actually help commit the code that starts the automation project off. Combined with some training and a retained ongoing support setup - we can enable clients to very quickly improve their internal app security posture.


Cloud Security Auditing With Steampipe

This post talks about how we use different tools to accomplish different tasks in a cloud security context, zooming in on Steampipe as a tool that should make it very easy to prepare for and meet audit requirements.

Cloud Security Auditing

There are a couple of different things that we think of when we think of cloud security auditing.

One is a pure security activity of checking all of the configuration details on all of the services we are using to make sure they are configured properly.

Another is to support an external audit that somehow proves that we are doing the right thing across our infrastructure.

With SOC 2 and other audits, we are increasingly seeing tools introduced that supposedly help to magically speed up the audit process. As may be obvious already, I’m skeptical.
The idea that AI is going to magically help us with cloud security is laughably naive.

The cool thing about Steampipe, and really what the cloud APIs themselves enable, is that we can do a lot of that inventory and preparation ourselves.

Spoiler: the tool can’t help you be secure, you have to do the work.

Finding Problems

Some tools are good at finding issues. I would categorize Prowler and ScoutSuite in this group. You run them to identify issues and they help you find problems. They are both open source and very useful. We built a commercial tool like this called JASP - so we know a thing or two about how these tools work and what they are good for and not good for.

JASP makes it basically as simple as possible to get everything running, keep it running consistently and provide reporting over time and alerting around issues.

Steampipe also supports checks against several CIS Benchmarks, including AWS, Azure and GCP.

Getting these running is easy for a DevOps person who is already using a CLI with any of these tools. So you can use Steampipe to do your “problem finding.”

There are commercial tools (including from the cloud providers) for finding problems too. Generally,
I feel they are not used very effectively and if you’re looking to find problems in your environment you might as well start with open source options. You need to understand them and the output to fix anything anyway.

On the downside, all of these tools produce huge lists of problems and lack context about the environment, including how the pieces fit together and what really is a security issue.

You can use these tools to prepare for an audit like a SOC 2, but it is likely they will have you doing a whole lot of extra work to clean up your report that the auditor won't ask about, and they might miss simple things the auditor really does care about - like whether users are in the correct IAM privileged roles.

Finding What Is

Building an inventory of your systems is outside the scope of the tools that find problems. If a system has an issue it will show up in the report but if it doesn’t, it won’t.

So when it comes to doing an audit, if the auditor says they want a list of EC2 instances, that may not be easy to supply. Not to mention the fact that you want to know what you have before you start the audit.

You can use native tools like AWS Config to keep track of what you have. You can use the GUI to do this work too, taking screenshots of the configuration. But this is kind of painful.

Enter Steampipe. With Steampipe, you can basically write queries against your accounts to list resources - including properties you might want to check.

For example, the following will show users without MFA:

select user_id, name, password_last_used, mfa_enabled
from aws_iam_user
where not mfa_enabled;

I can query databases, EC2 instances, all kinds of things through the Steampipe interface, which basically provides a SQL interface on top of the AWS APIs.

A more detailed example is being able to query EC2 instances that have unencrypted volumes attached.

select
  i.instance_id,
  vols -> 'Ebs' ->> 'VolumeId' as vol_id,
  vol.encrypted
from
  aws_ec2_instance as i
  cross join jsonb_array_elements(block_device_mappings) as vols
  join aws_ebs_volume as vol on vol.volume_id = vols -> 'Ebs' ->> 'VolumeId'
where
  not vol.encrypted;

Cool, right?

Unified Process

Something that is awesome about Steampipe is that it supports a lot of services through plugins: AWS, GCP, Azure, Slack, Zoom, Alibaba, CloudFlare, DigitalOcean, Jira, Kubernetes, Shodan, Zendesk and more. It also has mods that implement checks against the data it can collect.

So I can use a plugin and then build queries to talk to all of these services and have a unified process for doing inventory and auditing. Once I know how to use it, I can really get a process in place quickly.

Own Your Tools and Actively Look

One of my favorite things about Steampipe is that you can (and we do) wrap the queries in scripts (in our case python) that allow us to run a series of queries and essentially translate audit requests for evidence into scripts that we can tweak and automate on the fly.
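Our actual scripts aren't shown here, but a rough sketch of that wrapping pattern follows. The `steampipe query --output json` invocation and the exact JSON output shape are assumptions that may vary by Steampipe version; adjust for your install:

```python
import json
import subprocess

def run_steampipe(sql):
    """Run a query through the steampipe CLI and return parsed rows.

    Assumes `steampipe query --output json "<sql>"` is available on PATH.
    """
    result = subprocess.run(
        ["steampipe", "query", "--output", "json", sql],
        capture_output=True, text=True, check=True,
    )
    return parse_rows(result.stdout)

def parse_rows(raw_json):
    # Steampipe's JSON shape can vary by version; handle either a bare
    # list of row objects or an object wrapping a "rows" key.
    data = json.loads(raw_json)
    return data["rows"] if isinstance(data, dict) else data

def users_without_mfa():
    # One audit request ("list IAM users without MFA") captured as code.
    return run_steampipe(
        "select name from aws_iam_user where not mfa_enabled"
    )
```

Each auditor request for evidence becomes one more small function like `users_without_mfa`, so the evidence gathering is repeatable run to run.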

Ultimately, I believe that if you’re going to have developers or ops folks anywhere on the spectrum of DevOps managing your infrastructure, these are the types of tools and approaches that empower them to secure your environment and crush your audit.

I don’t know anyone at Turbot or who works on Steampipe, but I’m excited about recommending it for these types of proactive tech security projects.


Email from a Security Researcher

Yesterday, for the Nth time, a client had a “security researcher” send an email about a “high-impact” security vulnerability. I’ve crafted this response a few times so I figured I would blog about it.

Email from a Security Researcher

So here’s the email:

Hi <name>,

I'm <"researcher" name>, a penetration tester, and I have found a high-impact security vulnerability in the <company name> web app.

How can I report the issue details? Also, I'm inquiring if you reward reporting valid vulnerabilities.

Thanks, <"researcher" name>

Digression About Vulnerability Disclosure Programs

In general, I’m a fan of having a vulnerability disclosure program. Fundamentally, a disclosure program has to outline rules of engagement for reporting a vulnerability and a timeline for expecting a response. It might or might not involve a reward. The program should include some sort of scope.

This is positive, because often folks that are interested in software tinker with it (hack it) and find things that are important. Before disclosure programs and bug bounties, there was a lot of hostility to "hackers" who reported these kinds of issues - and so sometimes those issues would get sold into the zero day market. A disclosure program presents a company's positive attitude toward researchers reporting issues and gives a framework for it to happen in a trustworthy way.

I’m also a fan of bug bounty programs, which are similar but generally imply that there is a reward for a reported vulnerability, a more explicit scope and a sense of what types of vulns may or may not be reported. Bug Bounty programs are often intermediated by firms like BugCrowd or HackerOne.

I have met a bunch of folks that are very active in this particular security community, and I’ve run bounty programs at large companies. There are a lot of great people here.

As others have stated previously more eloquently than I will, there are times and places to start a bounty program.

The TL;DR of which seems to be: bug bounties are great when you've got basic hygiene figured out and you have a way to handle an ongoing volume of reports.

Otherwise, you might just get inundated with information you don’t know how to deal with and only very little of which is realistically important to your security.

The Gory Detail

The truth is, these types of emails are common and often a result of someone opportunistically scanning for issues and hoping to make a little money.

They frequently identify issues such as missing X-Frame-Options HTTP headers or similar. Often, they are actually low or even informational severity, contrary to what the researcher’s email says.

Now, if the person is a legit security researcher letting you know about a problem, you want to thank them and give them a way to share what they know and ideally give them something in return. A researcher might accept recognition or swag. In the long run, we want to make these legitimate and commensurate with the value of the identified issue. I have seen researchers submit amazingly useful findings.

The problem is that most of these submissions, especially when framed like the one above, are not significant security findings at all and they are being used to SPAM a large number of companies with the hope that some will pay.

I have also seen “researchers” turn into extortionists who publicly complain about the way a company handles a minor problem in order to get attention and a reward.

To respond effectively, we want to engage the earnest researcher while shutting down the discussion with the extortionist.

I recommend responding with something like this email template:

Hi there,

Thank you for reaching out. We do not currently have a vulnerability disclosure program, reward program or bug bounty in place.

That being said, we have folks on the team that have been involved in those types of programs and know how to run them and it is something we may do in the future.

If you would like to report the issue to this security@ email, we will track it in good faith and consider providing some kind of award or recognition if we can.

At the same time, having run these programs in the past, we also know that there are a lot of folks out there who run scanners and submit the results to try to claim rewards. Those types of findings aren’t the type of issues that our program can reward.

We certainly appreciate the security community and understand the value of a mutually positive model for interaction. We’re committed to engaging with integrity.

We look forward to hearing from you.

Thank you,

Of course, you have to actually engage with integrity and track the issue. If you do start a program, you should reward the researchers that reported real findings. You also have to communicate with researchers and fix issues.

Epic Security Failure and Risk

All I could do was facepalm after somebody pointed me to an article about how Microsoft unleashed a death star on hackers …

"Microsoft unleashes 'Death Star' on SolarWinds hackers in extraordinary response to breach" GeekWire Article

Let’s talk about failure.

Start with Sympathy

Look, it's a bad situation.

Lots of IT and Security folks are working really hard right now. That includes people at SolarWinds, Microsoft, your favorite security vendor and companies we depend on every day.

None of what I'm about to say is personal or trying to make anything harder for anyone. It's all hard.

The Reality

But look, it’s a really bad situation.

Let’s say that 450 of the Fortune 500 and dozens of Federal systems were breached. I can think of any number of serious things that any adversary would want that access for.

Notice that none of this is as small as stealing tens or hundreds of millions of dollars or some intellectual property.

At this scale, the potential damage is hard to overstate.

The Security Reality

No seriously, it's even worse than all of this. From the scant publicly available information, we can infer that hackers were in major networks for 7 months without really being detected.

Did these networks have XDR? Yes. Yes, they did.

Did these networks have next gen firewalls? Yes. Yes, they did.

Did these networks have AI and intrusion detection? Yes. Yes, they did.

Did all of these organizations review their supply chain for security with vendor compliance processes? Yes, they did.

Did these organizations have, among them, the very best the security industry has to offer? Yes. Again, I’m going to say they did. After all, we’re seeing key Federal systems and ~450 of the Fortune 500 impacted.

This is nothing short of an Emperor's New Clothes moment for cybersecurity. What were all these tools for? Is it maybe possible that they were oversold?

The Supposed Death Star Response

OK, I mean it’s cool that Microsoft helped sinkhole the DNS (with GoDaddy and others). We’ve seen cases where an individual malware researcher found the C&C domain and disabled it. That doesn’t feel like a major blow.

What about the Windows Defender updates that could find and then automatically disable this malware? Well, sure, but isn’t that their job? That’s the basic idea of Defender—to release updates. It’s actually 7 months too late.

The bottom line is that this piece about Microsoft getting medieval on some hackers is bullshit.

Most of the US cybersecurity system has been owned, and it is time for humility and a reckoning. I'm asking myself if this article was paid PR to try to sugarcoat the message that, guess what folks, an entire industry just suffered an epic failure and basically can't be trusted.


So … what should people do?

Well, I would stop buying SecurityThings™ and start quantifying and isolating risk. This will take people, knowledge, processes and time.

If you assume that everything you have has been compromised, and there is no one thing you can do to secure it, what do you do next?

Start with the information you really need to safeguard or update.

Move on to systems you would need to be able to recreate or bring back online in a new way.

Some things aren’t worth defending. At least not when it is as hard as it is in the real world. I think that line just moved a lot for a lot of companies. Recalibrate.

Hang in there and keep at it. Just think fresh and for yourself. The impact we can make as security practitioners is both evident and in question.


Risk and Threat Modeling with Mind Maps

In security we talk a lot about understanding risk. That informs the advice we give and decisions we make. A tool I like to use for brainstorming about risk is a threat model in the form of a mind map. It is a simple starting point for thinking about threats.

In this post, we’ll talk about the new simple tool we released and how we use it.

Threat Modeling

Some threat modeling techniques (see references) can be very thorough and rigorous to implement. This is good if your team has the time and resources to do this, but it can be challenging if you don’t have dedicated resources that really understand the methodology.

We tend to start with a simpler mind map and store it with the developer’s project artifacts. OWASP’s
Threat Dragon does this, as does Threagile, but for various reasons we just like the flexibility of an
open ended mind map.

The process we use starts with a preseeded model. We might build out more detailed examples of these, but as you can see from the screenshot above, we have preloaded:

We don’t aim for perfection. Maybe the first step is getting everyone on the same page about what our assets are!

The process is iterative and evolves. That’s one reason the tool allows you to build a model and then save it to a local JSON file. You can just put this in your source project and then open it later to see it again or make adjustments.

This is still very much brainstorming territory. We’re trying to come up with realistic threat scenarios.

The Tool

To support this, we started with a simple D3.js project that does generic mind mapping and set it up with some preloaded data and hosted it here:

Note that this is a static site. The tool runs completely in your browser. We don’t know what your data looks like or have any information about what you do with it.

The tool basically lets you add nodes, rename them, delete them and navigate based on arrows and take action based on keys:

It also lets you preload a starting model. We are thinking we may try to capture some additional starters and add those so that people can get a running start on different scopes of models (eg. organizational, network, application, cloud env, business process, etc.).

To be quite honest, the tool should work for doing simple mind mapping totally unrelated to threat modeling too. We didn’t want to start out with too many assumptions about how it would be used.

Just A Starting Point

Although we like the simplicity of the current tool, we have some ideas for extending it.

We would like to add additional template models that can be adapted for more specific scenarios. Basically to help people think about risk more proactively by preseeding the models with things they may care about.

We have also thought about bridging into a more quantitative model as a resulting step from the initial mind map. Imagine that we could create a row in a risk table for every combination of leaves in the different trees.

We might see an adversary targeting an asset via a part of the attack surface and we can then provide some level of calculation around that to think about risk more quantitatively. The quantitative part is hard to get right (kudos to FAIR) but with a decent UI, the mindmap provides the structure for the taxonomy and maybe makes the risk calculation more accessible.

| Adversary       | Asset             | Attack Surface           | Records   | Probability | Impact  | Aggregate     | Risk   |
| --------------- | ----------------- | ------------------------ | --------- | ----------- | ------- | ------------- | ------ |
| Organized Crime | Financial Account | Partner System           | 10,000    | 5%          | 100     | 50,000        | Low    |
| Organized Crime | Financial Account | WordPress Marketing Site | 0         | 10%         | 100     | 0             | Low    |
| Organized Crime | Financial Account | Order System             | 1,000,000 | 2%          | 100     | 2,000,000     | Medium |
| Organized Crime | Financial Account | Advertising System       | 0         | 5%          | 100     | 0             | Low    |
| Rogue User      | Bitcoin           | Internal IT              | 1,000,000 | 5%          | 100,000 | 5,000,000,000 | High   |
| Rogue User      | Passwords         | Order System             | 1,000,000 | 2%          | 10      | 200,000       | Low    |

This model includes additional information like records, probability, impact, aggregate and a risk score. You can see that it helps us zoom in on the systems and data that really matter.
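A sketch of how such a table could be generated from the mind map leaves follows. The scoring formula (aggregate = records x probability x impact) and the High/Medium/Low thresholds are illustrative assumptions for this example, not a standard:

```python
from itertools import product

# Illustrative sketch: enumerate every (adversary, asset, attack surface)
# combination from the mind map leaves and score it. The formula and the
# classification thresholds below are assumptions for illustration.
def aggregate(records, probability, impact):
    return records * probability * impact

def classify(agg):
    if agg >= 1_000_000_000:
        return "High"
    if agg >= 1_000_000:
        return "Medium"
    return "Low"

def risk_rows(adversaries, assets, surfaces, estimate):
    """estimate(adversary, asset, surface) -> (records, probability, impact)."""
    rows = []
    for adversary, asset, surface in product(adversaries, assets, surfaces):
        records, probability, impact = estimate(adversary, asset, surface)
        agg = aggregate(records, probability, impact)
        rows.append((adversary, asset, surface, agg, classify(agg)))
    return rows
```

The `estimate` callback is where the hard, human judgment lives; the rest is just bookkeeping that keeps every combination from being silently skipped.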

We have also imagined a way that a user could click on different nodes in the mind map and essentially map out the most important attack trees and create vectors:

The current model doesn’t support either of these, but we are putting thought into how they should work. We would welcome input here!

Credits and Thanks

Note that the threat modeling tool is built upon open source work by folks we’d like to thank:

Michael Bostock for the D3.js framework and key examples.
Jeremy Darling, whose mindmap example was where we started.


Here are some good general Threat Modeling references:

Let's Talk About Blockchain

Let’s talk about Blockchain. I think many people in the security world are already appropriately skeptical of all of the “let’s use blockchain for this” trends, but in this post we wanted to dig into it a bit and talk about why not to use blockchain.

What Is Blockchain

Blockchain isn’t just one thing really, it is a combination of things that come together in a neat way to make something like bitcoin possible. Bitcoin, by the way, is a great use of blockchain technology. Here is the original paper, which describes a lot of this in original detail - and I think in a surprisingly readable way.

So what is involved? Well, blockchain is kind of fundamentally based on some of the following concepts (again see the paper above for more complete coverage).

First, we need an identity which is defined with cryptographic keys (public and private, think a wallet). Anyone can make one. This allows us to participate. Assume that the keys will be used in different interactions with the blockchain in ways that cryptographically ensure a key owner is a key owner.

Next we need a network to participate in. It is composed of a number of nodes that handle the actual chain. You could think of them as servers but that's not really what they are. They're just workers. They include the idea of a time server, which ensures that transactions are chained properly. The proof of work idea (think mining) is done continuously and as transactions happen, the nodes integrate them into their historical ledger - or tracker. Many people like the idea of the distributed network because it decentralizes the operation of the blockchain. Just like with having a wallet, anyone can participate as a node.

The chain idea comes from the fact that the order of transactions can’t be subverted or broken. One transaction can only happen after another and if it tries to present itself to the network out of order it will be rejected. The network collectively understands how to use proof of work to ensure the integrity of both the next transaction and all of the history leading up to the next transaction.
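To make the chaining idea concrete, here is a toy hash chain. It illustrates only the integrity-of-history property described above; proof of work, the distributed network and signed transactions are deliberately omitted:

```python
import hashlib
import json

# Toy hash chain: each block commits to the previous block's hash, so
# editing or reordering history changes every later hash. A real
# blockchain adds proof of work, distribution and signatures on top,
# and the head of the chain needs an external anchor to be trusted.
def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transaction})
    return chain

def verify(chain):
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Tampering with any historical transaction breaks the `prev` link of the following block, which is exactly why out-of-order or rewritten history gets rejected by the network.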

We often associate privacy with blockchain, but that is only because we don’t always know who a particular key (wallet) owner is. We don’t need to know for the system to work. However, the actual content of the ledger is not private at all. The balance and transaction history for a given key (wallet) is visible to all participants.

Where Does It Fit

The conclusion I draw from the above analysis is that blockchain fits as a solution where:

  1. Time is a fundamental factor
  2. Content of the blockchain is public
  3. Content of the blockchain is distributed (and not owned by any particular entity)
  4. There are transactions
  5. Any user may participate (and users do participate)
  6. The integrity of each transaction needs to be verifiable across the users
  7. There should be no single point of failure for a transaction
  8. It must be practically impossible to fake a transaction
  9. The result of the transaction can be fully captured in a transfer of digital information

Most misuses of blockchain arise because people focus in on #8 and just want to take the part about not being able to fake a transaction without the other context about what it means to be in a distributed ledger system etc.

But blockchain does not fit just because some of these might be true. It certainly does not fit just because we have a data integrity problem to solve. There are plenty of existing technical solutions for tracking the history of data and marking its integrity, which is usually what people actually want when they say blockchain.

Example: Voting

The most obvious reason that blockchain is a bad fit for voting is that our votes are private.

But beyond that, time is not a relevant factor in voting, and while a vote might seem like a transaction, it isn’t related to other votes in a way that would require a ledger to track the flow of votes (or parts of votes) between arbitrary parties.

Finally, there is no need for a distributed data backbone here, nor does a plausibly trustworthy one exist.

Basically, when I see voting with blockchain I assume there is snake oil or sales magic involved. People who know a lot about this and are worth consulting further include Matt Blaze (see his Emergency Voting talk) and the authors of the consensus report Securing the Vote.

Example: Medical Records

In a medical context, it may be tempting to think about different providers as different users updating a ledger of health care records, but there are a bunch of reasons why that falls down in practice. Medical records are also private, so we can’t just put them on a blockchain. The players are not really open; they are contractually bound participants in a predetermined process and an existing network.

We can think about provenance and history of the record in this case, but there are much simpler ways to track the integrity of a series of medical history updates.

Basically, even if we just do the following, we’ve met the same key objectives we would have with blockchain and it is something a programmer could just do without having to learn or explore blockchain:

  1. Strongly authenticate the people that touch the records
  2. Track every version of the record
  3. Track a hash of the record with each version
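Those three steps amount to a versioned, hashed record history. Here is a hypothetical minimal sketch, assuming the user identity has already been established by strong authentication upstream:

```python
# Sketch of a versioned record history with per-version hashes.
# The "user" is assumed to come from strong authentication (e.g. SSO + MFA).
import hashlib
import json
from datetime import datetime, timezone

def record_hash(content):
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

history = []  # append-only version history for one patient record

def update_record(user, content):
    history.append({
        "version": len(history) + 1,
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hash": record_hash(content),
        "content": content,
    })

update_record("dr_smith", {"allergies": ["penicillin"]})
update_record("dr_jones", {"allergies": ["penicillin"], "bp": "120/80"})

# Integrity check: every stored hash still matches its content.
assert all(v["hash"] == record_hash(v["content"]) for v in history)
```

In a real system the history would live in a database and the hashes could be countersigned or anchored externally, but the point stands: no blockchain required.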

It would also be neat to have a better way to distribute medical records safely, so that I can show up and my digital archive is available to whoever needs it. But nothing about blockchain makes that easier. It also doesn’t have any mechanism for controlling who can see my data, which is an important factor in medical applications - including the idea of an emergency override so that a life-and-death situation doesn’t escalate too far because of technology.

Note that blockchain doesn’t provide any mechanism to protect the data in the transaction. It doesn’t specify encryption techniques, key management protocols, etc. It is only designed to allow participants, each with their own key (identity) to participate in a public way.

Example: Logistics

We’ve seen companies that want to use blockchain for logistics - to ensure handoff and tracking of packages or pallets are properly handled across several different logistics providers. The idea would be the state of the shipment could only progress along a timeline (using the time factor) and be changed in known ways by traceable participants in the process. Each change looks a bit like a transaction and now you can have a series of integrity checks that allows the parties to all see how the shipment is progressing. The fact that there are different parties involved participating in and watching the transaction kind of fits the blockchain model.

But again, unless the shipment data is intended to be public, the ledger itself has to be private. It could then be distributed among a network of participants - but now you’re talking about building a network again and specifying how the participants interact. Further, you could have just built a simple API with the same controls outlined for medical records (authentication, versioning, hashing) and been done with it.

Example: Smart Cities

This article advocating the use of blockchain in smart cities falls into the same traps. It presents the reason for adopting blockchain as cybersecurity, but in the context of “connected urban objects”. Where the cybersecurity comes from is unclear, and what it is protecting, and from whom, is not clear either.

If we make each “urban object” into a participant in the blockchain, then presumably they all become targets. If we go the path of putting “info tokens” from all of the “streetlights, meters, parking lots, waste bins, Wi-Fi hotspots, video surveillance cameras”, etc., on the blockchain as advocated, are the controls in all of those distributed objects somehow secured? Automagically? Obviously not. The claim that having blockchain on these devices will help prevent ransomware is laughably void of meaning.

Since blockchain … you remember, right? … doesn’t say anything about privacy of the data on the ledger, the data will all be accessible to any device in the blockchain network. The very existence of a network that all of those devices participate in seems wild.

Is the streetlight going to buy something from a parking meter that then someone else can’t buy? The whole point of blockchain is to secure a series of transactions that are related and constrained like an account. That doesn’t seem to apply at all here.

Real Estate Title

One of the cool ideas that comes along with blockchain is the disruption of industries to allow for less friction in transactions.

Consider when you buy a home. Several parts of that process rely on a title insurance company, whose job it is to ensure that the person selling the property actually owns it and that the person buying it has the funds to do so. Ownership of land is public record. Land can be split, assembled, and described in very fine technical detail.

So in this case, the data on the blockchain is public, the identities are required, the transactions basically make sense. Forging a transaction would be a huge problem. So it should be a good fit right? Mostly.

For blockchain to really work here, there has to be an incentive for people to participate in the system as part of the network. Otherwise, you lose the distributed ledger and decentralization. With bitcoin, the incentive to participate was a digital currency that could be traded for hard cash. In a real estate scenario, I don’t think anyone would be willing to set aside increasing portions of land as part of the incentive structure for nodes/miners. Maybe that could be solved with cryptocurrency. One way or another, we would need a network to exist for this idea to work, and if a company runs it, then a lot of the “good things™” about blockchain get eroded.

Not All Wrong

Maybe the hype about blockchain does capture a real need for a clearer standard around data sharing and data provenance. It is likely any solution for that will include strong identities (keys, wallets) similar to those we see with blockchain. It could be possible to build a simple wrapper for a database that would allow appending, signing and storing data encrypted. It would also have to be able to provide queries to get the effective record at any given point in time. Making this more of an accessible technology option might do a lot to help achieve some of the goals of those advocating for blockchain.
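The “simple wrapper” idea could look something like this. The sketch below is hypothetical: it uses HMAC with a shared demo key as a stand-in for real per-identity signing keys, appends signed entries, and supports a query for the effective record at a given point in time.

```python
# Hypothetical append-only store: signed entries plus a point-in-time query.
# HMAC with a shared key is a stand-in for real per-identity signatures.
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # stand-in for a real, per-identity key

log = []  # append-only log of signed entries

def append_entry(record_id, content, timestamp):
    payload = json.dumps(
        {"id": record_id, "content": content, "ts": timestamp},
        sort_keys=True,
    ).encode()
    sig = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    log.append({"id": record_id, "content": content, "ts": timestamp, "sig": sig})

def effective_record(record_id, as_of):
    # Latest version of a record at or before the given time.
    versions = [e for e in log if e["id"] == record_id and e["ts"] <= as_of]
    if not versions:
        return None
    return max(versions, key=lambda e: e["ts"])["content"]

append_entry("rec1", {"status": "draft"}, 100)
append_entry("rec1", {"status": "final"}, 200)
assert effective_record("rec1", 150) == {"status": "draft"}
assert effective_record("rec1", 250) == {"status": "final"}
```

A production version would add per-identity keys, encryption at rest, and durable storage, but the shape of the solution is an ordinary database wrapper, not a blockchain.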


The idea that blockchain is the easiest or best way to handle identity and integrity in common use cases is simply not true.

Use blockchain for cryptocurrency. Otherwise, it is probably not a fit.

Crush Github Action

Everyone is talking about pushing left. I feel like I’ve been talking about Agile Security since like 2010. Whatever we’re going to call it, the idea is that we want to be able to do our work earlier in the development process where developers can touch and feel it.

It’s not all about tools

Although I’m going to show how to do a neat little tool integration, I want to reiterate that application security is only a little bit about tools.

The green compliance checkbox you get with tools is never real. – Me. 🙂

By all means, use tools.

But pushing left also includes:

Remember Crush

Crush (sort of short for “code review helps us”) is a simple tool we built to help when we’re doing code review. To keep it simple, we search for strings that are often associated with significant errors. We offered a deeper intro in this blog post about why we wrote Crush, and in the README.

Will Butler wrote a great blog post about how to find vulnerabilities in code. In the most recent update, we integrated all of his “bad words” into Crush; some were already there. You can run Crush to find them like this:

docker run -v ldir:/tmp/target jemurai/crush:0.6.1 examine --directory /tmp/target --tag badwords --threshold 1

Where ldir is a local directory that Docker can see (on a Mac, this is easy if it is under your user, e.g. /Users/<you>/directory). It will be noisy. I think Will is onto some good stuff with the bad words, but initial testing suggests the output might be too much to be useful and could require further tuning. That is why we set the threshold to 1 for most of these checks. An easy way to make it work the way you want is to raise the thresholds on the checks you really care about, then only report findings above that threshold.

The way we use Crush is to quickly make sure we didn’t miss anything obvious during a code review. It’s like a code review assist: we might see everything anyway, but it makes sure we don’t miss anything we should catch.

Crush is open source, written in Go, and generally fast enough to use in a git pre-commit hook, CI/CD, or almost any process.
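As one example of the pre-commit idea, a hook could run the same Docker invocation shown above and block the commit on findings. This is a hypothetical sketch (the image tag and flags mirror the earlier example; adjust to your setup):

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: run Crush against the repo via Docker
# and abort the commit if the scan exits non-zero.
docker run -v "$(pwd)":/tmp/target jemurai/crush:0.6.1 \
  examine --directory /tmp/target --tag badwords --threshold 1 || {
    echo "Crush found potential issues; commit aborted." >&2
    exit 1
  }
```

Make the hook executable with `chmod +x .git/hooks/pre-commit` and it will run on every commit.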

GitHub Actions

I got really excited about GitHub Actions after seeing ZAP’s blog post about running ZAP as an action.

It turns out it is very easy to package anything that can run as a Docker container as a GitHub Action. Here is the code we use in Crush:

name: 'Jemurai Crush'
description: 'Crush code - automate code review'
branding:
  icon: 'code'
  color: 'red'
inputs:
  dir:
    description: 'The directory to scan'
    required: true
    default: '.'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - 'examine'
    - '--debug'
    - 'true'
    - '--directory'
    - ${{ inputs.dir }}
Simply by putting this in a file called action.yml in our repository root and then releasing it, we make our action available. In this case, the tool can be run like this:

docker run -v <local-dir>:/tmp/target jemurai/crush examine --debug true --directory /tmp/target

Using the Action

Using the action is super easy - just drop a file describing the workflow you want into the .github/workflows directory, for example .github/workflows/crush.yml:

# This is a basic workflow to help you get started with Actions
name: CRUSH

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
#  schedule:
    # Runs every day at 01:00.
    #   - cron:  '0 1 * * *'

jobs:
  crush:
    runs-on: ubuntu-latest
    name: Assisted code review
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: master
      - name: Crush Scan
        uses: jemurai/crush@v0.6.1
        with:
          dir: .

That’s it! Now we’re cooking with gas, as … someone used to say.

Now, we’re still working on better integrations where we can make Crush create a better report or automatically log GitHub issues. But the basic idea is so easy and clear that it would be silly not to start playing with these types of automations now!