Feature Spotlight: Network Scanning

As part of securityprogram.io, we offer network vulnerability scanning. Most standards (e.g., PCI) require that you do at least quarterly vulnerability scanning. Vulnerability scanning is important for identifying resources on your networks and determining whether they have holes that an attacker could exploit.

Vulnerability scanning is a pretty basic activity that every organization with any internet facing systems should have in place. That is why we include it in SPIO. Otherwise, clients have to go find a scanning vendor and spend who knows how much extra time and money getting it in place.

What Makes A Great Scanner?

Our founder, Matt Konda, spent 4 years building a PCI ASV certified vulnerability scanner. Excellent scanning products on the market are differentiated by effective signature mechanisms, sophisticated reports, false positive management, integrated endpoint agents/management and low time to signature for newly released CVEs.

The more deeply you integrate vulnerability management, the more you benefit from sophisticated workflow and management features. Some scanners also do more checks and fuzzing around web applications, versus just network-level checks. So in some cases, having a great scanner is worth it.

The problem is, in almost all cases, the scanning is pretty dumb. It is just checking for open ports on a host, reading the banner, using something like a regular expression (regex) to extract a version number and then comparing it to a database of known vulnerabilities. In other words, at its core, the technology isn't that sophisticated.
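To make that concrete, here is a minimal sketch of the core loop just described: read a banner, pull a version out with a regex, and look it up in a vulnerability database. The product patterns and the two-entry "database" are illustrative stand-ins for the signature feeds a real scanner maintains.

```python
import re

# Illustrative stand-in for a real vulnerability feed (e.g., NVD data).
KNOWN_VULNERABLE = {
    ("OpenSSH", "7.2"): ["CVE-2016-6515"],
    ("Apache", "2.4.49"): ["CVE-2021-41773"],
}

# Match a product name followed by a version number in a service banner.
BANNER_RE = re.compile(r"(OpenSSH|Apache)[/_](\d+\.\d+(?:\.\d+)?)")

def check_banner(banner: str) -> list:
    """Extract product and version from a banner and look up known CVEs."""
    m = BANNER_RE.search(banner)
    if not m:
        return []
    product, version = m.group(1), m.group(2)
    return KNOWN_VULNERABLE.get((product, version), [])
```

That is essentially the whole trick: the hard parts are keeping the signature database current and managing the findings, not the matching itself.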

SPIO Scanning Features

The features we include around scanning are focused on the core nuts and bolts of the offering. To make the offering robust and as up to date as possible, we leverage a widely used open source vulnerability scanning tool. As it turns out, this can be tricky to set up and optimize - so our customers find it nice that they don't have to worry about it.

As an SPIO user, you can manage your environments (what should be scanned) in the application. You can then view recent reports, which are provided in PDF and CSV format for easier handling. We keep track of past reports so that you can always show that you've done your quarterly scanning duties.

Maybe one of the most important related features is that our team will help you identify which issues are real and need to be addressed. Vulnerability scanners are notorious for creating a lot of false positive findings, and sifting the real issues from the mass of common findings takes a trained eye. What this looks like for our customers is that we set up the initial environments (we can even help you do DNS discovery and the like to identify scan targets), and then each quarter clients get escalated the items that require attention.

Let Us Assist You!

In the Assisted Tier of SPIO, our team helps you understand the scan results! This ensures that your team is able to understand and effectively fix the real issues. It also means you don't waste your time on false positives!

We tried to make our vuln scanning as simple and pragmatic as possible. Whether you have us help you, or you do it yourself, the tools are right there for you in securityprogram.io.

Feature Spotlight: Vendor Tracking

Many of our securityprogram.io customers find us because they are being subjected to a larger company's vendor management process and they don't really know what to do.

One of our major goals as a company is to systematically help small, innovative companies develop security maturity so that they can compete and win against bigger companies.

An important part of developing security maturity is managing your own vendors and the potential risks they introduce. In this post we'll talk about vendor risk, common processes for dealing with it and how we handle it in our tool.

Did you know that with SPIO Assisted, we can do vendor tracking for you?

Vendor Risk

Does anyone remember the Target breach disclosed in 2013? It stands out as being a very large breach (40M credit cards) but also for having been one of the first highly publicized breaches where the entry point turned out to be a third party HVAC vendor. This may have been the moment when attention started to focus more deeply on third parties.

The problem, of course, is that you can build a great system and do all the right things for security in your system and your code - but if you integrate with or build upon something that isn't secure, in many common cases, you inherit their weaknesses. People don't want to buy things that they could easily know are weak.

This has gone beyond being a Good Idea™ and become something more like a mandatory minimum bar for doing business with most bigger companies.

We have seen all kinds of risky vendors:

The Process of Vendor Tracking

The first step in dealing with vendors is to figure out who your vendors are and how you should track them. We often ask finance for a list of vendors. Then we try to get pulled into procurement processes so that we'll know that a vendor is being vetted and onboarded by the accounting team.

You wouldn't believe how common it is that organizations use vendors without realizing it. Maybe someone in engineering set up a "free" account. Maybe someone in IT paid for a backup service with their company credit card. Getting a handle on who your vendors even are can be trickier than you might think.

Once you know who your vendors are, you need to think about what you need to know about them. Do they handle your most sensitive data? Do they handle it carefully? Do you need an audit to confirm that they do?
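A sketch of how those questions can drive the depth of review. The tier names and inputs here are hypothetical; any real program would tune both:

```python
def review_depth(handles_sensitive_data: bool,
                 has_audit_report: bool,
                 critical_to_operations: bool) -> str:
    """Decide how deep vendor diligence should go based on risk signals."""
    if not handles_sensitive_data and not critical_to_operations:
        return "basic"           # lightweight questionnaire, periodic re-check
    if has_audit_report:
        return "review-audit"    # read the SOC 2 / ISO 27001 report, track exceptions
    return "full-questionnaire"  # detailed questionnaire plus follow-up calls
```

The point is not the specific rules but that the decision is written down and applied consistently to every vendor.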

The diagram below illustrates an example flow chart you could build for your vendor management program.

Vendor Management Flow

Tracking Vendors

One way to help make sure you are doing the right diligence on vendors is to use an application to help structure the process. That's why we built a vendor management module into securityprogram.io.

SPIO Add Vendor

The Vendor Tracker makes it easy to:

Vendor Questionnaire

In the big scheme of things, Vendor Tracking is a pragmatic and minimal feature in SPIO. There are platforms you can buy that make it easy to administer very complex vendor management programs. We are not trying to compete with those, but to give smaller companies the basics that they need.

Let Us Assist You!

In the Assisted Tier of SPIO, our team helps you with vendor management. This ensures that your process is consistent and effective. It also makes it faster because many of our clients use the same vendors, so we don't necessarily have to do a full deep dive on diligence for every one of them.

For this to be effective, we still need to get plugged into your procurement process so that we know that a vendor is being onboarded, or renewed. But once we know that, and how they are being used, we can do most of the evaluation on our own. This can be a major time saver for our customers.

We tried to make vendor tracking as simple and pragmatic as possible. Whether you have us help you, or you do it yourself, the tools are right there for you.

Pipeline Security Automation

This post talks about how we approach security automation in BitBucket Pipelines. It also introduces some new open source tools we built and use in the process.

Security In Pipelines

We’ve written before about using GitHub Actions and provided an Action friendly “workflow” with our Crush tool.

At a high level, Pipelines and Actions just do some computing work for you in Atlassian's or GitHub's data centers. Often that work is related to source code, testing, or deployment.

Both leverage containers heavily for sandboxing the work that happens in the build and provide some level of abstraction that you can use to build your own pipes or workflows.

On some level, we’re using them to do the same work we used to use Jenkins or CircleCI or Travis to do.

We like automating security in Pipelines or Actions because it makes it easy to build security into your natural development workflows.

Real World

So what are we really talking about? What do we want to have happen?

We want to be able to apply any automated tools flexibly in a controlled environment that already has our code artifacts and knows about the key events.

Triggers for actions and pipelines can be:

The tools we want to run may range from:

Of course, we can run security unit tests and trigger other integrations (eg. SonarCloud) as well.
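As a flavor of what a simple security check in a pipeline can look like, here is a minimal secret-pattern scan. Tools like Crush do this far more thoroughly; the two patterns shown are illustrative, not exhaustive:

```python
import re

# Patterns for obvious hard-coded secrets. Real tools use many more signatures.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA|EC) PRIVATE KEY-----"),  # private key header
]

def find_secrets(text: str) -> list:
    """Return secret-looking matches found in a blob of source text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A check like this can run as a unit test or a pipeline step and fail the build when it finds anything.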

Just remember:

Tools are necessary but must be used by skilled operators.

I can’t tell you how often I see security tools installed but not used effectively.

OK Show Us Code

Here is an example of how we have a pipeline configured:

    pipelines:
      branches:
        securityautomation/*:
          - parallel:
              - step:
                  name: Run Crush
                  image: golang:1.16
                  script:
                    - go get -u github.com/jemurai/crush@v1.0.5
                    - crush examine --threshold 7 . > crush.json
                  artifacts:
                    - crush.json
              - step:
                  name: Dep Check
                  image: openjdk:8
                  script:
                    - wget https://github.com/jeremylong/DependencyCheck/releases/download/v6.1.5/dependency-check-6.1.5-release.zip
                    - unzip dependency-check-6.1.5-release.zip
                    - rm dependency-check-6.1.5-release.zip
                    - ./dependency-check/bin/dependency-check.sh --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
                  artifacts:
                    - dependency-check-report.json
          - step:
              name: Report
              image: golang:1.16
              script:
                - go get -u github.com/jemurai/depcheck2off@v1.0.0
                - go get -u github.com/jemurai/off2jira@v1.0.0
                - depcheck2off ./dependency-check-report.json > ./depcheck.off.json
                - off2jira ./depcheck.off.json
                - off2jira ./crush.json

Let’s walk through it and talk about what is happening.

First, the branches part tells BitBucket when to run the pipeline. In this case, it will be on any push to a branch under securityautomation.


We like doing this because it helps to isolate your security-related changes and ensures that what you are finding doesn’t break other builds. In the long run, we want to have security tooling run more often.

Then we need to understand that there are three steps defined in the pipeline: Run Crush, Dep Check, and Report.

Crush and Dependency Check are both code analysis tools, so they can run in parallel. Hence the parallel: before their step: definitions.

To run Crush, we pull a base golang image (image: golang:1.16), install Crush, and run it. We drop the output into an artifact, which means it will be available in later steps.

- step:
    name: Run Crush
    image: golang:1.16
    script:
      - go get -u github.com/jemurai/crush@v1.0.5
      - crush examine --threshold 7 . > crush.json
    artifacts:
      - crush.json

Running Dependency Check is similar. You can see that we’re pulling a release from GitHub and unzipping it, this time on an openjdk image. Then we invoke Dependency Check and put the report in an artifact.

- step:
    name: Dep Check
    image: openjdk:8
    script:
      - wget https://github.com/jeremylong/DependencyCheck/releases/download/v6.1.5/dependency-check-6.1.5-release.zip
      - unzip dependency-check-6.1.5-release.zip
      - rm dependency-check-6.1.5-release.zip
      - ./dependency-check/bin/dependency-check.sh --failOnCVSS 6 --exclude **/dependency-check/**/*.jar -f JSON --prettyPrint --scan .
    artifacts:
      - dependency-check-report.json

The next part “Report” is interesting and we’re going to put it in a whole new section.

Reporting and Rethinking Security Tooling

Once we have Crush and Dependency Check output, we want to do something with it. We could leave it in BitBucket as an artifact and refer to the plain text file. That is better than not running the tools at all, but we also want to make these findings visible and integrate them into our normal processes.

Here’s how that looks in the pipeline we defined where we’re pushing the issues identified by Crush and OWASP Dependency Check to JIRA:

- step:
    name: Report
    image: golang:1.16
    script:
      - go get -u github.com/jemurai/depcheck2off@v1.0.0
      - go get -u github.com/jemurai/off2jira@v1.0.0
      - depcheck2off ./dependency-check-report.json > ./depcheck.off.json
      - off2jira ./depcheck.off.json
      - off2jira ./crush.json

Here we are installing and using two new golang-based tools: depcheck2off and off2jira.

The basic philosophy is to build small tools that do one thing and do it in a simple and predictable way. This goes directly against our own past approach with OWASP Glue, which we retired.

With Glue, you could run different tools and push the output to a variety of trackers. The problem was that you ended up with a JVM, Python, Ruby, Node and an ever-growing Docker image. That made it hard to incorporate into pipelines efficiently. We also had to maintain everything and keep everything working to get an update pushed. It was a monolith.

With the Jemurai autom8d set of tools, we’re taking more of a classic Unix philosophy and building small purpose built utilities that can be put together in a number of ways.
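As a sketch of what one of these small, single-purpose converters does, the following flattens an OWASP Dependency Check JSON report into simple finding records. The output shape is illustrative only; it is not the actual format depcheck2off emits or off2jira consumes.

```python
def depcheck_to_findings(report: dict) -> list:
    """Flatten an OWASP Dependency Check JSON report into finding records.

    Dependency Check reports contain a "dependencies" list, and each
    dependency may carry a "vulnerabilities" list. The record shape below
    is a hypothetical intermediate format for this sketch.
    """
    findings = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            findings.append({
                "tool": "dependency-check",
                "file": dep.get("fileName", ""),
                "id": vuln.get("name", ""),
                "severity": vuln.get("severity", ""),
            })
    return findings
```

Because each tool reads one format and writes one format, any converter can be swapped out or chained without rebuilding the rest of the pipeline.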

So far we have:

We already have plans to build:

We also want to adapt and integrate some other code we have that does an inventory and metrics across repositories.

We’d love to hear from you about others that would be useful! We can help with this type of automation while doing so with open tooling you can leverage for the long term.

Leverage Pipelines

The great thing about pipelines (and actions) is that once you understand them, you can effectively push security tooling to a large number of projects quite easily.

Note that there are compute charges associated with running pipelines (or actions).

We have also had good success helping companies who leverage BitBucket or GitHub cloud because we can actually help commit the code that starts the automation project off. Combined with some training and a retained ongoing support setup - we can enable clients to very quickly improve their internal app security posture.


How to Improve the Security of Your Applications: A Starting Point

When we implement security programs, we often advise clients to build an inventory of their applications. There are a lot of things we can do when we know what our inventory is. We can do this right in the available tools developers are already using. This post covers one way to do this.


When we know what applications we have, we can effectively plan what work needs to be done for each one.

If we have 10 apps with secrets hard coded in the repos, we can track that until all 10 are remediated.

If we have 1,000 apps that need to have dependencies updated, we can start to put a plan in place that allows us to do that over time.

Most of the time, the companies we know don't do a great job of tracking information about applications, automating its collection, or making that data accessible and visible.


Most projects we see these days are using some git variant - BitBucket, GitLab, GitHub, ProjectLocker, etc. Since developers are already using these platforms to store code, what if we just put the meta information in the repo with the code?

So imagine if we add a new file to every repo: /appmeta.json.

Now we can write a program to list all of the repos for an org and pull out their security state. Well, as you will see, the security state also includes more general information, which is why we called it appmeta.json instead of security.json. But of course, you could adapt this practice and do all of this yourself with just the properties you care about in the scope you want.
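A minimal sketch of the compilation step, assuming you have already fetched each repo's appmeta.json contents (e.g., via the GitHub contents API, which is omitted here) and that the files look like the example discussed in this post:

```python
import json

def compile_inventory(repo_files: dict) -> list:
    """Build an inventory summary from {repo_name: appmeta.json contents}.

    Pulls a few top-level and security attributes per repo; the field
    selection here is illustrative.
    """
    rows = []
    for repo, raw in sorted(repo_files.items()):
        meta = json.loads(raw)
        security = meta.get("security", {})
        rows.append({
            "repo": repo,
            "stage": meta.get("stage", "unknown"),
            "security_tier": security.get("tier"),
            "last_pentest": security.get("pentest", ""),
        })
    return rows
```

From a list like this it is a short step to a dashboard or a CSV export for planning remediation work.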


What meta information do we care about?

At a high level:

Security is just part of it.

Consider the following example, which we will go through section by section:

"name": "securityprogram.io",
"description": "A platform for implementing security programs.",
"stage": "live",
"team": "SPIO",
"slack": "securityprogramio",
"github": "github.com/jemurai/spio",
"plan": "https://dev.azure.com/Jemurai/SecurityProgram.io/_backlogs",
"adr": "docs/adr/"
"support": {
"slack": "securityprogramchat",
"email": "support@securityprogram.io",
"github": "github.com/jemurai/spio",
"documentation": "https://github.com/Jemurai/spio"
"ops": {
"email": "support@securityprogram.io",
"github": "github.com/jemurai/spio",
"documentation": "https://github.com/Jemurai/spio"
"continuity": {
"tier": 2,
"comment": "Important for SPIO business but not business critical
for clients.",
"email": "support@securityprogram.io",
"plan": "link"
"security": {
"tier": 1,
"summary": "Contains security information about clients.
Very sensitive.",
"email": "support@securityprogram.io",
"github": "github.com/jemurai/spio",
"threatmodel": "",
"soxdata": false,
"pcidata": false,
"hippadata": false,
"piidata": true,
"codereview": "2/24/2020",
"training": "4/14/2020",
"linting": "3/01/2020",
"securityrequirements": "2/24/2020",
"securityunittests": "",
"dependencies": "3/05/2020",
"staticanalysis": "",
"dynamicanalysis": "",
"pentest": "planned",
"signal": "",
"audit": ""


At the top level we have:

Name: The name of the project
Description: A description
Stage: What lifecycle stage is the system in?
Team: The team responsible for the project.


Then we have a section about the development of the app. This includes:

Slack: The development Slack channel
GitHub: The URL of the project in GitHub
Plan: The location of the development plan
ADR: Architecture decision records

The idea is to make it easy for this information to be collected and distributed beyond the development team, who undoubtedly already has access to these things and hopefully knows about them.


For support, we have similar but different attributes:

Slack: The Slack channel for support
Email: How to reach the support team via email
GitHub: URL for issues or other project info
Documentation: Where to get support documentation

If you are using Intercom or Zendesk or other support tools, you can include those URLs here so that it is easy for everyone to find support.


In some cases, we may have an ops team that works in a different set of tools. We can capture them here for a given project. In the example in this post, it is basically the same as Dev and Support.


BCP stands for business continuity planning. Having information about the plan, contacts, recovery, tier, etc. makes it easy to standardize and find the right people when needed.

Tier: The tier of the app. Typically 1 is most critical. (Numeric)
Comment: Text around the tier.
Email: Email to use to contact the BCP-related team.
Plan: Link to the response plan.


The security properties reflect the security state of the application.

Tier: Numeric tier of app. (Numeric)
Summary: Text around the tier and app
Email: Who to email about security for the app.
GitHub: Where the code lives
ThreatModel: Link to the threat model (e.g. ThreatDragon)
soxdata: Does the app have Sarbanes-Oxley related data? (Y/N)
pcidata: Does the app have credit card data? (Y/N)
phidata: Does the app have personal health data? (Y/N)
piidata: Does the app have personally identifiable information (PII)? (Y/N)
codereview: When was the last code review? (Date)
training: When was the team last trained on security (OWASP Top 10)? (Date)
linting: When was linting last run? (Date)
securityrequirements: Security requirements are incorporated up to what date? (Date)
securityunittests: Security unit tests are running up to what date? (Date)
dependencies: Automated dependency checking was run what date? (Date)
staticanalysis: When was static analysis last run? (Date)
dynamicanalysis: When was dynamic analysis last run? (Date)
pentest: When was the last pentest? (Date)
signal: Signal function up to date as of? (Date)
audit: Audit function up to date as of? (Date)

As you can see there is a lot here. You could remove attributes you don’t care to track. You could add new ones that you want to track.
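One thing automation can do with the date-valued attributes is flag activities that have gone stale. A minimal sketch, assuming the M/D/YYYY dates used in the example above and a hypothetical one-year freshness window:

```python
from datetime import date, datetime

# Date-valued security attributes worth checking for freshness.
DATE_FIELDS = ["codereview", "training", "linting", "dependencies",
               "staticanalysis", "dynamicanalysis"]

def stale_activities(security: dict, as_of: date, max_age_days: int = 365) -> list:
    """Return the activities whose last-run date is missing or too old."""
    stale = []
    for field in DATE_FIELDS:
        value = security.get(field, "")
        try:
            last_run = datetime.strptime(value, "%m/%d/%Y").date()
        except ValueError:
            stale.append(field)  # empty or unparseable counts as stale
            continue
        if (as_of - last_run).days > max_age_days:
            stale.append(field)
    return stale
```

Run across every repo's appmeta.json, a check like this turns the inventory into an actionable backlog rather than a static record.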


We are considering building some automation (think a tool written in Golang or JS) that you could point at a GitHub Organization and it would iterate through the repositories, pull this file and compile data - maybe even a semi-static web view that would look like a rich inventory… if you're interested, let us know. Maybe we can give you early access to help test.