Why Developers Matter For Security

November 7, 2019

This post discusses the critical importance of actively engaging software developers in security activities, and presents a few timely real-world examples where companies did not do this sufficiently and paid the price.

Robinhood Gold

The first example this week is from Robinhood, a low-cost trading platform. It turned out that users could essentially leverage more money than they had in their accounts. Some users were calling it an infinite cheat code! Here’s how a Bloomberg writer explained it.

Here’s how the trade works. Users of Robinhood Gold are selling covered calls using money borrowed from Robinhood. Nothing wrong with that. The problem arises when Robinhood incorrectly adds the value of those calls to the user’s own capital. And that means that the more money a user borrows, the more money Robinhood will lend them for future trading.
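To make the flaw concrete, here is a minimal sketch in Python of the broken versus corrected buying-power calculation. Everything here is invented for illustration (the function name, the 2x multiplier, the dollar amounts); it is my reading of the reports, not Robinhood’s actual logic.

```python
# Hypothetical sketch of the Robinhood Gold flaw. Names, numbers, and
# the 2x margin multiplier are invented for illustration only.

def buying_power(cash, call_premium, short_call_liability, leverage=2):
    # Broken reading of the reports: premium received from selling
    # covered calls was treated as fresh equity, while the matching
    # obligation (the short call) was ignored.
    equity_broken = cash + call_premium
    # Corrected: the short call is a liability that offsets the
    # premium, so it adds no new equity to lend against.
    equity_fixed = cash + call_premium - short_call_liability
    return equity_broken * leverage, equity_fixed * leverage

broken, fixed = buying_power(cash=2_000, call_premium=2_000,
                             short_call_liability=2_000)
print(broken)  # 8000 -- each borrow/sell cycle inflates leverage further
print(fixed)   # 4000 -- leverage stays tied to the user's real equity
```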

This is a great example where we can pretty easily anticipate the bad outcome by putting on our villain hat. What would you want to do on a trading platform? Get money. Or make trades with money that isn’t yours. The latter is exactly what was described in the reddit post that exposed the issue.

I would assume that Robinhood has an AppSec program, that it uses static code analysis, and that it has had pentesting done against its systems. If it hasn’t done these things, given the industry it is in, it is probably on the wrong side of any number of regulations. But obviously these activities did not identify this issue! That could be because the Gold tier was out of scope, or because the pentesters didn’t understand the capabilities of the trading system.

On the other hand, a developer who understood what this feature meant, and who had coded the system to support it, should know whether this behavior was possible. All we really need to do is educate and empower developers to raise their hands and start a team discussion about the proper handling of this type of scenario. Of course, business analysts and stakeholders need to participate to define the correct behavior.

Twitter Spies

Twitter also had some interesting problems this week. It was exposed that several employees had likely been in the employ of other governments or interests. Specifically, they were charged with spying for Saudi Arabia.

This raises a critical question about who should have access to data. Inside Twitter, there are undoubtedly ways for people to get access to powerful internal tools, reporting systems, and raw data. That in itself is not an issue. You can’t run a platform like Twitter without having the data, tools to manage and manipulate the data, and people with access to those tools.

What you can do, though, is:

  • Strictly limit who can see the data
  • Track who does see data (see the sketch after this list)
  • Store data with controls commensurate to the exposure
  • Build active monitoring functions (human or otherwise) to detect misuse
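As a minimal sketch of the second item, here is what application-level access tracking might look like in Python. The decorator and function names (audited, lookup_account) are invented for illustration; a real system would ship these records to tamper-resistant storage and feed them into monitoring.

```python
import functools
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")

def audited(action: str):
    """Record who accessed what, and when, for every call to the
    wrapped data-access function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(employee_id, *args, **kwargs):
            audit_logger.info(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "employee": employee_id,
                "action": action,
                "target": [str(a) for a in args],
            }))
            return func(employee_id, *args, **kwargs)
        return wrapper
    return decorator

@audited("read_account_pii")
def lookup_account(employee_id, account_handle):
    # Hypothetical internal-tool endpoint: the audit record is written
    # before any data is returned, so every read leaves a trail.
    return {"handle": account_handle}
```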

The fact that Twitter did not detect that insiders were exploring thousands of accounts and capturing the personal information associated with them from inside the business reflects a gap in internal controls, likely at the application level.

It is chilling to think that these spies were specifically searching out dissidents and funneling information from inside Twitter to a government that may have murdered the journalist Jamal Khashoggi.

I wonder what other governments have agents in place at Twitter … not to mention Google, Facebook, Apple, AT&T, Disney, Amazon, etc. I would assume at least a few have them in all of the above!

The answer, again, is to have developers trained and empowered to think about these angles on security so that they can properly implement controls that make it difficult to get access to this data, track access for later review, and set off alarms when the data is accessed.

Trend Micro Insiders (via CRM?)

Trend Micro, a security company, had its own security issue this week: it revealed that an employee had accessed customer information with criminal intent and sold it to a “currently unknown third-party malicious actor”.

It is most likely that the Trend Micro customer information is in a commercial CRM system (e.g. Salesforce or HubSpot) that was either not configured properly to prevent the access or did not alert the Trend Micro team about the unusual access pattern. The incident took place in August, and 68,000 records were impacted. This sounds like something the CRM developers should have known was possible, and something Trend Micro should have known about before November through channels other than customer complaints containing substantial legitimate information (real details about the customers’ licenses).

This is not to point a finger at Trend Micro but to reflect on what the “solution” is. My take is that it is building security auditing and metrics into software so that incidents like this aren’t discovered months after the fact but are identified while they are in progress.
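As one hedged sketch of what building auditing and metrics into software could mean here: count record reads per employee and raise an alarm when the volume is far outside the norm. The threshold and all names below are invented for illustration; an insider heading toward 68,000 records would trip this long before the damage was done.

```python
from collections import Counter

# Invented threshold for illustration: if typical support staff touch a
# few dozen customer records a day, a few hundred is a loud signal.
DAILY_ACCESS_THRESHOLD = 500

access_counts = Counter()

def record_access(employee_id):
    """Called from the data-access layer on every customer record read."""
    access_counts[employee_id] += 1
    if access_counts[employee_id] == DAILY_ACCESS_THRESHOLD:
        page_security_team(employee_id, access_counts[employee_id])

def page_security_team(employee_id, count):
    # Stand-in for a real alerting integration (email, Slack, SIEM).
    print(f"ALERT: {employee_id} read {count} customer records today")
```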

Unfortunately, the developers of the CRM system probably weren’t thinking about how things could go wrong or didn’t have the time and money to spend building an appropriate solution. Again, developers matter a lot when it comes to security.

Developer DNA

These real-world examples show that there are significant advantages to making security part of a development organization’s culture or DNA.

Ever since I started Jemurai in 2012, I knew abstractly that if you wanted to talk to someone who really knew what code was doing, the best place to go was directly to the developer.

Sure, a BA, QA engineer, VP, stakeholder, AppSec person, or pentester might know, but the developer has to know. They coded it.

In secure dev trainings, we teach not only security topics, such as the OWASP Top 10 (see this post for more), but also processes and habits that developers can use to keep security top of mind. Examples range from code review checklists to writing abuse cases in a villain persona.
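For instance, the Robinhood scenario above could have been written down as a villain-persona abuse case and captured as a test. This is a hedged sketch: the Account model is invented, minimal, and only here to make the villain’s story executable.

```python
# Hypothetical abuse-case test. The Account model is invented for
# illustration -- just enough to make the villain's story executable.

class Account:
    def __init__(self, cash):
        self.cash = cash
        self.call_premium = 0.0
        self.short_call_liability = 0.0

    def sell_covered_calls(self, premium):
        self.call_premium += premium
        self.short_call_liability += premium  # obligation offsets premium

    def buying_power(self, leverage=2):
        # Leverage is extended against real equity only.
        equity = self.cash + self.call_premium - self.short_call_liability
        return equity * leverage

def test_villain_cannot_loop_leverage():
    """Villain persona: 'I sell covered calls with borrowed money and
    expect the premium to raise my limit so I can borrow even more.'"""
    account = Account(cash=2_000)
    account.sell_covered_calls(premium=2_000)
    assert account.buying_power() <= 4_000  # no infinite cheat code
```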

Ultimately, it takes a lot of work and a cultural shift to create a team where developers actively embrace security. It is not easy. We always work to meet developers halfway by providing easier tooling and helpful libraries, and by being reasonable about expectations. We are careful not to inundate developers with false positives, because that is the worst thing you can do culture-wise: insisting that developers fix things that don’t matter undermines your own credibility and the security culture.

Note that I am not advocating that we immediately hold developers accountable for security. Sadly, this skillset is not established enough that we could turn on a dime and start requiring developers to practice it. But it is something we need to start working toward.

Conclusion

We love developers. We embrace bringing security to developers and have had awesome results in real-world scenarios, building shared tooling and culture to promote better security. Developers are a key part of the solution to real-world problems like the ones we saw this week.
