HashiCorp Field CTO Weekly: It's all Pipes and an Ode to Accountants
Volume 101
Greetings dear reader.
As this externally published newsletter is still relatively new, it’s worth reiterating what we’re going on about here. This newsletter is traditionally published by the Field CTO team here at HashiCorp and looks over news from the previous week(s), giving general commentary on what we’ve observed and how it relates to the parts of the industry HashiCorp is concerned with. And sometimes we just riff on whatever tickles our fancy.
As a brief introduction, I’m Jake Lundberg, Field CTO covering the western US and Canada. My career has mainly focused on all manner of IT operations, with scope ranging from the US Air Force to large corporations and very small startups. I came to HashiCorp from the AdTech industry as a user of Vagrant, Consul, Terraform and a dabbling of Vault. And because I started here shortly after Justice League was released and people think I look like Jason Momoa (poor Jason), I’ve been honored to carry the nickname “Aquaman” during my tenure here.
Alright then, let’s get on with it!
CryptoCurrency Two Ways
I can’t help but be elated about the upcoming change to Ethereum that shifts it from proof-of-work to proof-of-stake, and its potential implications for overall energy consumption. While I can’t claim superior levels of efficiency in everything I do, whenever I have a chance to reduce the amount of energy it takes to do a thing and produce the least amount of waste, I’m happy about it. The move to proof-of-stake could reduce electricity consumption from something comparable to the total electricity usage of Argentina to something more mundane, like the processing needed for SETI@home (may you Rest In Peace).
Now, whether or not you think digital currency is actually going to be a thing in the long run is certainly worth examining. However, count the number of “digital transactions” (e.g. credit/debit card, PayPal, Venmo, Revolut, Alipay, WeChat, etc.) versus cash transactions you’ve had in the last week. Even in a cash-friendly household like ours, the scale tips heavily towards digital (and certainly not influenced by rewards programs at all). We’re already used to the process, so does the currency beneath it really matter? </TROLL>
Australia, not to be shown up in the crypto news, goes old skool and creates currency with cryptography embedded. Congrats to the Tasmanian youngster who solved 4 out of 5 of these puzzles in around an hour. Quite the clever devil, aren’t you? Now if only we could print our respective tax codes on a coin and have some teenage kid crack them. Gauntlet thrown, Australia.
Supply Chain Attacks Are Here To Stay
Even though we as an industry are working hard towards running Zero Trust operations, not all of us are quite there yet. For those of us working on open source projects, a rather concerning supply chain attack against GitHub projects has emerged. What’s scary is this quote:
We’re told that the code at issue doesn’t necessarily have to merge. It’s the merge request that allows the attacker to compromise the repo by exposing an access token that enables future abuse.
And while I’m focused on open source projects because they take code in from outside developers, this kind of pattern could also be an issue in an internal project with a nefarious employee. The good news is that GitHub has a fairly extensive page on Security Hardening for GitHub Actions. There’s also a page on integrating OpenID Connect with HashiCorp Vault to provide just-in-time credentials for your pipelines.
A related issue centers around storing secrets in plain text in the actual application code base of mobile apps.
“We discovered that over half (53%) of the apps were using the same AWS access tokens found in other apps,” they said in an analysis on Sept. 1. “Interestingly, these apps were often from different app developers and companies. [Eventually] the AWS access tokens could be traced to a shared library, third-party SDK, or other shared component used in developing the apps.”
This is also where a product like HashiCorp Vault shines. It allows for using very specific least-privilege identities to retrieve secrets and helps remove the mistakes caused by hard-coding cloud or other API credentials into applications. Vault has a concept of “dynamic credentials” that can grant short-lived tokens for any cloud provider, with the additional benefit of time-based, usage-based and API-driven revocation. When building your developer platforms, consider how to remove all credentials from version control and create least-privilege access with your identities. Otherwise you might have leakage between your pipelines, and you certainly don’t want that (warning, contains Seinfeld). Different pipes go to different places indeed.
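To make the dynamic-credentials idea concrete, here’s a minimal toy sketch of the pattern: every credential is minted with a TTL and can be revoked early via an API call. All names here are hypothetical; this is not Vault’s actual implementation or API, just the shape of the idea.

```python
import secrets
import time


class LeaseManager:
    """Toy model of dynamic, short-lived credentials (hypothetical; not Vault's API)."""

    def __init__(self):
        # lease_id -> (credential, expiry timestamp)
        self._leases = {}

    def issue(self, ttl_seconds):
        """Mint a fresh, random credential that expires after ttl_seconds."""
        lease_id = secrets.token_hex(8)
        credential = secrets.token_urlsafe(16)
        self._leases[lease_id] = (credential, time.time() + ttl_seconds)
        return lease_id, credential

    def is_valid(self, lease_id):
        """A credential is valid only while unexpired and unrevoked."""
        lease = self._leases.get(lease_id)
        return lease is not None and time.time() < lease[1]

    def revoke(self, lease_id):
        """API-driven revocation: kill the credential immediately."""
        self._leases.pop(lease_id, None)
```

The point of the pattern is that nothing long-lived ever lands in a repo or pipeline config: a compromised credential is only useful until its TTL expires or someone revokes the lease.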
Shift Right as Well as Left
Assuming you have secured your pipelines against leaking credentials, there are still other process improvements for hardening them. I still sit on the side of burdening developers less with operations, security, finance, policy, data, etc., and instead giving those other functions more capabilities to automate their jobs. As an ops person who learned to code partway into my career, learning to code was just part of the discipline. Using version control, CI/CD pipelines and proper testing (and to be honest, this was probably my weakest area) were some things that really helped me improve the quality of infrastructure releases.
I like the suggestions DoorDash gives in **6 Pull Request Tricks You Should Adopt Now** as learning some of these would have certainly made my life easier. The tl;dr is:
- Write Descriptive and Consistent Names
- Create a Clean PR Title and Description
- Keep PRs Short
- Manage Disagreements Through Direct Communication
- Avoid Rewrites by Getting Feedback Early
- Request Additional Reviewers to Create Dialogue
And while this is a highly tactical bit of information, the important thing is that techniques like these are widely published and can be reviewed by your organization to determine whether they’re a good fit. People are a critical part of your supply chain, and continuous enablement is fundamental to their success. One fascinating concept for helping with that enablement is using internal Developer Advocates to enable your workforce.
The foundation here is similar to what is happening in most organizations that have had a go at DevOps for a bit: building a developer platform that lets developers put code in and get a running application out. If you haven’t quite reached developer Nirvana, getting there requires strong community and communication, according to the article. My favorite suggestion is around communication, though.
Two-way communication also requires listening. An organization can build a platform and tools for developers, but first, the organization has to listen to developers to understand their needs before building these tools. Then, once the tools are developed, communicating with the developers is just as important.
I’m constantly revisiting the skill of Active Listening; it’s made a tremendous impact on both my career and personal life.
I Like Money
Last week, Brent mentioned our State of the Cloud Survey, but I wanted to focus on one particular area: the one where 94% of respondents say they are wasting money in the cloud.
Overspending in the cloud isn’t just common, it’s ubiquitous. More than 9 in 10 respondents noted avoidable cloud spend, most commonly due to some combination of idle or underused resources (66%), overprovisioned resources (59%), and lack of needed skills (47%).
This is highly likely why we have another <function>Ops contender in the operations world: FinOps.
According to the FinOps Foundation page:
FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.
Or with a bit of editing:
FinOps is data-driven spending decisions.
Now, my mom and step-dad are both accountants, and when I couldn’t sleep, I’d have them teach me about accounting. Before I dozed off, however, I picked up a few things, and a lot of what FinOps is trying to do sounds quite a bit like Cost Allocation accounting, and specifically Direct Costs accounting with continuous feedback loops.
But we have a fundamental disconnect between data center and cloud/SaaS operations: in the cloud, we often find out the Cost of Goods AFTER we’ve consumed them, and then it’s likely we’re not sure what those costs are actually associated with. I suppose you could argue the same for data center operations. Did we actually know how much data, compute and memory were associated with specific business units or their applications? Most likely not, especially once virtualization hit the scene, but at least the spending was approved beforehand and the money was allocated and spent from a specific budget.
The big gain with almost all cloud-based resources is that you can tag or label them with metadata like application, business unit, chargeback ID, etc, and then aggregate/associate that metadata with the actual cost to run the resources.
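That aggregation step is simple once tags exist. A sketch in Python, using made-up resource records and costs (the tag key, IDs and dollar figures are all hypothetical), shows how per-resource spend rolls up by tag value, with untagged spend surfacing as its own bucket:

```python
from collections import defaultdict

# Hypothetical billing export: each resource carries tag metadata and a cost.
resources = [
    {"id": "i-001", "tags": {"business_unit": "adtech"},  "monthly_cost": 412.50},
    {"id": "i-002", "tags": {"business_unit": "finance"}, "monthly_cost": 98.00},
    {"id": "i-003", "tags": {"business_unit": "adtech"},  "monthly_cost": 37.25},
    {"id": "i-004", "tags": {}, "monthly_cost": 120.00},  # untagged: unallocatable spend
]


def cost_by_tag(resources, tag_key):
    """Aggregate cost per tag value; anything without the tag lands in 'untagged'."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(tag_key, "untagged")] += r["monthly_cost"]
    return dict(totals)
```

Running `cost_by_tag(resources, "business_unit")` attributes the spend per business unit and, just as usefully, shows exactly how many dollars nobody can account for.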
Infrastructure as Code (IaC) such as Terraform, combined with a Policy as Code (PaC) framework like HashiCorp Sentinel (or any number of alternatives like Open Policy Agent), can both set the structure for easily assigning metadata to resources (IaC) and enforce that the metadata is actually assigned at run time (PaC). Further, because the cloud vendors also let you query the base price of all resources via API, you can both estimate and enforce cost measures based on business logic.
But why do you need both IaC and PaC? Mainly to make the IaC as reusable as possible while still allowing enforcement of run-time values. That is, if you have to hard-code the allowed values for every combination of metadata, your code base will inflate quickly and will very likely become unmaintainable. As a simplified example, imagine you have Terraform code to create AWS EC2 instances to run applications on. The instance type could be something that costs anywhere from $4.90 (t4g.nano) to $49,000 (u-12tb1.112xlarge) per month. Should you run a $49k/month instance? Possibly. If the instance supports a revenue-generating application where processing data for a month generates $100k in revenue, you’ve made $51k/month running that instance (ignoring all other costs for the sake of simplicity). But if you’re only going to make $10/month, maybe the $4.90/month instance makes more sense.
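In practice you’d express this as a Sentinel (or OPA) policy, but the same gate can be sketched in plain Python to show the two checks separately: required tags are present, and the estimated monthly cost stays under a business-approved ceiling. The price table, tag names and threshold below are all made up for illustration:

```python
# Hypothetical price table (USD/month) and policy thresholds, for illustration only.
MONTHLY_PRICE = {"t4g.nano": 4.90, "m5.large": 70.08, "u-12tb1.112xlarge": 49000.00}
REQUIRED_TAGS = {"application", "business_unit", "chargeback_id"}
MAX_MONTHLY_COST = 500.00  # ceiling approved by the business


def check_instance(instance_type, tags):
    """Return a list of policy violations for a planned EC2 instance."""
    violations = []
    missing = REQUIRED_TAGS - set(tags)
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    cost = MONTHLY_PRICE.get(instance_type)
    if cost is None:
        violations.append(f"unknown instance type: {instance_type}")
    elif cost > MAX_MONTHLY_COST:
        violations.append(
            f"estimated ${cost:,.2f}/month exceeds ${MAX_MONTHLY_COST:,.2f} ceiling"
        )
    return violations
```

The Terraform code stays generic and reusable; the run-time values (which instance types, which tags, what ceiling) live in policy, where the business can change them without touching the IaC.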
At a minimum, workload identification, policy enforcement and continuous feedback loops with observability and KPI systems will be foundational in helping organizations make these types of decisions.
Heck, now that Ethereum has potentially reduced the energy consumption of transactions, it’s possible this kind of dynamic allocation system could be rolled into smart contracts to both allocate and release funds for cloud and SaaS usage. Or, we could just pass around Bored Apes, you decide.
Thanks for tuning in, Aquaman out.

