Recently, one of our Site Reliability Engineers (SREs) noticed a workload running in our cluster that he had not seen before. The workload was consuming resources, and the SRE wanted to apply some updates to the cluster, but he was not sure who owned the workload or whether his updates would affect it. He reached out to the development team on Slack to see if anyone knew anything about this new workload. It was late in the day, and it took some time to get a reply. Many people did not know about the workload, who owned it, or what part of the system it belonged to. Some of us even wondered whether it could be a malicious application. Should we terminate and remove it right away? What if it was part of the new feature we had launched a few days earlier?

Eventually, we figured out that it was part of a prototype an engineer was building with someone from the business team to experiment with a new feature. But what if it had been a malicious application? Or what if it had been causing an issue in production and we needed to reach the owner of the service? Many critical hours would have passed without a resolution. In such a distributed and decentralized environment, and as your teams continue to grow, it becomes challenging to know everything and to ensure everyone is a good citizen. So what should we do?
How Can We Establish Better Governance And Avoid This Problem?
If we mandate that each new service, or every change made to production, be reviewed by the SRE team, we will certainly slow the development team and the pace of innovation. Manual review also does not guarantee that every violation is detected. The solution, therefore, is to establish a governance framework and automate its process as much as possible so that it can scale.

There are three key elements that should be defined in order to establish good governance around the issue we faced.
These elements are “targets”, “policies”, and “triggers”.
The target in this case would be all workloads in our cluster. We should apply the policy to all types of workloads: ReplicaSets, StatefulSets, Jobs, DaemonSets, etc. The policy would be to enforce that each workload has an “owner” label with the email of the engineer responsible for it. For the trigger, we initially thought it would be enough to run the check once a day to find out who was violating the policy.
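To make this concrete, here is a minimal sketch of what such a daily check could look like, assuming the official Kubernetes Python client. The function name, the set of workload kinds, and the label key are illustrative assumptions, not the exact tooling described above.

```python
from kubernetes import client, config

REQUIRED_LABEL = "owner"  # policy: every workload must carry an "owner" label

def find_unowned_workloads():
    """Return a list of workloads that violate the "owner" label policy."""
    # Use load_incluster_config() instead when running inside the cluster,
    # e.g. from a daily CronJob acting as the trigger.
    config.load_kube_config()
    apps = client.AppsV1Api()
    batch = client.BatchV1Api()

    # The targets: all workload kinds we care about.
    listers = {
        "Deployment": apps.list_deployment_for_all_namespaces,
        "StatefulSet": apps.list_stateful_set_for_all_namespaces,
        "DaemonSet": apps.list_daemon_set_for_all_namespaces,
        "ReplicaSet": apps.list_replica_set_for_all_namespaces,
        "Job": batch.list_job_for_all_namespaces,
    }

    violations = []
    for kind, lister in listers.items():
        for item in lister().items:
            labels = item.metadata.labels or {}
            if REQUIRED_LABEL not in labels:
                violations.append(f"{kind} {item.metadata.namespace}/{item.metadata.name}")
    return violations

if __name__ == "__main__":
    for workload in find_unowned_workloads():
        print(f"Policy violation, missing '{REQUIRED_LABEL}' label: {workload}")
```

Scheduling a script like this once a day (for example, as a CronJob in the cluster) would implement the trigger, and its output would tell us exactly which workloads have no identifiable owner.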