High-performing engineering teams and the Holy Grail

Jeremy Meiss, Director, DevRel & Community

So back to the tech industry…

Forrester 2021 Total Economic Impact study
Using best-in-class CI/CD platforms can provide:
$7.8 million saved from shorter software development cycles
$4.3 million recovered from lost developer productivity
50% decrease in annual infrastructure spend
$1.7 million estimated value of improved code quality

CI/CD Benchmarks for high-performing teams:
Duration
Mean time to recovery
Success rate
Throughput

So what does the data say?

Duration, the foundation of software engineering velocity, measures the average time in minutes required to move a unit of work through your pipeline.
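
Duration is straightforward to compute from workflow-run records. The sketch below is a minimal illustration, assuming a hypothetical list of runs with `started_at`/`finished_at` ISO timestamps rather than any specific CI provider's API:

```python
from datetime import datetime

# Hypothetical workflow-run records; in practice these would come from your
# CI provider's API (the field names here are assumptions for illustration).
runs = [
    {"started_at": "2023-05-01T10:00:00", "finished_at": "2023-05-01T10:08:30"},
    {"started_at": "2023-05-01T11:00:00", "finished_at": "2023-05-01T11:12:00"},
]

def mean_duration_minutes(runs):
    """Average time, in minutes, to move a unit of work through the pipeline."""
    total_seconds = sum(
        (datetime.fromisoformat(r["finished_at"])
         - datetime.fromisoformat(r["started_at"])).total_seconds()
        for r in runs
    )
    return total_seconds / len(runs) / 60

print(f"Mean duration: {mean_duration_minutes(runs):.1f} min")  # compare against your duration benchmark
```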

So what is an ideal Duration?

<=10 minute builds “a good rule of thumb is to keep your builds to no more than ten minutes. Many developers who use CI follow the practice of not moving on to the next task until their most recent checkin integrates successfully. Therefore, builds taking longer than ten minutes can interrupt their flow.” — Paul M. Duvall (2007). Continuous Integration: Improving Software Quality and Reducing Risk

Duration: What the data shows
Benchmark: 5-10 mins

“Why so much lower than the Duration benchmark?”

Improving test coverage:
Add unit, integration, UI, and end-to-end testing across all app layers
Incorporate code coverage tools into pipelines to identify inadequate testing (see the sketch after this list)
Include static and dynamic security scans to catch vulnerabilities
Incorporate TDD practices by writing tests during the design phase
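
One way to incorporate a coverage check into a pipeline (the sketch referenced above) is to fail the build when coverage drops below an agreed threshold. This is a minimal, hedged example using coverage.py, assuming a `.coverage` data file has already been produced by the test run and an illustrative 80% threshold:

```python
import sys
import coverage

THRESHOLD = 80.0  # illustrative threshold; choose one that fits your project

cov = coverage.Coverage()
cov.load()            # read the .coverage data file written by the test run
total = cov.report()  # print a summary and return the total percent covered

if total < THRESHOLD:
    print(f"Coverage {total:.1f}% is below the {THRESHOLD:.0f}% threshold")
    sys.exit(1)       # a non-zero exit code fails the CI job
```

(Many coverage tools can enforce the same threshold with a built-in flag, so custom code is often unnecessary.)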

Optimizing your pipelines:
Use test splitting and parallelism to execute multiple tests simultaneously (see the sketch after this list)
Cache dependencies and other data to avoid rebuilding unchanged portions
Use Docker images custom-made for CI environments
Choose the right machine size for your needs
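
Test splitting is usually handled by the CI platform itself, but the underlying idea is simple. The sketch below is a provider-agnostic illustration (greedy bin packing of tests across parallel nodes by historical timing), using made-up test names and durations:

```python
import heapq

# Hypothetical historical timings (seconds) per test file.
timings = {
    "test_api.py": 120, "test_ui.py": 95, "test_models.py": 60,
    "test_auth.py": 45, "test_utils.py": 20, "test_cli.py": 15,
}
NODES = 3  # parallelism level

def split_by_timing(timings, nodes):
    """Greedily assign the slowest tests to the currently least-loaded node."""
    heap = [(0.0, i, []) for i in range(nodes)]  # (total_seconds, node_id, tests)
    heapq.heapify(heap)
    for test, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
        load, node_id, tests = heapq.heappop(heap)
        tests.append(test)
        heapq.heappush(heap, (load + secs, node_id, tests))
    return sorted(heap, key=lambda x: x[1])

for load, node_id, tests in split_by_timing(timings, NODES):
    print(f"node {node_id}: {load:.0f}s -> {tests}")
```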

Duration and the Platform Team:
Identify and eliminate impediments to developer velocity
Set guardrails and enforce quality standards across projects
Standardize test suites and CI pipeline configs, e.g. shareable config templates and policies
Welcome failed pipelines, i.e. fail fast
Actively monitor, streamline, and parallelize pipelines across the org

Mean time to recovery: the average time required to go from a failed build signal to a successful pipeline run.
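
Given a chronological list of default-branch runs, MTTR can be approximated as the average gap between a failing run and the next successful run. A minimal sketch, with illustrative data and assumed field names:

```python
from datetime import datetime

# Chronological default-branch runs (illustrative data; field names are assumptions).
runs = [
    {"finished_at": "2023-05-01T10:00:00", "status": "success"},
    {"finished_at": "2023-05-01T10:30:00", "status": "failed"},
    {"finished_at": "2023-05-01T11:15:00", "status": "success"},
    {"finished_at": "2023-05-02T09:00:00", "status": "failed"},
    {"finished_at": "2023-05-02T09:40:00", "status": "success"},
]

def mttr_minutes(runs):
    """Average minutes from a failed run to the next successful run."""
    recoveries, failed_at = [], None
    for r in runs:
        t = datetime.fromisoformat(r["finished_at"])
        if r["status"] == "failed" and failed_at is None:
            failed_at = t                    # start of the breakage
        elif r["status"] == "success" and failed_at is not None:
            recoveries.append((t - failed_at).total_seconds() / 60)
            failed_at = None                 # recovered
    return sum(recoveries) / len(recoveries) if recoveries else 0.0

print(f"MTTR: {mttr_minutes(runs):.1f} min")
```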

Mean time to recovery is indicative of resilience

“A key part of doing a continuous build is that if the mainline build fails, it needs to be fixed right away. The whole point of working with CI is that you’re always developing on a known stable base.” — Fowler, Martin. “Continuous Integration.” Web blog post. MartinFowler.com. 1 May 2006. Web.

So what MTTR is ideal?

<=60min MTTR on default branches

MTTR: What the data shows
Benchmark: 60 mins

“10 minutes is a striking improvement - what happened?”

Two factors behind reduced MTTR:
Economic pressures in the macro environment and rising competition in the micro environment are forcing teams to prioritize product stability and reliability over growth
High performers increasingly rely on platform teams to achieve steadier, more resilient development pipelines with built-in recovery mechanisms

Treat your default branch as the lifeblood of your project

Getting to faster recovery times:
Set up instant alerts for failed builds using services like Slack, Twilio, or PagerDuty (see the sketch after this list)
Write clear, informative error messages for your tests so you can quickly diagnose the problem and focus your efforts in the right place
SSH into the failed build machine to debug in the remote test environment; this gives you access to valuable troubleshooting resources, including log files, running processes, and directory paths
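
For the alerting item above, one lightweight option is posting failed-build notices to a Slack incoming webhook. This sketch assumes a webhook URL supplied via an environment variable and placeholder project/build details; most CI providers also ship built-in Slack integrations that avoid custom code entirely:

```python
import json
import os
import urllib.request

# Placeholder: a Slack incoming-webhook URL, supplied via an environment variable.
WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")

def alert_failed_build(project, branch, build_url):
    """Post a short failure notice to Slack so recovery can start immediately."""
    payload = {"text": f":red_circle: Build failed on {project}/{branch} - {build_url}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example: call this from a pipeline step that only runs when the build has failed.
# alert_failed_build("my-app", "main", "https://ci.example.com/builds/1234")
```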

MTTR and the Platform Team:
Emphasize the value of deploy-ready default branches, with clear processes and expectations for failure recovery across all projects
Set up effective monitoring and alerting systems, and track recovery time
Limit the frequency and severity of broken builds with role-based access control and config policies
Use Config- and Infrastructure-as-Code tools to limit the potential for misconfiguration errors
Actively monitor, streamline, and parallelize pipelines across the org

Success rate: the number of passing runs divided by the total number of runs over a period of time.
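
A minimal sketch of that definition, over illustrative default-branch run statuses:

```python
# Illustrative default-branch run statuses over some time window.
statuses = ["success", "success", "failed", "success", "success",
            "success", "failed", "success", "success", "success"]

def success_rate(statuses):
    """Passing runs divided by total runs, as a percentage."""
    return 100.0 * statuses.count("success") / len(statuses)

print(f"Success rate: {success_rate(statuses):.0f}%")  # -> 80%
```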

So what Success rate is ideal?

90%+ Success rate on default branches

Success rate: What the data shows
Benchmark: 90%+ on default branches

Success rate and the Platform Team:
With low success rates, look at your MTTR and shorten recovery time first
Set a baseline success rate, then aim for continuous improvement, looking for flaky tests or gaps in test coverage (see the sketch after this list)
Be mindful of patterns and the influence of external factors, e.g. declines on Fridays, holidays, etc.
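
One way to surface the flaky tests mentioned above is to flag tests that both pass and fail against the same commit. A provider-agnostic sketch over hypothetical test-result records:

```python
from collections import defaultdict

# Hypothetical test results: (commit SHA, test name, outcome).
results = [
    ("abc123", "test_checkout", "passed"),
    ("abc123", "test_checkout", "failed"),  # same commit, different outcome -> flaky
    ("abc123", "test_login", "passed"),
    ("def456", "test_login", "passed"),
]

def find_flaky(results):
    """Tests with both passing and failing outcomes on the same commit."""
    outcomes = defaultdict(set)
    for sha, test, outcome in results:
        outcomes[(sha, test)].add(outcome)
    return sorted({test for (_, test), seen in outcomes.items() if len(seen) > 1})

print("Flaky tests:", find_flaky(results))  # -> ['test_checkout']
```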

Throughput: the average number of workflow runs that an organization completes on a given project per day.
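
A minimal sketch of that definition: count completed workflow runs per day for a project and average them (dates are illustrative):

```python
from collections import Counter
from datetime import date

# Illustrative completion dates of workflow runs for a single project.
run_dates = [date(2023, 5, 1)] * 6 + [date(2023, 5, 2)] * 9 + [date(2023, 5, 3)] * 7

def throughput_per_day(run_dates):
    """Average number of workflow runs completed per day."""
    per_day = Counter(run_dates)
    return sum(per_day.values()) / len(per_day)

print(f"Throughput: {throughput_per_day(run_dates):.1f} runs/day")  # -> 7.3
```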

So what Throughput is ideal?

It depends.

Throughput: What the data shows
Benchmark: at the speed of your business

Throughput and the Platform Team:
Map goals to the reality of internal and external business situations, e.g. customer expectations, competitive landscape, codebase complexity, etc.
Capture a baseline and monitor for deviations (see the sketch after this list)
Alleviate as much developer cognitive load as possible from day-to-day work
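
For the baseline item above, a simple check is to compare the latest day's throughput against a trailing average and flag large drops; the 30% tolerance below is an arbitrary illustration:

```python
# Illustrative daily throughput values (runs/day), oldest to newest.
daily_throughput = [22, 25, 24, 26, 23, 21, 25, 9]

TOLERANCE = 0.30  # arbitrary: flag anything more than a 30% drop from baseline

baseline = sum(daily_throughput[:-1]) / len(daily_throughput[:-1])
latest = daily_throughput[-1]

if latest < baseline * (1 - TOLERANCE):
    print(f"Throughput dropped to {latest} runs/day (baseline {baseline:.1f}) - investigate")
else:
    print(f"Throughput {latest} runs/day is within tolerance of baseline {baseline:.1f}")
```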

High-Performing Teams in 2023

“Surely <insert programming language> helps me achieve the “Holy Grail”!?”

Thank You.
timeline.jerdog.me
IAmJerdog
jerdog
/in/jeremymeiss
For feedback and swag: circle.ci/jeremy
@jerdog@hachyderm.io