Batching
Speed up your testing and merging workflow with Merge Batching
Batching allows multiple pull requests in the queue to be tested as a single unit. Given the same CI resources, a system with batching enabled can achieve higher throughput while reducing the net amount of CI time spent per pull request. Enabling batching can reduce the cost per pull request in the Merge Queue by almost 90%. The table below shows how batch size affects the total cost of testing the pull requests in the queue.
| Batch Size | Pull Requests | Testing Cost | Savings |
|---|---|---|---|
| 1 | A, B, C, D, E, F, G, H, I, J, K, L | 12x | 0% |
| 2 | AB, CD, EF, GH, IJ, KL | 6x | 50% |
| 4 | ABCD, EFGH, IJKL | 3x | 75% |
| 8 | ABCDEFGH, IJKL | 1.5x | 87.5% |
| 12 | ABCDEFGHIJKL | 1x | 92% |
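The figures in the table follow from a simple cost model: a batch of size B is tested in one CI run, so the relative testing cost is the number of pull requests divided by B, and the savings work out to 1 - 1/B. The short Python sketch below (an illustration of that arithmetic, not anything Trunk ships) reproduces the table; it ignores bisection retries and partial batches.

```python
# Idealized cost model behind the table: a batch of B pull requests is tested
# in a single CI run, so total cost scales with (number of PRs) / B and the
# savings over testing each PR individually is 1 - 1/B.
PULL_REQUESTS = 12  # A through L

for batch_size in (1, 2, 4, 8, 12):
    testing_cost = PULL_REQUESTS / batch_size      # in units of "one PR tested alone"
    savings = 1 - testing_cost / PULL_REQUESTS     # equivalently, 1 - 1/batch_size
    print(f"batch size {batch_size:>2}: {testing_cost:g}x testing cost, {savings:.1%} savings")
```

Batch size 12 prints 91.7%, which the table rounds to 92%.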
Batching is enabled in the Merge Settings of your repo in the Trunk webapp.
The behavior of batching is controlled by two settings in the Merge Queue:

* **Target Batch Size**: The largest number of queue entries that will be tested together in a single batch. A larger target batch size lowers the CI cost per pull request, but a failing batch requires more bisection work to isolate the culprit.
* **Maximum Wait Time**: The longest the Merge Queue will wait to fill the target batch size before it begins testing. A higher maximum wait time increases the Time-In-Queue metric, but has the net effect of reducing CI cost per pull request.
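To picture how these two settings interact, here is a minimal, hypothetical sketch in Python (an illustration, not Trunk's implementation): a batch is dispatched to CI as soon as it reaches the target batch size, or as soon as its oldest entry has been waiting for the maximum wait time.

```python
import time

class Batcher:
    """Hypothetical sketch of the dispatch rule described above; not Trunk's code."""

    def __init__(self, target_batch_size: int, max_wait_seconds: float):
        self.target_batch_size = target_batch_size
        self.max_wait_seconds = max_wait_seconds
        self.pending: list[tuple[str, float]] = []   # (PR id, time it was enqueued)

    def enqueue(self, pr: str) -> list[str] | None:
        self.pending.append((pr, time.monotonic()))
        return self._maybe_dispatch()

    def tick(self) -> list[str] | None:
        """Poll periodically so a partial batch still goes out when its time is up."""
        return self._maybe_dispatch()

    def _maybe_dispatch(self) -> list[str] | None:
        if not self.pending:
            return None
        oldest_wait = time.monotonic() - self.pending[0][1]
        if len(self.pending) >= self.target_batch_size or oldest_wait >= self.max_wait_seconds:
            batch = [pr for pr, _ in self.pending[: self.target_batch_size]]
            del self.pending[: self.target_batch_size]
            return batch   # hand these PRs to CI as a single unit
        return None
```

In the wait-time example later on this page, only A, B, and C arrive within 5 minutes, so the partial batch ABC is dispatched when the timer expires rather than waiting for a fourth PR.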
If a batch fails, the Trunk Merge Queue moves it to a separate queue for bisection analysis. There, the batch is split up in various ways and the pieces are tested in isolation to determine which PRs in the batch introduced the failure. PRs that pass are moved back to the main queue for re-testing; PRs that appear to have caused the failure are removed from the queue.
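The docs above don't pin down the exact splitting strategy, but a recursive halving sketch captures the idea. In the hypothetical Python below, `passes` stands in for running CI against a set of PRs:

```python
from typing import Callable, Sequence

def find_culprits(batch: Sequence[str],
                  passes: Callable[[Sequence[str]], bool]) -> list[str]:
    """Hypothetical bisection sketch; Trunk may split batches differently."""
    if passes(batch):
        return []                  # this slice is green; its PRs can be re-queued
    if len(batch) == 1:
        return list(batch)         # a single failing PR is a culprit
    mid = len(batch) // 2
    return find_culprits(batch[:mid], passes) + find_culprits(batch[mid:], passes)
```

For a failed batch ABC where only B is broken, this returns `["B"]`, and A and C go back to the main queue for re-testing.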
By enabling batching along with pending failure depth and optimistic merging, you can realize the major cost savings of batching while still keeping the anti-flake protection those two features provide. Combined, Pending Failure Depth, Optimistic Merging, and Batching can greatly improve your CI performance, because the Merge Queue can optimistically merge whole batches of PRs with far less wasted testing.

The downsides are limited. Because batching combines multiple pull requests into one test run, you give up the proof that each pull request can be safely merged into your protected branch in complete isolation. In the unlikely case that you need to revert a change or roll back your protected branch, you will need to re-test that revert or submit it to the queue to make sure nothing has broken. In practice, this re-testing is required for almost any revert, regardless of how the original change was merged.
Here is how the Maximum Wait Time setting plays out in practice. With a Target Batch Size of 4 and a Maximum Wait Time of 5 minutes, only three PRs arrive before the timer expires, so the partial batch ABC is tested anyway:

| Time (mm:ss) | Event | Testing |
|---|---|---|
| 00:00 | Enqueue A | |
| 01:00 | Enqueue B | |
| 02:30 | Enqueue C | |
| 05:00 | 5-minute maximum wait time reached | Begin testing ABC |
And here is how Batching, Pending Failure Depth, and Optimistic Merging work together. Seven PRs are enqueued and batched as ABC and DEF (G has not yet been batched); DEF is tested optimistically on top of ABC's changes, shown as DEF+abc:

| Event | Queue |
|---|---|
| Enqueue A, B, C, D, E, F, G | main <- ABC <- DEF+abc |
| Batch ABC fails | main <- ABC |
| Pending failure depth keeps ABC from being evicted while DEF finishes testing | main <- ABC (hold) <- DEF+abc |
| DEF passes | main <- ABC <- DEF+abc |
| Optimistic merging allows ABC and DEF to merge | merge ABC, DEF |
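As a toy model of the decision in the table above (a hypothetical sketch, not the product's algorithm): because each batch is tested on top of the changes ahead of it in the queue, a green run for DEF+abc also exercises ABC's changes, so ABC's earlier failure can be treated as a likely flake; pending failure depth is simply what keeps ABC on hold long enough for DEF's result to arrive.

```python
def batches_to_merge(results: list[tuple[str, bool]]) -> list[str]:
    """Toy model: results are (batch, passed) pairs in queue order, where each
    batch's CI run also includes every batch ahead of it in the queue."""
    merged: list[str] = []
    for i, (batch, passed) in enumerate(results):
        # A later passing run contains this batch's changes, so it vouches for them.
        vouched_for = any(later_passed for _, later_passed in results[i + 1:])
        if passed or vouched_for:
            merged.append(batch)
        else:
            break   # a genuine failure: stop and send this batch to bisection
    return merged

# The scenario above: ABC fails (likely flake), DEF+abc passes, so both merge.
print(batches_to_merge([("ABC", False), ("DEF", True)]))   # ['ABC', 'DEF']
```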