Batching

Speed up your testing and merging workflow with Merge Batching

Batching allows multiple pull requests in the queue to be tested as a single unit. Given the same CI resources, a system with batching enabled can achieve higher throughput while also reducing the total CI time spent per pull request. With batching enabled, the cost per pull request in the Merge Queue can be reduced by roughly 90%. The table below shows how batch size affects the cost of testing 12 pull requests in the queue.

| Batch Size | Pull Requests | Testing Cost | Savings |
| --- | --- | --- | --- |
| 1 | A, B, C, D, E, F, G, H, I, J, K, L | 12x | 0% |
| 2 | AB, CD, EF, GH, IJ, KL | 6x | 50% |
| 4 | ABCD, EFGH, IJKL | 3x | 75% |
| 8 | ABCDEFGH, IJKL | 2x | 83% |
| 12 | ABCDEFGHIJKL | 1x | 92% |
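
The relationship is simple to compute: with per-PR testing as the baseline, the testing cost is the number of batches needed to cover the queue. Here is a minimal sketch (plain Python, not part of Trunk's tooling) that reproduces the table above:

```python
import math

def batching_cost(num_prs: int, batch_size: int) -> tuple[int, float]:
    """Return (number of test runs, savings vs. testing each PR alone)."""
    # Each batch is tested as a single unit, so the cost is one CI run per batch.
    runs = math.ceil(num_prs / batch_size)
    savings = 1 - runs / num_prs
    return runs, savings

for size in (1, 2, 4, 8, 12):
    runs, savings = batching_cost(12, size)
    print(f"batch size {size:>2}: {runs}x testing cost, {savings:.0%} savings")
```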

Enable Batching

Batching is enabled in the Merge Settings of your repo in the Trunk webapp.

Configuring Batching

The behavior of batching is controlled by two settings in the Merge Queue:

* Target Batch Size: the largest number of queue entries that will be tested together in a single batch. A larger target batch size reduces CI cost per pull request, but requires more work when a batch failure necessitates bisection.
* Maximum Wait Time: the maximum amount of time the Merge Queue will wait to fill the target batch size before it begins testing. A higher maximum wait time increases the Time-In-Queue metric, but has the net effect of reducing CI cost per pull request.

The table below shows how these settings interact; a sketch of the dispatch logic follows it.

| Time (mm:ss) | Queue Events (Batch Size 4; Maximum Wait 5 minutes) | Testing |
| --- | --- | --- |
| 00:00 | enqueue A | ---- |
| 01:00 | enqueue B | ---- |
| 02:30 | enqueue C | ---- |
| 05:00 | 5 min maximum wait time reached | Begin testing ABC |
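
As a rough illustration of how these two settings interact, here is a minimal sketch (plain Python with hypothetical names, not Trunk's actual implementation) of the dispatch decision: a batch begins testing as soon as it is full, or once the oldest entry has waited the maximum wait time.

```python
from dataclasses import dataclass, field

@dataclass
class BatchDispatcher:
    target_batch_size: int = 4
    max_wait_seconds: float = 300.0          # 5 minute maximum wait
    pending: list[str] = field(default_factory=list)
    oldest_enqueue_time: float | None = None

    def enqueue(self, pr: str, now: float) -> list[str] | None:
        if not self.pending:
            self.oldest_enqueue_time = now
        self.pending.append(pr)
        return self._maybe_dispatch(now)

    def tick(self, now: float) -> list[str] | None:
        """Called periodically so the wait-time rule can fire without new PRs."""
        return self._maybe_dispatch(now)

    def _maybe_dispatch(self, now: float) -> list[str] | None:
        if not self.pending:
            return None
        full = len(self.pending) >= self.target_batch_size
        waited_too_long = now - self.oldest_enqueue_time >= self.max_wait_seconds
        if full or waited_too_long:
            batch, self.pending = self.pending, []
            self.oldest_enqueue_time = None
            return batch          # begin testing this batch as one unit
        return None

# Reproduces the table above: A, B, C enqueued, then the 5 minute wait elapses.
d = BatchDispatcher()
d.enqueue("A", 0); d.enqueue("B", 60); d.enqueue("C", 150)
print(d.tick(300))   # -> ['A', 'B', 'C']
```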

What happens when a batch fails testing?

If a batch fails, Trunk Merge Queue moves it to a separate queue for bisection analysis. There, the batch is split into smaller groups and tested in isolation to determine which PRs in the batch introduced the failure. PRs that pass are moved back to the main queue for re-testing; PRs believed to have caused the failure are kicked out of the queue.
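
Trunk does not document the exact splitting strategy here, but a recursive halving approach conveys the idea. The sketch below (plain Python; `test_passes` is a hypothetical stand-in for running CI against a candidate set of PRs) finds the culprit PRs in a failed batch, assuming each failure is caused by individual PRs rather than an interaction between PRs in different halves:

```python
from collections.abc import Callable, Sequence

def find_culprits(batch: Sequence[str],
                  test_passes: Callable[[Sequence[str]], bool]) -> list[str]:
    """Return the PRs in a failed batch that make testing fail."""
    if test_passes(batch):
        return []                      # this group is innocent
    if len(batch) == 1:
        return list(batch)             # narrowed down to a single failing PR
    mid = len(batch) // 2
    return (find_culprits(batch[:mid], test_passes)
            + find_culprits(batch[mid:], test_passes))

# Example: C is the only PR that breaks the build.
culprits = find_culprits(["A", "B", "C", "D"], lambda prs: "C" not in prs)
print(culprits)   # -> ['C']
```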

Batching + Optimistic Merging and Pending Failure Depth

By enabling batching along with pending failure depth and optimistic merging, you can realize the major cost savings of batching while still keeping the anti-flake protection those features provide.

| Event | Queue |
| --- | --- |
| Enqueue A, B, C, D, E, F, G | main <- ABC <- DEF+abc |
| Batch ABC fails | main <- ABC |
| Pending failure depth keeps ABC from being evicted while DEF+abc finishes testing | main <- ABC (hold) <- DEF+abc |
| DEF passes | main <- ABC <- DEF+abc |
| Optimistic merging allows ABC and DEF to merge | merge ABC, DEF |

Combined, Pending Failure Depth, Optimistic Merging, and Batching can greatly improve your CI performance: the Merge Queue can optimistically merge whole batches of PRs with far less wasted testing.
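
The interaction in the table can be stated compactly: if a batch fails but a later batch that was tested on top of it passes, the earlier failure is treated as a likely flake and both batches merge. A minimal sketch of that rule (plain Python with hypothetical names, not Trunk's implementation):

```python
def resolve_queue(batches: list[str], results: dict[str, bool],
                  pending_failure_depth: int = 1) -> list[str]:
    """Decide which batches to merge, applying optimistic merging.

    `batches` is the queue order (each entry is tested on top of everything
    before it); `results` maps a batch to whether its CI run passed. A failed
    batch is held (up to `pending_failure_depth` deep) rather than evicted;
    if a later batch carrying its changes passes, the held batch merges too.
    """
    to_merge: list[str] = []
    held: list[str] = []
    for batch in batches:
        if results[batch]:
            # This batch passed while carrying the held batches' changes,
            # so optimistic merging lets the held batches merge as well.
            to_merge.extend(held)
            held.clear()
            to_merge.append(batch)
        elif len(held) < pending_failure_depth:
            held.append(batch)          # hold instead of evicting immediately
        else:
            break                       # too many failures; stop and bisect
    return to_merge

# The scenario from the table: ABC fails, but DEF (tested on top of ABC) passes.
print(resolve_queue(["ABC", "DEF"], {"ABC": False, "DEF": True}))
# -> ['ABC', 'DEF']
```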

What are the risks of batching?

The downsides are limited. Because batching combines multiple pull requests into one test run, you give up proof that each pull request, in complete isolation, can safely be merged into your protected branch. In the unlikely case that you have to revert a change from your protected branch or perform a rollback, you will need to retest that revert or submit it to the queue to ensure nothing has broken. In practice, this re-testing is needed for almost any revert regardless of how the original change was merged, so the practical downside is small.
