Batch entry processing and worker tuner improvements

This MR speeds up entry processing by replacing one Temporal activity per entry with micro-batched activities that each process a group of entries.

Micro-batching reduces Temporal orchestration overhead (activity scheduling, state transitions, event history growth, and worker dispatch round-trips) by amortizing that overhead across multiple entries per activity. For fast entries, this fixed orchestration cost is a large share of total runtime, so fewer activity invocations significantly improve throughput.
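The amortization argument above can be sketched with plain chunking plus a back-of-envelope cost model. This is an illustrative sketch, not the MR's actual code: the function names, batch size, and the 20 ms overhead / 1 ms work figures are assumptions chosen only to show why fewer activity invocations help fast entries.

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def micro_batches(entries: List[T], batch_size: int) -> Iterator[List[T]]:
    """Split entries into fixed-size micro-batches; the last batch may be smaller."""
    for i in range(0, len(entries), batch_size):
        yield entries[i:i + batch_size]

def estimated_runtime_ms(n_entries: int, batch_size: int,
                         overhead_ms: float, work_ms: float) -> float:
    """Orchestration overhead is paid once per activity, work once per entry."""
    n_activities = -(-n_entries // batch_size)  # ceiling division
    return n_activities * overhead_ms + n_entries * work_ms

# Illustrative numbers: 10k fast entries, 20 ms orchestration overhead per
# activity, 1 ms of real work per entry.
per_entry = estimated_runtime_ms(10_000, batch_size=1, overhead_ms=20, work_ms=1)
batched = estimated_runtime_ms(10_000, batch_size=100, overhead_ms=20, work_ms=1)
print(per_entry)  # 210000.0 ms — overhead dominates at one entry per activity
print(batched)    # 12000.0 ms — overhead amortized across 100 entries per activity
```

Under these assumed numbers, batching 100 entries per activity cuts the orchestration cost from 200 s to 2 s while the per-entry work is unchanged, which matches the claim that the fixed cost is the dominant share for fast entries.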

10k entries with fixed activity concurrency

Screenshot

10k entries with worker tuner on

Screenshot

5k entries with fixed activity concurrency

Screenshot

5k entries with worker tuner on

Screenshot

Edited by Ahmed Ilyas
