Detailed Summary: MSCCL++: Rethinking GPU Communication Abstractions for Cutting-Edge AI Applications

Citation: Aashaka Shah, Abhinav Jangda, Binyang Li, Caio Rocha, Changho Hwang, Jithin Jose, Madan Musuvathi, Olli Saarikivi, Peng Cheng, Qinghua Zhou, Roshan Dathathri, Saeed Maleki, Ziyue Yang. Microsoft Research / Microsoft Azure. arXiv:2504.09014v3, August 21, 2025. In production at Microsoft Azure; adopted by AMD RCCL.

Code: https://github.com/microsoft/mscclpp


Abstract

MSCCL++ is a novel GPU communication library built on a two-level abstraction: a minimal Primitive Interface (put, signal, wait, flush) that closely mirrors GPU hardware capabilities, and higher-level Collective and DSL APIs layered on top of it. The primitive interface exposes three channel types (MemoryChannel for thread-copy, PortChannel for DMA/RDMA, SwitchChannel for NVSwitch multimem), enabling expert optimizations that NCCL's higher-level abstractions prevent. Compared to NCCL, RCCL, and MSCCL, MSCCL++ achieves up to 5.4× speedup for collective communication and up to 15% end-to-end speedup for LLM inference (Llama2-70b decode).


1. Motivation and Problem Statement

1.1 The Custom Stack Problem

Modern AI applications are deployed on fast-evolving, heterogeneous hardware. The enormous compute demand of LLMs is pushing the industry to aggressively upgrade chips and interconnects, resulting in powerful but immature hardware that differs dramatically from previous generations. General-purpose libraries like NCCL need significant time to be optimized for each new hardware generation, so cutting-edge AI applications typically develop custom communication stacks instead.

This creates redundant development effort, non-portable code, and ecosystem fragmentation. The authors argue this is a library interface design problem, not just an optimization problem.

1.2 Why NCCL Primitives Are Limiting

NCCL provides four primitives: send, recv, copy, reduce. The paper identifies three key limitations:

Wasted GPU cycles. send and recv block all participating GPU threads until the data transfer completes. For NVLink/PCIe intra-node transfers, send uses many threads to copy data, which is efficient. But for inter-node InfiniBand transfers, send only needs to wake a CPU thread to issue ibv_post_send, yet it ties up all GPU threads in a busy-wait (see Figure 2 in paper), wasting the cycles of hundreds of GPU threads per send operation.

Inflexible synchronization. NCCL primitives are self-synchronizing: every send/recv pair requires a synchronization fence. This prevents optimizations like rotating buffers (using two alternating buffers to reduce barrier count from two per iteration to one). An optimization that is safe with asymmetric barriers is impossible under NCCL's symmetric self-synchronization model.

Interconnect limitations. NCCL uses only thread-copy mode over NVLink, forgoing the 15.8% higher bandwidth of DMA-copy mode: in a ring AllReduce benchmark on 8×A100-80G, thread-copy achieves 227 GB/s while DMA-copy achieves 263 GB/s. NCCL's monolithic kernels are also register-heavy: its ring and tree AllReduce kernels use 94 and 96 registers per thread respectively, whereas MSCCL++'s equivalent uses only 32, yielding fewer register spills, better instruction cache hit rates, and fewer executed instructions.


2. Background

2.1 Collective Communication for AI

AllReduce = ReduceScatter + AllGather. AllReduce sums partial results from all GPUs and distributes the result. ReduceScatter sums and distributes (each GPU receives 1/N of the total). AllGather collects distributed buffers. In tensor parallelism, ReduceScatter and AllGather are often used separately; in data parallelism, AllReduce is used for gradient synchronization.

2.2 NCCL Architecture (Reference Implementation)

NCCL initialization: Exchanges metadata (ranks, node count, GPU count, NVLink topology, IB/Ethernet links), builds ring and tree topologies, allocates send/receive buffers.

NCCL kernel structure: A GPU kernel processes collective operations using four building block primitives (send, recv, copy, reduce). The kernel uses static thread groupings of 128–640 threads per primitive call. Algorithm choice is based on message size at kernel launch time.

See Figure 1 in paper: Ring ReduceScatter in NCCL code showing primitive calls for each ring step.
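
A hedged sketch of that loop structure (cf. Figure 1), with the NCCL primitive calls left as comments, since the real Primitives class signatures differ:

```cuda
#include <cstddef>

// Sketch of NCCL-style ring ReduceScatter: N-1 steps, each sending one chunk
// to the next rank while reducing an incoming chunk. Every primitive call
// embeds its own synchronization across the 128-640 participating threads.
__device__ void ringReduceScatterSketch(float* buf, int rank, int nRanks,
                                        size_t chunkElems) {
    for (int step = 0; step < nRanks - 1; ++step) {
        int sendChunk = (rank - step + nRanks) % nRanks;      // rotates per step
        int recvChunk = (rank - step - 1 + nRanks) % nRanks;
        // prims.send(buf + sendChunk * chunkElems, chunkElems);
        //   -> blocks all participating threads until the transfer completes
        // prims.recvReduceSend(buf + recvChunk * chunkElems, chunkElems);
        //   -> receive, reduce into the local chunk, forward; implicit barrier
        (void)buf; (void)chunkElems;
        (void)sendChunk; (void)recvChunk;  // placeholders in this sketch
    }
}
```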


3. MSCCL++ Architecture

3.1 Three-Level Hierarchy

User Application (PyTorch, etc.)
        │
        ├── Collective API  (NCCL-compatible drop-in; wraps Primitive API)
        │
        ├── DSL API         (MSCCLang-style; generates instructions for DSL Executor kernel)
        │
        └── Primitive API   (Hardware-level: put, signal, wait, flush)
                │
                ├── PortChannel       (DMA/RDMA: GPU→CPU queue→ibv_post_send)
                ├── MemoryChannel     (Thread-copy: GPU threads write to peer GPU memory)
                │       ├── LL protocol (low latency, sync per small chunk)
                │       └── HB protocol (high bandwidth, sync per large chunk)
                └── SwitchChannel     (NVSwitch multimem: hardware reduce/broadcast)
                        │
                        └── NVLink SHARP (multimem.ld_reduce / multimem.st PTX)

3.2 Communication Channels

MemoryChannel uses peer-to-peer GPU memory access (thread-copy). Multiple GPU threads read from the source and write to the destination via NVLink, xGMI, or PCIe. Two protocols: LL (low latency; a flag travels with each small chunk so the receiver can poll for arrival) and HB (high bandwidth; synchronization is amortized over large chunks).

MemoryChannel primitives: put (multi-threaded write to peer GPU memory), read (poll + read flag-synchronized data), write (write data + flag in LL protocol).

PortChannel uses port-mapped I/O: GPU threads write descriptors to a lock-free request queue; a dedicated CPU thread polls the queue and issues ibv_post_send (InfiniBand), ibv_atomic_add (for semaphore increments), or cudaMemcpy (DMA intra-node). This approach frees GPU threads from busy-waiting on transfers: the GPU only enqueues a descriptor, while the CPU proxy thread drives the interconnect.

PortChannel primitives: put (enqueue transfer request; GPU writes to head of request queue; CPU reads from tail, calls ibv_post_send), signal (CPU calls ibv_atomic_add to atomically increment peer's semaphore), flush (GPU waits for CPU to process all queued requests; CPU calls ibv_poll_cq).

SwitchChannel uses NVLink SHARP through NVSwitch hardware for in-switch collective operations on H100/H200. Two primitives: reduce (via multimem.ld_reduce) and broadcast (via multimem.st). These PTX instructions are new as of NVLink 4.0 (H100). A multimem address is a virtual address that maps to different physical addresses on each GPU in the collective.

3.3 Primitive API Design Principles

Zero-copy: No intermediate buffers. put transfers directly from source to destination without staging.

One-sided: Initiated by one peer without explicit receiver participation. The sender calls put and signal; the receiver calls wait to know when data is available.

Asynchronous: put is non-blocking; signal is strictly ordered after put but asynchronous itself. flush is the synchronization barrier that ensures completion.

Separation of data transfer and synchronization: put transfers data; signal notifies the remote peer; wait blocks until notification arrives; flush ensures local completion (safe buffer reuse). This separation enables batching multiple puts before a single signal — removing NCCL's requirement for one synchronization per primitive call.

Example (Figure 4 in paper): put(src, dst, size) → signal() → flush() on the sender; wait() on the receiver.
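
A minimal CUDA sketch of this pattern over peer-mapped memory (MemoryChannel-style, thread-copy). The Channel struct and its fields are hypothetical stand-ins; the real mscclpp API differs:

```cuda
#include <cstddef>

// Hypothetical MemoryChannel-style channel; real mscclpp types differ.
struct Channel {
    char* remoteBuf;                        // peer GPU buffer (P2P access enabled)
    const char* localBuf;                   // local source buffer
    unsigned long long* remoteSem;          // semaphore living in peer memory
    volatile unsigned long long* localSem;  // semaphore living in local memory
    unsigned long long expected;            // signal count the receiver waits for

    // put: all threads of the block cooperatively copy (thread-copy mode).
    __device__ void put(size_t bytes) {
        for (size_t i = threadIdx.x; i < bytes; i += blockDim.x)
            remoteBuf[i] = localBuf[i];
    }
    // signal: order prior puts, then bump the peer's semaphore.
    __device__ void signal() {
        __threadfence_system();
        atomicAdd(remoteSem, 1ULL);
    }
    // wait: spin until the peer's signal arrives.
    __device__ void wait() {
        while (*localSem < expected) { /* spin */ }
    }
    // flush matters for PortChannel (DMA/RDMA); for thread-copy the stores
    // above complete in-line, so it is omitted in this sketch.
};

// Several puts may be batched under one signal, exactly the batching that
// NCCL's self-synchronizing primitives forbid.
__global__ void sender(Channel ch, size_t bytes) {
    ch.put(bytes);
    __syncthreads();
    if (threadIdx.x == 0) ch.signal();
}

__global__ void receiver(Channel ch) {
    if (threadIdx.x == 0) ch.wait();
    __syncthreads();  // all threads may now read the delivered data
}
```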

3.4 DSL API

Reimplements MSCCLang over MSCCL++ primitives. Python-based language to describe communication algorithms at a high level, converted to an instruction sequence executed by the DSL Executor kernel at runtime. Key additions over MSCCLang: new instructions based on MSCCL++ Primitive API, and a lifted restriction allowing a single GPU thread block to access multiple GPUs simultaneously (impossible in original MSCCLang).

Development experience: nine months of use show that the DSL reduces development time from weeks to days compared to the Primitive API. The DSL adds ~3% overhead on average (up to 18% in corner cases) from the interpreter overhead of the DSL Executor.

3.5 Collective API

Reimplements the full NCCL API using MSCCL++ kernels written with the Primitive API. Drop-in replacement: applications replace NCCL/RCCL with the MSCCL++ Collective library without changing application code. Collective kernels implement the best algorithms (1PA, 2PA, 2PH) using the appropriate channels for each message size and topology.


4. Collective Algorithms in Detail

4.1 One-Phase All-Pairs (1PA)

All N GPUs simultaneously send all local data to all other N-1 GPUs. Each GPU holds a copy of all data after this phase, then performs local reduction. Communication volume: (N-1)/N × total_data per GPU (worse than ring/tree). But for very small messages (< few KB), the latency savings from fewer synchronization steps outweigh the bandwidth cost. MSCCL++ implements 1PA using LL protocol with MemoryChannel (no need for large bandwidth). Efficient implementation eliminates unnecessary synchronizations that NCCL/MSCCL require due to their symmetric primitive model.

4.2 Two-Phase All-Pairs (2PA)

Phase 1: ReduceScatter in all-pairs manner (each GPU collects and reduces 1/N of total data). Phase 2: AllGather in all-pairs manner (each GPU broadcasts its reduced data to all others). More bandwidth-efficient than 1PA; used for single-node collectives up to a few MB. Multiple variants: PortChannel (DMA-copy), MemoryChannel LL, MemoryChannel HB, SwitchChannel. Rotating buffers optimization: for up to a few MB, uses two alternating buffers to halve synchronization count — possible only because MSCCL++ allows asymmetric barrier semantics.

Concurrent multi-GPU reads: MSCCL++ allows a single thread group to read from multiple GPUs simultaneously, performing reduction in one pass. NCCL/MSCCL must read from each peer GPU sequentially, requiring multiple reduction steps.
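
A sketch of the one-pass multi-peer reduction (the pointer layout is assumed; real MSCCL++ kernels obtain these pointers from their channels):

```cuda
#include <cstddef>

// One pass over the data: each thread accumulates its elements from every
// peer GPU before writing back, instead of one peer (and one pass) at a time.
__device__ void reduceAllPeersOnePass(float* dst, float* const* peerBufs,
                                      int nPeers, size_t n) {
    for (size_t i = threadIdx.x; i < n; i += blockDim.x) {
        float acc = dst[i];
        for (int p = 0; p < nPeers; ++p)
            acc += peerBufs[p][i];   // reads travel over NVLink/xGMI
        dst[i] = acc;
    }
}
```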

4.3 Two-Phase Hierarchical (2PH)

Used for multi-node collectives. Hierarchical approach minimizes cross-node traffic by performing most reduction locally within each node. Two versions:

Small-message LL variant: Each node performs local ReduceScatter (splitting data into GPU-count chunks, creating redundancy for cross-node). Cross-node communication done in all-pairs manner. Local collective pipelined with cross-node communication to overlap.

Large-message HB variant: Local ReduceScatter pipelined with cross-node all-pairs communication. Number of data chunks equals GPU count for efficient link utilization. Avoids redundant data transmission that the small-message version incurs.


5. Portability and Implementation Effort

5.1 AMD MI300x Support

MSCCL++ originally supported only NVIDIA GPUs. AMD MI300x support was added in 7 weeks by one developer.

MI300x topology: Infinity Fabric (xGMI) peer-to-peer connects all 8 GPUs in a node, unlike NVLink on NVIDIA (hub-and-spoke via NVSwitch on H100). Optimal strategy: write to all peers simultaneously (parallelize across peers in outer loop) rather than sequentially writing to each peer (inner loop). MSCCL++ DSL requires changing only the order of two nested for loops to implement this topology-specific optimization.
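
In kernel terms, the difference is which loop ranges over peers; a hedged CUDA sketch (the paper's actual change is swapping two nested for loops in the Python DSL):

```cuda
#include <cstddef>

// MI300x (full xGMI mesh): spread peers across warps so writes to all peers
// proceed concurrently and every point-to-point link stays busy.
__device__ void putAllPeersParallel(char* const* peerBufs, const char* src,
                                    int nPeers, size_t bytes) {
    int warp = threadIdx.x / 32, lane = threadIdx.x % 32;
    int nWarps = blockDim.x / 32;
    for (int p = warp; p < nPeers; p += nWarps)      // peers across warps
        for (size_t i = lane; i < bytes; i += 32)    // bytes across lanes
            peerBufs[p][i] = src[i];
}

// NVSwitch hub-and-spoke: writing peers one after another costs nothing,
// since all traffic funnels through the switch regardless of ordering.
__device__ void putAllPeersSequential(char* const* peerBufs, const char* src,
                                      int nPeers, size_t bytes) {
    for (int p = 0; p < nPeers; ++p)
        for (size_t i = threadIdx.x; i < bytes; i += blockDim.x)
            peerBufs[p][i] = src[i];
}
```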

5.2 NVSwitch / SwitchChannel Development

SwitchChannel interface for NVSwitch multimem: 8 weeks for two developers (learning the multimem API, abstracting it as a channel, and developing AllReduce using SwitchChannel). 15 lines of Python DSL code implement the SwitchChannel-based 2PA algorithm, achieving 56% higher bandwidth than the equivalent MemoryChannel implementation on H100.


6. Evaluation

6.1 Environments

Name       GPU               GPUs/node   Intra-node              Network
A100-40G   NVIDIA A100 40G   8           NVLink 3.0              Mellanox HDR IB (200 Gb/s, 1 NIC/GPU)
A100-80G   NVIDIA A100 80G   8           NVLink 3.0              Mellanox HDR IB (200 Gb/s, 1 NIC/GPU)
H100       NVIDIA H100       8           NVLink 4.0              Quantum-2 CX7 IB (400 Gb/s, 1 NIC/GPU)
MI300x     AMD MI300x        8           Infinity Fabric Gen 4   Quantum-2 CX7 IB (400 Gb/s, 1 NIC/GPU)

Baselines: NCCL 2.26.2, RCCL 2.20.5 (AMD), MSCCL 2.23. For NCCL/MSCCL/RCCL: fine-tuned per-environment using environment variables (nChannels, chunkSize, algorithm type, topology XML file). For MSCCL: uses fastest algorithm from MSCCL scheduler. All libraries use user buffer registration (ncclMemAlloc) and CUDA/HIP Graph APIs where available.

6.2 AllReduce Results — A100-40G (Figure 8)

1-node (8 GPUs):

2-node (16 GPUs): 2PH algorithm. MSCCL++ substantially faster for both small and large messages.

4-node (32 GPUs): 2PH algorithm (DSL only). Consistent speedups across message sizes.

6.3 AllGather Results — A100-40G (Figure 9)

6.4 H100 Single-Node AllReduce (Figure 11)

6.5 MI300x Single-Node AllReduce (Figure 12)

6.6 Performance Gain Breakdown

NCCL vs. MSCCL: the gains come from better algorithms. For small messages, MSCCL uses an all-pairs algorithm, while NCCL defaults to ring, which is suboptimal for latency. For large multi-node messages, MSCCL uses the 2PH hierarchical algorithm (not used by NCCL), providing better bandwidth.

MSCCL vs. MSCCL++: MSCCL++ uses the same algorithms but implements them with lower-overhead primitives. At 1 KB (both libraries running the 1PA algorithm), MSCCL++ cuts latency by 47% compared to MSCCL, a pure primitive-level overhead reduction. At large messages (1 GB, intra-node), MSCCL++ uses PortChannel (DMA-copy), which NCCL/MSCCL cannot use intra-node.

6.7 LLM Inference — Llama2-70b (Figure 10)

vLLM v0.3.3 modified to use MSCCL++ AllReduce for tensor parallelism. 1-node A100-80GB×8, tensor parallelism = 8, CUDA graphs enabled for decodes. Range of batch sizes (bsz = 1, 4, 16, 64) and sequence lengths (16–2048 tokens).


7. Related Work

NCCL (NVIDIA): State-of-the-art baseline. Provides ring, tree, and NVLS algorithms. MSCCL++ is designed to surpass NCCL by exposing more hardware flexibility.

RCCL (AMD): Hard fork of NCCL for AMD GPUs; substantially diverged from NCCL. Adopted MSCCL++ as an upstream dependency, validating MSCCL++'s design.

MSCCL (Microsoft): Predecessor; allows custom algorithms via MSCCLang DSL but uses NCCL primitives internally. Limited by NCCL primitive overhead.

NVSHMEM (NVIDIA OpenSHMEM): GPU-side one-sided communication API (nvshmemx_putmem_warp). Parallel work with similar GPU-side primitives, but no evidence it outperforms NCCL/MSCCL++ for collective communication. Not open-source.

ARK (NSDI 2023): GPU-driven code execution for distributed deep learning. GPU-side control plane for communication. Implemented as a monolithic end-to-end ML system rather than a standalone library.

CoCoNet (ASPLOS 2022): Breaking computation-communication abstraction barrier via fused kernels and a DSL. Requires additional scheduler and custom DSL code; not a drop-in replacement.

SCCL, TACCL, TE-CCL: Synthesize efficient collective algorithms for given topologies. Use NCCL primitives as backends; MSCCL++ primitives would improve their performance.

TensorRT-LLM, vLLM: Custom AllReduce implementations for single-node LLM inference. Not general-purpose or multi-node. MSCCL++ Collective API achieves equivalent single-node AllReduce performance while supporting multi-node.


8. Section-by-Section Paragraph Summaries

Section 1 — Introduction

States the central problem: fast-evolving hardware forces practitioners to write custom communication stacks from scratch, creating redundant effort and fragmented ecosystem. Identifies root cause: existing libraries (NCCL, RCCL) hide hardware to simplify programming but prevent performance optimizations. Proposes MSCCL++ as an alternative with separation of concerns: primitive hardware interface + higher-level portable interfaces. Three APIs: Primitive (for experts), DSL (for algorithm authors), Collective (for most users, NCCL-compatible).

Section 2.1 — Background: Collective Communication

Reviews AllReduce, ReduceScatter, AllGather, AllToAll. Notes communication accounts for 10–40% of LLM end-to-end latency (GPT-3 inference: 30% in AllReduce).

Section 2.2.1 — NCCL Architecture

Describes NCCL initialization process (topology discovery, ring/tree construction, buffer allocation). Describes NCCL kernel structure with four primitives. Provides Ring ReduceScatter pseudocode (Figure 1) showing the loop structure, send buffer management, and reduce operations.

Section 2.2.2 — NCCL Limitations

Three limitations: wasted GPU cycles (busy-wait in IB send), inflexible synchronization (symmetric barrier prevents rotating-buffer optimization), interconnect optimization gap (thread-copy only, no DMA-copy intra-node).

Section 2.3 — Existing Communication Libraries

NCCL API is not flexible enough for all scenarios. MSCCL (predecessor) improves algorithm customization but is still limited by NCCL primitives.

Section 3 — MSCCL++ Overview

Describes the three-level hierarchy (Figure 3). Introduces PortChannel, MemoryChannel, SwitchChannel as the three communication channel types. Notes channels correspond to port-mapped I/O, memory-mapped I/O, and switch-mapped I/O — general computer architecture concepts that generalize to future hardware.

Section 3.2.2 — Communication Primitives (PortChannel)

Defines put (one-sided, zero-copy, async), signal (async, ordered after put), wait (sync, waits for remote signal), flush (local sync, ensures put completion). Figure 4 shows the put-signal-flush / wait pattern. Figure 5 shows All-pairs ReduceScatter in MSCCL++.

Section 3.2.3 — Advantages

Asynchronous communication: batching synchronizations, freeing GPU resources during transfers, enabling inter-kernel optimizations. Specialized kernels: channel-specific code paths eliminate dead code and register spills (32 registers per thread vs. NCCL's 94–96). Interconnect optimization: both thread-copy and DMA-copy are available via channel type selection.

Section 4.1 — Initialization

Multi-process setup. Bootstrap API (send, recv, allGather, barrier) using POSIX sockets by default; overridable with MPI or torch.distributed. Communicator creates channels and registers buffers.

Section 4.2.1 — PortChannel Implementation

GPU writes request to lock-free FIFO queue (head/tail in cudaMallocManaged). CPU thread polls queue tail. For put: CPU calls ibv_post_send (async return). For signal: CPU calls ibv_atomic_add on peer semaphore. For flush: GPU waits for queue head ≥ tail; CPU calls ibv_poll_cq. Figure 7 shows the complete workflow.
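
A condensed sketch of such a GPU-to-CPU request queue (single-producer for simplicity; field names are hypothetical, and the verbs call is left as a comment):

```cuda
#include <cstdint>

// Transfer descriptor: written by the GPU, consumed by the CPU proxy.
struct Request { uint64_t srcOff, dstOff, bytes; };

struct Fifo {
    Request* slots;            // ring buffer in cudaMallocManaged memory
    volatile uint64_t* head;   // next slot to produce (GPU side)
    volatile uint64_t* tail;   // next slot to consume (CPU side)
    int capacity;
};

// GPU side of put(): one designated thread enqueues a descriptor and returns;
// no GPU threads busy-wait on the transfer itself.
__device__ void devicePut(Fifo f, Request r) {
    uint64_t slot = *f.head;
    while (slot - *f.tail >= (uint64_t)f.capacity) { /* queue full: back off */ }
    f.slots[slot % f.capacity] = r;
    __threadfence_system();    // publish the descriptor before moving head
    *f.head = slot + 1;
}

// GPU side of flush(): wait until the CPU has consumed every queued request.
__device__ void deviceFlush(Fifo f) {
    uint64_t target = *f.head;
    while (*f.tail < target) { /* spin */ }
}

// CPU proxy loop: polls the queue and drives the interconnect. The real
// proxy issues ibv_post_send / ibv_poll_cq here.
void proxyLoop(Fifo f, volatile bool* stop) {
    while (!*stop) {
        if (*f.tail < *f.head) {
            Request r = f.slots[*f.tail % f.capacity];
            // ibv_post_send(qp, &wr, &badWr);  // RDMA write for r
            (void)r;
            *f.tail += 1;      // consume; deviceFlush() watches this counter
        }
    }
}
```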

Section 4.2.2 — MemoryChannel Implementation

HB protocol: 16-byte loads/stores for maximum bandwidth; synchronizes per chunk with signal/wait on a GPU semaphore. LL protocol: a flag is appended every N−1 elements; the receiver polls the flag before reading; N is constrained to 4/8/16 bytes (vector instruction granularity); flush is asynchronous (a no-op, since put returns only once the write is already in flight).
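
A sketch of the LL-style flagged transfer for the 8-byte case: 4 bytes of data travel with a 4-byte flag in one vectorized store, so the flag's arrival implies the data's arrival (packing scheme assumed; MSCCL++'s actual layout may differ):

```cuda
#include <cstdint>

struct LLPacket { uint32_t data, flag; };  // 8 bytes, naturally aligned

__device__ void llWrite(LLPacket* dst, uint32_t data, uint32_t flag) {
    // One 8-byte volatile store: data and flag become visible together.
    asm volatile("st.volatile.global.v2.u32 [%0], {%1,%2};"
                 :: "l"(dst), "r"(data), "r"(flag) : "memory");
}

__device__ uint32_t llRead(LLPacket* src, uint32_t flag) {
    uint32_t d, f;
    do {  // poll until the flag matches; the paired data is then valid
        asm volatile("ld.volatile.global.v2.u32 {%0,%1}, [%2];"
                     : "=r"(d), "=r"(f) : "l"(src));
    } while (f != flag);
    return d;
}
```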

Section 4.2.3 — SwitchChannel Implementation

Reduce: multimem.ld_reduce PTX instruction reads from multimem virtual address (points to different physical addresses on each GPU), performs reduce in NVSwitch hardware, returns result. Broadcast: reads local element, calls multimem.st to broadcast to all GPUs via NVSwitch.
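
A sketch of both operations as inline PTX (instruction spellings follow the sm_90 PTX ISA; the qualifiers MSCCL++ actually uses may differ):

```cuda
// mcPtr is a multimem address: one virtual address that NVSwitch fans out to
// a different physical address on every GPU in the group.

// Reduce: a single load returns the sum of the corresponding element on
// every GPU, computed in the switch.
__device__ float switchReduce(const float* mcPtr) {
    float v;
    asm volatile("multimem.ld_reduce.relaxed.sys.global.add.f32 %0, [%1];"
                 : "=f"(v) : "l"(mcPtr) : "memory");
    return v;
}

// Broadcast: a single store is replicated by the switch to every GPU's copy.
__device__ void switchBroadcast(float* mcPtr, float v) {
    asm volatile("multimem.st.relaxed.sys.global.f32 [%0], %1;"
                 :: "l"(mcPtr), "f"(v) : "memory");
}
```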

Section 4.3 — DSL API

Python-based MSCCLang extension. Compiles algorithm description to instruction sequence run by DSL Executor kernel. Supports new MSCCL++ instructions and multi-GPU-per-thread-block access. 3% average overhead vs. Primitive API (up to 18% corner case). Useful for rapid prototyping and algorithm description.

Section 4.4 — Collective API and Kernels

1PA: LL protocol + MemoryChannel, used for very small messages (< few KB) in single-node. 2PA: multiple variants (PortChannel, MemoryChannel LL/HB, SwitchChannel) for single-node up to a few MB. 2PH: LL variant (small multi-node messages, more redundant data, fewer sync steps) and HB variant (large multi-node messages, pipelined, bandwidth-efficient). All algorithms implemented using Primitive API.
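
A hedged sketch of the selection rules above; the concrete byte thresholds are illustrative assumptions standing in for the paper's qualitative "few KB" / "few MB" boundaries:

```cuda
#include <cstddef>

enum Algo { ALGO_1PA, ALGO_2PA, ALGO_2PH_LL, ALGO_2PH_HB };

// Host-side dispatch sketch; not the library's tuned cutoffs.
Algo selectAllReduceAlgo(size_t bytes, int nNodes) {
    if (nNodes > 1)                         // multi-node: hierarchical
        return bytes <= (1u << 20) ? ALGO_2PH_LL : ALGO_2PH_HB;
    if (bytes < (8u << 10))                 // "< few KB": latency-bound
        return ALGO_1PA;
    return ALGO_2PA;                        // single-node; channel variant
                                            // (LL/HB/Switch) chosen separately
}
```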

Section 4.5 — Portability

AMD MI300x in 7 weeks (<10 AMD-specific lines). SwitchChannel in 8 weeks. MSCCL++ design enables rapid hardware support via shallow primitive layer.

Section 5 — Evaluation

See Section 6 above for detailed numbers. Four hardware environments. Key findings: (a) NCCL vs. MSCCL benefit comes from better algorithms; (b) MSCCL vs. MSCCL++ benefit comes from lower-overhead primitives enabling same algorithms to run faster; (c) H100 SwitchChannel +56% bandwidth; (d) vLLM decode 4–15% speedup; (e) DSL 3% slower than Primitive on average.

Section 6 — Related Work

Surveys NCCL, RCCL, MSCCL, NVSHMEM, ARK, CoCoNet, UCX/MPI, SCCL/TACCL/TE-CCL, TensorRT-LLM, vLLM. Positions MSCCL++ as more flexible than NVSHMEM (which lacks collective performance) and more general than TensorRT-LLM/vLLM custom AllReduce (which are single-node only).

Section 7 — Conclusion

MSCCL++ achieves up to 5.4× speedup for collectives and 15% for LLM inference. Key insight: exposing primitive functionalities as user interface enables optimizations previously impossible. In production at Microsoft Azure; adopted by AMD RCCL.


9. Architecture Diagram

┌─────────────────────────────────────────────────────────────────┐
│                  User Application (PyTorch, etc.)                │
└──────────┬──────────────────────┬──────────────────────┬────────┘
           │                      │                      │
    ┌──────▼──────┐       ┌───────▼──────┐      ┌────────▼────────┐
    │ Collective  │       │   DSL API    │      │  Primitive API  │
    │ API (NCCL-  │       │  (MSCCLang)  │      │ (put/signal/    │
    │ compatible) │       │              │      │  wait/flush)    │
    └──────┬──────┘       └───────┬──────┘      └────────┬────────┘
           │                      │                      │
           └──────────────────────┴──────────────────────┘
                                  │
                   ┌──────────────▼──────────────┐
                   │      MSCCL++ Channels       │
                   │                             │
                   │  ┌───────────────────────┐  │
                   │  │ PortChannel           │  │
                   │  │ GPU→CPU queue→NIC     │  │
                   │  │ (IB/RDMA/DMA)         │  │
                   │  └───────────────────────┘  │
                   │  ┌───────────────────────┐  │
                   │  │ MemoryChannel         │  │
                   │  │ Thread-copy P2P       │  │
                   │  │ LL / HB protocols     │  │
                   │  └───────────────────────┘  │
                   │  ┌───────────────────────┐  │
                   │  │ SwitchChannel         │  │
                   │  │ NVSwitch multimem     │  │
                   │  │ reduce/broadcast      │  │
                   │  └───────────────────────┘  │
                   └──────────────┬──────────────┘
                                  │
                   ┌──────────────▼──────────────┐
                   │  Hardware & Interconnects   │
                   │  NVLink / PCIe / xGMI /     │
                   │  InfiniBand / Ethernet      │
                   └─────────────────────────────┘

10. Relevance to DynamICCL

High relevance as context and motivation. Moderate direct applicability.

1. The definitive analysis of NCCL limitations. MSCCL++ provides the most rigorous and publicly available characterization of NCCL's performance limitations: wasted GPU cycles in IB send (many threads blocked for a one-thread task), inflexible synchronization preventing rotating-buffer optimization, and inability to use DMA-copy intra-node. DynamICCL's RL agent operates within NCCL's existing parameter space rather than replacing primitives — but understanding these limitations explains why DynamICCL's optimization can yield gains even within the NCCL framework.

2. Algorithm-message-size relationship directly maps to DynamICCL's action space. MSCCL++ data (Figures 8, 9) quantifies precisely why ring is suboptimal for small messages (high step-count latency) and why all-pairs is faster at small scales. DynamICCL's action space includes ring, tree, collnet_direct, collnet_chain, nvls, nvls_tree, pat — exactly the algorithm variants that MSCCL++ shows have dramatically different performance at different message sizes. The MSCCL++ results provide empirical justification for DynamICCL's multi-algorithm action space.

3. NVLS performance validated. SwitchChannel on H100 achieves 56% higher bandwidth than MemoryChannel and 2.2–2.8× over NCCL. This directly validates the importance of DynamICCL having nvls and nvls_tree in its algorithm action space. On H100 clusters with NVSwitch, the RL agent should learn to select nvls/nvls_tree for large messages in single-node collective scenarios.

4. Protocol-level insight. MSCCL++ LL and HB protocols correspond closely to NCCL's LL, LL128, and Simple protocols. The MSCCL++ observation that LL is faster for small messages (fewer elements per synchronization) and HB for large messages (amortized synchronization over large bulk) directly informs why DynamICCL's protocol dimension (ll/ll128/simple) matters for performance.

5. nChannels and nThreads analogy. NCCL's nChannels controls the degree of parallelism in collective execution (how many ring/tree instances run simultaneously). MSCCL++ Collective API tunes "nChannels" as an environment variable (noted as one of the fine-tuning parameters for NCCL baseline). DynamICCL's nChannels action (1–8) maps to this parameter. MSCCL++ results show that nChannels interacts non-trivially with message size and topology — supporting DynamICCL's approach of learning this jointly.

6. Not a direct competitor. MSCCL++ requires expert developers to write custom algorithms in the Primitive or DSL API. DynamICCL selects among existing NCCL algorithms automatically. MSCCL++ offers a larger optimization space; DynamICCL is more accessible. The two approaches are complementary: MSCCL++ represents the ceiling of what's achievable; DynamICCL automates part of the search for optimal configurations.

7. Microsoft Azure production deployment. MSCCL++ is in production at Microsoft Azure. This is evidence that the NCCL optimization problem is real and high-value, and that ML-infrastructure organizations are willing to invest significant engineering effort in communication library optimization. DynamICCL's RL-based approach offers a more automated path to a subset of the same gains.