Megatron-FSDP is an NVIDIA-developed PyTorch extension that provides a high-performance implementation of Fully Sharded Data Parallelism (FSDP). It offers seamless cross-compatibility with major deep learning frameworks and parallelism libraries, making it easy to scale your PyTorch models across multiple GPUs and nodes.
Megatron-FSDP can provide up to a 25% speedup and 23% memory savings compared to FSDP2.
- Easy Integration: Simple `fully_shard` function for quick model parallelization
- High Performance: Optimized for NVIDIA GPUs with efficient memory management
- Cross-Framework: Works seamlessly with PyTorch, Huggingface Transformers, Megatron-LM, Megatron Bridge and TransformerEngine
- Scalable: Supports both single-node multi-GPU and multi-node distributed training
- Flexible Configuration: Configurable sharding strategies and process groups
- Advanced Bucketing: Data-type aware bucketing system to minimize the overhead of collective operations
- Buffer Management: Zero-copy communication is achieved by reorganizing the storage of parameters and main gradients with the `ParamAndGradBuffer` class
- Communication Overlapping: Improved communication overlap of parameter all-gather and gradient reduce-scatter
- FP8 Mixed Precision with Transformer Engine: Compatibility with Transformer Engine enables efficient FP8 mixed precision training
- Gradient accumulation fusion support with Transformer Engine: Removes the explicit gradient copy to the communication buffer in the backward pass
- SM Usage Reduction with SHARP: FSDP's `All-Gather` (AG) and `Reduce-Scatter` (RS) collectives are designed to overlap with compute kernels. However, standard NCCL communication kernels can consume a significant number of GPU SMs (e.g., 16-32 SMs), "stealing" resources from compute (GEMM) kernels and reducing overall TFLOPS.
- In-Switch Processing: We leverage SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) to offload these collective operations. SHARP performs aggregation and reduction computations directly on the network switches (InfiniBand or NVLink Switch) instead of on the GPU SMs. This dramatically reduces the SM consumption for communication to 1-6 SMs, freeing up GPU resources for compute. It also provides lower communication latency, especially in large, scaled-out workloads.
- Symmetric Optimizations for MNNVL: We support symmetric-based optimizations, introduced in NCCL v2.27, which enable switch offloading for Multi-Node NVLink (MNNVL) systems such as GB200/GB300. This allows the same SM-saving benefits over the high-bandwidth NVLink fabric itself.
- Hierarchical Collectives: When an FSDP sharding domain spans both NVLink and InfiniBand, the library utilizes hierarchical SHARP collectives (e.g., NVL-SHARP + IB-SHARP) to optimize the communication path across the entire system topology.
pip install megatron-fsdp
- PyPI: https://pypi.org/project/megatron-fsdp/
- Source Code: https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/distributed/fsdp/src
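After installation, you can verify that the entrypoints used throughout this README import cleanly (this assumes `fully_shard` is exported at the package top level, like the other two):

```python
# Quick sanity check after `pip install megatron-fsdp`.
from megatron_fsdp import fully_shard, fully_shard_model, fully_shard_optimizer
```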
Transform your PyTorch model to use Fully Sharded Data Parallelism with just a few lines:
import torch
from megatron_fsdp import (
fully_shard_model,
fully_shard_optimizer,
)
"""
Enable FSDP with Megatron-FSDP via the `fully_shard_*` API.
"""
# Shard your model.
model = fully_shard_model(
model,
fsdp_unit_modules=[
YourModelLayerClass,
"import.path.to.model.class.YourModelLayerClass",
],
...
)
# Shard your optimizer.
optimizer = fully_shard_optimizer(
torch.optim.Adam(model.parameters(), lr=1e-3)
)
# Your model is now ready for distributed training!

`fully_shard` / `fully_shard_model` / `fully_shard_optimizer` are simple entrypoints into `MegatronFSDP`.
- No need to call `fully_shard` on all the sub-modules; just pass your sub-module classes or import paths to `fully_shard`!
- Seamlessly preserves the identity of your training loop with only a few lines of code and multiple options for initialization:
  - `fully_shard_*` is a two-line change when sharding the model and optimizer separately.
  - `fully_shard` is a one-line change for previously-initialized models and optimizers (see the sketch below).
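For the one-line path, here is a hedged sketch: the exact keyword names and return convention of `fully_shard` are assumptions inferred from the `fully_shard_model` / `fully_shard_optimizer` examples above, so consult the API reference for the authoritative signature.

```python
import torch
from megatron_fsdp import fully_shard

# Previously-initialized model and optimizer.
model = YourModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One call shards both the model and the optimizer state.
# (Keyword names and return values are assumed, see the note above.)
model, optimizer = fully_shard(
    model,
    optimizer=optimizer,
    fsdp_unit_modules=[YourModelLayerClass],
)
```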
Compare this with FSDP2:
import torch
from torch.distributed.fsdp import fully_shard
# Your existing model and optimizer.
model = YourModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Enable FSDP with FSDP2.
for module in model.modules():
# Sub-Modules to shard.
if isinstance(module, YourModelLayerClass):
fully_shard(module)
fully_shard(model)
# Your model is now ready for distributed training!

Megatron-FSDP is compatible with `torch.compile`, but this feature is still experimental and may introduce performance regressions in some workloads.
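As an illustrative, hedged sketch (not an official recipe), compilation can be applied to the sharded module; whether compiling before or after sharding works better may depend on your workload.

```python
import torch
from megatron_fsdp import fully_shard_model

# Hedged sketch: shard the model with Megatron-FSDP, then compile the wrapper.
# torch.compile support is experimental (see the note above).
model = fully_shard_model(model, fsdp_unit_modules=[YourModelLayerClass])
model = torch.compile(model)
```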
import torch
from megatron_fsdp import (
fully_shard_model,
fully_shard_optimizer,
MixedPrecisionPolicy,
)

`DeviceMesh` simplifies the construction of complex arrangements of devices to support various parallelisms.
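Before building the mesh, initialize the default process group and choose mesh dimension sizes whose product equals the world size. A minimal sketch with hypothetical example sizes (the variable names match the mesh construction below):

```python
import torch

# Initialize the default process group (e.g. when launched with torchrun).
torch.distributed.init_process_group(backend="nccl")
world_size = torch.distributed.get_world_size()

# Hypothetical example sizes; the product of all mesh dimensions must equal world_size.
dp_outer_size, cp_size, tp_size = 2, 1, 2
dp_shard_size = world_size // (dp_outer_size * cp_size * tp_size)
assert dp_outer_size * dp_shard_size * cp_size * tp_size == world_size
```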
from torch.distributed.device_mesh import DeviceMesh
# Initialize DeviceMesh.
device_mesh = torch.distributed.device_mesh.init_device_mesh(
"cuda",
mesh_shape=(dp_outer_size, dp_shard_size, cp_size, tp_size),
mesh_dim_names=("dp_outer", "dp_shard", "cp", "tp"),
)
# Only relevant when using HSDP, where we also need the full DP group for data parallelism.
# This sub-mesh can be provided to distributed samplers or dataloaders.
device_mesh[("dp_outer", "dp_shard")]._flatten("dp")
# Only required if using CP. Otherwise, just pass dp_shard to FSDP.
device_mesh[("dp_shard", "cp")]._flatten("dp_shard_cp")
# Only required if using HSDP. Otherwise, don't pass hybrid_fsdp_group.
device_mesh[("dp_outer", "dp_shard", "cp")]._flatten("hsdp")
hsdp_group = device_mesh["hsdp"].get_group()
# Initialize DeviceMesh for expert parallel (EP) modules when using FSDP + EP.
expt_device_mesh = torch.distributed.device_mesh.init_device_mesh(
"cuda",
mesh_shape=(dp_outer_size, expt_dp_shard_size, expt_tp_size),
mesh_dim_names=("dp_outer", "dp_shard_cp", "tp"),
)
expt_device_mesh[("dp_outer", "dp_shard_cp")]._flatten("hsdp")
hsdp_expt_group = expt_device_mesh["hsdp"].get_group()

The following call wraps the model in a `MegatronFSDP` class that schedules the sharding lifecycle of the model parameters and gradients during training and inference.
model = fully_shard_model(
# PyTorch (Root) Module
model,
# Sharded Modules
fsdp_unit_modules=[...],
# Device Mesh
device_mesh=device_mesh,
# Always required for FSDP or HSDP.
dp_shard_dim="dp_shard_cp",
# Set this required argument to use HSDP instead of FSDP. Otherwise, set this to None.
dp_outer_dim="dp_outer",
# Only required for TP-sensitive models (i.e. Megatron-LM / TransformerEngine)
# or when using DTensor-based TP. Otherwise, set this to None.
tp_dim="tp",
# Only required when using HSDP. Otherwise, set this to None.
hybrid_fsdp_group=hsdp_group,
# Only required when using HSDP + EP. Otherwise, set this to None.
hybrid_fsdp_expt_group=hsdp_expt_group,
# Only required for FSDP + EP. Otherwise, set this to None.
expt_device_mesh=expt_device_mesh,
# FSDP Sharding Strategy: no_shard (0) / optim (1) / optim_grads (2) / optim_grads_params (3)
zero_dp_strategy=3,
outer_dp_sharding_strategy=1,
# Initialize the model on devices in shards to avoid OOM. Requires device("meta")-init for model.
init_model_with_meta_device=True,
# Mixed-Precision Policy for controlling compute and communication precision in Megatron-FSDP.
mixed_precision_policy=MixedPrecisionPolicy(),
# Sync parameters and gradients each step. Allows for gradient transformations after backward pass,
# and synchronizes parameters and gradients across HSDP groups, but deactivates compute-communication
# overlap going into the subsequent training step.
sync_model_each_microbatch=True,
# Preprocess state dict for DCP checkpointing. Required for Torch Distributed Checkpoint.
preproc_state_dict_for_dcp_ckpt=True,
)

The original `torch.nn.Module` can be accessed at `MegatronFSDP.module`.
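For example, to inspect or export the unwrapped model:

```python
# The MegatronFSDP wrapper exposes the original module at `.module`.
unwrapped_module = model.module
assert isinstance(unwrapped_module, torch.nn.Module)
```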
Initialize your optimizer on the Megatron-FSDP model's distributed `Parameter`(s). If your optimizer has already been initialized, either use the `fully_shard` entrypoint, or use `optimizer.add_param_group({"params": model.parameters()})` after resetting your optimizer state via `optimizer.param_groups.clear()` and `optimizer.state.clear()`, as sketched below.
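If you are reusing an optimizer that was constructed before sharding, a minimal sketch of that reset-and-rebind path (using only the calls named above):

```python
# Rebind a previously-constructed optimizer to the sharded model parameters.
optimizer.param_groups.clear()
optimizer.state.clear()
optimizer.add_param_group({"params": model.parameters()})
```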
Otherwise, construct a new optimizer directly on the sharded parameters:

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

`fully_shard_optimizer` modifies your `optimizer.step()`, `optimizer.zero_grad()`, and distributed optimizer parameters so that scheduled FSDP operations are triggered at the right points for Megatron-FSDP.
fully_shard_optimizer(
# PyTorch Optimizer
optimizer,
# Preprocess state dict for DCP checkpointing.
# Required for Torch Distributed Checkpoint.
preproc_state_dict_for_dcp_ckpt=True,
)

Extended arguments to `step()` and `zero_grad()` control these FSDP operations:
optimizer.step(
...,
# Sync all gradients before the optimizer step. Alternatively enabled using
# `sync_model_each_microbatch=True` in MegatronFSDP.
sync_grad_before_optimizer_step=True,
# After `optimizer.step()`, install optimized weights into MegatronFSDP's buffers.
install_optimized_model_weights=True,
)
optimizer.zero_grad(
...,
# Also zero out MegatronFSDP's gradient accumulation buffers.
zero_grad_buffer=True
)

Distributed checkpoints can be saved and loaded using Torch DCP. Alternatively, you can load non-distributed checkpoints before fully-sharding your model with any existing checkpoint utility compatible with PyTorch Modules.
# Save model and optimizer state.
torch.distributed.checkpoint.save(
{"model": model.state_dict(), "optimizer": optimizer.state_dict()},
checkpoint_id=str(CKPT_DIR)
)
# Load model and optimizer state.
ckpt_state_dict = {"model": model.state_dict(), "optimizer": optimizer.state_dict()}
torch.distributed.checkpoint.load(state_dict=ckpt_state_dict, checkpoint_id=str(CKPT_DIR))
# `model.load_state_dict(strict=False)` is only necessary to ignore TE FP8 extra state
# that is missing from the DCP checkpoint but present in TEBaseModule.
# Megatron-FSDP does not support TE FP8 extra state checkpointing with DCP.
model.load_state_dict(ckpt_state_dict["model"], strict=False)
optimizer.load_state_dict(ckpt_state_dict["optimizer"])

Megatron-FSDP's `fully_shard_*` API has a comprehensive set of arguments for fine-tuning your model's performance.
- `fsdp_unit_modules` is a list of sub-module classes or `str` import-paths associated with modules that you want `MegatronFSDP` to fully shard.
  - Required if `1`, `2`, or `3` are specified as the sharding strategy. Defaults to `None`, in which case Megatron-FSDP will replicate the parameters similar to DDP.
- `zero_dp_strategy` (and `outer_dp_sharding_strategy`) configure different degrees of zero-redundancy data parallelism as described in ZeRO (Zero Redundancy Optimizer). It reduces CUDA memory utilization during model training by distributing model parameters, gradients, and optimizer states across multiple devices in the DP `ProcessGroup`, and collectively communicating subsets of parameters and gradients to specific devices when needed for computation or differentiation. More aggressive sharding strategies will entail more communication overhead, with `no_shard` being the least memory efficient but most communication efficient, and `optim_grads_params` being the most memory efficient but least communication efficient. Additionally, `outer_dp_sharding_strategy` supports `no_shard` (Hybrid-Sharded Data Parallelism (HSDP)) and `optim` (HFSDP = Fully-Sharded Optimizer State + HSDP, requires `zero_dp_strategy='optim_grads_params'`), after specifying the "outer" DP group (`dp_outer_dim` / `hybrid_fsdp_group`).
  - Default: `optim_grads_params` or `3` for `zero_dp_strategy`, and `no_shard` or `0` for `outer_dp_sharding_strategy`.
  - `0` or `no_shard` implies that your model is not sharded. Similar memory usage to DDP.
  - `1` or `optim` implies that your optimizer state is sharded for distributed optimization. Similar to optimizer state sharding in ZeRO-DP.
  - `2` or `optim_grads` implies that your optimizer state and gradients are sharded. Similar to ZeRO-2.
  - `3` or `optim_grads_params` implies that your optimizer state, gradients, and training parameters are sharded. Similar to ZeRO-3.
- `device_mesh` is a `torch.distributed.DeviceMesh` that informs `MegatronFSDP` of your distributed environment for sharding in conjunction with hardware configuration and other parallelisms. If not provided, `megatron_fsdp.fully_shard(_model)` will build an FSDP DeviceMesh for you automatically.
  - `dp_shard_dim` is the name of the sub-mesh required for FSDP sharding, and is commonly the flattened combination of the data parallel (DP) and context parallel (CP) sub-meshes.
    - When model parameters are replicated across DP-CP during the backward pass, the resultant gradients across DP and CP ranks are reduced simultaneously, normalized by the DP-CP world size. For more information about how ring attention shards the sequence dimension through the attention and non-attention layers of the Transformer, refer to: Ring Attention with Blockwise Transformers for Near-Infinite Context.
  - `dp_outer_dim` is the name of the sub-mesh corresponding to the "outer" DP group, which is required for replication or sharding in HSDP. `fully_shard` will perform HSDP if `dp_outer_dim` is specified.
  - `tp_dim` is the name of the sub-mesh used for tensor parallelism (TP), which is required for `(FSDP, TP)`-strided sharding when using Megatron-LM or Torch-native `DTensor` TP.
    - For more information about tensor parallelism, refer to: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.
  - `hybrid_fsdp_group` is the `ProcessGroup` which contains all ranks in the flattened `dp_shard_dim` and `dp_outer_dim` sub-meshes, used to specify the `(DP-Outer, DP-Shard)` sharded mesh coordinates for the weight and gradient buffers. Required for HSDP.
  - `hybrid_fsdp_expt_group` defines the data-parallel communication group for expert parameters. Required when using HSDP + EP.
- `expt_device_mesh` is another `torch.distributed.DeviceMesh` tailored for the expert parallel (EP) modules in `MegatronFSDP`.
  - `dp_shard_dim` is the name of the sub-mesh required for FSDP sharding of the EP modules, enabling expert data parallelism (EDP).
  - `tp_dim` is the name of the sub-mesh used for expert tensor parallelism (ETP), which is required for `(FSDP, ETP)`-strided sharding when using Megatron-LM or Torch-native `DTensor` ETP.
- `init_model_with_meta_device` has `MegatronFSDP` initialize your `meta`-device model in shards on every CUDA device to avoid OOM when initializing extremely large models that cannot fit on a single device. Users can initialize their model on a `meta` device (`with torch.device('meta'): ...`), and `MegatronFSDP` will further shard and initialize the model parameters layer-by-layer adhering to the customizable `module.reset_parameters` method, which prevents the entire model from being allocated in memory at any point during runtime. (See the sketch after this list.)
  - Defaults to `False`.
  - Note that the `device` argument, which installs your model on a specific device or rank, will be deactivated when `init_model_with_meta_device=True`.
- `mixed_precision_policy` takes a `megatron_fsdp.MixedPrecisionPolicy` that configures mixed-precision compute and communication for Megatron-FSDP (see the sketch after this list). Configuration options include:
  - `main_params_dtype` controls the data-type for parameters used in distributed optimization or quantization.
    - Defaults to `torch.float32`.
    - If set to `None`, the native model compute parameter data-type will be utilized.
    - Requires specification (cannot be `None`) when using `FP8` parameters with Megatron-FSDP.
  - `main_grads_dtype` controls the data-type for gradients used in distributed optimization.
    - Defaults to `None`, in which case the model's native gradient data-type will be utilized.
    - While `torch.float32` (or higher) is recommended for accuracy at scale, as `main_grads_dtype` controls the data-type for gradient accumulation, `None` is more flexible and uses pre-determined parameter gradient logic in mixed-precision scenarios, such as `BF16` for `FP8` / `FP4` parameters quantized via TransformerEngine.
  - `grad_comm_dtype` controls the data-type for gradient communications (RS / AR) when reducing gradients. A lower-precision `grad_comm_dtype` improves (communication) performance, but may increase memory utilization or sacrifice gradient precision in certain cases.
    - Defaults to `None`, in which case the `main_grads_dtype` data-type will be utilized, and no additional memory is allocated when `grad_comm_dtype == main_grads_dtype`.
    - If using HSDP (either DP-Replicate or DP-Outer in `outer_dp_sharding_strategy`), `no_shard`, `optim`, or a `FixedPoolAllocator` (`fsdp_double_buffer`), allocating `dtype`-custom gradient communication buffers (per FSDP group) adds memory overhead of up to 10% or more, and users should consider the performance-memory trade-off when using this feature.
    - If using NCCL UBR v2.27+ (`nccl_ub=True`), gradient reduction may be performed in high precision depending on the network domain (NVLink or IB), and can enable mixed-precision communication and accumulation, e.g. setting `grad_comm_dtype` to `BF16` can support `FP32` reduction even though we have `BF16` input and output communication buffers. Otherwise, gradients will be reduced in `grad_comm_dtype` (and accumulated in `main_grads_dtype`) as usual.
- `overlap_grad_reduce` and `overlap_param_gather` will overlap gradient `reduce-scatter` and parameter `all-gather` group communications with backward and forward compute via asynchronous calls and pre-fetching. (In the case of `no_shard`, parameters are not gathered but the gradient `all-reduce` is overlapped.)
  - Both default to `True`.
- `sync_model_each_microbatch` will trigger a `wait(MegatronFSDP.finish_grad_sync())` on gradient reduction, parameter de-allocation, and optimizer parameter / gradient installation (in preparation for `optimizer.step()`) after every forward-backward pass. When using HSDP, parameters and gradients will be all-gathered and reduced respectively on the "outer" DP group each training step instead of each optimization cycle. This behavior is desirable for a transparent and user-friendly sharded training loop where post-backward transformations on the gradient and a clean compute / memory state are necessary within and between training iterations, but it hurts performance in situations where optimization is delayed (e.g. gradient accumulation) and the communications of the previous training iteration could be overlapped with the compute of the next training iteration. It will also override `is_last_microbatch` / `microbatch_count` logic in `MegatronFSDP`.
  - Defaults to `True` for `fully_shard`, but defaults to `False` when using the `MegatronFSDP` class directly.
  - Can also be controlled with the `MegatronFSDP.sync()` context manager, or through invoking `MegatronFSDP.set_model_auto_sync(bool)`.
  - WARNING: When this synchronization feature is activated in conjunction with the `no_shard` / `0` or `optim` / `1` sharding strategies, the user is responsible for calling `MegatronFSDP.zero_grad_buffer()` or `optimizer.zero_grad()` after the subsequent forward-backward pass. This is because un-sharded gradients are all-reduced directly into the gradient accumulation buffer, and this buffer should not be all-reduced more than once per optimization cycle! Analogous to the justification for the `no_sync()` API for PyTorch DistributedDataParallel.
- `enable_fine_grained_param_gather` modifies FSDP to all-gather parameters with per-Module granularity instead of collectively unsharding all sub-modules of a unit module in Megatron-FSDP.
  - Defaults to `False`.
- `keep_fp8_transpose_cache` will keep the FP8 transpose cache when using `MegatronFSDP`. This option incurs (number of parameters × 1 byte) of memory overhead, but can skip the weight transpose operation in the backward pass. This feature provides no benefit starting from the Blackwell architecture.
  - Defaults to `False`.
- `use_decoupled_grad` installs the reduced gradient into a separate buffer: `Parameter.decoupled_grad`. This buffer is utilized by specific optimizers, such as TransformerEngine's `FusedAdam`, and can be used to temporarily store your gradient for custom `torch.nn.Optimizer`(s).
  - Defaults to `False`.
  - Required for `transformer_engine.pytorch.optimizers.FusedAdam`.
- `nccl_ub` will allocate and register the NCCL userbuffer for the param and grad buffers. This option enables an SM-efficient NCCL algorithm that can improve the performance of overlapped computations. This flag is much more effective when used together with SHARP if the FSDP communication includes both NVL and IB domains. Enabling this option causes additional memory overhead due to the requirement to enable the `fsdp_double_buffer` option.
  - Only effective when used with Megatron-Core.
  - Defaults to `False`.
  - By default we try to use NCCL window (symmetric) registration if it is available. If not, it falls back to conventional local registration.
- `fsdp_manual_registration` will manually register the FSDP communication buffers with the NCCL user buffer. For symmetric registration with large models, the registration itself can take a significant amount of time. This option minimizes the number of registration calls to reduce the registration time. However, with this option enabled, you need to manually call the `ParamAndGradBuffer.manual_buffer_registration()` function after the first iteration. This is already implemented in the Megatron-LM training loop. In other use cases, users are expected to call this function themselves.
  - This is an example of the required modification in the training loop:

        def train(...):
            ...
            # After the first iteration, the user needs to call the
            # ParamAndGradBuffer.manual_buffer_registration() function in the training loop.
            if iteration == start_iteration + 1:
                if isinstance(model, MegatronFSDP) and model.ddp_config.fsdp_manual_registration:
                    param_and_grad_buffer = getattr(model, "param_and_grad_buffer", None)
                    if param_and_grad_buffer is not None:
                        param_and_grad_buffer.manual_buffer_registration()

  - Only effective when used with Megatron-Core.
  - This option is only effective when `nccl_ub` is enabled.
  - Defaults to `False`, but will be automatically enabled in Megatron-LM.
- `disable_symmetric_registration` will disable NCCL window (i.e. symmetric) registration when using `nccl_ub`.
  - Defaults to `False`.
- `fsdp_double_buffer` will use persistently allocated double buffers for the temporarily-defined memory needed in `MegatronFSDP` communications. Having persistent double buffers may increase peak VRAM utilization, but is required to register NCCL user buffers (`nccl_ub=True`) for `MegatronFSDP`. Currently, this is only supported for simple repetitive model structures such as GPT.
  - Defaults to `False`. Automatically overridden to `True` when `nccl_ub` is enabled.
- `preproc_state_dict_for_dcp_ckpt` adds `model.state_dict()` and `optimizer.state_dict()` post-hooks that modify the model and optimizer state in preparation for `torch.distributed.checkpoint.{save,load}` (Torch DCP) checkpointing. Specifically, it adds `__create_write_items__` and `__create_chunk_list__` methods to Tensors utilized by Torch DCP to redistribute parameters when saving and loading model and optimizer checkpoints. Can be deactivated should the user need a custom distributed checkpointing strategy.
  - Defaults to `True`.
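To make two of the options above concrete, here is a hedged sketch of constructing a `MixedPrecisionPolicy` with the field names documented above (the specific dtype choices are illustrative, not recommendations):

```python
import torch
from megatron_fsdp import MixedPrecisionPolicy, fully_shard_model

# Illustrative policy: FP32 main weights and gradients; grad_comm_dtype=None reuses
# main_grads_dtype for gradient communication (no extra buffers).
mp_policy = MixedPrecisionPolicy(
    main_params_dtype=torch.float32,
    main_grads_dtype=torch.float32,
    grad_comm_dtype=None,
)
```

And a hedged sketch of the `init_model_with_meta_device` flow described above, assuming the `YourModel` / `YourModelLayerClass` placeholders and the `device_mesh` built in the earlier example:

```python
# Construct the model on the meta device so no real memory is allocated up front.
with torch.device("meta"):
    model = YourModel()

# Megatron-FSDP materializes and shards parameters layer-by-layer via module.reset_parameters().
model = fully_shard_model(
    model,
    fsdp_unit_modules=[YourModelLayerClass],
    device_mesh=device_mesh,
    dp_shard_dim="dp_shard_cp",
    init_model_with_meta_device=True,
    mixed_precision_policy=mp_policy,
)
```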
🧮 Using Megatron-FSDP with TransformerEngine
Megatron-FSDP natively supports mixed-precision activations and parameter sharding in conjunction with TransformerEngine.
- Within the `transformer_engine.pytorch.autocast(recipe: transformer_engine.common.recipe.Recipe)` context, model activations are converted based on the recipe.
- Within the `transformer_engine.pytorch.quantized_model_init(recipe: transformer_engine.common.recipe.Recipe)` context, TransformerEngine native modules (e.g. `transformer_engine.pytorch.TransformerLayer`) have their parameters converted based on the recipe.
  - Requires FP8 model activations, i.e. `transformer_engine.pytorch.autocast`.
# FP8 Recipe
fp8_recipe = transformer_engine.common.recipe.MXFP8BlockScaling(
fp8_format=transformer_engine.common.recipe.Format.HYBRID,
)
# Construct TransformerEngine model with FP8 parameters.
with transformer_engine.pytorch.quantized_model_init(
recipe=fp8_recipe,
# Needed for FP8 parameters with Megatron-FSDP.
preserve_high_precision_init_val=True,
):
te_model = transformer_engine.pytorch.TransformerLayer(...)
# Fully-shard the model.
mfsdp_model = fully_shard_model(
module=te_model,
fsdp_unit_modules=[transformer_engine.pytorch.TransformerLayer],
# Only FSDP / ZeRO-3 supports FP8 parameters.
zero_dp_strategy=3,
# FP32 main weights needed for FP8 parameters.
mixed_precision_policy=MixedPrecisionPolicy(
main_params_dtype=torch.float32
),
# Needed for select FP8 recipes.
keep_fp8_transpose_cache=True,
)
# Evaluate and differentiate the model with FP8 activations.
with transformer_engine.pytorch.autocast(recipe=fp8_recipe):
mfsdp_model(x).sum().backward()

ℹ️ TransformerEngine kernels impose a fair number of configuration constraints when using FP8-quantized parameters, such as using fused QKV parameters or defining activations and parameters with shapes compatible with FP8 cuBLAS kernels on supported NVIDIA hardware. To properly initialize `TransformerLayer`, you can refer to the toy model used in our FP8 unit tests: Megatron-LM/tests/unit_tests/distributed/fsdp/test_mfsdp_fully_shard.py::TestMegatronFsdpFullyShard::test_fully_shard_te_quantized.