This example demonstrates data flow tracking for an application graph with 2 root operators, 2 leaf operators, and 3 paths from the roots to the leaves. It also demonstrates the `add_probe_operator` function to track latency and message counts for intermediate operators.
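To make the "2 roots, 2 leaves, 3 paths" topology concrete, the sketch below enumerates root-to-leaf paths in a toy DAG using only standard Python. This is an illustration, not the Holoscan API, and the operator names (`root1`, `root2`, `mid`, `leaf1`, `leaf2`) are invented for this sketch; the actual example's graph may be wired differently.

```python
# Toy DAG with 2 root operators, 2 leaf operators, and 3 root-to-leaf paths.
# Names are illustrative only; they do not match the example's operators.
graph = {
    "root1": ["mid"],
    "root2": ["mid", "leaf2"],  # root2 also feeds leaf2 directly
    "mid": ["leaf1"],
    "leaf1": [],
    "leaf2": [],
}

def paths_from(node, graph):
    """Enumerate all paths from `node` down to a leaf (depth-first)."""
    if not graph[node]:          # no successors: a leaf operator
        yield [node]
        return
    for succ in graph[node]:
        for tail in paths_from(succ, graph):
            yield [node] + tail

all_paths = [p for root in ("root1", "root2") for p in paths_from(root, graph)]
for p in all_paths:
    print(" -> ".join(p))
# root1 -> mid -> leaf1
# root2 -> mid -> leaf1
# root2 -> leaf2
```

The data flow tracker reports one set of end-to-end metrics per such root-to-leaf path.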
The example showcases two key aspects of data flow tracking:
- Application-level tracking: Using the `DataFlowTracker` API to monitor end-to-end latencies and message counts across the entire application graph.
- Operator-level tracking: Using the `get_data_flow_tracking_label()` API within the `PingMxOp` operator to access message label information during compute, including:
  - Number of paths the message has traversed
  - Path names (comma-separated list of operators)
  - End-to-end latency for each path in milliseconds
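The kind of information carried by a message label can be sketched with a small stand-in class in plain Python. This is a conceptual toy, not the SDK's actual label type or `get_data_flow_tracking_label()` API: each path a message traverses is recorded as a trail of (operator, timestamp) pairs, from which the path count, comma-separated path name, and latency in milliseconds can be read back.

```python
import time

class MessageLabel:
    """Toy stand-in for a data-flow-tracking message label: keeps one
    (operator, timestamp) trail per path the message traverses."""
    def __init__(self):
        self.paths = []          # list of [(op_name, timestamp), ...] trails

    def add_new_path(self, op_name):
        """Start a new path trail at a root operator."""
        self.paths.append([(op_name, time.perf_counter())])

    def stamp(self, op_name):
        """Append the current operator's timestamp to every trail."""
        for trail in self.paths:
            trail.append((op_name, time.perf_counter()))

    def num_paths(self):
        return len(self.paths)

    def get_path_name(self, i):
        # Comma-separated list of operators, as in the bullet above.
        return ",".join(op for op, _ in self.paths[i])

    def current_latency_ms(self, i):
        start = self.paths[i][0][1]
        end = self.paths[i][-1][1]
        return (end - start) * 1000.0

label = MessageLabel()
label.add_new_path("root1")      # message enters at a root operator
label.stamp("mid")               # an intermediate operator stamps the label
label.stamp("leaf1")             # the leaf operator stamps it on arrival
print(label.num_paths())         # 1
print(label.get_path_name(0))    # root1,mid,leaf1
print(round(label.current_latency_ms(0), 3))
```

In the real example, an operator's `compute` method queries the label attached to the incoming message instead of building one by hand.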
Visit the SDK User Guide to learn more about the Data Flow Tracking feature.
## C++ Run instructions

- using deb package install or NGC container:

  ```bash
  /opt/nvidia/holoscan/examples/flow_tracker/cpp/flow_tracker
  ```

- source (dev container):

  ```bash
  ./run launch # optional: append `install` for install tree
  ./examples/flow_tracker/cpp/flow_tracker
  ```

- source (local env):

  ```bash
  ${BUILD_OR_INSTALL_DIR}/examples/flow_tracker/cpp/flow_tracker
  ```
## Python Run instructions

- using python wheel:

  ```bash
  # [Prerequisite] Download example .py file below to `APP_DIR`
  # [Optional] Start the virtualenv where holoscan is installed
  python3 <APP_DIR>/flow_tracker.py
  ```

- from NGC container:

  ```bash
  python3 /opt/nvidia/holoscan/examples/flow_tracker/python/flow_tracker.py
  ```

- source (dev container):

  ```bash
  ./run launch # optional: append `install` for install tree
  python3 ./examples/flow_tracker/python/flow_tracker.py
  ```

- source (local env):

  ```bash
  export PYTHONPATH=${BUILD_OR_INSTALL_DIR}/python/lib
  python3 ${BUILD_OR_INSTALL_DIR}/examples/flow_tracker/python/flow_tracker.py
  ```