{"id":13325,"date":"2019-01-22T06:00:21","date_gmt":"2019-01-22T14:00:21","guid":{"rendered":"https:\/\/developer.nvidia.com\/blog\/?p=13325"},"modified":"2023-04-03T12:40:00","modified_gmt":"2023-04-03T19:40:00","slug":"gpu-telemetry-nvidia-dcgm","status":"publish","type":"post","link":"https:\/\/developer.nvidia.com\/blog\/gpu-telemetry-nvidia-dcgm\/","title":{"rendered":"Setting Up GPU Telemetry with NVIDIA Data Center GPU Manager"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-13331 alignright\" src=\"https:\/\/developer.nvidia.com\/blog\/wp-content\/uploads\/2019\/01\/2018-05-04-DGX-2-06-625x390.png\" alt=\"\" width=\"625\" height=\"390\" \/>Understanding GPU usage provides important insights for IT administrators managing a data center. Trends in GPU metrics correlate with workload behavior and make it possible to optimize resource allocation, diagnose anomalies, and increase overall data center efficiency.\u00a0<a href=\"https:\/\/developer.nvidia.com\/data-center-gpu-manager-dcgm\" target=\"_blank\" rel=\"noopener noreferrer\">NVIDIA Data Center GPU Manager<\/a>\u00a0(DCGM) offers a comprehensive tool suite to simplify administration and monitoring of NVIDIA Tesla-accelerated data centers.<\/p>\n<p>One key capability provided by DCGM is GPU telemetry. DCGM includes sample code for integrating GPU metrics with open source telemetry frameworks such as <a href=\"https:\/\/collectd.org\" target=\"_blank\" rel=\"noopener noreferrer\">collectd<\/a>\u00a0and <a href=\"https:\/\/prometheus.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Prometheus<\/a>. The DCGM API can also be used to write custom code that can integrate with site specific telemetry frameworks.<\/p>\n<p>Let\u2019s look at\u00a0how to integrate DCGM with collectd on a CentOS system, making GPU telemetry data available alongside your existing telemetry data.<\/p>\n<h2 id=\"h.47sq9o3l3qko\"  >Integrating DCGM with collectd<a href=\"#h.47sq9o3l3qko\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h2>\n<h3 id=\"h.11c21tn3ismt\"  >Prerequisites<a href=\"#h.11c21tn3ismt\" class=\"heading-anchor-link\"><i class=\"fas fa-link\"><\/i><\/a><\/h3>\n<p>First you need to install and configure collectd and DCGM.<\/p>\n<p>If collectd is not already present on the system, you\u00a0can install it from the EPEL repository. (Unless otherwise specified, all command line examples need to be run as a superuser.)<\/p>\n<pre class=\"prettyprint\"># yum install -y epel-release\r\n# yum install -y collectd<\/pre>\n<p>DCGM is available free-of-charge from the <a href=\"https:\/\/developer.nvidia.com\/data-center-gpu-manager-dcgm\" target=\"_blank\" rel=\"noopener noreferrer\">NVIDIA website<\/a>. Download the x86_64 RPM package and install it.<\/p>\n<pre class=\"prettyprint\"># rpm --install datacenter-gpu-manager-1.5.6-1.x86_64.rpm<\/pre>\n<p>The DCGM host engine service (nv-hostengine) needs to be running in order to collect the GPU telemetry data.<\/p>\n<pre class=\"prettyprint\"># nv-hostengine<\/pre>\n<p>Verify the DCGM host engine service is running by using it to query the current temperature of the GPUs. Note, this command can be run as a non-superuser.<\/p>\n<pre class=\"prettyprint\">$ dcgmi dmon -e 150 -c 1<\/pre>\n<p>If you want to\u00a0automatically start the host engine when the system starts, configure a DCGM systemd service. 
If you want the host engine to start automatically when the system boots, configure a DCGM systemd service; otherwise, the host engine needs to be started manually whenever the system restarts. Save a unit file like the following (for example, as `/etc/systemd/system/dcgm.service`), then activate it with `systemctl daemon-reload` followed by `systemctl enable --now dcgm`.

```
[Unit]
Description=DCGM service

[Service]
User=root
PrivateTmp=false
ExecStart=/usr/bin/nv-hostengine -n
Restart=on-abort

[Install]
WantedBy=multi-user.target
```

## Setting up the DCGM collectd plugin

Now that you've successfully installed collectd and DCGM, the real work to integrate the two begins. The DCGM package includes a sample collectd plugin implemented using the DCGM Python bindings. The plugin needs to be installed and configured before collectd can use it.

First, copy the DCGM Python bindings and the collectd plugin to the collectd plugin directory. The DCGM collectd plugin installs into a subdirectory to separate it from other collectd plugins.

```
# mkdir /usr/lib64/collectd/dcgm
# cp /usr/src/dcgm/bindings/*.py /usr/lib64/collectd/dcgm
# cp /usr/src/dcgm/samples/scripts/dcgm_collectd_plugin.py /usr/lib64/collectd/dcgm
```

Next, verify that the plugin is configured with the correct location of the DCGM library (`libdcgm.so`). The DCGM library is installed in `/usr/lib64` by default on CentOS systems. Edit `/usr/lib64/collectd/dcgm/dcgm_collectd_plugin.py` so that the variable `g_dcgmLibPath` is set to `/usr/lib64`.

```
# sed -i "s|g_dcgmLibPath = '/usr/lib'|g_dcgmLibPath = '/usr/lib64'|" /usr/lib64/collectd/dcgm/dcgm_collectd_plugin.py
```

The DCGM plugin is initially configured to collect a number of generally useful GPU metrics. You can customize the list of metrics by modifying the `g_publishFieldIds` variable. You'll find the names and meanings of the available fields in `/usr/src/dcgm/bindings/dcgm_fields.py`.

## Configuring collectd

Once the DCGM collectd plugin has been set up, collectd still needs to be configured to recognize the new metrics.

First, configure collectd to recognize the DCGM plugin by adding `dcgm.conf` to `/etc/collectd.d`.

```
LoadPlugin python
<Plugin python>
    ModulePath "/usr/lib64/collectd/dcgm"
    LogTraces true
    Interactive false
    Import "dcgm_collectd_plugin"
</Plugin>
```
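At this point you can check that collectd parses the new configuration without starting the daemon. Stock collectd builds support a `-t` flag that reads the configuration and exits (confirm with `collectd -h` on your system); note that the type definitions added in the next step are still required before the plugin's metrics are accepted.

```
# collectd -t
```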
Second, add a corresponding [collectd type](https://collectd.org/documentation/manpages/types.db.5.shtml) for each of the GPU fields defined in `/usr/lib64/collectd/dcgm/dcgm_collectd_plugin.py`. Assuming no additional fields were defined, append the following type information to `/usr/share/collectd/types.db`.

```
### DCGM types
ecc_dbe_aggregate_total            value:GAUGE:0:U
ecc_sbe_aggregate_total            value:GAUGE:0:U
ecc_dbe_volatile_total             value:GAUGE:0:U
ecc_sbe_volatile_total             value:GAUGE:0:U
fb_free                            value:GAUGE:0:U
fb_total                           value:GAUGE:0:U
fb_used                            value:GAUGE:0:U
gpu_temp                           value:GAUGE:U:U
gpu_utilization                    value:GAUGE:0:100
mem_copy_utilization               value:GAUGE:0:100
memory_clock                       value:GAUGE:0:U
memory_temp                        value:GAUGE:U:U
nvlink_bandwidth_total             value:GAUGE:0:U
nvlink_recovery_error_count_total  value:GAUGE:0:U
nvlink_replay_error_count_total    value:GAUGE:0:U
pcie_replay_counter                value:GAUGE:0:U
pcie_rx_throughput                 value:GAUGE:0:U
pcie_tx_throughput                 value:GAUGE:0:U
power_usage                        value:GAUGE:0:U
power_violation                    value:GAUGE:0:U
retired_pages_dbe                  value:GAUGE:0:U
retired_pages_pending              value:GAUGE:0:U
retired_pages_sbe                  value:GAUGE:0:U
sm_clock                           value:GAUGE:0:U
thermal_violation                  value:GAUGE:0:U
total_energy_consumption           value:GAUGE:0:U
xid_errors                         value:GAUGE:0:U
```

If you defined additional GPU fields when installing the DCGM collectd plugin, a corresponding collectd type needs to be added to the list above. The Python field name in `/usr/lib64/collectd/dcgm/dcgm_collectd_plugin.py` and the collectd type in `/usr/share/collectd/types.db` are related, but different; to correlate the two variants of a metric name, use the field ID defined in `/usr/src/dcgm/bindings/dcgm_fields.py`. For example, `DCGM_FI_DEV_GPU_TEMP` represents the GPU temperature in `dcgm_collectd_plugin.py`, and looking up this field in `dcgm_fields.py` shows that it corresponds to field ID 150. The list of collectd-visible field names can be obtained from the command `dcgmi dmon -l`; there, the type name corresponding to field ID 150 is `gpu_temp`. (A short script for dumping this mapping follows Figure 1.)

## (Re-)Start collectd

Restart collectd (for example, with `systemctl restart collectd`) so that it loads the DCGM plugin. When DCGM is successfully integrated with collectd, output similar to what is shown below should be reported by collectd when it starts.

```
collectd[25]: plugin_load: plugin "python" successfully loaded.
...
collectd[25]: uc_update: Value too old: name = f707be0c326d/dcgm_collectd-GPU-ace28880-3f61-dbc4-1f8c-0dc7916f3108/gpu_temp-0; value time = 1539719060.000; last cache update = 1539719060.000;
collectd[25]: uc_update: Value too old: name = f707be0c326d/dcgm_collectd-GPU-ace28880-3f61-dbc4-1f8c-0dc7916f3108/power_usage-0; value time = 1539719060.000; last cache update = 1539719060.000;
collectd[25]: uc_update: Value too old: name = f707be0c326d/dcgm_collectd-GPU-ace28880-3f61-dbc4-1f8c-0dc7916f3108/ecc_sbe_volatile_total-0; value time = 1539719060.000; last cache update = 1539719060.000;
collectd[25]: uc_update: Value too old: name = f707be0c326d/dcgm_collectd-GPU-ace28880-3f61-dbc4-1f8c-0dc7916f3108/ecc_dbe_volatile_total-0; value time = 1539719060.000; last cache update = 1539719060.000;
collectd[25]: uc_update: Value too old: name = f707be0c326d/dcgm_collectd-GPU-ace28880-3f61-dbc4-1f8c-0dc7916f3108/ecc_sbe_aggregate_total-0; value time = 1539719060.000; last cache update = 1539719060.000;
...
```

The GPU data provided by DCGM can be visualized alongside the rest of your monitoring data, as shown in Figure 1.

*Figure 1. Example output from collectd, visualized by Grafana*
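If you extend `g_publishFieldIds`, you need each field's numeric ID to connect its Python name with its collectd type, and the bindings themselves can print that mapping. The following is a minimal sketch, assuming the bindings were copied to `/usr/lib64/collectd/dcgm` as shown earlier; it relies only on the `DCGM_FI_DEV_` naming convention visible in `dcgm_fields.py`.

```python
# list_dcgm_fields.py: print DCGM field IDs next to their Python constant names.
import sys

# The DCGM Python bindings were copied here during plugin installation;
# adjust the path if your layout differs.
sys.path.insert(0, "/usr/lib64/collectd/dcgm")

import dcgm_fields  # defines constants such as DCGM_FI_DEV_GPU_TEMP = 150

for name in sorted(dir(dcgm_fields)):
    if name.startswith("DCGM_FI_DEV_"):
        value = getattr(dcgm_fields, name)
        if isinstance(value, int):
            print("%6d  %s" % (value, name))
```

Filtering the output (for example, piping it through `grep TEMP`) gives the field ID to cross-reference against `dcgmi dmon -l` when naming the collectd type.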
## Summary

Integrating [DCGM](https://developer.nvidia.com/data-center-gpu-manager-dcgm) with the collectd telemetry framework provides IT administrators with a comprehensive view of GPU usage. If you are already using collectd, the information in this post enables you to put GPU monitoring on the same pane of glass as the rest of your telemetry data. If you are using another telemetry framework, see Chapter 4 of the DCGM User's Guide for more information on how to integrate GPU metrics into your solution.

GPU telemetry only scratches the surface of the [full feature set](https://developer.nvidia.com/blog/nvidia-data-center-gpu-manager-cluster-administration/) of DCGM, which also includes active health checks, diagnostics, and management and accounting capabilities.
Design","link":"https:\/\/developer.nvidia.com\/blog\/category\/simulation-modeling-design\/","id":503,"data_source":""},"nv_translations":[],"jetpack_shortlink":"https:\/\/wp.me\/pcCQAL-3sV","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/13325","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/users\/463"}],"replies":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/comments?post=13325"}],"version-history":[{"count":17,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/13325\/revisions"}],"predecessor-version":[{"id":39505,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/posts\/13325\/revisions\/39505"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/media\/11067"}],"wp:attachment":[{"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/media?parent=13325"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/categories?post=13325"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/tags?post=13325"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/developer-blogs.nvidia.com\/wp-json\/wp\/v2\/coauthors?post=13325"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}