
NVIDIA NetQ 3.2 Release Notes

Download all 3.2 release notes as .xls

3.2.1 Release Notes

Open issues in 3.2.1

Issue ID: 2556205
Description: NetQ CLI: You cannot remove a notification channel when threshold-based event rules are configured.
Affects: 3.2.1-3.3.0
Fixed: 3.3.1

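For context, a minimal sketch of the operation this issue affects; the channel name and threshold rule ID are placeholders, and removing the threshold (TCA) rules before the channel is an assumption about the likely workaround, not something stated in this release note:

# Hypothetical channel name; this delete fails while TCA rules still reference the channel.
netq del notification channel slack-netq-events
# Assumed workaround: remove the threshold-based (TCA) rules first, then delete the channel.
netq del tca tca_id TCA_SENSOR_TEMPERATURE_UPPER_0
netq del notification channel slack-netq-events
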
Issue ID: 2556006 (NETQ-8311)
Description: NetQ Infra: Customers with cloud deployments who wish to use the lifecycle management (LCM) feature in NetQ 3.3.0 must upgrade their NetQ Cloud Appliance or Virtual Machine as well as the NetQ Agent.
Affects: 3.2.1
Fixed: 3.3.0-3.3.1

Issue ID: 2555854 (NETQ-8245)
Description: NetQ Agent: If a NetQ Agent is downgraded to version 3.0.0 from any later release, the default commands file in /etc/netq/commands/ must also be updated to prevent the NetQ Agent from becoming rotten.
Affects: 3.0.0-3.3.1

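If you suspect an agent has gone rotten after such a downgrade, a minimal check from the NetQ CLI is to list rotten agents and, once the commands file has been updated, restart the agent on the affected switch (the exact edits to the commands file are not covered here):

# On the NetQ server or any host with the NetQ CLI: list agents with no recent heartbeat.
netq show agents rotten
# On the affected switch, after updating /etc/netq/commands/: restart the agent.
sudo netq config restart agent
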
Issue ID: 2553453 (NETQ-7318)
Description: The netqd daemon logs a traceback to /var/log/netqd.log when the OPTA server is unreachable and netq show commands are run.
Affects: 3.1.0-3.3.1

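To confirm you are hitting this issue rather than a different failure, a simple check (generic shell commands, not a NetQ-specific procedure) is to look for the traceback and verify which server the CLI is configured to reach:

# Look for the most recent Python traceback logged by netqd.
sudo grep -i -A 5 traceback /var/log/netqd.log | tail -n 30
# Confirm the OPTA/collector server and port configured in /etc/netq/netq.yml.
sudo grep -E 'server|port' /etc/netq/netq.yml
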
Issue ID: 2551545 (NETQ-6640)
Description: Infra: Rarely, after a node is restarted, Kubernetes pods do not synchronize properly and the output of netq show opta-health shows failures. Node operation is not functionally impacted. You can safely clear the failures by running:

kubectl get pods | grep MatchNodeSelector | cut -f1 -d' ' | xargs kubectl delete pod

To work around the issue, do not label nodes using the API. Instead, label nodes through local configuration using the kubelet flag --node-labels.
Affects: 3.1.0-3.3.1

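Before deleting anything, it is reasonable to confirm that the failures reported by netq show opta-health correspond to pods stuck in the MatchNodeSelector state (a cautious pre-check, not part of the published workaround):

# Check overall NetQ platform health.
netq show opta-health
# List only the pods stuck in the MatchNodeSelector state before deleting them.
kubectl get pods | grep MatchNodeSelector
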
Issue ID: 2549649 (NETQ-5737)
Description: NetQ UI: Warnings might appear during the post-upgrade phase of a Cumulus Linux switch upgrade job. They are caused by services that have not yet been restored by the time the job completes. Cumulus Networks recommends waiting five minutes, creating a network snapshot, and comparing it to the pre-upgrade snapshot. If the comparison shows no differences for the services, you can ignore the warnings. If there are differences, troubleshoot the relevant service(s).
Affects: 3.0.0-3.3.1

Issue ID: 2549319 (NETQ-5571)
Description: NetQ UI: The legend and segment colors on the Switches and Upgrade History card graphs sometimes do not match. These cards appear on the lifecycle management dashboard (Manage Switch Assets view). Hover over the graph to view the correct values.
Affects: 3.0.0-3.3.1

Issue ID: 2549246 (NETQ-5529)
Description: NetQ UI: Snapshot comparison cards might not render correctly after you navigate away from a workbench and then return to it. If you are viewing the Snapshot comparison card(s) on a custom workbench, refresh the page to reload the data. If you are viewing them on the Cumulus Default workbench, you must recreate the comparison(s) after refreshing the page.
Affects: 2.4.0-3.2.1
Fixed: 3.3.0-3.3.1

Issue ID: 2543867 (NETQ-3451)
Description: NetQ UI: If either the hostname or the ASN of a BGP peer is invalid, the full-screen BGP Service card does not provide the ability to open cards for a selected BGP session.
Affects: 2.3.0-2.4.1, 3.0.0-3.3.1

Fixed Issues in 3.2.1

Issue ID: 2553951 (NETQ-7546)
Description: Infra: In an on-premises deployment, the Kafka change logs can fill the NetQ appliance or VM disk space rapidly on systems with a large number of MAC or neighbor entries. If disk usage exceeds 90%, the NetQ service is partially or completely disrupted. To work around this issue, reduce the log cleanup retention setting to 30 minutes by running the following one-liner on your NetQ appliance/VM, or on the master server in a clustered arrangement:

MASTER_IP=`cat /mnt/admin/master_ip` ; topics="netq-app-route-route_key_v1-changelog netq-app-macs-macs_key-changelog netq-app-neighbor-neighbor_key_v1-changelog netq-app-macfdb-macfdb_key_v3-changelog" ; for topic in $topics ; do kubectl exec -it rc/kafka-broker-rc-0 -- kafka-topics --zookeeper $MASTER_IP --topic $topic --alter --config delete.retention.ms=1800000 ; done

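To confirm the new retention value took effect, you can describe one of the topics; this verification step is a suggestion rather than part of the published workaround, and it uses the same topic names listed above:

# Describe one of the changelog topics; the output should include delete.retention.ms=1800000.
MASTER_IP=`cat /mnt/admin/master_ip`
kubectl exec -it rc/kafka-broker-rc-0 -- kafka-topics --zookeeper $MASTER_IP \
  --topic netq-app-macs-macs_key-changelog --describe
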
Affects: 3.2.0

Issue ID: 2553793 (NETQ-7506)
Description: NetQ CLI: For an on-premises deployment, an access_key and secret_key are not needed for the CLI to access the NetQ Collector. When these keys are configured, NetQ assumes the system is a cloud deployment and tries to validate the SSL certificates. This fails because the SSL certificates on NetQ Collectors are usually self-signed. As a result, the CLI fails with the following error:

cumulus@switch:~# netq show agents
Failed to process command. Check /var/log/netqd.log for more details

You also see an error in /var/log/netqd.log similar to this:

2020-10-01T01:44:51.534875+00:00 leaf01 netqd[4782]: ERROR: GET request failed https://st-ts-01:32708/netq/telemetry/v1/object/bgp?count=2000&offset=0
2020-10-01T01:44:51.535251+00:00 leaf01 netqd[4782]: ERROR: HTTPSConnectionPool(host='st-ts-01', port=32708): Max retries exceeded with url: /netq/telemetry/v1/object/bgp?count=2000&offset=0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1056)')))

To resolve the failure, remove the access_key and secret_key from the CLI configuration:

cumulus@switch:~# rm -f /etc/netq/.loginkeys.aes
cumulus@switch:~# rm -f /etc/netq/.login.aes

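After removing the key files, restarting the CLI so it re-reads its configuration is a reasonable follow-up; the collector IP address and VRF below are placeholders, and re-adding the server entry is only needed if your CLI configuration was removed along with the keys:

# Optional: re-point the CLI at the on-premises NetQ Collector (no access_key/secret_key needed).
cumulus@switch:~$ netq config add cli server 192.168.1.254 vrf mgmt port 32708
cumulus@switch:~$ netq config restart cli
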
Affects: 3.2.0

Issue ID: 2553758 (NETQ-7489)
Description: NetQ CLI: When the NetQ Collector is configured with a proxy server for the CLI to access the cloud APIs, the SSL certificate validation fails because the proxy presents its own self-signed certificate. This causes the CLI to fail with the following error:

cumulus@switch:~# netq show agents
Failed to process command. Check /var/log/netqd.log for more details

You also see an error in /var/log/netqd.log similar to this:

2020-10-01T01:44:51.534875+00:00 leaf01 netqd[4782]: ERROR: GET request failed https://st-ts-01:32708/netq/telemetry/v1/object/bgp?count=2000&offset=0
2020-10-01T01:44:51.535251+00:00 leaf01 netqd[4782]: ERROR: HTTPSConnectionPool(host='st-ts-01', port=32708): Max retries exceeded with url: /netq/telemetry/v1/object/bgp?count=2000&offset=0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1056)')))

Two options are available to work around this issue:

* If the NetQ Collector has Internet access, configure the CLI to point to the cloud API instance directly:

cumulus@switch:~# netq config add cli server api.netq.cumulusnetworks.com port 443
cumulus@switch:~# netq config restart cli

* To use the proxy server:

1. Delete the token file. Run sudo rm /tmp/token.aes.
2. Edit the /etc/netq/netq.yml file as follows. The password is entered as cleartext.

netq-cli:
  port: 32708
  server: <cloud-appliance-IP-address>
  vrf: <default/mgmt>
  premises: <customer-premise>
  username: <customer-email-address>
  password: <password>
  opid: <opid-here>

Note: The OPID is not directly visible to the user. File a support ticket (https://cumulusnetworks.com/support/file-a-ticket/) for assistance with completing the configuration.

3. Restart the CLI. Run netq config restart cli.

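Whichever option you choose, rerunning the command that originally failed is a simple connectivity check (a suggestion, not part of the published workaround):

# After netq config restart cli, this should return the agent list instead of an error.
cumulus@switch:~# netq show agents
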
Affects: 3.2.0

3.2.0 Release Notes

Open issues in 3.2.0

Issue ID: 2555854 (NETQ-8245)
Description: NetQ Agent: If a NetQ Agent is downgraded to version 3.0.0 from any later release, the default commands file in /etc/netq/commands/ must also be updated to prevent the NetQ Agent from becoming rotten.
Affects: 3.0.0-3.3.1

Issue ID: 2553951 (NETQ-7546)
Description: Infra: In an on-premises deployment, the Kafka change logs can fill the NetQ appliance or VM disk space rapidly on systems with a large number of MAC or neighbor entries. If disk usage exceeds 90%, the NetQ service is partially or completely disrupted. To work around this issue, reduce the log cleanup retention setting to 30 minutes by running the following one-liner on your NetQ appliance/VM, or on the master server in a clustered arrangement:

MASTER_IP=`cat /mnt/admin/master_ip` ; topics="netq-app-route-route_key_v1-changelog netq-app-macs-macs_key-changelog netq-app-neighbor-neighbor_key_v1-changelog netq-app-macfdb-macfdb_key_v3-changelog" ; for topic in $topics ; do kubectl exec -it rc/kafka-broker-rc-0 -- kafka-topics --zookeeper $MASTER_IP --topic $topic --alter --config delete.retention.ms=1800000 ; done

Affects: 3.2.0
Fixed: 3.2.1-3.3.1

Issue ID: 2553793 (NETQ-7506)
Description: NetQ CLI: For an on-premises deployment, an access_key and secret_key are not needed for the CLI to access the NetQ Collector. When these keys are configured, NetQ assumes the system is a cloud deployment and tries to validate the SSL certificates. This fails because the SSL certificates on NetQ Collectors are usually self-signed. As a result, the CLI fails with the following error:

cumulus@switch:~# netq show agents
Failed to process command. Check /var/log/netqd.log for more details

You also see an error in /var/log/netqd.log similar to this:

2020-10-01T01:44:51.534875+00:00 leaf01 netqd[4782]: ERROR: GET request failed https://st-ts-01:32708/netq/telemetry/v1/object/bgp?count=2000&offset=0
2020-10-01T01:44:51.535251+00:00 leaf01 netqd[4782]: ERROR: HTTPSConnectionPool(host='st-ts-01', port=32708): Max retries exceeded with url: /netq/telemetry/v1/object/bgp?count=2000&offset=0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1056)')))

To resolve the failure, remove the access_key and secret_key from the CLI configuration:

cumulus@switch:~# rm -f /etc/netq/.loginkeys.aes
cumulus@switch:~# rm -f /etc/netq/.login.aes

Affects: 3.2.0
Fixed: 3.2.1-3.3.1

Issue ID: 2553758 (NETQ-7489)
Description: NetQ CLI: When the NetQ Collector is configured with a proxy server for the CLI to access the cloud APIs, the SSL certificate validation fails because the proxy presents its own self-signed certificate. This causes the CLI to fail with the following error:

cumulus@switch:~# netq show agents
Failed to process command. Check /var/log/netqd.log for more details

You also see an error in /var/log/netqd.log similar to this:

2020-10-01T01:44:51.534875+00:00 leaf01 netqd[4782]: ERROR: GET request failed https://st-ts-01:32708/netq/telemetry/v1/object/bgp?count=2000&offset=0
2020-10-01T01:44:51.535251+00:00 leaf01 netqd[4782]: ERROR: HTTPSConnectionPool(host='st-ts-01', port=32708): Max retries exceeded with url: /netq/telemetry/v1/object/bgp?count=2000&offset=0 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1056)')))

Two options are available to work around this issue:

* If the NetQ Collector has Internet access, configure the CLI to point to the cloud API instance directly:

cumulus@switch:~# netq config add cli server api.netq.cumulusnetworks.com port 443
cumulus@switch:~# netq config restart cli

* To use the proxy server:

1. Delete the token file. Run sudo rm /tmp/token.aes.
2. Edit the /etc/netq/netq.yml file as follows. The password is entered as cleartext.

netq-cli:
  port: 32708
  server: <cloud-appliance-IP-address>
  vrf: <default/mgmt>
  premises: <customer-premise>
  username: <customer-email-address>
  password: <password>
  opid: <opid-here>

Note: The OPID is not directly visible to the user. File a support ticket (https://cumulusnetworks.com/support/file-a-ticket/) for assistance with completing the configuration.

3. Restart the CLI. Run netq config restart cli.

Affects: 3.2.0
Fixed: 3.2.1-3.3.1

Issue ID: 2553453 (NETQ-7318)
Description: The netqd daemon logs a traceback to /var/log/netqd.log when the OPTA server is unreachable and netq show commands are run.
Affects: 3.1.0-3.3.1

Issue ID: 2551545 (NETQ-6640)
Description: Infra: Rarely, after a node is restarted, Kubernetes pods do not synchronize properly and the output of netq show opta-health shows failures. Node operation is not functionally impacted. You can safely clear the failures by running:

kubectl get pods | grep MatchNodeSelector | cut -f1 -d' ' | xargs kubectl delete pod

To work around the issue, do not label nodes using the API. Instead, label nodes through local configuration using the kubelet flag --node-labels.
Affects: 3.1.0-3.3.1

Issue ID: 2549649 (NETQ-5737)
Description: NetQ UI: Warnings might appear during the post-upgrade phase of a Cumulus Linux switch upgrade job. They are caused by services that have not yet been restored by the time the job completes. Cumulus Networks recommends waiting five minutes, creating a network snapshot, and comparing it to the pre-upgrade snapshot. If the comparison shows no differences for the services, you can ignore the warnings. If there are differences, troubleshoot the relevant service(s).
Affects: 3.0.0-3.3.1

Issue ID: 2549319 (NETQ-5571)
Description: NetQ UI: The legend and segment colors on the Switches and Upgrade History card graphs sometimes do not match. These cards appear on the lifecycle management dashboard (Manage Switch Assets view). Hover over the graph to view the correct values.
Affects: 3.0.0-3.3.1

Issue ID: 2549246 (NETQ-5529)
Description: NetQ UI: Snapshot comparison cards might not render correctly after you navigate away from a workbench and then return to it. If you are viewing the Snapshot comparison card(s) on a custom workbench, refresh the page to reload the data. If you are viewing them on the Cumulus Default workbench, you must recreate the comparison(s) after refreshing the page.
Affects: 2.4.0-3.2.1
Fixed: 3.3.0-3.3.1

Issue ID: 2543867 (NETQ-3451)
Description: NetQ UI: If either the hostname or the ASN of a BGP peer is invalid, the full-screen BGP Service card does not provide the ability to open cards for a selected BGP session.
Affects: 2.3.0-2.4.1, 3.0.0-3.3.1

Fixed Issues in 3.2.0

Issue ID: 2551790 (NETQ-6732)
Description: CLI: Upgrading to NetQ 3.1.0 using the CLI fails due to an authentication issue. To work around this issue, run the netq bootstrap master upgrade command as usual, then use the Admin UI at https://<netq-appl-vm-hostname-or-ipaddr>:8443 to complete the upgrade.
Affects: 3.1.0-3.1.1

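For reference, a minimal sketch of that sequence; the bootstrap tarball name and its location under /mnt/installables/ are assumptions based on typical NetQ installs rather than details from this release note:

# Run the bootstrap upgrade step from the CLI as usual (tarball path is an assumption).
netq bootstrap master upgrade /mnt/installables/netq-bootstrap-3.2.0.tgz
# Then open the Admin UI in a browser and finish the upgrade there:
#   https://<netq-appl-vm-hostname-or-ipaddr>:8443
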
Issue ID: 2551641 (NETQ-6673)
Description: Infra: NetQ VM installation fails if the designated disk size is greater than 2 TB. To work around this issue, specify a disk between 256 GB and 2 TB (SSD) for cloud deployments, and between 32 GB and 2 TB for on-premises deployments.
Affects: 2.4.0-3.1.1

Issue ID: 2551569 (NETQ-6650)
Description: CLI: When a proxy server is configured for NetQ Cloud access and lifecycle management (LCM) is enabled, the associated LCM CLI commands fail due to an incorrect port specification. To work around this issue, configure the NetQ Collector to connect directly to NetQ Cloud without a proxy.
Affects: 3.1.0-3.1.1

Issue ID: 2549344 (NETQ-5591)
Description: UI: The lifecycle management feature does not present general alarm or info events; however, errors related to the upgrade process are reported within the NetQ UI.
Affects: 3.0.0-3.1.1