





When entering a time value in the netq show evpn command, you must include a numeric value and the unit of measure:

  • w: week(s)
  • d: day(s)
  • h: hour(s)
  • m: minute(s)
  • s: second(s)
  • now (the current time)
When using the between option, you can enter the start time (text-time) and end time (text-endtime) values with the most recent first and the least recent second, or vice versa. The two values do not need to use the same unit of measure.
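For example, because the between option accepts its two time values in either order and with mixed units, both of the following invocations request the same window (the hostname and time range are illustrative; output is omitted here):

```
cumulus@switch:~$ netq show evpn between now and 24h
cumulus@switch:~$ netq show evpn between 24h and now
```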

cumulus@tor-1:mgmt:~$ netq tor-1 show wjh-drop between now and 7d

Matching wjh records:
Drop type          Aggregate Count
------------------ ------------------------------
L1                 560
Buffer             224
Router             144
L2                 0
ACL                0
Tunnel             0

cumulus@tor-1:mgmt:~$ netq tor-1 show wjh-drop details between now and 7d

Matching wjh records:
Drop type          Aggregate Count          Reason
------------------ ------------------------ ---------------------------------------------
L1                 556                      None
Buffer             196                      WRED
Router             144                      Blackhole route
Buffer             14                       Packet Latency Threshold Crossed
Buffer             14                       Port TC Congestion Threshold
L1                 4                        Oper down

install prep, cloud:

cumulus@ip-10-150-10-10:~$ netq bootstrap reset purge-db
Successfully reset the node. Please bootstrap the node again before continuing.
cumulus@ip-10-150-10-10:~$ netq bootstrap master interface eth0 tarball s3://netq-archives/latest/netq-bootstrap-3.2.0-SNAPSHOT.tgz
2020-09-29 15:53:40.295564: master-node-installer: Extracting tarball s3://netq-archives/latest/netq-bootstrap-3.2.0-SNAPSHOT.tgz
2020-09-29 15:55:15.991860: master-node-installer: Checking package requirements
2020-09-29 15:55:16.217339: master-node-installer: Using IP:
2020-09-29 15:55:18.300543: master-node-installer: Initializing kubernetes cluster
Successfully bootstrapped the master node
cumulus@ip-10-150-10-10:~$

When multiple jobs are running, scroll down or use the filters above the jobs to find the jobs of interest:
  • Time Range: Enter a range of time in which the upgrade job was created, then click Done.
  • All switches: Search for or select individual switches from the list, then click Done.
  • All switch types: Search for or select individual switch series, then click Done.
  • All users: Search for or select individual users who created an upgrade job, then click Done.
  • All filters: Display all of the filters together so you can apply several at once; additional filter options are also available here. Click Done when you are satisfied with your filter criteria.

By default, each filter shows all items of its type until you restrict it with these settings.

Switch Card: All Alarms

  1. Click in the Global Search field.

  2. Enter the hostname or IP address of a switch.

    The medium Switch card shows the total number of alarms, and a distribution of alarms across three categories. Click Alarms to view the count of alarms. Click Charts to view a graph of the alarms over the time period on the card (default is 24 hours).

  3. Change to the full-screen card using the size picker to view a list of all of the individual alarms.

cumulus@switch:~$ netq show resource-util

Matching resource_util records:
Hostname          CPU Utilization      Memory Utilization   Disk Name            Total                Used                 Disk Utilization     Last Updated
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- ------------------------
exit01            9.2                  48                   /dev/vda4            6170849280           1524920320           26.8                 Wed Feb 12 03:54:10 2020
exit02            9.6                  47.6                 /dev/vda4            6170849280           1539346432           27.1                 Wed Feb 12 03:54:22 2020
leaf01            9.8                  50.5                 /dev/vda4            6170849280           1523818496           26.8                 Wed Feb 12 03:54:25 2020
leaf02            10.9                 49.4                 /dev/vda4            6170849280           1535246336           27                   Wed Feb 12 03:54:11 2020
leaf03            11.4                 49.4                 /dev/vda4            6170849280           1536798720           27                   Wed Feb 12 03:54:10 2020
leaf04            11.4                 49.4                 /dev/vda4            6170849280           1522495488           26.8                 Wed Feb 12 03:54:03 2020
spine01           8.4                  50.3                 /dev/vda4            6170849280           1522249728           26.8                 Wed Feb 12 03:54:19 2020
spine02           9.8                  49                   /dev/vda4            6170849280           1522003968           26.8                 Wed Feb 12 03:54:25 2020
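To focus on a single switch rather than the whole fabric, you can also prepend a hostname to the command, as in the wjh-drop examples above (leaf01 here is illustrative; output is omitted):

```
cumulus@switch:~$ netq leaf01 show resource-util
```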