NetQ consists of two main install components: the NetQ Telemetry Server and the cumulus-netq metapackage, which is installed on Cumulus Linux switches. Additionally, for visibility into host networks and containers, you can install host OS-specific metapackages.
This section walks through the basic install and setup steps for installing and running NetQ on the following supported operating systems:
- Cumulus Linux
- Ubuntu 16.04
- Red Hat Enterprise Linux 7
- CentOS 7
Before you get started, you should review the release notes for this version.
Install the NetQ Telemetry Server
The NetQ Telemetry Server comprises a set of individual Docker containers for each of the various server components that are used by NetQ, for the NetQ CLI used by the service console, and for the service console itself.
It is available in one of two formats:
- VMware ESXi 6.5 virtual machine
- A QCOW/KVM image for use on Ubuntu 16.04 and Red Hat Enterprise Linux 7 hosts
The NetQ telemetry server containers are completely separate from any containers you may have on the hosts you are monitoring with NetQ. The NetQ containers will not overwrite the host containers and vice versa.
- Download the NetQ Telemetry Server virtual machine. On the Downloads page, select NetQ from the Product menu, then click Download for the appropriate hypervisor — KVM or VMware.
- Import the virtual machine into your KVM or VMware hypervisor.
Start the NetQ Telemetry Server. There are two default user accounts you can use to log in:
- The primary username is admin, and the default password is CumulusNetQ!.
- The alternate username is cumulus, and its password is CumulusLinux!.
Once the NetQ Telemetry Server is installed, if you're interested in using the telemetry server in high availability (HA) mode, read the HA mode chapter to learn how to configure the telemetry server instances. For both HA and standalone modes, you need to configure NetQ Notifier.
In addition, if you intend to use NetQ with applications like PagerDuty or Slack, you need to configure those applications to receive notifications from NetQ Notifier.
Note the external IP address of the host where the telemetry server is running, as you need this to correctly configure the NetQ Agent on every node you want to monitor. The telemetry server gets its IP address from DHCP; to get the IP address, run `ifconfig eth0` on the telemetry server.
For HA mode, you need to note the IP addresses of all three instances of the telemetry server.
If you need the telemetry server to have a static IP address, manually assign one:
- Edit the eth0 configuration and add `address` and `gateway` lines, specifying the telemetry server's IP address and the IP address of the gateway:
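As a sketch, assuming the telemetry server VM uses an ifupdown-style `/etc/network/interfaces` configuration (the addresses below are placeholders):

```
auto eth0
iface eth0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
```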
- Save the file and exit.
Install the NetQ Agent
To manage a node with NetQ Agent and send notifications with NetQ Notifier, you need to install an OS-specific metapackage on each node. The node can be a:
- Cumulus Linux switch running version 3.3.0 or later
- Server running Red Hat Enterprise Linux 7.1, Ubuntu 16.04 or CentOS 7
- Linux virtual machine running one of the above Linux operating systems
The metapackage contains the NetQ Agent, the NetQ command line interface and the NetQ library, which contains a set of modules used by both the agent and the CLI.
Install the metapackage on each node to monitor, then configure the NetQ Agent on the node.
If your network uses a proxy server for external connections, you should configure a global proxy so `apt-get` can access the metapackage in the Cumulus Networks repository.
Installing on a Cumulus Linux Switch
- Edit `/etc/apt/sources.list` and add the following line:
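The repository line takes a form like the following; the exact distribution name depends on your Cumulus Linux release, so verify it against the release notes for this version:

```
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-3 netq-latest
```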
- Update the local `apt` repository, then install the metapackage on the switch:
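For example, from the switch:

```
cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get install cumulus-netq
```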
Installing on an Ubuntu, Red Hat or CentOS Server
To install NetQ on Linux servers running Ubuntu, Red Hat or CentOS, please read the Host Pack documentation.
Configuring the NetQ Agent on a Node
Once you install the NetQ packages and configure the NetQ Telemetry Server, you need to configure NetQ on each Cumulus Linux switch to monitor that node on your network.
- To ensure useful output, verify that NTP is running on the node.
On the host, after you install the NetQ metapackage, restart `rsyslog` so logs are sent to the correct destination:
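On the systemd-based hosts supported here (Cumulus Linux 3.x, Ubuntu 16.04, RHEL 7, CentOS 7), this is:

```
cumulus@host:~$ sudo systemctl restart rsyslog.service
```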
Link the host to the telemetry server you configured above; in the following example, the IP address for the telemetry server host is 198.51.100.10:
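Assuming the `netq config add server` syntax of the NetQ 1.x CLI (verify against the CLI help on your version), the command looks like:

```
cumulus@switch:~$ netq config add server 198.51.100.10
```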
This command updates the configuration in the `/etc/netq/netq.yml` file. It also enables the NetQ CLI.
After starting or restarting the agent, verify that the agent can reach the server by running the following command:
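One way to check, assuming the `netq show agents` command is available in your version of the CLI, is to list the known agents and confirm the node appears in the output:

```
cumulus@switch:~$ netq show agents
```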
Configuring the Agent to Use a VRF
If you want the NetQ Agent to communicate with the telemetry server only via a VRF, including a management VRF, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the telemetry server over it, configure the agent like this:
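A sketch, assuming the `netq config add server` command accepts a `vrf` keyword as in NetQ 1.x:

```
cumulus@switch:~$ netq config add server 198.51.100.10 vrf mgmt
```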
You then restart the agent as described in the previous section:
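Assuming the `netq config restart agent` form of the command:

```
cumulus@switch:~$ netq config restart agent
```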
Configuring the Agent to Communicate over a Specific Port
By default, NetQ uses port 6379 for communication between the telemetry server and NetQ Agents. If you want the NetQ Agent to communicate with the telemetry server via a different port, you need to specify the port number when configuring the NetQ Agent like this:
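For example, to use a hypothetical port 7379 (assuming the `port` keyword of the `netq config add server` command):

```
cumulus@switch:~$ netq config add server 198.51.100.10 port 7379
```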
If you are using NetQ in high availability mode, you can only configure it on port 6379 or 26379.
Removing or Decommissioning an Agent from a Node
You can decommission a NetQ agent on a given node. You may need to do this when you:
- RMA the switch or host being monitored
- Change the hostname of the switch or host being monitored
- Move the switch or host being monitored from one data center to another
Early Access Feature
Decommissioning a NetQ Agent is an early access feature in Cumulus NetQ 1.2.
Decommissioning the node removes the agent from the NetQ database. However, the history for this node is preserved in case you need to go back in time to perform a diagnostic investigation.
To decommission the NetQ agent on a node, follow these steps:
Enable the EA features:
Decommission the agent on the hostname specified by [hostname]:
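The command takes a form like the following; this is an assumption based on the `netq decommission` command in later NetQ releases, so verify the exact syntax for your 1.2 EA build:

```
cumulus@switch:~$ netq decommission [hostname]
```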
Then restart the agent for the change to take effect:
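For example:

```
cumulus@switch:~$ netq config restart agent
```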
Configuring Debug Logging for the NetQ Agent
In order to debug the NetQ Agent, you need to enable debug-level logging:
- Edit the `/etc/netq/netq.yml` file and add a `log_level` section for the NetQ Agent:
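A sketch of the fragment; the `netq-agent` section name is an assumption, so follow the existing structure of your `netq.yml`:

```yaml
netq-agent:
  log_level: debug
```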
Restart the NetQ Agent:
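For example:

```
cumulus@switch:~$ netq config restart agent
```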
Configuring NetQ Notifier on the Telemetry Server
NetQ Notifier listens for events from the telemetry server database. When NetQ Notifier is running on the NetQ Telemetry Server, it sends out alerts. NetQ Notifier runs in the NetQ Telemetry Server virtual machine only; the NetQ Agents on the nodes merely communicate with it. If the telemetry server is run in HA mode, the Notifier runs only on the master telemetry server, which is the only instance that accepts messages to publish.
NetQ Notifier runs exclusively in a virtual machine; its configuration is stored in the `/etc/netq/netq.yml` file, and you control it using `systemd` commands (such as `systemctl stop|start netq-notifier`). The `netq.yml` file also contains the configuration for the NetQ CLI running in the VM.
You need to configure two things for NetQ Notifier:
- The events for which you want to receive notifications/alerts, like sensors or BGP session notifications.
- The integrations for where to send those notifications; by default, they are `rsyslog`, PagerDuty and Slack.
NetQ Notifier sends out alerts based on the configured log level, which is one of the following:
- debug: Used for debugging-related messages.
- info: Used for informational, high-volume messages.
- warning: Used for warning conditions.
- error: Used for error conditions.
The default log level setting is info, so NetQ Notifier sends out alerts for info, warning and error conditions.
By default, all notifications/alerts are enabled and logged in `/var/log/netq-notifier.log`. You only need to edit the notifications if there is something you don't want to monitor.
NetQ Notifier is already integrated with `rsyslog`. To integrate with PagerDuty or Slack, you need to specify some parameters.
To configure alerts and integrations on the NetQ Telemetry Server:
As the sudo user, open `/etc/netq/netq.yml` in a text editor.
Configure the following:
- Change the log level if you want a more restrictive level than info.
- Configure application notifications: To customize any notifications, uncomment the relevant section under netq-notifier Configurations and make changes accordingly.
- Configure PagerDuty and Slack integrations. You can see where to input the information for these integrations in the example `netq.yml` configuration at the end of this section.
- For PagerDuty, enter the API access key (also called the authorization token) and the integration key (also called the service_key or routing_key).
For Slack, enter the webhook URL. To get the webhook URL, in the Slack dropdown menu, click Apps & integrations, then click Manage > Custom Integrations > Incoming WebHooks > Add Configuration, select the channel to receive the notifications (such as #netq-notifier) in the Post to Channel dropdown, then click Add Incoming WebHook integration. Slack then generates a webhook URL for this configuration.
Copy the URL from the Webhook URL field into the `/etc/netq/netq.yml` file under the Slack Notifications section. Uncomment the lines in the sections labeled netq-notifier, notifier-integrations and notifier-filters, then add the webhook URL value provided by Slack:
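As a purely illustrative sketch (the key names here are hypothetical; keep the actual commented-out lines already present in your `netq.yml` and fill in the URL Slack generated for you):

```yaml
# Hypothetical fragment -- follow the commented template in your own netq.yml
notifier-integrations:
  slack:
    webhook_url: "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
```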
When you are finished editing the file, save and close it.
Stop then start the NetQ Notifier daemon to apply the new configuration:
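Using the `systemd` commands mentioned earlier:

```
cumulus@netq-ts:~$ sudo systemctl stop netq-notifier
cumulus@netq-ts:~$ sudo systemctl start netq-notifier
```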
If your webhook does not immediately send a message to your channel, look for errors in syntax. Check the log file located at `/var/log/netq-notifier.log`.
Example /etc/netq/netq.yml Configuration
The following sample `/etc/netq/netq.yml` file is from the NetQ Telemetry Server itself. Note that `netq.yml` looks different on a switch or host monitored by NetQ; for example, the backend server IP address and port would be uncommented and listed.
Using `/etc/netq/config.d` to configure NetQ Notifier, or putting other YML files in the `/etc/netq` directory, overrides the configuration in `/etc/netq/netq.yml`.