I would like to share some of my experience with NSM High Availability service management and NSM server tuning. I have gathered a list of topics:
1) Interpretation of HA status command
2) Relocating NSM services
3) Troubleshooting the NSM DB backup process and synchronization of non-DB files
4) Manual synchronization of the NSM DB between NSM peer servers
5) NSM Maintenance
6) NSM GUI Client in HA mode
In High Availability mode, NSM consists of two main services, GuiSvr and DevSvr. Both of these services are managed by the HA service and shouldn't be stopped or started manually. Only one of the NSM servers (i.e. the primary) runs GuiSvr and DevSvr at any given time; they never run on both peers simultaneously.
GuiSvr handles DB related operations and Gui client requests.
DevSvr handles device communication. Both of these services must be running for NSM to function properly.
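Item 1 in the list above is about interpreting the HA status output. As a minimal sketch, here is one way to pull out the local server's role from haStatus-style text. The utility path and the exact output wording ("... is running as primary") are assumptions based on typical NSM installs; verify both on your own server.

```shell
# Hypothetical helper: extract this server's HA role from haStatus-style
# output. The command path and output wording below are assumptions --
# check them against your NSM installation.
parse_ha_role() {
  grep -i 'server is running as' | awk '{print $NF}'
}

# In a real check you would pipe the command itself, e.g.:
#   /usr/netscreen/HaSvr/utils/haStatus | parse_ha_role
# Here we use canned sample output instead:
echo "Ha Server is running as primary" | parse_ha_role   # prints: primary
```

A wrapper like this is handy in monitoring scripts: alert when the role flips, rather than eyeballing the full status output.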
If you would like to remove the NSM (Network & Security Manager) software packages and disk files to do a fresh install, the following command does this:
rpm -qa | grep netscreen | xargs rpm -e ; rm -rf /usr/netscreen/* /var/netscreen/*
It removes the RPM packages whose names contain "netscreen" and then deletes everything under the default installation directories /usr/netscreen/ and /var/netscreen/.
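Since that one-liner is destructive, it can be worth previewing what it would do first. The helper below is my own safety sketch, not an NSM tool: it prints the commands the one-liner implies without executing any of them.

```shell
# Hypothetical dry-run helper: given `rpm -qa` output on stdin, print the
# erase commands the purge one-liner would run, plus the directory cleanup,
# without executing anything.
preview_nsm_purge() {
  grep netscreen | sed 's/^/rpm -e /'
  echo 'rm -rf /usr/netscreen/* /var/netscreen/*'
}

# Real use:  rpm -qa | preview_nsm_purge
# Example with canned input instead of running rpm:
printf 'netscreen-GuiSvr-2012.2\nbash-4.2\n' | preview_nsm_purge
```

Only the "netscreen" packages make it through the filter; once the preview looks right, run the actual one-liner from the post.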
When I was trying to update my SRX cluster via NSM, I received the error message "idpd busy in commit. Please try again later." I found the KB article http://kb.juniper.net/InfoCenter/KB21334 for this issue, according to which "commit confirmed" should be disabled under Preferences->Device Update->Netconf.
Good to learn!
While using NSM, I noticed that I had an alarm for one device but couldn't see what the alarm was really about.
Right-clicking on the alarm only brought up the normal device menu, nothing else. Then I found that I have to follow the Investigate->Realtime Monitor->Device Monitor path to see the alarm content. It is good to note it here :)
There are two options for adding an SRX cluster to NSM:
- Add each node separately by using its fxp0 interface IP address
- Configure a virtual chassis and add the entire cluster as a single node
As the topic of this post is the virtual chassis option, set the following configuration on the SRX side:
root@srx210-1# set chassis cluster network-management cluster-master
Then add a new device, NOT a cluster, in the NSM device view by using the redundant Ethernet interface (e.g. reth0) IP address of the SRX cluster. In the end you will see a single device instead of a two-node cluster, like below:
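For context, a minimal sketch of the relevant SRX configuration, assuming reth0 carries the management-reachable address (the IP address here is made up for illustration):

```
set chassis cluster network-management cluster-master
set interfaces reth0 unit 0 family inet address 192.0.2.10/24
commit
```

NSM then talks to whichever node is currently primary via the reth0 address, which is what makes the cluster appear as one device.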