App::Netdisco::Manual::ReleaseNotes - Release Notes

This document lists only the most significant changes in each release of Netdisco. You are STRONGLY recommended to read this document each time you install or upgrade. Also see the Changes file for more information.

Migrating from Netdisco 1.x
This distribution (App::Netdisco) is a complete rewrite of the Netdisco application. Users often ask whether they can run both versions at the same time, and whether the database must be copied. Here are the guidelines for migrating from Netdisco 1.x:
You can run both Netdisco 1.x and App::Netdisco web frontends at the same time, using the same database (if "safe_password_store" is set to false; see below).
Only enable the backend daemon and discovery jobs from either Netdisco 1.x or App::Netdisco.
You can share a single database between Netdisco 1.x and App::Netdisco. The deploy script for App::Netdisco will make some schema changes to the database, but they are backwards compatible.
This release will automatically migrate user passwords to stronger hashing in the database (a good thing!). This is incompatible with the Netdisco 1.x web frontend, so if you must maintain backward compatibility, set the following in your deployment.yml file:

    safe_password_store: false
The number of parallel DNS queries made during node discovery has been reduced to 10 for maximum safety, at the cost of lower macsuck performance. If you have a robust DNS infrastructure, you can probably raise it back up to something like 50 or 100:

    dns:
      max_outstanding: 100
SNMP community strings provided in the community_rw configuration setting will no longer be used for read actions on a device (despite the "rw" in the setting name).
If you have the same community string for read and write access, then you must set both community and community_rw in your deployment.yml file. In any case, we recommend using the new snmp_auth configuration format, which supersedes both of these settings.
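As a sketch of what an snmp_auth entry can look like (the community strings here are placeholders; see the Configuration documentation for the full set of supported keys):

```yaml
snmp_auth:
  # read-only community
  - community: 'public'
    read: true
  # read-write community for the same devices
  - community: 'sekrit'
    read: true
    write: true
```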
This release includes support for Device and Node expiry from your database. This is an important part of housekeeping for your installation, and our recommendation is to enable this feature such that suitably old Devices and Nodes are expired nightly.
Add the following to your "housekeeping" configuration in deployment.yml, to have a nightly check at 11:20pm:
    housekeeping:
      expiry:
        when: '20 23 * * *'
You should also configure one or more of the expiry settings (such as expire_nodes_archive) to a number of days. See the Configuration documentation for further details.
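Putting the above together, a minimal sketch (the 60-day retention value is only an illustration; choose what suits your site):

```yaml
housekeeping:
  expiry:
    when: '20 23 * * *'    # nightly at 11:20pm
expire_nodes_archive: 60   # expire archived Nodes older than 60 days
```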
If you use an Apache reverse proxy, we recommend increasing the timeout from our previous example of 5 seconds to perhaps 60, because some reports take more time to run their queries on the database. See the Deployment documentation for details.
If you were using the X::Observium plugin, you'll now need to install the separate distribution App::NetdiscoX::Web::Plugin::Observium.
This release fixes a number of issues with the poller, and is a recommended upgrade.
During Arpnip, Node IPs are resolved to DNS names in parallel. See the dns configuration option for details. Note that the nodenames configuration items from release 2.018000 are no longer available.
This release includes new support for SNMPv3 via the snmp_auth configuration option. Please provide feedback to the developers on your experience.
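An SNMPv3 entry in snmp_auth might look like the following sketch (the tag, user name, and passphrases are placeholders; check the Configuration documentation for the supported auth and priv protocols):

```yaml
snmp_auth:
  - tag: 'v3-example'
    user: 'netdisco'
    auth:
      pass: 'authpassword'
      proto: 'SHA'
    priv:
      pass: 'privpassword'
      proto: 'AES'
```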
The previously mentioned bug in Macsuck is now fixed.
There is a bug in Macsuck whereby in rare circumstances some invalid SQL is generated. The root cause is known but we want to take more time to get the fix right. It should only be a few more days.
The no_port_control configuration setting is now called check_userlog, and its logic is inverted. Don't worry if this is not familiar to you - the option is only used by Netdisco developers.
The dangerous action log messages are now saved to the database. In a future version there will be a way to display them in the web interface.
Some of the "dangerous action" confirmation dialogs offer to take a log message (e.g. Port Control, Device Delete). Currently the log messages are not saved. This feature will be added in the next release.
The backend poller daemon is now considered stable. You can uncomment the housekeeping section of the example configuration and thereby enable regular device (re-)discovery, arpnip and macsuck.
You can now configure LDAP authentication for users.
The read-write SNMP community is now stored in the database when first used on a device. If you don't want the web frontend to be able to access this, you need to:

- Use separate deployment.yml files for the web frontend and daemon, such that only the daemon config contains any community strings.
- Use separate PostgreSQL users for the web frontend and daemon, such that the web frontend user cannot SELECT the stored community strings.
Users can be managed through the web interface (by admins only).
You can now simplify database configuration to just the following, instead of the more verbose plugins/DBIC setting which was there before:

    database:
      name: 'netdisco'
      host: 'localhost'
      user: 'someuser'
      pass: 'somepass'
The REMOTE_USER environment variable and X-REMOTE_USER HTTP header are now supported for delegating authentication to another web server. See the Deployment and Configuration documentation for further details.
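Roughly, this is enabled in deployment.yml as in the sketch below (verify the exact setting names against the Configuration documentation for your version; they are assumed here):

```yaml
# trust the REMOTE_USER environment variable set by the front-end server
trust_remote_user: true
# or trust the X-REMOTE_USER HTTP header set by a reverse proxy
trust_x_remote_user: true
```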
This release contains the first version of our new poller, which handles device and node discovery. Please make sure to back up any existing Netdisco database before trying it out.
You can remove any settings from ~/environments/deployment.yml which you didn't edit or add to the file yourself. All defaults are now properly embedded within the application. See the new deployment.yml sample which ships with this distribution for an example.
The default environment configuration file development.yml has been renamed to deployment.yml. This better reflects that users are not developers, and also fits with the default for PSGI-compatible cloud deployment services.
Please rename or copy your environment file:
    mv ~/environments/development.yml ~/environments/deployment.yml
The installation is now relocatable outside of a user's home directory by setting the NETDISCO_HOME environment variable. This defaults to your own home directory.
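For example, before starting the daemon or web frontend (the /opt/netdisco path is only an illustration):

```shell
# relocate Netdisco's home to a hypothetical shared location
export NETDISCO_HOME=/opt/netdisco
```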