In this sample chapter from Cisco ISE for BYOD and Secure Unified Access, 2nd Edition, explore the configuration steps required to deploy ISE in a distributed design. Content also covers the basics of using a load balancer.
This chapter covers the following topics:
Configuring ISE nodes in a distributed environment
Understanding the HA options available
Using load balancers
IOS load balancing
Maintaining ISE deployments
Chapter 5, “Making Sense of the ISE Deployment Design Options,” discussed the many options within ISE design. At this point, you should have an idea of which type of deployment will be the best fit for your environment, based on the number of concurrent endpoints and the number of Policy Service Nodes (PSN) that will be used in the deployment. This chapter focuses on the configuration steps required to deploy ISE in a distributed design. It also covers the basics of using a load balancer and includes a special bonus section on a very cool high-availability (HA) configuration that uses Anycast routing, and covers patching distributed ISE deployments.
Configuring ISE Nodes in a Distributed Environment
All ISE nodes are installed in standalone mode by default. When in standalone mode, the ISE node is configured to run all personas by default. That means that the standalone node runs the Administration, Monitoring, and Policy Service personas. Also, all ISE standalone nodes are configured as their own root certificate authority (CA).
It is up to you, the ISE administrator, to promote the first node to be a primary administration node and then join the additional nodes to this new deployment. At the time of joining, you also determine which services will run on which nodes; in other words, you determine which persona the node will have.
You can join more than one ISE node together to create a multinode deployment, known commonly in the field as an ISE cube. It is important to understand that before any ISE nodes can be joined together, they must trust each other’s administrative certificate. Without that trust, you will receive a communication error stating that the “node was unreachable,” but the root cause is the lack of trust.
This is similar to connecting to a secure website that uses an untrusted certificate: the web browser displays an SSL/TLS error. The same trust check occurs here, only between ISE nodes over Transport Layer Security (TLS).
If you are still using the default self-signed certificates in ISE, you are required to import the public certificate of each ISE node into every other ISE node's trust store, at Administration > System > Certificates > Trusted Certificates. Because these are all self-signed (untrusted) certificates, each ISE node needs to explicitly trust the primary node, and the primary node needs to trust each of the other nodes.
Instead of dealing with all these public key imports for self-signed certificates, the best practice is to always use certificates issued from the same trusted source. In that case, only the root certificates need to be added to the Trusted Certificates list.
Make the Policy Administration Node a Primary Device
Because all ISE nodes are standalone by default, you must first promote the ISE node that will become the Primary Policy Administration Node (PAN) to be a primary device instead of a standalone.
From the ISE GUI, perform the following steps:
Step 1. Choose Administration > System > Deployment. Figure 18-1 shows an example of the Deployment screen.
Figure 18-1 Deployment Screen
Step 2. Select the ISE node (there should only be one at this point).
Step 3. Click the Make Primary button, as shown in Figure 18-2.
Figure 18-2 Make Primary Button
Step 4. At this point, the Monitoring and Policy Service check boxes on the left have become selectable. If the primary node will not also be providing any of these services, uncheck them now. (You can always return later and make changes.)
Step 5. Click Save.
After saving the changes, the ISE application restarts itself. This is a necessary process, as the sync services are started and the node prepares itself to handle all the responsibilities of the primary PAN persona. Once the application server has restarted, reconnect to the GUI, log in again, and proceed to the next section.
While the application restarts, you can monitor the state of the services from the CLI by using the show application status ise command, as shown in Example 18-1.
Example 18-1 show application status ise Command Output
atw-ise245/admin# show application status ise

ISE PROCESS NAME                       STATE            PROCESS ID
--------------------------------------------------------------------
Database Listener                      running          5851
Database Server                        running          75 PROCESSES
Application Server                     initializing
Profiler Database                      running          6975
ISE Indexing Engine                    running          1821
AD Connector                           running          10338
M&T Session Database                   running          1373
M&T Log Collector                      running          2313
M&T Log Processor                      running          2219
Certificate Authority Service          disabled
EST Service                            disabled
SXP Engine Service                     disabled
TC-NAC Docker Service                  disabled
TC-NAC MongoDB Container               disabled
TC-NAC RabbitMQ Container              disabled
TC-NAC Core Engine Container           disabled
VA Database                            disabled
VA Service                             disabled
pxGrid Infrastructure Service          disabled
pxGrid Publisher Subscriber Service    disabled
pxGrid Connection Manager              disabled
pxGrid Controller                      disabled
PassiveID Service                      disabled
DHCP Server (dhcpd)                    disabled
DNS Server (named)                     disabled

atw-ise245/admin#
Register an ISE Node to the Deployment
Now that there is a primary PAN, you can implement a multinode deployment. From the GUI on the primary PAN, you will register and assign personas to all ISE nodes.
From the ISE GUI on the primary PAN, perform the following steps:
Step 1. Choose Administration > System > Deployment.
Step 2. Choose Register > Register an ISE Node, as shown in Figure 18-3.
Figure 18-3 Choosing to Register an ISE Node
Step 3. In the Host FQDN field, enter the IP address or DNS name of the first ISE node you will be joining to the deployment, as shown in Figure 18-4.
Figure 18-4 Specifying Hostname and Credentials
Step 4. In the User Name and Password fields, enter the administrator name (admin by default) and password.
Step 5. Click Next.
Step 6. On the Configure Node screen, shown in Figure 18-5, you can pick the main persona of the ISE node, including enabling of profiling services. You cannot, however, configure which probes to enable yet. Choose the persona for this node. Figure 18-5 shows adding a secondary Administration and Monitoring node, while Figure 18-6 shows adding a Policy Service Node.
Figure 18-5 Configure Node Screen Secondary Admin and MnT Addition
Figure 18-6 Configure Node Screen Policy Service Node Addition
Step 7. Click Submit. At this point, the Policy Administration Node syncs the entire database to the newly joined ISE node, as you can see in Figure 18-7.
Figure 18-7 Sync Initiated
Step 8. Repeat these steps for all the ISE nodes that should be joined to the same deployment.
Ensure the Persona of All Nodes Is Accurate
Now that all of your ISE nodes are joined to the deployment, you can ensure that the correct personas are assigned to the appropriate ISE nodes. Table 18-1 shows the ISE nodes in the sample deployment and the associated persona(s) that will be assigned. Figure 18-8 shows the final Deployment screen, after the synchronization has completed for all nodes (a check mark in the Node Status column indicates a node that is healthy and in sync).
Figure 18-8 Final Personas and Roles
Table 18-1 ISE Nodes and Personas
| ISE Node | Persona |
| atw-ise244 | Administration, Monitoring |
| atw-ise245 | Administration, Monitoring |
| atw-ise246 | Policy Service |
| atw-ise247 | Policy Service |
Understanding the HA Options Available
There are many different items to note when it comes to high availability (HA) within a Secure Access deployment. There are the concerns of communication between the PANs and the other ISE nodes for database replications and synchronization, and communication between the PSNs and Monitoring nodes for logging. There is also the issue of authentication sessions from the network access devices (NAD) reaching the PSNs in the event of a WAN outage, as well as a NAD recognizing that a PSN may no longer be active, and sending authentication requests to the active PSN instead.
Primary and Secondary Nodes
PANs and Monitoring & Troubleshooting (MnT) nodes both employ the concept of primary and secondary nodes, but they operate very differently. Let’s start with the easiest one first, the MnT node.
Monitoring & Troubleshooting Nodes
As you know, the MnT node is responsible for the logging and reporting functions of ISE. All PSNs will send their logging data to the MnT node as syslog messages (UDP port 20514).
When there are two monitoring nodes in an ISE deployment, all ISE nodes send their audit data to both monitoring nodes at the same time. Figure 18-9 displays this logging flow.
Figure 18-9 Logging Flows
The active/active nature of the MnT nodes can be viewed easily in the administrative console, as the two MnTs get defined as LogCollector and LogCollector2. Figures 18-10 and 18-11 display the log collector definitions and the logging categories, respectively.
Figure 18-10 Logging Targets
Figure 18-11 Logging Categories
Upon an MnT failure, all nodes continue to send logs to the remaining MnT node. Therefore, no logs are lost. The PAN retrieves all log and report data from the secondary MnT node, so there is no administrative function loss, either. However, the log database is not synchronized between the primary and secondary MnT nodes. Therefore, when the MnT node returns to service, a backup and restore of the monitoring node is required to keep the two MnT nodes in complete sync.
Policy Administration Nodes
The PAN is responsible for providing not only an administrative GUI for ISE but also the critical function of database synchronization of all ISE nodes. All ISE nodes maintain a full copy of the database, with the master database existing on the primary PAN.
A PSN may receive data about a guest user, and when that occurs it must sync that data to the primary PAN. The primary PAN then synchronizes that data out to all the ISE nodes in the deployment.
Because this synchronization workload is so heavy, and having only a single source of truth for the data in the database is so critical, failing over to the secondary PAN is usually a manual process. In the event of the primary PAN going offline, no synchronizations occur until the secondary PAN is promoted to primary. Once it becomes the primary, it takes over all synchronization responsibility. This is sometimes referred to as a “warm spare” type of HA.
Promote the Secondary PAN to Primary
To promote the secondary PAN to primary, connect to the GUI on the secondary PAN and perform the following steps:
Step 1. Choose Administration > System > Deployment.
Step 2. Click Promote to Primary. Figure 18-12 illustrates the Promote to Primary option available on the secondary node.
Figure 18-12 Promoting a Secondary PAN to Primary
Auto PAN Failover
An automated promotion function was added to ISE beginning with version 1.4. It requires there to be two admin nodes (obviously) and at least one other non-admin node in the deployment.
The non-admin node acts as a health check function for the admin node(s), probing the primary admin node at specified intervals. The Health Check Node promotes the secondary admin node after the primary fails a configurable number of consecutive probes. Once the original secondary node has been promoted to primary, the Health Check Node begins probing it instead. Figure 18-13 illustrates the process.
Figure 18-13 Promoting a Secondary PAN to Primary with Automated Promotion
As of ISE version 2.1, there is no ability to automatically sync the original primary PAN back into the ISE cube. That is still a manual process.
Configure Automatic Failover for the Primary PAN
For the configuration to be available, there must be two PANs and at least one non-PAN in the deployment.
From the ISE GUI, perform the following steps:
Step 1. Navigate to Administration > System > Deployment.
Step 2. Click PAN Failover in the left pane, as shown in Figure 18-14.
Figure 18-14 PAN Failover
Step 3. Check the Enable PAN Auto Failover check box.
Step 4. Select the Health Check Nodes from the drop-down lists. Notice that the primary and secondary PANs are listed to the right of the selected Health Check Nodes, as shown in Figure 18-14.
Step 5. In the Polling Interval field, set the polling interval. The interval is in seconds and can be set between 30 and 300 (5 minutes).
Step 6. In the Number of Failure Polls Before Failover field, enter the number of failed probes that have to occur before failover is initiated. Valid range is anywhere from 2–60 consecutive failed probes.
Step 7. Click Save.
Policy Service Nodes and Node Groups
PSNs do not necessarily need to have an HA type of configuration. Every ISE node maintains a full copy of the database, and the NADs have their own detection of a “dead” RADIUS server, which triggers the NAD to send AAA communication to the next RADIUS server in the list.
However, ISE has the concept of a node group. Node groups are made up of PSNs, where the PSNs maintain a heartbeat with each other. Beginning with ISE 1.3, the PSNs can be in different subnets or can be Layer 2 adjacent. In older ISE versions, the PSNs required the use of multicast, but starting in version 1.3 they use direct encrypted TCP-based communication instead:
TCP/7800: Used for peer communication
TCP/7802: Used for failure detection
If a PSN goes down and orphans a URL-redirected session, one of the other PSNs in the node group sends a Change of Authorization (CoA) to the NAD so that the endpoint can restart the session establishment with a new PSN.
Node groups do have another function, which is entirely related to data replication. ISE used a serial replication model in ISE 1.0, 1.1, and 1.1.x, meaning that all data had to go through the primary PAN and it sent the data objects to every other node, waiting for an acknowledgement for each piece of data before sending the next one in line.
Beginning with ISE 1.2 and moving forward, ISE begins to use a common replication framework known as JGroups (http://bfy.tw/5vYC). One of the benefits of JGroups is the way it handles replications in a group or segmented fashion. JGroups enables replications with local peers directly without having to go back through a centralized master, and node groups are used to define those segments or groups of peers.
So, when a member of a node group learns endpoint attributes (profiling), it is able to send that information directly to the other members of the node group. However, when that data needs to be replicated globally (to all PSNs), the JGroups communication must still go through the primary PAN, which in turn replicates it to all the other PSNs.
Node groups are most commonly used when deploying the PSNs behind a load balancer; however, there is no reason node groups could not be used with PSNs that are regionally co-located. You would not want to use a node group with PSNs that are geographically and logically separate.
Create a Node Group
To create a node group, from the ISE GUI, perform the following steps:
Step 1. Choose Administration > System > Deployment.
Step 2. In the Deployment pane on the left side of the screen, click the cog icon and choose Create Node Group, as shown in Figure 18-15.
Figure 18-15 Choosing to Create a Node Group
Step 3. On the Create Node Group screen, shown in Figure 18-16, enter a name for the node group in the Node Group Name field. Use a name that also helps describe the location of the group. In this example, SJCO was used to represent San Jose, Building O.
Figure 18-16 Node Group Creation
Step 4. (Optional) In the Description field, enter a more detailed description that helps to identify exactly where the node group is (for example, PSNs in Building O). Click Submit.
Step 5. Click OK in the success popup window, as shown in Figure 18-17. Also notice the appearance of the node group in the left pane.
Figure 18-17 Success Popup
Add the Policy Service Nodes to the Node Group
To add the PSNs to the node group, from the ISE GUI, perform the following steps:
Step 1. Choose Administration > System > Deployment.
Step 2. Select one of the PSNs to add to the node group.
Step 3. Click the Include Node in Node Group drop-down arrow and select the newly created group, as shown in Figure 18-18.
Figure 18-18 Assigning a Node Group
Step 4. Click Save.
Step 5. Repeat the preceding steps for each PSN that should be part of the node group.
Figure 18-19 shows the reorganization of the PSNs within the node group in the Deployment navigation pane on the left side.
Figure 18-19 Reorganized Deployment Navigation Pane
Using Load Balancers
One high-availability option that is growing in popularity for Cisco ISE deployments is the use of load balancers. Load balancer adoption with ISE deployments has skyrocketed over the years because it can significantly simplify administration and designs in larger deployments. As Figure 18-20 illustrates, with load balancing, the NADs have to be configured with only one IP address per set of ISE PSNs, removing a lot of the complexity in the NAD configuration. The load balancer itself takes care of monitoring the ISE PSNs, removing them from service if they are down, and allowing you to scale more nodes behind the virtual IP (VIP) without ever touching the network device configuration again.
Figure 18-20 Load-Balanced PSN Clusters
Craig Hyps, a Principal Technical Marketing Engineer for ISE at Cisco, has written what is considered to be the definitive guide on load balancing with ISE, “How To: Cisco & F5 Deployment Guide: ISE Load Balancing Using BIG-IP.” Craig wrote the guide based on using F5 load balancers, but the principles are identical regardless of which load balancer you choose to implement. You can find his guide here: https://communities.cisco.com/docs/DOC-68198.
Instead of replicating that entire large and detailed guide in this chapter, this section simply focuses on the basic principles that must be followed when using ISE with load balancers.
General Guidelines
When using a load balancer, you must ensure the following:
Each PSN must be reachable by the PAN/MnT directly, without having to go through Network Address Translation (NAT). This sometimes is referred to as routed mode or pass-through mode.
Each PSN must also be reachable directly from the endpoint.
When the PSN sends a URL-Redirection to the NAD, it uses the fully qualified domain name (FQDN) from the configuration, not the virtual IP (VIP) address.
You might want to use Subject Alternative Names (SAN) in the certificate to include the FQDN of the load-balancer VIP.
The same PSN must be used for the entire session. Persistence, sometimes called stickiness, needs to be based on the RADIUS Calling-Station-ID attribute.
The VIP gets listed as the RADIUS server of each NAD for all 802.1X-related AAA.
Includes both authentication and accounting packets.
Some load balancers use a separate VIP for each protocol type.
The list of RADIUS servers allowed to perform dynamic-authorizations (also known as Change of Authorization [CoA]) on the NAD should use the real IP addresses of the PSNs, not the VIP.
The VIP could be used for the CoAs, if the load balancer is performing source NAT (SNAT) for the CoAs sent from the PSNs.
Load balancers should be configured to use test probes to ensure the PSNs are still “alive and well.”
A probe should be configured to ensure RADIUS is responding.
HTTPS should also be checked.
If either probe fails, the PSN should be taken out of service.
A PSN must be marked dead and taken out of service in the load balancer before the NAD’s built-in failover occurs.
Since the load balancer(s) should be configured to perform health checks of the RADIUS service on the PSN(s), the load balancer(s) must be configured as NADs in ISE so their test authentications may be answered correctly.
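To illustrate the dynamic-authorization guideline in the preceding list, the NAD's CoA client list should contain the real IP address of each PSN rather than the VIP. The following IOS sketch shows the idea; the IP addresses and shared secret are placeholders only, not values from this deployment:

! Hypothetical example: permit CoA from each PSN's real IP, not the VIP
aaa server radius dynamic-author
 client 10.1.100.232 server-key MySharedSecret
 client 10.1.100.233 server-key MySharedSecret
 client 10.1.100.234 server-key MySharedSecret

With this in place, any PSN in the node group can issue a CoA to the switch, even though the switch sends its RADIUS requests to the VIP.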
Failure Scenarios
If a single PSN fails, the load balancer takes that PSN out of service and spreads the load over the remaining PSNs. When the failed PSN is returned to service, the load balancer adds it back into the rotation. By using node groups along with a load balancer, another of the node group members issues a CoA-reauth for any sessions that were in the process of establishing. This CoA causes the session to begin again. At this point, the load balancer directs the new authentication to a different PSN.
NADs have some built-in capabilities to detect when the configured RADIUS server is “dead” and automatically fail over to the next RADIUS server configured. When using a load balancer, the RADIUS server IP address is actually the VIP address. So, if the entire VIP is unreachable (for example, the load balancer has died), the NAD should quickly fail over to the next RADIUS server in the list. That RADIUS server could be another VIP in a second data center or another backup RADIUS server.
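The NAD-side dead-server detection and failover behavior can be tuned with a few IOS commands. The following is a hedged sketch; the timer values, IP address, probe username, and key are illustrative only and should be adjusted to your environment:

! Hypothetical example values
! Mark a server dead after no response for 5 seconds across 3 tries
radius-server dead-criteria time 5 tries 3
! Skip a dead server for 10 minutes before retrying it
radius-server deadtime 10
! Optionally probe the server with a test user so it can be marked
! alive again without waiting on real authentication traffic
radius-server host 10.1.100.100 auth-port 1812 acct-port 1813 test username ise-probe idle-time 5 key MySharedSecret

Tuning these timers ensures the NAD fails over to the next configured RADIUS server (for example, a VIP in a second data center) promptly when an entire VIP becomes unreachable.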
Anycast HA for ISE PSNs
This section exists thanks to a friend of the author who is also one of the most talented and gifted technologists roaming the earth today. E. Pete Karelis, CCIE No. 8068, designed this high-availability solution for a small ISE deployment that had two data centers. Figure 18-21 illustrates the network architecture.
Figure 18-21 Network Drawing and IPSLA
Anycast is a networking technique in which the same IP address exists in multiple places within the network. In this case, the same IP address (2.2.2.2) is assigned to the Gig1 interface on each PSN. That interface is connected to an isolated VLAN (or port group in VMware), so that the PSN sees the interface as “up” and connected with the assigned IP address (2.2.2.2). Each default gateway (router) in each data center is configured with a static route to 2.2.2.2/32 with the Gig0 IP address of the local PSN as the next hop. Those static routes are redistributed into the routing protocol; in this case, EIGRP is used. Anycast relies on the routing protocols to ensure that traffic destined to the Anycast address (2.2.2.2) is sent to the closest instance of that IP address.
After setting up Anycast to route 2.2.2.2 to the ISE PSN, Pete used EIGRP metrics to ensure that all routes preferred the primary data center, with the secondary data center route listed as the feasible successor (FS). With EIGRP, there is less than a 1-second delay when a route (the successor) is replaced with the backup route (the feasible successor).
Now, how do we make the successor route drop from the routing table when the ISE node goes down? Pete configured an IP service-level agreement (IPSLA) on the router that checked the status of the HTTP service on the ISE PSN in the data center every 5 seconds. If the HTTP service stops responding on the active ISE PSN, then the route is removed and the FS takes over, causing all the traffic for 2.2.2.2 to be sent to the PSN in the secondary data center. Figure 18-22 illustrates the IPSLA function, and when it occurs the only route left in the routing table is to the router at the secondary data center.
Figure 18-22 IPSLA in Action
All network devices are configured to use the Anycast address (2.2.2.2) as the only RADIUS server in their configuration. The RADIUS requests will always be sent to whichever ISE node is active and closest. Authentications originating within the secondary data center go to the local PSN.
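From the NAD's perspective, the resulting configuration is trivial, because only the Anycast address needs to be defined. A hedged IOS sketch (the shared secret is a placeholder):

! Hypothetical example: the Anycast VIP is the only RADIUS server
radius-server host 2.2.2.2 auth-port 1812 acct-port 1813 key MySharedSecret

The routing protocol, not the NAD, decides which PSN actually receives the request.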
Example 18-2 shows the interface configuration on the ISE PSN. The Gig0 interface is the actual routable IP address of the PSN, while Gig1 is in a VLAN to nowhere using the Anycast IP address.
Example 18-2 ISE Interface Configuration
interface gig 0
 !Actual IP of Node
 ip address 1.1.1.163 255.255.255.0
interface gig 1
 !Anycast VIP assigned to all PSN nodes on G1
 ip address 2.2.2.2 255.255.255.255
ip default-gateway [Real Gateway for Gig0]
!note no static routes needed.
Example 18-3 shows the IPSLA configuration on the router, which tests TCP port 80 on the PSN every 5 seconds and times out after 1000 msec. When that timeout occurs, the IP SLA object is marked as “down,” which causes enhanced object tracking to remove the static route from the routing table.
Example 18-3 IPSLA Configuration
ip sla 1
!Test TCP to port 80 to the actual IP of the node.
!"control disable" is necessary, since you are connecting to an
!actual host instead of an SLA responder
tcp-connect 1.1.1.163 80 control disable
! Consider the SLA as down if response takes longer than 1000msec
threshold 1000
! Timeout after 1000 msec.
timeout 1000
!Test every 5 Seconds:
frequency 5
ip sla schedule 1 life forever start-time now
track 1 ip sla 1
ip route 2.2.2.2 255.255.255.255 1.1.1.163 track 1
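To confirm that the probe, tracking object, and conditional static route are behaving as expected, the standard IOS show commands can be used. These assume the object numbers from Example 18-3; output is omitted here:

! Verify probe results for IP SLA operation 1
router# show ip sla statistics 1
! Verify the state of tracking object 1 (Up or Down)
router# show track 1
! Confirm which next hop is currently installed for the Anycast address
router# show ip route 2.2.2.2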
Example 18-4 shows the route redistribution configuration where the EIGRP metrics are applied. Pete was able to use the metrics that he chose specifically because he was very familiar with his network. His warning to others attempting the same thing is to be familiar with your network or to test thoroughly when identifying the metrics that would work for you.
Remember, you must avoid equal-cost, multiple-path routes, as this state could potentially introduce problems if RADIUS requests are not sticking to a single node. Furthermore, this technique is not limited to only two sites; Pete has since added a third location to the configuration and it works perfectly.
Example 18-4 Route Redistribution
router eigrp [Autonomous-System-Number]
 redistribute static route-map STATIC-TO-EIGRP
!
route-map STATIC-TO-EIGRP permit 20
 match ip address prefix-list ISE_VIP
 !Set metrics correctly
 set metric 1000000 1 255 1 1500
!
ip prefix-list ISE_VIP seq 5 permit 2.2.2.2/32
Cisco IOS Load Balancing
Cisco network devices have a lot of intelligence built into them to aid in an intelligent access layer for policy and policy enforcement. One such intelligence level is the capability to perform local load balancing of RADIUS servers. This does not mean using a Cisco switch as a server load balancer instead of a dedicated appliance. Instead, it refers to the capability of the access layer switch to load-balance the outbound authentication requests for endpoints that are authenticated to the switch itself.
Enabling IOS RADIUS server load balancing only takes one additional command. After all the PSNs are defined as AAA servers in the switch, use the radius-server load-balance global configuration command to enable it.
Example 18-5 shows use of a show command to verify that multiple ISE servers are configured.
Example 18-5 Verifying All ISE PSNs Are Configured on Switch
3750-X# show aaa server | include host
RADIUS: id 4, priority 1, host 10.1.100.232, auth-port 1812, acct-port 1813
RADIUS: id 5, priority 2, host 10.1.100.233, auth-port 1812, acct-port 1813
RADIUS: id 6, priority 3, host 10.1.100.234, auth-port 1812, acct-port 1813
Example 18-6 shows how to enable IOS load balancing.
Example 18-6 Enabling IOS Load Balancing
3750-X(config)# radius-server load-balance method least-outstanding batch-size 5
Maintaining ISE Deployments
Having a distributed deployment and a load-balanced architecture is certainly critical to scaling the deployment and ensuring that it is highly available, but there are also critical basic maintenance items that should always be considered to ensure the most uptime and stability. That means having a patching strategy and a backup and restore strategy.
Patching ISE
Cisco releases ISE patches on a semi-regular basis. These patches contain bug fixes and, when necessary, security fixes. Think about the Heartbleed and Poodle vulnerabilities that were discovered with SSL. To ensure that bug fixes are applied, security vulnerabilities are plugged, and the solution works as seamlessly as possible, always have a planned patching strategy.
Patches are downloaded from Cisco.com, under Downloads > Products > Security > Access Control and Policy > Identity Services Engine > Identity Services Engine Software, as shown at the top of Figure 18-23.
Figure 18-23 ISE Downloads Page
Search the list of software available for your specific version of ISE. Figure 18-24 illustrates the naming convention for ISE patches. Cisco ISE patches are normally cumulative, meaning that installing 1.2 patch 12 will include all the fixes in patches 1 through 11 as well.
Figure 18-24 Anatomy of ISE Patch Nomenclature
After identifying the correct patch file, follow these steps:
Step 1. Download the required patch.
Step 2. From the ISE GUI, navigate to Administration > System > Maintenance > Patch Management.
Step 3. Click the Install button, as shown in Figure 18-25.
Figure 18-25 Patch Management Screen
Step 4. Click Browse, select the downloaded patch, and click Install, as shown in Figure 18-26.
Figure 18-26 Installing the Selected Patch
As the patch is installed on the PAN, you are logged out of the GUI and the patch is distributed from the PAN to all nodes in the ISE cube. After the patch is successfully installed on the PAN, it is applied to all nodes in the cube one at a time, in alphabetical order.
You can log back into the PAN when it’s finished restarting services or rebooting. Click the Show Node Status button shown previously in Figure 18-25 to verify the progress of the patching. Figure 18-27 shows the resulting status of each node’s progress for the patch installation.
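Patches can also be installed from the ISE CLI, one node at a time, which some administrators prefer during tightly controlled maintenance windows. The following is a hedged sketch; the patch bundle filename and repository name are examples only:

! Hypothetical example: install a previously downloaded patch bundle
! from a configured repository named MyRepository
atw-ise245/admin# patch install ise-patchbundle-2.1.0.474-Patch3.x86_64.tar.gz MyRepository
! Verify the installed patch level after services restart
atw-ise245/admin# show version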
Figure 18-27 Node Status
Backup and Restore
Another key strategy for ensuring the availability of ISE in the environment is having a solid backup strategy. There are two types of ISE backups: configuration backups and operational backups. These two types are most easily related to backing up the product databases (configuration) and backing up the MnT data (operational).
Figure 18-28 shows the backup screen in ISE, located at Administration > System > Backup & Restore.
Figure 18-28 Backup & Restore Screen
As shown in Figure 18-28, the backups are stored in a repository, and can be restored from the same repository. You can schedule backups to run automatically or you can run them manually on demand. You can view the status of a backup from either the GUI or the CLI, but you can view the status of a restore only from the CLI.
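For reference, an on-demand configuration backup can also be triggered from the ISE CLI. In this hedged sketch, the backup name, repository name, and encryption key are placeholders:

! Hypothetical example: run a configuration backup to a repository
atw-ise244/admin# backup ISE-Config-Backup repository MyRepository ise-config encryption-key plain MyK3yPassw0rd
! Check backup progress (restore progress is also CLI-only)
atw-ise244/admin# show backup status
! Restore from a previously created backup file in that repository
atw-ise244/admin# restore ISE-Config-Backup-file.tar.gpg repository MyRepository encryption-key plain MyK3yPassw0rd

The same encryption key supplied at backup time must be provided at restore time.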
Summary
This chapter reviewed the basic principles of deploying distributed ISE nodes and high availability for ISE Policy Administration and Monitoring & Troubleshooting nodes. It examined the pillars of successful load balancing with ISE Policy Service Nodes, RADIUS server failover on Cisco Catalyst switches, and IOS load balancing.
This chapter also emphasized the importance of having regular backups in addition to a highly available design, and described where to configure those backups in addition to patching an ISE deployment.