AppBoard is implemented using a highly scalable web application architecture. As a Java application running inside an Apache Tomcat server, AppBoard can make use of multi-core and multi-processor systems with large amounts of RAM on 64-bit operating systems. In addition to scaling vertically on a single system, AppBoard supports horizontal scaling, through the use of a shared configuration database, to handle greater loads and/or to provide high-availability environments. AppBoard can be used in the following configurations:
Where high availability is required, a cluster configuration is recommended regardless of load. Where load is a concern, refer to the Performance Tuning & Sizing documentation for more information.
Whether running a load-balanced, failover, or cold-standby configuration, the following components are required:
The overall cluster configuration is made up of separate parts that follow the cluster architecture:
Note that establishing a new cluster and maintaining an existing one may require different approaches, as outlined below.
In simple single-server AppBoard configurations it is recommended to use the built-in, in-process H2 configuration database. However, in cluster configurations the configuration needs to be shared and kept in sync across two or more nodes, so an external configuration database is required. Setting up an external database for redundant operation involves two main steps:
While the shared configuration database takes care of AppBoard content, enPortal content, and provisioning information, there are other configuration and filesystem assets that need to be maintained per node:
The recommended approach, which also serves to ensure full backups are made of the system, is to configure the Backup export list to include all filesystem assets. Then, when establishing or updating a cluster, the archive can be used to maintain the filesystem components. Other custom approaches, such as filesystem synchronization tools, may also be suitable.
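As a sketch of the kind of consistency check a filesystem synchronization approach might rely on, the following compares checksums of per-node assets; the helper names and the idea of comparing a local and a remote checksum map are illustrative, not part of the product:

```python
import hashlib
import os

def asset_checksums(root):
    """Walk a directory tree and return {relative_path: sha256} for every file."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            sums[rel] = digest.hexdigest()
    return sums

def diff_nodes(local, remote):
    """Report assets that are missing on one node or differ between nodes."""
    issues = []
    for rel in sorted(set(local) | set(remote)):
        if rel not in remote:
            issues.append(f"missing on remote: {rel}")
        elif rel not in local:
            issues.append(f"missing on local: {rel}")
        elif local[rel] != remote[rel]:
            issues.append(f"checksum mismatch: {rel}")
    return issues
```

Running `diff_nodes` against checksum maps gathered from each node would flag any asset that drifted out of sync after an update.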
The license and database configuration will need to be handled when first establishing the cluster, but will not need to be changed after that point.
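Before establishing the cluster, it can help to confirm that every node can reach the shared configuration database over the network. A minimal sketch, where the host and port are placeholders for your actual database server:

```python
import socket

def database_reachable(host, port, timeout=5.0):
    """Attempt a TCP connection to the configuration database host/port and
    return True if the port is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host and port):
# database_reachable("config-db.example.com", 3306)
```

This only proves network reachability; credentials and schema setup still need to be verified through the product's own database configuration steps.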
The Load Balancer can distribute sessions to one or more AppBoard nodes using any standard load balancing algorithm (round-robin, least load, fewest sessions, etc.). The only requirement is that session affinity is maintained, such that a single user is always routed to the same AppBoard node for the full duration of the session.
The two session cookies used by AppBoard are JSESSIONID and enPortal_sessionid. When configuring the Load Balancer for session affinity, it is recommended to use enPortal_sessionid to avoid any conflicts with other applications that may also have a JSESSIONID cookie.
The following URL can be used by the load balancer as a means of testing AppBoard availability:
This endpoint returns an HTTP status code 200 (success) if all components of AppBoard are running properly; otherwise it returns a 500 (internal error) if there is an issue. If the AppBoard server is not running at all, there will be no response.
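A monitoring probe in the spirit of this check might look like the following sketch; the status URL shown is a placeholder, and the actual AppBoard availability URL from above should be substituted:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def appboard_healthy(status_url, timeout=5.0):
    """Return True on HTTP 200, False on an error status such as 500,
    and False if the server does not respond at all."""
    try:
        with urlopen(status_url, timeout=timeout) as resp:
            return resp.status == 200
    except HTTPError:
        # The server responded but reported a failure (e.g. 500).
        return False
    except (URLError, OSError):
        # No response at all: server down or unreachable.
        return False

# Example (placeholder URL):
# appboard_healthy("http://appboard-node-1:8080/<status-path>")
```

A load balancer's own health-check mechanism would normally perform this probe; the Python version is only useful for ad-hoc monitoring or scripted verification.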
Whether running on bare metal or within a virtualized environment, the clustering configuration remains the same.
Some virtualization environments offer their own layer of fault tolerance, although this is usually targeted at reducing or eliminating the impact of hardware failure. For example, VMware Fault Tolerance can transparently fail over a guest from a failed physical host to a different physical host such that everything continues uninterrupted. This type of system is useful on its own, but may not be aware of application-level failures that can also occur.
The following process can be used to establish a cluster environment. Please make sure you have read over the previous sections to understand the overall architecture and configuration components.
The following process assumes an existing environment, although a clean install environment can be used too.
The following applies to the primary node. Even in a purely load-balanced configuration, pick one of the nodes as the primary for purposes of establishing the cluster:
The following applies to all other nodes in the cluster:
Additional verification of the whole cluster can be performed by logging into separate nodes from different locations, viewing the Session Management system administration page, and confirming that all sessions are visible regardless of which node you are logged into.
For full product updates, a process similar to establishing a new cluster is required. A full archive should be produced and tested in isolation with the new build to ensure there are no upgrade issues. If changes need to be made due to the upgrade, produce a new full archive on the new version of the product for use when establishing the cluster. As the entire cluster will need to be taken down and re-established, this process will require some scheduled downtime to complete.
In high-availability environments where scheduled downtime is not possible, another approach is to take down one node, re-establish a new cluster with a different external configuration database, and then fail over all clients to the new node. The remaining nodes can then be re-established pointing to the new configuration database, and the load balancer configuration updated to bring all nodes back online.
Patch updates are usually overlaid onto an existing install and do not modify configuration. They can be applied in-place and typically just require the AppBoard server to be restarted.
Content and provisioning changes made directly on the cluster are immediately available to all cluster nodes, and nothing further needs to be done.
This section covers updates that are made either in a separate development or staging environment that need to be moved to production. For this there are two approaches: