Note: The information in this post applies to the 11g R2 versions of OAAM and OAM only (at the time of writing, 11.1.2.0, 11.1.2.1 and 11.1.2.2).
The problem we are trying to solve
How OAAM manages connections to OAM
- oaam.uio.oam.webgate_id - defines the webgate ID used by OAAM. This defaults to IAMSuiteAgent and should not be changed.
- oaam.oam.csf.credentials.enabled - when set, this property causes OAAM to use the Fusion Middleware Credential Store Framework (CSF) to securely store passwords, such as the webgate password. This should always be set to true.
- oaam.uio.oam.security.mode - defines the communication security mode between OAAM and OAM; can be either 1 (open), 2 (simple) or 3 (cert). Open is the default.
- oaam.uio.oam.host - defines the primary OAM hostname to which OAP connections should be established.
- oaam.uio.oam.port - defines the OAP port for the primary OAM host (this defaults to 5575).
- oaam.uio.oam.secondary.host - defines the secondary, or failover, OAM hostname. OAP connections will only be established to this host if connections to the primary OAM host fail.
- oaam.uio.oam.secondary.host.port - defines the OAP port for the secondary OAM host (this defaults to 5575).
- oaam.oam.oamclient.minConInPool - defines the minimum number of OAP connections that OAAM will maintain in its pool. Note that this setting applies to each OAAM server individually, since each server maintains its own pool.
- oaam.uio.oam.num_of_connections - defines the target (maximum) number of OAP connections to the primary OAM server that OAAM will maintain in its pool. This setting applies to each OAAM server individually. The default value is 5.
- oaam.uio.oam.secondary.host.num_of_connections - defines the target (maximum) number of OAP connections to the secondary OAM server that OAAM will maintain in its pool. This setting applies to each OAAM server individually. The default value is 5.
- oaam.oam.oamclient.timeout - the period in seconds that a request will wait for an available OAP connection before timing out. The default is 3600 seconds (1 hour), which is far too high; it should always be reduced to no more than 60 seconds in production.
- oaam.oam.oamclient.periodForWatcher - defines the rest period (in seconds) for the OAAM Pool Watcher thread, a thread which periodically checks the health of connections in the pool. The default is 3600 seconds (1 hour), which should probably be reduced to around 300 seconds (5 minutes) for production deployments.
- oaam.oam.oamclient.initDelayForWatcher - defines the initial delay (in seconds) before the OAAM Pool Watcher thread starts to check connections. The default is 3600 seconds (1 hour), which should probably be reduced to around 300 seconds (5 minutes) for production deployments.
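Pulling the recommendations above together, a production tuning of these properties might look something like the following sketch (the values shown are illustrative only and should be validated for your own environment):

```properties
# Illustrative production values only - validate for your environment.
oaam.oam.csf.credentials.enabled=true
# Pool sizes: defaults shown; size these to your expected load.
oaam.uio.oam.num_of_connections=5
oaam.uio.oam.secondary.host.num_of_connections=5
# Reduce the very high 3600-second defaults discussed above:
oaam.oam.oamclient.timeout=60
oaam.oam.oamclient.periodForWatcher=300
oaam.oam.oamclient.initDelayForWatcher=300
```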
Options for OAAM to OAM connection load balancing
1: Override deployment-wide properties on a per-host basis

In a deployment where the number of OAAM nodes exactly matches the number of OAM nodes, a fairly sensible and robust load balancing approach is simply to allocate a single primary and a single secondary OAM server to each OAAM server. This can be achieved by overriding the deployment-wide oaam.uio.oam.host and oaam.uio.oam.secondary.host settings on each individual OAAM host. To do this, first ensure that you delete the applicable property values from the OAAM database via the OAAM console. Then pass a unique value to each OAAM server instance at startup via a Java system property, e.g. -Doaam.uio.oam.host=<primary_host_name> and -Doaam.uio.oam.secondary.host=<secondary_host_name>.

Consider a deployment comprising two OAAM hosts (Host A and Host B) and two OAM hosts (Host C and Host D). Using this approach, Host A would be configured with oaam.uio.oam.host: Host C and oaam.uio.oam.secondary.host: Host D, while Host B would be configured with oaam.uio.oam.host: Host D and oaam.uio.oam.secondary.host: Host C. This configuration ensures that both OAM hosts receive an equivalent number of connections, thus providing load balancing, while also providing resilience should either OAM server become unavailable. This approach, though, suffers from a number of drawbacks, including the following:
- unsuitable for deployments where the number of OAM and OAAM nodes is not equal.
- manageability is reduced, as the OAAM console cannot be used to configure per-host parameter values.
- would not scale much beyond two nodes while still providing high availability. The loss of more than one OAM node at any one time would potentially render certain OAAM nodes unusable.
- no way to rebalance load across OAM nodes in case an OAAM node goes down.
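To make option 1 concrete, the per-host overrides for the two OAAM hosts in the example above might be passed to the managed servers at startup along the following lines. This is a sketch only: the hostnames are invented, and the exact mechanism for injecting JAVA_OPTIONS will depend on how your domain startup scripts are arranged.

```shell
# On OAAM Host A: primary = Host C, secondary = Host D
JAVA_OPTIONS="${JAVA_OPTIONS} -Doaam.uio.oam.host=hostc.example.com \
  -Doaam.uio.oam.secondary.host=hostd.example.com"

# On OAAM Host B: primary = Host D, secondary = Host C (mirror image)
JAVA_OPTIONS="${JAVA_OPTIONS} -Doaam.uio.oam.host=hostd.example.com \
  -Doaam.uio.oam.secondary.host=hostc.example.com"
export JAVA_OPTIONS
```

Because each host gets the opposite primary/secondary pairing, connection load is split evenly across the two OAM servers while each OAAM server retains a failover target.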
2: Use virtual hostnames

The second option is similar to the first, in that it allows for the definition of a single primary and a single secondary OAM server for each OAAM server. In this case, though, rather than overriding domain-wide property values, the approach is to use virtual hostnames to define the OAM servers. For example, we would define the following: oaam.uio.oam.host: oam-primary.domain.com and oaam.uio.oam.secondary.host: oam-secondary.domain.com.

We would then use the /etc/hosts file on each OAAM node to define exactly which physical OAM server IP address the virtual hostnames oam-primary and oam-secondary should resolve to. In the scenario above, OAAM Host A would have entries in its hosts file mapping oam-primary to the IP address of OAM Host C and oam-secondary to the IP address of OAM Host D. Host B would instead map oam-primary to the IP address of OAM Host D and oam-secondary to the IP address of OAM Host C. In cases where OAAM and OAM servers are co-located on the same hardware, we can use a shortcut and specify "localhost" as the oaam.uio.oam.host value.

This approach provides much the same benefits as the first option and incurs the same drawbacks, with the possible exception that it may prove somewhat easier to manage in production. In particular, the fact that any of the virtual mappings can be changed dynamically (without needing to restart OAAM) is a definite advantage of this strategy.
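The /etc/hosts mappings for this option might look as follows. The IP addresses below are drawn from the 192.0.2.0/24 documentation range purely as stand-ins for the real OAM host addresses:

```
# /etc/hosts on OAAM Host A (oam-primary -> OAM Host C, oam-secondary -> OAM Host D)
192.0.2.13   oam-primary.domain.com    oam-primary
192.0.2.14   oam-secondary.domain.com  oam-secondary

# /etc/hosts on OAAM Host B (mappings reversed)
192.0.2.14   oam-primary.domain.com    oam-primary
192.0.2.13   oam-secondary.domain.com  oam-secondary
```

Editing these entries takes effect on the next name lookup, which is what makes the dynamic remapping described above possible without an OAAM restart.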
3: Use an external load balancer

Perhaps the most obvious solution to this problem is to insert some form of external load balancer between OAAM and OAM. In this case, OAAM is configured such that the oaam.uio.oam.host property points to the address of the load balancer, which in turn distributes requests to the OAM servers according to whatever algorithm is desired. In this scenario, it does not even make sense to define the oaam.uio.oam.secondary.host property (unless there is a second, redundant load balancer in place), since it is assumed that the load balancer itself will only route requests to active OAM nodes. This approach has a number of benefits when compared to options 1 and 2 above, including the following:
- can be used to balance load from any number of OAAM servers to any number of OAM servers; there is no requirement for symmetry
- better scalability beyond 2 nodes
- better manageability via load balancer console, rather than host files/command-line switches
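With this option, the OAAM-side configuration collapses to a single property pointing at the load balancer's virtual address; the hostname below is an assumed example:

```properties
# All OAP traffic goes to the load balancer VIP, which distributes it
# across the active OAM nodes; no secondary host is defined.
oaam.uio.oam.host=oam-oap-vip.domain.com
oaam.uio.oam.port=5575
```

Note that the load balancer must be capable of handling long-lived TCP (OAP) connections rather than plain HTTP for this to work.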