Here is a conceptual topology for an MDC deployment.
This should be pretty self-explanatory. Notice the Global Load Balancers (GLBR); both the New York and London data centers must be front-ended with a GLBR for MDC support. This allows a user request to be routed to a different data center when:
- The data center goes down.
- There is a load spike causing redistribution of traffic.
- Certain applications are deployed in only one data center.
- WebGates are configured to load balance within one data center but failover across data centers.
Deployment

There are two parts to deploying MDC. The first part is 'cloning' the configuration from the master site to a secondary site using the Test-to-Production (T2P) process. The second part is to enable the MDC configuration so that the partner sites are aware of each other. This post will only cover the T2P procedure. T2P is not new; however, many of our legacy OAM customers may not be familiar with it. I will describe the commands I executed to clone a master site to a secondary site using T2P.
More details on T2P can be found in the Oracle Fusion Middleware guide here.
MDC supports both active-active and active-passive/stand-by scenarios. The following prerequisites must be satisfied before deploying Multi-Data Centers:
- All Data Center clusters must be front ended by a single Load Balancer.
- Clocks on the machines on which Access Manager and agents are deployed must be in sync. Non-MDC Access Manager clusters already require that the clocks of WebGate agents be in sync with the Access Manager servers, and this requirement applies to MDC as well. If the clocks are out of sync, token validations will be inconsistent, resulting in deviations from the expected behavior for the token expiry interval, validity interval, timeouts, and the like.
- The identity stores in a Multi-Data Center topology must have the same name.
- The first Data Center is designated as Master and will be cloned (using T2P tools) for additional Data Centers.
- All configuration and policy changes are propagated from the Master to the Clone using the WLST commands provided as part of the T2P Tooling.
- Each Data Center is a separate WebLogic Domain and the install topology is the same.
For more details on the scripts I used, please check the documentation here.
- Export the OPSS schema from the 'master' DB instance. Set ORACLE_HOME to the database home directory and execute the 'expdp' command.
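As a sketch, the export might look like the following. The connect string DB1, the master OPSS schema name MDC1_OPSS, the database home path, and the DATA_PUMP_DIR directory object are all assumptions for this example; substitute your own values.

```shell
# Assumed values: DB1 is the master DB instance, MDC1_OPSS its OPSS schema,
# and DATA_PUMP_DIR an existing Data Pump directory object in that database.
export ORACLE_HOME=/u01/db/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

expdp system/password@DB1 \
  schemas=MDC1_OPSS \
  directory=DATA_PUMP_DIR \
  dumpfile=opss.dmp \
  logfile=opss_exp.log
```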
- Import the OPSS schema into the secondary/cloned DB. Make sure that the schema on the secondary/cloned DB instance is loaded via RCU. Load both the OAM and OPSS schemas on the secondary DB instance and note down the schema names.
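A sketch of the import, under the same assumptions: DB2 is the clone instance, and REMAP_SCHEMA maps the exported (assumed) MDC1_OPSS schema onto the clone's MDC2_OPSS schema created by RCU. The table_exists_action=replace option overwrites the RCU-seeded tables with the master's data.

```shell
# Assumed values: DB2 is the clone DB instance; MDC1_OPSS is the master
# schema from the export, MDC2_OPSS the RCU-created schema on the clone.
impdp system/password@DB2 \
  schemas=MDC1_OPSS \
  remap_schema=MDC1_OPSS:MDC2_OPSS \
  directory=DATA_PUMP_DIR \
  dumpfile=opss.dmp \
  logfile=opss_imp.log \
  table_exists_action=replace
```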
- On the 'master' machine, copy the binaries. Server state is immaterial. Make sure to create the /oam_cln_log directory first. I also recommend creating a separate directory to store MDC-related artifacts; for example, /u01/MDC_FILES.
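A sketch of the binary copy, assuming /u01/IAM1 is the Middleware home on the master and /u01/jdk the Java home; the archive and log locations use the MDC_FILES directories mentioned above.

```shell
# copyBinary.sh lives under the Middleware home's oracle_common/bin directory.
cd /u01/IAM1/oracle_common/bin
./copyBinary.sh -javaHome /u01/jdk \
  -archiveLoc /u01/MDC_FILES/oamt2pbin.jar \
  -sourceMWHomeLoc /u01/IAM1 \
  -ldl /u01/MDC_FILES/oam_cln_log
```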
- On the 'master' machine, copy the configuration. Both the Administration Server and all Managed Servers need to be up and running, and the WebLogic server must be in production mode.
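A sketch of the configuration copy. The domain location, admin port, admin user name, and password file path are assumptions for this example; the host name iam1.us.oracle.com is the master host from this post.

```shell
# Assumed: IAMDomain is the domain name, 7001 the admin port, and
# admin_passwd.txt a file containing the WebLogic admin password.
cd /u01/IAM1/oracle_common/bin
./copyConfig.sh -javaHome /u01/jdk \
  -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar \
  -sourceDomainLoc /u01/IAM1/user_projects/domains/IAMDomain \
  -sourceMWHomeLoc /u01/IAM1 \
  -domainHostName iam1.us.oracle.com \
  -domainPortNum 7001 \
  -domainAdminUserName weblogic \
  -domainAdminPasswordFile /u01/MDC_FILES/admin_passwd.txt \
  -ldl /u01/MDC_FILES/oam_cln_log
```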
The following commands are to be executed on the 'clone' machine.
- Copy the following files from the master environment: oamt2pbin.jar, oamt2pConfig.jar, pasteBinary.sh, oraInst.loc and cloningclient.jar. The oamt2pbin and oamt2pConfig jar files should have been created by the copy commands above. cloningclient.jar, pasteBinary.sh and oraInst.loc can be found under the /oracle_common directory.
- Run the pasteBinary.sh script to copy the binary data (oamt2pbin.jar) to the new server. No Oracle software, with the exception of Java, should be installed on the new machine. In this example, the place-holder directories /u01/IAM1 and /u01/MDC_FILES/oam_cln_log need to exist before running the script.
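A sketch of the paste, assuming the files copied from the master were placed under /u01/MDC_FILES and /u01/jdk is the Java home on the clone.

```shell
# -targetMWHomeLoc is the empty placeholder directory /u01/IAM1;
# oraInst.loc was copied over from the master environment.
cd /u01/MDC_FILES
./pasteBinary.sh -javaHome /u01/jdk \
  -archiveLoc /u01/MDC_FILES/oamt2pbin.jar \
  -targetMWHomeLoc /u01/IAM1 \
  -invPtrLoc /u01/MDC_FILES/oraInst.loc \
  -ldl /u01/MDC_FILES/oam_cln_log
```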
- Next, we need to extract a move plan file. This file allows you to modify some of the details of the new environment. The script is called 'extractMovePlan.sh' and is located under /oracle_common/bin.
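A sketch of the extraction; the plan directory /u01/MDC_FILES/moveplan is an assumed location for this example.

```shell
# Extract moveplan.xml from the configuration archive into a plan directory.
cd /u01/IAM1/oracle_common/bin
./extractMovePlan.sh -javaHome /u01/jdk \
  -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar \
  -planDirLoc /u01/MDC_FILES/moveplan
```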
Once the 'moveplan.xml' was created, I changed the following:
- All host name endpoints. For example, my master host name was iam1.us.oracle.com; I changed this to iam2.us.oracle.com. If you have multiple components on the same machine, make sure you modify all properties that apply to your deployment.
<configProperty>
  <name>Listen Address</name>
  <value>iam2.us.oracle.com</value>
  <itemMetadata>
    <dataType>STRING</dataType>
    <scope>READ_WRITE</scope>
  </itemMetadata>
</configProperty>
- WLS machine name and Node Manager host name.
<configGroup>
  <type>MACHINE_CONFIG</type>
  <configProperty id="Machine1">
    <configProperty>
      <name>Machine Name</name>
      <value>IAM2</value>
      <itemMetadata>
        <dataType>STRING</dataType>
        <scope>READ_WRITE</scope>
      </itemMetadata>
    </configProperty>
    <configProperty>
      <name>Node Manager Listen Address</name>
      <value>iam2.us.oracle.com</value>
      <itemMetadata>
        <dataType>STRING</dataType>
        <scope>READ_WRITE</scope>
      </itemMetadata>
    </configProperty>
    <configProperty>
      <name>Node Manager Listen Port</name>
      <value>5556</value>
      <itemMetadata>
        <dataType>INTEGER</dataType>
        <scope>READ_WRITE</scope>
      </itemMetadata>
    </configProperty>
  </configProperty>
</configGroup>
- Schema owners. Make sure you change both the OPSS and OAM schema configuration properties.
<configProperty>
  <name>User</name>
  <value>MDC2_OPSS</value>
  <itemMetadata>
    <dataType>STRING</dataType>
    <scope>READ_WRITE</scope>
  </itemMetadata>
</configProperty>

<configProperty>
  <name>User</name>
  <value>MDC2_OAM</value>
  <itemMetadata>
    <dataType>STRING</dataType>
    <scope>READ_WRITE</scope>
  </itemMetadata>
</configProperty>
- Now we paste the configuration onto the target/clone machine using the 'moveplan.xml' we just modified.
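A sketch of the final paste, reusing the assumed domain, plan, and password-file locations from the earlier steps.

```shell
# Apply the edited move plan while pasting the configuration archive.
cd /u01/IAM1/oracle_common/bin
./pasteConfig.sh -javaHome /u01/jdk \
  -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar \
  -targetDomainLoc /u01/IAM1/user_projects/domains/IAMDomain \
  -targetMWHomeLoc /u01/IAM1 \
  -movePlanLoc /u01/MDC_FILES/moveplan/moveplan.xml \
  -domainAdminPasswordFile /u01/MDC_FILES/admin_passwd.txt \
  -ldl /u01/MDC_FILES/oam_cln_log
```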
You should now be able to start the Administration/OAM servers on the secondary/cloned machine.