Wednesday, August 31, 2011

Bridging federation protocols with OIF

I just wrapped up a project for a customer with a slightly odd federation use case.

On the one side was an IdP that could generate SAML assertions.
On the other side was an app that could only accept either a username+password or an OpenID.

We bridged the gap with OIF and a bit of config.

In broad strokes, here's what you do:

  • Install OIF
  • Set up OIF as an OpenID OP
  • Set up OIF as a SAML Service Provider

For SP initiated:
You need to configure OIF to use the Federation SSO proxy authentication engine.

When the user reaches the OpenID-enabled app, the app will send the user to OIF. OIF will see that it needs to send the user to the SAML IdP and will redirect them there. The user goes to the SAML IdP, logs in, and then comes back with a SAML assertion. OIF consumes the assertion, generates an OpenID identity, and redirects the user back to the OpenID Relying Party.

For IdP initiated:
You need to set up an SP Integration Module (abbreviated to SPIM).

The user starts out at the SAML IdP, which generates a SAML assertion and sends it to OIF. OIF validates the SAML assertion and invokes the SPIM. The SPIM kicks the user into the OpenID flow and they get redirected on to the OpenID RP.

It's all actually pretty straightforward once you understand what's going on.

Monday, August 29, 2011

Updated dead simple certificate authority

Back in April I posted a shell script I wrote to implement a dead simple Certificate Authority for testing purposes. I recently revisited that script because I needed JKS files in addition to the PEM format files it created.

Without further ado my new and improved script is available right after the break.

Friday, August 26, 2011

How to reset your WLS super user password

Occasionally, we get into situations where we do not have the Weblogic super user (usually username = weblogic) password.  For myself, this sometimes happens when I'm using a VM that someone else created where they didn't properly document all the account info.  A more serious situation is if an organization actually somehow loses this information for their real deployments.

Recently, our friend Atul Kumar made a good post on how to reset the WebLogic super user account.  I think this is valuable information that everyone should have on hand:

Wednesday, August 24, 2011

Who said IAM isn't funny?

So, let's face it.  We security/IAM guys are usually a fairly sober bunch.  Maybe we are even a little too serious at times, but that is what "they" pay us to be.  Still, it's good to be able to find humor in our work.  So, with that in mind, I thought I would share this excellent video produced by our crack Oracle IAM product marketing team:

Monday, August 22, 2011

Exception when using an OIF Business Process Plug-in

If you write a Business Processing plug-in for Oracle Identity Federation (OIF) and follow the installation instructions in the documentation, you may encounter a NoClassDefFoundError looking for org.apache.commons.codec.DecoderException. Here's what that exception looks like:

java.lang.RuntimeException: javax.servlet.ServletException: java.lang.NoClassDefFoundError: org/apache/commons/codec/DecoderException
        at Source)
        at Source)
        at Source)
        at javax.servlet.http.HttpServlet.service(
        at javax.servlet.http.HttpServlet.service(
        Truncated. see log file for complete stacktrace
Caused By: javax.servlet.ServletException: java.lang.NoClassDefFoundError: org/apache/commons/codec/DecoderException
        at weblogic.servlet.internal.ServletStubImpl.execute(
        at weblogic.servlet.internal.ServletStubImpl.execute(
        at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(
        at weblogic.servlet.internal.RequestDispatcherImpl.forward(
        at Source)
        Truncated. see log file for complete stacktrace
Caused By: java.lang.NoClassDefFoundError: org/apache/commons/codec/DecoderException

This is easily fixed - just copy Oracle/Middleware/user_projects/domains/IDMDomain/servers/wls_oif1/tmp/_WL_user/oif-libs/i78h77/APP-INF/lib/commons-codec-1.2.jar to the same lib directory as your plug-in and other jars listed in the docs. PS: don't forget to copy the updated files in when you apply a patch!

Wednesday, August 17, 2011

5 Minutes or Less: On SAML Audiences, Entities and Issuers

I’ve recently helped a customer who wanted to integrate a home-built SAML Identity Provider with a Weblogic Service Provider. After exchanging metadata and going through all the necessary configuration on both sides, they came across this error in Weblogic server logs:

####<Aug 15, 2011 4:55:19 PM EDT> <Debug> 
<SecurityAtn> <> <server1> <[ACTIVE] 
ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> 
<<WLS Kernel>> <> 
<1313441719095> <BEA-000000> 
- IdentityAssertionException>
####<Aug 15, 2011 4:55:19 PM EDT> 
<Debug> <SecuritySAML2Service> <> 
<bi_server1> <[ACTIVE] ExecuteThread: '0' for queue: 
'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> 
<1313441719097> <BEA-000000> <exception 
info [Security:090377]Identity Assertion Failed, 
[Security:090377]Identity Assertion Failed, 
[Security:096539]AudienceRestriction condition not satisfied (no matching 

We can clearly see that Weblogic’s Assertion Consumer Service (ACS) is trying to validate the SAML assertion. As part of that, it is verifying the AudienceRestriction condition.

According to the SAML specification, “the <AudienceRestriction> element specifies that the assertion is addressed to one or more specific audiences identified by <Audience> elements… The Audience URI MAY identify a document that describes the terms and conditions of audience membership. It MAY contain the unique identifier URI from a SAML name identifier that describes a system entity.” It also says that “the audience restriction condition evaluates to Valid if and only if the SAML relying party is a member of one or more of the audiences specified.”

If you can manage to look at the actual SAML assertion being generated by the Identity Provider, you should be able to see what the Identity Provider is adding as <Audience> elements. In this customer case, it was:

<saml2:Conditions NotBefore="2011-08-15T20:54:11.000Z" NotOnOrAfter="2011-08-15T20:58:11.560Z">

The ACS was actually complaining about the <Audience> value inside these conditions, which was wrong here.

It turns out that the Audience value must match the service provider ID. In the case of a Weblogic Service Provider, such value is the Entity ID, specified in Weblogic Console as part of the Service Provider metadata definition in the “SAML 2.0 General” tab, as in the following screen:


The Entity ID parameter uniquely identifies a partner across federation interactions.

The customer then changed their home-built Identity Provider to add the Service Provider's Entity ID value to the <Audience> element, and things got back on track.

And keep in mind that the URL format is only a recommendation. It can theoretically be any string less than 1024 characters long.

Another thing to be aware of is that the Assertion Consumer Service will also try to verify the <Issuer> element value in the incoming token against the “Issuer URI” in the Service Provider partner definition.


And the “Issuer URI” value comes from the Identity Provider metadata definition that is imported into Weblogic’s Service Provider.
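Putting the two checks together, the relevant parts of an assertion that a WebLogic SP would accept look roughly like the fragment below. The issuer and audience URLs here are hypothetical placeholders; the <Audience> value must carry your SP's actual Entity ID, and the <Issuer> value must match the "Issuer URI" in the IdP partner definition:

```xml
<saml2:Issuer>http://idp.example.com/saml</saml2:Issuer>
<saml2:Conditions NotBefore="2011-08-15T20:54:11.000Z"
                  NotOnOrAfter="2011-08-15T20:58:11.560Z">
  <saml2:AudienceRestriction>
    <!-- Must equal the SP's Entity ID from the "SAML 2.0 General" tab -->
    <saml2:Audience>http://sp.example.com:7001/saml2</saml2:Audience>
  </saml2:AudienceRestriction>
</saml2:Conditions>
```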

Tuesday, August 16, 2011

Node Manager Security

I was asked a good question by a customer recently: how can you secure Node Manager communication, and how do clients authenticate themselves with Node Manager? I didn’t know much about Node Manager security, but after doing some research I thought it would be helpful to share the answers.

Secure Communication with Node Manager

Node Manager and its clients communicate using a custom protocol. This protocol can, however, be SSL-enabled for secure communication.

The settings that control this are located in Node Manager’s properties file but can also be passed in on the command line when you start Node Manager, in which case the command-line values override what is in the properties file.

To enable SSL for Node Manager Communication, you set SecureListener to ‘true’ (which is the default).

By default, Node Manager uses the WLS demonstration Identity (DemoIdentity.jks) and Trust (DemoTrust.jks) keystores located in WL_HOME/server/lib.

You can change this by modifying the KeyStores property, which takes one of three values:

  • DemoIdentityAndDemoTrust – Uses the demonstration Identity and Trust keystores located in the WL_HOME\server\lib directory that are configured by default. The demonstration Trust keystore trusts all the certificate authorities in the Java Standard Trust keystore (JAVA_HOME\jre\lib\security\cacerts).

  • CustomIdentityAndJavaStandardTrust – Uses a keystore you create, and the trusted CAs defined in the cacerts file in the JAVA_HOME\jre\lib\security directory.

  • CustomIdentityAndCustomTrust – Uses Identity and Trust keystores you create.

With each of these KeyStores modes there are other associated properties you will have to set to configure Node Manager for SSL.
These properties are: CustomIdentityAlias, CustomIdentityKeyStoreFileName, CustomIdentityKeyStorePassPhrase, CustomIdentityKeyStoreType, CustomIdentityPrivateKeyPassPhrase, and JavaStandardTrustKeyStorePassPhrase.
You can mostly figure out from the names which settings are needed for which KeyStores mode, but you can also read more about these settings here.
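As a rough sketch, a configured for SSL with a custom identity keystore and the Java standard trust might contain entries like the following (the path, alias, and passphrases are placeholders, not values from any real installation):

```properties
SecureListener=true
KeyStores=CustomIdentityAndJavaStandardTrust
CustomIdentityKeyStoreType=JKS
CustomIdentityKeyStoreFileName=/oracle/keystores/nm_identity.jks
CustomIdentityKeyStorePassPhrase=keystore_password
CustomIdentityAlias=nodemanager
CustomIdentityPrivateKeyPassPhrase=key_password
JavaStandardTrustKeyStorePassPhrase=changeit
```

Note that Node Manager typically rewrites the passphrase entries in encrypted form after its first start, so don't be surprised when the clear-text values disappear.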

Node Manager and Authentication

There are actually two separate authentications that occur when using Node Manager. First, it will compare the incoming Node Manager credentials against an encrypted file that was established during nmEnroll or pack/unpack of the domain. This username and password is specific to an entire domain and is only used for communicating with Node Manager; it does not have anything to do with managed servers. For more information about setting or changing these credentials, see here.
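For completeness, that encrypted credential file is usually (re)generated with the nmEnroll command in WLST while connected to the Admin Server. A sketch, with hypothetical host, credentials, and paths:

```
# Run via: java weblogic.WLST
connect('weblogic', 'welcome1', 't3://adminhost:7001')
# Writes the encrypted Node Manager credential file into the domain
# and registers this machine's domain directory with the Admin Server
nmEnroll('/oracle/domains/mydomain', '/oracle/wlserver/common/nodemanager')
disconnect()
```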

The second way Node Manager makes use of a username and password is for the managed server that it starts and monitors. This is usually supplied to Node Manager either from the config.xml file or from an individual making a client call to Node Manager. Node Manager will then encrypt this value and write it to disk so that the managed server can use those credentials for startup. For more information see the documentation here.

Monday, August 15, 2011

Sun 2 Oracle Upgrade Waveset to OIM Part 1

This post is for our former Sun customers out there that are using the Waveset Identity Management product.

Recently, there was a webcast on the subject of moving from Waveset to OIM with a particular focus on connectors.  The webcast was recorded and can be found here.  The slides have also been published separately and can be found here.

To be a part of the next session on October 25th 2011 at 9am Pacific, please register.

The webcast mentions the self-paced learning guide for OIM – this can be found on the learning website. You can also get the latest guide posted on our website. In addition, during the session Raghu discussed step-by-step instructions for doing self-service UI customizations in OIM. These steps can be found documented online.  If you have not read the upgrade guide that compares and contrasts objects in Waveset and OIM, you can get a copy from our site.

If you would like to join the discussion group, send a note to the alias. Access is restricted to customers only.
Avantika and Raghu have documented answers to the questions asked during the session:

Q: Reconciliation – can this be configured to update attributes on the target system in OIM?
A: Yes, OIM reconciliation can push values out to target resources.

Q: NetBeans can create a remote project for Waveset; is there an equivalent in JDeveloper for OIM?
A: The answer is no.

Q: Deferred Tasks in Waveset can be stored on the user object so that these tasks can periodically perform maintenance like removing excessive access. Is there something similar in OIM? How does this work in OIM?
A: In OIM these work as "scheduled tasks" – they run in the background. These tasks are not attached to a specific user object but operate similarly: they are configured as general events, and the task basically looks for users with the general event.

Q: For workflows in Waveset, do we provide an automated tool to convert them?
A: Customers have to re-define workflows in OIM. In most cases the configuration is simplified in OIM. For example, where all self-service workflows were configured from scratch in Waveset, OIM provides workflow templates per resource.

Q: What is the equivalent of ActiveSync from Waveset in OIM?
A: The OIM "Reconciliation" feature provides the same capability as the "ActiveSync" feature in Waveset.

Q: How does Waveset integration with OIA compare to OIM integration with OIA?
A: Today the OIM integration is more advanced than the integration in Waveset. Since patchset 1, released in June, OIM provides risk-scoring feedback directly to OIA, along with all of the preventative SoD checking and role integration provided by Waveset.

Q: What about look-and-feel customization – can we do this in OIM?
A: Yes – similar to Waveset, OIM allows look-and-feel customization.

Again, if you would like to join the discussion group, please send a note to the alias; the group is restricted to customers. Send any other questions to the alias and I've been told you will get a response.

Saturday, August 13, 2011

Live Webcast: Layering Enterprise Security with Oracle Access Management

Live Webcast - Layering Enterprise Security with Oracle Access Management
The proliferation of external and internal security threats in the enterprise, cloud, and mobile application delivery ecosystems requires effective security and policy controls. Oracle Access Management offers simple, effective and holistic solutions to safeguard against threats and streamline compliance while ensuring robust end-to-end security for applications, web services and data. This webcast will highlight how organizations can leverage cutting-edge access management technologies such as risk-based authentication, context-aware security, and identity propagation, to create a secure enterprise environment by leveraging existing IT investments.

Attend this live, complimentary webcast sponsored by IOUG (Independent Oracle Users Group) to discover how Access Management solutions from Oracle can help you address your security and compliance goals with simplicity. You will also learn about:

  • The latest innovations in Oracle Access Management solutions
  • A comprehensive approach to implementing end-to-end security with access management
  • Case studies of real world deployments
Register now for this webcast.

Thursday, August 11, 2011

OAM 11g: Configuring Data Sources

I wanted to share an experience I encountered recently configuring the OAM Console. This is specific to OAM 11g.

This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

When you first install OAM 11g, one of the first things you will likely do is set up a new data store. But first let’s take a look at the default configuration. If you take a look at the ‘UserIdentityStore1’ data source, you will notice a new feature where a data source can be a ‘Default’ store, a ‘System’ store, or both. This data store (WebLogic Embedded LDAP) is set as both the ‘Default’ store and the ‘System’ store.

The ‘Default’ data store is used by the Security Token Service. The ‘System’ store is what is used to authenticate an OAM administrator. When you select a data store to be the system store, you will need to add user(s) to the administrators group. You can read here for more information on data sources:
Now, again, you will most likely need to configure a new data store and possibly use that data store as the default and/or system store. Be aware that once you change the ‘system’ store you can potentially lock yourself out of the OAM console!
Here is a screen shot of the data store I configured:

The data store is pointing to an OID back end with test users. I created a user ‘testuser1’ as the administrator for the ‘system’ store as shown above.
When you ‘Apply’ this setting you will see a Warning:

You will also be asked to validate the administrator. I validated using ‘testuser1’.
Now let’s look at the WLS configuration. Out of the box it still had the default settings as seen here:

Now this is where you could run into some trouble. Remember the warning we received when configuring the ‘system’ store. You need to make sure that the data store you specified as the ‘system’ store is reflected somewhere in your providers list in WLS Console.
Now let’s say that you forget to add an LDAP provider within WLS, or more likely the provider was configured incorrectly such that testuser1 does not exist. In my example, when you try to log in to the OAM console as the ‘weblogic’ user, you will get an access denied page. If you try to log in as ‘testuser1’, you will receive an incorrect username/password page.
When logging in as the ‘weblogic’ user, this user exists in the Default Authenticator, but is not part of the Administrators group as defined in the system store, thus the access denied page. For my 'testuser1', this user does not exist in the default authenticator, thus the incorrect username/password error.
Now there are two ways to get you back into the OAM Console:
1) Create the uid ‘testuser1’ in the Embedded LDAP used by WLS. This assumes that the Default Authentication provider is listed. This is not recommended, however; better yet…
2) Stop the managed server ‘oam_server1’. Now you should be able to log in with the original ‘weblogic’ user you created when installing the domain.
Remember the warning we got when assigning a new 'system' store? Well, that basically means that you need to make sure that one of the WLS providers is in sync with the system store defined in the OAM console.

Thursday, August 4, 2011

Couple of things you need to know about the User/Role API

The idea of the User/Role API is to abstract developers from the identity store where users and groups are kept. A developer can basically interact with any identity provider supported by Weblogic server using the same methods. The javadoc can be found here:

In this post I want to alert you about two caveats:

1) User/Role API is able to query data from only one provider. If you want to query multiple identity stores, you need to go through an OVD Authenticator (or libOvd). And depending on how you get a handle to the identity store, the order in which providers are defined in Weblogic server Console as well as their CONTROL FLAGs do matter.

Shamelessly borrowing content from FMW Application Security Guide:

"OPSS initializes the identity store service with the LDAP authenticator chosen from the list of configured LDAP authenticators according to the following algorithm:

  1. Consider the subset of LDAP authenticators configured. Note that, since the context is assumed to contain at least one LDAP authenticator, this subset is not empty.
  2. Within that subset, consider those that have set the maximum flag. The flag ordering used to compute this subset is the following:
    Again, this subset (of LDAPs realizing the maximum flag) is not empty.
  3. Within that subset, consider the first configured in the context.

    The LDAP authenticator singled out in step 3 is the one chosen to initialize the identity store service."

Lack of such understanding is a big source of headaches.

Weblogic server ships with DefaultAuthenticator as the out-of-box authentication provider with the CONTROL FLAG set to REQUIRED.  Customers typically want to retrieve users from an enterprise-wide LDAP server, like OID or Active Directory. They go ahead and define a new authenticator and put it first in the providers list. But they leave DefaultAuthenticator untouched, because they still want to leverage the weblogic user as the administrator. And when some application relying on the User/Role API is executed (Oracle's BPM and BIP are examples), a problem is just about to happen, because none of the users and groups defined in the enterprise-wide identity store are found. The solution to this is pretty simple: switch DefaultAuthenticator's CONTROL FLAG from REQUIRED to SUFFICIENT. What happens now during authentication time is that if the user is not found in the first authenticator, the lookup falls back to DefaultAuthenticator, so leveraging the weblogic user is not a problem. And that will also make the User/Role API query the identity provider that you want (the first in the list).
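If you prefer scripting the change over clicking through the console, the control flag switch can be sketched in online WLST. The connection details below are placeholders; the MBean calls are the standard RealmMBean/AuthenticatorMBean ones:

```
# Run via: java weblogic.WLST
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
realm = cmo.getSecurityConfiguration().getDefaultRealm()
atn = realm.lookupAuthenticationProvider('DefaultAuthenticator')
# Fall back to the embedded LDAP only when earlier providers don't know the user
atn.setControlFlag('SUFFICIENT')
save()
activate()
```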

2) Depending on how you get a handle to the identity store, provider-specific metadata (user, password, address, root search base) won't be reused and you'll be forced to define it in code again (of course you can externalize them to some properties file, but it is still a double maintenance duty).

That said, let's examine possible ways of getting a handle to the identity store.

IdentityStoreFactoryBuilder builder = new IdentityStoreFactoryBuilder();
IdentityStoreFactory oidFactory = null;
Hashtable factEnv = new Hashtable();
// Creating the factory instance
factEnv.put(OIDIdentityStoreFactory.ST_SECURITY_PRINCIPAL, "cn=orcladmin");
oidFactory = builder.getIdentityStoreFactory(
    "oracle.security.idm.providers.oid.OIDIdentityStoreFactory", factEnv);
Hashtable storeEnv = new Hashtable();
IdentityStore oidStore = oidFactory.getIdentityStoreInstance(storeEnv);
// Use oidStore to perform various operations against the provider

Look at how specific this snippet is to OID and how we're passing metadata that is already available in the provider definition itself. By doing this, you do not run into the problem described in my bullet #1, because you're going directly against a specific identity store. You're not leveraging the definitions in Weblogic server at all.

But if you do this...

Wednesday, August 3, 2011

Tuning WebLogic LDAP Authentication Providers

I’ve been involved in a fair amount of activity over the last month involving customers who want or need to tune their WebLogic LDAP authentication providers for a production environment.

Too often I have seen customers simply unaware that the authentication providers can and should be tuned from their default settings. The fact is that the default values in the LDAP authentication providers are better sized to development environments (and in some cases the development environments of 5 years ago) than they are to today’s production environments. So, the first step is awareness that the authentication providers include settings related to cache performance, connection management, and handling of group lookups that can and should be tuned in order to maximize the performance of your applications.

So, in this post I’d like to go through the authentication provider settings that affect performance, discuss what each setting does, and offer some guidelines on how your values should differ from the defaults.

First, let’s briefly discuss how to find the settings you’ll want to tune. Log in to the WLS admin console; on the left-hand side under Domain Structure click Security Realms and then “myrealm”. From there, click on the Providers tab and select the LDAP authentication provider that you want to tune. Once you are in the authentication provider configuration screen, you’ll want to look in the “Provider Specific” and “Performance” tabs to modify the settings we are about to discuss. We will also discuss one setting that is located under the “Performance” tab of “myrealm” (or whatever you have named your active security realm) itself.

If all goes well you should see a screen that looks something like this after clicking the provider specific configuration tab:

In discussing the tuning of LDAP authentication providers, I like to divide the settings into 3 categories: LDAP connection settings, cache settings, and group lookup settings.  If you’d like to follow along with what the documentation says you can do so here:
Connection related settings:
Connection Timeout Limit – The maximum time in seconds to wait for the connection to the LDAP server to be established. The default is set to 0 which means that there is no maximum time limit. Note that this setting only comes into play when the authenticator is trying to open up a new connection in its pool of connections to the directory.

Connection Retry Limit – The number of times the server will attempt to connect to the LDAP server if the initial attempt fails. The default is 1. Again this setting applies only to situations where the authenticator is trying to open up new connections.

Now you may ask what happens if the connection timeout limit is reached and all retry attempts fail. The answer is that the authenticator will simply give up on the current request that it is trying to open a connection to handle and return a failure. Subsequent requests will be serviced from the available pool of connections until a new one must be opened again at which time the process will repeat itself.

I could be wrong but I see no good reason why one would want to wait forever. What you want to avoid is a cycle of death situation where degradation in LDAP performance is handled poorly and leads to things backing up more than they have to in the authentication provider. The specific values that you should go with for these settings are fairly environment specific in that they depend on your directory and network infrastructure but I think that 120 seconds for the Connection Timeout Limit and 5 for the Connection Retry Limit are good starting points.

Cache related settings:
Cache TTL and Cache Size

There are two cache settings on the “provider specific” tab of most LDAP authentication providers: Cache TTL and Cache Size. These settings refer to a “user related” cache in the authentication providers that caches the DN lookup translating login names to full LDAP distinguished names, and possibly caches some common attribute values following the lookup. I must stress that the authentication providers do not cache usernames and passwords. Real username/password authentication always results in a call to the directory. With that out of the way:

Cache TTL – is the time-to-live of entries in the cache in seconds. The default is 60 which seems low to me. I would consider upping this to 5 minutes and going from there.

Cache Size – is the size of the cache in kilobytes. The default is an absurdly low 32 KB. The per-entry size of this cache is low but I don’t think upping this to 2-4 MB would hurt.

The exception to the above recommendation would be a situation where you really don’t expect an individual user to hit the authentication provider twice in a fixed period of time. In this case I would still up the Cache Size some but might leave the TTL alone at 60 seconds.

Principal Validator Cache

The Principal Validator Cache is actually a setting associated with the entire realm rather than the authentication provider and is configured in the “performance” tab of the realm itself. There are two settings associated with the cache: Enable WebLogic Principal Validator Cache and Max Weblogic Principals in Cache. It is enabled by default with the max number of principals defaulting to 500.

This cache is mentioned in the documentation in vague terms as something that can improve performance and indeed it is a fairly mysterious construct. What this cache does is cache signed Abstract Principals which are used in RMI calls when a Principal Validation Provider is being used.

The long and short of it is that this cache won’t have too much effect for most people, and even in situations where it will be heavily hit, it is common for the validations to be associated with a limited number of service accounts. So, for the most part you can just leave these settings as is. However, don’t be afraid to bump up the number of cached principals; the default setting is very low considering the hardware you are likely to be running WLS on in production.

Group lookup related settings:

One of the most important, if not the most important piece of tuning you can do to a WLS LDAP authenticator is to change the Group Membership Searching from unlimited to limited and set the Max Membership Search Level to an appropriate value. Not only will this improve your performance, it will prevent you from encountering a loop that prevents users from logging in when two groups are members of each other. I blogged extensively about these two settings in a previous post entitled Weblogic, LDAP Authenticators, and Groups.
Rounding out the group lookup related settings are a group of settings that can be found in the performance tab of LDAP authentication providers.

These settings all deal with the group membership hierarchy cache. This cache stores the results of recursive membership group lookups or, put another way, it stores which groups are members of other groups.

The settings for this cache include a check box to enable the cache which is called Enable Group Membership Lookup Hierarchy Caching, a setting that controls the size of the cache called Max Group Hierarchies in Cache, and a setting that controls the time-to-live (in seconds) of cache entries called Group Hierarchy Cache TTL.

I recommend that you enable this cache if you utilize nested groups. I recommend that you set the Max Group Hierarchies in Cache to a value larger than the total number of groups in your directory. Finally, I recommend that you set the Group Hierarchy Cache TTL to a safe, appropriate number. The default is 60 seconds, which will improve performance and catch changes to the hierarchy fairly quickly, but still result in a fair amount of recursive group lookups. If you up this value to 5 minutes (300 seconds), which should still be safe for most people who aren’t doing funky things with dynamic groups, then you should be able to improve performance a little more with no downside.


Tuning of WLS LDAP Authenticators is an overlooked component of successful WLS production deployments. Taking just a little time to change the LDAP authenticator performance related configuration settings from default values to values which are appropriate for your production environment can result in a much faster and more stable system.

Webcast tomorrow: Getting IT Right with an End-to-End Access Control Strategy

My good friends Marc Boroditsky (VP of Product Management) and Naresh Persaud (Director of Product Marketing) will be leading a webcast tomorrow on the topic of end-to-end access control.  I think it will be worth checking out.  For more information and to register for the event, follow this link.

Monday, August 1, 2011

OAM 11g Policy Model Part 4: Resource Protection Levels and Excluded Resources

This is the 4th post in my series going over the OAM 11g policy model and another post in the broader OAM 11g Academy series. To view the first post on the OAM 11g policy model, as well as the index to the entire OAM 11g Academy series, click here:
OAM 11g PS1 introduced two important enhancements related to resource definitions in the policy model:
  1. The ability to optionally include query strings as part of resource definitions.
  2. The designation of a protection level for a resource and the completely new concept of excluded resources that go with it.

Before PS1, resources could not include query strings and OAM essentially ignored query strings in its policy evaluation. In PS1, there is a specific field for query strings in the resource definition screen of the OAM console that you can optionally make use of. We’ll come back to this in a future post.

For now I’ll point you at the documentation but also point out that you don’t have to define query strings if you don’t want to. Incoming requests for resources that include query strings will still resolve to resources that have blank query string parameters. For example, if you define a resource /foo/bar.jsp and leave the query string field blank, it will still match requests for /foo/bar.jsp?x=y and the like.

Protection Levels
With that out of the way, I’d like to talk about the 2nd important enhancement to resource definitions in OAM 11g and that is the notion of protection level and in particular the designation of excluded resources.

When you define a resource in OAM 11g PS1, you specify one of three protection levels: protected, unprotected, or excluded.

Protected resources must be included in an authentication policy that uses an authentication scheme with a protection level greater than 0. Protected resources can be associated with any authorization policy.

Unprotected resources must be included in an authentication policy that uses an authentication scheme with a protection level of 0. Most often this will be the anonymous authentication scheme. Unprotected resources can be associated with any authorization policy. Indeed, OAM will block access to unprotected resources that are not included in an authorization policy.

However, it is worth noting that it probably doesn’t make sense to put an unprotected resource into an authorization policy with constraints. If you plan on applying constraints to requests to a resource, then you should make that resource protected.

Session validation is still performed on requests to unprotected resources. However, if a user's session times out or is otherwise invalidated and the user tries to access an unprotected resource, they will be let through, but their name will not be propagated in the OAM_REMOTE_USER header; instead, OAM_REMOTE_USER will be set to anonymous.
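Applications sitting behind a webgate need to account for these header semantics. The sketch below is a hypothetical helper (not part of any OAM SDK; the class and method names are my own) showing how an application might normalize the OAM_REMOTE_USER value it receives, treating the literal value anonymous the same as a missing header:

```java
// Hypothetical helper, not part of OAM: normalizes the OAM_REMOTE_USER
// header value that an application receives from the webgate.
public class OamUser {

    // Returns the authenticated username, or null when the request is
    // anonymous (unprotected resource with no valid session) or the
    // header is absent entirely (e.g. an excluded resource).
    public static String resolveUser(String oamRemoteUser) {
        if (oamRemoteUser == null || oamRemoteUser.trim().isEmpty()
                || "anonymous".equalsIgnoreCase(oamRemoteUser.trim())) {
            return null;
        }
        return oamRemoteUser.trim();
    }

    public static void main(String[] args) {
        System.out.println(resolveUser("jdoe"));      // prints jdoe
        System.out.println(resolveUser("anonymous")); // prints null
    }
}
```

In a servlet the raw value would come from something like request.getHeader("OAM_REMOTE_USER") before being passed through a helper like this.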

Basically, unprotected resources are the pre-PS1 equivalent of associating a resource with the anonymous authentication scheme.

Excluded resources are entirely new to PS1. When a request comes in and matches a resource that has been designated as excluded, the webgate/agent simply lets the request through.

No calls to the OAM server are made, no session validation is performed, and the OAM_REMOTE_USER header is not added to the request. Also of note, if you have configured your webgates/agents to issue certain cache control headers back to the browser, they will not be issued in the case of excluded resources.

As you can probably see, OAM’s handling of excluded resources is very fast because, well, it isn’t doing much for them.

Unprotected vs. Excluded
At this point (if you are like me) you are probably wondering about when resources should be designated as excluded and when they should be designated as unprotected. On some levels these are very similar designations, although there are some important differences.

For performance reasons, I think it is a good idea to designate as many of your resources as possible as excluded. At the same time you want to make sure that your applications are still secure and functional. So, I’ve come up with the following guidelines:

1) If a resource is private, which is to say only authenticated users should have access to it, then it should be designated as protected.

2) If the resource is public, which is to say that both authenticated and unauthenticated users should have access to it, but you want to be able to know the names of authenticated users and/or set responses that create headers containing certain information about authenticated users, then the resource should be designated as unprotected.

3) If you want to audit requests to a resource using OAM, then the resource should be designated as protected or unprotected. Note that for excluded resources you can still audit using web server logs.

4) If you want session validation to be performed in advance of populating the OAM_REMOTE_USER header, then the resource should be designated as protected or unprotected.

5) A corollary to items 2 and 4 above: if you want the WLS SSO Synchronization Filter to be active and “protect” a resource, then the resource should be designated as protected or unprotected. This is an important note for those using OAM to protect Oracle WebCenter or ADF based applications.

6) If none of the above are true and you have a resource that is public, that doesn’t need to know anything about the user, where you don’t care about using OAM to audit access to the resource, and you don’t care about the WLS SSO Synchronization Filter for the resource, then (finally) you can make the resource excluded.

Short Cut Guideline
A short cut to the above guidelines that many will find useful is to designate public static resources, such as images, PDFs, and static HTML, as excluded.

If such resources are grouped in directories then you can exclude them by defining policies like:
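For instance, a small set of excluded-resource definitions might look like the following (these patterns are illustrative; they assume the OAM wildcard syntax in which "..." matches any number of intermediate directories, so verify the exact resource URL syntax against the documentation for your release):

    /images/.../*
    /scripts/.../*
    /css/.../*

Each would be created as a resource of type HTTP with its protection level set to Excluded.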



If such resources are more scattered, then you should be a little more careful and define resources individually or by file/content type like:
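For example, resources defined by file/content type might look like this (again, illustrative patterns using the "..." and "*" wildcards; confirm the syntax against the OAM documentation):

    /.../*.gif
    /.../*.png
    /.../*.pdf

Only add a pattern for static HTML this way if you are sure all such pages are truly public.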




The notion of protection level for resources is an important addition to OAM 11g. The designation of excluded resources is likewise very important and will prove very useful for maximizing performance of your OAM enabled applications. You can read more about protection levels, excluded resources, and query strings in resources (which we’ll blog more about later) in the documentation: