Friday, May 28, 2010

Using Oracle Entitlements Server to Secure Spring Applications

About 7 years ago, when I worked at Netegrity in the Architecture group, I remember a meeting that Deepak Taneja (CTO) and I had with some guy who was talking about combining SiteMinder with Aspect Oriented Programming (AOP). The guy (apologies...I have three kids now, and can barely remember my own name) was saying you could just "inject security" right into the application. I was like "Huh?"...It all sounded very cool and clever, but this was right when Java was still very young, and SiteMinder was (and still is) a Web Access Management product. So the prospect of making lots of calls out from the application through the SiteMinder Agent API - which I think used JNI then - to provide authorization for Java methods seemed like "crazy talk".

Fast forward to the last day of FY2010, and here I am writing a post about how to integrate Oracle Entitlements Server with Spring using AOP. What is interesting is that even though the technology has improved (both on the Java side and on the Identity Management side) and the solution is now possible, there are still some fundamental questions about the viability of such a model for enterprises. In the enterprise model, security is a coordinated effort between application developers and security administrators to make sure that applications are secure. The security model needs to be sufficiently flexible to meet the security requirements of the business and to enable meaningful policy changes without having to engage expensive development resources. It's with these goals in mind that I set off to build a "real" solution.

Annotations - A Brave New World, at least for me

I knew that I wanted to use aspects to automatically make the calls to OES. This is more of an AOP thing than a Spring thing per se, but I wanted a solution that was fairly generic and didn't rely on other frameworks. I also wanted to see if there was a way to just magically get authorization called - security, along with logging and exception handling, has always been held up as a good use case for AOP. The first challenge was to figure out how to get my code - the aspect - invoked. Obviously, getting called before every Java method made no sense. The good news is that you can define pointcuts as part of the Spring configuration. The bad news is that the syntax for this stuff is pretty messy. Example:

<aop:aspect id="beforeExample" ref="aBean">
<aop:before
pointcut="execution(* com.xyx.myapp.dao.*.*(..))"
method="doAccessCheck"/>
</aop:aspect>
I think this means whenever you're executing a method of any protection level in the package com.xyx.myapp.dao. This is pretty tricky stuff, and if you wanted to secure a different method, you'd need to go change this. Also, the person doing this is obviously the developer, not a security person, since all of the definitions are in terms of Java packages. So, people would probably do what people usually do when confronted with something really hard...either not use it, or set the pointcut like the example above...very broad...all methods result in a call to the authorization aspect. So, what is the simpler way for a developer to indicate that a method should be secured? Annotations to the rescue. In Spring AOP, you can also define pointcuts that look for annotations on methods or classes. Here's the pointcut I'm using for this example:

@Around("( @annotation(Protected) || @within(Protected) ) && !@annotation(UnProtected)")

This means call my aspect, if the method has an @Protected annotation or the class does and the method doesn't have an @UnProtected annotation. Basically, if you mark a class as @Protected all of the methods are protected by default. You can override that with an @UnProtected. You can also have a class which is unprotected, but mark individual methods as @Protected. My feeling is with this model, you'll probably never have to go change the pointcut. The annotations do all of the work.
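The marker annotations themselves are tiny. The post doesn't show their declarations, so this is a minimal sketch based on how they're described - the retention and target choices are my assumptions (runtime retention is required for the weaver to match them):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a class (all of its methods) or a single method as requiring an OES check.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Protected {
}

// Opts a single method out of a class-level @Protected.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface UnProtected {
}
```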

In addition to @Protected and @UnProtected, I added three more tags:

  • @Resource - this is used at the class level to indicate where in the OES resource hierarchy this class sits, or at the method level to indicate the value used to represent this method in the OES resource when making an authorization call. If a class has both, then the resource will be the concatenation of the class and method level @Resource values. If no @Resource annotations are found, then the name of the class is used.
  • @Privilege - this is used at the class level to indicate the default privilege in OES for all of the @Protected methods on the class. It can also be used to override the default with a specific value for a method. If no @Privilege annotation is found, then the method signature is used (yuck).
  • @AppContext - this is used at the method or parameter level to indicate that this value should be passed to OES as an attribute in the AppContext. All methods in a class that have the @AppContext annotation will be called and have their values passed to OES. All of the parameters in a method call will be passed, even if they don't have an @AppContext - they get the names param1, param2,..,paramN. @AppContext also has some optional parameters - passObject and isProtected. The passObject parameter is set to false by default, which means that the authorization aspect will convert the object to a set of primitives. If it is set to true, then the object itself is passed. This can have advantages when working with the Java SM and custom extensions - attribute retrievers and eval functions. The other parameter, isProtected, determines whether, when the aspect calls an @Protected method to retrieve the AppContext, an authorization call to OES should also be made for that method. This attribute defaults to false, but there are cases where the authorization check should still occur.
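Based on the three descriptions above, the annotation types might be declared along these lines. This is a reconstruction, not the author's code - only the names value, passObject, and isProtected come from the post:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Positions the class (or method) in the OES resource hierarchy.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Resource {
    String value();
}

// Names the OES privilege for a class's @Protected methods, or overrides it per method.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Privilege {
    String value();
}

// Flags a method or parameter whose value is passed to OES in the AppContext.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.PARAMETER})
@interface AppContext {
    String value();                      // attribute name in the AppContext
    boolean passObject() default false;  // pass the object itself vs. primitives
    boolean isProtected() default false; // still authorize the getter call itself
}
```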

Here are some sample classes that illustrate more of the annotations that I'm using for this solution:

package test;


public class AccountImpl implements IAccount {

    private String accountId;
    private double balance;
    private String accountType;
    private IPerson accountOwner;

    public AccountImpl() {
    }

    public AccountImpl(String accountId, double balance, String accountType,
            IPerson owner) {
        this.accountId = accountId;
        this.balance = balance;
        this.accountType = accountType;
        this.accountOwner = owner;
    }

    public double getBalance() {
        return this.balance;
    }

    public String toString() {
        return super.toString();
    }

    public String getAccountId() {
        return this.accountId;
    }

    public IPerson getAccountOwner() {
        return this.accountOwner;
    }

    public String getAccountType() {
        return this.accountType;
    }

    public void setAccountOwner(IPerson owner) {
        this.accountOwner = owner;
    }

    public void setAccountId(String accountId) {
        this.accountId = accountId;
    }

    public void setBalance(double balance) {
        this.balance = balance;
    }

    public void setAccountType(String accountType) {
        this.accountType = accountType;
    }
}


package test;

import java.util.List;


public class PersonImpl implements IPerson {

    private String name;
    private String SSN;

    private List<IAccount> accounts;

    public List<IAccount> getAccounts() {
        return accounts;
    }

    public void setAccounts(List<IAccount> accounts) {
        this.accounts = accounts;
    }

    public PersonImpl() {
    }

    public PersonImpl(String name, String sSN) {
        super(); = name;
        SSN = sSN;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }

    public String getSSN() {
        return SSN;
    }

    public void setSSN(String sSN) {
        SSN = sSN;
    }

    public void transfer(
            @AppContext(value="from") IAccount acct1,
            @AppContext(value="to") IAccount acct2,
            @AppContext(value="amount", passObject=true) double amount) {
        // transfer logic omitted
    }
}


Some Design Considerations with Annotations

I made a conscious decision to limit the annotations to the basic OES runtime API, which is very aligned with the XACML concept of Subjects, Actions, Resources and Environment with attributes. I did not expose the ability for developers to author policy or roles (@AllowableRoles) in the annotations. Other models support this, but you're creating brittleness there: if things change, developers have to go back in and change the roles/policy annotations. OES has facilities for authoring these policies, so all I'm trying to do here is keep the integration between the OES runtime API and Spring simple, and then make the authoring of those policies sensible.

Before we move on, it's worth noting that I didn't do anything explicit here about identifying the subject. This is basically a solved problem: I'm assuming that the user is already authenticated and that there is a simple way inside of the aspect to know who the user is. OES has the ability, through the identity asserter, to take a token (username or JAAS Subject) and use that as the subject in the authorization decision.

Now, back to the thinking around these particular annotations. One of the other really hard problems in writing custom PEPs is getting the admin artifacts aligned with the runtime calls. OES has the ability to address this through discovery mode - which basically just captures the calls, and records the resources, attributes and privileges in a file format that can be imported into the OES admin console. This works fine, but with annotations we can actually do much better. Steady readers of this blog will remember that I used the annotation processing capabilities of the JDK 1.6 compiler in OWSMAC (the OWSM assertions compiler project) to simplify the creation of the XML deployment descriptors for OWSM custom assertions. As architects, we have a tendency to fall in love with solutions, but I do think that in this regard the use of annotations is the best choice. I created a simple annotation processor, and now when you compile the classes above, you get this output:

Buildfile: C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\build.xml
[delete] Deleting directory C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\bin
[mkdir] Created dir: C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\bin
[copy] Copying 1 file to C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\bin
[echo] oes_spring_aop_test: C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\build.xml
[javac] Compiling 5 source files to C:\Documents and Settings\jbregman\workspace\oes_spring_aop_test\bin
[javac] Processing Annotations: [,,,,]
[javac] Protected Classes to Process: [test.PersonImpl, test.AccountImpl]
[javac] PRIV FILE=file:/C:/Documents%20and%20Settings/jbregman/workspace/oes_spring_aop_test/bin/oes/priv
[javac] OBJECT FILE=file:/C:/Documents%20and%20Settings/jbregman/workspace/oes_spring_aop_test/bin/oes/object
[javac] ATTR FILE=file:/C:/Documents%20and%20Settings/jbregman/workspace/oes_spring_aop_test/bin/oes/attr

That's right - based on the annotations in the code, it creates OES import files. You can pull these directly into the OES Admin console. This saves a lot of time and effort in coordinating runtime and admin, since they are using the same tooling and model. The most interesting of the files is attr - this is a list of the dynamic attributes that are available at runtime.


These are some really nice attributes to write policies on...there is one param1 that I left there from testing, but basically you have a seamless binding between the code and the policy artifacts, for little more than adding some basic annotations.
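The processor behind this build output can be quite small. Here's a bare-bones sketch of the JSR 269 approach - not the author's actual processor, which also writes the priv/object/attr import files; this one just gathers the annotated elements those files would be generated from (the test.* annotation package is my assumption):

```java
import java.util.Set;
import java.util.TreeSet;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

// Runs inside javac (JSR 269): each round it records every element carrying
// one of the security annotations, from which the OES import files are built.
@SupportedAnnotationTypes({"test.Protected", "test.Resource",
                           "test.Privilege", "test.AppContext"})
@SupportedSourceVersion(SourceVersion.RELEASE_6)
class OesImportProcessor extends AbstractProcessor {

    final Set<String> protectedElements = new TreeSet<String>();

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                protectedElements.add(e.toString());
            }
        }
        // A real processor would now emit the priv/object/attr import files
        // via processingEnv.getFiler().
        return false; // don't claim the annotations; let other processors see them
    }
}
```

Register it with javac (e.g. via `-processor` or META-INF/services) and it runs on every build, which is what keeps the admin artifacts from drifting away from the code.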

Pulling it all together - the runtime

Now that we've built this model, let's see how it works at runtime. I have a very basic spring-config.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="">

<bean id="josh" class="test.PersonImpl">
<property name="name" value="Josh"/>
<property name="SSN" value="123-45-6789"/>
<property name="accounts">
<list>
<ref bean="acct2"/>
</list>
</property>
</bean>

<bean id="chris" class="test.PersonImpl">
<constructor-arg type="java.lang.String" value="Chris"/>
<constructor-arg type="java.lang.String" value="987-65-4321"/>
</bean>

<bean id="acct1" class="test.AccountImpl">
<property name="accountId" value="11111111"/>
<property name="balance" value="6000"/>
<property name="accountType" value="CHK"/>
<property name="accountOwner" ref="chris"/>
</bean>

<bean id="acct2" class="test.AccountImpl">
<property name="accountId" value="22222222"/>
<property name="balance" value="7000"/>
<property name="accountType" value="SAV"/>
<property name="accountOwner" ref="josh"/>
</bean>

</beans>


Notice I'm using the Load Time Weaver (LTW) to pull in the aspects. I did this in the META-INF/aop.xml file:

<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "">

<aspectj>

<weaver options="-showWeaveInfo">
<!-- only weave classes in our application-specific packages -->
<include within="test.*" />
</weaver>

<aspects>
<!-- weave in just this aspect -->
<aspect name=""/>
</aspects>

</aspectj>


The aop.xml and the aspects are all of the "overhead" that is placed on the developer. Here's the test program:

package test;

import org.springframework.context.ApplicationContext;
import;

public class Test {

    public static void main(String[] args) {

        ApplicationContext ctx =
            new ClassPathXmlApplicationContext(args[0], Test.class);

        IPerson josh = (IPerson) ctx.getBean("josh");
        IPerson chris = (IPerson) ctx.getBean("chris");
        IAccount acct1 = (IAccount) ctx.getBean("acct1");
        IAccount acct2 = (IAccount) ctx.getBean("acct2");

        System.out.println("Chris SSN=" + chris.getSSN());

        josh.transfer(acct1, acct2, 63.45);
    }
}


And this is the output:

>method-execution PersonImpl.setName(..)
>IsAccessAllowed action=write resource=test/PersonImpl appContext={
>method-execution PersonImpl.setSSN(..)
>IsAccessAllowed action=writeConfidential resource=test/PersonImpl appContext={
>method-execution PersonImpl.getSSN()
>IsAccessAllowed action=readConfidential resource=test/PersonImpl appContext={
Chris SSN=987-65-4321
>method-execution PersonImpl.transfer(..)
>IsAccessAllowed action=txfer resource=test/PersonImpl appContext=


There are 4 authorization calls made in the sample. The first two are made when setting the Name and SSN of the josh bean. There are no corresponding calls on the chris bean because in spring-config.xml the chris bean is created using the constructor, while josh is created using the setters. I made constructors unprotected by rule, because protecting constructors seemed to raise too many issues. The next authorization call is made when getting chris's SSN. The final authorization is the really interesting one, where you see all of the context of the transfer being passed. You see that the List of IAccount is passed as individual items. I could have set the @AppContext to passObject, and then it would have simply passed the list. Either way, there is all of the information needed to make a meaningful authorization decision in OES.


Java/Spring/AOP have all come a long way since that meeting seven years ago. This sample demonstrates some of the "magic": the ability to seamlessly apply authorization policies with very little coding. Furthermore, it does it in such a way that I think there is a good balance between what the developer has to do to get this model enabled, and what security policy administrators/authors can then do with the artifacts the model generates. Finally, what really makes this solution possible is the performance and scalability of the OES engine. Unlike SiteMinder or other centralized models, the Java SM can run in process and make decisions very, very fast. With the passObject=true option, all of the information can be available in its native form to OES. This is another way that expensive serializing and de-serializing can be avoided. The authorization aspect discussed here is functionally pretty basic - it does inbound checks and then calls proceed. You could imagine doing some interesting things with responses from OES - like adding additional where clauses to queries or data masking. Now that would be something.

Tuesday, May 25, 2010

When NOT to use OES' bulk calls

The simplest OES API is the one that allows you to ask "is this user authorized to do X?", also known as isAccessAllowed. As I mentioned back in March, OES also has a couple of calls that allow you to speed things up by making bulk calls.
One (isBulkAccessAllowed) allows you to send a batch of action/resource/context requests and get a list of true/false responses. The other (isChildResourceAccessAllowed) allows you to ask OES for a list of resources it knows about that the user is authorized to access.
These methods are great and can really improve your app's performance if you use them when appropriate. I've had to talk more than a few people out of using them when they weren't necessary or proper, and I thought I'd write down the reasoning for anyone else that needs it. Sometimes the most difficult and important part of having a particular tool in your toolbox is knowing when to avoid its use.

First things first:
When you create resources in OES you're basically creating a hierarchy not unlike that of a filesystem. The end result is a tree and you can write policies on any node in the tree.

When you ask "isAccessAllowed" with a single resource you're asking about a particular node in that tree.

Let's talk about when not to use isChildResourceAccessAllowed first because it's the more obvious one.

Let's say I create a resource tree of the following resources:
  • /accounts
  • /accounts/checking
  • /accounts/savings
  • /accounts/investing
I'd mark checking, savings and investing as "virtual" which means that you can ask OES about anything that starts with those strings and it will evaluate the policies on the higher level object.

I'd probably write policies on checking and savings accounts that say to "you're allowed to withdraw money if you are the account owner". Then at runtime if I asked OES can "Chris" withdraw "$1,000,000" from /accounts/checking/234877682345 OES can answer that question quite easily - it just has to figure out if "Chris" owns that account and then it can say yes or no. It's then up to the banking app to decide if there's enough money in the account (though there never is).

Unless I actually went in and created every single checking account at the bank under /accounts/checking, the isChildResourceAccessAllowed() method can't work - OES doesn't know how to get a list of the checking accounts! (at least not in this example!)
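The "virtual" resource behavior described above is easy to picture: OES evaluates an unknown resource against the deepest ancestor node it actually knows about. This toy matcher (not OES code, just an illustration of the idea) uses the tree from this example:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Toy model of "virtual" resource matching: an unknown resource such as
// /accounts/checking/234877682345 is evaluated against the deepest known
// ancestor node (/accounts/checking here).
class VirtualResourceMatcher {

    final Set<String> knownNodes = new LinkedHashSet<String>(Arrays.asList(
            "/accounts", "/accounts/checking", "/accounts/savings",
            "/accounts/investing"));

    String policyNodeFor(String resource) {
        String candidate = resource;
        while (!candidate.isEmpty()) {
            if (knownNodes.contains(candidate)) {
                return candidate;
            }
            // strip the last path segment and try the parent
            int slash = candidate.lastIndexOf('/');
            candidate = (slash <= 0) ? "" : candidate.substring(0, slash);
        }
        return null; // nothing in the tree matches
    }
}
```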

Another case where it's inappropriate is when OES knows about lots of resources but you only really care about a few of them. For example, if you had /myapp/feature1 through /myapp/feature1000 and you're only trying to decide whether to show features one through five, you'd ask OES for all child resources and it would have to evaluate the policies for 1000 objects before you'd hear back about the five you actually cared about. A big fat waste of OES' and your users' time.

So when should you use isChildResourceAccessAllowed()? When OES has a list of resources and you want to know which of them the user can see. Things like menu options or drawers on the Pill O-Matix make sense.

What about the other one - isBulkAccessAllowed()?

The isBulkAccessAllowed() method is simply a batch version of the original, simple isAccessAllowed call. If you have 100 resources in your application and you want to find out which ones the user is allowed to access, you might call isBulkAccessAllowed. Seems perfectly logical, and it is. There's only one tiny nuance: sometimes it's more trouble than it's worth.

When you have a list of resources, you can ask OES about each resource literally as you iterate over them. One time through and bing, bang, boom, you're done. No muss, no fuss.

When you make the bulk call to OES, you have to build the list of resources, call isBulkAccessAllowed(), and then parse the result - meaning iterating over the resources again to see if each one is in the list of resources the call returned. Frankly, a bit of a pain in the rump.

If you're making calls to a remote SM over Java or RMI you're saving the round trip time of talking to your PDP times the number of resources. But if you are using an embedded SM (e.g. the Java or WebLogic SM) then you've gained nothing and only caused yourself more grief.
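The two calling patterns look roughly like this, with a hypothetical Pdp interface standing in for the OES runtime API (the real signatures differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical PDP client interface -- a stand-in for the OES runtime API.
interface Pdp {
    boolean isAccessAllowed(String user, String resource, String action);
    // Batch form: returns the subset of resources the user may access.
    Set<String> isBulkAccessAllowed(String user, List<String> resources, String action);
}

class BulkVsIterate {

    // One call per resource: simplest, and cheap when the PDP runs in-process.
    static List<String> visibleByIteration(Pdp pdp, String user, List<String> resources) {
        List<String> visible = new ArrayList<String>();
        for (String r : resources) {
            if (pdp.isAccessAllowed(user, r, "view")) {
                visible.add(r);
            }
        }
        return visible;
    }

    // One bulk call, then a second pass to correlate the results -- worth it
    // only when each individual call pays a network round trip.
    static List<String> visibleByBulk(Pdp pdp, String user, List<String> resources) {
        Set<String> allowed = pdp.isBulkAccessAllowed(user, resources, "view");
        List<String> visible = new ArrayList<String>();
        for (String r : resources) {
            if (allowed.contains(r)) {
                visible.add(r);
            }
        }
        return visible;
    }
}
```

With an embedded SM, visibleByIteration is just as cheap per call and skips the correlation pass; the bulk form only pays off when each call crosses the network.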

So if you're using the embedded SM and might switch to a centralized SM in the future then by all means use the isBulkAccessAllowed() call. If on the other hand you know for a fact that you will always be using an embedded SM I generally say skip using the isBulkAccessAllowed() method.

Hope this helps someone out there!

Monday, May 24, 2010

Identity Propagation using JMS Transport with Oracle Server Bus 11gR1 PS2

In returning from Santa Clara and the deep dive technical training on the new release, I realized something about security in OSB. In general, when we discuss OSB security, we spend a lot of time talking about the message level (WS-Security), but not that much time discussing what can be done at the transport. A lot of what is new in 11gR1 OSB is additional transports - SOA direct, JEJB, JCA - that optimize connectivity among some of the existing FMW components. I'm still coming up to speed on the capabilities of those new transports, but I thought it would be worth discussing the security aspects of one of the existing transports - JMS.

The scenario for this discussion is pretty simple: JMS Proxy to JMS Proxy to HTTP Business Service. So the question is: how can I secure this type of set-up, and how can I propagate the user's identity? For purposes of this discussion, we'll use the SOAP/JMS JAX-RPC client to create the message to be placed on the JMS proxy.

SOAP over JMS client to JMS Proxy

The SOAP/JMS client encapsulates the following steps:

  • Creating an InitialContext to the server
  • Looking up the ConnectionFactory
  • Looking up the Queue/Topic
  • Sending the message and waiting for a response

This is a sample client that I used:

public static void main(String[] args) throws Exception {

    // trust store for 1-way SSL (set via the WLS trust store system properties):
    //   "C:\\Documents and Settings\\jbregman\\workspace\\client-project\\sample.jks"

    test.HttpBackEndServicePortBindingQSService service =
        new test.HttpBackEndServicePortBindingQSService_Impl();

    test.HttpBackEndService hwService =
        service.getHttpBackEndServicePortBindingQSPort();

    String uri = "jms://"
        + URLEncoder.encode("OSB Project 1")
        + "/JMSProxyToBusinessService?URI=HelloWorldReq";

    JmsTransportInfo ti = new JmsTransportInfo(uri);

    String hello = hwService.hello("jms");
}

From a security perspective, let's start with securing the transport between the client and the server. I'm using 1-way SSL, so this means that I need to be able to trust the certificate presented by the server. The JAX-RPC client uses the WLS SSL stack, so I've specified the trust store set-up using the WebLogic trust store system properties. Note: the current release of the SOAP over JMS protocol does not support 2-way SSL from a standalone client. The URL that is used to create the initial context is specified in the WLStub.JMS_TRANSPORT_JNDI_URL property of the stub. Notice I'm using t3s and port 7002 - my SSL port. To authenticate the user and create the initial context, I'm using username and password. These values are specified in the Stub.USERNAME_PROPERTY and Stub.PASSWORD_PROPERTY properties respectively.
Once the user is authenticated, they need to be authorized to place a message on the JMS Queue. By default all users are authorized to do this, but from the WebLogic console, you can restrict which users can perform which actions on a queue. These policies are specified on the Security tab, which can be found on every JMS resource inside of WebLogic Server. In this example, I configured the user sender to be able to send to the Queue and the user receiver1 to be able to receive messages from the queue.

Once the sender has sent the message, the receiver must be authorized to pull the message off of the queue. This begs the question "In an OSB proxy, who is the user retrieving the message?" It is the service account associated with the JMS transport. You can also specify in the JMS transport that the message is to be retrieved using SSL. Again, you cannot use 2-way SSL.

So, once the receiver has pulled the message off of the queue, we can take a look at the JMS Transport headers and see a few interesting things

<xml-fragment xmlns:tran="" xmlns:xsi="" xmlns:jms="">
<tran:headers xsi:type="jms:JmsRequestHeaders">
<tran:user-header name="_wls_mimehdrContent_Type" value="text/xml; charset=utf-8"/>
<tran:user-header name="_wls_mimehdrAuthorization" value="Basic c2VuZGVyOndlbGNvbWUx"/>
<tran:user-header name="_wls_mimehdrSOAPAction" value="&quot;&quot;"/>
<tran:user-header name="URI" value="/OSB+Project+1/JMSProxyToBusinessService"/>
</tran:headers>
</xml-fragment>

I think the two most interesting headers here are _wls_mimehdrAuthorization and JMSXUserID. JMSXUserID is a feature that WLS JMS has to pass the sender's identity. You can specify a default behavior (send if asked, send always) and then override it on the destination. Here we see that it was sender that sent the message, even though it was de-queued by receiver1. We'll talk more about _wls_mimehdrAuthorization in a second. For now, I'm going to hold on to the _wls_mimehdrAuthorization header by copying it to the transport headers of the outbound request.
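It's worth remembering what's inside that _wls_mimehdrAuthorization value: HTTP Basic Authentication is just "user:password" in Base64. A quick decode (using the JDK 8 java.util.Base64 class for brevity) shows why holding on to this header deserves care:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// The Basic scheme is just "user:password" in Base64 -- an encoding, not
// encryption, so anyone who sees the header sees the password.
class BasicHeaderDecoder {
    static String decode(String headerValue) {
        String b64 = headerValue.substring("Basic ".length());
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }
}
```

Decoding the header value above ("Basic c2VuZGVyOndlbGNvbWUx") yields sender:welcome1.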

Invoking 2nd JMS Proxy as JMS Business Service

The set-up for the next proxy is similar to the set-up of the first. The JMS transport can be configured to use 1-way SSL and a service account to retrieve messages. The protection on the second queue is a little different. The identity permitted to receive messages is the service account - let's call this identity receiver2 - but how can we restrict who sends messages to this queue? Unfortunately, neither the sender nor the receiver identity is propagated automatically, and the call to the second queue is made as anonymous. Here are the debug details:

<xml-fragment xmlns:tran="" xmlns:xsi="" xmlns:jms="">
<tran:headers xsi:type="jms:JmsRequestHeaders">
<tran:user-header name="_wls_mimehdrContent_Type" value="text/xml; charset=utf-8"/>
<tran:user-header name="_wls_mimehdrAuthorization" value="Basic c2VuZGVyOndlbGNvbWUx"/>
<tran:user-header name="_wls_mimehdrSOAPAction" value="&quot;&quot;"/>
<tran:user-header name="URI" value="/OSB+Project+1/JMSProxyToBusinessService"/>
</tran:headers>
<tran:headers xsi:type="jms:JmsRequestHeaders">
<tran:user-header name="_wls_mimehdrAuthorization" value="Basic c2VuZGVyOndlbGNvbWUx"/>
<tran:user-header name="SOAPAction" value="&quot;&quot;"/>
</tran:headers>
</xml-fragment>

Notice that the JMSXUserID is receiver2. This is the user that pulled the message off of the second queue. Presumably, this is because the service account of the business service is receiver2.

If you want to restrict who can place messages on this queue, you'll need to do it using message-level security, and then use message-level access control in OSB.

JMS Proxy invoking HTTP Business Service

Finally, time to invoke the HTTP business service. This can be done over 1-way SSL or 2-way SSL with a service key, but what if we want to pass the original user's identity? If only we had their username and password....lo and behold, simply copy the _wls_mimehdrAuthorization transport header to the Authorization HTTP header and you will have HTTP Basic Authentication to the business service. I'm not great at XQuery, but $inbound/ctx:transport/ctx:request/tp:headers/tp:user-header[1]/@value retrieved the value of the header. We can see from the JWS web service that I used to test that we're really getting the user's identity passed.

import javax.jws.WebService;

@WebService
public class HttpBackEndService {

    public String hello(String in) {
        // the current subject shows the identity that was propagated
        return "Hello " + weblogic.security.Security.getCurrentSubject() + " " + in;
    }
}

You get output like this:

Hello Subject:
Principal: sender
Principal: SenderGroup


WebLogic Server can define authorization policies on a JMS destination, and OSB can work with those policies through the use of service accounts. You can pass the identity of the sender by using JMSXUserID. You can also capture the HTTP Basic Authentication credentials of the user stored in _wls_mimehdrAuthorization. Think very carefully about this: you are going to store the user's password in a simple, well-known encoding. It's probably a better approach to use the world's most dangerous identity asserter and a custom authentication provider to push the sender's identity onto the stack than to hold on to the password. All told, the JMS protocol has rich security functionality and can be easier to implement than message-level security. On the other hand, since JMS messages can be stored for a long time, in many cases it makes a great deal of sense to use message-level protection - encryption of sensitive data and signing to avoid tampering. What do you think? Do you have to use message-level security with JMS, or is some of what I've shown with the JMS transport and SSL effective in some situations?

Saturday, May 22, 2010

Performance Considerations for Various Security Models for Oracle Service Bus

I'm just back from California after a week of intensive OSB 11gR1 deep dive training/education. In one of the sessions we reviewed some of the performance numbers related to various security models in the release. The role of performance testing in engineering is to ensure that there are no performance regressions from the previous release, as well as to look for hot spots/low-hanging fruit to improve the overall performance of the product.

In my role with the A-Team, I get a slightly different set of performance related questions. These are basically of the form "How fast will this be in my environment?". This is obviously a question which has no magical answer. The myriad of factors affecting the performance of a complex network such as a SOA/OSB deployment cannot be simplified into a pat answer like "20% faster". So, what is the best that we can do in giving guidance to customers in selecting a security model for their SOA and setting performance expectations?

In my experience, there are two things that many customers leave until the very end of their deployment - security and performance. I'm not sure why, but I suspect it's because both topics are somewhat tedious and complicated and are not seen as "critical" to delivering the business functionality of the application. The best advice that I can give is to consider these non-functional requirements early in the development process and make sure that there is plenty of time to analyze, test and refine the solution. What makes this even more complicated is that security and performance can be diametrically opposed forces in a solution. The fastest approach is to have no security at all. Security always makes things slower...think about how fast it is to walk through an open door vs. going through the security screening at the airport.

Specifically, when looking at web-services security, one choice confronting customers is whether to use username and password or SAML to identify the caller. In this discussion, assume that you have all of the passwords of the end users, so federation is not a requirement but rather an option. In order to do an "apples to apples" comparison, we need to think about the performance (latency and throughput) of a number of variations. Let's look at a few approaches:

  • UsernameToken (UNT) clear text password over 1-way SSL
  • UNT digest password over HTTP
  • SAML Sender Vouches with message signature over HTTP
  • SAML Bearer assertion over 2-way SSL

Just to be clear, I did not include UNT with a clear-text password or SAML Bearer over plain HTTP because they are not secure on their own, excluding network security. In the UNT case, since there is a clear-text password, the request really needs to be over SSL. In the SAML Bearer case, if you use HTTP or 1-way SSL, then anyone can send a request as any user if the SAML assertion is unsigned. Even if the assertion is signed, you could take that assertion and add it to any message. If you have signed assertions over 1-way SSL, the assertion is for the user but can be taken and added to any request. This last use case is on the border for me, but for the sake of argument let's leave that one out of further discussion.

In looking at each of these 4 scenarios in a little more detail, I think we need to look at what happens at the webservice consumer, webservice producer, and at the user store/identity directory.

UsernameToken (UNT) clear text password over 1-way SSL

At the web-service consumer, generating a clear-text UNT token is very straightforward. In initiating the 1-way SSL, there is certificate and hostname verification that needs to be done, but this is also fairly inexpensive. In creating the SSL connection, the entire contents of the HTTP request are encrypted and sent. At the web-service producer, the request is decrypted and then the password is validated against the directory. This can be a somewhat expensive operation, especially if this is an LDAP bind. The HTTP response is encrypted again and returned to the consumer, who decrypts and processes the response.
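The directory bind at the producer is typically the dominant cost in this flow. A minimal JNDI sketch of that validation step (the host, port, and DN here are hypothetical placeholders, not anything from a specific product configuration):

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class BindCheck {

    // Build the JNDI environment for a simple LDAP bind; URL and DN are hypothetical
    public static Hashtable<String, String> buildBindEnv(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    // The bind itself: constructing the context performs the authentication round-trip
    public static boolean validate(String userDn, String password) {
        try {
            new InitialDirContext(buildBindEnv(userDn, password)).close();
            return true;
        } catch (NamingException e) {
            return false;
        }
    }
}
```

Each request pays for this directory round-trip, which is why the directory infrastructure needs to be scaled for the expected authentication load.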

UNT Digest over HTTP

The client generates a nonce and a timestamp, and then uses those to create a SHA1 hash of the nonce, timestamp, and password. The UNT token is then added into the request and sent to the producer. The producer retrieves the username from the UNT and then retrieves the clear-text password from the user store. This doesn't mean that the password is stored in the clear, but rather that it is recoverable: the password is decrypted by the producer and the same SHA1 hash is generated. Computing the SHA1 hash is pretty quick, and unlike the encryption/decryption of SSL, not impacted by message size. The response is returned and processed. The producer does have some extra work to do...validating the nonce. In most cases, the nonce is checked against a database and then, if valid, persisted. This is an extra read and write, compared to simply validating the password.
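The digest itself is cheap to compute on either side. A sketch of the WS-Security UsernameToken profile calculation, Password_Digest = Base64(SHA-1(nonce + created + password)):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class UntDigest {

    // Password_Digest = Base64( SHA-1( nonce + created + password ) )
    public static String passwordDigest(byte[] nonce, String created, String password)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);                                         // raw nonce bytes
        sha1.update(created.getBytes(StandardCharsets.UTF_8));      // timestamp string
        sha1.update(password.getBytes(StandardCharsets.UTF_8));     // shared secret
        return Base64.getEncoder().encodeToString(sha1.digest());
    }
}
```

Both consumer and producer run this same computation; the producer compares its result to the digest in the token, so the per-message crypto cost is a single SHA-1 over a few dozen bytes regardless of message size.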

SAML Server Vouches with message signature over HTTP

The client needs to generate a SAML assertion, either directly or through an STS. Once the assertion and a timestamp are added to the message, a digital signature of at least the timestamp and the assertion is added. The assertion and signature will make the message bigger - easily hundreds of bytes. On the producer side, the signature needs to be validated, as does the SAML assertion. How the SAML assertion gets validated is important. Assuming no STS, the biggest question is: do I need to validate this user in the directory? If you're using OSB's Virtual User capability, then no - otherwise, you'll be going against the directory to validate the user. The response will be signed and returned back to the web-service consumer, where the response signature and timestamp need to be validated.

SAML Bearer unsigned assertion over 2-way SSL

On the web-service consumer side, a SAML assertion is generated. Unlike the sender-vouches confirmation method, the entire HTTP request is encrypted using a session key established via 2-way SSL. The consumer needs to produce a certificate identifying itself to the producer, which the producer needs to validate - this is all done as part of the SSL handshake. Depending on the producer's set-up, validating the client certificate can be simple (check the CA), moderate (check the LDAP server), or complex (OCSP or CRL checking). The security vs. performance trade-off of certificate validation is not unique to this scenario; it comes into play in the SAML sender-vouches scenario as well. Also like SAML sender-vouches, if you're using virtual users, then there is no need to go to the directory. Otherwise, you'll also have to validate the user. Finally, the HTTP response is encrypted and returned to the consumer.
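Much of the cost in this scenario is in the handshake. A sketch of how a consumer might assemble the 2-way SSL context using the standard JSSE APIs (the keystores here are empty in-memory placeholders; in practice they would hold the client's certificate/key and the trusted CAs):

```java
import java.security.KeyStore;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TwoWaySsl {

    public static SSLContext buildContext(KeyStore clientKeys, char[] keyPassword,
                                          KeyStore trustedCas) throws Exception {
        // Key managers present the client certificate during the handshake
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, keyPassword);

        // Trust managers validate the producer's certificate chain
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustedCas);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

Note that this sketch covers only the basic chain check; OCSP or CRL checking would be layered on top via the producer's trust manager configuration, which is where the simple-to-complex validation spectrum above comes in.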


As you can see from the above discussion, there are many factors to consider when selecting a model that meets both security and performance requirements. Probably the first thing to figure out is whether transport-level security (SSL) is even a possibility. Transport-level security puts all of the security at the connection level, so if messages are long-lived or sent to multiple parties, then some message-level security will need to be used. SSL also has the advantage of being able to persist SSL session keys and make repeated trips to the same host faster, vs. message-level security where (unless you're using WS-SecureConversation) you'll have some key set-up overhead on every request. Make sure that you've got your directory infrastructure appropriately scaled - each authentication method (UNT clear text, UNT digest, SAML, and SAML with virtual users) has a different profile in its use of the directory. Finally, as in any SOA, message size and network capabilities also profoundly affect performance.

So, there is no simple answer to how security will affect performance, or to what the generic best practices are in this regard, but hopefully this post illustrates some of the issues and considerations that go into selecting an effective and scalable solution.

Friday, May 14, 2010

Dynamic OWSM Policy and Policy Overrides for WLS, OSB and SOA Suite

First of all, great job by Alex in his inaugural post on the blog. I really think the OES-OVD combination solves a lot of interesting problems.

I've been spending a lot of time lately working on OSB and OWSM. As people probably know, 11gR1 OSB is released, and you can now use OWSM to secure both proxy and business services. You have been able for a while to use OWSM for WLS JAX-WS services, and of course you can use it to secure SOA Suite composites and BPEL processes. I really like, and customers really like, the idea of being able to centrally manage WS-Security policy from one place. I also like the fact that there is now a single web-services stack across all of these products that OOTB interoperates with each other. If you want to know why this is a real improvement, I refer you to the OSB to WLS + SAML post. Also, OWSM has native support for the WS-Security Kerberos Token profile...and I have gotten it to work very nicely with WCF (a post for another time).

Given all this, I've been sharpening my pencil on how to write custom assertions (policies) in OWSM 11g. As people will recall, I did an OES+OWSM custom assertion at OOW 2009, and to be honest, that was really the last time I looked at it. My recollection from that time, and my recent experience, was that it was a little challenging. Writing the custom assertions is very similar to writing the custom SSPI plugins. It's the type of thing that you don't do that often, and when you do, once you get the build script going and some decent samples, it's not too bad. So, my contribution to this effort can be found in the project. In addition to the OES+OWSM assertion from last year, I added a new project called OWSMAC - OWSM Annotations Compiler. What I tried to do was look at what was challenging in working with the custom assertions and try to make it really, really simple.

These are a few of things that OWSMAC does:

  • Automatically generate policy and assertion XML
  • Simplified XPath processing
  • Dynamic Reloading - no need to reboot the server after each little change
  • Consistent and predictable lifecycle
  • Programmatic selection of policy and overrides

I'm putting the cart before the horse here, as I've not really fully documented or "javadoc-ed" the project, but I have been able to solve a pretty interesting use case that a couple of customers have been interested in, so I wanted to share it now. I suggest that people join the project for updates on OWSMAC.

Dynamic Policy Selection

The scenario is that from an intermediary (likely an OSB or SOA Suite composite), the request to the business service/reference requires message-level security, but the specifics of the actual policy depend on some state - information in the message, the location (network) of the destination, or different targets having different security requirements (some partners want SAML and others want username and password). WS-SecurityPolicy once again is not sufficient or particularly helpful. You need to be able to determine the policy dynamically. I've done this through one of the samples in OWSMAC - DynamicClientPolicy.

package owsmac.test;

import java.util.Map;

import javax.xml.soap.SOAPMessage;
import javax.xml.ws.BindingProvider;

import oracle.wsm.common.sdk.IContext;
import oracle.wsm.common.sdk.ISOAPBindingMessageContext;
import oracle.wsm.policyengine.IExecutionContext;

import owsmac.annotations.Assertion;
import owsmac.annotations.AttachTo;
import owsmac.annotations.Category;
import owsmac.annotations.CustomMethod;
import owsmac.annotations.DestroyMethod;
import owsmac.annotations.ExecutionContext;
import owsmac.annotations.Executor;
import owsmac.annotations.FaultMethod;
import owsmac.annotations.InitMethod;
import owsmac.annotations.MessageContextPropertyValue;
import owsmac.annotations.PolicyNameValue;
import owsmac.annotations.PolicyPropertyValue;
import owsmac.annotations.Property;

@Assertion(displayName = "A dynamic client policy", customType = Assertion.CustomType.policy, category = Category.security_authentication, attachTo = AttachTo.binding_client)
@Executor(category = Category.security_authentication)
public class DynamicClientPolicy {

    // The policy named here is invoked after selectPolicy returns
    @PolicyNameValue
    public String selectedPolicy;

    //@MessageContextPropertyValue(name = ClientConstants.WSS_CSF_KEY)
    public String csfKey;

    @MessageContextPropertyValue(name = "") // property name elided in the original post (endpoint address)
    public String address;

    @MessageContextPropertyValue(name = BindingProvider.USERNAME_PROPERTY)
    public String samlUsername;

    @MessageContextPropertyValue(name = "oracle.wsm.subject.precedence")
    public String useSubjectPrecedence;

    @Property(value = "localhost:389")
    public static String LDAP_SERVER;

    @ExecutionContext
    public static IExecutionContext eCtx;

    @InitMethod
    public static void init() {
        System.out.println("In Init: " + eCtx.getAllProperties());
        System.out.println("In Init: The LDAP Server is " + LDAP_SERVER);
    }

    @DestroyMethod
    public static void destroy() {
        System.out.println("Destroyed Dynamic Client Policy");
    }

    @FaultMethod
    public boolean onFault(IContext context) throws Exception {
        ISOAPBindingMessageContext soapContext = (ISOAPBindingMessageContext) context;
        SOAPMessage message = soapContext.getFault();
        return true;
    }

    @CustomMethod(extendsPolicyNameValue = "selectedPolicy")
    public boolean selectPolicy(IContext context) throws Exception {
        getPolicyFromContext(context);
        return true;
    }

    /**
     * This is where the custom logic goes for selecting the policy
     * @param context
     */
    private void getPolicyFromContext(IContext context) {
        System.out.println("In the getContext.....");
        Map<String, Object> properties = context.getAllProperties();
        for (String property : properties.keySet()) {
            Object value = properties.get(property);
            System.out.println(property + "=" + value);
        }

        System.out.println("The address is " + this.address);

        if (this.address != null && address.indexOf("UNT") != -1) {
            this.selectedPolicy = "oracle/wss_username_token_client_policy";
            this.csfKey = "josh.creds";
        } else {
            this.selectedPolicy = "oracle/wss_saml_token_bearer_over_ssl_client_policy";
            this.samlUsername = "foobar";
            this.useSubjectPrecedence = "false";
        }

        System.out.println("The selected policy is " + this.selectedPolicy + " and user=" + this.samlUsername);
    }
}

The whole idea of OWSMAC is to allow people to use POJOs to build the assertions and let everything else happen "magically". I'll draw your attention to the selectPolicy method. This method has the @CustomMethod annotation with an extendsPolicyNameValue attribute. This basically means: call this method, and when you're done, go invoke the policy stored in the field referenced in extendsPolicyNameValue. So, in this method, you can set the name of the policy and then also set additional policy overrides or programmatic overrides (these being the same as the properties for JAX-WS clients).

In the sample, we're just looking at the address (endpointURI) and then either invoking UNT - specifying the csf-key of the user - or calling SAML and specifying the name of the user to include in the SAML assertion. In the SAML case, there is also something interesting going on - we're using the 11gR1 PS SAML Identity Switching. Notice that in order to do this we're basically setting two properties - BindingProvider.USERNAME_PROPERTY and oracle.wsm.subject.precedence. The former is the name of the user (which doesn't have to exist in the user directory) and the latter is a flag that tells OWSM not to use the identity in the subject for the SAML assertion. Now, in order to perform identity switching, you need to grant a permission. The documentation is not particularly clear. The permission you need to grant is resource=<composite name> assert. In the text box, you enter resource=<appname>, not <appname>.
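These two overrides are ordinary JAX-WS request-context properties. Sketched here against a plain Map standing in for the request context (the first key is the string value behind BindingProvider.USERNAME_PROPERTY; the helper class name is made up for illustration):

```java
import java.util.Map;

public class IdentitySwitchOverrides {

    // String value of javax.xml.ws.BindingProvider.USERNAME_PROPERTY
    static final String USERNAME_PROPERTY = "javax.xml.ws.security.auth.username";
    // OWSM override that suppresses use of the subject's identity in the assertion
    static final String SUBJECT_PRECEDENCE = "oracle.wsm.subject.precedence";

    // Apply the identity-switching overrides to a JAX-WS request context
    public static Map<String, Object> applySwitch(Map<String, Object> requestContext,
                                                  String userToAssert) {
        requestContext.put(USERNAME_PROPERTY, userToAssert); // user named in the SAML assertion
        requestContext.put(SUBJECT_PRECEDENCE, "false");     // don't use the authenticated subject
        return requestContext;
    }
}
```

In a real client you would call applySwitch on ((BindingProvider) port).getRequestContext() before invoking the service; in the OWSMAC sample the same two values are set via the annotated fields instead.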

This is the simple composite that illustrates the scenario.

The references are to WLS web-services protected by OWSM service policies.

The policy file that gets generated by OWSMAC is then uploaded into EM to create a custom policy.

And then attach the policy to the references to the SAML and UNT services. You can also attach the same policy in OSB.


In an ideal world, there would be no need for this type of programmatic extension of the core policy model. The standards would be precise and comprehensive, and all of the OOTB policies would never need to be changed. But as our experience with SSPI and the WLS core security model has shown, there are always occasions where customer requirements fall into that 20%, so it's good to know that there are ways to simply extend the core product functionality. I like the simplicity of OWSM, and the binding of configuration and policy is broadly very useful. Invariably there will be scenarios like the one above, where more dynamic behavior is required. My plan going forward is to continue to use the annotations model with the OWSMAC samples to illustrate how to execute these types of scenarios. I'm looking for additional samples to prove out, or ideally some help developing and shaping the project, for everyone's benefit. Who's with me?


Creating Custom Assertions

Monday, May 10, 2010

Web Center Integration with OES via OVD


Web Center is Oracle’s strategic direction for Portal customers. WebLogic Portal will still be supported for the time being, but it is expected to gradually be replaced by Web Center. Currently, WebLogic Portal and Oracle Entitlements Server can be integrated by configuring the WLS Security Module from OES. It is even possible to accomplish the integration having OES be completely dedicated to providing role information without intervening in the authorization process of WLP. However, the same is not true for Web Center and OES. This article describes a prototype implementation of such an integration using OVD as an intermediary between Web Center and OES. Web Center can be configured to delegate its authentication to an LDAP-based Authentication Provider, which in this case can be OVD. OVD has advanced customization capabilities that allow exposing a wide variety of data sources as if they were LDAP directory services. This capability of OVD is the cornerstone of the WC-OES integration demonstrated in this implementation.


For starters, I would like to spend a little time describing the architecture and the various components playing a part in the solution. Oracle Virtual Directory (OVD) is an Oracle product that is part of the Oracle Identity Management suite. OVD provides an elegant solution to the problem of integrating multiple heterogeneous data sources, presenting them as a consolidated view which can be consumed by an LDAP client. This capability allows us to expose OES data in a way that can be consumed through out-of-the-box mechanisms available in Web Center. This is the main principle that we keep in mind for this implementation. The way in which we accomplished this integration is by configuring a Custom Adapter, which is a concept in OVD’s world that represents a non-LDAP source of data to be exposed as an LDAP-like tree hierarchy. The functionality of the Custom Adapter is implemented within a plug-in configured to be attached to the Custom Adapter. The following diagram shows the high-level architecture of the solution:


Fig. 1 – Integration Architecture for WC – OVD – OES products

As described in the previous picture, Web Center connects to OVD to authenticate users and extract Group membership information. OVD connects to OES using the OES Entitlements Plug-in to extract all available roles filtered by a configurable Application Context Name so only relevant roles are included in the results; or to test if a given user is a member of a particular role. OES applies the role policies to an identity previously asserted.

As you can also see, WebCenter’s Policy Store is kept in OID. It could have been kept in OVD if a Local Store Adapter was configured to allow OVD to store data itself. OID also stores the actual users that can login to WebCenter Spaces console.

Now, let’s take some time to discuss the responsibilities of each component within the context of the solution so you can understand better the design of this implementation and be able to effectively extend it, if such requirement would come your way. As we already mentioned the following components are involved:

  • OVD Custom Adapter: This is a configuration element you define in OVD’s Administration interface (Oracle Directory Services Manager) ODSM. This interface can be accessed directly through its own URL or by logging in to Enterprise Manager. By simply adding an adapter of Type: Custom Adapter we have a place holder for the Groups that will be returned by OVD when WebCenter queries for them.
  • OVD Custom Plug-in for OES: This is a Java implementation of OVD’s public interfaces and API’s. The implementation leverages OES BLM and Java APIs to extract the information from OES in response to WebCenter’s queries. The plug-in itself has the following components:
  • Plug-in Wrapper: This component is responsible for initializing the plug-in, and its methods are invoked by OVD at various stages of OVD server initialization and operation. This component is responsible for loading the Spring Application Context containing the configuration for all the components of the solution.
  • Plug-in Implementation: This component is configured in the Spring Application Context configuration XML file. This component is the one holding the actual implementation of the methods defined by OVD’s published interfaces. The Plug-in Wrapper described above delegates the calls to this component.
  • DataRetriever: This is an interface defined by my implementation; it is not part of OVD or OES API’s. The purpose of this interface is to encapsulate the contract between OVD and the Data Source the DataRetriever interacts with. This way, we can decouple OVD’s required data format from the intrinsic workings of the data source providing that data.
  • FilterExecutor: This is another interface I defined as part of my implementation. This interface defines the contract for the components that will directly interact with OES in response to LDAP queries coming from the clients, in this case WebCenter. OES doesn’t understand LDAP patterns and URLs, so these components also act as mediators, parsing the LDAP queries to extract the pieces of information needed to invoke OES APIs. Each FilterExecutor registered in the configuration responds to specific LDAP queries. All filter executors are configured in a chain of filters, all of them having the chance to process the incoming LDAP query. If a filter executor doesn’t understand the query coming in, then it simply won’t generate any results, and the query goes to the next filter executor down the chain. A single FilterExecutor can potentially process various types of queries; it is up to the FilterExecutor developer to write the filter executor to process only one or several LDAP queries.
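The chain just described can be sketched with hypothetical names (the real DataRetriever/FilterExecutor contracts belong to the author's implementation, not to OVD's or OES's published APIs):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterChainSketch {

    // Hypothetical contract: return results for a filter this executor
    // understands, or an empty list so the chain moves on
    interface FilterExecutor {
        List<String> execute(String ldapFilter);
    }

    // Walk the chain; the first executor producing results wins
    static List<String> process(List<FilterExecutor> chain, String ldapFilter) {
        for (FilterExecutor executor : chain) {
            List<String> results = executor.execute(ldapFilter);
            if (!results.isEmpty()) {
                return results;
            }
        }
        return new ArrayList<String>();
    }

    // Example executor: only answers the "all groups" query
    static final FilterExecutor ALL_GROUPS = new FilterExecutor() {
        public List<String> execute(String ldapFilter) {
            if (ldapFilter.equals("(objectClass=*)")) {
                return Arrays.asList("cn=Admins", "cn=Traders");
            }
            return new ArrayList<String>();
        }
    };
}
```

A query no executor understands simply falls off the end of the chain and yields no results, which is exactly the behavior described above.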

Implementation Details

Now it’s time to talk about the implementation aspects of this integration. The Spring framework was selected to simplify the configuration of the components involved in the implementation and to allow easy customization of features of this implementation.


OESDataRetriever

This component encapsulates the functionality associated with OES.

The properties specified in the definition of this bean are passed at construction time. In this particular case OVD provides the means to specify System Properties as Java Options via the opmn.xml file, which contains the class-path and the Java Options required by OES. In the process, we came across a limitation on the size of the class-path value specified in opmn.xml. To overcome this limitation we decided to create a JAR file that will contain a MANIFEST with a class-path parameter with the list of required JAR files for OES libraries.
The following listing shows its configuration:
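The original listing did not survive; a minimal Spring bean sketch, with assumed class, property, and bean names, might look like:

```xml
<!-- Assumed bean, class, and argument names; the actual listing was lost from the post -->
<bean id="oesDataRetriever" class="ovdplugin.oes.OESDataRetriever">
  <!-- Application Context Name used to filter the roles returned from OES -->
  <constructor-arg value="MyWebCenterApp"/>
  <!-- Location of the OES configuration consumed at construction time -->
  <constructor-arg value="config/oes-config.properties"/>
</bean>
```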

In other integrations, a custom class loader has been introduced to isolate the libraries and classes used by OES and its clients, to avoid conflicts with any dependencies of the product OES is being integrated with. However, OVD seems to depend on the system class loader to load all the required classes for modules running within OVD processes. Therefore, introducing a custom class loader was not feasible. The MANIFEST file described above was a nice alternative to the custom class loader and, as already explained, also allowed us to overcome the opmn.xml value size limitation.


This component is the OVD plug-in implementation invoked by the OVD Plug-in wrapper called by OVD itself.

This bean has a very simple configuration. It is passed a list of Beans that implement the FilterExecutor interface. Each one of these process a particular LDAP query issued against OVD.

All filter executors have a reference to OESDataRetriever bean passed in their “retriever” property. The executors parse the LDAP query and determine which methods from the OESDataRetriever should be called.

OESDataRetriever makes calls to Wrapper classes for the OES Java and BLM API’s. These wrapper classes were created from the Java SSM Samples included with OES distribution. Similar objects have been used in other integrations described in other postings of this blog.

The following data flow diagram describes the implementation of this integration:

Fig. 2 – Implementation WC-OVD-OES Integration

Filter Executors

Now, let’s take a look at the filter executors and what they do in this implementation.

The allGroupsFilterExecutor responds to the following LDAP queries: (objectClass=*) and (|(objectclass = groupofuniquenames)(objectclass=groupofurls)). Then it communicates with OES using the BLM API Wrapper to extract all the roles existing in OES filtered by Application Prefix. Also the role AuthenticatedUser is removed from the list of returned roles since it is not a real role policy. The list of Roles is then converted to LDAP Groups and returned to the requesting client.
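The conversion step in allGroupsFilterExecutor can be sketched as a plain transformation (the DN layout here is a hypothetical example, not the actual configuration):

```java
import java.util.ArrayList;
import java.util.List;

public class RolesToGroups {

    // Drop the implicit AuthenticatedUser role and map each remaining
    // OES role to an LDAP group DN under the adapter's base
    public static List<String> toGroupDns(List<String> oesRoles, String baseDn) {
        List<String> dns = new ArrayList<String>();
        for (String role : oesRoles) {
            if ("AuthenticatedUser".equals(role)) {
                continue; // not a real role policy, so never exposed as a group
            }
            dns.add("cn=" + role + "," + baseDn);
        }
        return dns;
    }
}
```

The real implementation builds full LDAP entries (objectclass, uniquemember, and so on) rather than bare DNs, but the filtering and naming logic follows this shape.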

The isUserMemberOfGroupFilterExecutor responds to the following queries: (&(uniquemember=)(cn=)). This is to test if a user is a member of a group. The query to be processed here comes from the configuration of the OVD Authentication Provider and can be modified from the WebLogic Admin Console of the WebCenter Domain. This is defined in the static membership filter parameter of the configuration.

The webcenterGroupsSearchFilterExecutor responds to the following queries: (&(cn=)(objectclass=groupofuniquenames)). This filter is also defined in the configuration of WebLogic Security Realm in the OVD Authentication provider.


In summary, the integration is based on a plug-in written for OVD which transforms the Role Policies configured in OES into Groups that appear static in nature but are in reality totally dynamic.

The implementation makes use of Spring to configure its various components.

The main components of this implementation are categorized as follows:

  • Data Retriever: Responsible for interacting with the external system that supplies the data that OVD will expose as LDAP groups.
  • OES API Wrapper: This component is a session façade to OES through the JAVA API exposed by OES.
  • OES BLM API Wrapper: This component is a session façade to OES’s BLM API.
  • Filter Executor: Components in this role parse the incoming LDAP queries and potentially process the query to get the results and return them to OVD which in turn will return them to the calling client.

To self-contain the class loading and make sure the right libraries (JAR files) were loaded, the Class-Path attribute of a MANIFEST file was utilized to isolate OVD’s class loading from the plug-in’s class loader.