Monday, October 19, 2009

JSF and OES part 3

The third in an ongoing series of posts on securing a JSF based app (in this case using ADF components).

In part 1 I plugged OES' Security Module into WebLogic Server. In part 2 I showed how to deploy the app and write OES policies to secure URLs and other objects that WebLogic automatically protects. In this post I'm going to rely on the small chunk of code I provided in a post that discussed calling OES from inside a J2EE app. If you haven't seen that post you should probably at least skim through it and grab the code.

If you have an existing JSF based application you probably already have some security logic embedded in it today. In my experience the most common way people do that in JSF seems to be abstracting all of that code into a bean and then setting the rendered or enabled attribute to an EL expression that invokes the authorization bean.

What I mean is that the bean has a method to get a boolean value like MaySeePatients:

public class AuthorizationBean {
    public boolean getMaySeePatients() {
        // everyone can see patients so I return true
        return true;
    }
}
You tell JSF to create and manage an instance of the bean for you by defining it in faces-config.xml.
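A minimal managed-bean entry might look like the following. The bean name, package, and scope are assumptions based on the example above, not something from the original post:

```xml
<managed-bean>
  <managed-bean-name>authorizationBean</managed-bean-name>
  <managed-bean-class>com.example.AuthorizationBean</managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>
```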


And then in your JSF you would call the bean with something like this:
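For example, the bean's getter can drive the rendered attribute of a component. The ADF Faces component and bean name here are illustrative, not from the original post:

```xml
<af:panelGroupLayout rendered="#{authorizationBean.maySeePatients}">
  <!-- patient records UI rendered only when the bean returns true -->
</af:panelGroupLayout>
```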

Swapping out the existing authorization logic to use OES instead is simple - add that ALESControl code I shared in my previous post and edit your bean to call OES.

Your bean then looks like this:

public class AuthorizationBean {
    public boolean getMaySeePatients() {
        AZRequestHandler az = new AZRequestHandler("view", "Patient");
        return az.isAuthorized();
    }
}
JSF pages offer a few other simple ways to call this logic, for example the JSTL c:choose tag, which implements if/then/else logic.
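A sketch of what that might look like, again with illustrative bean and element names:

```xml
<c:choose>
  <c:when test="#{authorizationBean.maySeePatients}">
    <!-- render the patient list -->
  </c:when>
  <c:otherwise>
    <!-- render an "access denied" message -->
  </c:otherwise>
</c:choose>
```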

Simple, huh?

Unfortunately while all of this makes it easy to protect your JSF application with OES, it relies on the developer (a) knowing what to secure and (b) properly calling the security system. Part of the point of OES is that once you plug it into your application you get a bunch of stuff secured automatically and if you need to secure anything else later you don't have to change the source code to the application.

So how do we do that?

Stay tuned!

Friday, October 16, 2009

JAVA Key Stores for SOA Security

In a how-to post on SAML OWSM client policies last month, Josh mentioned how he used separate “Alice and Bob” keystores for the client and the service. I think this is a very important notion and would like to expound upon it in this post.

I see too many people building out SOA development and test environments that utilize a single key store for all the components in their environment. Even worse, I see lots of people utilizing the same key and certificate pair for every component in their environment. If your environment is a dev/test environment built on Oracle products, this is likely the ever-popular “orakey” pair.

I believe that it is important to utilize separate keystores and key-certificate pairs for the client, service, and any intermediary components where keys and certificates are required in your development and test environments.

Taking the short cut of having components share keystores and even key-certificate pairs will only burn you down the road as you move forward with the development of your services and applications. Sharing keys and keystores can mask potential issues that will appear when you try to set up real keystores as you approach production. Further, sharing keys and keystores can actually make it more difficult to diagnose and solve certain issues that can occur during the build out and configuration of an environment.

I will now walk you through the simple steps of creating properly configured client “Alice” and service “Bob” keystores using the Oracle CertGen and ImportPrivateKey utilities. Following these steps you can have proper test keystores ready to go in a matter of minutes.

1) Create client (Alice) key-certificate pair signed by the demo CA cert "CertGenCA"

>> java utils.CertGen -certfile AliceCert -keyfile AliceKey -keyfilepass password

2) Create service (Bob) key-certificate pair signed by the demo CA cert "CertGenCA"

>> java utils.CertGen -certfile BobCert -keyfile BobKey -keyfilepass password

3) Create client (Alice) keystore with client key-certificate pair

>> java utils.ImportPrivateKey -certfile AliceCert.der -keyfile AliceKey.der -keyfilepass password -keystore Alice.jks -storepass password -alias alice -keypass password

4) Create service (Bob) keystore with client key-certificate pair

>> java utils.ImportPrivateKey -certfile BobCert.der -keyfile BobKey.der -keyfilepass password -keystore bob.jks -storepass password -alias bob -keypass password

5) Now add the root CA to both stores

>> keytool -importcert -file CertGenCA.der -keystore Alice.jks

>> keytool -importcert -file CertGenCA.der -keystore bob.jks

6) Add bob's public cert to Alice's store. This is needed to configure the recipient alias on the client

>> keytool -importcert -file BobCert.der -alias bob -keystore Alice.jks

7) Use and prosper!

Oracle Entitlement Server patches

I've gotten quite a few questions about where to find the OES patches on Metalink and thought it better to publish publicly.

To get OES CP2, log in to Metalink and then select, in order:
  • Patches and Updates
  • Quick Links to: Latest Patchsets, Mini Packs and Maintenance Packs
  • Oracle Entitlements Server
  • MS Windows 32 bit
  • 10g R3
Obviously if you want Linux or Solaris patches swap that for Windows.

Also don't forget when installing OES that you have to have a static IP address for the admin server. You can probably do an install on a DHCP enabled host but the next time you get a new IP address everything will go sideways.

Update: As of April 2010 CP4 is now available and is recommended for all users using OES.

Thursday, October 15, 2009

What is SOA Security?

This question has come up in a few contexts lately. One interesting example was the recent creation of the soa-security sub-category on . There is no multiple inheritance on the site, so it has to go into either security or soa. This raises the question "Is SOA security anything more than just regular security?" Regular security in this context means web-services security - authentication, authorization, audit etc. for web-services. Admittedly, on this blog, we spend a lot of time discussing the mechanics of applying these types of controls to web-services in various products and architectures, but is that all that SOA security is?

Part of the success of this blog has been that it's pretty much "manifesto-free", so I'll try to stay focused - the often overlooked area of SOA security is design time governance. Most services are not exposed on the public network. The security for the services at runtime is provided by simply being on the intranet, behind the firewall. We've discussed this before around the use of the SAML Bearer confirmation method. The bigger issue is how to ensure that people are selecting the right services to use from the UDDI registry, and only those which they (or the developer of the application) are authorized to use. I think the notion of design time SOA security (often called SOA governance) is nicely discussed here, by Bob Rhubart.

There are some things about services and how they are composed and reused in SOA that need to be managed. How much design time governance (i.e. restrictions) is appropriate obviously varies from environment to environment. I worked with one customer - whom I saw again at OOW a few days ago - who pointed me to the trustcom project. This is a project being looked at by European government agencies, exploring "frictionless commerce" with SOA. It is pretty dense stuff, but basically consider the concept of creating a virtual organization to go and perform a business (BPEL) process. A BPEL process has roles, so essentially what you want to do is select a service provider for each role and then very easily swap providers in and out if, for example, a provider is not meeting their SLAs. So when composing this process, you want to make sure that you're always using the "right" provider. To do this, you need to look them up in the repository, which contains many, many service providers. So, the UDDI registry or repository needs to restrict access to only the members of the Virtual Organization and their services. There are obviously runtime implications to this model as well, but it does illustrate some of the interesting design time decisions around SOA Security (Governance).

Presenting at OpenWorld is an experience

Actually even attending OpenWorld is an experience. I've been to plenty of tech conferences before but OpenWorld is unlike anything I've seen.

Everything about the conference is big - the number of attendees, exhibitors, sessions, even the number of venues since we took over Moscone North, South and West plus the Marriott across the street and the Hilton. The sheer number and breadth of the content was astounding and I'm looking forward to taking advantage of the replays.

Beyond the actual size there's all sorts of attention to detail... Things like blocking off a street for tents for lunch and convenience between Moscone North and South, providing free food & drinks in the exhibit halls, running busses from all of the area hotels to the Moscone. And then, just to say thank you to our customers there was a concert on Wed night with Aerosmith, Roger Daltrey, the Wailers and Three Dog Night.

I managed to get to the keynotes from Scott McNealy, Michael Dell, and Thomas Kurian but was in a customer meeting during Larry's and will have to catch it on replay. From what I've seen on Twitter, blogs and press releases he shared the stage with The Governator, talked about the Exadata V2 box, introduced Fusion Apps (the first enterprise apps built on a modern middleware platform) and a bunch more.

And then there's the actual sessions...

In what I'm sure was a complete mistake someone approved me to do a session on securing WebLogic applications. Naturally I should never be trusted to do something like that on my own so April, the OES product manager, and I did the presentation together.

The session was standing room only and, atypically for technical presentations, the entire thing went off without a hitch. April did the slide show and then I ran through a demo. Since we had a strict time limit I decided to use a recording of the demo rather than doing it live. I had the live system ready if people wanted us to go off the script or dig into unexpected areas, and I wound up using it to show people the code behind the scenes. The recording turned out to be a great idea since it freed me from having to remember which username to log in with and let me focus on what was actually happening and keep a closer eye on people's reactions.

The core take away from our presentation was that if you have J2EE apps deployed on WebLogic Server you should take a very close look at two other Oracle products - OAM and OES. OAM gives you single sign-on across all of your apps, both home grown and shrink wrapped. Web SSO is a well known technology and is pretty widely deployed, and I don't recall anybody in the audience asking any questions about it.

OES was a whole 'nother story. There were questions about nearly every aspect of OES including details of the components, the policy model, how it integrates into WebLogic, what the app server protects automatically, how it's used in an app, and of course licensing questions. April fielded a few questions about integrations with other products I'd not even heard of before and then we ran out of time. After we were kicked out of the room I spent another 20-30 mins in the hallway showing people various aspects of the GUI and answering even more questions about the product. All in all I couldn't have asked for a better experience with my first session.

Unfortunately after the presentation was over and I was headed over to get a bite to eat with Josh I realized that I'd completely forgotten to put a link to the blog! If you're reading this after attending my session thanks for making the effort to find me here!

Wednesday, October 14, 2009

OOW 2009 Presentation Questions

Thanks to everyone who attended our session yesterday. I'll be posting a recording of the demo shortly, but wanted to share a few questions from the audience.

"Can you apply OWSM policies at the operation level or only at the endpoint level?"

So, in contrast to the existing WLS @annotation model, you can only apply policy at the endpoint level. Authorization can be done from within that policy, as we showed with the OES-OWSM custom assertion.

"How is the SAML Assertion generated and consumed?"

The SAML is generated from within the OOTB OWSM SAML policy assertion. It uses configuration information defined in jps-config.xml - like the name of the issuer. The SAML is validated by the login module - but this is a different login module than the SAMLAuthenticator or the native SAML capabilities of WLS.

These are not meant as dings against OWSM. As a long time WLS Security guy, I've become very pleased with the simplicity and ease of configuration that is provided by OWSM. I also really like the extensibility of the custom assertions - they allow you to plug in deep inside the web-services stack, and that is going to be handy at a TON of customers.

"What is the difference in positioning between OWSM and OSB?"

I think that the decision of when to use OWSM and when to use OSB goes well beyond the security capabilities. I think there are some use cases where using the SAML (partner management) capabilities of the WLS stack that is available with OSB would make sense, but this has to be weighed against the fact that OSB uses the WLS 9.2 Web Services stack.

I think that OWSM should not be confused with a full service bus. OWSM is a policy management layer for Web Services....OSB is way more than that.

Update on

After some discussions with the OTN people, we decided that soa-security is not a project, but rather a category (ok, a sub-category under soa). I created a new project under the soa-security category.

OES SSPI Providers

The thinking is to use this project for WLS SSPI plug-ins that we use that are OES specific. The two I have in mind are:

  • OESBPELAuditor - the ability to audit changes to the OES policies and store them in XML, such that a BPEL process could consume the changes
  • OESAdjudicator - the previously mentioned adjudicator that makes integrating the OES WLS-SM more straightforward by applying OES decisions only to OES resources and XACML decisions only to WLS resources.

Seems likely that OWSM custom steps and assertions could also make a home in the soa-security sub-category. As always, I'm open to suggestions, and I'm sure that this will evolve over time. Most of security is a configuration and administration problem, but there are occasions when some coding is required, and now there is a vehicle to get this information out.

Tuesday, October 13, 2009

Oracle Service Bus: Pass-Through vs. Active Security

I was talking to a customer about inserting the Oracle Service Bus (OSB) into their SOA infrastructure. The customer was prepared for a lengthy effort to get OSB to forward the SAML assertion sent from the client on to the service.

However, the fact is, the security functionality in OSB supports this use case exactly and with minimal effort.

The processing of message level security headers can be handled in two distinct ways in OSB: Pass-Through and Active Intermediary modes.

As an active intermediary, OSB processes the security headers in the SOAP request and enforces security policies on the messages. Additionally, in this mode, OSB can add new security headers (including new authentication tokens) to the request that is forwarded on to the service. I’ll leave further discussion for other posts.

In pass-through mode, OSB leaves the SOAP message untouched and simply routes the request on to its destination service. This means that all security headers in the original request are preserved in the request sent on to the destination service. So if a SAML assertion is sent in the original request through OSB, it will be part of the request being sent on to the service.

Pass-through mode is great when OSB is being inserted into a working infrastructure (maybe for the purpose of SLA management) where the web services already have security in place and no identity transformation is required as requests move from the clients through OSB and on to the services themselves.

Monday, October 12, 2009

OOW Initial Impressions

Since I had to get up at 4 am to make my 6 am flight out to OOW, I am a little tired, but I had a few minutes back at the hotel before going out to dinner with the IDM PM team, so I just wanted to muster a quick post.

I forget how BIG Oracle is sometimes. OOW is a massive production, like an invasion of the Moscone center. It took me many minutes inside of the Oracle Demo grounds to find FMW, then IDM, then the people I was looking for to prep for my session tomorrow.

Once inside, some good discussions about a few use cases. I'll put this one out to the blog - in the OES-OWSM integration, "Where should the policies for authorization, and specifically the policy defining the XPath of the attributes OES needs for authorization, be managed?" For the presentation, I have them in OES policy, but I think there is a reasonable argument for them being defined inside of the OWSM policy. My thinking is that the encryption and signing policy (i.e. what parts of the body should be signed) is managed there... why not something like "Authorization XPath"? It's obviously not as flexible... OES can determine on a per-use or per-role basis which attributes are required, but sometimes simpler is better, and maybe just having the XPath defined in OWSM makes more sense. I'm on the fence... push me over.

Lunch with the boss, whom I rarely get to see face-to-face even though I've worked for him for many years. Sushi across the street from Moscone is always good.

Also, at Hasan Rizvi's keynote, he made mention of a "standards based security platform for all of FMW" which is code for OPSS....solid mention.

Vikas and I finally huddled up and reviewed the demo/presentation for tomorrow. I'm excited, I think it all has come together quite nicely. Hopefully, I'll see some of you there tomorrow afternoon.

Saturday, October 10, 2009

They will let anyone have a SubVersion repository

I've created a new project on called soa-security. I see it as an extension of the information provided on this blog. Even though I recently updated the site with some "fancy" code style sheets, there is some stuff where you just need the code to run it and fully understand it.

The first project I created was for the OESBPELAuditProvider. What is that, you say? Well, this is an AuditProvider that can be configured to run inside of an OES Admin Server. It looks for specific messages that the Admin Server creates when policies are changed, and captures those changes in an XML file on the admin server. It creates them in the "pending" folder. When the changes are finally picked up by an SM (also audited), the XML files are copied from the "pending" folder to the "committed" folder. The idea would be for a BPEL process, using a file adapter, to then go do something with these messages. The most obvious use of this would be to then go and calculate a whole bunch of authorizations.

The launch page for the project is

I'm relatively new to the samplecode site, and I'm not sure of the visibility of this project right now. It may need some "approvals". Also, I think you may need to request access to the SubVersion repository... I'll check and see what I can do to make it publicly readable.

This will be the first of many projects to be added there... but if other people have their own work and need a SubVersion repository to place it in... consider ours.

OES OWSM 11g Custom Assertion Finally Done

I'm leaving shortly for OOW, but I wanted to give a brief overview of the OES OWSM 11g custom assertion. This is my second OOW project, and once again this work - which I'll demonstrate at the session on Tuesday (shameless plug) - has provided an opportunity to put something together that should provide some value to the field and customers. It's hard sometimes to get the chance to build something a little bigger than a bread box, but that's the nature of this role. You move from engagement to engagement - one after another - not too much time for anything "broad".

So, this is an update of the OES-OWSM Custom step that was part of my OOW 2008 presentation. In doing the "migration" to 11g, I think I learned a number of things that I'll share over the coming weeks.

OES Adjudication Provider

I was frustrated by the complexity of securing an 11g SOA domain with OES. It seemed like the biggest issue was writing OES policies for the WLS resources. In both scenarios, I just wanted to call the OES API, and securing the WLS resources was just a consequence of the fact that ASIAuthorizer plugged into the SSPI framework, and that the Adjudicator couldn't tell the difference between OES and WLS resources. Well, for the OOW demo, I created an OES Adjudicator that only enforces the decision from the ASI or XACML authorizer depending on the resource. This greatly simplifies the deployment, because there are no OES policies for WLS resources. It took me a while to build it, but I think ultimately this is a reasonable solution for POC environments. It might be better in a production environment to author some basic policies for WLS resources in the ASI authorizer. My concern is that the overhead of going through two authorizers might not be worth the simplicity.

Authorization based on SAML Attributes

I extended the existing custom step to be able to resolve XPath queries from either the body or the header of the SOAP:Envelope. This opens up the possibility of doing authorization based not just on the content of the SOAP message, but also on the headers. SAML Attributes are part of the SAML Assertion, and that's available in the WS-Security header. This gives a concrete implementation of the attribute based authorization and federated authorization use cases discussed in this post. The implementation uses the SAML capabilities of OWSM 11g. The key capability here is the ability of the OWSM client side policy to generate a SAML Assertion based on the attributes of the user in the WLS LDAP. OWSM made this whole use case really very simple.
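The mechanics of resolving an XPath against either part of a message can be sketched with the JDK's built-in XPath API. To be clear, this is not the OWSM custom step itself - the envelope, the element names, and the resolve() helper are all invented for illustration, and a real SOAP:Envelope would need namespace-aware parsing:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathDemo {
    // A namespace-free stand-in for a SOAP envelope, just to show the mechanics
    static final String SAMPLE =
        "<Envelope><Header><role>doctor</role></Header>"
      + "<Body><patientId>12345</patientId></Body></Envelope>";

    // Evaluate an XPath expression against the message and return the text result
    static String resolve(String xml, String expr) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        Document doc = dbf.newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
    }

    public static void main(String[] args) throws Exception {
        // an attribute pulled from the header (where a SAML Attribute would live)
        System.out.println(resolve(SAMPLE, "/Envelope/Header/role"));    // doctor
        // an attribute pulled from the body
        System.out.println(resolve(SAMPLE, "/Envelope/Body/patientId")); // 12345
    }
}
```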

Writing an 11g OWSM Custom Assertion

I definitely picked up some best practices from engineering, especially on how to get an OWSM custom assertion into a policy that can be deployed to protect a WLS Web Service. In 11g environments, it makes sense to use OWSM to protect both composite web services as well as WLS web services. The reason is that you get centralized policy management and avoid a lot of interoperability headaches. The OWSM 11g and WLS 11g webservices stacks do work well together, but having the same stack for both producer and consumer greatly simplifies the process.

Like I said, there are more details to follow. Many of them I'll discuss/demonstrate at my OOW Session (2nd shameless plug), but all of them I will share on this blog in good time. For those who were awaiting the OES-OWSM 11g custom step and are not California Angels fans, feel free to contact me and I'll see what I can do about getting the step available ASAP. For Angels fans and others who can wait, I'll get this information out as quickly as I can.

If anyone wants to meet up at OOW, I'm happy to accommodate, just ping me and let me know. Also, I'll try to post some updates from the conference.

Safe Travels

Wednesday, October 7, 2009

Calling OES from inside a J2EE web app

For my OpenWorld demo I needed to make calls to OES from inside my J2EE web application. There are a bunch of ways to do that - calling the Java API, making SOAP or RMI calls to a remote Security Module, using the tag library, and a few other lesser known ways. All of those ways are just fine, but my all time favorite way to call OES when I'm running inside WebLogic is to let someone else do all the hard work... so I use the OES Control.

The public javadoc describes the ALESControl at a high level. From that you gather that the control is a plug-in to Workshop for WebLogic that makes calling OES for WebLogic Portal or WebLogic Interaction easier. But why am I talking about it here when I am writing an app that has nothing to do with Portal or Interaction?

I'm glad you asked.

As long as your code is running inside WebLogic, the ALESControl provides the simplest interface to OES that you can imagine. Here's an example of calling OES using the ALESControl.

ALESControl ctrl = new ALESControlImpl();
if ( ctrl.isAccessAllowed(resource, action, m) )
    System.out.println( "access is allowed" );
else
    System.out.println( "access is denied" );

The params resource and action are each a simple String. The third param, m, is a Map.

Notice what's not there? For one thing the user's identity - that comes from WebLogic's security context automatically. You also don't have to do any initialization, configuration or indeed anything that could be called hard or messy.

I tend to wrap even this simple code in my own interface so that if I ever repurpose some of my code and need to use some other interface to OES my changes are localized in one place.

Anyway here's my wrapper, or at least the part of it that you care about.

public class AZRequestHandler implements AZRequestInterface {
    private String action = "";
    private String resource = "";
    private HashMap m = new HashMap();

    public void setAction( String action ) {
        this.action = action;
    }

    public void setResource( String resource ) {
        this.resource = "Application/" + resource;
    }

    public void addAttribute(String name, String value) {
        System.out.println( "(String) value '" + value + "'" );
        m.put(name, value);
    }

    public void addAttribute(String name, int value) {
        System.out.println( "(integer) value " + value );
        m.put(name, new Integer(value));
    }

    public String[] getRoles() {
        ALESControl ctrl = new ALESControlImpl();

        String roles[] = null;

        try {
            Collection x = ctrl.getRoles(resource, action, m);

            roles = new String[x.size()];
            int i = 0;
            Iterator it = x.iterator();
            while( it.hasNext() )
                roles[i++] = (String) it.next();
        } catch (ALESControlException e) {
            // TODO Auto-generated catch block
            System.out.println( "Exception caught" );
        }

        return roles;
    }

    public boolean isAuthorized() {
        ALESControl ctrl = new ALESControlImpl();

        // Fall through = return false (fail safely)
        boolean retval = false;

        try {
            if ( ctrl.isAccessAllowed(resource, action, m) )
                retval = true;
        } catch (ALESControlException e) {
            // TODO Auto-generated catch block
            System.out.println( "Exception caught" );
        }

        return retval;
    }
}

Yes, there are some System.out.println() calls in there and there's no actual handling of Exceptions. I leave that to you, but this is good enough for my little demo.

If you happen to be using JDeveloper to make your web app, then to get the ALESControl interface all you need to do is add Oracle/Middleware/ales32-ssm/wls-ssm/lib.eclipsePlugins/ALESControl.jar to your project's Classpath entry. Then deploy the app as normal.

Sometimes simple is best.

Fat Client and SOA - A case for SAML Sender Vouches and STS

The basic scenario is that users need to "log into" their fat client applications, and then go and access some services (let's assume SOAP based) over the internet.

There are a number of questions that drive the solution in a case like this:

  • What directory/data source will users be authenticating to? A local source or a remote source?
  • Are those same directories/data sources readily available to the consuming services?

In the cases where the directory and the services are in the same security domain, and the directory is "readily available", there is no need for something elaborate. I think using the native authentication of the directory (say LDAP) and then passing the user's identity to the services as something simple (an HTTP header or a WS-Security UsernameToken with no password) would probably work. Applications can just take the username (or dn) from the request and call back to the directory to get additional information. One last thing: you need to have some mitigation strategy for preventing people from spoofing DNs (adding a DN that isn't theirs to the request). The simplest way is to do the requests over 2-way SSL. Package the certificate with the application and there you go. BTW, the CSF function of OPSS is a nice approach for this use case - it relies on Java Security to ensure that only authorized applications have access to the credential (the password for decrypting the private key).

The harder use case is more of a federated model - for example, the user needs to authenticate locally, but the services are in another security domain. In this model, if there is additional information that the services need about the user, it needs to be passed in some form. I think that SAML Sender-Vouches works nicely here. So, the application authenticates locally and then gets a SAML Assertion, signed by the issuer. The SAML Assertion could/should contain additional information needed by the service - groups/roles/attributes etc. The SAML Assertion is added to the message and the message is signed.

This is actually a good use case for an STS. Basically, the STS takes a username and password in and returns a SAML Assertion for the service. Think of it as a standards based authentication service, where the standard is WS-Trust. The stand-alone application can just be configured to point to the local STS and the application is done - no need to specify support for LDAP, RDBMS... that's left to the local deployment.
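On the wire the WS-Trust exchange is just a SOAP call. A bare-bones Issue request for a SAML 1.1 token looks something like the following sketch (the exact token type and any extra elements depend on your STS configuration; the username/password ride in the WS-Security header of the request, and the response carries the issued SAML Assertion):

```xml
<wst:RequestSecurityToken
    xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <wst:TokenType>
    http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1
  </wst:TokenType>
  <wst:RequestType>
    http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue
  </wst:RequestType>
</wst:RequestSecurityToken>
```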

The reality is that you could actually solve the first scenario with SAML/STS as well - it may just be overkill - but starting with this architecture does provide much more flexible business models. For example, some customers of the service want to authenticate locally, while others want to authenticate centrally. Not a problem. It's simply a matter of configuration. In the fully federated case, the centralized service trusts the local authentication and can avoid the headache of password management. That issue can be pushed out to the local directories - at least that's the vision.

CAPTCHA vs. Strong Authentication (with OAAM)

A colleague was asked by a customer for a comparison between using a CAPTCHA solution and Oracle Adaptive Access Manager (OAAM). As people try to understand the role of CAPTCHA and different “advanced” authentication solutions in general, this type of question is actually pretty common.

The most common CAPTCHA solutions involve a user picking a series of alphanumeric characters (often distorted or partially obfuscated) out of a generated image and entering the characters along with the rest of the input.

CAPTCHA injects this specific type of challenge-response flow into an authentication (or other web input) to ensure that the input is really coming from a human and not a computer. It is often used with authentication, self-registration, and other application specific interactions like concert ticket buying systems to prevent various denial of service attacks and other mass input abuses of the system.

While CAPTCHA (arguably) does a good job at making sure that a user really is human, that is all that it does. It does nothing to make the actual user authentication stronger. It does nothing to prevent phishing, nothing to detect or prevent fraud, nothing to mitigate stolen passwords. The credentials being supplied in a username and password form with CAPTCHA are still just a username and password.

On the other hand, strong authentication is about adding additional “stronger” credentials into the authentication to go along with a username and password. Usually this means incorporating something a user has like an ATM card, hardware authentication token, or software token or alternatively something a user knows like a series of personal questions that other people aren’t likely to know.

Along with strong authentication often come secure input technologies like personalized pictures and phrases, keypads, and sliders that are utilized to prevent phishing and stolen passwords in general.

OAAM is an exceptionally powerful yet easy to use and deploy strong authentication and fraud prevention solution. You can read more about its capabilities here and by reading the white paper found here. On a personal note, I think OAAM is a very strong product and a leader in the space.

While there is overlap between OAAM (or strong auth in general) and CAPTCHA technologies in that a strong auth solution can help ensure that an application is interacting with a human, there is still a conceivable need for both.

Strong authentication is for, well… authentication, and requires that a user exist and that additional authentication factors be provisioned prior to the authentication. This makes it inappropriate for registration and other interactions where the user may be anonymous. CAPTCHA, on the other hand, can be used without knowing specifically who a user is.

It may also be appropriate to use CAPTCHA in an interaction that occurs at some point after (strong) authentication to ensure that a human is still in control of the client system.

Protecting OAM (IdentityXML) with OSB

A great use case for this blog - "How to add OSB in front of OAM". This question came from a customer and touches on a number of interesting issues.

This link has all of the information about the WSDLs for IdentityXML.

This link describes how to use a WSDL in OSB.

To clarify, you can load the WSDL as a file into OSB and use those local WSDLs to work with the services of IdentityXML. The fact that OAM does not expose the WSDL in the ?WSDL form is not an issue. OSB is happy to work with a file.

So, basically:

1 - Create a proxy service from the WSDL (this will expose ?WSDL to clients)
2 - Create a business service for the IdentityXML endpoint using the same WSDL
3 - Create a pipeline that routes the requests (very simple mapping... not much work)

The most interesting part here is how to propagate the credentials through OSB.

IdentityXML uses a proprietary format - it doesn't use WS-Security. IdentityXML looks for all of the information in the body, but I would argue that a "better" implementation, and a better use of OSB, would be to add more standard WS-Security tokens to the exposed WSDL.

For example, you could replace the username/password elements with the WS-Security UsernameToken Profile, and the ObSSOToken could be mapped to a custom WS-Security BinaryToken (a common practice).

To do this, you wouldn't process the WS-Security envelope in OSB, but rather do some transformations from the exposed WSDL's WS-Security messages to their appropriate location inside of the actual IdentityXML endpoint running in OAM.

Tuesday, October 6, 2009

ADF^H^H^HJSF and OES part 2...

In my first post I started discussing using OES to secure an ADF application. In that post I said that I was using ADF as if it was just a bunch of JSF components. Someone kindly pointed out to me that if someone went searching for ADF and OES that my post wouldn't be all that helpful since they'd likely be interested in ADF's built in security model. So I am renaming this series of posts "JSF and OES". When I finish these posts I'll follow up with some ADF specific posts.

When I finished up last time I had a WebLogic Server domain configured to use the OES WebLogic Security Module. If you want to secure an application deployed to WebLogic Server you'll obviously need an application.

Step 1: Create a web app

Inside JDeveloper you can create a new web app a bunch of ways, but the fastest and best way, since I want to use some of that sweet, sweet ADF goodness, is to just hit File-New and select "Fusion Web Application (ADF)" from the list of Applications. If you take the defaults for everything else you get two projects: Model and ViewController. Congrats, you've created a web app that has all of the ADF bits (including JSF) pre-wired for you.

Step 2: Deploying your app to WebLogic

This step threw me for a loop the first time...

You want to deploy the app to that WebLogic server we created in the last post. If you did everything right it should just be a matter of picking the right options off the menu. The thing that I goofed up was that I tried to deploy just the ViewController thinking that the Model would come along automatically based on the dependency. What you actually need to do is click the drop down next to the Application Name (not Model or ViewController) and select Deploy from that menu. Work your way through the menus to deploy the app and JDeveloper will create the EAR, deploy it to WebLogic and start it up.

Try accessing the application you created. If you don't know the app's context root you'll need to use the WebLogic console to find it inside the application's config page. You can adjust it there or you can create a weblogic.xml file and specify the context-root there.
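
If you go the weblogic.xml route, a minimal descriptor packaged in WEB-INF does the job. This is a sketch; the `testapp` context root is just an example value:

```xml
<!-- WEB-INF/weblogic.xml -->
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
  <!-- pin the context root rather than taking the generated default -->
  <context-root>testapp</context-root>
</weblogic-web-app>
```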

Step 3: Secure the app

At this point if you access the application you'll be allowed right in, since you didn't tell WebLogic to require user logon. That's easily fixed by following the standard J2EE steps - edit web.xml either by hand or, better yet, with JDeveloper's friendly web.xml editor. Basic authentication is evil and I strongly encourage the use of Forms-Based Authentication, but either will work. Just to make sure everything is working, try accessing the page and logging in as "weblogic".
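
For reference, the standard J2EE bits in web.xml look roughly like this. The role name and login page paths are placeholders, not values from this app:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>All Pages</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- placeholder role; map it to users/groups in weblogic.xml -->
    <role-name>authenticated-users</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.jsp</form-login-page>
    <form-error-page>/loginError.jsp</form-error-page>
  </form-login-config>
</login-config>

<security-role>
  <role-name>authenticated-users</role-name>
</security-role>
```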

If you're like most developers I know you've already tried to login with a username other than weblogic and you got the standard ugly Error 403--Forbidden page because, shockingly, you are not authorized.

Step 4: Create OES resources and policies

In order to create policies in OES you first have to tell OES about the resources it is protecting. Or at the very least you have to tell OES about some resource above the resources it's protecting and tell OES that anything with that prefix is OK; or in OES' words that parent resource is Virtual.

If you have a whole bunch of resources then Discovery Mode is your friend.
If you're deploying your first "hello world" application then follow along with me.

One of the best new features of OES since I began using it a couple of years ago is a little known logging option called DebugStore. One of my friends in engineering has earned a special place in my heart for the addition and I probably owe him a few lunches for all the time he's saved me with it. The DebugStore logging feature causes OES to spit out a block of data at the end of an Authorization that tells you everything you need to know to figure out why a user was authorized (or why they weren't).

To enable DebugStore, open the SSM's log4j configuration file and uncomment the line that contains DebugStore:

### Uncomment the following "log4j" line to enable logging of Policies
### This is helpful if you are having problems with OES policies or if you
### want to understand how OES policies are processed

If you uncomment that line, restart WebLogic, and try accessing the URL again with a user other than weblogic you'll get the 403 error again and something like this will show up in your system_console.log

========== BEGIN Policy Evaluation (2009-10-05 17:36:30,895 EDT) ==========
RequestResource is: //app/policy/DrApp/testapp_application1/url/testapp-ViewController-context-root
Name: //user/DrAppDir/user9001/
Groups: //sgrp/DrAppDir/allusers/
Resource Present: true
Roles Granted: //role/Everyone
Role Mapping Policies:
1. Result: true; Policy Type: grant
Role: //role/Everyone
Resource: //app/policy/DrApp
Subject: //sgrp/DrAppDir/allusers/
Constraints: NONE
Delegator: null

ATZ Policies: NONE
========== END Policy Evaluation (2009-10-05 17:36:30,895 EDT) ==========

That log block tells me everything I need to know.

DrApp is the name of the SSM I created, and DrAppDir is the name of the user directory I associated with the SSM. ConfigTool automatically created //app/policy/DrApp and "testapp_application1" is the web app (the EAR) that JDeveloper published. The other parts of the resource string are fairly obvious - "url" is a prefix that all HTTP URLs in the application appear under. And "testapp-ViewController-context-root" is the root of the app. For now the easiest thing to do is to open the OES GUI and create "testapp_application1" and "url" as resources.

You can create as many complicated policies as you want under the test app, but a simple policy will do well enough for me for now. When I created "url" I checked the box "Allow Virtual Resource", then created a simple policy on that resource granting the actions GET,POST, and HEAD to the group //sgrp/DrAppDir/allusers/.

Here's what that policy looks like in the GUI:

Don't forget to Save and Distribute your changes before accessing the site again.

One note of caution when you start writing policies: if there's a Deny policy that matches the resource and applies to the user, then access will be denied no matter what other policies you have created. After that, if you have any Grant policies that apply to the resource and to the user, then access will be granted. These rules sound simple, but when you have 10 policies that apply to a bunch of resources it can get a bit confusing if you're creating them without a plan or design in mind.
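
That evaluation order can be modeled in a few lines. This is a sketch of the semantics only, not OES code: any applicable Deny wins, otherwise any applicable Grant allows, otherwise access is denied by default.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Models the deny-overrides evaluation order described above.
// Illustrative only; not the actual OES implementation.
public class PolicyEval {

    public enum Effect { GRANT, DENY }

    // evaluate the set of policies that matched the resource and user
    public static boolean isAuthorized(List<Effect> applicablePolicies) {
        boolean granted = false;
        for (Effect e : applicablePolicies) {
            if (e == Effect.DENY) {
                return false;       // a matching DENY trumps everything
            }
            if (e == Effect.GRANT) {
                granted = true;     // remember that we saw a GRANT
            }
        }
        return granted;             // no applicable policy => denied
    }

    public static void main(String[] args) {
        System.out.println(isAuthorized(Arrays.asList(Effect.GRANT, Effect.DENY))); // false
        System.out.println(isAuthorized(Arrays.asList(Effect.GRANT)));              // true
        System.out.println(isAuthorized(Collections.<Effect>emptyList()));          // false
    }
}
```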

In any case DebugStore will help you out.

Now to go buy my engineering friend lunch...

Monday, October 5, 2009

Configuring BPEL 11g Human Workflow Tasks to work with OpenLDAP

The documentation basically describes the process, but the "trick" is to make sure that the OpenLDAP provider (or Active Directory, or whatever 3rd-party LDAP WLS supports) is the first instance in the WLS realm. This is how JPS knows where to get the users, groups, and roles from. It's a little quirky, but now you know.

Sunday, October 4, 2009

Fusion Security Bookmarks Now Available

Fusion security bookmarks

I moved some of my old joshbregmanoracle bookmarks over to a new user, fusionsecurity. I like the fact that the Google search on the blog pulls in the URLs referenced by the blog, so I'll definitely be using the search here to access all of my resources when I'm on the road.

Friday, October 2, 2009

The Impact of Oracle Entitlement Server (OES)

When customers are looking at deploying OES, or really any new piece of IT infrastructure, the question comes up: "What is the cost to adopt this new thing?" It's sort of like a puppy... sure it's cute, and my kids love it and will love me for it, but who is gonna walk it, and clean up after it, and when I go away I have to kennel it, etc. So this is the classic cost/benefit analysis that goes on every day inside of IT. Nothing new, but I just wanted to add some thoughts on the specifics of OES, its architecture, and its impact on the organization.

The first question is "How are you going to deploy OES?" OES deploys in two ways - centralized and distributed. In the centralized model, you stand up a service and applications talk to it over some protocol - SOAP or RMI. This has the advantage of a minimal application footprint (a lightweight API), which can make adoption simpler because you avoid the inevitable nastiness of getting a runtime loaded into a foreign, heterogeneous environment - i.e. classloading hell. The latency of making the authorization calls varies by protocol, but it's typically in the tens of milliseconds per call. That is not a lot, but if you have a web page with 100 items and each call is 20 ms, people will notice the inclusion of OES. If this is the case, then consider using some of the child query APIs, which can eliminate multiple (read: chatty) API round trips. So, basically, the centralized model has little impact on the infrastructure, but does require some thinking at the software level.
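
The arithmetic behind that warning is worth making explicit. Using the hypothetical numbers above (not a benchmark), 100 fine-grained calls at 20 ms each adds two full seconds to a page render, while a single bulk ("child query") round trip costs one call's worth of latency:

```java
// Back-of-the-envelope latency estimate for chatty vs. bulk authorization.
// The 100-item / 20 ms figures are the article's hypothetical numbers.
public class LatencyEstimate {

    public static long totalMillis(int calls, long perCallMillis) {
        return calls * perCallMillis;
    }

    public static void main(String[] args) {
        System.out.println("100 chatty calls: " + totalMillis(100, 20) + " ms"); // 2000 ms
        System.out.println("1 bulk call:      " + totalMillis(1, 20) + " ms");   // 20 ms
    }
}
```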

The alternative is to deploy the Java SM/WLS SM into each application/container. This has the issue of maintaining and managing more software - think upgrades - but is very, very fast: latency below 1 ms. So deploying embedded makes sense if you need the performance. Also, you can just call the API at will without fear of the latency accumulating over multiple requests. Less impact on the software, more on the infrastructure.

Once you've figured out which model - and they are not mutually exclusive - you need to think about how to get OES integrated into application environments. In general, I don't see customers just writing whatever policies they want and calling them through the API. There needs to be some basic authorization model established and then implemented in OES. People do not typically put individual grants in the OES admin console and then push out the changes - it's simply not manageable. Instead, develop policies that are data driven - for example: "Any customer can access any bank account they own" (good) vs. "Josh can access bank account 12345" (bad). Once the model is developed, wrapping the runtime API with a higher-level API that maps to the business domain model makes sense. In many cases, customers already have an existing model that OES is replacing, so this is a simple exercise. Just take the API that you have today - keep the interface, and replace the implementation with calls to OES. Then, over time, extend the existing API to expose more of the functionality of OES.
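
A sketch of what "keep the interface, replace the implementation" looks like for the bank account example. The interface and class names here are illustrative, not an official OES API; the OES-backed implementation (e.g. built on something like the AZRequestHandler wrapper from the earlier post in this series) would slot in behind the same interface:

```java
import java.util.HashMap;
import java.util.Map;

// Domain-level authorization facade per the "any customer can access any
// bank account they own" example. The application codes against the
// interface; the implementation can be swapped for an OES-backed one.
public class DomainAuthz {

    public interface BankAccountAuthorizer {
        boolean mayAccessAccount(String customer, String accountId);
    }

    // Stand-in implementation driven by ownership data, the way a
    // data-driven OES policy would consult an AttributeRetriever.
    public static class OwnershipAuthorizer implements BankAccountAuthorizer {
        private final Map<String, String> accountOwners = new HashMap<>();

        public void recordOwner(String accountId, String customer) {
            accountOwners.put(accountId, customer);
        }

        @Override
        public boolean mayAccessAccount(String customer, String accountId) {
            return customer.equals(accountOwners.get(accountId));
        }
    }

    public static void main(String[] args) {
        OwnershipAuthorizer authz = new OwnershipAuthorizer();
        authz.recordOwner("12345", "josh");
        System.out.println(authz.mayAccessAccount("josh", "12345"));    // true
        System.out.println(authz.mayAccessAccount("mallory", "12345")); // false
    }
}
```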

Finally, who is going to own and operate OES? Typically, the development of the API/policy is done by development, business analysts, and infosec working together. Once blessed, these policies are implemented, tested, and deployed. Policies don't change that often, but the data that informs them does. OES has rich capabilities in the form of AttributeRetrievers to get information from a wide range of sources, including LDAP, RDBMS, and custom sources. That information is managed there and then evaluated at runtime.

OES can be managed by application infrastructure or identity management teams; I've seen both. Typically, at the beginning of a deployment it stays close to the applications, since there is a lot of application context and many dependencies. Over time, as the model and deployment mature, this function can move to the centralized IDM group.

Ultimately, OES is not that different from any other piece of middleware. How it's deployed and managed at customer sites is not unique, but there are some subtleties listed above that are worth acknowledging and considering.

Thursday, October 1, 2009

Calling Oracle Service Bus from MSFT WCF Client Using an STS

I hope that this is the first of two posts. In the second post, I want to be able to describe how to do this use case without an STS. As people know from this blog, I think that an STS has a time and a place. When I first did this integration, there was a real reason for having the STS: we were implementing what was essentially the MSFT claims-based authorization model. The STS was calling out to an entitlements system that needed to be invoked using native .NET authentication. The alternative was to have OSB generate a Kerberos ticket for a user whose password it didn't have, and call the entitlements service. Let's just say many people consider this against security best practices. Now that I'm faced with doing this again for another customer, I'm eager to figure out how to do it without the STS. That aside, here's the approach.

Also, I couldn't have done this without Symon Chang, Anand Kothari, and Wil Hopkins - very, very smart engineers.


WCF, by default, uses Windows authentication. Windows authentication is based on Kerberos, so from the WCF perspective the most logical way of propagating identity would be to use the WS-Security Kerberos Token Profile. This is the standard way of conveying a Kerberos ticket in a SOAP message, and it is supported in WCF OOTB.

The problem is that OSB 10gR3 does not support it. OSB 10gR3 has no support for Kerberos at the message level. OSB does have support for Kerberos as part of the transport-level security provided by SPNEGO. At the message level, OSB 10gR3 supports the Username/Password, X.509 Certificate, and SAML profiles for WS-Security 1.0. SAML provides the best fit for this use case since it allows the identity in the Windows environment to remain native, relying on SAML only when calling services on the OSB.

WCF also supports the WS-Security SAML 1.1 Token Profile for WS-Security 1.0, so this seems like a good profile to focus on to meet the requirements. WCF requires a Security Token Service (STS) to generate the SAML Assertion. Microsoft provides a sample, but the sample needs to be modified to generate a SAML Assertion that OSB understands. Also, WCF favors symmetric bindings for WS-Security, probably because the WS-Security Kerberos Token Profile uses the Kerberos session key as the key. OSB 10gR3 only supports an asymmetric binding - X.509 certificates are used to sign the message and bind the SAML Assertion to it.

On the OSB side, a pipeline needs to be configured to handle the WS-Security policies. The inbound policy is WS-Security SAML 1.1 Token Profile for WS-Security 1.0 and the outbound policy is that the message is signed by the service. This is because MSFT expects that an endpoint that is protected using WS-Security will secure the response as well. To support this OSB configuration, WLS Security realm needs to be configured to consume and validate SAML Assertions as well as configure Public/Private Key pairs and corresponding trust stores for the message signature operations dictated by the WS-Security policies.

The flow is that a WCF client calls the STS. The STS generates a SAML Assertion, signed by the STS, that contains the name of the user as the Subject. The SAML Assertion uses the sender-vouches confirmation method. The SAML Assertion is added to the WS-Security header, and the message is signed by the invoking client. The message is sent to OSB, where the SAML Assertion is verified along with the message signature. Once the message is processed, the return message is signed by the OSB identity. The signature is validated by the WCF client to ensure that the message has not been tampered with and was sent by the OSB.

Customizing the STS Sample to Work with OSB

The sample STS provided by Microsoft needs to be modified to work with OSB in this scenario. The sample STS has the following issues:

  • The sample STS needs to be modified to use the X509RawCertificate format; OSB does not support SHA1Thumbprint references.
  • The sample STS needs to be modified to use the sender-vouches confirmation method instead of holder-of-key.
  • The sample STS needs to be modified to sign the assertion with the private key of the issuer, not an encrypted key. OSB does not support the use of symmetric encrypted keys, only un-encrypted asymmetric keys.
  • The sample STS needs to be modified to include an AuthenticationStatement in the SAML Assertion. This is where OSB looks for the user's identity.
  • The sample STS needs to be modified to add a wsu:Id to the saml:Assertion; otherwise WCF cannot use it as an IssuedToken with an asymmetric binding.

These issues can be addressed mainly by modifying the SamlTokenCreator class:

// Copyright (c) Microsoft Corporation. All rights reserved.
using System;

using System.Collections.Generic;
using System.Collections.ObjectModel;

using System.IdentityModel.Tokens;

using System.ServiceModel;
using System.ServiceModel.Security;
using System.ServiceModel.Security.Tokens;
using System.Text;
using System.Xml;
using System.Security.Cryptography.X509Certificates;
using System.Net.Security;
using System.ServiceModel.Channels;
using System.ServiceModel.Configuration;
using System.ServiceModel.Description;
using System.Configuration;
using System.Security.Principal;
using Common;

namespace Microsoft.ServiceModel.Samples.Federation
public sealed class SamlTokenCreator
#region CreateSamlToken()
/// <summary>
/// Creates a SAML Token with the input parameters
/// </summary>
/// <param name="stsName">Name of the STS issuing the SAML Token</param>
/// <param name="proofToken">Associated Proof Token</param>
/// <param name="issuerToken">Associated Issuer Token</param>
/// <param name="proofKeyEncryptionToken">Token to encrypt the proof key with</param>
/// <param name="samlConditions">The Saml Conditions to be used in the construction of the SAML Token</param>
/// <param name="samlAttributes">The Saml Attributes to be used in the construction of the SAML Token</param>
/// <returns>A SAML Token</returns>
public static SamlSecurityToken CreateSamlToken(string stsName,
    BinarySecretSecurityToken proofToken,
    SecurityToken issuerToken,
    SecurityToken proofKeyEncryptionToken,
    SamlConditions samlConditions,
    IEnumerable<SamlAttribute> samlAttributes)
{
    // Create a security token reference to the issuer certificate.
    // OSB requires the raw certificate data, not a SHA1 thumbprint.
    SecurityKeyIdentifierClause skic = issuerToken.CreateKeyIdentifierClause<X509RawDataKeyIdentifierClause>();
    SecurityKeyIdentifier issuerKeyIdentifier = new SecurityKeyIdentifier(skic);

    //Get the user
    WindowsIdentity wi = ServiceSecurityContext.Current.WindowsIdentity;

    // Create a SamlSubject for the Windows user
    // (arguments reconstructed; they were truncated in the original listing)
    SamlSubject samlSubject = new SamlSubject(SamlConstants.UserNameNamespace,
        null, wi.Name);

    //Set the Confirmation method to Sender-Vouches
    samlSubject.ConfirmationMethods.Add("urn:oasis:names:tc:SAML:1.0:cm:sender-vouches");

    //Create the Authentication Statement - this is where OSB looks for the identity
    SamlAuthenticationStatement samlAuthStatement = new SamlAuthenticationStatement();
    samlAuthStatement.SamlSubject = samlSubject;

    // Put the statement into a list of SamlStatements
    List<SamlStatement> samlSubjectStatements = new List<SamlStatement>();
    samlSubjectStatements.Add(samlAuthStatement);

    // Create a SigningCredentials instance from the key associated with the
    // issuerToken - the issuer's private key, not an encrypted proof key
    // (algorithm arguments reconstructed)
    SigningCredentials signingCredentials = new SigningCredentials(issuerToken.SecurityKeys[0],
        SecurityAlgorithms.RsaSha1Signature, SecurityAlgorithms.Sha1Digest, issuerKeyIdentifier);

    // Create the SamlAssertion
    String assertionId = "_" + Guid.NewGuid().ToString();

    SamlAssertion samlAssertion = new SamlAssertion(assertionId,
        "uri:" + stsName.Replace(' ', '_'),
        DateTime.UtcNow,
        samlConditions,
        new SamlAdvice(),
        samlSubjectStatements);

    //Wrap the SamlAssertion so that the wsu:Id can be added
    CustomSamlAssertion customAssertion = new CustomSamlAssertion(samlAssertion);

    // Set the SigningCredentials for the SamlAssertion
    customAssertion.SigningCredentials = signingCredentials;

    // Create a SamlSecurityToken from the SamlAssertion and return it
    SamlSecurityToken st = new SamlSecurityToken(customAssertion);

    return st;
}


private SamlTokenCreator() { }

static X509Certificate2 LookupCertificate(StoreName storeName, StoreLocation storeLocation, string thumbprint)
{
    X509Store store = null;
    try
    {
        store = new X509Store(storeName, storeLocation);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindByThumbprint,
            thumbprint, false);
        if (certs.Count != 1)
            throw new Exception(String.Format("FedUtil: Certificate {0} not found or more than one certificate found", thumbprint));
        return (X509Certificate2)certs[0];
    }
    finally
    {
        // close the store even on the return path
        if (store != null) store.Close();
    }
}


The code above references another class, CustomSamlAssertion. This class fixes the issue of the SamlAssertion not having a wsu:Id:

using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens;
using System.Text;
using System.Xml;
using System.IO;

namespace Common
class CustomSamlAssertion: SamlAssertion

public CustomSamlAssertion(SamlAssertion theAssertion) :
    // copy-construct from the wrapped assertion
    // (base call reconstructed; it was truncated in the original listing)
    base(theAssertion.AssertionId, theAssertion.Issuer, theAssertion.IssueInstant,
         theAssertion.Conditions, theAssertion.Advice, theAssertion.Statements)
{
}

public override void WriteXml(System.Xml.XmlDictionaryWriter writer, SamlSerializer samlSerializer, System.IdentityModel.Selectors.SecurityTokenSerializer keyInfoSerializer)
{
    // Serialize the assertion into a buffer first...
    StringBuilder myBuilder = new StringBuilder();
    XmlDictionaryWriter myWriter = XmlDictionaryWriter.CreateDictionaryWriter(XmlDictionaryWriter.Create(myBuilder));

    base.WriteXml(myWriter, samlSerializer, keyInfoSerializer);
    myWriter.Flush();

    String contents = myBuilder.ToString();

    // ...then re-read it node by node, copying each node to the real
    // writer; WriteShallowNode injects the wsu:Id on the Assertion element
    XmlDictionaryReader reader =
        XmlDictionaryReader.CreateDictionaryReader(XmlDictionaryReader.Create(new StringReader(contents)));

    while (reader.Read())
    {
        WriteShallowNode(reader, writer);
    }
}





void WriteShallowNode(XmlReader reader, XmlWriter writer)
{
    if (reader == null)
    {
        throw new ArgumentNullException("reader");
    }

    if (writer == null)
    {
        throw new ArgumentNullException("writer");
    }

    switch (reader.NodeType)
    {
        case XmlNodeType.Element:

            writer.WriteStartElement(reader.Prefix, reader.LocalName, reader.NamespaceURI);

            writer.WriteAttributes(reader, true);

            if (reader.LocalName.Equals("Assertion")) {
                // inject the wsu:Id that OSB and WCF require
                // (attribute write reconstructed from the surrounding discussion)
                writer.WriteAttributeString("wsu", "Id",
                    "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd",
                    this.AssertionId);
            }

            if (reader.IsEmptyElement)
            {
                writer.WriteEndElement();
            }
            break;

        case XmlNodeType.Text:

            writer.WriteString(reader.Value);
            break;

        case XmlNodeType.Whitespace:

        case XmlNodeType.SignificantWhitespace:

            writer.WriteWhitespace(reader.Value);
            break;

        case XmlNodeType.CDATA:

            writer.WriteCData(reader.Value);
            break;

        case XmlNodeType.EntityReference:

            writer.WriteEntityRef(reader.Name);
            break;

        case XmlNodeType.XmlDeclaration:

        case XmlNodeType.ProcessingInstruction:

            writer.WriteProcessingInstruction(reader.Name, reader.Value);
            break;

        case XmlNodeType.DocumentType:

            writer.WriteDocType(reader.Name, reader.GetAttribute("PUBLIC"), reader.GetAttribute("SYSTEM"), reader.Value);
            break;

        case XmlNodeType.Comment:

            writer.WriteComment(reader.Value);
            break;

        case XmlNodeType.EndElement:

            writer.WriteFullEndElement();
            break;
    }
}



Configuring the WCF Client

WCF supports a large number of authentication methods and profile bindings simply and easily. This is typically done by modifying the configuration file through the WCF Service Configuration Editor. Unfortunately, there is no way to set up this particular binding purely through configuration; it needs to be done programmatically. The WS-Policy that OSB uses is essentially as follows:

<?xml version="1.0"?>

The translation between this policy and the WCF APIs is pretty straightforward, with one exception - the SAML token itself. In WCF, the SAML token is retrieved from the STS, so the WCF client needs to be configured to communicate with it. In WCF, authentication with a token retrieved from an STS is called IssuedToken. All of this can be done programmatically through the WCF APIs. For simplicity's sake, the creation of the custom AsymmetricSecurity binding can be encapsulated as a WCF binding element extension. This allows for the inclusion of the custom binding element (the osbsecurity element below) inside of a customBinding.

<add name="osbsecurity" type="OSBWCFExtensions.OSBSecurityElement, OSBWCFExtensions, Version=, Culture=neutral, PublicKeyToken=63fc46aa660659ca" />

<binding name="HelloWorldServiceServiceSoapBinding">
  <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16"
      messageVersion="Soap12" writeEncoding="utf-8">
    <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
  </textMessageEncoding>
  <osbsecurity STSAddress="http://fedtest/FederationSample/HomeRealmSTS/STS.svc"/>
  <httpsTransport manualAddressing="false" maxBufferPoolSize="524288"
      maxReceivedMessageSize="165536" allowCookies="false" authenticationScheme="Anonymous"
      bypassProxyOnLocal="false" hostNameComparisonMode="WeakWildcard"
      keepAliveEnabled="true" maxBufferSize="165536" proxyAuthenticationScheme="Anonymous"
      realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false"
      useDefaultWebProxy="true" requireClientCertificate="true"/>
</binding>


Inside of the OSBSecurityElement, the WCF API calls are made that create the proper binding for sending a SAML Assertion to OSB.

protected override System.ServiceModel.Channels.BindingElement CreateBindingElement()
{
    //Set up the asymmetric binding with the recipient's and initiator's parameters
    //Keys are identified by the issuerSerial as required by the policy
    //OSB does not support derived keys, so they are disabled
    X509SecurityTokenParameters initiatorParams = new X509SecurityTokenParameters(X509KeyIdentifierClauseType.IssuerSerial, SecurityTokenInclusionMode.AlwaysToRecipient);
    initiatorParams.RequireDerivedKeys = false;

    X509SecurityTokenParameters recipientParams = new X509SecurityTokenParameters(X509KeyIdentifierClauseType.IssuerSerial, SecurityTokenInclusionMode.Never);
    recipientParams.RequireDerivedKeys = false;

    AsymmetricSecurityBindingElement security = new AsymmetricSecurityBindingElement(recipientParams, initiatorParams);

    security.SecurityHeaderLayout = SecurityHeaderLayout.Lax;
    security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    //Configure the STS and the resulting SAML Assertion as a signed supporting token
    WSHttpBinding stsBinding = new WSHttpBinding();

    //This credential type is how the caller identifies themself to the STS
    stsBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;

    //Retrieve the STS address from the config (the STSAddress attribute)
    IssuedSecurityTokenParameters issuedTokenParameters =
        new IssuedSecurityTokenParameters("", new EndpointAddress((String)base["STSAddress"]), stsBinding);

    issuedTokenParameters.RequireDerivedKeys = false;
    issuedTokenParameters.ReferenceStyle = SecurityTokenReferenceStyle.Internal;
    issuedTokenParameters.InclusionMode = SecurityTokenInclusionMode.AlwaysToRecipient;

    //Attach the issued token to the binding as a signed supporting token
    //(reconstructed; this step was missing from the original listing)
    security.EndpointSupportingTokenParameters.Signed.Add(issuedTokenParameters);

    //Set this to process the signature of the response
    security.AllowSerializedSigningTokenOnReply = true;

    return security;
}

By using the custom binding element extension, the client code remains unchanged:

HelloWorldClient client = new HelloWorldClient();
Console.Out.WriteLine(client.test1("WCF Client"));

Configuring the OSB Domain's Security Realm

The inbound SAML processing requires the creation and configuration of a SAML Identity Asserter. For this scenario, the SAML V2 Identity Asserter should be used; it supports the SAML 1.1 sender-vouches subject confirmation method. It needs to be configured with an asserting party that corresponds to the STS. Since the SAML Assertion is signed, OSB needs to be configured to trust the signer of the assertion. This can be done by adding the certificate authorities (CAs) that make up the STS's certificate chain to the list of trusted CAs. Which keystore to add them to depends on the trust mode that the OSB domain is running in, but by default they can be added to the cacerts keystore found in JRE_HOME/jre/lib/security.

In some scenarios, the identity being asserted by the SAML Assertion can be trusted, and in others the identity needs to be validated against some other authentication source - mainly Active Directory. The OSB domain can be configured to support both. To trust the identity, a SAML Authentication Provider needs to be added to the realm. Make sure to configure it with an appropriate JAAS Control Flag; the simplest way to avoid any conflicts is to mark all of the authentication providers as OPTIONAL. Also, the asserting party configuration in the SAML Identity Asserter needs to be configured to Allow Virtual Users; otherwise, the SAML Authentication Provider will not work. If "Allow Virtual Users" is not checked for the asserting party, then the security realm will try to validate the user against the authentication providers configured for the realm. The name that the STS above generates is of the form domain/username, so in most cases a custom username mapper will need to be written and configured on the SAML Identity Asserter to split off the domain portion of the name.
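
The string-splitting logic for that custom username mapper is trivial; the sketch below shows only that logic, leaving out the WebLogic mapper plug-in interface it would be wired into. The example names are hypothetical:

```java
// Name-mapping logic for SAML subjects of the form "domain/username":
// strip the domain portion so the realm sees just the username.
// (Sketch only; the WebLogic username-mapper plug-in interface is omitted.)
public class SamlNameSplitter {

    public static String mapSubjectName(String samlName) {
        int slash = samlName.indexOf('/');
        // no domain prefix? pass the name through unchanged
        return (slash >= 0) ? samlName.substring(slash + 1) : samlName;
    }

    public static void main(String[] args) {
        System.out.println(mapSubjectName("CORP/jbregman")); // jbregman
        System.out.println(mapSubjectName("jbregman"));      // jbregman
    }
}
```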

A PKI CredMapper needs to be configured so that OSB can generate digital signatures for outbound requests. The PKI CredMapper is configured to point to a Java Keystore. The identity of the OSB should be available in this keystore, and should be the same identity as the ServiceCert configured in the WCF client.

Configuring the OSB Pipeline

The OSB service needs to be configured to process the WS-Security header sent by WCF. The inbound request message needs to be configured with the SAML Token Profile 1.0 - Sender Vouches policy.
<wssp:SecurityToken TokenType="">

The OSB response needs to be signed. This can be done by creating a Service Key Provider that points to the identity of the OSB, and then adding the predefined Sign.xml policy to the response operation.


The mapping of WCF configuration to WS-Policy and the security capabilities is nicely described here.

A description of why the wsu:Id needs to be added to the SAML Assertion.

A good discussion of a variation of this use case.

The sample STS from Microsoft, which I extended to integrate with OSB.