Thursday, August 27, 2009

Configuring WLS 10.3 and OSB for "old school" SOA Security

In SOA, the propagation of identity is not limited to end-users. Identifying and tracking which application or service is invoking a service is just as important. In many cases, SLAs are defined by organization or application, not by user. This is not to say that the user is unimportant, but that both the identity of the application and the identity of the user have their roles in a typical SOA.

A common example is to have a service bus (OSB) fronting a collection of back-end services. The bus is interested in the application's identity and is entrusted to enforce authorization policies at the proxy service level - this application is authorized to call the viewCustomer service. The service that the request is routed to (the business service) is very interested in the actual user for audit purposes and finer-grained authorization - user "X" can see those particular customers. Therefore it is essential for the web-service consumer to call the service passing both the application and the user identity.

Many good choices here, but I want to focus on the details of one solution in particular - SAML Token Profile using Sender-Vouches subject confirmation method. As the name implies, the sender - in this case the application - vouches for the subject - the user authenticated to the application. The application signs the message which contains a SAML Assertion with the user as the subject. The message is received by OSB, the SAML Assertion is validated and the identity of the user is established as the Subject. Before you close your browser - this is not another How to debug SAML post. There are some interesting nuances to this use-case.
  • How to get WLS to invoke an OSB service given that OSB and WLS 10.3 use different versions of WS-Policy?
  • How to get the message signed by the right "application", given that the PKI CredMapper only gets passed the user and the target service, not the application?
  • How to get OSB to do authorization based on the application's identity (message signer), not the SAML subject, yet make sure that when OSB calls the business service, it uses the user's identity and not the application's?
I spent nearly an entire plane flight from Boston to San Francisco discussing the possibilities here, but this is the solution that I like:
  • JAX-RPC can be configured simply with a custom policy. This is basically a local file that contains the client's policy, and is an alternative to retrieving the WS-Policy from the WSDL. Getting this set up with JAX-WS is a little trickier, but also doable. Gerard Davidson's Blog has a nice example.
  • Next, short of writing a custom credential mapper (I promise I will post how), a nice simplifying assumption is that the application is the managed server - or better, the identity of the managed server. Configure all of the managed servers to use the same alias in a keystore with the same relative path (from the domain root). This allows the PKI credmapper to map all users (some common group that all users are in, like "customers") to the alias of the server. All SOAP requests out of the server will use the server identity. You can restrict this by specifying destination hosts/ports/URLs etc. if needed. Also, it's worth noting that if you wanted to do this at the transport level, you could just enable the "Use Server Cert" check box on the SSL tab of the managed server.
  • Finally, I think the key to doing authorization inside of OSB based on the identity of the application is to have the application appear in the JAAS Subject as a group. I think it's simpler to do this from the calling application by using either a custom Authenticator to add the application name as a group or a NameMapper to add it to the SAML Assertion directly. Either way, when the assertion appears at OSB, the application name can easily be retrieved and placed in the JAAS Subject, which results in a group, which can be used for authorization inside of OSB (a quick sketch of what that buys you is below).
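
To make that last point concrete, here's a minimal sketch of what "the application shows up as a group" buys you on the OSB side. The group name is made up for illustration, and in practice you'd likely just point the proxy's Access Control policy at that group rather than write code, but the Security/SubjectUtils calls are the standard WLS way to look at the inbound JAAS Subject:

    import javax.security.auth.Subject;
    import weblogic.security.Security;
    import weblogic.security.SubjectUtils;

    public class ApplicationGroupCheck {

        // Hypothetical group that the custom Authenticator/NameMapper mapped
        // the calling application to.
        private static final String APP_GROUP = "trading-portal";

        public static boolean callerIsTrustedApplication() {
            // Subject established by the SAML identity asserter for this request;
            // it carries the user principal plus any group principals.
            Subject subject = Security.getCurrentSubject();

            // If the application name was added as a group, authorization inside
            // OSB reduces to a simple group-membership test.
            return SubjectUtils.isUserInGroup(subject, APP_GROUP);
        }
    }
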
This approach puts more of a burden on the client (configured with policy and passing the application name in the assertion), but I think it represents a very good use of the out-of-the-box capabilities of both WLS and OSB with little customization.

I'd be interested in hearing other people's ideas on how to solve this issue...maybe using 2-way SSL or UserNameTokens instead of SAML.

Wednesday, August 26, 2009

So that's what WebLogic Certificate Registry is for...

<11/08/2009 12h10min32s ACT> <Error> <> <BEA-000000> <CertPathBuilder does not support building cert path from class weblogic.security.pk.SubjectKeyIdentifierSelector
java.security.InvalidAlgorithmParameterException: [Security:090596]The WebLogicCertPathProvider was passed an unsupported CertPathSelector.
at weblogic.security.providers.pk.WebLogicCertPathProviderRuntimeImpl$JDKCertPathBuilder.engineBuild(WebLogicCertPathProviderRuntimeImpl.java:682)



In a previous post I talked a little bit about how the WebLogic Security Framework can be extended to support OCSP and CRL checking. Besides being used in SSL validation, the CertificationProviders are used in validating signatures in web services messages. When a response is received, it is typically signed. The certificate that signed the response is identified through a <wsee:SecurityTokenReference>. This reference can be of several types - SubjectKeyIdentifier, IssuerSerialNumber, Thumbprint#SHA1. You can also use what is called a direct reference, which is to say the actual certificate itself is passed in the message.
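
To see why lookup is even needed, here's a tiny, JDK-only sketch of what a SubjectKeyIdentifier or Thumbprint#SHA1 reference actually carries. Nothing below is WebLogic-specific:

    import java.io.FileInputStream;
    import java.security.MessageDigest;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class TokenReferenceValues {
        public static void main(String[] args) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate)
                    cf.generateCertificate(new FileInputStream(args[0]));

            // Thumbprint#SHA1: a SHA-1 digest over the DER-encoded certificate.
            byte[] thumbprint = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());

            // SubjectKeyIdentifier: the (DER-wrapped) value of the SKI extension,
            // OID 2.5.29.14, if the certificate carries one.
            byte[] ski = cert.getExtensionValue("2.5.29.14");

            System.out.println("thumbprint bytes: " + thumbprint.length);
            System.out.println("has SKI extension: " + (ski != null));
        }
    }

Neither value contains the signer's public key, so to verify the signature the server has to map the reference back to a full certificate it already knows about - which is exactly the lookup problem described next.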

Assuming that you don't want to pass the certificate itself (they're big), and you're passing one of the referenced tokens back to WebLogic Server, how should it find it? CLV = Certificate Lookup and Validation. In the OCSP/CRL check post, we focused more on the validation part of the CertificationProvider. Here, we're interested in lookup. The OOTB CertificationProvider, which essentially wraps the JDK's provider, only supports direct references (X509). In order to support other references, like say SubjectKeyIdentifier, you need to configure a CertificateRegistry provider. You add the list of certificates from the WLS admin console, and now the signature on the response can be validated.


Basically, if you're using WS-Security, then you need to configure a CertificateRegistry.

Friday, August 21, 2009

Calling OES from inside of an Oracle SOA 11g Application

Well - the POC in Amsterdam went like most POCs - very, very busy, so not as much time for posting, or sleeping, as I'd hoped, but I did at least get a strong integration between OES and BPEL. I've got a nice 3 hour layover before my flight back to Boston, so here are some of the details.

First of all, what is the use case and how is it different from doing authorization at the SOAP level with something like the yet-to-be-built OWSM+OES integration? A couple of distinctions:


  • Authorization at the SOAP endpoint level is for the user/subject invoking the endpoint. A callout from inside of BPEL could be for any user.

  • Authorization at the SOAP endpoint results in a YES/NO decision. In the NO case, users are denied access. A callout from inside BPEL may result in something else...like a message being routed differently or elements being restricted.

  • Authorization at the SOAP endpoint level is typically concerned with what is going into the message, not what is coming out. A callout from inside of BPEL can affect messages in and out - a SOAP endpoint can too, it's just not typically done.


So this begs the question, is OES here just a rules engine? Why not use the built-in rules engine in BPEL? Good questions, strawman Josh. I think the difference is that rules in the rules engine are configured by developers. Policies in OES can be centrally managed and span multiple applications/domains/organizations. They can be changed and applied in the application without making any code changes, which allows developers to focus on building true business logic, not security logic.

One more thought before the details. I probably could have just called OES as a service over SOAP using the WebServices SM, but in all of our testing SOAP does not perform nearly as well as in-process. Besides, running in-process in the context of a SOA composite allows us to write very context-rich policies that perform very nicely. Enough of the set-up; here's the approach.

Once you've got the SOA domain configured (see my previous 3-part epic), you're going to make a Java callout from inside of the BPEL process to OES. I found developing inside of a Java callout very cumbersome, so really all that the Java callout does is pass in two DOM Node instances - one for input and one for output. There was a lesson learned here - you don't need to set the variable from inside the embedded Java - just get the variable and update it in place.
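
Roughly, the body of the bpelx:exec activity ends up looking like the sketch below. getVariableData and addAuditTrailEntry are the standard embedded-Java helpers; the variable/part/XPath names are placeholders for whatever your process uses, and OESFacade is my own (hypothetical) name for the class described next:

    // Inside the <bpelx:exec> activity of the BPEL process.
    try {
        // Get the two DOM elements by reference - no setVariableData needed,
        // because the facade updates the output element in place.
        org.w3c.dom.Element input = (org.w3c.dom.Element)
            getVariableData("inputVariable", "payload", "/client:process");          // placeholder names
        org.w3c.dom.Element output = (org.w3c.dom.Element)
            getVariableData("outputVariable", "payload", "/client:processResponse"); // placeholder names

        // Hypothetical facade class that holds the OES handle and applies
        // policy to the data (described below).
        com.example.OESFacade.apply(input, output);
        addAuditTrailEntry("OES policy applied to payload");
    } catch (Exception e) {
        addAuditTrailEntry("OES callout failed: " + e.getMessage());
    }
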

The Nodes are variables from inside of the BPEL process, so this sets up an interesting pattern - XML in (data) - apply some policy - XML out (data). This is just another variation of data security...much simpler in many respects because the data is already there - given to me by BPEL. Think of this as post-query data security...for some small set of manageable records, apply security policy about which ones and which data from those elements should be included.

Now from inside of the OES Facade (the class that gets called from the BPEL to process the Node), instantiate a handle to the OES API (I used the JavaAPIExample that ships with the Java SM) and start making authorization decisions...oh wait. What if I don't have an identity, or what if the users are part of the messages, and I want to do authorization on their behalf? SSPI to the rescue again - just use the UsernameIdentityAsserter...also in the examples directory of the java-sm. The UsernameIdentityAsserter will basically just authenticate you with a clear-text token. Standard disclaimer here: if you are going to configure this IdentityAsserter, you need to make sure that there are other controls in place; otherwise anyone who knows a username could impersonate that user.

Ok, now that we have the AuthenticIdentity, start doing authorization. You have the Node, and it can be passed as a dynamic attribute to OES for policy evaluation. Alternatively, you could just pull information out of the DOM and pass it to OES. I think there is also a generic pattern here (I'm too jet-lagged to say more, but the idea is that the XML document is passed in and then you do an authorization query where the resources in OES mimic the schema in the XML to determine what parts of the document the user can see). I think there are a lot of interesting approaches here.
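
For the record-filtering flavor of this, here's a bare-bones sketch of the post-query pattern: walk the records already sitting in the DOM and strip the ones the user isn't entitled to see. The RecordAuthorizer interface is just a stand-in for the real OES isAccessAllowed call, and the element structure is whatever your BPEL variable happens to hold:

    import java.util.ArrayList;
    import java.util.List;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class PostQueryFilter {

        // Stand-in for the real OES authorization call.
        public interface RecordAuthorizer {
            boolean isAllowed(String user, Element record);
        }

        // Remove every child record the user is not authorized to see.
        // The Node comes straight from the BPEL variable, so editing it
        // in place is all that's required - no setVariableData.
        public static void filter(Node records, String user, RecordAuthorizer authz) {
            NodeList children = records.getChildNodes();
            List<Node> toRemove = new ArrayList<Node>();

            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeType() == Node.ELEMENT_NODE
                        && !authz.isAllowed(user, (Element) child)) {
                    toRemove.add(child);
                }
            }
            // Remove after the scan so we don't mutate the live list while walking it.
            for (Node n : toRemove) {
                records.removeChild(n);
            }
        }
    }
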

One more, and really I think the coolest part of the integration, is the use of Service Interfaces to retrieve additional data for authorization. I spent a lot of time on this approach because Service Interfaces are very common inside of BPEL, and I felt like it would avoid a lot of the class-loading issues that sometimes arise with attribute retrievers. On the other hand, I needed a way that performed - that is, I didn't want to go fetch a bunch of additional data that was not going to be evaluated by the policy. I wanted the policy to come back and say - "Hey, can you go fetch the user's date of birth...I need that." OES has a way to do that, in the form of a special response - MissingAttributeResponse. It tells you which attribute was not there. This only works if you don't include sys_defined() around the attributes in the OES policy.

GRANT (//priv/edit, //role/Editor, //app/policy/book) if book_still_in_print=true;

If you make an isAccessAllowed call, the result will come back as false, but with a response that includes a MissingAttributeResponse for book_still_in_print. So, now how to get this information? The ServiceInterface for an application module can be accessed from inside the server using JNDI. When you receive the response, grab the InitialContext and look up the Service (you should be able to find it based on the @PortableWebService annotation on the ServiceImpl). Look at that! All of the methods of the ServiceInterface are available to add data to a decision. Definitely simpler than configuring them externally from an AttributeRetriever. Normally, I frown on applications having their own connections to the Policy Information Point, but this is an exceptional case - the data is presumably already there inside the composite.
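
Stitched together, the loop looks roughly like this. I'm hiding the actual OES Java SM calls behind a hypothetical PolicyClient/Decision pair (the real classes and the MissingAttributeResponse type are in the Java SM javadoc and the JavaAPIExample), the JNDI name is a placeholder, and InitialContext is the only concrete API in the sketch:

    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.InitialContext;

    public class MissingAttributeLoop {

        // Hypothetical wrapper around the OES Java SM isAccessAllowed call.
        public interface PolicyClient {
            Decision isAccessAllowed(String user, String resource, String action,
                                     Map<String, Object> context);
        }

        // Hypothetical shape of the result: allowed, or "go fetch this attribute".
        public interface Decision {
            boolean isAllowed();
            String missingAttribute(); // null when nothing is missing
        }

        public static boolean canEditBook(PolicyClient oes, String user) throws Exception {
            Map<String, Object> ctx = new HashMap<String, Object>();
            Decision d = oes.isAccessAllowed(user, "app/policy/book", "edit", ctx);

            // First pass: the policy references book_still_in_print, which we did not
            // supply, so the answer is false plus a MissingAttributeResponse.
            if (!d.isAllowed() && "book_still_in_print".equals(d.missingAttribute())) {

                // Look up the composite's service interface in JNDI. The name below is
                // a placeholder - find the real one from the @PortableWebService-annotated
                // ServiceImpl deployment.
                InitialContext ic = new InitialContext();
                Object bookService = ic.lookup("ejb/BookServiceBean"); // placeholder JNDI name

                // ...call whatever service method returns the flag, then evaluate again
                // with the attribute supplied in the request context.
                ctx.put("book_still_in_print", Boolean.TRUE /* value returned by the service */);
                d = oes.isAccessAllowed(user, "app/policy/book", "edit", ctx);
            }
            return d.isAllowed();
        }
    }

The nice part of doing it this way is that nothing gets fetched unless the policy actually asks for it.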

This may turn out to be a very specialized use case - post-query data security from inside a BPEL process - and not a general-purpose SOA security use case like OES+OWSM or even OES+BPEL in some other scenario (I'll always take requests), but after watching this thing in action, I am very impressed with how easy it was to write policy to do very powerful things from inside a BPEL process. I like the idea of an explicit callout instead of embedding the OES API inside of a custom XSLT transform - more transparency at a business/process level as to what is going on - but I'm sure there are some scenarios where XSLT calling OES makes sense.

Monday, August 17, 2009

Adding OES to a SOA 11g Domain - Part 3

Greetings from Amsterdam. After a trans-Atlantic flight and a 13 hour work day, more progress to report on getting OES to protect a SOA 11g domain.

First, a clarification from a previous post. When installing the SM, make sure that the ARME listening ports are not 8001, because that's the default port for the managed server. This is why I explicitly chose 28000 and 28001 (i.e., not 8000 and 8001). Apologies for not being clearer.

Next, a Linux-only issue that we encountered that caused some serious consternation. The set-wls-env.sh script calls out to BEA_HOME/wlserver_10.3/server/bin/commEnv.sh. In this script, the MEM_ARGS are set to some vendor-specific values, but regardless of the JDK vendor the values are too small to start the 11g admin or managed server. The workaround is to make sure that the original MEM_ARGS in startWebLogic.sh are preserved and not overridden by set-wls-env.sh. If you don't do this, the server will start, but will soon die with an OutOfMemoryError.

Finally, and this was a tricky one: if you want to access Enterprise Manager, the user needs to be in a role called soa_admin. If they're not, you'll be able to log into the EM console, but you won't be able to access any of the composites. This one is made especially tricky because EM is not making an isAccessAllowed call, but rather an isCallerInRole call, which is not discoverable.

Hopefully, this is the last post on my somewhat epic (all great stories are trilogies) quest to OES enable 11g SOA Domain. I'm almost there with the OES/BPEL integration, and should have it posted during some future bout of insomnia later this week.

Thursday, August 13, 2009

Configuring SOA Suite 11g with OES - Part 2

As it turns out, there is some more work to do to get a SOA 11g Managed Server to boot and get composites deployed from JDeveloper with the domain enabled with OES.

This whole exercise was greatly simplified by using the Policy Debugging capabilities of the SM.

Once you import the policies from discovery mode, you have to remember to bind the resources into the SM. If you don't, then for the resources that are not bound, you won't get access, but it won't log anything either. With the exception of the oesbpel resource, which I'm using for the project, below appears to be the standard list of bindings for the SM in a SOA Suite domain.


Once you get the bindings set up, you'll get proper messages for the resources in the log, but there is one nasty issue that requires a little work to fix. In CP2, you can create the instance in an Organization other than the default. Unfortunately, discovery mode seems not to be fully in sync with this. The consequence is that for many of the resources that are discovered, the privilege does not include the organization. So, you get a whole bunch of ABSTAIN decisions, which result in not getting access. I've included a table below that illustrates the changes from the discovered privileges to the correct privileges that the SM uses:

Resource               Discovered Priv   Correct Priv
SOAJMSModule           recieve           RootOrg!someorg!SOAJMSModule!recieve
SOAJMSServer           recieve           RootOrg!someorg!SOAJMSServer!recieve
SOAJMSServer           send              RootOrg!someorg!SOAJMSServer!send
UMSJMSSystemResource   recieve           RootOrg!someorg!UMSJMSSystemResource!recieve
usermessagingserver    execute           RootOrg!someorg!usermessagingserver!execute
ws-pm                  execute           any
soa-infra              post              any
shared/adm             access            RootOrg!someorg!shared!access


Notice that for the soa-infra and ws-pm bindings we have to use the any privilege. This is because OES doesn't allow privileges containing '-'. The OES SM uses an action RootOrg!someorg!soa-infra!execute, but you can't enter it. Go figure! The any action will work just fine. If you really need fine-grained control, you could try redeploying the application to a different URI or use the sys_privilege attribute in a constraint.

Finally, make sure to create a role mapping policy that grants the allusers group the Everyone role. It's not set-up by default, but all of the discovered policies are tied to the Everyone role.

Ironically, I don't actually need to secure the domain itself with OES. I need to get the domain enabled with OES so that I can make programmatic calls from inside SOA composites. In case you're wondering, the reason why I didn't just keep the DefaultAuthorizer and then add the ASIAuthorizer (OES) is that for unknown resources the DefaultAuthorizer returns ALLOW and the ASIAuthorizer returns ABSTAIN. This means that if the Adjudicator is set to "Not require unanimous PERMIT", then there is no way for OES to prevent access to resources that the DefaultAuthorizer sees as unknown without forcing OES to return a DENY. This can make writing policies very complicated.

Now that the set-up is behind me, off to get OES working with SOA composites. Stay tuned.

Wednesday, August 12, 2009

How to Configure a SOA Suite 11g Domain with OES

This is likely to be the first of many posts for an upcoming POC that I'm working on. I actually have to travel to Europe to go work with the local Oracle team there. This means lots of being awake at the wrong time, so I'm sure there'll be posts-a-plenty.

This POC is a realization of the "pre-cache" entitlements architecture that I discussed previously. To that end, I need to get the OES WLS SM (embedded PDP) running inside of a SOA Suite 11g domain.

DISCLAIMER: This is not a supported configuration at this time.

Part 0 - Setting up the Environment
  1. Get SOA Suite 11g Domain configured (1 admin server, 1 managed server)
  2. Install OES 10.1.4.3 and apply CP2. Make sure that you apply CP2 before you try to start the admin server. If you don't, the admin server will come up, but you'll get an error like "Admin Service Not Available." Save yourself the trouble and upgrade to CP2 first.
Part 1 - Configure the WLS SM for the SOA Suite Domain Admin Server
  1. Run the configtool and set up the domain with the OES realm - set the ARME port to, say, 28000
  2. Now you'll need to make some changes to the security realm created by the configtool. By default, the configtool doesn't include the DefaultAuthenticator and the DefaultIdentityAsserter in the realm.
  3. First, you'll need to change the server to use the old realm - called myrealm. Go into the config.xml and change the value of <default-security-realm> to myrealm. Now you can start the admin server.
  4. Next, log into the WebLogic console at http://localhost:7001/console and create the DefaultAuthenticator and the DefaultIdentityAsserter. Set the JAAS Control Flag on both the DefaultAuthenticator and the DatabaseAuthenticator to SUFFICIENT and order the DefaultAuthenticator first (I'll explain why in a second).
  5. Before you restart the admin server, make sure to force a policy distribution from OES. The 100% way to do this is as follows:
  • Clean out the ARME cache - this means deleting the state.chk and all of the contents of the PolicyA and PolicyB directories under SSM-HOME/wls-ssm/instance/<instance_name>/work/runtime
  • Delete the instance from the OES Admin console. All that this will do is remove the entry from the database...it will get recreated when the ARME starts up and contacts the admin server.
  • From the OES Admin console, push a policy distribution.
Finally, restart the SOA Suite Domain Admin Server.

Part 2 - Configure the WLS SM for the SOA Suite Domain Managed Server

  1. Run the instancewizard again, but this time for the managed server - use the same config name, but a different ARME port - say 28001
  2. Modify the set-wls-env.cmd in the SSM-HOME/wls-ssm/instance/<managed_server_instance_name>/bin to not include the xml-api.jar, xalan.jar, and xercesImpl.jar in the system classpath. If you don't make this change, then SOA Deployments will not work.
  3. Next, you need to create a special start-up script for the managed server that ensures that a different instance is used for the managed and admin servers. If not, the ARME ports will conflict. This can be done as follows:
  • Copy DOMAIN_HOME/bin/startWebLogic.bat to startWebLogic_ManagedServerSM.bat
  • Modify the DOMAIN_HOME/startWebLogic_ManagedServerSM.bat and change the call to the set-wls-env.bat from SSM-HOME/wls-ssm/instance/<admin_instance_name>/bin to SSM-HOME/wls-ssm/instance/<managed_server_instance_name>/bin
  • Copy DOMAIN_HOME/bin/startManagedWebLogic.bat to startManagedWebLogic_ManagedServerSM.bat
  • Modify startManagedWebLogic_ManagedServerSM.bat to call DOMAIN_HOME/bin/startWebLogic_ManagedServerSM.bat instead of DOMAIN_HOME/bin/startWebLogic.bat
Part 3 - Working in this Environment - Discovery Mode is Your Friend

The configtool secures enough of the admin server so that it can start. It makes sense to run in Discovery Mode - at least at the beginning. Remember, there are two instances of the SM, so you'll need to modify the set-wls-env.bat of both SMs. If you don't have the internal resources of SOA Suite protected, then bad things happen - i.e., composites don't get deployed.
Once you get everything starting and deploying smoothly, use the policy import tool to load the policies, and then switch out of Discovery Mode.

Oh, and the reason for having a specific ordering on the authentication providers - with the DefaultAuthenticator first and SUFFICIENT - is that the DatabaseAuthenticator adds a special IdentityDirectoryPrincipal for which JDeveloper doesn't have the classes, so I simplified this by "tweaking" the realm. In practice, the DatabaseAuthenticator is not really used, but it is the authentication provider that is created by default by the configtool. This is really just a minor issue, but I wanted people to understand the reason for the change.

Saturday, August 8, 2009

When all you have is an STS, everything looks like a...

What is a reasonable use of a Security Token Service (STS)? Standards are very useful and powerful tools in enterprise architecture, but they have to be used to solve the right problems. WS-Trust, the standard that an STS relies on, is very flexible. Basically, you request a token and get a token back - you have a UsernameToken (username + password) and you get a SAML Assertion back.

So, this is useful when crossing security domains in federated models. For example, you need to call a 3rd-party web service and it requires a SAML assertion - call the STS, get the SAML Assertion, and send it to the service. Simple enough. We can agree that for this type of use case, most web services clients can just generate the SAML assertion themselves and sign the request - SAML sender-vouches. If the SAML assertion itself has to be signed, then this can create complexity - requiring each client to have the private key of the issuer - so maybe, depending on the number of client applications that are required to federate, having a central service like an STS is preferred to having each client generate the SAML.

Another common use case for the STS generating a SAML assertion is attribute-based authorization. The STS generates a SAML assertion containing the attributes required to access the service. This sounds good in theory, but how does the STS know what attributes are required? Are they published in the WSDL? Assuming that there were a standard way to do this, would services advertise what attributes are required to gain access? Not likely.

Instead, as in most federations, there needs to be some prior arrangement made between the service producer and consumer - you'll send me a SAML Assertion like this with these attributes. This means that the STS has to manage all of the metadata for all of the partners. Is this practical? It might make more sense to just generate a SAML assertion with no attributes, and then have the service call back to the "issuer" for more attributes as needed. The SAML protocol - SAML Attribute Query, without WS-Trust or an STS - can be used to expose additional information to relying parties. There are definitely scenarios where the relying party is not authorized to call back to the asserting domain, so in that case it might make sense to have the SAML Assertion contain a fixed set of common attributes. This generation could also be simplified by an STS.

So as not to be accused of being an STS "hater", here's a scenario I've come across for a POC I'm working on that I actually like for an STS. In an online banking scenario, how the user authenticates (business card + PIN or personal card + PIN) determines which accounts they have access to. Make a call to the STS - authenticate the user, and based on which authentication method they used, filter the accounts they can access. Return the list of accounts in the SAML assertion. Use the accounts contained in the SAML assertion for personalization - I would still go to the system of record for authorizing transactions.

I guess the point is that WS-Trust/STS solves some good use cases, but it is not the only or best solution - neither is SAML or even WS-Security for that matter. In selecting standards for a project or an organization, consider the likely use cases and understand that simpler is almost always better.