Saturday, October 10, 2009
They will let anyone have a Subversion repository
The first project I created was for the OESBPELAuditProvider. What is that, you say? It's an AuditProvider that can be configured to run inside an OES Admin Server. It looks for the specific messages that the Admin Server creates when policies are changed, and captures those changes in XML files on the admin server, writing them to a "pending" folder. When the changes are finally picked up by an SM (also audited), the XML files are copied from the "pending" folder to the "committed" folder. The idea is for a BPEL process, using a file adapter, to then go do something with these messages. The most obvious use would be to go and calculate a whole bunch of authorizations.
The launch page for the project is https://soa-security.samplecode.oracle.com
I'm relatively new to the samplecode site, so I'm not sure of the visibility of this project right now; it may need some "approvals". Also, I think you may need to request access to the Subversion repository. I'll check and see what I can do to make it publicly readable.
This will be the first of many projects to be added there, but if other people have their own work and need a Subversion repository to put it in, consider ours.
OES OWSM 11g Custom Assertion Finally Done
So, this is an update of the OES-OWSM Custom step that was part of my OOW 2008 presentation. In doing the "migration" to 11g, I think I learned a number of things that I'll share over the coming weeks.
OES Adjudication Provider
I was frustrated by the complexity of securing an 11g SOA domain with OES. It seemed like the biggest issue was writing OES policies for the WLS resources. In both scenarios, I just wanted to call the OES API, and securing the WLS resources was just a consequence of the fact that ASIAuthorizer plugged into the SSPI framework, and that the Adjudicator couldn't tell the difference between OES and WLS resources. Well, for the OOW demo, I created an OES Adjudicator that only enforces the decision from the ASI or XACML authorizer depending on the resource. This greatly simplifies the deployment, because there are no OES policies for WLS resources. It took me a while to build it, but I think ultimately this is a reasonable solution for POC environments. It might be better in a production environment to author some basic policies for WLS resources in the ASI authorizer. My concern is that the overhead of going through two authorizers might not be worth the simplicity.
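To give a feel for it, here's a minimal sketch of the adjudication logic. This is not the code from the OOW demo, and the AdjudicationProviderV2/MBean plumbing is omitted. It assumes the standard WLS SSPI AdjudicatorV2 contract (one Result per configured authorizer, in configuration order), that the ASIAuthorizer is configured first and the XACML authorizer second, and the resource-type test is purely hypothetical:

import weblogic.security.service.ContextHandler;
import weblogic.security.spi.AdjudicatorV2;
import weblogic.security.spi.Resource;
import weblogic.security.spi.Result;

public class OESSelectiveAdjudicator implements AdjudicatorV2
{
    // Assumption: ASIAuthorizer is the first configured authorizer, XACML the second
    private static final int ASI = 0;
    private static final int XACML = 1;

    public void initialize(boolean requiresUnanimousPermit)
    {
        // not used - this adjudicator always defers to exactly one authorizer
    }

    public boolean adjudicate(Result[] results, Resource resource, ContextHandler handler)
    {
        // Let OES (ASI) decide only for the application resources;
        // let the XACML authorizer decide for the built-in WLS resource types
        int decider = isOesResource(resource) ? ASI : XACML;
        return results[decider] == Result.PERMIT;
    }

    private boolean isOesResource(Resource resource)
    {
        // Hypothetical test - match whatever resource type(s) your OES policies cover
        return "MyOESApplication".equals(resource.getType());
    }
}

The net effect is the one described above: no OES policies are needed for the WLS resources, because their results simply come from the default authorizer.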
Authorization based on SAML Attributes
I extended the existing custom step to be able to resolve XPath queries from either the body or the header of the SOAP:Envelope. This opens up the possibility of doing authorization based not just on the content of the SOAP message, but also on the headers. SAML Attributes are part of the SAML Assertion, and that's available in the WS-Security header. This gives a concrete implementation of the attribute-based authorization and federated authorization use cases discussed in this post. The implementation uses the SAML capabilities of OWSM 11g. The key capability here is the ability of the OWSM client side policy to generate a SAML Assertion based on the attributes of the user in the WLS LDAP. OWSM made this whole use case really very simple.
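To make that concrete, here's a small, self-contained sketch (not the custom step itself) of evaluating an XPath expression against the SOAP header to pull a SAML attribute value. The namespace prefixes, the SAML 1.1 attribute name "department", and reading the envelope from a string are all just placeholders; the real step gets the message from the OWSM context.

import java.io.StringReader;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SamlAttributeXPath {
    public static void main(String[] args) throws Exception {
        // In the custom step the envelope comes from the message context
        String soapEnvelope = args[0];

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(soapEnvelope)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                if ("soap".equals(prefix)) return "http://schemas.xmlsoap.org/soap/envelope/";
                if ("wsse".equals(prefix)) return "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";
                if ("saml".equals(prefix)) return "urn:oasis:names:tc:SAML:1.0:assertion";
                return null;
            }
            public String getPrefix(String uri) { return null; }
            public java.util.Iterator getPrefixes(String uri) { return null; }
        });

        // Pull a (hypothetical) "department" attribute out of the SAML Assertion
        // carried in the WS-Security header, instead of out of the SOAP body
        String expr = "/soap:Envelope/soap:Header/wsse:Security/saml:Assertion"
                    + "/saml:AttributeStatement/saml:Attribute[@AttributeName='department']"
                    + "/saml:AttributeValue/text()";
        String department = xpath.evaluate(expr, doc);
        System.out.println("department = " + department);
    }
}

From there, the attribute value can be fed into the authorization call like any other piece of message content.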
Writing an 11g OWSM Custom Assertion
I definitely picked up some best practices from engineering, especially on how to get an OWSM custom assertion into a policy that can be deployed to protect a WLS Web Service. In 11g environments, it makes sense to use OWSM to protect both composite web services as well as WLS web services. The reason is that you get centralized policy management and avoid a lot of interoperability headaches. The OWSM 11g and WLS 11g web services stacks do work well together, but having the same stack for both producer and consumer greatly simplifies the process.
Like I said, there are more details to follow. Many of them I'll discuss/demonstrate at my OOW session (2nd shameless plug), but all of them I will share on this blog in good time. For those who were awaiting the OES-OWSM 11g custom step and are not California Angels fans, feel free to contact me and I'll see what I can do about getting the step available ASAP. For Angels fans and others who can wait, I'll get this information out as quickly as I can.
If anyone wants to meet up at OOW, I'm happy to accommodate, just ping me and let me know. Also, I'll try to post some updates from the conference.
Safe Travels
Wednesday, October 7, 2009
Calling OES from inside a J2EE web app
The public javadoc describes the ALESControl at a high level. From that you gather that the control is a plug-in to Workshop for WebLogic that makes calling OES for WebLogic Portal or WebLogic Interaction easier. But why am I talking about it here when I am writing an app that has nothing to do with Portal or Interaction?
I'm glad you asked.
As long as your code is running inside WebLogic, the ALESControl provides the simplest interface to OES that you can imagine. Here's an example of calling OES using the ALESControl.
ALESControl ctrl = new ALESControlImpl();
if ( ctrl.isAccessAllowed(resource, action, m))
System.out.println( "access is allowed" );
else
System.out.println( "access is denied" );
The params resource and action are each a simple String. The third param, m, is a Map of additional attributes (name/value pairs) to pass along to OES.
Notice what's not there? For one thing the user's identity - that comes from WebLogic's security context automatically. You also don't have to do any initialization, configuration or indeed anything that could be called hard or messy.
I tend to wrap even this simple code in my own interface so that if I ever repurpose some of my code and need to use some other interface to OES my changes are localized in one place.
Anyway here's my wrapper, or at least the part of it that you care about.
public class AZRequestHandler implements AZRequestInterface
{
    private String action = "";
    private String resource = "";
    private HashMap m = new HashMap();

    public void setAction( String action )
    {
        this.action = action;
    }

    public void setResource( String resource )
    {
        this.resource = "Application/" + resource;
    }

    public void addAttribute(String name, String value) {
        System.out.println( "(String) value '" + value + "'" );
        m.put(name, value);
    }

    public void addAttribute(String name, int value) {
        System.out.println( "(integer) value " + value );
        m.put(name, value);
    }

    public String[] getRoles()
    {
        ALESControl ctrl = new ALESControlImpl();
        String roles[] = null;
        try {
            Collection x = ctrl.getRoles(resource, action, m);
            roles = new String[x.size()];
            int i = 0;
            Iterator it = x.iterator();
            while( it.hasNext() )
            {
                roles[i++] = it.next().toString();
            }
        }
        catch (ALESControlException e) {
            // TODO Auto-generated catch block
            System.out.println( "Exception caught" );
            e.printStackTrace();
        }
        return roles;
    }

    public boolean isAuthorized() {
        ALESControl ctrl = new ALESControlImpl();
        // Fall through = return false (fail safely)
        boolean retval = false;
        try {
            if ( ctrl.isAccessAllowed(resource, action, m))
                retval = true;
        } catch (ALESControlException e) {
            // TODO Auto-generated catch block
            System.out.println( "Exception caught" );
            e.printStackTrace();
        }
        return retval;
    }
}
Yes, there are some System.out.println() calls in there and there's no actual handling of Exceptions. I leave that to you, but this is good enough for my little demo.
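In case it helps, here's roughly how the wrapper gets called from the web app. The resource name, action, and attribute are just placeholders for whatever your OES policies actually use:

AZRequestHandler az = new AZRequestHandler();
az.setResource("MyWebApp/reports");   // becomes "Application/MyWebApp/reports"
az.setAction("view");
az.addAttribute("amount", 100);       // optional attributes for the policy to evaluate
if ( az.isAuthorized() )
    System.out.println( "access is allowed" );
else
    System.out.println( "access is denied" );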
If you happen to be using JDeveloper to make your web app, then to get the ALESControl interface all you need to do is add Oracle/Middleware/ales32-ssm/wls-ssm/lib.eclipsePlugins/ALESControl.jar to your project's Classpath entry. Then deploy the app as normal.
Sometimes simple is best.
Fat Client and SOA - A case for SAML Sender Vouches and STS
There are a number of questions that drive the solution in a case like this:
- What directory/data source will users be authenticating to? A local source or a remote source?
- Are those same directories/data sources readily available to the consuming services?
In the cases where the directory and the services are in the same security domain, and the directory is "readily available", there is no need for anything elaborate. I think using the native authentication of the directory (say LDAP) and then passing the user's identity to the services as something simple (an HTTP header or a WS-Security UsernameToken with no password) would probably work. Applications can just take the username (or DN) from the request and call back to the directory to get additional information. One last thing: you need some mitigation strategy to keep people from spoofing DNs (adding a DN that isn't theirs to the request). The simplest way is to do the requests over 2-way SSL. Package the certificate with the application and there you go. BTW, the CSF function of OPSS is a nice approach for this use case - it relies on Java security to ensure that only authorized applications have access to the credential (the password for decrypting the private key).
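To make the spoofing mitigation concrete, here's a rough sketch of a servlet filter on the service side that only trusts the identity header when the request arrived over 2-way SSL. This isn't from any product, and the header name "X-Authenticated-DN" is entirely hypothetical:

import java.io.IOException;
import java.security.cert.X509Certificate;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class TrustedIdentityFilter implements Filter {

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;

        // Hypothetical header carrying the already-authenticated user's DN
        String dn = http.getHeader("X-Authenticated-DN");

        // Only trust the header if the caller presented a client certificate,
        // i.e. the request came in over 2-way SSL from a known application
        X509Certificate[] certs =
            (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");

        if (dn == null || certs == null || certs.length == 0) {
            throw new ServletException("identity header missing or not sent over 2-way SSL");
        }

        // From here the application can call back to the directory with the DN
        // to pick up groups/attributes, then continue processing
        chain.doFilter(req, res);
    }
}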
The harder use case is more of a federated model - for example, the user needs to authenticate locally, but the services are in another security domain. In this model, if there is additional information that the services need about the user, it needs to be passed in some form. I think that SAML Sender Vouches works nicely here. So, the application authenticates locally and then gets a SAML Assertion, signed by the issuer. The SAML Assertion could/should contain additional information needed by the service - groups/roles/attributes etc. The SAML Assertion is added to the message and the message is signed.
This is actually a good use case for an STS. Basically, the STS takes a username and password in and returns a SAML Assertion for the service. Think of it as a standards-based authentication service, where the standard is WS-Trust. The stand-alone application can just be configured to point to the local STS and the application is done - no need to specify support for LDAP, RDBMS...that's left to the local deployment.
The reality is that you could actually solve the first scenario with SAML/STS as well - it may just be overkill - but starting with this architecture does provide much more flexible business models. For example, some customers of the service want to authenticate locally, while others want to authenticate centrally. Not a problem. It's simply a matter of configuration. In the fully federated case, the centralized service trusts the local authentication and can avoid the headache of password management. That issue can be pushed out to the local directories - at least that's the vision.
CAPTCHA vs. Strong Authentication (with OAAM)
A colleague was asked by a customer for a comparison between using a CAPTCHA solution and Oracle Adaptive Access Manager (OAAM). As people try to understand the role of CAPTCHA and different “advanced” authentication solutions in general, this type of question is actually pretty common.
The most common CAPTCHA solutions involve a user picking a series of alphanumeric characters (often distorted or partially obfuscated) out of a generated image and entering the characters along with the rest of the input.
CAPTCHA injects this specific type of challenge-response flow into an authentication (or other web input) to ensure that the input is really coming from a human and not a computer. It is often used with authentication, self-registration, and other application specific interactions like concert ticket buying systems to prevent various denial of service attacks and other mass input abuses of the system.
While CAPTCHA (arguably) does a good job at making sure that a user really is human, that is all that it does. It does nothing to make the actual user authentication stronger. It does nothing to prevent phishing, nothing to detect or prevent fraud, nothing to mitigate stolen passwords. The credentials being supplied in a username and password form with CAPTCHA are still just a username and password.
On the other hand, strong authentication is about adding additional “stronger” credentials into the authentication to go along with a username and password. Usually this means incorporating something a user has like an ATM card, hardware authentication token, or software token or alternatively something a user knows like a series of personal questions that other people aren’t likely to know.
Along with strong authentication often come secure input technologies like personalized pictures and phrases, keypads, and sliders that are utilized to prevent phishing and stolen passwords in general.
OAAM is an exceptionally powerful yet easy to use and deploy strong authentication and fraud prevention solution. You can read more about its capabilities here and by reading the white paper found here. On a personal note, I think OAAM is a very strong product and a leader in the space.
While there is overlap between OAAM (or strong auth in general) and CAPTCHA technologies in that a strong auth solution can help ensure that an application is interacting with a human, there is still a conceivable need for both.
Strong authentication is for, well… authentication, and requires that a user exist and that additional authentication factors be provisioned prior to the authentication. This makes it inappropriate for registration and other interactions where the user may be anonymous. On the other hand, CAPTCHA can be used without knowing specifically who a user is.
It may also be appropriate to use CAPTCHA in an interaction that occurs at some point after (strong) authentication to ensure that a human is still in control of the client system.