Monday, February 28, 2011
OPSS Sample Application available on samplecode.oracle.com
Just a quick post to let anyone interested know that the OPSS sample mentioned on the blog a while back has been uploaded to SampleCode. You can get it at https://opss_sample_code.samplecode.oracle.com/
Labels:
opss
Thursday, February 24, 2011
Finding which JAR contains a class - one upping Mark
So my compatriot Mark posted a little utility he just wrote that makes it easy to find out which JAR file contains the file you're looking for.
I often want to search through a large number of JAR files looking for a particular class, and every time I do this I wish I had some utility to make it easier. That inspired me to dust off my own partly done equivalent and finish it up. So here's my version of Mark's utility. It's mostly the same - I just added a very simple cache so that you only have to read each jar file once.
#!/bin/sh

TARGET="$1"
CACHEDIR=~/.jarfindcache

# prevent people from hitting Ctrl-C and breaking our cache
trap 'echo "Control-C disabled."' 2

if [ ! -d "$CACHEDIR" ]; then
    mkdir "$CACHEDIR"
fi

for JARFILE in `find $PWD -name "*.jar"`
do
    CACHEFILE=$CACHEDIR$JARFILE
    # only list the jar's contents once; reuse the cached listing after that
    if [ ! -s "$CACHEFILE" ]; then
        mkdir -p `dirname $CACHEFILE`
        jar tvf "$JARFILE" > "$CACHEFILE"
    fi
    if grep "$TARGET" "$CACHEFILE" > /dev/null
    then
        echo "$TARGET is in $JARFILE"
    fi
done

Thanks for the inspiration, Mark!

Update: Mark enhanced the script to automatically update the cache whenever a jar file is changed. Get the updated script from Mark's blog. Perhaps we should put this and our other clever tools in a public source code repository!
Monday, February 21, 2011
Non-Standard approaches to SOA Identity Propagation and Authentication
I wanted to take some time to talk about a couple non-standard approaches to identity propagation and authentication that I sometimes see people take when building their web services.
These approaches are non-standard both because they well… don’t utilize the large body of standards that exist for web service security and because they are outside of what would probably be considered best practices by most people in the industry.
Custom Security Headers
It is fairly common to see people develop services that require authentication through a custom security header.
Now, if your service will consume a token that is truly custom, which is to say outside the standard token types defined in the WSS specs, then this is a perfectly reasonable approach. A good example of this would be a service that can consume a web SSO token from Oracle Access Manager.
This approach is especially powerful if your service can consume the custom token plus one or more of the standard tokens. One example would be a service setup to consume the Oracle Access Manager cookie or a standard username token.
However, one mistake you often see people make is to hard code in a required custom security header that is in essence the same as one of the standard WSS authentication methods. Usually, it is a custom security header that just holds a username and password, which makes it equivalent to the WSS username token.
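To make that equivalence concrete, here is what such a custom header typically looks like next to the standard WS-Security UsernameToken carrying exactly the same information. The acme element and namespace names are invented for illustration; the wsse names are from the WSS 1.0 specs.

```xml
<!-- A made-up custom security header carrying a username and password -->
<soap:Header>
  <acme:Security xmlns:acme="http://example.com/acme/security">
    <acme:User>jdoe</acme:User>
    <acme:Password>secret</acme:Password>
  </acme:Security>
</soap:Header>

<!-- The standard WS-Security UsernameToken carrying the same data -->
<soap:Header>
  <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <wsse:UsernameToken>
      <wsse:Username>jdoe</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">secret</wsse:Password>
    </wsse:UsernameToken>
  </wsse:Security>
</soap:Header>
```

The first header buys you nothing over the second, but it does cost you the container's out-of-the-box support.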
There are several reasons developers take this approach, but they mostly center on not understanding the flexibility and value of the security functionality that is already built into the container running their service. For example, developers will sometimes say that they need a custom token because the container will only compare the username in the username token to the UID attribute in the directory, which is simply not the case.
Even if custom handling of authentication is required, it is still better to utilize a standard token (with custom handling) over a custom token.
Another mistake you see people make is once they decide they need a custom token, they will just turn off security at the container level and process the token inside of their service. It is far better for custom authentication code to live inside of the container than in the service itself. Staying with the container for security will:
- Allow you to easily use your custom token with other services
- Allow you to switch to a standard authentication method down the road
- Keep you compatible with authorization and auditing done at the container level which you may want to keep even if you need to do custom authentication.
To summarize, here is the hierarchy of approaches, from most to least preferable:

- Use of standard authentication tokens that are processed by the container’s out-of-the-box functionality.
- Use of standard authentication tokens that are processed by custom code in the container.
- Use of custom authentication tokens/headers that are processed by custom code in the container.
- Use of custom authentication tokens/headers that are processed by code in the service itself.
Identity Propagation in the Request Body

The second approach I’d like to discuss is specific to propagating an identity from a web app or service to a “downstream” service. The approach is simply to stick the user identity into the body of the request and consume it with custom code in the service. Now obviously, this approach is inherently insecure on its own, but it is an appropriate option in situations where the client of the service can be trusted. Usually this trust will be justified by the security that is present at the transport or network level.
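Concretely, “sticking the identity in the body” just means something like the following, where the element carrying the identity and its format are whatever you and your service consumers agree on. All the names here are hypothetical:

```xml
<soap:Body>
  <ord:submitOrder xmlns:ord="http://example.com/orders">
    <!-- hypothetical element carrying the already-authenticated user's
         identity, trusted because of transport/network-level security -->
    <ord:onBehalfOf>jdoe@example.com</ord:onBehalfOf>
    <ord:orderId>12345</ord:orderId>
  </ord:submitOrder>
</soap:Body>
```

Note that nothing in the message itself proves the identity; the trust comes entirely from who is allowed to call the service.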
Sticking a user identity into the body of the request is really analogous to use of the SAML bearer confirmation method which I discussed in my first post on this blog.
So, it is probably most useful to think of its appropriateness in the same terms. In the blog post on the bearer confirmation method I discussed some advantages that it had over just sticking the user identity into the request body. Now, I’d like to turn the table and look at some possible advantages putting the identity in the request body sometimes has over SAML with bearer and over creating a custom security header of a similar ilk.
Now, you are probably saying to yourself: Hold on Brian, you just laid out a hierarchy with using standards and container functionality at the top and custom approaches executed in the service code at the bottom.
Well, that is true, but there is a reason that putting the identity in the request body and extracting it in the service should sometimes be considered. The main problem with SAML with the bearer confirmation method is a lack of support for it. This lack of support applies not only to server-side WSS stacks but, perhaps even more importantly, to client-side stacks. If you are creating a service that may be widely consumed and are operating in a very heterogeneous environment where client-side developers may be using any number of different software packages for their development, then SAML with bearer may be the wrong choice for you. Likewise, it may be harder for consumers of your service to create a custom security header than to just stick the identity in the request body. Along similar lines, it may be easier to convey the format you want to use for the user identity in the WSDL if the identity is being put in the body.
The final consideration for whether propagating the identity in the request body is a good way to go centers on how many services you have where such trusted identity propagation is deemed acceptable. If you have a huge SOA infrastructure with tons of applications consuming tons of internal services, then it probably makes sense to standardize on bearer or to develop a custom security header and a corresponding container plug-in to consume it.
However, if you only have one service that fits this description and SAML with bearer isn’t supported by your stack, then it may be hard to justify the added effort of developing a custom token. In that case, just sticking the user identity in the request body may be the best way to go.
Thursday, February 10, 2011
External Custom Login Forms with Oracle Access Manager 11g
This is the 2nd post in my OAM 11g Academy series. To view the first post in the series which will be updated throughout to contain links to the entire series, click here: http://fusionsecurity.blogspot.com/2011/02/oracle-access-manager-11g-academy.html
While my intent was to make the first few posts on the topic of the OAM 11g policy model, I’ve been getting a ton of requests for help on how to do form based logins using a custom, externally hosted login form with OAM 11g. So, I’ve decided to take a short break from the policy model to tackle that topic.
It is very common for customers to want to redirect users to their own custom login form to authenticate into OAM. There are actually several sub-scenarios to this use case that I will address in a broader post about authentication in OAM 11g, but the thing I want to focus on today is the case of redirecting the user to a login page or application that is “externally” hosted outside of the OAM managed server.
The idea is that when it is time to authenticate the user, the user will be redirected to your own page or application, which can be built using whatever technology you like, including JSP, ASP/.NET, Perl, PHP, etc. You can render the form to look however you want and even potentially do some pre-processing of the user's submission (POST) before sending the credentials along to OAM.
The information on how to do this can be divided into two sections: the authentication scheme configuration and the login.jsp itself.
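As a preview of where this is headed, the core of an externally hosted login page is ultimately just a form that POSTs the collected credentials back to the OAM server's credential collector. The sketch below is based on the OAM 11g defaults as I understand them (the /oam/server/auth_cred_submit action and the username, password, and request_id parameter names); treat it as illustrative and verify the details against your own environment.

```html
<!-- Minimal externally hosted login form posting back to OAM 11g (sketch) -->
<form method="post" action="http://oam-host.example.com:14100/oam/server/auth_cred_submit">
  Username: <input type="text" name="username"/>
  Password: <input type="password" name="password"/>
  <!-- request_id correlates this login with the original authentication
       request; OAM passes it as a query parameter when it redirects the
       user to your login page, so echo it back in the form -->
  <input type="hidden" name="request_id" value="<%= request.getParameter("request_id") %>"/>
  <input type="submit" value="Login"/>
</form>
```

The hidden request_id field is the piece people most often miss; without it OAM cannot tie the submitted credentials back to the request that triggered the login.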
Labels:
oam,
oam 11g academy
Wednesday, February 9, 2011
Certificate X509 Authentication in OAM 11g
From Brian: I'm adding this excellent post by Matt to our OAM 11g Academy series. To view the first post in the series which will be updated throughout to contain links to the entire series, click here: http://fusionsecurity.blogspot.com/2011/02/oracle-access-manager-11g-academy.html
Continuing on the OAM 11g theme, here's an overview of setting up X.509 Authentication in OAM 11g and contrasting it to OAM 10g.
OAM 11g, as you already know, is hosted on WebLogic. The credential collection modules are also on the app tier, which is a departure from the OAM 10g model, where credentials are collected at the web tier. This essentially means that in 11g you have to configure the OAM managed server to prompt for client certificates to perform OAM authentication, whereas in 10g you had to configure the web server to prompt for the certs. I'll give you a quick overview of how this is done. I'm going to assume some level of understanding of creating the JKS and having certificates issued.
If you are still using the Demo Identity and Trust stores, I recommend creating your own "Custom" stores. I used OpenSSL to create a Certificate Authority (CA) and issued a server cert for the WebLogic server with the FQDN of the server as the CN. I also issued a couple of client certificates to represent the end users. In the WebLogic console of the IAM domain, edit the settings of oam_server1 (assuming you kept the default naming) to use the JKS of the domain for identity and trust.
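If you want to follow the same path, the OpenSSL side of that setup can be sketched roughly as follows. File names, subjects, and lifetimes are illustrative, and you would still need to import the resulting keys and certs into your JKS identity and trust stores with keytool:

```shell
# Create a private CA (illustrative names and lifetimes)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=DemoCA" -out ca.crt

# Issue a server cert with the server's FQDN as the CN
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=oam.example.com" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out server.crt

# Issue a client cert to represent an end user
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=jdoe" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt
```
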
In the SSL tab, I like to disable the Hostname Verification module. The important part is setting Two-Way SSL to "Client Certs Requested but not Enforced".
Restart oam_server1 to have these changes take effect.
Now browse to your OAM Console and, under Authentication Modules, create the mapping of the certificate attribute to the LDAP attribute. I disabled cert validation and put in a dummy OCSP URL to satisfy the application's check for a valid URL.
Now you can use the existing X509Auth Scheme (as is) in authentication policies. Import the CA cert into your trusted authority store on your browser and your client certs in your personal store and test away. You can come into the app over HTTP. You'll be redirected to the HTTPS port of the credential collector and then back to HTTP.
The thing I question about this implementation is having to go directly to the app tier for credential collection; many customers don't want this tier exposed to the outside. Another concern is that once you turn on the certs-optional setting, you get the cert prompt even if you're doing username/password authentication.
Tuesday, February 8, 2011
Free Online Security Forum with Oracle and Accenture on Feb. 24
I wanted to let everyone know about a free online security forum with Oracle and Accenture that is coming up in a couple weeks. Now this is obviously not an A-list event since I’m not speaking at it, but I think it is definitely worth attending if you have time.
The event will feature a good line-up of speakers and sessions running from 9:00 a.m. to 1:00 p.m. PT on Thursday, Feb. 24.
The speakers will include:
Mary Ann Davidson, Oracle’s Chief Security Officer, on industry-leading standards, technologies, and practices that ensure that Oracle products—and your entire system—remain as secure as possible.
Jeff Margolies, Partner, Accenture’s Security Practice—on key security trends and solutions to prepare for in 2011 and beyond.
Vipin Samar, Vice President of Oracle Database Security solutions—on new approaches to protecting data and database infrastructure against evolving threats.
Tom Kyte, Senior Technical Architect and Oracle Database Guru—on how you can safeguard your enterprise application data with Oracle’s Database Security solutions.
Nishant Kaushik, Chief Identity Strategist—on how organizations can look to Oracle Identity Management solutions to help them reduce fraud and streamline compliance.
To register for the event, please use the following link: Online Security Forum Registration
Don’t forget to tell them that the A-Team sent you :)
Thursday, February 3, 2011
Oracle Access Manager 11g Academy: The Policy Model (Part 1)
Today I begin what will be a long series of posts covering Oracle Access Manager 11g (OAM 11g). I will be calling this series “OAM 11g academy”.
OAM 11g was released last summer and constitutes a major upgrade/rewrite of OAM, which happens to be one of the more popular Oracle IAM products. My goal with this series is to help everyone attempting to use and deploy the product at various stages by explaining major OAM 11g concepts, making architectural recommendations, pointing out potential pain points, and walking you through common yet non-trivial tasks such as setting up authentication to an external custom login form.
For the entire series content, see here: Oracle Access Manager Academy Index
OAM 11g Policy Model Index:
OAM 11g Policy Model Overview -- Continue below...
OAM 11g Policy Model Part 2: Application Domains and Host Identifiers
OAM 11g Policy Model Part 3: Resources
OAM 11g Policy Model
Today I would like to kick off this series by giving a general overview of the OAM 11g policy model. I define the policy model broadly to mean the set of configurations that determine how OAM will handle a given request. I will be following up today’s post with 3-4 more posts covering the policy model in more detail.
At a conceptual level this means the configurations that determine whether a given resource is protected or unprotected, how to authenticate a user that is trying to access a protected resource, whether a given resource is authorized to make a given request, what headers and cookies to generate in the process of authenticating and authorizing a request, etc.
At a lower level I define policy model to describe all the objects that make up OAM policy configurations and how they relate to each other. This includes objects like resources, ID stores, authentication schemes, and policies themselves.
Yes the Policy Model for OAM 11g is New
The OAM 11g policy model is a little different from the 10g model. At first glance the 11g policy model may seem complicated and some people may feel a little intimidated at the idea of having to learn a whole new policy model from scratch. However, I’m here to tell you today that:
1) The OAM 11g policy model is the most straightforward, easiest-to-understand model in the WAM space.
2) There is still quite a bit of overlap with the 10g model, so OAM 10g users don’t have to feel like you are starting over.
The documentation actually does a pretty good job of laying out the nuts and bolts of the policy model including the object hierarchy.
Policy Model Overview: http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15478/sso.htm#BJFGDIAJ
What You Need to Know
As I mentioned, when you first look at this documentation, it can seem pretty daunting. However, if you cut through the clutter by following the steps I’m about to describe, you will find that creating OAM 11g policies is fairly straightforward; even more so than with OAM 10g.
In the next few posts, I’ll break down the OAM 11g policy model in detail; but to get you started here is what you need to know:
1) When a user makes a request the host part of the URL is transformed into a host identifier and combined with the rest of the URL into an internal representation of the resource being protected. The best way to think about the host identifier is a binding between the hostnames (real or virtual) and URI based resources. I’ll cover host identifiers in more detail in my next post.
2) This internal representation of the request is then compared to the URL patterns of the resources you have defined. If there is a match, then policies will be evaluated based on that resource. I’ll write more about the URL patterns for resources in my next post. The important thing to know for now is that a request will be matched to one and only one OAM resource. When more than one URL pattern matches the URL in the request, a "best match" algorithm decides which resource the request is matched to.
3) A resource can be in no more than one authentication policy and no more than one authorization policy.
4) You choose how you want to authenticate users by changing the authentication scheme selected in an authentication policy.
5) You control what users can access what resources by creating constraints in authorization policies. Additionally, you can use OAM to generate HTTP headers containing information about the user or user session by defining responses in authorization policies. Responses can also be defined in authentication policies but most of the time you’ll want to define them in authorization policies. I’ll cover this in detail in a future post.
6) Anonymous access to resources can be granted by adding the resource to the application domain’s Public Resource authentication and authorization policies. The Public Resource authentication policy utilizes the anonymous authentication scheme and the Public Resource authorization policy simply contains no constraints. Both of these are setup by default in the application domain that is created when you register an agent.
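The "best match" behavior from point 2 can be illustrated with a quick sketch. This is not OAM's actual algorithm, just the general idea: of all the resource patterns that match a URL, the most specific one (approximated here as the longest literal prefix before a wildcard) wins.

```shell
# Illustrative only: pick the matching pattern with the longest
# literal prefix before the first wildcard ("best match" idea).
best_match() {
    url="$1"; shift
    best=""; bestlen=-1
    for pat in "$@"; do
        case "$url" in
            $pat)
                prefix="${pat%%[*?]*}"   # literal part before first wildcard
                if [ "${#prefix}" -gt "$bestlen" ]; then
                    best="$pat"; bestlen="${#prefix}"
                fi
                ;;
        esac
    done
    echo "$best"
}

best_match "/myapp/admin/users" "/myapp/*" "/myapp/admin/*"
# prints /myapp/admin/*
```

Both patterns match the URL, but /myapp/admin/* is more specific, so it wins and its policies apply.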
That is really all there is to it. Define resources in OAM to broadly or narrowly match your real application resources. Add each resource to the appropriate authentication policy based on whether or not you want to require users to be authenticated when accessing those resources.
If you want to limit certain resources to certain user communities then define authorization policies with constraints that restrict access to those communities and put each resource in the appropriate policy. If you don’t care who can access what once your users are authenticated then just put all your resources in an authorization policy with no constraints.
The following are a couple additional details that may help round things out:
1) If a request fails to match up with any of the defined resources then a failure is returned by the OAM server. With 11g webgates this always means that the request will be blocked. With 10g webgates the behavior is controlled by the “denyOnNotProtected” setting. If set to true then the request will be blocked. If set to false, then anonymous access will be granted and the request will be let through the webgate.
2) If the request matches a resource but that resource is not in any authentication policy or not in any authorization policy, then the request will be blocked.
In my next post I will cover the topics of application domains, host identifiers, and resources in detail. Until then, happy policy authoring!
OAM 11g was released last summer and constitutes a major upgrade/rewrite of OAM, which happens to be one of the more popular Oracle IAM products. My goal with this series is to help everyone attempting to use and deploy the product at various stages by explaining major OAM 11g concepts, making architectural recommendations, pointing out potential pain points, and walking you through common yet non-trivial tasks such as setting up authentication to an external custom login form.
For the entire series content, see here: Oracle Access Manager Academy Index
OAM 11g Policy Model Index:
OAM 11g Policy Model Overview -- Continue below...
OAM 11g Policy Model Part 2: Application Domains and Host Identifiers
OAM 11g Policy Model Part 3: Resources
OAM 11g Policy Model
Today I would like to kick off this series by giving a general overview of the OAM 11g policy model. I define policy model to broadly mean the set of configurations that determine how OAM will handle a given request. I will be following up today’s post with 3-4 more posts on the
At a conceptual level this means the configurations that determine whether a given resource is protected or unprotected, how to authenticate a user that is trying to access a protected resource, whether a given resource is authorized to make a given request, what headers and cookies to generate in the process of authenticating and authorizing a request, etc.
At a lower level I define policy model to describe all the objects that make up OAM policy configurations and how they relate to each other. This includes objects like resources, ID stores, authentication schemes, and policies themselves.
Yes the Policy Model for OAM 11g is New
The OAM 11g policy model is a little different from the 10g model. At first glance the 11g policy model may seem complicated and some people may feel a little intimidated at the idea of having to learn a whole new policy model from scratch. However, I’m here to tell you today that:
1) The OAM 11g policy model is the most straight forward, easiest to understand model in the WAM space.
2) There is still quite a bit of overlap with the 10g model, so OAM 10g users don’t have to feel like you are starting over.
The documentation actually does a pretty good job of laying out the nuts and bolts of the policy model including the object hierarchy.
Policy Model Overview: http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15478/sso.htm#BJFGDIAJ
What You Need to Know
As I mentioned, when you first look at this documentation, it can seem pretty daunting. However, if you cut through the clutter by following the steps I’m about to describe, you will find that creating OAM 11g policies is fairly straightforward; even more so than with OAM 10g.
In the next few posts, I’ll break down the OAM 11g policy model in detail; but to get you started here is what you need to know:
1) When a user makes a request, the host part of the URL is transformed into a host identifier and combined with the rest of the URL into an internal representation of the resource being protected. The best way to think about a host identifier is as a binding between the hostnames (real or virtual) and URI-based resources. I’ll cover host identifiers in more detail in my next post.
2) This internal representation of the request is then compared to the URL patterns of the resources you have defined. If there is a match, then policies will be evaluated based on that resource. I’ll write more about the URL patterns for resources in my next post. The important thing to know for now is that a request will be matched to one and only one OAM resource. In the event that more than one URL pattern matches the URL in the request, a "best match" algorithm decides which resource the request is matched to.
3) A resource can be in no more than one authentication policy and no more than one authorization policy.
4) You choose how you want to authenticate users by changing the authentication scheme selected in an authentication policy.
5) You control what users can access what resources by creating constraints in authorization policies. Additionally, you can use OAM to generate HTTP headers containing information about the user or user session by defining responses in authorization policies. Responses can also be defined in authentication policies but most of the time you’ll want to define them in authorization policies. I’ll cover this in detail in a future post.
6) Anonymous access to resources can be granted by adding the resource to the application domain’s Public Resource authentication and authorization policies. The Public Resource authentication policy utilizes the anonymous authentication scheme, and the Public Resource authorization policy simply contains no constraints. Both of these are set up by default in the application domain that is created when you register an agent.
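To make points 1 and 2 above concrete, here is a small illustrative sketch of host identifier lookup and "best match" resource selection. To be clear, this is not OAM's actual implementation; the configuration structures, names, and the longest-pattern tie-breaker are all assumptions made purely for illustration:

```python
# Hypothetical sketch of steps 1 and 2: map the request's host to a
# host identifier, then pick the single best-matching resource.
from urllib.parse import urlparse

# Hypothetical host identifier config: each identifier binds one or
# more hostname variations to a single logical name.
HOST_IDENTIFIERS = {
    "myapp.example.com": "MyAppHost",
    "myapp.example.com:80": "MyAppHost",
    "myapp": "MyAppHost",
}

# Hypothetical resource definitions: (host identifier, URL pattern).
RESOURCES = [
    ("MyAppHost", "/**"),
    ("MyAppHost", "/app/**"),
    ("MyAppHost", "/app/admin/**"),
]

def _pattern_matches(pattern, path):
    # Simplified matching: treat a trailing "/**" as a prefix wildcard.
    if pattern.endswith("/**"):
        prefix = pattern[:-3]
        return prefix == "" or path == prefix or path.startswith(prefix + "/")
    return pattern == path

def match_resource(url):
    """Normalize the request and return the one best-matching resource."""
    parsed = urlparse(url)
    host_id = HOST_IDENTIFIERS.get(parsed.netloc)
    if host_id is None:
        return None  # no host identifier -> request is unmatched
    candidates = [
        (host, pattern) for host, pattern in RESOURCES
        if host == host_id and _pattern_matches(pattern, parsed.path)
    ]
    if not candidates:
        return None
    # "Best match" stand-in: prefer the most specific (longest) pattern.
    return max(candidates, key=lambda r: len(r[1]))
```

With this sketch, a request for `/app/admin/users` matches all three patterns but is bound only to `/app/admin/**`, mirroring the one-resource-per-request rule described above.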
That is really all there is to it. Define resources in OAM to broadly or narrowly match your real application resources. Add each resource to the appropriate authentication policy based on whether or not you want to require users to be authenticated when accessing those resources.
If you want to limit certain resources to certain user communities, then define authorization policies with constraints that restrict access to those communities and put each resource in the appropriate policy. If you don’t care who can access what once your users are authenticated, then just put all your resources in an authorization policy with no constraints.
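The constraint-and-response idea from points 4 and 5 can be sketched as follows. Again, the class, attribute, and header names here are hypothetical stand-ins, not OAM's real objects or API:

```python
# Hypothetical sketch of an authorization policy: group-based
# constraints restrict access, and responses emit HTTP headers
# populated from user attributes.
from dataclasses import dataclass, field

@dataclass
class AuthorizationPolicy:
    name: str
    allowed_groups: set = field(default_factory=set)  # empty = no constraints
    responses: dict = field(default_factory=dict)     # header name -> user attribute

    def authorize(self, user):
        # A policy with no constraints allows any authenticated user.
        if self.allowed_groups and not (self.allowed_groups & set(user["groups"])):
            return False, {}
        headers = {h: user.get(attr, "") for h, attr in self.responses.items()}
        return True, headers

# Hypothetical policy restricting a resource to administrators and
# passing the user id downstream in a header.
admin_policy = AuthorizationPolicy(
    name="Admin Protected",
    allowed_groups={"Administrators"},
    responses={"OAM_REMOTE_USER": "uid"},
)
```

An open policy would simply leave `allowed_groups` empty, matching the "no constraints" case described above.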
The following are a couple additional details that may help round things out:
1) If a request fails to match up with any of the defined resources then a failure is returned by the OAM server. With 11g webgates this always means that the request will be blocked. With 10g webgates the behavior is controlled by the “denyOnNotProtected” setting. If set to true then the request will be blocked. If set to false, then anonymous access will be granted and the request will be let through the webgate.
2) If the request matches a resource but that resource is not in any authentication policy or not in any authorization policy, then the request will be blocked.
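The two edge cases above reduce to a small decision flow, sketched here. The function and parameter names are hypothetical (only `denyOnNotProtected` is a real OAM setting, spelled here in Python style):

```python
# Hypothetical sketch of what happens when a request matches no
# resource, or matches a resource that is missing from a policy.
def handle_request(resource, authn_policy, authz_policy,
                   webgate_version="11g", deny_on_not_protected=True):
    if resource is None:
        # No defined resource matched the request.
        if webgate_version == "11g":
            return "BLOCKED"  # 11g webgates always block unmatched requests
        # 10g webgates consult the denyOnNotProtected setting.
        return "BLOCKED" if deny_on_not_protected else "ALLOWED_ANONYMOUS"
    if authn_policy is None or authz_policy is None:
        # A matched resource outside any authn or authz policy is blocked.
        return "BLOCKED"
    return "EVALUATE_POLICIES"
```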
In my next post I will cover the topics of application domains, host identifiers, and resources in detail. Until then, happy policy authoring!
Labels:
oam,
oam 11g academy