Dec 20, 2013

Recently I ran into a situation where I had to specify a default value for a bind variable in a view accessor and access that value in the lookup view object. Upon analysis, I found that when I accessed the value using the bind variable accessor, it was always null, yet in the overridden executeQueryForCollection method I always received the default value I had specified in the params parameter. Needless to say, I did not want to dig through the params object to get at the bind value, so to fix the issue I changed the RowLevelBinds property of the view accessor to false. Note that this property should only be set to false if the row set of the lookup does not change depending upon the current row's attributes. In other words, if you have to fetch a fixed set of data, say a list of all organizations, and you want to avoid re-querying of the data based on the attributes of the current row (the view object row where you are using the lookup), you are better off using RowLevelBinds=false.

Doing so has the following benefits:

a) It avoids re-querying the database.
b) More importantly, in my case, it made the default bind value specified on the view accessor accessible through the bind variable accessor.
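As a minimal sketch (the class name and the bind variable "pOrgType" below are illustrative, not from the actual application), with RowLevelBinds set to false the default value specified on the view accessor should be readable inside the lookup view object itself, instead of arriving only in the params array:

import oracle.jbo.server.ViewObjectImpl;

// Hypothetical lookup view object implementation
public class OrgLookupVOImpl extends ViewObjectImpl {
    @Override
    protected void executeQueryForCollection(Object qc, Object[] params,
                                             int noUserParams) {
        // With RowLevelBinds=false the accessor's default should be
        // visible through the named bind parameter as well
        Object orgType = getNamedWhereClauseParam("pOrgType");
        super.executeQueryForCollection(qc, params, noUserParams);
    }
}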


Note: At this point I do not know whether it is a bug or intended behaviour that causes the default value to be discarded when it is accessed via the bind variable accessor with row-level binds enabled.

Posted on Friday, December 20, 2013 by Unknown

Dec 14, 2013

ADF provides strong transaction-handling mechanisms for database-backed applications built on entity and view objects. But what if you have an application that does not use a database? How do you then detect changes to your data? Let's say you have web-service-backed programmatic view objects, you have extended your application to work without a database, and the user saves some data and clicks on update record. How would you then a) verify whether the data has changed, b) handle commits and rollbacks, and c) warn the user of uncommitted data changes?

In this post I will share solutions to the above problems.

1) Detecting changes: If you are using data controls and bindings, then you can detect changes when the user submits the data by using the following snippet of code. Doing this matters because you do not want to call the backend web service, or execute any update code, when no action is required.

DCBindingContainer dcBindingContainer = (DCBindingContainer)
    BindingContext.getCurrent().getCurrentBindingsEntry();
if (dcBindingContainer.getDataControl().isTransactionModified()) {
    // Handle update calls to your service here
} else {
    // Nothing to do here; maybe notify the user of the same
    JSFUtils.addFacesInformationMessage("No change detected");
}


2) Handling commit or rollback: To handle commits or rollbacks you can use the following code snippet; note that you must commit or roll back the transaction manually.



//To commit the transaction
dcBindingContainer.getDataControl().commitTransaction();
//To rollback the transaction
dcBindingContainer.getDataControl().rollbackTransaction();


3) Implementing an uncommitted data changes warning: To prevent navigation between regions you can implement your own logic along similar lines to the following code snippet. The backing bean is shown first, followed by the test page code.



import java.io.Serializable;

import javax.faces.context.FacesContext;

import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCBindingContainer;

import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
import org.apache.myfaces.trinidad.util.Service;

public class NavBacking implements Serializable {
    private static final long serialVersionUID = 1L;
    // This will hold the value of the task flow id being passed in the setter
    private String newTaskFlowId;
    // This is where the value of the task flow id is picked from
    private String taskFlowId;
    // Client id of the warning popup (matches the popup id "pp3" on the page below)
    private String popupId = "pp3";

    public void setTaskFlowId(String taskFlowId) {
        // Override the setter to stash the task flow id in our own
        // variable for the time being
        this.newTaskFlowId = taskFlowId;
    }

    /**
     * Checks whether the transaction is dirty and, if it is, launches a popup
     * asking the user to confirm whether he wants to discard the changes.
     * @return null to stay on the current view
     */
    public String checkChanges() {
        DCBindingContainer dcBindingContainer = (DCBindingContainer)
            BindingContext.getCurrent().getCurrentBindingsEntry();
        if (dcBindingContainer.getDataControl().isTransactionModified()) {
            // If the old and new task flow ids are the same, allow the change;
            // otherwise warn the user
            if (!newTaskFlowId.equals(taskFlowId)) {
                FacesContext context = FacesContext.getCurrentInstance();
                ExtendedRenderKitService erks =
                    Service.getRenderKitService(context, ExtendedRenderKitService.class);
                // Show popup
                erks.addScript(context,
                    "AdfPage.PAGE.findComponent('" + popupId + "').show();");
            }
        } else {
            this.taskFlowId = newTaskFlowId;
        }
        return null;
    }

    /**
     * If the user insists on discarding changes, roll back the transaction.
     * @return null to stay on the current view
     */
    public String okAction() {
        DCBindingContainer dcBindingContainer = (DCBindingContainer)
            BindingContext.getCurrent().getCurrentBindingsEntry();
        dcBindingContainer.getDataControl().rollbackTransaction();
        this.taskFlowId = newTaskFlowId;
        return null;
    }
}

//Unbounded taskflow page which is used to call the bounded taskflow

<af:popup id="pp3" animate="default" childCreation="deferred" clientComponent="true">
  <af:dialog id="dg1" closeIconVisible="false" type="none"
             title="Uncommitted Data Warning">
    <f:facet name="buttonBar">
      <af:toolbar id="tb1">
        <af:commandButton id="cb1" text="OK" immediate="true"
                          action="#{viewScope.NavBacking.okAction}"/>
        <af:commandButton id="cb2" text="CANCEL" immediate="true" action=" "/>
      </af:toolbar>
    </f:facet>
    <af:inputText id="opt1" readOnly="true" wrap="hard"
                  value="You have made some changes. Are you sure you want to continue?"/>
    <af:spacer id="sp2" height="5"/>
  </af:dialog>
</af:popup>
// Now you have to ensure that each command link enforces a call to checkChanges

<af:commandLink text="pageName" visible="true" immediate="true"
                action="#{viewScope.NavBacking.checkChanges}" id="cl2">
  <af:setPropertyListener type="action"
                          from="taskflowIdGoesHere"
                          to="#{viewScope.NavBacking.taskFlowId}"/>
</af:commandLink>

Note: For this code to work, your bounded task flow must share its data control with the calling unbounded task flow, as sketched below.
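For reference, the sharing is declared in the bounded task flow's definition XML; a typical setting (the flow id is illustrative, the element names come from the standard ADF task flow schema) looks like this:

<task-flow-definition id="my-bounded-flow">
  <!-- Share the data control frame with the calling task flow -->
  <data-control-scope>
    <shared/>
  </data-control-scope>
  <!-- views, activities and control flow rules go here -->
</task-flow-definition>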

Screenshots:-







Links and references:-



Checking Dirty Data by Jobinesh

Posted on Saturday, December 14, 2013 by Unknown

Nov 14, 2013

The ADF Essentials release does not include the two IDM jar files that are used for OPSS integration (these are included with the regular ADF release), even though using the OPSS APIs' features requires no porting effort. I just hope that Oracle can offer these with the ADF Essentials release. Here are the reasons why Oracle should do it.

  1. OPSS APIs are directory-server vendor agnostic: the same code works for OID as it works for OpenLDAP and Active Directory (you only have to switch the providers).
  2. OPSS APIs are container agnostic: the same code runs on WebLogic as on GlassFish.
  3. ADF Essentials lacks a security framework; these APIs could be used to fill the gap.

The following snippet is based on an earlier post of mine which used OID as the provider; this snippet, however, is configured for the OpenLDAP provider.

package model;

import java.security.Principal;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.ResourceBundle;
import java.util.Set;
import java.util.logging.Level;

import oracle.adf.share.ADFContext;
import oracle.adf.share.logging.ADFLogger;
import oracle.adf.share.security.SecurityContext;
import oracle.jbo.JboException;

import oracle.security.idm.ComplexSearchFilter;
import oracle.security.idm.IMException;
import oracle.security.idm.Identity;
import oracle.security.idm.IdentityStore;


import oracle.security.idm.IdentityStoreFactory;
import oracle.security.idm.IdentityStoreFactoryBuilder;
import oracle.security.idm.ModProperty;
import oracle.security.idm.ObjectNotFoundException;
import oracle.security.idm.OperationNotSupportedException;
import oracle.security.idm.Role;
import oracle.security.idm.RoleManager;
import oracle.security.idm.RoleProfile;
import oracle.security.idm.SearchFilter;
import oracle.security.idm.SearchParameters;
import oracle.security.idm.SearchResponse;
import oracle.security.idm.SimpleSearchFilter;
import oracle.security.idm.User;
import oracle.security.idm.UserManager;
import oracle.security.idm.UserProfile;
import oracle.security.idm.providers.oid.OIDIdentityStoreFactory;
import oracle.security.idm.providers.openldap.OLdapIdentityStoreFactory;
/**
*This class can be used to perform operations on OpenLDAP using the OPSS API
* @author Ramandeep Nanda
*/

public class LDAPOperations {
public static final ADFLogger LDAPLogger=ADFLogger.createADFLogger(LDAPOperations.class);
private static final ResourceBundle rb =
ResourceBundle.getBundle("model.myresourcebundle");
/**
*
* @return The store instance for OpenLDAP store
*/
public static IdentityStore getStoreInstance(){
return IdentityStoreConfigurator.initializeDefaultStore();
}
public static IdentityStoreFactory getIdentityStoreFactory(){
return IdentityStoreConfigurator.idStoreFactory;
}


/**
* Assigns the specified role to the user
* @param roleName the role to assign
* @param userName the user to assign role to
*/
public static void assignRoleToUser(String roleName,String userName){
String methodName=Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStore store = LDAPOperations.getStoreInstance();
try {
Role role= store.searchRole(IdentityStore.SEARCH_BY_NAME,roleName);
User user= store.searchUser(userName);
RoleManager rm=store.getRoleManager();
if(!rm.isGranted(role, user.getPrincipal())){
rm.grantRole(role, user.getPrincipal());
}

} catch (IMException e) {
LDAPLogger.severe("Exception in " + methodName + ": Could not assign role [" + roleName + "] to the user [" + userName + "] because of " + e.getMessage(), e);
throw new JboException("Could not assign role ["+roleName+"] to the user ["+userName +"] due to "+e.getMessage());

}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
}
/**
* Assigns the specified roles to the user
* @param roleNames the roles to assign
* @param userName the user to assign the roles to
* @return the set of role names that were newly assigned
*/
public static Set assignRolesToUser(Set roleNames,String userName){
Set rolesAssigned=new HashSet();

String methodName=Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStore store = LDAPOperations.getStoreInstance();
String roleName=null;
try {
User user= store.searchUser(userName);
Principal userPrincipal=user.getPrincipal();
RoleManager rm=store.getRoleManager();
Iterator it=roleNames.iterator();
while(it.hasNext()){
roleName=(String)it.next();
Role role= store.searchRole(IdentityStore.SEARCH_BY_NAME,roleName);
if(!rm.isGranted(role, user.getPrincipal())){
rm.grantRole(role,userPrincipal);
rolesAssigned.add(roleName);
}
}
} catch (IMException e) {

LDAPLogger.severe("Exception in " + methodName + ": Could not assign role [" + roleName + "] to the user [" + userName + "] because of " + e.getMessage(), e);
throw new JboException("Could not assign role ["+roleName+"] to the user ["+userName +"] due to "+e.getMessage());


}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}

return rolesAssigned;
}
/**
* Assigns the specified role to multiple users
* @param roleName the role to assign
* @param users the users to assign the role to
* @return the users who were newly assigned the role
*/
public static Set assignRoleToUsers(String roleName,Map users){
Set usersAssigned=new HashSet();
String methodName=Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStore store = LDAPOperations.getStoreInstance();

Set entrySet = users.entrySet();
Iterator it=entrySet.iterator();
String userName=null;

try {
Role role= store.searchRole(IdentityStore.SEARCH_BY_NAME,roleName);
RoleManager rm=store.getRoleManager();
while(it.hasNext()){
Map.Entry entry=(Map.Entry)it.next();
userName=(String)entry.getKey();
User user= store.searchUser(userName);
if(!rm.isGranted(role, user.getPrincipal())){
rm.grantRole(role, user.getPrincipal());
usersAssigned.add(user);
}
}
} catch (IMException e) {
LDAPLogger.severe("Exception in " + methodName + ": Could not assign role [" + roleName + "] to the user [" + userName + "] because of " + e.getMessage(), e);
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
return usersAssigned;
}

//A revoke sample follows; it is similar to the assign cases above, so only one revoke operation is shown

/**
* To remove the role from user
* @param roleName the role to remove/ revoke
* @param userName the user from which to revoke role
*/
public static void removeRoleFromUser(String roleName,String userName){
String methodName=Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStore store = LDAPOperations.getStoreInstance();
try {
Role role= store.searchRole(IdentityStore.SEARCH_BY_NAME,roleName);

User user= store.searchUser(userName);
RoleManager rm=store.getRoleManager();
if(rm.isGranted(role, user.getPrincipal())){
rm.revokeRole(role, user.getPrincipal());
}
} catch (IMException e) {
LDAPLogger.severe("Exception in " + methodName + ": Could not revoke role [" + roleName + "] from the user [" + userName + "] because of " + e.getMessage(), e);
throw new JboException("Could not remove role ["+roleName+"] from the user ["+userName +"] due to "+e.getMessage());

}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
}
public static void dropUserWithRoles(String userId){
UserManager um = null;
IdentityStore store=null;
try {
store = LDAPOperations.getStoreInstance();
User user = store.searchUser(IdentityStore.SEARCH_BY_NAME, userId);
um=store.getUserManager();
if (user != null) {
//revoke all granted roles first, then drop the user
RoleManager rm = store.getRoleManager();
Principal userPrincipal= user.getPrincipal();
SearchResponse resp=rm.getGrantedRoles(userPrincipal, true);
while(resp.hasNext()){
rm.revokeRole((Role)resp.next(), userPrincipal);
}
um.dropUser(user);
}
}
catch (IMException e) {
LDAPLogger.info("[dropUserWithRoles] " + e);
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
}
public static void dropUser(String userId){
UserManager um = null;
IdentityStore store=null;
try {
store = LDAPOperations.getStoreInstance();
User user = store.searchUser(IdentityStore.SEARCH_BY_NAME, userId);
um=store.getUserManager();
if (user != null) {
//drop user if already present
um.dropUser(user);
}
}
catch (IMException e) {
LDAPLogger.info("[dropUser] " + e);
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
}

/**
* Gets the UserProfile of the specified user
* @param approverUser the user id to look up
* @return the user's profile
*/
public static oracle.security.idm.UserProfile getUserProfile(String approverUser) {
IdentityStore store = LDAPOperations.getStoreInstance();
oracle.security.idm.UserProfile profile=null;
try {
User user= store.searchUser(approverUser);
profile=user.getUserProfile();

} catch (IMException e) {
LDAPLogger.info("Could not find user in the store with supplied id " + approverUser);
throw new JboException(e.getMessage());
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}
return profile;
}
/**
* Gets all the roles matching the default search pattern
* @return the list of role names
*/
public static List getAllRoles(){
String methodName = Thread.currentThread().getStackTrace()[1].getMethodName();
List returnList=new ArrayList();
IdentityStore store = LDAPOperations.getStoreInstance();

try{
SimpleSearchFilter filter=store.getSimpleSearchFilter(RoleProfile.NAME,SimpleSearchFilter.TYPE_EQUAL,null);
String wildCardChar=filter.getWildCardChar();
// DEFAULT_ROLE is a resource-bundle property used here as a placeholder; the filter value can be any pattern you want to search for
filter.setValue(wildCardChar+rb.getString("DEFAULT_ROLE")+wildCardChar);
SearchParameters parameters=new SearchParameters(filter,SearchParameters.SEARCH_ROLES_ONLY) ;
SearchResponse resp=store.searchRoles(Role.SCOPE_ANY,parameters);
while(resp.hasNext()){
Role role=(Role)resp.next();
String tempRole=role.getPrincipal().getName();
returnList.add(tempRole);
}
}catch(IMException e){
LDAPLogger.severe("Exception in "+methodName + " " +e.getMessage() +" ", e);
throw new JboException(e.getMessage());
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}

return returnList;
}
/**
* Fetches all the roles assigned to the user
* @param userName the user whose roles to fetch
* @param searchPath the resource-bundle key for the user search base
* @return the list of granted role names
*/
public static List getAllUserRoles(String userName, String searchPath) {
String methodName = Thread.currentThread().getStackTrace()[1].getMethodName();
List returnList=new ArrayList();
IdentityStoreFactory storeFactory = LDAPOperations.getIdentityStoreFactory();
IdentityStore store=null;
String[] userSearchBases= {rb.getString(searchPath)};
String[] groupSearchBases= {rb.getString("group.search.bases")};
Hashtable storeEnv=new Hashtable();
storeEnv.put(OLdapIdentityStoreFactory.ADF_IM_SUBSCRIBER_NAME,rb.getString("oidsubscribername"));
storeEnv.put(OLdapIdentityStoreFactory.RT_USER_SEARCH_BASES,userSearchBases);
storeEnv.put(OLdapIdentityStoreFactory.RT_GROUP_SEARCH_BASES,groupSearchBases);

try{
store = storeFactory.getIdentityStoreInstance(storeEnv);
User user= store.searchUser(IdentityStore.SEARCH_BY_NAME,userName);
RoleManager mgr=store.getRoleManager();
SearchResponse resp= mgr.getGrantedRoles(user.getPrincipal(), false);
while(resp.hasNext()){
String name= resp.next().getName();
returnList.add(name);
}

}catch(IMException e){
LDAPLogger.severe("Exception in "+methodName + " " +e.getMessage() +" ", e);
throw new JboException(e.getMessage());
}
finally {
try{
store.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}

return returnList;
}

/**
*Changes the password for the logged-in user. It uses the ADF SecurityContext to get the logged-in user name.
*
**/
public static void changePasswordForUser(String oldPassword,String newPassword, String userName){
String methodName =
java.lang.Thread.currentThread().getStackTrace()[1].getMethodName();
SecurityContext securityContext =
ADFContext.getCurrent().getSecurityContext();
String user = securityContext.getUserName();
IdentityStore oidStore=null;
oidStore = LDAPOperations.getStoreInstance();
try {
UserManager uMgr = oidStore.getUserManager();
User authUser =
uMgr.authenticateUser(user, oldPassword.toCharArray());

if (authUser != null) {
UserProfile profile = authUser.getUserProfile();

profile.setPassword( oldPassword.toCharArray(),
newPassword.toCharArray());
}
} catch (IMException e) {
if (LDAPLogger.isLoggable(Level.SEVERE)) {
LDAPLogger.severe("[" + methodName +
"] Exception occurred due to " + e.getCause(),
e);
}
throw new JboException(e.getMessage());
}
finally {
try{
oidStore.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}


}
/**
* Generates a random 8-character hex password
* @return the generated password
*/
public static String generateRandomPassword() {
String password =
Long.toHexString(Double.doubleToLongBits(Math.random()));
int passLength = password.length();
if (passLength >= 8) {
password = password.substring(passLength - 8, passLength);
}

return password;
}

/**
* Resets the password for user
*
**/
public static void resetPasswordForUser(String userId)
{
String methodName =
java.lang.Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStore oidStore = LDAPOperations.getStoreInstance();
User user = null;
try {
user = oidStore.searchUser(userId);
if (user != null) {
UserProfile userProfile = user.getUserProfile();
List passwordValues =
userProfile.getProperty("userpassword").getValues();
ModProperty prop =
new ModProperty("PASSWORD", passwordValues.get(0),
ModProperty.REMOVE);
userProfile.setProperty(prop);
String randomPassword = generateRandomPassword();
userProfile.setPassword(null, randomPassword.toCharArray());
}
} catch (IMException e) {
LDAPLogger.severe("[" + methodName + "] " +
"Exception occurred due to ", e);

}
finally {
try{
oidStore.close();
}
catch (IMException e) {
LDAPLogger.severe("Exception occurred in closing store");
}
}

}


/**
* This nested private class is used for configuring and initializing a store instance
* @author Ramandeep Nanda
*/
private static final class IdentityStoreConfigurator {
private static final IdentityStoreFactory idStoreFactory=initializeFactory();


private static IdentityStoreFactory initializeFactory(){
String methodName = Thread.currentThread().getStackTrace()[1].getMethodName();
IdentityStoreFactoryBuilder builder = new
IdentityStoreFactoryBuilder();
IdentityStoreFactory oidFactory = null;
try {
Hashtable factEnv = new Hashtable();
factEnv.put(OLdapIdentityStoreFactory.ST_SECURITY_PRINCIPAL,rb.getString("oidusername"));
factEnv.put(OLdapIdentityStoreFactory.ST_SECURITY_CREDENTIALS, rb.getString("oiduserpassword"));
factEnv.put(OLdapIdentityStoreFactory.ST_SUBSCRIBER_NAME,rb.getString("oidsubscribername"));
factEnv.put(OLdapIdentityStoreFactory.ST_LDAP_URL,rb.getString("ldap.url"));
factEnv.put(OLdapIdentityStoreFactory.ST_USER_NAME_ATTR,rb.getString("username.attr"));
oidFactory = builder.getIdentityStoreFactory("oracle.security.idm.providers.openldap.OLdapIdentityStoreFactory", factEnv);
}
catch (IMException e) {
LDAPLogger.severe("Exception in "+methodName + " " +e.getMessage() +" ", e);
//re throw exception here
}
return oidFactory;
}
private static IdentityStore initializeDefaultStore(){
IdentityStore store=null;
String methodName = Thread.currentThread().getStackTrace()[1].getMethodName();
String[] userSearchBases= {rb.getString("user.search.bases")};
String[] groupCreateBases= {rb.getString("group.search.bases")};
String []usercreate={rb.getString("user.create.bases")};
String [] groupClass={rb.getString("GROUP_CLASSES")};
Hashtable storeEnv=new Hashtable();
storeEnv.put(OLdapIdentityStoreFactory.ADF_IM_SUBSCRIBER_NAME,rb.getString("oidsubscribername"));
storeEnv.put(OLdapIdentityStoreFactory.RT_USER_SEARCH_BASES,userSearchBases);
storeEnv.put(OLdapIdentityStoreFactory.RT_GROUP_SEARCH_BASES,groupCreateBases);
storeEnv.put(OLdapIdentityStoreFactory.RT_USER_CREATE_BASES,usercreate);
storeEnv.put(OLdapIdentityStoreFactory.RT_USER_SELECTED_CREATEBASE,rb.getString("user.create.bases"));
storeEnv.put(OLdapIdentityStoreFactory.RT_GROUP_OBJECT_CLASSES,groupClass);
try{
store = model.LDAPOperations.IdentityStoreConfigurator.idStoreFactory.getIdentityStoreInstance(storeEnv);
}
catch (IMException e) {
LDAPLogger.severe("Exception in "+methodName + " " +e.getMessage() +" ", e);
// re throw exception here

}
return store;

}


}
}


The resource bundle properties used in this code are mentioned below.



ldap.url=ldap://localhost:389
user.create.bases=ou=Users,dc=ramannanda,dc=blogspot,dc=com
username.attr=uid
oidusername=cn=Manager,dc=ramannanda,dc=blogspot,dc=com
GROUP_CLASSES=groupOfUniqueNames
#not safe
oiduserpassword=oracle
user.search.bases=ou=users,dc=ramannanda,dc=blogspot,dc=com
group.search.bases=cn=groups,dc=ramannanda,dc=blogspot,dc=com
oidsubscribername=dc=ramannanda,dc=blogspot,dc=com
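As a quick usage sketch (the user "jdoe" and role "app_admins" are made-up values for illustration, not part of the original code), the helper class can be driven like this:

// Hypothetical usage of the helpers above
public class LDAPOperationsDemo {
    public static void main(String[] args) {
        LDAPOperations.assignRoleToUser("app_admins", "jdoe");
        java.util.List roles =
            LDAPOperations.getAllUserRoles("jdoe", "user.search.bases");
        System.out.println("Roles granted to jdoe: " + roles);
        LDAPOperations.removeRoleFromUser("app_admins", "jdoe");
    }
}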


Here's the link to the earlier post: OPSS API's with OID






Let's hope they offer the use of these APIs at a nominal cost, or include them with ADF Essentials for free.

Posted on Thursday, November 14, 2013 by Unknown

Oct 13, 2013

Tracing an existing J2EE application (or a legacy application; legacy here means one not yet using CDI) at the database layer is not easy, especially if the application carries no reference to the user whom you want to trace. A cumbersome approach would be to pass the user name or id from the view layer into each method you call on the model layer, and then further down to the class method from which you obtain the database connection. But there is a far easier solution, which I am going to discuss in this post; it can be used to enable database tracing for any legacy application with little effort.

The major issue with existing applications is that the model layer cannot access the HttpSession and hence cannot obtain the id or name of the current user. To overcome this we can use the ThreadLocal class or any implementation of it (in this post I am going to use the slf4j MDC class). A ThreadLocal variable is local to the currently executing thread and cannot be altered by a concurrent thread, so we can use it to store the user information. In a web application, however, each request of a user's session is likely to be handled by a different thread, so to ensure that the user information is present we use a filter that takes the user id from the HttpSession and stores it in the ThreadLocal variable at the start of each request. To avoid memory leaks we also remove the variable once the request completes. Once stored, the variable can be accessed from any class called by that thread, which gives us exactly the information we need to enable tracing at the database layer. The following code snippets show how it can be achieved.

The Filter Class :-

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.slf4j.MDC;

public class UserIdInjectingFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) request).getSession(false);
        if (session != null && session.getAttribute("userID") != null) {
            // Here we populate the MDC for the current thread
            MDC.put("userID", (String) session.getAttribute("userID"));
        }
        try {
            chain.doFilter(request, response);
        } finally {
            // Be sure to remove it; not doing so causes memory leaks
            // and permgen out-of-space errors
            MDC.remove("userID");
        }
    }

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void destroy() {
    }
}
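For completeness, the filter also has to be mapped so that it runs for every request; a typical web.xml entry (the package name is an assumption) might look like this:

<filter>
  <filter-name>UserIdInjectingFilter</filter-name>
  <filter-class>com.example.filters.UserIdInjectingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>UserIdInjectingFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>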

The central database connection management class methods:-

.....

private Connection connection = null;

Connection getDBConnection() {
    CallableStatement cs = null;
    try {
        Context initContext = new InitialContext();
        DataSource ds = (DataSource) initContext.lookup("jdbc/TestDS");
        connection = ds.getConnection();
        // Get the value from the thread-local variable
        String userId = MDC.get("userID");
        cs = connection.prepareCall("begin set_cid(?,?,?); end;");
        cs.setString(1, userId);
        String invokingMethodName = Thread.currentThread().getStackTrace()[3].getMethodName();
        String invokingClassName = Thread.currentThread().getStackTrace()[3].getClassName();
        cs.setString(2, invokingClassName);
        cs.setString(3, invokingMethodName);
        cs.executeUpdate();
    } catch (NamingException nameEx) {
        // Handle the JNDI lookup exception here
    }
    // Be specific :-)
    catch (Exception sqlEx) {
        // Catch your exception here
    } finally {
        try {
            if (cs != null) {
                cs.close();
            }
        } catch (SQLException e) {
            // Log and ignore
            e.printStackTrace();
        }
    }
    return connection;
}

/**
 * Before closing the connection, unset the identifiers
 */
public void closeConnection() {
    try {
        if (connection != null && !connection.isClosed()) {
            CallableStatement cs = connection.prepareCall("begin clear_cid(); end;");
            cs.executeUpdate();
            cs.close();
            connection.close();
        }
    } catch (SQLException sqlEx) {
        // Handle your exception here
    }
}
.....

The PL/SQL procedures to set the identifiers:

create or replace procedure set_cid(p_cid varchar2,p_module_id varchar2,p_method_id varchar2)
is
begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO (p_cid);
DBMS_APPLICATION_INFO.SET_MODULE (p_module_id,p_method_id);

end set_cid;


create or replace procedure clear_cid
is
begin
DBMS_APPLICATION_INFO.SET_CLIENT_INFO (' ');
DBMS_APPLICATION_INFO.SET_MODULE ('','');

end clear_cid;

The query to see the details:-

select client_info,module,action from v$session
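Once the identifiers are being set, a single user's activity can be isolated; for example, assuming the filter stored the user id jdoe:

select client_info, module, action
from v$session
where client_info = 'jdoe';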




Hope this helps !

Posted on Sunday, October 13, 2013 by Unknown

Oct 3, 2013

I developed this utility in Swing back in 2009. It can be used to schedule shutdown, lock, sleep, hibernate, reboot and log-off operations on Windows.

Earlier this utility relied on external dependencies, and because of that it had stopped working. I have now created two versions of the utility, Shutdownbasic.jar and Shutdown.jar. The difference lies in the component used to select the date and time for scheduling the operation.

Shutdown.jar: This version uses an advanced Swing component to select date and time. As this component is a trial version, using it after the trial period might give you an exception during launch, but the functionality will still work.

Shutdownbasic.jar: This is a standalone version which does not use any proprietary Swing component, so it will work without issues.

Using the application:-

  • The jar is runnable, so launch the application by opening a command prompt in the application folder and invoking "java -jar shutdown.jar" (without the quotes) or "java -jar shutdownbasic.jar", or you can set jar files to open with Java by default.
  • Select the time and date for scheduling the operation.
  • Select the operation that you want to execute from the drop-down box and click the submit button.
  • One can always abort the current operation in between by clicking on the abort button.
  • The label at the bottom will display the time left for the execution of the operation.


Download the application from the link below:-








Following are the snapshots of the application:-


[Screenshots of the shutdown utility]



Note:- It is always recommended to create a shortcut for running jar files. Just create a shortcut and in its target prefix the command 'javaw -jar', then save the shortcut.

Posted on Thursday, October 03, 2013 by Unknown

Aug 13, 2013

Plenty of tools are available to assess the code quality of an application, such as PMD, FindBugs, etc. During the product development life-cycle there are various phases such as unit testing, code coverage analysis, performance testing, and code quality analysis. SonarQube is an open source tool that aggregates the aforementioned metrics into a single dashboard through which one can see, manage and iteratively improve upon these aspects. SonarQube has a plug-in based architecture, can analyze different languages within the same project, and maintains a central repository in which you can track changes, and assign and resolve issues.

In this post I will share an example of code analysis done by SonarQube for an ADF project. I have configured a MySQL database as the repository for the SonarQube application. I have also extended the existing rule sets by adding an XPath rule; this rule is based on the PMD XPath template and flags the use of the getRowSetIterator() method. The steps for running the analysis are mentioned below.
  1. Assuming sonar-runner is on your path and SONAR_RUNNER_HOME is configured, create sonar-project.properties in the application directory of your application as shown below:
    # required metadata
    sonar.projectKey=com.blogspot.ramannanda.MasterDetail
    sonar.projectName=ADF Project
    sonar.projectVersion=1.0
     
    # optional description
    sonar.projectDescription=Master Detail Example
    
    #project modules
    sonar.modules=Model,ViewController
    
    # The value of the property must be the key of the language.
    sonar.language=java
    
    # Encoding of the source code
    sonar.sourceEncoding=UTF-8



  2. Create module-specific sonar-project.properties files as shown below:
    #For model project
    # required metadata
    sonar.projectName=Model
    sonar.projectVersion=1.0
     
    # path to source directories (required)
    sonar.sources=src
     
    # optional description
    sonar.projectDescription=Model for Master Detail
    
    #path to binaries
    sonar.binaries=classes 

    For the ViewController project the configuration is similar, as shown below:

    # required metadata
    sonar.projectName=ViewController
    sonar.projectVersion=1.0
     
    # path to source directories (required)
    sonar.sources=src
    
    #optional description
    sonar.projectDescription=ViewController project for 
    
    #path to binaries
    sonar.binaries=classes 



  3. Type the sonar-runner command in the application directory and the analysis will be performed, after which you can open the application and view the results for the project.


Extending the rule sets:-


Now I will provide an example of how to extend the analysis with a custom XPath rule.



  1. Go to the Quality Profiles menu and select a Java profile.


  2. Search for "xpath"; there might be two disabled rules. I have chosen the PMD XPath template rule, as it is currently the supported rule for Java, and expressions can easily be checked using the PMD Designer utility. [Screenshot: SonarQube rule search]


  3. The XPath expression for the rule is shown in the snippet below.
    //PrimaryExpression/PrimaryPrefix/Name[contains(@Image,'getRowSetIterator')]



  4. Activate the rule and run the analysis with the sonar-runner utility; after running the analysis we can see the rule violations for the project. A snippet of the kind of code this rule flags is shown below.
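For illustration (this snippet is mine, not from the analyzed project), here is the kind of binding-layer code that the XPath expression above reports, since the method call image contains getRowSetIterator:

// Hypothetical code that the rule flags; "EmpVO1Iterator" is an
// illustrative iterator binding id
DCBindingContainer bc = (DCBindingContainer)
    BindingContext.getCurrent().getCurrentBindingsEntry();
DCIteratorBinding iterBinding = bc.findIteratorBinding("EmpVO1Iterator");
RowSetIterator rsi = iterBinding.getRowSetIterator(); // flagged by the rule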


[Screenshot: custom PMD rule violations in SonarQube]





As can be seen from the screenshot, we also have the option to assign an issue to an individual, browse the source code, see how long an issue has been open, etc. All in all, this tool is a good place to maintain metrics about a project. In the future, the functionality could be extended further for ADF applications by creating more ADF-specific rule sets.





References:-


http://www.sonarqube.org/


http://pmd.sourceforge.net/pmd-5.0.5/xpathruletutorial.html


Some screens:

[Screenshot: SonarQube dashboard for the ADF project]

[Screenshot: SonarQube treemap]

Posted on Tuesday, August 13, 2013 by Unknown

Apr 22, 2013

While creating applications, developers often ignore the guideline of closing result sets, which is a common cause of memory leaks. In ADF applications it is the RowSetIterators that need to be closed. This scenario can easily be simulated by load testing with JMeter while memory sampling is turned on in JVisualVM. In this post I will share an example of a one-page application that depicts a master-detail relationship between a department and its employees. A disabled af:inputText on the fragment is bound to a backing bean, and its getter method iterates the employee table to calculate the total employee salary. Initially I created a secondary row set iterator and left it open, just to examine the memory leaks that occur under load due to this open reference. The code in the backing bean (including the closing call that eventually fixes the leak) is shown below.

public Number getSum() {
    DCBindingContainer bc = (DCBindingContainer)
        BindingContext.getCurrent().getCurrentBindingsEntry();
    EmpVOImpl empVoImpl =
        (EmpVOImpl) bc.findIteratorBinding("EmpVO1Iterator").getViewObject();
    Number totalAmount = new Number(0);
    RowSetIterator it = empVoImpl.createRowSetIterator(null);
    while (it.hasNext()) {
        EmpVORowImpl row = (EmpVORowImpl) it.next();
        if (row.getSal() != null) {
            totalAmount = row.getSal().add(totalAmount);
        }
    }
    it.closeRowSetIterator();
    this.sum = totalAmount;
    return sum;
}


To identify that you have an open-iterator issue, open JVisualVM's Sampler tab and turn on memory sampling. You will see instances of many different classes, but for this issue you need to focus on ViewRowSetIteratorImpl instances, and then on the project-specific VOImpl classes through which you obtained the iterator in the first place. Whenever you create a new RowSetIterator you are returned a ViewRowSetIteratorImpl instance, so you would expect the number of instances to grow as you open RowSetIterator instances and do not close them.

When I ran the application under simulated load with JMeter with the open iterator reference, I could see the instance count for the ViewRowSetIteratorImpl class increase very quickly. So I took a heap dump with JVisualVM, clicked on the ViewRowSetIteratorImpl class in the Classes tab to open the instances view, and then, upon selecting an instance and querying for the nearest GC root (in the references section), I could see that EmpVOImpl was holding a reference to the ViewRowSetIteratorImpl instance. You can also use the OQL support in JVisualVM to find the references keeping an object alive:



select heap.livepaths(u, false) from oracle.jbo.server.ViewRowSetImpl u


[Screenshot: nearest GC root for a ViewRowSetIteratorImpl instance]



[Screenshot: OQL livepaths result]



After opening the EmpVOImpl instance by clicking on the node, I expanded mViewRowSet (a field which holds the ViewRowSetImpl instance) and then, further expanding its mViews property, I could see a number of ViewRowSetIteratorImpl instances, as shown below.



[Screenshot: EmpVOImpl holding ViewRowSetIteratorImpl instances]



To resolve the issue, all I had to do was close the open iterator by calling the closeRowSetIterator method on the RowSetIterator instance.



There are other ways of identifying the problem in your source code, such as turning on profiling in JVisualVM, but profiling on a production system is not a recommended way of approaching the issue.



The comparison between the application with open iterators and with closed iterators is shown below.






[Screenshots: memory profile with open iterators vs. with iterators closed]



In this application JMeter was used to simulate the load, with a loop controller configured to select different master rows, so that on each selection change and partial refresh the getter method backing the input text was called again, increasing the number of iterator instances.



[Screenshot: JMeter configuration]



The application and jmx file can be downloaded from the links below:




  1. JMeter test plan


  2. Application

Posted on Monday, April 22, 2013 by Unknown

Apr 19, 2013

A user on the OTN forum was trying to extend AdfcExceptionHandler to redirect to an error page and was facing a "Response Already Committed" exception. On examining the scenario I could see that the phase in which the exception handler was being called was RENDER_RESPONSE, and by then it is too late to redirect, as part of the response has already been sent (you can check this with the response.isCommitted() method). So redirects/forwards should never be issued from AdfcExceptionHandler in the conventional way.

The question also arises why someone would need to extend AdfcExceptionHandler in the first place. We need to do this because default exception handling will not handle exceptions that occur during the render response phase; for example, if the connectivity to the DB goes down during application execution, the exceptions raised then will not be handled by the default exception handling mechanism.

So we cannot use conventional redirects, but we still have to redirect somehow; the solution is to issue a JavaScript redirect from the exception handler. The code for the exception handler is shown below.

package com.blogspot.ramannanda.view.handlers;
import java.sql.SQLException;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseId;
import javax.servlet.http.HttpServletRequest;
import oracle.adfinternal.controller.application.AdfcExceptionHandler;
import oracle.jbo.DMLException;
import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
import org.apache.myfaces.trinidad.util.Service;


public class MyAppUIErrorHandler extends AdfcExceptionHandler {
    public MyAppUIErrorHandler() {
        super();
    }

    @Override
    public void handleException(FacesContext facesContext, Throwable throwable,
                                PhaseId phaseId) throws Throwable {
        ExternalContext ctx =
            FacesContext.getCurrentInstance().getExternalContext();
        HttpServletRequest request = (HttpServletRequest) ctx.getRequest();
        if (phaseId.getOrdinal() == PhaseId.RENDER_RESPONSE.getOrdinal()) {
            // Handle NullPointerException, SQLException or runtime DMLException
            if (throwable instanceof DMLException ||
                throwable instanceof SQLException ||
                throwable instanceof NullPointerException) {
                String contextPath = request.getContextPath();
                FacesContext context = FacesContext.getCurrentInstance();
                StringBuffer script = new StringBuffer();
                // Setting window.location.href causes the redirect
                script.append("window.location.href= '").append(contextPath).append("/error.html';");
                ExtendedRenderKitService erks =
                    Service.getRenderKitService(context,
                                                ExtendedRenderKitService.class);
                erks.addScript(context, script.toString());
                // Encoding the script is required, as just adding it does not work here
                erks.encodeScripts(context);
                return;
            } else {
                super.handleException(facesContext, throwable, phaseId);
            }
        }
    }
}
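As an aside, for ADF to pick up a custom ExceptionHandler like this one, it is registered through the standard services mechanism: create a plain-text file named oracle.adf.view.rich.context.ExceptionHandler under the ViewController project's META-INF/services folder, containing the fully qualified class name on a single line:

com.blogspot.ramannanda.view.handlers.MyAppUIErrorHandler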


So with this code we redirect to the error.html page. I am also checking the phaseId here because the handler will be called for each phase in turn, and we want to do the handling in the render response phase.



Note: I have used the encodeScripts method after adding the script; this is required because encodeScripts outputs any scripts required by the RenderKit, which in our case is the script we just added.






[Screenshot: ADF error handling in action]

Posted on Friday, April 19, 2013 by Unknown

Apr 2, 2013

In this post I will share some scenarios that you might face with the ADF master-detail table components, typically the ones mentioned below:-

  1. Create a row in child table on creation of row in master table (uses Accessor to programmatically create a row in child )
  2. Using Cascade delete option on committed rows

The code sample is based on the SCOTT schema and uses the dept and emp tables. The dept table serves as the master table and the emp table serves as the child table. Also, the association between the entities involved is a composition relationship.

There are a few prerequisites for this sample to work: you need to create triggers on both tables, plus the database sequences that will be used to populate the primary key values. The SQL script is included in the sample app.


1. Creating a row in the child table whenever a row is created in the master table:-

To accomplish this I created a method in the application module implementation class that creates a department row programmatically and then uses the exposed accessor to create the row in the child. The snippet is shown below:-

/**
 * Creates and inserts Dept and Emp rows
 */
public void createDeptAndEmpRow() {
    DeptVOImpl deptVO = this.getDeptVO1();
    DeptVORowImpl row = (DeptVORowImpl) deptVO.createRow();
    deptVO.insertRow(row);
    RowIterator iterator = row.getEmpVO();
    Number deptNumber = row.getDeptno().getSequenceNumber();
    NameValuePairs nvps = new NameValuePairs();
    nvps.setAttribute("Deptno", deptNumber);
    EmpVORowImpl empRow = (EmpVORowImpl) iterator.createAndInitRow(nvps);
    iterator.insertRow(empRow);
}


Here I have used the *VOImpl and *VORowImpl class implementations. Also note the partial triggers on the employee table for the "create department and employee" button and the "delete department" button, which cause the employee table to refresh.



2. Using the cascade delete option on committed rows:-



By default, database foreign key constraints ensure that you cannot delete rows from a parent while records exist in the child table, so if you issue a delete statement against the master (Department) table you will receive a foreign key constraint violation exception. The solution is to make the foreign key constraint deferrable, which ensures that validation happens when you issue a commit rather than when you issue the delete statement. To get this to work, drop the existing constraint and recreate it as follows.



ALTER TABLE emp
ADD CONSTRAINT fk_deptno
FOREIGN KEY (deptno)
REFERENCES dept (deptno)
ON DELETE CASCADE
DEFERRABLE
INITIALLY DEFERRED ;





Also note the ON DELETE CASCADE clause, which deletes the records in the child table whenever a row in the master table is deleted.






The database scripts for triggers and sequences are present in the sample project which can be downloaded from here.



[Screenshot: master-detail example page]



[Screenshot: association relationship]

Posted on Tuesday, April 02, 2013 by Unknown

Mar 3, 2013

If you have a requirement to reconcile user status from a target resource, and also to provision the user status value into a target account, you can follow this post.
Let's say that in the target resource the status is marked as A for enabled and D for disabled. To accomplish this using a GTC connector you will need a lookup-based translation.
Provisioning:
For provisioning, let's assume that the target database application table has a column USER_STATUS corresponding to the Status column in OIM, and that the target uses different status values than OIM. To accomplish the provisioning, follow these steps:-
  1. Create a lookup definition as shown in below snapshot

    oimprovstatuslookup
  2. Now create the GTC connector and do not choose trusted source reconciliation, as this is an example of target resource reconciliation and provisioning. Map the OIM user dataset's Status column to the provisioning staging dataset's USER_STATUS column and choose the "create mapping with translation" option. The mapping should be as shown in the screenshot below.

    provstatusmapping
  3. Now when the provisioning happens the USER_STATUS will be populated as A, and when you disable the user the USER_STATUS will be set to D on the target resource.
Reconciliation:
If you also have to reconcile the target resource's status with OIM's resource status, and the target resource uses different status values than OIM, then follow the steps below:-
  1. Create another lookup (you could also reuse the same lookup) that maps the target's statuses to OIM's statuses. Since this is target resource reconciliation, the statuses on the OIM side are Enabled and Disabled; had this been trusted source reconciliation they would have been Active and Disabled. The screenshot below shows the lookup.

    reconcilestatuslookup
  2. Now go to the reconciliation mapping of the GTC connector and, in the reconciliation staging dataset, add a column named RECONCILE_STATUS; choose the "Create mapping with Translation" option and map the USER_STATUS field to the reconciliation lookup literal which holds the translation mapping. Refer to the screenshots below.

    statusreconcile_trans1
    statusreconcile_trans2
  3. After the above mapping is done, map the new RECONCILE_STATUS column in the reconciliation staging dataset to the OIM_OBJECT_STATUS field in the OIM account dataset.

    reconcileoimobjectstatus
This completes the mapping for the GTC connector; now you can test the provisioning and reconciliation. To test the reconciliation, change the status from A to D in the target database table and run the OIM reconciliation scheduler, which in turn will generate the event, and the resource will be disabled in OIM for that particular user.
The following screenshot shows what the entire mapping looks like.
gtcscreen

Posted on Sunday, March 03, 2013 by Unknown

Mar 2, 2013

I recently faced an issue where, after installing and configuring the IDM domain, running config.bat for the OIM server kept crashing on my Windows 7 OS. The problems I faced are mentioned below, along with their causes and solutions.
  1. Problem: Setup.exe has stopped working. Solution: Try placing the JDK in a directory that does not contain spaces, and make sure that if you are using the 64-bit Windows 7 OS the JRE is also 64-bit. To specify the JRE location to the config script, place it in config.bat as shown in the snippet below.
    %ORACLE_HOME%\oui\bin\setup.exe %ARGS% -debug -jreLoc C:\jdk1.6.0_18\jre 

    The above snippet also enables debugging, which is helpful in fixing any other issues that you might face.



  2. Problem: Setup.exe crashes with FileNotFoundException. This happens because config.bat passes an additional parameter, "-oneclick", which causes setup.exe to look for oneclick.properties. Solution: This parameter is not required, so open config.bat and remove it.



  3. Problem: The JVM crashes with an access violation exception. This causes the JVM to dump the process state and occurs due to a compatibility issue; a small snippet of the process dump that shows the issue is included below. Solution: Go to the setup.exe binary in the oui/bin directory and, under the Compatibility tab, check "Disable visual themes" and "Disable desktop composition".

    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000077080895, pid=6968, tid=6860
    #
    # JRE version: 6.0_18-b07
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode windows-amd64 )
    # Problematic frame:
    # C  [ntdll.dll+0x50895]
    #
    # If you would like to submit a bug report, please visit:
    #   http://java.sun.com/webapps/bugreport/crash.jsp

    compatsetup

Phew! That's a lot of issues for one installer. Hope this saves someone else's time, because it sure didn't save mine.

Posted on Saturday, March 02, 2013 by Unknown

Jan 4, 2013

In this post I am sharing the process of building your own custom reconciliation connector. The process flow of the scheduler is shown in the flowchart below.


[Flowchart: custom target reconciliation process]

This flowchart is a simplified version and should serve as a simple aid. The code for the 11g scheduled task is shown below; it is a fairly crude implementation that just serves as a POC, and it does a full target reconciliation.

public class UserDetailReconTask extends TaskSupport {
private static final ADFLogger reconLogger =
ADFLogger.createADFLogger(UserDetailReconTask.class);

public UserDetailReconTask() {
super();
}


public void execute(HashMap hashMap) {
String methodName =
Thread.currentThread().getStackTrace()[1].getMethodName();
tcITResourceInstanceOperationsIntf itRes =
Platform.getService(tcITResourceInstanceOperationsIntf.class);
reconLogger.entering(methodName, hashMap.toString());
//get it resourcename
String itResourceName = hashMap.get("ITResource").toString();
//Get resource object to reconcile
String resourceObjectName = hashMap.get("ResourceObject").toString();
//Get table name
String tableName = hashMap.get("TableName").toString();
HashMap hashmap = new HashMap();
if (reconLogger.isLoggable(Level.INFO)) {
reconLogger.info("[ " + methodName + " ] " +
"Got It Resource name " + itResourceName);
}
hashmap.put("IT Resources.Name", itResourceName);
tcResultSet rss;
tcResultSet parameters;
HashMap paramsMap = new HashMap();
try {
rss = itRes.findITResourceInstances(hashmap);
Long ll = rss.getLongValue("IT Resource.Key");

parameters = itRes.getITResourceInstanceParameters(ll);
for (int i = 0; i < parameters.getRowCount(); i++) {
parameters.goToRow(i);
String paramName =
parameters.getStringValue("IT Resources Type Parameter.Name");
if (paramName.trim().equalsIgnoreCase("DatabaseName")) {
paramsMap.put("DatabaseName",
parameters.getStringValue("IT Resource.Parameter.Value"));
} else if (paramName.trim().equalsIgnoreCase("URL")) {
paramsMap.put("URL",
parameters.getStringValue("IT Resource.Parameter.Value"));
} else if (paramName.trim().equalsIgnoreCase("UserID")) {
paramsMap.put("UserID",
parameters.getStringValue("IT Resource.Parameter.Value"));
} else if (paramName.trim().equalsIgnoreCase("Password")) {
paramsMap.put("Password",
parameters.getStringValue("IT Resource.Parameter.Value"));
} else if (paramName.trim().equalsIgnoreCase("Driver")) {
paramsMap.put("Driver",
parameters.getStringValue("IT Resource.Parameter.Value"));
}
}
} catch (tcAPIException e) {
reconLogger.severe("[ " + methodName + " ] " +
"error occured during retrieving IT Resource",
e);
throw new RuntimeException("[ " + methodName + " ] " +
"error occured during retrieving IT Resource");
} catch (tcColumnNotFoundException e) {
reconLogger.severe("[ " + methodName + " ] " +
"error occured during retrieving IT Resource column name",
e);
throw new RuntimeException("[ " + methodName + " ] " +
"error occured during retrieving IT Resource column name");
} catch (tcITResourceNotFoundException e) {
reconLogger.severe("[ " + methodName + " ] " +
"error occured during retrieving IT Resource by key",
e);
throw new RuntimeException("[ " + methodName + " ] " +
"error occured during retrieving IT Resource by key");
}
reconcileAndCreateEvents(paramsMap.get("UserID").toString(),
paramsMap.get("Password").toString(),
paramsMap.get("Driver").toString(),
paramsMap.get("URL").toString(),
resourceObjectName, tableName);

reconLogger.exiting("UserDetailReconTask", methodName);
}

public HashMap getAttributes() {
return null;
}

public void setAttributes() {
}


/**
* This method gets the data from the source table and then creates the events after
* that OIM applies the rules to check whether the user is there or not
* @param adminID Source admin user id
* @param password Source Password
* @param Driver Driver type
* @param Url jdbc url of the target database
* @param resourceObject target resource object
* @param tableName The target table to reconcile from
*/
private void reconcileAndCreateEvents(String adminID, String password,
String Driver, String Url,
String resourceObject,
String tableName) {
String methodName =
Thread.currentThread().getStackTrace()[1].getMethodName();
reconLogger.entering("UserDetailReconTask", methodName);
ResultSet rs = null;
Connection conn = null;
PreparedStatement ps = null;
try {
Class.forName(Driver);
} catch (ClassNotFoundException e) {
throw new RuntimeException("Unable to find the driver class", e);
}
try {
tcReconciliationOperationsIntf reconService =
Platform.getService(tcReconciliationOperationsIntf.class);
HashMap dataMap = new HashMap();
conn = DriverManager.getConnection(Url, adminID, password);
ps = conn.prepareStatement("Select * from "+ tableName);
rs = ps.executeQuery();
reconLogger.info("[ " + methodName + " ] " +
"Executed the query succesfully");
while (rs.next()) {
//put data in map
dataMap.put("UserLogin", rs.getString(1));
//create reconciliation event
reconLogger.info("[ " + methodName + " ] " + "Got login Id ",
rs.getString(1));
try {
//create reconciliation event
reconService.createReconciliationEvent(resourceObject,
dataMap, true);
reconLogger.info("[ " + methodName + " ] " +
"Created Recon Event", rs.getString(1));

} catch (tcObjectNotFoundException e) {
reconLogger.severe("Unable to find resource object");
throw new RuntimeException("Unable to find the resource object",
e);
} catch (tcAPIException e) {
reconLogger.severe("Unable to create reconciliation event");
throw new RuntimeException("Unable to create reconciliation event", e);
}
}

} catch (SQLException e) {
throw new RuntimeException("Database error during reconciliation", e);
} finally {
//close JDBC resources
try {
if (rs != null) rs.close();
if (ps != null) ps.close();
if (conn != null) conn.close();
} catch (SQLException e) {
reconLogger.severe("Unable to close JDBC resources", e);
}
}
}
}





The method of relevance here is reconcileAndCreateEvents, which fetches the details from the target table and uses them to populate a dataMap with reconciliation field names and values. This map, along with the resource object name, is used to create the reconciliation event, which is then processed by the OIM reconciliation engine. The engine finds the reconciliation rule corresponding to the resource object and applies it; in this case UserLogin is matched with User_Login in OIM and the account linking is performed.



The reconciliation API reference link is here.



The scheduler XML is shown below. It is named UserDetailReconTask.xml and needs to be imported into MDS using the weblogicImportMetadata.sh script. The path needs to be /db/UserDetailReconTask.xml.



<scheduledTasks xmlns="http://xmlns.oracle.com/oim/scheduler">
<task>
<name>UserDetailReconTask</name>
<class>com.blogspot.ramannanda.schedulers.UserDetailReconTask</class>
<description>Target Reconciliation</description>
<retry>5</retry>
<parameters>
<string-param required="true" encrypted="false" helpText="IT Resource">ITResource</string-param>
<string-param required="true" encrypted="false" helpText="Resource Object name">ResourceObject</string-param>
<string-param required="true" encrypted="false" helpText="Table Name">TableName</string-param>
</parameters>
</task>
</scheduledTasks>


The plugin XML is mentioned below.



<?xml version="1.0" encoding="UTF-8"?>
<oimplugins>
<plugins pluginpoint="oracle.iam.scheduler.vo.TaskSupport">
<plugin pluginclass="com.blogspot.ramannanda.schedulers.UserDetailReconTask" version="1.0" name="TrustedSourceReconciliation"/>
</plugins>
</oimplugins>


Note: A real implementation should filter the data from the target for performance, for example by using a last-modified timestamp; a sketch of this follows.
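As a minimal sketch of that idea (the LAST_MODIFIED column and the persisted lastRunTimestamp are assumptions, not part of the scheduler above), the query in reconcileAndCreateEvents could be narrowed like this:

// Illustrative only: assumes the target table has a LAST_MODIFIED column and
// that the task stores the time of its last successful run, e.g. as a
// scheduled-task parameter
PreparedStatement ps = conn.prepareStatement(
    "SELECT * FROM " + tableName + " WHERE last_modified > ?");
ps.setTimestamp(1, lastRunTimestamp);
ResultSet rs = ps.executeQuery();

After a successful run the stored timestamp would then be advanced, so that each execution only reconciles rows changed since the previous one.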

Posted on Friday, January 04, 2013 by Unknown