Tuesday, December 4, 2012

Monday, October 22, 2012

Migrating Forms to APEX: DOAG TV

Interesting short statement by Patrick Wolf (Principal Member of Technical Staff at Oracle) on DOAG TV (in German) about migrating Oracle Forms applications to APEX, and why you should consider redevelopment instead of using the Oracle migration wizard.

http://www.doag.org/home/aktuelle-news/article/doagstatement-so-migieren-sie-von-forms-nach-apex.html

Monday, October 15, 2012

Fault Management Framework - Part 4

Configuration of composite.xml

The policy files are either stored locally, in the same folder where the composite.xml file resides, or in a central place for uniform error handling, e.g. in the metadata store (MDS). The central location is configured by two properties in the composite.xml file; this configuration takes precedence over locally stored policy files.

Example of a central policy file
In this example the fault policy files are stored in the central SOA metadata store:

<property name="oracle.composite.faultPolicyFile">oramds://folderpolicyfiles/fault-policies.xml</property>
<property name="oracle.composite.faultBindingFile">oramds://folderpolicyfiles/fault-bindings.xml</property>

Keep in mind that a fault policy configured in the fault management framework overrides the error handling in a catch activity inside a scope of a BPEL process. Nevertheless, the framework can be configured so that the fault is passed on to the catch activity.
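For example, to hand a fault back to the BPEL fault handlers, the policy can use a rethrow action like the following minimal sketch (the action id follows the naming convention used later in this series):

```xml
<!-- Rethrowing passes the fault back to the BPEL fault handlers
     (catch / catchAll) instead of handling it in the framework. -->
<Action id="ora-rethrow-fault">
    <rethrowFault/>
</Action>
```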

Monday, October 8, 2012

Fault Management Framework - Part 3

Putting conditions and actions into the policy file

The conditions in fault-policies.xml can be used to categorize errors into types:

- Mediator-specific errors
- Business and SOAP errors
- Adapter-specific errors

By using the <test> element you can evaluate and filter $fault.mediatorErrorCode. The following error types exist in the mediator component:

- TYPE_ALL – all mediator errors
- TYPE_DATA – for data-related errors
- TYPE_METADATA – for metadata-related errors
- TYPE_FATAL – for fatal errors
- TYPE_TRANSIENT – for resolvable errors

These groups have sub-groups, e.g. TYPE_FATAL_DB for fatal database errors.
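A mediator-specific condition could then look like the following sketch (the ora-terminate action is assumed to be defined in the Actions section of the policy file):

```xml
<faultName xmlns:medns="http://schemas.oracle.com/mediator/faults"
           name="medns:mediatorFault">
    <condition>
        <!-- match all fatal mediator errors, including sub-types like TYPE_FATAL_DB -->
        <test>contains($fault.mediatorErrorCode, "TYPE_FATAL")</test>
        <action ref="ora-terminate"/>
    </condition>
</faultName>
```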

Example for the database adapter:

<faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
           name="bpelx:bindingFault">
    <condition>
        <!-- ORA-02292: integrity constraint violated -->
        <test>$fault.code="2292"</test>
        <action ref="ora-terminate"/>
    </condition>
</faultName>

By using the <javaAction> element it is possible to execute an external Java class. This class has to implement the interface IFaultRecoveryJavaClass, whose handleFault() method returns a string. Depending on the return value, a different action is executed. If no matching <returnValue> element is defined, the defaultAction is executed.

<Action id="ora-java-execute">
    <javaAction className="com.xxxxpackage.faultpolicy.UserJavaAction"
                defaultAction="ora-terminate" propertySet="someProperty">
        <returnValue value="TERMINATE" ref="ora-terminate"/>
        <returnValue value="RETRY" ref="ora-retry"/>
        <returnValue value="HUMAN" ref="ora-human-intervention"/>
    </javaAction>
</Action>
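A matching Java class could look like the sketch below. In a real composite it would implement Oracle's IFaultRecoveryJavaClass interface; here a minimal stand-in interface is declared so the example is self-contained, and the fault classification itself is purely illustrative:

```java
// Stand-in for the Oracle interface IFaultRecoveryJavaClass,
// reduced to the part relevant here: handleFault() returns a string.
interface FaultRecoveryAction {
    String handleFault(String faultCode);
}

// Illustrative action class: the returned strings match the <returnValue>
// elements of the javaAction, which bind them to concrete actions.
public class UserJavaAction implements FaultRecoveryAction {
    @Override
    public String handleFault(String faultCode) {
        if ("2292".equals(faultCode)) {
            return "TERMINATE"; // integrity constraint violation: retrying is pointless
        }
        if ("12541".equals(faultCode)) {
            return "RETRY";     // TNS "no listener": transient, worth a retry
        }
        return "HUMAN";         // anything else: let an operator decide in the EM console
    }
}
```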
 
There will be a part 4 next week...

Tuesday, October 2, 2012

ADF Essentials: Free version of Oracle ADF

The long-awaited FREE version of Oracle's Application Development Framework has arrived!

"ADF Essentials" contains most features of the original ADF. Unfortunately, the SOA Suite integration part and ADF Mobile are not included in the new version.

The free ADF runs on the GlassFish open-source application server, so no WebLogic Server is necessary!

Hope this will help spread ADF in the community!

More information can be found in this German newsletter:

http://www.doag.org/home/aktuelle-news/article/adf-essentials-doag-begruesst-erscheinen-der-kostenfreien-version-von-adf.html

or on the Oracle Homepage:

http://www.oracle.com/technetwork/developer-tools/adf/overview/adfessentials-1719844.html

Friday, September 28, 2012

Fault Management Framework - Part 2


Structure of a fault-bindings.xml file

The fault policy bindings file fault-bindings.xml, which is located in the same directory, binds the policies of the fault policy file to:

- the SOA composite application
- the BPEL process and Oracle Mediator components
- the reference binding components for BPEL processes and Oracle Mediator service components

<faultPolicyBindings>
 <composite>
 <component>
  <name>
 <reference>
  <name>
  <portType>

The framework identifies the fault policy bindings in the following order:

- reference binding component
- BPEL process or Oracle Mediator service component
- SOA composite application

If no matching condition is found during evaluation, the framework checks the next level.


Example of fault-bindings.xml file

This example associates the fault policy with the composite application:

<?xml version="1.0" encoding="UTF-8" ?>
<faultPolicyBindings version="2.0.2"
 xmlns="http://schemas.oracle.com/bpel/faultpolicy"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<composite faultPolicy="ApplicationProcessingFaults"/>
</faultPolicyBindings>

Another example with definitions for BPEL/Mediator components and for references to an external service:

<?xml version="1.0" encoding="UTF-8"?>
<faultPolicyBindings version="2.0.2"
    xmlns="http://schemas.oracle.com/bpel/faultpolicy"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <composite faultPolicy="Connection_Faults"/>
  <component faultPolicy="Service_Faults">
    <name>Komponente1</name>
  </component>
  <reference faultPolicy="Service_Faults">
    <name>Service1</name>
    <name>Reference2</name>
    <portType xmlns:credit="http://xxxxxxx">xxxxxxService</portType>
    <portType xmlns:db="http://xmlns.oracle.com/pcbpel/adapter/db/insert/">db:insert_plt</portType>
  </reference>
  <reference faultPolicy="test1">
    <name>name3</name>
  </reference>
</faultPolicyBindings>
Part 3 will follow...

Thursday, September 20, 2012

Fault Management Framework - Part 1

A colleague of mine just held a presentation about the fault management framework in Oracle SOA Suite 11g. With her permission, I publish the essential points here.

Oracle SOA Suite 11g provides a fault framework for handling errors in composites and BPEL processes. With this feature, consistent error handling across SCA components is possible.

Structure of fault-policies.xml

The policy file fault-policies.xml has an XML structure and contains conditions for faults (fault names and XPath tests on the fault content) and actions which can be executed. These are: retry, human intervention, replay scope, rethrow fault, abort, and custom Java action.

<faultPolicies>
 <faultPolicy>
  <Conditions>
   <faultName>
    <condition>
     <test>
     <action>
  <Actions>
   <Action>
    <retry>
    <humanIntervention>
    <rethrowFault>
    ….

Furthermore, process instances can be accessed at runtime. For example, if an action results in human intervention, recovery actions can be initiated in the EM console.

Example of a fault-policies.xml file

In this example the action "ora-retry" is assigned to all BPEL remote faults.
The action retries two times with an interval of 5 seconds (growing on each attempt because of <exponentialBackoff/>). If the retries fail, the "Human Intervention" action is executed afterwards.

<?xml version="1.0" encoding="UTF-8" ?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.2"
        id="ProcessingFaults"
               xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="http://schemas.oracle.com/bpel/faultpolicy"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Conditions>
     <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension" name="bpelx:remoteFault">
        <condition>
            <action ref="ora-retry"/>
        </condition>
    </faultName>
      </Conditions>
     <Actions>
      <Action id="ora-retry">
        <retry>
          <retryCount>2</retryCount>
          <retryInterval>5</retryInterval>
          <exponentialBackoff/>
          <retryFailureAction ref="ora-human-intervention"/>
        </retry>
      </Action>
      <Action id="ora-rethrow-fault">
        <rethrowFault/>
      </Action>
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>

Part 2 will follow...


Monday, July 16, 2012

OWSM - Part 3

Here is part 3 of the OWSM series.

OWSM versus OEG (Oracle Enterprise Gateway) -
Which one to use depends on the tasks!

I already published some information about the Oracle Enterprise Gateway here in my blog (http://soaandit.blogspot.de/2012/05/oracle-enterprise-gateway-oeg.html). Some of the capabilities of OWSM and OEG overlap. Therefore you have to decide which of the two tools you want to use. That depends mostly on where they are operated.

Use OWSM in your private zone

If you only need security in your private company zone (green zone), you can use OWSM. It should be applied together with other Fusion Middleware products, e.g. the SOA Suite.

Use OEG in your demilitarized zone (DMZ)

In case you only need security in your DMZ, then apply the OEG there. It has capabilities like intrusion detection, virus checking or message throttling, which are needed in the red zone.

End-to-End-Security

If you need end-to-end security you should use both! Place the OWSM in your private zone for policies, monitoring and web service security. At the same time, apply the OEG in your DMZ and use its functions against attacks from the outside (XML firewall and gateway). This combination will give you maximum security and gateway capabilities in both your private zone and your DMZ.

Thursday, June 28, 2012

OWSM - Part 2

This is part 2 of the OWSM presentation

Supported Standards

The Oracle Web Services Manager (OWSM) supports a lot of different standards concerning security, reliability, addressing and more. Some of the supported standards are:

- SOAP 1.2 (with attachments), MTOM
- WS-Policy 1.2
- WS-MEX (Metadata Exchange)
- WS-ReliableMessaging 1.0
- WS-Security 1.1 and WS-SecurityPolicy 1.1
- UDDI
- JAX-WS policy annotations
- ...

Assertions and policy attachment

In OWSM, policies are made up of one or more assertions. These assertions are applied when the corresponding policy is attached to a service or reference. The assertions are used for both the request and the reply.

Multiple policies can be attached or detached at design time through the JDeveloper context menu and the Property Inspector. At run time, policies can be attached or detached through the Enterprise Manager; there, a bulk attachment of policies to multiple services or clients is also possible.
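In composite.xml such an attachment shows up as a policy reference on the service (or reference) binding; a sketch, with the service name and placeholders purely illustrative and the policy being one of the pre-defined OWSM security policies:

```xml
<service name="MyService">
  <interface.wsdl interface="..."/>
  <binding.ws port="...">
    <!-- pre-defined OWSM username-token policy attached to the service binding -->
    <wsp:PolicyReference URI="oracle/wss_username_token_service_policy"
                         orawsp:category="security" orawsp:status="enabled"/>
  </binding.ws>
</service>
```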

Monitoring

OWSM collects a large amount of monitoring data and metrics for services, ports, and operations. You can also see the number of security violations (in case an authentication or an authorization fails). The monitoring data can be accessed with the Enterprise Manager in the corresponding composite (in the service or reference where the policy is attached).

Part 3 follows...

Tuesday, June 26, 2012

OWSM - Part 1

Recently I gave a presentation about Oracle Web Services Manager (OWSM). I will take the important parts and publish them here, split into several parts. This is part 1.

Introduction to OWSM

OWSM is a solution for policy management and security of a service infrastructure. The policies are controlled through a centralized administration interface, the Enterprise Manager (EM). Service-oriented architectures can be secured declaratively with OWSM. It is part of the Oracle SOA Suite – you cannot start or stop it, and there is no extra development tool.

Features of OWSM

- The unified console for policy management and attachment is the Enterprise Manager
- Policy attachment at design time happens through JDeveloper
- Built-in identity propagation for end-to-end security (automatic identity propagation)
- Monitoring of the policies through Enterprise Manager
- Policy lifecycle management (versioning, activation and deactivation of policies)
- Use of OPSS services (Oracle Platform Security Services) possible

Policy Management of OWSM

- Types of policies: security, reliability, addressing, management and MTOM policies are available
- Custom policies: you can create your own policies
- Pre-defined policies which can be used directly
- Policy advertisement in WSDL and/or via WS-MEX (Metadata Exchange)
- Possibility to generate client policies for existing web services
- Policy monitoring and auditing
- Policy impact analysis: before changing a policy, the administrator can use Oracle EM to view all web service endpoints attached to that policy and evaluate the effect of the change on them
- Policies can be exported and imported
- Policy versioning is possible

Part 2 will follow

Thursday, June 21, 2012

Interview: Are German architects too careful?

Interesting interview with Jürgen Menge (Oracle Sales Consultant) about ADF, APEX, the migration of Forms applications, and the question: "Are German architects too careful?"

In German:
DOAG@Talk: Sind Architekten in Deutschland zu vorsichtig?
http://www.doag.org/home/aktuelle-news/article/doagtalk-sind-architekten-in-deutschland-zu-vorsichtig.html

Friday, June 15, 2012

Tuesday, June 12, 2012

Oracle SOA Suite 11.1.1.6 VirtualBox now available from Oracle

You can now download and try release 11.1.1.6 of the Oracle SOA Suite without installing the whole stack yourself! Oracle just published a VirtualBox image which contains SOA Suite 11.1.1.6 (including OSB), BPM Suite 11.1.1.6 and WebCenter Suite 11.1.1.6. You do not need to install or configure anything; it's ready to use.

You will find all the necessary information and the download here:

http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa-172279.html

Have fun!

Wednesday, May 16, 2012

Redundant code and functions in services: Normalization

If your SOA landscape has existed for a while, a problem could arise that you may also know from systems that do not use a service-oriented architecture: redundant code or functions!

Redundant code or functions means programming logic that exists more than once in your system, or programming logic that produces exactly the same result. Redundant code is dispensable: if you do it right, you can delete the redundant code or function and the results will be the same. The logic of your services can also overlap, which means that only parts of it are redundant.

When will it happen?

If the number of services in your system grows, or already has grown over the years, redundant code and functions can creep in very easily if the analysis of logic and content is not as thorough as it should be.

Problem

If the logic of an existing function changes, you have to adjust every part of your system where this redundant logic was implemented. Often it is very hard to identify all those places. This also has a negative impact on the re-usability of your services.

Solutions

A possible solution is a normalization of (the logic of) your services, which prevents redundancy and overlapping in the service logic of your SOA.

Your efforts to avoid redundant code and functions in your services should start very early in your development process. Right from the start of the analysis phase it should be ensured that conjoint logic is identified and marked as such. At design time, each of those functions should be united and exposed in one service. It does not matter whether you deal with business or technical logic: business logic is packed into a business service, which can be consumed by other business services, whereas technical logic is implemented in a technical service, which can be consumed by business or technical services.
The boundaries of the service logic have to be defined and strictly observed to avoid overlaps.

If you have already observed redundant logic in your services, the only solution is a review and consequent refactoring. The redundant implementation of logic for related or identical problems by different developers in different places occurs oftentimes, especially on the technical level.

SOA

The SOA paradigm makes the refactoring easier in this case. If an identical function or logic has been identified in different places of your code, it can be cut out and implemented as a service of its own; at the cut-out areas the new service must be consumed. Here you have to keep some important patterns in mind to meet the SOA paradigm: are the services still loosely coupled afterwards? Which message pattern must be used? Do you need a proxy or a wrapper service? And so on...

Normalized services and a normalized service repository will help you with re-usability, simplify your maintenance and make things clearer regarding your logic and services.
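As a sketch of the idea (hypothetical names, plain Java classes standing in for real service implementations): two services that each carried their own copy of a tax calculation now consume one shared technical service instead:

```java
// Hypothetical example: OrderService and InvoiceService each contained their
// own copy of the same VAT calculation. After normalization, the shared logic
// lives in exactly one (technical) service that both consume.
class TaxService {
    /** The single place where the tax rule is implemented. */
    double addVat(double netAmount) {
        return Math.round(netAmount * 1.19 * 100) / 100.0; // 19% VAT, rounded to cents
    }
}

class OrderService {
    private final TaxService tax = new TaxService();
    double grossOrderTotal(double net) {
        return tax.addVat(net); // consumes the shared service instead of duplicating it
    }
}

class InvoiceService {
    private final TaxService tax = new TaxService();
    double grossInvoiceAmount(double net) {
        return tax.addVat(net); // a change to the tax rule now happens in one place
    }
}
```

A change to the VAT rule now touches only TaxService, and both consumers stay consistent by construction.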

More information about service normalization:
http://en.wikipedia.org/wiki/Service_Normalization_Pattern
http://www.soapatterns.org/service_normalization.php

Friday, May 11, 2012

Oracle Enterprise Gateway OEG

This time, I will give some information about the Oracle Enterprise Gateway (OEG).

The OEG (formerly Vordel XML Gateway) is a high-performance gateway which can help you expose web services to domains that are not under your control.

It consists of two main features: the transparent security proxy and the service broker. The service broker acts as a control center for web services and is divided into a definable external facade and a configurable internal mediator.

One of the main purposes is the easy development and implementation of security features in a central place. It can also be used to implement a pay-as-you-use model (e.g. in a cloud). It is also very easy to provide standards (like REST) to the external world.

http://www.oracle.com/us/technologies/soa/soa-governance/enterprise-gateway-345737.html
(Oracle's main page for OEG)

The facade is the service broker. It acts as the WSDL interface and control point; methods can be activated and deactivated here. A security proxy does the authentication and authorization using policies. It holds a security token service and features for encryption, decryption, signing and XML threat protection. The mediator is the internal service broker; you can use it for content-based routing, (protocol) transformations and as a load balancer. Usually the OEG is placed in the DMZ, between the external firewall and the internal network.

Cross-domain exchange

One of its main purposes is the exchange of information between different security domains. Different security tokens and identity formats can be administered by the OEG, and tokens are translated between the security domains.

Policies

Different policies can be added to and used with your web access management. Those policies are pre-configured; you do not have to change your existing web access management.

Enrichment

XML data transported through the OEG can be enriched if necessary. If additional data is needed by an internal web service but not provided by the caller, the XML message can be enriched with that data, for example selected from a database.

Firewall

The OEG can also be used as an XML firewall. This way, internal web services can be exposed to the outside world. The OEG can block attacks on the HTTP, SOAP and XML level.

Protocol transformation

SOAP and REST requests can be transformed, and both can be used to call internal services. The OEG implements service virtualization for external callers.

Token adding

If a caller cannot deliver the required security tokens for a web service, the OEG can act as a security token service. It can add the necessary security tokens to the XML messages so that the calls to the internal web service are valid.

http://www.oracle.com/technetwork/middleware/id-mgmt/oeg-tech-wp-apr-2011-345875.pdf
(OEG Technical Whitepaper)

You can find the OEG 11.1.1.6 documentation here:
http://docs.oracle.com/cd/E27515_01/index.html

You can download the OEG from Oracle here:
http://www.oracle.com/technetwork/middleware/id-mgmt/oeg-300773.html?ssSourceSiteId=ocomen#downloads


Thursday, April 26, 2012

Local import of WSDLs and XSDs for design time in JDeveloper

The following problem occurred to me in a project:

The remote web service that should be referenced by a composite is secured with HTTP basic authentication (username and password). This means you cannot use the normal dialog, because it cannot access the remote service.

You need the WSDL file and the corresponding XSD schema files; there is no way around that. Once you have obtained those, you can create the web service reference and import the WSDL locally into your project.

This way you should be able to create your reference to the remote web service. The imported local WSDL file should show up in your project folder in JDeveloper.

The same goes for the .xsd schema files. If, for example, you want to expose your composite as a web service, you must generate a new WSDL for that.

In the "Create WSDL" dialog, choose "Browse for schema file".

There, choose your .xsd files for request, reply and fault, if necessary.

Now the "Exposed Services" and the "External References" of your composite are ready.

If you do not import the XSD files locally, you will get errors when trying to deploy the composite to the WLS! JDeveloper needs access to those files at design time!

The disadvantage: when the web service parameters change, you need to re-import the WSDL and redeploy the composite to your WLS.
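After the local import, composite.xml references the file relative to the project instead of the remote URL; roughly like this (file name and namespace are illustrative):

```xml
<!-- local copy of the remote WSDL, resolvable at design time -->
<import namespace="http://example.com/remote/service"
        location="RemoteService.wsdl" importType="wsdl"/>
```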


Monday, April 23, 2012

Integration of legacy systems into a modern enterprise SOA (part 8, last part)

8. Conclusion

The integration into a modern architecture can often be an opportune and flexible alternative to a full replacement or redevelopment of an existing legacy system. The advantages of integration using the SOA approach are:

- preservation of the business logic of the legacy system
- reduction of risk: no expensive and complex replacement or redevelopment necessary
- reduction of TCO
- usage of the existing modern (SOA) IT landscape (including infrastructure and hardware)
- future-proofing and flexibility because of standards, frameworks and reference architectures
- modern business processes because of service-oriented architectures and their advantages (loose coupling, reusability, flexibility, interoperability, standardization, less maintenance, ...)

The integration of legacy systems into a modern IT landscape can be a complex project. Endurance is needed on the way to a flexible and agile company.

Monday, April 16, 2012

Integration of legacy systems into a modern enterprise SOA (part 7.3)

7. Approach

5. Transformations

The required data transformations and the necessary mapping must be implemented at the interface. For this, the essential transformation steps have to be identified and implemented. If necessary, the creation of mapping tables for data and business objects is inevitable. If a canonical data model exists, the transformations and the mapping are directly based on it. If, in one of the earlier phases, you decided to implement a canonical data model, this is the time to implement and integrate it.

6. Adjust the calling business logic

Often, changes in the logic of the calling business processes are also necessary. Possibly, activities and processes have to be re-cut or stripped down so that the call to the legacy system can be integrated.

7. Integrate the legacy interface into the SOA landscape

In a final step the legacy system is integrated into the present SOA landscape. The needed web service calls and adapters are integrated into the available business logic.


Part 8 will follow next week...

Tuesday, April 10, 2012

Integration of legacy systems into a modern enterprise SOA (part 7.2)

7. Approach (part 2)

3. Choice of adapters and connectors

Based on the analysis of the legacy system, the decision can be made whether it can easily be augmented later with modern interface technologies (Java interfaces, web services). The choice of the right communication technology depends on that. Depending on the demands on performance and transaction security, this decision is often made in favor of the Java Connector Architecture (JCA) and/or the WS framework. The use of a standard adapter, as offered by different software companies, often represents the low-priced alternative. If the demands or the legacy system do not allow the use of standard adapters, they can also be developed in-house; the JCA framework is often used for this.

4. Customizing the legacy system

As said, it is often necessary to implement changes in the legacy system to be able to integrate it into a modern SOA. If the documentation was evaluated as "not sufficient" in the planning phase, an additional documentation phase must be carried out; in a worst-case scenario this could mean reverse engineering. The necessary changes can then be implemented.

Part 7.3 will follow next week...

Friday, March 30, 2012

Integration of legacy systems into a modern enterprise SOA (part 7.1)

7. Approach

1. Pre- and analysis phase

In a first phase, the legacy application which should be integrated and the calling business process should be analysed thoroughly. With standard applications, the level of customer-specific changes in the application must be identified (customizing quota). Documentation and calling methods of these changes must be checked. If the customizing quota is high, it must be decided whether the use of standard adapters is still reasonable. Often, with individual customer-specific legacy systems, the only way of integration is using the application data layer; it must be checked how data persistence was implemented.

The state of documentation is also very important regarding the application which should be integrated.

- Do process maps and system manuals exist?
- Is there documentation of data models / object models?
- Are there application-specific interfaces (APIs) which can be used?
- How does the legacy system act regarding security?
- How do the components communicate with one another?
- With a multi-tier architecture: are there reusable components, application servers, data dictionaries, etc.?
- Are there possible interests and issues of the operating department?


2. Planning phase

The planning phase follows the analysis phase. If needed, a new modelling of the target processes is conducted in the planning phase, based on the process documentation created in phase 1 (actual process). At that time it must be clarified how the target architecture should be composed.

During the planning phase the following topics have to be handled:

- Which topology will be introduced (e.g. bus topology)?
- Is the use of a canonical data model reasonable and effective?
- Do any standard application packages exist ("packaged integration solutions", e.g. Oracle AIA)?
- Do any reference architectures exist?
- How can the user experience of the new process be improved?
- ...

Part 7.2 will follow next week...

Monday, March 26, 2012

Integration of legacy systems into a modern enterprise SOA (part 5 + 6)


5. Impact of SOA on the legacy system

If service-oriented rules are applied during integration, the level of reusability of the components of the legacy system can be increased. By using loose coupling, both the business process and the use of the integrated component become more flexible. Changes in processes are now even easier to implement. The complexity of the complete system is reduced by the use of standard (integration) components. This can also be relevant for monitoring and maintenance.


6. Ways of integration

Application integration

One of the main tasks is the integration of business logic, the mapping of rules and workflows. In many cases it is not sufficient to wrap existing logic and publish it as a web service. Maybe batch processes have to be transformed into a callable function. Transformation and mapping logic must be added to the interface, parts of the previous business process may change, and transaction security must be observed and guaranteed.

Data integration

Relational and non-relational data must be published at the interface. Depending on the legacy system, an existing persistence layer must be used or even a new layer must be created. Here, too, transformations must be conducted at the interface to present data in a generally valid format. Maybe a canonical data model should be taken into consideration.

Presentation (view) integration

If access to data and application logic is not practical for various reasons, one can alternatively integrate the legacy system directly on the presentation layer. The aim here is to replace all terminals which access the mainframe. The different forms, screens and menus must be captured at the mainframe ("screen scraping") and made available as a service, e.g. in a modern JEE application. It is advisable to integrate the forms and menus of the legacy system into a portal or a modern application, through which they are presented to the user.

Part 7 will follow next week...

Wednesday, March 21, 2012

Integration of legacy systems into a modern enterprise SOA (part 3 + 4)

3. Previous attempts

Previous approaches like "point to point", "hub and spoke" or "batch runs" only meet some of the demands of current architectures. Furthermore, there are several drawbacks which have to be accepted when using them. Often these methods are only suitable to solve one problem; a holistic, integrated method would provide an important benefit.

4. Using the SOA method for integration

To address the previously specified troubles, it is important to identify the required functionality of the old legacy system. The target is to provide those functions as services, so that they can be used by other applications. Legacy systems were often treated as a black box: the underlying business logic was ignored by the caller, so that issues like performance, security and maintenance are hard to include in the overall view. The monolithic structure and tight coupling of the legacy components prevent them from being used in a service-oriented architecture and its business processes.

In contrast, a SOA landscape consists of a mapping of business processes to the corresponding services. Standards allow general communication, and all components should be loosely coupled, so that the advantages of this architecture (e.g. a high grade of reusability) are ensured.

One of the most important advantages of a SOA is its features for legacy enablement. The use and integration of data and logic from legacy systems according to SOA-specific rules allows a flexible, real-time use of the legacy resources, while keeping the advantages offered by a service-oriented architecture. By applying this strategy, a large number of assets of both the legacy system and the service-oriented architecture can be used to create additional benefit for your enterprise.

Part 5 will follow next week...

Samstag, 17. März 2012

Integration of legacy systems into a modern enterprise SOA (part 2)

Integration of legacy systems into a modern enterprise SOA (part 2)

2. What are legacy systems?

Legacy systems are still widespread. Frequently they are defined as IT systems (hardware and software) that have been in use in an enterprise for a long time. In the majority of cases these systems (or parts of them) do not run on up-to-date hardware, and even if they do, the software architecture is often not up to date either. Present-day software and architecture standards like SOA, JEE, multi-tier, application servers and middleware are mostly not used.

However, those systems are an integral part of some enterprises. Usually they still represent their core competence and directly participate in revenue generation (e.g. mainframe, COBOL, ...). In addition, those systems embody enormous development and maintenance effort from the past, which is why they represent an important store of enterprise knowledge and experience.

The many disadvantages of those inherited systems are equally obvious. Frequently the TCO is very high, the components are tightly coupled, or the architecture is monolithic and not up to date. Senior developers and other knowledge carriers may have left the company, and new demands require much development effort (long time-to-market). Often the documentation is out of date as well. For these reasons, in many cases the management considers replacement or new development, but this is refused due to cost and time issues. Enterprise takeovers or acquisitions can also make an integration scenario necessary; in this case, external legacy systems must often be added to the company's own SOA landscape.

As you can see, there are many reasons why those systems should be integrated simply, quickly, functionally and in a future-proof way into modern enterprise architectures (e.g. SOA). This process is called "legacy enablement".

Part 3 will follow next week...

Montag, 12. März 2012

Integration of legacy systems into a modern enterprise SOA (part 1)

In the last few weeks I have been thinking about how to integrate legacy systems into a modern enterprise SOA. I will publish some of the results here over the next weeks.

Integration of legacy systems into a modern enterprise SOA

1. Introduction

Legacy systems and mainframes are still an important part of current IT environments. In some companies they still belong to the core business, which could not operate without them. Over the last years, modern company and IT architectures have been introduced. This led to an uncontrolled growth of different integration scenarios. The introduction of (web) services or whole SOA structures forces many IT decision-makers to reconsider their integration architecture. The SOA integration is often the first step of a bigger project to modernize the legacy systems and their interfaces.

I suggest a holistic, integrated approach for consolidating and modernizing the integration between legacy systems and your SOA landscape. An analysis of both sides of the interface should be performed, after which a suitable integration solution can be proposed and implemented. It is important to design future-proof interfaces for your IT landscape.

If you need help in one of those topics, please don't hesitate to contact me... :-)

Part 2 will follow next week...

Donnerstag, 8. März 2012

Patch Set 5 for Oracle SOA Suite

The Patch Set 5 (11.1.1.6) for Oracle SOA Suite 11g R1 was released in February!

New features include:
- Improvements for decision tables
- Enhancements in BPEL 2.0
- Improved REST features
- Improved fault analysis
- Rules test suite
- ...

Find more information here
https://blogs.oracle.com/soacommunity/entry/soa_suite_ps5_11_1

Mittwoch, 29. Februar 2012

(SOA-)Security-Basics - part 5

(SOA-)Security-Basics - part 5

X.509 certificates

Cryptography

Identity? Digital ID? -> X.509 certificates

An X.509 certificate is simply a digital ID. With that digital ID it can be decided whether the party presenting it is who it claims to be.

[Example: https://www.google.com/]








X.509

Who guarantees that?

The trust center!

Now, how does this work?

During the establishment of an SSL connection, or when a signature is verified, a certificate is transferred. This certificate claims that its holder is the server or communication partner you trust. To verify that, the transferred certificate is read, its validity is checked and its issuer is determined. After that, the validity of the issuer's certificate is checked, and then again its issuer, and so on until the RootCA is reached. The RootCA certificate and all intermediate certificates have to be installed in a trust store on the system that performs the check. Those certificates are trusted in principle.

X.509

- public key of the communication partner
- key usage (incl. "critical" flags)
- e.g. CA, S/MIME, SSL/key exchange, digital signature of documents
- validity period
- algorithms used
- serial number
- references to revocation lists / OCSP responder
- reference to the issuer
- more OIDs
- 1.2.840.113533.7.65.0 - certificate extension for entrust version
- 1.2.840.113533.7.65 - Secure Networks Certificate Extensions
- 1.2.840.113533.7 - Entrust Technologies
- 1.2.840.113533 - Nortel Networks
- 1.2.840 - USA
- 1.2 - ISO member body
- 1 - ISO assigned OIDs

Dienstag, 21. Februar 2012

(SOA-)Security-Basics - part 4

Here comes part 4. Today we'll talk about certificates and PKI

X.509 Zertifikate / Public key infrastructure (PKI)

Trustcenter (TC) / RootCA / PKI

A trust center is a company that issues "digital identities". It does so by letting a "trust tree" grow. The root is the so-called RootCA (CA = Certification Authority). Technically it consists of a private key and the corresponding public key (often RSA). This combination is generated by the trust center itself (a self-signed certificate). The RootCA certificate must be trusted unconditionally, because all identities issued by the trust center trace back to this root. For trust centers it is of utmost importance to keep their private key secret; otherwise the trust center cannot be "trusted" anymore (examples: Comodo, DigiNotar).

Public key infrastructure (PKI)

On the basis of this RootCA, so-called intermediate CAs (Certification Authorities) are created. These are usually used for special purposes: they issue digital identities that are only suited for specific areas (SSL, S/MIME, digital signature).

Part 5 will follow next week...

Dienstag, 14. Februar 2012

(SOA-)Security-Basics - part 3

"digital signature, certificates and digital identities"
This is the third part.

Private keys, public keys and the man in the middle

Introducing Alice, Bob and...

Alice sends a message to Bob and appends its hash value. To check integrity, Bob creates a hash value from the message and compares it with the one included in the message. If the two values are identical, Bob knows that the message was not changed on its way!

BUT can Bob really be sure about that?

The man in the middle

Mallory, who is a crypto-villain, puts himself between Alice and Bob and intercepts the message! He then creates his own new message, hashes it and sends it to Bob. Bob again creates his own hash value and compares it; the comparison succeeds. Bob will not realize that this is not Alice's message! Hashing is good for integrity, but it does not protect against a man-in-the-middle attack.

So what to do?

Very private but even so public

To solve this problem, asymmetric cryptography is used. To encrypt something you need a key; in this case even two: a private key and a public key. The two keys correspond mathematically in the following way:

- What is encrypted with the public key can only be decrypted with the private key (not interesting for signatures)
- Important: what is encrypted with the private key can only be decrypted with the public key

As the name indicates, the private key belongs to one person or institution. It is kept secret, and nobody but the owner knows it. The public key is given to all communication partners.

Alice and Bob again...

Alice again sends a message to Bob. But this time they use asymmetric cryptography and hashing:



This time Mallory cannot manipulate the message: if he did, the hash comparison would fail. To sign the manipulated message, he would need Alice's private key. The second message from Alice to Bob is thus protected by a digital signature:

1. The integrity is guaranteed (hashing)
2. The authorship of the message is ensured (private/public key encryption)
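The private/public key relationship can be illustrated with a toy RSA example. The numbers, the message hash and all names below are purely illustrative demo values (far too small to be secure; real keys are thousands of bits long), but the sign/verify math is the real mechanism:

```python
# Toy RSA signature sketch -- insecure demo numbers, never use in practice.
p, q = 61, 53            # two small primes (Alice's secret)
n = p * q                # 3233, part of the public key
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, known to everyone
d = pow(e, -1, phi)      # private exponent (2753), kept secret by Alice

def toy_hash(message: str) -> int:
    # Stand-in for a real hash function; maps the message into [0, n).
    return sum(message.encode()) % n

def sign(message: str) -> int:
    # "Encrypt" the hash with the private key.
    return pow(toy_hash(message), d, n)

def verify(message: str, signature: int) -> bool:
    # "Decrypt" the signature with the public key and compare hashes.
    return pow(signature, e, n) == toy_hash(message)

sig = sign("Transfer 100 EUR to Bob")
print(verify("Transfer 100 EUR to Bob", sig))       # True
print(verify("Transfer 900 EUR to Mallory", sig))   # False
```

Mallory can read the public values `n` and `e`, but without `d` he cannot produce a signature that verifies for his manipulated message.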

Part 4 will follow...

Donnerstag, 9. Februar 2012

(SOA-)Security-Basics - part 2

Here comes part 2 of "digital signature, certificates and digital identities". This is about hashing.

Hashing works, but only in one direction

Cryptographic hashing means creating a checksum from a plain text. This checksum has to offer the following properties:

1. The method is irreversible, which means that you cannot recover the plain text from the checksum
2. Different plain texts must not produce the same hash value (the method must be collision-free)

Hashing - an example

Take the plain text "Hello world". This plain text is now hashed with the algorithm SHA-1 and the result is encoded with Base64 for readability:

SHA1(Hello world) = pTVD5kns67A2N4yRuZ3vROStrxM=

Because the method has to be collision-free another plain text would create another hash value.
SHA1(Hello WorlD) = IYE/IEl7riYyhCez2P3l4xn9qrE=

The irreversibility is also ensured:
SHA1(Hello world) creates: = pTVD5kns67A2N4yRuZ3vROStrxM=, but
SHA1(Hello mars, how is the weather today on the red planet?) creates: = ZAmG0snPZ5zWTWdcwYCvJdZeApY=,
which also is a 28 character hash value encoded in Base64.
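The example above can be reproduced with a few lines of Python (`hashlib` and `base64` are standard-library modules; note that SHA-1 is shown here only because the post uses it — it is no longer considered collision-resistant for new designs):

```python
import base64
import hashlib

def sha1_b64(text: str) -> str:
    """Hash the UTF-8 text with SHA-1 and Base64-encode the 20-byte digest."""
    digest = hashlib.sha1(text.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

h1 = sha1_b64("Hello world")
h2 = sha1_b64("Hello WorlD")

# A SHA-1 digest is always 20 bytes, so its Base64 form is always 28 characters.
print(len(h1))    # 28
print(h1 == h2)   # False - even a one-character change flips the hash
```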

So what is the advantage of hashing?

- From a plain text an unambiguous checksum is created
- Every manipulation of the plain text causes a different hash value to be created

==>
Thus, for every transmission of a message, its integrity can be verified!


Part 3 will follow...

Donnerstag, 2. Februar 2012

(SOA-)Security-Basics

My MT AG colleague Jan Quack just gave a presentation about SOA security basics: digital signatures, certificates and digital identities. I will publish parts of it here over the next weeks. I think it makes a good introduction to the world of signatures and certificates. It has some really nice examples, too ;-).

What is a digital signature?

A digital signature is a value computed with the help of cryptographic methods. It is applied to electronic documents and messages. This value makes it possible to verify two different things:

1. The integrity of the message: Was the message altered on its way from the sender to the receiver?

2. The authorship of the message: Who composed the message?

... and how does this work?

A digital signature is based on two cryptographic methods, which are combined with one another:

1. Hashing

2. Asymmetric encryption

Part 2 will follow next week...

Freitag, 27. Januar 2012

BPEL and transactions - part 5 (last part)

About transaction standards and protocols. Here is part 5 of "BPEL and transactions".

Transaction standards and protocols

The execution of distributed transactions poses new requirements on a transaction environment. Services have to be executed and managed across heterogeneous systems.



Most transaction standards are based on the Distributed Transaction Coordinator model. Those standards are distinguished from one another by the length of the transaction. Some standards are suited for short-lived transactions, because resources are blocked during execution. Others are more suitable for long transaction times.

Identification of a transaction by transaction context

The context of the transaction has to be managed during the lifetime of the transaction, and it must be propagated to all participating systems.

The occurrence of an error must not lead to inconsistencies across the different systems.


Transaction standard: Web Service Atomic Transaction (WS-AT)

By adopting WS-AT, the ACID properties can be implemented across distributed services on different application servers. As resources are locked during execution, WS-AT is only suitable for short-lived processes. The WS-AT standard is supported by various middleware vendors.


Web Service Atomic Transaction (WS-AT) in Oracle SOA Suite

Oracle WebLogic Server 11gR1 (10.3.3) and Oracle SOA Suite 11g R1 PS2 (11.1.1.3) support WS-AT standard version 1.2. To use WS-AT on a WLS, the domain must be configured for WS-AT. The start of the transaction is set through a property in the BPEL process, and WS-AT has to be activated on the references. When calling external references, the transaction participation has to be propagated.

Transaction participation

Never: the service executes in its own transaction
Supports: the service participates in the transaction if one exists
Mandatory: the service participates in the transaction; if no transaction is available, an exception/fault is thrown.


Other error sources:

- system errors
- deadlocks
- protocol errors
- network errors
- hardware errors
- timeouts

Error handling when the coordinator fails

Before the prepare message is sent, all participants must be logged. The response message of every participant must also be logged. Using these logs, the coordinator can restart at the point of failure.

Error handling when a participant fails

All requests are logged, just like the response messages. Using these logs, processing can resume at the point of failure.
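The logging idea behind this recovery can be sketched in a few lines of Python. This is a drastically simplified, in-memory model of a two-phase-commit coordinator (not WS-AT itself); all class and participant names are illustrative:

```python
class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "ACTIVE"

    def prepare(self):
        self.state = "PREPARED"
        return "yes"            # vote to commit

    def commit(self):
        self.state = "COMMITTED"

    def rollback(self):
        self.state = "ABORTED"

class Coordinator:
    def __init__(self, participants):
        self.participants = participants
        self.log = []           # would be a persistent log in a real system

    def run(self):
        # Phase 1: log each participant before sending "prepare", then log
        # each vote -- these log records are what enable restart after a crash.
        for p in self.participants:
            self.log.append(("PREPARE_SENT", p.name))
            self.log.append(("VOTE", p.name, p.prepare()))
        votes = [entry[2] for entry in self.log if entry[0] == "VOTE"]
        decision = "COMMIT" if all(v == "yes" for v in votes) else "ROLLBACK"
        self.log.append(("DECISION", decision))
        # Phase 2: apply the logged decision to every participant.
        for p in self.participants:
            p.commit() if decision == "COMMIT" else p.rollback()
        return decision

parts = [Participant("db"), Participant("queue")]
decision = Coordinator(parts).run()
print(decision)                                     # COMMIT
print(all(p.state == "COMMITTED" for p in parts))   # True
```

A real coordinator would replay the `log` after a crash: entries without a matching vote are re-sent, and a logged `DECISION` is re-applied to any participant that never acknowledged it.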

Summary

By adopting WS-AT, the ACID properties can be implemented across distributed services on different application servers. All participants must support the WS-AT standard. If possible, distributed transactions should not be too complex: a transaction distributed over different heterogeneous systems and products is frequently complex and error-prone. Local transactions should be used wherever possible.

Montag, 23. Januar 2012

BPEL and transactions - part 4

And here is part 4 of "BPEL and transactions". This time I write about BPEL and error detection in a transactional environment.

BPEL and error detection

To trigger a compensation action, every service has to return a fault in case of an error. Synchronous services return faults by default. Asynchronous services do not return a fault, so a receive activity would wait forever for the result of the service (endless wait).




Pick element

The pick element contains the two branches onMessage and onAlarm. It waits until the called asynchronous process returns a result (onMessage) or the maximum waiting time is reached (onAlarm), and then executes the corresponding block.
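Such a pick might look like the following BPEL 2.0-style sketch. The partner link, operation, variable and fault names are all illustrative, and the timeout of 30 seconds is an arbitrary example:

```xml
<pick>
  <onMessage partnerLink="CreditService" operation="onResult"
             variable="creditResult">
    <!-- The asynchronous callback arrived in time: continue normally. -->
    <empty/>
  </onMessage>
  <onAlarm>
    <for>'PT30S'</for>
    <!-- No callback within 30 seconds: raise a fault so the error can be
         handled (e.g. by triggering a compensation). -->
    <throw faultName="client:timeout"/>
  </onAlarm>
</pick>
```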

Dienstag, 17. Januar 2012

T-Shape professionals

After attending one of his presentations, I have been reading a book by Prof. Dr. Gunter Dueck with the title "Professionelle Intelligenz - Worauf es morgen ankommt". That translates roughly to "Professional Intelligence - What will be important tomorrow".

Especially the concept of T-Shape IT professionals appeals to me. For those of you who are not familiar with that, take a look at Wikipedia:
http://en.wikipedia.org/wiki/T-shaped_skills

The T-shaped professional has a mixture of skills. On the one hand he has very deep, maybe expert-level knowledge of one special area (the vertical stroke of the T). On the other hand he is not only an expert in that field, but also has strong basic knowledge in other general areas concerning his job (the horizontal bar of the T). For example, being a great JEE developer is sometimes not enough in the modern IT world. You may also need knowledge in more general domains like business processes, project management, team work, presentation techniques, sales and marketing, or other social skills.

In this way, T-shaped professionals become much more valuable for their company, because they can be deployed much more broadly. I think this is getting more and more important, especially for IT consultants!

An interesting article about T-Shape in german language can be found here:
http://www.perspektive-mittelstand.de/T-Shaped-Professionals-Die-IT-Fachkraft-der-Zukunft/management-wissen/3775.html

Here is also an english article, which is a bit older.
http://fixcv.com/t-shaped-people-jobs-and-recruiting-4828.html

Montag, 16. Januar 2012

BPEL and transactions - part 3

About controlling transactions in the Oracle SOA Suite MEDIATOR component. Here is part 3 of "BPEL and transactions".

Control of transactions in Mediator

When the mediator component of Oracle SOA Suite is called from an external process, a new transaction is created. If a parent transaction exists and the mediator can use it, it will do so.





With sequential (synchronous) routing rules, all rules are executed in one transaction. In case of an error, all routings are rolled back. The fault policies of the mediator are NOT executed.




In case of parallel routing rules, a new transaction is created for each rule. In case of an error, a rollback is executed. Here the fault policies of the mediator are executed and can be used to react to errors.



When both types are combined, the sequential routing rules are executed first in one transaction. After that, every parallel rule is executed in its own transaction.

Freitag, 13. Januar 2012

SOA/BPM guidelines article

The german magazine "Computerwoche" published an interesting link to an article on SOA/BPM guidelines/code of practice:
http://soa-know-how.de/index.php?id=45

This article is in german language.

BPEL and transactions - part 2

Here is part 2 of "BPEL and transactions". This is about how to control transactions in BPEL. Have fun!

Control of transactions in BPEL

There are two alternatives for executing a transaction in BPEL or a composite:
1. inherit the parent transaction, if one exists
2. execute in a new transaction

Transactions are controlled by properties. Unfortunately, the properties differ between Oracle SOA Suite 10g and 11g. The boundaries of a transaction are determined by "breakpoint activities".
In Oracle SOA Suite 11g, transactions are controlled by properties in the composite. For exposed services and/or external references, the property "Transaction Participation" can be set to "Never", "Supports", "Mandatory" or "WSDLDriven". On the BPEL component, the property "bpel.config.transaction" can be set to "requiresNew" or "required".
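A sketch of how the bpel.config.transaction property appears in composite.xml (component and file names are illustrative; the property name and values are the ones mentioned in the text):

```xml
<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <!-- "required": join the caller's transaction if one exists;
       "requiresNew": always start a new transaction -->
  <property name="bpel.config.transaction">required</property>
</component>
```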




A synchronous invoke without transaction properties



A synchronous invoke with the BPEL property "bpel.config.transaction" set to "required"



The local transaction is completed by BPEL at a "breakpoint activity" or at the end of the flow (dehydration). The following "breakpoint activities" exist:
- Receive (unless it is the first receive and the flow is part of the parent transaction)
- OnMessage
- Wait (is committed after a few seconds)
- OnAlarm
- Invoke (if the partner link property "idempotent" is set to false)
- End of flow (if the process has its own transaction)
A synchronous invoke with the partner link property "idempotent" set to false.


Montag, 9. Januar 2012

BPEL and transactions - part 1

Some time ago I gave a presentation about "BPEL and transactions". I will take the important parts and publish them here, split into several parts. This is part 1.

General considerations

A transaction is a logical unit consisting of one or many operations.
The four ACID properties of transactions are: Atomicity, Consistency, Isolation and Durability.
Usually, during the processing of a transaction, some components of a system are locked to avoid inconsistencies.



Try-Cancel-Confirm mechanism

One possible solution is the Try-Cancel-Confirm (TCC) mechanism.
TRY: the call of a reservation service
PENDING: the system reserves the resource for a specific time
CONFIRM: when all operations were successful, the reservation is confirmed
CANCEL: if an error occurred during execution, every change is canceled
Disadvantage: locking the resource (reservation) is only possible for a short amount of time.
The TRY, CONFIRM and CANCEL operations must be provided by the called service.
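A minimal sketch of the pattern in Python. The class, method and item names are illustrative; a real service would expose TRY/CONFIRM/CANCEL as remote operations and expire PENDING reservations after a timeout:

```python
class ReservationService:
    """Toy service exposing the TRY / CONFIRM / CANCEL operations."""

    def __init__(self):
        self.reservations = {}

    def try_reserve(self, item):
        # TRY: hold the resource in a PENDING state.
        self.reservations[item] = "PENDING"
        return item                      # reservation handle

    def confirm(self, handle):
        # CONFIRM: make the reservation permanent.
        self.reservations[handle] = "CONFIRMED"

    def cancel(self, handle):
        # CANCEL: release the pending reservation.
        self.reservations.pop(handle, None)

svc = ReservationService()
handle = svc.try_reserve("seat-42")      # TRY
try:
    # ... the other operations of the overall transaction run here ...
    svc.confirm(handle)                  # CONFIRM: everything succeeded
except Exception:
    svc.cancel(handle)                   # CANCEL: undo the reservation

print(svc.reservations["seat-42"])       # CONFIRMED
```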

Solution in BPEL: Compensation

- methods to reverse the changes made so far
- often the execution of the inverse operation (or a rollback) is not enough
- "logical UNDO" = a method that performs a functional, complete reversal/cancelation
- for example canceling an order in a remote system, sending a correction e-mail, ...
- the reversal method must be provided by the service

Compensation pattern
To be continued...

Freitag, 6. Januar 2012

iPhone and SOA

My colleague Guido Neander gave a presentation on integrating iPhones into a SOA landscape at the DOAG Conference last November. It's about sending events and messages (alerts) to an iPhone app. The events and alerts are generated by a business process implemented with the Oracle SOA components and the Oracle Database. The presentation is in German. If you're interested in it, please contact me: arne.platzen@mt-ag.com. If you're a DOAG member you can download it here: http://www.doag.org/formes/servlet/DocNavi?action=getFile&did=3049753&file=2011-K-SOA-Guido_Neander-Gewusst_wie__IPhone-Anbindung_in_SOA-Landschaften-Praesentation.pdf

What to expect here!

In this blog I'm going to share my experiences from SOA and integration projects with you, share interesting articles and documents, and post general stuff concerning modern IT. I hope some of you will follow me. Feel free to contact me: arne.platzen@mt-ag.com

Welcome!

Welcome.
My name is Arne Platzen. I have been working as an IT consultant for MT AG, Germany, for more than 5 years now, with 12+ years of experience in the IT sector. Originally coming from the classic Oracle world of developing with PL/SQL, Oracle Forms, Oracle Reports and Oracle Designer, I moved on to discover new worlds and new civilisations, which means the Oracle Fusion stack. I developed Eclipse RCP applications and web applications with the Oracle Application Development Framework (ADF).
After shifting more and more to integration projects, I started working with Oracle SOA Suite, OSB and WebLogic Server. Currently I am head of the Competence Center for Oracle SOA in my company and work as a consultant for SOA and integration, doing conceptual and architectural work in different projects.