Consulting Enterprise Application Integration (EAI)
Distribution instead of Integration
Typical Enterprise Application Integration (EAI) projects use so-called
Brokers and Adapters to support integration.
Adapters allow flexible connections to, for example, relational databases
and flat files. If an interface is more complicated, specific code (e.g.
in Java) has to be written. The retrieved data is transferred in a
standard format, e.g. XML, to the Information Broker. The Broker can
be configured with routing rules: depending on its source, content or
other constraints, certain information is forwarded to a list of
destination systems. For each of those systems to receive the data, an
adapter has to exist as well. Historically, these Information Brokers are
used for massive data transfer. The data transformation and related
business logic is implemented in each individual, distributed adapter.
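To make the routing idea concrete, here is a minimal, hypothetical sketch of content-based routing in such a Broker, written in Java. All class and method names are illustrative assumptions and do not refer to any specific product.

    // Hypothetical sketch of content-based routing in an Information Broker.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    class Message {
        final String source;      // originating system, e.g. an ERP adapter
        final String payloadXml;  // data in a standard format such as XML
        Message(String source, String payloadXml) {
            this.source = source;
            this.payloadXml = payloadXml;
        }
    }

    class RoutingRule {
        final Predicate<Message> condition; // match on source, content or other constraints
        final List<String> destinations;    // adapter endpoints to forward the data to
        RoutingRule(Predicate<Message> condition, List<String> destinations) {
            this.condition = condition;
            this.destinations = destinations;
        }
    }

    class Broker {
        private final List<RoutingRule> rules = new ArrayList<>();

        void addRule(RoutingRule rule) { rules.add(rule); }

        // Collect every destination whose rule matches the incoming message.
        List<String> route(Message msg) {
            List<String> targets = new ArrayList<>();
            for (RoutingRule rule : rules) {
                if (rule.condition.test(msg)) {
                    targets.addAll(rule.destinations);
                }
            }
            return targets; // each target needs its own adapter to receive the data
        }
    }

Note that each destination in the returned list still needs its own adapter, and the business logic for transforming the payload remains distributed across those adapters.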
These approaches can face the following problems:
- There is no persistent Repository to support naming convention
mapping between various systems.
- Technically, transaction handling and state management are hardly
available. The data is read from the source system and its extraction is
"committed". In a second step it is transferred to the destination systems,
and possibly another message is sent back to the source system; this
confirmation is often omitted.
- Normally the destination system has to accept the message immediately
to not lose content. If a user is currently analysing a certain scenario
based on a consistent snapshot of his information, he might want
to finish his work before receiving additional data. Consequently, additional
development on each system is required to allow intelligent buffering,
which is usually not even possible.
- Real-time data from Process Historians cannot be distributed
with this concept because the amount of data exceeds what can be
processed.
- The typical result of this type of EAI solution is a network
of data streams. Over the lifetime of your enterprise solution, the
amount of code to be maintained would probably grow to an unmanageable size.
Even worse, the solution could start mixing data transfer and business
workflow management.
- How can data aggregation be achieved?
Some available data needs to be condensed or aggregated before it can
be sent to other applications. Real-time process data, available on
a minute basis, needs to be aggregated into hourly or daily totals or
averages before it can be used in an accounting application (see the
sketch after this list). This aggregation can also be structural, i.e.
hierarchical and even dynamic.
- Some systems contain a complex data model (e.g. simulation,
optimisation or process models). The difficulty of the information transfer
is not moving the resulting data of, for example, a simulation run, but
preserving the internal relationships within the result, which most of the
time means the model structure has to be transferred as well.
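As referenced in the aggregation item above, the following is a minimal sketch, assuming minute-based samples, of how real-time process values could be condensed into hourly averages before being handed to an accounting application. The names are illustrative, not taken from any product API.

    // Minimal sketch: aggregate minute-based process samples into hourly averages.
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class Sample {
        final Instant timestamp;
        final double value;
        Sample(Instant timestamp, double value) {
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    class Aggregator {
        // Group the minute samples by the hour they fall into and average each group.
        static Map<Instant, Double> hourlyAverages(List<Sample> minuteSamples) {
            return minuteSamples.stream().collect(Collectors.groupingBy(
                    s -> s.timestamp.truncatedTo(ChronoUnit.HOURS),
                    Collectors.averagingDouble(s -> s.value)));
        }
    }

Daily totals would work the same way with ChronoUnit.DAYS and a summing collector; hierarchical or dynamic aggregation additionally requires structural knowledge of the plant or enterprise model.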
Central meta-data solution
BeauTec follows a different approach. Taking into account that every
enterprise is continuously changing, and has to change to be successful,
we follow an object-oriented and service-oriented approach (SOA) and provide
a persistent meta-data repository (Object Warehouse) to support the
aggregation of data (even non-linear) and the mapping of various naming
conventions (including their histories) without duplicating the data.
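As a rough illustration of this idea (the class names below are assumptions, not the actual Object Warehouse API), a repository object can hold only meta-data: a canonical name plus references to the names the same item carries in the external systems, so the data itself is never duplicated.

    // Illustrative sketch of a meta-data mapping without data duplication.
    import java.util.HashMap;
    import java.util.Map;

    class ExternalReference {
        final String system;        // e.g. "PI", "ERP", "LIMS"
        final String externalName;  // the tag or key used inside that system
        ExternalReference(String system, String externalName) {
            this.system = system;
            this.externalName = externalName;
        }
    }

    class RepositoryObject {
        final String canonicalName; // enterprise-wide name in the repository
        // one canonical object can map to different names in different systems
        final Map<String, ExternalReference> mappings = new HashMap<>();
        RepositoryObject(String canonicalName) { this.canonicalName = canonicalName; }

        void map(ExternalReference ref) { mappings.put(ref.system, ref); }

        // Resolve the canonical name to the name used by a given external system.
        String resolveFor(String system) {
            ExternalReference ref = mappings.get(system);
            return ref != null ? ref.externalName : null;
        }
    }

Adding a time dimension to such mappings is what allows naming conventions and their histories to be kept without touching the underlying data.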
Our solution provides:
- Bi-directional connectors to the required systems and an open architecture
to extend or enhance the system.
- A central hierarchical meta-data repository providing graphical
modelling of data structures, aggregations and mappings.
- Support for long-term storage of all types of data if required.
- A rigorous class and object model to build a solution that best
suits your current and future requirements.
- Real transaction handling, including rollback and commit,
based on standard, industry-proven technology.
- Storage of intermediate results, including automatic forwarding
or pooling of information for destination systems; destination
systems can also poll for available information.
- Caching of information for a performance boost.
- Notification of systems when new information is available, even
without sending the data directly.
- Visualization of all information available in the repository
at any point in time.
- Standard backup and restore features.
- Documented API for additional add-ons; source code is available.
- Maintenance can be done without taking the system out of service.
Even changes to the data models or interfaces are mostly possible at
run time.
- Existing adapters to real-time databases, e.g. PI, PHD and IPx,
even support instant online access.
- The central Information Platform decreases the number of required
interfaces to a single one per system, and configuration can be done
remotely.
- Referential integration exists, and it is even historized.
Consequently, for example, the replacement of an existing system by a
new one becomes simple: both systems can deliver their information simultaneously,
and within the Integration Platform the decision is made when to switch
from one system to the other. This is transparent for the end users.
- Aggregation of information, both time-wise and hierarchical, is of
course possible, even non-linear (e.g. blending laws).
- Support for the transfer of data and structures: structural changes
in the source system are automatically updated in the repository, and
different rules exist to handle exceptions. No additional maintenance is required.
- An abstract meta-data model can be constructed to represent
the overall envisaged enterprise model through references, formulae and
aggregations over the individual models of the external systems. Complex
calculations can either be done on the fly or pre-calculated and re-triggered
on data changes.
- Based on this integrated business vision, enterprise-wide Key
Performance Indicators (KPIs) can be defined and monitored, as sketched below.
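As mentioned in the last item, a KPI can be expressed as a formula over canonical repository values and evaluated on the fly. The sketch below illustrates this idea; the class names, tag names and the formula are illustrative assumptions, not part of any product API.

    // Illustrative sketch of an enterprise-wide KPI defined over repository values.
    import java.util.Map;
    import java.util.function.ToDoubleFunction;

    class Kpi {
        final String name;
        // the formula references canonical repository names, not system-specific tags
        final ToDoubleFunction<Map<String, Double>> formula;
        Kpi(String name, ToDoubleFunction<Map<String, Double>> formula) {
            this.name = name;
            this.formula = formula;
        }
        double evaluate(Map<String, Double> currentValues) {
            return formula.applyAsDouble(currentValues);
        }
    }

    class KpiExample {
        public static void main(String[] args) {
            // Hypothetical KPI: specific energy = energy consumed per ton produced
            Kpi specificEnergy = new Kpi("SpecificEnergy",
                    v -> v.get("Site.EnergyConsumed") / v.get("Site.TonsProduced"));
            double value = specificEnergy.evaluate(
                    Map.of("Site.EnergyConsumed", 1250.0, "Site.TonsProduced", 500.0));
            System.out.println("SpecificEnergy = " + value); // 2.5 per ton
        }
    }

Whether such a KPI is recalculated on every request or pre-calculated and re-triggered on data changes is a configuration choice within the repository.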