On Integration – Developing Software with Integration in Mind


I’ve posted two articles on the challenges of systems integration, firstly on the advantages and disadvantages of Best-of-Breed and Fully Integrated Solutions, and secondly on how to approach integration if you’ve chosen Best-of-Breed.

See Systems Integration and Reducing the Risks.

This post is concerned with how software should be developed if integration is anticipated.


You can think of it in this way:

Systems respond to EVENTS by executing software PROCESSES that update DATA.

In a single ‘fully integrated’ system the ‘complete’ response to an EVENT is contained entirely in one system and is reflected in one set of DATA. ‘Fully integrated’ systems can thus always ensure either a complete response to an EVENT or no response, and are rarely left in an incomplete state resulting from partial processing. Typically, databases roll back incomplete PROCESSES, and all the consequent DATA changes, if there is a failure at any stage. It’s all or nothing, if you want it to be.
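This all-or-nothing behaviour can be sketched with an ordinary database transaction. Here is a minimal illustration using Python’s built-in sqlite3 module; the table name and the PROCESS steps are invented for the example:

```python
import sqlite3

# An in-memory database standing in for a single 'fully integrated' system.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ledger (account TEXT, amount REAL)")

def respond_to_event(entries, fail=False):
    """Apply every DATA change for an EVENT, or none of them."""
    try:
        with con:  # opens a transaction; rolls back on any exception
            for account, amount in entries:
                con.execute("INSERT INTO ledger VALUES (?, ?)", (account, amount))
            if fail:
                raise RuntimeError("a later PROCESS step failed")
    except RuntimeError:
        pass  # the database is left exactly as it was

respond_to_event([("supplier", -100.0), ("cost", 100.0)])           # complete response
respond_to_event([("supplier", -50.0), ("cost", 50.0)], fail=True)  # no response at all

print(con.execute("SELECT COUNT(*) FROM ledger").fetchone()[0])  # → 2
```

Only the two rows from the successful EVENT survive; the failed EVENT leaves no trace at all.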

Integrating best-of-breed systems, working with more than one database, makes it more difficult to achieve this kind of integrity. The failure of a PROCESS in one system can’t easily trigger the roll-back of a PROCESS in the other.

Integrating separate systems therefore requires that you can easily identify the success or failure of each part of a PROCESS, and identify and control each set of DATA changes that each PROCESS effects.

For example, suppose you’re a company that distributes imported goods and you’re obsessive about calculating the margin you obtain on each product (this may be for good statutory tax reasons, or simply because you are unnaturally pedantic). The cost of the goods you sell is determined by the price you pay for them, the duties you pay on importing them and the cost of transporting them to your warehouse. You only know the true cost of each product once you’ve received the invoices that make up all of these costs. But, in the meantime, you may have shipped them.

Let’s consider an EVENT such as the arrival of an invoice for transportation. If the calculation of ‘actual’ costs is automated then your system must carry out the following PROCESSES:

  1. Invoice Registration: The invoice is booked against supplier, tax and cost accounts.
  2. Accrual Reversal: If an accrual for this cost has been made earlier it must be reversed.
  3. Stock Value Correction: The difference between accrued and actual transportation cost must be calculated and allocated against all the stock items it is related to.
  4. Cost of Goods Sold Correction: If there are items to which this difference in cost relates that have already been sold, then the appropriate portion of this difference must be allocated to a cost of goods sold account rather than to an inventory account.
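Steps 3 and 4 can be made concrete with a small piece of arithmetic: the difference between accrued and actual cost is apportioned between the items still in stock and the items already sold. A minimal sketch (the figures and the function name are invented for illustration):

```python
def allocate_cost_difference(accrued, actual, qty_received, qty_sold):
    """Split the accrual-vs-actual difference between inventory and cost of
    goods sold, in proportion to quantities still held versus already sold."""
    difference = actual - accrued
    per_unit = difference / qty_received
    to_cogs = per_unit * qty_sold        # step 4: portion for items already sold
    to_inventory = difference - to_cogs  # step 3: portion for stock still held
    return to_inventory, to_cogs

# Accrued transport cost 1000, actual invoice 1150, for 100 units of which 40 are sold.
inv, cogs = allocate_cost_difference(1000.0, 1150.0, 100, 40)
print(inv, cogs)  # → 90.0 60.0
```

Of the 150 cost difference, 60 belongs to cost of goods sold and 90 corrects the inventory value.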

Each step involves updating transaction tables, and if all these PROCESSES are occurring in a single integrated system then either they all succeed or they all fail, and the database either reflects a complete response to the EVENT or no response at all.

But if your finance system is separated from your manufacturing system then some of these PROCESSES must be carried out in one system and some in the other (and parts of some in both). It’s likely also that they will take place at different times (if there is ‘batch update’ of one system by the other rather than immediate update). It doesn’t matter for the moment which. In consequence it is possible that taken together the systems may reflect a partial response to an EVENT.

When developing systems that must be capable of handling partial failure in response to an EVENT the following approaches are wise:

  • Ideally there should be an explicit database table that records, for each uniquely identifiable EVENT, which PROCESSES have completed successfully and are therefore fully reflected in the system’s database. Ideally such tables are present in both the sending system and the receiving system. In reality it is the state of the data that implies whether a PROCESS is complete, and an explicit table recording this separately is rare, though it would certainly be useful. However the success of each PROCESS is recorded, the changes in data associated with each PROCESS should be easy to identify.
  • Transaction data that have been processed by an interface program must be marked as such.
  • All modifications to data that imply additional interface processes should be handled with a full audit trail. Ideally a modification to a transaction involves the insertion of both a reversal of the earlier transaction and a new transaction.
  • All transactions should have a field that identifies the order of events. Ideally this should be a date/time stamp. This enables execution of interfaces in the correct sequence, as well as the unwinding of PROCESSES if required.
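The safeguards above could be realised with a schema along these lines. This is a hypothetical sketch, again using sqlite3; the table and column names are my own invention, not a standard:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- One row per EVENT/PROCESS pair: which steps have completed successfully.
CREATE TABLE event_process_status (
    event_id     TEXT NOT NULL,
    process      TEXT NOT NULL,
    completed_at TEXT,               -- NULL until the PROCESS succeeds
    PRIMARY KEY (event_id, process)
);

-- Transactions carry an export marker, an audit trail, and an ordering stamp.
CREATE TABLE transactions (
    tx_id      INTEGER PRIMARY KEY,
    event_id   TEXT NOT NULL,
    amount     REAL NOT NULL,
    reverses   INTEGER REFERENCES transactions(tx_id),  -- reversal audit trail
    exported   INTEGER NOT NULL DEFAULT 0,              -- marked once interfaced
    created_at TEXT NOT NULL                            -- date/time stamp for sequencing
);
""")
```

With such tables, a partial response to an EVENT is directly visible: any event with a missing PROCESS row, or a NULL `completed_at`, has only been partially reflected in the data.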

If these safeguards are observed, reconciliation and correction should not be too difficult.

System integration isn’t easy and there is no reason to believe it will get easier. There are still no standards for the description of data to which all system developers and integrators can refer. This is no surprise. Data are not objective objects which can be explored and described without reference to purpose, context and culture, and they never will be. They are not like the objects of material science; they are human artefacts.

Over the last decades I have witnessed a number of false dawns with regard to ‘easier integration’. These usually turn out to be useful advances in technology but rarely make functional integration any easier.

Two weeks ago I read an article on Data-centre Software in The Economist (Sept 19th 2015). That’s what made me think about integration issues. The article mentions Docker, a successful startup that’s making it easier and more efficient to move applications from one cloud to another. Laudable, clever technology, but, as far as I can tell, it represents no progress in the more difficult task of integrating business processes. Those of us who do such work can be assured of work for many decades to come.

On Systems Integration – How to reduce the risk?


A couple of days ago I wrote about the challenges of systems integration, and the advantages and disadvantages of Best-of-Breed and Fully Integrated Solutions.

See Systems Integration.

If you’ve chosen the Best of Breed approach, how can you reduce the risks of integration, and design systems that enable integration that’s as ‘seamless’ as it can be? 


The first way is to keep things simple. Don’t integrate the impossible. In my experience of business systems, which is largely with financial, manufacturing, expense management and professional services management systems, integration usually involves a ‘front-office’ system that reflects the special needs of an organisation, and a financial system.

I’ve come across two main reasons why organisations choose to do this:

  • Some ‘front-office’ systems, such as our systems@work software, don’t include accounting modules. This is deliberate, in that they aim to work with any system.
  • Some ‘fully-integrated’ systems don’t possess powerful enough accounting modules to meet organisations’ corporate, management or statutory reporting needs in every country in which they operate.

In many instances I’ve found myself working on the integration of, for example, hotel management systems, or time@work (our timesheet, expense, planning and billing system for professional services organisations) with back-office accounting systems.

The synchronisation of ‘reference data’ (those core data items such as ‘employees’, ‘products’, ‘projects’, ‘departments’, even ‘accounts’) is usually handled manually, though occasionally automatically where automation is cost-effective, with ‘master’ and ‘slave’ well defined.
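With ‘master’ and ‘slave’ well defined, the synchronisation itself is a simple one-way overwrite. A minimal sketch, with invented department codes (here the financial system is assumed to be the master):

```python
# Reference data held by two systems; the financial system is the master here.
master_departments = {"D01": "Sales", "D02": "Logistics", "D03": "Finance"}
slave_departments  = {"D01": "Sales", "D02": "Warehouse"}  # stale name, missing D03

def sync_reference_data(master, slave):
    """One-way sync: the slave is made to match the master; returns what changed."""
    changed = {code: name for code, name in master.items()
               if slave.get(code) != name}
    slave.update(changed)
    for code in set(slave) - set(master):  # remove codes absent from the master
        del slave[code]
    return changed

changes = sync_reference_data(master_departments, slave_departments)
print(sorted(changes))  # → ['D02', 'D03']
```

After the sync the slave is an exact copy of the master, and the returned dictionary tells you what an audit log should record.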

When it comes to transactional data it is usually the ‘front-office’ system that is passing data to the ‘back-office’, and rarely vice versa, the back-office accounting system being essentially more ‘passive’. Even if this transfer is done periodically rather than daily, it is not so difficult to ensure reconciliation.

In all of these cases integration is relatively easy. Data flow from ‘front’ to ‘back’. But the integration of manufacturing systems with accounting systems is much more difficult, and although I have done this too, there are areas, such as ‘costing’ (the determination of the ‘actual’ cost of a product) where the task becomes much more complex. Costs of raw materials, components and semi-finished items, often based on accruals, may already have been transmitted to the accounting system, and will usually need subsequent revision as further costing details are obtained. Techniques such as standard costing can alleviate problems of this kind, and data may still flow generally in one direction, from the manufacturing to the financial system, but the number of transactions and the reasons for passing data are many more.

Where, first, to record transactions?

More complex questions also arise when you must decide where to record various types of transaction. In which system, for example, should supplier invoices be booked? Most organisations prefer to record similar transactions in just one place in order to avoid mistakes and to simplify procedures for the finance department. But the data derived from supplier invoices may be needed by both systems.

In the professional services world, supplier invoices are often recharged to customers, so supplier invoice data are needed in the professional services modules. If they are first booked there, then the question arises whether it makes sense to book, say, utility invoices there also, rather than directly into the financial system’s accounts payable module.

In the manufacturing world, supplier invoices contribute to the cost of products, so must find their way either by direct initial input or through integration into the manufacturing system’s database.

The choice of where to enter data is further clouded by the fact that many ‘front-office’ systems are less well adapted to meet the burdens of statutory reporting, such as VAT reporting, and do not capture the information required for this purpose.

Transaction History

You must also make sure that modifications of data are achieved through additional transactions rather than through change to existing ones (the original values thereby being lost). You always need to know the before and after status of data. This is commonplace in most transaction systems, such as accounting systems, where posted journals are not usually modifiable. Corrections and any other kind of value changes are achieved through new journals. This is important because you must always know exactly which transactions have been transferred during integration, and once passed to another system, a transaction must not change.
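This ‘no modification, only new journals’ discipline can be sketched as follows. The structures are invented for illustration; the point is that a correction inserts a reversal plus a fresh posting, leaving the original untouched:

```python
journal = []  # append-only list of postings

def post(account, amount, reverses=None):
    """Append a new journal entry; entries are never edited or deleted."""
    entry = {"id": len(journal) + 1, "reverses": reverses,
             "account": account, "amount": amount}
    journal.append(entry)
    return entry["id"]

def correct(journal_id, account, new_amount):
    """Never edit in place: insert a reversal of the original plus a new posting."""
    original = journal[journal_id - 1]
    post(original["account"], -original["amount"], reverses=journal_id)
    return post(account, new_amount)

jid = post("transport", 120.0)    # original booking
correct(jid, "transport", 150.0)  # the invoice turns out to be 150

balance = sum(e["amount"] for e in journal if e["account"] == "transport")
print(balance, len(journal))  # → 150.0 3
```

The balance is correct, yet all three postings survive, so you always know the before and after status of the data, and anything already exported remains unchanged.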

Marking as Passed

It is also essential that there be some way of determining whether integration processes have been executed in respect of particular data. If you’re exporting transactions from one database to another you need either to be able to mark a transaction as exported or to store details of exported transactions in a separate table. And, of course, assuming that you have taken note of the second rule of integration, and as noted earlier, a transaction that has been exported should never change its value.
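In practice this can be as simple as an ‘exported’ flag that the interface program sets as it hands each transaction over. A sketch with invented structures:

```python
transactions = [
    {"tx_id": 1, "amount": 100.0, "exported": False},
    {"tx_id": 2, "amount": 250.0, "exported": False},
]

def export_new_transactions(txs):
    """Hand over only unexported transactions, then mark them as passed."""
    batch = [t for t in txs if not t["exported"]]
    for t in batch:
        t["exported"] = True  # once passed, a transaction must never change value
    return batch

first = export_new_transactions(transactions)
second = export_new_transactions(transactions)  # nothing left to send
print(len(first), len(second))  # → 2 0
```

Running the interface twice does no harm: the second run finds nothing to export, which is exactly the idempotence you need for safe re-runs and reconciliation.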

Date and Time Stamp

Make sure that all transaction data are ‘date and time stamped’. This isn’t essential, even if it is wise, in a ‘fully integrated’ system because processes are executed fully and at once and any processes that depend on the evaluation of data at the time of processing will of necessity evaluate that data correctly. But if a process is split between systems and cannot therefore be completed fully and at once any data evaluation may be influenced by later transactions and processes. Separated processes therefore need some other way of determining how to evaluate data ‘as of’ a particular date and time.
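Given such timestamps, a separated process can still evaluate data ‘as of’ a moment in the past, ignoring anything posted later. A minimal sketch; the postings and cutoff dates are invented:

```python
from datetime import datetime

postings = [
    ("2015-09-01T10:00:00", 100.0),
    ("2015-09-03T09:30:00", -40.0),
    ("2015-09-10T16:45:00", 75.0),  # arrived after the other system's batch ran
]

def balance_as_of(rows, cutoff):
    """Evaluate the data exactly as it stood at the cutoff date/time."""
    cut = datetime.fromisoformat(cutoff)
    return sum(amount for stamp, amount in rows
               if datetime.fromisoformat(stamp) <= cut)

print(balance_as_of(postings, "2015-09-05T00:00:00"))  # → 60.0
print(balance_as_of(postings, "2015-09-30T00:00:00"))  # → 135.0
```

A batch interface that ran on the 5th and one that ran at month-end would see different balances; the timestamp is what lets each of them reproduce the correct evaluation.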

In a final post we will look at how software systems should be developed with integration in mind.

Choosing New Software – Best of Breed or Integrated?

When you’re looking for new software, one of the first things you must decide is whether to look for two or more systems that are best in their particular fields, or one software system that does nearly everything you need.

This choice is between ‘best of breed’ systems and an ‘integrated’ one.
The problem with the best-of-breed choice is that you have to do the integration yourself, or commission it. This means developing software to map the data in one system to the data in the other, scheduling the execution of the integration software, ensuring that reference data such as account codes and department codes are synchronised in both systems, and that both systems can be reconciled. It gets even more complicated if there are more than two systems in the mix.
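The ‘mapping’ task mentioned above often reduces to translating one system’s codes into the other’s vocabulary. A hedged sketch, with all codes invented for illustration:

```python
# Hypothetical mapping from front-office department codes to back-office accounts.
code_map = {"SALES": "4000", "ADMIN": "6100", "IT": "6200"}

def map_transaction(tx):
    """Translate a front-office record into the back-office system's codes."""
    try:
        account = code_map[tx["department"]]
    except KeyError:
        # Unmapped codes must be surfaced, not silently dropped:
        raise ValueError(f"no account mapped for department {tx['department']!r}")
    return {"account": account, "amount": tx["amount"]}

print(map_transaction({"department": "SALES", "amount": 500.0}))
# → {'account': '4000', 'amount': 500.0}
```

Failing loudly on an unmapped code matters: a silently dropped transaction is precisely the kind of partial transfer that makes the two systems impossible to reconcile.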

The problem with the integrated choice is that you rarely get everything you want from one system, or not affordably. And it means that if you only want new software for one particular purpose (e.g. professional services management) then you have to throw away everything else that you have and start again.

It’s not easy to choose.

But if you decide to go the best-of-breed way then Infor’s ION integration framework solves the integration problems by providing technical, logical and procedural management of all aspects of integration.

See this YouTube video to see how we’ve integrated time@work (for Professional Services Management) and Infor SunSystems (for back-office accounting):