Loosely coupled systems
It is increasingly common to find that applications have remote API access. With the ubiquitous use of web technology, XML-based remote interfaces can be added quite quickly.
The temptation now is to connect all sorts of systems together using these APIs in an event-driven fashion. For example, suppose we want to connect our customer database to our mailing list system, which has a web-based API for managing users. This is easy to set up: whenever there is a change to our customer database, a trigger runs some code that calls the mailing list API, and the change is propagated in real time. Seeing systems connected together with this kind of real-time interaction is addictive.
If we’re not careful, though, we end up building tightly coupled systems when this might not be what we want.
To explain this, let’s talk about a sending system and a receiving system and go through some scenarios:
What happens if we want to take the receiving system down for an upgrade?
Hopefully that’s something that was planned for: the sending system can detect that the receiving system is not available and queue the changes. Then, when the receiving system comes back online, the sending system can process the queue. So that one was easy enough.
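The queue-and-flush behaviour just described can be sketched like this. The `receiver.apply()` call and `ConnectionError` failure mode are assumptions standing in for whatever real API the receiving system exposes:

```python
class Sender:
    """Sends changes to a receiver, queueing them while the receiver is down.

    The receiver object and its apply() method are hypothetical stand-ins
    for the real remote API; here a ConnectionError signals "receiver down".
    """

    def __init__(self, receiver):
        self.receiver = receiver
        self.pending = []  # changes queued while the receiver is offline

    def send(self, change):
        self.pending.append(change)
        self.flush()

    def flush(self):
        # Deliver queued changes in order; stop at the first failure so
        # ordering is preserved and nothing is lost.
        while self.pending:
            try:
                self.receiver.apply(self.pending[0])
            except ConnectionError:
                return  # receiver still down; keep the queue for later
            self.pending.pop(0)
```

When the receiving system comes back up, a single call to `flush()` (or a periodic retry) drains the queue in order.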
What happens if the receiving system dies and has to be restored from a backup tape?
Now that’s a bit more complicated, because the changes might have been lost. To get around this, all we need to do is ensure that every change is written out to a table or file of changes. That way, after restoring from backup, we can replay the change log to get the receiving system back to the right state.
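A minimal sketch of such a change log, using an illustrative SQLite schema (the table and column names are assumptions, not from the original). The monotonically increasing `id` is what makes replay from a known point possible:

```python
import sqlite3

# Every change is appended to this table before anything else happens,
# so the log can be replayed after the receiving system is restored.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE change_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    change TEXT NOT NULL
)""")

def record_change(change):
    """Append one change to the durable log."""
    db.execute("INSERT INTO change_log (change) VALUES (?)", (change,))
    db.commit()

def replay_from(last_applied_id):
    """Return all changes after the receiver's last known change id,
    in the order they originally happened."""
    rows = db.execute(
        "SELECT id, change FROM change_log WHERE id > ? ORDER BY id",
        (last_applied_id,))
    return rows.fetchall()
```

After a restore, the receiving system reports the id of the last change it saw, and `replay_from()` supplies everything since then.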
What happens when the format of the changes changes (if you see what I mean)?
This happens all the time. The format changes, so we change the code that fires off the changes but forget to change the code that stores them. Or we do remember, but make a mistake in how the changes are stored. This is also easy to get around: we just ensure that the process on the sending system that sends changes to the receiving system uses the stored changes, after they are stored, as its source of information. That way there is only one source, and we can’t introduce an error that we don’t find until much later.
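The single-source rule can be sketched as a publisher that only ever reads from the stored log, never from the live event that triggered the change. `stored_changes` and `send` are hypothetical stand-ins for reading the change table and calling the receiver’s API:

```python
def publish(stored_changes, send):
    """Publish changes to the receiver strictly from the stored log.

    stored_changes: an iterable of changes read back from the change
        table/file (never the in-memory events that produced them).
    send: a callable that delivers one change to the receiving system.

    Because the stored log is the only source, a bug in how changes are
    written shows up the moment they are published, not weeks later
    during a replay.
    """
    sent = 0
    for change in stored_changes:
        send(change)
        sent += 1
    return sent
```

The key design choice is what the function does *not* take: there is no path for a change to reach the receiver without first surviving a round trip through storage.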
What happens if for some reason we don’t want to send the changes?
Let’s imagine the receiving system develops a fault in its API, say after an upgrade, and trying to use the API causes it to crash. Obviously we don’t want to turn off the whole sending system. We could firewall off the receiving system, but that’s a bit of a pain and may not be possible. The obvious thing to do is to turn off the process on the sending system that fires off the changes.
And there you have it, a loosely coupled system. All it takes is for us to develop the sending system so that it:
- writes out every change to a table or file of changes
- uses the written out changes as the source of data for publishing those changes
- makes the process that publishes those changes a separate process that can be started and stopped independently of any other systems
- enables that process to start from any point in the stored list of changes so that you can replay changes.
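The four requirements above can be sketched together as a small publisher, modelled here as a class rather than an operating-system process for brevity. `log` and `send` are hypothetical stand-ins for the stored change list and the receiver’s API:

```python
class Publisher:
    """A stoppable, replayable publisher for a stored change log.

    log: the stored list of changes (the single source of truth).
    send: a callable that delivers one change to the receiving system.
    """

    def __init__(self, log, send):
        self.log = log
        self.send = send
        self.position = 0     # index of the next change to publish
        self.running = False

    def start(self, from_position=None):
        # Starting from an earlier position replays old changes,
        # e.g. against a test instance of the receiving system.
        if from_position is not None:
            self.position = from_position
        self.running = True
        self.run()

    def stop(self):
        # Pausing publication is all it takes to isolate a faulty receiver;
        # the sending system itself keeps running and logging changes.
        self.running = False

    def run(self):
        while self.running and self.position < len(self.log):
            self.send(self.log[self.position])
            self.position += 1
```

Stopping, restarting, and replaying are all just manipulations of `position` and `running`; nothing in the rest of the sending system needs to know the publisher exists.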
The final thing to point out is that this design has an unexpected benefit. If we want to test a new version of the receiving system’s API, we can simply set the sending system’s publisher process to replay the last few days of changes and see whether the receiving system copes. No special code needed.