Security in the context of public APIs is a well-recognized problem, and all financial institutions are focused on it. OAuth's security track record is poor, and there are known and emerging challenges to be addressed. I could go on and on.
One of the problems that tends to be overshadowed by all the enthusiastic hype, and by security (which is of utmost importance), is… performance.
Performance of back-end systems suffers as the number of requests from partners and end consumers using publicly available APIs grows. There's no way around it; it must be faced head-on.
High response times will kill the user experience and make your partners and end users simply go somewhere else.
Techniques such as caching or load balancing across multiple processing nodes, borrowed from the Internet's top five, may not help as much as you would expect. They will certainly help you accept more requests, but the real processing usually involves accessing back-end data stores, especially when APIs are expected to return the latest data and our priority is to avoid the stale-data problem.
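To make the caching trade-off concrete, here is a minimal sketch (plain Python; `TTLCache` and `get_or_load` are names invented for this example) of a time-to-live cache in front of a back-end query. It absorbs repeated reads without touching the data store, but any cache hit may return data as old as the TTL, which is exactly the stale-data risk discussed below.

```python
import time

class TTLCache:
    """Minimal time-based cache: absorbs repeated reads, but may serve stale data."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]          # cache hit: no back-end call, possibly stale
        value = loader(key)          # cache miss: hit the back-end data store
        self._store[key] = (value, now + self.ttl)
        return value
```

With a 60-second TTL, every consumer of that key within a minute is served the same value; whether that is acceptable depends entirely on the business meaning of the data.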
You are not Netflix or Facebook after all.
What may be tolerable for social network apps is not welcome in business apps. For instance, retrieving a list of pictures posted online and missing one or two uploaded in the previous minute is usually acceptable; eventual consistency will, well, eventually make all the receiving nodes see the same list of images.
Picture: this is not your business, is it?
A contrasting example could be a public API for partners to retrieve a bank account balance. Of course, we cannot assure perfect transactional consistency (as in a local, non-distributed system; in fact, we never could, but that would be a topic for another article), but we obviously cannot afford to cache and effectively return stale data (to the extent that this can be avoided in a distributed system).
Picture: this is more like your business; you cannot afford to work with stale data
As a consequence, when building public APIs you have to optimize back-end storage throughput using various techniques.
It's tempting to (simply, but not so simply) add more servers with more RAM, increase disk and network I/O, and declare victory. But that won't last long given the rapid increase in API requests from your partners and clients.
The Command Query Responsibility Segregation (CQRS) pattern and other architectural best practices may be helpful, as query dynamics and requirements are almost always different and justify separate structures and routines for writing data to and reading data from the back-end storage.
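As a rough illustration of CQRS (a toy in-memory sketch, not production code; the event names and structures are assumptions made for this example), the write side records commands as events, while a separate, denormalized read model answers balance queries without ever touching the write path:

```python
from collections import defaultdict

# Write side: commands append events to a log (the system of record).
event_log = []

def deposit(account_id, amount):
    event_log.append(("deposited", account_id, amount))
    project(event_log[-1])   # in a real system, projection would be asynchronous

def withdraw(account_id, amount):
    event_log.append(("withdrawn", account_id, amount))
    project(event_log[-1])

# Read side: a denormalized projection optimized for balance queries.
balances = defaultdict(int)

def project(event):
    kind, account_id, amount = event
    balances[account_id] += amount if kind == "deposited" else -amount

def get_balance(account_id):
    return balances[account_id]  # queries never scan the write model
```

The point is the separation: the event log can be tuned for write throughput while the projection is tuned purely for the read pattern your API actually serves.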
Many, if not most, back-end systems are not optimized in this way, but when you build new ones or introduce significant architectural changes, treat public API actors as equal or even more important citizens than the users of your mobile and web apps.
Traditional database scaling and performance optimization techniques are still valid, and not a bit less important than before APIs began to take over the world of enterprise systems.
To make things even more difficult, your back-end system is probably well optimized for, and somewhat dominated by, its current (user-interaction-driven) usage scenario, and even by the user interface itself; this is really hard to avoid when building transactional systems for traditional scenarios.
Public APIs will almost certainly require different "slices" of your data than internal systems do, so it may be a good idea to create optimized data structures (logical or virtual) to improve query performance for what really is a new usage scenario.
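A minimal sketch of such a "slice" (the record shape and field names are invented for illustration): internal storage keeps one normalized row per transaction, while a derived structure is shaped for the public API's query pattern, here a per-partner total.

```python
# Internal, write-optimized shape: one normalized row per transaction.
transactions = [
    {"partner": "p1", "account": "a1", "amount": 120},
    {"partner": "p1", "account": "a2", "amount": -40},
    {"partner": "p2", "account": "a3", "amount": 300},
]

def build_partner_slice(rows):
    """Read-optimized 'slice' for the public API: totals aggregated per partner."""
    slice_ = {}
    for row in rows:
        slice_[row["partner"]] = slice_.get(row["partner"], 0) + row["amount"]
    return slice_
```

Whether the slice lives as a materialized view, a replica table, or an in-memory projection is an implementation detail; what matters is that the API query no longer pays for the internal, transaction-oriented layout.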
On the other hand, ambitious attempts to keep everything totally reusable will almost certainly fail, resulting in very chatty internal APIs stitched together with complex assembler sets, API gateways, and multiple calls, effectively killing performance for both internal and external API consumers. One size does not fit all in this context either.
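A back-of-the-envelope model of why chatty composition hurts (the cost figures below are arbitrary assumptions, purely illustrative): per-call overhead is paid once for a purpose-built query, but N times when a response is assembled from N fine-grained internal calls.

```python
CALL_OVERHEAD_MS = 5   # assumed fixed per-call cost: network, auth, parsing
QUERY_COST_MS = 20     # assumed cost of one back-end query

def chatty_latency(n_items):
    """Response assembled from n fine-grained internal calls."""
    return n_items * (CALL_OVERHEAD_MS + QUERY_COST_MS)

def purpose_built_latency(n_items):
    """A single query shaped for the external API's actual need."""
    return CALL_OVERHEAD_MS + QUERY_COST_MS
```

Even with generous parallelism and caching, the chatty variant's cost grows with result size, while the purpose-built query's cost stays roughly flat.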
Yet there are still architects and decision makers obsessed with reusability at all costs (without ever mentioning the cost variable of the equation).
Do not underestimate the impact of external APIs on your solution's performance, and the architectural challenges and changes that follow.
An external API project done properly is not a matter of exposing via REST what you have right now, exactly as it works right now; it means adapting your systems, including their data layers, to the new reality of the API economy.
Altkom Software & Consulting has more than 16 years of experience creating both internal and external APIs, from SOA, through simple REST APIs, to advanced API orchestration and versioning scenarios. Consider us a partner to help you get through all these complex challenges.
Author: Jacek Chmiel, Technical Director, Altkom Software & Consulting