
Managing API interactions – look at the edge cases for the real story

In most scenarios that involve statistics, we look for overall trends, seeking generalizations that give us the big picture of a situation.
In most instances, that approach of discarding the outliers, or edge cases, works very well. But in technology, and especially in technology that connects directly with end-users, it's the edge cases that demand special attention.
The most important stakeholders for most organizations are their customers. And customer experience, the focus of so many organizations, is a concept that encompasses many different factors. It might begin with the GUI and UX of a mobile app, or the way a website performs when asked to achieve a simple task. The speed and reliability of the underlying services matter just as much. Service designers, application developers and web designers all work very hard to make their products fast and responsive, to give the customer the best possible experience.
But often, the API connections between the customer and the app, or between the microservices that make up the larger application, falter under high loads, negating all that hard work: the hours spent testing the GUI, the QA sessions around the UX, and indeed all the resources the organization has invested to that point.
A critical link in this customer experience chain is the performance of the underlying infrastructure that processes all of these API calls, chief among them the API gateway. API gateways usually function just fine; it's when loads climb and traffic peaks that the edge cases suddenly become important. Those edge cases, where high loads are placed on APIs, are the subject of a whitepaper from GigaOm, whose researchers tested API gateways under increasing amounts of stress. The outlier cases that would normally be ignored statistically are exactly the ones that matter in API performance measurement: they represent the high-load scenarios where enterprises might be processing API traffic at over 1,000 calls a second, or peaks in demand of that order.
As the document states, the results from the 99th percentile of loads and upwards are the critical zone, where poor API gateway performance can create backlogs, with latencies that in some cases climb toward a full second.
Think the 99th percentile is too stringent a measure? Consider this. There are roughly 4.5 billion users on the internet. If one percent of those users' API calls fail or are delayed, simple arithmetic tells us 45 million users' experience will suffer. Coincidentally, 4.5 billion is also about the average number of API calls an enterprise's digital service will generate per month. Can you afford for 45 million customer sessions to degrade?
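As a back-of-the-envelope check, that arithmetic is easy to reproduce. This is a minimal sketch: the 4.5 billion and one percent figures come from the paragraph above, while the 30-day month is our own simplifying assumption.

```python
# Back-of-the-envelope arithmetic using the figures above.
MONTHLY_API_CALLS = 4_500_000_000  # ~average API calls per month for an enterprise service
TAIL_FRACTION = 0.01               # the one percent beyond the 99th percentile

degraded = int(MONTHLY_API_CALLS * TAIL_FRACTION)
print(f"Calls beyond the 99th percentile per month: {degraded:,}")  # 45,000,000

# The sustained rate implied by that monthly volume (assuming a 30-day month):
seconds_per_month = 30 * 24 * 60 * 60
print(f"Average rate: {MONTHLY_API_CALLS / seconds_per_month:,.0f} calls/second")  # ~1,736
```

Note that the implied average rate, around 1,700 calls a second, also squares with the "over 1,000 calls a second" figure mentioned earlier.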
However, the impact can be even worse, affecting far more than one percent of users. Using the analogy of the fast-food drive-through (we love an analogy at Tech Wire Asia), the document poses a scenario in which a single order takes ten times longer to process than the usual minute or so. In isolation, one slow order is a minor problem. But all the cars queuing behind the delayed order suffer the delay too.
In customer experience terms, that means one delayed API call or response, moving through the gateway, creates latency for every call queued behind it, as the sketch below illustrates. And that means multiple customers, potentially numbering in the thousands per second, are negatively impacted.
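Here is a toy simulation of that single-lane queue. The numbers are purely illustrative, mirroring the drive-through example rather than anything measured in the paper: one request in twenty takes ten times the normal service time, and every request behind it finishes late as a result.

```python
# Toy single-lane queue illustrating head-of-line blocking: every request
# must wait for everything ahead of it, so one slow "order" delays the rest.
service_times = [1.0] * 20   # twenty requests, each normally taking 1 time unit
service_times[5] = 10.0      # one outlier takes ten times longer

clock = 0.0
for i, t in enumerate(service_times):
    clock += t               # request i completes at this time
    on_time = i + 1          # completion time if every request had taken 1 unit
    lateness = clock - on_time
    if lateness > 0:
        print(f"request {i}: done at t={clock:.0f}, {lateness:.0f} units late")
```

Running this shows that the slow request and every request behind it each finish nine time units late: one outlier, fifteen degraded "customers".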
And as our previous article on this subject explored, the API gateway, as the conduit through which a great deal of the organization's internet traffic flows, can become the bottleneck that diminishes the whole organization's reputation as customer experiences suffer en masse.
The paper, which is available for download here, shows empirical results from real-world load tests of several common API gateway topologies (excluding Google's Apigee, whose terms and conditions prevent its use in comparative tests). The most efficient gateway under high loads in every case was the NGINX API gateway, deployed in “vanilla” form via the NGINX Controller interface, with no CLI tweaks, no flags set in custom compiles, and no similar low-level alterations.
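The paper includes GigaOm's own test harness; for readers who just want a feel for what such a measurement involves, a minimal latency probe might look like the sketch below. The endpoint URL, worker count and request count are all placeholder assumptions of ours, there is no warm-up or error handling, and a real benchmark needs a proper tool and methodology.

```python
# Minimal sketch of a concurrent latency probe against a gateway endpoint.
# GATEWAY_URL is a placeholder; treat any numbers this prints as indicative at best.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

GATEWAY_URL = "http://localhost:8080/api/health"  # hypothetical endpoint

def timed_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(GATEWAY_URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = sorted(pool.map(timed_call, range(2_000)))

p50 = statistics.median(latencies)
p99 = latencies[int(len(latencies) * 0.99) - 1]  # the tail the paper focuses on
print(f"p50: {p50 * 1000:.1f} ms   p99: {p99 * 1000:.1f} ms")
```

The paper's point is precisely that the p99 figure, not the median, is where gateways separate themselves under load.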
The reality of today's internet, and of the applications and services that proliferate on it, is one of interconnection between discrete pieces of software: between organizations and their partners, and between the internal services, applications and (increasingly) microservices that make up the technology stack of the enterprise as a whole.
Therefore, API gateway technologies have been pushed to the fore as a critical element of the organization's digital services.
With reputations won and lost in milliseconds, the NGINX API gateway and NGINX Controller pairing is proving to be the most responsive, most efficient, and best suited to an elastically scaling business. It's available today and can be tested against its competitors (the paper includes the GigaOm test code). The NGINX API Gateway's capabilities make it a good match for any business with high API traffic demands.
Furthermore, with API calls now accounting for the majority of internet traffic, and that proportion set to grow for the foreseeable future, every organization should be considering its options. The question is whether your gateway solution is ready for peak API traffic. We hold that the NGINX API Gateway is the best solution out there today, and for the future too.
But don't just take our word for it: read the full details for yourself in the GigaOm paper, today.