
How NGINX uses guardrails and not gates to integrate security

There are two interconnected reasons why cybersecurity roles are so well paid. The first is that keeping modern applications and services safe and secure is genuinely hard. The second follows from the first: because it is so difficult to do well, few people do it well. Demand for cybersecurity experts far outstrips supply, and the salaries reflect that scarcity.
Security is difficult for many reasons, and the SecOps role especially so in a DevOps environment: ever-shifting hybrid and multi-cloud topologies, the required speed of application development, and, overhanging it all, the challenge posed by some very smart cybercriminals with their eyes on illicit profits.
Keeping organizations safe as they roll out new apps and services is therefore a tall order. SecOps is seen as a drag on the speed and agility development demands, while the few security experts an organization has managed to hire live in constant alarm that their colleagues are putting the business at risk.
Many organizations have sought to address this by cross-training their DevOps teams in cybersecurity. It is a well-meant move, but in practical, day-to-day terms digital security is such a moving target that it quickly becomes apparent that full-time, dedicated SecOps staff are needed, and we are back where we began.
Thinking guardrails, not gates
There is now an answer to the conundrum of how DevOps can follow CI/CD principles while embedding security policies as a matter of course. The answer comprises a trio of G’s: guidance, governance, and guardrails, a framework that removes the disparity between DevOps and SecOps and lets the organization operate more safely at every stage of application development, publication, and production.
The guidance portion of the equation is the formulation of security policy sets that define what is and is not permitted in different, business-focused situations. These might carry names like “Financial Element Policy” or “Supply Chain Policy Set” and form a catalog of ready-made policies, available via a self-service portal to all of the teams that make up the modern enterprise IT ecosystem: NetOps, DevOps, application developers, systems architects, and so on. In practical terms, security is embedded according to centrally defined policy, as the sketch below illustrates.
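As a rough illustration, a policy set can be thought of as a small, centrally owned definition that application teams select rather than author. The sketch below is a minimal example only; every field and name in it is invented for the purpose and is not NGINX Controller’s actual schema.

```python
# Illustrative only: a hypothetical, centrally defined policy set that a
# self-service portal could expose to DevOps and NetOps teams. Field names
# are invented for this sketch and are not NGINX Controller's actual schema.
from dataclasses import dataclass, field


@dataclass
class PolicySet:
    name: str                       # e.g. "Financial Element Policy"
    tls_min_version: str            # minimum TLS version enforced at the data plane
    waf_profile: str                # which WAF profile to attach
    allowed_origins: list[str] = field(default_factory=list)
    rate_limit_rps: int = 100       # default requests-per-second ceiling


# SecOps publishes the ready-made sets once...
CATALOG = {
    "financial": PolicySet("Financial Element Policy", "1.2", "strict",
                           ["https://pay.example.com"], 50),
    "supply-chain": PolicySet("Supply Chain Policy Set", "1.2", "standard",
                              ["https://partners.example.com"]),
}

# ...and application teams simply pick one from the catalog instead of
# hand-rolling security settings per project.
policy = CATALOG["financial"]
print(policy.waf_profile)  # "strict"
```

The point of the pattern is ownership: the catalog is written and maintained centrally, while consuming teams only ever reference an entry by name.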
The guidance sets also deliver the second ‘G,’ governance of data privacy and security. Compliance is built into the guidelines, reflecting the raft of regulations that affect the business’s workings. When a jurisdiction changes its rules, updating the relevant policy propagates the change to live systems (apps and services already in production), to new iterations of those applications, and to any new projects as they move through the development cycle.
The guardrails of security policy therefore protect every aspect of application development, whether workloads reside on virtual machines, in containers, in different cloud instances, or in the data center. Rather than slamming gates shut in the path of development at any stage of the DevOps pipeline, the guardrails are laid down in advance by cybersecurity experts, who no longer have to (if they ever could) pore through raw code or comb log files to find where security holes might exist.
In brief, guardrails are a matter of chronology: rather than reactively interrupt the DevOps pipeline, security policy proactively connects guidance and governance to the app pipeline.
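In pipeline terms, a guardrail can be as simple as an automated check that a deployment references an approved, centrally managed policy set before it proceeds. The sketch below is a minimal, hypothetical example of such a check; the manifest format and policy names are assumptions made for illustration, not part of any NGINX product.

```python
# A minimal "guardrail" step for a CI/CD pipeline: instead of a manual
# security gate, the job verifies that the deployment manifest references
# an approved, centrally managed policy set. The manifest field and the
# policy names are placeholders for illustration only.
import json
import sys

APPROVED_POLICY_SETS = {"financial", "supply-chain", "internal-default"}


def check_guardrails(manifest_path: str) -> bool:
    """Return True if the app manifest points at an approved policy set."""
    with open(manifest_path) as fh:
        manifest = json.load(fh)
    policy = manifest.get("security_policy_set")
    if policy not in APPROVED_POLICY_SETS:
        print(f"Guardrail failed: '{policy}' is not an approved policy set.")
        return False
    print(f"Guardrail passed: policy set '{policy}' is centrally managed.")
    return True


if __name__ == "__main__":
    # e.g. python check_guardrails.py deploy/manifest.json
    sys.exit(0 if check_guardrails(sys.argv[1]) else 1)
```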
NGINX Controller and the unified ecosystem
Most readers of this article will already be familiar with F5 NGINX, and many will have live NGINX solutions running in production, testing, and development environments. NGINX Controller draws security, API management, application development, and powerful analytics together into a flexible self-service environment. That lets each part of the IT function concentrate on its own role and removes the friction between what may, until now, have been competing branches of the same tree.
NGINX has long been a go-to choice at the data plane level of many IT projects, perhaps first introduced into the stack as a load balancer or web server. From those roles, and with its strong, community-based open-source roots, the NGINX project has transformed over time.
Now the NGINX-based IT schema means that any data plane interaction (API calls, DNS resolution, traffic to CDNs, load balancing, and so on) adheres to security guardrails set up in NGINX and overseen by NGINX Controller.
NGINX provides an environment in which the entire IT function gains protection, oversight and analytics, as well as an integrated system that improves app performance and drives down operational costs. API calls remain API calls, Kubernetes deployments continue, and DevOps teams still hit their KPIs — but oversight, analysis and security come as an integral part of the picture.
And as we’re focusing on security, it’s worth saying that the “build once, run anywhere” ideal of modern apps in the DevOps sense is replicated in security policies: build once, adhere everywhere. That means significant security savings, fewer cases of friction between the priorities of different teams, and an assurance everywhere that best security practice is being followed.
By establishing the NGINX Controller Application Security add-on module early on and setting up guardrails, DevOps automatically embeds policy in every part of the application development process, from first sketches right through to the iterative updates applied to services in production.
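To make that concrete, a deployment job might attach a named policy to an app component through Controller’s REST API. The sketch below is hypothetical: the endpoint path, payload fields, and environment variables are placeholders rather than the documented NGINX Controller API, so treat it as an outline of the pattern and consult the Controller API reference for the real resource names.

```python
# Hypothetical sketch of a CI/CD job attaching a centrally defined security
# policy to an app component at deploy time. The endpoint path, payload shape,
# and environment variables are placeholders, NOT the documented NGINX
# Controller API.
import os

import requests

CONTROLLER = os.environ.get("CONTROLLER_URL", "https://controller.example.com")
TOKEN = os.environ["CONTROLLER_TOKEN"]  # issued out-of-band by SecOps/NetOps


def attach_security_policy(app: str, component: str, policy_set: str) -> None:
    """Associate a named policy set with an app component (illustrative only)."""
    url = f"{CONTROLLER}/api/v1/apps/{app}/components/{component}"  # placeholder path
    payload = {"security": {"policySet": policy_set, "wafEnabled": True}}
    resp = requests.put(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # The pipeline applies the same guardrail to every iteration it deploys.
    attach_security_policy("payments", "checkout-api", "financial")
```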
Security services and tools are adaptable, of course, on a per-app basis, but the majority of the heavy lifting is done automatically. And when the enterprise shifts its deployment strategies to a new cloud provider, or back to the data center, or any combination required, the nature of the NGINX platform ensures continuity, agility, and the same quality of customer experience.
To learn more about what’s new and what’s on the horizon for SecOps, DevOps, and the rest of the IT function, check out the NGINX website and while you visit, why not request a free trial?