At a high level, web application delivery is exactly what it sounds like: enabling users to run applications over the internet. But while it sounds straightforward, it's challenging to deliver on. It takes the right mix of services, sufficient computing power and low-latency networking to provide a reliable user experience, and that experience must also be secure.
Technology changes constantly, as do the parameters for providing an optimal user experience. Let’s look at the evolution of web application delivery over the last three decades.
The web application delivery model was fairly simple through the 1990s. For perspective, this is around the time the sitcom Friends launched its first season, and Amazon.com was just beginning to take shape. It’s been a while.
Early web application delivery was based mainly on the LAMP stack, with all components on a single machine. LAMP, by the way, is short for Linux, Apache, MySQL and PHP. It refers to a PHP web application running on an Apache web server running on Linux and connecting to a MySQL database.
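To make that pattern concrete, here is a minimal sketch of the LAMP request flow, written in Python rather than PHP for brevity, with an in-memory SQLite database standing in for MySQL. The function and table names are illustrative, not from the course.

```python
import sqlite3

def handle_request(db: sqlite3.Connection, path: str) -> str:
    # The "PHP" layer of the stack: build a dynamic page from a
    # database lookup, much as index.php would query MySQL before
    # rendering HTML back to the browser.
    row = db.execute(
        "SELECT body FROM pages WHERE path = ?", (path,)
    ).fetchone()
    return row[0] if row else "404 Not Found"

# The "MySQL" layer: a database holding the site's content.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO pages VALUES ('/', '<h1>Hello</h1>')")

print(handle_request(db, "/"))        # page rendered from the database
print(handle_request(db, "/missing"))
```

The point of the sketch is the coupling: the web handler and the database live in one process on one machine, which is exactly the arrangement the next section describes as a security liability.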
With the LAMP stack on the same machine, an attacker who breached one component could easily breach the others. As a defense mechanism, enterprises decided to spread stack components across multiple systems. But that meant much more equipment to manage, and inefficient use of compute resources.
In the late 1990s to mid-2000s, enterprises moved to virtual machines (VMs) to get past those hurdles. Several virtual machines could run simultaneously on a single physical server, which was a much more efficient use of resources.
And security improved.
In a virtualized environment, attackers who accessed one part of the application stack couldn't access the others, because each component was in a different virtual machine. Plus, it took much less effort for administrators to manage, back up and restore virtual machines, making overall security much stronger.
Then, the late 2000s saw the rise of cloud computing. It was so much easier to deliver apps as a service in the cloud. Initially, it wasn’t a dramatic change for security: It was just a matter of moving those virtual machines from on-premises to the cloud and adjusting appropriately.
Over time, however, agencies needed to rapidly and dynamically scale virtual machines to meet new demands. Virtual machines were slow to spin up and spin down, so running them in the cloud became less ideal. And because they tended to run for long stretches, they dug into the budget.
So today, agencies often use what’s called a cloud-native environment. Rather than virtual machines, developers work with containers and microservices, which provide greater scalability, flexibility and speed. This shift also helps agencies control costs by spinning up the components they need and spinning down whatever is not being used.
To make a cloud-native environment as secure as possible, you need to address three key issues:
- One, think in terms of securing a platform, not securing an operating system.
- Two, don’t forget about the elasticity of the native cloud environment. That is, how an application can quickly spin up from one unit to thousands of units.
- And three, think about the new extended network architecture that cloud introduces to your environment.
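The elasticity point above, an application scaling from one unit to thousands, can be sketched as a toy autoscaling rule: decide how many container units to run from the current load. The function name, per-unit capacity and the 1-to-1,000 bounds are illustrative assumptions, not figures from the course.

```python
import math

def desired_units(requests_per_sec: float,
                  capacity_per_unit: float = 100.0,
                  min_units: int = 1,
                  max_units: int = 1000) -> int:
    # How many units would be needed to absorb the current load,
    # clamped to the floor and ceiling the platform allows.
    needed = math.ceil(requests_per_sec / capacity_per_unit)
    return max(min_units, min(max_units, needed))

print(desired_units(50))       # light load: stays at 1 unit
print(desired_units(25_000))   # traffic spike: scales to 250 units
print(desired_units(500_000))  # capped at max_units: 1000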
This article is an excerpt from GovLoop Academy’s recent course, “How Secure Is Your Web App Delivery?” created in partnership with ClearShark and Palo Alto. To learn more, access the full course here.