Nginx Vs Apache is an age-old debate. Both servers are industry leaders, together serving more than 50% of all web pages on the internet. Though they belong to the same industry and serve a similar purpose, each takes a very different approach to its end goal, i.e., serving web pages.
Today, we’re going to compare both on the basis of:
- Basic Architecture
- Static Vs Dynamic content
- Configuration
- OS Support
- Modules & Flexibility
- Documentation & Support
- Nginx Vs Apache- What to Choose and When?
It will be interesting to see what distinct approaches these servers bring to the web. But before we get into the nitty-gritty, here’s a short history of both platforms:
A Brief Overview
Apache is an Open Source HTTP server that works across multiple platforms (Linux, Mac, Windows).
Being an industry old-timer (its first version was released in 1995), it has contributed significantly to the initial growth of the World Wide Web.
It also amassed users in huge numbers, holding a market share of more than 50% in the early 2000s. It is particularly popular for its modular structure: it uses Multi-Processing Modules (MPMs) to meet varied infrastructure needs, meaning developers can customize the software to a great degree.
However, over the past few years, Apache has lost its market share to competitors like Nginx, its share now being 23.98%, as of Feb 2020.
The major reason for this decline is that Apache is slower than Nginx when it comes to serving static website content, and it cannot handle heavy traffic due to its thread-based architecture. We’ll elaborate more on this below.
Nginx, apart from being an HTTP web server, can also work as a reverse proxy, load balancer, and mail proxy.
The software was specifically created to solve the C10K problem. Servers using thread-based infrastructure (like Apache) were not able to handle a large number (say 10,000) of concurrent requests without exhausting their resources and dropping connections.
To overcome this problem, Igor Sysoev first released Nginx in 2004 as an Open Source software. After swiftly gaining popularity and a large client base, Nginx Inc was established in 2011 to continue the development of the Open Source software as well as to offer commercial products.
As of Feb 2020, Nginx holds the largest market share by serving 37.07% of all websites on the web.
Nginx shook up the web server industry by bringing superior performance to the table with its asynchronous, event-driven architecture. It is now widely regarded as the fastest web server on the internet, known for its efficient resource utilization and ability to scale easily.
Now that we know how these two industry giants got where they are today, let’s roll into an in-depth comparison right away!
Nginx Vs Apache:
1. Basic Architecture
Both Nginx and Apache have different basic architectures through which they handle internet traffic and serve web pages.
Apache was developed with a process-based architecture. This means:
- Every time the parent process receives a request, it creates a child process to handle it. Each child process contains a single thread, so it can handle only one request at a time.
One major setback of this model is that every child process takes up resources, i.e., some space in RAM.
Now picture a traffic hike. A large number of requests means Apache will have to spawn new child processes to handle each query. All of these new processes will fill up the RAM. Once a certain limit is reached, there will be no more space left to handle more requests!
In other words, if the number of requests received is below the limit of available processes, Apache works fluently. Once requests exceed this limit, Apache’s resources are drained and it starts dropping connections.
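To make the process limit concrete, here is a sketch of how the prefork model is typically tuned in httpd.conf. The values are illustrative examples, not recommendations; each request occupies one single-threaded child process, so MaxRequestWorkers is the hard cap on simultaneous requests, beyond which new connections queue.

```apache
# Illustrative prefork tuning (Apache 2.4); values are examples only.
<IfModule mpm_prefork_module>
    # Child processes created at startup:
    StartServers             5
    # Idle children kept ready to absorb small bursts:
    MinSpareServers          5
    MaxSpareServers         10
    # Hard cap on simultaneous requests (= processes):
    MaxRequestWorkers      250
    # 0 means children are never recycled:
    MaxConnectionsPerChild   0
</IfModule>
```

Raising MaxRequestWorkers buys headroom, but only at the cost of RAM, since every extra slot is a full process.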
Maybe this is why Apache was so popular at a time when internet traffic was light. As the web grew, Apache’s developers created Multi-Processing Modules (MPMs) to handle the ever-increasing traffic.
What we discussed above is the prefork module, or MPM_prefork. The other two modules are:
Worker module (MPM_worker)
In this module, the parent process creates new child processes, only this time there can be multiple threads per process. Meaning, one process can handle multiple requests. This module can scale better than the prefork module because threads are more resource-efficient than processes.
Since each worker process has multiple threads, new connections can be handled faster. New requests do not have to wait for a whole process to be free; they can immediately connect to an idle thread.
Event module (MPM_event)
The only difference between the Event and Worker modules is that Event creates separate threads to handle keep-alive connections.
- A keep-alive connection is a connection that is kept open even after the initial request has been handled, so that multiple requests can be made through the same connection without closing and re-opening it again and again.
A keep-alive connection holds a thread open, even if active requests aren’t being made. The Event module creates separate threads for these and passes on the live requests to other threads so that resources can be allocated more efficiently.
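For illustration, here is how the event MPM and keep-alive behaviour are typically configured in httpd.conf. The numbers are example values, not recommendations; a handful of processes each run many threads, while idle keep-alive connections are parked on a listener thread instead of blocking a worker thread.

```apache
# Illustrative event MPM tuning (Apache 2.4); values are examples only.
<IfModule mpm_event_module>
    # A few processes, each with many threads:
    StartServers            2
    ServerLimit            16
    ThreadsPerChild        25
    # Overall ceiling on simultaneously served requests:
    MaxRequestWorkers     400
</IfModule>

# Keep-alive behaviour is controlled separately:
KeepAlive On
# Seconds an idle keep-alive connection stays open:
KeepAliveTimeout 5
MaxKeepAliveRequests 100
```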
Apache’s basic architecture is blocking: new requests have to wait in a queue for processes to become free. This means it cannot scale as fast as today’s internet traffic demands, which is why, when it comes to choosing a high-performance server, Apache probably won’t be your best pick.
But, don’t give up on it yet! Despite the issues discussed above, Apache has continued to be popular among developers. Mainly because its modular framework makes it highly configurable and customizable. Third-party developers can easily contribute to its source code to increase its functionality.
Nginx was specifically built to overcome the resource taxing and scalability problems Apache’s architecture posed. It works on an asynchronous, non-blocking, event-driven architecture.
Here’s how it works:
- It has one master process that creates a number of child processes to handle different types of requests. The master process receives requests from clients, forwards them to its child processes, and moves on to receive more requests.
- The child processes are of three types:
- The cache loader
- The cache manager
- The worker process
The cache loader and cache manager are used to load and store cache to facilitate faster serving of static content.
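A cache like this is set up with a single directive in nginx.conf. The path, zone name, and sizes below are examples: keys_zone reserves shared memory for cache keys, the cache loader walks the on-disk cache at startup to rebuild that metadata, and the cache manager evicts entries according to max_size and inactive.

```nginx
# Illustrative cache zone (paths, names, and sizes are examples).
# The cache loader populates the keys_zone metadata at startup;
# the cache manager prunes entries beyond max_size or idle past
# the inactive window.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;
```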
The worker process is the one that does all the heavy lifting. Here’s how it functions:
- It works asynchronously: once it has handled a request, it doesn’t wait on the client to make its next move. The process moves on to other open requests or listens for new ones.
- It is single-threaded: it doesn’t create new processes or threads for new requests, yet it can handle thousands of requests at a time.
- It is event-based: it relies on the operating system’s event notifications on sockets to respond to requests. This frees up resources that can be allocated as and when requests are being made.
From the above explanation, it’s clear that Nginx’s architecture uses a much more practical mechanism that allows it to scale easily. It doesn’t dedicate resources to single requests; rather, it uses an architecture that frees up workers dynamically to handle more requests.
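The whole model above boils down to a few lines of nginx.conf. This is a minimal sketch (the values are defaults or examples, not tuning advice):

```nginx
# Minimal sketch of Nginx's event model.
# One single-threaded worker process per CPU core:
worker_processes auto;

events {
    # Connections each worker can multiplex simultaneously:
    worker_connections 1024;
}
```

With, say, 4 cores, this setup can juggle up to 4 x 1024 concurrent connections without spawning a single extra process.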
2. Dynamic Vs Static Content
Apache uses its traditional file-based method to serve static content. However, I wouldn’t exactly call it ‘top of the line’ when it comes to serving static content, precisely because of what we’ve discussed so far.
Its core architecture hinders its ability to deliver a good performance during a traffic spike.
If we talk about dynamic content, Apache processes it within the server itself. It can easily add modules to process PHP within its worker processes, so it doesn’t have to rely on any outside component or software to interpret and process dynamic content requests.
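For illustration, this is roughly what in-process PHP looks like in httpd.conf via mod_php. The module filename is an assumption, as it varies by PHP version and distribution (e.g. libphp.so for PHP 8, libphp7.so for PHP 7):

```apache
# Sketch: Apache interpreting PHP in-process via mod_php.
# Module path/name varies by PHP version and distribution.
LoadModule php_module modules/libphp.so

# Route .php files to the embedded PHP interpreter:
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>

DirectoryIndex index.php index.html
```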
Nginx is the best when we talk about serving static content. Its event-based architecture can serve both static and cached content readily, by utilizing resources efficiently. In fact, Nginx can serve 2.1 times more requests per second on average compared to Apache.
Now coming to dynamic content: Nginx does not process it within the server itself. It can’t independently interpret server-side languages like PHP, so it hands such requests off to an external processor and, once it receives the result, passes it on to the browser.
However, this doesn’t mean that Nginx is slower than Apache while serving dynamic content. In fact, it performs as well as Apache in this regard. The only slight disadvantage you might find is that the procedure Nginx uses may be a bit more complicated from the administrator’s end.
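That extra administrative step usually amounts to a location block like the one below, handing PHP requests to an external PHP-FPM process over FastCGI. The socket path is an assumption; it must match your php-fpm pool configuration:

```nginx
# Sketch: delegating PHP to an external PHP-FPM process.
# The socket path is an example; match it to your php-fpm pool.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```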
3. Configuration
This criterion will give us a better understanding of Nginx Vs Apache performance and how they handle requests.
Apache allows for a decentralized configuration.
Distributed configuration files called .htaccess are placed inside document directories. The directives listed in these files apply to that particular directory and its sub-directories. These files are read and interpreted on every request, so any changes made to them are applied immediately, without reloading the server.
Because of this, admins can allow non-privileged users to gain some control over their web content without giving them access to the entire server configuration.
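For illustration, a typical .htaccess might look like the sketch below (the paths are examples, and the directory must have AllowOverride enabled in the main config for these directives to take effect):

```apache
# Example .htaccess placed in a document directory. It is re-read on
# every request, so changes apply immediately without a server reload.

# Per-directory URL rewrite:
RewriteEngine On
RewriteRule ^old-page$ /new-page [R=301,L]

# Per-directory password protection (file path is an example):
AuthType Basic
AuthName "Restricted"
AuthUserFile /var/www/.htpasswd
Require valid-user
```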
Nginx’s architecture doesn’t have any scope for decentralized configuration. There is no concept of any .htaccess file. While this limits its flexibility, it also means that Nginx’s configuration allows it to serve requests faster.
Apache looks for .htaccess files in its directories and subdirectories each time a request is made, whereas Nginx does a single lookup against its main configuration and fetches the required files. This is why, from a performance perspective, Nginx’s configuration framework is better.
If you’re more concerned about flexibility here, then Apache’s configuration is preferred.
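For comparison, the same per-directory behaviour from the .htaccess example above would be expressed centrally in nginx.conf; there is no .htaccess, everything lives in the server block (the server name and paths here are examples):

```nginx
# Centralized equivalent of per-directory rules (names/paths are examples).
server {
    listen 80;
    server_name example.com;

    # URL rewrite, applied at config load rather than per request:
    rewrite ^/old-page$ /new-page permanent;

    # Password-protect one path:
    location /private/ {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```

The trade-off is that any change here requires a configuration reload, which only the server administrator can perform.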
4. OS Support
Apache:
- All Unix-like operating systems are supported, including Linux distributions (Red Hat, CentOS, Fedora) and the BSDs
- Microsoft Windows is fully supported
- It also works on other platforms like Novell NetWare
Nginx:
- It works on Unix and its variants like BSD, Solaris, Linux, macOS, etc.
- It supports Windows, albeit not as seamlessly as one would like.
It is recommended that Nginx be run on Linux, and that Windows deployments be avoided in production.
5. Modules & Flexibility
An Nginx Vs Apache showdown is incomplete if we don’t talk about their additional functionality and flexibility.
Apache’s framework offers impressive flexibility. Its modules can be loaded dynamically and can be enabled or disabled at runtime as needed. This makes it very easy to work with modules on Apache.
There are a lot of prebuilt modules (60+) already available on Apache’s website. Apart from serving dynamic content, Apache’s modules can also be used for encryption, compression, caching, authentication, rewriting URLs, logging, etc.
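In practice, enabling or disabling an Apache module is just a matter of toggling LoadModule lines in httpd.conf and reloading the server. The filenames below are typical but vary by build and distribution:

```apache
# Modules are enabled by LoadModule lines; commenting one out and
# reloading the server disables it (paths vary by distribution).
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule headers_module modules/mod_headers.so
#LoadModule status_module modules/mod_status.so
```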
Historically, Nginx did not have dynamically loadable modules: every module needed to extend its functionality had to be compiled into its core binary. This made it less flexible than Apache, especially for people who like to use distribution packages of the server.
While the distribution packages contain all the important and most commonly used modules, if you needed something non-standard, you had to build Nginx from source with a custom configuration; you couldn’t simply disable the modules you didn’t want to use.
This changed with version 1.9.11: both Nginx Open Source and Nginx Plus now support dynamic modules, which are compiled as separate shared objects and loaded at startup with the load_module directive, rather than being baked into the binary.
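Loading such a module is a one-line addition at the top of nginx.conf. The module name and path below are examples:

```nginx
# Since version 1.9.11, modules compiled as shared objects can be
# loaded at the top of nginx.conf (module name and path are examples):
load_module modules/ngx_http_geoip_module.so;
```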
There’s not much difference between Nginx and Apache in the functions their modules provide. There are a plethora of third-party Nginx modules available that provide proxying support, mail functionality, rewriting, geolocation, authentication, encryption, streaming, logging, HTTP caching, etc.
6. Documentation & Support
Apache is very well documented, and answers to most questions can easily be found in its documentation. Apart from that, it has users’ mailing lists and Usenet groups where you can ask questions of industry experts and get answers.
The Usenet groups might take a bit longer to answer questions, so if you want speedy answers, you can visit the #httpd channel on the irc.freenode.net IRC network.
Commercial support for Apache httpd is also available, although the Apache Software Foundation doesn’t maintain a list of providers.
While it used to be difficult to find support documentation for Nginx, because most of it was written in Russian, it’s fairly easy now.
With its ever-expanding customer base, Nginx’s documentation and community support are getting better. It offers community support through Stack Overflow and mailing lists.
Commercial support is also available for Nginx Plus and prebuilt Open Source packages.
Nginx Vs Apache- What to Choose and When?
A. Apache, for flexibility and customization
We’ve talked enough about Apache’s flexibility, whether it’s the modular architecture it uses or the configuration framework it deploys. Apache has clearly made easy configuration and customization its USP.
So if you need flexibility, or a unique setup you might not otherwise find in the distribution packages, Apache is your pick.
B. Nginx, for performance
Nginx is almost synonymous with superior performance, and you can find a lot of information to support this statement. It is the fastest web server when it comes to serving static content, and isn’t behind Apache when it comes to dynamic content.
Its architecture is smart and uses resources efficiently. Its configuration follows a simple process of content lookup, delivering better performance. There’s no doubt that when it comes to performance, Nginx is your pick!
C. Or, use both together!
One popular idea you’ll find on the internet is to use Apache as the web server and Nginx as a reverse proxy in front of it. If you think about it, these two servers have complementary strengths, and used together, they can cover each other’s weaknesses!
Acting as a reverse proxy, Nginx will be the first to come in contact with client requests. This could work like a charm because Nginx is best at handling concurrent requests. It will be able to handle traffic spikes without putting too much load on Apache, and Apache’s blocking architecture will no longer be a matter of concern.
Also, Nginx can serve all the static content and pass all requests pertaining to dynamic content to Apache. You can also leverage Apache’s flexibility when it comes to its modular structure, and enable and disable modules as you like.
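A minimal sketch of this split, assuming Apache has been moved to listen on 127.0.0.1:8080 (the port and paths are illustrative assumptions):

```nginx
# Sketch: Nginx in front, serving static files itself and proxying
# everything else to Apache on 127.0.0.1:8080 (port/paths are examples).
server {
    listen 80;
    root /var/www/html;

    location / {
        # Serve the file directly if it exists on disk; otherwise
        # fall through to the Apache backend for dynamic handling.
        try_files $uri $uri/ @apache;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```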
Working in tandem, both these servers can:
- Scale easily and handle more requests
- Offer better flexibility with modules and configuration
- Deliver high performance
Theoretically, it seems like Apache and Nginx are ‘made for each other’. While this setup has worked for many people, a lot of them have also faced obstacles. My suggestion would be that you ask around in the community and research this properly, before making a decision!
If I’ve missed something out, do let me know in the comments below! If you have any queries, talk to experts.
Interesting read: Create Self Signed Certificate: Ubuntu, Windows, Nginx