How to use nginx as an HTTP load balancer

The following is a simple example configuration file for nginx load balancing. It does two things:
  • Requests to http://www.meterpreter.org are load balanced across four servers: 192.168.1.2:80, 192.168.1.3:80, 192.168.1.4:80 and 192.168.1.5:80.
  • Requests to http://new.meterpreter.org are load balanced across ports 8080, 8081 and 8082 of the server 192.168.1.7.


user  www www;

worker_processes 10;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;

worker_rlimit_nofile 51200;

events
{
      use epoll;
      worker_connections 51200;
}

http
{
      include       conf/mime.types;
      default_type  application/octet-stream;
      keepalive_timeout 120;
      tcp_nodelay on;

      # log_format is only allowed in the http context, so both formats are defined here
      log_format  www_s135_com   '$remote_addr - $remote_user [$time_local] $request '
                                 '"$status" $body_bytes_sent "$http_referer" '
                                 '"$http_user_agent" "$http_x_forwarded_for"';

      log_format  blog_s135_com  '$remote_addr - $remote_user [$time_local] $request '
                                 '"$status" $body_bytes_sent "$http_referer" '
                                 '"$http_user_agent" "$http_x_forwarded_for"';

      # pool of four backend servers for www.meterpreter.org
      upstream  www.meterpreter.org {
              server   192.168.1.2:80;
              server   192.168.1.3:80;
              server   192.168.1.4:80;
              server   192.168.1.5:80;
      }

      # three ports on a single backend server for new.meterpreter.org
      upstream  new.meterpreter.org {
              server   192.168.1.7:8080;
              server   192.168.1.7:8081;
              server   192.168.1.7:8082;
      }

      server
      {
              listen  80;
              server_name  www.meterpreter.org;

              location / {
                       proxy_pass         http://www.meterpreter.org;
                       proxy_set_header   Host             $host;
                       proxy_set_header   X-Real-IP        $remote_addr;
                       proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
              }

              access_log  /data1/logs/www.log  www_s135_com;
      }

      server
      {
              listen  80;
              server_name  new.meterpreter.org;

              location / {
                       proxy_pass         http://new.meterpreter.org;
                       proxy_set_header   Host             $host;
                       proxy_set_header   X-Real-IP        $remote_addr;
                       proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
              }

              access_log  /data1/logs/blog.log  blog_s135_com;
      }
}
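
Before sending traffic through this setup, the file can be sanity-checked with nginx -t and then applied to a running instance with nginx -s reload; this assumes nginx is already installed and that this file is the active nginx.conf.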

There are two main nginx modules at work here:
1. HTTP load balancing module (HTTP Upstream)
  • upstream: Defines a named group of servers that can be referenced from the proxy_pass and fastcgi_pass directives. The servers in a group may listen on different ports, and servers listening on TCP ports and on Unix sockets can be mixed in the same group.
  • server: Specifies the address of one backend server inside an upstream block, plus optional parameters such as weight and backup (see the sketch after this list). The address can be a domain name, an IP address with an optional port, or a Unix socket path; a domain name is resolved to an IP address first.
2. HTTP proxy module (HTTP Proxy)
This module forwards requests to other servers.
  • proxy_pass: Sets the address of the proxied server and the mapped URI. The address can be given as a hostname or an IP address plus port, or as the name of an upstream group, as in the configuration above.
  • proxy_set_header: Redefines or adds fields in the request header passed to the proxied server. The value can be text, a variable, or a combination of the two.
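
To make the server parameters and proxy directives above concrete, here is a minimal sketch of a weighted upstream group and the server block that uses it. It is not part of the configuration above: the pool name backend_pool, the addresses, the socket path and the hostname are made up for the example.

      upstream backend_pool {
              server   192.168.1.10:80  weight=2;                      # receives roughly twice the requests of a weight=1 peer
              server   192.168.1.11:80  max_fails=3 fail_timeout=30s;  # after 3 failed attempts within 30s, skipped for the next 30s
              server   unix:/tmp/backend.sock  backup;                 # only used when the servers above are unavailable
      }

      server {
              listen  80;
              server_name  pool.example.org;
              location / {
                       proxy_pass         http://backend_pool;         # refer to the upstream group by its name
                       proxy_set_header   Host             $host;
                       proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
              }
      }

With the default round-robin method, weight biases the distribution toward a server, max_fails and fail_timeout control when a peer is temporarily taken out of rotation, and backup marks a server that only receives requests when every non-backup peer is down.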