Attacking Nginx
Nginx, a popular web server and reverse proxy, is a critical component in many web infrastructures, making it a prime target for attacks. Common vulnerabilities in Nginx configurations include improper handling of headers such as Upgrade and Connection, which can lead to h2c smuggling attacks, allowing attackers to bypass security controls and access internal endpoints. Additionally, issues like insufficient path restrictions, unsafe variable use, and misconfigured directives such as merge_slashes can expose the server to local file inclusion (LFI), HTTP request splitting, and other exploitation techniques. These vulnerabilities can be exploited to gain unauthorized access, manipulate traffic, or expose sensitive information.
To secure Nginx, it's crucial to apply best practices in configuration. This includes disabling or carefully managing headers that can be exploited, setting strict access controls on sensitive endpoints, and ensuring that directives like merge_slashes are configured appropriately to prevent URL-based attacks. Moreover, using features like proxy_intercept_errors and proxy_hide_header can help mask backend server errors and prevent the leakage of sensitive information. Regular audits of the Nginx configuration, alongside keeping the software up to date, are essential steps in maintaining a robust security posture.
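As a rough starting point rather than a complete policy, the directives mentioned above can be combined as follows; the upstream name and the hidden header are placeholders:
server {
    listen 80;
    # merge_slashes defaults to "on"; leaving it enabled keeps // sequences normalized
    merge_slashes on;

    location / {
        proxy_pass http://backend;            # placeholder upstream
        proxy_intercept_errors on;            # serve our own pages for backend errors
        error_page 500 502 503 504 /50x.html;
        proxy_hide_header X-Backend-Debug;    # placeholder for a sensitive backend header
    }

    location = /50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}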
Missing Root Location in Nginx Configuration
When configuring an Nginx server, the root directive is crucial as it specifies the base directory from which the server serves files. Here's an example configuration:
server {
    root /etc/nginx;

    location /hello.txt {
        try_files $uri $uri/ =404;
        proxy_pass http://127.0.0.1:8080/;
    }
}
Explanation:
- Root Directive: The root /etc/nginx; directive sets the base directory for all file requests. In this case, files will be served from the /etc/nginx directory.
- Location Directive: The location /hello.txt { ... } block defines specific behavior for requests targeting /hello.txt. The try_files directive attempts to serve the file if it exists, and if not, returns a 404 error. The proxy_pass directive forwards the request to another server, here http://127.0.0.1:8080/.
The Missing Root Location Issue:
The issue here arises from the lack of a location / { ... } block. This omission means that the root directive (/etc/nginx) applies globally, affecting all requests to the server, including those to the root path /. As a result, any request to the root path or to other undefined locations can potentially access sensitive files within /etc/nginx.
For example, a request to GET /nginx.conf could serve the Nginx configuration file located at /etc/nginx/nginx.conf. This exposes sensitive server configuration, which could include paths, credentials, and other vital details.
Unsafe variable use / HTTP Request Splitting
Nginx configurations must be carefully designed to avoid vulnerabilities like unsafe variable use and HTTP request splitting, which can lead to severe security issues. Below, we will explore how certain variables and regular expressions can introduce these vulnerabilities and how to mitigate them.
1. Unsafe Use of Variables: $uri and $document_uri
In Nginx, the $uri and $document_uri variables are often used to capture the request URI. However, these variables automatically decode URL-encoded characters, which can introduce vulnerabilities, especially when handling user input directly.
For example:
location / {
    return 302 https://example.com$uri;
}
In this configuration, the $uri variable is used directly in the redirection URL. If an attacker crafts a request like:
http://localhost/%0d%0aDetectify:%20clrf
the Nginx server will decode the %0d%0a characters to \r\n (Carriage Return and Line Feed), potentially allowing the injection of a new header into the response:
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.19.3
Content-Type: text/html
Content-Length: 145
Connection: keep-alive
Location: https://example.com/
Detectify: clrf
This is an example of HTTP response splitting, where the response is split into two, potentially allowing an attacker to inject malicious headers or even content.
2. Regex Vulnerabilities
Regex patterns used in Nginx configurations can also be vulnerable if they are not carefully constructed. For instance:
location ~ /docs/([^/])? { … $1 … } # Vulnerable
This regex does not exclude whitespace, so the captured value ($1) may contain a space or newline; if that value is later reused in a rewrite or proxied request, it opens the door to request splitting.
A safer version would be:
location ~ /docs/([^/\s])? { … $1 … } # Not vulnerable (checks for spaces)
Alternatively:
location ~ /docs/(.*)? { … $1 … } # Not vulnerable (matches anything after /docs/)
Safe Configuration
To mitigate these risks, avoid using $uri and $document_uri directly in configurations where user input could be present. Instead, use $request_uri, which preserves the original, unmodified request, including any URL-encoded characters.
Example of a Safe Configuration:
location / {
    return 302 https://example.com$request_uri;
}
In this configuration, $request_uri preserves the URL encoding, preventing the server from accidentally interpreting characters that could lead to HTTP response splitting.
Attack Scenarios and Detection Techniques
1. CRLF Injection and HTTP Request Splitting
Attack Scenario: An attacker tries to exploit HTTP request splitting by injecting CRLF characters into a request:
curl "http://localhost/%0d%0aX-Injected-Header:%20Test"
If the server is vulnerable, the response will include the injected header:
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.19.3
Content-Type: text/html
Content-Length: 145
Connection: keep-alive
Location: https://example.com/
X-Injected-Header: Test
Detection Techniques: Test for misconfigurations using the following requests:
curl -I "https://example.com/%20X" # Should return any HTTP code if vulnerable
curl -I "https://example.com/%20H" # Should return a 400 Bad Request
If the first request succeeds and the second returns an error, the server is likely vulnerable.
2. Bypassing Path Restrictions Using Encoded Characters
Attack Scenario: An attacker tries to bypass path restrictions by injecting encoded characters:
curl "http://localhost/lite/api/%0d%0aX-Injected-Header:%20Test"
If the Nginx server uses $uri in its proxy_pass directive, the request might be passed to the backend without proper sanitization, leading to header injection.
Detection Techniques: Test paths with encoded spaces and special characters:
curl -I "http://company.tld/%20HTTP/1.1%0D%0AXXXX:%20x"
curl -I "http://company.tld/%20HTTP/1.1%0D%0AHost:%20x"
The first request might succeed if the server is vulnerable, while the second should cause a 400 Bad Request error.
Examples of Vulnerable Configurations
Proxy Pass with $uri:
location ^~ /lite/api/ {
    proxy_pass http://lite-backend$uri$is_args$args;
}
Vulnerable because $uri is passed directly to the backend, which could lead to CRLF injection (a safer variant is sketched below).
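A possible hardening, assuming the backend can accept the still-encoded path, is to forward $request_uri instead: it preserves the original percent-encoding and already includes the query string.
location ^~ /lite/api/ {
    # $request_uri is not decoded, so %0d%0a never becomes a literal CRLF
    proxy_pass http://lite-backend$request_uri;
}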
Rewrite with $uri:
location ~ ^/dna/payment {
    rewrite ^/dna/([^/]+) /registered/main.pl?cmd=unifiedPayment&context=$1&native_uri=$uri break;
    proxy_pass http://$back;
}
Vulnerable because $uri is used inside a query parameter, making it susceptible to manipulation (see the hardened sketch below).
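One hedge here, assuming the backend tolerates a percent-encoded native_uri value, is to exclude whitespace from the capture and substitute $request_uri for $uri so decoded CRLF sequences never enter the rewritten request:
location ~ ^/dna/payment {
    # [^/\s]+ rejects whitespace in the capture; $request_uri stays URL-encoded
    rewrite ^/dna/([^/\s]+) /registered/main.pl?cmd=unifiedPayment&context=$1&native_uri=$request_uri break;
    proxy_pass http://$back;
}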
S3 Bucket Access:
location /s3/ {
    proxy_pass https://company-bucket.s3.amazonaws.com$uri;
}
Vulnerable because $uri is used directly in the proxy_pass URL (a whitelisting alternative is sketched below).
Raw Backend Response Reading
Nginx's ability to intercept backend responses is a powerful feature designed to enhance security and user experience by masking internal errors and sensitive information. However, under certain circumstances, particularly with invalid HTTP requests, this mechanism can fail, leading to the unintended exposure of raw backend responses. Below, we'll explore how this vulnerability occurs, provide example configurations, and discuss potential attack scenarios.
Example Scenario: Exposing Raw Backend Responses
Consider a scenario where an Nginx server is fronting a uWSGI application. The uWSGI application may occasionally return an error response that includes sensitive information, such as custom headers or internal error messages.
Example uWSGI Application:
def application(environ, start_response):
    start_response('500 Error', [('Content-Type', 'text/html'), ('Secret-Header', 'secret-info')])
    return [b"Secret info, should not be visible!"]
When it encounters an error, this application sends an HTTP 500 response along with a custom Secret-Header containing sensitive information.
Nginx Configuration
Nginx can be configured to handle such situations by intercepting errors and hiding sensitive headers.
Example Nginx Configuration:
http {
    error_page 500 /html/error.html;
    proxy_intercept_errors on;
    proxy_hide_header Secret-Header;

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }

        location = /html/error.html {
            internal;
            root /usr/share/nginx/html;
        }
    }
}
- proxy_intercept_errors on: This directive ensures that when the backend (uWSGI in this case) returns an HTTP status code of 300 or greater, Nginx serves a custom error page (/html/error.html) instead of the backend's response. This helps prevent the exposure of internal errors and sensitive data.
- proxy_hide_header Secret-Header: This directive prevents the Secret-Header from being forwarded to the client, even if the backend includes it in the response.
Under normal circumstances, this setup works as intended. However, when an invalid HTTP request is sent to the server, Nginx may forward this malformed request directly to the backend without processing it properly. The backend's raw response, including any headers and content, is then sent directly to the client without Nginx's intervention.
Example Invalid HTTP Request:
curl -X GET "http://localhost/%0D%0A" -i
- Valid Request: If a valid request is made and the backend returns an error, Nginx intercepts it and serves the custom error page, hiding the Secret-Header.
- Invalid Request: When an invalid request containing characters like %0D%0A (CRLF) is sent, Nginx might not correctly intercept the response. The backend's raw response, including the Secret-Header, is sent directly to the client.
Example Output for Invalid Request:
HTTP/1.1 500 Error
Content-Type: text/html
Secret-Header: secret-info
Content-Length: 32
Connection: keep-alive
Secret info, should not be visible!
Attack Scenario
1. Exploiting Raw Backend Response Exposure
An attacker can exploit this vulnerability by sending a specially crafted HTTP request that Nginx considers invalid or improperly formed.
Step 1: Identify a backend endpoint that might return sensitive information in its headers or body.
Step 2: Send a malformed request to the Nginx server:
curl -X GET "http://victim.com/%0D%0A" -i
Step 3: If the server is vulnerable, the raw response from the backend, including any sensitive headers like Secret-Header, is exposed to the attacker.
Mitigation Strategies
To mitigate the risk of exposing raw backend responses, consider the following strategies:
Strict Request Validation:
Implement strict validation of incoming requests to ensure they adhere to proper HTTP standards. This can be done using Nginx's if directive or by setting up custom error handling for malformed requests.
Custom Error Handling for All Scenarios:
Ensure that even in cases of malformed requests, Nginx serves a generic error page instead of forwarding the request to the backend. You can do this by defining error pages for common invalid requests.
server {
    listen 80;

    location / {
        if ($request_uri ~* "%0D|%0A") {
            return 400 "Bad Request";
        }
        proxy_pass http://backend;
    }
}
Limit Backend Exposure:
Configure the backend to not include sensitive information in its error responses or headers. This reduces the risk even if Nginx does not properly intercept the response.
Monitoring and Logging:
Monitor and log malformed requests and the responses they trigger, for example by routing rejected requests to a dedicated log as sketched below. This helps detect and respond to potential attacks quickly.
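A minimal sketch of such logging, assuming the 400 guard from the previous example is in place (paths and log names are illustrative):
http {
    # flag responses that the request-validation guard rejected with 400
    map $status $suspicious {
        default 0;
        400     1;
    }

    access_log /var/log/nginx/access.log  combined;
    access_log /var/log/nginx/suspect.log combined if=$suspicious;
}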
merge_slashes set to off
Nginx, while being a powerful and flexible web server and reverse proxy, can be configured in ways that unintentionally introduce security vulnerabilities. Below, we'll explore several important security considerations, including the merge_slashes directive, malicious response headers, the map directive, DNS spoofing risks, and the use of the proxy_pass and internal directives. Each section includes examples, potential attack scenarios, and mitigations.