
How the Nginx ngx_http_map_module works


I’ve decided to walk through one of the most important and frequently used directives in Nginx. The ngx_http_map_module performs multiple conditional matchings: it maps the possible values of one variable (the first one in the expression, as shown in the example below) onto the values of another. The most common example uses the built-in $args variable, for simplicity:

map $args $test {
    default 0;
    test    1;
}
server {
    location / {
        add_header Test $test;
    }
}

What we have in the above example are two sections: mapping and deployment. Mapping is done within the map block, where we inspect the value of $args. A pattern without a ~ prefix is a literal match, so here the entire query string has to equal “test”. If it does, the value “1” is assigned to the custom variable $test. If it doesn’t, the default value “0” is assigned to $test instead.

Then, in the deployment part, we simply take the $test variable’s value and put it into a response header called “Test”.
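For completeness, here is how that snippet would sit in a full configuration — the map block must be declared at the http level, not inside server. This is a minimal sketch; the listen port 8000 is an assumption matching the cURL examples that follow:

```nginx
http {
    # map lives at http level; it defines $test for every server below
    map $args $test {
        default 0;
        test    1;   # literal match: the whole query string must equal "test"
    }

    server {
        listen 8000;
        location / {
            add_header Test $test;
        }
    }
}
```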

Expected ngx_http_map_module behavior for the above example:

without “test” in query

$ curl -I localhost:8000
HTTP/1.1 200 OK
X-Cache: HIT
Test: 0

with “test” in query

$ curl -I localhost:8000?test
HTTP/1.1 200 OK
X-Cache: HIT
Test: 1

What I’ve done here is expect to see “test” in the request parameters, then show the corresponding value in the response header “Test”.

Scenario with selective caching based on query string debug

In a more realistic scenario, we might want to skip caching the response when “debug=1” or “debug=true” is set in the request. Below is an example scenario using http://bluegrid.io as the backend server:

map $args $debug {
    default 0;
    ~debug=(1|true)    1;
}
server {
    location / {
        proxy_pass http://bluegrid.io;
        proxy_cache idabic;
        proxy_cache_min_uses 2;
        proxy_cache_valid 200 1d;
        add_header Cache-Status $upstream_cache_status;
        add_header Debug $debug;
        proxy_no_cache $debug;
    }
}

Obviously, I am using the proxy_no_cache directive to prevent caching for requests with debug in the query string. How it works is that when debug=1 or debug=true is present, $debug is set to “1”, and a non-empty, non-zero proxy_no_cache value disables caching of the response:
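One thing worth noting: proxy_no_cache only stops the response from being stored; if a copy was already cached by an earlier non-debug request, it would still be served. To also bypass the cache on debug requests, the same variable can drive proxy_cache_bypass — a sketch under the same setup:

```nginx
location / {
    proxy_pass http://bluegrid.io;
    proxy_cache idabic;
    proxy_no_cache     $debug;  # don't store the response when $debug is "1"
    proxy_cache_bypass $debug;  # don't serve a cached copy either
    add_header Cache-Status $upstream_cache_status;
}
```

With both directives set, a debug request would report Cache-Status: BYPASS when a cached copy exists, instead of silently serving it.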

without debug query parameter

$ curl -I localhost:8000
HTTP/1.1 200 OK
Cache-Status: HIT
Debug: 0

with debug query parameter

$ curl -I localhost:8000?debug=10
HTTP/1.1 200 OK
Cache-Status: MISS
Debug: 1

and with the debug request parameter set to the string “true”

$ curl -I localhost:8000?debug=true
HTTP/1.1 200 OK
Cache-Status: MISS
Debug: 1

As you can notice, I’ve used the response header “Cache-Status” to show whether the response was served from the cache. To show how cache prevention works based on the request parameter “debug” and its value, I’ve predefined the integer “1” and the string “true” as the matching values. (Since the regex ~debug=(1|true) is unanchored, anything containing debug=1 or debug=true — such as the debug=10 above — matches too.) Any other value of “debug” will not match, so I would expect to see “HIT” as the value of “Cache-Status” (note that HIT is expected IF this request was cached earlier; otherwise, just refer to the Debug response header value):

$ curl -I localhost:8000?debug=blah
HTTP/1.1 200 OK
Cache-Status: HIT
Debug: 0

Yep! As expected: Cache-Status: HIT, and most importantly, the Debug response header shows the value “0“. This means we have not met the conditions in the map block and thus served previously cached content.
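If substring matches like debug=10 are unwanted, the pattern can be anchored so that only a standalone debug=1 or debug=true parameter triggers it — a stricter sketch:

```nginx
# anchor the parameter name and value: match only when "debug=1" or
# "debug=true" appears at a query-string or parameter boundary
map $args $debug {
    default 0;
    "~(^|&)debug=(1|true)($|&)"    1;
}
```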

How does this work with multiple possible values of the matched variable? 

Simple! List the match expressions one per line, each with its corresponding value, to define every match condition separately:

map $args $debug {
    default 0;
    ~debug=1    true;
    ~test=1     false;
    ~live=1     maybe;
}
server {
    location / {
        add_header Debug $debug;
    }
}

To confirm we are getting the expected results, look at the value of the “Debug” header:

$ curl -I localhost:8000?live=1
HTTP/1.1 200 OK
Cache-Status: MISS
Debug: maybe
$ curl -I localhost:8000?test=1
HTTP/1.1 200 OK
Cache-Status: MISS
Debug: false
$ curl -I localhost:8000?debug=1
HTTP/1.1 200 OK
Cache-Status: MISS
Debug: true

What if I want to use the same rule on other vhosts?

This is a completely valid question. A vhost file is often thought of as a scope of its own, jailing all the calculations and conditions within its space. In some cases, though, the same rule has to be used across several, or even dozens of, vhosts, and defining the rule inside each vhost file would not be effective. As a workaround, we defined ONE mapping rule in ONE vhost, and the other vhosts simply use the variable defined there.

How does this work ?

Simple enough! If you already have a map defined the same way I have it in my previous examples, you can simply use the $debug variable in other vhosts as well.

So, essentially, we have the map directive outside the “server” block (within the http space/context) in vhost #1. Since all vhost files are included into that same http context when Nginx is started or reloaded, the variable it defines is visible to every other vhost too — including vhost #2.
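One common way to lay this out (a sketch; the file path and name are assumptions, not from the original setup) is to give the map its own include file, which the stock nginx.conf already pulls into the http context:

```nginx
# /etc/nginx/conf.d/00-debug-map.conf  (hypothetical path)
# loaded by the default "include /etc/nginx/conf.d/*.conf;" in nginx.conf,
# so $debug is defined once and available to every vhost in this http context
map $args $debug {
    default 0;
    ~debug=1    true;
    ~test=1     false;
    ~live=1     maybe;
}
```

This avoids tying the shared rule to any particular vhost file.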

To show that it works, I’ve created a new vhost. Unlike the previous one, which was listening on port 8000 (shown in each cURL example), this one listens on port 80 (for clarity of the example). I’ll send a HEAD request to localhost on port 80 (curl -I localhost) and use a “Test-New” header to display the value of the $debug variable defined in the old vhost (the one listening on port 8000):

vhost file:
server {
    listen 80;
    location / {
        add_header Test-New $debug;
    }
}

cURL to show the Test-New being changed according to mapping rule in old vhost:

$ curl -I localhost?live=1
HTTP/1.1 200 OK
Test-New: maybe
$ curl -I localhost?test=1
HTTP/1.1 200 OK
Test-New: false
$ curl -I localhost?debug=1
HTTP/1.1 200 OK
Test-New: true

I hope this was helpful for anyone who needs to work with the map directive. Nginx offers pretty neat tricks and is a pretty good caching engine. Understanding its directives, scopes, pros, and cons puts very powerful tools in the hands of a system admin.
