Proxy Templates

Overview

Proxy Templates define L11WAAP's behavior as a proxy: they are templates for creating new sites (i.e., Server Groups) within Link11 WAAP. L11WAAP acts as a reverse proxy; it receives requests from clients (web visitors, API clients, etc.), blocks hostile traffic, and passes legitimate requests to the backend.

When new sites are created, they remain linked to their underlying templates. Revising a Proxy Template will automatically update all sites that are based upon it.

Components

A Proxy Template consists of the following:

  • General parameters for administration

  • Frontend settings, defining L11WAAP's interaction with clients

  • Backend settings, defining L11WAAP's interaction with backend servers

  • Trusted Sources, defining trusted sources of traffic (e.g., load balancers and CDNs)

  • Advanced Configuration, defining additional customization

General Parameters

Name

A name to be used within the interface.

Description

Information about this Template, to be used within the interface.

Frontend Settings

General frontend settings

Client IP header name

Defines one or more header fields within which L11WAAP can find the client's IP address. When the system receives an incoming request from a client, the request will have passed through a load balancer on its way to L11WAAP. This means that the header will contain the client's IP and the load-balancer IP. These two IPs are usually found within the X-Forwarded-For field (which is the default entry here). In this situation, L11WAAP knows how to extract the client IP from this field. In other situations, a different field name might be necessary. For example, if the customer is using Akamai CDN, the incoming request will have the client IP in a field named True-Client-IP instead.
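For illustration, a request that has passed through a load balancer might arrive with headers such as the following (the IP addresses shown are hypothetical examples). In X-Forwarded-For, the original client IP appears first and each intermediate proxy appends its own address; with Akamai, the client IP arrives in a dedicated field instead.

X-Forwarded-For: 203.0.113.45, 198.51.100.10
True-Client-IP: 203.0.113.45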

Size Limits

You can place limits on the amount of data that users can upload to the system. The defaults usually work well; however, if your application accepts user-generated content or other large files, then changes to these settings might be necessary.

Please note that if you increase these settings within L11WAAP, then the upstream server should also be configured to accept and store the quantity of data that L11WAAP will (potentially) pass through.

Client max body size

Specifies the maximum accepted body size of a client request, as indicated by the request header Content-Length. Size in MBs.
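Since L11WAAP internally uses Nginx, this setting corresponds conceptually to Nginx's client_max_body_size directive. A minimal sketch with an illustrative value (the configuration L11WAAP actually generates may differ):

client_max_body_size 100m;   # reject client request bodies larger than 100 MB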

Application IP Rate Limits

These settings allow you to limit the amount of resources consumed by an IP address. The system can limit consumption by the average number of requests per second, while also allowing temporary bursts at a higher rate.

When a requestor exceeds any of these thresholds, subsequent requests will be answered with error code 503 (Service Unavailable).

Note that this rate limiting applies across the entire application. For example, if one IP address is submitting requests to multiple URLs within a web application, all the requests are combined when determining whether rate limits have been violated. If you need more flexibility, consider using Rate Limit Rules instead.

Requests per second per IP address

Sets the allowable request rate per IP, per second: i.e., the allowable per-second average of incoming requests, enforced on an incremental basis (where "increment" refers to the number of milliseconds allowed for one request).

Example: This is set to 100. Thus, 100 requests are allowed per second. However, the system does not enforce rate limits on a per-second basis; it uses a granularity of milliseconds. Therefore, it will allow one request every 10 milliseconds. (100 r/s, divided by 1000 ms/s, equals 1 request per 10 ms.)

Burst of requests per second per IP address

Sets the allowable additional burst rate per IP, per second. The additional requests are accepted and placed into a queue.

Example: Let's say that the previous field (Requests per second per IP address) is set to 100. Without burst limits (i.e., if this field were set to zero), the system would reject every request received less than 10 ms after the previous one. Now suppose the burst limit is set to 20. This means that L11WAAP will accept 21 requests (1 original plus 20 additional) per 10 milliseconds. In other words, when a request is received, up to 20 more can be received and accepted within the following 10 ms. If instead 25 total requests are received during that time, the last four will be denied with a 503 error.
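In plain Nginx terms, the behavior described above is conceptually similar to the standard limit_req mechanism. The following is a sketch only, using the example values from above; the configuration L11WAAP actually generates may differ:

limit_req_zone $binary_remote_addr zone=per_ip:10m rate=100r/s;   # allow an average of 100 requests/second per client IP
limit_req zone=per_ip burst=20;                                   # queue up to 20 additional requests before rejecting
limit_req_status 503;                                             # rejected requests receive 503 Service Unavailable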

Timeouts

The Timeout settings allow the system to monitor the time required to serve resources to each client. Any connection that exceeds the specified limits will be dropped.

Why timeouts are important

Some DDoS tools (e.g., R-U-Dead-Yet, or RUDY) send a relatively small quantity of traffic requests, but do so as slowly as possible (often with each byte sent separately). While a legitimate request can be resolved in milliseconds, a single RUDY client can tie up server resources for several minutes. Even a few hundred of these machines attacking a server can be very destructive.

The Timeout settings allow L11WAAP to block unresponsive requestors, whether their unresponsiveness is malicious or not. For most deployments, the default timeout settings work well. They are sufficient to filter out hostile traffic, while still accommodating even those users with low bandwidth.

All times are specified in seconds.

Client body timeout

If the body is not obtained in one read step, this timeout begins. If the timeout expires and the client has still sent nothing, the L11WAAP Gateway returns error 408 (Request Timeout).

Keepalive Timeout

The timeout for keep-alive connections with the client. The L11WAAP Gateway will close connections after this time. This setting increases server efficiency; it allows the server to re-use browser connections and save resources. When changing this value, special care should be taken; in some cases, it depends on specific cloud vendor and load balancer settings.

Client header timeout

How long to wait for the client to send a request header. If the header is not received within this time, L11WAAP returns error 408 (Request Timeout).

Send timeout

Specifies the response timeout to the client. This timeout does not apply to the entire transfer but, rather, only between two subsequent client-read operations. Thus, if the client has not read any data for this amount of time, the L11WAAP Gateway shuts down the connection.
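These frontend timeouts map conceptually onto Nginx's standard timeout directives. A sketch with illustrative values (not necessarily the defaults that L11WAAP uses):

client_body_timeout   10s;   # maximum time between two successive reads of the request body
client_header_timeout 10s;   # maximum time for the client to send the request header
keepalive_timeout     65s;   # how long an idle keep-alive connection is held open
send_timeout          10s;   # maximum time between two successive writes of the response to the client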

Header Sizes

The product of Large header size and Large header buffers should not exceed the load balancer's limits.

Large header size

The maximum buffer size for accepting client request headers.

Header buffer size

The default buffer size for accepting client request headers.

Large header buffers

The maximum number of buffers for accepting client request headers.
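In Nginx terms, these settings correspond conceptually to the client header buffer directives. A sketch with illustrative values (with these values, the product mentioned above would be 4 × 16 KB = 64 KB):

client_header_buffer_size 1k;        # default buffer used for client request headers
large_client_header_buffers 4 16k;   # up to 4 larger buffers of 16 KB each for oversized headers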

Body Size

Client body buffer size

The maximum buffer size for accepting the client body.

Backend Settings

Proxy Connect Timeout

The time (in seconds) for L11WAAP to wait before treating a connection with the backend as having failed.

Proxy Send Timeout

The time (in seconds) for L11WAAP to wait before treating a data transfer attempt to the backend as having failed.

Proxy Read Timeout

The time (in seconds) for L11WAAP to wait before treating a data transfer attempt from the backend (i.e., downstream, toward Link11) as having failed.
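Taken together, these three settings correspond conceptually to Nginx's proxy timeout directives. A sketch with illustrative values:

proxy_connect_timeout 60s;   # maximum time to establish a connection with the backend
proxy_send_timeout    60s;   # maximum time between two successive writes to the backend
proxy_read_timeout    60s;   # maximum time between two successive reads from the backend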

Backend Service Host Header

Defines the value of the Host header passed to the backend. The default value ($host) sets it equal to the Host header in the incoming request (in other words, the Host header is passed upstream unchanged).
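As a rough illustration in plain Nginx terms (the directive L11WAAP actually generates may differ, and api.example.com is a hypothetical value):

proxy_set_header Host $host;              # default: pass the incoming Host header upstream unchanged
# proxy_set_header Host api.example.com;  # alternative: pin a fixed Host value for the backend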

Real IP Header Name

Defines the field name that contains the client's IP address. L11WAAP is a proxy, and it passes incoming client requests to the upstream server. This means that the server will receive request headers which contain L11WAAP's cloud IP address as the "client" IP. Usually, this is not useful; almost always, the server will need the IP of the actual client instead. To facilitate server logging, analytics, and so on, L11WAAP adds the IP address of the originating client to the headers that it sends to the server. The Real IP Header Name defines the name of the field within which this information is passed.

Trusted Sources

This list defines the ranges of IP addresses which are trusted for providing forwarded IP addresses: for example, the load balancers in front of L11WAAP, or the CDN.
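Conceptually, these two mechanisms resemble standard Nginx patterns: forwarded-IP information is only honored when it comes from a trusted source, and the resolved client IP is then passed to the backend in a dedicated header. A sketch (the address range and header names are illustrative):

# Incoming side: only trust forwarded-IP data from known load balancers / CDNs
set_real_ip_from  198.51.100.0/24;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

# Outgoing side: pass the resolved client IP to the backend (cf. Real IP Header Name above)
proxy_set_header X-Real-IP $remote_addr;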

Advanced Configuration

Internally, L11WAAP uses Nginx. A Proxy Template contains a number of commonly used settings that define Nginx's parameters and behavior. However, there are many other possible configuration changes that an admin might want to make; the Advanced Configuration section allows admins to define custom code for this purpose.

How it works

Nginx uses configuration files to define its settings. When a Link11 WAAP admin adds Custom Configuration Code to a Proxy Template and then publishes the changes, this code is inserted into an individual per-site configuration file (in other words, each server group can have its own code), which is then imported into the main nginx.conf file. During this process, nginx.conf's other contents are not overwritten; L11WAAP extends the file with its various per-site settings.

Be extremely cautious when using this capability. The system performs some basic validation on custom code (for syntax and so on), but this is not exhaustive; admins are ultimately responsible for ensuring that the code is correct. Incorrect code can result in unpredictable system behavior.

Some examples are below.

Parameters

Advanced Configuration Name

A name to be used within the interface.

Description

A description to be used within the interface.

Protocol

Requests with this protocol will trigger the execution of the Custom Configuration Code.

Custom Configuration Code

The code which will be executed when requests have the specified Protocol. Custom code can use the directives and variables from ngx_http_upstream_module.

In the examples below, there are sections of code delimited by -----BEGIN SERVER----- / -----END SERVER----- and -----BEGIN LOCATION----- / -----END LOCATION-----. These delimiters are not included in the final file; they merely tell L11WAAP where to insert the code.

  • Code inside the SERVER delimiters will be placed into a server {} block. (This code can include one or more nginx location {} blocks.)

  • Code inside the LOCATION delimiters will be placed within a location / {} block for the specific server group.

  • Code without any delimiters will be placed into the server {} block.
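Schematically, the generated per-site file therefore looks something like this (greatly simplified; the real file contains many additional settings generated by L11WAAP):

server {
    # ... settings generated by L11WAAP ...
    # <-- code between the SERVER delimiters (and undelimited code) is inserted here

    location / {
        # ... settings generated by L11WAAP for this server group ...
        # <-- code between the LOCATION delimiters is inserted here
    }
}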

Below are some examples.

Example 1

-----BEGIN SERVER-----
ssl_client_certificate /etc/zzzz-client-cert-mydomain.crt;
ssl_verify_client on;
ssl_verify_depth 3;
-----END SERVER-----
-----BEGIN LOCATION-----
-----END LOCATION-----

Example 1 discussion

This snippet configures SSL client certificate authentication on the Nginx server, requiring clients to present valid certificates. The server uses the certificate authority file at /etc/zzzz-client-cert-mydomain.crt to validate the client's certificate, allowing up to 3 levels of intermediate certificates in the validation chain. There are no specific configurations for any URL locations.

Here is a detailed explanation.

ssl_client_certificate /etc/zzzz-client-cert-mydomain.crt;

Specifies the path to the client certificate authority (CA) file. This file contains the trusted CA certificates (in this case, zzzz-client-cert-mydomain.crt) that are used to verify the client's SSL certificate.

ssl_verify_client on;

This turns on SSL client verification, meaning the server will require and verify client certificates. Any client trying to connect to this server must present a valid SSL certificate signed by a trusted authority specified in the ssl_client_certificate file.

ssl_verify_depth 3;

Defines the maximum verification depth for the client certificate chain, meaning that Nginx will verify up to 3 levels of intermediate certificates. This ensures that the certificate chain presented by the client is valid up to the root certificate authority.

-----BEGIN LOCATION-----
-----END LOCATION-----

This section is empty, indicating that no specific configurations are defined for any particular location in this part of the file. However, in the broader context of Nginx configurations, this would be where you define behavior for specific paths or endpoints on the server, such as /api or /login.

Example 2

-----BEGIN SERVER-----
location ^~ /.well-known/acme-challenge/ {
  proxy_set_header rbz-letsencrypt 931f5de197188ad0ab3e2de3efeb4d60b15767dff448fbd0;
  proxy_set_header Host myplanet.app.reblaze.io;
  proxy_pass https://myplanet.app.reblaze.io;
  add_header Cache-Control "max-age=0, no-cache, no-store";
  add_header expires "Thu, 01 Jan 1970 00:00:01 GMT";
  add_header Pragma no-cache;
}
-----END SERVER-----
-----BEGIN LOCATION-----
-----END LOCATION-----

Example 2 discussion

By default, Let's Encrypt validation is supported only on port 80 (the HTTP protocol). If for some reason the validation request is received over HTTPS (note that on CloudFront there is a redirect from HTTP to HTTPS), it needs to be allowed. The code above allows the Let's Encrypt validation request over HTTPS.

Example 3

location / {
  proxy_http_version  1.1;
  proxy_pass $active_upstream;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;
  proxy_set_header Host $host:$proxy_port;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_pass_request_headers on;
  access_by_lua_block {
    local hooks  = require "hooks"
    hooks.trigger_access()
  }
  header_filter_by_lua_block {
    local hooks  = require "hooks"
    hooks.trigger_header()
  }
  add_header X-Cache-Status $upstream_cache_status;
}

Example 3 discussion

This configuration proxies requests for a certain location (the root) to an upstream server, handling WebSocket connections and forwarding various headers like real client IP and protocol. Lua scripting is used to inject custom logic during the request and response phases, and the caching status of the upstream is also added to the response headers.

proxy_http_version  1.1;

Configures Nginx to use HTTP/1.1 when communicating with the upstream server.

proxy_pass $active_upstream;

Passes the request to the upstream server, which is dynamically defined by the $active_upstream variable.

proxy_set_header Upgrade $http_upgrade;

Sets the Upgrade header to the value of $http_upgrade, typically used for WebSocket connections or other HTTP protocol upgrades.

proxy_set_header Connection $connection_upgrade;

Sets the Connection header to the value of $connection_upgrade. This also relates to keeping WebSocket or other persistent connections alive.

proxy_set_header Host $host:$proxy_port;

Modifies the Host header that is passed to the upstream server to include the requested host and port.

proxy_set_header X-Real-IP $remote_addr;

Adds a header to pass the client’s real IP address (from $remote_addr).

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Sets the X-Forwarded-For header, which includes the original IP address of the client, useful for tracking traffic through multiple proxies.

proxy_set_header X-Forwarded-Proto $scheme;

Sets the protocol used for the request (HTTP or HTTPS) in the X-Forwarded-Proto header.

proxy_pass_request_headers on;

Ensures that the request headers are forwarded to the upstream server.

access_by_lua_block {
  local hooks = require "hooks"
  hooks.trigger_access()
}

Lua blocks now customize how requests and responses are processed. First, a script is run at the access phase of the request. This script requires a Lua module called hooks and calls the trigger_access() function. This can be used to run custom logic like authentication, logging, or modifying requests.

header_filter_by_lua_block {
  local hooks  = require "hooks"
  hooks.trigger_header()
}

Next, a Lua script is run during the header filter phase (typically after the response headers are received from the upstream server but before they are sent to the client). The trigger_header() function can modify or inspect response headers.

add_header X-Cache-Status $upstream_cache_status;

Adds a custom header (X-Cache-Status) to the response, reflecting the caching status of the upstream server.

Example 4

location = /public/notification {
  access_by_lua_block {
    local session = require "session"
    local metadict = require "metadict"
    local metadict_set = metadict.metadict_set
    
    metadict_set("profiler", ngx.now(), 'rbz_start_req_proc')
    session.process_request()
    metadict_set("profiler", ngx.now(), 'rbz_done_req_proc')
  }
  header_filter_by_lua_block {
    local metadict = require "metadict"
    local metadict_set = metadict.metadict_set
    metadict_set("profiler", ngx.now(), 'rbz_start_response')       
  }
  proxy_cache_bypass 1;
  proxy_no_cache 1;
  add_header X-Cache-Status $upstream_cache_status;
  proxy_buffering off;
  proxy_pass https://mydomain_ssl;
}

Example 4 discussion

This Nginx configuration block handles requests for the path /public/notification, forwarding them to an upstream HTTPS server. Lua scripts are used to log profiling data for request and response processing times. Caching is disabled, and the response is sent directly without buffering. The use of session handling (session.process_request()) suggests some form of user/session validation or similar logic.

location = /public/notification {

Specifies that this block handles requests to the exact URL /public/notification.

access_by_lua_block {

Begins Lua scripting for request handling.

 local session = require "session"

Loads a Lua module named "session". This module is likely responsible for session management or processing the request in some way.

local metadict = require "metadict"

Loads a Lua module called metadict, likely for logging or profiling purposes.

local metadict_set = metadict.metadict_set

Gets a function metadict_set from the metadict module to set profiling data.

metadict_set("profiler", ngx.now(), 'rbz_start_req_proc')

Logs the start of request processing by calling metadict_set with the current timestamp (ngx.now()), storing it under the key 'rbz_start_req_proc' in a "profiler".

session.process_request()

Calls a function process_request() from the session module, which processes the request, likely dealing with authentication, session validation, etc.

metadict_set("profiler", ngx.now(), 'rbz_done_req_proc')

Logs the end of the request processing by calling metadict_set again, marking the completion of the request with the timestamp.

Next, there is Lua scripting for response handling.

header_filter_by_lua_block {

This block runs after the upstream server responds but before the response is sent to the client.

local metadict = require "metadict"

Loads the metadict module again.

local metadict_set = metadict.metadict_set

Retrieves the metadict_set function.

metadict_set("profiler", ngx.now(), 'rbz_start_response')

Logs the start of the response phase by setting a timestamp for the key 'rbz_start_response' in the "profiler".

proxy_cache_bypass 1;

Instructs Nginx to bypass any caching mechanism for this specific request, meaning the response will not be served from cache.

proxy_no_cache 1;

Ensures that the response is not stored in the cache for future requests.

add_header X-Cache-Status $upstream_cache_status;

Adds a custom response header (X-Cache-Status) that contains the upstream cache status, indicating whether caching was used for this request.

proxy_buffering off;

Disables proxy buffering for this location, meaning that Nginx will not buffer the response and will immediately send it to the client as it receives it from the upstream server.

proxy_pass https://mydomain_ssl;

Forwards the request to the specified upstream, https://mydomain_ssl. This could be a load balancer or an actual backend server; either way, the connection to it uses HTTPS.
