HTTP Server Libraries

Web application development typically involves writing your web applications, packaging them into a web application archive, the *.war file, and then deploying the *.war file into a standalone Servlet Container that you have previously installed.

The Jetty server libraries allow you to write web application components using either the Jetty APIs (by writing Jetty Handlers) or the standard Servlet APIs (by writing Servlets and Servlet Filters). These components can then be programmatically assembled together, without the need to create a *.war file, added to a Jetty Server instance, and the Server started. As a result, your web applications are available to HTTP clients as if you had deployed your *.war files in a standalone Jetty server.

Jetty Handler APIs pros:

  • Simple minimalist asynchronous APIs.

  • Very low overhead, only configure the features you use.

  • Faster turnaround to implement new APIs or new standards.

  • Normal classloading behavior (web application classloading isolation also available).

Servlet APIs pros:

  • Standard, well known, APIs.

The Maven artifact coordinates are:

<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

An org.eclipse.jetty.server.Server instance is the central component that links together a collection of Connectors and a collection of Handlers, with threads from a ThreadPool doing the work.

Diagram

The components that accept connections from clients are org.eclipse.jetty.server.Connector implementations.

When a Jetty server interprets the HTTP protocol (HTTP/1.1, HTTP/2 or HTTP/3), it uses org.eclipse.jetty.server.Handler instances to process incoming requests and eventually produce responses.

A Server must be created, configured and started:

// Create and configure a ThreadPool.
QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setName("server");

// Create a Server instance.
Server server = new Server(threadPool);

// Create a ServerConnector to accept connections from clients.
Connector connector = new ServerConnector(server);

// Add the Connector to the Server
server.addConnector(connector);

// Set a simple Handler to handle requests/responses.
server.setHandler(new Handler.Abstract()
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Succeed the callback to signal that the
        // request/response processing is complete.
        callback.succeeded();
        return true;
    }
});

// Start the Server to start accepting connections from clients.
server.start();

The example above shows the simplest HTTP/1.1 server; it has no support for HTTP sessions, nor for HTTP authentication, nor for any of the features required by the Servlet specification.

These features (HTTP session support, HTTP authentication support, etc.) are provided by the Jetty server libraries, but not all of them may be necessary in your web application. You need to put together the required Jetty components to provide the features required by your web applications. The advantage is that you do not pay the cost for features that you do not use, saving resources and likely increasing performance.

The built-in Handlers provided by the Jetty server libraries allow you to write web applications that have functionalities similar to Apache HTTPD or Nginx (for example: URL redirection, URL rewriting, serving static content, reverse proxying, etc.), as well as generating content dynamically by processing incoming requests. Read this section for further details about Handlers.

If you are interested in writing your web application based on the Servlet APIs, jump to this section.

Request Processing

Jetty's HTTP request processing is outlined in the diagram below. You may want to refer to the Jetty I/O architecture for additional information about the classes mentioned below.

Request handling is slightly different for each protocol; in HTTP/2 Jetty takes into account multiplexing, something that is not present in HTTP/1.1.

However, the diagram below captures the essence of request handling that is common among all protocols that carry HTTP requests.

Diagram

First, the Jetty I/O layer emits an event that a socket has data to read. This event is converted to a call to AbstractConnection.onFillable(), where the Connection first reads from the EndPoint into a ByteBuffer, and then calls a protocol specific parser to parse the bytes in the ByteBuffer.

The parser emits events that are protocol specific; the HTTP/2 parser, for example, emits events for each HTTP/2 frame that has been parsed, as does the HTTP/3 parser. The parser events are then converted to protocol independent events such as "request start", "request headers", "request content chunk", etc., detailed in this section.

When enough of the HTTP request has arrived, the Connection calls HttpChannel.onRequest().

HttpChannel.onRequest() calls the request customizers, which allow you to customize the request and/or the response headers on a per-Connector basis.

After request customization, if any, the Handler chain is invoked, starting from the Server instance, and eventually your web application code is invoked.

Request Processing Events

Advanced web applications may be interested in the progress of the processing of an HTTP request/response. A typical case is to know exactly when the HTTP request/response processing starts and when it is complete, for example to monitor processing times.

This is conveniently implemented by org.eclipse.jetty.server.handler.EventsHandler, described in more details in this section.
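For example, a sketch of request timing with an EventsHandler subclass might look like the following. Here MyAppHandler is a placeholder for your application Handler, and the attribute name is illustrative; the overridden callbacks are those documented in the EventsHandler javadocs.

```java
Server server = new Server();

// Wrap the application Handler (MyAppHandler is a placeholder)
// with an EventsHandler subclass that measures processing time.
server.setHandler(new EventsHandler(new MyAppHandler())
{
    @Override
    protected void onBeforeHandling(Request request)
    {
        // Invoked just before the Handler chain processes the request.
        request.setAttribute("my-app.begin-nanos", System.nanoTime());
    }

    @Override
    protected void onAfterHandling(Request request, boolean handled, Throwable failure)
    {
        // Invoked just after the Handler chain has returned.
        long begin = (long)request.getAttribute("my-app.begin-nanos");
        long elapsed = System.nanoTime() - begin;
        System.getLogger("timing").log(INFO, "%s took %d ns", request.getHttpURI(), elapsed);
    }
});

server.start();
```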

Request Logging

HTTP requests and responses can be logged to provide data that can be later analyzed with other tools. These tools can provide information such as the most frequently accessed request URIs, the response status codes, the request/response content lengths, geographical information about the clients, etc.

The default request/response log line format is the NCSA Format extended with referrer data and user-agent data.

Typically, the extended NCSA format is enough, and it is the standard used and understood by most log parsing and monitoring tools.

To customize the request/response log line format see the CustomRequestLog javadocs.

Request logging can be enabled at the Server level.

The request logging output can be directed to an SLF4J logger named "org.eclipse.jetty.server.RequestLog" at INFO level, and therefore to any logging library implementation of your choice (see also this section about logging).

Server server = new Server();

// Sets the RequestLog to log to an SLF4J logger named "org.eclipse.jetty.server.RequestLog" at INFO level.
server.setRequestLog(new CustomRequestLog(new Slf4jRequestLogWriter(), CustomRequestLog.EXTENDED_NCSA_FORMAT));

Alternatively, the request logging output can be directed to a daily rolling file of your choice, and the file name must contain yyyy_MM_dd so that rolled over files retain their date:

Server server = new Server();

// Use a file name with the pattern 'yyyy_MM_dd' so rolled over files retain their date.
RequestLogWriter logWriter = new RequestLogWriter("/var/log/yyyy_MM_dd.jetty.request.log");
// Retain rolled over files for 2 weeks.
logWriter.setRetainDays(14);
// Log times are in the current time zone.
logWriter.setTimeZone(TimeZone.getDefault().getID());

// Set the RequestLog to log to the given file, rolling over at midnight.
server.setRequestLog(new CustomRequestLog(logWriter, CustomRequestLog.EXTENDED_NCSA_FORMAT));

For maximum flexibility, you can log to multiple RequestLogs using class RequestLog.Collection, for example by logging with different formats or to different outputs.
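For example, a sketch that combines SLF4J logging in the extended NCSA format with file logging in the basic NCSA format (the file path is illustrative):

```java
Server server = new Server();

// Log with the extended NCSA format to an SLF4J logger.
RequestLog slf4jLog = new CustomRequestLog(new Slf4jRequestLogWriter(), CustomRequestLog.EXTENDED_NCSA_FORMAT);

// Also log with the basic NCSA format to a daily rolling file.
RequestLogWriter fileWriter = new RequestLogWriter("/var/log/yyyy_MM_dd.jetty.request.log");
RequestLog fileLog = new CustomRequestLog(fileWriter, CustomRequestLog.NCSA_FORMAT);

// Combine the two RequestLogs.
server.setRequestLog(new RequestLog.Collection(slf4jLog, fileLog));
```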

You can use CustomRequestLog with a custom RequestLog.Writer to direct the request logging output to your custom targets (for example, an RDBMS). You can implement your own RequestLog if you want to have functionalities that are not implemented by CustomRequestLog.
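As a hypothetical sketch, a custom RequestLog.Writer could hand the formatted log lines to a queue, from which a separate consumer thread could, for example, batch-insert them into an RDBMS:

```java
// A hypothetical RequestLog.Writer that enqueues formatted log lines.
class QueueRequestLogWriter implements RequestLog.Writer
{
    private final BlockingQueue<String> entries = new LinkedBlockingQueue<>();

    @Override
    public void write(String requestEntry) throws IOException
    {
        // Enqueue the formatted log line; a separate consumer thread
        // could drain the queue and store the entries elsewhere.
        entries.offer(requestEntry);
    }
}

Server server = new Server();
server.setRequestLog(new CustomRequestLog(new QueueRequestLogWriter(), CustomRequestLog.EXTENDED_NCSA_FORMAT));
```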

Request Customizers

A request customizer is an instance of HttpConfiguration.Customizer that can customize the HTTP request and/or the HTTP response headers before the Handler chain is invoked.

Request customizers are added to a particular HttpConfiguration instance, and therefore are specific to a Connector instance: you can have two different Connectors configured with different request customizers.

For example, it is common to configure a secure Connector with the SecureRequestCustomizer that customizes the HTTP request by adding attributes that expose TLS data associated with the secure communication.

A request customizer may:

  • Inspect the received HTTP request method, URI, version and headers.

  • Wrap the Request object to allow any method to be overridden and customized. Typically this is done to synthesize additional HTTP request headers, or to change the return value of overridden methods.

  • Add or modify the HTTP response headers.

The out-of-the-box request customizers include:

  • ForwardedRequestCustomizer — to interpret the Forwarded (or the obsolete X-Forwarded-*) HTTP header added by a reverse proxy; see this section.

  • HostHeaderCustomizer — to customize the HTTP Host header, or synthesize it when absent; see this section.

  • ProxyCustomizer — to expose as Request attributes the ip:port information carried by the PROXY protocol; see this section.

  • RewriteCustomizer — to rewrite the request URI; see this section.

  • SecureRequestCustomizer — to expose TLS data via Request attributes; see this section.

You can also write your own request customizers and add them to the HttpConfiguration instance alongside existing request customizers. Multiple request customizers will be invoked in the order they have been added.

Below you can find an example of how to add a request customizer:

Server server = new Server();

// Configure the secure connector.
HttpConfiguration httpsConfig = new HttpConfiguration();

// Add the SecureRequestCustomizer.
httpsConfig.addCustomizer(new SecureRequestCustomizer());

// Configure the SslContextFactory with the KeyStore information.
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");
// Configure the Connector to speak HTTP/1.1 and HTTP/2.
HttpConnectionFactory h1 = new HttpConnectionFactory(httpsConfig);
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpsConfig);
ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
alpn.setDefaultProtocol(h1.getProtocol());
SslConnectionFactory ssl = new SslConnectionFactory(sslContextFactory, alpn.getProtocol());
ServerConnector connector = new ServerConnector(server, ssl, alpn, h2, h1);
server.addConnector(connector);

server.start();

ForwardedRequestCustomizer

ForwardedRequestCustomizer should be added when Jetty receives requests from a reverse proxy on behalf of a remote client, and web applications need to access the remote client information.

The reverse proxy adds the Forwarded (or the obsolete X-Forwarded-*) HTTP header to the request, and may offload TLS so that the request arrives in clear-text to Jetty.

Applications deployed in Jetty may need to access information related to the remote client, for example the remote IP address and port, or whether the request was sent through a secure communication channel. However, the request is forwarded by the reverse proxy, so the direct information about the remote IP address is that of the proxy, not of the remote client. Furthermore, the proxy may offload TLS and forward the request in clear-text, so that the URI scheme would be http as forwarded by the reverse proxy, not https as sent by the remote client.

ForwardedRequestCustomizer reads the Forwarded header where the reverse proxy saved the remote client information, and wraps the original Request so that applications will transparently see the remote client information when calling methods such as Request.isSecure(), or Request.getConnectionMetaData().getRemoteSocketAddress(), etc.

For more information about how to configure ForwardedRequestCustomizer, see also the javadocs.
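For example, a minimal configuration sketch, assuming the reverse proxy forwards clear-text HTTP/1.1 to port 8080:

```java
Server server = new Server();

HttpConfiguration httpConfig = new HttpConfiguration();
// Interpret the Forwarded (or the obsolete X-Forwarded-*)
// HTTP header added by the reverse proxy.
httpConfig.addCustomizer(new ForwardedRequestCustomizer());

ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);
server.addConnector(connector);

server.start();
```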

HostHeaderCustomizer

HostHeaderCustomizer should be added when Jetty receives requests that may lack the Host HTTP header, such as HTTP/1.0, HTTP/2 or HTTP/3 requests, and web applications have logic that depends on the value of the Host HTTP header.

For HTTP/2 and HTTP/3, the Host HTTP header is missing because the authority information is carried by the :authority pseudo-header, as per the respective specifications.

HostHeaderCustomizer will look at the :authority pseudo-header, then wrap the original Request adding a Host HTTP header synthesized from the :authority pseudo-header. In this way, web applications that rely on the presence of the Host HTTP header will work seamlessly in any HTTP protocol version.

HostHeaderCustomizer also works for the WebSocket protocol.

WebSocket over HTTP/2 or over HTTP/3 initiates the WebSocket communication with an HTTP request that only has the :authority pseudo-header. HostHeaderCustomizer synthesizes the Host HTTP header for such requests, so that WebSocket web applications that inspect the initial HTTP request before the WebSocket communication will work seamlessly in any HTTP protocol version.

For more information about how to configure HostHeaderCustomizer, see also the javadocs.
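For example, a minimal sketch of adding HostHeaderCustomizer to an HttpConfiguration:

```java
HttpConfiguration httpConfig = new HttpConfiguration();
// Synthesize the Host header from the :authority pseudo-header
// when the request lacks it.
httpConfig.addCustomizer(new HostHeaderCustomizer());
```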

ProxyCustomizer

ProxyCustomizer should be added when Jetty receives requests from a reverse proxy on behalf of a remote client, prefixed by the PROXY protocol (see also this section about the PROXY protocol).

ProxyCustomizer adds the reverse proxy IP address and port as Request attributes. Web applications may use these attributes in conjunction with the data exposed by ForwardedRequestCustomizer (see this section).

For more information about how to configure ProxyCustomizer, see also the javadocs.
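As a configuration sketch, note that the connector must also be configured to understand the PROXY protocol bytes, which arrive before the HTTP bytes:

```java
Server server = new Server();

HttpConfiguration httpConfig = new HttpConfiguration();
// Expose the reverse proxy ip:port information as Request attributes.
httpConfig.addCustomizer(new ProxyCustomizer());

// The PROXY protocol bytes arrive before the HTTP bytes, so the
// ProxyConnectionFactory must come before the HttpConnectionFactory.
HttpConnectionFactory http = new HttpConnectionFactory(httpConfig);
ProxyConnectionFactory proxy = new ProxyConnectionFactory(http.getProtocol());

ServerConnector connector = new ServerConnector(server, proxy, http);
connector.setPort(8080);
server.addConnector(connector);

server.start();
```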

RewriteCustomizer

RewriteCustomizer is similar to RewriteHandler (see this section), but a RewriteCustomizer cannot send a response or otherwise complete the request/response processing.

A RewriteCustomizer is mostly useful if you want to rewrite the request URI before the Handler chain is invoked. However, a very similar effect can be achieved by having the RewriteHandler as the first Handler (the child Handler of the Server instance).

Since RewriteCustomizer cannot send a response or complete the request/response processing, Rules that do so, such as redirect rules, have no effect and are ignored; only Rules that modify or wrap the Request will be applied.

Due to this limitation, it is often a better choice to use RewriteHandler instead of RewriteCustomizer.

For more information about how to configure RewriteCustomizer, see also the javadocs.
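For example, a sketch that rewrites request URIs before the Handler chain is invoked (the regular expression and replacement are illustrative):

```java
HttpConfiguration httpConfig = new HttpConfiguration();

RewriteCustomizer rewrite = new RewriteCustomizer();
// Rewrite request URIs such as /api/v1/users to /app/api/v1/users.
rewrite.addRule(new RewriteRegexRule("/api/(.*)", "/app/api/$1"));
httpConfig.addCustomizer(rewrite);
```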

SecureRequestCustomizer

SecureRequestCustomizer should be added when Jetty receives requests over a secure Connector.

SecureRequestCustomizer adds TLS information as request attributes, in particular an instance of EndPoint.SslSessionData that contains information about the negotiated TLS cipher suite and possibly client certificates, and an instance of org.eclipse.jetty.util.ssl.X509 that contains information about the server certificate.

SecureRequestCustomizer also adds, if configured so, the Strict-Transport-Security HTTP response header (for more information about this header, see its specification).

For more information about how to configure SecureRequestCustomizer, see also the javadocs.
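For example, a sketch that enables the Strict-Transport-Security response header:

```java
HttpConfiguration httpsConfig = new HttpConfiguration();

SecureRequestCustomizer customizer = new SecureRequestCustomizer();
// Send the Strict-Transport-Security header with a max-age of 1 year.
customizer.setStsMaxAge(Duration.ofDays(365).toSeconds());
// Apply the policy to subdomains as well.
customizer.setStsIncludeSubDomains(true);
httpsConfig.addCustomizer(customizer);
```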

Server Connectors

A Connector is the component that handles incoming requests from clients, and works in conjunction with ConnectionFactory instances.

The available implementations are:

  • org.eclipse.jetty.server.ServerConnector, for TCP/IP sockets.

  • org.eclipse.jetty.unixdomain.server.UnixDomainServerConnector for Unix-Domain sockets (requires Java 16 or later).

  • org.eclipse.jetty.quic.server.QuicServerConnector, for the low-level QUIC protocol and HTTP/3.

  • org.eclipse.jetty.server.MemoryConnector, for memory communication between client and server.

ServerConnector and UnixDomainServerConnector use a java.nio.channels.ServerSocketChannel to listen to a socket address and to accept socket connections. QuicServerConnector uses a java.nio.channels.DatagramChannel to listen to incoming UDP packets. MemoryConnector uses memory for the communication between client and server, avoiding the use of sockets.

Since ServerConnector wraps a ServerSocketChannel, it can be configured in a similar way, for example the TCP port to listen to, the IP address to bind to, etc.:

Server server = new Server();

// The number of acceptor threads.
int acceptors = 1;

// The number of selectors.
int selectors = 1;

// Create a ServerConnector instance.
ServerConnector connector = new ServerConnector(server, acceptors, selectors, new HttpConnectionFactory());

// Configure TCP/IP parameters.

// The port to listen to.
connector.setPort(8080);
// The address to bind to.
connector.setHost("127.0.0.1");

// The TCP accept queue size.
connector.setAcceptQueueSize(128);

server.addConnector(connector);
server.start();

UnixDomainServerConnector also wraps a ServerSocketChannel and can be configured with the Unix-Domain path to listen to:

Server server = new Server();

// The number of acceptor threads.
int acceptors = 1;

// The number of selectors.
int selectors = 1;

// Create a ServerConnector instance.
UnixDomainServerConnector connector = new UnixDomainServerConnector(server, acceptors, selectors, new HttpConnectionFactory());

// Configure Unix-Domain parameters.

// The Unix-Domain path to listen to.
connector.setUnixDomainPath(Path.of("/tmp/jetty.sock"));

// The TCP accept queue size.
connector.setAcceptQueueSize(128);

server.addConnector(connector);
server.start();

You can use Unix-Domain sockets only when you run your server with Java 16 or later.

QuicServerConnector wraps a DatagramChannel and can be configured in a similar way, as shown in the example below. Since communication via UDP does not require accepting connections as TCP does, the number of acceptors is set to 0 and there is no API to configure their number.

Server server = new Server();

// Configure the SslContextFactory with the keyStore information.
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

// Create a QuicServerConnector instance.
Path pemWorkDir = Path.of("/path/to/pem/dir");
ServerQuicConfiguration serverQuicConfig = new ServerQuicConfiguration(sslContextFactory, pemWorkDir);
QuicServerConnector connector = new QuicServerConnector(server, serverQuicConfig, new HTTP3ServerConnectionFactory(serverQuicConfig));

// The port to listen to.
connector.setPort(8080);
// The address to bind to.
connector.setHost("127.0.0.1");

server.addConnector(connector);
server.start();

MemoryConnector uses in-process memory, not sockets, for the communication between client and server, which must therefore be in the same process.

Typical usage of MemoryConnector is the following:

Server server = new Server();

// Create a MemoryConnector instance that speaks HTTP/1.1.
MemoryConnector connector = new MemoryConnector(server, new HttpConnectionFactory());

server.addConnector(connector);
server.start();

// The code above is the server-side.
// ----
// The code below is the client-side.

HttpClient httpClient = new HttpClient();
httpClient.start();

ContentResponse response = httpClient.newRequest("http://localhost/")
    // Use the memory Transport to communicate with the server-side.
    .transport(new MemoryTransport(connector))
    .send();

Acceptors

The acceptors are threads (typically only one) that compete to accept TCP socket connections. The connectors for the QUIC or HTTP/3 protocol, based on UDP, have no acceptors.

When a TCP connection is accepted, ServerConnector wraps the accepted SocketChannel and passes it to the SelectorManager. Therefore, there is a brief moment when the acceptor thread is not accepting new connections, because it is busy wrapping the just-accepted connection to pass it to the SelectorManager. Connections that are ready to be accepted but not yet accepted are queued in a bounded queue (at the OS level) whose capacity can be configured with the acceptQueueSize parameter.

If your application must withstand a very high rate of connection opening, configuring more than one acceptor thread may be beneficial: when one acceptor thread accepts one connection, another acceptor thread can take over accepting connections.

Selectors

The selectors are components that manage a set of accepted TCP sockets, implemented by ManagedSelector. For QUIC or HTTP/3, there are no accepted TCP sockets, but only one DatagramChannel and therefore there is only one selector.

Each selector requires one thread and uses the Java NIO mechanism to efficiently handle a set of registered channels.

As a rule of thumb, a single selector can easily manage up to 1000-5000 TCP sockets, although the number may vary greatly depending on the application.

For example, web applications for websites tend to use TCP sockets for one or more HTTP requests to retrieve resources, and then the TCP socket is idle for most of the time. In this case a single selector may be able to manage many TCP sockets, because chances are that they will be idle most of the time. On the contrary, web messaging applications or REST applications tend to send many small messages at a very high frequency, so that the TCP sockets are rarely idle. In this case a single selector may be able to manage fewer TCP sockets, because chances are that many of them will be active at the same time, so you may need more than one selector.

Multiple Connectors

It is possible to configure more than one Connector per Server. Typical cases are a ServerConnector for clear-text HTTP and another ServerConnector for secure HTTP. Another case could be a publicly exposed ServerConnector for secure HTTP, and an internally exposed UnixDomainServerConnector or MemoryConnector for clear-text HTTP. Yet another example could be a ServerConnector for clear-text HTTP, a ServerConnector for secure HTTP/2, and a QuicServerConnector for QUIC+HTTP/3.

For example:

Server server = new Server();

// Create a ServerConnector instance on port 8080.
ServerConnector connector1 = new ServerConnector(server, 1, 1, new HttpConnectionFactory());
connector1.setPort(8080);
server.addConnector(connector1);

// Create another ServerConnector instance on port 9090,
// for example with a different HTTP configuration.
HttpConfiguration httpConfig2 = new HttpConfiguration();
httpConfig2.setHttpCompliance(HttpCompliance.LEGACY);
ServerConnector connector2 = new ServerConnector(server, 1, 1, new HttpConnectionFactory(httpConfig2));
connector2.setPort(9090);
server.addConnector(connector2);

server.start();

If you do not specify the port the connector listens to explicitly, the OS will allocate one randomly when the connector starts.

You may need to use the randomly allocated port to configure other components. One example is to use the randomly allocated port to configure secure redirects (when redirecting from a URI with the http scheme to the https scheme). Another example is to bind both the HTTP/2 connector and the HTTP/3 connector to the same randomly allocated port. The HTTP/2 connector and the HTTP/3 connector can share the same port because one uses TCP, while the other uses UDP.

For example:

SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

Server server = new Server();

// The plain HTTP configuration.
HttpConfiguration plainConfig = new HttpConfiguration();

// The secure HTTP configuration.
HttpConfiguration secureConfig = new HttpConfiguration(plainConfig);
secureConfig.addCustomizer(new SecureRequestCustomizer());

// First, create the secure connector for HTTPS and HTTP/2.
HttpConnectionFactory https = new HttpConnectionFactory(secureConfig);
HTTP2ServerConnectionFactory http2 = new HTTP2ServerConnectionFactory(secureConfig);
ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
alpn.setDefaultProtocol(https.getProtocol());
ConnectionFactory ssl = new SslConnectionFactory(sslContextFactory, https.getProtocol());
ServerConnector secureConnector = new ServerConnector(server, 1, 1, ssl, alpn, http2, https);
server.addConnector(secureConnector);

// Second, create the plain connector for HTTP.
HttpConnectionFactory http = new HttpConnectionFactory(plainConfig);
ServerConnector plainConnector = new ServerConnector(server, 1, 1, http);
server.addConnector(plainConnector);

// Third, create the connector for HTTP/3.
Path pemWorkDir = Path.of("/path/to/pem/dir");
ServerQuicConfiguration serverQuicConfig = new ServerQuicConfiguration(sslContextFactory, pemWorkDir);
QuicServerConnector http3Connector = new QuicServerConnector(server, serverQuicConfig, new HTTP3ServerConnectionFactory(serverQuicConfig));
server.addConnector(http3Connector);

// Set up a listener so that when the secure connector starts,
// it configures the other connectors that have not started yet.
secureConnector.addEventListener(new NetworkConnector.Listener()
{
    @Override
    public void onOpen(NetworkConnector connector)
    {
        int port = connector.getLocalPort();

        // Configure the plain connector for secure redirects from http to https.
        plainConfig.setSecurePort(port);

        // Configure the HTTP3 connector port to be the same as HTTPS/HTTP2.
        http3Connector.setPort(port);
    }
});

server.start();

Configuring Protocols

A server Connector can be configured with one or more ConnectionFactorys, and this list of ConnectionFactorys represents the protocols that the Connector can understand. If no ConnectionFactory is specified then HttpConnectionFactory is implicitly configured.

For each accepted connection, the server Connector asks a ConnectionFactory to create a Connection object that handles the traffic on that connection, parsing and generating bytes for a specific protocol (see this section for more details about Connection objects).

You can listen for Connection open and close events as detailed in this section.
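For example, a sketch of a Connection.Listener added as a bean to a connector; the logger name is illustrative:

```java
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// The listener is notified of every Connection opened
// and closed by this connector.
connector.addBean(new Connection.Listener()
{
    @Override
    public void onOpened(Connection connection)
    {
        System.getLogger("conn").log(INFO, "opened %s", connection);
    }

    @Override
    public void onClosed(Connection connection)
    {
        System.getLogger("conn").log(INFO, "closed %s", connection);
    }
});
```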

Secure protocols like secure HTTP/1.1, secure HTTP/2 or HTTP/3 (HTTP/3 is intrinsically secure — there is no clear-text HTTP/3) require an SslContextFactory.Server to be configured with a KeyStore.

For HTTP/1.1 and HTTP/2, SslContextFactory.Server is used in conjunction with SSLEngine, which drives the TLS handshake that establishes the secure communication.

Applications may register a org.eclipse.jetty.io.ssl.SslHandshakeListener to be notified of TLS handshakes success or failure, by adding the SslHandshakeListener as a bean to the Connector:

// Create a SslHandshakeListener.
SslHandshakeListener listener = new SslHandshakeListener()
{
    @Override
    public void handshakeSucceeded(Event event) throws SSLException
    {
        SSLEngine sslEngine = event.getSSLEngine();
        System.getLogger("tls").log(INFO, "TLS handshake successful to %s", sslEngine.getPeerHost());
    }

    @Override
    public void handshakeFailed(Event event, Throwable failure)
    {
        SSLEngine sslEngine = event.getSSLEngine();
        System.getLogger("tls").log(ERROR, "TLS handshake failure to %s", sslEngine.getPeerHost(), failure);
    }
};

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Add the SslHandshakeListener as bean to ServerConnector.
// The listener will be notified of TLS handshakes success and failure.
connector.addBean(listener);

Clear-Text HTTP/1.1

HttpConnectionFactory creates HttpConnection objects that parse bytes and generate bytes for the HTTP/1.1 protocol.

This is how you configure Jetty to support clear-text HTTP/1.1:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();
// Configure the HTTP support, for example:
httpConfig.setSendServerVersion(false);

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// Create the ServerConnector.
ServerConnector connector = new ServerConnector(server, http11);
connector.setPort(8080);

server.addConnector(connector);
server.start();

Encrypted HTTP/1.1 (https)

Encrypted HTTP/1.1 (that is, requests with the https scheme) is supported by configuring an SslContextFactory that has access to the KeyStore containing the private server key and public server certificate, in this way:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();
// Add the SecureRequestCustomizer because TLS is used.
httpConfig.addCustomizer(new SecureRequestCustomizer());

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// Configure the SslContextFactory with the keyStore information.
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

// The ConnectionFactory for TLS.
SslConnectionFactory tls = new SslConnectionFactory(sslContextFactory, http11.getProtocol());

// The ServerConnector instance.
ServerConnector connector = new ServerConnector(server, tls, http11);
connector.setPort(8443);

server.addConnector(connector);
server.start();

You can customize the SSL/TLS provider as explained in this section.

Clear-Text HTTP/2

It is well known that the HTTP ports are 80 (for clear-text HTTP) and 443 (for encrypted HTTP). By using those ports, a client had prior knowledge that the server would speak, respectively, the HTTP/1.x protocol and the TLS protocol (and, after decryption, the HTTP/1.x protocol).

HTTP/2 was designed to be a smooth transition from HTTP/1.1 for users and as such the HTTP ports were not changed. However the HTTP/2 protocol is, on the wire, a binary protocol, completely different from HTTP/1.1. Therefore, with HTTP/2, clients that connect to port 80 (or to a specific Unix-Domain path) may speak either HTTP/1.1 or HTTP/2, and the server must figure out which version of the HTTP protocol the client is speaking.

Jetty can support both HTTP/1.1 and HTTP/2 on the same clear-text port by configuring both the HTTP/1.1 and the HTTP/2 ConnectionFactorys:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// The ConnectionFactory for clear-text HTTP/2.
HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(httpConfig);

// The ServerConnector instance.
ServerConnector connector = new ServerConnector(server, http11, h2c);
connector.setPort(8080);

server.addConnector(connector);
server.start();

Note how the ConnectionFactorys passed to ServerConnector are in order: first HTTP/1.1, then HTTP/2. This is necessary to support both protocols on the same port: Jetty will start parsing the incoming bytes as HTTP/1.1, but then realize that they are HTTP/2 bytes and will therefore upgrade from HTTP/1.1 to HTTP/2.

This configuration is also typical when Jetty is installed in backend servers behind a load balancer that also takes care of offloading TLS. When Jetty is behind a load balancer, you can always prepend the PROXY protocol as described in this section.

Encrypted HTTP/2

When using encrypted HTTP/2, the unencrypted protocol is negotiated by client and server using an extension to the TLS protocol called ALPN.

Jetty supports ALPN and encrypted HTTP/2 with this configuration:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();
// Add the SecureRequestCustomizer because TLS is used.
httpConfig.addCustomizer(new SecureRequestCustomizer());

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// The ConnectionFactory for HTTP/2.
HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpConfig);

// The ALPN ConnectionFactory.
ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
// The default protocol to use in case there is no negotiation.
alpn.setDefaultProtocol(http11.getProtocol());

// Configure the SslContextFactory with the keyStore information.
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

// The ConnectionFactory for TLS.
SslConnectionFactory tls = new SslConnectionFactory(sslContextFactory, alpn.getProtocol());

// The ServerConnector instance.
ServerConnector connector = new ServerConnector(server, tls, alpn, h2, http11);
connector.setPort(8443);

server.addConnector(connector);
server.start();

Note how the ConnectionFactorys passed to ServerConnector are in order: TLS, ALPN, HTTP/2, HTTP/1.1.

Jetty starts parsing TLS bytes so that it can obtain the ALPN extension. With the ALPN extension information, Jetty can negotiate a protocol and pick, among the ConnectionFactorys supported by the ServerConnector, the ConnectionFactory corresponding to the negotiated protocol.

The fact that the HTTP/2 protocol comes before the HTTP/1.1 protocol indicates that HTTP/2 is the preferred protocol for the server.

Note also that the default protocol set in the ALPN ConnectionFactory, which is used in case ALPN is not supported by the client, is HTTP/1.1: a client that does not support ALPN is probably an old client, so HTTP/1.1 is the safest choice.

You can customize the SSL/TLS provider as explained in this section.

HTTP/3

The HTTP/3 protocol is layered on top of the QUIC protocol, which is based on UDP. This is rather different from HTTP/1.1 and HTTP/2, which are based on TCP.

Jetty only implements the HTTP/3 layer in Java; the QUIC implementation is provided by the Quiche native library, which Jetty calls via JNA (and possibly, in the future, via the Foreign APIs).

Jetty’s HTTP/3 support can only be used on the platforms (OS and CPU) supported by the Quiche native library.

HTTP/3 clients may not know in advance if the server supports QUIC (over UDP), but the server typically supports either HTTP/1 or HTTP/2 (over TCP) on the default HTTP secure port 443, and advertises the availability of HTTP/3 as an HTTP alternate service, possibly on a different port and/or a different host.

For example, an HTTP/2 response may include the following header:

Alt-Svc: h3=":843"

The presence of this header indicates that protocol h3 is available on the same host (since no host is defined before the port), but on port 843 (although it may be the same port 443). The HTTP/3 client may now initiate a QUIC connection on port 843 and make HTTP/3 requests.

It is nowadays common to use the same port 443 for both HTTP/2 and HTTP/3. This does not cause problems because HTTP/2 listens on the TCP port 443, while QUIC listens on the UDP port 443.

It is therefore common for HTTP/3 clients to initiate connections using the HTTP/2 protocol over TCP and, if the server supports HTTP/3, to switch to HTTP/3 as indicated by the server.
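As a sketch, a server can advertise HTTP/3 to clients by adding the Alt-Svc header from a wrapping Handler; the AltSvcHandler class name is illustrative, and port 843 mirrors the example above:

```java
// Illustrative sketch: a Handler.Wrapper that advertises HTTP/3
// via the Alt-Svc header on every response.
class AltSvcHandler extends Handler.Wrapper
{
    @Override
    public boolean handle(Request request, Response response, Callback callback) throws Exception
    {
        // Advertise that h3 is available on UDP port 843 of the same host.
        response.getHeaders().put("Alt-Svc", "h3=\":843\"");
        // Delegate the request processing to the wrapped Handler.
        return super.handle(request, response, callback);
    }
}
```

Such a Handler would typically wrap the application Handlers on the TCP connectors, so that HTTP/1.1 and HTTP/2 responses carry the advertisement.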

Diagram

The code necessary to configure HTTP/2 is described in this section.

To set up HTTP/3, for example on port 843, you need the following code (some of which could be shared with other connectors such as HTTP/2’s):

Server server = new Server();

SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.addCustomizer(new SecureRequestCustomizer());

// Create and configure the HTTP/3 connector.
// It is mandatory to configure the PEM directory.
Path pemWorkDir = Path.of("/path/to/pem/dir");
ServerQuicConfiguration serverQuicConfig = new ServerQuicConfiguration(sslContextFactory, pemWorkDir);
QuicServerConnector connector = new QuicServerConnector(server, serverQuicConfig, new HTTP3ServerConnectionFactory(serverQuicConfig));
connector.setPort(843);

server.addConnector(connector);

server.start();

The use of the Quiche native library requires the private key and public certificate present in the KeyStore to be exported as PEM files for Quiche to use them.

It is therefore mandatory to configure the PEM directory as shown above.

The PEM directory must also be adequately protected using file system permissions, because it stores the private key PEM file. You want to grant as few permissions as possible, typically the equivalent of POSIX rwx only to the user that runs the Jetty process. Using /tmp or any other directory accessible by any user is not a secure choice.
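For example, a PEM working directory with owner-only permissions can be created with standard JDK APIs; this sketch assumes a POSIX file system, and the class and method names are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PemDirectories
{
    // Creates a directory readable, writable and executable only by its owner,
    // suitable as the Quiche PEM working directory.
    public static Path createPrivateDir(Path dir) throws Exception
    {
        Files.createDirectories(dir);
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwx------");
        Files.setPosixFilePermissions(dir, perms);
        return dir;
    }
}
```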

Using Conscrypt as SSL/TLS Provider

If not explicitly configured, the TLS implementation is provided by the JDK you are using at runtime.

OpenJDK’s vendors may replace the default TLS provider with their own, but you can also explicitly configure an alternative TLS provider.

The standard TLS provider from OpenJDK is implemented in Java (no native code), and its performance is not optimal, both in CPU usage and memory usage.

A faster alternative, implemented natively, is Google’s Conscrypt, which is built on BoringSSL, which is Google’s fork of OpenSSL.

As Conscrypt eventually binds to a native library, there is a higher risk that a bug in Conscrypt or in the native library causes a JVM crash, while the Java implementation will not cause a JVM crash.

To use Conscrypt as TLS provider, you must have the Conscrypt jar and the Jetty dependency jetty-alpn-conscrypt-server-12.0.10-SNAPSHOT.jar in the class-path or module-path.

Then, you must configure the JDK with the Conscrypt provider, and configure Jetty to use the Conscrypt provider, in this way:

// Configure the JDK with the Conscrypt provider.
Security.addProvider(new OpenSSLProvider());

SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");
// Configure Jetty's SslContextFactory to use Conscrypt.
sslContextFactory.setProvider("Conscrypt");

Jetty Behind a Load Balancer

It is often the case that Jetty receives connections from a load balancer configured to distribute the load among many Jetty backend servers.

From the Jetty point of view, all the connections arrive from the load balancer rather than from the real clients, but it is possible to configure the load balancer to forward the real client IP address and IP port to the backend Jetty server using the PROXY protocol.

The PROXY protocol is widely supported by load balancers such as HAProxy (via its send-proxy directive), Nginx (via its proxy_protocol on directive) and others.

To support this case, Jetty can be configured in this way:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();
// Configure the HTTP support, for example:
httpConfig.setSendServerVersion(false);

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// The ConnectionFactory for the PROXY protocol.
ProxyConnectionFactory proxy = new ProxyConnectionFactory(http11.getProtocol());

// Create the ServerConnector.
ServerConnector connector = new ServerConnector(server, proxy, http11);
connector.setPort(8080);

server.addConnector(connector);
server.start();

Note how the ConnectionFactorys passed to ServerConnector are in order: first PROXY, then HTTP/1.1. Note also how the PROXY ConnectionFactory needs to know its next protocol (in this example, HTTP/1.1).

Each ConnectionFactory is asked to create a Connection object for each accepted TCP connection; the Connection objects will be chained together to handle the bytes, each for its own protocol. Therefore, the ProxyConnection will handle the PROXY protocol bytes and HttpConnection will handle the HTTP/1.1 bytes, producing a request object and a response object that will be processed by Handlers.

The load balancer may be configured to communicate with Jetty backend servers via Unix-Domain sockets (requires Java 16 or later). For example:

Server server = new Server();

// The HTTP configuration object.
HttpConfiguration httpConfig = new HttpConfiguration();
// Configure the HTTP support, for example:
httpConfig.setSendServerVersion(false);

// The ConnectionFactory for HTTP/1.1.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpConfig);

// The ConnectionFactory for the PROXY protocol.
ProxyConnectionFactory proxy = new ProxyConnectionFactory(http11.getProtocol());

// Create the ServerConnector.
UnixDomainServerConnector connector = new UnixDomainServerConnector(server, proxy, http11);
connector.setUnixDomainPath(Path.of("/tmp/jetty.sock"));

server.addConnector(connector);
server.start();

Note that the only difference when using Unix-Domain sockets is instantiating UnixDomainServerConnector instead of ServerConnector and configuring the Unix-Domain path instead of the IP port.

Server Handlers

An org.eclipse.jetty.server.Handler is the component that processes incoming HTTP requests and eventually produces HTTP responses.

Handlers can process the HTTP request themselves, or they can be Handler.Containers that delegate HTTP request processing to one or more contained Handlers. This allows Handlers to be organized as a tree comprised of:

  • Leaf Handlers that generate a response, complete the Callback, and return true from the handle(...) method.

  • A Handler.Wrapper can be used to form a chain of Handlers where request, response or callback objects may be wrapped in the handle(...) method before being passed down the chain.

  • A Handler.Sequence that contains a sequence of Handlers, with each Handler being called in sequence until one returns true from its handle(...) method.

  • A specialized Handler.Container that may use properties of the request (for example, the URI, or a header, etc.) to select from one or more contained Handlers to delegate the HTTP request processing to, for example PathMappingsHandler.

A Handler tree is created by composing Handlers together:

Server server = new Server();

GzipHandler gzipHandler = new GzipHandler();
server.setHandler(gzipHandler);

Handler.Sequence sequence = new Handler.Sequence();
gzipHandler.setHandler(sequence);

sequence.addHandler(new App1Handler());
sequence.addHandler(new App2Handler());

The corresponding Handler tree structure looks like the following:

Server
└── GzipHandler
    └── Handler.Sequence
        ├── App1Handler
        └── App2Handler

You should prefer using existing Handlers provided by the Jetty server libraries for managing web application contexts, security, HTTP sessions and Servlet support. Refer to this section for more information about how to use the Handlers provided by the Jetty server libraries.

You should write your own leaf Handlers to implement your web application logic. Refer to this section for more information about how to write your own Handlers.

A Handler may be declared as non-blocking (by extending Handler.Abstract.NonBlocking) or as blocking (by extending Handler.Abstract), to allow interaction with the Jetty threading architecture for more efficient thread and CPU utilization during the request/response processing.

Container Handlers typically inherit whether they are blocking or non-blocking from their child or children.

Furthermore, container Handlers may be declared as dynamic: they allow adding/removing child Handlers after they have been started (see Handler.AbstractContainer for more information). Dynamic container Handlers are therefore always blocking, as it is not possible to know if a child Handler added in the future will be blocking or non-blocking.

If the Handler tree is not dynamic, then it is possible to create a non-blocking Handler tree, for example:

Server
└── RewriteHandler
    └── GzipHandler
        └── ContextHandler
            └── AppHandler extends Handler.Abstract.NonBlocking

When the Handler tree is non-blocking, Jetty may use the Produce-Consume mode to invoke the Handler tree, therefore avoiding a thread hand-off, and saving the cost of being scheduled on a different CPU with cold caches.

The Produce-Consume mode is equivalent to what other servers call "event loop" or "selector thread loop" architectures.

This mode has the benefit of reducing OS context switches and CPU cache misses, and it uses fewer threads; it is overall very efficient. On the other hand, it requires writing quick, non-blocking code, and it partially serializes the request/response processing, so that the Nth request in the sequence pays the latency of the processing of the N-1 requests in front of it.

If you declare your Handler to be non-blocking by extending Handler.Abstract.NonBlocking, the code you write in handle(...) (and recursively all the code called from there) must truly be non-blocking, and should execute quickly.

If the code blocks, you risk a server lock-up. If the code takes a long time to execute, requests from other connections may be delayed.
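As a sketch, a minimal non-blocking leaf Handler might look like the following; the class name and response content are illustrative:

```java
// Illustrative sketch of a non-blocking leaf Handler.
class QuickHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Only quick, non-blocking code is allowed here.
        response.setStatus(HttpStatus.OK_200);
        response.getHeaders().put(HttpHeader.CONTENT_TYPE, "text/plain; charset=utf-8");
        // Write the whole content, completing the callback when the write completes.
        response.write(true, StandardCharsets.UTF_8.encode("quick response"), callback);
        return true;
    }
}
```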

Jetty Handlers

Web applications are the unit of deployment in an HTTP server or Servlet container such as Jetty.

Two different web applications are typically deployed on different context paths, where a context path is the initial segment of the URI path. For example, web application webappA that implements a web user interface for an e-commerce site may be deployed to context path /shop, while web application webappB that implements a REST API for the e-commerce business may be deployed to /api.

A client making a request to URI /shop/cart is directed by Jetty to webappA, while a request to URI /api/products is directed to webappB.

An alternative way to deploy the two web applications of the example above is to use virtual hosts. A virtual host is a subdomain of the primary domain that shares the same IP address with the primary domain. If the e-commerce business primary domain is domain.com, then a virtual host for webappA could be shop.domain.com, while a virtual host for webappB could be api.domain.com.

Web application webappA can now be deployed to virtual host shop.domain.com and context path /, while web application webappB can be deployed to virtual host api.domain.com and context path /. Both applications have the same context path /, but they can be distinguished by the subdomain.

A client making a request to https://shop.domain.com/cart is directed by Jetty to webappA, while a request to https://api.domain.com/products is directed to webappB.

Therefore, in general, a web application is deployed to a context which can be seen as the pair (virtual_host, context_path). In the first case the contexts were (domain.com, /shop) and (domain.com, /api), while in the second case the contexts were (shop.domain.com, /) and (api.domain.com, /). Server applications using the Jetty Server Libraries create and configure a context for each web application. Many contexts can be deployed together to enrich the web application offering — for example a catalog context, a shop context, an API context, an administration context, etc.
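As an illustration only (not Jetty’s actual implementation), selecting a context by context path behaves like a longest-prefix match on the request URI path:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ContextSelection
{
    // Hypothetical illustration: select the context whose context path is the
    // longest matching prefix of the request URI path.
    public static Optional<String> select(List<String> contextPaths, String requestPath)
    {
        return contextPaths.stream()
            .filter(cp -> cp.equals("/") || requestPath.equals(cp) || requestPath.startsWith(cp + "/"))
            .max(Comparator.comparingInt(String::length));
    }
}
```

With contexts /shop and /api, a request to /shop/cart selects /shop, and a request to /api/products selects /api, matching the routing described above.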

Web applications can be written using exclusively the Servlet APIs, because developers know the Servlet APIs well and because those APIs guarantee better portability across Servlet container implementations, as described in this section.

On the other hand, web applications can be written using the Jetty APIs for better performance, to access Jetty-specific APIs, or to use features such as redirection from HTTP to HTTPS, support for gzip content compression, URI rewriting, etc. The Jetty Server Libraries provide a number of out-of-the-box Handlers that implement the most common functionalities, described in the next sections.

ContextHandler

ContextHandler is a Handler that represents a context for a web application. It is a Handler.Wrapper that performs some action before and after delegating to the nested Handler.

The simplest use of ContextHandler is the following:

class ShopHandler extends Handler.Abstract
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Implement the shop, remembering to complete the callback.
        return true;
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create a ContextHandler with contextPath.
ContextHandler context = new ContextHandler(new ShopHandler(), "/shop");

// Link the context to the server.
server.setHandler(context);

server.start();

The Handler tree structure looks like the following:

Server
└── ContextHandler /shop
    └── ShopHandler

ContextHandlerCollection

Server applications may need to deploy to Jetty more than one web application.

Recall from the introduction that Jetty offers Handler.Sequence, which contains a sequence of child Handlers. However, Handler.Sequence has no knowledge of the concept of context and just iterates through the sequence of Handlers.

A better choice for multiple web applications is ContextHandlerCollection, which matches a context from either its context path or its virtual host, without iterating through the Handlers.

If ContextHandlerCollection does not find a match, it just returns false from its handle(...) method. What happens next depends on the Handler tree structure: other Handlers may be invoked after ContextHandlerCollection, for example DefaultHandler (see this section). Eventually, if no Handler returns true from their own handle(...) method, then Jetty returns an HTTP 404 response to the client.

class ShopHandler extends Handler.Abstract
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Implement the shop, remembering to complete the callback.
        return true;
    }
}

class RESTHandler extends Handler.Abstract
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Implement the REST APIs, remembering to complete the callback.
        return true;
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();

// Create the context for the shop web application and add it to ContextHandlerCollection.
contextCollection.addHandler(new ContextHandler(new ShopHandler(), "/shop"));

// Link the ContextHandlerCollection to the Server.
server.setHandler(contextCollection);

server.start();

// Create the context for the API web application.
ContextHandler apiContext = new ContextHandler(new RESTHandler(), "/api");
// Web applications can be deployed after the Server is started.
contextCollection.deployHandler(apiContext, Callback.NOOP);

The Handler tree structure looks like the following:

Server
└── ContextHandlerCollection
    ├── ContextHandler /shop
    │   └── ShopHandler
    └── ContextHandler /api
        └── RESTHandler

ResourceHandler — Static Content

Static content such as images or files (HTML, JavaScript, CSS) can be sent by Jetty very efficiently because Jetty can write the content asynchronously, using direct ByteBuffers to minimize data copy, and using a memory cache for faster access to the data to send.

Being able to write content asynchronously means that if the network gets congested (for example, the client reads the content very slowly) and the server stalls the send of the requested data, then Jetty will wait to resume the send without blocking a thread to finish the send.

ResourceHandler supports the following features:

  • Welcome files, for example serving /index.html for request URI /

  • Precompressed resources, serving a precompressed /document.txt.gz for request URI /document.txt

  • Range requests, for requests containing the Range header, which allows clients to pause and resume downloads of large files

  • Directory listing, serving an HTML page with the file list of the requested directory

  • Conditional headers, for requests containing the If-Match, If-None-Match, If-Modified-Since, If-Unmodified-Since headers.

The number of features supported and the efficiency in sending static content are on the same level as those of common front-end servers used to serve static content, such as Nginx or Apache. Therefore, the traditional architecture where Nginx/Apache was the front-end server used only to send static content and Jetty was the back-end server used only to send dynamic content is somewhat obsolete, as Jetty can perform both tasks efficiently. This leads to simpler systems (fewer components to configure and manage) and better performance (no need to proxy dynamic requests from front-end servers to back-end servers).

It is common to use Nginx/Apache as load balancers, or as rewrite/redirect servers. We typically recommend HAProxy as load balancer, and Jetty has rewrite/redirect features as well.

This is how you configure a ResourceHandler to create a simple file server:

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and configure a ResourceHandler.
ResourceHandler handler = new ResourceHandler();
// Configure the directory where static resources are located.
handler.setBaseResource(ResourceFactory.of(handler).newResource("/path/to/static/resources/"));
// Configure directory listing.
handler.setDirAllowed(false);
// Configure welcome files.
handler.setWelcomeFiles(List.of("index.html"));
// Configure whether to accept range requests.
handler.setAcceptRanges(true);

// Link the context to the server.
server.setHandler(handler);

server.start();

If you need to serve static resources from multiple directories:

ResourceHandler handler = new ResourceHandler();

// For multiple directories, use ResourceFactory.combine().
Resource resource = ResourceFactory.combine(
    ResourceFactory.of(handler).newResource("/path/to/static/resources/"),
    ResourceFactory.of(handler).newResource("/another/path/to/static/resources/")
);
handler.setBaseResource(resource);

If the resource is not found, ResourceHandler will not return true from the handle(...) method, so what happens next depends on the Handler tree structure. See also how to use DefaultHandler.

GzipHandler

GzipHandler provides support for automatic decompression of compressed request content and automatic compression of response content.

GzipHandler is a Handler.Wrapper that inspects the request and, if the request matches the GzipHandler configuration, just installs the required components to eventually perform decompression of the request content or compression of the response content. The decompression/compression is not performed until the web application reads request content or writes response content.

GzipHandler can be configured at the server level in this way:

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and configure GzipHandler.
GzipHandler gzipHandler = new GzipHandler();
server.setHandler(gzipHandler);
// Only compress response content larger than this.
gzipHandler.setMinGzipSize(1024);
// Do not compress these URI paths.
gzipHandler.setExcludedPaths("/uncompressed");
// Also compress POST responses.
gzipHandler.addIncludedMethods("POST");
// Do not compress these mime types.
gzipHandler.addExcludedMimeTypes("font/ttf");

// Create a ContextHandlerCollection to manage contexts.
ContextHandlerCollection contexts = new ContextHandlerCollection();
gzipHandler.setHandler(contexts);

server.start();

The Handler tree structure looks like the following:

Server
└── GzipHandler
    └── ContextHandlerCollection
        ├── ContextHandler 1
        :── ...
        └── ContextHandler N

However, in less common cases, you can configure GzipHandler on a per-context basis, for example because you want to configure GzipHandler with different parameters for each context, or because you want only some contexts to have compression support:

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();
// Link the ContextHandlerCollection to the Server.
server.setHandler(contextCollection);

// Create the context for the shop web application wrapped with GzipHandler so only the shop will do gzip.
GzipHandler shopGzipHandler = new GzipHandler(new ContextHandler(new ShopHandler(), "/shop"));

// Add it to ContextHandlerCollection.
contextCollection.addHandler(shopGzipHandler);

// Create the context for the API web application.
ContextHandler apiContext = new ContextHandler(new RESTHandler(), "/api");

// Add it to ContextHandlerCollection.
contextCollection.addHandler(apiContext);

server.start();

The Handler tree structure looks like the following:

Server
└── ContextHandlerCollection
    ├── GzipHandler
    │   └── ContextHandler /shop
    │       └── ShopHandler
    └── ContextHandler /api
        └── RESTHandler

RewriteHandler

RewriteHandler provides support for URL rewriting, very similarly to Apache’s mod_rewrite or Nginx’s rewrite module.

The Maven artifact coordinates are:

<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-rewrite</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

RewriteHandler can be configured with a set of rules; a rule inspects the request and when it matches it performs some change to the request (for example, changes the URI path, adds/removes headers, etc.).

The Jetty Server Libraries provide rules for the most common usages, but you can write your own rules by extending the org.eclipse.jetty.rewrite.handler.Rule class.

Please refer to the jetty-rewrite module javadocs for the complete list of available rules.

You typically want to configure RewriteHandler at the server level, although it is possible to configure it on a per-context basis.

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and link the RewriteHandler to the Server.
RewriteHandler rewriteHandler = new RewriteHandler();
server.setHandler(rewriteHandler);

// Compacts URI paths with double slashes, e.g. /ctx//path/to//resource.
rewriteHandler.addRule(new CompactPathRule());
// Rewrites */products/* to */p/*.
rewriteHandler.addRule(new RewriteRegexRule("/(.*)/product/(.*)", "/$1/p/$2"));
// Redirects permanently to a different URI.
RedirectRegexRule redirectRule = new RedirectRegexRule("/documentation/(.*)", "https://docs.domain.com/$1");
redirectRule.setStatusCode(HttpStatus.MOVED_PERMANENTLY_301);
rewriteHandler.addRule(redirectRule);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();
rewriteHandler.setHandler(contextCollection);

server.start();

The Handler tree structure looks like the following:

Server
└── RewriteHandler
    └── ContextHandlerCollection
        ├── ContextHandler 1
        :── ...
        └── ContextHandler N

SizeLimitHandler

SizeLimitHandler tracks the sizes of request content and response content, and fails the request processing with an HTTP status code of 413 Content Too Large when a configured limit is exceeded.

Server applications can set up the SizeLimitHandler before or after handlers that modify the request content or response content such as GzipHandler. When SizeLimitHandler is before GzipHandler in the Handler tree, it will limit the compressed content; when it is after, it will limit the uncompressed content.

The Handler tree structure looks like the following, to limit uncompressed content:

Server
└── GzipHandler
    └── SizeLimitHandler
        └── ContextHandlerCollection
            ├── ContextHandler 1
            :── ...
            └── ContextHandler N

StatisticsHandler

StatisticsHandler gathers and exposes a number of statistic values related to request processing such as:

  • Total number of requests

  • Current number of concurrent requests

  • Minimum, maximum, average and standard deviation of request processing times

  • Number of responses grouped by HTTP status code (how many 2xx responses, how many 3xx responses, etc.)

  • Total response content bytes

Server applications can read these values and use them internally, or expose them via some service, or export them to JMX.

StatisticsHandler can be configured at the server level or at the context level.

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and link the StatisticsHandler to the Server.
StatisticsHandler statsHandler = new StatisticsHandler();
server.setHandler(statsHandler);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();
statsHandler.setHandler(contextCollection);

server.start();

The Handler tree structure looks like the following:

Server
└── StatisticsHandler
    └── ContextHandlerCollection
        ├── ContextHandler 1
        :── ...
        └── ContextHandler N

It is possible to act on those statistics by subclassing StatisticsHandler. For instance, StatisticsHandler.MinimumDataRateHandler can be used to enforce a minimum read rate and a minimum write rate based on the figures collected by the StatisticsHandler:

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and link the MinimumDataRateHandler to the Server.
// Create the MinimumDataRateHandler with a minimum read rate of 1KB per second and no minimum write rate.
StatisticsHandler.MinimumDataRateHandler dataRateHandler = new StatisticsHandler.MinimumDataRateHandler(1024L, 0L);
server.setHandler(dataRateHandler);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();
dataRateHandler.setHandler(contextCollection);

server.start();

EventsHandler

EventsHandler allows applications to be notified of request processing events.

EventsHandler must be subclassed, and the relevant onXYZ() methods overridden to capture the request processing events you are interested in. The request processing events can be used in conjunction with Request APIs that provide the information you may be interested in.

For example, if you want to use EventsHandler to record processing times, you can use the request processing events with the following Request APIs:

  • Request.getBeginNanoTime(), which returns the earliest possible nanoTime the request was received.

  • Request.getHeadersNanoTime(), which returns the nanoTime at which the parsing of the HTTP headers was completed.

The Request and Response objects may be inspected during events, but it is recommended to avoid modifying them, for example by adding/removing headers or by reading/writing content, because any modification may interfere with the processing performed by other Handlers.

EventsHandler emits the following events:

beforeHandling

Emitted just before EventsHandler invokes the Handler.handle(...) method of the next Handler in the Handler tree.

afterHandling

Emitted just after the invocation to the Handler.handle(...) method of the next Handler in the Handler tree returns, either normally or by throwing.

requestRead

Emitted every time a chunk of content is read from the Request.

responseBegin

Emitted when the first write of the response happens.

responseWrite

Emitted every time the write of some response content is initiated.

responseWriteComplete

Emitted every time the write of some response content is completed, either successfully or with a failure.

responseTrailersComplete

Emitted when the write of the response trailers is completed.

complete

Emitted when both the request and the response have been completely processed.

Your EventsHandler subclass should then be linked in the Handler tree in the relevant position, typically as the outermost Handler after Server:

class MyEventsHandler extends EventsHandler
{
    @Override
    protected void onBeforeHandling(Request request)
    {
        // The nanoTime at which the request is first received.
        long requestBeginNanoTime = request.getBeginNanoTime();

        // The nanoTime just before invoking the next Handler.
        request.setAttribute("beforeHandlingNanoTime", NanoTime.now());
    }

    @Override
    protected void onComplete(Request request, int status, HttpFields headers, Throwable failure)
    {
        // Retrieve the before handling nanoTime.
        long beforeHandlingNanoTime = (long)request.getAttribute("beforeHandlingNanoTime");

        // Record the request processing time and the status that was sent back to the client.
        long processingTime = NanoTime.millisSince(beforeHandlingNanoTime);
        System.getLogger("trackTime").log(INFO, "processing request %s took %d ms and ended with status code %d", request, processingTime, status);
    }
}

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Link the EventsHandler as the outermost Handler after Server.
MyEventsHandler eventsHandler = new MyEventsHandler();
server.setHandler(eventsHandler);

ContextHandler appHandler = new ContextHandler("/app");
eventsHandler.setHandler(appHandler);

server.start();

The Handler tree structure looks like the following:

Server
└── MyEventsHandler
    └── ContextHandler /app

You can link an EventsHandler at any point in the Handler tree structure, and even have multiple EventsHandlers notified of the request processing events at different stages of the Handler tree, for example:

Server
└── TotalEventsHandler
    └── SlowHandler
        └── AppEventsHandler
            └── ContextHandler /app

In the example above, TotalEventsHandler may record the total request processing time, from SlowHandler all the way to the ContextHandler. AppEventsHandler, on the other hand, may record both the time it takes for the request to flow from TotalEventsHandler to AppEventsHandler (therefore effectively measuring the processing time due to SlowHandler) and the time it takes for the ContextHandler to process the request.

Refer to the EventsHandler javadocs for further information.

QoSHandler

QoSHandler allows web applications to limit the number of concurrent requests, therefore implementing a quality of service (QoS) mechanism for end users.

Web applications may need to access resources with limited capacity, for example a relational database accessed through a JDBC connection pool.

Consider the case where each HTTP request results in a JDBC query, and the capacity of the database is 400 queries/s. Allowing more than 400 HTTP requests/s into the system, for example 500 requests/s, results in 100 requests blocking while waiting for a JDBC connection for every second. It is evident that even a short load spike of a few seconds eventually results in consuming all the server threads: some will be processing requests and queries, and the remaining will be blocked waiting for a JDBC connection. When no more threads are available, additional requests will queue up as tasks in the thread pool, consuming more memory and potentially causing a complete server failure. This situation affects the whole server, so one badly behaving web application may affect other well-behaving web applications. From the end user perspective the quality of service is terrible, because requests will take a long time to be served and will eventually time out.

In cases of load spikes, caused for example by popular events (weather or social events), usage bursts (Black Friday sales), or even denial of service attacks, it is desirable to give priority to certain requests rather than others. For example, in an e-commerce site requests that lead to the checkout and to the payments should have higher priorities than requests to browse the products. Another example is to prioritize requests for certain users such as paying users or administrative users.

QoSHandler allows you to configure the maximum number of concurrent requests; by extending QoSHandler you can prioritize suspended requests for faster processing.

A simple example that just limits the number of concurrent requests:

class ShopHandler extends Handler.Abstract
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Implement the shop, remembering to complete the callback.
        callback.succeeded();
        return true;
    }
}

int maxThreads = 256;
QueuedThreadPool serverThreads = new QueuedThreadPool(maxThreads);
Server server = new Server(serverThreads);
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Create and configure QoSHandler.
QoSHandler qosHandler = new QoSHandler();
// Set the max number of concurrent requests,
// for example in relation to the thread pool.
qosHandler.setMaxRequestCount(maxThreads / 2);
// A suspended request may stay suspended for at most 15 seconds.
qosHandler.setMaxSuspend(Duration.ofSeconds(15));
server.setHandler(qosHandler);

// Provide quality of service to the shop
// application by wrapping ShopHandler with QoSHandler.
qosHandler.setHandler(new ShopHandler());

server.start();

This is an example of a QoSHandler subclass where you can implement a custom prioritization logic:

class PriorityQoSHandler extends QoSHandler
{
    @Override
    protected int getPriority(Request request)
    {
        String pathInContext = Request.getPathInContext(request);

        // Payment requests have the highest priority.
        if (pathInContext.startsWith("/payment/"))
            return 3;

        // Login, checkout and admin requests.
        if (pathInContext.startsWith("/login/"))
            return 2;
        if (pathInContext.startsWith("/checkout/"))
            return 2;
        if (pathInContext.startsWith("/admin/"))
            return 2;

        // Health-check requests from the load balancer.
        if (pathInContext.equals("/health-check"))
            return 1;

        // Other requests have the lowest priority.
        return 0;
    }
}
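
The PriorityQoSHandler subclass is then installed in place of the base QoSHandler. Below is a wiring sketch, assuming the same server setup and ShopHandler of the previous example; the configuration values are illustrative:

```java
// Use the custom prioritization logic by installing
// the PriorityQoSHandler subclass in place of QoSHandler.
QoSHandler qosHandler = new PriorityQoSHandler();
// Set the max number of concurrent requests.
qosHandler.setMaxRequestCount(128);
// A suspended request may stay suspended for at most 15 seconds.
qosHandler.setMaxSuspend(Duration.ofSeconds(15));
server.setHandler(qosHandler);

// Wrap the shop application with the prioritizing QoSHandler.
qosHandler.setHandler(new ShopHandler());

server.start();
```

When the maximum number of concurrent requests is exceeded, requests with a higher priority returned by getPriority(Request) are resumed before lower-priority ones.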

SecuredRedirectHandler — Redirect from HTTP to HTTPS

SecuredRedirectHandler allows you to redirect requests made with the http scheme (and therefore to the clear-text port) to the https scheme (and therefore to the encrypted port).

For example a request to http://domain.com:8080/path?param=value is redirected to https://domain.com:8443/path?param=value.

Server applications must configure an HttpConfiguration object with the secure scheme and secure port so that SecuredRedirectHandler can build the redirect URI.

SecuredRedirectHandler is typically configured at the server level, although it can be configured on a per-context basis.

Server server = new Server();

// Configure the HttpConfiguration for the clear-text connector.
int securePort = 8443;
HttpConfiguration httpConfig = new HttpConfiguration();
httpConfig.setSecurePort(securePort);

// The clear-text connector.
ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);
server.addConnector(connector);

// Configure the HttpConfiguration for the secure connector.
HttpConfiguration httpsConfig = new HttpConfiguration(httpConfig);
// Add the SecureRequestCustomizer because TLS is used.
httpsConfig.addCustomizer(new SecureRequestCustomizer());

// The HttpConnectionFactory for the secure connector.
HttpConnectionFactory http11 = new HttpConnectionFactory(httpsConfig);

// Configure the SslContextFactory with the keyStore information.
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore");
sslContextFactory.setKeyStorePassword("secret");

// The ConnectionFactory for TLS.
SslConnectionFactory tls = new SslConnectionFactory(sslContextFactory, http11.getProtocol());

// The secure connector.
ServerConnector secureConnector = new ServerConnector(server, tls, http11);
secureConnector.setPort(8443);
server.addConnector(secureConnector);

// Create and link the SecuredRedirectHandler to the Server.
SecuredRedirectHandler securedHandler = new SecuredRedirectHandler();
server.setHandler(securedHandler);

// Create a ContextHandlerCollection to hold contexts.
ContextHandlerCollection contextCollection = new ContextHandlerCollection();
securedHandler.setHandler(contextCollection);

server.start();

CrossOriginHandler

CrossOriginHandler supports the server-side requirements of the CORS protocol implemented by browsers when performing cross-origin requests.

An example of a cross-origin request is when a script downloaded from the origin domain http://domain.com uses fetch() or XMLHttpRequest to make a request to a cross domain such as http://cross.domain.com (a subdomain of the origin domain) or to http://example.com (a completely different domain).

This is common, for example, when you embed reusable components such as a chat component into a web page: the web page and the chat component files are downloaded from http://domain.com, but the chat server is at http://chat.domain.com, so the chat component must make cross-origin requests to the chat server.

This kind of setup exposes web applications to cross-site request forgery (CSRF) attacks, and the CORS protocol has been established to protect against this kind of attack.

For security reasons, browsers by default do not allow cross-origin requests, unless the response from the cross domain contains the right CORS headers.

CrossOriginHandler relieves server-side web applications from handling CORS headers explicitly. You can set up your Handler tree with the CrossOriginHandler, configure it, and it will take care of the CORS headers separately from your application, where you can concentrate on the business logic.

The Handler tree structure looks like the following:

Server
└── CrossOriginHandler
    └── ContextHandler /app
        └── AppHandler

The most important CrossOriginHandler configuration parameter is allowedOrigins, which by default is the empty set, therefore disallowing all origins.

You want to restrict requests to your cross domain to only the origins you trust. In the chat example above, the chat server at http://chat.domain.com knows that the chat component is downloaded from the origin server at http://domain.com, so the CrossOriginHandler is configured in this way:

CrossOriginHandler crossOriginHandler = new CrossOriginHandler();
// The allowed origins are regex patterns.
crossOriginHandler.setAllowedOriginPatterns(Set.of("http://domain\\.com"));

Browsers send cross-origin requests in two ways:

  • Directly, if the cross-origin request meets some simple criteria.

  • By issuing a hidden preflight request before the actual cross-origin request, to verify with the server if it is willing to reply properly to the actual cross-origin request.

Both preflight requests and cross-origin requests will be handled by CrossOriginHandler, which will analyze the request and possibly add appropriate CORS response headers.

By default, preflight requests are not delivered to the CrossOriginHandler child Handler, but it is possible to configure CrossOriginHandler by setting deliverPreflightRequests=true so that the web application can fine-tune the CORS response headers.

Another important CrossOriginHandler configuration parameter is allowCredentials, which controls whether cookies and authentication headers are sent in cross-origin requests to the cross domain. By default, allowCredentials=false, so that cookies and authentication headers are not sent in cross-origin requests.

If the application deployed in the cross domain requires cookies or authentication, then you must set allowCredentials=true, but you also need to restrict the allowed origins to only the ones you trust, otherwise your cross domain application will be vulnerable to CSRF attacks.
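
Putting the configuration parameters above together, a sketch for the chat scenario; the setter names mirror the configuration parameters discussed in this section:

```java
// Configure the CrossOriginHandler for the chat server
// at http://chat.domain.com, which receives cross-origin
// requests from pages downloaded from http://domain.com.
CrossOriginHandler crossOriginHandler = new CrossOriginHandler();

// Restrict the allowed origins to only the trusted origin server.
crossOriginHandler.setAllowedOriginPatterns(Set.of("http://domain\\.com"));

// The chat application requires cookies, so credentials
// must be allowed in cross-origin requests.
crossOriginHandler.setAllowCredentials(true);

// Deliver preflight requests to the child Handler, so the
// web application can fine-tune the CORS response headers.
crossOriginHandler.setDeliverPreflightRequests(true);
```

With allowCredentials=true, restricting the allowed origin patterns as above is essential to avoid CSRF attacks.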

For more CrossOriginHandler configuration options, refer to the CrossOriginHandler javadocs.

StateTrackingHandler

StateTrackingHandler is a troubleshooting Handler that tracks whether Handler/Request/Response asynchronous APIs are properly used by applications.

Asynchronous APIs are notoriously more difficult to troubleshoot than blocking APIs, and may be subject to restrictions that applications need to respect (a typical case is that they cannot perform blocking operations).

For example, a Handler implementation whose handle(...) method returns true must eventually complete the callback passed to handle(...) (for more details on the Handler APIs, see this section).

When an application forgets to complete the callback passed to handle(...), the HTTP response may not be sent to the client, but it will be difficult to troubleshoot why the client is not receiving responses.

StateTrackingHandler helps with this troubleshooting because it tracks the callback passed to handle(...) and emits an event if the callback is not completed within a configurable timeout:

StateTrackingHandler stateTrackingHandler = new StateTrackingHandler();

// Emit an event if the Handler callback is not completed within 5 seconds.
stateTrackingHandler.setHandlerCallbackTimeout(5000);

By default, events are logged at warning level, but it is possible to specify a listener to be notified of the events tracked by StateTrackingHandler:

StateTrackingHandler stateTrackingHandler = new StateTrackingHandler(new StateTrackingHandler.Listener()
{
    @Override
    public void onHandlerCallbackNotCompleted(Request request, StateTrackingHandler.ThreadInfo handlerThreadInfo)
    {
        // Your own event handling logic.
    }
});

// Emit an event if the Handler callback is not completed within 5 seconds.
stateTrackingHandler.setHandlerCallbackTimeout(5000);

Other events tracked by StateTrackingHandler are demand callbacks that block, writes that do not complete their callbacks, or write callbacks that block. The complete list of events is specified by the StateTrackingHandler.Listener class (javadocs).
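
Like the other wrapping Handlers shown in this section, StateTrackingHandler must be linked into the Handler tree around the application Handlers you want to troubleshoot. A minimal wiring sketch, following the pattern of the previous examples:

```java
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
server.addConnector(connector);

// Link the StateTrackingHandler as the outermost Handler after Server.
StateTrackingHandler stateTrackingHandler = new StateTrackingHandler();
// Emit an event if the Handler callback is not completed within 5 seconds.
stateTrackingHandler.setHandlerCallbackTimeout(5000);
server.setHandler(stateTrackingHandler);

// Wrap the application Handlers to troubleshoot.
stateTrackingHandler.setHandler(new ContextHandler("/app"));

server.start();
```

Once the application is verified to use the asynchronous APIs correctly, the StateTrackingHandler can be removed from the Handler tree to avoid the tracking overhead.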

DefaultHandler

DefaultHandler is a terminal Handler that always returns true from its handle(...) method and performs the following:

  • Serves the favicon.ico Jetty icon when it is requested

  • Sends an HTTP 404 response for any other request

  • The HTTP 404 response content nicely shows an HTML table with all the contexts deployed on the Server instance

DefaultHandler is set directly on the server, for example:

Server server = new Server();
server.setDefaultHandler(new DefaultHandler(false, true));

Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Add a ContextHandlerCollection to manage contexts.
ContextHandlerCollection contexts = new ContextHandlerCollection();

// Link the contexts to the Server.
server.setHandler(contexts);

server.start();

The Handler tree structure looks like the following:

Server
  ├── ContextHandlerCollection
  │   ├── ContextHandler 1
  │   :── ...
  │   └── ContextHandler N
  └── DefaultHandler

In the example above, ContextHandlerCollection will try to match a request to one of the contexts; if the match fails, Server will call the DefaultHandler that will return an HTTP 404 with an HTML page showing the existing contexts deployed on the Server.

DefaultHandler just sends a nicer HTTP 404 response for requests that do not match any context. Jetty will send an HTTP 404 response anyway if DefaultHandler has not been set.

Servlet API Handlers

ServletContextHandler

Handlers are easy to write, but often web applications have already been written using the Servlet APIs, using Servlets and Filters.

ServletContextHandler is a ContextHandler that provides support for the Servlet APIs and implements the behaviors required by the Servlet specification.

However, differently from WebAppContext, it does not require the web application to be packaged as a *.war, nor does it require a web.xml for configuration.

With ServletContextHandler you can just put all your Servlet components in a *.jar and configure each component using the ServletContextHandler APIs, in a way equivalent to what you would write in a web.xml.

The Maven artifact coordinates depend on the version of Jakarta EE you want to use, and they are:

<dependency>
  <groupId>org.eclipse.jetty.ee{8,9,10}</groupId>
  <artifactId>jetty-ee{8,9,10}-servlet</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

For example, for Jakarta EE 10 the coordinates are: org.eclipse.jetty.ee10:jetty-ee10-servlet:12.0.10-SNAPSHOT.

Below you can find an example of how to set up a Jakarta EE 10 ServletContextHandler:

public class ShopCartServlet extends HttpServlet
{
    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response)
    {
        // Implement the shop cart functionality.
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Add the CrossOriginHandler to protect from CSRF attacks.
CrossOriginHandler crossOriginHandler = new CrossOriginHandler();
crossOriginHandler.setAllowedOriginPatterns(Set.of("http://domain.com"));
crossOriginHandler.setAllowCredentials(true);
server.setHandler(crossOriginHandler);

// Create a ServletContextHandler with contextPath.
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/shop");
// Link the context to the server.
crossOriginHandler.setHandler(context);

// Add the Servlet implementing the cart functionality to the context.
ServletHolder servletHolder = context.addServlet(ShopCartServlet.class, "/cart/*");
// Configure the Servlet with init-parameters.
servletHolder.setInitParameter("maxItems", "128");

server.start();

The Handler and Servlet components tree structure looks like the following:

Server
└── CrossOriginHandler
    └── ServletContextHandler /shop
        └── ShopCartServlet /cart/*

Note that ShopCartServlet is a Servlet component managed by the ServletContextHandler, not a Handler in the Handler tree.

Note also how adding a Servlet or a Filter returns a holder object that can be used to specify additional configuration for that particular Servlet or Filter, for example initialization parameters (equivalent to <init-param> in web.xml).

When a request arrives to ServletContextHandler, the request URI will be matched against the Filter and Servlet mappings, and only those that match will process the request, as dictated by the Servlet specification.
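
Servlet Filters are added to ServletContextHandler in the same way as Servlets, and also return a holder object for further configuration. A sketch, where AuditFilter is a hypothetical Filter implementation:

```java
// A hypothetical Filter that audits requests.
public class AuditFilter implements Filter
{
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
    {
        // Audit the request, then forward to the next component.
        chain.doFilter(request, response);
    }
}

// Create a ServletContextHandler with contextPath.
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/shop");

// Add the Filter, mapped to all requests dispatched normally.
FilterHolder filterHolder = context.addFilter(AuditFilter.class, "/*", EnumSet.of(DispatcherType.REQUEST));
// Configure the Filter with init-parameters.
filterHolder.setInitParameter("auditLevel", "basic");
```

The Filter mapping "/*" matches every request URI in the context, so AuditFilter is invoked before any matching Servlet.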

ServletContextHandler is a terminal Handler, that is, it always returns true from its handle(...) method when invoked. Server applications must be careful when creating the Handler tree to put ServletContextHandlers as the last Handlers in any Handler.Collection, or as children of a ContextHandlerCollection.

WebAppContext

WebAppContext is a ServletContextHandler that autoconfigures itself by reading a web.xml Servlet configuration file.

The Maven artifact coordinates depend on the version of Jakarta EE you want to use, and they are:

<dependency>
  <groupId>org.eclipse.jetty.ee{8,9,10}</groupId>
  <artifactId>jetty-ee{8,9,10}-webapp</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

Server applications can specify a *.war file or a directory with the structure of a *.war file to WebAppContext to deploy a standard Servlet web application packaged as a war (as defined by the Servlet specification).

Where server applications using ServletContextHandler must manually invoke methods to add Servlets and Filters as described in this section, WebAppContext reads WEB-INF/web.xml to add Servlets and Filters, and also enforces a number of restrictions defined by the Servlet specification, in particular related to class loading.

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create a WebAppContext.
WebAppContext context = new WebAppContext();
// Link the context to the server.
server.setHandler(context);

// Configure the path of the packaged web application (file or directory).
context.setWar("/path/to/webapp.war");
// Configure the contextPath.
context.setContextPath("/app");

server.start();
WebAppContext Class Loading

The Servlet specification requires that a web application class loader must load the web application classes from WEB-INF/classes and WEB-INF/lib. The web application class loader is special because it behaves differently from typical class loaders: where typical class loaders first delegate to their parent class loader and then try to find the class locally, the web application class loader first tries to find the class locally and then delegates to the parent class loader. The typical class loading model, parent-first, is inverted for web application class loaders, as they use a child-first model.

Furthermore, the Servlet specification requires that web applications cannot load or otherwise access the Servlet container implementation classes, also called server classes. Web applications receive the HTTP request object as an instance of the jakarta.servlet.http.HttpServletRequest interface, and cannot downcast it to the Jetty specific implementation of that interface to access Jetty specific features — this ensures maximum web application portability across Servlet container implementations.

Lastly, the Servlet specification requires that other classes, also called system classes, such as jakarta.servlet.http.HttpServletRequest or JDK classes such as java.lang.String or java.sql.Connection cannot be modified by web applications by putting, for example, modified versions of those classes in WEB-INF/classes so that they are loaded first by the web application class loader (instead of the class-path class loader where they are normally loaded from).

WebAppContext implements this class loader logic using a single class loader, WebAppClassLoader, with filtering capabilities: when it loads a class, it checks whether the class is a system class or a server class and acts according to the Servlet specification.

When WebAppClassLoader is asked to load a class, it first tries to find the class locally (since it must use the inverted child-first model); if the class is found, and it is not a system class, the class is loaded; otherwise the class is not found locally. If the class is not found locally, the parent class loader is asked to load the class; the parent class loader uses the standard parent-first model, so it delegates the class loading to its parent, and so on. If the class is found, and it is not a server class, the class is loaded; otherwise the class is not found and a ClassNotFoundException is thrown.

Unfortunately, the Servlet specification does not define exactly which classes are system classes and which classes are server classes. However, Jetty picks good defaults and allows server applications to customize system classes and server classes in WebAppContext.
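
For example, server applications may hide additional server-side implementation classes from web applications, or protect additional shared API classes as system classes. A sketch of such customization, where the com.acme packages are hypothetical and the matcher accessor names may differ across Jetty versions:

```java
WebAppContext context = new WebAppContext();
context.setWar("/path/to/webapp.war");
context.setContextPath("/app");

// Hide the classes of a hypothetical com.acme.server
// package from the web application (server classes).
context.getServerClassMatcher().add("com.acme.server.");

// Expose a hypothetical com.acme.api package to the web
// application, but always load it from the parent class
// loader so the web application cannot replace it (system classes).
context.getSystemClassMatcher().add("com.acme.api.");
```

The trailing dot in the patterns above matches the whole package, following the convention used by Jetty class matchers.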

DefaultServlet — Static Content for Servlets

If you have a Servlet web application, you may want to use a DefaultServlet instead of ResourceHandler. The features are similar, but DefaultServlet is more commonly used to serve static files for Servlet web applications.

The Maven artifact coordinates depend on the version of Jakarta EE you want to use, and they are:

<dependency>
  <groupId>org.eclipse.jetty.ee{8,9,10}</groupId>
  <artifactId>jetty-ee{8,9,10}-servlet</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

Below you can find an example of how to set up DefaultServlet:

// Create a ServletContextHandler with contextPath.
ServletContextHandler context = new ServletContextHandler();
context.setContextPath("/app");

// Add the DefaultServlet to serve static content.
ServletHolder servletHolder = context.addServlet(DefaultServlet.class, "/");
// Configure the DefaultServlet with init-parameters.
servletHolder.setInitParameter("resourceBase", "/path/to/static/resources/");
servletHolder.setAsyncSupported(true);

Implementing Handler

The Handler API consists fundamentally of just one method:

public boolean handle(Request request, Response response, Callback callback) throws Exception

The code that implements the handle(...) method must respect the following contract:

  • It may inspect Request immutable information such as URI and headers, typically to decide whether to return true or false (see below).

  • Returning false means that the implementation will not handle the request, and it must not complete the callback parameter, nor read the request content, nor write response content.

  • Returning true means that the implementation will handle the request, and it must eventually complete the callback parameter. The completion of the callback parameter may happen synchronously within the invocation to handle(...), or at a later time, asynchronously, possibly from another thread. If the response has not been explicitly written when the callback has been completed, the Jetty implementation will write a 200 response with no content if the callback has been succeeded, or an error response if the callback has been failed.

Violating the contract above may result in undefined or unexpected behavior, and possibly leak resources.

For example, returning true from handle(...) but not completing the callback parameter may result in the request or the response never being completed, likely causing the client to time out.

Similarly, returning false from handle(...) but then either writing the response or completing the callback parameter will likely result in a garbled response being sent to the client, as the implementation will either invoke another Handler (that may write a response) or write a default response.

Applications may wrap the request, the response, or the callback and forward the wrapped request, response and callback to a child Handler.

Hello World Handler

A simple "Hello World" Handler is the following:

class HelloWorldHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        response.setStatus(200);
        response.getHeaders().put(HttpHeader.CONTENT_TYPE, "text/html; charset=UTF-8");

        // Write a Hello World response.
        Content.Sink.write(response, true, """
            <!DOCTYPE html>
            <html>
            <head>
              <title>Jetty Hello World Handler</title>
            </head>
            <body>
              <p>Hello World</p>
            </body>
            </html>
            """, callback);
        return true;
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Set the Hello World Handler.
server.setHandler(new HelloWorldHandler());

server.start();

Such a simple Handler can access the request and response main features, such as reading request headers and content, or writing response headers and content.

Note how HelloWorldHandler extends from Handler.Abstract.NonBlocking. This is a declaration that HelloWorldHandler does not use blocking APIs (of any kind) to perform its logic, allowing Jetty to apply optimizations (see here) that are not applied to Handlers that declare themselves as blocking.

If your Handler implementation uses blocking APIs (of any kind), extend from Handler.Abstract.

Note how the callback parameter is passed to Content.Sink.write(...) — a utility method that eventually calls Response.write(...), so that when the write completes, also the callback parameter is completed. Note also that because the callback parameter will eventually be completed, the value returned from handle(...) is true.

In this way the Handler contract is fully respected: when true is returned, the callback will eventually be completed.

Filtering Handler

A filtering Handler is a Handler that performs some modification to the request or response, and then either forwards the request to another Handler or produces an error response:

class FilterHandler extends Handler.Wrapper
{
    public FilterHandler(Handler handler)
    {
        super(handler);
    }

    @Override
    public boolean handle(Request request, Response response, Callback callback) throws Exception
    {
        String path = Request.getPathInContext(request);
        if (path.startsWith("/old_path/"))
        {
            // Rewrite old paths to new paths.
            HttpURI uri = request.getHttpURI();
            String newPath = "/new_path/" + path.substring("/old_path/".length());
            HttpURI newURI = HttpURI.build(uri).path(newPath).asImmutable();

            // Modify the request object by wrapping the HttpURI.
            Request newRequest = Request.serveAs(request, newURI);

            // Forward to the next Handler using the wrapped Request.
            return super.handle(newRequest, response, callback);
        }
        else
        {
            // Forward to the next Handler as-is.
            return super.handle(request, response, callback);
        }
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Link the Handlers in a chain.
server.setHandler(new FilterHandler(new HelloWorldHandler()));

server.start();

Note how a filtering Handler extends from Handler.Wrapper and as such needs another Handler to forward the request processing to, and how the two Handlers need to be linked together to work properly.

Using the Request

The Request object can be accessed by web applications to inspect the HTTP request URI, the HTTP request headers and read the HTTP request content.

Since the Request object may be wrapped by filtering Handlers, the design decision for the Request APIs was to keep the number of virtual methods at a minimum. This minimizes the effort necessary to write Request wrapper implementations and provides a single source for the data carried by Request objects.

To use the Request APIs, you should look up the relevant methods in the following order:

  1. Request virtual methods. For example, Request.getMethod() returns the HTTP method used in the request, such as GET, POST, etc.

  2. Request static methods. These are utility methods that provide more convenient access to request features. For example, the HTTP URI query is a string and can be directly accessed via the non-static method request.getHttpURI().getQuery(); however, the query string typically holds key/value parameters and applications should not have the burden to parse the query string, so the static Request.extractQueryParameters(Request) method is provided.

  3. Super class static methods. Since Request is-a Content.Source, look also for static methods in Content.Source that take a Content.Source as a parameter, so that you can pass the Request object as a parameter.
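
For example, a sketch of a Handler that follows the lookup order above, using a virtual method for the HTTP method and a static utility method for the query parameters (the name query parameter is hypothetical):

```java
class QueryParamHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Virtual method: the HTTP method of the request.
        String method = request.getMethod();

        // Static utility method: the query parameters, parsed into Fields.
        Fields queryParameters = Request.extractQueryParameters(request);
        String name = queryParameters.getValue("name");

        // Write a response that uses the request information.
        response.setStatus(200);
        Content.Sink.write(response, true, "method=" + method + " name=" + name, callback);
        return true;
    }
}
```

Using the static utility methods avoids the burden of parsing the query string manually.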

Below you can find a list of the most common Request features and how to access them. Refer to the Request javadocs for the complete list.

Request URI

The Request URI is accessed via Request.getHttpURI() and the HttpURI APIs.

Request HTTP headers

The Request HTTP headers are accessed via Request.getHeaders() and the HttpFields APIs.

Request cookies

The Request cookies are accessed via static Request.getCookies(Request) and the HttpCookie APIs.

Request parameters

The Request parameters are accessed via the Fields APIs: use static Request.extractQueryParameters(Request) for the parameters present in the HTTP URI query, and static Request.getParametersAsync(Request) for both query parameters and request content parameters received via form upload with Content-Type: application/x-www-form-urlencoded. If you are only interested in the request content parameters received via form upload, you can use static FormFields.from(Request), see also this section.
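For example, a minimal sketch that reads a single query parameter such as ?name=Alice (the parameter name is illustrative):

```java
class QueryParametersHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Parse the HTTP URI query string into Fields.
        Fields queryParameters = Request.extractQueryParameters(request);

        // Look up a single parameter, for example from "?name=Alice".
        Fields.Field nameField = queryParameters.get("name");
        if (nameField == null)
        {
            // Send an error response, completing the callback.
            Response.writeError(request, response, callback, HttpStatus.BAD_REQUEST_400, "missing name parameter");
        }
        else
        {
            response.setStatus(HttpStatus.OK_200);
            // Write the response content, completing the callback.
            Content.Sink.write(response, true, "hello " + nameField.getValue(), callback);
        }
        return true;
    }
}
```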

Request HTTP session

The Request HTTP session is accessed via Request.getSession(boolean) and the Session APIs. For more information about how to set up support for HTTP sessions, see this section.

Reading the Request Content

Since Request is-a Content.Source, the section about reading from Content.Source applies to Request as well. The static Content.Source utility methods will allow you to read the request content as a string, or as an InputStream, for example.
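For example, a sketch that reads the whole request content into a ByteBuffer via the static Content.Source.asByteBuffer(Content.Source, Promise<ByteBuffer>) utility, assuming the content is UTF-8 text small enough to buffer in memory:

```java
class EchoHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Read the whole request content into a ByteBuffer.
        Promise.Completable.<ByteBuffer>with(p -> Content.Source.asByteBuffer(request, p))
            .whenComplete((byteBuffer, failure) ->
            {
                if (failure == null)
                {
                    // Decode the content, assuming it is UTF-8 text.
                    String content = UTF_8.decode(byteBuffer).toString();
                    // Echo the content back, completing the callback.
                    response.setStatus(HttpStatus.OK_200);
                    Content.Sink.write(response, true, content, callback);
                }
                else
                {
                    // Reading the request content failed.
                    Response.writeError(request, response, callback, failure);
                }
            });

        // The callback will eventually be completed, return true.
        return true;
    }
}
```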

There are two special cases that are specific to HTTP for the request content: form parameters (sent when submitting an HTML form) and multipart form data (sent when submitting an HTML form with file upload).

For form parameters, typical of HTML form submissions, you can use the FormFields APIs as shown here:

class FormHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        String contentType = request.getHeaders().get(HttpHeader.CONTENT_TYPE);
        if (MimeTypes.Type.FORM_ENCODED.is(contentType))
        {
            // Convert the request content into Fields.
            CompletableFuture<Fields> completableFields = FormFields.from(request); (1)

            // When all the request content has arrived, process the fields.
            completableFields.whenComplete((fields, failure) -> (2)
            {
                if (failure == null)
                {
                    processFields(fields);
                    // Send a simple 200 response, completing the callback.
                    response.setStatus(HttpStatus.OK_200);
                    callback.succeeded();
                }
                else
                {
                    // Reading the request content failed.
                    // Send an error response, completing the callback.
                    Response.writeError(request, response, callback, failure);
                }
            });

            // The callback will be eventually completed in all cases, return true.
            return true;
        }
        else
        {
            // Send an error response, completing the callback, and returning true.
            Response.writeError(request, response, callback, HttpStatus.BAD_REQUEST_400, "invalid request");
            return true;
        }
    }
}
1 If the Content-Type is application/x-www-form-urlencoded, read the request content with FormFields.
2 When all the request content has arrived, process the Fields.

The Handler returns true, so the callback parameter must be completed.

It is therefore mandatory to use CompletableFuture APIs that are invoked even when reading the request content fails, such as whenComplete(BiConsumer), handle(BiFunction), exceptionally(Function), etc.

Failing to do so may result in the Handler callback parameter never being completed, causing the request processing to hang forever.

For multipart form data, typical of HTML form file uploads, you can use the MultiPartFormData.Parser APIs as shown here:

class MultiPartFormDataHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        String contentType = request.getHeaders().get(HttpHeader.CONTENT_TYPE);
        if (MimeTypes.Type.MULTIPART_FORM_DATA.is(contentType))
        {
            // Extract the multipart boundary.
            String boundary = MultiPart.extractBoundary(contentType);

            // Create and configure the multipart parser.
            MultiPartFormData.Parser parser = new MultiPartFormData.Parser(boundary);
            // By default, uploaded files are stored in this directory,
            // to avoid reading the file content (which can be large) into memory.
            parser.setFilesDirectory(Path.of("/tmp"));
            // Convert the request content into parts.
            CompletableFuture<MultiPartFormData.Parts> completableParts = parser.parse(request); (1)

            // When all the request content has arrived, process the parts.
            completableParts.whenComplete((parts, failure) -> (2)
            {
                if (failure == null)
                {
                    // Use the Parts API to process the parts.
                    processParts(parts);
                    // Send a simple 200 response, completing the callback.
                    response.setStatus(HttpStatus.OK_200);
                    callback.succeeded();
                }
                else
                {
                    // Reading the request content failed.
                    // Send an error response, completing the callback.
                    Response.writeError(request, response, callback, failure);
                }
            });

            // The callback will be eventually completed in all cases, return true.
            return true;
        }
        else
        {
            // Send an error response, completing the callback, and returning true.
            Response.writeError(request, response, callback, HttpStatus.BAD_REQUEST_400, "invalid request");
            return true;
        }
    }
}
1 If the Content-Type is multipart/form-data, read the request content with MultiPartFormData.Parser.
2 When all the request content has arrived, process the MultiPartFormData.Parts.

The Handler returns true, so the callback parameter must be completed.

It is therefore mandatory to use CompletableFuture APIs that are invoked even when reading the request content fails, such as whenComplete(BiConsumer), handle(BiFunction), exceptionally(Function), etc.

Failing to do so may result in the Handler callback parameter never being completed, causing the request processing to hang forever.

Request Listeners

Applications may add listeners to the Request object to be notified of particular events happening during the request/response processing.

Request.addIdleTimeoutListener(Predicate<TimeoutException>) allows you to add an idle timeout listener, which is invoked when an idle timeout period elapses during the request/response processing, if the idle timeout event is not notified otherwise.

When an idle timeout event happens, it is delivered to the application as follows:

  • If there is pending demand (via Request.demand(Runnable)), then the demand Runnable is invoked and the application may see the idle timeout failure by reading from the Request, obtaining a transient failure chunk.

  • If there is a pending response write (via Response.write(boolean, ByteBuffer, Callback)), the response write Callback is failed.

  • If neither of the above, the idle timeout listeners are invoked, in the same order they have been added. The first idle timeout listener that returns true stops the Jetty implementation from invoking the idle timeout listeners that follow.

The idle timeout listeners are therefore invoked only when the application is really idle, neither trying to read nor trying to write.

An idle timeout listener returns true to indicate that the idle timeout should be treated as a fatal failure of the request/response processing, or false to indicate that no further handling of the idle timeout is needed from the Jetty implementation.

When the idle timeout listeners return false, any subsequent idle timeout is handled as above. If the application still does not initiate any read or write, the idle timeout listeners are invoked again after another idle timeout period.
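A minimal sketch of an idle timeout listener (the handler class name and the processing are illustrative):

```java
class IdleTimeoutListenerHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        request.addIdleTimeoutListener(timeout ->
        {
            // Returning true treats this idle timeout as a fatal failure,
            // and the failure listeners, if any, will be invoked.
            // Returning false ignores this idle timeout: if the application
            // remains idle, this listener is invoked again after another
            // idle timeout period.
            return false;
        });

        // Process the request, eventually completing the callback.
        response.setStatus(HttpStatus.OK_200);
        callback.succeeded();
        return true;
    }
}
```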

Request.addFailureListener(Consumer<Throwable>) allows you to add a failure listener, which is invoked when a failure happens during the request/response processing.

When a failure happens during the request/response processing, then:

  • The pending demand for request content, if any, is invoked; that is, the Runnable passed to Request.demand(Runnable) is invoked.

  • The callback of an outstanding call to Response.write(boolean, ByteBuffer, Callback), if any, is failed.

  • The failure listeners are invoked, in the same order they have been added.

Failure listeners are also invoked in case of idle timeouts, in the following cases:

  • At least one idle timeout listener returned true to indicate to the Jetty implementation to treat the idle timeout as a fatal failure.

  • There are no idle timeout listeners.

Failures reported to a failure listener are always fatal failures; see also this section about fatal versus transient failures. This means that it is not possible to read or write from a failure listener: the read returns a fatal failure chunk, and the write will immediately fail the write callback.

Applications are always required to complete the Handler callback, as described here. In case of asynchronous failures, failure listeners are a good place to complete the Handler callback, typically by failing it.

Request.addCompletionListener(Consumer<Throwable>) allows you to add a completion listener, which is invoked at the very end of the request/response processing. This is equivalent to adding an HttpStream wrapper and overriding both HttpStream.succeeded() and HttpStream.failed(Throwable).

Completion listeners are typically (but not exclusively) used to recycle or dispose of resources used during the request/response processing, or to obtain a precise timing of when the request/response processing finishes, to be paired with Request.getBeginNanoTime().

Note that while failure listeners are invoked as soon as the failure happens, completion listeners are invoked only at the very end of the request/response processing: after the Callback passed to Handler.handle(Request, Response, Callback) has been completed, all container dispatched threads have returned, and all the response writes have been completed.

When there are multiple completion listeners, they are invoked in the reverse order in which they were added.
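A sketch that uses both failure and completion listeners; the handler class name is illustrative, and the duration computation is one possible use of Request.getBeginNanoTime():

```java
class ListenersHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Invoked as soon as a failure happens.
        request.addFailureListener(failure ->
        {
            // For asynchronous processing, this is a good
            // place to complete the Handler callback by failing it.
            callback.failed(failure);
        });

        // Invoked at the very end of the request/response processing.
        request.addCompletionListener(failure ->
        {
            // Dispose resources here, or compute the total
            // request/response processing time.
            long durationNanos = System.nanoTime() - request.getBeginNanoTime();
        });

        // Write the response, eventually completing the callback.
        response.setStatus(HttpStatus.OK_200);
        response.write(true, UTF_8.encode("done"), callback);
        return true;
    }
}
```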

Using the Response

The Response object can be accessed by web applications to set the HTTP response status code, the HTTP response headers and write the HTTP response content.

The design of the Response APIs is similar to that of the Request APIs, described in this section.

To use the Response APIs, you should look up the relevant methods in the following order:

  1. Response virtual methods. For example, Response.setStatus(int) to set the HTTP response status code.

  2. Response static methods. These are utility methods that provide more convenient access to response features. For example, adding an HTTP cookie could be done by adding a Set-Cookie response header, but it would be extremely error-prone. The utility method static Response.addCookie(Response, HttpCookie) is provided instead.

  3. Super class static methods. Since Response is-a Content.Sink, look also for static methods in Content.Sink that take a Content.Sink as a parameter, so that you can pass the Response object as a parameter.

Below you can find a list of the most common Response features and how to access them. Refer to the Response javadocs for the complete list.

Response status code

The Response HTTP status code is accessed via Response.getStatus() and Response.setStatus(int).

Response HTTP headers

The Response HTTP headers are accessed via Response.getHeaders() and the HttpFields.Mutable APIs. The response headers are mutable until the response is committed, as defined in this section.

Response cookies

The Response cookies are accessed via static Response.addCookie(Response, HttpCookie), static Response.replaceCookie(Response, HttpCookie) and the HttpCookie APIs. Since cookies translate to HTTP headers, they can be added/replaced until the response is committed, as defined in this section.
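For example, a sketch that adds a cookie before the response is committed (the cookie name and value are illustrative):

```java
class CookieHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Create the cookie; the name and value are illustrative.
        HttpCookie cookie = HttpCookie.from("sessionId", "abc123");

        // Adding the cookie adds a Set-Cookie response header,
        // so it must happen before the response is committed.
        Response.addCookie(response, cookie);

        // Send a simple 200 response, completing the callback.
        response.setStatus(HttpStatus.OK_200);
        callback.succeeded();
        return true;
    }
}
```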

Writing the Response Content

Since Response is-a Content.Sink, the section about writing to Content.Sink applies to Response as well. The static Content.Sink utility methods will allow you to write the response content as a string, or as an OutputStream, for example.

The first call to Response.write(boolean, ByteBuffer, Callback) commits the response.

Committing the response means that the response status code and response headers are sent to the other peer, and therefore cannot be modified anymore. Trying to modify them may result in an IllegalStateException being thrown, as it is an application mistake to commit the response and then try to modify the headers.

You can explicitly commit the response by performing an empty, non-last write:

class FlushingHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Set the response status code.
        response.setStatus(HttpStatus.OK_200);
        // Set the response headers.
        response.getHeaders().put(HttpHeader.CONTENT_TYPE, "text/plain");

        // Commit the response with a "flush" write.
        Callback.Completable.with(flush -> response.write(false, null, flush))
            // When the flush is finished, send the content and complete the callback.
            .whenComplete((ignored, failure) ->
            {
                if (failure == null)
                    response.write(true, UTF_8.encode("HELLO"), callback);
                else
                    callback.failed(failure);
            });

        // Return true because the callback will eventually be completed.
        return true;
    }
}

The Handler returns true, so the callback parameter must be completed.

It is therefore mandatory to use CompletableFuture APIs that are invoked even when writing the response content fails, such as whenComplete(BiConsumer), handle(BiFunction), exceptionally(Function), etc.

Failing to do so may result in the Handler callback parameter never being completed, causing the request processing to hang forever.

Jetty can perform important optimizations for the HTTP/1.1 protocol if the response content length is known before the response is committed:

class ContentLengthHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Set the response status code.
        response.setStatus(HttpStatus.OK_200);

        String content = """
            {
              "result": 0,
              "advice": {
                "message": "Jetty Rocks!"
              }
            }
            """;
        // Must count the bytes, not the characters!
        byte[] bytes = content.getBytes(UTF_8);
        long contentLength = bytes.length;

        // Set the response headers before the response is committed.
        HttpFields.Mutable responseHeaders = response.getHeaders();
        // Set the content type.
        responseHeaders.put(HttpHeader.CONTENT_TYPE, "application/json; charset=UTF-8");
        // Set the response content length.
        responseHeaders.put(HttpHeader.CONTENT_LENGTH, contentLength);

        // Commit the response.
        response.write(true, ByteBuffer.wrap(bytes), callback);

        // Return true because the callback will eventually be completed.
        return true;
    }
}
Setting the response content length is an optimization; Jetty will work well even without it. If you set the response content length, however, remember that it must specify the number of bytes, not the number of characters.
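The difference between characters and bytes is easy to see with non-ASCII content, as in this plain Java sketch:

```java
import static java.nio.charset.StandardCharsets.UTF_8;

public class ContentLengthExample
{
    public static void main(String[] args)
    {
        String content = "caffè";
        // 5 characters, but "è" encodes to 2 bytes in UTF-8.
        int characters = content.length();
        int bytes = content.getBytes(UTF_8).length;
        System.out.println(characters); // 5
        System.out.println(bytes);      // 6
    }
}
```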

Sending Interim Responses

The HTTP protocol (any version) allows applications to write interim responses.

An interim response has a status code in the 1xx range (but not 101), and an application may write zero or more interim responses before the final response.

This is an example of writing an interim 100 Continue response:

class Continue100Handler extends Handler.Wrapper
{
    public Continue100Handler(Handler handler)
    {
        super(handler);
    }

    @Override
    public boolean handle(Request request, Response response, Callback callback) throws Exception
    {
        HttpFields requestHeaders = request.getHeaders();
        if (requestHeaders.contains(HttpHeader.EXPECT, HttpHeaderValue.CONTINUE.asString()))
        {
            // Analyze the request and decide whether to receive the content.
            long contentLength = request.getLength();
            if (contentLength > 0 && contentLength < 1024)
            {
                // Small request content, ask to send it by
                // sending a 100 Continue interim response.
                CompletableFuture<Void> processing = response.writeInterim(HttpStatus.CONTINUE_100, HttpFields.EMPTY) (1)
                    // Then read the request content into a ByteBuffer.
                    .thenCompose(ignored -> Promise.Completable.<ByteBuffer>with(p -> Content.Source.asByteBuffer(request, p)))
                    // Then store the ByteBuffer somewhere.
                    .thenCompose(byteBuffer -> store(byteBuffer));

                // At the end of the processing, complete
                // the callback with the CompletableFuture,
                // a simple 200 response in case of success,
                // or a 500 response in case of failure.
                callback.completeWith(processing); (2)
                return true;
            }
            else
            {
                // The request content is too large, send an error.
                Response.writeError(request, response, callback, HttpStatus.PAYLOAD_TOO_LARGE_413);
                return true;
            }
        }
        else
        {
            return super.handle(request, response, callback);
        }
    }
}
1 Using Response.writeInterim(...) to send the interim response.
2 The completion of the callback must take into account both success and failure.

Note how writing an interim response is an asynchronous operation. As such, you must perform subsequent operations using the CompletableFuture APIs, and remember to complete the Handler callback parameter in case of both success and failure.

This is an example of writing an interim 103 Early Hints response:

class EarlyHints103Handler extends Handler.Wrapper
{
    public EarlyHints103Handler(Handler handler)
    {
        super(handler);
    }

    @Override
    public boolean handle(Request request, Response response, Callback callback) throws Exception
    {
        String pathInContext = Request.getPathInContext(request);

        // Simple logic that assumes that every HTML
        // file has associated the same CSS stylesheet.
        if (pathInContext.endsWith(".html"))
        {
            // Tell the client that a Link is coming
            // sending a 103 Early Hints interim response.
            HttpFields.Mutable interimHeaders = HttpFields.build()
                .put(HttpHeader.LINK, "</style.css>; rel=preload; as=style");

            response.writeInterim(HttpStatus.EARLY_HINTS_103, interimHeaders) (1)
                .whenComplete((ignored, failure) -> (2)
                {
                    if (failure == null)
                    {
                        try
                        {
                            // Delegate the handling to the child Handler.
                            boolean handled = super.handle(request, response, callback);
                            if (!handled)
                            {
                                // The child Handler did not produce a final response, do it here.
                                Response.writeError(request, response, callback, HttpStatus.NOT_FOUND_404);
                            }
                        }
                        catch (Throwable x)
                        {
                            callback.failed(x);
                        }
                    }
                    else
                    {
                        callback.failed(failure);
                    }
                });

            // This Handler sent an interim response, so this Handler
            // (or its descendants) must produce a final response, so return true.
            return true;
        }
        else
        {
            // Not a request for an HTML page, delegate
            // the handling to the child Handler.
            return super.handle(request, response, callback);
        }
    }
}
1 Using Response.writeInterim(...) to send the interim response.
2 The completion of the callback must take into account both success and failure.

An interim response may or may not have its own HTTP headers (this depends on the interim response status code), and they are typically different from the final response HTTP headers.

HTTP Session Support

Some web applications (but not all of them) have the concept of a user, that is, a way to identify a specific client that is interacting with the web application.

The HTTP session is a feature offered by servers that allows web applications to maintain a temporary, per-user, storage for user-specific data.

The storage can be accessed by the web application across multiple request/response interactions with the client. This makes the web application stateful, because the result of a computation performed for a previous request may be stored in the HTTP session and used in subsequent requests, without the need to perform the computation again.

Since not all web applications need support for the HTTP session, Jetty offers this feature optionally.

The Maven coordinates for the Jetty HTTP session support are:

<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-session</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

The HTTP session support is provided by org.eclipse.jetty.session.SessionHandler, which must be set up in the Handler tree between a ContextHandler and your Handler implementation:

class MyAppHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Your web application implemented here.

        // You can access the HTTP session.
        Session session = request.getSession(false);

        return true;
    }
}

Server server = new Server();
Connector connector = new ServerConnector(server);
server.addConnector(connector);

// Create a ContextHandler with contextPath.
ContextHandler contextHandler = new ContextHandler("/myApp");
server.setHandler(contextHandler);

// Create and link the SessionHandler.
SessionHandler sessionHandler = new SessionHandler();
contextHandler.setHandler(sessionHandler);

// Link your web application Handler.
sessionHandler.setHandler(new MyAppHandler());

server.start();

The corresponding Handler tree structure looks like the following:

Server
└── ContextHandler /myApp
    └── SessionHandler
        └── MyAppHandler

With the Handlers set up in this way, you can access the HTTP session from your MyAppHandler using Request.getSession(boolean), and then use the Session APIs.
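For example, a sketch of MyAppHandler that stores a per-user counter in the session (the attribute name is illustrative):

```java
class MyAppHandler extends Handler.Abstract.NonBlocking
{
    @Override
    public boolean handle(Request request, Response response, Callback callback)
    {
        // Create the session if it does not exist yet.
        Session session = request.getSession(true);

        // Read a per-user attribute; the attribute name is illustrative.
        Integer visits = (Integer)session.getAttribute("visits");
        visits = visits == null ? 1 : visits + 1;

        // Store the value for subsequent requests from the same user.
        session.setAttribute("visits", visits);

        // Write the response, completing the callback.
        response.setStatus(HttpStatus.OK_200);
        Content.Sink.write(response, true, "visits: " + visits, callback);
        return true;
    }
}
```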

The support provided by Jetty for HTTP sessions is advanced and completely pluggable, providing features such as first-level and second-level caching, eviction, etc.

You can configure the HTTP session support from a very simple local in-memory configuration, to a replicated (across nodes in a cluster), persistent (for example over file system, database or memcached) configuration for the most advanced use cases. The advanced configuration of Jetty’s HTTP session support is discussed in more detail in this section.

Securing HTTP Server Applications

TODO

Writing HTTP Server Applications

Writing HTTP applications is typically simple, especially when using blocking APIs. However, there are subtle cases where it is worth clarifying what a server application should do to obtain the desired results when run by Jetty.

Sending 1xx Responses

The HTTP/1.1 RFC allows for 1xx informational responses to be sent before the final response. Unfortunately, the Servlet specification does not provide a way for these to be sent, so Jetty has had to provide non-standard handling of these responses.

100 Continue

The 100 Continue response should be sent by the server when a client sends a request with an Expect: 100-continue header, as the client will not send the body of the request until the 100 Continue response has been sent.

The intent of this feature is to allow a server to inspect the headers and to tell the client to not send a request body that might be too large or insufficiently private or otherwise unable to be handled.

Jetty achieves this by waiting until the input stream or reader is obtained by the filter/servlet before sending the 100 Continue response. Thus a filter/servlet may inspect the headers of a request before getting the input stream, and send an error response (or a redirect, etc.) rather than the 100 Continue response.

class Continue100HttpServlet extends HttpServlet
{
    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        // Inspect the method and headers.
        boolean isPost = HttpMethod.POST.is(request.getMethod());
        boolean expects100 = HttpHeaderValue.CONTINUE.is(request.getHeader("Expect"));
        long contentLength = request.getContentLengthLong();

        if (isPost && expects100)
        {
            if (contentLength > 1024 * 1024)
            {
                // Reject uploads that are too large.
                response.sendError(HttpStatus.PAYLOAD_TOO_LARGE_413);
            }
            else
            {
                // Getting the request InputStream indicates that
                // the application wants to read the request content.
                // Jetty will send the 100 Continue response at this
                // point, and the client will send the request content.
                ServletInputStream input = request.getInputStream();

                // Read and process the request input.
            }
        }
        else
        {
            // Process normal requests.
        }
    }
}

102 Processing

RFC 2518 defined the 102 Processing status code that can be sent:

when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable, but arbitrary value) to process the server SHOULD return a 102 Processing response.
— RFC 2518 section 10.1

However, a later update of RFC 2518, RFC 4918, removed the 102 Processing status code for "lack of implementation".

Jetty supports the 102 Processing status code. If a request is received with the Expect: 102-processing header, then a filter/servlet may send a 102 Processing response (without terminating further processing) by calling response.sendError(102).