HTTP Client

HttpClient Introduction

The Jetty HTTP client module provides easy-to-use APIs and utility classes to perform HTTP (or HTTPS) requests.

Jetty’s HTTP client is non-blocking and asynchronous. It offers an asynchronous API that never blocks for I/O, making it very efficient in thread utilization and well suited for high performance scenarios such as load testing or parallel computation.

However, when all you need to do is perform a GET request to a resource, Jetty’s HTTP client also offers a synchronous API: a programming interface where the thread that issued the request blocks until the request/response conversation is complete.

Jetty’s HTTP client supports different HTTP formats: HTTP/1.1, HTTP/2, HTTP/3 and FastCGI. Each format has a corresponding HttpClientTransport implementation, which in turn uses a low-level transport to communicate with the server.

This means that the semantics of an HTTP request such as "GET the resource /index.html" can be carried over the low-level transport in different formats. The most common and default format is HTTP/1.1. That said, Jetty’s HTTP client can carry the same request using the HTTP/2 format, the HTTP/3 format, or the FastCGI format.

Furthermore, every format can be transported over different low-level transports, such as TCP, Unix-Domain sockets, QUIC or memory. Support for Unix-Domain sockets requires Java 16 or later, since Unix-Domain sockets support was introduced in OpenJDK with JEP 380.

The FastCGI format is used in Jetty’s FastCGI support, which allows Jetty to work as a reverse proxy to PHP (exactly like Apache or Nginx do) and therefore serve, for example, WordPress websites, often in conjunction with Unix-Domain sockets (although it is possible to use FastCGI over the network too).

The HTTP/2 format allows Jetty’s HTTP client to perform requests using HTTP/2 to HTTP/2 enabled websites, see also Jetty’s HTTP/2 support.

The HTTP/3 format allows Jetty’s HTTP client to perform requests using HTTP/3 to HTTP/3 enabled websites, see also Jetty’s HTTP/3 support.
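
For example, this is a minimal sketch (the transport classes used here are covered in more detail in later sections) of creating an HttpClient that carries requests using the HTTP/2 format:

// Create the HTTP/2 transport and pass it to HttpClient.
HTTP2Client http2Client = new HTTP2Client();
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
httpClient.start();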

Out of the box features that you get with the Jetty HTTP client include:

  • Redirect support — redirect codes such as 302 or 303 are automatically followed.

  • Cookies support — cookies sent by servers are stored and sent back to servers in matching requests.

  • Authentication support — HTTP "Basic", "Digest" and "SPNEGO" authentications are supported, others are pluggable.

  • Forward proxy support — HTTP proxying, SOCKS4 and SOCKS5 proxying.

Starting HttpClient

The Jetty artifact that provides the main HTTP client implementation is jetty-client. The Maven artifact coordinates are the following:

<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-client</artifactId>
  <version>12.0.16-SNAPSHOT</version>
</dependency>

The main class is named org.eclipse.jetty.client.HttpClient.

You can think of an HttpClient instance as a browser instance. Like a browser, it can make requests to different domains, it manages redirects, cookies and authentication, it can be configured with a forward proxy, and it provides you with the responses to the requests you make.

In order to use HttpClient, you must instantiate it, configure it, and then start it:

// Instantiate HttpClient.
HttpClient httpClient = new HttpClient();

// Configure HttpClient, for example:
httpClient.setFollowRedirects(false);

// Start HttpClient.
httpClient.start();

You may create multiple instances of HttpClient, but typically one instance is enough for an application. There are several reasons for having multiple HttpClient instances including, but not limited to:

  • You want to specify different configuration parameters (for example, one instance is configured with a forward proxy while another is not).

  • You want the two instances to behave like two different browsers and hence have different cookies, different authentication credentials, etc.

  • You want to use different HttpClientTransports.
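
For example, here is a minimal sketch (the proxy host and port are illustrative; HttpProxy and HttpClientTransportOverHTTP2 are covered later in this section) of two HttpClient instances, one configured with a forward proxy and one using a different transport:

// An instance configured with a forward proxy.
HttpClient proxiedClient = new HttpClient();
proxiedClient.getProxyConfiguration().addProxy(new HttpProxy("proxy.example.com", 8888));
proxiedClient.start();

// An instance using the HTTP/2 transport.
HttpClient http2Client = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
http2Client.start();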

As in browsers, HTTPS requests are supported out-of-the-box (see this section for the TLS configuration), as long as the server provides a valid certificate. If the server does not provide a valid certificate (or the certificate is self-signed), you will want to customize HttpClient's TLS configuration as described in this section.

Stopping HttpClient

It is recommended that when your application stops, you also stop the HttpClient instance (or instances) that you are using.

// Stop HttpClient.
httpClient.stop();

Stopping HttpClient makes sure that the memory it holds (for example, ByteBuffer pools, authentication credentials, cookies, etc.) is released, and that the thread pool and scheduler are properly stopped allowing all threads used by HttpClient to exit.

You cannot call HttpClient.stop() from one of its own threads, as it would cause a deadlock. It is recommended that you stop HttpClient from an unrelated thread, or from a newly allocated thread, for example:

// Stop HttpClient from a new thread.
// Use LifeCycle.stop(...) to rethrow checked exceptions as unchecked.
new Thread(() -> LifeCycle.stop(httpClient)).start();

HttpClient Architecture

An HttpClient instance can be thought of as a browser instance, and it manages the following components:

A Destination is the client-side component that represents an origin server, and manages a queue of requests for that origin, and a pool of connections to that origin.

An origin may be simply thought of as the tuple (scheme, host, port), and it is where the client connects in order to communicate with the server. However, this is not enough.

If you use HttpClient to write a proxy you may have different clients that want to contact the same server. In this case, you may not want to use the same proxy-to-server connection to proxy requests for both clients, for example for authentication reasons: the server may associate the connection with authentication credentials, and you do not want to use the same connection for two different users that have different credentials. Instead, you want to use different connections for different clients and this can be achieved by "tagging" a destination with a tag object that represents the remote client (for example, it could be the remote client IP address).
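
For example, a minimal sketch (the tag value is illustrative, and this assumes the Request.tag(Object) API) of tagging a request so that it is sent to a destination dedicated to that remote client:

// Hypothetical identifier of the remote client being proxied.
String remoteClientAddress = "192.168.0.100";

ContentResponse response = httpClient.newRequest("http://backend.example.com/path")
    // Requests with different tags are mapped to different destinations,
    // and therefore to different connection pools.
    .tag(remoteClientAddress)
    .send();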

Two origins with the same (scheme, host, port) but different tags create two different destinations, and therefore two different connection pools. However, this too is not enough.

It is possible for a server to speak different protocols on the same port. A connection may start by speaking one protocol, for example HTTP/1.1, but then be upgraded to speak a different protocol, for example HTTP/2. After a connection has been upgraded to a second protocol, it cannot speak the first protocol anymore, so it can only be used to communicate using the second protocol.

Two origins with the same (scheme, host, port, tag) but different protocol create two different destinations and therefore two different connection pools.

Finally, it is possible for a server to speak the same protocol over different low-level transports (represented by Transport), for example TCP and Unix-Domain.

Two origins with the same (scheme, host, port, tag, protocol) but different low-level transports create two different destinations and therefore two different connection pools.

Therefore, an origin is identified by the tuple (scheme, host, port, tag, protocol, transport).

HttpClient Connection Pooling

A Destination manages a org.eclipse.jetty.client.ConnectionPool, where connections to a particular origin are pooled for performance reasons: opening a connection is a costly operation, and it’s better to reuse them for multiple requests.

Remember that to select a specific Destination you must select a specific origin, and that an origin is identified by the tuple (scheme, host, port, tag, protocol, transport), so you can have multiple Destinations for the same host and port, and therefore multiple ConnectionPools.

You can access the ConnectionPool in this way:

HttpClient httpClient = new HttpClient();
httpClient.start();

ConnectionPool connectionPool = httpClient.getDestinations().stream()
    // Find the destination by filtering on the Origin.
    .filter(destination -> destination.getOrigin().getAddress().getHost().equals("domain.com"))
    .findAny()
    // Get the ConnectionPool.
    .map(Destination::getConnectionPool)
    .orElse(null);

Jetty’s client library provides the following ConnectionPool implementations:

  • DuplexConnectionPool, historically the first implementation, only used by the HTTP/1.1 transport.

  • MultiplexConnectionPool, the generic implementation valid for any transport where connections are reused with a most recently used algorithm (that is, the connections most recently returned to the connection pool are the most likely to be used again).

  • RoundRobinConnectionPool, similar to MultiplexConnectionPool but where connections are reused with a round-robin algorithm.

  • RandomConnectionPool, similar to MultiplexConnectionPool but where connections are reused with an algorithm that chooses them randomly.

The ConnectionPool implementation can be customized for each destination by setting a ConnectionPool.Factory on the HttpClientTransport:

HttpClient httpClient = new HttpClient();
httpClient.start();

// The max number of connections in the pool.
int maxConnectionsPerDestination = httpClient.getMaxConnectionsPerDestination();

// The max number of requests per connection (multiplexing).
// Start with 1, since this value is dynamically set to larger values if
// the transport supports multiplexing requests on the same connection.
int maxRequestsPerConnection = 1;

HttpClientTransport transport = httpClient.getTransport();

// Set the ConnectionPool.Factory using a lambda.
transport.setConnectionPoolFactory(destination ->
    new RoundRobinConnectionPool(destination,
        maxConnectionsPerDestination,
        maxRequestsPerConnection));

Pre-Creating Connections

ConnectionPool offers the ability to pre-create connections by calling ConnectionPool.preCreateConnections(int).

Pre-creating the connections saves the time and processing spent establishing the TCP connection, performing the TLS handshake (if necessary) and, for HTTP/2 and HTTP/3, performing the initial protocol setup. This is particularly important for HTTP/2, because during the initial protocol setup the server informs the client of the maximum number of concurrent requests per connection (otherwise assumed by the client to be just 1).

The scenarios where pre-creating connections is useful are, for example:

  • Load testing, where you want to prepare the system with connections already created, to avoid paying the cost of connection setup.

  • Proxying scenarios, often in conjunction with the use of RoundRobinConnectionPool or RandomConnectionPool, where the proxy creates the connections to the backend servers early.

This is an example of how to pre-create connections:

HttpClient httpClient = new HttpClient();
httpClient.start();

// For HTTP/1.1, you need to explicitly configure to initialize connections.
if (httpClient.getTransport() instanceof HttpClientTransportOverHTTP http1)
    http1.setInitializeConnections(true);

// Create a dummy request to the server you want to pre-create connections to.
Request request = httpClient.newRequest("https://host/");

// Resolve the destination for that request.
Destination destination = httpClient.resolveDestination(request);

// Pre-create, for example, half of the connections.
int preCreate = httpClient.getMaxConnectionsPerDestination() / 2;
CompletableFuture<Void> completable = destination.getConnectionPool().preCreateConnections(preCreate);

// Wait for the connections to be created.
completable.get(5, TimeUnit.SECONDS);

Pre-creating connections for secure HTTP/1.1 requires you to call HttpClientTransportOverHTTP.setInitializeConnections(true), otherwise only the TCP connection is established, but the TLS handshake is not initiated.

To initialize connections for secure HTTP/1.1, the client sends an initial OPTIONS * HTTP/1.1 request to the server. The server must be able to handle this request without closing the connection (in particular it must not add the Connection: close header in the response).

HttpClient Request Processing

Diagram

When a request is sent, an origin is computed from the request; HttpClient uses that origin to find (or create, if it does not exist) the corresponding destination. The request is then queued onto the destination, and this causes the destination to ask its connection pool for a free connection. If a connection is available, it is returned, otherwise a new connection is created. Once the destination has obtained the connection, it dequeues the request and sends it over the connection.

The first request to a destination triggers the opening of the first connection. A second request with the same origin sent after the first request/response cycle is completed may reuse the same connection, depending on the connection pool implementation. A second request with the same origin sent concurrently with the first request will likely cause the opening of a second connection, depending on the connection pool implementation. The configuration parameter HttpClient.maxConnectionsPerDestination (see also the configuration section) controls the max number of connections that can be opened for a destination.

If opening connections to a given origin takes a long time, then requests for that origin will queue up in the corresponding destination until the connections are established.

To save the time spent opening connections, you can pre-create connections.

Each connection can handle a limited number of concurrent requests. For HTTP/1.1, this number is always 1: there can only be one outstanding request for each connection. For HTTP/2 this number is determined by the server max_concurrent_streams setting (typically around 100, i.e. there can be up to 100 outstanding requests for every connection).

When a destination has maxed out its number of connections, and all connections have maxed out their number of outstanding requests, more requests sent to that destination will be queued. When the request queue is full, the request will be failed. The configuration parameter HttpClient.maxRequestsQueuedPerDestination (see also the configuration section) controls the max number of requests that can be queued for a destination.
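
For example, a minimal sketch (the values are illustrative) that raises both limits before starting HttpClient:

HttpClient httpClient = new HttpClient();
// Allow more concurrent connections per destination (default is 64).
httpClient.setMaxConnectionsPerDestination(128);
// Allow more requests to be queued per destination (default is 1024).
httpClient.setMaxRequestsQueuedPerDestination(2048);
httpClient.start();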

HttpClient API Usage

HttpClient provides two types of APIs: a blocking API and a non-blocking API.

HttpClient Blocking APIs

The simplest way to perform an HTTP request is the following:

HttpClient httpClient = new HttpClient();
httpClient.start();

// Perform a simple GET and wait for the response.
ContentResponse response = httpClient.GET("http://domain.com/path?query");

The method HttpClient.GET(...) performs an HTTP GET request to the given URI and returns a ContentResponse when the request/response conversation completes successfully.

The ContentResponse object contains the HTTP response information: status code, headers and possibly content. The content length is limited by default to 2 MiB; for larger content see the section on response content handling.

If you want to customize the request, for example by issuing a HEAD request instead of a GET, and simulating a browser user agent, you can do it in this way:

ContentResponse response = httpClient.newRequest("http://domain.com/path?query")
    .method(HttpMethod.HEAD)
    .agent("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0")
    .send();

This is a shorthand for:

Request request = httpClient.newRequest("http://domain.com/path?query");
request.method(HttpMethod.HEAD);
request.agent("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0");
ContentResponse response = request.send();

You first create a request object using httpClient.newRequest(...), and then you customize it using the fluent API style (that is, a chained invocation of methods on the request object). When the request object is customized, you call request.send() that produces the ContentResponse when the request/response conversation is complete.

The Request object, despite being mutable, cannot be reused for other requests. This is true also when trying to send two or more identical requests: you have to create two or more Request objects.

Simple POST requests also have a shortcut method:

ContentResponse response = httpClient.POST("http://domain.com/entity/1")
    .param("p", "value")
    .send();

The POST parameter values added via the param() method are automatically URL-encoded.

Jetty’s HttpClient automatically follows redirects, so it handles the typical web pattern POST/Redirect/GET, and the response object contains the content of the response of the GET request. Following redirects is a feature that you can enable/disable on a per-request basis or globally.
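
For example, a minimal sketch (assuming the standard HttpClient and Request redirect setters) of disabling redirects globally but enabling them for a single request:

// Disable following redirects globally.
httpClient.setFollowRedirects(false);

// Enable following redirects only for this request.
ContentResponse response = httpClient.newRequest("http://domain.com/path")
    .followRedirects(true)
    .send();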

File uploads also require one line, and make use of java.nio.file classes:

ContentResponse response = httpClient.POST("http://domain.com/upload")
    .file(Paths.get("file_to_upload.txt"), "text/plain")
    .send();

It is possible to impose a total timeout for the request/response conversation using the Request.timeout(...) method as follows:

ContentResponse response = httpClient.newRequest("http://domain.com/path?query")
    .timeout(5, TimeUnit.SECONDS)
    .send();

In the example above, when the 5 seconds expire, the request/response cycle is aborted and a java.util.concurrent.TimeoutException is thrown.

HttpClient Non-Blocking APIs

So far we have shown how to use Jetty HTTP client in a blocking style — that is, the thread that issues the request blocks until the request/response conversation is complete.

This section will look at Jetty’s HttpClient non-blocking, asynchronous APIs that are perfectly suited for large content downloads, for parallel processing of requests/responses and in cases where performance and efficient thread and resource utilization is a key factor.

The asynchronous APIs rely heavily on listeners that are invoked at various stages of request and response processing. These listeners are implemented by applications and may perform any kind of logic. The implementation invokes these listeners in the same thread that is used to process the request or response. Therefore, if the application code in these listeners takes a long time to execute, the request or response processing is delayed until the listener returns.

If you need to execute application code that takes a long time inside a listener, it is typically better to spawn your own thread (or use an application executor) to run that code. In this way you return from the listener as soon as possible and allow the implementation to resume the processing of the request or response (or of other requests/responses).
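
For example, a minimal sketch (processResponse(...) is a hypothetical application method) that offloads long-running work from a response listener to an application executor:

ExecutorService appExecutor = Executors.newFixedThreadPool(4);

httpClient.newRequest("http://domain.com/path")
    .onResponseSuccess(response ->
    {
        // Return quickly from the listener by offloading the long-running work.
        // processResponse(...) is a hypothetical application method.
        appExecutor.submit(() -> processResponse(response));
    })
    .send(result ->
    {
        // Your logic here
    });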

Request and response processing are executed by two different threads and therefore may happen concurrently. A typical example of this concurrent processing is an echo server, where a large upload may be concurrent with the large download echoed back.

Remember that responses may be processed and completed before requests; a typical example is a large upload that triggers a quick response, for example an error, by the server: the response may arrive and be completed while the request content is still being uploaded.

The application thread that calls Request.send(Response.CompleteListener) performs the processing of the request until either the request is fully sent over the network or until it would block on I/O, then it returns (and therefore never blocks). If it would block on I/O, the thread asks the I/O system to emit an event when the I/O is ready to continue, then returns. When such an event is fired, a thread taken from the HttpClient thread pool resumes the processing of the request.

Responses are processed by an I/O thread, taken from the HttpClient thread pool, that processes the event that bytes are ready to be read. Response processing continues until either the response is fully processed or until it would block on I/O. If it would block on I/O, the thread asks the I/O system to emit an event when the I/O is ready to continue, then returns. When such an event is fired, a (possibly different) thread taken from the HttpClient thread pool resumes the processing of the response.

When the request and the response are both fully processed, the thread that finished the last processing (usually the thread that processes the response, but may also be the thread that processes the request — if the request takes more time than the response to be processed) is used to dequeue the next request for the same destination and to process it.

A simple non-blocking GET request that discards the response content can be written in this way:

httpClient.newRequest("http://domain.com/path")
    .send(result ->
    {
        // Your logic here
    });

Method Request.send(Response.CompleteListener) returns void and does not block; the Response.CompleteListener lambda provided as a parameter is notified when the request/response conversation is complete, and the Result parameter allows you to access the request and response objects as well as failures, if any.

You can impose a total timeout for the request/response conversation in the same way used by the synchronous API:

httpClient.newRequest("http://domain.com/path")
    .timeout(3, TimeUnit.SECONDS)
    .send(result ->
    {
        /* Your logic here */
    });

The example above will impose a total timeout of 3 seconds on the request/response conversation.

The HTTP client APIs use listeners extensively to provide hooks for all possible request and response events:

httpClient.newRequest("http://domain.com/path")
    // Add request hooks.
    .onRequestQueued(request -> { /* ... */ })
    .onRequestBegin(request -> { /* ... */ })
    .onRequestHeaders(request -> { /* ... */ })
    .onRequestCommit(request -> { /* ... */ })
    .onRequestContent((request, content) -> { /* ... */ })
    .onRequestFailure((request, failure) -> { /* ... */ })
    .onRequestSuccess(request -> { /* ... */ })
    // Add response hooks.
    .onResponseBegin(response -> { /* ... */ })
    .onResponseHeader((response, field) -> true)
    .onResponseHeaders(response -> { /* ... */ })
    .onResponseContentAsync((response, chunk, demander) -> demander.run())
    .onResponseFailure((response, failure) -> { /* ... */ })
    .onResponseSuccess(response -> { /* ... */ })
    // Result hook.
    .send(result -> { /* ... */ });

This makes Jetty HTTP client suitable for HTTP load testing because, for example, you can accurately time every step of the request/response conversation (thus knowing where the request/response time is really spent).
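
For example, a minimal sketch (the timing and logging are illustrative) that measures the time from request queuing to the completion of the request/response conversation:

long[] queuedNanoTime = new long[1];
httpClient.newRequest("http://domain.com/path")
    // Record the time when the request is queued.
    .onRequestQueued(request -> queuedNanoTime[0] = System.nanoTime())
    .send(result ->
    {
        long elapsedNanos = System.nanoTime() - queuedNanoTime[0];
        System.getLogger("timing").log(INFO, "request+response took %d ns", elapsedNanos);
    });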

The code in request and response listeners should not block.

Calling other blocking APIs, such as the Java file-system APIs, is allowed, but you should not call blocking APIs that:

  • Wait for other request or response events, such as receiving other request or response content chunks.

  • Use wait/notify primitives such as those available in java.lang.Object or java.util.concurrent.locks.Condition.

If the listener code blocks, the implementation also will be blocked and will not be able to advance the processing of the request or response that the listener code is likely waiting for, causing a deadlock.

Have a look at the Request.Listener class to learn about request events, and at the Response.Listener class to learn about response events.

Request Content Handling

Jetty’s HttpClient provides a number of utility classes off the shelf to handle request content.

You can provide request content as String, byte[], ByteBuffer, java.nio.file.Path, InputStream, and provide your own implementation of org.eclipse.jetty.client.Request.Content. Here’s an example that provides the request content using java.nio.file.Paths:

ContentResponse response = httpClient.POST("http://domain.com/upload")
    .body(new PathRequestContent("text/plain", Paths.get("file_to_upload.txt")))
    .send();

Alternatively, you can use FileInputStream via the InputStreamRequestContent utility class:

ContentResponse response = httpClient.POST("http://domain.com/upload")
    .body(new InputStreamRequestContent("text/plain", new FileInputStream("file_to_upload.txt")))
    .send();

Since InputStream is blocking, the send of the request will also block if the input stream blocks, even when using the non-blocking HttpClient APIs.

If you have already read the content in memory, you can pass it as a byte[] (or a String) using the BytesRequestContent (or StringRequestContent) utility class:

ContentResponse bytesResponse = httpClient.POST("http://domain.com/upload")
    .body(new BytesRequestContent("text/plain", bytes))
    .send();

ContentResponse stringResponse = httpClient.POST("http://domain.com/upload")
    .body(new StringRequestContent("text/plain", string))
    .send();

If the request content is not immediately available, but your application will be notified of the content to send, you can use AsyncRequestContent in this way:

AsyncRequestContent content = new AsyncRequestContent();
httpClient.POST("http://domain.com/upload")
    .body(content)
    .send(result ->
    {
        // Your logic here
    });

// Content not available yet here.

// An event happens in some other class, in some other thread.
class ContentPublisher
{
    void publish(byte[] bytes, boolean lastContent)
    {
        // Wrap the bytes into a new ByteBuffer.
        ByteBuffer buffer = ByteBuffer.wrap(bytes);

        // Write the content.
        content.write(buffer, Callback.NOOP);

        // Close AsyncRequestContent when all the content has arrived.
        if (lastContent)
            content.close();
    }
}

While the request content is awaited and consequently uploaded by the client application, the server may be able to respond (at least with the response headers) completely asynchronously. In this case, Response.Listener callbacks will be invoked before the request is fully sent. This allows fine-grained control of the request/response conversation: for example, the server may reject content that is too big and send a response to the client, which in turn may stop the content upload.

Another way to provide request content is by using an OutputStreamRequestContent, which allows applications to write request content, when it is available, to the OutputStream provided by OutputStreamRequestContent:

OutputStreamRequestContent content = new OutputStreamRequestContent();

// Use try-with-resources to close the OutputStream when all content is written.
try (OutputStream output = content.getOutputStream())
{
    httpClient.POST("http://localhost:8080/")
        .body(content)
        .send(result ->
        {
            // Your logic here
        });

    // Content not available yet here.

    // Content is now available.
    byte[] bytes = new byte[]{'h', 'e', 'l', 'l', 'o'};
    output.write(bytes);
}
// End of try-with-resource, output.close() called automatically to signal end of content.

Response Content Handling

Jetty’s HttpClient allows applications to handle response content in different ways.

You can buffer the response content in memory; this is done when using the blocking APIs and the content is buffered within a ContentResponse up to 2 MiB.

If you want to control the length of the response content (for example limiting to values smaller than the default of 2 MiB), then you can use a org.eclipse.jetty.client.CompletableResponseListener in this way:

Request request = httpClient.newRequest("http://domain.com/path");

// Limit response content buffer to 512 KiB.
CompletableFuture<ContentResponse> completable = new CompletableResponseListener(request, 512 * 1024)
    .send();

// You can attach actions to the CompletableFuture,
// to be performed when the request+response completes.

// Wait at most 5 seconds for request+response to complete.
ContentResponse response = completable.get(5, TimeUnit.SECONDS);

If the response content exceeds the specified length, the response is aborted and an exception is thrown by the get(...) method.

You can buffer the response content in memory also using the non-blocking APIs, via the BufferingResponseListener utility class:

httpClient.newRequest("http://domain.com/path")
    // Buffer response content up to 8 MiB
    .send(new BufferingResponseListener(8 * 1024 * 1024)
    {
        @Override
        public void onComplete(Result result)
        {
            if (!result.isFailed())
            {
                byte[] responseContent = getContent();
                // Your logic here
            }
        }
    });

If you want to avoid buffering, you can wait for the response and then stream the content using the InputStreamResponseListener utility class:

InputStreamResponseListener listener = new InputStreamResponseListener();
httpClient.newRequest("http://domain.com/path")
    .send(listener);

// Wait for the response headers to arrive.
Response response = listener.get(5, TimeUnit.SECONDS);

// Look at the response before streaming the content.
if (response.getStatus() == HttpStatus.OK_200)
{
    // Use try-with-resources to close input stream.
    try (InputStream responseContent = listener.getInputStream())
    {
        // Your logic here
    }
}
else
{
    response.abort(new IOException("Unexpected HTTP response"));
}

If you want to save the response content to a file, you can use the PathResponseListener utility class:

Path savePath = Path.of("/path/to/save/file.bin");

// Typical usage as a response listener.
PathResponseListener listener = new PathResponseListener(savePath, true);
httpClient.newRequest("http://domain.com/path")
    .send(listener);
// Wait for the response content to be saved.
var result = listener.get(5, TimeUnit.SECONDS);

// Alternative usage with CompletableFuture.
var completable = PathResponseListener.write(httpClient.newRequest("http://domain.com/path"), savePath, true);
completable.whenComplete((pathResponse, failure) ->
{
    // Your logic here.
});

Finally, let’s look at the advanced usage of the response content handling.

The response content is provided by the HttpClient implementation to application listeners following the read/demand model of org.eclipse.jetty.io.Content.Source.

The listener that follows this model is Response.ContentSourceListener.

After the response headers have been processed by the HttpClient implementation, Response.ContentSourceListener.onContentSource(response, contentSource) is invoked once and only once. This allows the application to control precisely the read/demand loop: when to read a chunk, how to process it and when to demand the next one.

You must provide a ContentSourceListener whose implementation reads a Content.Chunk from the provided Content.Source, as explained in this section.

The invocation of onContentSource(Response, Content.Source) and of the demand callback passed to contentSource.demand(Runnable) are serialized with respect to asynchronous events such as timeouts or an asynchronous call to Request.abort(Throwable). This means that these asynchronous events are not processed until the invocation of onContentSource(Response, Content.Source) returns, or until the invocation of the demand callback returns. With this model, applications need not worry too much about concurrent asynchronous events happening during response content handling, because they will eventually see those events as failures while reading the response content.

Demanding content and consuming the content are orthogonal activities.

An application can read and store aside the Content.Chunk objects without releasing them (to consume them later), and demand more chunks, but it must call Chunk.retain() on the stored chunks, and arrange to release them after they have been consumed.

If not done carefully, this may lead to excessive memory consumption, since the ByteBuffer bytes are not consumed. Releasing the Content.Chunks results in the ByteBuffers being disposed/recycled, and may be performed at any time.

An application can also read one chunk of content, consume it, release it, and then not demand for more content until a later time.

Subclass Response.AsyncContentListener overrides the behavior of Response.ContentSourceListener; when an application implements AsyncContentListener.onContent(response, chunk, demander), it can control the disposing/recycling of the ByteBuffer by releasing the chunk and it can control when to demand one more chunk by calling demander.run().

Subclass Response.ContentListener overrides the behavior of Response.AsyncContentListener; when an application implementing its onContent(response, buffer) returns from the method, it has both the effect of disposing/recycling the buffer and the effect of demanding one more chunk of content.
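
For example, a minimal sketch (the chunk processing is illustrative) using the Response.AsyncContentListener hook via onResponseContentAsync(), where releasing the chunk and demanding the next one are controlled explicitly:

httpClient.newRequest("http://domain.com/path")
    .onResponseContentAsync((response, chunk, demander) ->
    {
        // Process the chunk bytes, for example inspect how many bytes arrived.
        System.getLogger("content").log(INFO, "received %d bytes", chunk.getByteBuffer().remaining());

        // Release the chunk, so the ByteBuffer may be disposed/recycled.
        chunk.release();
        // Demand one more chunk of content.
        demander.run();
    })
    .send(result ->
    {
        // Your logic here
    });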

A forwarder between two servers can be implemented efficiently by handling the response content without copying the ByteBuffer bytes, as in the following example:

// Prepare a request to server1, the source.
Request request1 = httpClient.newRequest(host1, port1)
    .path("/source");

// Prepare a request to server2, the sink.
AsyncRequestContent content2 = new AsyncRequestContent();
Request request2 = httpClient.newRequest(host2, port2)
    .path("/sink")
    .body(content2);

request1.onResponseContentSource(new Response.ContentSourceListener()
{
    @Override
    public void onContentSource(Response response, Content.Source contentSource)
    {
        // Only execute this method the very first time
        // to initialize the request to server2.

        request2.onRequestCommit(request ->
        {
            // Only when the request to server2 has been sent,
            // then demand response content from server1.
            contentSource.demand(() -> forwardContent(response, contentSource));
        });

        // Send the request to server2.
        request2.send(result -> System.getLogger("forwarder").log(INFO, "Forwarding to server2 complete"));
    }

    private void forwardContent(Response response, Content.Source contentSource)
    {
        // Read one chunk of content.
        Content.Chunk chunk = contentSource.read();
        if (chunk == null)
        {
            // The read chunk is null, demand to be called back
            // when the next one is ready to be read.
            contentSource.demand(() -> forwardContent(response, contentSource));
            // Once a demand is in progress, the content source must not be read
            // nor demanded again until the demand callback is invoked.
            return;
        }
        // Check if the chunk is last and empty, in which case the
        // read/demand loop is done. Demanding again when the terminal
        // chunk has been read will invoke the demand callback with
        // the same terminal chunk, so this check must be present to
        // avoid infinitely demanding and reading the terminal chunk.
        if (chunk.isLast() && !chunk.hasRemaining())
        {
            chunk.release();
            return;
        }

        // When a response chunk is received from server1, forward it to server2.
        content2.write(chunk.getByteBuffer(), Callback.from(() ->
        {
            // When the request chunk is successfully sent to server2,
            // release the chunk to recycle the buffer.
            chunk.release();
            // Then demand more response content from server1.
            contentSource.demand(() -> forwardContent(response, contentSource));
        }, x ->
        {
            chunk.release();
            response.abort(x);
        }));
    }
});

// When the response content from server1 is complete,
// complete also the request content to server2.
request1.onResponseSuccess(response -> content2.close());

// Send the request to server1.
request1.send(result -> System.getLogger("forwarder").log(INFO, "Sourcing from server1 complete"));

Request Transport

The communication between client and server happens over a low-level transport, and applications can specify the low-level transport to use for each request.

This gives client applications great flexibility, because they can use the same HttpClient instance to communicate, for example, with an external third party web application via TCP, with a different process via Unix-Domain sockets, and efficiently with the same process via memory.

Client applications can also choose more esoteric configurations, such as using QUIC (typically used to transport HTTP/3) to transport HTTP/1.1 or HTTP/2, because QUIC provides reliable and ordered communication like TCP does.

Provided you have configured a UnixDomainServerConnector on the server, this is how you can configure a request to use Unix-Domain sockets:

// This is the path the server "listens" on.
Path unixDomainPath = Path.of("/path/to/server.sock");

// Creates a ClientConnector.
ClientConnector clientConnector = new ClientConnector();

// You can use Unix-Domain for HTTP/1.1.
HttpClientTransportOverHTTP http1Transport = new HttpClientTransportOverHTTP(clientConnector);

// You can use Unix-Domain also for HTTP/2.
HTTP2Client http2Client = new HTTP2Client(clientConnector);
HttpClientTransportOverHTTP2 http2Transport = new HttpClientTransportOverHTTP2(http2Client);

// You can use Unix-Domain also for the dynamic transport.
ClientConnectionFactory.Info http1 = HttpClientConnectionFactory.HTTP11;
ClientConnectionFactoryOverHTTP2.HTTP2 http2 = new ClientConnectionFactoryOverHTTP2.HTTP2(http2Client);
HttpClientTransportDynamic dynamicTransport = new HttpClientTransportDynamic(clientConnector, http1, http2);

// Choose the transport you prefer for HttpClient, for example the dynamic transport.
HttpClient httpClient = new HttpClient(dynamicTransport);
httpClient.start();

ContentResponse response = httpClient.newRequest("jetty.org", 80)
    // Specify that the request must be sent over Unix-Domain.
    .transport(new Transport.TCPUnix(unixDomainPath))
    .send();

In the same way, if you have configured a MemoryConnector on the server, this is how you can configure a request to use memory for communication:

// The server-side MemoryConnector speaking HTTP/1.1.
Server server = new Server();
MemoryConnector memoryConnector = new MemoryConnector(server, new HttpConnectionFactory());
server.addConnector(memoryConnector);
// ...

// The code above is the server-side.
// ----
// The code below is the client-side.

HttpClient httpClient = new HttpClient();
httpClient.start();

// Use the MemoryTransport to communicate with the server-side.
Transport transport = new MemoryTransport(memoryConnector);

httpClient.newRequest("http://localhost/")
    // Specify the Transport to use.
    .transport(transport)
    .send();

This is a fancy example of how to mix HTTP versions and low-level transports:

HttpClient httpClient = new HttpClient(new HttpClientTransportDynamic(clientConnector, http2, http1, http3));
httpClient.start();

// Make a TCP request to a 3rd party web application.
ContentResponse thirdPartyResponse = httpClient.newRequest("https://third-party.com/api")
    // No need to specify the Transport, TCP will be used by default.
    .send();

// Upload the third party response content to a validation process.
ContentResponse validatedResponse = httpClient.newRequest("http://localhost/validate")
    // The validation process is available via Unix-Domain.
    .transport(new Transport.TCPUnix(unixDomainPath))
    .method(HttpMethod.POST)
    .body(new BytesRequestContent(thirdPartyResponse.getContent()))
    .send();

// Process the validated response intra-process by sending
// it to another web application in the same Jetty server.
ContentResponse response = httpClient.newRequest("http://localhost/process")
    // The processing is in-memory.
    .transport(new MemoryTransport(memoryConnector))
    .method(HttpMethod.POST)
    .body(new BytesRequestContent(validatedResponse.getContent()))
    .send();

Connection Information

In order to send a request, it is necessary to obtain a connection, as explained in the request processing section.

The HTTP/1.1 protocol may send only one request at a time on a single connection, while multiplexed protocols such as HTTP/2 may send many requests at a time on a single connection.

You can access the connection information, for example the local and remote SocketAddress, or the SslSessionData if the connection is secured, in the following way:

HttpClient httpClient = new HttpClient();
httpClient.start();

ContentResponse response = httpClient.newRequest("http://domain.com/path")
    // The connection information is only available starting from the request begin event.
    .onRequestBegin(request ->
    {
        Connection connection = request.getConnection();

        // Obtain the address of the server.
        SocketAddress remoteAddress = connection.getRemoteSocketAddress();
        System.getLogger("connection").log(INFO, "Server address: %s", remoteAddress);

        // Obtain the SslSessionData.
        EndPoint.SslSessionData sslSessionData = connection.getSslSessionData();
        if (sslSessionData != null)
            System.getLogger("connection").log(INFO, "SslSessionData: %s", sslSessionData);
    })
    .send();

The connection information is only available when the request is associated with a connection.

This means that the connection is not available in the request queued event, but only starting from the request begin event. For more information about request events, see this section.

Connection Events

In order to send HTTP requests, a connection is necessary, as explained in the request processing section.

HTTP/1.1 and HTTP/2 use SocketChannel.connect(SocketAddress) to establish a connection with the server, either via the TCP transport or via the Unix-Domain transport.

You can listen to these connect events using a ClientConnector.ConnectListener, for example to record connection establishment times:

ClientConnector clientConnector = new ClientConnector();
clientConnector.addEventListener(new ClientConnector.ConnectListener()
{
    private final ConcurrentMap<SocketChannel, Long> times = new ConcurrentHashMap<>();

    @Override
    public void onConnectBegin(SocketChannel socketChannel, SocketAddress socketAddress)
    {
        times.put(socketChannel, System.nanoTime());
    }

    @Override
    public void onConnectSuccess(SocketChannel socketChannel)
    {
        Long begin = times.remove(socketChannel);
        System.getLogger("connection").log(INFO, "established in %d ns", System.nanoTime() - begin);
    }

    @Override
    public void onConnectFailure(SocketChannel socketChannel, SocketAddress socketAddress, Throwable failure)
    {
        Long begin = times.remove(socketChannel);
        System.getLogger("connection").log(INFO, "failed in %d ns", System.nanoTime() - begin);
    }
});

HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP(clientConnector));
httpClient.start();

This can be particularly useful when you notice that your client application seems "slow" to send requests. The connect begin and connect success events, along with the request queued and request begin events (detailed here), allow you to understand whether the server is slow at accepting connections, or the client is slow at processing queued requests.

Once the low-level connection has been established, you can be notified of connection events using a Connection.Listener, or more concretely ConnectionStatistics, as described in this section.

HttpClient Configuration

HttpClient has quite a large number of configuration parameters. Please refer to the HttpClient javadocs for the complete list of configurable parameters.

The most common parameters are:

  • HttpClient.idleTimeout: same as ClientConnector.idleTimeout described in this section.

  • HttpClient.connectBlocking: same as ClientConnector.connectBlocking described in this section.

  • HttpClient.connectTimeout: same as ClientConnector.connectTimeout described in this section.

  • HttpClient.maxConnectionsPerDestination: the max number of TCP connections that are opened for a particular destination (defaults to 64).

  • HttpClient.maxRequestsQueuedPerDestination: the max number of requests queued (defaults to 1024).
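
For example, a minimal sketch (the values are illustrative) of setting some of these parameters before starting HttpClient:

HttpClient httpClient = new HttpClient();
// Close idle connections after 30 seconds.
httpClient.setIdleTimeout(30_000);
// Fail connection attempts that take longer than 5 seconds.
httpClient.setConnectTimeout(5_000);
// Limit the connections opened per destination.
httpClient.setMaxConnectionsPerDestination(64);
// Limit the requests queued per destination.
httpClient.setMaxRequestsQueuedPerDestination(1024);
httpClient.start();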

HttpClient TLS Configuration

HttpClient supports HTTPS requests out-of-the-box like a browser does.

The support for HTTPS requests is provided by a SslContextFactory.Client instance, typically configured in the ClientConnector. If not explicitly configured, the ClientConnector allocates a default one when started.

SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();

ClientConnector clientConnector = new ClientConnector();
clientConnector.setSslContextFactory(sslContextFactory);

HttpClient httpClient = new HttpClient(new HttpClientTransportDynamic(clientConnector));
httpClient.start();

The default SslContextFactory.Client verifies the certificate sent by the server by verifying the validity of the certificate with respect to the certificate chain, the expiration date, the server host name, etc. This means that requests to public websites that have a valid certificate (such as https://google.com) will work out-of-the-box, without the need to specify a KeyStore or a TrustStore.

However, requests made to sites that return an invalid or a self-signed certificate will fail (like they will in a browser). An invalid certificate may be expired or have the wrong server host name; a self-signed certificate has a certificate chain that cannot be verified.

The validation of the server host name present in the certificate is important to guarantee that the client is indeed connected to the intended server.

The validation of the server host name is performed at two levels: at the TLS level (in the JDK) and, optionally, at the application level.

By default, the validation of the server host name at the TLS level is enabled, while it is disabled at the application level.

You can configure the SslContextFactory.Client to skip the validation of the server host name at the TLS level:

SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();
// Disable the validation of the server host name at the TLS level.
sslContextFactory.setEndpointIdentificationAlgorithm(null);

When you disable the validation of the server host name at the TLS level, you are strongly recommended to enable it at the application level. Failing to do so puts you at risk of connecting to a server different from the one you intend to connect to:

SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();
// Only allow to connect to subdomains of domain.com.
sslContextFactory.setHostnameVerifier((hostName, session) -> hostName.endsWith(".domain.com"));

Enabling server host name validation at both the TLS level and the application level allows you to further restrict the set of server hosts the client can connect to, among those allowed by the certificate sent by the server.

Entirely disabling server host name validation is not recommended, but may be done in controlled environments.

Even with server host name validation disabled, the validation of the certificate chain (cryptographic signatures and validity dates) is still performed.

Please refer to the SslContextFactory.Client javadocs for the complete list of configurable parameters.

HttpClient SslHandshakeListener

Applications may register a org.eclipse.jetty.io.ssl.SslHandshakeListener to be notified of TLS handshake success or failure, by adding the SslHandshakeListener as a bean to HttpClient:

// Create a SslHandshakeListener.
SslHandshakeListener listener = new SslHandshakeListener()
{
    @Override
    public void handshakeSucceeded(Event event) throws SSLException
    {
        SSLEngine sslEngine = event.getSSLEngine();
        System.getLogger("tls").log(INFO, "TLS handshake successful to %s", sslEngine.getPeerHost());
    }

    @Override
    public void handshakeFailed(Event event, Throwable failure)
    {
        SSLEngine sslEngine = event.getSSLEngine();
        System.getLogger("tls").log(ERROR, "TLS handshake failure to %s", sslEngine.getPeerHost(), failure);
    }
};

HttpClient httpClient = new HttpClient();

// Add the SslHandshakeListener as bean to HttpClient.
// The listener will be notified of TLS handshakes success and failure.
httpClient.addBean(listener);

HttpClient TLS TrustStore Configuration

TODO

HttpClient TLS Client Certificates Configuration

TODO

HttpClient Cookie Support

Jetty’s HttpClient supports cookies out of the box.

The HttpClient instance receives cookies from HTTP responses and stores them in an org.eclipse.jetty.http.HttpCookieStore. When new requests are made, the cookie store is consulted and if there are matching cookies (that is, cookies that are not expired and that match the domain and path of the request) then they are added to the requests.

Applications can programmatically access the cookie store to find the cookies that have been set:

HttpCookieStore cookieStore = httpClient.getHttpCookieStore();
List<HttpCookie> cookies = cookieStore.match(URI.create("http://domain.com/path"));

Applications can also programmatically set cookies as if they were returned from a HTTP response:

HttpCookieStore cookieStore = httpClient.getHttpCookieStore();
HttpCookie cookie = HttpCookie.build("foo", "bar")
    .domain("domain.com")
    .path("/")
    .maxAge(TimeUnit.DAYS.toSeconds(1))
    .build();
cookieStore.add(URI.create("http://domain.com"), cookie);

Cookies may be added explicitly only for a particular request:

ContentResponse response = httpClient.newRequest("http://domain.com/path")
    .cookie(HttpCookie.from("foo", "bar"))
    .send();

You can remove cookies that you do not want to be sent in future HTTP requests:

HttpCookieStore cookieStore = httpClient.getHttpCookieStore();
URI uri = URI.create("http://domain.com");
List<HttpCookie> cookies = cookieStore.match(uri);
for (HttpCookie cookie : cookies)
{
    cookieStore.remove(uri, cookie);
}

If you want to totally disable cookie handling, you can install an HttpCookieStore.Empty instance (this is typically done when HttpClient is used in a proxy application), in this way:

httpClient.setHttpCookieStore(new HttpCookieStore.Empty());

You can enable cookie filtering by installing a cookie store that performs the filtering logic in this way:

class GoogleOnlyCookieStore extends HttpCookieStore.Default
{
    @Override
    public boolean add(URI uri, HttpCookie cookie)
    {
        if (uri.getHost().endsWith("google.com"))
            return super.add(uri, cookie);
        return false;
    }
}

httpClient.setHttpCookieStore(new GoogleOnlyCookieStore());

The example above will retain only cookies that come from the google.com domain or sub-domains.

Special Characters in Cookies

Jetty is compliant with RFC 6265, and as such care must be taken when setting a cookie value that includes special characters such as ;.

Previously, Version=1 cookies defined in RFC 2109 (and continued in RFC 2965) allowed for special/reserved characters to be enclosed within double quotes when declared in a Set-Cookie response header:

Set-Cookie: foo="bar;baz";Version=1;Path="/secure"

This was added to the HTTP Response as follows:

protected void service(HttpServletRequest request, HttpServletResponse response)
{
    jakarta.servlet.http.Cookie cookie = new Cookie("foo", "bar;baz");
    cookie.setPath("/secure");
    response.addCookie(cookie);
}

The introduction of RFC 6265 has rendered this approach no longer possible; users are now required to encode cookie values that use these special characters. This can be done utilizing jakarta.servlet.http.Cookie as follows:

jakarta.servlet.http.Cookie cookie = new Cookie("foo", URLEncoder.encode("bar;baz", "UTF-8"));

Jetty validates all cookie names and values being added to the HttpServletResponse via the addCookie(Cookie) method. If an illegal value is discovered Jetty will throw an IllegalArgumentException with the details.

HttpClient Authentication Support

Jetty’s HttpClient supports the BASIC and DIGEST authentication mechanisms defined by RFC 7235, as well as the SPNEGO authentication mechanism defined in RFC 4559.

The HTTP conversation (the sequence of related HTTP requests) for a request that needs authentication is the following:

Diagram

Upon receiving an HTTP 401 response code, HttpClient looks at the WWW-Authenticate response header (the server challenge) and then tries to match configured authentication credentials to produce an Authorization header that contains the authentication credentials to access the resource.

You can configure authentication credentials in the HttpClient instance as follows:

// Add authentication credentials.
AuthenticationStore auth = httpClient.getAuthenticationStore();

URI uri1 = new URI("http://mydomain.com/secure");
auth.addAuthentication(new BasicAuthentication(uri1, "MyRealm", "userName1", "password1"));

URI uri2 = new URI("http://otherdomain.com/admin");
auth.addAuthentication(new BasicAuthentication(uri2, "AdminRealm", "admin", "password"));

Authentications are matched against the server challenge first by mechanism (e.g. BASIC or DIGEST), then by realm and then by URI.

If an Authentication match is found, the application does not receive events related to the HTTP 401 response. These events are handled internally by HttpClient which produces another (internal) request similar to the original request but with an additional Authorization header.

If the authentication is successful, the server responds with an HTTP 200 and HttpClient caches the Authentication.Result so that subsequent requests for a matching URI will not incur the additional roundtrip caused by the HTTP 401 response.

It is possible to clear Authentication.Results in order to force authentication again:

httpClient.getAuthenticationStore().clearAuthenticationResults();

Authentication results may be preempted to avoid the additional roundtrip due to the server challenge in this way:

AuthenticationStore auth = httpClient.getAuthenticationStore();
URI uri = URI.create("http://domain.com/secure");
auth.addAuthenticationResult(new BasicAuthentication.BasicResult(uri, "username", "password"));

In this way, requests for the given URI are enriched immediately with the Authorization header, and the server should respond with HTTP 200 (and the resource content) rather than with the 401 and the challenge.

It is also possible to preempt the authentication for a single request only, in this way:

URI uri = URI.create("http://domain.com/secure");
Authentication.Result authn = new BasicAuthentication.BasicResult(uri, "username", "password");
Request request = httpClient.newRequest(uri);
authn.apply(request);
request.send();

See also the proxy authentication section for further information about how authentication works with HTTP proxies.

HttpClient SPNEGO Authentication Support

TODO

HttpClient Proxy Support

Jetty’s HttpClient can be configured to use proxies to connect to destinations.

These types of proxies are available out of the box:

  • HTTP proxy (provided by class org.eclipse.jetty.client.HttpProxy)

  • SOCKS 4 proxy (provided by class org.eclipse.jetty.client.Socks4Proxy)

  • SOCKS 5 proxy (provided by class org.eclipse.jetty.client.Socks5Proxy)

Other implementations may be written by subclassing ProxyConfiguration.Proxy.

The following is a typical configuration:

HttpProxy proxy = new HttpProxy("proxyHost", 8888);

// Do not proxy requests for localhost:8080.
proxy.getExcludedAddresses().add("localhost:8080");

// Add the new proxy to the list of proxies already registered.
ProxyConfiguration proxyConfig = httpClient.getProxyConfiguration();
proxyConfig.addProxy(proxy);

ContentResponse response = httpClient.GET("http://domain.com/path");

You specify the proxy host and proxy port, and optionally also the addresses that you do not want to be proxied, and then add the proxy to the ProxyConfiguration instance.

Configured in this way, HttpClient makes requests to the HTTP proxy (for plain-text HTTP requests) or establishes a tunnel via HTTP CONNECT (for encrypted HTTPS requests).

Proxying is supported for any version of the HTTP protocol.

The communication between the client and the proxy may be encrypted, so that it would not be possible for another party on the same network as the client to know what servers the client connects to.
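
For example, a minimal sketch (the proxy host and port are illustrative, and this assumes the HttpProxy constructor that takes an Origin.Address and a secure flag) of configuring encrypted communication with the proxy itself:

// Communicate with the proxy itself over TLS.
HttpProxy proxy = new HttpProxy(new Origin.Address("proxyHost", 8888), true);
httpClient.getProxyConfiguration().addProxy(proxy);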

SOCKS5 Proxy Support

SOCKS 5 (defined in RFC 1928) offers choices for authentication methods and supports IPv6 (things that SOCKS 4 does not support).

A typical SOCKS 5 proxy configuration with the username/password authentication method is the following:

Socks5Proxy proxy = new Socks5Proxy("proxyHost", 8888);
String socks5User = "jetty";
String socks5Pass = "secret";
var socks5AuthenticationFactory = new Socks5.UsernamePasswordAuthenticationFactory(socks5User, socks5Pass);
// Add the authentication method to the proxy.
proxy.putAuthenticationFactory(socks5AuthenticationFactory);

// Do not proxy requests for localhost:8080.
proxy.getExcludedAddresses().add("localhost:8080");

// Add the new proxy to the list of proxies already registered.
ProxyConfiguration proxyConfig = httpClient.getProxyConfiguration();
proxyConfig.addProxy(proxy);

ContentResponse response = httpClient.GET("http://domain.com/path");

HTTP Proxy Authentication Support

Jetty’s HttpClient supports HTTP proxy authentication in the same way it supports server authentication.

In the example below, the HTTP proxy requires BASIC authentication, while the server requires DIGEST authentication; credentials for both are therefore added to the AuthenticationStore:

AuthenticationStore auth = httpClient.getAuthenticationStore();

// Proxy credentials.
URI proxyURI = URI.create("http://proxy.net:8080");
auth.addAuthentication(new BasicAuthentication(proxyURI, "ProxyRealm", "proxyUser", "proxyPass"));

// Server credentials.
URI serverURI = URI.create("http://domain.com/secure");
auth.addAuthentication(new DigestAuthentication(serverURI, "ServerRealm", "serverUser", "serverPass"));

// Proxy configuration.
ProxyConfiguration proxyConfig = httpClient.getProxyConfiguration();
HttpProxy proxy = new HttpProxy("proxy.net", 8080);
proxyConfig.addProxy(proxy);

ContentResponse response = httpClient.newRequest(serverURI).send();

The HTTP conversation for successful authentications on both the proxy and the server is the following:

[Diagram: HTTP conversation with successful proxy (407) and server (401) authentication]

The application does not receive events related to the responses with code 407 and 401 since they are handled internally by HttpClient.

Similarly to the authentication section, the proxy authentication result and the server authentication result can be preempted to avoid, respectively, the 407 and 401 roundtrips.
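
A minimal sketch of preempting both results with BASIC credentials (credentials are illustrative; the BasicResult constructor taking the HttpHeader to preempt is assumed here for the Proxy-Authorization header):

AuthenticationStore auth = httpClient.getAuthenticationStore();

// Preempt the Proxy-Authorization header, avoiding the 407 roundtrip
// (the HttpHeader parameter selects which authorization header is preempted -- assumed constructor).
URI proxyURI = URI.create("http://proxy.net:8080");
auth.addAuthenticationResult(new BasicAuthentication.BasicResult(proxyURI, HttpHeader.PROXY_AUTHORIZATION, "proxyUser", "proxyPass"));

// Preempt the Authorization header, avoiding the 401 roundtrip.
URI serverURI = URI.create("http://domain.com/secure");
auth.addAuthenticationResult(new BasicAuthentication.BasicResult(serverURI, "serverUser", "serverPass"));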

HttpClient Pluggable Transports

Jetty’s HttpClient can be configured to use different HTTP formats to carry the semantic of HTTP requests and responses, by specifying different HttpClientTransport implementations.

This means that the intention of a client to request resource /index.html using the GET method can be carried over a low-level transport in different formats.

An HttpClientTransport is the component that is in charge of converting a high-level, semantic HTTP request such as " GET resource /index.html " into the specific format understood by the server (for example, HTTP/2 or HTTP/3), and of converting the server response from the specific format (HTTP/2 or HTTP/3) into high-level, semantic objects that can be used by applications.

The most common protocol format is HTTP/1.1, a textual protocol with lines separated by \r\n:

GET /index.html HTTP/1.1\r\n
Host: domain.com\r\n
...
\r\n

However, the same request can be made using FastCGI, a binary protocol:

x01 x01 x00 x01 x00 x08 x00 x00
x00 x01 x01 x00 x00 x00 x00 x00
x01 x04 x00 x01 xLL xLL x00 x00
x0C x0B  D   O   C   U   M   E
 N   T   _   U   R   I   /   i
 n   d   e   x   .   h   t   m
 l
...

Similarly, HTTP/2 is a binary protocol that transports the same information in yet another format via TCP, while HTTP/3 is a binary protocol that transports the same information in yet another format via QUIC.

The HTTP protocol version may be negotiated between client and server. A request for a resource may be sent using one protocol (for example, HTTP/1.1), but the response may arrive in a different protocol (for example, HTTP/2).

HttpClient supports these HttpClientTransport implementations, each speaking only one protocol:

  • HttpClientTransportOverHTTP, for HTTP/1.1 (both clear-text and TLS encrypted)

  • HttpClientTransportOverHTTP2, for HTTP/2 (both clear-text and TLS encrypted)

  • HttpClientTransportOverHTTP3, for HTTP/3 (only encrypted via QUIC)

  • HttpClientTransportOverFCGI, for FastCGI (both clear-text and TLS encrypted)

HttpClient also supports HttpClientTransportDynamic, a dynamic transport that can speak different HTTP formats and can select the right protocol by negotiating it with the server or by explicit indication from applications.

Furthermore, every HTTP format can be sent over different low-level transports such as TCP, Unix-Domain sockets, QUIC or memory. Support for Unix-Domain sockets requires Java 16 or later, since Unix-Domain sockets support was introduced in OpenJDK with JEP 380.

Applications are typically not aware of the actual HTTP format or low-level transport being used. This allows them to write their logic against a high-level API that hides the details of the specific HTTP format and low-level transport being used.

HTTP/1.1 Transport

HTTP/1.1 is the default transport.

// No transport specified, using default.
HttpClient httpClient = new HttpClient();
httpClient.start();

If you want to customize the HTTP/1.1 transport, you can explicitly configure it in this way:

// Configure HTTP/1.1 transport.
HttpClientTransportOverHTTP transport = new HttpClientTransportOverHTTP();
transport.setHeaderCacheSize(16384);

HttpClient client = new HttpClient(transport);
client.start();

HTTP/2 Transport

The HTTP/2 transport can be configured in this way:

// The HTTP2Client powers the HTTP/2 transport.
HTTP2Client http2Client = new HTTP2Client();
http2Client.setInitialSessionRecvWindow(64 * 1024 * 1024);

// Create and configure the HTTP/2 transport.
HttpClientTransportOverHTTP2 transport = new HttpClientTransportOverHTTP2(http2Client);
transport.setUseALPN(true);

HttpClient client = new HttpClient(transport);
client.start();

HTTP2Client is the lower-level client that provides an API based on HTTP/2 concepts such as sessions, streams and frames that are specific to HTTP/2. See the HTTP/2 client section for more information.

HttpClientTransportOverHTTP2 uses HTTP2Client to format high-level semantic HTTP requests into the HTTP/2 specific format.

HTTP/3 Transport

The HTTP/3 transport can be configured in this way:

// HTTP/3 requires secure communication.
SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();
// The HTTP3Client powers the HTTP/3 transport.
ClientQuicConfiguration clientQuicConfig = new ClientQuicConfiguration(sslContextFactory, null);
HTTP3Client http3Client = new HTTP3Client(clientQuicConfig);
http3Client.getQuicConfiguration().setSessionRecvWindow(64 * 1024 * 1024);

// Create and configure the HTTP/3 transport.
HttpClientTransportOverHTTP3 transport = new HttpClientTransportOverHTTP3(http3Client);

HttpClient client = new HttpClient(transport);
client.start();

HTTP3Client is the lower-level client that provides an API based on HTTP/3 concepts such as sessions, streams and frames that are specific to HTTP/3. See the HTTP/3 client section for more information.

HttpClientTransportOverHTTP3 uses HTTP3Client to format high-level semantic HTTP requests into the HTTP/3 specific format.

FastCGI Transport

The FastCGI transport can be configured in this way:

String scriptRoot = "/var/www/wordpress";
HttpClientTransportOverFCGI transport = new HttpClientTransportOverFCGI(scriptRoot);

HttpClient client = new HttpClient(transport);
client.start();

In order to make requests using the FastCGI transport, you need to have a FastCGI server such as PHP-FPM (see also http://php.net/manual/en/install.fpm.php).

The FastCGI transport is primarily used by Jetty’s FastCGI support to serve PHP pages (WordPress for example).
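
As a usage sketch (assuming a FastCGI server such as PHP-FPM listening on localhost port 9000; host, port and script path are illustrative), requests are made as with any other transport:

// The FastCGI transport converts this request into FastCGI records
// and sends them to the FastCGI server.
ContentResponse response = client.GET("http://localhost:9000/index.php");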

Dynamic Transport

The static HttpClientTransport implementations work well if you know in advance the protocol you want to speak with the server, or if the server only supports one protocol (such as FastCGI).

With the advent of HTTP/2 and HTTP/3, however, servers are now able to support multiple protocols.

The HTTP/2 protocol is typically negotiated between client and server. This negotiation can happen via ALPN, a TLS extension that allows the client to tell the server the list of protocols that it supports, so that the server can pick one of the client's protocols that it also supports; or via HTTP/1.1 upgrade, by means of the Upgrade header.

Applications can configure the dynamic transport with one or more HTTP versions such as HTTP/1.1, HTTP/2 or HTTP/3. The implementation will take care of using TLS for HTTPS URIs, using ALPN if necessary, negotiating protocols, upgrading from one protocol to another, etc.

By default, the dynamic transport only speaks HTTP/1.1:

// Dynamic transport speaks HTTP/1.1 by default.
HttpClientTransportDynamic transport = new HttpClientTransportDynamic();

HttpClient client = new HttpClient(transport);
client.start();

The dynamic transport can be configured with just one protocol, making it equivalent to the corresponding static transport:

ClientConnector connector = new ClientConnector();

// Equivalent to HttpClientTransportOverHTTP.
HttpClientTransportDynamic http11Transport = new HttpClientTransportDynamic(connector, HttpClientConnectionFactory.HTTP11);

// Equivalent to HttpClientTransportOverHTTP2.
HTTP2Client http2Client = new HTTP2Client(connector);
HttpClientTransportDynamic http2Transport = new HttpClientTransportDynamic(connector, new ClientConnectionFactoryOverHTTP2.HTTP2(http2Client));

The dynamic transport, however, is designed to support multiple protocols, in particular HTTP/1.1, HTTP/2 and HTTP/3:

SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();

ClientConnector connector = new ClientConnector();
connector.setSslContextFactory(sslContextFactory);

ClientConnectionFactory.Info http1 = HttpClientConnectionFactory.HTTP11;

HTTP2Client http2Client = new HTTP2Client(connector);
ClientConnectionFactoryOverHTTP2.HTTP2 http2 = new ClientConnectionFactoryOverHTTP2.HTTP2(http2Client);

ClientQuicConfiguration quicConfiguration = new ClientQuicConfiguration(sslContextFactory, null);
HTTP3Client http3Client = new HTTP3Client(quicConfiguration, connector);
ClientConnectionFactoryOverHTTP3.HTTP3 http3 = new ClientConnectionFactoryOverHTTP3.HTTP3(http3Client);

// The order of the protocols indicates the client's preference.
// The first is the most preferred, the last is the least preferred, but
// the protocol version to use can be explicitly specified in the request.
HttpClientTransportDynamic transport = new HttpClientTransportDynamic(connector, http1, http2, http3);

HttpClient client = new HttpClient(transport);
client.start();

The order in which the protocols are specified to HttpClientTransportDynamic indicates the client's preference (the first being the most preferred).

When clear-text communication is used (i.e. URIs with the http scheme) there is no HTTP protocol version negotiation, and therefore the application must know in advance whether the server supports the HTTP version or not. For example, if the server only supports clear-text HTTP/2, and HttpClientTransportDynamic is configured as in the example above, where HTTP/1.1 has precedence over HTTP/2, the client sends, by default, a clear-text HTTP/1.1 request to the clear-text HTTP/2 only server, which results in a communication failure.
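
In that case the HTTP version must be specified explicitly on the request, for example (a minimal sketch; the host is illustrative and assumed to support only clear-text HTTP/2):

// The server only supports clear-text HTTP/2: specify the version explicitly
// so that the dynamic transport does not default to the preferred HTTP/1.1.
ContentResponse response = client.newRequest("http://host/")
    .version(HttpVersion.HTTP_2)
    .send();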

When using TLS (i.e. URIs with the https scheme), the HTTP protocol version is negotiated between client and server via ALPN, and it is the server that decides which application protocol to use for the communication, regardless of the client's preference.

HTTP/1.1 and HTTP/2 are compatible because they both use TCP, while HTTP/3 is incompatible with previous HTTP versions because it uses QUIC.

Only compatible HTTP versions can negotiate the HTTP protocol version to use via ALPN, and only compatible HTTP versions can be upgraded from an older version to a newer version.

Provided that the server supports HTTP/1.1, HTTP/2 and HTTP/3, client applications can explicitly indicate the version they want to use:

HttpClientTransportDynamic transport = new HttpClientTransportDynamic(connector, http1, http2, http3);
HttpClient client = new HttpClient(transport);
client.start();

// The server supports HTTP/1.1, HTTP/2 and HTTP/3.

ContentResponse http1Response = client.newRequest("https://host/")
    // Specify the version explicitly.
    .version(HttpVersion.HTTP_1_1)
    .send();

ContentResponse http2Response = client.newRequest("https://host/")
    // Specify the version explicitly.
    .version(HttpVersion.HTTP_2)
    .send();

ContentResponse http3Response = client.newRequest("https://host/")
    // Specify the version explicitly.
    .version(HttpVersion.HTTP_3)
    .send();

// Make a clear-text upgrade request from HTTP/1.1 to HTTP/2.
// The request will start as HTTP/1.1, but the response will be HTTP/2.
ContentResponse upgradedResponse = client.newRequest("http://host/")
    .headers(headers -> headers
        .put(HttpHeader.UPGRADE, "h2c")
        .put(HttpHeader.HTTP2_SETTINGS, "")
        .put(HttpHeader.CONNECTION, "Upgrade, HTTP2-Settings"))
    .send();

If the client application explicitly specifies the HTTP version, then ALPN is not used by the client. By specifying the HTTP version explicitly, the client application declares prior knowledge of the HTTP version the server supports, and therefore ALPN is not needed. If the server does not support the HTTP version chosen by the client, then the communication will fail.

If the client application does not explicitly specify the HTTP version, then ALPN will be used by the client, but only for compatible protocols. If the server also supports ALPN, then the protocol will be negotiated via ALPN and the server will choose the protocol to use. If the server does not support ALPN, the client will try to use the first protocol configured in HttpClientTransportDynamic, and the communication may succeed or fail depending on whether the server supports the protocol chosen by the client.

For example, HTTP/3 is not compatible with previous HTTP versions; if HttpClientTransportDynamic is configured to prefer HTTP/3, it will be the only protocol attempted by the client:

// Client prefers HTTP/3.
HttpClientTransportDynamic transport = new HttpClientTransportDynamic(connector, http3, http2, http1);
HttpClient client = new HttpClient(transport);
client.start();

// No explicit HTTP version specified.
// Either HTTP/3 succeeds, or communication failure.
ContentResponse httpResponse = client.newRequest("https://host/")
    .send();

When the client application configures HttpClientTransportDynamic to prefer HTTP/2, there could be ALPN negotiation between HTTP/2 and HTTP/1.1 (but not HTTP/3 because it is incompatible); HTTP/3 will only be possible by specifying the HTTP version explicitly:

// Client prefers HTTP/2.
HttpClientTransportDynamic transport = new HttpClientTransportDynamic(connector, http2, http1, http3);
HttpClient client = new HttpClient(transport);
client.start();

// No explicit HTTP version specified.
// Either HTTP/1.1 or HTTP/2 will be negotiated via ALPN.
// HTTP/3 only possible by specifying the version explicitly.
ContentResponse httpResponse = client.newRequest("https://host/")
    .send();