HTTP/2 Server Library

In the vast majority of cases, server applications should use the generic, high-level HTTP server library, which also provides HTTP/2 support via the HTTP/2 ConnectionFactory implementations, as described in detail here.

The low-level HTTP/2 server library is designed for applications that need low-level access to HTTP/2 features such as sessions, streams and frames, which is quite a rare use case.

See also the corresponding HTTP/2 client library.

Introduction

The Maven artifact coordinates for the HTTP/2 server library are the following:

<dependency>
  <groupId>org.eclipse.jetty.http2</groupId>
  <artifactId>jetty-http2-server</artifactId>
  <version>12.0.10-SNAPSHOT</version>
</dependency>

HTTP/2 is a multiplexed protocol: it allows multiple HTTP/2 requests to be sent on the same TCP connection, or session. Each request/response cycle is represented by a stream. Therefore, a single session manages multiple concurrent streams. A stream has typically a very short life compared to the session: a stream only exists for the duration of the request/response cycle and then disappears.

HTTP/2 Flow Control

The HTTP/2 protocol is flow controlled (see the specification). This means that a sender and a receiver each maintain a flow control window that tracks the number of data bytes sent and received, respectively. When a sender sends data bytes, it reduces its flow control window. When a receiver receives data bytes, it also reduces its flow control window, and then passes the received data bytes to the application. The application consumes the data bytes and notifies the receiver that it has consumed them. The receiver then enlarges its flow control window, and the implementation arranges to send a message to the sender with the number of bytes consumed, so that the sender can enlarge its flow control window as well.

A sender can send data bytes up to its whole flow control window, then it must stop sending. The sender may resume sending data bytes when it receives a message from the receiver that the data bytes sent previously have been consumed. This message enlarges the sender flow control window, which allows the sender to send more data bytes.

HTTP/2 defines two flow control windows: one for each session, and one for each stream. Let’s see with an example how they interact, assuming that in this example the session flow control window is 120 bytes and the stream flow control window is 100 bytes.

The sender opens a session, and then opens stream_1 on that session, and sends 80 data bytes. At this point the session flow control window is 40 bytes (120 - 80), and stream_1's flow control window is 20 bytes (100 - 80). The sender now opens stream_2 on the same session and sends 40 data bytes. At this point, the session flow control window is 0 bytes (40 - 40), while stream_2's flow control window is 60 (100 - 40). Since the session flow control window is now 0, the sender cannot send more data bytes on stream_1, on stream_2, or on any other stream, even though every stream still has a stream flow control window greater than 0.

The receiver consumes stream_2's 40 data bytes and sends a message to the sender with this information. At this point, the session flow control window is 40 (0 + 40), stream_1's flow control window is still 20 and stream_2's flow control window is 100 (60 + 40). If the sender opens stream_3 and would like to send 50 data bytes, it would only be able to send 40 because that is the maximum allowed by the session flow control window at this point.
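
The window bookkeeping in this example can be sketched in a few lines of plain Java (purely illustrative; the variables below are not part of any Jetty API):

// Purely illustrative model of the sender-side window accounting described
// above; a real sender is driven by WINDOW_UPDATE frames from the receiver.
int sessionWindow = 120;
int stream1Window = 100;
int stream2Window = 100;

// stream_1 sends 80 data bytes.
sessionWindow -= 80; // 40
stream1Window -= 80; // 20

// stream_2 sends 40 data bytes.
sessionWindow -= 40; // 0
stream2Window -= 40; // 60

// The session window is 0: no stream may send, regardless of its own window.
boolean canSendOnStream2 = sessionWindow > 0 && stream2Window > 0; // false

// The receiver consumes stream_2's 40 bytes and notifies the sender.
sessionWindow += 40; // 40
stream2Window += 40; // 100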

It is therefore very important that applications notify, as soon as possible, that they have consumed data bytes, so that the implementation (the receiver) can send a message to the sender (in the form of a WINDOW_UPDATE frame) with the information to enlarge the flow control window, thereby reducing the possibility that the sender stalls due to the flow control windows being reduced to 0.

How a server application should handle HTTP/2 flow control is discussed in detail in this section.

Server Setup

The low-level HTTP/2 support is provided by org.eclipse.jetty.http2.server.RawHTTP2ServerConnectionFactory and org.eclipse.jetty.http2.api.server.ServerSessionListener:

// Create a Server instance.
Server server = new Server();

ServerSessionListener sessionListener = new ServerSessionListener() {};

// Create a ServerConnector with RawHTTP2ServerConnectionFactory.
RawHTTP2ServerConnectionFactory http2 = new RawHTTP2ServerConnectionFactory(sessionListener);

// Configure RawHTTP2ServerConnectionFactory, for example:

// Configure the max number of concurrent requests.
http2.setMaxConcurrentStreams(128);
// Enable support for CONNECT.
http2.setConnectProtocolEnabled(true);

// Create the ServerConnector.
ServerConnector connector = new ServerConnector(server, http2);

// Add the Connector to the Server
server.addConnector(connector);

// Start the Server so it starts accepting connections from clients.
server.start();

Where server applications using the high-level server library deal with HTTP requests and responses in Handlers, server applications using the low-level HTTP/2 server library deal directly with HTTP/2 sessions, streams and frames in a ServerSessionListener implementation.

The ServerSessionListener interface defines a number of methods that are invoked by the implementation upon the occurrence of HTTP/2 events, and that server applications can override to react to those events.

Please refer to the ServerSessionListener javadocs for the complete list of events.

The first event is the accept event and happens when a client opens a new TCP connection to the server and the server accepts the connection. This is the first occasion where server applications have access to the HTTP/2 Session object:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public void onAccept(Session session)
    {
        SocketAddress remoteAddress = session.getRemoteSocketAddress();
        System.getLogger("http2").log(INFO, "Connection from {0}", remoteAddress);
    }
};

After connecting to the server, a compliant HTTP/2 client must send the HTTP/2 client preface, and when the server receives it, it generates the preface event on the server. This is where server applications can customize the connection settings by returning a map of settings that the implementation will send to the client:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Map<Integer, Integer> onPreface(Session session)
    {
        // Customize the settings, for example:
        Map<Integer, Integer> settings = new HashMap<>();

        // Tell the client that HTTP/2 push is disabled.
        settings.put(SettingsFrame.ENABLE_PUSH, 0);

        return settings;
    }
};

Receiving a Request

Receiving an HTTP request from the client, and sending a response, creates a stream that encapsulates the exchange of HTTP/2 frames that compose the request and the response.

An HTTP request is made of a HEADERS frame, which carries the request method, the request URI and the request headers, and of optional DATA frames, which carry the request content.

Receiving the HEADERS frame opens the Stream:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        // This is the "new stream" event, so it's guaranteed to be a request.
        MetaData.Request request = (MetaData.Request)frame.getMetaData();

        // Return a Stream.Listener to handle the request events,
        // for example request content events or a request reset.
        return new Stream.Listener()
        {
            // Override callback methods for events you are interested in.
        };
    }
};

Server applications should return a Stream.Listener implementation from onNewStream(...) to be notified of events generated by the client, such as DATA frames carrying request content, or a RST_STREAM frame indicating that the client wants to reset the request, or an idle timeout event indicating that the client was supposed to send more frames but it did not.
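
For example, a listener that reacts only to a client reset might look like the following sketch (the onReset(Stream, ResetFrame, Callback) callback signature is assumed from the Stream.Listener javadocs; refer to them for the authoritative list of methods):

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        // Return a Stream.Listener that reacts only to a client reset.
        return new Stream.Listener()
        {
            // NOTE: the onReset(Stream, ResetFrame, Callback) signature is
            // assumed here; check the Stream.Listener javadocs.
            @Override
            public void onReset(Stream stream, ResetFrame frame, Callback callback)
            {
                // The client is no longer interested in the response;
                // release any resources associated with the stream.
                System.getLogger("http2").log(INFO, "Stream {0} reset by the client", stream.getId());
                callback.succeeded();
            }
        };
    }
};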

The example below shows how to receive request content:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        MetaData.Request request = (MetaData.Request)frame.getMetaData();

        // Demand for request data content.
        stream.demand();

        // Return a Stream.Listener to handle the request events.
        return new Stream.Listener()
        {
            @Override
            public void onDataAvailable(Stream stream)
            {
                Stream.Data data = stream.readData();

                if (data == null)
                {
                    stream.demand();
                    return;
                }

                // Get the content buffer.
                ByteBuffer buffer = data.frame().getByteBuffer();

                // Consume the buffer, here - as an example - just log it.
                System.getLogger("http2").log(INFO, "Consuming buffer {0}", buffer);

                // Tell the implementation that the buffer has been consumed.
                data.release();

                if (!data.frame().isEndStream())
                {
                    // Demand more DATA frames when they are available.
                    stream.demand();
                }
            }
        };
    }
};

When onDataAvailable(Stream stream) is invoked, the demand is implicitly cancelled.

Just returning from the onDataAvailable(Stream stream) method does not implicitly demand more DATA frames.

Applications must call Stream.demand() to explicitly require that onDataAvailable(Stream stream) is invoked again when more DATA frames are available.

Applications that consume the content buffer within onDataAvailable(Stream stream) (for example, writing it to a file, or copying the bytes to another storage) should call Data.release() as soon as they have consumed the content buffer. This allows the implementation to reuse the buffer, reducing the memory requirements needed to handle the content buffers.

Alternatively, an application may store away the Data object to consume the buffer bytes later, or pass the Data object to another asynchronous API (this is typical in proxy applications).

The call to Data.release() tells the implementation to enlarge the stream and session flow control windows, so that the sender will be able to send more DATA frames without stalling.

Applications can unwrap the Data object into some other object that may be used later, provided that the release semantic is maintained:

record Chunk(ByteBuffer byteBuffer, Callback callback)
{
}

// A queue that consumers poll to consume content asynchronously.
Queue<Chunk> dataQueue = new ConcurrentLinkedQueue<>();

// Implementation of Stream.Listener.onDataAvailable(Stream stream)
// in case of unwrapping of the Data object for asynchronous content
// consumption and demand.
Stream.Listener listener = new Stream.Listener()
{
    @Override
    public void onDataAvailable(Stream stream)
    {
        Stream.Data data = stream.readData();

        if (data == null)
        {
            stream.demand();
            return;
        }

        // Get the content buffer.
        ByteBuffer byteBuffer = data.frame().getByteBuffer();

        // Unwrap the Data object, converting it to a Chunk.
        // The Data.release() semantic is maintained in the completion of the Callback.
        dataQueue.offer(new Chunk(byteBuffer, Callback.from(() ->
        {
            // When the buffer has been consumed, then:
            // A) release the Data object.
            data.release();
            // B) possibly demand more DATA frames.
            if (!data.frame().isEndStream())
                stream.demand();
        })));

        // Do not demand more data here, to avoid overflowing the queue.
    }
};

Applications that implement onDataAvailable(Stream stream) must remember to call Stream.demand() eventually.

If they do not call Stream.demand(), the implementation will not invoke onDataAvailable(Stream stream) to deliver more DATA frames, and the application will stall, threadlessly, until an idle timeout fires to close the stream or the session.
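
One way to make this contract harder to get wrong is to drain, in a loop, all the Data objects that are currently available, and demand again only when readData() returns null. The following sketch (an illustrative variant, not a pattern prescribed by the library; consume(ByteBuffer) stands for whatever application logic processes the bytes) uses only the readData(), release() and demand() calls shown above:

Stream.Listener listener = new Stream.Listener()
{
    @Override
    public void onDataAvailable(Stream stream)
    {
        while (true)
        {
            Stream.Data data = stream.readData();

            if (data == null)
            {
                // No data is available now; demand to be called back
                // when more DATA frames arrive.
                stream.demand();
                return;
            }

            // consume(ByteBuffer) is a placeholder for application logic.
            consume(data.frame().getByteBuffer());

            // Tell the implementation that the buffer has been consumed.
            data.release();

            if (data.frame().isEndStream())
            {
                // The request content has been fully read; no more demand.
                return;
            }
        }
    }
};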

Sending a Response

After receiving an HTTP request, a server application must send an HTTP response.

An HTTP response is typically composed of a HEADERS frame containing the HTTP status code and the response headers, and optionally one or more DATA frames containing the response content bytes.

The HTTP/2 protocol also supports response trailers (that is, headers that are sent after the response content), which are also carried by a HEADERS frame; a sketch of sending trailers follows the response example below.

A server application can send a response in this way:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        // Send a response after reading the request.
        MetaData.Request request = (MetaData.Request)frame.getMetaData();
        if (frame.isEndStream())
        {
            respond(stream, request);
            return null;
        }
        else
        {
            // Demand for request data.
            stream.demand();

            // Return a listener to handle the request events.
            return new Stream.Listener()
            {
                @Override
                public void onDataAvailable(Stream stream)
                {
                    Stream.Data data = stream.readData();

                    if (data == null)
                    {
                        stream.demand();
                        return;
                    }

                    // Consume the request content.
                    data.release();

                    if (data.frame().isEndStream())
                        respond(stream, request);
                    else
                        stream.demand();
                }
            };
        }
    }

    private void respond(Stream stream, MetaData.Request request)
    {
        // Prepare the response HEADERS frame.

        // The response HTTP status and HTTP headers.
        MetaData.Response response = new MetaData.Response(HttpStatus.OK_200, null, HttpVersion.HTTP_2, HttpFields.EMPTY);

        if (HttpMethod.GET.is(request.getMethod()))
        {
            // The response content.
            ByteBuffer resourceBytes = getResourceBytes(request);

            // Send the HEADERS frame with the response status and headers,
            // and a DATA frame with the response content bytes.
            stream.headers(new HeadersFrame(stream.getId(), response, null, false))
                .thenCompose(s -> s.data(new DataFrame(s.getId(), resourceBytes, true)));
        }
        else
        {
            // Send just the HEADERS frame with the response status and headers.
            stream.headers(new HeadersFrame(stream.getId(), response, null, true));
        }
    }
};
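
The respond(...) method above does not send trailers. A minimal sketch of a response with trailers (assuming a MetaData(HttpVersion, HttpFields) constructor for the trailer headers; the "checksum" trailer is just an example) could look like this:

// Sketch: send the response HEADERS frame, a DATA frame with the content
// (end_stream=false because trailers follow), and a final HEADERS frame
// carrying the trailers (end_stream=true).
// The MetaData(HttpVersion, HttpFields) constructor and the "checksum"
// trailer are assumptions for illustration.
private void respondWithTrailers(Stream stream, ByteBuffer contentBytes)
{
    MetaData.Response response = new MetaData.Response(HttpStatus.OK_200, null, HttpVersion.HTTP_2, HttpFields.EMPTY);

    HttpFields trailerFields = HttpFields.build()
        .put("checksum", "0123456789ABCDEF");
    MetaData trailers = new MetaData(HttpVersion.HTTP_2, trailerFields);

    stream.headers(new HeadersFrame(stream.getId(), response, null, false))
        // Send the content; end_stream=false because trailers follow.
        .thenCompose(s -> s.data(new DataFrame(s.getId(), contentBytes, false)))
        // Send the trailers in a final HEADERS frame with end_stream=true.
        .thenCompose(s -> s.headers(new HeadersFrame(s.getId(), trailers, null, true)));
}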

Resetting a Request

A server application may decide that it does not want to accept the request. For example, it may throttle the client because it has sent too many requests in a time window, or the request may be invalid (and not deserve a proper HTTP response), etc.

A request can be reset in this way:

ServerSessionListener sessionListener = new ServerSessionListener()
{
    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        float requestRate = calculateRequestRate();

        if (requestRate > maxRequestRate)
        {
            stream.reset(new ResetFrame(stream.getId(), ErrorCode.REFUSED_STREAM_ERROR.code), Callback.NOOP);
            return null;
        }
        else
        {
            // The request is accepted.
            MetaData.Request request = (MetaData.Request)frame.getMetaData();
            // Return a Stream.Listener to handle the request events.
            return new Stream.Listener()
            {
                // Override callback methods for events you are interested in.
            };
        }
    }
};

HTTP/2 Push of Resources

A server application may push secondary resources related to a primary resource.

A client may inform the server, via a SETTINGS frame, that it does not accept pushed resources (see this section of the specification). Server applications must track SETTINGS frames and verify whether the client supports HTTP/2 push, and only push if the client supports it:

// The favicon bytes.
ByteBuffer faviconBuffer = BufferUtil.toBuffer(ResourceFactory.root().newResource("/path/to/favicon.ico"), true);

ServerSessionListener sessionListener = new ServerSessionListener()
{
    // By default, push is enabled.
    private boolean pushEnabled = true;

    @Override
    public void onSettings(Session session, SettingsFrame frame)
    {
        // Check whether the client sent an ENABLE_PUSH setting.
        Map<Integer, Integer> settings = frame.getSettings();
        Integer enablePush = settings.get(SettingsFrame.ENABLE_PUSH);
        if (enablePush != null)
            pushEnabled = enablePush == 1;
    }

    @Override
    public Stream.Listener onNewStream(Stream stream, HeadersFrame frame)
    {
        MetaData.Request request = (MetaData.Request)frame.getMetaData();
        if (pushEnabled && request.getHttpURI().toString().endsWith("/index.html"))
        {
            // Push the favicon.
            HttpURI pushedURI = HttpURI.build(request.getHttpURI()).path("/favicon.ico");
            MetaData.Request pushedRequest = new MetaData.Request("GET", pushedURI, HttpVersion.HTTP_2, HttpFields.EMPTY);
            PushPromiseFrame promiseFrame = new PushPromiseFrame(stream.getId(), 0, pushedRequest);
            stream.push(promiseFrame, null)
                .thenCompose(pushedStream ->
                {
                    // Send the favicon "response".
                    MetaData.Response pushedResponse = new MetaData.Response(HttpStatus.OK_200, null, HttpVersion.HTTP_2, HttpFields.EMPTY);
                    return pushedStream.headers(new HeadersFrame(pushedStream.getId(), pushedResponse, null, false))
                        .thenCompose(pushed -> pushed.data(new DataFrame(pushed.getId(), faviconBuffer.slice(), true)));
                });
        }
        // Return a Stream.Listener to handle the request events.
        return new Stream.Listener()
        {
            // Override callback methods for events you are interested in.
        };
    }
};