I/O Architecture

The Jetty client libraries provide the basic components and APIs to implement a client application.

They build on the common Jetty I/O Architecture and provide client specific concepts (such as establishing a connection to a server).

There are conceptually two layers that compose the Jetty client libraries:

  1. The transport layer, which handles the low-level communication with the server and deals with buffers, threads, etc.

  2. The protocol layer, which handles the high-level protocol by parsing the bytes read from the transport layer and by generating the bytes to write to the transport layer.

Transport Layer

The transport layer is the low-level layer that communicates with the server.

Protocols such as HTTP/1.1 and HTTP/2 are typically transported over TCP, while the newer HTTP/3 is transported over QUIC, which is itself transported over UDP.

However, there are other means of communication supported by the Jetty client libraries, in particular over Unix-Domain sockets (for inter-process communication), and over memory (for intra-process communication).

The same high-level protocol can be carried by different low-level transports. For example, the high-level HTTP/1.1 protocol can be transported over either TCP (the default), or QUIC, or Unix-Domain sockets, or memory, because all these low-level transports provide reliable and ordered communication between client and server.

Similarly, the high-level HTTP/3 protocol can be transported over either QUIC (the default) or memory. It would also be possible to transport HTTP/3 over Unix-Domain sockets, but the current version of Java only supports Unix-Domain sockets for SocketChannels, not for DatagramChannels.

The Jetty client libraries use the common I/O design described in this section.

The common I/O components and concepts are used for all low-level transports. The only partial exception is the memory transport, which is not based on network components; as such it does not need a SelectorManager, but it exposes EndPoint so that high-level protocols have a common interface to interact with the low-level transport.

The client-side abstraction for the low-level transport is org.eclipse.jetty.io.Transport.

Transport represents how high-level protocols can be transported: Transport.TCP_IP represents communication over TCP, Transport.TCPUnix communication over Unix-Domain sockets, QuicTransport communication over QUIC, and MemoryTransport communication over memory.
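For example, a minimal sketch of obtaining Transport instances for the first two cases; Transport.TCP_IP is a constant, while the assumption that Transport.TCPUnix is constructed from the Unix-Domain socket Path is made here just for illustration:

// The default low-level transport: TCP.
Transport tcp = Transport.TCP_IP;

// A Unix-Domain transport, assuming TCPUnix takes the socket Path.
Transport unix = new Transport.TCPUnix(Path.of("/tmp/jetty.sock"));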

Applications can specify the Transport to use for each request as described in this section.

When the Transport implementation uses the network, it delegates to org.eclipse.jetty.io.ClientConnector.

ClientConnector primarily wraps org.eclipse.jetty.io.SelectorManager to provide network functionality, and aggregates four other components:

  • a thread pool (in the form of a java.util.concurrent.Executor)

  • a scheduler (in the form of an org.eclipse.jetty.util.thread.Scheduler)

  • a byte buffer pool (in the form of an org.eclipse.jetty.io.ByteBufferPool)

  • a TLS factory (in the form of an org.eclipse.jetty.util.ssl.SslContextFactory.Client)

The ClientConnector is where you want to set those components after you have configured them. If you don’t explicitly set those components on the ClientConnector, then appropriate defaults will be chosen when the ClientConnector starts.

ClientConnector manages all network-related components, and therefore it is used for TCP, UDP, QUIC and Unix-Domain sockets.

The simplest example that creates and starts a ClientConnector is the following:

ClientConnector clientConnector = new ClientConnector();
clientConnector.start();

A more typical example:

// Create and configure the SslContextFactory.
SslContextFactory.Client sslContextFactory = new SslContextFactory.Client();
sslContextFactory.addExcludeProtocols("TLSv1", "TLSv1.1");

// Create and configure the thread pool.
QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setName("client");

// Create and configure the ClientConnector.
ClientConnector clientConnector = new ClientConnector();
clientConnector.setSslContextFactory(sslContextFactory);
clientConnector.setExecutor(threadPool);
clientConnector.start();

A more advanced example that customizes the ClientConnector by overriding some of its methods:

class CustomClientConnector extends ClientConnector
{
    @Override
    protected SelectorManager newSelectorManager()
    {
        return new ClientSelectorManager(getExecutor(), getScheduler(), getSelectors())
        {
            @Override
            protected void endPointOpened(EndPoint endpoint)
            {
                System.getLogger("endpoint").log(INFO, "opened %s", endpoint);
            }

            @Override
            protected void endPointClosed(EndPoint endpoint)
            {
                System.getLogger("endpoint").log(INFO, "closed %s", endpoint);
            }
        };
    }
}

// Create and configure the thread pool.
QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setName("client");

// Create and configure the scheduler.
Scheduler scheduler = new ScheduledExecutorScheduler("scheduler-client", false);

// Create and configure the custom ClientConnector.
CustomClientConnector clientConnector = new CustomClientConnector();
clientConnector.setExecutor(threadPool);
clientConnector.setScheduler(scheduler);
clientConnector.start();

Since ClientConnector is the component that handles the low-level network transport, it is also the component where you configure the low-level network parameters.

The most common parameters are the following (a configuration sketch follows the list):

  • ClientConnector.selectors: the number of java.nio.channels.Selector components (defaults to 1) that handle the SocketChannels and DatagramChannels opened by the ClientConnector. You typically want to increase the number of selectors only for those use cases where each selector should handle more than a few hundred concurrent socket events. For example, one selector typically runs well with 250 concurrent socket events; as a rule of thumb, you can multiply that number by 10 to obtain the number of open sockets a selector can handle (2500), based on the assumption that not all 2500 sockets will be active at the same time.

  • ClientConnector.idleTimeout: the duration of time after which ClientConnector closes a socket due to inactivity (defaults to 30 seconds). This is an important parameter to configure, and you typically want the client idle timeout to be shorter than the server idle timeout, to avoid race conditions where the client attempts to use a socket just before the client-side idle timeout expires, but the server-side idle timeout has already expired and the server is already closing the socket.

  • ClientConnector.connectBlocking: whether the operation of connecting a socket to the server (i.e. SocketChannel.connect(SocketAddress)) must be a blocking or a non-blocking operation (defaults to false). For localhost or same datacenter hosts you want to set this parameter to true because DNS resolution will be immediate (and likely never fail). For generic Internet hosts (e.g. when you are implementing a web spider) you want to set this parameter to false.

  • ClientConnector.connectTimeout: the duration of time after which ClientConnector aborts a connection attempt to the server (defaults to 5 seconds). This time includes the DNS lookup time and the TCP connect time.
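For example, a minimal configuration sketch that puts these parameters together; the values are illustrative, and the Duration-based setters are assumed to match the ClientConnector APIs:

ClientConnector clientConnector = new ClientConnector();
// More selectors, for use cases with many concurrent socket events.
clientConnector.setSelectors(2);
// Keep the client idle timeout shorter than the server idle timeout.
clientConnector.setIdleTimeout(Duration.ofSeconds(20));
// Block while connecting, e.g. when connecting to localhost.
clientConnector.setConnectBlocking(true);
// Abort connection attempts that take longer than this.
clientConnector.setConnectTimeout(Duration.ofSeconds(3));
clientConnector.start();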

Please refer to the ClientConnector javadocs for the complete list of configurable parameters.

Unix-Domain Support

JEP 380 introduced Unix-Domain socket support in Java 16, on all operating systems, but only for SocketChannels (not for DatagramChannels).

ClientConnector handles Unix-Domain sockets exactly like it handles regular TCP sockets, so no additional configuration is necessary: Unix-Domain sockets are supported out-of-the-box.

Applications can specify the Transport to use for each request as described in this section.

Memory Support

In addition to supporting communication between client and server via the network or Unix-Domain sockets, the Jetty client libraries also support communication between client and server via memory, for intra-process communication. This means that the client and the server must be in the same JVM process.

This functionality is provided by org.eclipse.jetty.server.MemoryTransport, which does not delegate to ClientConnector, but instead delegates to the server-side MemoryConnector and its related classes.
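A minimal sketch of setting up the memory transport could look like the following; the MemoryConnector and MemoryTransport constructor parameters shown here are assumptions for illustration:

// Server-side: add a MemoryConnector to the Server, assuming it
// is constructed from the Server and a ConnectionFactory.
Server server = new Server();
MemoryConnector memoryConnector = new MemoryConnector(server, new HttpConnectionFactory());
server.addConnector(memoryConnector);
server.start();

// Client-side: a Transport that communicates with the
// server via memory, within the same JVM process.
Transport transport = new MemoryTransport(memoryConnector);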

Applications can specify the Transport to use for each request as described in this section.

Protocol Layer

The protocol layer builds on top of the transport layer to generate the bytes to be written to the low-level transport and to parse the bytes read from the low-level transport.

Recall from this section that Jetty uses the Connection abstraction to produce and interpret the low-level transport bytes.

On the client side, a ClientConnectionFactory implementation is the component that creates Connection instances based on the protocol that the client wants to "speak" with the server.

Applications may use ClientConnector.connect(SocketAddress, Map<String, Object>) to establish a TCP connection to the server, and must provide ClientConnector with the following information in the context map:

  • A Transport instance that specifies the low-level transport to use.

  • A ClientConnectionFactory that creates Connection instances for the high-level protocol.

  • A Promise that is notified when the connection creation succeeds or fails.

For example:

class CustomConnection extends AbstractConnection
{
    public CustomConnection(EndPoint endPoint, Executor executor)
    {
        super(endPoint, executor);
    }

    @Override
    public void onOpen()
    {
        super.onOpen();
        System.getLogger("connection").log(INFO, "Opened connection {0}", this);
    }

    @Override
    public void onFillable()
    {
    }
}

ClientConnector clientConnector = new ClientConnector();
clientConnector.start();

String host = "serverHost";
int port = 8080;
SocketAddress address = new InetSocketAddress(host, port);

// The Transport instance.
Transport transport = Transport.TCP_IP;

// The ClientConnectionFactory that creates CustomConnection instances.
ClientConnectionFactory connectionFactory = (endPoint, context) ->
{
    System.getLogger("connection").log(INFO, "Creating connection for {0}", endPoint);
    return new CustomConnection(endPoint, clientConnector.getExecutor());
};

// The Promise to notify of connection creation success or failure.
CompletableFuture<CustomConnection> connectionPromise = new Promise.Completable<>();

// Populate the context with the mandatory keys to create and obtain connections.
Map<String, Object> context = new ConcurrentHashMap<>();
context.put(Transport.class.getName(), transport);
context.put(ClientConnector.CLIENT_CONNECTION_FACTORY_CONTEXT_KEY, connectionFactory);
context.put(ClientConnector.CONNECTION_PROMISE_CONTEXT_KEY, connectionPromise);
clientConnector.connect(address, context);

// Use the Connection when it's available.

// Use it in a non-blocking way via CompletableFuture APIs.
connectionPromise.whenComplete((connection, failure) ->
{
    System.getLogger("connection").log(INFO, "Created connection for {0}", connection);
});

// Alternatively, you can block waiting for the connection (or a failure).
// CustomConnection connection = connectionPromise.get();

When a Connection is created successfully, its onOpen() method is invoked, and then the promise is completed successfully.

It is now possible to write a super-simple telnet client that reads and writes string lines:

class TelnetConnection extends AbstractConnection
{
    private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    private Consumer<String> consumer;

    public TelnetConnection(EndPoint endPoint, Executor executor)
    {
        super(endPoint, executor);
    }

    @Override
    public void onOpen()
    {
        super.onOpen();

        // Declare interest for fill events.
        fillInterested();
    }

    @Override
    public void onFillable()
    {
        try
        {
            ByteBuffer buffer = BufferUtil.allocate(1024);
            while (true)
            {
                int filled = getEndPoint().fill(buffer);
                if (filled > 0)
                {
                    while (buffer.hasRemaining())
                    {
                        // Search for newline.
                        byte read = buffer.get();
                        if (read == '\n')
                        {
                            // Notify the consumer of the line.
                            consumer.accept(bytes.toString(StandardCharsets.UTF_8));
                            bytes.reset();
                        }
                        else
                        {
                            bytes.write(read);
                        }
                    }
                }
                else if (filled == 0)
                {
                    // No more bytes to fill, declare
                    // again interest for fill events.
                    fillInterested();
                    return;
                }
                else
                {
                    // The other peer closed the
                    // connection, close it back.
                    getEndPoint().close();
                    return;
                }
            }
        }
        catch (Exception x)
        {
            getEndPoint().close(x);
        }
    }

    public void onLine(Consumer<String> consumer)
    {
        this.consumer = consumer;
    }

    public void writeLine(String line, Callback callback)
    {
        line = line + "\r\n";
        getEndPoint().write(callback, ByteBuffer.wrap(line.getBytes(StandardCharsets.UTF_8)));
    }
}

ClientConnector clientConnector = new ClientConnector();
clientConnector.start();

String host = "example.org";
int port = 80;
SocketAddress address = new InetSocketAddress(host, port);

ClientConnectionFactory connectionFactory = (endPoint, context) ->
    new TelnetConnection(endPoint, clientConnector.getExecutor());

CompletableFuture<TelnetConnection> connectionPromise = new Promise.Completable<>();

Map<String, Object> context = new HashMap<>();
context.put(Transport.class.getName(), Transport.TCP_IP);
context.put(ClientConnector.CLIENT_CONNECTION_FACTORY_CONTEXT_KEY, connectionFactory);
context.put(ClientConnector.CONNECTION_PROMISE_CONTEXT_KEY, connectionPromise);
clientConnector.connect(address, context);

connectionPromise.whenComplete((connection, failure) ->
{
    if (failure == null)
    {
        // Register a listener that receives string lines.
        connection.onLine(line -> System.getLogger("app").log(INFO, "line: {0}", line));

        // Write a line.
        connection.writeLine("GET / HTTP/1.0\r\n", Callback.NOOP);
    }
    else
    {
        failure.printStackTrace();
    }
});

Note how a very basic "telnet" API that applications could use is implemented in the form of onLine(Consumer<String>) for the non-blocking receiving side and writeLine(String, Callback) for the non-blocking sending side. Note also how the onFillable() method implements some basic "parsing" by looking for the \n character in the buffer.

The "telnet" client above looks like a super-simple HTTP client because HTTP/1.0 can be seen as a line-based protocol. HTTP/1.0 was used just as an example, but we could have used any other line-based protocol such as SMTP, provided that the server was able to understand it.

This is very similar to what the Jetty client implementation does for real network protocols. Real network protocols are of course more complicated and so is the implementation code that handles them, but the general ideas are similar.

The Jetty client implementation provides a number of ClientConnectionFactory implementations that can be composed to produce and interpret the network bytes.

For example, it is simple to modify the above example to use the TLS protocol so that you will be able to connect to the server on port 443, typically reserved for the secure HTTP protocol.

The differences between the clear-text version and the TLS-encrypted version are minimal; the TelnetConnection class is identical to the one shown above, so only the setup code changes:

ClientConnector clientConnector = new ClientConnector();
clientConnector.start();

// Use port 443 to contact the server using encrypted HTTP.
String host = "example.org";
int port = 443;
SocketAddress address = new InetSocketAddress(host, port);

ClientConnectionFactory connectionFactory = (endPoint, context) ->
    new TelnetConnection(endPoint, clientConnector.getExecutor());

// Wrap the "telnet" ClientConnectionFactory with the SslClientConnectionFactory.
connectionFactory = new SslClientConnectionFactory(clientConnector.getSslContextFactory(),
    clientConnector.getByteBufferPool(), clientConnector.getExecutor(), connectionFactory);

// We will now obtain an SslConnection.
CompletableFuture<SslConnection> connectionPromise = new Promise.Completable<>();

Map<String, Object> context = new ConcurrentHashMap<>();
context.put(Transport.class.getName(), Transport.TCP_IP);
context.put(ClientConnector.CLIENT_CONNECTION_FACTORY_CONTEXT_KEY, connectionFactory);
context.put(ClientConnector.CONNECTION_PROMISE_CONTEXT_KEY, connectionPromise);
clientConnector.connect(address, context);

connectionPromise.whenComplete((sslConnection, failure) ->
{
    if (failure == null)
    {
        // Unwrap the SslConnection to access the "line" APIs in TelnetConnection.
        TelnetConnection connection = (TelnetConnection)sslConnection.getSslEndPoint().getConnection();
        // Register a listener that receives string lines.
        connection.onLine(line -> System.getLogger("app").log(INFO, "line: {0}", line));

        // Write a line.
        connection.writeLine("GET / HTTP/1.0\r\n", Callback.NOOP);
    }
    else
    {
        failure.printStackTrace();
    }
});

The only differences from the clear-text version are:

  • Change the port from 80 to 443.

  • Wrap the ClientConnectionFactory with SslClientConnectionFactory.

  • Unwrap the SslConnection to access TelnetConnection.