connections: introduce http/3 happy eyeballs

New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
- the filter is installed when `--http3` is used in the tool (or the
  equivalent CURLOPT_ is set in the library; see the sketch after this
  list)
- starts a QUIC/HTTP/3 connect right away. Should that not
  succeed after 100ms (subject to change), a parallel attempt
  is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
  as happens for other connections
- tie the timeout to the IP-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half that value. When the soft timeout
  expires, the HTTPS-CONNECT filter checks if the QUIC filter
  has received any data from the server. If not, it starts
  the HTTP/2 attempt.
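
For illustration, here is a minimal application-side sketch. The URL and
the 200ms value are only examples; the options used are existing libcurl
ones, with CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS supplying the hard timeout
whose half becomes the soft timeout:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *h = curl_easy_init();
    if(h) {
      curl_easy_setopt(h, CURLOPT_URL, "https://example.org/");
      /* CURL_HTTP_VERSION_3: try h3, eyeball h2/http1.1 in parallel.
         CURL_HTTP_VERSION_3ONLY would disable the TCP fallback. */
      curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
      /* hard eyeball timeout in milliseconds, soft timeout is half of it */
      curl_easy_setopt(h, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 200L);
      curl_easy_perform(h);
      curl_easy_cleanup(h);
    }
    return 0;
  }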

HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls, similar to the http/2 and vtls
  filters, for use in callbacks where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting the functionality to "connect" UDP sockets from the ngtcp2
  implementation into the UDP socket cfilter, because unconnected UDP
  sockets behave oddly. For example, they error when added to a pollset
  (see the sketch after this list).
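
For context, "connecting" a UDP socket only fixes the peer address so the
descriptor behaves like any connected socket for polling and sending. A
simplified, POSIX-only sketch of what the UDP socket cfilter now does
(error handling trimmed; the real logic is in cf_udp_setup_quic() further
down in this change):

  #include <sys/socket.h>
  #include <fcntl.h>

  static int udp_connect_nonblock(int fd, const struct sockaddr *sa,
                                  socklen_t salen)
  {
    /* fix the peer address; send()/recv() and pollsets now behave */
    if(connect(fd, sa, salen))
      return -1;
    /* QUIC wants the socket nonblocking */
    return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
  }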

HTTP/3(quiche) improvements.
- fixed an upload bug in the quiche implementation, which now passes test 251 and pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue

HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.

GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
  and closes immediately (so the local conn goes into the DRAIN phase), the
  connection is torn down and another attempt is made after a short grace
  period.
  This is the behaviour observed with nghttpx when we tell it to shut
  down gracefully. Tested in pytest test_03_02.

TLS improvements
- ALPN selection for SSL/SSL-PROXY filters now happens in one common set of
  vtls functions, replacing the copied logic in all TLS backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: a common function for all backends sets the ALPN property
  and connection related things based on the negotiated protocol (or lack
  thereof). A hedged sketch of the idea follows this list.
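
The gist, as a hedged sketch with invented names (the real shared function
lives in vtls and also updates connection state): every backend hands the
negotiated protocol bytes to one common mapper instead of keeping its own
copy of this switch:

  #include <string.h>
  #include <curl/curl.h>

  /* hypothetical helper, not the actual vtls function name */
  static int alpn_to_http_version(const unsigned char *proto, size_t len)
  {
    if(len == 2 && !memcmp(proto, "h2", 2))
      return CURL_HTTP_VERSION_2;
    if(len == 8 && !memcmp(proto, "http/1.1", 8))
      return CURL_HTTP_VERSION_1_1;
    return CURL_HTTP_VERSION_NONE; /* nothing (usable) was negotiated */
  }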

- new tests/tests-httpd/scorecard.py for testing the h3/h2 protocol implementations.
  Invoke:
    python3 tests/tests-httpd/scorecard.py --help
  for usage.

Improvements to gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
  report connection statistics. This is triggered when the connection
  has completely connected.
- new void Curl_pgrsTimeWas(..) function to report a timer update with
  a timestamp of when it happened. This allows for updating timers
  "later", e.g. a connect statistic after full connectivity has been
  reached.
- in case of HTTP eyeballing, the previous changes will update
  statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
  by a filter chain.
  Added methods Curl_conn_cf_get_socket() and Curl_conn_get_socket()
  for convenient use of this query.
- changed the vtls backends to query their sub-filters for the socket when
  checks are made during the handshake (a condensed sketch of such a query
  handler follows this list).
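
As a condensed sketch, lifted from the cf_socket_query() added in this
change: a filter that owns a socket answers the new query like below, and
callers go through Curl_conn_cf_get_socket()/Curl_conn_get_socket()
instead of poking at conn->sock[] directly:

  static CURLcode cf_query(struct Curl_cfilter *cf,
                           struct Curl_easy *data,
                           int query, int *pres1, void *pres2)
  {
    struct cf_socket_ctx *ctx = cf->ctx;
    if(query == CF_QUERY_SOCKET) {
      *((curl_socket_t *)pres2) = ctx->sock; /* hand out our descriptor */
      return CURLE_OK;
    }
    /* anything else: pass down the chain or report it as unknown */
    return cf->next?
      cf->next->cft->query(cf->next, data, query, pres1, pres2) :
      CURLE_UNKNOWN_OPTION;
  }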

HTTP/3 documentation on how HTTPS eyeballing works.

Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy

pytest improvements
- adding a Makefile to clean the gen dir
- adding nghttpx rundir creation on start
- checking for httpd version 2.4.55 in test_05 cases where it is needed,
  skipping with a message if it is too old.
- catching the exception when checking for caddy existence on the system.

Closes #10349
Stefan Eissing 2023-02-01 17:13:12 +01:00 committed by Daniel Stenberg
parent b7aaf074e5
commit 671158242d
61 changed files with 3599 additions and 1249 deletions


@ -335,6 +335,9 @@ IoT
ipadOS
IPCXN
IPv
IPv4
IPv4/6
IPv6
IRIs
IRIX
Itanium


@ -311,6 +311,16 @@ AS_HELP_STRING([--with-test-nghttpx=PATH],[where to find nghttpx for testing]),
)
AC_SUBST(TEST_NGHTTPX)
CADDY=caddy
AC_ARG_WITH(test-caddy,dnl
AS_HELP_STRING([--with-test-caddy=PATH],[where to find caddy for testing]),
CADDY=$withval
if test X"$OPT_CADDY" = "Xno" ; then
CADDY=""
fi
)
AC_SUBST(CADDY)
dnl we'd like a httpd+apachectl as test server
dnl
AC_ARG_WITH(test-httpd, [AS_HELP_STRING([--with-test-httpd=PATH],
@ -366,6 +376,14 @@ fi
AC_PATH_PROG([APXS], [apxs])
AC_SUBST(HTTPD_NGHTTPX)
dnl the Caddy server we might use in testing
if test "x$TEST_CADDY" != "x"; then
CADDY="$TEST_CADDY"
else
AC_PATH_PROG([CADDY], [caddy])
fi
AC_SUBST(CADDY)
dnl If no TLS choice has been made, check if it was explicitly disabled or
dnl error out to force the user to decide.
if test -z "$TLSCHOICE"; then
@ -4646,6 +4664,7 @@ AC_CONFIG_FILES([Makefile \
tests/libtest/Makefile \
tests/unit/Makefile \
tests/tests-httpd/config.ini \
tests/tests-httpd/Makefile \
packages/Makefile \
packages/vms/Makefile \
curl-config \


@ -239,7 +239,11 @@ directory, or copy `msquic.dll` and `msh3.dll` from that directory to the
# `--http3`
Use HTTP/3 directly:
Use only HTTP/3:
curl --http3-only https://nghttp2.org:4433/
Use HTTP/3 with fallback to HTTP/2 or HTTP/1.1 (see "HTTPS eyeballing" below):
curl --http3 https://nghttp2.org:4433/
@ -249,6 +253,28 @@ Upgrade via Alt-Svc:
See this [list of public HTTP/3 servers](https://bagder.github.io/HTTP3-test/)
### HTTPS eyeballing
With option `--http3` curl will attempt earlier HTTP versions as well should the connect
attempt via HTTP/3 not succeed "fast enough". This strategy is similar to IPv4/6 happy
eyeballing where the alternate address family is used in parallel after a short delay.
The IPv4/6 eyeballing has a default of 200ms and you may override that via `--happy-eyeballs-timeout-ms value`.
Since HTTP/3 is still relatively new, we decided to use this timeout also for the HTTP eyeballing - with a slight twist.
The `happy-eyeballs-timeout-ms` value is the **hard** timeout, meaning after that time expired, a TLS connection is opened in addition to negotiate HTTP/2 or HTTP/1.1. At half of that value - currently - is the **soft** timeout. The soft timeout fires, when there has been **no data at all** seen from the server on the HTTP/3 connection.
So, without you specifying anything, the hard timeout is 200ms and the soft is 100ms:
* Ideally, the whole QUIC handshake happens and curl has a HTTP/3 connection in less than 100ms.
* When QUIC is not supported (or UDP does not work for this network path), no reply is seen and the HTTP/2 TLS+TCP connection starts 100ms later.
* In the worst case, UDP replies start before 100ms, but drag on. This will start the TLS+TCP connection after 200ms.
* When the QUIC handshake fails, the TLS+TCP connection is attempted right away. For example, when the QUIC server presents the wrong certificate.
The whole transfer only fails, when **both** QUIC and TLS+TCP fail to handshake or time out.
Note that all this happens in addition to IP version happy eyeballing. If the name resolution for the server gives more than one IP address, curl will try all those until one succeeds - just as with all other protocols. And if those IP addresses contain both IPv6 and IPv4, those attempts will happen, delayed, in parallel (the actual eyeballing).
## Known Bugs
Check out the [list of known HTTP3 bugs](https://curl.se/docs/knownbugs.html#HTTP3).


@ -40,7 +40,8 @@ int main(void)
/* Forcing HTTP/3 will make the connection fail if the server is not
accessible over QUIC + HTTP/3 on the given host and port.
Consider using CURLOPT_ALTSVC instead! */
curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
(long)CURL_HTTP_VERSION_3ONLY);
/* Perform the request, res will get the return code */
res = curl_easy_perform(curl);


@ -107,7 +107,8 @@ LIB_CFILES = \
base64.c \
bufref.c \
c-hyper.c \
cf-socket.c \
cf-http.c \
cf-socket.c \
cfilters.c \
conncache.c \
connect.c \
@ -232,7 +233,8 @@ LIB_HFILES = \
asyn.h \
bufref.h \
c-hyper.h \
cf-socket.h \
cf-http.h \
cf-socket.h \
cfilters.h \
conncache.h \
connect.h \

lib/cf-http.c (new file)

@ -0,0 +1,518 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* SPDX-License-Identifier: curl
*
***************************************************************************/
#include "curl_setup.h"
#if !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER)
#include "urldata.h"
#include <curl/curl.h>
#include "curl_log.h"
#include "cfilters.h"
#include "connect.h"
#include "multiif.h"
#include "cf-http.h"
#include "http2.h"
#include "vquic/vquic.h"
/* The last 3 #include files should be in this order */
#include "curl_printf.h"
#include "curl_memory.h"
#include "memdebug.h"
typedef enum {
CF_HC_INIT,
CF_HC_CONNECT,
CF_HC_SUCCESS,
CF_HC_FAILURE
} cf_hc_state;
struct cf_hc_baller {
const char *name;
struct Curl_cfilter *cf;
CURLcode result;
struct curltime started;
int reply_ms;
bool enabled;
};
static void cf_hc_baller_reset(struct cf_hc_baller *b,
struct Curl_easy *data)
{
if(b->cf) {
Curl_conn_cf_close(b->cf, data);
Curl_conn_cf_discard_chain(&b->cf, data);
b->cf = NULL;
}
b->result = CURLE_OK;
b->reply_ms = -1;
}
static bool cf_hc_baller_is_active(struct cf_hc_baller *b)
{
return b->enabled && b->cf && !b->result;
}
static bool cf_hc_baller_has_started(struct cf_hc_baller *b)
{
return !!b->cf;
}
static int cf_hc_baller_reply_ms(struct cf_hc_baller *b,
struct Curl_easy *data)
{
if(b->reply_ms < 0)
b->cf->cft->query(b->cf, data, CF_QUERY_CONNECT_REPLY_MS,
&b->reply_ms, NULL);
return b->reply_ms;
}
static bool cf_hc_baller_data_pending(struct cf_hc_baller *b,
const struct Curl_easy *data)
{
return b->cf && !b->result && b->cf->cft->has_data_pending(b->cf, data);
}
struct cf_hc_ctx {
cf_hc_state state;
const struct Curl_dns_entry *remotehost;
struct curltime started; /* when connect started */
CURLcode result; /* overall result */
struct cf_hc_baller h3_baller;
struct cf_hc_baller h21_baller;
int soft_eyeballs_timeout_ms;
int hard_eyeballs_timeout_ms;
};
static void cf_hc_baller_init(struct cf_hc_baller *b,
struct Curl_cfilter *cf,
struct Curl_easy *data,
const char *name,
int transport)
{
struct cf_hc_ctx *ctx = cf->ctx;
struct Curl_cfilter *save = cf->next;
b->name = name;
cf->next = NULL;
b->started = Curl_now();
b->result = Curl_cf_setup_insert_after(cf, data, ctx->remotehost,
transport, CURL_CF_SSL_ENABLE);
b->cf = cf->next;
cf->next = save;
}
static CURLcode cf_hc_baller_connect(struct cf_hc_baller *b,
struct Curl_cfilter *cf,
struct Curl_easy *data,
bool *done)
{
struct Curl_cfilter *save = cf->next;
cf->next = b->cf;
b->result = Curl_conn_cf_connect(cf->next, data, FALSE, done);
b->cf = cf->next; /* it might mutate */
cf->next = save;
return b->result;
}
static void cf_hc_reset(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_hc_ctx *ctx = cf->ctx;
if(ctx) {
cf_hc_baller_reset(&ctx->h3_baller, data);
cf_hc_baller_reset(&ctx->h21_baller, data);
ctx->state = CF_HC_INIT;
ctx->result = CURLE_OK;
ctx->hard_eyeballs_timeout_ms = data->set.happy_eyeballs_timeout;
ctx->soft_eyeballs_timeout_ms = data->set.happy_eyeballs_timeout / 2;
}
}
static CURLcode baller_connected(struct Curl_cfilter *cf,
struct Curl_easy *data,
struct cf_hc_baller *winner)
{
struct cf_hc_ctx *ctx = cf->ctx;
CURLcode result = CURLE_OK;
DEBUGASSERT(winner->cf);
if(winner != &ctx->h3_baller)
cf_hc_baller_reset(&ctx->h3_baller, data);
if(winner != &ctx->h21_baller)
cf_hc_baller_reset(&ctx->h21_baller, data);
DEBUGF(LOG_CF(data, cf, "connect+handshake %s: %dms, 1st data: %dms",
winner->name, (int)Curl_timediff(Curl_now(), winner->started),
cf_hc_baller_reply_ms(winner, data)));
cf->next = winner->cf;
winner->cf = NULL;
switch(cf->conn->alpn) {
case CURL_HTTP_VERSION_3:
infof(data, "using HTTP/3");
break;
case CURL_HTTP_VERSION_2:
#ifdef USE_NGHTTP2
/* Using nghttp2, we add the filter "below" us, so when the conn
* closes, we tear it down for a fresh reconnect */
result = Curl_http2_switch_at(cf, data);
if(result) {
ctx->state = CF_HC_FAILURE;
ctx->result = result;
return result;
}
#endif
infof(data, "using HTTP/2");
break;
case CURL_HTTP_VERSION_1_1:
infof(data, "using HTTP/1.1");
break;
default:
infof(data, "using HTTP/1.x");
break;
}
ctx->state = CF_HC_SUCCESS;
cf->connected = TRUE;
Curl_conn_cf_cntrl(cf->next, data, TRUE,
CF_CTRL_CONN_INFO_UPDATE, 0, NULL);
return result;
}
static bool time_to_start_h21(struct Curl_cfilter *cf,
struct Curl_easy *data,
struct curltime now)
{
struct cf_hc_ctx *ctx = cf->ctx;
timediff_t elapsed_ms;
if(!ctx->h21_baller.enabled || cf_hc_baller_has_started(&ctx->h21_baller))
return FALSE;
if(!ctx->h3_baller.enabled || !cf_hc_baller_is_active(&ctx->h3_baller))
return TRUE;
elapsed_ms = Curl_timediff(now, ctx->started);
if(elapsed_ms >= ctx->hard_eyeballs_timeout_ms) {
DEBUGF(LOG_CF(data, cf, "hard timeout of %dms reached, starting h21",
ctx->hard_eyeballs_timeout_ms));
return TRUE;
}
if(elapsed_ms >= ctx->soft_eyeballs_timeout_ms) {
if(cf_hc_baller_reply_ms(&ctx->h3_baller, data) < 0) {
DEBUGF(LOG_CF(data, cf, "soft timeout of %dms reached, h3 has not "
"seen any data, starting h21",
ctx->soft_eyeballs_timeout_ms));
return TRUE;
}
/* set the effective hard timeout again */
Curl_expire(data, ctx->hard_eyeballs_timeout_ms - elapsed_ms,
EXPIRE_ALPN_EYEBALLS);
}
return FALSE;
}
static CURLcode cf_hc_connect(struct Curl_cfilter *cf,
struct Curl_easy *data,
bool blocking, bool *done)
{
struct cf_hc_ctx *ctx = cf->ctx;
struct curltime now;
CURLcode result = CURLE_OK;
(void)blocking;
if(cf->connected) {
*done = TRUE;
return CURLE_OK;
}
*done = FALSE;
now = Curl_now();
switch(ctx->state) {
case CF_HC_INIT:
DEBUGASSERT(!ctx->h3_baller.cf);
DEBUGASSERT(!ctx->h21_baller.cf);
DEBUGASSERT(!cf->next);
DEBUGF(LOG_CF(data, cf, "connect, init"));
ctx->started = now;
if(ctx->h3_baller.enabled) {
cf_hc_baller_init(&ctx->h3_baller, cf, data, "h3", TRNSPRT_QUIC);
if(ctx->h21_baller.enabled)
Curl_expire(data, ctx->soft_eyeballs_timeout_ms, EXPIRE_ALPN_EYEBALLS);
}
else if(ctx->h21_baller.enabled)
cf_hc_baller_init(&ctx->h21_baller, cf, data, "h21", TRNSPRT_TCP);
ctx->state = CF_HC_CONNECT;
/* FALLTHROUGH */
case CF_HC_CONNECT:
if(cf_hc_baller_is_active(&ctx->h3_baller)) {
result = cf_hc_baller_connect(&ctx->h3_baller, cf, data, done);
if(!result && *done) {
result = baller_connected(cf, data, &ctx->h3_baller);
goto out;
}
}
if(time_to_start_h21(cf, data, now)) {
cf_hc_baller_init(&ctx->h21_baller, cf, data, "h21", TRNSPRT_TCP);
}
if(cf_hc_baller_is_active(&ctx->h21_baller)) {
DEBUGF(LOG_CF(data, cf, "connect, check h21"));
result = cf_hc_baller_connect(&ctx->h21_baller, cf, data, done);
if(!result && *done) {
result = baller_connected(cf, data, &ctx->h21_baller);
goto out;
}
}
if((!ctx->h3_baller.enabled || ctx->h3_baller.result) &&
(!ctx->h21_baller.enabled || ctx->h21_baller.result)) {
/* both failed or disabled. we give up */
DEBUGF(LOG_CF(data, cf, "connect, all failed"));
result = ctx->result = ctx->h3_baller.enabled?
ctx->h3_baller.result : ctx->h21_baller.result;
ctx->state = CF_HC_FAILURE;
goto out;
}
result = CURLE_OK;
*done = FALSE;
break;
case CF_HC_FAILURE:
result = ctx->result;
cf->connected = FALSE;
*done = FALSE;
break;
case CF_HC_SUCCESS:
result = CURLE_OK;
cf->connected = TRUE;
*done = TRUE;
break;
}
out:
DEBUGF(LOG_CF(data, cf, "connect -> %d, done=%d", result, *done));
return result;
}
static int cf_hc_get_select_socks(struct Curl_cfilter *cf,
struct Curl_easy *data,
curl_socket_t *socks)
{
struct cf_hc_ctx *ctx = cf->ctx;
size_t i, j, s;
int brc, rc = GETSOCK_BLANK;
curl_socket_t bsocks[MAX_SOCKSPEREASYHANDLE];
struct cf_hc_baller *ballers[2];
if(cf->connected)
return cf->next->cft->get_select_socks(cf->next, data, socks);
ballers[0] = &ctx->h3_baller;
ballers[1] = &ctx->h21_baller;
for(i = s = 0; i < sizeof(ballers)/sizeof(ballers[0]); i++) {
struct cf_hc_baller *b = ballers[i];
if(!cf_hc_baller_is_active(b))
continue;
brc = Curl_conn_cf_get_select_socks(b->cf, data, bsocks);
DEBUGF(LOG_CF(data, cf, "get_selected_socks(%s) -> %x", b->name, brc));
if(!brc)
continue;
for(j = 0; j < MAX_SOCKSPEREASYHANDLE && s < MAX_SOCKSPEREASYHANDLE; ++j) {
if((brc & GETSOCK_WRITESOCK(j)) || (brc & GETSOCK_READSOCK(j))) {
socks[s] = bsocks[j];
if(brc & GETSOCK_WRITESOCK(j))
rc |= GETSOCK_WRITESOCK(s);
if(brc & GETSOCK_READSOCK(j))
rc |= GETSOCK_READSOCK(s);
s++;
}
}
}
DEBUGF(LOG_CF(data, cf, "get_selected_socks -> %x", rc));
return rc;
}
static bool cf_hc_data_pending(struct Curl_cfilter *cf,
const struct Curl_easy *data)
{
struct cf_hc_ctx *ctx = cf->ctx;
if(cf->connected)
return cf->next->cft->has_data_pending(cf->next, data);
DEBUGF(LOG_CF((struct Curl_easy *)data, cf, "data_pending"));
return cf_hc_baller_data_pending(&ctx->h3_baller, data)
|| cf_hc_baller_data_pending(&ctx->h21_baller, data);
}
static void cf_hc_close(struct Curl_cfilter *cf, struct Curl_easy *data)
{
DEBUGF(LOG_CF(data, cf, "close"));
cf_hc_reset(cf, data);
cf->connected = FALSE;
if(cf->next) {
cf->next->cft->close(cf->next, data);
Curl_conn_cf_discard_chain(&cf->next, data);
}
}
static void cf_hc_destroy(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_hc_ctx *ctx = cf->ctx;
(void)data;
DEBUGF(LOG_CF(data, cf, "destroy"));
cf_hc_reset(cf, data);
Curl_safefree(ctx);
}
struct Curl_cftype Curl_cft_http_connect = {
"HTTPS-CONNECT",
0,
CURL_LOG_DEFAULT,
cf_hc_destroy,
cf_hc_connect,
cf_hc_close,
Curl_cf_def_get_host,
cf_hc_get_select_socks,
cf_hc_data_pending,
Curl_cf_def_send,
Curl_cf_def_recv,
Curl_cf_def_cntrl,
Curl_cf_def_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
};
static CURLcode cf_hc_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
bool try_h3, bool try_h21)
{
struct Curl_cfilter *cf = NULL;
struct cf_hc_ctx *ctx;
CURLcode result = CURLE_OK;
(void)data;
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->remotehost = remotehost;
ctx->h3_baller.enabled = try_h3;
ctx->h21_baller.enabled = try_h21;
result = Curl_cf_create(&cf, &Curl_cft_http_connect, ctx);
if(result)
goto out;
ctx = NULL;
cf_hc_reset(cf, data);
out:
*pcf = result? NULL : cf;
free(ctx);
return result;
}
CURLcode Curl_cf_http_connect_add(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost,
bool try_h3, bool try_h21)
{
struct Curl_cfilter *cf;
CURLcode result = CURLE_OK;
DEBUGASSERT(data);
result = cf_hc_create(&cf, data, remotehost, try_h3, try_h21);
if(result)
goto out;
Curl_conn_cf_add(data, conn, sockindex, cf);
out:
return result;
}
CURLcode
Curl_cf_http_connect_insert_after(struct Curl_cfilter *cf_at,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
bool try_h3, bool try_h21)
{
struct Curl_cfilter *cf;
CURLcode result;
DEBUGASSERT(data);
result = cf_hc_create(&cf, data, remotehost, try_h3, try_h21);
if(result)
goto out;
Curl_conn_cf_insert_after(cf_at, cf);
out:
return result;
}
CURLcode Curl_cf_https_setup(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost)
{
bool try_h3 = FALSE, try_h21 = TRUE; /* defaults, for now */
CURLcode result = CURLE_OK;
(void)sockindex;
(void)remotehost;
if(!conn->bits.tls_enable_alpn)
goto out;
if(data->state.httpwant == CURL_HTTP_VERSION_3ONLY) {
result = Curl_conn_may_http3(data, conn);
if(result) /* can't do it */
goto out;
try_h3 = TRUE;
try_h21 = FALSE;
}
else if(data->state.httpwant >= CURL_HTTP_VERSION_3) {
/* We assume that silently not even trying H3 is ok here */
/* TODO: should we fail instead? */
try_h3 = (Curl_conn_may_http3(data, conn) == CURLE_OK);
try_h21 = TRUE;
}
result = Curl_cf_http_connect_add(data, conn, sockindex, remotehost,
try_h3, try_h21);
out:
return result;
}
#endif /* !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER) */

lib/cf-http.h (new file)

@ -0,0 +1,58 @@
#ifndef HEADER_CURL_CF_HTTP_H
#define HEADER_CURL_CF_HTTP_H
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* SPDX-License-Identifier: curl
*
***************************************************************************/
#include "curl_setup.h"
#if !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER)
struct Curl_cfilter;
struct Curl_easy;
struct connectdata;
struct Curl_cftype;
struct Curl_dns_entry;
extern struct Curl_cftype Curl_cft_http_connect;
CURLcode Curl_cf_http_connect_add(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost,
bool try_h3, bool try_h21);
CURLcode
Curl_cf_http_connect_insert_after(struct Curl_cfilter *cf_at,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
bool try_h3, bool try_h21);
CURLcode Curl_cf_https_setup(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost);
#endif /* !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER) */
#endif /* HEADER_CURL_CF_HTTP_H */


@ -250,9 +250,23 @@ static CURLcode socket_open(struct Curl_easy *data,
(struct curl_sockaddr *)addr);
Curl_set_in_callback(data, false);
}
else
else {
/* opensocket callback not set, so simply create the socket now */
*sockfd = socket(addr->family, addr->socktype, addr->protocol);
if(!*sockfd && addr->socktype == SOCK_DGRAM) {
/* This is icky and seems, at least, to happen on macOS:
* we get sockfd == 0 and if called again, we get a valid one > 0.
* If we close the 0, we sometimes get failures in multi poll, as
* 0 seems also be the fd for the sockpair used for WAKEUP polling.
* Very strange. Maybe this code shouldbe ifdef'ed for macOS, but
* on "real" OS, fd 0 is stdin and we never see that. So...
*/
fake_sclose(*sockfd);
*sockfd = socket(addr->family, addr->socktype, addr->protocol);
DEBUGF(infof(data, "QUIRK: UDP socket() gave handle 0, 2nd attempt %d",
(int)*sockfd));
}
}
if(*sockfd == CURL_SOCKET_BAD)
/* no socket, no connection */
@ -769,11 +783,25 @@ struct cf_socket_ctx {
int r_port; /* remote port number */
char l_ip[MAX_IPADR_LEN]; /* local IP as string */
int l_port; /* local port number */
struct curltime started_at; /* when socket was created */
struct curltime connected_at; /* when socket connected/got first byte */
struct curltime first_byte_at; /* when first byte was recvd */
int error; /* errno of last failure or 0 */
BIT(got_first_byte); /* if first byte was received */
BIT(accepted); /* socket was accepted, not connected */
BIT(active);
};
static void cf_socket_ctx_init(struct cf_socket_ctx *ctx,
const struct Curl_addrinfo *ai,
int transport)
{
memset(ctx, 0, sizeof(*ctx));
ctx->sock = CURL_SOCKET_BAD;
ctx->transport = transport;
Curl_sock_assign_addr(&ctx->addr, ai, transport);
}
static void cf_socket_close(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_socket_ctx *ctx = cf->ctx;
@ -785,27 +813,34 @@ static void cf_socket_close(struct Curl_cfilter *cf, struct Curl_easy *data)
* closed it) and we just forget about it.
*/
if(ctx->sock == cf->conn->sock[cf->sockindex]) {
DEBUGF(LOG_CF(data, cf, "cf_socket_close(%d) active", (int)ctx->sock));
DEBUGF(LOG_CF(data, cf, "cf_socket_close(%d, active)",
(int)ctx->sock));
socket_close(data, cf->conn, !ctx->accepted, ctx->sock);
cf->conn->sock[cf->sockindex] = CURL_SOCKET_BAD;
}
else {
DEBUGF(LOG_CF(data, cf, "cf_socket_close(%d) no longer at "
"conn->sock[], discarding", (int)ctx->sock));
/* TODO: we do not want this to happen. Need to check which
* code is messing with conn->sock[cf->sockindex] */
}
ctx->sock = CURL_SOCKET_BAD;
if(cf->sockindex == FIRSTSOCKET)
cf->conn->remote_addr = NULL;
}
else {
/* this is our local socket, we did never publish it */
DEBUGF(LOG_CF(data, cf, "cf_socket_close(%d) local", (int)ctx->sock));
DEBUGF(LOG_CF(data, cf, "cf_socket_close(%d, not active)",
(int)ctx->sock));
sclose(ctx->sock);
ctx->sock = CURL_SOCKET_BAD;
}
#ifdef USE_RECV_BEFORE_SEND_WORKAROUND
io_buffer_reset(&ctx->recv_buffer);
#endif
ctx->sock = CURL_SOCKET_BAD;
ctx->active = FALSE;
memset(&ctx->started_at, 0, sizeof(ctx->started_at));
memset(&ctx->connected_at, 0, sizeof(ctx->connected_at));
}
cf->connected = FALSE;
@ -882,8 +917,10 @@ static CURLcode cf_socket_open(struct Curl_cfilter *cf,
const char *ipmsg;
(void)data;
ctx->sock = CURL_SOCKET_BAD;
DEBUGASSERT(ctx->sock == CURL_SOCKET_BAD);
ctx->started_at = Curl_now();
result = socket_open(data, &ctx->addr, &ctx->sock);
DEBUGF(LOG_CF(data, cf, "socket_open() -> %d, fd=%d", result, ctx->sock));
if(result)
goto out;
@ -963,12 +1000,15 @@ out:
}
else if(isconnected) {
set_local_ip(cf, data);
ctx->connected_at = Curl_now();
cf->connected = TRUE;
}
DEBUGF(LOG_CF(data, cf, "cf_socket_open() -> %d, fd=%d", result, ctx->sock));
return result;
}
static int do_connect(struct Curl_cfilter *cf, struct Curl_easy *data)
static int do_connect(struct Curl_cfilter *cf, struct Curl_easy *data,
bool is_tcp_fastopen)
{
struct cf_socket_ctx *ctx = cf->ctx;
#ifdef TCP_FASTOPEN_CONNECT
@ -977,7 +1017,7 @@ static int do_connect(struct Curl_cfilter *cf, struct Curl_easy *data)
int rc = -1;
(void)data;
if(cf->conn->bits.tcp_fastopen) {
if(is_tcp_fastopen) {
#if defined(CONNECT_DATA_IDEMPOTENT) /* Darwin */
# if defined(HAVE_BUILTIN_AVAILABLE)
/* while connectx function is available since macOS 10.11 / iOS 9,
@ -1048,7 +1088,7 @@ static CURLcode cf_tcp_connect(struct Curl_cfilter *cf,
DEBUGF(LOG_CF(data, cf, "connect opened(%d)", (int)ctx->sock));
/* Connect TCP socket */
rc = do_connect(cf, data);
rc = do_connect(cf, data, cf->conn->bits.tcp_fastopen);
if(-1 == rc) {
result = Curl_socket_connect_result(data, ctx->r_ip, SOCKERRNO);
goto out;
@ -1071,6 +1111,7 @@ static CURLcode cf_tcp_connect(struct Curl_cfilter *cf,
else if(rc == CURL_CSELECT_OUT || cf->conn->bits.tcp_fastopen) {
if(verifyconnect(ctx->sock, &ctx->error)) {
/* we are connected with TCP, awesome! */
ctx->connected_at = Curl_now();
set_local_ip(cf, data);
*done = TRUE;
cf->connected = TRUE;
@ -1224,9 +1265,11 @@ static ssize_t cf_socket_send(struct Curl_cfilter *cf, struct Curl_easy *data,
const void *buf, size_t len, CURLcode *err)
{
struct cf_socket_ctx *ctx = cf->ctx;
curl_socket_t fdsave;
ssize_t nwritten;
*err = CURLE_OK;
#ifdef USE_RECV_BEFORE_SEND_WORKAROUND
/* WinSock will destroy unread received data if send() is
failed.
@ -1239,6 +1282,9 @@ static ssize_t cf_socket_send(struct Curl_cfilter *cf, struct Curl_easy *data,
}
#endif
fdsave = cf->conn->sock[cf->sockindex];
cf->conn->sock[cf->sockindex] = ctx->sock;
#if defined(MSG_FASTOPEN) && !defined(TCP_FASTOPEN_CONNECT) /* Linux */
if(cf->conn->bits.tcp_fastopen) {
nwritten = sendto(ctx->sock, buf, len, MSG_FASTOPEN,
@ -1276,8 +1322,10 @@ static ssize_t cf_socket_send(struct Curl_cfilter *cf, struct Curl_easy *data,
*err = CURLE_SEND_ERROR;
}
}
DEBUGF(LOG_CF(data, cf, "send(len=%zu) -> %d, err=%d",
len, (int)nwritten, *err));
cf->conn->sock[cf->sockindex] = fdsave;
return nwritten;
}
@ -1285,6 +1333,7 @@ static ssize_t cf_socket_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
char *buf, size_t len, CURLcode *err)
{
struct cf_socket_ctx *ctx = cf->ctx;
curl_socket_t fdsave;
ssize_t nread;
*err = CURLE_OK;
@ -1299,6 +1348,9 @@ static ssize_t cf_socket_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
}
#endif
fdsave = cf->conn->sock[cf->sockindex];
cf->conn->sock[cf->sockindex] = ctx->sock;
nread = sread(ctx->sock, buf, len);
if(-1 == nread) {
@ -1326,8 +1378,14 @@ static ssize_t cf_socket_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
*err = CURLE_RECV_ERROR;
}
}
DEBUGF(LOG_CF(data, cf, "recv(len=%zu) -> %d, err=%d", len, (int)nread,
*err));
if(nread > 0 && !ctx->got_first_byte) {
ctx->first_byte_at = Curl_now();
ctx->got_first_byte = TRUE;
}
cf->conn->sock[cf->sockindex] = fdsave;
return nread;
}
@ -1374,6 +1432,7 @@ static void cf_socket_active(struct Curl_cfilter *cf, struct Curl_easy *data)
cf->conn->bits.ipv6 = (ctx->addr.family == AF_INET6)? TRUE : FALSE;
#endif
conn_set_primary_ip(cf, data);
set_local_ip(cf, data);
Curl_persistconninfo(data, cf->conn, ctx->l_ip, ctx->l_port);
}
ctx->active = TRUE;
@ -1391,6 +1450,22 @@ static CURLcode cf_socket_cntrl(struct Curl_cfilter *cf,
case CF_CTRL_CONN_INFO_UPDATE:
cf_socket_active(cf, data);
break;
case CF_CTRL_CONN_REPORT_STATS:
switch(ctx->transport) {
case TRNSPRT_UDP:
case TRNSPRT_QUIC:
/* Since UDP connected sockets work different from TCP, we use the
* time of the first byte from the peer as the "connect" time. */
if(ctx->got_first_byte) {
Curl_pgrsTimeWas(data, TIMER_CONNECT, ctx->first_byte_at);
break;
}
/* FALLTHROUGH */
default:
Curl_pgrsTimeWas(data, TIMER_CONNECT, ctx->connected_at);
break;
}
break;
case CF_CTRL_DATA_SETUP:
Curl_persistconninfo(data, cf->conn, ctx->l_ip, ctx->l_port);
break;
@ -1434,6 +1509,33 @@ static bool cf_socket_conn_is_alive(struct Curl_cfilter *cf,
return TRUE;
}
static CURLcode cf_socket_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void *pres2)
{
struct cf_socket_ctx *ctx = cf->ctx;
switch(query) {
case CF_QUERY_SOCKET:
DEBUGASSERT(pres2);
*((curl_socket_t *)pres2) = ctx->sock;
return CURLE_OK;
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->got_first_byte) {
timediff_t ms = Curl_timediff(ctx->first_byte_at, ctx->started_at);
*pres1 = (ms < INT_MAX)? (int)ms : INT_MAX;
}
else
*pres1 = -1;
return CURLE_OK;
default:
break;
}
return cf->next?
cf->next->cft->query(cf->next, data, query, pres1, pres2) :
CURLE_UNKNOWN_OPTION;
}
struct Curl_cftype Curl_cft_tcp = {
"TCP",
CF_TYPE_IP_CONNECT,
@ -1449,13 +1551,14 @@ struct Curl_cftype Curl_cft_tcp = {
cf_socket_cntrl,
cf_socket_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
cf_socket_query,
};
CURLcode Curl_cf_tcp_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai)
const struct Curl_addrinfo *ai,
int transport)
{
struct cf_socket_ctx *ctx = NULL;
struct Curl_cfilter *cf = NULL;
@ -1463,14 +1566,13 @@ CURLcode Curl_cf_tcp_create(struct Curl_cfilter **pcf,
(void)data;
(void)conn;
DEBUGASSERT(transport == TRNSPRT_TCP);
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->transport = TRNSPRT_TCP;
Curl_sock_assign_addr(&ctx->addr, ai, ctx->transport);
ctx->sock = CURL_SOCKET_BAD;
cf_socket_ctx_init(ctx, ai, transport);
result = Curl_cf_create(&cf, &Curl_cft_tcp, ctx);
@ -1484,6 +1586,46 @@ out:
return result;
}
static CURLcode cf_udp_setup_quic(struct Curl_cfilter *cf,
struct Curl_easy *data)
{
struct cf_socket_ctx *ctx = cf->ctx;
int rc;
/* QUIC needs a connected socket, nonblocking */
DEBUGASSERT(ctx->sock != CURL_SOCKET_BAD);
rc = connect(ctx->sock, &ctx->addr.sa_addr, ctx->addr.addrlen);
if(-1 == rc) {
return Curl_socket_connect_result(data, ctx->r_ip, SOCKERRNO);
}
set_local_ip(cf, data);
DEBUGF(LOG_CF(data, cf, "%s socket %d connected: [%s:%d] -> [%s:%d]",
(ctx->transport == TRNSPRT_QUIC)? "QUIC" : "UDP",
ctx->sock, ctx->l_ip, ctx->l_port, ctx->r_ip, ctx->r_port));
(void)curlx_nonblock(ctx->sock, TRUE);
switch(ctx->addr.family) {
#if defined(__linux__) && defined(IP_MTU_DISCOVER)
case AF_INET: {
int val = IP_PMTUDISC_DO;
(void)setsockopt(ctx->sock, IPPROTO_IP, IP_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
#if defined(__linux__) && defined(IPV6_MTU_DISCOVER)
case AF_INET6: {
int val = IPV6_PMTUDISC_DO;
(void)setsockopt(ctx->sock, IPPROTO_IPV6, IPV6_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
}
return CURLE_OK;
}
static CURLcode cf_udp_connect(struct Curl_cfilter *cf,
struct Curl_easy *data,
bool blocking, bool *done)
@ -1500,17 +1642,29 @@ static CURLcode cf_udp_connect(struct Curl_cfilter *cf,
if(ctx->sock == CURL_SOCKET_BAD) {
result = cf_socket_open(cf, data);
if(result) {
DEBUGF(LOG_CF(data, cf, "cf_udp_connect(), open failed -> %d", result));
if(ctx->sock != CURL_SOCKET_BAD) {
socket_close(data, cf->conn, TRUE, ctx->sock);
ctx->sock = CURL_SOCKET_BAD;
}
goto out;
}
if(ctx->transport == TRNSPRT_QUIC) {
result = cf_udp_setup_quic(cf, data);
if(result)
goto out;
DEBUGF(LOG_CF(data, cf, "cf_udp_connect(), opened socket=%d (%s:%d)",
ctx->sock, ctx->l_ip, ctx->l_port));
}
else {
set_local_ip(cf, data);
*done = TRUE;
cf->connected = TRUE;
DEBUGF(LOG_CF(data, cf, "cf_udp_connect(), opened socket=%d "
"(unconnected)", ctx->sock));
}
*done = TRUE;
cf->connected = TRUE;
}
out:
return result;
}
@ -1529,13 +1683,14 @@ struct Curl_cftype Curl_cft_udp = {
cf_socket_cntrl,
cf_socket_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
cf_socket_query,
};
CURLcode Curl_cf_udp_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai)
const struct Curl_addrinfo *ai,
int transport)
{
struct cf_socket_ctx *ctx = NULL;
struct Curl_cfilter *cf = NULL;
@ -1543,14 +1698,13 @@ CURLcode Curl_cf_udp_create(struct Curl_cfilter **pcf,
(void)data;
(void)conn;
DEBUGASSERT(transport == TRNSPRT_UDP || transport == TRNSPRT_QUIC);
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->transport = TRNSPRT_UDP;
Curl_sock_assign_addr(&ctx->addr, ai, ctx->transport);
ctx->sock = CURL_SOCKET_BAD;
cf_socket_ctx_init(ctx, ai, transport);
result = Curl_cf_create(&cf, &Curl_cft_udp, ctx);
@ -1580,13 +1734,14 @@ struct Curl_cftype Curl_cft_unix = {
cf_socket_cntrl,
cf_socket_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
cf_socket_query,
};
CURLcode Curl_cf_unix_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai)
const struct Curl_addrinfo *ai,
int transport)
{
struct cf_socket_ctx *ctx = NULL;
struct Curl_cfilter *cf = NULL;
@ -1594,14 +1749,13 @@ CURLcode Curl_cf_unix_create(struct Curl_cfilter **pcf,
(void)data;
(void)conn;
DEBUGASSERT(transport == TRNSPRT_UNIX);
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->transport = TRNSPRT_UNIX;
Curl_sock_assign_addr(&ctx->addr, ai, ctx->transport);
ctx->sock = CURL_SOCKET_BAD;
cf_socket_ctx_init(ctx, ai, transport);
result = Curl_cf_create(&cf, &Curl_cft_unix, ctx);
@ -1644,7 +1798,7 @@ struct Curl_cftype Curl_cft_tcp_accept = {
cf_socket_cntrl,
cf_socket_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
cf_socket_query,
};
CURLcode Curl_conn_tcp_listen_set(struct Curl_easy *data,
@ -1676,6 +1830,7 @@ CURLcode Curl_conn_tcp_listen_set(struct Curl_easy *data,
set_remote_ip(cf, data);
set_local_ip(cf, data);
ctx->active = TRUE;
ctx->connected_at = Curl_now();
cf->connected = TRUE;
DEBUGF(LOG_CF(data, cf, "Curl_conn_tcp_listen_set(%d)", (int)ctx->sock));
@ -1707,6 +1862,7 @@ CURLcode Curl_conn_tcp_accepted_set(struct Curl_easy *data,
set_local_ip(cf, data);
ctx->active = TRUE;
ctx->accepted = TRUE;
ctx->connected_at = Curl_now();
cf->connected = TRUE;
DEBUGF(LOG_CF(data, cf, "Curl_conn_tcp_accepted_set(%d)", (int)ctx->sock));
@ -1722,10 +1878,11 @@ bool Curl_cf_is_socket(struct Curl_cfilter *cf)
}
CURLcode Curl_cf_socket_peek(struct Curl_cfilter *cf,
struct Curl_easy *data,
curl_socket_t *psock,
const struct Curl_sockaddr_ex **paddr,
const char **premote_ip_str,
int *premote_port)
const char **pr_ip_str, int *pr_port,
const char **pl_ip_str, int *pl_port)
{
if(Curl_cf_is_socket(cf) && cf->ctx) {
struct cf_socket_ctx *ctx = cf->ctx;
@ -1734,10 +1891,17 @@ CURLcode Curl_cf_socket_peek(struct Curl_cfilter *cf,
*psock = ctx->sock;
if(paddr)
*paddr = &ctx->addr;
if(premote_ip_str)
*premote_ip_str = ctx->r_ip;
if(premote_port)
*premote_port = ctx->r_port;
if(pr_ip_str)
*pr_ip_str = ctx->r_ip;
if(pr_port)
*pr_port = ctx->r_port;
if(pl_port ||pl_ip_str) {
set_local_ip(cf, data);
if(pl_ip_str)
*pl_ip_str = ctx->l_ip;
if(pl_port)
*pl_port = ctx->l_port;
}
return CURLE_OK;
}
return CURLE_FAILED_INIT;


@ -116,7 +116,8 @@ void Curl_sock_assign_addr(struct Curl_sockaddr_ex *dest,
CURLcode Curl_cf_tcp_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai);
const struct Curl_addrinfo *ai,
int transport);
/**
* Creates a cfilter that opens a UDP socket to the given address
@ -128,7 +129,8 @@ CURLcode Curl_cf_tcp_create(struct Curl_cfilter **pcf,
CURLcode Curl_cf_udp_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai);
const struct Curl_addrinfo *ai,
int transport);
/**
* Creates a cfilter that opens a UNIX socket to the given address
@ -140,7 +142,8 @@ CURLcode Curl_cf_udp_create(struct Curl_cfilter **pcf,
CURLcode Curl_cf_unix_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai);
const struct Curl_addrinfo *ai,
int transport);
/**
* Creates a cfilter that keeps a listening socket.
@ -168,15 +171,18 @@ bool Curl_cf_is_socket(struct Curl_cfilter *cf);
* The filter owns all returned values.
* @param psock pointer to hold socket descriptor or NULL
* @param paddr pointer to hold addr reference or NULL
* @param premote_ip_str pointer to hold remote addr as string or NULL
* @param premote_port pointer to hold remote port number or NULL
* @param pr_ip_str pointer to hold remote addr as string or NULL
* @param pr_port pointer to hold remote port number or NULL
* @param pl_ip_str pointer to hold local addr as string or NULL
* @param pl_port pointer to hold local port number or NULL
* Returns error if the filter is of invalid type.
*/
CURLcode Curl_cf_socket_peek(struct Curl_cfilter *cf,
struct Curl_easy *data,
curl_socket_t *psock,
const struct Curl_sockaddr_ex **paddr,
const char **premote_ip_str,
int *premote_port);
const char **pr_ip_str, int *pr_port,
const char **pl_ip_str, int *pl_port);
extern struct Curl_cftype Curl_cft_tcp;
extern struct Curl_cftype Curl_cft_udp;


@ -141,7 +141,7 @@ CURLcode Curl_cf_def_conn_keep_alive(struct Curl_cfilter *cf,
CURLcode Curl_cf_def_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2)
int query, int *pres1, void *pres2)
{
return cf->next?
cf->next->cft->query(cf->next, data, query, pres1, pres2) :
@ -370,6 +370,7 @@ CURLcode Curl_conn_connect(struct Curl_easy *data,
result = cf->cft->connect(cf, data, blocking, done);
if(!result && *done) {
Curl_conn_ev_update_info(data, data->conn);
Curl_conn_ev_report_stats(data, data->conn);
data->conn->keepalive = Curl_now();
}
}
@ -514,6 +515,28 @@ CURLcode Curl_conn_cf_cntrl(struct Curl_cfilter *cf,
return result;
}
curl_socket_t Curl_conn_cf_get_socket(struct Curl_cfilter *cf,
struct Curl_easy *data)
{
curl_socket_t sock;
if(cf && !cf->cft->query(cf, data, CF_QUERY_SOCKET, NULL, &sock))
return sock;
return CURL_SOCKET_BAD;
}
curl_socket_t Curl_conn_get_socket(struct Curl_easy *data, int sockindex)
{
struct Curl_cfilter *cf;
cf = data->conn? data->conn->cfilter[sockindex] : NULL;
/* if the top filter has not connected, ask it (and its sub-filters)
* for the socket. Otherwise conn->sock[sockindex] should have it.
*/
if(cf && !cf->connected)
return Curl_conn_cf_get_socket(cf, data);
return data->conn? data->conn->sock[sockindex] : CURL_SOCKET_BAD;
}
static CURLcode cf_cntrl_all(struct connectdata *conn,
struct Curl_easy *data,
bool ignore_result,
@ -585,6 +608,12 @@ void Curl_conn_ev_update_info(struct Curl_easy *data,
cf_cntrl_all(conn, data, TRUE, CF_CTRL_CONN_INFO_UPDATE, 0, NULL);
}
void Curl_conn_ev_report_stats(struct Curl_easy *data,
struct connectdata *conn)
{
cf_cntrl_all(conn, data, TRUE, CF_CTRL_CONN_REPORT_STATS, 0, NULL);
}
bool Curl_conn_is_alive(struct Curl_easy *data, struct connectdata *conn)
{
struct Curl_cfilter *cf = conn->cfilter[FIRSTSOCKET];


@ -109,6 +109,8 @@ typedef CURLcode Curl_cft_conn_keep_alive(struct Curl_cfilter *cf,
#define CF_CTRL_DATA_DONE_SEND 8 /* 0 NULL ignored */
/* update conn info at connection and data */
#define CF_CTRL_CONN_INFO_UPDATE (256+0) /* 0 NULL ignored */
/* report conn statistics (timers) for connection and data */
#define CF_CTRL_CONN_REPORT_STATS (256+1) /* 0 NULL ignored */
/**
* Handle event/control for the filter.
@ -124,9 +126,18 @@ typedef CURLcode Curl_cft_cntrl(struct Curl_cfilter *cf,
* - MAX_CONCURRENT: the maximum number of parallel transfers the filter
* chain expects to handle at the same time.
* default: 1 if no filter overrides.
* - CONNECT_REPLY_MS: milliseconds until the first indication of a server
* response was received on a connect. For TCP, this
* reflects the time until the socket connected. On UDP
* this gives the time the first bytes from the server
* were received.
* -1 if not determined yet.
* - CF_QUERY_SOCKET: the socket used by the filter chain
*/
/* query res1 res2 */
#define CF_QUERY_MAX_CONCURRENT 1 /* number - */
#define CF_QUERY_CONNECT_REPLY_MS 2 /* number - */
#define CF_QUERY_SOCKET 3 /* - curl_socket_t */
/**
* Query the cfilter for properties. Filters ignorant of a query will
@ -134,7 +145,7 @@ typedef CURLcode Curl_cft_cntrl(struct Curl_cfilter *cf,
*/
typedef CURLcode Curl_cft_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2);
int query, int *pres1, void *pres2);
/**
* Type flags for connection filters. A filter can have none, one or
@ -210,7 +221,7 @@ CURLcode Curl_cf_def_conn_keep_alive(struct Curl_cfilter *cf,
struct Curl_easy *data);
CURLcode Curl_cf_def_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2);
int query, int *pres1, void *pres2);
/**
* Create a new filter instance, unattached to the filter chain.
@ -279,6 +290,12 @@ CURLcode Curl_conn_cf_cntrl(struct Curl_cfilter *cf,
bool ignore_result,
int event, int arg1, void *arg2);
/**
* Get the socket used by the filter chain starting at `cf`.
* Returns CURL_SOCKET_BAD if not available.
*/
curl_socket_t Curl_conn_cf_get_socket(struct Curl_cfilter *cf,
struct Curl_easy *data);
#define CURL_CF_SSL_DEFAULT -1
@ -333,6 +350,12 @@ void Curl_conn_close(struct Curl_easy *data, int sockindex);
bool Curl_conn_data_pending(struct Curl_easy *data,
int sockindex);
/**
* Return the socket used on data's connection for the index.
* Returns CURL_SOCKET_BAD if not available.
*/
curl_socket_t Curl_conn_get_socket(struct Curl_easy *data, int sockindex);
/**
* Get any select fd flags and the socket filters at chain `sockindex`
* at connection `conn` might be waiting for.
@ -411,6 +434,12 @@ CURLcode Curl_conn_ev_data_pause(struct Curl_easy *data, bool do_pause);
void Curl_conn_ev_update_info(struct Curl_easy *data,
struct connectdata *conn);
/**
* Inform connection filters to report statistics.
*/
void Curl_conn_ev_report_stats(struct Curl_easy *data,
struct connectdata *conn);
/**
* Check if FIRSTSOCKET's cfilter chain deems connection alive.
*/


@ -59,6 +59,7 @@
#include "strerror.h"
#include "cfilters.h"
#include "connect.h"
#include "cf-http.h"
#include "cf-socket.h"
#include "select.h"
#include "url.h" /* for Curl_safefree() */
@ -445,6 +446,7 @@ static void baller_initiate(struct Curl_cfilter *cf,
struct Curl_easy *data,
struct eyeballer *baller)
{
struct cf_he_ctx *ctx = cf->ctx;
struct Curl_cfilter *cf_prev = baller->cf;
struct Curl_cfilter *wcf;
CURLcode result;
@ -454,7 +456,8 @@ static void baller_initiate(struct Curl_cfilter *cf,
socket gets a different file descriptor, which can prevent bugs when
the curl_multi_socket_action interface is used with certain select()
replacements such as kqueue. */
result = baller->cf_create(&baller->cf, data, cf->conn, baller->addr);
result = baller->cf_create(&baller->cf, data, cf->conn, baller->addr,
ctx->transport);
if(result)
goto out;
@ -877,7 +880,7 @@ static CURLcode cf_he_connect(struct Curl_cfilter *cf,
switch(ctx->state) {
case SCFST_INIT:
DEBUGASSERT(CURL_SOCKET_BAD == cf->conn->sock[cf->sockindex]);
DEBUGASSERT(CURL_SOCKET_BAD == Curl_conn_cf_get_socket(cf, data));
DEBUGASSERT(!cf->connected);
result = start_connect(cf, data, ctx->remotehost);
if(result)
@ -900,9 +903,7 @@ static CURLcode cf_he_connect(struct Curl_cfilter *cf,
Curl_conn_cf_cntrl(cf->next, data, TRUE,
CF_CTRL_CONN_INFO_UPDATE, 0, NULL);
Curl_pgrsTime(data, TIMER_CONNECT); /* we're connected already */
if(Curl_conn_is_ssl(cf->conn, FIRSTSOCKET) ||
(cf->conn->handler->protocol & PROTO_FAMILY_SSH))
if(cf->conn->handler->protocol & PROTO_FAMILY_SSH)
Curl_pgrsTime(data, TIMER_APPCONNECT); /* we're connected already */
Curl_verboseconnect(data, cf->conn);
data->info.numconnects++; /* to track the # of connections made */
@ -950,6 +951,44 @@ static bool cf_he_data_pending(struct Curl_cfilter *cf,
return FALSE;
}
static CURLcode cf_he_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void *pres2)
{
struct cf_he_ctx *ctx = cf->ctx;
if(!cf->connected) {
switch(query) {
case CF_QUERY_CONNECT_REPLY_MS: {
int reply_ms = -1;
size_t i;
for(i = 0; i < sizeof(ctx->baller)/sizeof(ctx->baller[0]); i++) {
struct eyeballer *baller = ctx->baller[i];
int breply_ms;
if(baller && baller->cf &&
!baller->cf->cft->query(baller->cf, data, query,
&breply_ms, NULL)) {
if(breply_ms >= 0 && (reply_ms < 0 || breply_ms < reply_ms))
reply_ms = breply_ms;
}
}
*pres1 = reply_ms;
DEBUGF(LOG_CF(data, cf, "query connect reply: %dms", *pres1));
return CURLE_OK;
}
default:
break;
}
}
return cf->next?
cf->next->cft->query(cf->next, data, query, pres1, pres2) :
CURLE_UNKNOWN_OPTION;
}
static void cf_he_destroy(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_he_ctx *ctx = cf->ctx;
@ -977,14 +1016,15 @@ struct Curl_cftype Curl_cft_happy_eyeballs = {
Curl_cf_def_cntrl,
Curl_cf_def_conn_is_alive,
Curl_cf_def_conn_keep_alive,
Curl_cf_def_query,
cf_he_query,
};
CURLcode Curl_cf_happy_eyeballs_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
cf_ip_connect_create *cf_create,
const struct Curl_dns_entry *remotehost)
const struct Curl_dns_entry *remotehost,
int transport)
{
struct cf_he_ctx *ctx = NULL;
CURLcode result;
@ -997,6 +1037,7 @@ CURLcode Curl_cf_happy_eyeballs_create(struct Curl_cfilter **pcf,
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->transport = transport;
ctx->cf_create = cf_create;
ctx->remotehost = remotehost;
@ -1073,7 +1114,8 @@ static CURLcode cf_he_insert_after(struct Curl_cfilter *cf_at,
return CURLE_UNSUPPORTED_PROTOCOL;
}
result = Curl_cf_happy_eyeballs_create(&cf, data, cf_at->conn,
cf_create, remotehost);
cf_create, remotehost,
transport);
if(result)
return result;
@ -1095,6 +1137,7 @@ struct cf_setup_ctx {
cf_setup_state state;
const struct Curl_dns_entry *remotehost;
int ssl_mode;
int transport;
};
static CURLcode cf_setup_connect(struct Curl_cfilter *cf,
@ -1118,8 +1161,7 @@ connect_sub_chain:
}
if(ctx->state < CF_SETUP_CNNCT_EYEBALLS) {
result = cf_he_insert_after(cf, data, ctx->remotehost,
cf->conn->transport);
result = cf_he_insert_after(cf, data, ctx->remotehost, ctx->transport);
if(result)
return result;
ctx->state = CF_SETUP_CNNCT_EYEBALLS;
@ -1244,6 +1286,75 @@ struct Curl_cftype Curl_cft_setup = {
Curl_cf_def_query,
};
static CURLcode cf_setup_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
int transport,
int ssl_mode)
{
struct Curl_cfilter *cf = NULL;
struct cf_setup_ctx *ctx;
CURLcode result = CURLE_OK;
(void)data;
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->state = CF_SETUP_INIT;
ctx->remotehost = remotehost;
ctx->ssl_mode = ssl_mode;
ctx->transport = transport;
result = Curl_cf_create(&cf, &Curl_cft_setup, ctx);
if(result)
goto out;
ctx = NULL;
out:
*pcf = result? NULL : cf;
free(ctx);
return result;
}
CURLcode Curl_cf_setup_add(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost,
int transport,
int ssl_mode)
{
struct Curl_cfilter *cf;
CURLcode result = CURLE_OK;
DEBUGASSERT(data);
result = cf_setup_create(&cf, data, remotehost, transport, ssl_mode);
if(result)
goto out;
Curl_conn_cf_add(data, conn, sockindex, cf);
out:
return result;
}
CURLcode Curl_cf_setup_insert_after(struct Curl_cfilter *cf_at,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
int transport,
int ssl_mode)
{
struct Curl_cfilter *cf;
CURLcode result;
DEBUGASSERT(data);
result = cf_setup_create(&cf, data, remotehost, transport, ssl_mode);
if(result)
goto out;
Curl_conn_cf_insert_after(cf_at, cf);
out:
return result;
}
CURLcode Curl_conn_setup(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
@ -1251,34 +1362,31 @@ CURLcode Curl_conn_setup(struct Curl_easy *data,
int ssl_mode)
{
CURLcode result = CURLE_OK;
struct cf_setup_ctx *ctx = NULL;
DEBUGASSERT(data);
/* If no filter is set, we add the "default" setup connection filter.
*/
if(!conn->cfilter[sockindex]) {
struct Curl_cfilter *cf;
DEBUGASSERT(conn->handler);
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
ctx->state = CF_SETUP_INIT;
ctx->remotehost = remotehost;
ctx->ssl_mode = ssl_mode;
#if !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER)
if(!conn->cfilter[sockindex] &&
conn->handler->protocol == CURLPROTO_HTTPS &&
(ssl_mode == CURL_CF_SSL_ENABLE || ssl_mode != CURL_CF_SSL_DISABLE)) {
result = Curl_cf_create(&cf, &Curl_cft_setup, ctx);
result = Curl_cf_https_setup(data, conn, sockindex, remotehost);
if(result)
goto out;
}
#endif /* !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER) */
/* Still no cfilter set, apply default. */
if(!conn->cfilter[sockindex]) {
result = Curl_cf_setup_add(data, conn, sockindex, remotehost,
conn->transport, ssl_mode);
if(result)
goto out;
ctx = NULL;
Curl_conn_cf_add(data, conn, sockindex, cf);
}
DEBUGASSERT(conn->cfilter[sockindex]);
out:
free(ctx);
return result;
}


@ -101,7 +101,8 @@ void Curl_conncontrol(struct connectdata *conn,
typedef CURLcode cf_ip_connect_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai);
const struct Curl_addrinfo *ai,
int transport);
/**
* Create a happy eyeball connection filter that uses the, once resolved,
@ -118,13 +119,26 @@ Curl_cf_happy_eyeballs_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
cf_ip_connect_create *cf_create,
const struct Curl_dns_entry *remotehost);
const struct Curl_dns_entry *remotehost,
int transport);
CURLcode Curl_cf_setup_add(struct Curl_easy *data,
struct connectdata *conn,
int sockindex,
const struct Curl_dns_entry *remotehost,
int transport,
int ssl_mode);
CURLcode Curl_cf_setup_insert_after(struct Curl_cfilter *cf_at,
struct Curl_easy *data,
const struct Curl_dns_entry *remotehost,
int transport,
int ssl_mode);
/**
* Setup the cfilters at `sockindex` in connection `conn`, invoking
* the instance `setup(remotehost)` methods. If no filter chain is
* installed yet, inspects the configuration in `data` to install a
* suitable filter chain.
* Setup the cfilters at `sockindex` in connection `conn`.
* If no filter chain is installed yet, inspects the configuration
* in `data` and `conn` to install a suitable filter chain.
*/
CURLcode Curl_conn_setup(struct Curl_easy *data,
struct connectdata *conn,


@ -38,6 +38,7 @@
#include "connect.h"
#include "http2.h"
#include "http_proxy.h"
#include "cf-http.h"
#include "socks.h"
#include "strtok.h"
#include "vtls/vtls.h"
@ -166,6 +167,9 @@ static struct Curl_cftype *cf_types[] = {
#endif /* !CURL_DISABLE_PROXY */
#ifdef ENABLE_QUIC
&Curl_cft_http3,
#endif
#if !defined(CURL_DISABLE_HTTP) && !defined(USE_HYPER)
&Curl_cft_http_connect,
#endif
NULL,
};


@ -219,38 +219,6 @@ const struct Curl_handler Curl_handler_wss = {
#endif
static CURLcode h3_setup_conn(struct Curl_easy *data,
struct connectdata *conn)
{
#ifdef ENABLE_QUIC
/* We want HTTP/3 directly, setup the filter chain ourself,
* overriding the default behaviour. */
DEBUGASSERT(conn->transport == TRNSPRT_QUIC);
if(!(conn->handler->flags & PROTOPT_SSL)) {
failf(data, "HTTP/3 requested for non-HTTPS URL");
return CURLE_URL_MALFORMAT;
}
#ifndef CURL_DISABLE_PROXY
if(conn->bits.socksproxy) {
failf(data, "HTTP/3 is not supported over a SOCKS proxy");
return CURLE_URL_MALFORMAT;
}
if(conn->bits.httpproxy && conn->bits.tunnel_proxy) {
failf(data, "HTTP/3 is not supported over a HTTP proxy");
return CURLE_URL_MALFORMAT;
}
#endif
return CURLE_OK;
#else /* ENABLE_QUIC */
(void)conn;
(void)data;
DEBUGF(infof(data, "QUIC is not supported in this build"));
return CURLE_NOT_BUILT_IN;
#endif /* !ENABLE_QUIC */
}
static CURLcode http_setup_conn(struct Curl_easy *data,
struct connectdata *conn)
{
@ -266,13 +234,16 @@ static CURLcode http_setup_conn(struct Curl_easy *data,
Curl_mime_initpart(&http->form);
data->req.p.http = http;
if(data->state.httpwant == CURL_HTTP_VERSION_3) {
if((data->state.httpwant == CURL_HTTP_VERSION_3)
|| (data->state.httpwant == CURL_HTTP_VERSION_3ONLY)) {
CURLcode result = Curl_conn_may_http3(data, conn);
if(result)
return result;
/* TODO: HTTP lower version eyeballing */
conn->transport = TRNSPRT_QUIC;
}
if(conn->transport == TRNSPRT_QUIC) {
return h3_setup_conn(data, conn);
}
return CURLE_OK;
}
@ -1320,7 +1291,7 @@ CURLcode Curl_buffer_send(struct dynbuf *in,
DEBUGASSERT(socketindex <= SECONDARYSOCKET);
sockfd = conn->sock[socketindex];
sockfd = Curl_conn_get_socket(data, socketindex);
/* The looping below is required since we use non-blocking sockets, but due
to the circumstances we will just loop and try again and again etc */
@ -1571,8 +1542,8 @@ static int http_getsock_do(struct Curl_easy *data,
curl_socket_t *socks)
{
/* write mode */
(void)data;
socks[0] = conn->sock[FIRSTSOCKET];
(void)conn;
socks[0] = Curl_conn_get_socket(data, FIRSTSOCKET);
return GETSOCK_WRITESOCK(0);
}
@ -3008,33 +2979,25 @@ CURLcode Curl_http(struct Curl_easy *data, bool *done)
the rest of the request in the PERFORM phase. */
*done = TRUE;
if(Curl_conn_is_http3(data, conn, FIRSTSOCKET)
|| Curl_conn_is_http2(data, conn, FIRSTSOCKET)
|| conn->httpversion == 20 /* like to get rid of this */) {
/* all fine, we are set */
}
else { /* undecided */
switch(conn->alpn) {
case CURL_HTTP_VERSION_2:
result = Curl_http2_switch(data, conn, FIRSTSOCKET, NULL, 0);
switch(conn->alpn) {
case CURL_HTTP_VERSION_3:
DEBUGASSERT(Curl_conn_is_http3(data, conn, FIRSTSOCKET));
break;
case CURL_HTTP_VERSION_2:
DEBUGASSERT(Curl_conn_is_http2(data, conn, FIRSTSOCKET));
break;
case CURL_HTTP_VERSION_1_1:
/* continue with HTTP/1.1 when explicitly requested */
break;
default:
/* Check if user wants to use HTTP/2 with clear TCP */
if(Curl_http2_may_switch(data, conn, FIRSTSOCKET)) {
DEBUGF(infof(data, "HTTP/2 over clean TCP"));
result = Curl_http2_switch(data, conn, FIRSTSOCKET);
if(result)
return result;
break;
case CURL_HTTP_VERSION_1_1:
/* continue with HTTP/1.1 when explicitly requested */
break;
default:
/* Check if user wants to use HTTP/2 with clear TCP */
if(Curl_http2_may_switch(data, conn, FIRSTSOCKET)) {
DEBUGF(infof(data, "HTTP/2 over clean TCP"));
result = Curl_http2_switch(data, conn, FIRSTSOCKET, NULL, 0);
if(result)
return result;
}
break;
}
break;
}
http = data->req.p.http;
@ -3936,8 +3899,8 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
/* switch to http2 now. The bytes after response headers
are also processed here, otherwise they are lost. */
result = Curl_http2_switch(data, conn, FIRSTSOCKET,
k->str, *nread);
result = Curl_http2_upgrade(data, conn, FIRSTSOCKET,
k->str, *nread);
if(result)
return result;
*nread = 0;


@ -248,7 +248,8 @@ struct HTTP {
const uint8_t *upload_mem; /* points to a buffer to read from */
size_t upload_len; /* size of the buffer 'upload_mem' points to */
curl_off_t upload_left; /* number of bytes left to upload */
bool closed; /* TRUE on HTTP2 stream close */
bool closed; /* TRUE on stream close */
bool reset; /* TRUE on stream reset */
#endif
#ifdef ENABLE_QUIC
@ -274,7 +275,6 @@ struct HTTP {
#else /* !_WIN32 */
pthread_mutex_t recv_lock;
#endif /* _WIN32 */
/* Receive Buffer (Headers and Data) */
uint8_t* recv_buf;
size_t recv_buf_alloc;
@ -288,6 +288,10 @@ struct HTTP {
/* General Receive Error */
CURLcode recv_error;
#endif /* USE_MSH3 */
#ifdef USE_QUICHE
bool h3_got_header; /* TRUE when h3 stream has recvd some HEADER */
bool h3_recving_data; /* TRUE when h3 stream is reading DATA */
#endif /* USE_QUICHE */
};
CURLcode Curl_http_size(struct Curl_easy *data);

[File diff suppressed because it is too large]


@ -49,24 +49,32 @@ bool Curl_h2_http_1_1_error(struct Curl_easy *data);
bool Curl_conn_is_http2(const struct Curl_easy *data,
const struct connectdata *conn,
int sockindex);
bool Curl_cf_is_http2(struct Curl_cfilter *cf, const struct Curl_easy *data);
bool Curl_http2_may_switch(struct Curl_easy *data,
struct connectdata *conn,
int sockindex);
CURLcode Curl_http2_switch(struct Curl_easy *data,
struct connectdata *conn, int sockindex,
const char *ptr, size_t nread);
struct connectdata *conn, int sockindex);
CURLcode Curl_http2_switch_at(struct Curl_cfilter *cf, struct Curl_easy *data);
CURLcode Curl_http2_upgrade(struct Curl_easy *data,
struct connectdata *conn, int sockindex,
const char *ptr, size_t nread);
extern struct Curl_cftype Curl_cft_nghttp2;
#else /* USE_NGHTTP2 */
#define Curl_cf_is_http2(a,b) FALSE
#define Curl_conn_is_http2(a,b,c) FALSE
#define Curl_http2_may_switch(a,b,c) FALSE
#define Curl_http2_request_upgrade(x,y) CURLE_UNSUPPORTED_PROTOCOL
#define Curl_http2_switch(a,b,c,d,e) CURLE_UNSUPPORTED_PROTOCOL
#define Curl_http2_switch(a,b,c) CURLE_UNSUPPORTED_PROTOCOL
#define Curl_http2_upgrade(a,b,c,d,e) CURLE_UNSUPPORTED_PROTOCOL
#define Curl_h2_http_1_1_error(x) 0
#endif
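The prototypes above split the former catch-all switch function. A hedged sketch of which entry point belongs where, assembled only from the call sites changed in this patch; the wrapper and its parameters are hypothetical:

static CURLcode example_enter_h2(struct Curl_easy *data,
                                 struct connectdata *conn,
                                 bool via_101_upgrade,
                                 const char *leftover, size_t nleft)
{
  if(via_101_upgrade)
    /* "Upgrade: h2c" answered with 101: bytes that followed the response
     * headers already belong to HTTP/2 and must be handed over */
    return Curl_http2_upgrade(data, conn, FIRSTSOCKET, leftover, nleft);
  /* ALPN or prior-knowledge clear-text h2: nothing is buffered yet */
  return Curl_http2_switch(data, conn, FIRSTSOCKET);
}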


@ -267,10 +267,11 @@ static CURLcode CONNECT_host(struct Curl_easy *data,
}
#ifndef USE_HYPER
static CURLcode start_CONNECT(struct Curl_easy *data,
struct connectdata *conn,
static CURLcode start_CONNECT(struct Curl_cfilter *cf,
struct Curl_easy *data,
struct tunnel_state *ts)
{
struct connectdata *conn = cf->conn;
char *hostheader = NULL;
char *host = NULL;
const char *httpv;
@ -476,7 +477,7 @@ static CURLcode recv_CONNECT_resp(struct Curl_cfilter *cf,
{
CURLcode result = CURLE_OK;
struct SingleRequest *k = &data->req;
curl_socket_t tunnelsocket = cf->conn->sock[ts->sockindex];
curl_socket_t tunnelsocket = Curl_conn_cf_get_socket(cf, data);
char *linep;
size_t perline;
int error;
@ -665,12 +666,13 @@ static CURLcode recv_CONNECT_resp(struct Curl_cfilter *cf,
#else /* USE_HYPER */
/* The Hyper version of CONNECT */
static CURLcode start_CONNECT(struct Curl_easy *data,
struct connectdata *conn,
static CURLcode start_CONNECT(struct Curl_cfilter *cf,
struct Curl_easy *data,
struct tunnel_state *ts)
{
struct connectdata *conn = cf->conn;
struct hyptransfer *h = &data->hyp;
curl_socket_t tunnelsocket = conn->sock[ts->sockindex];
curl_socket_t tunnelsocket = Curl_conn_cf_get_socket(cf, data);
hyper_io *io = NULL;
hyper_request *req = NULL;
hyper_headers *headers = NULL;
@ -971,7 +973,7 @@ static CURLcode CONNECT(struct Curl_cfilter *cf,
case TUNNEL_INIT:
/* Prepare the CONNECT request and make a first attempt to send. */
DEBUGF(LOG_CF(data, cf, "CONNECT start"));
result = start_CONNECT(data, cf->conn, ts);
result = start_CONNECT(cf, data, ts);
if(result)
goto out;
tunnel_go_state(cf, ts, TUNNEL_CONNECT, data);
@ -1125,15 +1127,13 @@ static int http_proxy_cf_get_select_socks(struct Curl_cfilter *cf,
curl_socket_t *socks)
{
struct tunnel_state *ts = cf->ctx;
struct connectdata *conn = cf->conn;
int fds;
DEBUGASSERT(conn);
fds = cf->next->cft->get_select_socks(cf->next, data, socks);
if(!fds && cf->next->connected && !cf->connected) {
/* If we are not connected, but the filter "below" is
* and not waiting on something, we are tunneling. */
socks[0] = conn->sock[cf->sockindex];
socks[0] = Curl_conn_cf_get_socket(cf, data);
if(ts) {
/* when we've sent a CONNECT to a proxy, we should rather either
wait for the socket to become readable to be able to get the
@ -1347,15 +1347,13 @@ static int cf_haproxy_get_select_socks(struct Curl_cfilter *cf,
struct Curl_easy *data,
curl_socket_t *socks)
{
struct connectdata *conn = cf->conn;
int fds;
DEBUGASSERT(conn);
fds = cf->next->cft->get_select_socks(cf->next, data, socks);
if(!fds && cf->next->connected && !cf->connected) {
/* If we are not connected, but the filter "below" is
* and not waiting on something, we are sending. */
socks[0] = conn->sock[cf->sockindex];
socks[0] = Curl_conn_cf_get_socket(cf, data);
return GETSOCK_WRITESOCK(0);
}
return fds;


@ -166,14 +166,11 @@ void Curl_pgrsResetTransferSizes(struct Curl_easy *data)
/*
*
* Curl_pgrsTime(). Store the current time at the given label. This fetches a
* fresh "now" and returns it.
*
* @unittest: 1399
* Curl_pgrsTimeWas(). Store the timestamp time at the given label.
*/
struct curltime Curl_pgrsTime(struct Curl_easy *data, timerid timer)
void Curl_pgrsTimeWas(struct Curl_easy *data, timerid timer,
struct curltime timestamp)
{
struct curltime now = Curl_now();
timediff_t *delta = NULL;
switch(timer) {
@ -183,15 +180,15 @@ struct curltime Curl_pgrsTime(struct Curl_easy *data, timerid timer)
break;
case TIMER_STARTOP:
/* This is set at the start of a transfer */
data->progress.t_startop = now;
data->progress.t_startop = timestamp;
break;
case TIMER_STARTSINGLE:
/* This is set at the start of each single fetch */
data->progress.t_startsingle = now;
data->progress.t_startsingle = timestamp;
data->progress.is_t_startransfer_set = false;
break;
case TIMER_STARTACCEPT:
data->progress.t_acceptdata = now;
data->progress.t_acceptdata = timestamp;
break;
case TIMER_NAMELOOKUP:
delta = &data->progress.t_nslookup;
@ -214,7 +211,7 @@ struct curltime Curl_pgrsTime(struct Curl_easy *data, timerid timer)
* changing the t_starttransfer time.
*/
if(data->progress.is_t_startransfer_set) {
return now;
return;
}
else {
data->progress.is_t_startransfer_set = true;
@ -224,15 +221,30 @@ struct curltime Curl_pgrsTime(struct Curl_easy *data, timerid timer)
/* this is the normal end-of-transfer thing */
break;
case TIMER_REDIRECT:
data->progress.t_redirect = Curl_timediff_us(now, data->progress.start);
data->progress.t_redirect = Curl_timediff_us(timestamp,
data->progress.start);
break;
}
if(delta) {
timediff_t us = Curl_timediff_us(now, data->progress.t_startsingle);
timediff_t us = Curl_timediff_us(timestamp, data->progress.t_startsingle);
if(us < 1)
us = 1; /* make sure at least one microsecond passed */
*delta += us;
}
}
/*
*
* Curl_pgrsTime(). Store the current time at the given label. This fetches a
* fresh "now" and returns it.
*
* @unittest: 1399
*/
struct curltime Curl_pgrsTime(struct Curl_easy *data, timerid timer)
{
struct curltime now = Curl_now();
Curl_pgrsTimeWas(data, timer, now);
return now;
}


@ -57,6 +57,13 @@ timediff_t Curl_pgrsLimitWaitTime(curl_off_t cursize,
curl_off_t limit,
struct curltime start,
struct curltime now);
/**
* Update progress timer with the elapsed time from its start to `timestamp`.
* This allows updating timers later and is used by happy eyeballing, where
* we only want to record the winner's times.
*/
void Curl_pgrsTimeWas(struct Curl_easy *data, timerid timer,
struct curltime timestamp);
#define PGRS_HIDE (1<<4)
#define PGRS_UL_SIZE_KNOWN (1<<5)
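Sketch of the intended use of the Curl_pgrsTimeWas() declaration above, mirroring the CF_CTRL_CONN_REPORT_STATS handlers in the HTTP/3 filters later in this patch; the context struct and function names are made up:

struct example_cf_ctx {
  struct curltime handshake_at; /* recorded when the handshake finished */
};

static void example_report_stats(struct Curl_cfilter *cf,
                                 struct Curl_easy *data)
{
  struct example_cf_ctx *ctx = cf->ctx;
  /* called on CF_CTRL_CONN_REPORT_STATS, i.e. only on the filter chain
   * that won the eyeballing, so the losing attempt never skews timers */
  Curl_pgrsTimeWas(data, TIMER_APPCONNECT, ctx->handshake_at);
}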


@ -1169,7 +1169,7 @@ static int socks_cf_get_select_socks(struct Curl_cfilter *cf,
if(!fds && cf->next->connected && !cf->connected && sx) {
/* If we are not connected, the filter below is and has nothing
* to wait on, we determine what to wait for. */
socks[0] = cf->conn->sock[cf->sockindex];
socks[0] = Curl_conn_cf_get_socket(cf, data);
switch(sx->state) {
case CONNECT_RESOLVING:
case CONNECT_SOCKS_READ:


@ -1233,6 +1233,7 @@ typedef enum {
EXPIRE_TOOFAST,
EXPIRE_QUIC,
EXPIRE_FTP_ACCEPT,
EXPIRE_ALPN_EYEBALLS,
EXPIRE_LAST /* not an actual timer, used as a marker only */
} expire_id;
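The new id gives the eyeball logic its own expire slot. A minimal, assumed usage; only Curl_expire() and EXPIRE_ALPN_EYEBALLS come from the patch, the wrapper and its timeout value are illustrative:

static void example_arm_soft_timeout(struct Curl_easy *data, int soft_ms)
{
  /* wake this transfer when the soft timeout fires; if the QUIC attempt
   * has not received any data by then, the HTTPS-CONNECT filter starts
   * the parallel HTTP/2-or-1.1 attempt over TCP */
  Curl_expire(data, soft_ms, EXPIRE_ALPN_EYEBALLS);
}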


@ -34,6 +34,7 @@
#include "cfilters.h"
#include "cf-socket.h"
#include "connect.h"
#include "progress.h"
#include "h2h3.h"
#include "curl_msh3.h"
#include "socketpair.h"
@ -115,6 +116,8 @@ struct cf_msh3_ctx {
curl_socket_t sock[2]; /* fake socket pair until we get support in msh3 */
char l_ip[MAX_IPADR_LEN]; /* local IP as string */
int l_port; /* local port number */
struct curltime connect_started; /* time the current attempt started */
struct curltime handshake_at; /* time connect handshake finished */
/* Flags written by msh3/msquic thread */
bool handshake_complete;
bool handshake_succeeded;
@ -491,11 +494,12 @@ static int cf_msh3_get_select_socks(struct Curl_cfilter *cf,
struct Curl_easy *data,
curl_socket_t *socks)
{
struct cf_msh3_ctx *ctx = cf->ctx;
struct HTTP *stream = data->req.p.http;
int bitmap = GETSOCK_BLANK;
if(stream && cf->conn->sock[FIRSTSOCKET] != CURL_SOCKET_BAD) {
socks[0] = cf->conn->sock[FIRSTSOCKET];
if(stream && ctx->sock[SP_LOCAL] != CURL_SOCKET_BAD) {
socks[0] = ctx->sock[SP_LOCAL];
if(stream->recv_error) {
bitmap |= GETSOCK_READSOCK(0);
@ -544,6 +548,7 @@ static CURLcode cf_msh3_data_event(struct Curl_cfilter *cf,
struct Curl_easy *data,
int event, int arg1, void *arg2)
{
struct cf_msh3_ctx *ctx = cf->ctx;
struct HTTP *stream = data->req.p.http;
CURLcode result = CURLE_OK;
@ -553,7 +558,6 @@ static CURLcode cf_msh3_data_event(struct Curl_cfilter *cf,
case CF_CTRL_DATA_SETUP:
result = msh3_data_setup(cf, data);
break;
case CF_CTRL_DATA_DONE:
DEBUGF(LOG_CF(data, cf, "req: done"));
if(stream) {
@ -567,16 +571,18 @@ static CURLcode cf_msh3_data_event(struct Curl_cfilter *cf,
}
}
break;
case CF_CTRL_DATA_DONE_SEND:
DEBUGF(LOG_CF(data, cf, "req: send done"));
stream->upload_done = TRUE;
break;
case CF_CTRL_CONN_INFO_UPDATE:
DEBUGF(LOG_CF(data, cf, "req: update"));
DEBUGF(LOG_CF(data, cf, "req: update info"));
cf_msh3_active(cf, data);
break;
case CF_CTRL_CONN_REPORT_STATS:
if(cf->sockindex == FIRSTSOCKET)
Curl_pgrsTimeWas(data, TIMER_APPCONNECT, ctx->handshake_at);
break;
default:
break;
@ -657,12 +663,14 @@ static CURLcode cf_msh3_connect(struct Curl_cfilter *cf,
*done = FALSE;
if(!ctx->qconn) {
ctx->connect_started = Curl_now();
result = cf_connect_start(cf, data);
if(result)
goto out;
}
if(ctx->handshake_complete) {
ctx->handshake_at = Curl_now();
if(ctx->handshake_succeeded) {
cf->conn->bits.multiplex = TRUE; /* at least potentially multiplexed */
cf->conn->httpversion = 30;
@ -671,6 +679,7 @@ static CURLcode cf_msh3_connect(struct Curl_cfilter *cf,
cf->conn->alpn = CURL_HTTP_VERSION_3;
*done = TRUE;
connkeep(cf->conn, "HTTP/3 default");
Curl_pgrsTime(data, TIMER_APPCONNECT);
}
else {
failf(data, "failed to connect, handshake failed");
@ -733,7 +742,7 @@ static void cf_msh3_destroy(struct Curl_cfilter *cf, struct Curl_easy *data)
static CURLcode cf_msh3_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2)
int query, int *pres1, void *pres2)
{
struct cf_msh3_ctx *ctx = cf->ctx;


@ -53,8 +53,10 @@
#include "cfilters.h"
#include "cf-socket.h"
#include "connect.h"
#include "progress.h"
#include "strerror.h"
#include "dynbuf.h"
#include "select.h"
#include "vquic.h"
#include "h2h3.h"
#include "vtls/keylog.h"
@ -155,11 +157,21 @@ struct cf_ngtcp2_ctx {
/* the packets blocked by sendmsg (EAGAIN or EWOULDBLOCK) */
struct blocked_pkt blocked_pkt[2];
struct cf_call_data call_data;
nghttp3_conn *h3conn;
nghttp3_settings h3settings;
int qlogfd;
struct curltime started_at; /* time the current attempt started */
struct curltime handshake_at; /* time connect handshake finished */
struct curltime first_byte_at; /* when first byte was recvd */
struct curltime reconnect_at; /* time the next attempt should start */
BIT(got_first_byte); /* if first byte was received */
};
/* How to access `call_data` from a cf_ngtcp2 filter */
#define CF_CTX_CALL_DATA(cf) \
((struct cf_ngtcp2_ctx *)(cf)->ctx)->call_data
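All filter entry points in this file now bracket their work with these call-data macros, so that ngtcp2/nghttp3 callbacks, which only receive the filter as user_data, can still reach the current easy handle. The general shape, illustrative only:

static ssize_t cf_example_op(struct Curl_cfilter *cf, struct Curl_easy *data)
{
  struct cf_call_data save;
  ssize_t rv = 0;

  CF_DATA_SAVE(save, cf, data);
  /* ... drive ngtcp2/nghttp3 here; their callbacks may use
   * CF_CTX_CALL_DATA(cf) to get at `data` for logging etc. ... */
  CF_DATA_RESTORE(cf, save);
  return rv;
}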
/* ngtcp2 default congestion controller does not perform pacing. Limit
the maximum packet burst to MAX_PKT_BURST packets. */
@ -613,11 +625,14 @@ static int cb_recv_stream_data(ngtcp2_conn *tconn, uint32_t flags,
struct cf_ngtcp2_ctx *ctx = cf->ctx;
nghttp3_ssize nconsumed;
int fin = (flags & NGTCP2_STREAM_DATA_FLAG_FIN) ? 1 : 0;
struct Curl_easy *data = stream_user_data;
(void)offset;
(void)stream_user_data;
(void)data;
nconsumed =
nghttp3_conn_read_stream(ctx->h3conn, stream_id, buf, buflen, fin);
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] read_stream(len=%zu) -> %zd",
stream_id, buflen, nconsumed));
if(nconsumed < 0) {
ngtcp2_connection_close_error_set_application_error(
&ctx->last_error,
@ -662,11 +677,10 @@ static int cb_stream_close(ngtcp2_conn *tconn, uint32_t flags,
{
struct Curl_cfilter *cf = user_data;
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct Curl_easy *data = stream_user_data;
int rv;
(void)data;
(void)tconn;
(void)stream_user_data;
/* stream is closed... */
if(!(flags & NGTCP2_STREAM_CLOSE_FLAG_APP_ERROR_CODE_SET)) {
@ -675,7 +689,6 @@ static int cb_stream_close(ngtcp2_conn *tconn, uint32_t flags,
rv = nghttp3_conn_close_stream(ctx->h3conn, stream_id,
app_error_code);
DEBUGF(LOG_CF(data, cf, "[qsid=%" PRIx64 "] close -> %d", stream_id, rv));
if(rv) {
ngtcp2_connection_close_error_set_application_error(
&ctx->last_error, nghttp3_err_infer_quic_app_error_code(rv), NULL, 0);
@ -858,7 +871,9 @@ static int cf_ngtcp2_get_select_socks(struct Curl_cfilter *cf,
struct SingleRequest *k = &data->req;
int rv = GETSOCK_BLANK;
struct HTTP *stream = data->req.p.http;
struct cf_call_data save;
CF_DATA_SAVE(save, cf, data);
socks[0] = ctx->sockfd;
/* in an HTTP/3 connection we can basically always get a frame so we should
@ -873,6 +888,9 @@ static int cf_ngtcp2_get_select_socks(struct Curl_cfilter *cf,
nghttp3_conn_is_stream_writable(ctx->h3conn, stream->stream3_id))
rv |= GETSOCK_WRITESOCK(0);
DEBUGF(LOG_CF(data, cf, "get_select_socks -> %x (sock=%d)",
rv, (int)socks[0]));
CF_DATA_RESTORE(cf, save);
return rv;
}
@ -888,9 +906,15 @@ static int cb_h3_stream_close(nghttp3_conn *conn, int64_t stream_id,
(void)app_error_code;
(void)cf;
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] close", stream_id));
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] close(err=%" PRIx64 ")",
stream_id, app_error_code));
stream->closed = TRUE;
stream->error3 = app_error_code;
if(app_error_code == NGHTTP3_H3_INTERNAL_ERROR) {
/* TODO: we do not get a specific error when the remote end closed
* the response before it was complete. */
stream->reset = TRUE;
}
Curl_expire(data, 0, EXPIRE_QUIC);
/* make sure that ngh3_stream_recv is called again to complete the transfer
even if there are no more packets to be received from the server. */
@ -919,8 +943,9 @@ static CURLcode write_data(struct HTTP *stream, const void *mem, size_t memlen)
ncopy -= len;
}
/* copy the rest to the overflow buffer */
if(ncopy)
if(ncopy) {
result = Curl_dyn_addn(&stream->overflow, buf, ncopy);
}
return result;
}
@ -1022,6 +1047,7 @@ static int cb_h3_recv_header(nghttp3_conn *conn, int64_t stream_id,
nghttp3_rcbuf *value, uint8_t flags,
void *user_data, void *stream_user_data)
{
struct Curl_cfilter *cf = user_data;
nghttp3_vec h3name = nghttp3_rcbuf_get_buf(name);
nghttp3_vec h3val = nghttp3_rcbuf_get_buf(value);
struct Curl_easy *data = stream_user_data;
@ -1031,7 +1057,7 @@ static int cb_h3_recv_header(nghttp3_conn *conn, int64_t stream_id,
(void)stream_id;
(void)token;
(void)flags;
(void)user_data;
(void)cf;
if(token == NGHTTP3_QPACK_TOKEN__STATUS) {
char line[14]; /* status line is always 13 characters long */
@ -1040,6 +1066,8 @@ static int cb_h3_recv_header(nghttp3_conn *conn, int64_t stream_id,
DEBUGASSERT(stream->status_code != -1);
ncopy = msnprintf(line, sizeof(line), "HTTP/3 %03d \r\n",
stream->status_code);
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] status: %s",
stream_id, line));
result = write_data(stream, line, ncopy);
if(result) {
return -1;
@ -1047,6 +1075,9 @@ static int cb_h3_recv_header(nghttp3_conn *conn, int64_t stream_id,
}
else {
/* store as an HTTP1-style header */
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] header: %.*s: %.*s",
stream_id, (int)h3name.len, h3name.base,
(int)h3val.len, h3val.base));
result = write_data(stream, h3name.base, h3name.len);
if(result) {
return -1;
@ -1208,7 +1239,10 @@ static ssize_t cf_ngtcp2_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct HTTP *stream = data->req.p.http;
ssize_t nread = -1;
struct cf_call_data save;
CF_DATA_SAVE(save, cf, data);
DEBUGASSERT(cf->connected);
DEBUGASSERT(ctx);
DEBUGASSERT(ctx->qconn);
@ -1229,15 +1263,17 @@ static ssize_t cf_ngtcp2_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
if(cf_process_ingress(cf, data)) {
*err = CURLE_RECV_ERROR;
return -1;
nread = -1;
goto out;
}
if(cf_flush_egress(cf, data)) {
*err = CURLE_SEND_ERROR;
return -1;
nread = -1;
goto out;
}
if(stream->memlen) {
ssize_t memlen = stream->memlen;
nread = stream->memlen;
/* data arrived */
/* reset to allow more data to come */
stream->memlen = 0;
@ -1245,22 +1281,33 @@ static ssize_t cf_ngtcp2_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
stream->len = len;
/* extend the stream window with the data we're consuming and send out
any additional packets to tell the server that we can receive more */
DEBUGF(LOG_CF(data, cf, "[h3sid=%" PRIx64 "] recv, consumed %zd bytes",
stream->stream3_id, nread));
extend_stream_window(ctx->qconn, stream);
if(cf_flush_egress(cf, data)) {
*err = CURLE_SEND_ERROR;
return -1;
nread = -1;
goto out;
}
return memlen;
goto out;
}
if(stream->closed) {
if(stream->error3 != NGHTTP3_H3_NO_ERROR) {
if(stream->reset) {
failf(data,
"HTTP/3 stream %" PRId64 " was not closed cleanly: (err %" PRIu64
"HTTP/3 stream %" PRId64 " reset by server", stream->stream3_id);
*err = CURLE_PARTIAL_FILE;
nread = -1;
goto out;
}
else if(stream->error3 != NGHTTP3_H3_NO_ERROR) {
failf(data,
"HTTP/3 stream %" PRId64 " was not closed cleanly: (err 0x%" PRIx64
")",
stream->stream3_id, stream->error3);
*err = CURLE_HTTP3;
return -1;
nread = -1;
goto out;
}
if(!stream->bodystarted) {
@ -1269,15 +1316,20 @@ static ssize_t cf_ngtcp2_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
" all response header fields, treated as error",
stream->stream3_id);
*err = CURLE_HTTP3;
return -1;
nread = -1;
goto out;
}
return 0;
nread = 0;
goto out;
}
infof(data, "ngh3_stream_recv returns 0 bytes and EAGAIN");
DEBUGF(LOG_CF(data, cf, "cf_ngtcp2_recv returns EAGAIN"));
*err = CURLE_AGAIN;
return -1;
nread = -1;
out:
CF_DATA_RESTORE(cf, save);
return nread;
}
/* this amount of data has now been acked on this stream */
@ -1392,14 +1444,13 @@ static CURLcode h3_stream_open(struct Curl_cfilter *cf,
CURLcode result = CURLE_OK;
nghttp3_nv *nva = NULL;
int64_t stream3_id;
int rc;
int rc = 0;
struct h3out *h3out = NULL;
struct h2h3req *hreq = NULL;
rc = ngtcp2_conn_open_bidi_stream(ctx->qconn, &stream3_id, NULL);
if(rc) {
failf(data, "cannot get bidi streams");
result = CURLE_SEND_ERROR;
goto fail;
}
@ -1449,22 +1500,22 @@ static CURLcode h3_stream_open(struct Curl_cfilter *cf,
}
stream->h3out = h3out;
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] sending request %s, with_body=%d",
stream->stream3_id, data->state.url, !!stream->upload_left));
rc = nghttp3_conn_submit_request(ctx->h3conn, stream->stream3_id,
nva, nheader, &data_reader, data);
if(rc) {
result = CURLE_SEND_ERROR;
if(rc)
goto fail;
}
break;
}
default:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] sending request %s",
stream->stream3_id, data->state.url));
stream->upload_left = 0; /* nothing left to send */
rc = nghttp3_conn_submit_request(ctx->h3conn, stream->stream3_id,
nva, nheader, NULL, data);
if(rc) {
result = CURLE_SEND_ERROR;
if(rc)
goto fail;
}
break;
}
@ -1479,6 +1530,20 @@ static CURLcode h3_stream_open(struct Curl_cfilter *cf,
return CURLE_OK;
fail:
if(rc) {
switch(rc) {
case NGHTTP3_ERR_CONN_CLOSING:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] failed to send, "
"connection is closing", stream->stream3_id));
result = CURLE_RECV_ERROR;
break;
default:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] failed to send -> %d (%s)",
stream->stream3_id, rc, ngtcp2_strerror(rc)));
result = CURLE_SEND_ERROR;
break;
}
}
free(nva);
Curl_pseudo_free(hreq);
return result;
@ -1490,23 +1555,26 @@ static ssize_t cf_ngtcp2_send(struct Curl_cfilter *cf, struct Curl_easy *data,
struct cf_ngtcp2_ctx *ctx = cf->ctx;
ssize_t sent = 0;
struct HTTP *stream = data->req.p.http;
struct cf_call_data save;
CF_DATA_SAVE(save, cf, data);
DEBUGASSERT(cf->connected);
DEBUGASSERT(ctx);
DEBUGASSERT(ctx->qconn);
DEBUGASSERT(ctx->h3conn);
*err = CURLE_OK;
if(stream->closed) {
*err = CURLE_HTTP3;
return -1;
sent = -1;
goto out;
}
if(!stream->h3req) {
CURLcode result = h3_stream_open(cf, data, buf, len);
if(result) {
*err = CURLE_SEND_ERROR;
return -1;
DEBUGF(LOG_CF(data, cf, "failed to open stream -> %d", result));
sent = -1;
goto out;
}
/* Assume that mem of length len only includes HTTP/1.1 style
header fields. In other words, it does not contain request
@ -1523,13 +1591,15 @@ static ssize_t cf_ngtcp2_send(struct Curl_cfilter *cf, struct Curl_easy *data,
}
else {
*err = CURLE_AGAIN;
return -1;
sent = -1;
goto out;
}
}
if(cf_flush_egress(cf, data)) {
*err = CURLE_SEND_ERROR;
return -1;
sent = -1;
goto out;
}
/* Reset post upload buffer after resumed. */
@ -1546,10 +1616,12 @@ static ssize_t cf_ngtcp2_send(struct Curl_cfilter *cf, struct Curl_easy *data,
if(sent == 0) {
*err = CURLE_AGAIN;
return -1;
sent = -1;
goto out;
}
}
out:
CF_DATA_RESTORE(cf, save);
return sent;
}
@ -1627,12 +1699,15 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
SOCKERRNO == EINTR)
;
if(recvd == -1) {
if(SOCKERRNO == EAGAIN || SOCKERRNO == EWOULDBLOCK)
break;
if(SOCKERRNO == EAGAIN || SOCKERRNO == EWOULDBLOCK) {
DEBUGF(LOG_CF(data, cf, "ingress, recvfrom -> EAGAIN"));
goto out;
}
if(SOCKERRNO == ECONNREFUSED) {
const char *r_ip;
int r_port;
Curl_cf_socket_peek(cf->next, NULL, NULL, &r_ip, &r_port);
Curl_cf_socket_peek(cf->next, data, NULL, NULL,
&r_ip, &r_port, NULL, NULL);
failf(data, "ngtcp2: connection to %s port %u refused",
r_ip, r_port);
return CURLE_COULDNT_CONNECT;
@ -1642,13 +1717,21 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
return CURLE_RECV_ERROR;
}
if(recvd > 0 && !ctx->got_first_byte) {
ctx->first_byte_at = Curl_now();
ctx->got_first_byte = TRUE;
}
ngtcp2_addr_init(&path.local, (struct sockaddr *)&ctx->local_addr,
ctx->local_addrlen);
ngtcp2_addr_init(&path.remote, (struct sockaddr *)&remote_addr,
remote_addrlen);
DEBUGF(LOG_CF(data, cf, "ingress, recvd pkt of %zd bytes", recvd));
rv = ngtcp2_conn_read_pkt(ctx->qconn, &path, &pi, buf, recvd, ts);
if(rv) {
DEBUGF(LOG_CF(data, cf, "ingress, read_pkt -> %s",
ngtcp2_strerror(rv)));
if(!ctx->last_error.error_code) {
if(rv == NGTCP2_ERR_CRYPTO) {
ngtcp2_connection_close_error_set_transport_error_tls_alert(
@ -1669,6 +1752,7 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
}
}
out:
return CURLE_OK;
}
@ -1803,6 +1887,7 @@ static CURLcode send_packet(struct Curl_cfilter *cf,
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
DEBUGF(LOG_CF(data, cf, "egress, send %zu bytes", pktlen));
if(ctx->no_gso && pktlen > gsolen) {
return send_packet_no_gso(cf, data, pkt, pktlen, gsolen, psent);
}
@ -2081,7 +2166,9 @@ static CURLcode cf_ngtcp2_data_event(struct Curl_cfilter *cf,
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
CURLcode result = CURLE_OK;
struct cf_call_data save;
CF_DATA_SAVE(save, cf, data);
(void)arg1;
(void)arg2;
switch(event) {
@ -2102,60 +2189,73 @@ static CURLcode cf_ngtcp2_data_event(struct Curl_cfilter *cf,
case CF_CTRL_DATA_IDLE:
if(timestamp() >= ngtcp2_conn_get_expiry(ctx->qconn)) {
if(cf_flush_egress(cf, data)) {
return CURLE_SEND_ERROR;
result = CURLE_SEND_ERROR;
}
}
break;
case CF_CTRL_CONN_REPORT_STATS:
if(cf->sockindex == FIRSTSOCKET) {
if(ctx->got_first_byte)
Curl_pgrsTimeWas(data, TIMER_CONNECT, ctx->first_byte_at);
Curl_pgrsTimeWas(data, TIMER_APPCONNECT, ctx->handshake_at);
}
break;
default:
break;
}
CF_DATA_RESTORE(cf, save);
return result;
}
static void cf_ngtcp2_ctx_clear(struct cf_ngtcp2_ctx *ctx)
{
if(ctx) {
if(ctx->qlogfd != -1) {
close(ctx->qlogfd);
ctx->qlogfd = -1;
}
struct cf_call_data save = ctx->call_data;
if(ctx->qlogfd != -1) {
close(ctx->qlogfd);
ctx->qlogfd = -1;
}
#ifdef USE_OPENSSL
if(ctx->ssl)
SSL_free(ctx->ssl);
if(ctx->sslctx)
SSL_CTX_free(ctx->sslctx);
if(ctx->ssl)
SSL_free(ctx->ssl);
if(ctx->sslctx)
SSL_CTX_free(ctx->sslctx);
#elif defined(USE_GNUTLS)
if(ctx->gtls) {
if(ctx->gtls->cred)
gnutls_certificate_free_credentials(ctx->gtls->cred);
if(ctx->gtls->session)
gnutls_deinit(ctx->gtls->session);
free(ctx->gtls);
}
if(ctx->gtls) {
if(ctx->gtls->cred)
gnutls_certificate_free_credentials(ctx->gtls->cred);
if(ctx->gtls->session)
gnutls_deinit(ctx->gtls->session);
free(ctx->gtls);
}
#elif defined(USE_WOLFSSL)
if(ctx->ssl)
wolfSSL_free(ctx->ssl);
if(ctx->sslctx)
wolfSSL_CTX_free(ctx->sslctx);
if(ctx->ssl)
wolfSSL_free(ctx->ssl);
if(ctx->sslctx)
wolfSSL_CTX_free(ctx->sslctx);
#endif
free(ctx->pktbuf);
free(ctx->pktbuf);
if(ctx->h3conn)
nghttp3_conn_del(ctx->h3conn);
if(ctx->qconn)
ngtcp2_conn_del(ctx->qconn);
memset(ctx, 0, sizeof(*ctx));
}
memset(ctx, 0, sizeof(*ctx));
ctx->call_data = save;
}
static void cf_ngtcp2_close(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct cf_call_data save;
(void)data;
CF_DATA_SAVE(save, cf, data);
if(ctx && ctx->qconn) {
char buffer[NGTCP2_MAX_UDP_PAYLOAD_SIZE];
ngtcp2_tstamp ts;
ngtcp2_ssize rc;
DEBUGF(LOG_CF(data, cf, "close"));
ts = timestamp();
rc = ngtcp2_conn_write_connection_close(ctx->qconn, NULL, /* path */
NULL, /* pkt_info */
@ -2170,16 +2270,22 @@ static void cf_ngtcp2_close(struct Curl_cfilter *cf, struct Curl_easy *data)
}
cf->connected = FALSE;
CF_DATA_RESTORE(cf, save);
}
static void cf_ngtcp2_destroy(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct cf_call_data save;
(void)data;
cf_ngtcp2_ctx_clear(ctx);
free(ctx);
CF_DATA_SAVE(save, cf, data);
DEBUGF(LOG_CF(data, cf, "destroy"));
if(ctx) {
cf_ngtcp2_ctx_clear(ctx);
free(ctx);
}
cf->ctx = NULL;
/* No CF_DATA_RESTORE(cf, save) possible */
}
/*
@ -2194,45 +2300,8 @@ static CURLcode cf_connect_start(struct Curl_cfilter *cf,
CURLcode result;
ngtcp2_path path; /* TODO: this must be initialized properly */
const struct Curl_sockaddr_ex *sockaddr;
const char *r_ip;
int r_port;
int qfd;
result = Curl_cf_socket_peek(cf->next, &ctx->sockfd,
&sockaddr, &r_ip, &r_port);
if(result)
return result;
DEBUGASSERT(ctx->sockfd != CURL_SOCKET_BAD);
infof(data, "Connect socket %d over QUIC to %s:%d",
ctx->sockfd, r_ip, r_port);
rc = connect(ctx->sockfd, &sockaddr->sa_addr, sockaddr->addrlen);
if(-1 == rc) {
return Curl_socket_connect_result(data, r_ip, SOCKERRNO);
}
/* QUIC sockets need to be nonblocking */
(void)curlx_nonblock(ctx->sockfd, TRUE);
switch(sockaddr->family) {
#if defined(__linux__) && defined(IP_MTU_DISCOVER)
case AF_INET: {
int val = IP_PMTUDISC_DO;
(void)setsockopt(ctx->sockfd, IPPROTO_IP, IP_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
#if defined(__linux__) && defined(IPV6_MTU_DISCOVER)
case AF_INET6: {
int val = IPV6_PMTUDISC_DO;
(void)setsockopt(ctx->sockfd, IPPROTO_IPV6, IPV6_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
}
ctx->version = NGTCP2_PROTO_VER_MAX;
#ifdef USE_OPENSSL
result = quic_ssl_ctx(&ctx->sslctx, cf, data);
@ -2266,6 +2335,8 @@ static CURLcode cf_connect_start(struct Curl_cfilter *cf,
ctx->qlogfd = qfd; /* -1 if failure above */
quic_settings(ctx, data);
Curl_cf_socket_peek(cf->next, data, &ctx->sockfd,
&sockaddr, NULL, NULL, NULL, NULL);
ctx->local_addrlen = sizeof(ctx->local_addr);
rv = getsockname(ctx->sockfd, (struct sockaddr *)&ctx->local_addr,
&ctx->local_addrlen);
@ -2321,6 +2392,8 @@ static CURLcode cf_ngtcp2_connect(struct Curl_cfilter *cf,
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
CURLcode result = CURLE_OK;
struct cf_call_data save;
struct curltime now;
if(cf->connected) {
*done = TRUE;
@ -2335,10 +2408,24 @@ static CURLcode cf_ngtcp2_connect(struct Curl_cfilter *cf,
}
*done = FALSE;
now = Curl_now();
CF_DATA_SAVE(save, cf, data);
if(ctx->reconnect_at.tv_sec && Curl_timediff(now, ctx->reconnect_at) < 0) {
/* Not time yet to attempt the next connect */
DEBUGF(LOG_CF(data, cf, "waiting for reconnect time"));
goto out;
}
if(!ctx->qconn) {
ctx->started_at = now;
result = cf_connect_start(cf, data);
if(result)
goto out;
result = cf_flush_egress(cf, data);
/* we do not expect to be able to recv anything yet */
goto out;
}
result = cf_process_ingress(cf, data);
@ -2350,8 +2437,12 @@ static CURLcode cf_ngtcp2_connect(struct Curl_cfilter *cf,
goto out;
if(ngtcp2_conn_get_handshake_completed(ctx->qconn)) {
ctx->handshake_at = now;
DEBUGF(LOG_CF(data, cf, "handshake complete after %dms",
(int)Curl_timediff(now, ctx->started_at)));
result = qng_verify_peer(cf, data);
if(!result) {
DEBUGF(LOG_CF(data, cf, "peer verified"));
cf->connected = TRUE;
cf->conn->alpn = CURL_HTTP_VERSION_3;
*done = TRUE;
@ -2360,37 +2451,80 @@ static CURLcode cf_ngtcp2_connect(struct Curl_cfilter *cf,
}
out:
if(result == CURLE_RECV_ERROR && ctx->qconn &&
ngtcp2_conn_is_in_draining_period(ctx->qconn)) {
/* When a QUIC server instance is shutting down, it may send us a
* CONNECTION_CLOSE right away. Our connection then enters the DRAINING
* state.
* This may be a stopping of the service or it may be that the server
* is reloading and a new instance will start serving soon.
* In any case, we tear down our socket and start over with a new one.
* We re-open the underlying UDP cf right now, but do not start
* connecting until called again.
*/
int reconn_delay_ms = 200;
DEBUGF(LOG_CF(data, cf, "connect, remote closed, reconnect after %dms",
reconn_delay_ms));
Curl_conn_cf_close(cf->next, data);
cf_ngtcp2_ctx_clear(ctx);
result = Curl_conn_cf_connect(cf->next, data, FALSE, done);
if(!result && *done) {
*done = FALSE;
ctx->reconnect_at = now;
ctx->reconnect_at.tv_usec += reconn_delay_ms * 1000;
Curl_expire(data, reconn_delay_ms, EXPIRE_QUIC);
result = CURLE_OK;
}
}
#ifndef CURL_DISABLE_VERBOSE_STRINGS
if(result && result != CURLE_AGAIN) {
if(result) {
const char *r_ip;
int r_port;
Curl_cf_socket_peek(cf->next, NULL, NULL, &r_ip, &r_port);
infof(data, "connect to %s port %u failed: %s",
Curl_cf_socket_peek(cf->next, data, NULL, NULL,
&r_ip, &r_port, NULL, NULL);
infof(data, "QUIC connect to %s port %u failed: %s",
r_ip, r_port, curl_easy_strerror(result));
}
#endif
DEBUGF(LOG_CF(data, cf, "connect -> %d, done=%d", result, *done));
CF_DATA_RESTORE(cf, save);
return result;
}
static CURLcode cf_ngtcp2_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2)
int query, int *pres1, void *pres2)
{
struct cf_ngtcp2_ctx *ctx = cf->ctx;
struct cf_call_data save;
switch(query) {
case CF_QUERY_MAX_CONCURRENT: {
const ngtcp2_transport_params *rp;
DEBUGASSERT(pres1);
CF_DATA_SAVE(save, cf, data);
rp = ngtcp2_conn_get_remote_transport_params(ctx->qconn);
if(rp)
*pres1 = (rp->initial_max_streams_bidi > INT_MAX)?
INT_MAX : (int)rp->initial_max_streams_bidi;
else /* not arrived yet? */
*pres1 = Curl_multi_max_concurrent_streams(data->multi);
DEBUGF(LOG_CF(data, cf, "query max_concurrent -> %d", *pres1));
CF_DATA_RESTORE(cf, save);
return CURLE_OK;
}
case CF_QUERY_CONNECT_REPLY_MS:
if(ctx->got_first_byte) {
timediff_t ms = Curl_timediff(ctx->first_byte_at, ctx->started_at);
*pres1 = (ms < INT_MAX)? (int)ms : INT_MAX;
}
else
*pres1 = -1;
return CURLE_OK;
default:
break;
}
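The CF_QUERY_CONNECT_REPLY_MS branch above feeds the soft eyeball timeout: -1 means no byte from the server yet. An assumed caller-side sketch, using the same vtable-call style as the get_select_socks dispatch elsewhere in this patch (that the filter type exposes `query` this way is an assumption):

static bool example_quic_has_replied(struct Curl_cfilter *cf,
                                     struct Curl_easy *data)
{
  int reply_ms = -1;

  if(!cf->cft->query(cf, data, CF_QUERY_CONNECT_REPLY_MS, &reply_ms, NULL))
    return (reply_ms >= 0); /* the server has sent us something */
  return FALSE;
}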
@ -2428,21 +2562,22 @@ CURLcode Curl_cf_ngtcp2_create(struct Curl_cfilter **pcf,
CURLcode result;
(void)data;
(void)conn;
ctx = calloc(sizeof(*ctx), 1);
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
cf_ngtcp2_ctx_clear(ctx);
result = Curl_cf_create(&cf, &Curl_cft_http3, ctx);
if(result)
goto out;
result = Curl_cf_udp_create(&udp_cf, data, conn, ai);
result = Curl_cf_udp_create(&udp_cf, data, conn, ai, TRNSPRT_QUIC);
if(result)
goto out;
cf->conn = conn;
udp_cf->conn = cf->conn;
udp_cf->sockindex = cf->sockindex;
cf->next = udp_cf;
@ -2455,7 +2590,6 @@ out:
Curl_safefree(cf);
Curl_safefree(ctx);
}
return result;
}


@ -37,6 +37,7 @@
#include "strcase.h"
#include "multiif.h"
#include "connect.h"
#include "progress.h"
#include "strerror.h"
#include "vquic.h"
#include "curl_quiche.h"
@ -143,7 +144,10 @@ struct cf_quiche_ctx {
SSL_CTX *sslctx;
SSL *ssl;
struct h3_event_node *pending;
bool h3_recving; /* TRUE when in h3-body-reading state */
struct curltime connect_started; /* time the current attempt started */
struct curltime handshake_done; /* time connect handshake finished */
int first_reply_ms; /* ms since first data arrived */
struct curltime reconnect_at; /* time the next attempt should start */
bool goaway;
};
@ -169,13 +173,33 @@ static void h3_clear_pending(struct cf_quiche_ctx *ctx)
}
}
static void cf_quiche_ctx_clear(struct cf_quiche_ctx *ctx)
{
if(ctx) {
if(ctx->pending)
h3_clear_pending(ctx);
if(ctx->qconn)
quiche_conn_free(ctx->qconn);
if(ctx->h3config)
quiche_h3_config_free(ctx->h3config);
if(ctx->h3c)
quiche_h3_conn_free(ctx->h3c);
if(ctx->cfg)
quiche_config_free(ctx->cfg);
memset(ctx, 0, sizeof(*ctx));
ctx->first_reply_ms = -1;
}
}
static CURLcode h3_add_event(struct Curl_cfilter *cf,
struct Curl_easy *data,
int64_t stream3_id, quiche_h3_event *ev)
int64_t stream3_id, quiche_h3_event *ev,
size_t *pqlen)
{
struct cf_quiche_ctx *ctx = cf->ctx;
struct Curl_easy *mdata;
struct h3_event_node *node, **pnext = &ctx->pending;
size_t qlen;
DEBUGASSERT(data->multi);
for(mdata = data->multi->easyp; mdata; mdata = mdata->next) {
@ -185,9 +209,10 @@ static CURLcode h3_add_event(struct Curl_cfilter *cf,
}
if(!mdata) {
DEBUGF(LOG_CF(data, cf, "event for unknown stream %"PRId64", discarded",
stream3_id));
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] event discarded, easy handle "
"not found", stream3_id));
quiche_h3_event_free(ev);
*pqlen = 0;
return CURLE_OK;
}
@ -197,10 +222,13 @@ static CURLcode h3_add_event(struct Curl_cfilter *cf,
node->stream3_id = stream3_id;
node->ev = ev;
/* append to process them in order of arrival */
qlen = 0;
while(*pnext) {
pnext = &((*pnext)->next);
++qlen;
}
*pnext = node;
*pqlen = qlen + 1;
if(!mdata->state.drain) {
/* tell the multi handle that this data needs processing */
mdata->state.drain = 1;
@ -260,23 +288,24 @@ static ssize_t h3_process_event(struct Curl_cfilter *cf,
switch(quiche_h3_event_type(ev)) {
case QUICHE_H3_EVENT_HEADERS:
stream->h3_got_header = TRUE;
headers.dest = buf;
headers.destlen = len;
headers.nlen = 0;
rc = quiche_h3_event_for_each_header(ev, cb_each_header, &headers);
if(rc) {
failf(data, "Error in HTTP/3 response header");
failf(data, "Error %d in HTTP/3 response header for stream[%"PRId64"]",
rc, stream3_id);
*err = CURLE_RECV_ERROR;
recvd = -1;
break;
}
recvd = headers.nlen;
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] HEADERS len=%d",
stream3_id, (int)recvd));
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, HEADERS len=%zd",
stream3_id, recvd));
break;
case QUICHE_H3_EVENT_DATA:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] DATA", stream3_id));
if(!stream->firstbody) {
/* add a header-body separator CRLF */
buf[0] = '\r';
@ -291,23 +320,33 @@ static ssize_t h3_process_event(struct Curl_cfilter *cf,
rcode = quiche_h3_recv_body(ctx->h3c, ctx->qconn, stream3_id,
(unsigned char *)buf, len);
if(rcode <= 0) {
failf(data, "Error %zd in HTTP/3 response body for stream[%"PRId64"]",
rcode, stream3_id);
recvd = -1;
*err = CURLE_AGAIN;
break;
}
ctx->h3_recving = TRUE;
stream->h3_recving_data = TRUE;
recvd += rcode;
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, DATA len=%zd",
stream3_id, rcode));
break;
case QUICHE_H3_EVENT_RESET:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] RESET", stream3_id));
if(quiche_conn_is_draining(ctx->qconn) && !stream->h3_got_header) {
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] stream RESET without response, "
"connection is draining", stream3_id));
}
else {
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, RESET", stream3_id));
}
streamclose(cf->conn, "Stream reset");
*err = CURLE_PARTIAL_FILE;
*err = stream->h3_got_header? CURLE_PARTIAL_FILE : CURLE_RECV_ERROR;
recvd = -1;
break;
case QUICHE_H3_EVENT_FINISHED:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] FINISHED", stream3_id));
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, FINISHED", stream3_id));
stream->closed = TRUE;
streamclose(cf->conn, "End of stream");
*err = CURLE_OK;
@ -315,13 +354,14 @@ static ssize_t h3_process_event(struct Curl_cfilter *cf,
break;
case QUICHE_H3_EVENT_GOAWAY:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, GOAWAY", stream3_id));
recvd = -1;
*err = CURLE_AGAIN;
ctx->goaway = TRUE;
break;
default:
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] unhandled event %d",
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, unhandled event %d",
stream3_id, quiche_h3_event_type(ev)));
break;
}
@ -336,16 +376,30 @@ static ssize_t h3_process_pending(struct Curl_cfilter *cf,
struct cf_quiche_ctx *ctx = cf->ctx;
struct HTTP *stream = data->req.p.http;
struct h3_event_node *node = ctx->pending, **pnext = &ctx->pending;
ssize_t recvd = -1;
ssize_t recvd = 0, erecvd;
for(; node; pnext = &node->next, node = node->next) {
DEBUGASSERT(stream);
while(node) {
if(node->stream3_id == stream->stream3_id) {
recvd = h3_process_event(cf, data, buf, len,
node->stream3_id, node->ev, err);
erecvd = h3_process_event(cf, data, buf, len,
node->stream3_id, node->ev, err);
quiche_h3_event_free(node->ev);
*pnext = node->next;
free(node);
break;
node = *pnext;
if(erecvd < 0) {
recvd = erecvd;
break;
}
recvd += erecvd;
if(erecvd > INT_MAX || (size_t)erecvd >= len)
break;
buf += erecvd;
len -= erecvd;
}
else {
pnext = &node->next;
node = node->next;
}
}
return recvd;
@ -373,15 +427,24 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
recvd = recvfrom(ctx->sockfd, buf, bufsize, 0,
(struct sockaddr *)&from, &from_len);
if((recvd < 0) && ((SOCKERRNO == EAGAIN) || (SOCKERRNO == EWOULDBLOCK)))
break;
if(recvd < 0) {
if((SOCKERRNO == EAGAIN) || (SOCKERRNO == EWOULDBLOCK))
goto out;
if(SOCKERRNO == ECONNREFUSED) {
const char *r_ip;
int r_port;
Curl_cf_socket_peek(cf->next, data, NULL, NULL,
&r_ip, &r_port, NULL, NULL);
failf(data, "quiche: connection to %s:%u refused",
r_ip, r_port);
return CURLE_COULDNT_CONNECT;
}
failf(data, "quiche: recvfrom() unexpectedly returned %zd "
"(errno: %d, socket %d)", recvd, SOCKERRNO, ctx->sockfd);
return CURLE_RECV_ERROR;
}
DEBUGF(LOG_CF(data, cf, "ingress, recvd %zd bytes", recvd));
recv_info.from = (struct sockaddr *) &from;
recv_info.from_len = from_len;
recv_info.to = (struct sockaddr *) &ctx->local_addr;
@ -389,7 +452,7 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
recvd = quiche_conn_recv(ctx->qconn, buf, recvd, &recv_info);
if(recvd == QUICHE_ERR_DONE)
break;
goto out;
if(recvd < 0) {
if(QUICHE_ERR_TLS_FAIL == recvd) {
@ -406,8 +469,13 @@ static CURLcode cf_process_ingress(struct Curl_cfilter *cf,
return CURLE_RECV_ERROR;
}
if(ctx->first_reply_ms < 0) {
timediff_t ms = Curl_timediff(Curl_now(), ctx->connect_started);
ctx->first_reply_ms = (ms < INT_MAX)? (int)ms : INT_MAX;
}
} while(1);
out:
return CURLE_OK;
}
@ -434,6 +502,7 @@ static CURLcode cf_flush_egress(struct Curl_cfilter *cf,
return CURLE_SEND_ERROR;
}
DEBUGF(LOG_CF(data, cf, "egress, send %zu bytes", sent));
sent = send(ctx->sockfd, out, sent, 0);
if(sent < 0) {
failf(data, "send() returned %zd", sent);
@ -459,9 +528,18 @@ static ssize_t cf_quiche_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
quiche_h3_event *ev;
struct HTTP *stream = data->req.p.http;
DEBUGF(LOG_CF(data, cf, "recv[%"PRId64"]", stream->stream3_id));
*err = CURLE_AGAIN;
/* process any pending events for `data` first. if there are,
* return so the transfer can handle those. We do not want to
* progress ingress while events are pending here. */
recvd = h3_process_pending(cf, data, buf, len, err);
if(recvd < 0) {
goto out;
}
else if(recvd > 0) {
*err = CURLE_OK;
goto out;
}
recvd = -1;
if(cf_process_ingress(cf, data)) {
@ -470,12 +548,12 @@ static ssize_t cf_quiche_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
goto out;
}
if(ctx->h3_recving) {
if(stream->h3_recving_data) {
/* body receiving state */
rcode = quiche_h3_recv_body(ctx->h3c, ctx->qconn, stream->stream3_id,
(unsigned char *)buf, len);
if(rcode <= 0) {
ctx->h3_recving = FALSE;
stream->h3_recving_data = FALSE;
/* fall through into the while loop below */
}
else {
@ -485,27 +563,29 @@ static ssize_t cf_quiche_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
}
}
if(recvd < 0) {
recvd = h3_process_pending(cf, data, buf, len, err);
}
while(recvd < 0) {
int64_t stream3_id = quiche_h3_conn_poll(ctx->h3c, ctx->qconn, &ev);
if(stream3_id < 0)
/* nothing more to do */
break;
if(stream3_id != stream->stream3_id) {
if(stream3_id == stream->stream3_id) {
recvd = h3_process_event(cf, data, buf, len, stream3_id, ev, err);
quiche_h3_event_free(ev);
}
else {
size_t qlen;
/* event for another transfer, preserve for later */
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] queuing event", stream3_id));
if(h3_add_event(cf, data, stream3_id, ev) != CURLE_OK) {
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv, queue event "
"for h3[%"PRId64"]", stream->stream3_id, stream3_id));
if(h3_add_event(cf, data, stream3_id, ev, &qlen) != CURLE_OK) {
*err = CURLE_OUT_OF_MEMORY;
goto out;
}
}
else {
recvd = h3_process_event(cf, data, buf, len, stream3_id, ev, err);
quiche_h3_event_free(ev);
if(qlen > 20) {
Curl_expire(data, 0, EXPIRE_QUIC);
break;
}
}
}
@ -519,6 +599,7 @@ static ssize_t cf_quiche_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
if(recvd >= 0) {
/* Get this called again to drain the event queue */
Curl_expire(data, 0, EXPIRE_QUIC);
*err = CURLE_OK;
}
else if(stream->closed) {
*err = CURLE_OK;
@ -527,7 +608,7 @@ static ssize_t cf_quiche_recv(struct Curl_cfilter *cf, struct Curl_easy *data,
out:
data->state.drain = (recvd >= 0) ? 1 : 0;
DEBUGF(LOG_CF(data, cf, "recv[%"PRId64"] -> %ld, err=%d",
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] recv -> %ld, err=%d",
stream->stream3_id, (long)recvd, *err));
return recvd;
}
@ -549,7 +630,8 @@ static CURLcode cf_http_request(struct Curl_cfilter *cf,
CURLcode result = CURLE_OK;
struct h2h3req *hreq = NULL;
stream->h3req = TRUE; /* senf off! */
DEBUGF(LOG_CF(data, cf, "cf_http_request %s", data->state.url));
stream->h3req = TRUE; /* send off! */
result = Curl_pseudo_headers(data, mem, len, NULL, &hreq);
if(result)
@ -584,22 +666,14 @@ static CURLcode cf_http_request(struct Curl_cfilter *cf,
stream3_id = quiche_h3_send_request(ctx->h3c, ctx->qconn, nva, nheader,
stream->upload_left ? FALSE: TRUE);
DEBUGF(LOG_CF(data, cf, "send_request(with_body=%d) -> %"PRId64,
!!stream->upload_left, stream3_id));
if((stream3_id >= 0) && data->set.postfields) {
ssize_t sent = quiche_h3_send_body(ctx->h3c, ctx->qconn, stream3_id,
(uint8_t *)data->set.postfields,
stream->upload_left, TRUE);
if(sent <= 0) {
failf(data, "quiche_h3_send_body failed");
result = CURLE_SEND_ERROR;
}
stream->upload_left = 0; /* nothing left to send */
}
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] send request %s, upload=%zu",
stream3_id, data->state.url, stream->upload_left));
break;
default:
stream3_id = quiche_h3_send_request(ctx->h3c, ctx->qconn, nva, nheader,
TRUE);
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] send request %s",
stream3_id, data->state.url));
break;
}
@ -631,6 +705,7 @@ static ssize_t cf_quiche_send(struct Curl_cfilter *cf, struct Curl_easy *data,
struct HTTP *stream = data->req.p.http;
ssize_t sent;
DEBUGF(LOG_CF(data, cf, "cf_quiche_send(len=%zu) %s", len, data->state.url));
if(!stream->h3req) {
CURLcode result = cf_http_request(cf, data, buf, len);
if(result) {
@ -705,8 +780,6 @@ static bool cf_quiche_data_pending(struct Curl_cfilter *cf,
return TRUE;
}
}
DEBUGF(LOG_CF((struct Curl_easy *)data, cf, "h3[%"PRId64"] no data pending",
stream->stream3_id));
return FALSE;
}
@ -730,7 +803,16 @@ static CURLcode cf_quiche_data_event(struct Curl_cfilter *cf,
return CURLE_SEND_ERROR;
break;
}
case CF_CTRL_DATA_DONE: {
struct HTTP *stream = data->req.p.http;
DEBUGF(LOG_CF(data, cf, "h3[%"PRId64"] easy handle is %s",
stream->stream3_id, arg1? "cancelled" : "done"));
break;
}
case CF_CTRL_CONN_REPORT_STATS:
if(cf->sockindex == FIRSTSOCKET)
Curl_pgrsTimeWas(data, TIMER_APPCONNECT, ctx->handshake_done);
break;
default:
break;
}
@ -758,7 +840,6 @@ static CURLcode cf_verify_peer(struct Curl_cfilter *cf,
X509_free(server_cert);
if(result)
goto out;
DEBUGF(LOG_CF(data, cf, "Verified certificate just fine"));
}
else
DEBUGF(LOG_CF(data, cf, "Skipped certificate verification"));
@ -797,48 +878,16 @@ static CURLcode cf_connect_start(struct Curl_cfilter *cf,
struct Curl_easy *data)
{
struct cf_quiche_ctx *ctx = cf->ctx;
int rc;
int rv;
CURLcode result;
const struct Curl_sockaddr_ex *sockaddr;
const char *r_ip;
int r_port;
result = Curl_cf_socket_peek(cf->next, &ctx->sockfd,
&sockaddr, &r_ip, &r_port);
result = Curl_cf_socket_peek(cf->next, data, &ctx->sockfd,
&sockaddr, NULL, NULL, NULL, NULL);
if(result)
return result;
DEBUGASSERT(ctx->sockfd != CURL_SOCKET_BAD);
infof(data, "Connect socket %d over QUIC to %s:%d",
ctx->sockfd, r_ip, r_port);
rc = connect(ctx->sockfd, &sockaddr->sa_addr, sockaddr->addrlen);
if(-1 == rc) {
return Curl_socket_connect_result(data, r_ip, SOCKERRNO);
}
/* QUIC sockets need to be nonblocking */
(void)curlx_nonblock(ctx->sockfd, TRUE);
switch(sockaddr->family) {
#if defined(__linux__) && defined(IP_MTU_DISCOVER)
case AF_INET: {
int val = IP_PMTUDISC_DO;
(void)setsockopt(ctx->sockfd, IPPROTO_IP, IP_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
#if defined(__linux__) && defined(IPV6_MTU_DISCOVER)
case AF_INET6: {
int val = IPV6_PMTUDISC_DO;
(void)setsockopt(ctx->sockfd, IPPROTO_IPV6, IPV6_MTU_DISCOVER, &val,
sizeof(val));
break;
}
#endif
}
#ifdef DEBUG_QUICHE
/* initialize debug log callback only once */
static int debug_log_init = 0;
@ -940,6 +989,7 @@ static CURLcode cf_quiche_connect(struct Curl_cfilter *cf,
{
struct cf_quiche_ctx *ctx = cf->ctx;
CURLcode result = CURLE_OK;
struct curltime now;
if(cf->connected) {
*done = TRUE;
@ -954,10 +1004,19 @@ static CURLcode cf_quiche_connect(struct Curl_cfilter *cf,
}
*done = FALSE;
now = Curl_now();
if(ctx->reconnect_at.tv_sec && Curl_timediff(now, ctx->reconnect_at) < 0) {
/* Not time yet to attempt the next connect */
DEBUGF(LOG_CF(data, cf, "waiting for reconnect time"));
goto out;
}
if(!ctx->qconn) {
result = cf_connect_start(cf, data);
if(result)
goto out;
ctx->connect_started = now;
}
result = cf_process_ingress(cf, data);
@ -969,15 +1028,43 @@ static CURLcode cf_quiche_connect(struct Curl_cfilter *cf,
goto out;
if(quiche_conn_is_established(ctx->qconn)) {
DEBUGF(LOG_CF(data, cf, "handshake complete after %dms",
(int)Curl_timediff(now, ctx->connect_started)));
ctx->handshake_done = now;
result = cf_verify_peer(cf, data);
if(!result) {
DEBUGF(infof(data, "quiche established connection"));
DEBUGF(LOG_CF(data, cf, "peer verified"));
cf->connected = TRUE;
cf->conn->alpn = CURL_HTTP_VERSION_3;
*done = TRUE;
connkeep(cf->conn, "HTTP/3 default");
}
}
else if(quiche_conn_is_draining(ctx->qconn)) {
/* When a QUIC server instance is shutting down, it may send us a
* CONNECTION_CLOSE right away. Our connection then enters the DRAINING
* state.
* This may be a stopping of the service or it may be that the server
* is reloading and a new instance will start serving soon.
* In any case, we tear down our socket and start over with a new one.
* We re-open the underlying UDP cf right now, but do not start
* connecting until called again.
*/
int reconn_delay_ms = 200;
DEBUGF(LOG_CF(data, cf, "connect, remote closed, reconnect after %dms",
reconn_delay_ms));
Curl_conn_cf_close(cf->next, data);
cf_quiche_ctx_clear(ctx);
result = Curl_conn_cf_connect(cf->next, data, FALSE, done);
if(!result && *done) {
*done = FALSE;
ctx->reconnect_at = Curl_now();
ctx->reconnect_at.tv_usec += reconn_delay_ms * 1000;
Curl_expire(data, reconn_delay_ms, EXPIRE_QUIC);
result = CURLE_OK;
}
}
out:
#ifndef CURL_DISABLE_VERBOSE_STRINGS
@ -985,7 +1072,8 @@ out:
const char *r_ip;
int r_port;
Curl_cf_socket_peek(cf->next, NULL, NULL, &r_ip, &r_port);
Curl_cf_socket_peek(cf->next, data, NULL, NULL,
&r_ip, &r_port, NULL, NULL);
infof(data, "connect to %s port %u failed: %s",
r_ip, r_port, curl_easy_strerror(result));
}
@ -993,23 +1081,6 @@ out:
return result;
}
static void cf_quiche_ctx_clear(struct cf_quiche_ctx *ctx)
{
if(ctx) {
if(ctx->pending)
h3_clear_pending(ctx);
if(ctx->qconn)
quiche_conn_free(ctx->qconn);
if(ctx->h3config)
quiche_h3_config_free(ctx->h3config);
if(ctx->h3c)
quiche_h3_conn_free(ctx->h3c);
if(ctx->cfg)
quiche_config_free(ctx->cfg);
memset(ctx, 0, sizeof(*ctx));
}
}
static void cf_quiche_close(struct Curl_cfilter *cf, struct Curl_easy *data)
{
struct cf_quiche_ctx *ctx = cf->ctx;
@ -1038,7 +1109,7 @@ static void cf_quiche_destroy(struct Curl_cfilter *cf, struct Curl_easy *data)
static CURLcode cf_quiche_query(struct Curl_cfilter *cf,
struct Curl_easy *data,
int query, int *pres1, void **pres2)
int query, int *pres1, void *pres2)
{
struct cf_quiche_ctx *ctx = cf->ctx;
@ -1052,6 +1123,11 @@ static CURLcode cf_quiche_query(struct Curl_cfilter *cf,
DEBUGF(LOG_CF(data, cf, "query: MAX_CONCURRENT -> %d", *pres1));
return CURLE_OK;
}
case CF_QUERY_CONNECT_REPLY_MS:
*pres1 = ctx->first_reply_ms;
DEBUGF(LOG_CF(data, cf, "query connect reply: %dms", *pres1));
return CURLE_OK;
default:
break;
}
@ -1100,7 +1176,7 @@ CURLcode Curl_cf_quiche_create(struct Curl_cfilter **pcf,
if(result)
goto out;
result = Curl_cf_udp_create(&udp_cf, data, conn, ai);
result = Curl_cf_udp_create(&udp_cf, data, conn, ai, TRNSPRT_QUIC);
if(result)
goto out;


@ -24,19 +24,25 @@
#include "curl_setup.h"
#ifdef ENABLE_QUIC
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif
#include "urldata.h"
#include "dynbuf.h"
#include "curl_printf.h"
#include "curl_log.h"
#include "curl_msh3.h"
#include "curl_ngtcp2.h"
#include "curl_quiche.h"
#include "vquic.h"
/* The last 3 #include files should be in this order */
#include "curl_printf.h"
#include "curl_memory.h"
#include "memdebug.h"
#ifdef ENABLE_QUIC
#ifdef O_BINARY
#define QLOGMODE O_WRONLY|O_CREAT|O_BINARY
#else
@ -102,8 +108,10 @@ CURLcode Curl_qlogdir(struct Curl_easy *data,
CURLcode Curl_cf_quic_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai)
const struct Curl_addrinfo *ai,
int transport)
{
DEBUGASSERT(transport == TRNSPRT_QUIC);
#ifdef USE_NGTCP2
return Curl_cf_ngtcp2_create(pcf, data, conn, ai);
#elif defined(USE_QUICHE)
@ -135,4 +143,36 @@ bool Curl_conn_is_http3(const struct Curl_easy *data,
#endif
}
CURLcode Curl_conn_may_http3(struct Curl_easy *data,
const struct connectdata *conn)
{
if(!(conn->handler->flags & PROTOPT_SSL)) {
failf(data, "HTTP/3 requested for non-HTTPS URL");
return CURLE_URL_MALFORMAT;
}
#ifndef CURL_DISABLE_PROXY
if(conn->bits.socksproxy) {
failf(data, "HTTP/3 is not supported over a SOCKS proxy");
return CURLE_URL_MALFORMAT;
}
if(conn->bits.httpproxy && conn->bits.tunnel_proxy) {
failf(data, "HTTP/3 is not supported over a HTTP proxy");
return CURLE_URL_MALFORMAT;
}
#endif
return CURLE_OK;
}
#else /* ENABLE_QUIC */
CURLcode Curl_conn_may_http3(struct Curl_easy *data,
const struct connectdata *conn)
{
(void)conn;
(void)data;
DEBUGF(infof(data, "QUIC is not supported in this build"));
return CURLE_NOT_BUILT_IN;
}
#endif /* !ENABLE_QUIC */


@ -43,7 +43,8 @@ CURLcode Curl_qlogdir(struct Curl_easy *data,
CURLcode Curl_cf_quic_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai);
const struct Curl_addrinfo *ai,
int transport);
bool Curl_conn_is_http3(const struct Curl_easy *data,
const struct connectdata *conn,
@ -53,8 +54,11 @@ extern struct Curl_cftype Curl_cft_http3;
#else /* ENABLE_QUIC */
#define Curl_conn_is_http3(a,b,c) FALSE
#define Curl_conn_is_http3(a,b,c) FALSE
#endif /* !ENABLE_QUIC */
CURLcode Curl_conn_may_http3(struct Curl_easy *data,
const struct connectdata *conn);
#endif /* HEADER_CURL_VQUIC_QUIC_H */


@ -58,7 +58,7 @@ struct ssl_backend_data {
unsigned char buf[BR_SSL_BUFSIZE_BIDI];
br_x509_trust_anchor *anchors;
size_t anchors_len;
const char *protocols[2];
const char *protocols[ALPN_ENTRIES_MAX];
/* SSL client context is active */
bool active;
/* size of pending write, yet to be flushed */
@ -691,35 +691,17 @@ static CURLcode bearssl_connect_step1(struct Curl_cfilter *cf,
Curl_ssl_sessionid_unlock(data);
}
if(cf->conn->bits.tls_enable_alpn) {
int cur = 0;
if(connssl->alpn) {
struct alpn_proto_buf proto;
size_t i;
/* NOTE: when adding more protocols here, increase the size of the
* protocols array in `struct ssl_backend_data`.
*/
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
backend->protocols[cur++] = ALPN_HTTP_1_0;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
for(i = 0; i < connssl->alpn->count; ++i) {
backend->protocols[i] = connssl->alpn->entries[i];
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!Curl_ssl_cf_is_proxy(cf) || !cf->conn->bits.tunnel_proxy)
#endif
) {
backend->protocols[cur++] = ALPN_H2;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
backend->protocols[cur++] = ALPN_HTTP_1_1;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
}
br_ssl_engine_set_protocol_names(&backend->ctx.eng,
backend->protocols, cur);
br_ssl_engine_set_protocol_names(&backend->ctx.eng, backend->protocols,
connssl->alpn->count);
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
if((1 == Curl_inet_pton(AF_INET, hostname, &addr))
@ -868,26 +850,11 @@ static CURLcode bearssl_connect_step3(struct Curl_cfilter *cf,
DEBUGASSERT(backend);
if(cf->conn->bits.tls_enable_alpn) {
const char *protocol;
const char *proto;
protocol = br_ssl_engine_get_selected_protocol(&backend->ctx.eng);
if(protocol) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_1STR, protocol);
#ifdef USE_HTTP2
if(!strcmp(protocol, ALPN_H2))
cf->conn->alpn = CURL_HTTP_VERSION_2;
else
#endif
if(!strcmp(protocol, ALPN_HTTP_1_1))
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
else
infof(data, "ALPN, unrecognized protocol %s", protocol);
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
}
else
infof(data, VTLS_INFOF_NO_ALPN);
proto = br_ssl_engine_get_selected_protocol(&backend->ctx.eng);
Curl_alpn_set_negotiated(cf, data, (const unsigned char *)proto,
proto? strlen(proto) : 0);
}
if(ssl_config->primary.sessionid) {
@ -983,7 +950,7 @@ static CURLcode bearssl_connect_common(struct Curl_cfilter *cf,
{
CURLcode ret;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
timediff_t timeout_ms;
int what;
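BearSSL above (and GnuTLS below) now hand ALPN to the shared vtls helpers. A condensed sketch of the backend contract, using only names visible in these hunks; the wrapper and the `negotiated` parameter are placeholders:

static void example_alpn(struct Curl_cfilter *cf, struct Curl_easy *data,
                         struct ssl_connect_data *connssl,
                         const char *negotiated)
{
  /* offer: the generic code filled connssl->alpn, the backend registers
   * each entry with its TLS library and logs the standardized line */
  if(connssl->alpn) {
    struct alpn_proto_buf proto;
    size_t i;
    for(i = 0; i < connssl->alpn->count; ++i) {
      /* backend-specific: register connssl->alpn->entries[i] here */
    }
    Curl_alpn_to_proto_str(&proto, connssl->alpn);
    infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
  }
  /* after the handshake: report the selection (or NULL for none); the
   * common helper sets conn->alpn and the multiplex/bundle state */
  Curl_alpn_set_negotiated(cf, data, (const unsigned char *)negotiated,
                           negotiated? strlen(negotiated) : 0);
}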


@ -499,7 +499,7 @@ static void cancel_async_handshake(struct Curl_cfilter *cf,
(void)data;
DEBUGASSERT(BACKEND);
if(QsoCancelOperation(cf->conn->sock[cf->sockindex], 0) > 0)
if(QsoCancelOperation(Curl_conn_cf_get_socket(cf, data), 0) > 0)
QsoWaitForIOCompletion(BACKEND->iocport, &cstat, (struct timeval *) NULL);
}
@ -532,7 +532,7 @@ static int pipe_ssloverssl(struct Curl_cfilter *cf, int directions)
DEBUGASSERT(connssl_next->backend);
n = 1;
fds[0].fd = BACKEND->remotefd;
fds[1].fd = cf->conn->sock[cf->sockindex];
fds[1].fd = Curl_conn_cf_get_socket(cf, data);
if(directions & SOS_READ) {
fds[0].events |= POLLOUT;
@ -847,7 +847,7 @@ static CURLcode gskit_connect_step1(struct Curl_cfilter *cf,
result = set_numeric(data, BACKEND->handle, GSK_OS400_READ_TIMEOUT, 1);
if(!result)
result = set_numeric(data, BACKEND->handle, GSK_FD, BACKEND->localfd >= 0?
BACKEND->localfd: cf->conn->sock[cf->sockindex]);
BACKEND->localfd: Curl_conn_cf_get_socket(cf, data));
if(!result)
result = set_ciphers(cf, data, BACKEND->handle, &protoflags);
if(!protoflags) {
@ -1208,7 +1208,7 @@ static int gskit_shutdown(struct Curl_cfilter *cf,
close_one(cf, data);
rc = 0;
what = SOCKET_READABLE(cf->conn->sock[cf->sockindex],
what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data),
SSL_SHUTDOWN_TIMEOUT);
while(loop--) {
@ -1230,7 +1230,7 @@ static int gskit_shutdown(struct Curl_cfilter *cf,
notify alert from the server. No way to gsk_secure_soc_read() now, so
use read(). */
nread = read(cf->conn->sock[cf->sockindex], buf, sizeof(buf));
nread = read(Curl_conn_cf_get_socket(cf, data), buf, sizeof(buf));
if(nread < 0) {
char buffer[STRERROR_LEN];
@ -1241,7 +1241,7 @@ static int gskit_shutdown(struct Curl_cfilter *cf,
if(nread <= 0)
break;
what = SOCKET_READABLE(cf->conn->sock[cf->sockindex], 0);
what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data), 0);
}
return rc;


@ -214,7 +214,7 @@ static CURLcode handshake(struct Curl_cfilter *cf,
struct ssl_connect_data *connssl = cf->ctx;
struct ssl_backend_data *backend = connssl->backend;
gnutls_session_t session;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
DEBUGASSERT(backend);
session = backend->gtls.session;
@ -698,37 +698,22 @@ gtls_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
if(result)
return result;
if(cf->conn->bits.tls_enable_alpn) {
int cur = 0;
gnutls_datum_t protocols[2];
if(connssl->alpn) {
struct alpn_proto_buf proto;
gnutls_datum_t alpn[ALPN_ENTRIES_MAX];
size_t i;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
protocols[cur].data = (unsigned char *)ALPN_HTTP_1_0;
protocols[cur++].size = ALPN_HTTP_1_0_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
for(i = 0; i < connssl->alpn->count; ++i) {
alpn[i].data = (unsigned char *)connssl->alpn->entries[i];
alpn[i].size = (unsigned)strlen(connssl->alpn->entries[i]);
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!Curl_ssl_cf_is_proxy(cf) || !cf->conn->bits.tunnel_proxy)
#endif
) {
protocols[cur].data = (unsigned char *)ALPN_H2;
protocols[cur++].size = ALPN_H2_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
protocols[cur].data = (unsigned char *)ALPN_HTTP_1_1;
protocols[cur++].size = ALPN_HTTP_1_1_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
}
if(gnutls_alpn_set_protocols(backend->gtls.session, protocols, cur, 0)) {
if(gnutls_alpn_set_protocols(backend->gtls.session, alpn,
(unsigned)connssl->alpn->count, 0)) {
failf(data, "failed setting ALPN");
return CURLE_SSL_CONNECT_ERROR;
}
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
/* This might be a reconnect, so we check for a session ID in the cache
@ -1272,28 +1257,10 @@ static CURLcode gtls_verifyserver(struct Curl_cfilter *cf,
int rc;
rc = gnutls_alpn_get_selected_protocol(session, &proto);
if(rc == 0) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR, proto.size,
proto.data);
#ifdef USE_HTTP2
if(proto.size == ALPN_H2_LENGTH &&
!memcmp(ALPN_H2, proto.data,
ALPN_H2_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(proto.size == ALPN_HTTP_1_1_LENGTH &&
!memcmp(ALPN_HTTP_1_1, proto.data, ALPN_HTTP_1_1_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
}
if(rc == 0)
Curl_alpn_set_negotiated(cf, data, proto.data, proto.size);
else
infof(data, VTLS_INFOF_NO_ALPN);
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
Curl_alpn_set_negotiated(cf, data, NULL, 0);
}
if(ssl_config->primary.sessionid) {
@ -1517,7 +1484,7 @@ static int gtls_shutdown(struct Curl_cfilter *cf,
char buf[120];
while(!done) {
int what = SOCKET_READABLE(cf->conn->sock[cf->sockindex],
int what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data),
SSL_SHUTDOWN_TIMEOUT);
if(what > 0) {
/* Something to read, let's do it and hope that it is the close


@ -646,19 +646,13 @@ mbed_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
}
#ifdef HAS_ALPN
if(cf->conn->bits.tls_enable_alpn) {
const char **p = &backend->protocols[0];
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
*p++ = ALPN_HTTP_1_0;
if(connssl->alpn) {
struct alpn_proto_buf proto;
size_t i;
for(i = 0; i < connssl->alpn->count; ++i) {
backend->protocols[i] = connssl->alpn->entries[i];
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2)
*p++ = ALPN_H2;
#endif
*p++ = ALPN_HTTP_1_1;
}
*p = NULL;
/* this function doesn't clone the protocols array, which is why we need
to keep it around */
if(mbedtls_ssl_conf_alpn_protocols(&backend->config,
@ -666,8 +660,8 @@ mbed_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
failf(data, "Failed setting ALPN protocols");
return CURLE_SSL_CONNECT_ERROR;
}
for(p = &backend->protocols[0]; *p; ++p)
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, *p);
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
#endif
@ -847,28 +841,11 @@ mbed_connect_step2(struct Curl_cfilter *cf, struct Curl_easy *data)
}
#ifdef HAS_ALPN
if(cf->conn->bits.tls_enable_alpn) {
const char *next_protocol = mbedtls_ssl_get_alpn_protocol(&backend->ssl);
if(connssl->alpn) {
const char *proto = mbedtls_ssl_get_alpn_protocol(&backend->ssl);
if(next_protocol) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_1STR, next_protocol);
#ifdef USE_HTTP2
if(!strncmp(next_protocol, ALPN_H2, ALPN_H2_LENGTH) &&
!next_protocol[ALPN_H2_LENGTH]) {
cf->conn->alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(!strncmp(next_protocol, ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH) &&
!next_protocol[ALPN_HTTP_1_1_LENGTH]) {
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
}
else {
infof(data, VTLS_INFOF_NO_ALPN);
}
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
Curl_alpn_set_negotiated(cf, data, (const unsigned char *)proto,
proto? strlen(proto) : 0);
}
#endif
@ -1084,7 +1061,7 @@ mbed_connect_common(struct Curl_cfilter *cf, struct Curl_easy *data,
{
CURLcode retcode;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
timediff_t timeout_ms;
int what;


@ -873,11 +873,11 @@ static void HandshakeCallback(PRFileDesc *sock, void *arg)
#endif
case SSL_NEXT_PROTO_NO_SUPPORT:
case SSL_NEXT_PROTO_NO_OVERLAP:
infof(data, VTLS_INFOF_NO_ALPN);
Curl_alpn_set_negotiated(cf, data, NULL, 0);
return;
#ifdef SSL_ENABLE_ALPN
case SSL_NEXT_PROTO_SELECTED:
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR, buflen, buf);
Curl_alpn_set_negotiated(cf, data, buf, buflen);
break;
#endif
default:
@ -885,29 +885,6 @@ static void HandshakeCallback(PRFileDesc *sock, void *arg)
break;
}
#ifdef USE_HTTP2
if(buflen == ALPN_H2_LENGTH &&
!memcmp(ALPN_H2, buf, ALPN_H2_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(buflen == ALPN_HTTP_1_1_LENGTH &&
!memcmp(ALPN_HTTP_1_1, buf, ALPN_HTTP_1_1_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
else if(buflen == ALPN_HTTP_1_0_LENGTH &&
!memcmp(ALPN_HTTP_1_0, buf, ALPN_HTTP_1_0_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_0;
}
/* This callback might get called when PR_Recv() is used within
* close_one() during a connection shutdown. At that point there might not
* be any "bundle" associated with the connection anymore.
*/
if(conn->bundle)
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
}
}
@ -1901,7 +1878,7 @@ static CURLcode nss_setup_connect(struct Curl_cfilter *cf,
PRFileDesc *nspr_io_stub = NULL;
PRBool ssl_no_cache;
PRBool ssl_cbc_random_iv;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
struct ssl_connect_data *connssl = cf->ctx;
struct ssl_backend_data *backend = connssl->backend;
struct ssl_primary_config *conn_config = Curl_ssl_cf_get_primary_config(cf);
@ -2167,34 +2144,17 @@ static CURLcode nss_setup_connect(struct Curl_cfilter *cf,
#endif
#if defined(SSL_ENABLE_ALPN)
if(cf->conn->bits.tls_enable_alpn) {
int cur = 0;
unsigned char protocols[128];
if(connssl->alpn) {
struct alpn_proto_buf proto;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
protocols[cur++] = ALPN_HTTP_1_0_LENGTH;
memcpy(&protocols[cur], ALPN_HTTP_1_0, ALPN_HTTP_1_0_LENGTH);
cur += ALPN_HTTP_1_0_LENGTH;
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!Curl_ssl_cf_is_proxy(cf) || !cf->conn->bits.tunnel_proxy)
#endif
) {
protocols[cur++] = ALPN_H2_LENGTH;
memcpy(&protocols[cur], ALPN_H2, ALPN_H2_LENGTH);
cur += ALPN_H2_LENGTH;
}
#endif
protocols[cur++] = ALPN_HTTP_1_1_LENGTH;
memcpy(&protocols[cur], ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH);
cur += ALPN_HTTP_1_1_LENGTH;
}
if(SSL_SetNextProtoNego(backend->handle, protocols, cur) != SECSuccess)
result = Curl_alpn_to_proto_buf(&proto, connssl->alpn);
if(result || SSL_SetNextProtoNego(backend->handle, proto.data, proto.len)
!= SECSuccess) {
failf(data, "Error setting ALPN");
goto error;
}
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
#endif


@ -1814,7 +1814,7 @@ static int ossl_check_cxn(struct Curl_cfilter *cf, struct Curl_easy *data)
#ifdef MSG_PEEK
char buf;
ssize_t nread;
nread = recv((RECV_TYPE_ARG1)cf->conn->sock[cf->sockindex],
nread = recv((RECV_TYPE_ARG1)Curl_conn_cf_get_socket(cf, data),
(RECV_TYPE_ARG2)&buf, (RECV_TYPE_ARG3)1,
(RECV_TYPE_ARG4)MSG_PEEK);
if(nread == 0)
@ -2008,7 +2008,7 @@ static int ossl_shutdown(struct Curl_cfilter *cf,
if(backend->handle) {
buffsize = (int)sizeof(buf);
while(!done && loop--) {
int what = SOCKET_READABLE(cf->conn->sock[cf->sockindex],
int what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data),
SSL_SHUTDOWN_TIMEOUT);
if(what > 0) {
ERR_clear_error();
@ -3651,43 +3651,17 @@ static CURLcode ossl_connect_step1(struct Curl_cfilter *cf,
SSL_CTX_set_options(backend->ctx, ctx_options);
#ifdef HAS_ALPN
if(cf->conn->bits.tls_enable_alpn) {
int cur = 0;
unsigned char protocols[128];
if(connssl->alpn) {
struct alpn_proto_buf proto;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
protocols[cur++] = ALPN_HTTP_1_0_LENGTH;
memcpy(&protocols[cur], ALPN_HTTP_1_0, ALPN_HTTP_1_0_LENGTH);
cur += ALPN_HTTP_1_0_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!Curl_ssl_cf_is_proxy(cf) || !cf->conn->bits.tunnel_proxy)
#endif
) {
protocols[cur++] = ALPN_H2_LENGTH;
memcpy(&protocols[cur], ALPN_H2, ALPN_H2_LENGTH);
cur += ALPN_H2_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
protocols[cur++] = ALPN_HTTP_1_1_LENGTH;
memcpy(&protocols[cur], ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH);
cur += ALPN_HTTP_1_1_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
}
/* expects length prefixed preference ordered list of protocols in wire
* format
*/
if(SSL_CTX_set_alpn_protos(backend->ctx, protocols, cur)) {
result = Curl_alpn_to_proto_buf(&proto, connssl->alpn);
if(result ||
SSL_CTX_set_alpn_protos(backend->ctx, proto.data, proto.len)) {
failf(data, "Error setting ALPN");
return CURLE_SSL_CONNECT_ERROR;
}
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
#endif
@ -4038,26 +4012,8 @@ static CURLcode ossl_connect_step2(struct Curl_cfilter *cf,
const unsigned char *neg_protocol;
unsigned int len;
SSL_get0_alpn_selected(backend->handle, &neg_protocol, &len);
if(len) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR, len, neg_protocol);
#ifdef USE_HTTP2
if(len == ALPN_H2_LENGTH &&
!memcmp(ALPN_H2, neg_protocol, len)) {
cf->conn->alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(len == ALPN_HTTP_1_1_LENGTH &&
!memcmp(ALPN_HTTP_1_1, neg_protocol, ALPN_HTTP_1_1_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
}
else
infof(data, VTLS_INFOF_NO_ALPN);
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
return Curl_alpn_set_negotiated(cf, data, neg_protocol, len);
}
#endif
@ -4374,7 +4330,7 @@ static CURLcode ossl_connect_common(struct Curl_cfilter *cf,
{
CURLcode result = CURLE_OK;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
int what;
/* check if the connection has already been established */


@ -354,34 +354,19 @@ cr_init_backend(struct Curl_cfilter *cf, struct Curl_easy *data,
rconn = backend->conn;
config_builder = rustls_client_config_builder_new();
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
rustls_slice_bytes alpn[] = {
{ (const uint8_t *)ALPN_HTTP_1_0, ALPN_HTTP_1_0_LENGTH }
};
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
rustls_client_config_builder_set_alpn_protocols(config_builder, alpn, 1);
}
else {
rustls_slice_bytes alpn[2] = {
{ (const uint8_t *)ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH },
{ (const uint8_t *)ALPN_H2, ALPN_H2_LENGTH },
};
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!Curl_ssl_cf_is_proxy(cf) || !cf->conn->bits.tunnel_proxy)
#endif
) {
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
rustls_client_config_builder_set_alpn_protocols(config_builder, alpn, 2);
}
else
#endif
{
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
rustls_client_config_builder_set_alpn_protocols(config_builder, alpn, 1);
if(connssl->alpn) {
struct alpn_proto_buf proto;
rustls_slice_bytes alpn[ALPN_ENTRIES_MAX];
size_t i;
for(i = 0; i < connssl->alpn->count; ++i) {
alpn[i].data = (const uint8_t *)connssl->alpn->entries[i];
alpn[i].len = strlen(connssl->alpn->entries[i]);
}
rustls_client_config_builder_set_alpn_protocols(config_builder, alpn,
connssl->alpn->count);
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
if(!verifypeer) {
rustls_client_config_builder_dangerous_set_certificate_verifier(
@ -457,29 +442,7 @@ cr_set_negotiated_alpn(struct Curl_cfilter *cf, struct Curl_easy *data,
size_t len = 0;
rustls_connection_get_alpn_protocol(rconn, &protocol, &len);
if(!protocol) {
infof(data, VTLS_INFOF_NO_ALPN);
return;
}
#ifdef USE_HTTP2
if(len == ALPN_H2_LENGTH && 0 == memcmp(ALPN_H2, protocol, len)) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_1STR, ALPN_H2);
cf->conn->alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(len == ALPN_HTTP_1_1_LENGTH &&
0 == memcmp(ALPN_HTTP_1_1, protocol, len)) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_1STR, ALPN_HTTP_1_1);
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
else {
infof(data, "ALPN, negotiated an unrecognized protocol");
}
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
Curl_alpn_set_negotiated(cf, data, protocol, len);
}
static CURLcode
@ -487,7 +450,7 @@ cr_connect_nonblocking(struct Curl_cfilter *cf,
struct Curl_easy *data, bool *done)
{
struct ssl_connect_data *const connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
struct ssl_backend_data *const backend = connssl->backend;
struct rustls_connection *rconn = NULL;
CURLcode tmperr = CURLE_OK;
@ -591,7 +554,7 @@ cr_get_select_socks(struct Curl_cfilter *cf, struct Curl_easy *data,
curl_socket_t *socks)
{
struct ssl_connect_data *const connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
struct ssl_backend_data *const backend = connssl->backend;
struct rustls_connection *rconn = NULL;


@ -1105,7 +1105,7 @@ schannel_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
#ifdef HAS_ALPN
/* ALPN is only supported on Windows 8.1 / Server 2012 R2 and above.
Also it doesn't seem to be supported for Wine, see curl bug #983. */
backend->use_alpn = cf->conn->bits.tls_enable_alpn &&
backend->use_alpn = connssl->alpn &&
!GetProcAddress(GetModuleHandle(TEXT("ntdll")),
"wine_get_version") &&
curlx_verify_windows_version(6, 3, 0, PLATFORM_WINNT,
@ -1196,6 +1196,7 @@ schannel_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
int list_start_index = 0;
unsigned int *extension_len = NULL;
unsigned short* list_len = NULL;
struct alpn_proto_buf proto;
/* The first four bytes will be an unsigned int indicating number
of bytes of data in the rest of the buffer. */
@ -1215,33 +1216,22 @@ schannel_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
list_start_index = cur;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
alpn_buffer[cur++] = ALPN_HTTP_1_0_LENGTH;
memcpy(&alpn_buffer[cur], ALPN_HTTP_1_0, ALPN_HTTP_1_0_LENGTH);
cur += ALPN_HTTP_1_0_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2) {
alpn_buffer[cur++] = ALPN_H2_LENGTH;
memcpy(&alpn_buffer[cur], ALPN_H2, ALPN_H2_LENGTH);
cur += ALPN_H2_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
alpn_buffer[cur++] = ALPN_HTTP_1_1_LENGTH;
memcpy(&alpn_buffer[cur], ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH);
cur += ALPN_HTTP_1_1_LENGTH;
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
result = Curl_alpn_to_proto_buf(&proto, connssl->alpn);
if(result) {
failf(data, "Error setting ALPN");
return CURLE_SSL_CONNECT_ERROR;
}
memcpy(&alpn_buffer[cur], proto.data, proto.len);
cur += proto.len;
*list_len = curlx_uitous(cur - list_start_index);
*extension_len = *list_len + sizeof(unsigned int) + sizeof(unsigned short);
InitSecBuffer(&inbuf, SECBUFFER_APPLICATION_PROTOCOLS, alpn_buffer, cur);
InitSecBufferDesc(&inbuf_desc, &inbuf, 1);
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
else {
InitSecBuffer(&inbuf, SECBUFFER_EMPTY, NULL, 0);
@ -1735,40 +1725,23 @@ schannel_connect_step3(struct Curl_cfilter *cf, struct Curl_easy *data)
if(alpn_result.ProtoNegoStatus ==
SecApplicationProtocolNegotiationStatus_Success) {
unsigned char alpn = 0;
unsigned char prev_alpn = cf->conn->alpn;
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR,
alpn_result.ProtocolIdSize, alpn_result.ProtocolId);
#ifdef USE_HTTP2
if(alpn_result.ProtocolIdSize == ALPN_H2_LENGTH &&
!memcmp(ALPN_H2, alpn_result.ProtocolId, ALPN_H2_LENGTH)) {
alpn = CURL_HTTP_VERSION_2;
}
else
#endif
if(alpn_result.ProtocolIdSize == ALPN_HTTP_1_1_LENGTH &&
!memcmp(ALPN_HTTP_1_1, alpn_result.ProtocolId,
ALPN_HTTP_1_1_LENGTH)) {
alpn = CURL_HTTP_VERSION_1_1;
}
Curl_alpn_set_negotiated(cf, data, alpn_result.ProtocolId,
alpn_result.ProtocolIdSize);
if(backend->recv_renegotiating) {
if(alpn != cf->conn->alpn) {
if(prev_alpn != cf->conn->alpn &&
prev_alpn != CURL_HTTP_VERSION_NONE) {
/* Renegotiation selected a different protocol now, we cannot
* deal with this */
failf(data, "schannel: server selected an ALPN protocol too late");
return CURLE_SSL_CONNECT_ERROR;
}
}
else
cf->conn->alpn = alpn;
}
else {
if(!backend->recv_renegotiating)
infof(data, VTLS_INFOF_NO_ALPN);
}
if(!backend->recv_renegotiating) {
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
Curl_alpn_set_negotiated(cf, data, NULL, 0);
}
}
#endif
@ -1849,7 +1822,7 @@ schannel_connect_common(struct Curl_cfilter *cf,
{
CURLcode result;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
timediff_t timeout_ms;
int what;
@ -2064,7 +2037,7 @@ schannel_send(struct Curl_cfilter *cf, struct Curl_easy *data,
}
else if(!timeout_ms)
timeout_ms = TIMEDIFF_T_MAX;
what = SOCKET_WRITABLE(cf->conn->sock[cf->sockindex], timeout_ms);
what = SOCKET_WRITABLE(Curl_conn_cf_get_socket(cf, data), timeout_ms);
if(what < 0) {
/* fatal error */
failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);


@ -1636,7 +1636,6 @@ static CURLcode sectransp_connect_step1(struct Curl_cfilter *cf,
const bool verifypeer = conn_config->verifypeer;
char * const ssl_cert = ssl_config->primary.clientcert;
const struct curl_blob *ssl_cert_blob = ssl_config->primary.cert_blob;
bool isproxy = Curl_ssl_cf_is_proxy(cf);
#ifdef ENABLE_IPV6
struct in6_addr addr;
#else
@ -1797,38 +1796,28 @@ static CURLcode sectransp_connect_step1(struct Curl_cfilter *cf,
#endif /* CURL_BUILD_MAC_10_8 || CURL_BUILD_IOS */
#if (CURL_BUILD_MAC_10_13 || CURL_BUILD_IOS_11) && HAVE_BUILTIN_AVAILABLE == 1
if(cf->conn->bits.tls_enable_alpn) {
if(connssl->alpn) {
if(__builtin_available(macOS 10.13.4, iOS 11, tvOS 11, *)) {
struct alpn_proto_buf proto;
size_t i;
CFStringRef cstr;
CFMutableArrayRef alpnArr = CFArrayCreateMutable(NULL, 0,
&kCFTypeArrayCallBacks);
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
CFArrayAppendValue(alpnArr, CFSTR(ALPN_HTTP_1_0));
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
for(i = 0; i < connssl->alpn->count; ++i) {
cstr = CFStringCreateWithCString(NULL, connssl->alpn->entries[i],
kCFStringEncodingUTF8);
if(!cstr)
return CURLE_OUT_OF_MEMORY;
CFArrayAppendValue(alpnArr, cstr);
CFRelease(cstr);
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2
#ifndef CURL_DISABLE_PROXY
&& (!isproxy || !cf->conn->bits.tunnel_proxy)
#endif
) {
CFArrayAppendValue(alpnArr, CFSTR(ALPN_H2));
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
CFArrayAppendValue(alpnArr, CFSTR(ALPN_HTTP_1_1));
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
}
/* expects length prefixed preference ordered list of protocols in wire
* format
*/
err = SSLSetALPNProtocols(backend->ssl_ctx, alpnArr);
if(err != noErr)
infof(data, "WARNING: failed to set ALPN protocols; OSStatus %d",
err);
CFRelease(alpnArr);
Curl_alpn_to_proto_str(&proto, connssl->alpn);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
}
#endif
@ -3018,7 +3007,7 @@ sectransp_connect_common(struct Curl_cfilter *cf, struct Curl_easy *data,
{
CURLcode result;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
int what;
/* check if the connection has already been established */
@ -3196,7 +3185,8 @@ static int sectransp_shutdown(struct Curl_cfilter *cf,
rc = 0;
what = SOCKET_READABLE(cf->conn->sock[cf->sockindex], SSL_SHUTDOWN_TIMEOUT);
what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data),
SSL_SHUTDOWN_TIMEOUT);
DEBUGF(LOG_CF(data, cf, "shutdown"));
while(loop--) {
@ -3225,7 +3215,7 @@ static int sectransp_shutdown(struct Curl_cfilter *cf,
if(nread <= 0)
break;
what = SOCKET_READABLE(cf->conn->sock[cf->sockindex], 0);
what = SOCKET_READABLE(Curl_conn_cf_get_socket(cf, data), 0);
}
return rc;


@ -290,7 +290,8 @@ static bool ssl_prefs_check(struct Curl_easy *data)
return TRUE;
}
static struct ssl_connect_data *cf_ctx_new(struct Curl_easy *data)
static struct ssl_connect_data *cf_ctx_new(struct Curl_easy *data,
const struct alpn_spec *alpn)
{
struct ssl_connect_data *ctx;
@ -299,6 +300,7 @@ static struct ssl_connect_data *cf_ctx_new(struct Curl_easy *data)
if(!ctx)
return NULL;
ctx->alpn = alpn;
ctx->backend = calloc(1, Curl_ssl->sizeof_ssl_backend_data);
if(!ctx->backend) {
free(ctx);
@ -329,7 +331,6 @@ static CURLcode ssl_connect(struct Curl_cfilter *cf, struct Curl_easy *data)
result = Curl_ssl->connect_blocking(cf, data);
if(!result) {
Curl_pgrsTime(data, TIMER_APPCONNECT); /* SSL is connected */
DEBUGASSERT(connssl->state == ssl_connection_complete);
}
@ -605,19 +606,20 @@ int Curl_ssl_get_select_socks(struct Curl_cfilter *cf, struct Curl_easy *data,
curl_socket_t *socks)
{
struct ssl_connect_data *connssl = cf->ctx;
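/* ask the filter below for the socket to wait on, instead of reading
 * cf->conn->sock[] directly */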
curl_socket_t sock = Curl_conn_cf_get_socket(cf->next, data);
(void)data;
if(connssl->connecting_state == ssl_connect_2_writing) {
/* write mode */
socks[0] = cf->conn->sock[FIRSTSOCKET];
return GETSOCK_WRITESOCK(0);
if(sock != CURL_SOCKET_BAD) {
if(connssl->connecting_state == ssl_connect_2_writing) {
/* write mode */
socks[0] = sock;
return GETSOCK_WRITESOCK(0);
}
if(connssl->connecting_state == ssl_connect_2_reading) {
/* read mode */
socks[0] = sock;
return GETSOCK_READSOCK(0);
}
}
if(connssl->connecting_state == ssl_connect_2_reading) {
/* read mode */
socks[0] = cf->conn->sock[FIRSTSOCKET];
return GETSOCK_READSOCK(0);
}
return GETSOCK_BLANK;
}
@ -1534,8 +1536,7 @@ static CURLcode ssl_cf_connect(struct Curl_cfilter *cf,
if(!result && *done) {
cf->connected = TRUE;
if(cf->sockindex == FIRSTSOCKET && !Curl_ssl_cf_is_proxy(cf))
Curl_pgrsTime(data, TIMER_APPCONNECT); /* SSL is connected */
connssl->handshake_done = Curl_now();
DEBUGASSERT(connssl->state == ssl_connection_complete);
}
out:
@ -1603,11 +1604,16 @@ static CURLcode ssl_cf_cntrl(struct Curl_cfilter *cf,
struct Curl_easy *data,
int event, int arg1, void *arg2)
{
struct ssl_connect_data *connssl = cf->ctx;
struct cf_call_data save;
(void)arg1;
(void)arg2;
switch(event) {
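/* report the TLS handshake (APPCONNECT) time using the timestamp that
 * was stored in connssl->handshake_done when the handshake finished */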
case CF_CTRL_CONN_REPORT_STATS:
if(cf->sockindex == FIRSTSOCKET && !Curl_ssl_cf_is_proxy(cf))
Curl_pgrsTimeWas(data, TIMER_APPCONNECT, connssl->handshake_done);
break;
case CF_CTRL_DATA_ATTACH:
if(Curl_ssl->attach_data) {
CF_DATA_SAVE(save, cf, data);
@ -1683,14 +1689,16 @@ struct Curl_cftype Curl_cft_ssl_proxy = {
};
static CURLcode cf_ssl_create(struct Curl_cfilter **pcf,
struct Curl_easy *data)
struct Curl_easy *data,
struct connectdata *conn)
{
struct Curl_cfilter *cf = NULL;
struct ssl_connect_data *ctx;
CURLcode result;
DEBUGASSERT(data->conn);
ctx = cf_ctx_new(data);
ctx = cf_ctx_new(data, Curl_alpn_get_spec(data, conn));
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
@ -1712,7 +1720,7 @@ CURLcode Curl_ssl_cfilter_add(struct Curl_easy *data,
struct Curl_cfilter *cf;
CURLcode result;
result = cf_ssl_create(&cf, data);
result = cf_ssl_create(&cf, data, conn);
if(!result)
Curl_conn_cf_add(data, conn, sockindex, cf);
return result;
@ -1724,7 +1732,7 @@ CURLcode Curl_cf_ssl_insert_after(struct Curl_cfilter *cf_at,
struct Curl_cfilter *cf;
CURLcode result;
result = cf_ssl_create(&cf, data);
result = cf_ssl_create(&cf, data, cf_at->conn);
if(!result)
Curl_conn_cf_insert_after(cf_at, cf);
return result;
@ -1732,18 +1740,18 @@ CURLcode Curl_cf_ssl_insert_after(struct Curl_cfilter *cf_at,
#ifndef CURL_DISABLE_PROXY
static CURLcode cf_ssl_proxy_create(struct Curl_cfilter **pcf,
struct Curl_easy *data)
struct Curl_easy *data,
struct connectdata *conn)
{
struct Curl_cfilter *cf = NULL;
struct ssl_connect_data *ctx;
CURLcode result;
ctx = cf_ctx_new(data);
ctx = cf_ctx_new(data, Curl_alpn_get_proxy_spec(data, conn));
if(!ctx) {
result = CURLE_OUT_OF_MEMORY;
goto out;
}
result = Curl_cf_create(&cf, &Curl_cft_ssl_proxy, ctx);
out:
@ -1760,7 +1768,7 @@ CURLcode Curl_ssl_cfilter_proxy_add(struct Curl_easy *data,
struct Curl_cfilter *cf;
CURLcode result;
result = cf_ssl_proxy_create(&cf, data);
result = cf_ssl_proxy_create(&cf, data, conn);
if(!result)
Curl_conn_cf_add(data, conn, sockindex, cf);
return result;
@ -1772,7 +1780,7 @@ CURLcode Curl_cf_ssl_proxy_insert_after(struct Curl_cfilter *cf_at,
struct Curl_cfilter *cf;
CURLcode result;
result = cf_ssl_proxy_create(&cf, data);
result = cf_ssl_proxy_create(&cf, data, cf_at->conn);
if(!result)
Curl_conn_cf_insert_after(cf_at, cf);
return result;
@ -1900,4 +1908,136 @@ struct Curl_cfilter *Curl_ssl_cf_get_ssl(struct Curl_cfilter *cf)
return NULL;
}
static const struct alpn_spec ALPN_SPEC_H10 = {
{ ALPN_HTTP_1_0 }, 1
};
static const struct alpn_spec ALPN_SPEC_H11 = {
{ ALPN_HTTP_1_1 }, 1
};
#ifdef USE_HTTP2
static const struct alpn_spec ALPN_SPEC_H2_H11 = {
{ ALPN_H2, ALPN_HTTP_1_1 }, 2
};
#endif
const struct alpn_spec *
Curl_alpn_get_spec(struct Curl_easy *data, struct connectdata *conn)
{
if(!conn->bits.tls_enable_alpn)
return NULL;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0)
return &ALPN_SPEC_H10;
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2)
return &ALPN_SPEC_H2_H11;
#endif
return &ALPN_SPEC_H11;
}
const struct alpn_spec *
Curl_alpn_get_proxy_spec(struct Curl_easy *data, struct connectdata *conn)
{
if(!conn->bits.tls_enable_alpn)
return NULL;
if(data->state.httpwant == CURL_HTTP_VERSION_1_0)
return &ALPN_SPEC_H10;
return &ALPN_SPEC_H11;
}
CURLcode Curl_alpn_to_proto_buf(struct alpn_proto_buf *buf,
const struct alpn_spec *spec)
{
size_t i, len;
int off = 0;
unsigned char blen;
memset(buf, 0, sizeof(*buf));
for(i = 0; spec && i < spec->count; ++i) {
len = strlen(spec->entries[i]);
if(len > 255)
return CURLE_FAILED_INIT;
blen = (unsigned char)len;
if(off + blen + 1 >= (int)sizeof(buf->data))
return CURLE_FAILED_INIT;
buf->data[off++] = blen;
memcpy(buf->data + off, spec->entries[i], blen);
off += blen;
}
buf->len = off;
return CURLE_OK;
}
CURLcode Curl_alpn_to_proto_str(struct alpn_proto_buf *buf,
const struct alpn_spec *spec)
{
size_t i, len;
size_t off = 0;
memset(buf, 0, sizeof(*buf));
for(i = 0; spec && i < spec->count; ++i) {
len = strlen(spec->entries[i]);
if(len > 255)
return CURLE_FAILED_INIT;
if(off + len + 2 >= (int)sizeof(buf->data))
return CURLE_FAILED_INIT;
if(off)
buf->data[off++] = ',';
memcpy(buf->data + off, spec->entries[i], len);
off += len;
}
buf->data[off] = '\0';
buf->len = (int)off;
return CURLE_OK;
}
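/* Illustrative example (not part of the change): for the default
 * ALPN_SPEC_H2_H11 spec { "h2", "http/1.1" } defined above,
 * Curl_alpn_to_proto_buf() fills the length-prefixed wire format
 *   { 0x02,'h','2', 0x08,'h','t','t','p','/','1','.','1' }  (len = 12)
 * as expected by calls like SSL_CTX_set_alpn_protos(), while
 * Curl_alpn_to_proto_str() fills the display string
 *   "h2,http/1.1"  (len = 11)
 * used for the VTLS_INFOF_ALPN_OFFER_1STR log line. */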
CURLcode Curl_alpn_set_negotiated(struct Curl_cfilter *cf,
struct Curl_easy *data,
const unsigned char *proto,
size_t proto_len)
{
int can_multi = 0;
if(proto && proto_len) {
if(proto_len == ALPN_HTTP_1_1_LENGTH &&
!memcmp(ALPN_HTTP_1_1, proto, ALPN_HTTP_1_1_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
}
else if(proto_len == ALPN_HTTP_1_0_LENGTH &&
!memcmp(ALPN_HTTP_1_0, proto, ALPN_HTTP_1_0_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_1_0;
}
#ifdef USE_HTTP2
else if(proto_len == ALPN_H2_LENGTH &&
!memcmp(ALPN_H2, proto, ALPN_H2_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_2;
can_multi = 1;
}
#endif
#ifdef USE_HTTP3
else if(proto_len == ALPN_H3_LENGTH &&
!memcmp(ALPN_H3, proto, ALPN_H3_LENGTH)) {
cf->conn->alpn = CURL_HTTP_VERSION_3;
can_multi = 1;
}
#endif
else {
cf->conn->alpn = CURL_HTTP_VERSION_NONE;
failf(data, "unsupported ALPN protocol: '%.*s'", proto_len, proto);
/* TODO: do we want to fail this? Previous code just ignored it and
* some vtls backends even ignore the return code of this function. */
/* return CURLE_NOT_BUILT_IN; */
goto out;
}
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR, proto_len, proto);
}
else {
cf->conn->alpn = CURL_HTTP_VERSION_NONE;
infof(data, VTLS_INFOF_NO_ALPN);
}
out:
Curl_multiuse_state(data, can_multi? BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
return CURLE_OK;
}
#endif /* USE_SSL */


@ -27,7 +27,6 @@
struct connectdata;
struct ssl_config_data;
struct ssl_connect_data;
struct ssl_primary_config;
struct Curl_ssl_session;
@ -73,6 +72,49 @@ CURLsslset Curl_init_sslset_nolock(curl_sslbackend id, const char *name,
#define ALPN_HTTP_1_0 "http/1.0"
#define ALPN_H2_LENGTH 2
#define ALPN_H2 "h2"
#define ALPN_H3_LENGTH 2
#define ALPN_H3 "h3"
/* conservative sizes for the ALPN entries and count we are handling;
* we can increase these if we ever feel the need or have to accommodate
* ALPN strings from the "outside". */
#define ALPN_NAME_MAX 10
#define ALPN_ENTRIES_MAX 3
#define ALPN_PROTO_BUF_MAX (ALPN_ENTRIES_MAX * (ALPN_NAME_MAX + 1))
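/* e.g. with the values above this is 3 * (10 + 1) = 33 bytes, enough to
 * hold "h2", "http/1.1" and "http/1.0" in length-prefixed wire format */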
struct alpn_spec {
const char entries[ALPN_ENTRIES_MAX][ALPN_NAME_MAX];
size_t count; /* number of entries */
};
struct alpn_proto_buf {
unsigned char data[ALPN_PROTO_BUF_MAX];
int len;
};
CURLcode Curl_alpn_to_proto_buf(struct alpn_proto_buf *buf,
const struct alpn_spec *spec);
CURLcode Curl_alpn_to_proto_str(struct alpn_proto_buf *buf,
const struct alpn_spec *spec);
CURLcode Curl_alpn_set_negotiated(struct Curl_cfilter *cf,
struct Curl_easy *data,
const unsigned char *proto,
size_t proto_len);
/**
* Get the ALPN specification to use for talking to remote host.
* May return NULL if ALPN is disabled on the connection.
*/
const struct alpn_spec *
Curl_alpn_get_spec(struct Curl_easy *data, struct connectdata *conn);
/**
* Get the ALPN specification to use for talking to the proxy.
* May return NULL if ALPN is disabled on the connection.
*/
const struct alpn_spec *
Curl_alpn_get_proxy_spec(struct Curl_easy *data, struct connectdata *conn);
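A minimal usage sketch for these helpers in a TLS backend (hypothetical
code: `backend_set_alpn()`, `handle`, `selected` and `selected_len` stand in
for the library-specific pieces; the concrete per-backend calls are in the
diffs above):

  if(connssl->alpn) {
    struct alpn_proto_buf proto;
    CURLcode result;

    /* offer the ALPN list configured on this filter to the TLS library */
    result = Curl_alpn_to_proto_buf(&proto, connssl->alpn);
    if(result || backend_set_alpn(handle, proto.data, proto.len))
      return CURLE_SSL_CONNECT_ERROR;
    Curl_alpn_to_proto_str(&proto, connssl->alpn);
    infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
  }

  /* after the handshake, record what the peer selected (pass NULL/0 when
     nothing was negotiated) */
  Curl_alpn_set_negotiated(cf, data, selected, selected_len);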
char *Curl_ssl_snihost(struct Curl_easy *data, const char *host, size_t *olen);


@ -36,8 +36,10 @@ struct ssl_connect_data {
char *hostname; /* hostname for verification */
char *dispname; /* display version of hostname */
int port; /* remote port at origin */
const struct alpn_spec *alpn; /* ALPN to use or NULL for none */
struct ssl_backend_data *backend; /* vtls backend specific props */
struct cf_call_data call_data; /* data handle used in current call */
struct curltime handshake_done; /* time when handshake finished */
};


@ -631,34 +631,18 @@ wolfssl_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
#endif
#ifdef HAVE_ALPN
if(cf->conn->bits.tls_enable_alpn) {
char protocols[128];
*protocols = '\0';
if(connssl->alpn) {
struct alpn_proto_buf proto;
CURLcode result;
/* wolfSSL's ALPN protocol name list format is a comma separated string of
protocols in descending order of preference, eg: "h2,http/1.1" */
if(data->state.httpwant == CURL_HTTP_VERSION_1_0) {
strcpy(protocols, ALPN_HTTP_1_0);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_0);
}
else {
#ifdef USE_HTTP2
if(data->state.httpwant >= CURL_HTTP_VERSION_2) {
strcpy(protocols + strlen(protocols), ALPN_H2 ",");
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_H2);
}
#endif
strcpy(protocols + strlen(protocols), ALPN_HTTP_1_1);
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, ALPN_HTTP_1_1);
}
if(wolfSSL_UseALPN(backend->handle, protocols,
(unsigned)strlen(protocols),
result = Curl_alpn_to_proto_str(&proto, connssl->alpn);
if(result ||
wolfSSL_UseALPN(backend->handle, (char *)proto.data, proto.len,
WOLFSSL_ALPN_CONTINUE_ON_MISMATCH) != SSL_SUCCESS) {
failf(data, "SSL: failed setting ALPN protocols");
return CURLE_SSL_CONNECT_ERROR;
}
infof(data, VTLS_INFOF_ALPN_OFFER_1STR, proto.data);
}
#endif /* HAVE_ALPN */
@ -710,7 +694,7 @@ wolfssl_connect_step1(struct Curl_cfilter *cf, struct Curl_easy *data)
}
#else /* USE_BIO_CHAIN */
/* pass the raw socket into the SSL layer */
if(!SSL_set_fd(backend->handle, (int)cf->conn->sock[cf->sockindex])) {
if(!SSL_set_fd(backend->handle, (int)Curl_conn_cf_get_socket(cf, data))) {
failf(data, "SSL: SSL_set_fd failed");
return CURLE_SSL_CONNECT_ERROR;
}
@ -886,25 +870,11 @@ wolfssl_connect_step2(struct Curl_cfilter *cf, struct Curl_easy *data)
rc = wolfSSL_ALPN_GetProtocol(backend->handle, &protocol, &protocol_len);
if(rc == SSL_SUCCESS) {
infof(data, VTLS_INFOF_ALPN_ACCEPTED_LEN_1STR, protocol_len, protocol);
if(protocol_len == ALPN_HTTP_1_1_LENGTH &&
!memcmp(protocol, ALPN_HTTP_1_1, ALPN_HTTP_1_1_LENGTH))
cf->conn->alpn = CURL_HTTP_VERSION_1_1;
#ifdef USE_HTTP2
else if(data->state.httpwant >= CURL_HTTP_VERSION_2 &&
protocol_len == ALPN_H2_LENGTH &&
!memcmp(protocol, ALPN_H2, ALPN_H2_LENGTH))
cf->conn->alpn = CURL_HTTP_VERSION_2;
#endif
else
infof(data, "ALPN, unrecognized protocol %.*s", protocol_len,
protocol);
Curl_multiuse_state(data, cf->conn->alpn == CURL_HTTP_VERSION_2 ?
BUNDLE_MULTIPLEX : BUNDLE_NO_MULTIUSE);
Curl_alpn_set_negotiated(cf, data, (const unsigned char *)protocol,
protocol_len);
}
else if(rc == SSL_ALPN_NOT_FOUND)
infof(data, VTLS_INFOF_NO_ALPN);
Curl_alpn_set_negotiated(cf, data, NULL, 0);
else {
failf(data, "ALPN, failure getting protocol, error %d", rc);
return CURLE_SSL_CONNECT_ERROR;
@ -1169,7 +1139,7 @@ wolfssl_connect_common(struct Curl_cfilter *cf,
{
CURLcode result;
struct ssl_connect_data *connssl = cf->ctx;
curl_socket_t sockfd = cf->conn->sock[cf->sockindex];
curl_socket_t sockfd = Curl_conn_cf_get_socket(cf, data);
int what;
/* check if the connection has already been established */


@ -49,7 +49,7 @@ BUILD_UNIT =
DIST_UNIT = unit
endif
SUBDIRS = certs data server libtest $(BUILD_UNIT)
SUBDIRS = certs data server libtest tests-httpd $(BUILD_UNIT)
DIST_SUBDIRS = $(SUBDIRS) $(DIST_UNIT)
PERLFLAGS = -I$(srcdir)


@ -74,7 +74,7 @@ int test(char *URL)
target_url[sizeof(target_url) - 1] = '\0';
easy_setopt(curl[i], CURLOPT_URL, target_url);
/* go http/3 */
easy_setopt(curl[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
easy_setopt(curl[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3ONLY);
easy_setopt(curl[i], CURLOPT_CONNECTTIMEOUT_MS, (long)5000);
easy_setopt(curl[i], CURLOPT_CAINFO, "./certs/EdelCurlRoot-ca.cacert");
/* wait for the first connection to be established, to see if we can share it */


@ -0,0 +1,27 @@
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
clean-local:
rm -rf *.pyc __pycache__
rm -rf gen


@ -33,7 +33,11 @@ apachectl = @APACHECTL@
[test]
http_port = 5001
https_port = 5002
h3_port = 5003
h3_port = 5002
[nghttpx]
nghttpx = @HTTPD_NGHTTPX@
nghttpx = @HTTPD_NGHTTPX@
[caddy]
caddy = @CADDY@
port = 5004


@ -70,7 +70,7 @@ def httpd(env) -> Httpd:
@pytest.fixture(scope='package')
def nghttpx(env) -> Optional[Nghttpx]:
def nghttpx(env, httpd) -> Optional[Nghttpx]:
if env.have_h3_server():
nghttpx = Nghttpx(env=env)
nghttpx.clear_logs()


@ -0,0 +1,400 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 2008 - 2022, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
#
import argparse
import json
import logging
import os
import sys
from datetime import datetime
from statistics import mean
from typing import Dict, Any
from testenv import Env, Httpd, Nghttpx, CurlClient, Caddy, ExecResult
log = logging.getLogger(__name__)
class ScoreCardException(Exception):
pass
class ScoreCard:
def __init__(self):
self.verbose = 0
self.env = None
self.httpd = None
self.nghttpx = None
self.caddy = None
def info(self, msg):
if self.verbose > 0:
sys.stderr.write(msg)
sys.stderr.flush()
def handshakes(self, proto: str) -> Dict[str, Any]:
props = {}
sample_size = 10
self.info(f'handshaking ')
for authority in [
f'{self.env.authority_for(self.env.domain1, proto)}'
]:
self.info('localhost')
c_samples = []
hs_samples = []
errors = []
for i in range(sample_size):
self.info('.')
curl = CurlClient(env=self.env)
url = f'https://{authority}/'
r = curl.http_download(urls=[url], alpn_proto=proto)
if r.exit_code == 0 and len(r.stats) == 1:
c_samples.append(r.stats[0]['time_connect'])
hs_samples.append(r.stats[0]['time_appconnect'])
else:
errors.append(f'exit={r.exit_code}')
props['localhost'] = {
'connect': mean(c_samples),
'handshake': mean(hs_samples),
'errors': errors
}
for authority in [
'curl.se', 'google.com', 'cloudflare.com', 'nghttp2.org',
]:
for ipv in ['ipv4', 'ipv6']:
self.info(f'{authority}-{ipv}')
c_samples = []
hs_samples = []
errors = []
for i in range(sample_size):
self.info('.')
curl = CurlClient(env=self.env)
args = [
'--http3-only' if proto == 'h3' else '--http2',
f'--{ipv}', f'https://{authority}/'
]
r = curl.run_direct(args=args, with_stats=True)
if r.exit_code == 0 and len(r.stats) == 1:
c_samples.append(r.stats[0]['time_connect'])
hs_samples.append(r.stats[0]['time_appconnect'])
else:
errors.append(f'exit={r.exit_code}')
props[f'{authority}-{ipv}'] = {
'connect': mean(c_samples) if len(c_samples) else -1,
'handshake': mean(hs_samples) if len(hs_samples) else -1,
'errors': errors
}
self.info('\n')
return props
def _make_docs_file(self, docs_dir: str, fname: str, fsize: int):
fpath = os.path.join(docs_dir, fname)
data1k = 1024*'x'
flen = 0
with open(fpath, 'w') as fd:
while flen < fsize:
fd.write(data1k)
flen += len(data1k)
return flen
def _check_downloads(self, r: ExecResult, count: int):
error = ''
if r.exit_code != 0:
error += f'exit={r.exit_code} '
if r.exit_code != 0 or len(r.stats) != count:
error += f'stats={len(r.stats)}/{count} '
fails = [s for s in r.stats if s['response_code'] != 200]
if len(fails) > 0:
error += f'{len(fails)} failed'
return error if len(error) > 0 else None
def transfer_single(self, url: str, proto: str, count: int):
sample_size = count
count = 1
samples = []
errors = []
self.info(f'{sample_size}x single')
for i in range(sample_size):
curl = CurlClient(env=self.env)
r = curl.http_download(urls=[url], alpn_proto=proto)
err = self._check_downloads(r, count)
if err:
errors.append(err)
else:
samples.append(r.stats[0]['speed_download'])
self.info(f'.')
return {
'count': count,
'samples': sample_size,
'speed': mean(samples) if len(samples) else -1,
'errors': errors
}
def transfer_serial(self, url: str, proto: str, count: int):
sample_size = 1
samples = []
errors = []
url = f'{url}?[0-{count - 1}]'
self.info(f'{sample_size}x{count} serial')
for i in range(sample_size):
curl = CurlClient(env=self.env)
r = curl.http_download(urls=[url], alpn_proto=proto)
self.info(f'.')
err = self._check_downloads(r, count)
if err:
errors.append(err)
else:
for s in r.stats:
samples.append(s['speed_download'])
return {
'count': count,
'samples': sample_size,
'speed': mean(samples) if len(samples) else -1,
'errors': errors
}
def transfer_parallel(self, url: str, proto: str, count: int):
sample_size = 1
samples = []
errors = []
url = f'{url}?[0-{count - 1}]'
self.info(f'{sample_size}x{count} parallel')
for i in range(sample_size):
curl = CurlClient(env=self.env)
start = datetime.now()
r = curl.http_download(urls=[url], alpn_proto=proto,
extra_args=['--parallel'])
err = self._check_downloads(r, count)
if err:
errors.append(err)
else:
duration = datetime.now() - start
total_size = sum([s['size_download'] for s in r.stats])
samples.append(total_size / duration.total_seconds())
return {
'count': count,
'samples': sample_size,
'speed': mean(samples) if len(samples) else -1,
'errors': errors
}
def download_url(self, url: str, proto: str, count: int):
self.info(f' {url}: ')
props = {
'single': self.transfer_single(url=url, proto=proto, count=10),
'serial': self.transfer_serial(url=url, proto=proto, count=count),
'parallel': self.transfer_parallel(url=url, proto=proto, count=count),
}
self.info(f'\n')
return props
def downloads(self, proto: str) -> Dict[str, Any]:
scores = {}
if proto == 'h3':
port = self.env.h3_port
via = 'nghttpx'
descr = f'port {port}, proxying httpd'
else:
port = self.env.https_port
via = 'httpd'
descr = f'port {port}'
self.info('httpd downloads\n')
self._make_docs_file(docs_dir=self.httpd.docs_dir, fname='score1.data', fsize=1024*1024)
url1 = f'https://{self.env.domain1}:{port}/score1.data'
self._make_docs_file(docs_dir=self.httpd.docs_dir, fname='score10.data', fsize=10*1024*1024)
url10 = f'https://{self.env.domain1}:{port}/score10.data'
self._make_docs_file(docs_dir=self.httpd.docs_dir, fname='score100.data', fsize=100*1024*1024)
url100 = f'https://{self.env.domain1}:{port}/score100.data'
scores[via] = {
'description': descr,
'1MB-local': self.download_url(url=url1, proto=proto, count=50),
'10MB-local': self.download_url(url=url10, proto=proto, count=50),
'100MB-local': self.download_url(url=url100, proto=proto, count=50),
}
if self.caddy:
port = self.env.caddy_port
via = 'caddy'
descr = f'port {port}'
self.info('caddy downloads\n')
self._make_docs_file(docs_dir=self.caddy.docs_dir, fname='score1.data', fsize=1024 * 1024)
url1 = f'https://{self.env.domain1}:{port}/score1.data'
self._make_docs_file(docs_dir=self.caddy.docs_dir, fname='score10.data', fsize=10 * 1024 * 1024)
url10 = f'https://{self.env.domain1}:{port}/score10.data'
self._make_docs_file(docs_dir=self.caddy.docs_dir, fname='score100.data', fsize=100 * 1024 * 1024)
url100 = f'https://{self.env.domain1}:{port}/score100.data'
scores[via] = {
'description': descr,
'1MB-local': self.download_url(url=url1, proto=proto, count=50),
'10MB-local': self.download_url(url=url10, proto=proto, count=50),
'100MB-local': self.download_url(url=url100, proto=proto, count=50),
}
return scores
def score_proto(self, proto: str, handshakes: bool = True, downloads: bool = True):
self.info(f"scoring {proto}\n")
p = {}
if proto == 'h3':
p['name'] = 'h3'
if not self.env.have_h3_curl():
raise ScoreCardException('curl does not support HTTP/3')
for lib in ['ngtcp2', 'quiche', 'msh3']:
if self.env.curl_uses_lib(lib):
p['implementation'] = lib
break
elif proto == 'h2':
p['name'] = 'h2'
if not self.env.have_h2_curl():
raise ScoreCardException('curl does not support HTTP/2')
for lib in ['nghttp2', 'hyper']:
if self.env.curl_uses_lib(lib):
p['implementation'] = lib
break
else:
raise ScoreCardException(f"unknown protocol: {proto}")
if 'implementation' not in p:
raise ScoreCardException(f'did not recognize {p} lib')
p['version'] = Env.curl_lib_version(p['implementation'])
score = {
'curl': self.env.curl_version(),
'os': self.env.curl_os(),
'protocol': p,
}
if handshakes:
score['handshakes'] = self.handshakes(proto=proto)
if downloads:
score['downloads'] = self.downloads(proto=proto)
self.info("\n")
return score
def fmt_ms(self, tval):
return f'{int(tval*1000)} ms' if tval >= 0 else '--'
def fmt_mb(self, val):
return f'{val/(1024*1024):0.000f} MB' if val >= 0 else '--'
def fmt_mbs(self, val):
return f'{val/(1024*1024):0.000f} MB/s' if val >= 0 else '--'
def print_score(self, score):
print(f'{score["protocol"]["name"].upper()} in curl {score["curl"]} ({score["os"]}) via '
f'{score["protocol"]["implementation"]}/{score["protocol"]["version"]} ')
if 'handshakes' in score:
print('Handshakes')
print(f' {"Host":<25} {"Connect":>12} {"Handshake":>12} {"Errors":<20}')
for key, val in score["handshakes"].items():
print(f' {key:<25} {self.fmt_ms(val["connect"]):>12} '''
f'{self.fmt_ms(val["handshake"]):>12} {"/".join(val["errors"]):<20}')
if 'downloads' in score:
print('Downloads')
for dkey, dval in score["downloads"].items():
print(f' {dkey}: {dval["description"]}')
for skey, sval in dval.items():
if isinstance(sval, str):
continue
print(f' {skey:<13} {"Samples":>10} {"Count":>10} {"Speed":>17} {"Errors":<20}')
for key, val in sval.items():
print(f' {key:<11} {val["samples"]:>10} '''
f'{val["count"]:>10} {self.fmt_mbs(val["speed"]):>17} '
f'{"/".join(val["errors"]):<20}')
def main(self):
parser = argparse.ArgumentParser(prog='scorecard', description="""
Run a range of tests to give a scorecard for an HTTP protocol
'h3' or 'h2' implementation in curl.
""")
parser.add_argument("-v", "--verbose", action='count', default=0,
help="log more output on stderr")
parser.add_argument("-t", "--text", action='store_true', default=False,
help="print text instead of json")
parser.add_argument("-d", "--downloads", action='store_true', default=False,
help="evaluate downloads only")
parser.add_argument("protocols", nargs='*', help="Name(s) of protocol to score")
args = parser.parse_args()
self.verbose = args.verbose
if args.verbose > 0:
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(logging.Formatter(logging.BASIC_FORMAT))
logging.getLogger('').addHandler(console)
protocols = args.protocols if len(args.protocols) else ['h2', 'h3']
handshakes = True
downloads = True
if args.downloads:
handshakes = False
rv = 0
self.env = Env()
self.env.setup()
self.httpd = None
self.nghttpx = None
self.caddy = None
try:
self.httpd = Httpd(env=self.env)
assert self.httpd.exists(), f'httpd not found: {self.env.httpd}'
self.httpd.clear_logs()
assert self.httpd.start()
if 'h3' in protocols:
self.nghttpx = Nghttpx(env=self.env)
self.nghttpx.clear_logs()
assert self.nghttpx.start()
if self.env.caddy:
self.caddy = Caddy(env=self.env)
self.caddy.clear_logs()
assert self.caddy.start()
for p in protocols:
score = self.score_proto(proto=p, handshakes=handshakes, downloads=downloads)
if args.text:
self.print_score(score)
else:
print(json.JSONEncoder(indent=2).encode(score))
except ScoreCardException as ex:
sys.stderr.write(f"ERROR: {str(ex)}\n")
rv = 1
except KeyboardInterrupt:
log.warning("aborted")
rv = 1
finally:
if self.caddy:
self.caddy.stop()
self.caddy = None
if self.nghttpx:
self.nghttpx.stop(wait_dead=False)
if self.httpd:
self.httpd.stop()
self.httpd = None
sys.exit(rv)
if __name__ == "__main__":
ScoreCard().main()


@ -38,6 +38,11 @@ log = logging.getLogger(__name__)
reason=f"missing: {Env.incomplete_reason()}")
class TestBasic:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
# simple http: GET
def test_01_01_http_get(self, env: Env, httpd):
curl = CurlClient(env=env)


@ -24,12 +24,11 @@
#
###########################################################################
#
import json
import logging
from typing import Optional
import os
import pytest
from testenv import Env, CurlClient, ExecResult
from testenv import Env, CurlClient
log = logging.getLogger(__name__)
@ -39,6 +38,18 @@ log = logging.getLogger(__name__)
reason=f"missing: {Env.incomplete_reason()}")
class TestDownload:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, httpd, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
fpath = os.path.join(httpd.docs_dir, 'data-1mb.data')
data1k = 1024*'x'
with open(fpath, 'w') as fd:
fsize = 0
while fsize < 1024*1024:
fd.write(data1k)
fsize += len(data1k)
# download 1 file
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_02_01_download_1(self, env: Env, httpd, nghttpx, repeat, proto):
@ -48,7 +59,7 @@ class TestDownload:
url = f'https://{env.authority_for(env.domain1, proto)}/data.json'
r = curl.http_download(urls=[url], alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_responses(count=1, exp_status=200)
r.check_stats(count=1, exp_status=200)
# download 2 files
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
@ -59,7 +70,7 @@ class TestDownload:
url = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-1]'
r = curl.http_download(urls=[url], alpn_proto=proto)
assert r.exit_code == 0
r.check_responses(count=2, exp_status=200)
r.check_stats(count=2, exp_status=200)
# download 100 files sequentially
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
@ -71,8 +82,7 @@ class TestDownload:
urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-99]'
r = curl.http_download(urls=[urln], alpn_proto=proto)
assert r.exit_code == 0
r.check_responses(count=100, exp_status=200)
assert len(r.stats) == 100, f'{r.stats}'
r.check_stats(count=100, exp_status=200)
# http/1.1 sequential transfers will open 1 connection
assert r.total_connects == 1
@ -87,7 +97,7 @@ class TestDownload:
r = curl.http_download(urls=[urln], alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0
r.check_responses(count=100, exp_status=200)
r.check_stats(count=100, exp_status=200)
if proto == 'http/1.1':
# http/1.1 parallel transfers will open multiple connections
assert r.total_connects > 1
@ -105,7 +115,7 @@ class TestDownload:
urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-499]'
r = curl.http_download(urls=[urln], alpn_proto=proto)
assert r.exit_code == 0
r.check_responses(count=500, exp_status=200)
r.check_stats(count=500, exp_status=200)
if proto == 'http/1.1':
# http/1.1 parallel transfers will open multiple connections
assert r.total_connects > 1
@ -124,7 +134,7 @@ class TestDownload:
r = curl.http_download(urls=[urln], alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0
r.check_responses(count=500, exp_status=200)
r.check_stats(count=500, exp_status=200)
if proto == 'http/1.1':
# http/1.1 parallel transfers will open multiple connections
assert r.total_connects > 1
@ -146,28 +156,28 @@ class TestDownload:
'--parallel', '--parallel-max', '200'
])
assert r.exit_code == 0, f'{r}'
r.check_responses(count=500, exp_status=200)
r.check_stats(count=500, exp_status=200)
# http2 should now use 2 connections, at most 5
assert r.total_connects <= 5, "h2 should use fewer connections here"
def check_response(self, r: ExecResult, count: int,
exp_status: Optional[int] = None):
if len(r.responses) != count:
seen_queries = []
for idx, resp in enumerate(r.responses):
assert resp['status'] == 200, f'response #{idx} status: {resp["status"]}'
if 'rquery' not in resp['header']:
log.error(f'response #{idx} missing "rquery": {resp["header"]}')
seen_queries.append(int(resp['header']['rquery']))
for i in range(0,count-1):
if i not in seen_queries:
log.error(f'response for query {i} missing')
if r.with_stats and len(r.stats) == count:
log.error(f'got all {count} stats, though')
assert len(r.responses) == count
if exp_status is not None:
for idx, x in enumerate(r.responses):
assert x['status'] == exp_status, \
f'response #{idx} unexpected status: {x["status"]}'
if r.with_stats:
assert len(r.stats) == count, f'{r}'
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_02_08_1MB_serial(self, env: Env,
httpd, nghttpx, repeat, proto):
count = 2
urln = f'https://{env.authority_for(env.domain1, proto)}/data-1mb.data?[0-{count-1}]'
curl = CurlClient(env=env)
r = curl.http_download(urls=[urln], alpn_proto=proto)
assert r.exit_code == 0
r.check_stats(count=count, exp_status=200)
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_02_09_1MB_parallel(self, env: Env,
httpd, nghttpx, repeat, proto):
count = 2
urln = f'https://{env.authority_for(env.domain1, proto)}/data-1mb.data?[0-{count-1}]'
curl = CurlClient(env=env)
r = curl.http_download(urls=[urln], alpn_proto=proto, extra_args=[
'--parallel'
])
assert r.exit_code == 0
r.check_stats(count=count, exp_status=200)


@ -24,12 +24,10 @@
#
###########################################################################
#
import json
import logging
import time
from datetime import timedelta
from threading import Thread
from typing import Optional
import pytest
from testenv import Env, CurlClient, ExecResult
@ -42,6 +40,11 @@ log = logging.getLogger(__name__)
reason=f"missing: {Env.incomplete_reason()}")
class TestGoAway:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
# download files sequentially with delay, reload server for GOAWAY
def test_03_01_h2_goaway(self, env: Env, httpd, nghttpx, repeat):
proto = 'h2'
@ -64,8 +67,7 @@ class TestGoAway:
t.join()
r: ExecResult = self.r
assert r.exit_code == 0, f'{r}'
r.check_responses(count=count, exp_status=200)
assert len(r.stats) == count, f'{r.stats}'
r.check_stats(count=count, exp_status=200)
# reload will shut down the connection gracefully with GOAWAY
# we expect to see a second connection opened afterwards
assert r.total_connects == 2
@ -77,7 +79,6 @@ class TestGoAway:
# download files sequentially with delay, reload server for GOAWAY
@pytest.mark.skipif(condition=not Env.have_h3_server(), reason="no h3 server")
@pytest.mark.skipif(condition=True, reason="2nd and 3rd request sometimes fail")
def test_03_02_h3_goaway(self, env: Env, httpd, nghttpx, repeat):
proto = 'h3'
count = 3
@ -95,12 +96,10 @@ class TestGoAway:
# each request will take a second, reload the server in the middle
# of the first one.
time.sleep(1.5)
assert nghttpx.reload(timeout=timedelta(seconds=5))
assert nghttpx.reload(timeout=timedelta(seconds=2))
t.join()
r: ExecResult = self.r
assert r.exit_code == 0, f'{r}'
r.check_responses(count=count, exp_status=200)
assert len(r.stats) == count, f'{r.stats}'
# reload will shut down the connection gracefully with GOAWAY
# we expect to see a second connection opened afterwards
assert r.total_connects == 2
@ -109,5 +108,6 @@ class TestGoAway:
log.debug(f'request {idx} connected')
# this should take `count` seconds to retrieve
assert r.duration >= timedelta(seconds=count)
r.check_stats(count=count, exp_status=200, exp_exitcode=0)


@ -38,6 +38,11 @@ log = logging.getLogger(__name__)
reason=f"missing: {Env.incomplete_reason()}")
class TestStuttered:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
# download 1 file, check that delayed response works in general
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_04_01_download_1(self, env: Env, httpd, nghttpx, repeat,
@@ -51,7 +56,7 @@ class TestStuttered:
'&chunks=100&chunk_size=100&chunk_delay=10ms'
r = curl.http_download(urls=[urln], alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_responses(count=1, exp_status=200)
r.check_stats(count=1, exp_status=200)
# download 50 files, each in 100 chunks of 100 bytes with a 10ms delay between chunks
# prepend 100 file requests to warm up connection processing limits
@@ -71,11 +76,11 @@ class TestStuttered:
r = curl.http_download(urls=[url1, urln], alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0, f'{r}'
r.check_responses(count=warmups+count, exp_status=200)
r.check_stats(count=warmups+count, exp_status=200)
assert r.total_connects == 1
t_avg, i_min, t_min, i_max, t_max = self.stats_spread(r.stats[warmups:], 'time_total')
assert t_max < (3 * t_min) and t_min < 2, \
f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]'
if t_max < (5 * t_min) and t_min < 2:
log.warning(f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]')
# download 50 files, each in 1000 chunks of 10 bytes with a 1ms delay between chunks
# prepend 100 file requests to warm up connection processing limits
@@ -94,11 +99,11 @@ class TestStuttered:
r = curl.http_download(urls=[url1, urln], alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0
r.check_responses(count=warmups+count, exp_status=200)
r.check_stats(count=warmups+count, exp_status=200)
assert r.total_connects == 1
t_avg, i_min, t_min, i_max, t_max = self.stats_spread(r.stats[warmups:], 'time_total')
assert t_max < (2 * t_min), \
f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]'
if t_max < (5 * t_min):
log.warning(f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]')
# download 50 files, each in 10000 chunks of 1 byte with a 10us delay between chunks
# prepend 100 file requests to warm up connection processing limits
@@ -107,8 +112,6 @@ class TestStuttered:
def test_04_04_1000_10_1(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
if proto == 'h2':
pytest.skip("h2 shows overly long request times")
count = 50
warmups = 100
curl = CurlClient(env=env)
@@ -119,11 +122,11 @@ class TestStuttered:
r = curl.http_download(urls=[url1, urln], alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0
r.check_responses(count=warmups+count, exp_status=200)
r.check_stats(count=warmups+count, exp_status=200)
assert r.total_connects == 1
t_avg, i_min, t_min, i_max, t_max = self.stats_spread(r.stats[warmups:], 'time_total')
assert t_max < (2 * t_min), \
f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]'
if t_max < (5 * t_min):
log.warning(f'avg time of transfer: {t_avg} [{i_min}={t_min}, {i_max}={t_max}]')
def stats_spread(self, stats: List[Dict], key: str) -> Tuple[float, int, float, int, float]:
stotals = 0.0

View File

@@ -37,18 +37,21 @@ log = logging.getLogger(__name__)
@pytest.mark.skipif(condition=Env.setup_incomplete(),
reason=f"missing: {Env.incomplete_reason()}")
@pytest.mark.skipif(condition=not Env.httpd_is_at_least('2.4.55'),
reason=f"httpd version too old for this: {Env.httpd_version()}")
class TestErrors:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
# download 1 file, check that we get CURLE_PARTIAL_FILE
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_05_01_partial_1(self, env: Env, httpd, nghttpx, repeat,
proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
if proto == 'h2': # TODO, fix error code in curl
pytest.skip("h2 reports exitcode 16(CURLE_HTTP2)")
if proto == 'h3': # TODO, fix error code in curl
pytest.skip("h3 reports exitcode 95(CURLE_HTTP3)")
count = 1
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain1, proto)}' \
@@ -58,7 +61,7 @@ class TestErrors:
assert r.exit_code != 0, f'{r}'
invalid_stats = []
for idx, s in enumerate(r.stats):
if 'exitcode' not in s or s['exitcode'] != 18:
if 'exitcode' not in s or s['exitcode'] not in [18, 56]:
invalid_stats.append(f'request {idx} exited with {s.get("exitcode")}')
assert len(invalid_stats) == 0, f'failed: {invalid_stats}'
@@ -68,10 +71,6 @@ class TestErrors:
proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
if proto == 'h2': # TODO, fix error code in curl
pytest.skip("h2 reports exitcode 16(CURLE_HTTP2)")
if proto == 'h3': # TODO, fix error code in curl
pytest.skip("h3 reports exitcode 95(CURLE_HTTP3) and takes a long time")
count = 20
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain1, proto)}' \
@@ -82,6 +81,6 @@ class TestErrors:
assert len(r.stats) == count, f'did not get all stats: {r}'
invalid_stats = []
for idx, s in enumerate(r.stats):
if 'exitcode' not in s or s['exitcode'] != 18:
if 'exitcode' not in s or s['exitcode'] not in [18, 56]:
invalid_stats.append(f'request {idx} exited with {s.get("exitcode")}')
assert len(invalid_stats) == 0, f'failed: {invalid_stats}'

View File

@@ -0,0 +1,86 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2008 - 2022, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
#
import json
import logging
from typing import Optional, Tuple, List, Dict
import pytest
from testenv import Env, CurlClient, ExecResult
log = logging.getLogger(__name__)
@pytest.mark.skipif(condition=Env.setup_incomplete(),
reason=f"missing: {Env.incomplete_reason()}")
@pytest.mark.skipif(condition=not Env.have_h3_server(),
reason=f"missing HTTP/3 server")
@pytest.mark.skipif(condition=not Env.have_h3_curl(),
reason=f"curl built without HTTP/3")
class TestEyeballs:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
# download using only HTTP/3 on working server
def test_06_01_h3_only(self, env: Env, httpd, nghttpx, repeat):
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain1, "h3")}/data.json'
r = curl.http_download(urls=[urln], extra_args=['--http3-only'])
assert r.exit_code == 0, f'{r}'
r.check_stats(count=1, exp_status=200)
assert r.stats[0]['http_version'] == '3'
# download using only HTTP/3 on missing server
def test_06_02_h3_only(self, env: Env, httpd, nghttpx, repeat):
nghttpx.stop_if_running()
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain1, "h3")}/data.json'
r = curl.http_download(urls=[urln], extra_args=['--http3-only'])
assert r.exit_code == 7, f'{r}' # 7 == CURLE_COULDNT_CONNECT
# download using HTTP/3 on missing server with fallback on h2
def test_06_03_h3_fallback_h2(self, env: Env, httpd, nghttpx, repeat):
nghttpx.stop_if_running()
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain1, "h3")}/data.json'
r = curl.http_download(urls=[urln], extra_args=['--http3'])
assert r.exit_code == 0, f'{r}'
r.check_stats(count=1, exp_status=200)
assert r.stats[0]['http_version'] == '2'
# download using HTTP/3 on missing server with fallback to http/1.1
def test_06_04_h3_fallback_h1(self, env: Env, httpd, nghttpx, repeat):
nghttpx.stop_if_running()
curl = CurlClient(env=env)
urln = f'https://{env.authority_for(env.domain2, "h3")}/data.json'
r = curl.http_download(urls=[urln], extra_args=['--http3'])
assert r.exit_code == 0, f'{r}'
r.check_stats(count=1, exp_status=200)
assert r.stats[0]['http_version'] == '1.1'

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2008 - 2022, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
#
import logging
import os
import pytest
from testenv import Env, CurlClient
log = logging.getLogger(__name__)
@pytest.mark.skipif(condition=Env.setup_incomplete(),
reason=f"missing: {Env.incomplete_reason()}")
class TestUpload:
@pytest.fixture(autouse=True, scope='class')
def _class_scope(self, env, nghttpx):
if env.have_h3():
nghttpx.start_if_needed()
s90 = "01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678\n"
with open(os.path.join(env.gen_dir, "data-100k"), 'w') as f:
for i in range(1000):
f.write(f"{i:09d}-{s90}")
# upload small data, check that this is what was echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_01_upload_1_small(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
data = '0123456789'
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-0]'
r = curl.http_upload(urls=[url], data=data, alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_stats(count=1, exp_status=200)
respdata = open(curl.response_file(0)).readlines()
assert respdata == [data]
# upload large data, check that this is what was echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_02_upload_1_large(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
fdata = os.path.join(env.gen_dir, 'data-100k')
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-0]'
r = curl.http_upload(urls=[url], data=f'@{fdata}', alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_stats(count=1, exp_status=200)
indata = open(fdata).readlines()
respdata = open(curl.response_file(0)).readlines()
assert respdata == indata
# upload data sequentially, check that they were echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_10_upload_sequential(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
count = 50
data = '0123456789'
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-{count-1}]'
r = curl.http_upload(urls=[url], data=data, alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_stats(count=count, exp_status=200)
for i in range(count):
respdata = open(curl.response_file(i)).readlines()
assert respdata == [data]
# upload large data sequentially, check that this is what was echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_11_upload_seq_large(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
fdata = os.path.join(env.gen_dir, 'data-100k')
count = 50
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-{count-1}]'
r = curl.http_upload(urls=[url], data=f'@{fdata}', alpn_proto=proto)
assert r.exit_code == 0, f'{r}'
r.check_stats(count=count, exp_status=200)
indata = open(fdata).readlines()
for i in range(count):
respdata = open(curl.response_file(i)).readlines()
assert respdata == indata
# upload data parallel, check that they were echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_20_upload_parallel(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
count = 50
data = '0123456789'
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-{count-1}]'
r = curl.http_upload(urls=[url], data=data, alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0, f'{r}'
r.check_stats(count=count, exp_status=200)
for i in range(count):
respdata = open(curl.response_file(i)).readlines()
assert respdata == [data]
# upload large data parallel, check that this is what was echoed
@pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
def test_07_21_upload_parallel_large(self, env: Env, httpd, nghttpx, repeat, proto):
if proto == 'h3' and not env.have_h3():
pytest.skip("h3 not supported")
if proto == 'h3' and env.curl_uses_lib('quiche'):
pytest.skip("quiche stalls on parallel, large uploads")
fdata = os.path.join(env.gen_dir, 'data-100k')
count = 3
curl = CurlClient(env=env)
url = f'https://{env.authority_for(env.domain1, proto)}/curltest/echo?id=[0-{count-1}]'
r = curl.http_upload(urls=[url], data=f'@{fdata}', alpn_proto=proto,
extra_args=['--parallel'])
assert r.exit_code == 0, f'{r}'
r.check_stats(count=count, exp_status=200)
indata = open(fdata).readlines()
for i in range(count):
respdata = open(curl.response_file(i)).readlines()
assert respdata == indata

View File

@@ -26,6 +26,7 @@
#
from .env import Env
from .certs import TestCA, Credentials
from .caddy import Caddy
from .httpd import Httpd
from .curl import CurlClient, ExecResult
from .nghttpx import Nghttpx

View File

@@ -0,0 +1,164 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2008 - 2022, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
#
import logging
import os
import subprocess
import time
from datetime import timedelta, datetime
from json import JSONEncoder
from .curl import CurlClient
from .env import Env
log = logging.getLogger(__name__)
class Caddy:
def __init__(self, env: Env):
self.env = env
self._caddy = os.environ['CADDY'] if 'CADDY' in os.environ else env.caddy
self._caddy_dir = os.path.join(env.gen_dir, 'caddy')
self._docs_dir = os.path.join(self._caddy_dir, 'docs')
self._conf_file = os.path.join(self._caddy_dir, 'Caddyfile')
self._error_log = os.path.join(self._caddy_dir, 'caddy.log')
self._tmp_dir = os.path.join(self._caddy_dir, 'tmp')
self._process = None
self._rmf(self._error_log)
@property
def docs_dir(self):
return self._docs_dir
def clear_logs(self):
self._rmf(self._error_log)
def is_running(self):
if self._process:
self._process.poll()
return self._process.returncode is None
return False
def start_if_needed(self):
if not self.is_running():
return self.start()
return True
def start(self, wait_live=True):
self._mkpath(self._tmp_dir)
if self._process:
self.stop()
self._write_config()
args = [
self._caddy, 'run'
]
caddyerr = open(self._error_log, 'a')
self._process = subprocess.Popen(args=args, cwd=self._caddy_dir, stderr=caddyerr)
if self._process.returncode is not None:
return False
return not wait_live or self.wait_live(timeout=timedelta(seconds=5))
def stop_if_running(self):
if self.is_running():
return self.stop()
return True
def stop(self, wait_dead=True):
self._mkpath(self._tmp_dir)
if self._process:
self._process.terminate()
self._process.wait(timeout=2)
self._process = None
return not wait_dead or self.wait_dead(timeout=timedelta(seconds=5))
return True
def restart(self):
self.stop()
return self.start()
def wait_dead(self, timeout: timedelta):
curl = CurlClient(env=self.env, run_dir=self._tmp_dir)
try_until = datetime.now() + timeout
while datetime.now() < try_until:
check_url = f'https://{self.env.domain1}:{self.env.caddy_port}/'
r = curl.http_get(url=check_url)
if r.exit_code != 0:
return True
log.debug(f'waiting for caddy to stop responding: {r}')
time.sleep(.1)
log.debug(f"Server still responding after {timeout}")
return False
def wait_live(self, timeout: timedelta):
curl = CurlClient(env=self.env, run_dir=self._tmp_dir)
try_until = datetime.now() + timeout
while datetime.now() < try_until:
check_url = f'https://{self.env.domain1}:{self.env.caddy_port}/'
r = curl.http_get(url=check_url)
if r.exit_code == 0:
return True
log.error(f'curl: {r}')
log.debug(f'waiting for caddy to become responsive: {r}')
time.sleep(.1)
log.error(f"Server still not responding after {timeout}")
return False
def _rmf(self, path):
if os.path.exists(path):
return os.remove(path)
def _mkpath(self, path):
if not os.path.exists(path):
return os.makedirs(path)
def _write_config(self):
domain1 = self.env.domain1
creds1 = self.env.get_credentials(domain1)
self._mkpath(self._docs_dir)
self._mkpath(self._tmp_dir)
with open(os.path.join(self._docs_dir, 'data.json'), 'w') as fd:
data = {
'server': f'{domain1}',
}
fd.write(JSONEncoder().encode(data))
with open(self._conf_file, 'w') as fd:
conf = [ # base server config
f'{{',
f' https_port {self.env.caddy_port}',
f' servers :{self.env.caddy_port} {{',
f' protocols h3 h2 h1',
f' }}',
f'}}',
f'{domain1}:{self.env.caddy_port} {{',
f' file_server * {{',
f' root {self._docs_dir}',
f' }}',
f' tls {creds1.cert_file} {creds1.pkey_file}',
f'}}',
]
fd.write("\n".join(conf))

View File

@@ -155,28 +155,36 @@ class ExecResult:
def add_assets(self, assets: List):
self._assets.extend(assets)
def check_responses(self, count: int, exp_status: Optional[int] = None):
if len(self.responses) != count:
seen_queries = []
for idx, resp in enumerate(self.responses):
assert resp['status'] == 200, f'response #{idx} status: {resp["status"]}'
if 'rquery' not in resp['header']:
log.error(f'response #{idx} missing "rquery": {resp["header"]}')
seen_queries.append(int(resp['header']['rquery']))
for i in range(0, count-1):
if i not in seen_queries:
log.error(f'response for query {i} missing')
if self.with_stats and len(self.stats) == count:
log.error(f'got all {count} stats, though')
def check_responses(self, count: int, exp_status: Optional[int] = None,
exp_exitcode: Optional[int] = None):
assert len(self.responses) == count, \
f'response count: expected {count}, got {len(self.responses)}'
if exp_status is not None:
for idx, x in enumerate(self.responses):
assert x['status'] == exp_status, \
f'response #{idx} unexpected status: {x["status"]}'
if exp_exitcode is not None:
for idx, x in enumerate(self.responses):
if 'exitcode' in x:
assert x['exitcode'] == exp_exitcode, f'response #{idx} exitcode: {x["exitcode"]}'
if self.with_stats:
assert len(self.stats) == count, f'{self}'
def check_stats(self, count: int, exp_status: Optional[int] = None,
exp_exitcode: Optional[int] = None):
assert len(self.stats) == count, \
f'stats count: expected {count}, got {len(self.stats)}'
if exp_status is not None:
for idx, x in enumerate(self.stats):
assert 'http_code' in x, \
f'status #{idx} reports no http_code'
assert x['http_code'] == exp_status, \
f'status #{idx} unexpected http_code: {x["http_code"]}'
if exp_exitcode is not None:
for idx, x in enumerate(self.stats):
if 'exitcode' in x:
assert x['exitcode'] == exp_exitcode, f'status #{idx} exitcode: {x["exitcode"]}'
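# usage sketch (illustrative): stats are parsed from curl's '-w %{json}' output,
# one entry per started transfer, so a caller can verify status and exit code:
#   r.check_stats(count=count, exp_status=200, exp_exitcode=0)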
class CurlClient:
@@ -186,7 +194,7 @@ class CurlClient:
'http/1.1': '--http1.1',
'h2': '--http2',
'h2c': '--http2',
'h3': '--http3',
'h3': '--http3-only',
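# 'h3' maps to --http3-only so tests requesting alpn_proto='h3' stay on HTTP/3
# and never fall back to TCP; the --http3 fallback path is exercised in TestEyeballs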
}
def __init__(self, env: Env, run_dir: Optional[str] = None):
@@ -219,6 +227,7 @@ class CurlClient:
def http_download(self, urls: List[str],
alpn_proto: Optional[str] = None,
with_stats: bool = True,
with_headers: bool = False,
extra_args: List[str] = None):
if extra_args is None:
extra_args = []
@@ -230,7 +239,41 @@ class CurlClient:
'-w', '%{json}\\n'
])
return self._raw(urls, alpn_proto=alpn_proto, options=extra_args,
with_stats=with_stats)
with_stats=with_stats,
with_headers=with_headers)
def http_upload(self, urls: List[str], data: str,
alpn_proto: Optional[str] = None,
with_stats: bool = True,
with_headers: bool = False,
extra_args: Optional[List[str]] = None):
if extra_args is None:
extra_args = []
extra_args.extend([
'--data-binary', data, '-o', 'download_#1.data',
])
if with_stats:
extra_args.extend([
'-w', '%{json}\\n'
])
return self._raw(urls, alpn_proto=alpn_proto, options=extra_args,
with_stats=with_stats,
with_headers=with_headers)
def response_file(self, idx: int):
return os.path.join(self._run_dir, f'download_{idx}.data')
def run_direct(self, args, with_stats: bool = False):
my_args = [self._curl]
if with_stats:
my_args.extend([
'-w', '%{json}\\n'
])
my_args.extend([
'-o', 'download.data',
])
my_args.extend(args)
return self._run(args=my_args, with_stats=with_stats)
def _run(self, args, intext='', with_stats: bool = False):
self._rmf(self._stdoutfile)
@@ -252,12 +295,15 @@ class CurlClient:
def _raw(self, urls, timeout=10, options=None, insecure=False,
alpn_proto: Optional[str] = None,
force_resolve=True, with_stats=False):
force_resolve=True,
with_stats=False,
with_headers=True):
args = self._complete_args(
urls=urls, timeout=timeout, options=options, insecure=insecure,
alpn_proto=alpn_proto, force_resolve=force_resolve)
alpn_proto=alpn_proto, force_resolve=force_resolve,
with_headers=with_headers)
r = self._run(args, with_stats=with_stats)
if r.exit_code == 0:
if r.exit_code == 0 and with_headers:
self._parse_headerfile(self._headerfile, r=r)
if r.json:
r.response["json"] = r.json
@@ -265,13 +311,14 @@ class CurlClient:
def _complete_args(self, urls, timeout=None, options=None,
insecure=False, force_resolve=True,
alpn_proto: Optional[str] = None):
alpn_proto: Optional[str] = None,
with_headers: bool = True):
if not isinstance(urls, list):
urls = [urls]
args = [
self._curl, "-s", "--path-as-is", "-D", self._headerfile,
]
args = [self._curl, "-s", "--path-as-is"]
if with_headers:
args.extend(["-D", self._headerfile])
if self.env.verbose > 2:
args.extend(['--trace', self._tracefile, '--trace-time'])

View File

@@ -59,19 +59,41 @@ class EnvConfig:
self.config = DEF_CONFIG
# check cur and its features
self.curl = CURL
self.curl_features = []
self.curl_props = {
'version': None,
'os': None,
'features': [],
'protocols': [],
'libs': [],
'lib_versions': [],
}
self.curl_protos = []
p = subprocess.run(args=[self.curl, '-V'],
capture_output=True, text=True)
if p.returncode != 0:
assert False, f'{self.curl} -V failed with exit code: {p.returncode}'
for l in p.stdout.splitlines(keepends=False):
if l.startswith('curl '):
m = re.match(r'^curl (?P<version>\S+) (?P<os>\S+) (?P<libs>.*)$', l)
if m:
self.curl_props['version'] = m.group('version')
self.curl_props['os'] = m.group('os')
self.curl_props['lib_versions'] = [
lib.lower() for lib in m.group('libs').split(' ')
]
self.curl_props['libs'] = [
re.sub(r'/.*', '', lib) for lib in self.curl_props['lib_versions']
]
if l.startswith('Features: '):
self.curl_features = [feat.lower() for feat in l[10:].split(' ')]
self.curl_props['features'] = [
feat.lower() for feat in l[10:].split(' ')
]
if l.startswith('Protocols: '):
self.curl_protos = [prot.lower() for prot in l[11:].split(' ')]
self.curl_props['protocols'] = [
prot.lower() for prot in l[11:].split(' ')
]
self.nghttpx_with_h3 = re.match(r'.* nghttp3/.*', p.stdout.strip())
log.error(f'nghttpx -v: {p.stdout}')
log.debug(f'nghttpx -v: {p.stdout}')
self.http_port = self.config['test']['http_port']
self.https_port = self.config['test']['https_port']
@@ -81,6 +103,7 @@ class EnvConfig:
self.apxs = self.config['httpd']['apxs']
if len(self.apxs) == 0:
self.apxs = None
self._httpd_version = None
self.examples_pem = {
'key': 'xxx',
@@ -110,7 +133,39 @@ class EnvConfig:
self.nghttpx = None
else:
self.nghttpx_with_h3 = re.match(r'.* nghttp3/.*', p.stdout.strip()) is not None
log.error(f'nghttpx -v: {p.stdout}')
log.debug(f'nghttpx -v: {p.stdout}')
self.caddy = self.config['caddy']['caddy']
if len(self.caddy) == 0:
self.caddy = 'caddy'
if self.caddy is not None:
try:
p = subprocess.run(args=[self.caddy, 'version'],
capture_output=True, text=True)
if p.returncode != 0:
# not a working caddy
self.caddy = None
except Exception:
self.caddy = None
self.caddy_port = self.config['caddy']['port']
@property
def httpd_version(self):
if self._httpd_version is None and self.apxs is not None:
p = subprocess.run(args=[self.apxs, '-q', 'HTTPD_VERSION'],
capture_output=True, text=True)
if p.returncode != 0:
raise Exception(f'{self.apxs} failed to query HTTPD_VERSION: {p}')
self._httpd_version = p.stdout.strip()
return self._httpd_version
def _versiontuple(self, v):
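# e.g. '2.4.55-dev' -> '2.4.55' -> (2, 4, 55)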
v = re.sub(r'(\d+\.\d+(\.\d+)?)(-\S+)?', r'\1', v)
return tuple(map(int, v.split('.')))
def httpd_is_at_least(self, minv):
hv = self._versiontuple(self.httpd_version)
return hv >= self._versiontuple(minv)
def is_complete(self) -> bool:
return os.path.isfile(self.httpd) and \
@@ -146,14 +201,46 @@ class Env:
def have_h3_server() -> bool:
return Env.CONFIG.nghttpx_with_h3
@staticmethod
def have_h2_curl() -> bool:
return 'http2' in Env.CONFIG.curl_props['features']
@staticmethod
def have_h3_curl() -> bool:
return 'http3' in Env.CONFIG.curl_features
return 'http3' in Env.CONFIG.curl_props['features']
@staticmethod
def curl_uses_lib(libname: str) -> bool:
return libname.lower() in Env.CONFIG.curl_props['libs']
@staticmethod
def curl_lib_version(libname: str) -> str:
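# look up the given lib in the `curl -V` tokens, e.g. a (hypothetical)
# 'quiche/0.16.0' entry yields '0.16.0'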
prefix = f'{libname.lower()}/'
for lversion in Env.CONFIG.curl_props['lib_versions']:
if lversion.startswith(prefix):
return lversion[len(prefix):]
return 'unknown'
@staticmethod
def curl_os() -> str:
return Env.CONFIG.curl_props['os']
@staticmethod
def curl_version() -> str:
return Env.CONFIG.curl_props['version']
@staticmethod
def have_h3() -> bool:
return Env.have_h3_curl() and Env.have_h3_server()
@staticmethod
def httpd_version() -> str:
return Env.CONFIG.httpd_version
@staticmethod
def httpd_is_at_least(minv) -> bool:
return Env.CONFIG.httpd_is_at_least(minv)
def __init__(self, pytestconfig=None):
self._verbose = pytestconfig.option.verbose \
if pytestconfig is not None else 0
@@ -214,6 +301,14 @@ class Env:
def h3_port(self) -> str:
return self.CONFIG.h3_port
@property
def caddy(self) -> str:
return self.CONFIG.caddy
@property
def caddy_port(self) -> str:
return self.CONFIG.caddy_port
@property
def curl(self) -> str:
return self.CONFIG.curl

View File

@@ -69,22 +69,24 @@ class Httpd:
self._error_log = os.path.join(self._logs_dir, 'error_log')
self._tmp_dir = os.path.join(self._apache_dir, 'tmp')
self._mods_dir = None
if env.apxs is not None:
p = subprocess.run(args=[env.apxs, '-q', 'libexecdir'],
capture_output=True, text=True)
if p.returncode != 0:
raise Exception(f'{env.apxs} failed to query libexecdir: {p}')
self._mods_dir = p.stdout.strip()
else:
for md in self.COMMON_MODULES_DIRS:
if os.path.isdir(md):
self._mods_dir = md
assert env.apxs
p = subprocess.run(args=[env.apxs, '-q', 'libexecdir'],
capture_output=True, text=True)
if p.returncode != 0:
raise Exception(f'{env.apxs} failed to query libexecdir: {p}')
self._mods_dir = p.stdout.strip()
if self._mods_dir is None:
raise Exception(f'apache modules dir cannot be found')
if not os.path.exists(self._mods_dir):
raise Exception(f'apache modules dir does not exist: {self._mods_dir}')
self._process = None
self._rmf(self._error_log)
self._init_curltest()
@property
def docs_dir(self):
return self._docs_dir
def clear_logs(self):
self._rmf(self._error_log)
@@ -213,9 +215,6 @@ class Httpd:
f'Listen {self.env.http_port}',
f'Listen {self.env.https_port}',
f'TypesConfig "{self._conf_dir}/mime.types"',
# we want the query string in a response header, so we
# can check responses more easily
f'Header set rquery "%{{QUERY_STRING}}s"',
]
conf.extend([ # plain http host for domain1
f'<VirtualHost *:{self.env.http_port}>',

View File

@@ -24,15 +24,16 @@
#
###########################################################################
#
import datetime
import logging
import os
import signal
import subprocess
import time
from typing import Optional
from datetime import datetime, timedelta
from .env import Env
from .curl import CurlClient
log = logging.getLogger(__name__)
@@ -43,12 +44,18 @@ class Nghttpx:
def __init__(self, env: Env):
self.env = env
self._cmd = env.nghttpx
self._pid_file = os.path.join(env.gen_dir, 'nghttpx.pid')
self._conf_file = os.path.join(env.gen_dir, 'nghttpx.conf')
self._error_log = os.path.join(env.gen_dir, 'nghttpx.log')
self._stderr = os.path.join(env.gen_dir, 'nghttpx.stderr')
self._run_dir = os.path.join(env.gen_dir, 'nghttpx')
self._pid_file = os.path.join(self._run_dir, 'nghttpx.pid')
self._conf_file = os.path.join(self._run_dir, 'nghttpx.conf')
self._error_log = os.path.join(self._run_dir, 'nghttpx.log')
self._stderr = os.path.join(self._run_dir, 'nghttpx.stderr')
self._tmp_dir = os.path.join(self._run_dir, 'tmp')
self._process = None
self._process: Optional[subprocess.Popen] = None
self._rmf(self._pid_file)
self._rmf(self._error_log)
self._mkpath(self._run_dir)
self._write_config()
def exists(self):
return os.path.exists(self._cmd)
@@ -63,10 +70,15 @@ class Nghttpx:
return self._process.returncode is None
return False
def start(self):
def start_if_needed(self):
if not self.is_running():
return self.start()
return True
def start(self, wait_live=True):
self._mkpath(self._tmp_dir)
if self._process:
self.stop()
self._write_config()
args = [
self._cmd,
f'--frontend=*,{self.env.h3_port};quic',
@@ -82,31 +94,78 @@ class Nghttpx:
]
ngerr = open(self._stderr, 'a')
self._process = subprocess.Popen(args=args, stderr=ngerr)
return self._process.returncode is None
if self._process.returncode is not None:
return False
return not wait_live or self.wait_live(timeout=timedelta(seconds=5))
def stop(self):
def stop_if_running(self):
if self.is_running():
return self.stop()
return True
def stop(self, wait_dead=True):
self._mkpath(self._tmp_dir)
if self._process:
self._process.terminate()
self._process.wait(timeout=2)
self._process = None
return not wait_dead or self.wait_dead(timeout=timedelta(seconds=5))
return True
def restart(self):
self.stop()
return self.start()
def reload(self, timeout: datetime.timedelta):
def reload(self, timeout: timedelta):
if self._process:
running = self._process
self._process = None
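# SIGQUIT asks nghttpx to shut down gracefully; the loop below gives the old
# instance time to drain before escalating to SIGKILL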
os.kill(running.pid, signal.SIGQUIT)
self.start()
try:
log.debug(f'waiting for nghttpx({running.pid}) to exit.')
running.wait(timeout=timeout.seconds)
log.debug(f'nghttpx({running.pid}) terminated -> {running.returncode}')
end_wait = datetime.now() + timeout
if not self.start(wait_live=False):
self._process = running
return False
while datetime.now() < end_wait:
try:
log.debug(f'waiting for nghttpx({running.pid}) to exit.')
running.wait(2)
log.debug(f'nghttpx({running.pid}) terminated -> {running.returncode}')
break
except subprocess.TimeoutExpired:
log.warning(f'nghttpx({running.pid}) has not shut down yet, sending SIGQUIT again.')
os.kill(running.pid, signal.SIGQUIT)
if datetime.now() >= end_wait:
log.error(f'nghttpx({running.pid}) did not shut down, terminating forcefully.')
os.kill(running.pid, signal.SIGKILL)
running.terminate()
running.wait(1)
return self.wait_live(timeout=timedelta(seconds=5))
return False
def wait_dead(self, timeout: timedelta):
curl = CurlClient(env=self.env, run_dir=self._tmp_dir)
try_until = datetime.now() + timeout
while datetime.now() < try_until:
check_url = f'https://{self.env.domain1}:{self.env.h3_port}/'
r = curl.http_get(url=check_url, extra_args=['--http3-only'])
if r.exit_code != 0:
return True
except subprocess.TimeoutExpired:
log.error(f'SIGQUIT nghttpx({running.pid}), but did not shut down.')
log.debug(f'waiting for nghttpx to stop responding: {r}')
time.sleep(.1)
log.debug(f"Server still responding after {timeout}")
return False
def wait_live(self, timeout: timedelta):
curl = CurlClient(env=self.env, run_dir=self._tmp_dir)
try_until = datetime.now() + timeout
while datetime.now() < try_until:
check_url = f'https://{self.env.domain1}:{self.env.h3_port}/'
r = curl.http_get(url=check_url, extra_args=['--http3-only'])
if r.exit_code == 0:
return True
log.debug(f'waiting for nghttpx to become responsive: {r}')
time.sleep(.1)
log.error(f"Server still not responding after {timeout}")
return False
def _rmf(self, path):

View File

@@ -89,6 +89,7 @@ static struct test_result *current_tr;
struct cf_test_ctx {
int ai_family;
int transport;
char id[16];
struct curltime started;
timediff_t fail_delay_ms;
@@ -147,7 +148,8 @@ static struct Curl_cftype cft_test = {
static CURLcode cf_test_create(struct Curl_cfilter **pcf,
struct Curl_easy *data,
struct connectdata *conn,
const struct Curl_addrinfo *ai)
const struct Curl_addrinfo *ai,
int transport)
{
struct cf_test_ctx *ctx = NULL;
struct Curl_cfilter *cf = NULL;
@@ -162,6 +164,7 @@ static CURLcode cf_test_create(struct Curl_cfilter **pcf,
goto out;
}
ctx->ai_family = ai->ai_family;
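/* remember which transport (e.g. TCP vs QUIC) this test filter was created for */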
ctx->transport = transport;
ctx->started = Curl_now();
#ifdef ENABLE_IPV6
if(ctx->ai_family == AF_INET6) {