- fixes stalled connections
- Make the connection window large enough, so that there is
some room left should 99/100 streams be PAUSED by the application
Reported-by: Paweł Wegner
Fixes #10988
Closes #11043
The open paren check wants to warn for spaces before open parenthesis
for if/while/for but also for any function call. In order to avoid
catching function pointer declarations, the logic allows a space if the
first character after the open parenthesis is an asterisk.
I also spotted that we did not include "switch" in the check but we should.
This check is a little lame, but we reduce this problem by not allowing
that space for if/while/for/switch.
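A hypothetical snippet (not from the curl sources; the names are made up)
showing what the check warns about and what the asterisk exception still
allows:
```
int process(int x);

static int example(int x)
{
  int result = 0;
  /* warned: space before the open parenthesis after if/while/for/switch,
     and before a function call's parenthesis */
  if (x)
    result = process (x);

  /* allowed: the first character after '(' is '*', as in a
     function pointer declaration */
  int (*handler)(int value) = process;
  return handler(result);
}
```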
Reported-by: Emanuele Torre
Closes#11044
- Makefile support for building test specific clients in tests/http/clients
- auto-make of clients when invoking pytest
- added test_09_02 for server PUSH_PROMISEs using clients/h2-serverpush
- added test_02_21 for lib based downloads and pausing/unpausing transfers
curl url parser:
- added internal method `curl_url_set_authority()` for setting the
authority part of a url (used for PUSH_PROMISE)
http2:
- made logging of PUSH_PROMISE handling nicer
Placing python test requirements in requirements.txt files
- separate files for the base test suite and the http tests, since
their use and module lists differ
- using the files in the gh workflows
websocket test cases, fixes for ws and bufq
- bufq: account for spare chunks in space calculation
- bufq: reset chunks that are skipped empty
- ws: correctly encode frames with 126 bytes payload
- ws: update frame meta information on first call of collect
callback that fills user buffer
- test client ws-data: some test/reporting improvements
Closes#11006
- Always set the libssh2 'abstract' user-pointer to the libcurl easy
handle associated with the ssh session, so it is always passed to the
ssh keyboard callback.
Prior to this change and since 8b5f100 (precedes curl 8.0.0), if libcurl
was built without CURL_DEBUG then it could crash during the ssh auth
phase due to a null dereference in the ssh keyboard callback.
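For context, a minimal sketch of how the 'abstract' pointer surfaces in a
libssh2 keyboard-interactive callback. It uses the standard libssh2
callback signature, but the callback name and body are hypothetical and
this is not the code from curl's SSH source:
```
#include <libssh2.h>

/* libssh2 passes the session's 'abstract' pointer to this callback;
   after the change it always points at the libcurl easy handle */
static void kbd_callback(const char *name, int name_len,
                         const char *instruction, int instruction_len,
                         int num_prompts,
                         const LIBSSH2_USERAUTH_KBDINT_PROMPT *prompts,
                         LIBSSH2_USERAUTH_KBDINT_RESPONSE *responses,
                         void **abstract)
{
  void *easy = *abstract;  /* previously this could be NULL */
  (void)name; (void)name_len; (void)instruction; (void)instruction_len;
  (void)num_prompts; (void)prompts; (void)responses; (void)easy;
  /* fill in 'responses' using credentials held by the easy handle */
}
```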
Reported-by: Andreas Falkenhahn
Fixes https://github.com/curl/curl/pull/11024
Closes https://github.com/curl/curl/pull/11026
libcurl used to do a directory listing for this case (even though the
documentation says a URL needs to end in a slash for this), but
4e2b52b5f7 modified the behavior.
This change brings back a directory listing for SFTP paths that are
specified exactly as /~ in the URL.
Reported-by: Pavel Mayorov
Fixes #11001
Closes #11023
The leftmost "label" of the host name can now only match against a single
'*', like browsers have worked for a long time. For example, a certificate
name of "f*o.example.com" no longer matches "foo.example.com", while
"*.example.com" still does.
- extended unit test 1397 for this
- move some SOURCE variables from unit/Makefile.am to unit/Makefile.inc
Reported-by: Hiroki Kurosawa
Closes#11018
- state is fully kept at the connection, since curl_ws_send() and
curl_ws_recv() have lifetime beyond usual transfers
- no more limit on frame sizes
Reported-by: simplerobot on github
Fixes #10962
Closes #10999
Prior to this change STRING_AWS_SIGV4 (CURLOPT_AWS_SIGV4) was wrongly
marked as binary data that could not be duplicated.
Without this fix, this option's value is not copied upon calling
curl_easy_duphandle().
Closes https://github.com/curl/curl/pull/11021
- just increasing the http/2 flow window does not necessarily
make a server send new data. It may already have exhausted
the window before
Closes#11005
- `drain` was used by the http/2 and http/3 implementations to indicate
that the transfer requires send/recv independent from its socket
poll state. Intended as a counter, it was used as a bool flag only.
- a similar mechanism exists on `connectdata->cselect_bits` where
specific protocols can indicate something similar, only for the
whole connection.
- `cselect_bits` are cleared in transfer.c on use and, importantly,
also set when the transfer loop expended its `maxloops` tries.
`drain` was not cleared by transfer and the http2/3 implementations
had to take care of that.
- `dselect_bits` is cleared *and* set by the transfer loop. http2/3
no longer clears it, only sets it when new events happen.
This change unifies the handling of socket poll overrides, extending
`cselect_bits` by an easy handle specific value and a common treatment in
transfers.
Closes#11005
... instead of using the curl time struct, since it would use a few
uninitialized bytes and the sanitizers would complain. This is a neater
approach I think.
Reported-by: Boris Kuschel
Fixes #10993
Closes #11015
By making sure we set state.upload based on the set.method value and not
independently as set.upload, we reduce confusion and mixup risks, both
internally and externally.
Closes#11017
- with `--proxy-http2` allow h2 ALPN negotiation to
forward proxies
- applies to http: requests against a https: proxy only,
as https: requests will auto-tunnel
- adding a HTTP/1 request parser in http1.c
- removed h2h3.c
- using new request parser in nghttp2 and all h3 backends
- adding test 2603 for request parser
- adding h2 proxy test cases to test_10_*
scorecard.py: request scoring accidentally always ran curl
with '-v'. Removed that, expect double the numbers.
labeller: added http1.* and h2-proxy sources to detection
Closes#10967
- expression 'hostptr' is always true
- a part of conditional expression is always true: proxypasswd
- expression 'proxyuser' is always true
- avoid multiple Curl_now() calls in allocate_conn
Ref: #10929
Closes #10959
- Disable socket receive buffer unless USE_RECV_BEFORE_SEND_WORKAROUND
is in place.
While we would like to use the receive buffer, we have stalls in
parallel transfers where not all buffered data is consumed and no socket
events happen.
Note USE_RECV_BEFORE_SEND_WORKAROUND is a Windows sockets workaround
that has been disabled by default since b4b6e4f1, due to other bugs.
Closes https://github.com/curl/curl/pull/10961
- progress ingress stopped too early, causing data
from the underlying filters to not be processed and
no tunnel data to be reported as available
- this led to "hangers" where no socket activity was
seen but data rested in buffers
Closes#10952
- callbacks and filter methods might be invoked at unexpected
times, e.g. when the transfer's stream_ctx has not been initialized
yet or, more likely, has already been taken down.
- check for existence of stream_ctx in such places and return
an error or silently succeed the call.
Closes#10951
- use bufq as recv buffer, also for Windows pre-receive handling
- catch small reads followed by larger ones in a single socket
call. A common pattern on TLS connections.
Closes#10787
- move host checks together
- simplify the scheme parser loop and the end of host name parser
- avoid intermediate buffer storing in multiple places
- reduce scope for several variables
- skip the Curl_dyn_tail() call for speed
- detect IPv6 earlier and skip extra checks for such hosts
- normalize directly in dynbuf instead of an intermediate buffer
- split out the IPv6 parser into its own function
- call the IPv6 parser directly for ipv6 addresses
- remove (unused) special treatment of % in host names
- junkscan() once in the beginning instead of scattered
- make junkscan return error code
- remove unused query management from dedotdotify()
- make Curl_parse_login_details use memchr
- more use of memchr() instead of strchr() and fewer strlen() calls
(see the sketch after this list)
- make junkscan check and return the URL length
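A generic illustration of the memchr()-over-strchr() pattern used in
several of these spots (the helper below is hypothetical, not the actual
parser code): when the length is already known, memchr() needs no extra
strlen() pass and stops at the buffer end rather than at a NUL byte.
```
#include <string.h>
#include <stddef.h>

/* find the length of the user part of "user:password" when the total
   length is already known */
static size_t user_part_len(const char *login, size_t len)
{
  const char *sep = memchr(login, ':', len);
  return sep ? (size_t)(sep - login) : len;
}
```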
An optimized build runs one of my benchmark URL parsing programs ~41%
faster using this branch. (compared against the shipped 7.88.1 library
in Debian)
Closes#10935
... and make Curl_cookie_add() require 'data' being set proper with an
assert.
The function has not worked with a NULL data for quite some time so this
just corrects the code and comment.
This is a different take than the proposed fix in #10927
Reported-by: Kvarec Lezki
Ref: #10929
Closes #10930
- move all code handling HTTP/2 frames for a particular
stream into a separate function to keep from confusing
the call `data` with the stream `data`.
Closes#10924
A typical mistake would be to try to set "https://" - including the
separator - which is now rejected, as it would otherwise lead to
url_get(... URL...) extracting an invalid URL.
Extended test 1560 to verify.
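A small sketch of the now-rejected call (the exact CURLUcode is not
spelled out here, so the check below only tests for non-success):
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *u = curl_url();
  /* setting the scheme including the "://" separator is now rejected */
  CURLUcode rc = curl_url_set(u, CURLUPART_SCHEME, "https://", 0);
  if(rc != CURLUE_OK)
    printf("rejected as expected (%d)\n", (int)rc);
  curl_url_cleanup(u);
  return 0;
}
```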
Closes#10911
Curl_http2_strerror was renamed to http2_strerror in
05b100aee2 and then http2_strerror was removed in
5808a0d0f5
This also fixes the following compiler error
lib/http2.h:41:33: error: unknown type name 'uint32_t'
lib/http2.h:1:1: note: 'uint32_t' is defined in header '<stdint.h>'
Closes#10912
The only user of this define was 'chkdecimalpoint' - a special purpose
test tool that was built but not used anymore (since 17c18fbc3 - Apr
2020).
Closes#10908
- remove NGHTTP2 members of `struct HTTP`
- add `void *h2_ctx` to `struct HTTP`
- add `void *h3_ctx` to `struct HTTP`
- separate h2/h3 pointers are needed for eyeballing
- manage local stream_ctx in http implementations
Closes#10877
- currently only on debug build and when env variable
CURL_PROXY_TUNNEL_H2 is present.
- will ALPN negotiate with the proxy server and switch
tunnel filter based on the protocol negotiated.
- http/1.1 tunnel code moved into cf-h1-proxy.[ch]
- http/2 tunnel code implemented in cf-h2-proxy.[ch]
- tunnel start and ALPN set remains in http_proxy.c
- moving all haproxy related code into cf-haproxy.[ch]
VTLS changes
- SSL filters rely solely on the "alpn" specification they
are created with and no longer check conn->bits.tls_enable_alpn.
- checks on which ALPN specification to use (or none at all) are
done in vtls.c when creating the filter.
Testing
- added a nghttpx forward proxy to the pytest setup that
speaks HTTP/2 and forwards all requests to the Apache httpd
forward proxy server.
- extending test coverage in test_10 cases
- adding proxy tests for direct/tunnel h1/h2 use of basic auth.
- adding test for http/1.1 and h2 proxy tunneling to pytest
Closes#10780
- eliminate the receive loop in vtls to fill the buffer. This may
lead to partial reads of data, which is counterproductive
- let http2 instead loop smarter to process pending network
data without transfer switches
scorecard improvements
- do not start caddy when only httpd is requested
- allow curl -v to stderr file on --curl-verbose
Closes#10891
Using bad numbers in an IPv4 numerical address now returns
CURLUE_BAD_HOSTNAME.
I noticed while working on trurl and it was originally reported here:
https://github.com/curl/trurl/issues/78
Updated test 1560 accordingly.
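An illustrative check (the URL is a made-up example, assuming an
out-of-range component such as 256 counts as a "bad number"):
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *u = curl_url();
  /* 256 is not a valid IPv4 component in a numerical address */
  CURLUcode rc = curl_url_set(u, CURLUPART_URL, "http://1.2.3.256/", 0);
  printf("curl_url_set returned %d\n", (int)rc);
  curl_url_cleanup(u);
  return 0;
}
```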
Closes#10894
Meaning that it would wrongly still store the fragment using spaces
instead of %20 if allowing space while also asking for URL encoding.
Discovered when playing with trurl.
Added test to lib1560 to verify the fix.
Closes#10887
- when rustls is told to receive more TLS data and its internal
plaintext buffers are full, it returns an IOERROR
- avoid receiving TLS data while plaintext is not read empty
pytest:
- increase curl run timeout when invoking pytest with higher verbosity
Closes#10876
- ngtcp2: using bufq for recv stream data
- internal stream_ctx instead of `struct HTTP` members
for quiche, ngtcp2 and msh3
- no more QUIC related members in `struct HTTP`
- experimental use of recvmmsg(), disabled by default
- testing on my old debian box shows no throughput improvements.
- leaving it in, but disabled, for future revisit
- vquic: common UDP receive code for ngtcp2 and quiche
- vquic: common UDP send code for ngtcp2 and quiche
- added pytest skips for known msh3 failures
- fix unit2601 to survive torture testing
- quiche: using latest `master` from quiche and enabling large download
tests, now that key change is supported
- fixing test_07_21 where retry handling of starting a stream
was faulty
- msh3: use bufq for recv buffering headers and data
- msh3: replace fprintf debug logging with LOG_CF where possible
- msh3: force QUIC expire timers on recv/send to have more than
1 request per second served
Closes#10772
- use bufq for send/receive of network data
- use bufq for send/receive of stream data
- use HTTP/2 flow control with no-auto updates to control the
amount of data we are buffering for a stream
HTTP/2 stream window set to 128K after local tests, defined
code constant for now
- eliminating PAUSEing nghttp2 processing when receiving data
since a stream can now take in all DATA nghttp2 forwards
Improved scorecard and adjusted http2 stream window sizes
- scorecard improved output formatting and options default
- scorecard now also benchmarks small requests / second
Closes#10771
RFC 7686 states that:
> Applications that do not implement the Tor
> protocol SHOULD generate an error upon the use of .onion and
> SHOULD NOT perform a DNS lookup.
Let's do that.
https://www.rfc-editor.org/rfc/rfc7686#section-2
Add test 1471 and 1472 to verify
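A minimal sketch of the kind of suffix check this implies (a hypothetical
helper, not the actual resolver code):
```
#include <string.h>
#include <strings.h>

/* hypothetical helper: does the host name end in ".onion"? */
static int is_onion_host(const char *host)
{
  size_t len = strlen(host);
  return (len >= 6) && !strcasecmp(host + len - 6, ".onion");
}
```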
Fixes #543
Closes #10705
* Configure changes to detect AWS-LC
* CMakeLists.txt changes to detect AWS-LC
* Compile-time branches needed to support AWS-LC
* Correctly set OSSL_VERSION and report AWS-LC release number
* GitHub Actions script to build with autoconf and cmake against AWS-LC
AWS-LC is a BoringSSL/OpenSSL derivative
For more information see https://github.com/awslabs/aws-lc/
Closes #10320
SSL backends like OpenSSL/wolfSSL and others return the content of one
TLS record on read, but usually there are more available.
Change the vtls cfilter recv() function to fill the given buffer until a
read would block.
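The gist of the change as a simplified, self-contained sketch (the
callback type and function names are hypothetical, not the actual vtls
code):
```
#include <sys/types.h>
#include <stddef.h>

/* hypothetical backend read callback: returns bytes read, 0 on EOF,
   -1 when a read would block */
typedef ssize_t (*tls_read_cb)(void *ctx, unsigned char *buf, size_t len);

/* keep pulling decrypted TLS records until the buffer is full, the
   connection closes, or the backend would block */
static ssize_t recv_fill(tls_read_cb readf, void *ctx,
                         unsigned char *buf, size_t len)
{
  size_t total = 0;
  while(total < len) {
    ssize_t n = readf(ctx, buf + total, len - total);
    if(n < 0)
      return total ? (ssize_t)total : -1; /* would block (or failed) */
    if(n == 0)
      break; /* connection closed */
    total += (size_t)n;
  }
  return (ssize_t)total;
}
```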
Closes#10736
Some IP cameras send malformed RTSP interleaved frames sometimes, which
can cause curl_easy_perform to return 1 (CURLE_UNSUPPORTED_PROTOCOL). This
change attempts to skip clearly incorrect RTSP interleaving frame data.
Closes#10808
Adding `bufq`:
- at init() time configured to hold up to `n` chunks of `m` bytes each.
- various methods for reading from and writing to it.
- `peek` support to get access to buffered data without copy
- `pass` support to allow buffer flushing on write if it becomes full
- use case: IO buffers for dynamic reads and writes that do not blow up
- distinct from `dynbuf` in that:
- it maintains a read position (see the conceptual sketch after this list)
- writes on a full bufq return CURLE_AGAIN instead of nuking itself
- Init options:
- SOFT_LIMIT: allow writes into a full bufq
- NO_SPARES: free empty chunks right away
- a `bufc_pool` that can keep a number of spare chunks to
be shared between different `bufq` instances
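A simplified, hypothetical illustration of the read-position idea only;
this is not curl's actual bufq, which chains multiple chunks and adds
peek/pass/pool support on top:
```
#include <string.h>
#include <stddef.h>

/* a fixed-capacity buffer with an explicit read position; writes that
   do not fit fail softly instead of growing or discarding data */
struct toy_bufq {
  unsigned char data[1024];
  size_t rpos;  /* read position */
  size_t wpos;  /* write position */
};

static size_t toy_write(struct toy_bufq *q, const unsigned char *buf,
                        size_t len)
{
  size_t space = sizeof(q->data) - q->wpos;
  size_t n = (len < space) ? len : space;
  memcpy(q->data + q->wpos, buf, n);
  q->wpos += n;
  return n;  /* 0 when full, akin to returning CURLE_AGAIN */
}

static size_t toy_read(struct toy_bufq *q, unsigned char *buf, size_t len)
{
  size_t avail = q->wpos - q->rpos;
  size_t n = (len < avail) ? len : avail;
  memcpy(buf, q->data + q->rpos, n);
  q->rpos += n;
  return n;
}
```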
Adding `dynhds`:
- a straightforward list of name+value pairs as used for HTTP headers
- headers can be appended dynamically
- headers can be removed again
- headers can be replaced
- headers can be looked up
- http/1.1 formatting into a `dynbuf`
- configured at init() with limits on header counts and total string
sizes
- use case: pass an HTTP request or response around without being version
specific
- express an HTTP request without a curl easy handle (used in h2 proxy
tunnels)
- future extension possibilities:
- conversions of `dynhds` to nghttp2/nghttp3 name+value arrays
Closes#10720
As dynbufs always have a fixed maximum size which they are not allowed
to grow larger than, capping the allocation at that maximum makes sure
the buffer does not allocate memory that will never be used.
Closes#10845
The public 'curl_fileinfo' struct contained three fields that are for
internal purposes only. This change makes them unused in the public
struct.
The new private struct fields are also renamed to make this separation
more obvious internally.
Closes#10844
As they are not driving transfers or any socket activity, the main loop
does not need to iterate over these handles. A performance improvement.
They are instead only held in their own separate lists.
'data->multi' is kept a pointer to the multi handle as long as the easy
handle is actually part of it even when the handle is moved to the
pending/msgsent lists. It needs to know which multi handle it belongs
to, if for example curl_easy_cleanup() is called before the handle is
removed from the multi handle.
All 'data->multi' pointers of handles still part of the multi handle
get cleared by curl_multi_cleanup() which "orphans" all previously
attached easy handles.
This is take 2. The first version was reverted for the 8.0.1 release.
Assisted-by: Stefan Eissing
Closes#10801
- make configure show on HTTP3 feature that both ngtcp2 and nghttp3
are in play
- define ENABLE_QUIC only when USE_NGTCP2 and USE_NGHTTP3 are defined
- add USE_NGHTTP3 in the ngtcp2 implementation
Fixes #10793
Closes #10821
For GOOD_EASY_HANDLE and GOOD_MULTI_HANDLE checks
- allow NULL pointers to "just" return an error as before
- fail hard on non-NULL pointers that no longer show the MAGICs
Closes#10812
Various compile failures in gskit.c:
- pipe_ssloverssl() needs Curl_easy data parameter for
Curl_conn_cf_get_socket(cf, data)
- key_passwd is in ssl_config, not conn_config
- close_on() has 2 parameters, not 4
- getsockopt() needs to call Curl_conn_cf_get_socket(), not
cxn->sock[FIRSTSOCKET]
Fixes #10799
Closes #10800
It turns out c-ares returns an error when asked to resolve a host name with
ares_getaddrinfo using port number 0.
Reported as a c-ares bug here: https://github.com/c-ares/c-ares/issues/517
The work-around is to simply use port 80 instead, as the number typically does
not make a difference and a non-zero number works for c-ares.
Fixes#10759
Reported-by: Matt Jolly
Closes#10789
As they are not driving transfers or any socket activity, the main loop
does not need to iterate over these handles. A performance improvement.
They are instead only held in their own separate lists.
Assisted-by: Stefan Eissing
Ref: #10743
Closes #10762
Linked lists themselves do not carry any allocations, so for the lists
that do not have a set destructor we can just skip the
Curl_llist_destroy() call and save CPU time.
Closes#10764
All s3 requests default to UNSIGNED-PAYLOAD and add the required
x-amz-content-sha256 header. This allows CURLAUTH_AWS_SIGV4 to correctly
sign s3 requests to amazon with no additional configuration
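For context, roughly how an application opts in to SigV4 signing for S3;
the bucket URL, region/service string and credentials below are placeholder
example values:
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* placeholder bucket URL and credentials */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://examplebucket.s3.amazonaws.com/file.bin");
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```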
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes#9995
- add QUIC/ngtcp2 detection in CMake with wolfSSL.
Because wolfSSL uses zlib if available, move compression detection
before TLS detection. (OpenSSL might also need this in the future.)
- wolfSSL 5.5.0 started using C99 types in its `quic.h` header, but it
doesn't #include the necessary C99 header itself, breaking builds
(unless another dependency pulled it by chance.) Add local workaround
for it. For this to work with all build tools, we had to fix our
header detection first. Ref: #10745
Ref: 6ad5f6ecc1
Closes #10739
- use the defined, but so far not used, KEEP_SEND_HOLD bit for flow
control based suspend of sending in transfers.
Prior to this change the KEEP_SEND_PAUSE bit was used instead, but that can
interfere with pausing streams from the user side via curl_easy_pause.
Fixes https://github.com/curl/curl/issues/10751
Closes https://github.com/curl/curl/pull/10753
Fix `stdint.h` and `inttypes.h` detection with non-autotools builds on
Windows. (autotools already auto-detected them accurately.)
`lib/config-win32.h` builds (e.g. `Makefile.mk`):
- set `HAVE_STDINT_H` where supported.
- set `HAVE_INTTYPES_H` for MinGW.
CMake:
- auto-detect them on Windows. (They were both force-disabled.)
- delete unused `CURL_PULL_STDINT_H`.
- delete unused `CURL_PULL_INTTYPES_H`.
- stop detecting `HAVE_STDINT_H` twice.
Present since the initial CMake commit: 4c5307b456
curl doesn't use these C99 headers, we need them now to workaround
broken wolfSSL builds. Ref: #10739
Once that clears up, we can delete these detections and macros (unless
we want to keep them for future us.)
Reviewed-by: Daniel Stenberg
Closes#10745
This is already how curl is documented to behave in Everything curl, but
in actuality only short POSTs skip this. This should knock 30 seconds
off a full run of the test suite since the 100-continue timeout will no
longer be hit.
Closes#10740
RST and connection close were not handled correctly during parallel
transfers, leading to aborted response bodies being reported complete.
Closes#10715
brotli v1.0.0 through the current latest v1.0.9 and latest master [1]
trigger this warning.
It happened with CMake and GNU Make. autotools builds avoid it with
the `convert -I options to -isystem` macro.
llvm/clang:
```
In file included from ./curl/lib/content_encoding.c:36:
./brotli/x64-ucrt/usr/include/brotli/decode.h:204:34: warning: variable length array used [-Wvla]
const uint8_t encoded_buffer[BROTLI_ARRAY_PARAM(encoded_size)],
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./brotli/x64-ucrt/usr/include/brotli/port.h:253:34: note: expanded from macro 'BROTLI_ARRAY_PARAM'
^~~~~~
In file included from ./curl/lib/content_encoding.c:36:
./brotli/x64-ucrt/usr/include/brotli/decode.h:206:48: warning: variable length array used [-Wvla]
uint8_t decoded_buffer[BROTLI_ARRAY_PARAM(*decoded_size)]);
~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~
./brotli/x64-ucrt/usr/include/brotli/port.h:253:35: note: expanded from macro 'BROTLI_ARRAY_PARAM'
~^~~~~
```
gcc:
```
In file included from ./curl/lib/content_encoding.c:36:
./brotli/x64-ucrt/usr/include/brotli/decode.h:204:5: warning: ISO C90 forbids variable length array 'encoded_buffer' [-Wvla]
204 | const uint8_t encoded_buffer[BROTLI_ARRAY_PARAM(encoded_size)],
| ^~~~~
./brotli/x64-ucrt/usr/include/brotli/decode.h:206:5: warning: ISO C90 forbids variable length array 'decoded_buffer' [-Wvla]
206 | uint8_t decoded_buffer[BROTLI_ARRAY_PARAM(*decoded_size)]);
| ^~~~~~~
```
[1] ed1995b6bd
Reviewed-by: Daniel Stenberg
Reviewed-by: Marcel Raad
Closes#10738
In pytest'ing, the situation occurred that wolfSSL reported an
IO error when the underlying BIO operation was returning a
CURLE_AGAIN condition.
Re-adding the `io_result` filter context member to detect such
situations.
Also, making sure that the returned CURLcode is initialized on
all recv operation outcomes.
Closes#10716
This makes us debug libssh2 less and libcurl more when for example
running torture tests that otherwise will spend a lot of time in libssh2
functions.
We leave libssh2 to test libssh2.
Closes#10721
By letting curl_easy_header() and curl_easy_nextheader() store the
header data in their own struct storage when they return a pointer to
it, it makes it possible for applications to use them both in a loop.
Like the curl tool does.
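The kind of loop this enables (an illustrative helper, roughly what the
curl tool does):
```
#include <curl/curl.h>
#include <stdio.h>

/* iterate all response headers of the most recent request; mixing in
   curl_easy_header() calls no longer disturbs the iteration pointer */
static void dump_headers(CURL *easy)
{
  struct curl_header *h = NULL;
  while((h = curl_easy_nextheader(easy, CURLH_HEADER, -1, h)))
    printf("%s: %s\n", h->name, h->value);
}
```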
Reported-by: Boris Okunskiy
Fixes #10704
Closes #10707
- since 7.87.0 we lost adding the SSL filter for an active
FTP connection that uses SSL. This leads to hangers and timeouts
as reported in #10666.
Reported-by: SandakovMM on github
Fixes #10666
Closes #10669
- add parameter to `conn_is_alive()` cfilter method that returns
if there is input data waiting on the connection
- refrain from re-using connections from the cache that have
input pending
- adapt http/2 and http/3 alive checks to digest pending input
to check the connection state
- remove check_cxn method from openssl as that was just doing
what the socket filter now does.
- add tests for connection reuse with special server configs
Closes#10690
- a reset transfer (HTTP/2 RST) did not always lead to the proper
error message on receiving its response, leading to wrong reports
of a successful transfer
- test_05_02 was able to trigger this condition with increased transfer
count. The simulated response errors did not carry a 'Content-Length'
so only proper RST handling could detect the abort
- When doing such transfers in parallel, a connection could enter the
state where
a) it had been closed (GOAWAY received)
b) the RST had not been "seen" for the transfer yet
or c) the GOAWAY announced an error and the last successful
stream id was not checked against ongoing transfers
Closes#10693
- time_connect was not updated when the overall connection failed,
e.g. when SSL verification was unsuccessful, refs #10670
- reworked gathering those values to interrogate involved filters,
also from all eyeballing attempts, to report the maximum of
those values.
- added 3 test cases in test_06 to check reported values on
successful, partially failed and totally failed connections.
Reported-by: Master Inspire
Fixes #10670
Closes #10671
Normally curl uses cryptographically strong random provided by the
selected SSL backend. If compiled without SSL support, a naive built-in
function was used instead.
Generally this was okay, but it will result in some downsides for non-
SSL builds, such as predictable temporary file names.
This change ensures that arc4random will be used instead, if available.
Closes#10672
Before this patch, enabling LDAPS required a manual C flag:
c1cfc31cfc/curl-cmake.sh (L105)
Fix this and enable LDAPS automatically when using `wldap32` (and
when not explicitly disabled). This matches autotools and `Makefile.mk`
behavior. Also remove issue from KNOWN_BUGS.
Add workaround for MSVS 2010 warning triggered by LDAPS now enabled
in more CI tests:
`ldap.c(360): warning C4306: 'type cast' : conversion from 'int' to 'void *' of greater size`
Ref: https://ci.appveyor.com/project/curlorg/curl/builds/46408284/job/v8mwl9yfbmoeqwlr#L312
Reported-by: JackBoosY on github
Reviewed-by: Jay Satiro
Reviewed-by: Marcel Raad
Fixes #6284
Closes #10674
Since abebb2b893, we set this macro for
all Windows `wldap32` builds using `Makefile.mk`.
For OpenLDAP builds this macro is not enough to enable LDAPS, and
OpenLDAP is not an option in `Makefile.mk`. For Novell LDAP it might
have helped, but it's also not an option anymore in `Makefile.mk`.
The future for LDAPS is that we should enable it by default without
extra build knobs.
Reviewed-by: Marcel Raad
Closes#10681
The feature is rarely used so this frees up data for the vast majority
of easy handles that don't use it.
Rename "protdata" to "ftpwc" since it is always an FTP wildcard struct
pointer. Made the state struct field an unsigned char to save space.
Closes#10639
- refs #10646 where reuse was attempted on closed connections in the
cache, leading to an exhaustion of retries on a transfer
- the mistake was that poll events like POLLHUP, POLLERR, etc
were regarded as "not dead".
- change cf-socket filter check to regard such events as an indication
of corpsiness.
- vtls filter checks: fixed interpretation of backend check result
when inconclusive to interrogate status further down the filter
chain.
Reported-by: SendSonS on github
Fixes #10646
Closes #10652