Commit Graph

29 Commits

Author SHA1 Message Date
Xiaoyin Liu
00f3a013c3 Remove redundant declarations in record_locl.h
This patch removes the prototype of function RECORD_LAYER_set_write_sequence from record_locl.h, since this function is not defined.

Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/4051)
2017-07-30 17:40:56 -04:00
Matt Caswell
380a522f68 Replace instances of OPENSSL_assert() with soft asserts in libssl
Reviewed-by: Tim Hudson <tjh@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/3496)
2017-05-22 14:00:19 +01:00
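
For context on the commit above: OPENSSL_assert() aborts the process when its condition fails, whereas a "soft" assert reports the failure and lets the caller return an error instead. A minimal standalone sketch of the idea (the soft_assert macro and do_work function are hypothetical, not libssl code):

    #include <stdio.h>

    /* Hypothetical soft assert: report the failed condition and yield 0/1
     * instead of aborting the way OPENSSL_assert() would. */
    #define soft_assert(cond) \
        ((cond) ? 1 : (fprintf(stderr, "assertion failed: %s\n", #cond), 0))

    static int do_work(int n)
    {
        if (!soft_assert(n >= 0))
            return 0;              /* propagate an error to the caller */
        return n + 1;
    }

    int main(void)
    {
        printf("%d\n", do_work(41));   /* 42 */
        printf("%d\n", do_work(-1));   /* soft assert fires, prints 0 */
        return 0;
    }
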
Matt Caswell
a832b5ef7a Skip early_data if appropriate after a HelloRetryRequest
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/2737)
2017-03-02 17:44:15 +00:00
Matt Caswell
54105ddd23 Rename all "read" variables with "readbytes"
Travis is reporting shadowed-variable warnings, one file at a time, wherever
"read" has been used as a variable name. This goes through all of libssl and
replaces "read" with "readbytes" to fix all of the problems in one go.

Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:46 +00:00
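
A minimal standalone example (not OpenSSL code) of the warning class fixed above: a local variable named "read" shadows the POSIX read() declaration and trips -Wshadow; renaming it to "readbytes" avoids that.

    #include <stdio.h>
    #include <unistd.h>   /* declares read(), which a variable "read" shadows */

    static size_t copy_some(const char *src, char *dst, size_t n)
    {
        size_t readbytes = 0;   /* was "size_t read = 0;" -> -Wshadow warning */

        while (readbytes < n && src[readbytes] != '\0') {
            dst[readbytes] = src[readbytes];
            readbytes++;
        }
        dst[readbytes] = '\0';
        return readbytes;
    }

    int main(void)
    {
        char buf[16];

        printf("%zu bytes: %s\n", copy_some("hello", buf, sizeof(buf) - 1), buf);
        return 0;
    }
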
Matt Caswell
72716e79bf Convert some misc record layer functions for size_t
Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:45 +00:00
Matt Caswell
5607b2759a Convert SSL3_RECORD_clear() and SSL3_RECORD_release() to size_t
Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:45 +00:00
Matt Caswell
7ee8627f6e Convert libssl writing for size_t
Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:45 +00:00
Matt Caswell
eda757514e Further libssl size_t-ify of reading
Writing still to be done

Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:45 +00:00
Matt Caswell
8e6d03cac4 Convert record layer to use size_t
Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-11-04 12:09:45 +00:00
Matt Caswell
af58be768e Don't allow too many consecutive warning alerts
Certain warning alerts are ignored if they are received. This can mean that
no progress will be made if one peer continually sends those warning alerts.
Implement a count so that we abort the connection if we receive too many.

Issue reported by Shi Lei.

Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-09-21 20:17:04 +01:00
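
A rough sketch of the counting idea from the commit above, using hypothetical names (MAX_WARN_ALERT_COUNT, toy_conn) rather than the actual libssl fields: ignored warning alerts increment a counter, real progress resets it, and the connection is aborted once the counter passes the limit.

    #include <stdio.h>

    #define MAX_WARN_ALERT_COUNT 5          /* hypothetical limit */

    struct toy_conn {
        unsigned int warn_alert_count;
    };

    /* Returns 1 to continue, 0 to abort the connection. */
    static int on_ignorable_warning_alert(struct toy_conn *c)
    {
        return ++c->warn_alert_count <= MAX_WARN_ALERT_COUNT;
    }

    /* Any real traffic resets the counter. */
    static void on_progress(struct toy_conn *c)
    {
        c->warn_alert_count = 0;
    }

    int main(void)
    {
        struct toy_conn c = { 0 };
        int i, ok = 1;

        on_progress(&c);
        for (i = 0; ok; i++)
            ok = on_ignorable_warning_alert(&c);
        printf("aborted after %d consecutive warning alerts\n", i);
        return 0;
    }
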
Matt Caswell
1fb9fdc302 Fix DTLS replay protection
The DTLS implementation provides some protection against replay attacks
in accordance with RFC6347 section 4.1.2.6.

A sliding "window" of valid record sequence numbers is maintained with
the "right" hand edge of the window set to the highest sequence number we
have received so far. Records that arrive that are off the "left" hand
edge of the window are rejected. Records within the window are checked
against a list of records received so far. If we already received it then
we also reject the new record.

If we have not already received the record, or the sequence number is off
the right hand edge of the window then we verify the MAC of the record.
If MAC verification fails then we discard the record. Otherwise we mark
the record as received. If the sequence number was off the right hand edge
of the window, then we slide the window along so that the right hand edge
is in line with the newly received sequence number.

Records may arrive for future epochs, i.e. a record sent after a CCS can
arrive before the CCS does if the packets get re-ordered. As we have not
yet received the CCS we are not yet in a position to decrypt or validate
the MAC of those records. OpenSSL places those records on an unprocessed
records queue. It additionally updates the window immediately, even though
we have not yet verified the MAC. This will only occur if we are currently
in a handshake/renegotiation.

This could be exploited by an attacker by sending a record for the next
epoch (which does not have to decrypt or have a valid MAC), with a very
large sequence number. This means the right hand edge of the window is
moved very far to the right, and all subsequent legitimate packets are
dropped causing a denial of service.

A similar effect can be achieved during the initial handshake. In this
case there is no MAC key negotiated yet. Therefore an attacker can send a
message for the current epoch with a very large sequence number. The code
will process the record as normal. If the handshake message sequence number
(as opposed to the record sequence number that we have been talking about
so far) is in the future then the injected message is buffered to be
handled later, but the window is still updated. Therefore all subsequent
legitimate handshake records are dropped. This aspect is not considered a
security issue because there are many ways for an attacker to disrupt the
initial handshake and prevent it from completing successfully (e.g.
injection of a handshake message will cause the Finished MAC to fail and
the handshake to be aborted). This issue comes about as a result of trying
to do replay protection, but having no integrity mechanism in place yet.
Does it even make sense to have replay protection in epoch 0? That
issue isn't addressed here though.

This addressed an OCAP Audit issue.

CVE-2016-2181

Reviewed-by: Richard Levitte <levitte@openssl.org>
2016-08-19 13:52:40 +01:00
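
To make the window mechanics concrete, here is a standalone sketch of an RFC 6347-style sliding-window replay check (illustrative only, not the OpenSSL implementation; WINDOW_BITS and the struct are invented for the example). The essence of the fix above is that replay_update() must only be called once a record's MAC has been verified, so a forged record with a huge sequence number cannot drag the right hand edge of the window forward.

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW_BITS 64

    struct replay_window {
        uint64_t top;     /* highest sequence number seen (right edge) */
        uint64_t bitmap;  /* bit i set => (top - i) already received */
    };

    /* Returns 1 if the sequence number is fresh, 0 if it must be rejected. */
    static int replay_check(const struct replay_window *w, uint64_t seq)
    {
        if (seq > w->top)
            return 1;                         /* new, off the right edge */
        if (w->top - seq >= WINDOW_BITS)
            return 0;                         /* off the left edge: reject */
        return !(w->bitmap & ((uint64_t)1 << (w->top - seq)));
    }

    /* Call only once the record has been authenticated. */
    static void replay_update(struct replay_window *w, uint64_t seq)
    {
        if (seq > w->top) {
            uint64_t shift = seq - w->top;

            w->bitmap = shift >= WINDOW_BITS ? 0 : w->bitmap << shift;
            w->bitmap |= 1;
            w->top = seq;
        } else {
            w->bitmap |= (uint64_t)1 << (w->top - seq);
        }
    }

    int main(void)
    {
        struct replay_window w = { 0, 0 };
        uint64_t seqs[] = { 1, 3, 2, 3, 100, 50, 1 };  /* last two: dup, stale */
        size_t i;

        for (i = 0; i < sizeof(seqs) / sizeof(seqs[0]); i++) {
            int fresh = replay_check(&w, seqs[i]);

            printf("seq %3llu: %s\n", (unsigned long long)seqs[i],
                   fresh ? "accept" : "reject");
            if (fresh)
                replay_update(&w, seqs[i]);
        }
        return 0;
    }
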
Emilia Kasper
a230b26e09 Indent ssl/
Run util/openssl-format-source on ssl/

Some comments and hand-formatted tables were fixed up
manually by disabling auto-formatting.

Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-08-18 14:02:29 +02:00
Matt Caswell
78fcddbb8d Address feedback on SSLv2 ClientHello processing
Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-08-15 23:14:30 +01:00
Matt Caswell
44efb88a21 Address feedback on SSLv2 ClientHello processing
Feedback on the previous SSLv2 ClientHello processing fix was that it
breaks layering by reading init_num in the record layer. It also does not
detect if there was a previous non-fatal warning.

This is an alternative approach that directly tracks in the record layer
whether this is the first record.

GitHub Issue #1298

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-08-15 23:14:30 +01:00
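
A toy sketch of the approach described above (all names hypothetical): the record layer itself remembers whether any record has been processed yet, so an SSLv2-format ClientHello is only accepted as the very first record.

    #include <stdio.h>

    struct toy_record_layer {
        int first_record_read;   /* 0 until the first record has been handled */
    };

    /* SSLv2-compat ClientHello format is only legal as the first record. */
    static int accept_sslv2_format(const struct toy_record_layer *rl)
    {
        return !rl->first_record_read;
    }

    static void record_processed(struct toy_record_layer *rl)
    {
        rl->first_record_read = 1;
    }

    int main(void)
    {
        struct toy_record_layer rl = { 0 };

        printf("before first record: %d\n", accept_sslv2_format(&rl)); /* 1 */
        record_processed(&rl);
        printf("after first record:  %d\n", accept_sslv2_format(&rl)); /* 0 */
        return 0;
    }
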
Matt Caswell
58c27c207d Fix crash as a result of MULTIBLOCK
The MULTIBLOCK code uses a "jumbo" sized write buffer which it allocates
and then frees later. Pipelining however introduced multiple pipelines. It
keeps track of how many pipelines are initialised using numwpipes.
Unfortunately the MULTIBLOCK code was not updating this when it deallocated
its buffers, leading to a buffer being marked as initialised but set to
NULL.

RT#4618

Reviewed-by: Rich Salz <rsalz@openssl.org>
2016-07-30 11:46:20 +01:00
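
A sketch of the bug class described above, with hypothetical names standing in for the libssl structures: freeing write buffers without updating the count of initialised pipelines leaves an entry that is "initialised" according to numwpipes but is actually NULL.

    #include <stdlib.h>
    #include <stdio.h>

    #define MAX_PIPELINES 32

    struct toy_wbufs {
        unsigned char *buf[MAX_PIPELINES];
        size_t numwpipes;          /* how many entries of buf[] are initialised */
    };

    static void free_jumbo_buffer(struct toy_wbufs *w)
    {
        free(w->buf[0]);
        w->buf[0] = NULL;
        w->numwpipes = 0;          /* the missing step: keep the count in sync */
    }

    int main(void)
    {
        struct toy_wbufs w = { { 0 }, 0 };

        w.buf[0] = malloc(16 * 1024);   /* "jumbo" multiblock write buffer */
        if (w.buf[0] == NULL)
            return 1;
        w.numwpipes = 1;

        free_jumbo_buffer(&w);
        printf("numwpipes after free: %zu\n", w.numwpipes);
        return 0;
    }
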
Matt Caswell
255cfeacd8 Reject out of context empty records
Previously if we received an empty record we just threw it away and
ignored it. Really though if we get an empty record of a different content
type to what we are expecting then that should be an error, i.e. we should
reject out of context empty records. This commit makes the necessary changes
to achieve that.

RT#4395

Reviewed-by: Andy Polyakov <appro@openssl.org>
2016-06-07 22:07:36 +01:00
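
A sketch of the rule in isolation (the content-type constants come from the TLS registry; the function is illustrative, not the libssl code): an empty record is only tolerated when its content type matches what we are currently expecting, otherwise it is treated as an error.

    #include <stdio.h>

    #define TLS_CT_HANDSHAKE        22
    #define TLS_CT_APPLICATION_DATA 23

    /* Returns 1 if the empty record may be skipped, 0 if it is out of context. */
    static int empty_record_ok(int record_type, int expected_type)
    {
        return record_type == expected_type;
    }

    int main(void)
    {
        /* Expecting handshake data: an empty handshake record can be skipped,
         * an empty application data record is out of context and rejected. */
        printf("%d\n", empty_record_ok(TLS_CT_HANDSHAKE, TLS_CT_HANDSHAKE));
        printf("%d\n", empty_record_ok(TLS_CT_APPLICATION_DATA, TLS_CT_HANDSHAKE));
        return 0;
    }
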
Matt Caswell
753be41d59 Fix some suspect warnings on Windows
Windows was complaining about a unary minus operator being applied to an
unsigned type. It did seem to go on and do the right thing anyway, but the
code does look a little suspect. This fixes it.

Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
2016-05-26 17:18:39 +01:00
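
For illustration, the kind of construct that triggers this warning (MSVC's C4146, "unary minus operator applied to unsigned type, result still unsigned") and the usual style of fix, as a standalone example rather than the actual libssl change:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        size_t len = 5;

        /* size_t suspicious = -len;       <- what the compiler warned about */
        size_t mask = (0 - len) & 0xff;    /* same value, clearer intent */

        printf("%zu\n", mask);             /* 251 on any platform */
        return 0;
    }
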
Rich Salz
846e33c729 Copyright consolidation 01/10
Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
2016-05-17 14:19:19 -04:00
Matt Caswell
f482740f23 Remove the wrec record layer field
We used to use the wrec field in the record layer for keeping track of the
current record that we are writing out. As part of the pipelining changes
this has been moved to stack allocated variables to do the same thing,
therefore the field is no longer needed.

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-03-07 21:39:28 +00:00
Matt Caswell
dad78fb13d Add an ability to set the SSL read buffer size
This capability is required for read pipelining. We will only read in as
many records as will fit in the read buffer (and the network can provide
in one go). The bigger the buffer the more records we can process in
parallel.

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-03-07 21:39:27 +00:00
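
The commit message does not quote the API itself; a usage sketch, assuming the SSL_CTX_set_default_read_buffer_len() control that released OpenSSL versions expose for this purpose:

    #include <openssl/ssl.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (ctx == NULL)
            return 1;

        /* Ask for a larger read buffer so more records fit in one read. */
        SSL_CTX_set_default_read_buffer_len(ctx, 64 * 1024);

        printf("read buffer hint set\n");
        SSL_CTX_free(ctx);
        return 0;
    }
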
Matt Caswell
0220fee47f Lazily initialise the compression buffer
With read pipelining we use multiple SSL3_RECORD structures for reading.
There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
would be used). Each one has a 16k compression buffer allocated! This
results in a significant amount of memory being consumed which, most of the
time, is not needed.  This change swaps the allocation of the compression
buffer to be lazy so that it is only done immediately before it is actually
used.

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-03-07 21:39:27 +00:00
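
A generic sketch of the lazy-allocation pattern described above, with hypothetical names standing in for the SSL3_RECORD fields: the 16k compression buffer is not allocated up front for every record slot, only on first use.

    #include <stdlib.h>
    #include <stdio.h>

    #define COMP_BUF_SIZE (16 * 1024)

    struct toy_record {
        unsigned char *comp_buf;   /* NULL until compression is actually used */
    };

    static unsigned char *get_comp_buf(struct toy_record *r)
    {
        if (r->comp_buf == NULL)               /* allocate on first use only */
            r->comp_buf = malloc(COMP_BUF_SIZE);
        return r->comp_buf;
    }

    int main(void)
    {
        struct toy_record recs[32] = { { NULL } };

        /* Only the record that actually needs compression pays for a buffer. */
        if (get_comp_buf(&recs[0]) == NULL)
            return 1;
        printf("allocated one buffer instead of 32\n");

        free(recs[0].comp_buf);
        return 0;
    }
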
Matt Caswell
94777c9c86 Implement read pipeline support in libssl
Read pipelining is controlled in a slightly different way than with write
pipelining. While reading we are constrained by the number of records that
the peer (and the network) can provide to us in one go. The more records
we can get in one go the more opportunity we have to parallelise the
processing.

There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is
controlled by max_pipelines (as for write pipelining)
* The size of our read buffer. A subsequent commit will provide an API for
adjusting the size of the buffer.

Another requirement for this to work is that "read_ahead" must be set. The
read_ahead parameter will attempt to read as much data into our read buffer
as the network can provide. Without this set, data is read into the read
buffer on demand. Setting the max_pipelines parameter to a value greater
than 1 will automatically also turn read_ahead on.

Finally, the read pipelining as currently implemented will only parallelise
the processing of application data records. Parallelising other record types
would only make a difference during renegotiation, so this restriction is
unlikely to have a significant impact.

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-03-07 21:39:27 +00:00
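
A usage sketch of the knobs described above, assuming the SSL_CTX_set_max_pipelines() and SSL_CTX_set_read_ahead() controls that OpenSSL 1.1.0+ exposes; pipelining only has an effect when the underlying cipher implementation supports it.

    #include <openssl/ssl.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (ctx == NULL)
            return 1;

        SSL_CTX_set_max_pipelines(ctx, 4);   /* > 1 also implies read_ahead */
        SSL_CTX_set_read_ahead(ctx, 1);      /* shown explicitly for clarity */

        SSL_CTX_free(ctx);
        return 0;
    }
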
Matt Caswell
d102d9df86 Implement write pipeline support in libssl
Use the new pipeline cipher capability to encrypt multiple records being
written out all in one go. Two new SSL/SSL_CTX parameters can be used to
control how this works: max_pipelines and split_send_fragment.

max_pipelines defines the maximum number of pipelines that can ever be used
in one go for a single connection. It must always be less than or equal to
SSL_MAX_PIPELINES (currently defined to be 32). By default only one
pipeline will be used (i.e. normal non-parallel operation).

split_send_fragment defines how data is split up into pipelines. The number
of pipelines used will be determined by the amount of data provided to the
SSL_write call divided by split_send_fragment. For example if
split_send_fragment is set to 2000 and max_pipelines is 4 then:
SSL_write called with 0-2000 bytes == 1 pipeline used
SSL_write called with 2001-4000 bytes == 2 pipelines used
SSL_write called with 4001-6000 bytes == 3 pipelines used
SSL_write called with 6001+ bytes == 4 pipelines used

split_send_fragment must always be less than or equal to max_send_fragment.
By default it is set to be equal to max_send_fragment. This will mean that
the same number of records will always be created as would have been
created in the non-parallel case, although the data will be apportioned
differently. In the parallel case data will be spread equally between the
pipelines.

Reviewed-by: Tim Hudson <tjh@openssl.org>
2016-03-07 21:39:27 +00:00
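
A worked example of the arithmetic described above (illustrative only, not the libssl code): the number of pipelines used for a write is the data length divided by split_send_fragment, rounded up and capped at max_pipelines. Run with split_send_fragment = 2000 and max_pipelines = 4, it reproduces the table in the commit message.

    #include <stdio.h>
    #include <stddef.h>

    static size_t pipelines_used(size_t nbytes, size_t split_send_fragment,
                                 size_t max_pipelines)
    {
        /* Ceiling division, at least one pipeline, capped at max_pipelines. */
        size_t n = (nbytes + split_send_fragment - 1) / split_send_fragment;

        if (n == 0)
            n = 1;
        return n > max_pipelines ? max_pipelines : n;
    }

    int main(void)
    {
        size_t sizes[] = { 1500, 2000, 2001, 4000, 4001, 6001, 20000 };
        size_t i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
            printf("SSL_write of %5zu bytes -> %zu pipeline(s)\n",
                   sizes[i], pipelines_used(sizes[i], 2000, 4));
        return 0;
    }
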
Rich Salz
a773b52a61 Remove unused parameters from internal functions
Reviewed-by: Richard Levitte <levitte@openssl.org>
2016-02-22 13:39:44 -05:00
Rich Salz
349807608f Remove /* foo.c */ comments
This was done by the following
        find . -name '*.[ch]' | /tmp/pl
where /tmp/pl is the following three-line script:
        #!/usr/bin/perl -ni
        print unless $. == 1 && m@/\* .*\.[ch] \*/@;
        close ARGV if eof; # Close file to reset $.

And then some hand-editing of other files.

Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
2016-01-26 16:40:43 -05:00
Matt Caswell
6b41b3f5ea Fix a memory leak in compression
The function RECORD_LAYER_clear() is supposed to clear the contents of the
RECORD_LAYER structure, but retain certain data such as buffers that are
allocated. Unfortunately one buffer (for compression) got missed and was
inadvertently being wiped, thus causing a memory leak.

In part this is due to the fact that RECORD_LAYER_clear() was reaching
inside SSL3_BUFFERs and SSL3_RECORDs, which it really shouldn't. So, I've
rewritten it to only clear the data it knows about, and to defer clearing
of SSL3_RECORD and SSL3_BUFFER structures to SSL3_RECORD_clear() and the
new function SSL3_BUFFER_clear().

Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
2015-05-22 08:08:45 +01:00
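
A sketch of the "clear the contents but keep the allocation" idea described above, with a toy buffer type standing in for SSL3_BUFFER: clearing wipes the data and resets the length, but does not release or forget the allocated memory.

    #include <string.h>
    #include <stdlib.h>
    #include <stdio.h>

    struct toy_buffer {
        unsigned char *data;
        size_t allocated;
        size_t used;
    };

    static void toy_buffer_clear(struct toy_buffer *b)
    {
        if (b->data != NULL)
            memset(b->data, 0, b->allocated);  /* wipe, keep the allocation */
        b->used = 0;
    }

    int main(void)
    {
        struct toy_buffer b = { malloc(1024), 1024, 0 };

        if (b.data == NULL)
            return 1;
        memcpy(b.data, "secret", 6);
        b.used = 6;

        toy_buffer_clear(&b);
        printf("still allocated: %zu bytes, used: %zu\n", b.allocated, b.used);

        free(b.data);
        return 0;
    }
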
Matt Caswell
32ec41539b Server side version negotiation rewrite
This commit changes the way that we do server side protocol version
negotiation. Previously we had a whole set of code that had an "up front"
state machine dedicated to the negotiating the protocol version. This adds
significant complexity to the state machine. Historically the justification
for doing this was the support of SSLv2 which works quite differently to
SSLv3+. However, we have now removed support for SSLv2 so there is little
reason to maintain this complexity.

The one slight difficulty is that, although we no longer support SSLv2, we
do still support an SSLv3+ ClientHello in an SSLv2 backward compatible
ClientHello format. This is generally only used by legacy clients. This
commit adds support within the SSLv3 code for these legacy format
ClientHellos.

Server side version negotiation now works in much the same way as DTLS,
i.e. we introduce the concept of TLS_ANY_VERSION. If s->version is set to
that then when a ClientHello is received it will work out the most
appropriate version to respond with. Also, SSLv23_method and
SSLv23_server_method have been replaced with TLS_method and
TLS_server_method respectively. The old SSLv23* names still exist as
macros pointing at the new name, although they are deprecated.

Subsequent commits will look at client side version negotiation, as well as
removal of the old s23* code.

Reviewed-by: Kurt Roeckx <kurt@openssl.org>
2015-05-16 09:19:56 +01:00
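
A usage sketch of the flexible-version methods described above: TLS_server_method() produces a context that picks the protocol version during the handshake, and the old SSLv23_server_method() name survives only as a deprecated alias.

    #include <openssl/ssl.h>

    int main(void)
    {
        /* Negotiates the highest version supported by both peers. */
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());

        if (ctx == NULL)
            return 1;

        SSL_CTX_free(ctx);
        return 0;
    }
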
Matt Caswell
747e16398d Clean up record layer
Fix up various things that were missed during the record layer work. All
instances where we are breaking the encapsulation rules.

Reviewed-by: Richard Levitte <levitte@openssl.org>
2015-03-31 14:39:31 +01:00
Matt Caswell
c99c4c11a2 Renamed record layer header files
Reviewed-by: Richard Levitte <levitte@openssl.org>
2015-03-26 15:02:01 +00:00