Sync 1.12.2 #1

Merged
merged 323 commits
Oct 30, 2017
Conversation

kevleyski
Owner

No description provided.

mdounin and others added 30 commits December 27, 2016 17:23
A missing check could cause ngx_stream_ssl_handler() to be applied
to a non-ssl session, which resulted in a null pointer dereference
if ssl_verify_client is enabled.

The bug had appeared in 1.11.8 (41cb1b64561d).
If ngx_stream_ssl_init_connection() succeeded immediately, the check was not
done.

The bug had appeared in 1.11.8 (41cb1b64561d).
No functional changes, mostly style.
Closing up to 32 connections might be too aggressive if worker_connections
is set to a comparable number (and/or there are only a small number of
reusable connections).  If an occasional connection shortage happens in
such a configuration, it leads to closing all reusable connections instead
of gradually reducing keepalive timeout to a smaller value.  To improve
granularity in such configurations we now close no more than 1/8 of all
reusable connections at once.

Suggested by Joel Cunningham.
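
A minimal sketch of the 1/8 granularity described above, using hypothetical names rather than the actual nginx internals (the "always drain at least one" fallback is an assumption added for illustration):

```c
/* Sketch only: drain at most 1/8 of the reusable connections per call
 * instead of a fixed cap of 32.  Names are hypothetical stand-ins. */

#include <stdio.h>

static size_t
connections_to_drain(size_t reusable_n)
{
    size_t  n = reusable_n / 8;        /* no more than 1/8 at once */

    /* assumed fallback: still drain one connection if any exist */
    return (n == 0 && reusable_n > 0) ? 1 : n;
}

int
main(void)
{
    /* with only 100 reusable connections, drain 12 rather than 32 */
    printf("%zu\n", connections_to_drain(100));
    return 0;
}
```
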
The ngx_chain_coalesce_file() function may produce more bytes to send than
requested in the limit passed, as it aligns the last file position
to send to memory page boundary.  As a result, (limit - send) may become
negative.  This resulted in a big positive number when converted to size_t
while calling ngx_output_chain_to_iovec().

Another part of the problem is in ngx_chain_coalesce_file(): it changes cl
to the next chain link even if the current buffer is only partially sent
due to limit.

Therefore, if a file buffer was not expected to be fully sent due to limit,
and was followed by a memory buffer, nginx called sendfile() with a part
of the file buffer, and the memory buffer in the trailer.  If there was enough
room in the socket buffer, this resulted in a part of the file buffer being
skipped, and the corresponding part of the memory buffer sent instead.

The bug was introduced in 8e903522c17a (1.7.8).  Configurations affected
are ones using limits, that is, limit_rate and/or sendfile_max_chunk, and
memory buffers after file ones (this may happen when using subrequests or
when proxying with disk buffering).

Fix is to explicitly check if (send < limit) before constructing trailer
with ngx_output_chain_to_iovec().  Additionally, ngx_chain_coalesce_file()
was modified to preserve unfinished file buffers in cl.
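
As a rough sketch of the (send < limit) check described above, with hypothetical names standing in for the nginx chain and output types:

```c
/* Illustrative only: the coalesced file part may overshoot the limit
 * because its end is rounded up to a page boundary, so the room left
 * for a memory-buffer trailer must be computed defensively. */

#include <stddef.h>
#include <stdint.h>

size_t
trailer_room(int64_t send, int64_t limit)
{
    if (send >= limit) {
        /* nothing left for the trailer; without this check the
         * difference would underflow when cast to size_t */
        return 0;
    }

    return (size_t) (limit - send);
}
```
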
These messages don't seem to be needed in practice and only make
debugging logs harder to read.
The ngx_event_pipe() function wasn't called on write events with
wev->delayed set.  As a result, threaded writing results weren't
properly collected in ngx_event_pipe_write_to_downstream() when a
write event was triggered for a completed write.

Further, this wasn't detected, as p->aio was reset by a thread completion
handler, and results were later collected in ngx_event_pipe_read_upstream()
instead of scheduling a new write of additional data.  If this happened
on the last reading from an upstream, the last part of the response was never
written to the cache file.

Similar problems might also happen in case of timeouts when writing to
client, as this also results in ngx_event_pipe() not being called on write
events.  In this scenario socket leaks were observed.

Fix is to check if p->writing is set in ngx_event_pipe_read_upstream(), and
therefore collect results of previous write operations in case of read events
as well, similar to how we do so in ngx_event_pipe_write_to_downstream().
This is enough to fix the wev->delayed case.  Additionally, we now call
ngx_event_pipe() from ngx_http_upstream_process_request() if there are
uncollected write operations (p->writing and !p->aio).  This also fixes
the wev->timedout case.
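
A hedged sketch of the collection logic described above; the struct and helpers below are illustrative stand-ins, not the actual ngx_event_pipe_t:

```c
/* On a read event, first reap a completed threaded write (if any)
 * so its result is not lost, then continue with reading. */

typedef struct {
    unsigned  writing:1;    /* a threaded write was submitted        */
    unsigned  aio:1;        /* an async operation is still in flight */
} pipe_state_t;

int
collect_write_results(pipe_state_t *p)
{
    (void) p;
    return 0;               /* stub for illustration */
}

int
pipe_read_upstream(pipe_state_t *p)
{
    if (p->writing && !p->aio) {
        /* a previously submitted write has completed: pick up its
         * result here instead of silently dropping it */
        if (collect_write_results(p) != 0) {
            return -1;
        }
    }

    /* ... proceed with reading from the upstream ... */
    return 0;
}
```
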
The type is no longer modified in NGINX Plus.
Based on a patch by Tom Thorogood.
Previously, buffer size was not changed from the one saved during
initial ngx_ssl_create_connection(), even if the buffer itself was not
yet created.  Fix is to change c->ssl->buffer_size in the SNI callback.

Note that it should also be possible to update the buffer size even in non-SNI
virtual hosts as long as the buffer is not yet allocated.  This looks
like an overcomplication though.
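
A minimal sketch of the SNI-callback fix, assuming hypothetical types in place of the nginx connection structures:

```c
/* The per-virtual-server buffer size is applied in the SNI callback;
 * it is only safe to change while the buffer has not been allocated. */

#include <stddef.h>

typedef struct {
    void    *buffer;        /* NULL until the first buffered write */
    size_t   buffer_size;   /* saved at connection creation time   */
} ssl_conn_t;

void
sni_apply_buffer_size(ssl_conn_t *sc, size_t server_buffer_size)
{
    if (sc->buffer == NULL) {
        sc->buffer_size = server_buffer_size;
    }
}
```
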
The function may leave an error in the error queue while returning success,
e.g., when taking a DSO reference to itself as of OpenSSL 1.1.0d:
https://git.openssl.org/?p=openssl.git;a=commit;h=4af9f7f

Notably, this fixes alert seen with statically linked OpenSSL on some platforms.

While here, check OPENSSL_init_ssl() return value.
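
The initialization pattern implied above might look like the following, assuming OpenSSL 1.1.0 or later; OPENSSL_init_ssl() and ERR_clear_error() are the real OpenSSL calls, while the surrounding error handling is an illustrative assumption:

```c
#include <openssl/ssl.h>
#include <openssl/err.h>

int
ssl_library_init(void)
{
    /* check the return value instead of assuming success */
    if (OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL) == 0) {
        return -1;
    }

    /* OPENSSL_init_ssl() may leave entries in the error queue even
     * when it succeeds (e.g. while taking a DSO reference to itself
     * in OpenSSL 1.1.0d), so clear the queue here */
    ERR_clear_error();

    return 0;
}
```
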
This is not really needed in practice, and causes excessive debug output
in some of our tests.
Previously, there was no way to enable the proxy_cache_use_stale behavior by
reading the backend response.  Now, stale-while-revalidate and stale-if-error
Cache-Control extensions (RFC 5861) are supported.  They specify how long a
stale response can be used when a cache entry is being updated, or in case of
an error.
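
For reference, a backend response that triggers the RFC 5861 handling described above might carry a header like this (the values are arbitrary examples):

```
Cache-Control: max-age=600, stale-while-revalidate=30, stale-if-error=1200
```

With such a header, a stale cached response may be served for up to 30 seconds while it is being revalidated, and for up to 1200 seconds if the backend returns an error.
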
The directives enable cache updates in subrequests.
Previously, the slice subrequest location was selected based on the request URI.
If the request was then redirected to a new location, its context array was cleared,
making the slice module lose the current slice range information.  This led to
broken output.  Now subrequests with the NGX_HTTP_SUBREQUEST_CLONE flag are
created for slices.  Such subrequests stay in the same location as the parent
request and keep the right slice context.
mdounin and others added 29 commits September 14, 2017 19:06
Previously, "get indexed header" message was logged when in fact only
header name was obtained using an index, and "get indexed header name"
was logged when full header representation (name and value) was obtained
using an index.  Fixed version logs "get indexed name" and "get indexed
header" respectively.
This ensures slightly more readable debug logs on 80-character-wide
terminals.
After e284f3ff6831, ngx_crypt() can no longer return NGX_AGAIN.
The change in ac120e797d28 re-used the macro which was made obsolete
in adf25b8d0431.
It is to be used as a bitmask with various bits set/reset when appropriate.
63b8b157b776 made a similar change to ngx_http_upstream_rr_peer_t.down and
ngx_stream_upstream_rr_peer_t.down.
Restored alphabetical order within groups, moved the OOXML types into
the application/ group, and wrapped to avoid lines longer than 80 chars.
Requested by Michiel Leenaars.
If a cache file is truncated, it is possible that u->process_header()
will return NGX_AGAIN.  Added appropriate handling of this case by
changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.

Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER
cases at the "crit" level.  Note that this will result in duplicate logging
in case of NGX_HTTP_UPSTREAM_INVALID_HEADER.  While this is something better
to avoid, it is considered overkill to implement cache-specific
error logging in u->process_header().

Additionally, u->buffer.start is now reset to be able to receive a new
response, and u->cache_status set to MISS to provide the value in the
$upstream_cache_status variable, much like it happens on other cache file
errors detected by ngx_http_file_cache_read(), instead of HIT, which is
believed to be misleading.
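
A hedged sketch of the return-code handling described above, with generic stand-in constants rather than the real nginx values:

```c
#include <stdio.h>

#define P_AGAIN            -2    /* stand-in for NGX_AGAIN                        */
#define P_INVALID_HEADER   40    /* stand-in for NGX_HTTP_UPSTREAM_INVALID_HEADER */

int
check_cached_header_rc(int rc)
{
    if (rc == P_AGAIN) {
        /* a truncated cache file cannot provide "more" data,
         * so degrade to the invalid-header error */
        rc = P_INVALID_HEADER;
    }

    if (rc == P_INVALID_HEADER) {
        /* logged at the "crit" level in the real code */
        fprintf(stderr, "cache file contains invalid header\n");
    }

    return rc;
}
```
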
When caching intercepted errors, previous behaviour was to use
proxy_cache_valid times specified, regardless of various cache control
headers present in the response.  Fix is to check u->cacheable and
use u->cache->valid_sec as set by various cache control response headers,
similar to how we do this in the normal caching code path.
The sync flag of HTTP/2 request body buffer is used when the size of request
body is unknown or bigger than the configured "client_body_buffer_size".  In this
case the buffer points to body data inside the global receive buffer that is
used for reading all HTTP/2 connections in the worker process.  Thus, when the
sync flag is set, the buffer must be flushed to a temporary file, otherwise
the request body data can be overwritten.

Previously, the sync buffer wasn't flushed to a temporary file if the whole
body was received in one DATA frame with the END_STREAM flag and wasn't
copied into the HTTP/2 body preread buffer.  As a result, the request body
might be corrupted (ticket #1384).

Now, setting r->request_body_in_file_only enforces writing the sync buffer
to a temporary file in all cases.
Some OSes (notably macOS, NetBSD, and Solaris) allow unix socket addresses
larger than struct sockaddr_un.  Moreover, some of them (macOS, Solaris)
return socklen of the socket address before it was truncated to fit the
buffer provided.  As such, on these systems socklen must not be used without
additional check that it is within the buffer provided.

Appropriate checks added to ngx_event_accept() (after accept()),
ngx_event_recvmsg() (after recvmsg()), and ngx_set_inherited_sockets()
(after getsockname()).

We also obtain socket addresses via getsockname() in
ngx_connection_local_sockaddr(), but it does not need any checks as
it is only used for INET and INET6 sockets (as there can be no
wildcard unix sockets).
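
A sketch of the post-accept() check described above, in plain POSIX C rather than the nginx event code:

```c
/* If the kernel reports a socket address longer than the buffer we
 * passed in, clamp socklen so later code never reads past the buffer. */

#include <sys/socket.h>

int
accept_with_clamped_len(int lfd)
{
    struct sockaddr_storage  sa;
    socklen_t                socklen = sizeof(sa);
    int                      fd;

    fd = accept(lfd, (struct sockaddr *) &sa, &socklen);
    if (fd == -1) {
        return -1;
    }

    if (socklen > (socklen_t) sizeof(sa)) {
        /* macOS and Solaris may report the pre-truncation length
         * for large unix socket addresses; do not trust it */
        socklen = sizeof(sa);
    }

    return fd;
}
```
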
At least FreeBSD, macOS, NetBSD, and OpenBSD can return unix sockets
with non-null-terminated sun_path.  Additionally, the address may become
non-null-terminated if it does not fit into the buffer provided and was
truncated (may happen on macOS, NetBSD, and Solaris, which allow unix socket
addresses larger than struct sockaddr_un).  As such, ngx_sock_ntop() might
overread the sockaddr provided, as it used the "%s" format and thus assumed a
null-terminated string.

To fix this, the ngx_strnlen() function was introduced, and it is now used
to calculate the correct length of sun_path.
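
The idea behind ngx_strnlen() can be sketched in plain C as follows; the helpers below are illustrative, not the nginx implementation itself:

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/un.h>

/* length of p, bounded by n, without assuming null termination */
size_t
bounded_strlen(const char *p, size_t n)
{
    size_t  i;

    for (i = 0; i < n; i++) {
        if (p[i] == '\0') {
            return i;
        }
    }

    return n;
}

/* only look at the bytes the kernel actually returned for sun_path */
size_t
unix_path_len(const struct sockaddr_un *saun, socklen_t socklen)
{
    size_t  off = offsetof(struct sockaddr_un, sun_path);

    if ((size_t) socklen <= off) {
        return 0;                 /* unbound socket, no path bytes */
    }

    return bounded_strlen(saun->sun_path, (size_t) socklen - off);
}
```
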
Previously, unix sockets were treated as AF_INET ones, and this may
result in a buffer overread on Linux, where unbound unix sockets have
2-byte addresses.

Note that it is not correct to use just sun_path as a binary representation
for unix sockets.  This will result in an empty string for unbound unix
sockets, and thus behaviour of limit_req and limit_conn will change when
switching from $remote_addr to $binary_remote_addr.  As such, normal text
representation is used.

Reported by Stephan Dollberg.
While this may result in non-ideal distribution of requests if nginx
won't be able to select a server in a reasonable number of attempts,
this still looks better than severe performance degradation observed
if there is no limit and there are many points configured (ticket #1030).
This is also in line with what we do for other hash balancing methods.
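
The bounded selection loop might be sketched as below; the attempt limit and helper names are assumptions for illustration, not the hash module's actual code:

```c
#include <stdint.h>

#define MAX_HASH_ATTEMPTS  20           /* assumed cap, for illustration */

int peer_is_usable(unsigned idx);                                /* hypothetical */
unsigned rehash(uint32_t key, unsigned attempt, unsigned npeers);/* hypothetical */
int round_robin_fallback(void);                                  /* hypothetical */

int
select_hashed_peer(uint32_t key, unsigned npeers)
{
    unsigned  i, idx;

    for (i = 0; i < MAX_HASH_ATTEMPTS; i++) {
        idx = rehash(key, i, npeers);

        if (peer_is_usable(idx)) {
            return (int) idx;
        }
    }

    /* give up after a bounded number of tries instead of degrading
     * into a very long search when most peers have failed */
    return round_robin_fallback();
}
```
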
This slightly reduces cost of selecting a peer if all or almost all peers
failed, see ticket #1030.  There should be no measurable difference with
other workloads.
When parsing of headers in a cache file fails, already parsed headers
need to be cleared, and protocol state needs to be reinitialized.  To do
so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will
be called.

This change complements improvements in 46ddff109e72.
The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates
that upstream was already finalized in ngx_http_upstream_process_headers().
It was treated as a generic error which resulted in duplicate finalization.

Handled NGX_HTTP_UPSTREAM_INVALID_HEADER from ngx_http_upstream_cache_send().
Previously, it could return within ngx_http_upstream_finalize_request(), and
since it's below NGX_HTTP_SPECIAL_RESPONSE, a client connection could get stuck.
If proxy_next_upstream includes http_503/http_504, and the upstream
returns 503/504, $upstream_status converted this to 502 for all
values except the last one.
Upgrading an upstream connection is usually followed by reading from the client,
which a subrequest is not allowed to do.  Moreover, accessing the header_in
request field while processing an upgraded connection ends up with a null pointer
dereference, since the header_in buffer is only created for the main request.
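
A hedged sketch of the guard implied above, using a hypothetical request type rather than ngx_http_request_t:

```c
typedef struct request_s {
    struct request_s  *main;        /* == itself for the main request      */
    void              *header_in;   /* only allocated for the main request */
} request_t;

int
can_upgrade_connection(request_t *r)
{
    if (r != r->main) {
        /* in a subrequest: refuse to upgrade instead of reading from
         * the client or dereferencing a NULL header_in later */
        return 0;
    }

    return 1;
}
```
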
This header carries the definition of HMAC_Init_ex(). In OpenSSL this
header is included by <openssl/ssl.h>, but not in BoringSSL.

It's probably a good idea to explicitly include this header anyway,
regardless of whether it's included by other headers or not.
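
The explicit include reads straightforwardly from the commit message; roughly:

```c
#include <openssl/ssl.h>
#include <openssl/hmac.h>   /* declares HMAC_Init_ex(); BoringSSL's
                               <openssl/ssl.h> does not pull it in */
```
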
The type should have been changed in c9b243802a17 along with
changing ngx_conf_handler_pt.
In particular, if ngx_http_postpone_filter_add() fails in ngx_chain_add_copy(),
the output chain of the postponed request was left in an invalid state.
This is what usually happens for zones no longer used in the new
configuration, but zones whose size or tag was changed were freed
when creating new memory zones.  If reconfiguration failed (for
example, due to a conflicting listening socket), this resulted in a
segmentation fault in the master process.

Reported by Zhihua Cao,
http://mailman.nginx.org/pipermail/nginx-devel/2017-October/010536.html.
kevleyski merged commit aaad353 into kevleyski:master on Oct 30, 2017