Add DTRACE points to measure request timings. #11245

Merged
18 commits merged into devel from feature/dtrace on Mar 20, 2020
Changes from 1 commit
Merge branch 'devel' of github.com:arangodb/arangodb into feature/dtrace
jsteemann committed Mar 20, 2020
commit 3ffaa4c10830f995cba6e927217a8501c01838ce
75 changes: 75 additions & 0 deletions CHANGELOG
@@ -3,6 +3,81 @@ devel

* Add DTRACE points to track a request through the infrastructure (an
illustrative probe sketch follows this changelog excerpt).

* Fixed issue #11275: indexes backward compatibility broken in 3.5+.

* Updated snowball dependency to the latest version.
More stemmers are available. Built-in analyzer list is unchanged.

* Reactivate REST API endpoint at `/_admin/auth/reload`, as it is called by DC2DC.

* Fix an endless loop in FollowerInfo::persistInAgency which could cause a hang
if a collection was dropped at the wrong time.

* Updated LZ4 dependency to version 1.9.2.

* Fix cluster representation of the collection figures for RocksDB.

* Changed behavior for creating new collections in OneShard databases (i.e.
databases with "sharding" attribute set to "single"):

Previously it was possible to override "distributeShardsLike" and
"numberOfShards" for each new collection. The default values were "_graphs"
and 1, but users could still change them.
Now any attempt to set "distributeShardsLike" or "numberOfShards" for new
collections in a OneShard database is silently ignored. The collection will
automatically be sharded like the sharding prototype and will have a single
shard.

This matches the behavior when creating collections in a cluster started with
the `--cluster.force-one-shard true` option, where any user-supplied values
for "distributeShardsLike" or "numberOfShards" have always been ignored. The
behavior is now identical for OneShard databases and databases in a cluster
started with `--cluster.force-one-shard true`.

The web interface now also hides the "Distribute shards like" settings in this
case, and makes the "Number of shards" input box read-only.

* Fix premature access to temporary path before a user-specified path was
read from the config options.

* Rebuild UI and update swagger.

* Added ability to store values in ArangoSearch views.

* Added LIKE operator/function support to SEARCH clause.

* Added NGRAM_SIMILARITY and NGRAM_POSITIONAL_SIMILARITY functions for
calculating Ngram similarity.

* Added ngram fuzzy search. Supported by NGRAM_MATCH filter function for SEARCH
clause.

* Allow overriding the detected total amount of memory via the environment
variable ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY (a hedged sketch follows this
changelog excerpt).

* The `splice-subqueries` optimization is no longer limited by the type of
operations within the subquery. It can now be applied to every subquery and is
enabled by default. However, there may be a performance impact on some queries
where spliced subqueries are not as performant as non-spliced subqueries. This
is due to the current internal memory management and will be addressed in
future versions. Spliced subqueries can be less performant if the query around
the subquery is complex and requires many variables, or variables with large
content, while the subquery itself requires few variables and produces many
intermediate results, such that good batching within the query does not pay
off against the memory overhead.

* Supervision now cleans up zombie servers after 24h if they have no
responsibility for any shards.

* Fix the SORT RAND() LIMIT 1 optimization for RocksDB when only a projection
of the attributes was used. When a projection was used and that projection was
covered by an index (e.g. `_key` via the primary index), the access pattern
was transformed from a random-order collection seek into an index access,
which always returned the same index entry instead of a random one.

* Mark server startup options `--foxx.*`, `--frontend.*` and `--javascript.*`
as single server and Coordinator only for documentation (`--dump-options`).

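The DTRACE entry at the top of this changelog excerpt is the core of this pull
request. As a rough illustration only, here is a minimal sketch of how USDT
probe points are commonly declared with <sys/sdt.h>; the provider name
`arangod`, the probe names, and the `Request`/`handleRequest` types are
assumptions for the example, not the code this PR actually adds.

// Illustrative sketch only: USDT probes around request handling.
// Assumes <sys/sdt.h> (e.g. from systemtap-sdt-dev); names are hypothetical.
#include <sys/sdt.h>
#include <cstdint>

struct Request {        // stand-in for the real request type
  std::uint64_t id;
};

void handleRequest(Request const& req) {
  // fires when the request enters the handler; carries the request id
  DTRACE_PROBE1(arangod, request_begin, req.id);

  // ... request processing would happen here ...

  // fires when the handler is done; the time between the two probes,
  // as recorded by dtrace or bpftrace, is the request timing
  DTRACE_PROBE1(arangod, request_end, req.id);
}

int main() {
  handleRequest(Request{42});  // fires both probes once
  return 0;
}

Such probes compile to near no-ops until a tracer attaches, which is what
makes them suitable for measuring request timings on a running server.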
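For the ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY entry above, here is a hedged
sketch of the general pattern, assuming the override simply takes precedence
over platform detection when set to a plain byte count; the function names are
invented, and the real option may also accept unit suffixes, which this sketch
does not handle.

// Sketch under the assumptions stated above; not ArangoDB's actual code.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <string>

// hypothetical stand-in for the real platform-specific detection
static std::uint64_t detectPhysicalMemory() {
  return 8ULL * 1024 * 1024 * 1024;  // placeholder: pretend 8 GiB
}

static std::uint64_t totalMemory() {
  if (char const* v = std::getenv("ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY")) {
    try {
      return static_cast<std::uint64_t>(std::stoull(v));
    } catch (...) {
      // value is not a plain number: fall back to detection
    }
  }
  return detectPhysicalMemory();
}

int main() {
  std::printf("total memory: %llu bytes\n",
              static_cast<unsigned long long>(totalMemory()));
  return 0;
}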
6 changes: 3 additions & 3 deletions arangod/GeneralServer/SslServerFeature.cpp
@@ -222,9 +222,9 @@ static inline bool searchForProtocol(const unsigned char** out, unsigned char* o
return false;
}

-static int alpn_select_proto_cb(SSL *ssl, const unsigned char **out,
-                                unsigned char *outlen, const unsigned char *in,
-                                unsigned int inlen, void *arg) {
+static int alpn_select_proto_cb(SSL* ssl, const unsigned char** out,
+                                unsigned char* outlen, const unsigned char* in,
+                                unsigned int inlen, void* arg) {
int rv = 0;
bool const* preferHttp11InAlpn = (bool*) arg;
if (*preferHttp11InAlpn) {
You are viewing a condensed version of this merge commit.
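To put the alpn_select_proto_cb signature change above into context, the
following is a minimal sketch of how a callback with this shape is typically
registered with OpenSSL; the protocol list, the function names, and passing
nullptr instead of the preferHttp11InAlpn flag are simplifications for the
example, not ArangoDB's actual wiring.

// Sketch of the usual OpenSSL ALPN wiring; not the code in SslServerFeature.cpp.
#include <openssl/ssl.h>

// wire-format ALPN list: length-prefixed protocol names ("h2", "http/1.1")
static unsigned char const kProtos[] = {2, 'h', '2',
                                        8, 'h', 't', 't', 'p', '/', '1', '.', '1'};

static int alpnSelect(SSL* /*ssl*/, unsigned char const** out, unsigned char* outlen,
                      unsigned char const* in, unsigned int inlen, void* /*arg*/) {
  // negotiate a protocol that appears in both our list and the client's list
  if (SSL_select_next_proto(const_cast<unsigned char**>(out), outlen,
                            kProtos, sizeof(kProtos), in, inlen) ==
      OPENSSL_NPN_NEGOTIATED) {
    return SSL_TLSEXT_ERR_OK;
  }
  return SSL_TLSEXT_ERR_NOACK;  // no common protocol: continue without ALPN
}

void installAlpnCallback(SSL_CTX* ctx) {
  // the last argument is the opaque pointer handed back to the callback as `arg`;
  // in the diff above, that is where preferHttp11InAlpn comes from
  SSL_CTX_set_alpn_select_cb(ctx, alpnSelect, nullptr);
}

Registering the callback once on the server's SSL_CTX is sufficient; OpenSSL
then invokes it during every TLS handshake in which the client offers ALPN.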