CHANGELOG
v2.7.0 (XXXX-XX-XX)
-------------------
* AQL function call arguments optimization
This means that arguments in function calls inside AQL queries are no longer copied but are
passed by reference. This may speed up calls to functions with bigger argument values or
queries that call functions many times.
* upgraded V8 version to 4.3.61
* removed deprecated AQL `SKIPLIST` function.
This function was introduced in older versions of ArangoDB with a less powerful query optimizer to
retrieve data from a skiplist index using a `LIMIT` clause. It was marked as deprecated in ArangoDB
2.6.
Since ArangoDB 2.3 the behavior of the `SKIPLIST` function can be emulated using regular AQL
constructs, e.g.
FOR doc IN @@collection
FILTER doc.value >= @value
SORT doc.value DESC
LIMIT 1
RETURN doc
* the `skip()` function for simple queries does not accept negative input any longer.
This feature was deprecated in 2.6.0.
* based the REST API method PUT `/_api/simple/all` on the cursor API and made it use AQL internally.
The change speeds up this REST API method and will lead to additional query information being
returned by the REST API. Clients can use this extra information or ignore it.
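Internally, a call to this method is now roughly equivalent to the following AQL query (a sketch; the `@skip` and `@limit` bind parameters are illustrative and correspond to the simple query's `skip`/`limit` options):

```aql
FOR doc IN @@collection
  LIMIT @skip, @limit
  RETURN doc
```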
v2.6.0-beta4 (2015-06-16)
-------------------------
* using negative values for `SimpleQuery.skip()` is deprecated.
This functionality will be removed in future versions of ArangoDB.
* The following simple query functions are now deprecated:
* collection.near
* collection.within
* collection.geo
* collection.fulltext
* collection.range
* collection.closedRange
This also leads to the following REST API methods being deprecated from now on:
* PUT /_api/simple/near
* PUT /_api/simple/within
* PUT /_api/simple/fulltext
* PUT /_api/simple/range
It is recommended to replace calls to these functions or APIs with equivalent AQL queries,
which are more flexible because they can be combined with other operations:
FOR doc IN NEAR(@@collection, @latitude, @longitude, @limit)
RETURN doc
FOR doc IN WITHIN(@@collection, @latitude, @longitude, @radius, @distanceAttributeName)
RETURN doc
FOR doc IN FULLTEXT(@@collection, @attributeName, @queryString, @limit)
RETURN doc
FOR doc IN @@collection
FILTER doc.value >= @left && doc.value < @right
LIMIT @skip, @limit
RETURN doc
The above simple query functions and REST API methods may be removed in future versions
of ArangoDB.
* deprecated now-obsolete AQL `SKIPLIST` function
The function was introduced in older versions of ArangoDB with a less powerful query optimizer to
retrieve data from a skiplist index using a `LIMIT` clause.
Since 2.3 the same goal can be achieved by using regular AQL constructs, e.g.
FOR doc IN collection FILTER doc.value >= @value SORT doc.value DESC LIMIT 1 RETURN doc
* fixed issues when switching the database inside tasks and during shutdown of database cursors
These features were added during 2.6 alpha stage so the fixes affect devel/2.6-alpha builds only
* issue #1360: improved foxx-manager help
* added `--enable-tcmalloc` configure option.
When this option is set, arangod and the client tools will be linked against tcmalloc, which replaces
the system allocator. When the option is set, a tcmalloc library must be present on the system under
one of the names `libtcmalloc`, `libtcmalloc_minimal` or `libtcmalloc_debug`.
As this is a configure option, it is supported for manual builds on Linux-like systems only. tcmalloc
support is currently experimental.
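A manual build using the new option could look like this (a sketch for Linux-like systems; the exact configure invocation depends on your environment):

```shell
# configure ArangoDB to link against tcmalloc (a tcmalloc library must be installed)
./configure --enable-tcmalloc
make
```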
* issue #1353: Windows: HTTP API - incorrect path in errorMessage
* issue #1347: added option `--create-database` for arangorestore.
Setting this option to `true` will now create the target database if it does not exist. When creating
the target database, the username and passwords passed to arangorestore will be used to create an
initial user for the new database.
* issue #1345: advanced debug information for User Functions
* issue #1341: Can't use bindvars in UPSERT
* fixed vulnerability in JWT implementation.
* changed default value of option `--database.ignore-datafile-errors` from `true` to `false`
If the new default value of `false` is used, then arangod will refuse loading collections that contain
datafiles with CRC mismatches or other errors. A collection with datafile errors will then become
unavailable. This prevents follow up errors from happening.
The only way to access such collection is to use the datafile debugger (arango-dfdb) and try to repair
or truncate the datafile with it.
If `--database.ignore-datafile-errors` is set to `true`, then collections will become available
even if parts of their data cannot be loaded. This helps availability, but may cause (partial) data
loss and follow up errors.
* added server startup option `--server.session-timeout` for controlling the timeout of user sessions
in the web interface
* add sessions and cookie authentication for ArangoDB's web interface
ArangoDB's built-in web interface now uses sessions. Session ids are stored in cookies,
so clients using the web interface must accept cookies in order to use it
* web interface: display query execution time in AQL editor
* web interface: renamed AQL query *submit* button to *execute*
* web interface: added query explain feature in AQL editor
* web interface: demo page added. only working if demo data is available, hidden otherwise
* web interface: added support for custom app scripts with optional arguments and results
* web interface: mounted apps that need to be configured are now indicated in the app overview
* web interface: added button for running tests to app details
* web interface: added button for configuring app dependencies to app details
* web interface: upgraded API documentation to use Swagger 2
* INCOMPATIBLE CHANGE
removed startup option `--log.severity`
The docs for `--log.severity` mentioned lots of severities (e.g. `exception`, `technical`, `functional`, `development`)
but only a few severities (e.g. `all`, `human`) were actually used, with `human` being the default and `all` enabling the
additional logging of requests. So the option pretended to control a lot of things which it actually didn't. Additionally,
the option `--log.requests-file` was around for a long time already, also controlling request logging.
Because the `--log.severity` option effectively did not control that much, it was removed. A side effect of removing the
option is that 2.5 installations which used `--log.severity all` will not log requests after the upgrade to 2.6. This can
be adjusted by setting the `--log.requests-file` option.
* add backtrace to fatal log events
* added optional `limit` parameter for AQL function `FULLTEXT`
* make fulltext index also index text values contained in direct sub-objects of the indexed
attribute.
Previous versions of ArangoDB only indexed the attribute value if it was a string. Sub-attributes
of the index attribute were ignored when fulltext indexing.
Now, if the index attribute value is an object, the object's values will each be included in the
fulltext index if they are strings. If the index attribute value is an array, the array's values
will each be included in the fulltext index if they are strings.
For example, with a fulltext index present on the `translations` attribute, the following text
values will now be indexed:
var c = db._create("example");
c.ensureFulltextIndex("translations");
c.insert({ translations: { en: "fox", de: "Fuchs", fr: "renard", ru: "лиса" } });
c.insert({ translations: "Fox is the English translation of the German word Fuchs" });
c.insert({ translations: [ "ArangoDB", "document", "database", "Foxx" ] });
c.fulltext("translations", "лиса").toArray(); // returns only first document
c.fulltext("translations", "Fox").toArray(); // returns first and second documents
c.fulltext("translations", "prefix:Fox").toArray(); // returns all three documents
* added batch document removal and lookup commands:
collection.lookupByKeys(keys)
collection.removeByKeys(keys)
These commands can be used to perform multi-document lookup and removal operations efficiently
from the ArangoShell. The argument to these operations is an array of document keys.
Also added HTTP APIs for batch document commands:
* PUT /_api/simple/lookup-by-keys
* PUT /_api/simple/remove-by-keys
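For example, in the ArangoShell (collection and key names are hypothetical; a running server is required):

```js
var db = require("org/arangodb").db;
var users = db._collection("users");          // hypothetical collection

// look up several documents by their keys in one call
var found = users.lookupByKeys([ "john", "jane", "jim" ]);

// remove several documents by their keys in one call
var removed = users.removeByKeys([ "john", "jane" ]);
```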
* properly prefix document address URLs with the current database name for calls to the REST
API method GET `/_api/document?collection=...` (that method will return partial URLs to all
documents in the collection).
Previous versions of ArangoDB returned the URLs starting with `/_api/` but without the current
database name, e.g. `/_api/document/mycollection/mykey`. Starting with 2.6, the response URLs
will include the database name as well, e.g. `/_db/_system/_api/document/mycollection/mykey`.
* added dedicated collection export HTTP REST API
ArangoDB now provides a dedicated collection export API, which can take snapshots of entire
collections more efficiently than the general-purpose cursor API. The export API is useful
to transfer the contents of an entire collection to a client application. It provides optional
filtering on specific attributes.
The export API is available at endpoint `POST /_api/export?collection=...`. The API has the
same return value structure as the already established cursor API (`POST /_api/cursor`).
An introduction to the export API is given in this blog post:
http://jsteemann.github.io/blog/2015/04/04/more-efficient-data-exports/
* subquery optimizations for AQL queries
This optimization avoids copying intermediate results into subqueries that are not required
by the subquery.
A brief description can be found here:
http://jsteemann.github.io/blog/2015/05/04/subquery-optimizations/
* return value optimization for AQL queries
This optimization avoids copying the final query result inside the query's main `ReturnNode`.
A brief description can be found here:
http://jsteemann.github.io/blog/2015/05/04/return-value-optimization-for-aql/
* speed up AQL queries containing big `IN` lists for index lookups
`IN` lists used for index lookups had performance issues in previous versions of ArangoDB.
These issues have been addressed in 2.6 so using bigger `IN` lists for filtering is much
faster.
A brief description can be found here:
http://jsteemann.github.io/blog/2015/05/07/in-list-improvements/
* allow `@` and `.` characters in document keys, too
This change also leads to document keys being URL-encoded when returned in HTTP `location`
response headers.
* added alternative implementation for AQL COLLECT
The alternative method uses a hash table for grouping and does not require its input elements
to be sorted. It will be taken into account by the optimizer for `COLLECT` statements that do
not use an `INTO` clause.
In case a `COLLECT` statement can use the hash table variant, the optimizer will create an extra
plan for it at the beginning of the planning phase. In this plan, no extra `SORT` node will be
added in front of the `COLLECT` because the hash table variant of `COLLECT` does not require
sorted input. Instead, a `SORT` node will be added after it to sort its output. This `SORT` node
may be optimized away again in later stages. If the sort order of the result is irrelevant to
the user, adding an extra `SORT null` after a hash `COLLECT` operation will allow the optimizer to
remove the sorts altogether.
In addition to the hash table variant of `COLLECT`, the optimizer will modify the original plan
to use the regular `COLLECT` implementation. As this implementation requires sorted input, the
optimizer will insert a `SORT` node in front of the `COLLECT`. This `SORT` node may be optimized
away in later stages.
The created plans will then be shipped through the regular optimization pipeline. In the end,
the optimizer will pick the plan with the lowest estimated total cost as usual. The hash table
variant does not require an up-front sort of the input, and will thus be preferred over the
regular `COLLECT` if the optimizer estimates many input elements for the `COLLECT` node and
cannot use an index to sort them.
The optimizer can be explicitly told to use the regular *sorted* variant of `COLLECT` by
suffixing a `COLLECT` statement with `OPTIONS { "method" : "sorted" }`. This will override the
optimizer guesswork and only produce the *sorted* variant of `COLLECT`.
A blog post on the new `COLLECT` implementation can be found here:
http://jsteemann.github.io/blog/2015/04/22/collecting-with-a-hash-table/
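For example, the following query forces the sorted variant (collection and attribute names are made up):

```aql
FOR doc IN products
  COLLECT category = doc.category OPTIONS { "method" : "sorted" }
  RETURN category
```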
* refactored HTTP REST API for cursors
The HTTP REST API for cursors (`/_api/cursor`) has been refactored to improve its performance
and use less memory.
A post showing some of the performance improvements can be found here:
http://jsteemann.github.io/blog/2015/04/01/improvements-for-the-cursor-api/
* simplified return value syntax for data-modification AQL queries
Since version 2.4, ArangoDB allows returning results from data-modification AQL queries. The
syntax for this was quite limited and verbose:
FOR i IN 1..10
INSERT { value: i } IN test
LET inserted = NEW
RETURN inserted
The `LET inserted = NEW RETURN inserted` was required literally to return the inserted
documents. No calculations could be made using the inserted documents.
This is now more flexible. After a data-modification clause (e.g. `INSERT`, `UPDATE`, `REPLACE`,
`REMOVE`, `UPSERT`) there can follow any number of `LET` calculations. These calculations can
refer to the pseudo-values `OLD` and `NEW` that are created by the data-modification statements.
This allows returning projections of inserted or updated documents, e.g.:
FOR i IN 1..10
INSERT { value: i } IN test
RETURN { _key: NEW._key, value: i }
Still not every construct is allowed after a data-modification clause. For example, no functions
can be called that may access documents.
More information can be found here:
http://jsteemann.github.io/blog/2015/03/27/improvements-for-data-modification-queries/
* added AQL `UPSERT` statement
This adds an `UPSERT` statement to AQL that is a combination of both `INSERT` and `UPDATE` /
`REPLACE`. The `UPSERT` will search for a matching document using a user-provided example.
If no document matches the example, the *insert* part of the `UPSERT` statement will be
executed. If there is a match, the *update* / *replace* part will be carried out:
UPSERT { page: 'index.html' } /* search example */
INSERT { page: 'index.html', pageViews: 1 } /* insert part */
UPDATE { pageViews: OLD.pageViews + 1 } /* update part */
IN pageViews
`UPSERT` can be used with an `UPDATE` or `REPLACE` clause. The `UPDATE` clause will perform
a partial update of the found document, whereas the `REPLACE` clause will replace the found
document entirely. The `UPDATE` or `REPLACE` parts can refer to the pseudo-value `OLD`, which
contains all attributes of the found document.
`UPSERT` statements can optionally return values. In the following query, the return
attribute `found` will return the found document before the `UPDATE` was applied. If no
document was found, `found` will contain a value of `null`. The `updated` result attribute will
contain the inserted / updated document:
UPSERT { page: 'index.html' } /* search example */
INSERT { page: 'index.html', pageViews: 1 } /* insert part */
UPDATE { pageViews: OLD.pageViews + 1 } /* update part */
IN pageViews
RETURN { found: OLD, updated: NEW }
A more detailed description of `UPSERT` can be found here:
http://jsteemann.github.io/blog/2015/03/27/preview-of-the-upsert-command/
* adjusted default configuration value for `--server.backlog-size` from 10 to 64.
* issue #1231: bug xor feature in AQL: LENGTH(null) == 4
This changes the behavior of the AQL `LENGTH` function as follows:
- if the single argument to `LENGTH()` is `null`, then the result will now be `0`. In previous
versions of ArangoDB, the result of `LENGTH(null)` was `4`.
- if the single argument to `LENGTH()` is `true`, then the result will now be `1`. In previous
versions of ArangoDB, the result of `LENGTH(true)` was `4`.
- if the single argument to `LENGTH()` is `false`, then the result will now be `0`. In previous
versions of ArangoDB, the result of `LENGTH(false)` was `5`.
The results of `LENGTH()` with string, numeric, array, or object argument values do not change.
* issue #1298: Bulk import if data already exists (#1298)
This change extends the HTTP REST API for bulk imports as follows:
When documents are imported and the `_key` attribute is specified for them, the import can be
used for inserting and updating/replacing documents. Previously, the import could be used for
inserting new documents only, and re-inserting a document with an existing key would have failed
with a *unique key constraint violated* error.
The above behavior is still the default. However, the API now allows controlling the behavior
in case of a unique key constraint error via the optional URL parameter `onDuplicate`.
This parameter can have one of the following values:
- `error`: when a unique key constraint error occurs, do not import or update the document but
report an error. This is the default.
- `update`: when a unique key constraint error occurs, try to (partially) update the existing
document with the data specified in the import. This may still fail if the document would
violate secondary unique indexes. Only the attributes present in the import data will be
updated and other attributes already present will be preserved. The number of updated documents
will be reported in the `updated` attribute of the HTTP API result.
- `replace`: when a unique key constraint error occurs, try to fully replace the existing
document with the data specified in the import. This may still fail if the document would
violate secondary unique indexes. The number of replaced documents will be reported in the
`updated` attribute of the HTTP API result.
- `ignore`: when a unique key constraint error occurs, ignore this error. There will be no
insert, update or replace for the particular document. Ignored documents will be reported
separately in the `ignored` attribute of the HTTP API result.
The result of the HTTP import API will now contain the attributes `ignored` and `updated`, which
contain the number of ignored and updated documents respectively. These attributes will contain a
value of zero unless the `onDuplicate` URL parameter is set to either `update` or `replace`
(in this case the `updated` attribute may contain non-zero values) or `ignore` (in this case the
`ignored` attribute may contain a non-zero value).
To support the feature, arangoimp also has a new command line option `--on-duplicate` which can
have one of the values `error`, `update`, `replace`, `ignore`. The default value is `error`.
A few examples for using arangoimp with the `--on-duplicate` option can be found here:
http://jsteemann.github.io/blog/2015/04/14/updating-documents-with-arangoimp/
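A hypothetical invocation (file and collection names are made up):

```shell
# update existing documents on unique key conflicts instead of aborting
arangoimp --file users.json --collection users --on-duplicate update
```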
* changed behavior of `db._query()` in the ArangoShell:
if the command's result is printed in the shell, the first 10 results will be printed. Previously
only a basic description of the underlying query result cursor was printed. Additionally, if the
cursor result contains more than 10 results, the cursor is assigned to a global variable `more`,
which can be used to iterate over the cursor result.
Example:
arangosh [_system]> db._query("FOR i IN 1..15 RETURN i")
[object ArangoQueryCursor, count: 15, hasMore: true]
[
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
]
type 'more' to show more documents
arangosh [_system]> more
[object ArangoQueryCursor, count: 15, hasMore: false]
[
11,
12,
13,
14,
15
]
* Disallow batchSize value 0 in HTTP `POST /_api/cursor`:
The HTTP REST API `POST /_api/cursor` does not accept a `batchSize` parameter value of
`0` any longer. A batch size of 0 never made much sense, but previous versions of ArangoDB
did not check for this value. Now creating a cursor using a `batchSize` value 0 will
result in an HTTP 400 error response.
* REST Server: fix memory leaks when failing to add jobs
* 'EDGES' AQL Function
The AQL function EDGES got a new fifth parameter for passing options.
Right now only one option is available: 'includeVertices'. This is a boolean parameter
that allows modifying the result of the `EDGES` function.
Default is 'includeVertices: false' which does not have any effect.
'includeVertices: true' modifies the result, such that
{vertex: <vertexDocument>, edge: <edgeDocument>} is returned.
* INCOMPATIBLE CHANGE:
The result format of the AQL function `NEIGHBORS` has been changed.
Previously it returned an array of objects, each containing 'vertex' and 'edge'.
Now it will only contain the vertex directly.
Also, an additional option 'includeData' has been added.
It is used to define whether only the 'vertex._id' value should be returned (false, default),
or whether the vertex should be looked up in the collection and the complete JSON should be
returned (true).
Using only the id values can lead to significantly improved performance if this is the only information
required.
In order to get the old result format from before ArangoDB 2.6, please use the function EDGES
instead. EDGES allows for a new option 'includeVertices' which, when set to true, returns
exactly the format of NEIGHBORS.
Example:
NEIGHBORS(<vertexCollection>, <edgeCollection>, <vertex>, <direction>, <example>)
This can now be achieved by:
EDGES(<vertexCollection>, <edgeCollection>, <vertex>, <direction>, <example>, {includeVertices: true})
If you are nesting several NEIGHBORS steps you can speed up their performance in the following way:
Old Example:
FOR va IN NEIGHBORS(Users, relations, 'Users/123', 'outbound') FOR vc IN NEIGHBORS(Products, relations, va.vertex._id, 'outbound') RETURN vc
This can now be achieved by:
FOR va IN NEIGHBORS(Users, relations, 'Users/123', 'outbound') FOR vc IN NEIGHBORS(Products, relations, va, 'outbound', null, {includeData: true}) RETURN vc
(here the intermediate result `va` is used directly, and `includeData: true` is set only for the final step)
* INCOMPATIBLE CHANGE:
The AQL function `GRAPH_NEIGHBORS` now provides an additional option `includeData`.
This option allows controlling whether the function should return the complete vertices
or just their IDs. Returning only the IDs instead of the full vertices can lead to
improved performance.
If `includeData` is set to `true`, all vertices in the result will be returned
with all their attributes. The default value of `includeData` is `false`.
This makes the default function results incompatible with previous versions of ArangoDB.
To get the old result style in ArangoDB 2.6, please set the options as follows in calls
to `GRAPH_NEIGHBORS`:
GRAPH_NEIGHBORS(<graph>, <vertex>, { includeData: true })
* INCOMPATIBLE CHANGE:
The AQL function `GRAPH_COMMON_NEIGHBORS` now provides an additional option `includeData`.
This option allows controlling whether the function should return the complete vertices
or just their IDs. Returning only the IDs instead of the full vertices can lead to
improved performance.
If `includeData` is set to `true`, all vertices in the result will be returned
with all their attributes. The default value of `includeData` is `false`.
This makes the default function results incompatible with previous versions of ArangoDB.
To get the old result style in ArangoDB 2.6, please set the options as follows in calls
to `GRAPH_COMMON_NEIGHBORS`:
GRAPH_COMMON_NEIGHBORS(<graph>, <vertexExamples1>, <vertexExamples2>, { includeData: true }, { includeData: true })
* INCOMPATIBLE CHANGE:
The AQL function `GRAPH_SHORTEST_PATH` now provides an additional option `includeData`.
This option allows controlling whether the function should return the complete vertices
and edges or just their IDs. Returning only the IDs instead of full vertices and edges
can lead to improved performance.
If `includeData` is set to `true`, all vertices and edges in the result will
be returned with all their attributes. There is also an optional parameter `includePath` of
type object.
It has two optional sub-attributes `vertices` and `edges`, both of type boolean.
Both can be set individually and the result will include all vertices on the path if
`includePath.vertices == true` and all edges if `includePath.edges == true` respectively.
The default value of `includeData` is `false`, and paths are now excluded by default.
This makes the default function results incompatible with previous versions of ArangoDB.
To get the old result style in ArangoDB 2.6, please set the options as follows in calls
to `GRAPH_SHORTEST_PATH`:
GRAPH_SHORTEST_PATH(<graph>, <source>, <target>, { includeData: true, includePath: { edges: true, vertices: true } })
* INCOMPATIBLE CHANGE:
All graph measurements functions in JavaScript module `general-graph` that calculated a
single figure previously returned an array containing just the figure. Now these functions
will return the figure directly and not put it inside an array.
The affected functions are:
* `graph._absoluteEccentricity`
* `graph._eccentricity`
* `graph._absoluteCloseness`
* `graph._closeness`
* `graph._absoluteBetweenness`
* `graph._betweenness`
* `graph._radius`
* `graph._diameter`
* Create the `_graphs` collection in new databases with `waitForSync` attribute set to `false`
The previous `waitForSync` value was `true`, so by default the behavior when creating and
dropping graphs via the HTTP REST API changes as follows if the new settings are in effect:
* `POST /_api/graph` by default returns `HTTP 202` instead of `HTTP 201`
* `DELETE /_api/graph/graph-name` by default returns `HTTP 202` instead of `HTTP 201`
If the `_graphs` collection still has its `waitForSync` value set to `true`, then the HTTP status
code will not change.
* Upgraded ICU to version 54; this increases performance in many places
(based on https://code.google.com/p/chromium/issues/detail?id=428145)
* added support for HTTP push aka chunked encoding
* issue #1051: add info whether server is running in service or user mode?
This will add a "mode" attribute to the result of HTTP GET `/_api/version?details=true`
"mode" can have the following values:
- `standalone`: server was started manually (e.g. on command-line)
- `service`: server is running as a Windows service, in daemon mode, or under the supervisor
* improve system error messages in Windows port
* increased default value of `--server.request-timeout` from 300 to 1200 seconds for client tools
(arangosh, arangoimp, arangodump, arangorestore)
* increased default value of `--server.connect-timeout` from 3 to 5 seconds for client tools
(arangosh, arangoimp, arangodump, arangorestore)
* added startup option `--server.foxx-queues-poll-interval`
This startup option controls the frequency with which the Foxx queues manager is checking
the queue (or queues) for jobs to be executed.
The default value is `1` second. Lowering this value will result in the queue manager waking
up and checking the queues more frequently, which may increase CPU usage of the server.
When not using Foxx queues, this value can be raised to save some CPU time.
* added startup option `--server.foxx-queues`
This startup option controls whether the Foxx queue manager will check queue and job entries.
Disabling this option can reduce server load but will prevent jobs added to Foxx queues from
being processed at all.
The default value is `true`, enabling the Foxx queues feature.
* make Foxx queues really database-specific.
Foxx queues were and are stored in a database-specific collection `_queues`. However, a global
cache variable for the queues led to the queue names being treated database-independently, which
was wrong.
Since 2.6, Foxx queue names are truly database-specific, so the same queue name can be used in
two different databases for two different queues. In earlier versions, it is advisable to think
of queues as already being database-specific, and to use the database name as a queue name
prefix to avoid name conflicts, e.g.:
var queueName = "myQueue";
var Foxx = require("org/arangodb/foxx");
Foxx.queues.create(db._name() + ":" + queueName);
* added support for Foxx queue job types defined as app scripts.
The old job types introduced in 2.4 are still supported but are known to cause issues in 2.5
and later when the server is restarted or the job types are not defined in every thread.
The new job types avoid this issue by storing an explicit mount path and script name rather
than assuming the job type is defined globally. It is strongly recommended to convert your
job types to the new script-based system.
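As a sketch of the new style (the mount point `/my-app` and script name `log` are made-up examples; this only runs inside ArangoDB):

```js
// the manifest.json of the app mounted at /my-app would declare:
//   "scripts": { "log": "scripts/log.js" }
var Foxx = require("org/arangodb/foxx");
var queue = Foxx.queues.create("my-queue");
// reference the job by mount point and script name instead of
// relying on a globally registered job type:
queue.push({ mount: "/my-app", name: "log" }, { message: "hello" });
```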
* renamed Foxx sessions option "sessionStorageApp" to "sessionStorage". The option now also accepts session storages directly.
* Added the following JavaScript methods for file access:
* fs.copyFile() to copy single files
* fs.copyRecursive() to copy directory trees
* fs.chmod() to set the file permissions (non-Windows only)
* Added process.env for accessing the process environment from JavaScript code
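process.env behaves like the familiar Node.js object, e.g. (the LOG_LEVEL variable name is just an example):

```javascript
// look up an environment variable, falling back to a default value
var logLevel = process.env.LOG_LEVEL || "info";
console.log("log level:", logLevel);
```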
* Cluster: kickstarter shutdown routines will more precisely follow the shutdown of their nodes.
* Cluster: don't delete agency connection objects that are currently in use.
* Cluster: improve passing along of HTTP errors
* fixed issue #1247: debian init script problems
* multi-threaded index creation on collection load
When a collection contains more than one secondary index, they can be built in memory in
parallel when the collection is loaded. How many threads are used for parallel index creation
is determined by the new configuration parameter `--database.index-threads`. If this is set
to 0, indexes are built sequentially by the opening thread only. This is equivalent to
the behavior in 2.5 and before.
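For example (the database directory path is illustrative):

```shell
# build secondary indexes with up to 4 threads when loading collections:
arangod --database.index-threads 4 /var/lib/arangodb
```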
* speed up building up primary index when loading collections
* added `count` attribute to `parameters.json` files of collections. This attribute indicates
the number of live documents in the collection on unload. It is read when the collection is
(re)loaded to determine the initial size for the collection's primary index
* removed remainders of MRuby integration, removed arangoirb
* simplified `controllers` property in Foxx manifests. You can now specify a filename directly
if you only want to use a single file mounted at the base URL of your Foxx app.
* simplified `exports` property in Foxx manifests. You can now specify a filename directly if
you only want to export variables from a single file in your Foxx app.
* added support for node.js-style exports in Foxx exports. Your Foxx exports file can now export
arbitrary values using the `module.exports` property instead of adding properties to the
`exports` object.
* added `scripts` property to Foxx manifests. You should now specify the `setup` and `teardown`
files as properties of the `scripts` object in your manifests and can define custom,
app-specific scripts that can be executed from the web interface or the CLI.
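A manifest using the `scripts` property might look like this; the `send-mail` script is a made-up example of a custom, app-specific script:

```
{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "setup": "scripts/setup.js",
    "teardown": "scripts/teardown.js",
    "send-mail": "scripts/send-mail.js"
  }
}
```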
* added `tests` property to Foxx manifests. You can now define test cases using the `mocha`
framework which can then be executed inside ArangoDB.
* updated `joi` package to 6.0.8.
* added `extendible` package.
* added Foxx model lifecycle events to repositories. See #1257.
* speed up resizing of edge index.
* allow splitting an edge index into buckets which are resized individually.
This is controlled by the `indexBuckets` attribute in the `properties`
of the collection.
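In arangosh this might look as follows (the collection name and the value 16 are illustrative):

```js
// set the number of index buckets when creating an edge collection:
db._createEdgeCollection("relations", { indexBuckets: 16 });
// or adjust it on an existing collection via its properties:
db.relations.properties({ indexBuckets: 16 });
```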
* fix a cluster deadlock bug in larger clusters by marking a thread waiting
for a lock on a DBserver as blocked
v2.5.4 (2015-05-14)
-------------------
* added startup option `--log.performance`: specifying this option at startup will log
performance-related info messages, mainly timings via the regular logging mechanisms
* cluster fixes
* fix for recursive copy under Windows
v2.5.3 (2015-04-29)
-------------------
* Fix fs.move to work across filesystem borders; Fixes Foxx app installation problems;
issue #1292.
* Fix Foxx app install when installed on a different drive on Windows
* issue #1322: strange AQL result
* issue #1318: Inconsistent db._create() syntax
* issue #1315: queries to a collection fail with an empty response if the
collection contains specific JSON data
* issue #1300: Make arangodump not fail if target directory exists but is empty
* allow specifying higher values than SOMAXCONN for `--server.backlog-size`
Previously, arangod would not start when a `--server.backlog-size` value was
specified that was higher than the platform's SOMAXCONN header value.
Now, arangod will use the user-provided value for `--server.backlog-size` and
pass it to the listen system call even if the value is higher than SOMAXCONN.
If the user-provided value is higher than SOMAXCONN, arangod will log a warning
on startup.
* Fixed a cluster deadlock bug. Mark a thread that is in a RemoteBlock as
blocked to allow for additional dispatcher threads to be started.
* Fix locking in cluster by using another ReadWriteLock class for collections.
* Add a second DispatcherQueue for AQL in the cluster. This fixes a
cluster-AQL thread explosion bug.
v2.5.2 (2015-04-11)
-------------------
* modules stored in _modules are automatically flushed when changed
* added missing query-id parameter in documentation of HTTP DELETE `/_api/query` endpoint
* added iterator for edge index in AQL queries
this change may lead to less edges being read when used together with a LIMIT clause
* make graph viewer in web interface issue less expensive queries for determining
a random vertex from the graph, and for determining vertex attributes
* issue #1285: syntax error, unexpected $undefined near '@_to RETURN obj
this allows AQL bind parameter names to also start with underscores
* moved /_api/query to C++
* issue #1289: Foxx models created from database documents expose an internal method
* added `Foxx.Repository#exists`
* parallelise initialization of V8 context in multiple threads
* fixed a possible crash when the debug-level was TRACE
* cluster: do not initialize statistics collection on each
coordinator, this fixes a race condition at startup
* cluster: fix a startup race w.r.t. the _configuration collection
* search for db:// JavaScript modules only after all local files have been
considered, this speeds up the require command in a cluster considerably
* general cluster speedup in certain areas
v2.5.1 (2015-03-19)
-------------------
* fixed bug that caused undefined behavior when an AQL query was killed inside
a calculation block
* fixed memleaks in AQL query cleanup in case out-of-memory errors are thrown
* by default, Debian and RedHat packages are built with debug symbols
* added option `--database.ignore-logfile-errors`
This option controls how collection datafiles with a CRC mismatch are treated.
If set to `false`, CRC mismatch errors in collection datafiles will lead
to a collection not being loaded at all. If a collection needs to be loaded
during WAL recovery, the WAL recovery will also abort (if not forced with
`--wal.ignore-recovery-errors true`). Setting this flag to `false` protects
users from unintentionally using a collection with corrupted datafiles, from
which only a subset of the original data can be recovered.
If set to `true`, CRC mismatch errors in collection datafiles will lead to
the datafile being partially loaded. All data up until the mismatch will
be loaded. This will enable users to continue with collection datafiles
that are corrupted, but will result in only a partial load of the data.
The WAL recovery will still abort when encountering a collection with a
corrupted datafile, at least if `--wal.ignore-recovery-errors` is not set to
`true`.
The default value is *true*, so for collections with corrupted datafiles
there might be partial data loads once the WAL recovery has finished. If
the WAL recovery needs to load a collection with a corrupted datafile,
it will still stop when using the default values.
* INCOMPATIBLE CHANGE:
make the arangod server refuse to start if during startup it finds a non-readable
`parameter.json` file for a database or a collection.
Stopping the startup process in this case requires manual intervention (fixing
the unreadable files) before the server can be started again, but prevents follow-up
errors due to silently ignored databases or collections.
* datafiles and `parameter.json` files written by arangod are now created with read and write
privileges for the arangod process user, and with read and write privileges for the arangod
process group.
Previously, these files were created with user read and write permissions only.
* INCOMPATIBLE CHANGE:
abort WAL recovery if one of the collection's datafiles cannot be opened
* INCOMPATIBLE CHANGE:
never try to raise the privileges after dropping them, as this can lead to a race condition
while running the recovery.
If you need to run ArangoDB on a port lower than 1024, you must start ArangoDB as root.
* fixed inefficiencies in `remove` methods of general-graph module
* added option `--database.slow-query-threshold` for controlling the default AQL slow query
threshold value on server start
* add system error strings for Windows on many places
* rework service startup so we announce 'RUNNING' only when we're finished starting.
* use the Windows eventlog for FATAL and ERROR - log messages
* fix service handling in NSIS Windows installer, specify human readable name
* add the ICU_DATA environment variable to the fatal error messages
* fixed issue #1265: arangod crashed with SIGSEGV
* fixed issue #1241: Wildcards in examples
v2.5.0 (2015-03-09)
-------------------
* installer fixes for Windows
* fix for downloading Foxx
* fixed issue #1258: http pipelining not working?
v2.5.0-beta4 (2015-03-05)
-------------------------
* fixed issue #1247: debian init script problems
v2.5.0-beta3 (2015-02-27)
-------------------------
* fix Windows install path calculation in arango
* fix Windows logging of long strings
* fix possible undefinedness of const strings in Windows
v2.5.0-beta2 (2015-02-23)
-------------------------
* fixed issue #1256: agency binary not found #1256
* fixed issue #1230: API: document/col-name/_key and cursor return different floats
* front-end: dashboard tries not to (re)load statistics if user has no access
* V8: Upgrade to version 3.31.74.1
* etcd: Upgrade to version 2.0 - compiling it now requires at least go 1.3.
* refuse to start up if ICU wasn't initialized; this prevents follow-up errors, e.g. from
  messages being printed or libraries being loaded incorrectly.
* front-end: unwanted removal of index table header after creating new index
* fixed issue #1248: chrome: applications filtering not working
* fixed issue #1198: queries remain in aql editor (front-end) if you navigate through different tabs
* Simplify usage of Foxx
Thanks to our user feedback we learned that Foxx is a powerful, yet rather complicated concept.
With this release we tried to make it less complicated while keeping all its strength.
That includes a rewrite of the documentation as well as some code changes as listed below:
* Moved Foxx applications to a different folder.
The naming convention now is: <app-path>/_db/<dbname>/<mountpoint>/APP
Before it was: <app-path>/databases/<dbname>/<appname>:<appversion>
This caused some trouble as apps were cached based on name and version, and updates did not apply.
Hence the path on filesystem and the app's access URL had no relation to one another.
Now the path on filesystem is identical to the URL (except for slashes and the appended APP)
* Rewrite of Foxx routing
The routing of Foxx has undergone major internal changes, which we adjusted based on user feedback.
This allows us to set the development mode per mountpoint without having to change paths and keep
apps at separate locations.
* Foxx Development mode
The development mode used until 2.4 is gone. It has been replaced by a much more mature version.
This includes the deprecation of the javascript.dev-app-path parameter, which is useless since 2.5.
Instead of having two separate app directories for production and development, apps now reside in
one place, which is used for production as well as for development.
Apps can still be put into development mode, changing their behavior compared to production mode.
Apps in development mode are still re-read from disk on every request, and they still produce
more debug output.
This change has also made the startup options `--javascript.frontend-development-mode` and
`--javascript.dev-app-path` obsolete. The former option will not have any effect when set, and the
latter option is only read and used during the upgrade to 2.5 and does not have any effects later.
* Foxx install process
Installing Foxx apps used to be a two-step process: import them into ArangoDB, then mount them
at a specific mountpoint. These operations have now been joined together: you install an app at
one mountpoint, and that's it. No fetch, mount, unmount, purge cycle anymore. The commands have
been simplified to just:
* install: get your Foxx app up and running
* uninstall: shut it down and erase it from disk
* Foxx error output
Until 2.4 the errors produced by Foxx were not optimal. Often, the error message was just
`unable to parse manifest` and contained only an internal stack trace.
In 2.5 we made major improvements there, including a much more fine-grained error output that
helps you debug your Foxx apps. The error message printed is now much closer to its source and
should help you track it down.
We also added default handlers for unhandled errors in Foxx apps:
* You will get a nice internal error page whenever your Foxx app is called but could not be
installed due to an error
* You will get a proper error message when an uncaught error occurs in any app route
In production mode the messages above will NOT contain any information about your Foxx internals
and are safe to be exposed to third party users.
In development mode the messages above will contain the stacktrace (if available), making it easier for
your in-house devs to track down errors in the application.
* added `console` object to Foxx apps. All Foxx apps now have a console object implementing
the familiar Console API in their global scope, which can be used to log diagnostic
messages to the database.