Tags: vasqcorp/daml
Backport: [JSON-API] Make key_hash idx non unique (digital-asset#11102) (digital-asset#11143)

See the original PR, digital-asset#11102, for details. Note that there was a conflict in ledger-service/http-json/src/itlib/scala/http/AbstractHttpServiceIntegrationTest.scala, which I tried to resolve sensibly.

* Add failing test that covers the bug
* Fix ON CONFLICT error for inserts into the contracts table
* Move the test to the parent suite so that it also exercises the Oracle query store
* Make key_hash indexes non-unique
* Use recordFromFields
* Fix broken backport of tests

changelog_begin
- [JSON-API] Make key_hash indexes non-unique. This fixes a bug where a duplicate-key conflict was raised on the query store when the same contract was witnessed twice by two separate parties.
changelog_end

Co-authored-by: Akshay <akshay.shirahatti@digitalasset.com>
Co-authored-by: Victor Peter Rouven Müller <mueller.vpr@gmail.com>
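As a rough illustration of the fix (a sketch only: the table and column names below are illustrative, not the actual query-store schema), the key_hash index is created without UNIQUE, and only the contract id itself is treated as an insert conflict, so the same contract witnessed by two separate parties no longer raises a duplicate-key error:

```scala
import doobie._
import doobie.implicits._

object KeyHashIndexSketch {
  // Non-unique index: two parties witnessing the same contract key may produce
  // two rows with the same key_hash, so the index must not enforce uniqueness.
  val createKeyHashIndex: ConnectionIO[Int] =
    sql"CREATE INDEX contract_key_hash_idx ON contract (key_hash)".update.run

  // Only the contract id is a genuine conflict; re-inserting a contract that is
  // already present is ignored rather than failed. (PostgreSQL syntax shown;
  // the Oracle query store needs the equivalent ignore-duplicates formulation.)
  def insertContract(contractId: String, keyHash: String, payload: String): ConnectionIO[Int] =
    sql"""INSERT INTO contract (contract_id, key_hash, payload)
          VALUES ($contractId, $keyHash, $payload)
          ON CONFLICT (contract_id) DO NOTHING""".update.run
}
```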
[Divulgence pruning] Fixes divulgence pruning offset update query (digital-asset#11111)

CHANGELOG_BEGIN
CHANGELOG_END
Add deprecation output for sandbox classic (digital-asset#11119)

changelog_begin
changelog_end
detect unsynchronized contract table and retry (digital-asset#10617)

* Enumerate out-of-sync offsets at the DB level
* Clean up lastOffset
* Write the latest-requested-or-read offset when catching up
  - Writing only the latest-read offset, as before, would report unsynced offsets that are actually well-synced. This puts the DB in a more uniform state, i.e. it should actually reflect the single value that the fetchAndPersist loop tries to catch everything up to.
* Detect lagging offsets from the unsynced-offsets set
  - Treating every unsynced offset as a lag would make us needlessly retry perfectly synchronized query results.
* Add Foldable1 derived from Foldable for NonEmpty
* Nicer version of the unsynced function
* ConnectionIO scalaz monad
* Rename Offset.ordering to `Offset ordering` so it can be imported verbatim
* Finish aggregating in the lag-detector function; compiles
* Port sjd
* XTag, a scalaz 7.3-derived tag to allow stacked tags
* Make the complicated aggregation properly testable
* Extra semantic corner cases I didn't think of
* Tests for laggingOffsets
* A way to rerun queries if the laggingOffsets check reveals inconsistency
* If the bookmark is ever different, we always have to rerun anyway
* Boolean blindness
* Incorporate laggingOffsets into fetchAndPersistBracket
* Split fetchAndPersist from getTermination and clean up its arguments
* Just compose functors
* Add looping to fetchAndPersistBracket
* More mvo tests
* Test unsyncedOffsets, too
* Lagginess collector
* Supply more likely actual data with mvo tests; don't trust Java equals
* Rework minimumViableOffsets to track sync states across template IDs
* Extra note
* Fix the tests to work against the stricter mvo
* Move the surrogatesToDomains call
* More tests for the lagginess accumulator
* Add changelog

CHANGELOG_BEGIN
- [JSON API] Under rare conditions, a multi-template query backed by database could have an ACS portion that does not match its transaction stream if updated concurrently. This condition is now checked and accounted for. See `issue digital-asset#10617 <https://github.com/digital-asset/daml/pull/10617>`__.
CHANGELOG_END

* Port toSeq to Scala 2.12
* Handle a corner case with offsets being too close to expected values
* Remove XTag; it turned out not to be needed
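The core of the lag detection can be pictured with a small, self-contained sketch (hypothetical types and names; not the actual minimumViableOffsets/laggingOffsets code): given the bookmark offset everything was supposed to be caught up to, report the template IDs whose persisted offset is still behind, and re-run the fetch for exactly those until they agree or a retry budget is exhausted.

```scala
object LagCheckSketch {
  // Offsets are modelled as lexicographically comparable strings for brevity.
  type Offset = String
  type TemplateId = String

  /** Template IDs whose persisted offset is strictly behind the expected bookmark.
    * A non-empty result means the contract table is out of sync with the
    * transaction stream and the query must not be answered from it yet.
    */
  def laggingOffsets(
      expected: Offset,
      perTemplate: Map[TemplateId, Offset],
  ): Map[TemplateId, Offset] =
    perTemplate.filter { case (_, off) => off < expected }

  /** Re-run the fetch for lagging templates until none lag, within a retry budget. */
  @annotation.tailrec
  def catchUp(
      expected: Offset,
      readOffsets: () => Map[TemplateId, Offset],
      fetchAndPersist: Set[TemplateId] => Unit,
      retriesLeft: Int = 3,
  ): Either[Map[TemplateId, Offset], Unit] = {
    val lagging = laggingOffsets(expected, readOffsets())
    if (lagging.isEmpty) Right(())
    else if (retriesLeft <= 0) Left(lagging)
    else {
      fetchAndPersist(lagging.keySet)
      catchUp(expected, readOffsets, fetchAndPersist, retriesLeft - 1)
    }
  }
}
```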
Include concurrency info in output (digital-asset#10970)

changelog_begin
changelog_end
Retry upsert of command deduplication on Oracle and H2 [DPP-609] (digital-asset#10976) (digital-asset#10989)

* Retry upsert of command deduplication on Oracle and H2 [DPP-609]

CHANGELOG_BEGIN
CHANGELOG_END

* Address review comments
* Fix a typo

Co-authored-by: Moritz Kiefer <moritz.kiefer@purelyfunctional.org>
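The retry described here has roughly this shape (a hedged sketch with illustrative names; the real code lives in the participant's deduplication storage layer): since Oracle and H2 may reject one of two concurrent upserts with a unique-constraint violation rather than serialising them, the losing write is simply attempted again a bounded number of times.

```scala
import java.sql.SQLException
import scala.annotation.tailrec

object RetryUpsertSketch {
  // 23505 (H2/PostgreSQL) and 23000 (Oracle) are SQLSTATE values for
  // integrity-constraint violations such as duplicate keys.
  private def isDuplicateKey(e: SQLException): Boolean =
    e.getSQLState == "23505" || e.getSQLState == "23000"

  /** Run `upsert`, retrying when a concurrent writer wins the race. */
  @tailrec
  def withRetries[A](retriesLeft: Int)(upsert: () => A): A = {
    val attempt: Either[SQLException, A] =
      try Right(upsert())
      catch { case e: SQLException if isDuplicateKey(e) => Left(e) }
    attempt match {
      case Right(a)                   => a
      case Left(_) if retriesLeft > 0 => withRetries(retriesLeft - 1)(upsert)
      case Left(e)                    => throw e
    }
  }
}
```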
ledger-api-test-tool - Add conformance test for parallel command deduplication using CommandSubmissionService [KVL-1099] (digital-asset#10869)

* Extract deduplication "features" into a configuration to be used around the tests; better naming for assertions that support sync and async deduplication
* Fix broken test and use consistency for tests
* Add conformance test for parallel command deduplication
* Add import for 2.12 compat
* Add silencer plugin
* Split the parallel command deduplication scenario into its own test suite
* Add the parallel command deduplication test to append-only ledgers
* Run parallel command deduplication tests for append-only ledgers
* Apply suggestions from code review
* Code review renames
* Add compat import
* Run the test concurrently

CHANGELOG_BEGIN
CHANGELOG_END

Co-authored-by: fabiotudone-da <fabio.tudone@digitalasset.com>
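In spirit, the parallel scenario does something like the following (a sketch with stand-in types; the actual suite is built on the ledger-api-test-tool infrastructure): submit the same command, carrying the same deduplication key, many times concurrently and require that exactly one submission is accepted while all the others are rejected as duplicates.

```scala
import scala.concurrent.{ExecutionContext, Future}

object ParallelDedupSketch {
  sealed trait Outcome
  case object Accepted extends Outcome
  case object Duplicate extends Outcome

  /** Fire `parallelism` identical submissions at once and check that command
    * deduplication lets exactly one of them through.
    */
  def run(submit: () => Future[Outcome], parallelism: Int = 32)(
      implicit ec: ExecutionContext
  ): Future[Unit] =
    Future.traverse((1 to parallelism).toList)(_ => submit()).map { outcomes =>
      val accepted = outcomes.count(_ == Accepted)
      assert(accepted == 1, s"expected exactly one accepted submission, got $accepted")
    }
}
```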
Force JSON API to refresh packages for GET requests to /v1/query (digital-asset#10835)

* Add failing test that covers the bug found in digital-asset#10823
* Fix the /v1/query endpoint bug

changelog_begin
- [JSON API] Fixed a bug that prevented the JSON API from being aware of packages uploaded directly via the Ledger API.
changelog_end
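The fix can be summarised with a small sketch (hypothetical names; not the JSON API's actual package service): if a requested template is not in the in-memory package cache, reload the packages from the ledger once before giving up, so that packages uploaded directly via the Ledger API are also visible to GET requests to /v1/query.

```scala
import scala.concurrent.{ExecutionContext, Future}

object PackageRefreshSketch {
  final case class TemplateId(packageId: String, module: String, entity: String)

  /** Resolve a template ID, reloading the package cache once if it is unknown. */
  def resolveTemplate(
      knownTemplates: () => Set[TemplateId],
      reloadPackages: () => Future[Unit],
      wanted: TemplateId,
  )(implicit ec: ExecutionContext): Future[Option[TemplateId]] =
    if (knownTemplates().contains(wanted)) Future.successful(Some(wanted))
    else
      reloadPackages().map { _ =>
        if (knownTemplates().contains(wanted)) Some(wanted) else None
      }
}
```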
DPP-368 clean up flags (digital-asset#10711)

* Make batching parallelism configurable
* Fail if incompatible CLI flags are used

changelog_begin
changelog_end
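A minimal sketch of the flag clean-up, with illustrative option names rather than the participant's real CLI surface: the batching parallelism becomes an explicit, validated setting, and mutually exclusive flags cause startup to fail instead of one of them being silently ignored.

```scala
object FlagValidationSketch {
  // Illustrative configuration; the real flag names and defaults differ.
  final case class Config(
      batchingParallelism: Int = 4,
      enableAppendOnlySchema: Boolean = false,
      legacyIndexer: Boolean = false,
  )

  /** Reject incompatible or nonsensical flag combinations up front. */
  def validate(c: Config): Either[String, Config] =
    if (c.enableAppendOnlySchema && c.legacyIndexer)
      Left("--enable-append-only-schema cannot be combined with --legacy-indexer")
    else if (c.batchingParallelism <= 0)
      Left("--batching-parallelism must be a positive number")
    else Right(c)
}
```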