couchdb:prototype/fdb-layer-db-version-as-vstamps

Last commit made on 2020-06-25
Get this branch:
git clone -b prototype/fdb-layer-db-version-as-vstamps https://git.launchpad.net/couchdb

Branch information

Name:
prototype/fdb-layer-db-version-as-vstamps
Repository:
lp:couchdb

Recent commits

9425a64... by Paul J. Davis

Check that our LayerPrefix hasn't changed

The `\xFFmetadataVersion` key is also used to coordinate between layers,
including the DirectoryLayer. This means that if the key changes we
also have to re-check that our Directory entry still exists.
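
A minimal sketch of that re-check, assuming a hypothetical state layout
and helper (`reload_layer_prefix/2`) rather than fabric2_fdb's actual
API (note the full key is `\xff/metadataVersion`):

```erlang
check_layer_prefix(Tx, #{md_version := Cached} = St) ->
    case erlfdb:wait(erlfdb:get(Tx, <<16#FF, "/metadataVersion">>)) of
        Cached ->
            %% Nothing changed anywhere; the cached LayerPrefix is valid.
            St;
        NewVersion ->
            %% Some layer bumped the key; our Directory entry may have
            %% moved or been deleted, so re-open it (hypothetical helper).
            reload_layer_prefix(Tx, St#{md_version := NewVersion})
    end.
```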

5967157... by Paul J. Davis

Condition cache storage/eviction on db_version

Now that db_version is comparable we can use that for cache control.

49949de... by Paul J. Davis

Use versionstamps for db versions

This gives us an ordered, comparable value for deciding when to evict
entries from the cache, and for preventing invalid updates under
concurrent load.
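
A minimal sketch of the cache rule this enables, with hypothetical
cache helpers (`lookup_cached/1`, `store_cached/2`) standing in for the
actual fabric2 cache:

```erlang
%% Versionstamps are 10-byte monotonically increasing binaries, so
%% ordinary Erlang term comparison orders db versions correctly.
maybe_cache(#{name := Name, db_version := NewVer} = Db) ->
    case lookup_cached(Name) of
        {ok, #{db_version := OldVer}} when OldVer >= NewVer ->
            %% Cache already holds a same-or-newer handle; keep it so a
            %% stale handle can't clobber a fresh one under concurrency.
            ok;
        _ ->
            store_cached(Name, Db)
    end.
```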

db2b19e... by Paul J. Davis

Move sys db callback installation to fix reopen

Previously, fabric2_fdb:reopen/1 would fail to reinstall the sys db
callbacks properly. This moves that logic into fabric2_fdb to fix the
issue.

da7ba5e... by Paul J. Davis

Fix broken cache update

The underlying `set_config/3` call ends up bumping the metadata version,
which results in invalidating the entire db cache. Storing an updated
version on the db handle is unproductive because we are unable to get
the new metadata version.

42d9106... by Paul J. Davis

Rearrange function ordering for clarity

This moves ensure_current/1 to match its order in the export list, and
ensure_current/2 is moved near the check_metadata_version and
check_db_version functions to group related logic.

3423d19... by Tony Sun <email address hidden>

add back r and w options

fa16e6a... by Nick Vatamaniuc <email address hidden>

Bump erlfdb to v1.2.2

https://github.com/apache/couchdb-erlfdb/releases/tag/v1.2.2

b17bc49... by Nick Vatamaniuc <email address hidden>

Handle transaction and future timeouts in couch_jobs notifiers

In an overload scenario, do not let notifiers crash and lose their
subscribers; instead, make them more robust and let them retry on
future or transaction timeouts.
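
A minimal sketch of the retry idea, assuming a hypothetical
`check_for_updates/1` and treating the FDB timeout codes shown as the
retryable ones (the real patch may match errors differently):

```erlang
%% erlfdb raises error:{erlfdb_error, Code} for FDB-level failures;
%% 1004 is `timed_out` and 1031 is `transaction_timed_out`.
notify_loop(State) ->
    NewState =
        try
            check_for_updates(State)
        catch
            error:{erlfdb_error, Code} when Code == 1004; Code == 1031 ->
                %% Back off briefly and keep the notifier (and its
                %% subscribers) alive instead of crashing.
                timer:sleep(100),
                State
        end,
    notify_loop(NewState).
```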

3536ad8... by Nick Vatamaniuc <email address hidden>

Split couch_views acceptors and workers

Optimize couch_views by using a separate set of acceptors and workers.
Previously, all `max_workers` were spawned on startup and waited to
accept jobs in parallel. In a setup with a large number of pods, and
100 workers per pod, that could lead to a lot of conflicts when all
those workers race to accept the same job at the same time.

The improvement is to spawn only a limited number of acceptors (5 by
default), then spawn more as some of them become workers. Also, when
workers finish or die with an error, check whether more acceptors can
be spawned; see the sketch after the walkthrough below.

As an example, here is what might happen with `max_acceptors = 5` and
`max_workers = 100` (`A` and `W` are the current counts of acceptors
and workers, respectively):

1. Starting out:
  `A = 5, W = 0`

2. After 2 acceptors accept jobs and become workers:
  `A = 3, W = 2`
Then 2 more acceptors are immediately spawned:
  `A = 5, W = 2`

3. After 95 workers are started:
  `A = 5, W = 95`

4. Now if 3 acceptors accept, it would look like:
  `A = 2, W = 98`
But no more acceptors would be started, since `A + W` has already
reached `max_workers`.

5. If the last 2 acceptors also accept jobs:
  `A = 0, W = 100`
At this point no more indexing jobs can be accepted and started until
at least one of the workers finishes and exits.

6. If 1 worker exits:
  `A = 0, W = 99`
An acceptor is immediately spawned:
  `A = 1, W = 99`

7. If all 99 workers exit, it will go back to:
  `A = 5, W = 0`
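
A sketch of the counting rule behind this walkthrough, written as a
hypothetical helper rather than the actual couch_views code: top
acceptors back up toward `max_acceptors`, but never let acceptors plus
workers exceed `max_workers`.

```erlang
%% How many new acceptors to spawn given the current counts (sketch).
acceptors_to_spawn(A, W, MaxAcceptors, MaxWorkers) ->
    max(0, min(MaxAcceptors - A, MaxWorkers - (A + W))).
```

With `max_acceptors = 5` and `max_workers = 100` this reproduces the
walkthrough: 2 new acceptors at step 2, none at steps 4 and 5, 1 at
step 6, and 5 at step 7.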