urcu:stable-0.10

Last commit made on 2019-05-27
Get this branch:
git clone -b stable-0.10 https://git.launchpad.net/urcu

Branch information

Name:
stable-0.10
Repository:
lp:urcu

Recent commits

54e31a4... by Mathieu Desnoyers on 2019-05-27

Fix: urcu/futex.h: users of struct timespec should include time.h

Fixes: #1187
Signed-off-by: Mathieu Desnoyers <email address hidden>
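For context, a sketch of the rule this fix enforces: any file that names struct timespec (as the commit indicates urcu/futex.h itself does) should include <time.h> directly rather than rely on another header pulling it in transitively. The header below is hypothetical and only illustrates the pattern; it is not part of the urcu tree.

    /* example_timedwait.h -- hypothetical header, for illustration only. */
    #ifndef EXAMPLE_TIMEDWAIT_H
    #define EXAMPLE_TIMEDWAIT_H

    #include <time.h>    /* declares struct timespec; do not rely on a
                            transitive include to provide it */
    #include <stdint.h>

    /* Prototype naming struct timespec: without <time.h> above, the
       compiler warns that struct timespec is declared inside the
       parameter list, and callers cannot match the type. */
    int example_timedwait(int32_t *addr, int32_t val,
                          const struct timespec *timeout);

    #endif /* EXAMPLE_TIMEDWAIT_H */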

78e30df... by Mathieu Desnoyers on 2019-05-06

Cleanup: update code layout to fix old gcc warning

Some CI jobs show:

urcu-pointer.o
13:46:22 In file included from urcu.c:49:0:
13:46:22 urcu-wait.h:70:9: warning: missing initializer for field 'lock' of 'struct cds_wfs_stack' [-Wmissing-field-initializers]
13:46:22 struct urcu_wait_queue name = URCU_WAIT_QUEUE_HEAD_INIT(name)
13:46:22 ^
13:46:22 urcu.c:150:8: note: in expansion of macro 'DEFINE_URCU_WAIT_QUEUE'
13:46:22 static DEFINE_URCU_WAIT_QUEUE(gp_waiters);
13:46:22 ^
13:46:22 In file included from urcu-wait.h:27:0,
13:46:22 from urcu.c:49:
13:46:22 ../include/urcu/wfstack.h:92:18: note: 'lock' declared here
13:46:22 pthread_mutex_t lock;
13:46:22

Change the code layout so as not to confuse old gcc.

Signed-off-by: Mathieu Desnoyers <email address hidden>
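For reference, a reduced illustration (hypothetical names, not the urcu code, and not the layout change this commit actually makes) of the pattern old gcc flags with -Wmissing-field-initializers: a static initializer that names only the first member of a structure whose remaining members include a pthread_mutex_t.

    /* Reduced illustration; hypothetical names, not urcu code. */
    #include <pthread.h>

    struct example_stack {
        void *head;
        pthread_mutex_t lock;
    };

    /* Old gcc warns: "missing initializer for field 'lock'", even though
       the omitted members are zero-initialized as the standard requires. */
    static struct example_stack stack_warns = { NULL };

    /* Naming every member has the same semantics and keeps old compilers
       quiet. */
    static struct example_stack stack_quiet = {
        .head = NULL,
        .lock = PTHREAD_MUTEX_INITIALIZER,
    };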

dde3e46... by Michael Jeanson <email address hidden> on 2019-04-22

Fix: typo CPPLAGS in examples Makefile

Overriding CPPFLAGS through the environment was ignored for the
examples.

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

4533904... by Michael Jeanson <email address hidden> on 2019-03-07

Update dead link in lgpl-relicensing.txt

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

0929404... by Mathieu Desnoyers on 2019-01-14

Version 0.10.2

Signed-off-by: Mathieu Desnoyers <email address hidden>

803e593... by Jérémie Galarneau <email address hidden> on 2018-12-07

Fix: only wait if work queue is empty in real-time mode

Unconditionally waiting for 10 ms after the completion of every batch
of jobs of the work queue in real-time mode appears to be a behaviour
inherited from the call-rcu thread.

While this is a fair trade-off in the context of call-rcu, it is less
evident that it is desirable in the context of a general-purpose
work queue. Waiting when work is available artificially degrades the
latency characteristics of the work queue.

If a workqueue user ever needs the explicit delay for batching (e.g. if
a call-rcu implementation were ever to use the workqueue worker thread),
it can be added within the worker_before_wait_fct callback received as
an argument at workqueue creation.

Signed-off-by: Jérémie Galarneau <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>
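As the message notes, a user that still wants the batching delay can reintroduce it through the worker_before_wait_fct callback supplied when the work queue is created. A minimal sketch follows, assuming the callback receives the queue's private data pointer like the other worker callbacks; how it is registered at creation time is not shown.

    /* Hypothetical worker_before_wait_fct implementation (sketch only). */
    #include <poll.h>

    static void example_before_wait(void *priv)
    {
        (void) priv;
        /* Reintroduce a 10 ms batching delay before the worker thread
           goes to sleep waiting for more work. */
        (void) poll(NULL, 0, 10);
    }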

79eca09... by Jérémie Galarneau <email address hidden> on 2018-12-07

Fix: don't wait after completion of a work queue job batch

As indicated in the previous patch of this series, waiting after the
completion of a job batch from the work queue artificially increases
the work queue's latency.

The previous patch removed the wait performed when the work queue is
observed to be empty, which had been identified as the cause of a
performance problem.

It is likely that waiting when the queue is observed to be non-empty
is similarly unintended. Note that I have not observed such a problem
myself.

If a workqueue user ever needs the explicit delay for batching (e.g. if
a call-rcu implementation were ever to use the workqueue worker thread),
it can be added within the worker_before_wait_fct callback received as
an argument at workqueue creation.

Signed-off-by: Jérémie Galarneau <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

d3e42be... by Jérémie Galarneau <email address hidden> on 2018-12-07

Fix: don't wait after completion of job batch if work queue is empty

On completion of a batch of jobs from the work queue, a wait of 10
ms (using poll()) is performed if there is no work present in the
work queue before waiting on its futex.

The work queue thread's structure is inspired by the call-rcu thread.
In the context of the call-rcu thread, my understanding is that the
intention is to ensure that the thread does not continuously wake up
to process a single queued item. This is fine as an application should
not wait for a call-rcu job to be executed (or at least I don't see a
use-case for that).

In the context of the work queue, waiting for more work to be available
artificially slows down the execution of work on which an application
may wait.

I have observed a case where LTTng's session daemon's shutdown
takes around 4 seconds as a large number of cds_lfht objects are
destroyed. Removing the wait reduces the duration of this phase of the
shutdown to roughly 10 ms.

If a workqueue user ever needs the explicit delay for batching (e.g. if
a call-rcu implementation were ever to use the workqueue worker thread),
it can be added within the worker_before_wait_fct callback received as
an argument at workqueue creation.

Signed-off-by: Jérémie Galarneau <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>
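A schematic of the behaviour change this message describes, using hypothetical helper names in place of the real workqueue internals: previously the worker slept 10 ms via poll() when it found the queue empty after a batch, and only then waited on the queue's futex, so work enqueued during those 10 ms had to wait for the poll() to return; the fix waits on the futex directly.

    /* Schematic only; queue_is_empty() and futex_wait_on_queue() are
       hypothetical stand-ins for the workqueue internals. */
    #include <poll.h>
    #include <stdbool.h>

    static bool queue_is_empty(void) { return true; }    /* stub */
    static void futex_wait_on_queue(void) { }             /* stub */

    /* Before the fix: up to 10 ms of extra latency after every batch
       that drains the queue. */
    static void after_batch_old(void)
    {
        if (queue_is_empty()) {
            poll(NULL, 0, 10);
            futex_wait_on_queue();
        }
    }

    /* After the fix: go to sleep on the futex right away; new work wakes
       the worker immediately instead of waiting out the poll(). */
    static void after_batch_fixed(void)
    {
        if (queue_is_empty())
            futex_wait_on_queue();
    }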

9e19a6b... by Mathieu Desnoyers on 2018-12-09

Fix: workqueue: struct urcu_work vs rcu_head mixup

The workqueue code was derived from call-rcu, and its API
expects a struct urcu_work for work items, but it internally iterates
over struct rcu_head.

This is not an issue at runtime because both structures have the
exact same layout and content, but it is a type mixup nevertheless.

Use the right type in the implementation.

Signed-off-by: Mathieu Desnoyers <email address hidden>
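An illustration of the kind of mixup described here, with simplified stand-in types rather than the actual struct urcu_work / struct rcu_head definitions: iterating one type's list through the other only works because the layouts happen to match, and the fix is to use the type the API documents.

    /* Simplified stand-ins; not the urcu definitions. */
    struct example_work {
        struct example_work *next;
        void (*func)(struct example_work *work);
    };

    struct example_head {               /* identical layout, different type */
        struct example_head *next;
        void (*func)(struct example_head *head);
    };

    /* Type mixup: walking work items through the wrong type relies on the
       two layouts matching exactly. */
    static void run_all_mixed(struct example_work *list)
    {
        struct example_head *h = (struct example_head *) list;

        for (; h; h = h->next)
            ((struct example_work *) h)->func((struct example_work *) h);
    }

    /* Using the documented type removes that hidden dependency. */
    static void run_all(struct example_work *list)
    {
        struct example_work *w;

        for (w = list; w; w = w->next)
            w->func(w);
    }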

136211f... by Mathieu Desnoyers on 2018-12-09

Cleanup: workqueue: update comments referring to call-rcu

Signed-off-by: Mathieu Desnoyers <email address hidden>