Merge lp:~akopytov/percona-server/tp-low-prio-queue-throttling-5.5 into lp:percona-server/5.5

Proposed by Alexey Kopytov
Status: Merged
Approved by: Laurynas Biveinis
Approved revision: no longer in the source branch.
Merged at revision: 603
Proposed branch: lp:~akopytov/percona-server/tp-low-prio-queue-throttling-5.5
Merge into: lp:percona-server/5.5
Diff against target: 229 lines (+93/-29)
1 file modified
Percona-Server/sql/threadpool_unix.cc (+93/-29)
To merge this branch: bzr merge lp:~akopytov/percona-server/tp-low-prio-queue-throttling-5.5
Reviewer: Laurynas Biveinis (community), review type: none, status: Approve
Review via email: mp+198720@code.launchpad.net

Description of the change

    Implementation of
    https://blueprints.launchpad.net/percona-server/+spec/tp-low-prio-queue-throttling-5.5

    Introduced a limit on ‘busy’ threads per thread group. A busy thread is
    one that is either active (i.e. executing a statement) or waiting
    (i.e. between calls to thd_wait_begin() and thd_wait_end()). No events
    from the low priority queue are processed in a thread group while the
    number of busy threads in that group exceeds 1 +
    thread_pool_oversubscribe.

    Also made the code that creates new threads in thd_wait_begin() a
    compile-time option (the THREADPOOL_CREATE_THREADS_ON_WAIT macro). It is
    currently enabled, but benchmarks show that the code has essentially no
    effect, so it will likely be removed later.
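
    (A sketch of the resulting compile-time guard, with a hypothetical stub in
    place of the real wake-up logic; see wait_begin() in the diff for the
    actual code.)

        /* Defined by default in this patch; undefine to compile out the
           thread-creation path in wait_begin(). */
        #define THREADPOOL_CREATE_THREADS_ON_WAIT

        static void wake_or_create_thread_stub() {}  /* stand-in */

        void wait_begin_sketch()
        {
          /* ... bookkeeping: active_thread_count--, waiting_thread_count++ ... */
        #ifdef THREADPOOL_CREATE_THREADS_ON_WAIT
          /* Compiled in only while the macro is defined: wake or create a
             worker so the group does not stall while this thread waits. */
          wake_or_create_thread_stub();
        #endif
        }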

http://jenkins.percona.com/view/PS%205.5/job/percona-server-5.5-param/906/

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) :
review: Approve

Preview Diff

=== modified file 'Percona-Server/sql/threadpool_unix.cc'
--- Percona-Server/sql/threadpool_unix.cc	2013-12-02 12:17:20 +0000
+++ Percona-Server/sql/threadpool_unix.cc	2013-12-12 12:06:46 +0000
@@ -43,6 +43,10 @@
 /** Maximum number of native events a listener can read in one go */
 #define MAX_EVENTS 1024

+/** Define if wait_begin() should create threads if necessary without waiting
+for stall detection to kick in */
+#define THREADPOOL_CREATE_THREADS_ON_WAIT
+
 /** Indicates that threadpool was initialized*/
 static bool threadpool_started= false;

@@ -138,6 +142,7 @@
   int thread_count;
   int active_thread_count;
   int connection_count;
+  int waiting_thread_count;
   /* Stats for the deadlock detection timer routine.*/
   int io_event_count;
   int queue_event_count;
@@ -396,6 +401,33 @@
 #error not ported yet to this OS
 #endif

+namespace {
+
+/*
+  Prevent too many active threads executing at the same time, if the workload is
+  not CPU bound.
+*/
+
+inline bool too_many_active_threads(thread_group_t *thread_group)
+{
+  return (thread_group->active_thread_count
+          >= 1 + (int) threadpool_oversubscribe
+          && !thread_group->stalled);
+}
+
+/*
+  Limit the number of 'busy' threads by 1 + thread_pool_oversubscribe. A thread
+  is busy if it is in either the active state or the waiting state (i.e. between
+  thd_wait_begin() / thd_wait_end() calls).
+*/
+
+inline bool too_many_busy_threads(thread_group_t *thread_group)
+{
+  return (thread_group->active_thread_count + thread_group->waiting_thread_count
+          > 1 + (int) threadpool_oversubscribe);
+}
+
+} // namespace

 /* Dequeue element from a workqueue */

@@ -409,7 +441,12 @@
   {
     thread_group->high_prio_queue.remove(c);
   }
-  else if ((c= thread_group->queue.front()))
+  /*
+    Don't pick events from the low priority queue if there are too many
+    active + waiting threads.
+  */
+  else if (!too_many_busy_threads(thread_group) &&
+           (c= thread_group->queue.front()))
   {
     thread_group->queue.remove(c);
   }
@@ -530,7 +567,17 @@
   return NULL;
 }

+/*
+  Check if both the high and low priority queues are empty.

+  NOTE: we also consider the low priority queue empty in case it has events, but
+  they cannot be processed due to the too_many_busy_threads() limit.
+*/
+static bool queues_are_empty(thread_group_t *tg)
+{
+  return (tg->high_prio_queue.is_empty() &&
+          (tg->queue.is_empty() || too_many_busy_threads(tg)));
+}

 void check_stall(thread_group_t *thread_group)
 {
@@ -557,21 +604,21 @@
   thread_group->io_event_count= 0;

   /*
-    Check whether requests from the workqueue are being dequeued.
+    Check whether requests from the workqueues are being dequeued.

     The stall detection and resolution works as follows:

     1. There is a counter thread_group->queue_event_count for the number of
-       events removed from the queue. Timer resets the counter to 0 on each run.
+       events removed from the queues. Timer resets the counter to 0 on each run.
     2. Timer determines stall if this counter remains 0 since last check
-       and the queue is not empty.
+       and at least one of the high and low priority queues is not empty.
     3. Once timer determined a stall it sets thread_group->stalled flag and
        wakes and idle worker (or creates a new one, subject to throttling).
     4. The stalled flag is reset, when an event is dequeued.

-    Q : Will this handling lead to an unbound growth of threads, if queue
-    stalls permanently?
-    A : No. If queue stalls permanently, it is an indication for many very long
+    Q : Will this handling lead to an unbound growth of threads, if queues
+    stall permanently?
+    A : No. If queues stall permanently, it is an indication for many very long
     simultaneous queries. The maximum number of simultanoues queries is
     max_connections, further we have threadpool_max_threads limit, upon which no
     worker threads are created. So in case there is a flood of very long
@@ -582,8 +629,7 @@
     do wait and indicate that via thd_wait_begin/end callbacks, thread creation
     will be faster.
   */
-  if ((!thread_group->high_prio_queue.is_empty() ||
-       !thread_group->queue.is_empty()) && !thread_group->queue_event_count)
+  if (!thread_group->queue_event_count && !queues_are_empty(thread_group))
   {
     thread_group->stalled= true;
     wake_or_create_thread(thread_group);
@@ -1027,19 +1073,6 @@
   DBUG_VOID_RETURN;
 }

-
-/*
-  Prevent too many threads executing at the same time,if the workload is
-  not CPU bound.
-*/
-
-static bool too_many_threads(thread_group_t *thread_group)
-{
-  return (thread_group->active_thread_count >= 1+(int)threadpool_oversubscribe
-          && !thread_group->stalled);
-}
-
-
 /**
   Retrieve a connection with pending event.

@@ -1070,7 +1103,7 @@

   for(;;)
   {
-    bool oversubscribed = too_many_threads(thread_group);
+    bool oversubscribed = too_many_active_threads(thread_group);
     if (thread_group->shutdown)
       break;

@@ -1109,7 +1142,35 @@
     {
       thread_group->io_event_count++;
       connection = (connection_t *)native_event_get_userdata(&nev);
-      break;
+
+      /*
+        Since we are going to perform an out-of-order event processing for the
+        connection, first check whether it is eligible for high priority
+        processing. We can get here even if there are queued events, so it
+        must either have a high priority ticket, or there must be not too many
+        busy threads (as if it was coming from a low priority queue).
+      */
+      if (connection->tickets > 0 &&
+          thd_is_transaction_active(connection->thd))
+        connection->tickets--;
+      else if (too_many_busy_threads(thread_group))
+      {
+        /*
+          Not eligible for high priority processing. Restore tickets and put
+          it into the low priority queue.
+        */
+
+        connection->tickets=
+          connection->thd->variables.threadpool_high_prio_tickets;
+        thread_group->queue.push_back(connection);
+        connection= NULL;
+      }
+
+      if (connection)
+      {
+        thread_group->queue_event_count++;
+        break;
+      }
     }
   }

@@ -1167,13 +1228,14 @@
   DBUG_ENTER("wait_begin");
   mysql_mutex_lock(&thread_group->mutex);
   thread_group->active_thread_count--;
-
+  thread_group->waiting_thread_count++;
+
   DBUG_ASSERT(thread_group->active_thread_count >=0);
   DBUG_ASSERT(thread_group->connection_count > 0);
-
+
+#ifdef THREADPOOL_CREATE_THREADS_ON_WAIT
   if ((thread_group->active_thread_count == 0) &&
-      (thread_group->high_prio_queue.is_empty() ||
-       thread_group->queue.is_empty() || !thread_group->listener))
+      (!queues_are_empty(thread_group) || !thread_group->listener))
   {
     /*
       Group might stall while this thread waits, thus wake
@@ -1181,7 +1243,8 @@
     */
     wake_or_create_thread(thread_group);
   }
-
+#endif
+
   mysql_mutex_unlock(&thread_group->mutex);
   DBUG_VOID_RETURN;
 }
@@ -1195,6 +1258,7 @@
   DBUG_ENTER("wait_end");
   mysql_mutex_lock(&thread_group->mutex);
   thread_group->active_thread_count++;
+  thread_group->waiting_thread_count--;
   mysql_mutex_unlock(&thread_group->mutex);
   DBUG_VOID_RETURN;
 }
