Merge lp:~percona-dev/percona-server/atomic-fixes-51 into lp:percona-server/5.1
Status: Merged
Approved by: Stewart Smith
Approved revision: no longer in the source branch
Merged at revision: 234
Proposed branch: lp:~percona-dev/percona-server/atomic-fixes-51
Merge into: lp:percona-server/5.1
Diff against target: 1468 lines (+1318/-93), 1 file modified
  response-time-distribution.patch (+1318/-93)
To merge this branch: bzr merge lp:~percona-dev/percona-server/atomic-fixes-51
Related bugs:
Reviewer | Review Type | Status
---|---|---
Valentine Gostev (community) | qa | Needs Fixing
Laurynas Biveinis (community) | code | Approve

Review via email: mp+63953@code.launchpad.net
This proposal supersedes a proposal from 2011-06-07.
Commit message
Description of the change
Fix LP bug 737947: unportable atomic constructs break build on various platforms.
The fix is to remove query_response_
Oleg Tsarev (tsarev) wrote (posted in a previous version of this proposal):
Laurynas Biveinis (laurynas-biveinis) wrote:
Marking code review as approved because it was already approved by Oleg.
Laurynas Biveinis (laurynas-biveinis) wrote:
Re-submitting MP to target the correct branch. No changes from the superseded MP.
Valentine Gostev (longbow) wrote:
Laurynas, the following failing tests have been discovered in comparison with the 5.1.57 release branch:
main.loaddata [ fail ]
Test ended at 2011-06-09 16:29:17
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1 CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
The result from queries just before the failure was:
< snip >
SELECT 'test' INTO OUTFILE 't1.txt';
LOAD DATA INFILE 't1.txt' IGNORE INTO TABLE t1 SET col0=col0;
SELECT * FROM t1;
col0
test
DROP TABLE t1;
#
# Bug #52512 : Assertion `! is_set()' in
# Diagnostics_
#
CREATE TABLE t1 (id INT NOT NULL);
LOAD DATA LOCAL INFILE 'tb.txt' INTO TABLE t1;
DROP TABLE t1;
#
# Bug #51876 : crash/memory underrun when loading data with ucs2
# and reverse() function
#
# Problem # 1 (original report): wrong parsing of ucs2 data
SELECT '00' UNION SELECT '10' INTO OUTFILE 'tmpp.txt';
CREATE TABLE t1(a INT);
More results from queries before failure can be found in /root/atom51/
- saving '/root/
-------
The servers were restarted 0 times
Spent 0.000 of 17 seconds executing testcases
Completed: Failed 1/1 tests, 0.00% were successful.
Failing test(s): main.loaddata
Valentine Gostev (longbow) wrote:
CURRENT_TEST: main.mysql_
mysqltest: At line 15: command "$MYSQL_CLIENT_TEST --getopt-
Output from before failure:
exec of '/root/
The result from queries just before the failure was:
SET @old_general_log= @@global.
SET @old_slow_
- saving '/root/
-------
The servers were restarted 0 times
Spent 0.000 of 8 seconds executing testcases
Completed: Failed 1/1 tests, 0.00% were successful.
Failing test(s): main.mysql_
Laurynas Biveinis (laurynas-biveinis) wrote:
As discussed on IRC, the first failure is in fact bug 726708.
Laurynas Biveinis (laurynas-biveinis) wrote:
As discussed on IRC, the second failure is a false positive.
Valentine Gostev (longbow) wrote:
The build fails to complete on FreeBSD 8.1 amd64.
How to reproduce:
1. branch the subject tree
2. cd <branch dir>
3. run:
make all
export CFLAGS="-O2 -g -fmessage-length=0 -D_FORTIFY_
export CXXFLAGS="-O2 -g -fmessage-length=0 -D_FORTIFY_
export LIBS=-lrt
cd Percona-
./configure --without-
make
after that the build hangs on:
... snip ...
Making all in sql
/bin/sh ../ylwrap sql_yacc.yy y.tab.c sql_yacc.cc y.tab.h sql_yacc.h y.output sql_yacc.output -- bison -y -p MYSQL -d -d --verbose
updating sql_yacc.h
updating sql_yacc.output
make gen_lex_hash
g++ -DMYSQL_SERVER -DDEFAULT_
mv -f .deps/gen_
/bin/sh ../libtool --preserve-dup-deps --tag=CXX --mode=link g++ -Wall -Wextra -Wunused -Wwrite-strings -Wno-strict-
libtool: link: g++ -Wall -Wextra -Wunused -Wwrite-strings -Wno-strict-
./gen_lex_hash > lex_hash.h-t
/bin/mv lex_hash.h-t lex_hash.h
/usr/bin/perl > patch_info.h
Laurynas Biveinis (laurynas-biveinis) wrote:
Valentine, use "gmake", not "make", to build.
Valentine Gostev (longbow) wrote:
Yep, my fault.
But it also fails with gmake:
g++ -DMYSQL_SERVER -DDEFAULT_
cc1plus: warnings being treated as errors
mysqld.cc: In function 'void* handle_
mysqld.cc:5260: warning: 'flags' may be used uninitialized in this function
mysqld.cc:5256: warning: 'sock' may be used uninitialized in this function
gmake[3]: *** [mysqld.o] Error 1
gmake[3]: Leaving directory `/root/
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/root/
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/root/
gmake: *** [all-recursive] Error 1
[root@fbsd-t1 ~/atomic-
2
Laurynas Biveinis (laurynas-biveinis) wrote:
I don't think this failure is related to the atomics. Does the Percona Server 5.1 trunk build on FreeBSD with the same configuration? If it doesn't, this should be reported separately, if it hasn't been already.
Stewart Smith (stewart) wrote:
There are problems on FreeBSD in the sample Jenkins run that need resolving, so for now I'm okay with this change.
Preview Diff
1 | === modified file 'response-time-distribution.patch' |
2 | --- response-time-distribution.patch 2011-05-12 11:00:37 +0000 |
3 | +++ response-time-distribution.patch 2011-06-21 03:01:23 +0000 |
4 | @@ -222,12 +222,9 @@ |
5 | diff -ruN a/sql/query_response_time.cc b/sql/query_response_time.cc |
6 | --- a/sql/query_response_time.cc 1970-01-01 00:00:00.000000000 +0000 |
7 | +++ b/sql/query_response_time.cc 2010-11-02 15:34:52.000000000 +0000 |
8 | -@@ -0,0 +1,369 @@ |
9 | -+#ifdef __FreeBSD__ |
10 | -+#include <sys/types.h> |
11 | -+#include <machine/atomic.h> |
12 | -+#endif // __FreeBSD__ |
13 | +@@ -0,0 +1,313 @@ |
14 | +#include "my_global.h" |
15 | ++#include "my_atomic.h" |
16 | +#ifdef HAVE_RESPONSE_TIME_DISTRIBUTION |
17 | +#include "mysql_priv.h" |
18 | +#include "mysql_com.h" |
19 | @@ -378,104 +375,45 @@ |
20 | + } |
21 | + buffer[result_length]= 0; |
22 | +} |
23 | -+#ifdef __x86_64__ |
24 | -+typedef uint64 TimeCounter; |
25 | -+void add_time_atomic(TimeCounter* counter, uint64 time) |
26 | -+{ |
27 | -+ __sync_fetch_and_add(counter,time); |
28 | -+} |
29 | -+#endif // __x86_64__ |
30 | -+#ifdef __i386__ |
31 | -+inline uint32 get_high(uint64 value) |
32 | -+{ |
33 | -+ return ((value >> 32) << 32); |
34 | -+} |
35 | -+inline uint32 get_low(uint64 value) |
36 | -+{ |
37 | -+ return ((value << 32) >> 32); |
38 | -+} |
39 | -+#ifdef __FreeBSD__ |
40 | -+inline bool compare_and_swap(volatile uint32 *target, uint32 old, uint32 new_value) |
41 | -+{ |
42 | -+ return atomic_cmpset_32(target,old,new_value); |
43 | -+} |
44 | -+#else // __FreeBSD__ |
45 | -+inline bool compare_and_swap(volatile uint32* target, uint32 old, uint32 new_value) |
46 | -+{ |
47 | -+ return __sync_bool_compare_and_swap(target,old,new_value); |
48 | -+} |
49 | -+#endif // __FreeBSD__ |
50 | -+class TimeCounter |
51 | -+{ |
52 | -+public: |
53 | -+ TimeCounter& operator=(uint64 time) |
54 | -+ { |
55 | -+ this->m_high= get_high(time); |
56 | -+ this->m_low= get_low(time); |
57 | -+ return *this; |
58 | -+ } |
59 | -+ operator uint64() const |
60 | -+ { |
61 | -+ return ((static_cast<uint64>(m_high) << 32) + static_cast<uint64>(m_low)); |
62 | -+ } |
63 | -+ void add(uint64 time) |
64 | -+ { |
65 | -+ uint32 time_high = get_high(time); |
66 | -+ uint32 time_low = get_low(time); |
67 | -+ uint64 time_low64= time_low; |
68 | -+ while(true) |
69 | -+ { |
70 | -+ uint32 old_low= this->m_low; |
71 | -+ uint64 old_low64= old_low; |
72 | -+ |
73 | -+ uint64 new_low64= old_low64 + time_low64; |
74 | -+ uint32 new_low= (get_low(new_low64)); |
75 | -+ bool add_high= (get_high(new_low64) != 0); |
76 | -+ |
77 | -+ if(!compare_and_swap(&m_low,old_low,new_low)) |
78 | -+ { |
79 | -+ continue; |
80 | -+ } |
81 | -+ if(add_high) |
82 | -+ { |
83 | -+ ++time_high; |
84 | -+ } |
85 | -+ if(time_high > 0) |
86 | -+ { |
87 | -+ __sync_fetch_and_add(&m_high,time_high); |
88 | -+ } |
89 | -+ break; |
90 | -+ } |
91 | -+ } |
92 | -+private: |
93 | -+ uint32 m_low; |
94 | -+ uint32 m_high; |
95 | -+}; |
96 | -+void add_time_atomic(TimeCounter* counter, uint64 time) |
97 | -+{ |
98 | -+ counter->add(time); |
99 | -+} |
100 | -+#endif // __i386__ |
101 | + |
102 | +class time_collector |
103 | +{ |
104 | +public: |
105 | + time_collector(utility& u) : m_utility(&u) |
106 | + { |
107 | -+ } |
108 | -+ uint32 count(uint index) const { return m_count[index]; } |
109 | -+ uint64 total(uint index) const { return m_total[index]; } |
110 | ++ my_atomic_rwlock_init(&time_collector_lock); |
111 | ++ } |
112 | ++ ~time_collector() |
113 | ++ { |
114 | ++ my_atomic_rwlock_destroy(&time_collector_lock); |
115 | ++ } |
116 | ++ uint32 count(uint index) const |
117 | ++ { |
118 | ++ my_atomic_rwlock_rdlock(&time_collector_lock); |
119 | ++ uint32 result= my_atomic_load32((volatile int32*)&m_count[index]); |
120 | ++ my_atomic_rwlock_rdunlock(&time_collector_lock); |
121 | ++ return result; |
122 | ++ } |
123 | ++ uint64 total(uint index) const |
124 | ++ { |
125 | ++ my_atomic_rwlock_rdlock(&time_collector_lock); |
126 | ++ uint64 result= my_atomic_load64((volatile int64*)&m_total[index]); |
127 | ++ my_atomic_rwlock_rdunlock(&time_collector_lock); |
128 | ++ return result; |
129 | ++ } |
130 | +public: |
131 | + void flush() |
132 | + { |
133 | -+ memset(&m_count,0,sizeof(m_count)); |
134 | ++ my_atomic_rwlock_wrlock(&time_collector_lock); |
135 | ++ memset((void*)&m_count,0,sizeof(m_count)); |
136 | + memset((void*)&m_total,0,sizeof(m_total)); |
137 | ++ my_atomic_rwlock_wrunlock(&time_collector_lock); |
138 | + } |
139 | + void collect(uint64 time) |
140 | + { |
141 | + bool no_collect= false; |
142 | + DBUG_EXECUTE_IF("response_time_distribution_log_only_more_300_milliseconds", { \ |
143 | -+ no_collect= time < 300 * 1000; \ |
144 | ++ no_collect= time < 300 * 1000; \ |
145 | + }); |
146 | + if(no_collect) return; |
147 | + int i= 0; |
148 | @@ -483,16 +421,22 @@ |
149 | + { |
150 | + if(m_utility->bound(i) > time) |
151 | + { |
152 | -+ __sync_fetch_and_add(&(m_count[i]),(uint32)1); |
153 | -+ add_time_atomic(&(m_total[i]),time); |
154 | ++ my_atomic_rwlock_wrlock(&time_collector_lock); |
155 | ++ my_atomic_add32((volatile int32*)(&m_count[i]), 1); |
156 | ++ my_atomic_add64((volatile int64*)(&m_total[i]), time); |
157 | ++ my_atomic_rwlock_wrunlock(&time_collector_lock); |
158 | + break; |
159 | + } |
160 | + } |
161 | + } |
162 | +private: |
163 | + utility* m_utility; |
164 | -+ uint32 m_count[OVERALL_POWER_COUNT + 1]; |
165 | -+ TimeCounter m_total[OVERALL_POWER_COUNT + 1]; |
166 | ++ /* The lock for atomic operations on m_count and m_total. Only actually |
167 | ++ used on architectures that do not have a native atomic implementation |
168 | ++ of these operations. */ |
169 | ++ my_atomic_rwlock_t time_collector_lock; |
170 | ++ volatile uint32 m_count[OVERALL_POWER_COUNT + 1]; |
171 | ++ volatile uint64 m_total[OVERALL_POWER_COUNT + 1]; |
172 | +}; |
173 | + |
174 | +class collector |
175 | @@ -869,7 +813,77 @@ |
176 | diff -ruN a/configure.in b/configure.in |
177 | --- a/configure.in 2010-12-07 19:19:42.000000000 +0300 |
178 | +++ b/configure.in 2010-12-07 19:21:39.000000000 +0300 |
179 | -@@ -2687,7 +2687,16 @@ |
180 | +@@ -1738,6 +1738,7 @@ |
181 | + int main() |
182 | + { |
183 | + int foo= -10; int bar= 10; |
184 | ++ long long int foo64= -10; long long int bar64= 10; |
185 | + if (!__sync_fetch_and_add(&foo, bar) || foo) |
186 | + return -1; |
187 | + bar= __sync_lock_test_and_set(&foo, bar); |
188 | +@@ -1746,6 +1747,14 @@ |
189 | + bar= __sync_val_compare_and_swap(&bar, foo, 15); |
190 | + if (bar) |
191 | + return -1; |
192 | ++ if (!__sync_fetch_and_add(&foo64, bar64) || foo64) |
193 | ++ return -1; |
194 | ++ bar64= __sync_lock_test_and_set(&foo64, bar64); |
195 | ++ if (bar64 || foo64 != 10) |
196 | ++ return -1; |
197 | ++ bar64= __sync_val_compare_and_swap(&bar64, foo, 15); |
198 | ++ if (bar64) |
199 | ++ return -1; |
200 | + return 0; |
201 | + } |
202 | + ], [mysql_cv_gcc_atomic_builtins=yes], |
203 | +@@ -1757,6 +1766,46 @@ |
204 | + [Define to 1 if compiler provides atomic builtins.]) |
205 | + fi |
206 | + |
207 | ++AC_CACHE_CHECK([whether the OS provides atomic_* functions like Solaris], |
208 | ++ [mysql_cv_solaris_atomic], |
209 | ++ [AC_RUN_IFELSE( |
210 | ++ [AC_LANG_PROGRAM( |
211 | ++ [[ |
212 | ++ #include <atomic.h> |
213 | ++ ]], |
214 | ++ [[ |
215 | ++ int foo = -10; int bar = 10; |
216 | ++ int64_t foo64 = -10; int64_t bar64 = 10; |
217 | ++ if (atomic_add_int_nv((uint_t *)&foo, bar) || foo) |
218 | ++ return -1; |
219 | ++ bar = atomic_swap_uint((uint_t *)&foo, (uint_t)bar); |
220 | ++ if (bar || foo != 10) |
221 | ++ return -1; |
222 | ++ bar = atomic_cas_uint((uint_t *)&bar, (uint_t)foo, 15); |
223 | ++ if (bar) |
224 | ++ return -1; |
225 | ++ if (atomic_add_64_nv((volatile uint64_t *)&foo64, bar64) || foo64) |
226 | ++ return -1; |
227 | ++ bar64 = atomic_swap_64((volatile uint64_t *)&foo64, (uint64_t)bar64); |
228 | ++ if (bar64 || foo64 != 10) |
229 | ++ return -1; |
230 | ++ bar64 = atomic_cas_64((volatile uint64_t *)&bar64, (uint_t)foo64, 15); |
231 | ++ if (bar64) |
232 | ++ return -1; |
233 | ++ atomic_or_64((volatile uint64_t *)&bar64, 0); |
234 | ++ return 0; |
235 | ++ ]] |
236 | ++ )], |
237 | ++ [mysql_cv_solaris_atomic=yes], |
238 | ++ [mysql_cv_solaris_atomic=no], |
239 | ++ [mysql_cv_solaris_atomic=no] |
240 | ++)]) |
241 | ++ |
242 | ++if test "x$mysql_cv_solaris_atomic" = xyes; then |
243 | ++ AC_DEFINE(HAVE_SOLARIS_ATOMIC, 1, |
244 | ++ [Define to 1 if OS provides atomic_* functions like Solaris.]) |
245 | ++fi |
246 | ++ |
247 | + # Force static compilation to avoid linking problems/get more speed |
248 | + AC_ARG_WITH(mysqld-ldflags, |
249 | + [ --with-mysqld-ldflags Extra linking arguments for mysqld], |
250 | +@@ -2687,7 +2736,16 @@ |
251 | AC_SUBST(readline_link) |
252 | AC_SUBST(readline_h_ln_cmd) |
253 | |
254 | @@ -886,3 +900,1214 @@ |
255 | |
256 | # Include man pages, if desired, adapted to the configured parts. |
257 | if test X"$with_man" = Xyes |
258 | +diff -ruN /dev/null b/include/my_atomic.h |
259 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
260 | ++++ b/include/my_atomic.h 2011-06-07 08:59:00.000000000 +0300 |
261 | +@@ -0,0 +1,287 @@ |
262 | ++#ifndef MY_ATOMIC_INCLUDED |
263 | ++#define MY_ATOMIC_INCLUDED |
264 | ++ |
265 | ++/* Copyright (C) 2006 MySQL AB |
266 | ++ |
267 | ++ This program is free software; you can redistribute it and/or modify |
268 | ++ it under the terms of the GNU General Public License as published by |
269 | ++ the Free Software Foundation; version 2 of the License. |
270 | ++ |
271 | ++ This program is distributed in the hope that it will be useful, |
272 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
273 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
274 | ++ GNU General Public License for more details. |
275 | ++ |
276 | ++ You should have received a copy of the GNU General Public License |
277 | ++ along with this program; if not, write to the Free Software |
278 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
279 | ++ |
280 | ++/* |
281 | ++ This header defines five atomic operations: |
282 | ++ |
283 | ++ my_atomic_add#(&var, what) |
284 | ++ 'Fetch and Add' |
285 | ++ add 'what' to *var, and return the old value of *var |
286 | ++ |
287 | ++ my_atomic_fas#(&var, what) |
288 | ++ 'Fetch And Store' |
289 | ++ store 'what' in *var, and return the old value of *var |
290 | ++ |
291 | ++ my_atomic_cas#(&var, &old, new) |
292 | ++ An odd variation of 'Compare And Set/Swap' |
293 | ++ if *var is equal to *old, then store 'new' in *var, and return TRUE |
294 | ++ otherwise store *var in *old, and return FALSE |
295 | ++ Usually, &old should not be accessed if the operation is successful. |
296 | ++ |
297 | ++ my_atomic_load#(&var) |
298 | ++ return *var |
299 | ++ |
300 | ++ my_atomic_store#(&var, what) |
301 | ++ store 'what' in *var |
302 | ++ |
303 | ++ '#' is substituted by a size suffix - 8, 16, 32, 64, or ptr |
304 | ++ (e.g. my_atomic_add8, my_atomic_fas32, my_atomic_casptr). |
305 | ++ |
306 | ++ NOTE These operations are not always atomic, so they must always be |
307 | ++ enclosed in my_atomic_rwlock_rdlock(lock)/my_atomic_rwlock_rdunlock(lock) |
308 | ++ or my_atomic_rwlock_wrlock(lock)/my_atomic_rwlock_wrunlock(lock). |
309 | ++ Hint: if a code block makes intensive use of atomic ops, it makes sense |
310 | ++ to take/release rwlock once for the whole block, not for every statement. |
311 | ++ |
312 | ++ On architectures where these operations are really atomic, rwlocks will |
313 | ++ be optimized away. |
314 | ++ 8- and 16-bit atomics aren't implemented for Windows (see generic-msvc.h), |
315 | ++ but can be added, if necessary. |
316 | ++*/ |
317 | ++ |
318 | ++#ifndef my_atomic_rwlock_init |
319 | ++ |
320 | ++#define intptr void * |
321 | ++/** |
322 | ++ Currently we don't support 8-bit and 16-bit operations. |
323 | ++ It can be added later if needed. |
324 | ++*/ |
325 | ++#undef MY_ATOMIC_HAS_8_16 |
326 | ++ |
327 | ++#ifndef MY_ATOMIC_MODE_RWLOCKS |
328 | ++/* |
329 | ++ * Attempt to do atomic ops without locks |
330 | ++ */ |
331 | ++#include "atomic/nolock.h" |
332 | ++#endif |
333 | ++ |
334 | ++#ifndef make_atomic_cas_body |
335 | ++/* nolock.h was not able to generate even a CAS function, fall back */ |
336 | ++#include "atomic/rwlock.h" |
337 | ++#endif |
338 | ++ |
339 | ++/* define missing functions by using the already generated ones */ |
340 | ++#ifndef make_atomic_add_body |
341 | ++#define make_atomic_add_body(S) \ |
342 | ++ int ## S tmp=*a; \ |
343 | ++ while (!my_atomic_cas ## S(a, &tmp, tmp+v)) ; \ |
344 | ++ v=tmp; |
345 | ++#endif |
346 | ++#ifndef make_atomic_fas_body |
347 | ++#define make_atomic_fas_body(S) \ |
348 | ++ int ## S tmp=*a; \ |
349 | ++ while (!my_atomic_cas ## S(a, &tmp, v)) ; \ |
350 | ++ v=tmp; |
351 | ++#endif |
352 | ++#ifndef make_atomic_load_body |
353 | ++#define make_atomic_load_body(S) \ |
354 | ++ ret= 0; /* avoid compiler warning */ \ |
355 | ++ (void)(my_atomic_cas ## S(a, &ret, ret)); |
356 | ++#endif |
357 | ++#ifndef make_atomic_store_body |
358 | ++#define make_atomic_store_body(S) \ |
359 | ++ (void)(my_atomic_fas ## S (a, v)); |
360 | ++#endif |
361 | ++ |
362 | ++/* |
363 | ++ transparent_union doesn't work in g++ |
364 | ++ Bug ? |
365 | ++ |
366 | ++ Darwin's gcc doesn't want to put pointers in a transparent_union |
367 | ++ when built with -arch ppc64. Complains: |
368 | ++ warning: 'transparent_union' attribute ignored |
369 | ++*/ |
370 | ++#if defined(__GNUC__) && !defined(__cplusplus) && \ |
371 | ++ ! (defined(__APPLE__) && (defined(_ARCH_PPC64) ||defined (_ARCH_PPC))) |
372 | ++/* |
373 | ++ we want to be able to use my_atomic_xxx functions with |
374 | ++ both signed and unsigned integers. But gcc will issue a warning |
375 | ++ "passing arg N of `my_atomic_XXX' as [un]signed due to prototype" |
376 | ++ if the signedness of the argument doesn't match the prototype, or |
377 | ++ "pointer targets in passing argument N of my_atomic_XXX differ in signedness" |
378 | ++ if int* is used where uint* is expected (or vice versa). |
379 | ++ Let's shut these warnings up |
380 | ++*/ |
381 | ++#define make_transparent_unions(S) \ |
382 | ++ typedef union { \ |
383 | ++ int ## S i; \ |
384 | ++ uint ## S u; \ |
385 | ++ } U_ ## S __attribute__ ((transparent_union)); \ |
386 | ++ typedef union { \ |
387 | ++ int ## S volatile *i; \ |
388 | ++ uint ## S volatile *u; \ |
389 | ++ } Uv_ ## S __attribute__ ((transparent_union)); |
390 | ++#define uintptr intptr |
391 | ++make_transparent_unions(8) |
392 | ++make_transparent_unions(16) |
393 | ++make_transparent_unions(32) |
394 | ++make_transparent_unions(64) |
395 | ++make_transparent_unions(ptr) |
396 | ++#undef uintptr |
397 | ++#undef make_transparent_unions |
398 | ++#define a U_a.i |
399 | ++#define cmp U_cmp.i |
400 | ++#define v U_v.i |
401 | ++#define set U_set.i |
402 | ++#else |
403 | ++#define U_8 int8 |
404 | ++#define U_16 int16 |
405 | ++#define U_32 int32 |
406 | ++#define U_64 int64 |
407 | ++#define U_ptr intptr |
408 | ++#define Uv_8 int8 |
409 | ++#define Uv_16 int16 |
410 | ++#define Uv_32 int32 |
411 | ++#define Uv_64 int64 |
412 | ++#define Uv_ptr intptr |
413 | ++#define U_a volatile *a |
414 | ++#define U_cmp *cmp |
415 | ++#define U_v v |
416 | ++#define U_set set |
417 | ++#endif /* __GCC__ transparent_union magic */ |
418 | ++ |
419 | ++#define make_atomic_cas(S) \ |
420 | ++static inline int my_atomic_cas ## S(Uv_ ## S U_a, \ |
421 | ++ Uv_ ## S U_cmp, U_ ## S U_set) \ |
422 | ++{ \ |
423 | ++ int8 ret; \ |
424 | ++ make_atomic_cas_body(S); \ |
425 | ++ return ret; \ |
426 | ++} |
427 | ++ |
428 | ++#define make_atomic_add(S) \ |
429 | ++static inline int ## S my_atomic_add ## S( \ |
430 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
431 | ++{ \ |
432 | ++ make_atomic_add_body(S); \ |
433 | ++ return v; \ |
434 | ++} |
435 | ++ |
436 | ++#define make_atomic_fas(S) \ |
437 | ++static inline int ## S my_atomic_fas ## S( \ |
438 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
439 | ++{ \ |
440 | ++ make_atomic_fas_body(S); \ |
441 | ++ return v; \ |
442 | ++} |
443 | ++ |
444 | ++#define make_atomic_load(S) \ |
445 | ++static inline int ## S my_atomic_load ## S(Uv_ ## S U_a) \ |
446 | ++{ \ |
447 | ++ int ## S ret; \ |
448 | ++ make_atomic_load_body(S); \ |
449 | ++ return ret; \ |
450 | ++} |
451 | ++ |
452 | ++#define make_atomic_store(S) \ |
453 | ++static inline void my_atomic_store ## S( \ |
454 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
455 | ++{ \ |
456 | ++ make_atomic_store_body(S); \ |
457 | ++} |
458 | ++ |
459 | ++#ifdef MY_ATOMIC_HAS_8_16 |
460 | ++make_atomic_cas(8) |
461 | ++make_atomic_cas(16) |
462 | ++#endif |
463 | ++make_atomic_cas(32) |
464 | ++make_atomic_cas(64) |
465 | ++make_atomic_cas(ptr) |
466 | ++ |
467 | ++#ifdef MY_ATOMIC_HAS_8_16 |
468 | ++make_atomic_add(8) |
469 | ++make_atomic_add(16) |
470 | ++#endif |
471 | ++make_atomic_add(32) |
472 | ++make_atomic_add(64) |
473 | ++ |
474 | ++#ifdef MY_ATOMIC_HAS_8_16 |
475 | ++make_atomic_load(8) |
476 | ++make_atomic_load(16) |
477 | ++#endif |
478 | ++make_atomic_load(32) |
479 | ++make_atomic_load(64) |
480 | ++make_atomic_load(ptr) |
481 | ++ |
482 | ++#ifdef MY_ATOMIC_HAS_8_16 |
483 | ++make_atomic_fas(8) |
484 | ++make_atomic_fas(16) |
485 | ++#endif |
486 | ++make_atomic_fas(32) |
487 | ++make_atomic_fas(64) |
488 | ++make_atomic_fas(ptr) |
489 | ++ |
490 | ++#ifdef MY_ATOMIC_HAS_8_16 |
491 | ++make_atomic_store(8) |
492 | ++make_atomic_store(16) |
493 | ++#endif |
494 | ++make_atomic_store(32) |
495 | ++make_atomic_store(64) |
496 | ++make_atomic_store(ptr) |
497 | ++ |
498 | ++#ifdef _atomic_h_cleanup_ |
499 | ++#include _atomic_h_cleanup_ |
500 | ++#undef _atomic_h_cleanup_ |
501 | ++#endif |
502 | ++ |
503 | ++#undef U_8 |
504 | ++#undef U_16 |
505 | ++#undef U_32 |
506 | ++#undef U_64 |
507 | ++#undef U_ptr |
508 | ++#undef Uv_8 |
509 | ++#undef Uv_16 |
510 | ++#undef Uv_32 |
511 | ++#undef Uv_64 |
512 | ++#undef Uv_ptr |
513 | ++#undef a |
514 | ++#undef cmp |
515 | ++#undef v |
516 | ++#undef set |
517 | ++#undef U_a |
518 | ++#undef U_cmp |
519 | ++#undef U_v |
520 | ++#undef U_set |
521 | ++#undef make_atomic_add |
522 | ++#undef make_atomic_cas |
523 | ++#undef make_atomic_load |
524 | ++#undef make_atomic_store |
525 | ++#undef make_atomic_fas |
526 | ++#undef make_atomic_add_body |
527 | ++#undef make_atomic_cas_body |
528 | ++#undef make_atomic_load_body |
529 | ++#undef make_atomic_store_body |
530 | ++#undef make_atomic_fas_body |
531 | ++#undef intptr |
532 | ++ |
533 | ++/* |
534 | ++ the macro below defines (as an expression) the code that |
535 | ++ will be run in spin-loops. Intel manuals recommend having PAUSE there. |
536 | ++ It is expected to be defined in include/atomic/ *.h files |
537 | ++*/ |
538 | ++#ifndef LF_BACKOFF |
539 | ++#define LF_BACKOFF (1) |
540 | ++#endif |
541 | ++ |
542 | ++#define MY_ATOMIC_OK 0 |
543 | ++#define MY_ATOMIC_NOT_1CPU 1 |
544 | ++extern int my_atomic_initialize(); |
545 | ++ |
546 | ++#endif |
547 | ++ |
548 | ++#endif /* MY_ATOMIC_INCLUDED */ |
549 | +diff -ruN a/mysys/Makefile.am b/mysys/Makefile.am |
550 | +--- a/mysys/Makefile.am 2011-04-12 15:11:35.000000000 +0300 |
551 | ++++ b/mysys/Makefile.am 2011-06-07 08:59:00.432197996 +0300 |
552 | +@@ -51,7 +51,8 @@ |
553 | + rijndael.c my_aes.c sha1.c \ |
554 | + my_compare.c my_netware.c my_largepage.c \ |
555 | + my_memmem.c stacktrace.c \ |
556 | +- my_windac.c my_access.c base64.c my_libwrap.c |
557 | ++ my_windac.c my_access.c base64.c my_libwrap.c \ |
558 | ++ my_atomic.c |
559 | + |
560 | + if NEED_THREAD |
561 | + # mf_keycache is used only in the server, so it is safe to leave the file |
562 | +diff -ruN a/mysys/CMakeLists.txt b/mysys/CMakeLists.txt |
563 | +--- a/mysys/CMakeLists.txt 2011-04-12 15:11:35.000000000 +0300 |
564 | ++++ b/mysys/CMakeLists.txt 2011-06-07 08:59:00.432197996 +0300 |
565 | +@@ -41,7 +41,8 @@ |
566 | + my_static.c my_symlink.c my_symlink2.c my_sync.c my_thr_init.c my_wincond.c |
567 | + my_windac.c my_winthread.c my_write.c ptr_cmp.c queues.c stacktrace.c |
568 | + rijndael.c safemalloc.c sha1.c string.c thr_alarm.c thr_lock.c thr_mutex.c |
569 | +- thr_rwlock.c tree.c typelib.c my_vle.c base64.c my_memmem.c my_getpagesize.c) |
570 | ++ thr_rwlock.c tree.c typelib.c my_vle.c base64.c my_memmem.c my_getpagesize.c |
571 | ++ my_atomic.c) |
572 | + |
573 | + IF(NOT SOURCE_SUBLIBS) |
574 | + ADD_LIBRARY(mysys ${MYSYS_SOURCES}) |
575 | +diff -ruN a/include/Makefile.am b/include/Makefile.am |
576 | +--- a/include/Makefile.am 2011-04-12 15:11:35.000000000 +0300 |
577 | ++++ b/include/Makefile.am 2011-06-07 08:59:00.432197996 +0300 |
578 | +@@ -38,7 +38,10 @@ |
579 | + my_aes.h my_tree.h my_trie.h hash.h thr_alarm.h \ |
580 | + thr_lock.h t_ctype.h violite.h my_md5.h base64.h \ |
581 | + my_compare.h my_time.h my_vle.h my_user.h \ |
582 | +- my_libwrap.h my_stacktrace.h |
583 | ++ my_libwrap.h my_stacktrace.h my_atomic.h \ |
584 | ++ atomic/gcc_builtins.h atomic/generic-msvc.h \ |
585 | ++ atomic/nolock.h atomic/rwlock.h atomic/solaris.h \ |
586 | ++ atomic/x86-gcc.h |
587 | + |
588 | + EXTRA_DIST = mysql.h.pp mysql/plugin.h.pp |
589 | + |
590 | +diff -ruN /dev/null b/include/atomic/gcc_builtins.h |
591 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
592 | ++++ b/include/atomic/gcc_builtins.h 2011-06-07 08:59:00.000000000 +0300 |
593 | +@@ -0,0 +1,42 @@ |
594 | ++#ifndef ATOMIC_GCC_BUILTINS_INCLUDED |
595 | ++#define ATOMIC_GCC_BUILTINS_INCLUDED |
596 | ++ |
597 | ++/* Copyright (C) 2008 MySQL AB |
598 | ++ |
599 | ++ This program is free software; you can redistribute it and/or modify |
600 | ++ it under the terms of the GNU General Public License as published by |
601 | ++ the Free Software Foundation; version 2 of the License. |
602 | ++ |
603 | ++ This program is distributed in the hope that it will be useful, |
604 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
605 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
606 | ++ GNU General Public License for more details. |
607 | ++ |
608 | ++ You should have received a copy of the GNU General Public License |
609 | ++ along with this program; if not, write to the Free Software |
610 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
611 | ++ |
612 | ++#define make_atomic_add_body(S) \ |
613 | ++ v= __sync_fetch_and_add(a, v); |
614 | ++#define make_atomic_fas_body(S) \ |
615 | ++ v= __sync_lock_test_and_set(a, v); |
616 | ++#define make_atomic_cas_body(S) \ |
617 | ++ int ## S sav; \ |
618 | ++ int ## S cmp_val= *cmp; \ |
619 | ++ sav= __sync_val_compare_and_swap(a, cmp_val, set);\ |
620 | ++ if (!(ret= (sav == cmp_val))) *cmp= sav |
621 | ++ |
622 | ++#ifdef MY_ATOMIC_MODE_DUMMY |
623 | ++#define make_atomic_load_body(S) ret= *a |
624 | ++#define make_atomic_store_body(S) *a= v |
625 | ++#define MY_ATOMIC_MODE "gcc-builtins-up" |
626 | ++ |
627 | ++#else |
628 | ++#define MY_ATOMIC_MODE "gcc-builtins-smp" |
629 | ++#define make_atomic_load_body(S) \ |
630 | ++ ret= __sync_fetch_and_or(a, 0); |
631 | ++#define make_atomic_store_body(S) \ |
632 | ++ (void) __sync_lock_test_and_set(a, v); |
633 | ++#endif |
634 | ++ |
635 | ++#endif /* ATOMIC_GCC_BUILTINS_INCLUDED */ |
636 | +diff -ruN /dev/null b/include/atomic/generic-msvc.h |
637 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
638 | ++++ b/include/atomic/generic-msvc.h 2011-06-07 08:59:00.000000000 +0300 |
639 | +@@ -0,0 +1,134 @@ |
640 | ++/* Copyright (C) 2006-2008 MySQL AB, 2008-2009 Sun Microsystems, Inc. |
641 | ++ |
642 | ++ This program is free software; you can redistribute it and/or modify |
643 | ++ it under the terms of the GNU General Public License as published by |
644 | ++ the Free Software Foundation; version 2 of the License. |
645 | ++ |
646 | ++ This program is distributed in the hope that it will be useful, |
647 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
648 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
649 | ++ GNU General Public License for more details. |
650 | ++ |
651 | ++ You should have received a copy of the GNU General Public License |
652 | ++ along with this program; if not, write to the Free Software |
653 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
654 | ++ |
655 | ++#ifndef _atomic_h_cleanup_ |
656 | ++#define _atomic_h_cleanup_ "atomic/generic-msvc.h" |
657 | ++ |
658 | ++/* |
659 | ++ We don't implement anything specific for MY_ATOMIC_MODE_DUMMY, always use |
660 | ++ intrinsics. |
661 | ++ 8 and 16-bit atomics are not implemented, but it can be done if necessary. |
662 | ++*/ |
663 | ++#undef MY_ATOMIC_HAS_8_16 |
664 | ++ |
665 | ++#include <windows.h> |
666 | ++/* |
667 | ++ x86 compilers (both VS2003 and VS2005) never use intrinsics, but generate |
668 | ++ function calls to kernel32 instead, even in the optimized build. |
669 | ++ We force intrinsics as described in MSDN documentation for |
670 | ++ _InterlockedCompareExchange. |
671 | ++*/ |
672 | ++#ifdef _M_IX86 |
673 | ++ |
674 | ++#if (_MSC_VER >= 1500) |
675 | ++#include <intrin.h> |
676 | ++#else |
677 | ++C_MODE_START |
678 | ++/*Visual Studio 2003 and earlier do not have prototypes for atomic intrinsics*/ |
679 | ++LONG _InterlockedCompareExchange (LONG volatile *Target, LONG Value, LONG Comp); |
680 | ++LONGLONG _InterlockedCompareExchange64 (LONGLONG volatile *Target, |
681 | ++ LONGLONG Value, LONGLONG Comp); |
682 | ++C_MODE_END |
683 | ++ |
684 | ++#pragma intrinsic(_InterlockedCompareExchange) |
685 | ++#pragma intrinsic(_InterlockedCompareExchange64) |
686 | ++#endif |
687 | ++ |
688 | ++#define InterlockedCompareExchange _InterlockedCompareExchange |
689 | ++#define InterlockedCompareExchange64 _InterlockedCompareExchange64 |
690 | ++/* |
691 | ++ No need to do something special for InterlockedCompareExchangePointer |
692 | ++ as it is a #define to InterlockedCompareExchange. The same applies to |
693 | ++ InterlockedExchangePointer. |
694 | ++*/ |
695 | ++#endif /*_M_IX86*/ |
696 | ++ |
697 | ++#define MY_ATOMIC_MODE "msvc-intrinsics" |
698 | ++/* Implement using CAS on WIN32 */ |
699 | ++#define IL_COMP_EXCHG32(X,Y,Z) \ |
700 | ++ InterlockedCompareExchange((volatile LONG *)(X),(Y),(Z)) |
701 | ++#define IL_COMP_EXCHG64(X,Y,Z) \ |
702 | ++ InterlockedCompareExchange64((volatile LONGLONG *)(X), \ |
703 | ++ (LONGLONG)(Y),(LONGLONG)(Z)) |
704 | ++#define IL_COMP_EXCHGptr InterlockedCompareExchangePointer |
705 | ++ |
706 | ++#define make_atomic_cas_body(S) \ |
707 | ++ int ## S initial_cmp= *cmp; \ |
708 | ++ int ## S initial_a= IL_COMP_EXCHG ## S (a, set, initial_cmp); \ |
709 | ++ if (!(ret= (initial_a == initial_cmp))) *cmp= initial_a; |
710 | ++ |
711 | ++#ifndef _M_IX86 |
712 | ++/* Use full set of optimised functions on WIN64 */ |
713 | ++#define IL_EXCHG_ADD32(X,Y) \ |
714 | ++ InterlockedExchangeAdd((volatile LONG *)(X),(Y)) |
715 | ++#define IL_EXCHG_ADD64(X,Y) \ |
716 | ++ InterlockedExchangeAdd64((volatile LONGLONG *)(X),(LONGLONG)(Y)) |
717 | ++#define IL_EXCHG32(X,Y) \ |
718 | ++ InterlockedExchange((volatile LONG *)(X),(Y)) |
719 | ++#define IL_EXCHG64(X,Y) \ |
720 | ++ InterlockedExchange64((volatile LONGLONG *)(X),(LONGLONG)(Y)) |
721 | ++#define IL_EXCHGptr InterlockedExchangePointer |
722 | ++ |
723 | ++#define make_atomic_add_body(S) \ |
724 | ++ v= IL_EXCHG_ADD ## S (a, v) |
725 | ++#define make_atomic_swap_body(S) \ |
726 | ++ v= IL_EXCHG ## S (a, v) |
727 | ++#define make_atomic_load_body(S) \ |
728 | ++ ret= 0; /* avoid compiler warning */ \ |
729 | ++ ret= IL_COMP_EXCHG ## S (a, ret, ret); |
730 | ++#endif |
731 | ++/* |
732 | ++ my_yield_processor (the equivalent of the x86 PAUSE instruction) should be |
733 | ++ used to improve performance on hyperthreaded CPUs. Intel recommends using it |
734 | ++ in spin loops on non-HT machines as well, to reduce power consumption (see |
735 | ++ e.g. http://softwarecommunity.intel.com/articles/eng/2004.htm) |
736 | ++ |
737 | ++ Running benchmarks for spinlocks implemented with InterlockedCompareExchange |
738 | ++ and YieldProcessor shows that much better performance is achieved by calling |
739 | ++ YieldProcessor in a loop - that is, yielding longer. On Intel boxes, setting |
740 | ++ the loop count in the range 200-300 brought the best results. |
741 | ++ */ |
742 | ++#ifndef YIELD_LOOPS |
743 | ++#define YIELD_LOOPS 200 |
744 | ++#endif |
745 | ++ |
746 | ++static __inline int my_yield_processor() |
747 | ++{ |
748 | ++ int i; |
749 | ++ for(i=0; i<YIELD_LOOPS; i++) |
750 | ++ { |
751 | ++#if (_MSC_VER <= 1310) |
752 | ++ /* On older compilers YieldProcessor is not available, use inline assembly*/ |
753 | ++ __asm { rep nop } |
754 | ++#else |
755 | ++ YieldProcessor(); |
756 | ++#endif |
757 | ++ } |
758 | ++ return 1; |
759 | ++} |
760 | ++ |
761 | ++#define LF_BACKOFF my_yield_processor() |
762 | ++#else /* cleanup */ |
763 | ++ |
764 | ++#undef IL_EXCHG_ADD32 |
765 | ++#undef IL_EXCHG_ADD64 |
766 | ++#undef IL_COMP_EXCHG32 |
767 | ++#undef IL_COMP_EXCHG64 |
768 | ++#undef IL_COMP_EXCHGptr |
769 | ++#undef IL_EXCHG32 |
770 | ++#undef IL_EXCHG64 |
771 | ++#undef IL_EXCHGptr |
772 | ++ |
773 | ++#endif |
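The make_atomic_cas_body macro above encodes the CAS contract used throughout these ports: succeed and return nonzero when `*a` matched `*cmp`, otherwise return zero and write the value actually observed back into `*cmp`. A minimal sketch of the same contract in portable C, using the GCC `__sync_val_compare_and_swap` builtin as an illustrative stand-in for `InterlockedCompareExchange` (an assumption for this sketch, not part of the patch):

```c
#include <stdint.h>

/* Sketch of the CAS contract generated by make_atomic_cas_body(32):
   returns 1 and stores `set` if *a == *cmp; otherwise returns 0 and
   writes the value actually seen into *cmp. */
static int my_cas32(volatile int32_t *a, int32_t *cmp, int32_t set)
{
    int32_t initial_cmp = *cmp;
    /* __sync_val_compare_and_swap returns the value *a held before the op */
    int32_t initial_a = __sync_val_compare_and_swap(a, initial_cmp, set);
    int ret = (initial_a == initial_cmp);
    if (!ret)
        *cmp = initial_a;       /* report what we saw, like the patch does */
    return ret;
}
```

On failure the caller thus learns the current value without issuing a separate load, which is what makes the CAS retry loops elsewhere in the patch efficient.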
774 | +diff -ruN /dev/null b/include/atomic/nolock.h |
775 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
776 | ++++ b/include/atomic/nolock.h 2011-06-07 08:59:00.000000000 +0300 |
777 | +@@ -0,0 +1,69 @@ |
778 | ++#ifndef ATOMIC_NOLOCK_INCLUDED |
779 | ++#define ATOMIC_NOLOCK_INCLUDED |
780 | ++ |
781 | ++/* Copyright (C) 2006 MySQL AB |
782 | ++ |
783 | ++ This program is free software; you can redistribute it and/or modify |
784 | ++ it under the terms of the GNU General Public License as published by |
785 | ++ the Free Software Foundation; version 2 of the License. |
786 | ++ |
787 | ++ This program is distributed in the hope that it will be useful, |
788 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
789 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
790 | ++ GNU General Public License for more details. |
791 | ++ |
792 | ++ You should have received a copy of the GNU General Public License |
793 | ++ along with this program; if not, write to the Free Software |
794 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
795 | ++ |
796 | ++#if defined(__i386__) || defined(_MSC_VER) || defined(__x86_64__) \ |
797 | ++ || defined(HAVE_GCC_ATOMIC_BUILTINS) \ |
798 | ++ || defined(HAVE_SOLARIS_ATOMIC) |
799 | ++ |
800 | ++# ifdef MY_ATOMIC_MODE_DUMMY |
801 | ++# define LOCK_prefix "" |
802 | ++# else |
803 | ++# define LOCK_prefix "lock" |
804 | ++# endif |
805 | ++/* |
806 | ++ We choose the implementation as follows: |
807 | ++ ---------------------------------------- |
808 | ++ On Windows with Visual C++, the native implementation is |
809 | ++ preferable. With gcc, we prefer the Solaris implementation for |
810 | ++ stability; failing that, we choose gcc builtins if available, |
811 | ++ and otherwise the somewhat broken native x86 implementation. |
812 | ++ With neither Visual C++ nor gcc, we still choose the Solaris |
813 | ++ implementation on Solaris (mainly for SunStudio |
814 | ++ compilers). |
815 | ++*/ |
816 | ++# if defined(_MSC_VER) |
817 | ++# include "generic-msvc.h" |
818 | ++# elif __GNUC__ |
819 | ++# if defined(HAVE_SOLARIS_ATOMIC) |
820 | ++# include "solaris.h" |
821 | ++# elif defined(HAVE_GCC_ATOMIC_BUILTINS) |
822 | ++# include "gcc_builtins.h" |
823 | ++# elif defined(__i386__) || defined(__x86_64__) |
824 | ++# include "x86-gcc.h" |
825 | ++# endif |
826 | ++# elif defined(HAVE_SOLARIS_ATOMIC) |
827 | ++# include "solaris.h" |
828 | ++# endif |
829 | ++#endif |
830 | ++ |
831 | ++#if defined(make_atomic_cas_body) |
832 | ++/* |
833 | ++ The type is not used, so keep it minimal in size (an empty struct has a |
834 | ++ different size in C and C++, and a zero-length array is gcc-specific). |
835 | ++*/ |
836 | ++typedef char my_atomic_rwlock_t __attribute__ ((unused)); |
837 | ++#define my_atomic_rwlock_destroy(name) |
838 | ++#define my_atomic_rwlock_init(name) |
839 | ++#define my_atomic_rwlock_rdlock(name) |
840 | ++#define my_atomic_rwlock_wrlock(name) |
841 | ++#define my_atomic_rwlock_rdunlock(name) |
842 | ++#define my_atomic_rwlock_wrunlock(name) |
843 | ++ |
844 | ++#endif |
845 | ++ |
846 | ++#endif /* ATOMIC_NOLOCK_INCLUDED */ |
847 | +diff -ruN /dev/null b/include/atomic/rwlock.h |
848 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
849 | ++++ b/include/atomic/rwlock.h 2011-06-07 08:59:00.000000000 +0300 |
850 | +@@ -0,0 +1,100 @@ |
851 | ++#ifndef ATOMIC_RWLOCK_INCLUDED |
852 | ++#define ATOMIC_RWLOCK_INCLUDED |
853 | ++ |
854 | ++/* Copyright (C) 2006 MySQL AB, 2009 Sun Microsystems, Inc. |
855 | ++ |
856 | ++ This program is free software; you can redistribute it and/or modify |
857 | ++ it under the terms of the GNU General Public License as published by |
858 | ++ the Free Software Foundation; version 2 of the License. |
859 | ++ |
860 | ++ This program is distributed in the hope that it will be useful, |
861 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
862 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
863 | ++ GNU General Public License for more details. |
864 | ++ |
865 | ++ You should have received a copy of the GNU General Public License |
866 | ++ along with this program; if not, write to the Free Software |
867 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
868 | ++ |
869 | ++#define MY_ATOMIC_MODE_RWLOCKS 1 |
870 | ++ |
871 | ++#ifdef MY_ATOMIC_MODE_DUMMY |
872 | ++/* |
873 | ++ the following can never be enabled by ./configure; one needs to put a #define |
874 | ++ in a source file to trigger the following warning. The resulting code will be |
875 | ++ broken; it only makes sense to do this to see how test_atomic detects broken |
876 | ++ implementations (another way is to run a UP build on an SMP box). |
877 | ++*/ |
878 | ++#warning MY_ATOMIC_MODE_DUMMY and MY_ATOMIC_MODE_RWLOCKS are incompatible |
879 | ++ |
880 | ++typedef char my_atomic_rwlock_t; |
881 | ++ |
882 | ++#define my_atomic_rwlock_destroy(name) |
883 | ++#define my_atomic_rwlock_init(name) |
884 | ++#define my_atomic_rwlock_rdlock(name) |
885 | ++#define my_atomic_rwlock_wrlock(name) |
886 | ++#define my_atomic_rwlock_rdunlock(name) |
887 | ++#define my_atomic_rwlock_wrunlock(name) |
888 | ++#define MY_ATOMIC_MODE "dummy (non-atomic)" |
889 | ++#else /* not MY_ATOMIC_MODE_DUMMY */ |
890 | ++ |
891 | ++typedef struct {pthread_mutex_t rw;} my_atomic_rwlock_t; |
892 | ++ |
893 | ++#ifndef SAFE_MUTEX |
894 | ++ |
895 | ++/* |
896 | ++ We're using read-write lock macros but map them to mutex locks, which are |
897 | ++ faster. Still, having a semantically rich API, we can change the |
898 | ++ underlying implementation if necessary. |
899 | ++*/ |
900 | ++#define my_atomic_rwlock_destroy(name) pthread_mutex_destroy(& (name)->rw) |
901 | ++#define my_atomic_rwlock_init(name) pthread_mutex_init(& (name)->rw, 0) |
902 | ++#define my_atomic_rwlock_rdlock(name) pthread_mutex_lock(& (name)->rw) |
903 | ++#define my_atomic_rwlock_wrlock(name) pthread_mutex_lock(& (name)->rw) |
904 | ++#define my_atomic_rwlock_rdunlock(name) pthread_mutex_unlock(& (name)->rw) |
905 | ++#define my_atomic_rwlock_wrunlock(name) pthread_mutex_unlock(& (name)->rw) |
906 | ++ |
907 | ++#else /* SAFE_MUTEX */ |
908 | ++ |
909 | ++/* |
910 | ++ SAFE_MUTEX pollutes the compilation namespace with macros |
911 | ++ that alter pthread_mutex_t, pthread_mutex_init, etc. |
912 | ++ Atomic operations should never use the safe mutex wrappers. |
913 | ++ Unfortunately, there is no way to have both: |
914 | ++ - safe mutex macros expanding pthread_mutex_lock to safe_mutex_lock |
915 | ++ - my_atomic macros expanding to unmodified pthread_mutex_lock |
916 | ++ inlined in the same compilation unit. |
917 | ++ So, in case of SAFE_MUTEX, a function call is required. |
918 | ++ Given that SAFE_MUTEX is a debugging facility, |
919 | ++ this extra function call is not a performance concern for |
920 | ++ production builds. |
921 | ++*/ |
922 | ++C_MODE_START |
923 | ++extern void plain_pthread_mutex_init(safe_mutex_t *); |
924 | ++extern void plain_pthread_mutex_destroy(safe_mutex_t *); |
925 | ++extern void plain_pthread_mutex_lock(safe_mutex_t *); |
926 | ++extern void plain_pthread_mutex_unlock(safe_mutex_t *); |
927 | ++C_MODE_END |
928 | ++ |
929 | ++#define my_atomic_rwlock_destroy(name) plain_pthread_mutex_destroy(&(name)->rw) |
930 | ++#define my_atomic_rwlock_init(name) plain_pthread_mutex_init(&(name)->rw) |
931 | ++#define my_atomic_rwlock_rdlock(name) plain_pthread_mutex_lock(&(name)->rw) |
932 | ++#define my_atomic_rwlock_wrlock(name) plain_pthread_mutex_lock(&(name)->rw) |
933 | ++#define my_atomic_rwlock_rdunlock(name) plain_pthread_mutex_unlock(&(name)->rw) |
934 | ++#define my_atomic_rwlock_wrunlock(name) plain_pthread_mutex_unlock(&(name)->rw) |
935 | ++ |
936 | ++#endif /* SAFE_MUTEX */ |
937 | ++ |
938 | ++#define MY_ATOMIC_MODE "mutex" |
939 | ++#ifndef MY_ATOMIC_MODE_RWLOCKS |
940 | ++#define MY_ATOMIC_MODE_RWLOCKS 1 |
941 | ++#endif |
942 | ++#endif |
943 | ++ |
944 | ++#define make_atomic_add_body(S) int ## S sav; sav= *a; *a+= v; v=sav; |
945 | ++#define make_atomic_fas_body(S) int ## S sav; sav= *a; *a= v; v=sav; |
946 | ++#define make_atomic_cas_body(S) if ((ret= (*a == *cmp))) *a= set; else *cmp=*a; |
947 | ++#define make_atomic_load_body(S) ret= *a; |
948 | ++#define make_atomic_store_body(S) *a= v; |
949 | ++ |
950 | ++#endif /* ATOMIC_RWLOCK_INCLUDED */ |
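In this rwlock fallback the "atomic" operation bodies above are plain reads and writes, made safe by bracketing them with the mutex macros. A minimal sketch of that path for a 32-bit add (pthread-based; the `locked_add32` name is illustrative, not from the patch):

```c
#include <pthread.h>
#include <stdint.h>

/* The atomic "rwlock" is really a mutex in this mode. */
typedef struct { pthread_mutex_t rw; } my_atomic_rwlock_t;

/* Plain read-modify-write under the mutex; returns the old value,
   matching my_atomic_add32 semantics. */
static int32_t locked_add32(my_atomic_rwlock_t *lk,
                            volatile int32_t *a, int32_t v)
{
    pthread_mutex_lock(&lk->rw);     /* my_atomic_rwlock_wrlock   */
    int32_t sav = *a;                /* make_atomic_add_body      */
    *a += v;
    pthread_mutex_unlock(&lk->rw);   /* my_atomic_rwlock_wrunlock */
    return sav;
}
```

As the header comment notes, on platforms with real atomics the lock macros expand to nothing, so the same caller code works in both modes.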
951 | +diff -ruN /dev/null b/include/atomic/solaris.h |
952 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
953 | ++++ b/include/atomic/solaris.h 2011-06-07 08:59:00.000000000 +0300 |
954 | +@@ -0,0 +1,72 @@ |
955 | ++/* Copyright (C) 2008 MySQL AB, 2009 Sun Microsystems, Inc |
956 | ++ |
957 | ++ This program is free software; you can redistribute it and/or modify |
958 | ++ it under the terms of the GNU General Public License as published by |
959 | ++ the Free Software Foundation; version 2 of the License. |
960 | ++ |
961 | ++ This program is distributed in the hope that it will be useful, |
962 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
963 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
964 | ++ GNU General Public License for more details. |
965 | ++ |
966 | ++ You should have received a copy of the GNU General Public License |
967 | ++ along with this program; if not, write to the Free Software |
968 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
969 | ++ |
970 | ++#ifndef _atomic_h_cleanup_ |
971 | ++#define _atomic_h_cleanup_ "atomic/solaris.h" |
972 | ++ |
973 | ++#include <atomic.h> |
974 | ++ |
975 | ++#define MY_ATOMIC_MODE "solaris-atomic" |
976 | ++ |
977 | ++#if defined(__GNUC__) |
978 | ++#define atomic_typeof(T,V) __typeof__(V) |
979 | ++#else |
980 | ++#define atomic_typeof(T,V) T |
981 | ++#endif |
982 | ++ |
983 | ++#define uintptr_t void * |
984 | ++#define atomic_or_ptr_nv(X,Y) (void *)atomic_or_ulong_nv((volatile ulong_t *)X, Y) |
985 | ++ |
986 | ++#define make_atomic_cas_body(S) \ |
987 | ++ atomic_typeof(uint ## S ## _t, *cmp) sav; \ |
988 | ++ sav = atomic_cas_ ## S( \ |
989 | ++ (volatile uint ## S ## _t *)a, \ |
990 | ++ (uint ## S ## _t)*cmp, \ |
991 | ++ (uint ## S ## _t)set); \ |
992 | ++ if (! (ret= (sav == *cmp))) \ |
993 | ++ *cmp= sav; |
994 | ++ |
995 | ++#define make_atomic_add_body(S) \ |
996 | ++ int ## S nv; /* new value */ \ |
997 | ++ nv= atomic_add_ ## S ## _nv((volatile uint ## S ## _t *)a, v); \ |
998 | ++ v= nv - v |
999 | ++ |
1000 | ++/* ------------------------------------------------------------------------ */ |
1001 | ++ |
1002 | ++#ifdef MY_ATOMIC_MODE_DUMMY |
1003 | ++ |
1004 | ++#define make_atomic_load_body(S) ret= *a |
1005 | ++#define make_atomic_store_body(S) *a= v |
1006 | ++ |
1007 | ++#else /* MY_ATOMIC_MODE_DUMMY */ |
1008 | ++ |
1009 | ++#define make_atomic_load_body(S) \ |
1010 | ++ ret= atomic_or_ ## S ## _nv((volatile uint ## S ## _t *)a, 0) |
1011 | ++ |
1012 | ++#define make_atomic_store_body(S) \ |
1013 | ++ (void) atomic_swap_ ## S((volatile uint ## S ## _t *)a, (uint ## S ## _t)v) |
1014 | ++ |
1015 | ++#endif |
1016 | ++ |
1017 | ++#define make_atomic_fas_body(S) \ |
1018 | ++ v= atomic_swap_ ## S((volatile uint ## S ## _t *)a, (uint ## S ## _t)v) |
1019 | ++ |
1020 | ++#else /* cleanup */ |
1021 | ++ |
1022 | ++#undef uintptr_t |
1023 | ++#undef atomic_or_ptr_nv |
1024 | ++ |
1025 | ++#endif |
1026 | ++ |
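The Solaris port above reads a value atomically by OR-ing it with 0: `atomic_or_32_nv(a, 0)` returns the unchanged current value with full atomicity. The same idiom sketched with the GCC `__sync_fetch_and_or` builtin (an illustrative stand-in for the Solaris call, not part of the patch):

```c
#include <stdint.h>

/* Atomic load via OR-with-zero: the OR leaves the value unchanged,
   but the atomic builtin returns the value it observed. */
static int32_t atomic_load_via_or(volatile int32_t *a)
{
    return __sync_fetch_and_or(a, 0);
}
```

(`atomic_or_32_nv` returns the new value and `__sync_fetch_and_or` the old one, but with an operand of 0 the two are identical.)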
1027 | +diff -ruN /dev/null b/include/atomic/x86-gcc.h |
1028 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
1029 | ++++ b/include/atomic/x86-gcc.h 2011-06-07 08:59:00.000000000 +0300 |
1030 | +@@ -0,0 +1,145 @@ |
1031 | ++#ifndef ATOMIC_X86_GCC_INCLUDED |
1032 | ++#define ATOMIC_X86_GCC_INCLUDED |
1033 | ++ |
1034 | ++/* Copyright (C) 2006 MySQL AB |
1035 | ++ |
1036 | ++ This program is free software; you can redistribute it and/or modify |
1037 | ++ it under the terms of the GNU General Public License as published by |
1038 | ++ the Free Software Foundation; version 2 of the License. |
1039 | ++ |
1040 | ++ This program is distributed in the hope that it will be useful, |
1041 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
1042 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1043 | ++ GNU General Public License for more details. |
1044 | ++ |
1045 | ++ You should have received a copy of the GNU General Public License |
1046 | ++ along with this program; if not, write to the Free Software |
1047 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
1048 | ++ |
1049 | ++/* |
1050 | ++ XXX 64-bit atomic operations can be implemented using |
1051 | ++ cmpxchg8b, if necessary. Though I've heard that not all 64-bit |
1052 | ++ architectures support double-word (128-bit) cas. |
1053 | ++*/ |
1054 | ++ |
1055 | ++/* |
1056 | ++ No special support for 8- and 16-bit operations is implemented here |
1057 | ++ currently. |
1058 | ++*/ |
1059 | ++#undef MY_ATOMIC_HAS_8_AND_16 |
1060 | ++ |
1061 | ++#ifdef __x86_64__ |
1062 | ++# ifdef MY_ATOMIC_NO_XADD |
1063 | ++# define MY_ATOMIC_MODE "gcc-amd64" LOCK_prefix "-no-xadd" |
1064 | ++# else |
1065 | ++# define MY_ATOMIC_MODE "gcc-amd64" LOCK_prefix |
1066 | ++# endif |
1067 | ++#else |
1068 | ++# ifdef MY_ATOMIC_NO_XADD |
1069 | ++# define MY_ATOMIC_MODE "gcc-x86" LOCK_prefix "-no-xadd" |
1070 | ++# else |
1071 | ++# define MY_ATOMIC_MODE "gcc-x86" LOCK_prefix |
1072 | ++# endif |
1073 | ++#endif |
1074 | ++ |
1075 | ++/* fix -ansi errors while maintaining readability */ |
1076 | ++#ifndef asm |
1077 | ++#define asm __asm__ |
1078 | ++#endif |
1079 | ++ |
1080 | ++#ifndef MY_ATOMIC_NO_XADD |
1081 | ++#define make_atomic_add_body(S) make_atomic_add_body ## S |
1082 | ++#define make_atomic_cas_body(S) make_atomic_cas_body ## S |
1083 | ++#endif |
1084 | ++ |
1085 | ++#define make_atomic_add_body32 \ |
1086 | ++ asm volatile (LOCK_prefix "; xadd %0, %1;" \ |
1087 | ++ : "+r" (v), "=m" (*a) \ |
1088 | ++ : "m" (*a) \ |
1089 | ++ : "memory") |
1090 | ++ |
1091 | ++#define make_atomic_cas_body32 \ |
1092 | ++ __typeof__(*cmp) sav; \ |
1093 | ++ asm volatile (LOCK_prefix "; cmpxchg %3, %0; setz %2;" \ |
1094 | ++ : "=m" (*a), "=a" (sav), "=q" (ret) \ |
1095 | ++ : "r" (set), "m" (*a), "a" (*cmp) \ |
1096 | ++ : "memory"); \ |
1097 | ++ if (!ret) \ |
1098 | ++ *cmp= sav |
1099 | ++ |
1100 | ++#ifdef __x86_64__ |
1101 | ++#define make_atomic_add_body64 make_atomic_add_body32 |
1102 | ++#define make_atomic_cas_body64 make_atomic_cas_body32 |
1103 | ++ |
1104 | ++#define make_atomic_fas_body(S) \ |
1105 | ++ asm volatile ("xchg %0, %1;" \ |
1106 | ++ : "+r" (v), "=m" (*a) \ |
1107 | ++ : "m" (*a) \ |
1108 | ++ : "memory") |
1109 | ++ |
1110 | ++/* |
1111 | ++ Actually 32/64-bit reads/writes are always atomic on x86_64, |
1112 | ++ nonetheless issue memory barriers as appropriate. |
1113 | ++*/ |
1114 | ++#define make_atomic_load_body(S) \ |
1115 | ++ /* Serialize prior load and store operations. */ \ |
1116 | ++ asm volatile ("mfence" ::: "memory"); \ |
1117 | ++ ret= *a; \ |
1118 | ++ /* Prevent compiler from reordering instructions. */ \ |
1119 | ++ asm volatile ("" ::: "memory") |
1120 | ++#define make_atomic_store_body(S) \ |
1121 | ++ asm volatile ("; xchg %0, %1;" \ |
1122 | ++ : "=m" (*a), "+r" (v) \ |
1123 | ++ : "m" (*a) \ |
1124 | ++ : "memory") |
1125 | ++ |
1126 | ++#else |
1127 | ++/* |
1128 | ++ Use the default implementations of the 64-bit operations: having |
1129 | ++ solved the 64-bit problem for CAS on 32-bit platforms, there is no |
1130 | ++ need to solve it once more for ADD, LOAD, STORE and FAS. |
1131 | ++ Since we already added add32 support, we need to define add64 |
1132 | ++ here, but fas, load and store are not defined at all, so |
1133 | ++ they can fall back on the default implementations. |
1134 | ++*/ |
1135 | ++#define make_atomic_add_body64 \ |
1136 | ++ int64 tmp=*a; \ |
1137 | ++ while (!my_atomic_cas64(a, &tmp, tmp+v)) ; \ |
1138 | ++ v=tmp; |
1139 | ++ |
1140 | ++/* |
1141 | ++ On some platforms (e.g. Mac OS X and Solaris) the ebx register |
1142 | ++ is held as a pointer to the global offset table. Thus we're not |
1143 | ++ allowed to use the b-register on those platforms when compiling |
1144 | ++ PIC code; to avoid this, we push ebx and pop ebx. The new value |
1145 | ++ is copied directly from memory to avoid problems with an implicit |
1146 | ++ manipulation of the stack pointer by the push. |
1147 | ++ |
1148 | ++ cmpxchg8b works on both 32-bit platforms and 64-bit platforms but |
1149 | ++ the code here is only used on 32-bit platforms, on 64-bit |
1150 | ++ platforms the much simpler make_atomic_cas_body32 will work |
1151 | ++ fine. |
1152 | ++*/ |
1153 | ++#define make_atomic_cas_body64 \ |
1154 | ++ asm volatile ("push %%ebx;" \ |
1155 | ++ "movl (%%ecx), %%ebx;" \ |
1156 | ++ "movl 4(%%ecx), %%ecx;" \ |
1157 | ++ LOCK_prefix "; cmpxchg8b %0;" \ |
1158 | ++ "setz %2; pop %%ebx" \ |
1159 | ++ : "=m" (*a), "+A" (*cmp), "=c" (ret) \ |
1160 | ++ : "c" (&set), "m" (*a) \ |
1161 | ++ : "memory", "esp") |
1162 | ++#endif |
1163 | ++ |
1164 | ++/* |
1165 | ++ The implementation of make_atomic_cas_body32 adapts to |
1166 | ++ the OS word size, so on 64-bit platforms it automatically |
1167 | ++ operates on 64 bits and can also serve for pointers. |
1168 | ++*/ |
1169 | ++#define make_atomic_cas_bodyptr make_atomic_cas_body32 |
1170 | ++ |
1171 | ++#ifdef MY_ATOMIC_MODE_DUMMY |
1172 | ++#define make_atomic_load_body(S) ret=*a |
1173 | ++#define make_atomic_store_body(S) *a=v |
1174 | ++#endif |
1175 | ++#endif /* ATOMIC_X86_GCC_INCLUDED */ |
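make_atomic_add_body64 above builds fetch-and-add out of CAS on 32-bit x86, where no 64-bit xadd exists: retry the CAS with the freshly observed value until it succeeds. A sketch of that retry loop, with the GCC `__sync_val_compare_and_swap` builtin standing in for `my_atomic_cas64` (an assumption for this sketch):

```c
#include <stdint.h>

/* Fetch-and-add built from compare-and-swap, as in
   make_atomic_add_body64. Returns the old value. */
static int64_t add64_via_cas(volatile int64_t *a, int64_t v)
{
    int64_t tmp = *a;
    for (;;)
    {
        int64_t seen = __sync_val_compare_and_swap(a, tmp, tmp + v);
        if (seen == tmp)
            break;            /* CAS succeeded: *a was tmp, now tmp+v */
        tmp = seen;           /* lost a race: retry with the value seen */
    }
    return tmp;               /* old value, like my_atomic_add64 */
}
```

Note the loop never re-reads `*a` explicitly after the first load: each failed CAS already reports the current value, exactly the property the CAS contract in this patch guarantees.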
1176 | +diff -ruN /dev/null b/mysys/my_atomic.c |
1177 | +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 |
1178 | ++++ b/mysys/my_atomic.c 2011-06-07 08:59:00.000000000 +0300 |
1179 | +@@ -0,0 +1,289 @@ |
1180 | ++#ifndef MY_ATOMIC_INCLUDED |
1181 | ++#define MY_ATOMIC_INCLUDED |
1182 | ++ |
1183 | ++/* Copyright (C) 2006 MySQL AB |
1184 | ++ |
1185 | ++ This program is free software; you can redistribute it and/or modify |
1186 | ++ it under the terms of the GNU General Public License as published by |
1187 | ++ the Free Software Foundation; version 2 of the License. |
1188 | ++ |
1189 | ++ This program is distributed in the hope that it will be useful, |
1190 | ++ but WITHOUT ANY WARRANTY; without even the implied warranty of |
1191 | ++ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1192 | ++ GNU General Public License for more details. |
1193 | ++ |
1194 | ++ You should have received a copy of the GNU General Public License |
1195 | ++ along with this program; if not, write to the Free Software |
1196 | ++ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ |
1197 | ++ |
1198 | ++/* |
1199 | ++ This header defines five atomic operations: |
1200 | ++ |
1201 | ++ my_atomic_add#(&var, what) |
1202 | ++ 'Fetch and Add' |
1203 | ++ add 'what' to *var, and return the old value of *var |
1204 | ++ |
1205 | ++ my_atomic_fas#(&var, what) |
1206 | ++ 'Fetch And Store' |
1207 | ++ store 'what' in *var, and return the old value of *var |
1208 | ++ |
1209 | ++ my_atomic_cas#(&var, &old, new) |
1210 | ++ An odd variation of 'Compare And Set/Swap' |
1211 | ++ if *var is equal to *old, then store 'new' in *var and return TRUE; |
1212 | ++ otherwise store *var in *old and return FALSE |
1213 | ++ Usually, &old should not be accessed if the operation is successful. |
1214 | ++ |
1215 | ++ my_atomic_load#(&var) |
1216 | ++ return *var |
1217 | ++ |
1218 | ++ my_atomic_store#(&var, what) |
1219 | ++ store 'what' in *var |
1220 | ++ |
1221 | ++ '#' is substituted by a size suffix - 8, 16, 32, 64, or ptr |
1222 | ++ (e.g. my_atomic_add8, my_atomic_fas32, my_atomic_casptr). |
1223 | ++ |
1224 | ++ NOTE These operations are not always atomic, so they must always be |
1225 | ++ enclosed in my_atomic_rwlock_rdlock(lock)/my_atomic_rwlock_rdunlock(lock) |
1226 | ++ or my_atomic_rwlock_wrlock(lock)/my_atomic_rwlock_wrunlock(lock). |
1227 | ++ Hint: if a code block makes intensive use of atomic ops, it makes sense |
1228 | ++ to take/release the rwlock once for the whole block, not for every statement. |
1229 | ++ |
1230 | ++ On architectures where these operations are really atomic, rwlocks will |
1231 | ++ be optimized away. |
1232 | ++ 8- and 16-bit atomics aren't implemented for Windows (see generic-msvc.h), |
1233 | ++ but can be added, if necessary. |
1234 | ++*/ |
1235 | ++ |
1236 | ++#include "my_global.h" |
1237 | ++ |
1238 | ++#ifndef my_atomic_rwlock_init |
1239 | ++ |
1240 | ++#define intptr void * |
1241 | ++/** |
1242 | ++ Currently we don't support 8-bit and 16-bit operations. |
1243 | ++ They can be added later if needed. |
1244 | ++*/ |
1245 | ++#undef MY_ATOMIC_HAS_8_16 |
1246 | ++ |
1247 | ++#ifndef MY_ATOMIC_MODE_RWLOCKS |
1248 | ++/* |
1249 | ++ * Attempt to do atomic ops without locks |
1250 | ++ */ |
1251 | ++#include "atomic/nolock.h" |
1252 | ++#endif |
1253 | ++ |
1254 | ++#ifndef make_atomic_cas_body |
1255 | ++/* nolock.h was not able to generate even a CAS function, fall back */ |
1256 | ++#include "atomic/rwlock.h" |
1257 | ++#endif |
1258 | ++ |
1259 | ++/* define missing functions by using the already generated ones */ |
1260 | ++#ifndef make_atomic_add_body |
1261 | ++#define make_atomic_add_body(S) \ |
1262 | ++ int ## S tmp=*a; \ |
1263 | ++ while (!my_atomic_cas ## S(a, &tmp, tmp+v)) ; \ |
1264 | ++ v=tmp; |
1265 | ++#endif |
1266 | ++#ifndef make_atomic_fas_body |
1267 | ++#define make_atomic_fas_body(S) \ |
1268 | ++ int ## S tmp=*a; \ |
1269 | ++ while (!my_atomic_cas ## S(a, &tmp, v)) ; \ |
1270 | ++ v=tmp; |
1271 | ++#endif |
1272 | ++#ifndef make_atomic_load_body |
1273 | ++#define make_atomic_load_body(S) \ |
1274 | ++ ret= 0; /* avoid compiler warning */ \ |
1275 | ++ (void)(my_atomic_cas ## S(a, &ret, ret)); |
1276 | ++#endif |
1277 | ++#ifndef make_atomic_store_body |
1278 | ++#define make_atomic_store_body(S) \ |
1279 | ++ (void)(my_atomic_fas ## S (a, v)); |
1280 | ++#endif |
1281 | ++ |
1282 | ++/* |
1283 | ++ transparent_union doesn't work in g++ |
1284 | ++ Bug ? |
1285 | ++ |
1286 | ++ Darwin's gcc doesn't want to put pointers in a transparent_union |
1287 | ++ when built with -arch ppc64. Complains: |
1288 | ++ warning: 'transparent_union' attribute ignored |
1289 | ++*/ |
1290 | ++#if defined(__GNUC__) && !defined(__cplusplus) && \ |
1291 | ++ ! (defined(__APPLE__) && (defined(_ARCH_PPC64) ||defined (_ARCH_PPC))) |
1292 | ++/* |
1293 | ++ we want to be able to use my_atomic_xxx functions with |
1294 | ++ both signed and unsigned integers. But gcc will issue a warning |
1295 | ++ "passing arg N of `my_atomic_XXX' as [un]signed due to prototype" |
1296 | ++ if the signedness of the argument doesn't match the prototype, or |
1297 | ++ "pointer targets in passing argument N of my_atomic_XXX differ in signedness" |
1298 | ++ if int* is used where uint* is expected (or vice versa). |
1299 | ++ Let's shut these warnings up |
1300 | ++*/ |
1301 | ++#define make_transparent_unions(S) \ |
1302 | ++ typedef union { \ |
1303 | ++ int ## S i; \ |
1304 | ++ uint ## S u; \ |
1305 | ++ } U_ ## S __attribute__ ((transparent_union)); \ |
1306 | ++ typedef union { \ |
1307 | ++ int ## S volatile *i; \ |
1308 | ++ uint ## S volatile *u; \ |
1309 | ++ } Uv_ ## S __attribute__ ((transparent_union)); |
1310 | ++#define uintptr intptr |
1311 | ++make_transparent_unions(8) |
1312 | ++make_transparent_unions(16) |
1313 | ++make_transparent_unions(32) |
1314 | ++make_transparent_unions(64) |
1315 | ++make_transparent_unions(ptr) |
1316 | ++#undef uintptr |
1317 | ++#undef make_transparent_unions |
1318 | ++#define a U_a.i |
1319 | ++#define cmp U_cmp.i |
1320 | ++#define v U_v.i |
1321 | ++#define set U_set.i |
1322 | ++#else |
1323 | ++#define U_8 int8 |
1324 | ++#define U_16 int16 |
1325 | ++#define U_32 int32 |
1326 | ++#define U_64 int64 |
1327 | ++#define U_ptr intptr |
1328 | ++#define Uv_8 int8 |
1329 | ++#define Uv_16 int16 |
1330 | ++#define Uv_32 int32 |
1331 | ++#define Uv_64 int64 |
1332 | ++#define Uv_ptr intptr |
1333 | ++#define U_a volatile *a |
1334 | ++#define U_cmp *cmp |
1335 | ++#define U_v v |
1336 | ++#define U_set set |
1337 | ++#endif /* __GCC__ transparent_union magic */ |
1338 | ++ |
1339 | ++#define make_atomic_cas(S) \ |
1340 | ++static inline int my_atomic_cas ## S(Uv_ ## S U_a, \ |
1341 | ++ Uv_ ## S U_cmp, U_ ## S U_set) \ |
1342 | ++{ \ |
1343 | ++ int8 ret; \ |
1344 | ++ make_atomic_cas_body(S); \ |
1345 | ++ return ret; \ |
1346 | ++} |
1347 | ++ |
1348 | ++#define make_atomic_add(S) \ |
1349 | ++static inline int ## S my_atomic_add ## S( \ |
1350 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
1351 | ++{ \ |
1352 | ++ make_atomic_add_body(S); \ |
1353 | ++ return v; \ |
1354 | ++} |
1355 | ++ |
1356 | ++#define make_atomic_fas(S) \ |
1357 | ++static inline int ## S my_atomic_fas ## S( \ |
1358 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
1359 | ++{ \ |
1360 | ++ make_atomic_fas_body(S); \ |
1361 | ++ return v; \ |
1362 | ++} |
1363 | ++ |
1364 | ++#define make_atomic_load(S) \ |
1365 | ++static inline int ## S my_atomic_load ## S(Uv_ ## S U_a) \ |
1366 | ++{ \ |
1367 | ++ int ## S ret; \ |
1368 | ++ make_atomic_load_body(S); \ |
1369 | ++ return ret; \ |
1370 | ++} |
1371 | ++ |
1372 | ++#define make_atomic_store(S) \ |
1373 | ++static inline void my_atomic_store ## S( \ |
1374 | ++ Uv_ ## S U_a, U_ ## S U_v) \ |
1375 | ++{ \ |
1376 | ++ make_atomic_store_body(S); \ |
1377 | ++} |
1378 | ++ |
1379 | ++#ifdef MY_ATOMIC_HAS_8_16 |
1380 | ++make_atomic_cas(8) |
1381 | ++make_atomic_cas(16) |
1382 | ++#endif |
1383 | ++make_atomic_cas(32) |
1384 | ++make_atomic_cas(64) |
1385 | ++make_atomic_cas(ptr) |
1386 | ++ |
1387 | ++#ifdef MY_ATOMIC_HAS_8_16 |
1388 | ++make_atomic_add(8) |
1389 | ++make_atomic_add(16) |
1390 | ++#endif |
1391 | ++make_atomic_add(32) |
1392 | ++make_atomic_add(64) |
1393 | ++ |
1394 | ++#ifdef MY_ATOMIC_HAS_8_16 |
1395 | ++make_atomic_load(8) |
1396 | ++make_atomic_load(16) |
1397 | ++#endif |
1398 | ++make_atomic_load(32) |
1399 | ++make_atomic_load(64) |
1400 | ++make_atomic_load(ptr) |
1401 | ++ |
1402 | ++#ifdef MY_ATOMIC_HAS_8_16 |
1403 | ++make_atomic_fas(8) |
1404 | ++make_atomic_fas(16) |
1405 | ++#endif |
1406 | ++make_atomic_fas(32) |
1407 | ++make_atomic_fas(64) |
1408 | ++make_atomic_fas(ptr) |
1409 | ++ |
1410 | ++#ifdef MY_ATOMIC_HAS_8_16 |
1411 | ++make_atomic_store(8) |
1412 | ++make_atomic_store(16) |
1413 | ++#endif |
1414 | ++make_atomic_store(32) |
1415 | ++make_atomic_store(64) |
1416 | ++make_atomic_store(ptr) |
1417 | ++ |
1418 | ++#ifdef _atomic_h_cleanup_ |
1419 | ++#include _atomic_h_cleanup_ |
1420 | ++#undef _atomic_h_cleanup_ |
1421 | ++#endif |
1422 | ++ |
1423 | ++#undef U_8 |
1424 | ++#undef U_16 |
1425 | ++#undef U_32 |
1426 | ++#undef U_64 |
1427 | ++#undef U_ptr |
1428 | ++#undef Uv_8 |
1429 | ++#undef Uv_16 |
1430 | ++#undef Uv_32 |
1431 | ++#undef Uv_64 |
1432 | ++#undef Uv_ptr |
1433 | ++#undef a |
1434 | ++#undef cmp |
1435 | ++#undef v |
1436 | ++#undef set |
1437 | ++#undef U_a |
1438 | ++#undef U_cmp |
1439 | ++#undef U_v |
1440 | ++#undef U_set |
1441 | ++#undef make_atomic_add |
1442 | ++#undef make_atomic_cas |
1443 | ++#undef make_atomic_load |
1444 | ++#undef make_atomic_store |
1445 | ++#undef make_atomic_fas |
1446 | ++#undef make_atomic_add_body |
1447 | ++#undef make_atomic_cas_body |
1448 | ++#undef make_atomic_load_body |
1449 | ++#undef make_atomic_store_body |
1450 | ++#undef make_atomic_fas_body |
1451 | ++#undef intptr |
1452 | ++ |
1453 | ++/* |
1454 | ++ The macro below defines (as an expression) the code that |
1455 | ++ will be run in spin loops. Intel manuals recommend having PAUSE there. |
1456 | ++ It is expected to be defined in the include/atomic/*.h files |
1457 | ++*/ |
1458 | ++#ifndef LF_BACKOFF |
1459 | ++#define LF_BACKOFF (1) |
1460 | ++#endif |
1461 | ++ |
1462 | ++#define MY_ATOMIC_OK 0 |
1463 | ++#define MY_ATOMIC_NOT_1CPU 1 |
1464 | ++extern int my_atomic_initialize(); |
1465 | ++ |
1466 | ++#endif |
1467 | ++ |
1468 | ++#endif /* MY_ATOMIC_INCLUDED */ |
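The transparent_union blocks in my_atomic.c let one my_atomic_* function accept both signed and unsigned pointers without gcc signedness warnings. A minimal sketch of the trick (GNU C only; the `load32` name is illustrative, not from the patch):

```c
#include <stdint.h>

/* A union marked transparent_union may be passed as any of its member
   types; gcc performs the conversion implicitly and silently. */
typedef union {
    int32_t volatile *i;
    uint32_t volatile *u;
} Uv_32 __attribute__ ((transparent_union));

/* One function now serves both int32_t* and uint32_t* callers. */
static int32_t load32(Uv_32 a)
{
    return *a.i;
}
```

This is why my_atomic.c only enables the unions for gcc in C mode: as its own comment notes, the attribute is ignored by g++ and by Darwin's gcc on ppc64, where the fallback typedefs are used instead.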
Looks good to me.
But you should also request a QA review.