Merge lp:~epics-core/epics-base/valgrind into lp:~epics-core/epics-base/3.16

Proposed by mdavidsaver
Status: Merged
Merged at revision: 12701
Proposed branch: lp:~epics-core/epics-base/valgrind
Merge into: lp:~epics-core/epics-base/3.16
Diff against target: 7055 lines (+6726/-11)
11 files modified
documentation/RELEASE_NOTES.html (+16/-0)
src/libCom/Makefile (+5/-0)
src/libCom/dbmf/dbmf.c (+21/-5)
src/libCom/freeList/freeListLib.c (+29/-3)
src/libCom/misc/epicsExit.c (+4/-0)
src/libCom/osi/epicsMutex.cpp (+26/-0)
src/libCom/taskwd/taskwd.c (+17/-3)
src/libCom/valgrind/valgrind.h (+6587/-0)
src/std/filters/arr.c (+7/-0)
src/std/filters/dbnd.c (+7/-0)
src/std/filters/sync.c (+7/-0)
To merge this branch: bzr merge lp:~epics-core/epics-base/valgrind
Reviewer Review Type Date Requested Status
Andrew Johnson Approve
mdavidsaver Approve
Review via email: mp+281141@code.launchpad.net

Description of the change

Include valgrind.h in Base and use it to instrument some of the free lists (dbmf, freeListLib, osi mutex, and taskwd).
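The instrumentation uses Valgrind's memory-pool client-request API. A minimal sketch of the pattern, not the merged dbmf code: pool_alloc/pool_free are illustrative names, and the stub macros stand in for valgrind/valgrind.h as if NVALGRIND were defined.

```c
#include <stdlib.h>
#include <string.h>

/* Stubs standing in for valgrind/valgrind.h, as when NVALGRIND is defined;
 * the real header expands these to Valgrind client requests. */
#define VALGRIND_CREATE_MEMPOOL(pool, rzB, zeroed) ((void)0)
#define VALGRIND_MEMPOOL_ALLOC(pool, addr, size)   ((void)0)
#define VALGRIND_MEMPOOL_FREE(pool, addr)          ((void)0)

#define REDZONE sizeof(double)  /* padding so Valgrind can flag overruns */

/* Hypothetical allocation layout: | header | REDZONE | user data | REDZONE | */
typedef struct { size_t size; } itemHeader;

void *pool_alloc(void *pool, size_t size)
{
    char *mem = malloc(sizeof(itemHeader) + 2*REDZONE + size);
    if (!mem)
        return NULL;
    ((itemHeader *)mem)->size = size;
    mem += sizeof(itemHeader) + REDZONE;      /* hand out the region past the red zone */
    VALGRIND_MEMPOOL_ALLOC(pool, mem, size);  /* tell Valgrind this range is now live */
    return mem;
}

void pool_free(void *pool, void *user)
{
    char *mem = (char *)user - sizeof(itemHeader) - REDZONE;
    VALGRIND_MEMPOOL_FREE(pool, user);        /* mark the user region dead again */
    free(mem);
}
```

Any stable address passed as the pool argument identifies the pool to Valgrind; in the branch, dbmf uses its private structure pointer and freeListLib its free-list pointer for this purpose.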

Revision history for this message
mdavidsaver (mdavidsaver) wrote :

The included valgrind.h is installed as valgrind/valgrind.h, so it will obscure whatever might be in /usr/include. It is enabled by default, but can be disabled with '-DNVALGRIND'. I wouldn't mind having it enabled by default, but can understand if others don't want this.

Also, note that valgrind.h is not included by any .h files. If merged, this should probably become a policy. This way use of valgrind.h becomes a local feature which downstream code can remain unaware of.

review: Needs Information
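The '-DNVALGRIND' switch works by compiling the hints out entirely, red zones included. A sketch of the guard pattern the branch uses in dbmf.c and freeListLib.c (NVALGRIND is forced on here so the fragment builds without valgrind.h; alloc_size is an illustrative helper, not code from the branch):

```c
#include <stddef.h>

#define NVALGRIND 1  /* forced on for this sketch */

#ifndef NVALGRIND
#include "valgrind/valgrind.h"
/* buffer around allocations to detect out-of-bounds access */
#define REDZONE sizeof(double)
#else
#define REDZONE 0    /* hints compiled out: no extra padding */
#endif

/* With NVALGRIND defined, the per-item overhead vanishes entirely. */
size_t alloc_size(size_t user_size)
{
    return user_size + 2*REDZONE;
}
```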
Revision history for this message
Andrew Johnson (anj) wrote :

This needs an entry in the Release Notes, describing how to enable/disable and use it. That would also be a good place to mention the don't-include-the-valgrind-header-in-other-headers policy.

The valgrind.h header disables itself automatically unless it's running on an architecture supported by valgrind, so there's no need to do that in the Makefile.

Should we have some kind of 'ENABLE_VALGRIND_HELPERS' setting in the configure/CONFIG_SITE file? The other possible place for a global switch would be for src/libCom/Makefile to include something like
    # Uncomment this to remove the (benign) valgrind helper stubs
    # USR_CFLAGS += -DNVALGRIND
which I think I prefer since it's more localized and hidden there.

What is the line "USR_CFLAGS += -DUSE_VALGRIND" doing in src/libCom/Makefile? Was it meant to be the above switch but spelled wrongly? The macro USE_VALGRIND does not appear in the code anywhere else...

review: Needs Fixing
lp:~epics-core/epics-base/valgrind updated
12704. By mdavidsaver

update release notes

Revision history for this message
mdavidsaver (mdavidsaver) wrote :

> This needs an entry in the Release Notes

Done.

> ... I think I prefer since it's more localized and hidden there.

This is my preference as well.

> What is the line "USR_CFLAGS += -DUSE_VALGRIND"

This was a leftover from a previous iteration. Now replaced with your NVALGRIND example.

review: Approve
Revision history for this message
Andrew Johnson (anj) wrote :

Merging with a few minor tweaks.

review: Approve

Preview Diff

1=== modified file 'documentation/RELEASE_NOTES.html'
2--- documentation/RELEASE_NOTES.html 2015-09-07 19:23:04 +0000
3+++ documentation/RELEASE_NOTES.html 2016-01-12 15:16:24 +0000
4@@ -20,6 +20,22 @@
5
6 -->
7
8+<h3>Valgrind instrumentation</h3>
9+
10+<p>The header valgrind/valgrind.h is now included in, and installed by, Base.
11+When included this file defines some macros which expand to provide hints
12+to the Valgrind runtime.
13+These hints are now added to some free-lists within libCom, including freeListLib,
14+to allow valgrind to provide more accurate information about potential leaks.</p>
15+
16+<p>valgrind.h automatically disables itself when included on targets unsupported by Valgrind.
17+It can also be explicitly disabled by defining the macro <em>NVALGRIND</em>.
18+See <em>src/libCom/Makefile</em> for an example.</p>
19+
20+<p>As a matter of policy valgrind.h is not, and will not be, included by any header file installed by
21+Base, and will remain an internal implementation detail.
22+Support modules which choose to use valgrind.h are advised to do likewise.</p>
23+
24 <h3>Database Multi-locking</h3>
25
26 <p>dbLock.c is re-written with an expanded API, and the removal of global mutex locks.</p>
27
28=== modified file 'src/libCom/Makefile'
29--- src/libCom/Makefile 2014-07-24 18:22:52 +0000
30+++ src/libCom/Makefile 2016-01-12 15:16:24 +0000
31@@ -9,9 +9,14 @@
32 TOP = ../..
33 include $(TOP)/configure/CONFIG
34
35+# Uncomment this to remove the (benign) valgrind helper stubs
36+#USR_CFLAGS += -DNVALGRIND
37+
38 SRC = $(TOP)/src
39 LIBCOM = $(SRC)/libCom
40
41+INC += valgrind/valgrind.h
42+
43 include $(LIBCOM)/as/Makefile
44 include $(LIBCOM)/bucketLib/Makefile
45 include $(LIBCOM)/calc/Makefile
46
47=== modified file 'src/libCom/dbmf/dbmf.c'
48--- src/libCom/dbmf/dbmf.c 2015-12-21 18:44:23 +0000
49+++ src/libCom/dbmf/dbmf.c 2016-01-12 15:16:24 +0000
50@@ -19,6 +19,15 @@
51 #include <stdio.h>
52 #include <string.h>
53
54+#include "valgrind/valgrind.h"
55+
56+#ifndef NVALGRIND
57+/* buffer around allocations to detect out of bounds access */
58+#define REDZONE sizeof(double)
59+#else
60+#define REDZONE 0
61+#endif
62+
63 #define epicsExportSharedSymbols
64 #include "epicsMutex.h"
65 #include "ellLib.h"
66@@ -70,13 +79,17 @@
67 pdbmfPvt->lock = epicsMutexMustCreate();
68 /*allign to at least a double*/
69 pdbmfPvt->size = size + size%sizeof(double);
70- pdbmfPvt->allocSize = pdbmfPvt->size + sizeof(itemHeader);
71+ /* layout is
72+ * | itemHeader | REDZONE | size | REDZONE |
73+ */
74+ pdbmfPvt->allocSize = pdbmfPvt->size + sizeof(itemHeader) + 2*REDZONE;
75 pdbmfPvt->chunkItems = chunkItems;
76 pdbmfPvt->chunkSize = pdbmfPvt->allocSize * pdbmfPvt->chunkItems;
77 pdbmfPvt->nAlloc = 0;
78 pdbmfPvt->nFree = 0;
79 pdbmfPvt->nGtSize = 0;
80 pdbmfPvt->freeList = NULL;
81+ VALGRIND_CREATE_MEMPOOL(pdbmfPvt, REDZONE, 0);
82 return(0);
83 }
84
85@@ -124,7 +137,7 @@
86 pitemHeader = (itemHeader *)pnextFree;
87 pitemHeader->pchunkNode->nNotFree += 1;
88 } else {
89- pmem = malloc(sizeof(itemHeader) + size);
90+ pmem = malloc(sizeof(itemHeader) + 2*REDZONE + size);
91 if(!pmem) {
92 epicsMutexUnlock(pdbmfPvt->lock);
93 printf("dbmfMalloc malloc failed\n");
94@@ -133,12 +146,14 @@
95 pdbmfPvt->nAlloc++;
96 pdbmfPvt->nGtSize++;
97 pitemHeader = (itemHeader *)pmem;
98- pitemHeader->pchunkNode = NULL;
99+ pitemHeader->pchunkNode = NULL; /* not part of free list */
100 if(dbmfDebug) printf("dbmfMalloc: size %lu mem %p\n",
101 (unsigned long)size,pmem);
102 }
103 epicsMutexUnlock(pdbmfPvt->lock);
104- return((void *)(pmem + sizeof(itemHeader)));
105+ pmem += sizeof(itemHeader) + REDZONE;
106+ VALGRIND_MEMPOOL_ALLOC(pdbmfPvt, pmem, size);
107+ return((void *)pmem);
108 }
109
110 char * epicsShareAPI dbmfStrdup(unsigned char *str)
111@@ -160,7 +175,8 @@
112 printf("dbmfFree called but dbmfInit never called\n");
113 return;
114 }
115- pmem -= sizeof(itemHeader);
116+ VALGRIND_MEMPOOL_FREE(pdbmfPvt, mem);
117+ pmem -= sizeof(itemHeader) + REDZONE;
118 epicsMutexMustLock(pdbmfPvt->lock);
119 pitemHeader = (itemHeader *)pmem;
120 if(!pitemHeader->pchunkNode) {
121
122=== modified file 'src/libCom/freeList/freeListLib.c'
123--- src/libCom/freeList/freeListLib.c 2015-12-21 18:44:23 +0000
124+++ src/libCom/freeList/freeListLib.c 2016-01-12 15:16:24 +0000
125@@ -15,6 +15,15 @@
126 #include <stdlib.h>
127 #include <stddef.h>
128
129+#include "valgrind/valgrind.h"
130+
131+#ifndef NVALGRIND
132+/* buffer around allocations to detect out of bounds access */
133+#define REDZONE sizeof(double)
134+#else
135+#define REDZONE 0
136+#endif
137+
138 #define epicsExportSharedSymbols
139 #include "cantProceed.h"
140 #include "epicsMutex.h"
141@@ -47,6 +56,7 @@
142 pfl->nBlocksAvailable = 0u;
143 pfl->lock = epicsMutexMustCreate();
144 *ppvt = (void *)pfl;
145+ VALGRIND_CREATE_MEMPOOL(pfl, REDZONE, 0);
146 return;
147 }
148
149@@ -78,7 +88,14 @@
150 epicsMutexMustLock(pfl->lock);
151 ptemp = pfl->head;
152 if(ptemp==0) {
153- ptemp = (void *)malloc(pfl->nmalloc*pfl->size);
154+ /* layout of each block. nmalloc+1 REDZONEs for nmallocs.
155+ * The first sizeof(void*) bytes are used to store a pointer
156+ * to the next free block.
157+ *
158+ * | RED | size0 ------ | RED | size1 | ... | RED |
159+ * | | next | ----- |
160+ */
161+ ptemp = (void *)malloc(pfl->nmalloc*(pfl->size+REDZONE)+REDZONE);
162 if(ptemp==0) {
163 epicsMutexUnlock(pfl->lock);
164 return(0);
165@@ -89,15 +106,17 @@
166 free(ptemp);
167 return(0);
168 }
169- pallocmem->memory = ptemp;
170+ pallocmem->memory = ptemp; /* real allocation */
171+ ptemp += REDZONE; /* skip first REDZONE */
172 if(pfl->mallochead)
173 pallocmem->next = pfl->mallochead;
174 pfl->mallochead = pallocmem;
175 for(i=0; i<pfl->nmalloc; i++) {
176 ppnext = ptemp;
177+ VALGRIND_MEMPOOL_ALLOC(pfl, ptemp, sizeof(void*));
178 *ppnext = pfl->head;
179 pfl->head = ptemp;
180- ptemp = ((char *)ptemp) + pfl->size;
181+ ptemp = ((char *)ptemp) + pfl->size+REDZONE;
182 }
183 ptemp = pfl->head;
184 pfl->nBlocksAvailable += pfl->nmalloc;
185@@ -106,6 +125,8 @@
186 pfl->head = *ppnext;
187 pfl->nBlocksAvailable--;
188 epicsMutexUnlock(pfl->lock);
189+ VALGRIND_MEMPOOL_FREE(pfl, ptemp);
190+ VALGRIND_MEMPOOL_ALLOC(pfl, ptemp, pfl->size);
191 return(ptemp);
192 # endif
193 }
194@@ -119,6 +140,9 @@
195 # else
196 void **ppnext;
197
198+ VALGRIND_MEMPOOL_FREE(pvt, pmem);
199+ VALGRIND_MEMPOOL_ALLOC(pvt, pmem, sizeof(void*));
200+
201 epicsMutexMustLock(pfl->lock);
202 ppnext = pmem;
203 *ppnext = pfl->head;
204@@ -134,6 +158,8 @@
205 allocMem *phead;
206 allocMem *pnext;
207
208+ VALGRIND_DESTROY_MEMPOOL(pvt);
209+
210 phead = pfl->mallochead;
211 while(phead) {
212 pnext = phead->next;
213
214=== modified file 'src/libCom/misc/epicsExit.c'
215--- src/libCom/misc/epicsExit.c 2014-11-13 16:58:35 +0000
216+++ src/libCom/misc/epicsExit.c 2016-01-12 15:16:24 +0000
217@@ -35,6 +35,8 @@
218 #include "cantProceed.h"
219 #include "epicsExit.h"
220
221+void epicsMutexCleanup(void);
222+
223 typedef struct exitNode {
224 ELLNODE node;
225 epicsExitFunc func;
226@@ -113,6 +115,8 @@
227 epicsExitCallAtExitsPvt ( pep );
228 destroyExitPvt ( pep );
229 }
230+ /* Handle specially to avoid circular reference */
231+ epicsMutexCleanup();
232 }
233
234 epicsShareFunc void epicsExitCallAtThreadExits(void)
235
236=== modified file 'src/libCom/osi/epicsMutex.cpp'
237--- src/libCom/osi/epicsMutex.cpp 2014-02-07 20:13:12 +0000
238+++ src/libCom/osi/epicsMutex.cpp 2016-01-12 15:16:24 +0000
239@@ -28,6 +28,7 @@
240 #define epicsExportSharedSymbols
241 #include "epicsStdio.h"
242 #include "epicsThread.h"
243+#include "valgrind/valgrind.h"
244 #include "ellLib.h"
245 #include "errlog.h"
246 #include "epicsMutex.h"
247@@ -85,6 +86,7 @@
248 firstTime=0;
249 ellInit(&mutexList);
250 ellInit(&freeList);
251+ VALGRIND_CREATE_MEMPOOL(&freeList, 0, 0);
252 epicsMutexGlobalLock = epicsMutexOsdCreate();
253 }
254 id = epicsMutexOsdCreate();
255@@ -98,9 +100,11 @@
256 reinterpret_cast < epicsMutexParm * > ( ellFirst(&freeList) );
257 if(pmutexNode) {
258 ellDelete(&freeList,&pmutexNode->node);
259+ VALGRIND_MEMPOOL_FREE(&freeList, pmutexNode);
260 } else {
261 pmutexNode = static_cast < epicsMutexParm * > ( calloc(1,sizeof(epicsMutexParm)) );
262 }
263+ VALGRIND_MEMPOOL_ALLOC(&freeList, pmutexNode, sizeof(epicsMutexParm));
264 pmutexNode->id = id;
265 # ifdef LOG_LAST_OWNER
266 pmutexNode->lastOwner = 0;
267@@ -127,6 +131,8 @@
268 assert ( lockStat == epicsMutexLockOK );
269 ellDelete(&mutexList,&pmutexNode->node);
270 epicsMutexOsdDestroy(pmutexNode->id);
271+ VALGRIND_MEMPOOL_FREE(&freeList, pmutexNode);
272+ VALGRIND_MEMPOOL_ALLOC(&freeList, &pmutexNode->node, sizeof(pmutexNode->node));
273 ellAdd(&freeList,&pmutexNode->node);
274 epicsMutexOsdUnlock(epicsMutexGlobalLock);
275 }
276@@ -162,6 +168,26 @@
277 return status;
278 }
279
280+/* Empty the freeList.
281+ * Called from epicsExit.c, but not via epicsAtExit()
282+ * to avoid the possibility of a circular reference.
283+ */
284+extern "C"
285+void epicsMutexCleanup(void)
286+{
287+ ELLNODE *cur;
288+ epicsMutexLockStatus lockStat =
289+ epicsMutexOsdLock(epicsMutexGlobalLock);
290+ assert ( lockStat == epicsMutexLockOK );
291+
292+ while((cur=ellGet(&freeList))!=NULL) {
293+ VALGRIND_MEMPOOL_FREE(&freeList, cur);
294+ free(cur);
295+ }
296+
297+ epicsMutexOsdUnlock(epicsMutexGlobalLock);
298+}
299+
300 void epicsShareAPI epicsMutexShow(
301 epicsMutexId pmutexNode, unsigned int level)
302 {
303
304=== modified file 'src/libCom/taskwd/taskwd.c'
305--- src/libCom/taskwd/taskwd.c 2013-12-11 23:50:29 +0000
306+++ src/libCom/taskwd/taskwd.c 2016-01-12 15:16:24 +0000
307@@ -25,6 +25,7 @@
308 #include "epicsStdioRedirect.h"
309 #include "epicsThread.h"
310 #include "epicsMutex.h"
311+#include "valgrind/valgrind.h"
312 #include "errlog.h"
313 #include "ellLib.h"
314 #include "errMdef.h"
315@@ -130,9 +131,15 @@
316
317 static void twdShutdown(void *arg)
318 {
319+ ELLNODE *cur;
320 twdCtl = twdctlExit;
321 epicsEventSignal(loopEvent);
322 epicsEventWait(exitEvent);
323+ while((cur=ellGet(&fList))!=NULL) {
324+ VALGRIND_MEMPOOL_FREE(&fList, cur);
325+ free(cur);
326+ }
327+ VALGRIND_DESTROY_MEMPOOL(&fList);
328 }
329
330 static void twdInitOnce(void *arg)
331@@ -142,6 +149,8 @@
332 tLock = epicsMutexMustCreate();
333 mLock = epicsMutexMustCreate();
334 fLock = epicsMutexMustCreate();
335+ ellInit(&fList);
336+ VALGRIND_CREATE_MEMPOOL(&fList, 0, 0);
337
338 twdCtl = twdctlRun;
339 loopEvent = epicsEventMustCreate(epicsEventEmpty);
340@@ -391,11 +400,14 @@
341 pn = (union twdNode *)ellFirst(&fList);
342 if (pn) {
343 ellDelete(&fList, (void *)pn);
344- epicsMutexUnlock(fLock);
345- return pn;
346+ VALGRIND_MEMPOOL_FREE(&fList, pn);
347 }
348 epicsMutexUnlock(fLock);
349- return calloc(1, sizeof(union twdNode));
350+ if(!pn)
351+ pn = calloc(1, sizeof(union twdNode));
352+ if(pn)
353+ VALGRIND_MEMPOOL_ALLOC(&fList, pn, sizeof(*pn));
354+ return pn;
355 }
356
357 static union twdNode *allocNode(void)
358@@ -411,6 +423,8 @@
359
360 static void freeNode(union twdNode *pn)
361 {
362+ VALGRIND_MEMPOOL_FREE(&fList, pn);
363+ VALGRIND_MEMPOOL_ALLOC(&fList, pn, sizeof(ELLNODE));
364 epicsMutexMustLock(fLock);
365 ellAdd(&fList, (void *)pn);
366 epicsMutexUnlock(fLock);
367
368=== added directory 'src/libCom/valgrind'
369=== added file 'src/libCom/valgrind/valgrind.h'
370--- src/libCom/valgrind/valgrind.h 1970-01-01 00:00:00 +0000
371+++ src/libCom/valgrind/valgrind.h 2016-01-12 15:16:24 +0000
372@@ -0,0 +1,6587 @@
373+/* -*- c -*-
374+ ----------------------------------------------------------------
375+
376+ Notice that the following BSD-style license applies to this one
377+ file (valgrind.h) only. The rest of Valgrind is licensed under the
378+ terms of the GNU General Public License, version 2, unless
379+ otherwise indicated. See the COPYING file in the source
380+ distribution for details.
381+
382+ ----------------------------------------------------------------
383+
384+ This file is part of Valgrind, a dynamic binary instrumentation
385+ framework.
386+
387+ Copyright (C) 2000-2013 Julian Seward. All rights reserved.
388+
389+ Redistribution and use in source and binary forms, with or without
390+ modification, are permitted provided that the following conditions
391+ are met:
392+
393+ 1. Redistributions of source code must retain the above copyright
394+ notice, this list of conditions and the following disclaimer.
395+
396+ 2. The origin of this software must not be misrepresented; you must
397+ not claim that you wrote the original software. If you use this
398+ software in a product, an acknowledgment in the product
399+ documentation would be appreciated but is not required.
400+
401+ 3. Altered source versions must be plainly marked as such, and must
402+ not be misrepresented as being the original software.
403+
404+ 4. The name of the author may not be used to endorse or promote
405+ products derived from this software without specific prior written
406+ permission.
407+
408+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
409+ OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
410+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
411+ ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
412+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
413+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
414+ GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
415+ INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
416+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
417+ NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
418+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
419+
420+ ----------------------------------------------------------------
421+
422+ Notice that the above BSD-style license applies to this one file
423+ (valgrind.h) only. The entire rest of Valgrind is licensed under
424+ the terms of the GNU General Public License, version 2. See the
425+ COPYING file in the source distribution for details.
426+
427+ ----------------------------------------------------------------
428+*/
429+
430+
431+/* This file is for inclusion into client (your!) code.
432+
433+ You can use these macros to manipulate and query Valgrind's
434+ execution inside your own programs.
435+
436+ The resulting executables will still run without Valgrind, just a
437+ little bit more slowly than they otherwise would, but otherwise
438+ unchanged. When not running on valgrind, each client request
439+ consumes very few (eg. 7) instructions, so the resulting performance
440+ loss is negligible unless you plan to execute client requests
441+ millions of times per second. Nevertheless, if that is still a
442+ problem, you can compile with the NVALGRIND symbol defined (gcc
443+ -DNVALGRIND) so that client requests are not even compiled in. */
444+
445+#ifndef __VALGRIND_H
446+#define __VALGRIND_H
447+
448+
449+/* ------------------------------------------------------------------ */
450+/* VERSION NUMBER OF VALGRIND */
451+/* ------------------------------------------------------------------ */
452+
453+/* Specify Valgrind's version number, so that user code can
454+ conditionally compile based on our version number. Note that these
455+ were introduced at version 3.6 and so do not exist in version 3.5
456+ or earlier. The recommended way to use them to check for "version
457+ X.Y or later" is (eg)
458+
459+#if defined(__VALGRIND_MAJOR__) && defined(__VALGRIND_MINOR__) \
460+ && (__VALGRIND_MAJOR__ > 3 \
461+ || (__VALGRIND_MAJOR__ == 3 && __VALGRIND_MINOR__ >= 6))
462+*/
463+#define __VALGRIND_MAJOR__ 3
464+#define __VALGRIND_MINOR__ 10
465+
466+
467+#include <stdarg.h>
468+
469+/* Nb: this file might be included in a file compiled with -ansi. So
470+ we can't use C++ style "//" comments nor the "asm" keyword (instead
471+ use "__asm__"). */
472+
473+/* Derive some tags indicating what the target platform is. Note
474+ that in this file we're using the compiler's CPP symbols for
475+ identifying architectures, which are different to the ones we use
476+ within the rest of Valgrind. Note, __powerpc__ is active for both
477+ 32 and 64-bit PPC, whereas __powerpc64__ is only active for the
478+ latter (on Linux, that is).
479+
480+ Misc note: how to find out what's predefined in gcc by default:
481+ gcc -Wp,-dM somefile.c
482+*/
483+#undef PLAT_x86_darwin
484+#undef PLAT_amd64_darwin
485+#undef PLAT_x86_win32
486+#undef PLAT_amd64_win64
487+#undef PLAT_x86_linux
488+#undef PLAT_amd64_linux
489+#undef PLAT_ppc32_linux
490+#undef PLAT_ppc64be_linux
491+#undef PLAT_ppc64le_linux
492+#undef PLAT_arm_linux
493+#undef PLAT_arm64_linux
494+#undef PLAT_s390x_linux
495+#undef PLAT_mips32_linux
496+#undef PLAT_mips64_linux
497+
498+
499+#if defined(__APPLE__) && defined(__i386__)
500+# define PLAT_x86_darwin 1
501+#elif defined(__APPLE__) && defined(__x86_64__)
502+# define PLAT_amd64_darwin 1
503+#elif (defined(__MINGW32__) && !defined(__MINGW64__)) \
504+ || defined(__CYGWIN32__) \
505+ || (defined(_WIN32) && defined(_M_IX86))
506+# define PLAT_x86_win32 1
507+#elif defined(__MINGW64__) \
508+ || (defined(_WIN64) && defined(_M_X64))
509+# define PLAT_amd64_win64 1
510+#elif defined(__linux__) && defined(__i386__)
511+# define PLAT_x86_linux 1
512+#elif defined(__linux__) && defined(__x86_64__)
513+# define PLAT_amd64_linux 1
514+#elif defined(__linux__) && defined(__powerpc__) && !defined(__powerpc64__)
515+# define PLAT_ppc32_linux 1
516+#elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__) && _CALL_ELF != 2
517+/* Big Endian uses ELF version 1 */
518+# define PLAT_ppc64be_linux 1
519+#elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__) && _CALL_ELF == 2
520+/* Little Endian uses ELF version 2 */
521+# define PLAT_ppc64le_linux 1
522+#elif defined(__linux__) && defined(__arm__) && !defined(__aarch64__)
523+# define PLAT_arm_linux 1
524+#elif defined(__linux__) && defined(__aarch64__) && !defined(__arm__)
525+# define PLAT_arm64_linux 1
526+#elif defined(__linux__) && defined(__s390__) && defined(__s390x__)
527+# define PLAT_s390x_linux 1
528+#elif defined(__linux__) && defined(__mips__) && (__mips==64)
529+# define PLAT_mips64_linux 1
530+#elif defined(__linux__) && defined(__mips__) && (__mips!=64)
531+# define PLAT_mips32_linux 1
532+#else
533+/* If we're not compiling for our target platform, don't generate
534+ any inline asms. */
535+# if !defined(NVALGRIND)
536+# define NVALGRIND 1
537+# endif
538+#endif
539+
540+
541+/* ------------------------------------------------------------------ */
542+/* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */
543+/* in here of use to end-users -- skip to the next section. */
544+/* ------------------------------------------------------------------ */
545+
546+/*
547+ * VALGRIND_DO_CLIENT_REQUEST(): a statement that invokes a Valgrind client
548+ * request. Accepts both pointers and integers as arguments.
549+ *
550+ * VALGRIND_DO_CLIENT_REQUEST_STMT(): a statement that invokes a Valgrind
551+ * client request that does not return a value.
552+
553+ * VALGRIND_DO_CLIENT_REQUEST_EXPR(): a C expression that invokes a Valgrind
554+ * client request and whose value equals the client request result. Accepts
555+ * both pointers and integers as arguments. Note that such calls are not
556+ * necessarily pure functions -- they may have side effects.
557+ */
558+
559+#define VALGRIND_DO_CLIENT_REQUEST(_zzq_rlval, _zzq_default, \
560+ _zzq_request, _zzq_arg1, _zzq_arg2, \
561+ _zzq_arg3, _zzq_arg4, _zzq_arg5) \
562+ do { (_zzq_rlval) = VALGRIND_DO_CLIENT_REQUEST_EXPR((_zzq_default), \
563+ (_zzq_request), (_zzq_arg1), (_zzq_arg2), \
564+ (_zzq_arg3), (_zzq_arg4), (_zzq_arg5)); } while (0)
565+
566+#define VALGRIND_DO_CLIENT_REQUEST_STMT(_zzq_request, _zzq_arg1, \
567+ _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
568+ do { (void) VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
569+ (_zzq_request), (_zzq_arg1), (_zzq_arg2), \
570+ (_zzq_arg3), (_zzq_arg4), (_zzq_arg5)); } while (0)
571+
572+#if defined(NVALGRIND)
573+
574+/* Define NVALGRIND to completely remove the Valgrind magic sequence
575+ from the compiled code (analogous to NDEBUG's effects on
576+ assert()) */
577+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
578+ _zzq_default, _zzq_request, \
579+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
580+ (_zzq_default)
581+
582+#else /* ! NVALGRIND */
583+
584+/* The following defines the magic code sequences which the JITter
585+ spots and handles magically. Don't look too closely at them as
586+ they will rot your brain.
587+
588+ The assembly code sequences for all architectures is in this one
589+ file. This is because this file must be stand-alone, and we don't
590+ want to have multiple files.
591+
592+ For VALGRIND_DO_CLIENT_REQUEST, we must ensure that the default
593+ value gets put in the return slot, so that everything works when
594+ this is executed not under Valgrind. Args are passed in a memory
595+ block, and so there's no intrinsic limit to the number that could
596+ be passed, but it's currently five.
597+
598+ The macro args are:
599+ _zzq_rlval result lvalue
600+ _zzq_default default value (result returned when running on real CPU)
601+ _zzq_request request code
602+ _zzq_arg1..5 request params
603+
604+ The other two macros are used to support function wrapping, and are
605+ a lot simpler. VALGRIND_GET_NR_CONTEXT returns the value of the
606+ guest's NRADDR pseudo-register and whatever other information is
607+ needed to safely run the call original from the wrapper: on
608+ ppc64-linux, the R2 value at the divert point is also needed. This
609+ information is abstracted into a user-visible type, OrigFn.
610+
611+ VALGRIND_CALL_NOREDIR_* behaves the same as the following on the
612+ guest, but guarantees that the branch instruction will not be
613+ redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64:
614+ branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a
615+ complete inline asm, since it needs to be combined with more magic
616+ inline asm stuff to be useful.
617+*/
618+
619+/* ------------------------- x86-{linux,darwin} ---------------- */
620+
621+#if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin) \
622+ || (defined(PLAT_x86_win32) && defined(__GNUC__))
623+
624+typedef
625+ struct {
626+ unsigned int nraddr; /* where's the code? */
627+ }
628+ OrigFn;
629+
630+#define __SPECIAL_INSTRUCTION_PREAMBLE \
631+ "roll $3, %%edi ; roll $13, %%edi\n\t" \
632+ "roll $29, %%edi ; roll $19, %%edi\n\t"
633+
634+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
635+ _zzq_default, _zzq_request, \
636+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
637+ __extension__ \
638+ ({volatile unsigned int _zzq_args[6]; \
639+ volatile unsigned int _zzq_result; \
640+ _zzq_args[0] = (unsigned int)(_zzq_request); \
641+ _zzq_args[1] = (unsigned int)(_zzq_arg1); \
642+ _zzq_args[2] = (unsigned int)(_zzq_arg2); \
643+ _zzq_args[3] = (unsigned int)(_zzq_arg3); \
644+ _zzq_args[4] = (unsigned int)(_zzq_arg4); \
645+ _zzq_args[5] = (unsigned int)(_zzq_arg5); \
646+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
647+ /* %EDX = client_request ( %EAX ) */ \
648+ "xchgl %%ebx,%%ebx" \
649+ : "=d" (_zzq_result) \
650+ : "a" (&_zzq_args[0]), "0" (_zzq_default) \
651+ : "cc", "memory" \
652+ ); \
653+ _zzq_result; \
654+ })
655+
656+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
657+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
658+ volatile unsigned int __addr; \
659+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
660+ /* %EAX = guest_NRADDR */ \
661+ "xchgl %%ecx,%%ecx" \
662+ : "=a" (__addr) \
663+ : \
664+ : "cc", "memory" \
665+ ); \
666+ _zzq_orig->nraddr = __addr; \
667+ }
668+
669+#define VALGRIND_CALL_NOREDIR_EAX \
670+ __SPECIAL_INSTRUCTION_PREAMBLE \
671+ /* call-noredir *%EAX */ \
672+ "xchgl %%edx,%%edx\n\t"
673+
674+#define VALGRIND_VEX_INJECT_IR() \
675+ do { \
676+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
677+ "xchgl %%edi,%%edi\n\t" \
678+ : : : "cc", "memory" \
679+ ); \
680+ } while (0)
681+
682+#endif /* PLAT_x86_linux || PLAT_x86_darwin || (PLAT_x86_win32 && __GNUC__) */
683+
684+/* ------------------------- x86-Win32 ------------------------- */
685+
686+#if defined(PLAT_x86_win32) && !defined(__GNUC__)
687+
688+typedef
689+ struct {
690+ unsigned int nraddr; /* where's the code? */
691+ }
692+ OrigFn;
693+
694+#if defined(_MSC_VER)
695+
696+#define __SPECIAL_INSTRUCTION_PREAMBLE \
697+ __asm rol edi, 3 __asm rol edi, 13 \
698+ __asm rol edi, 29 __asm rol edi, 19
699+
700+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
701+ _zzq_default, _zzq_request, \
702+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
703+ valgrind_do_client_request_expr((uintptr_t)(_zzq_default), \
704+ (uintptr_t)(_zzq_request), (uintptr_t)(_zzq_arg1), \
705+ (uintptr_t)(_zzq_arg2), (uintptr_t)(_zzq_arg3), \
706+ (uintptr_t)(_zzq_arg4), (uintptr_t)(_zzq_arg5))
707+
708+static __inline uintptr_t
709+valgrind_do_client_request_expr(uintptr_t _zzq_default, uintptr_t _zzq_request,
710+ uintptr_t _zzq_arg1, uintptr_t _zzq_arg2,
711+ uintptr_t _zzq_arg3, uintptr_t _zzq_arg4,
712+ uintptr_t _zzq_arg5)
713+{
714+ volatile uintptr_t _zzq_args[6];
715+ volatile unsigned int _zzq_result;
716+ _zzq_args[0] = (uintptr_t)(_zzq_request);
717+ _zzq_args[1] = (uintptr_t)(_zzq_arg1);
718+ _zzq_args[2] = (uintptr_t)(_zzq_arg2);
719+ _zzq_args[3] = (uintptr_t)(_zzq_arg3);
720+ _zzq_args[4] = (uintptr_t)(_zzq_arg4);
721+ _zzq_args[5] = (uintptr_t)(_zzq_arg5);
722+ __asm { __asm lea eax, _zzq_args __asm mov edx, _zzq_default
723+ __SPECIAL_INSTRUCTION_PREAMBLE
724+ /* %EDX = client_request ( %EAX ) */
725+ __asm xchg ebx,ebx
726+ __asm mov _zzq_result, edx
727+ }
728+ return _zzq_result;
729+}
730+
731+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
732+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
733+ volatile unsigned int __addr; \
734+ __asm { __SPECIAL_INSTRUCTION_PREAMBLE \
735+ /* %EAX = guest_NRADDR */ \
736+ __asm xchg ecx,ecx \
737+ __asm mov __addr, eax \
738+ } \
739+ _zzq_orig->nraddr = __addr; \
740+ }
741+
742+#define VALGRIND_CALL_NOREDIR_EAX ERROR
743+
744+#define VALGRIND_VEX_INJECT_IR() \
745+ do { \
746+ __asm { __SPECIAL_INSTRUCTION_PREAMBLE \
747+ __asm xchg edi,edi \
748+ } \
749+ } while (0)
750+
751+#else
752+#error Unsupported compiler.
753+#endif
754+
755+#endif /* PLAT_x86_win32 */
756+
757+/* ------------------------ amd64-{linux,darwin} --------------- */
758+
759+#if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin) \
760+ || (defined(PLAT_amd64_win64) && defined(__GNUC__))
761+
762+typedef
763+ struct {
764+ unsigned long long int nraddr; /* where's the code? */
765+ }
766+ OrigFn;
767+
768+#define __SPECIAL_INSTRUCTION_PREAMBLE \
769+ "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \
770+ "rolq $61, %%rdi ; rolq $51, %%rdi\n\t"
771+
772+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
773+ _zzq_default, _zzq_request, \
774+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
775+ __extension__ \
776+ ({ volatile unsigned long long int _zzq_args[6]; \
777+ volatile unsigned long long int _zzq_result; \
778+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
779+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
780+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
781+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
782+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
783+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
784+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
785+ /* %RDX = client_request ( %RAX ) */ \
786+ "xchgq %%rbx,%%rbx" \
787+ : "=d" (_zzq_result) \
788+ : "a" (&_zzq_args[0]), "0" (_zzq_default) \
789+ : "cc", "memory" \
790+ ); \
791+ _zzq_result; \
792+ })
793+
794+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
795+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
796+ volatile unsigned long long int __addr; \
797+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
798+ /* %RAX = guest_NRADDR */ \
799+ "xchgq %%rcx,%%rcx" \
800+ : "=a" (__addr) \
801+ : \
802+ : "cc", "memory" \
803+ ); \
804+ _zzq_orig->nraddr = __addr; \
805+ }
806+
807+#define VALGRIND_CALL_NOREDIR_RAX \
808+ __SPECIAL_INSTRUCTION_PREAMBLE \
809+ /* call-noredir *%RAX */ \
810+ "xchgq %%rdx,%%rdx\n\t"
811+
812+#define VALGRIND_VEX_INJECT_IR() \
813+ do { \
814+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
815+ "xchgq %%rdi,%%rdi\n\t" \
816+ : : : "cc", "memory" \
817+ ); \
818+ } while (0)
819+
820+#endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
821+
822+/* ------------------------- amd64-Win64 ------------------------- */
823+
824+#if defined(PLAT_amd64_win64) && !defined(__GNUC__)
825+
826+#error Unsupported compiler.
827+
828+#endif /* PLAT_amd64_win64 */
829+
830+/* ------------------------ ppc32-linux ------------------------ */
831+
832+#if defined(PLAT_ppc32_linux)
833+
834+typedef
835+ struct {
836+ unsigned int nraddr; /* where's the code? */
837+ }
838+ OrigFn;
839+
840+#define __SPECIAL_INSTRUCTION_PREAMBLE \
841+ "rlwinm 0,0,3,0,31 ; rlwinm 0,0,13,0,31\n\t" \
842+ "rlwinm 0,0,29,0,31 ; rlwinm 0,0,19,0,31\n\t"
843+
844+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
845+ _zzq_default, _zzq_request, \
846+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
847+ \
848+ __extension__ \
849+ ({ unsigned int _zzq_args[6]; \
850+ unsigned int _zzq_result; \
851+ unsigned int* _zzq_ptr; \
852+ _zzq_args[0] = (unsigned int)(_zzq_request); \
853+ _zzq_args[1] = (unsigned int)(_zzq_arg1); \
854+ _zzq_args[2] = (unsigned int)(_zzq_arg2); \
855+ _zzq_args[3] = (unsigned int)(_zzq_arg3); \
856+ _zzq_args[4] = (unsigned int)(_zzq_arg4); \
857+ _zzq_args[5] = (unsigned int)(_zzq_arg5); \
858+ _zzq_ptr = _zzq_args; \
859+ __asm__ volatile("mr 3,%1\n\t" /*default*/ \
860+ "mr 4,%2\n\t" /*ptr*/ \
861+ __SPECIAL_INSTRUCTION_PREAMBLE \
862+ /* %R3 = client_request ( %R4 ) */ \
863+ "or 1,1,1\n\t" \
864+ "mr %0,3" /*result*/ \
865+ : "=b" (_zzq_result) \
866+ : "b" (_zzq_default), "b" (_zzq_ptr) \
867+ : "cc", "memory", "r3", "r4"); \
868+ _zzq_result; \
869+ })
870+
871+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
872+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
873+ unsigned int __addr; \
874+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
875+ /* %R3 = guest_NRADDR */ \
876+ "or 2,2,2\n\t" \
877+ "mr %0,3" \
878+ : "=b" (__addr) \
879+ : \
880+ : "cc", "memory", "r3" \
881+ ); \
882+ _zzq_orig->nraddr = __addr; \
883+ }
884+
885+#define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
886+ __SPECIAL_INSTRUCTION_PREAMBLE \
887+ /* branch-and-link-to-noredir *%R11 */ \
888+ "or 3,3,3\n\t"
889+
890+#define VALGRIND_VEX_INJECT_IR() \
891+ do { \
892+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
893+ "or 5,5,5\n\t" \
894+ ); \
895+ } while (0)
896+
897+#endif /* PLAT_ppc32_linux */
898+
899+/* ------------------------ ppc64-linux ------------------------ */
900+
901+#if defined(PLAT_ppc64be_linux)
902+
903+typedef
904+ struct {
905+ unsigned long long int nraddr; /* where's the code? */
906+ unsigned long long int r2; /* what tocptr do we need? */
907+ }
908+ OrigFn;
909+
910+#define __SPECIAL_INSTRUCTION_PREAMBLE \
911+ "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \
912+ "rotldi 0,0,61 ; rotldi 0,0,51\n\t"
913+
914+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
915+ _zzq_default, _zzq_request, \
916+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
917+ \
918+ __extension__ \
919+ ({ unsigned long long int _zzq_args[6]; \
920+ unsigned long long int _zzq_result; \
921+ unsigned long long int* _zzq_ptr; \
922+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
923+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
924+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
925+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
926+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
927+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
928+ _zzq_ptr = _zzq_args; \
929+ __asm__ volatile("mr 3,%1\n\t" /*default*/ \
930+ "mr 4,%2\n\t" /*ptr*/ \
931+ __SPECIAL_INSTRUCTION_PREAMBLE \
932+ /* %R3 = client_request ( %R4 ) */ \
933+ "or 1,1,1\n\t" \
934+ "mr %0,3" /*result*/ \
935+ : "=b" (_zzq_result) \
936+ : "b" (_zzq_default), "b" (_zzq_ptr) \
937+ : "cc", "memory", "r3", "r4"); \
938+ _zzq_result; \
939+ })
940+
941+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
942+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
943+ unsigned long long int __addr; \
944+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
945+ /* %R3 = guest_NRADDR */ \
946+ "or 2,2,2\n\t" \
947+ "mr %0,3" \
948+ : "=b" (__addr) \
949+ : \
950+ : "cc", "memory", "r3" \
951+ ); \
952+ _zzq_orig->nraddr = __addr; \
953+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
954+ /* %R3 = guest_NRADDR_GPR2 */ \
955+ "or 4,4,4\n\t" \
956+ "mr %0,3" \
957+ : "=b" (__addr) \
958+ : \
959+ : "cc", "memory", "r3" \
960+ ); \
961+ _zzq_orig->r2 = __addr; \
962+ }
963+
964+#define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
965+ __SPECIAL_INSTRUCTION_PREAMBLE \
966+ /* branch-and-link-to-noredir *%R11 */ \
967+ "or 3,3,3\n\t"
968+
969+#define VALGRIND_VEX_INJECT_IR() \
970+ do { \
971+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
972+ "or 5,5,5\n\t" \
973+ ); \
974+ } while (0)
975+
976+#endif /* PLAT_ppc64be_linux */
977+
978+#if defined(PLAT_ppc64le_linux)
979+
980+typedef
981+ struct {
982+ unsigned long long int nraddr; /* where's the code? */
983+ unsigned long long int r2; /* what tocptr do we need? */
984+ }
985+ OrigFn;
986+
987+#define __SPECIAL_INSTRUCTION_PREAMBLE \
988+ "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \
989+ "rotldi 0,0,61 ; rotldi 0,0,51\n\t"
990+
991+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
992+ _zzq_default, _zzq_request, \
993+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
994+ \
995+ __extension__ \
996+ ({ unsigned long long int _zzq_args[6]; \
997+ unsigned long long int _zzq_result; \
998+ unsigned long long int* _zzq_ptr; \
999+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
1000+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
1001+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
1002+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
1003+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
1004+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
1005+ _zzq_ptr = _zzq_args; \
1006+ __asm__ volatile("mr 3,%1\n\t" /*default*/ \
1007+ "mr 4,%2\n\t" /*ptr*/ \
1008+ __SPECIAL_INSTRUCTION_PREAMBLE \
1009+ /* %R3 = client_request ( %R4 ) */ \
1010+ "or 1,1,1\n\t" \
1011+ "mr %0,3" /*result*/ \
1012+ : "=b" (_zzq_result) \
1013+ : "b" (_zzq_default), "b" (_zzq_ptr) \
1014+ : "cc", "memory", "r3", "r4"); \
1015+ _zzq_result; \
1016+ })
1017+
1018+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1019+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1020+ unsigned long long int __addr; \
1021+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1022+ /* %R3 = guest_NRADDR */ \
1023+ "or 2,2,2\n\t" \
1024+ "mr %0,3" \
1025+ : "=b" (__addr) \
1026+ : \
1027+ : "cc", "memory", "r3" \
1028+ ); \
1029+ _zzq_orig->nraddr = __addr; \
1030+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1031+ /* %R3 = guest_NRADDR_GPR2 */ \
1032+ "or 4,4,4\n\t" \
1033+ "mr %0,3" \
1034+ : "=b" (__addr) \
1035+ : \
1036+ : "cc", "memory", "r3" \
1037+ ); \
1038+ _zzq_orig->r2 = __addr; \
1039+ }
1040+
1041+#define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
1042+ __SPECIAL_INSTRUCTION_PREAMBLE \
1043+ /* branch-and-link-to-noredir *%R12 */ \
1044+ "or 3,3,3\n\t"
1045+
1046+#define VALGRIND_VEX_INJECT_IR() \
1047+ do { \
1048+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1049+ "or 5,5,5\n\t" \
1050+ ); \
1051+ } while (0)
1052+
1053+#endif /* PLAT_ppc64le_linux */
1054+
1055+/* ------------------------- arm-linux ------------------------- */
1056+
1057+#if defined(PLAT_arm_linux)
1058+
1059+typedef
1060+ struct {
1061+ unsigned int nraddr; /* where's the code? */
1062+ }
1063+ OrigFn;
1064+
1065+#define __SPECIAL_INSTRUCTION_PREAMBLE \
1066+ "mov r12, r12, ror #3 ; mov r12, r12, ror #13 \n\t" \
1067+ "mov r12, r12, ror #29 ; mov r12, r12, ror #19 \n\t"
1068+
1069+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
1070+ _zzq_default, _zzq_request, \
1071+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
1072+ \
1073+ __extension__ \
1074+ ({volatile unsigned int _zzq_args[6]; \
1075+ volatile unsigned int _zzq_result; \
1076+ _zzq_args[0] = (unsigned int)(_zzq_request); \
1077+ _zzq_args[1] = (unsigned int)(_zzq_arg1); \
1078+ _zzq_args[2] = (unsigned int)(_zzq_arg2); \
1079+ _zzq_args[3] = (unsigned int)(_zzq_arg3); \
1080+ _zzq_args[4] = (unsigned int)(_zzq_arg4); \
1081+ _zzq_args[5] = (unsigned int)(_zzq_arg5); \
1082+ __asm__ volatile("mov r3, %1\n\t" /*default*/ \
1083+ "mov r4, %2\n\t" /*ptr*/ \
1084+ __SPECIAL_INSTRUCTION_PREAMBLE \
1085+ /* R3 = client_request ( R4 ) */ \
1086+ "orr r10, r10, r10\n\t" \
1087+ "mov %0, r3" /*result*/ \
1088+ : "=r" (_zzq_result) \
1089+ : "r" (_zzq_default), "r" (&_zzq_args[0]) \
1090+ : "cc","memory", "r3", "r4"); \
1091+ _zzq_result; \
1092+ })
1093+
1094+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1095+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1096+ unsigned int __addr; \
1097+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1098+ /* R3 = guest_NRADDR */ \
1099+ "orr r11, r11, r11\n\t" \
1100+ "mov %0, r3" \
1101+ : "=r" (__addr) \
1102+ : \
1103+ : "cc", "memory", "r3" \
1104+ ); \
1105+ _zzq_orig->nraddr = __addr; \
1106+ }
1107+
1108+#define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
1109+ __SPECIAL_INSTRUCTION_PREAMBLE \
1110+ /* branch-and-link-to-noredir *%R4 */ \
1111+ "orr r12, r12, r12\n\t"
1112+
1113+#define VALGRIND_VEX_INJECT_IR() \
1114+ do { \
1115+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1116+ "orr r9, r9, r9\n\t" \
1117+ : : : "cc", "memory" \
1118+ ); \
1119+ } while (0)
1120+
1121+#endif /* PLAT_arm_linux */
1122+
1123+/* ------------------------ arm64-linux ------------------------- */
1124+
1125+#if defined(PLAT_arm64_linux)
1126+
1127+typedef
1128+ struct {
1129+ unsigned long long int nraddr; /* where's the code? */
1130+ }
1131+ OrigFn;
1132+
1133+#define __SPECIAL_INSTRUCTION_PREAMBLE \
1134+ "ror x12, x12, #3 ; ror x12, x12, #13 \n\t" \
1135+ "ror x12, x12, #51 ; ror x12, x12, #61 \n\t"
1136+
1137+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
1138+ _zzq_default, _zzq_request, \
1139+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
1140+ \
1141+ __extension__ \
1142+ ({volatile unsigned long long int _zzq_args[6]; \
1143+ volatile unsigned long long int _zzq_result; \
1144+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
1145+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
1146+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
1147+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
1148+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
1149+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
1150+ __asm__ volatile("mov x3, %1\n\t" /*default*/ \
1151+ "mov x4, %2\n\t" /*ptr*/ \
1152+ __SPECIAL_INSTRUCTION_PREAMBLE \
1153+ /* X3 = client_request ( X4 ) */ \
1154+ "orr x10, x10, x10\n\t" \
1155+ "mov %0, x3" /*result*/ \
1156+ : "=r" (_zzq_result) \
1157+ : "r" (_zzq_default), "r" (&_zzq_args[0]) \
1158+ : "cc","memory", "x3", "x4"); \
1159+ _zzq_result; \
1160+ })
1161+
1162+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1163+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1164+ unsigned long long int __addr; \
1165+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1166+ /* X3 = guest_NRADDR */ \
1167+ "orr x11, x11, x11\n\t" \
1168+ "mov %0, x3" \
1169+ : "=r" (__addr) \
1170+ : \
1171+ : "cc", "memory", "x3" \
1172+ ); \
1173+ _zzq_orig->nraddr = __addr; \
1174+ }
1175+
1176+#define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
1177+ __SPECIAL_INSTRUCTION_PREAMBLE \
1178+ /* branch-and-link-to-noredir X8 */ \
1179+ "orr x12, x12, x12\n\t"
1180+
1181+#define VALGRIND_VEX_INJECT_IR() \
1182+ do { \
1183+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1184+ "orr x9, x9, x9\n\t" \
1185+ : : : "cc", "memory" \
1186+ ); \
1187+ } while (0)
1188+
1189+#endif /* PLAT_arm64_linux */
1190+
1191+/* ------------------------ s390x-linux ------------------------ */
1192+
1193+#if defined(PLAT_s390x_linux)
1194+
1195+typedef
1196+ struct {
1197+ unsigned long long int nraddr; /* where's the code? */
1198+ }
1199+ OrigFn;
1200+
1201+/* __SPECIAL_INSTRUCTION_PREAMBLE will be used to identify Valgrind specific
1202+ * code. This detection is implemented in platform specific toIR.c
1203+ * (e.g. VEX/priv/guest_s390_decoder.c).
1204+ */
1205+#define __SPECIAL_INSTRUCTION_PREAMBLE \
1206+ "lr 15,15\n\t" \
1207+ "lr 1,1\n\t" \
1208+ "lr 2,2\n\t" \
1209+ "lr 3,3\n\t"
1210+
1211+#define __CLIENT_REQUEST_CODE "lr 2,2\n\t"
1212+#define __GET_NR_CONTEXT_CODE "lr 3,3\n\t"
1213+#define __CALL_NO_REDIR_CODE "lr 4,4\n\t"
1214+#define __VEX_INJECT_IR_CODE "lr 5,5\n\t"
1215+
1216+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
1217+ _zzq_default, _zzq_request, \
1218+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
1219+ __extension__ \
1220+ ({volatile unsigned long long int _zzq_args[6]; \
1221+ volatile unsigned long long int _zzq_result; \
1222+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
1223+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
1224+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
1225+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
1226+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
1227+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
1228+ __asm__ volatile(/* r2 = args */ \
1229+ "lgr 2,%1\n\t" \
1230+ /* r3 = default */ \
1231+ "lgr 3,%2\n\t" \
1232+ __SPECIAL_INSTRUCTION_PREAMBLE \
1233+ __CLIENT_REQUEST_CODE \
1234+ /* results = r3 */ \
1235+ "lgr %0, 3\n\t" \
1236+ : "=d" (_zzq_result) \
1237+ : "a" (&_zzq_args[0]), "0" (_zzq_default) \
1238+ : "cc", "2", "3", "memory" \
1239+ ); \
1240+ _zzq_result; \
1241+ })
1242+
1243+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1244+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1245+ volatile unsigned long long int __addr; \
1246+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1247+ __GET_NR_CONTEXT_CODE \
1248+ "lgr %0, 3\n\t" \
1249+ : "=a" (__addr) \
1250+ : \
1251+ : "cc", "3", "memory" \
1252+ ); \
1253+ _zzq_orig->nraddr = __addr; \
1254+ }
1255+
1256+#define VALGRIND_CALL_NOREDIR_R1 \
1257+ __SPECIAL_INSTRUCTION_PREAMBLE \
1258+ __CALL_NO_REDIR_CODE
1259+
1260+#define VALGRIND_VEX_INJECT_IR() \
1261+ do { \
1262+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1263+ __VEX_INJECT_IR_CODE); \
1264+ } while (0)
1265+
1266+#endif /* PLAT_s390x_linux */
1267+
1268+/* ------------------------- mips32-linux ---------------- */
1269+
1270+#if defined(PLAT_mips32_linux)
1271+
1272+typedef
1273+ struct {
1274+ unsigned int nraddr; /* where's the code? */
1275+ }
1276+ OrigFn;
1277+
1278+/* .word 0x342
1279+ * .word 0x742
1280+ * .word 0xC2
1281+ * .word 0x4C2*/
1282+#define __SPECIAL_INSTRUCTION_PREAMBLE \
1283+ "srl $0, $0, 13\n\t" \
1284+ "srl $0, $0, 29\n\t" \
1285+ "srl $0, $0, 3\n\t" \
1286+ "srl $0, $0, 19\n\t"
1287+
1288+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
1289+ _zzq_default, _zzq_request, \
1290+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
1291+ __extension__ \
1292+ ({ volatile unsigned int _zzq_args[6]; \
1293+ volatile unsigned int _zzq_result; \
1294+ _zzq_args[0] = (unsigned int)(_zzq_request); \
1295+ _zzq_args[1] = (unsigned int)(_zzq_arg1); \
1296+ _zzq_args[2] = (unsigned int)(_zzq_arg2); \
1297+ _zzq_args[3] = (unsigned int)(_zzq_arg3); \
1298+ _zzq_args[4] = (unsigned int)(_zzq_arg4); \
1299+ _zzq_args[5] = (unsigned int)(_zzq_arg5); \
1300+ __asm__ volatile("move $11, %1\n\t" /*default*/ \
1301+ "move $12, %2\n\t" /*ptr*/ \
1302+ __SPECIAL_INSTRUCTION_PREAMBLE \
1303+ /* T3 = client_request ( T4 ) */ \
1304+ "or $13, $13, $13\n\t" \
1305+ "move %0, $11\n\t" /*result*/ \
1306+ : "=r" (_zzq_result) \
1307+ : "r" (_zzq_default), "r" (&_zzq_args[0]) \
1308+ : "$11", "$12"); \
1309+ _zzq_result; \
1310+ })
1311+
1312+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1313+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1314+ volatile unsigned int __addr; \
1315+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1316+ /* %t9 = guest_NRADDR */ \
1317+ "or $14, $14, $14\n\t" \
1318+ "move %0, $11" /*result*/ \
1319+ : "=r" (__addr) \
1320+ : \
1321+ : "$11" \
1322+ ); \
1323+ _zzq_orig->nraddr = __addr; \
1324+ }
1325+
1326+#define VALGRIND_CALL_NOREDIR_T9 \
1327+ __SPECIAL_INSTRUCTION_PREAMBLE \
1328+ /* call-noredir *%t9 */ \
1329+ "or $15, $15, $15\n\t"
1330+
1331+#define VALGRIND_VEX_INJECT_IR() \
1332+ do { \
1333+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1334+ "or $11, $11, $11\n\t" \
1335+ ); \
1336+ } while (0)
1337+
1338+
1339+#endif /* PLAT_mips32_linux */
1340+
1341+/* ------------------------- mips64-linux ---------------- */
1342+
1343+#if defined(PLAT_mips64_linux)
1344+
1345+typedef
1346+ struct {
1347+ unsigned long long nraddr; /* where's the code? */
1348+ }
1349+ OrigFn;
1350+
1351+/* dsll $0,$0, 3
1352+ * dsll $0,$0, 13
1353+ * dsll $0,$0, 29
1354+ * dsll $0,$0, 19*/
1355+#define __SPECIAL_INSTRUCTION_PREAMBLE \
1356+ "dsll $0,$0, 3 ; dsll $0,$0,13\n\t" \
1357+ "dsll $0,$0,29 ; dsll $0,$0,19\n\t"
1358+
1359+#define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
1360+ _zzq_default, _zzq_request, \
1361+ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
1362+ __extension__ \
1363+ ({ volatile unsigned long long int _zzq_args[6]; \
1364+ volatile unsigned long long int _zzq_result; \
1365+ _zzq_args[0] = (unsigned long long int)(_zzq_request); \
1366+ _zzq_args[1] = (unsigned long long int)(_zzq_arg1); \
1367+ _zzq_args[2] = (unsigned long long int)(_zzq_arg2); \
1368+ _zzq_args[3] = (unsigned long long int)(_zzq_arg3); \
1369+ _zzq_args[4] = (unsigned long long int)(_zzq_arg4); \
1370+ _zzq_args[5] = (unsigned long long int)(_zzq_arg5); \
1371+ __asm__ volatile("move $11, %1\n\t" /*default*/ \
1372+ "move $12, %2\n\t" /*ptr*/ \
1373+ __SPECIAL_INSTRUCTION_PREAMBLE \
1374+ /* $11 = client_request ( $12 ) */ \
1375+ "or $13, $13, $13\n\t" \
1376+ "move %0, $11\n\t" /*result*/ \
1377+ : "=r" (_zzq_result) \
1378+ : "r" (_zzq_default), "r" (&_zzq_args[0]) \
1379+ : "$11", "$12"); \
1380+ _zzq_result; \
1381+ })
1382+
1383+#define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
1384+ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
1385+ volatile unsigned long long int __addr; \
1386+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1387+ /* $11 = guest_NRADDR */ \
1388+ "or $14, $14, $14\n\t" \
1389+ "move %0, $11" /*result*/ \
1390+ : "=r" (__addr) \
1391+ : \
1392+ : "$11"); \
1393+ _zzq_orig->nraddr = __addr; \
1394+ }
1395+
1396+#define VALGRIND_CALL_NOREDIR_T9 \
1397+ __SPECIAL_INSTRUCTION_PREAMBLE \
1398+ /* call-noredir $25 */ \
1399+ "or $15, $15, $15\n\t"
1400+
1401+#define VALGRIND_VEX_INJECT_IR() \
1402+ do { \
1403+ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
1404+ "or $11, $11, $11\n\t" \
1405+ ); \
1406+ } while (0)
1407+
1408+#endif /* PLAT_mips64_linux */
1409+
1410+/* Insert assembly code for other platforms here... */
1411+
1412+#endif /* NVALGRIND */
1413+
1414+
1415+/* ------------------------------------------------------------------ */
1416+/* PLATFORM SPECIFICS for FUNCTION WRAPPING. This is all very */
1417+/* ugly. It's the least-worst tradeoff I can think of. */
1418+/* ------------------------------------------------------------------ */
1419+
1420+/* This section defines magic (a.k.a. appalling-hack) macros for
1421+   making guaranteed-no-redirection calls, so as to get from function
1422+   wrappers to the functions they are wrapping. The whole point is to
1423+ construct standard call sequences, but to do the call itself with a
1424+ special no-redirect call pseudo-instruction that the JIT
1425+ understands and handles specially. This section is long and
1426+ repetitious, and I can't see a way to make it shorter.
1427+
1428+ The naming scheme is as follows:
1429+
1430+ CALL_FN_{W,v}_{v,W,WW,WWW,WWWW,5W,6W,7W,etc}
1431+
1432+ 'W' stands for "word" and 'v' for "void". Hence there are
1433+ different macros for calling arity 0, 1, 2, 3, 4, etc, functions,
1434+ and for each, the possibility of returning a word-typed result, or
1435+ no result.
1436+*/
1437+
1438+/* Use these to write the name of your wrapper. NOTE: duplicates
1439+ VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. NOTE also: inserts
1440+ the default behaviour equivalence class tag "0000" into the name.
1441+ See pub_tool_redir.h for details -- normally you don't need to
1442+ think about this, though. */
1443+
1444+/* Use an extra level of macroisation so as to ensure the soname/fnname
1445+ args are fully macro-expanded before pasting them together. */
1446+#define VG_CONCAT4(_aa,_bb,_cc,_dd) _aa##_bb##_cc##_dd
1447+
1448+#define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \
1449+ VG_CONCAT4(_vgw00000ZU_,soname,_,fnname)
1450+
1451+#define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \
1452+ VG_CONCAT4(_vgw00000ZZ_,soname,_,fnname)
1453+
1454+/* Use this macro from within a wrapper function to collect the
1455+ context (address and possibly other info) of the original function.
1456+ Once you have that you can then use it in one of the CALL_FN_
1457+ macros. The type of the argument _lval is OrigFn. */
1458+#define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval)
1459+
1460+/* Also provide end-user facilities for function replacement, rather
1461+ than wrapping. A replacement function differs from a wrapper in
1462+ that it has no way to get hold of the original function being
1463+ called, and hence no way to call onwards to it. In a replacement
1464+ function, VALGRIND_GET_ORIG_FN always returns zero. */
1465+
1466+#define I_REPLACE_SONAME_FNNAME_ZU(soname,fnname) \
1467+ VG_CONCAT4(_vgr00000ZU_,soname,_,fnname)
1468+
1469+#define I_REPLACE_SONAME_FNNAME_ZZ(soname,fnname) \
1470+ VG_CONCAT4(_vgr00000ZZ_,soname,_,fnname)
1471+
1472+/* Derivatives of the main macros below, for calling functions
1473+ returning void. */
1474+
1475+#define CALL_FN_v_v(fnptr) \
1476+ do { volatile unsigned long _junk; \
1477+ CALL_FN_W_v(_junk,fnptr); } while (0)
1478+
1479+#define CALL_FN_v_W(fnptr, arg1) \
1480+ do { volatile unsigned long _junk; \
1481+ CALL_FN_W_W(_junk,fnptr,arg1); } while (0)
1482+
1483+#define CALL_FN_v_WW(fnptr, arg1,arg2) \
1484+ do { volatile unsigned long _junk; \
1485+ CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0)
1486+
1487+#define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \
1488+ do { volatile unsigned long _junk; \
1489+ CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0)
1490+
1491+#define CALL_FN_v_WWWW(fnptr, arg1,arg2,arg3,arg4) \
1492+ do { volatile unsigned long _junk; \
1493+ CALL_FN_W_WWWW(_junk,fnptr,arg1,arg2,arg3,arg4); } while (0)
1494+
1495+#define CALL_FN_v_5W(fnptr, arg1,arg2,arg3,arg4,arg5) \
1496+ do { volatile unsigned long _junk; \
1497+ CALL_FN_W_5W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5); } while (0)
1498+
1499+#define CALL_FN_v_6W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6) \
1500+ do { volatile unsigned long _junk; \
1501+ CALL_FN_W_6W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6); } while (0)
1502+
1503+#define CALL_FN_v_7W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6,arg7) \
1504+ do { volatile unsigned long _junk; \
1505+ CALL_FN_W_7W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6,arg7); } while (0)
1506+
1507+/* ------------------------- x86-{linux,darwin} ---------------- */
1508+
1509+#if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin)
1510+
1511+/* These regs are trashed by the hidden call. No need to mention eax:
1512+   gcc can already see that, and listing it makes gcc bomb. */
1513+#define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx"
1514+
1515+/* Macros to save and align the stack before making a function
1516+ call and restore it afterwards as gcc may not keep the stack
1517+ pointer aligned if it doesn't realise calls are being made
1518+ to other functions. */
1519+
1520+#define VALGRIND_ALIGN_STACK \
1521+ "movl %%esp,%%edi\n\t" \
1522+ "andl $0xfffffff0,%%esp\n\t"
1523+#define VALGRIND_RESTORE_STACK \
1524+ "movl %%edi,%%esp\n\t"
1525+
1526+/* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned
1527+ long) == 4. */
1528+
1529+#define CALL_FN_W_v(lval, orig) \
1530+ do { \
1531+ volatile OrigFn _orig = (orig); \
1532+ volatile unsigned long _argvec[1]; \
1533+ volatile unsigned long _res; \
1534+ _argvec[0] = (unsigned long)_orig.nraddr; \
1535+ __asm__ volatile( \
1536+ VALGRIND_ALIGN_STACK \
1537+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1538+ VALGRIND_CALL_NOREDIR_EAX \
1539+ VALGRIND_RESTORE_STACK \
1540+ : /*out*/ "=a" (_res) \
1541+ : /*in*/ "a" (&_argvec[0]) \
1542+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1543+ ); \
1544+ lval = (__typeof__(lval)) _res; \
1545+ } while (0)
1546+
1547+#define CALL_FN_W_W(lval, orig, arg1) \
1548+ do { \
1549+ volatile OrigFn _orig = (orig); \
1550+ volatile unsigned long _argvec[2]; \
1551+ volatile unsigned long _res; \
1552+ _argvec[0] = (unsigned long)_orig.nraddr; \
1553+ _argvec[1] = (unsigned long)(arg1); \
1554+ __asm__ volatile( \
1555+ VALGRIND_ALIGN_STACK \
1556+ "subl $12, %%esp\n\t" \
1557+ "pushl 4(%%eax)\n\t" \
1558+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1559+ VALGRIND_CALL_NOREDIR_EAX \
1560+ VALGRIND_RESTORE_STACK \
1561+ : /*out*/ "=a" (_res) \
1562+ : /*in*/ "a" (&_argvec[0]) \
1563+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1564+ ); \
1565+ lval = (__typeof__(lval)) _res; \
1566+ } while (0)
1567+
1568+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
1569+ do { \
1570+ volatile OrigFn _orig = (orig); \
1571+ volatile unsigned long _argvec[3]; \
1572+ volatile unsigned long _res; \
1573+ _argvec[0] = (unsigned long)_orig.nraddr; \
1574+ _argvec[1] = (unsigned long)(arg1); \
1575+ _argvec[2] = (unsigned long)(arg2); \
1576+ __asm__ volatile( \
1577+ VALGRIND_ALIGN_STACK \
1578+ "subl $8, %%esp\n\t" \
1579+ "pushl 8(%%eax)\n\t" \
1580+ "pushl 4(%%eax)\n\t" \
1581+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1582+ VALGRIND_CALL_NOREDIR_EAX \
1583+ VALGRIND_RESTORE_STACK \
1584+ : /*out*/ "=a" (_res) \
1585+ : /*in*/ "a" (&_argvec[0]) \
1586+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1587+ ); \
1588+ lval = (__typeof__(lval)) _res; \
1589+ } while (0)
1590+
1591+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
1592+ do { \
1593+ volatile OrigFn _orig = (orig); \
1594+ volatile unsigned long _argvec[4]; \
1595+ volatile unsigned long _res; \
1596+ _argvec[0] = (unsigned long)_orig.nraddr; \
1597+ _argvec[1] = (unsigned long)(arg1); \
1598+ _argvec[2] = (unsigned long)(arg2); \
1599+ _argvec[3] = (unsigned long)(arg3); \
1600+ __asm__ volatile( \
1601+ VALGRIND_ALIGN_STACK \
1602+ "subl $4, %%esp\n\t" \
1603+ "pushl 12(%%eax)\n\t" \
1604+ "pushl 8(%%eax)\n\t" \
1605+ "pushl 4(%%eax)\n\t" \
1606+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1607+ VALGRIND_CALL_NOREDIR_EAX \
1608+ VALGRIND_RESTORE_STACK \
1609+ : /*out*/ "=a" (_res) \
1610+ : /*in*/ "a" (&_argvec[0]) \
1611+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1612+ ); \
1613+ lval = (__typeof__(lval)) _res; \
1614+ } while (0)
1615+
1616+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
1617+ do { \
1618+ volatile OrigFn _orig = (orig); \
1619+ volatile unsigned long _argvec[5]; \
1620+ volatile unsigned long _res; \
1621+ _argvec[0] = (unsigned long)_orig.nraddr; \
1622+ _argvec[1] = (unsigned long)(arg1); \
1623+ _argvec[2] = (unsigned long)(arg2); \
1624+ _argvec[3] = (unsigned long)(arg3); \
1625+ _argvec[4] = (unsigned long)(arg4); \
1626+ __asm__ volatile( \
1627+ VALGRIND_ALIGN_STACK \
1628+ "pushl 16(%%eax)\n\t" \
1629+ "pushl 12(%%eax)\n\t" \
1630+ "pushl 8(%%eax)\n\t" \
1631+ "pushl 4(%%eax)\n\t" \
1632+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1633+ VALGRIND_CALL_NOREDIR_EAX \
1634+ VALGRIND_RESTORE_STACK \
1635+ : /*out*/ "=a" (_res) \
1636+ : /*in*/ "a" (&_argvec[0]) \
1637+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1638+ ); \
1639+ lval = (__typeof__(lval)) _res; \
1640+ } while (0)
1641+
1642+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
1643+ do { \
1644+ volatile OrigFn _orig = (orig); \
1645+ volatile unsigned long _argvec[6]; \
1646+ volatile unsigned long _res; \
1647+ _argvec[0] = (unsigned long)_orig.nraddr; \
1648+ _argvec[1] = (unsigned long)(arg1); \
1649+ _argvec[2] = (unsigned long)(arg2); \
1650+ _argvec[3] = (unsigned long)(arg3); \
1651+ _argvec[4] = (unsigned long)(arg4); \
1652+ _argvec[5] = (unsigned long)(arg5); \
1653+ __asm__ volatile( \
1654+ VALGRIND_ALIGN_STACK \
1655+ "subl $12, %%esp\n\t" \
1656+ "pushl 20(%%eax)\n\t" \
1657+ "pushl 16(%%eax)\n\t" \
1658+ "pushl 12(%%eax)\n\t" \
1659+ "pushl 8(%%eax)\n\t" \
1660+ "pushl 4(%%eax)\n\t" \
1661+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1662+ VALGRIND_CALL_NOREDIR_EAX \
1663+ VALGRIND_RESTORE_STACK \
1664+ : /*out*/ "=a" (_res) \
1665+ : /*in*/ "a" (&_argvec[0]) \
1666+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1667+ ); \
1668+ lval = (__typeof__(lval)) _res; \
1669+ } while (0)
1670+
1671+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
1672+ do { \
1673+ volatile OrigFn _orig = (orig); \
1674+ volatile unsigned long _argvec[7]; \
1675+ volatile unsigned long _res; \
1676+ _argvec[0] = (unsigned long)_orig.nraddr; \
1677+ _argvec[1] = (unsigned long)(arg1); \
1678+ _argvec[2] = (unsigned long)(arg2); \
1679+ _argvec[3] = (unsigned long)(arg3); \
1680+ _argvec[4] = (unsigned long)(arg4); \
1681+ _argvec[5] = (unsigned long)(arg5); \
1682+ _argvec[6] = (unsigned long)(arg6); \
1683+ __asm__ volatile( \
1684+ VALGRIND_ALIGN_STACK \
1685+ "subl $8, %%esp\n\t" \
1686+ "pushl 24(%%eax)\n\t" \
1687+ "pushl 20(%%eax)\n\t" \
1688+ "pushl 16(%%eax)\n\t" \
1689+ "pushl 12(%%eax)\n\t" \
1690+ "pushl 8(%%eax)\n\t" \
1691+ "pushl 4(%%eax)\n\t" \
1692+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1693+ VALGRIND_CALL_NOREDIR_EAX \
1694+ VALGRIND_RESTORE_STACK \
1695+ : /*out*/ "=a" (_res) \
1696+ : /*in*/ "a" (&_argvec[0]) \
1697+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1698+ ); \
1699+ lval = (__typeof__(lval)) _res; \
1700+ } while (0)
1701+
1702+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1703+ arg7) \
1704+ do { \
1705+ volatile OrigFn _orig = (orig); \
1706+ volatile unsigned long _argvec[8]; \
1707+ volatile unsigned long _res; \
1708+ _argvec[0] = (unsigned long)_orig.nraddr; \
1709+ _argvec[1] = (unsigned long)(arg1); \
1710+ _argvec[2] = (unsigned long)(arg2); \
1711+ _argvec[3] = (unsigned long)(arg3); \
1712+ _argvec[4] = (unsigned long)(arg4); \
1713+ _argvec[5] = (unsigned long)(arg5); \
1714+ _argvec[6] = (unsigned long)(arg6); \
1715+ _argvec[7] = (unsigned long)(arg7); \
1716+ __asm__ volatile( \
1717+ VALGRIND_ALIGN_STACK \
1718+ "subl $4, %%esp\n\t" \
1719+ "pushl 28(%%eax)\n\t" \
1720+ "pushl 24(%%eax)\n\t" \
1721+ "pushl 20(%%eax)\n\t" \
1722+ "pushl 16(%%eax)\n\t" \
1723+ "pushl 12(%%eax)\n\t" \
1724+ "pushl 8(%%eax)\n\t" \
1725+ "pushl 4(%%eax)\n\t" \
1726+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1727+ VALGRIND_CALL_NOREDIR_EAX \
1728+ VALGRIND_RESTORE_STACK \
1729+ : /*out*/ "=a" (_res) \
1730+ : /*in*/ "a" (&_argvec[0]) \
1731+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1732+ ); \
1733+ lval = (__typeof__(lval)) _res; \
1734+ } while (0)
1735+
1736+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1737+ arg7,arg8) \
1738+ do { \
1739+ volatile OrigFn _orig = (orig); \
1740+ volatile unsigned long _argvec[9]; \
1741+ volatile unsigned long _res; \
1742+ _argvec[0] = (unsigned long)_orig.nraddr; \
1743+ _argvec[1] = (unsigned long)(arg1); \
1744+ _argvec[2] = (unsigned long)(arg2); \
1745+ _argvec[3] = (unsigned long)(arg3); \
1746+ _argvec[4] = (unsigned long)(arg4); \
1747+ _argvec[5] = (unsigned long)(arg5); \
1748+ _argvec[6] = (unsigned long)(arg6); \
1749+ _argvec[7] = (unsigned long)(arg7); \
1750+ _argvec[8] = (unsigned long)(arg8); \
1751+ __asm__ volatile( \
1752+ VALGRIND_ALIGN_STACK \
1753+ "pushl 32(%%eax)\n\t" \
1754+ "pushl 28(%%eax)\n\t" \
1755+ "pushl 24(%%eax)\n\t" \
1756+ "pushl 20(%%eax)\n\t" \
1757+ "pushl 16(%%eax)\n\t" \
1758+ "pushl 12(%%eax)\n\t" \
1759+ "pushl 8(%%eax)\n\t" \
1760+ "pushl 4(%%eax)\n\t" \
1761+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1762+ VALGRIND_CALL_NOREDIR_EAX \
1763+ VALGRIND_RESTORE_STACK \
1764+ : /*out*/ "=a" (_res) \
1765+ : /*in*/ "a" (&_argvec[0]) \
1766+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1767+ ); \
1768+ lval = (__typeof__(lval)) _res; \
1769+ } while (0)
1770+
1771+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1772+ arg7,arg8,arg9) \
1773+ do { \
1774+ volatile OrigFn _orig = (orig); \
1775+ volatile unsigned long _argvec[10]; \
1776+ volatile unsigned long _res; \
1777+ _argvec[0] = (unsigned long)_orig.nraddr; \
1778+ _argvec[1] = (unsigned long)(arg1); \
1779+ _argvec[2] = (unsigned long)(arg2); \
1780+ _argvec[3] = (unsigned long)(arg3); \
1781+ _argvec[4] = (unsigned long)(arg4); \
1782+ _argvec[5] = (unsigned long)(arg5); \
1783+ _argvec[6] = (unsigned long)(arg6); \
1784+ _argvec[7] = (unsigned long)(arg7); \
1785+ _argvec[8] = (unsigned long)(arg8); \
1786+ _argvec[9] = (unsigned long)(arg9); \
1787+ __asm__ volatile( \
1788+ VALGRIND_ALIGN_STACK \
1789+ "subl $12, %%esp\n\t" \
1790+ "pushl 36(%%eax)\n\t" \
1791+ "pushl 32(%%eax)\n\t" \
1792+ "pushl 28(%%eax)\n\t" \
1793+ "pushl 24(%%eax)\n\t" \
1794+ "pushl 20(%%eax)\n\t" \
1795+ "pushl 16(%%eax)\n\t" \
1796+ "pushl 12(%%eax)\n\t" \
1797+ "pushl 8(%%eax)\n\t" \
1798+ "pushl 4(%%eax)\n\t" \
1799+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1800+ VALGRIND_CALL_NOREDIR_EAX \
1801+ VALGRIND_RESTORE_STACK \
1802+ : /*out*/ "=a" (_res) \
1803+ : /*in*/ "a" (&_argvec[0]) \
1804+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1805+ ); \
1806+ lval = (__typeof__(lval)) _res; \
1807+ } while (0)
1808+
1809+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1810+ arg7,arg8,arg9,arg10) \
1811+ do { \
1812+ volatile OrigFn _orig = (orig); \
1813+ volatile unsigned long _argvec[11]; \
1814+ volatile unsigned long _res; \
1815+ _argvec[0] = (unsigned long)_orig.nraddr; \
1816+ _argvec[1] = (unsigned long)(arg1); \
1817+ _argvec[2] = (unsigned long)(arg2); \
1818+ _argvec[3] = (unsigned long)(arg3); \
1819+ _argvec[4] = (unsigned long)(arg4); \
1820+ _argvec[5] = (unsigned long)(arg5); \
1821+ _argvec[6] = (unsigned long)(arg6); \
1822+ _argvec[7] = (unsigned long)(arg7); \
1823+ _argvec[8] = (unsigned long)(arg8); \
1824+ _argvec[9] = (unsigned long)(arg9); \
1825+ _argvec[10] = (unsigned long)(arg10); \
1826+ __asm__ volatile( \
1827+ VALGRIND_ALIGN_STACK \
1828+ "subl $8, %%esp\n\t" \
1829+ "pushl 40(%%eax)\n\t" \
1830+ "pushl 36(%%eax)\n\t" \
1831+ "pushl 32(%%eax)\n\t" \
1832+ "pushl 28(%%eax)\n\t" \
1833+ "pushl 24(%%eax)\n\t" \
1834+ "pushl 20(%%eax)\n\t" \
1835+ "pushl 16(%%eax)\n\t" \
1836+ "pushl 12(%%eax)\n\t" \
1837+ "pushl 8(%%eax)\n\t" \
1838+ "pushl 4(%%eax)\n\t" \
1839+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1840+ VALGRIND_CALL_NOREDIR_EAX \
1841+ VALGRIND_RESTORE_STACK \
1842+ : /*out*/ "=a" (_res) \
1843+ : /*in*/ "a" (&_argvec[0]) \
1844+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1845+ ); \
1846+ lval = (__typeof__(lval)) _res; \
1847+ } while (0)
1848+
1849+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
1850+ arg6,arg7,arg8,arg9,arg10, \
1851+ arg11) \
1852+ do { \
1853+ volatile OrigFn _orig = (orig); \
1854+ volatile unsigned long _argvec[12]; \
1855+ volatile unsigned long _res; \
1856+ _argvec[0] = (unsigned long)_orig.nraddr; \
1857+ _argvec[1] = (unsigned long)(arg1); \
1858+ _argvec[2] = (unsigned long)(arg2); \
1859+ _argvec[3] = (unsigned long)(arg3); \
1860+ _argvec[4] = (unsigned long)(arg4); \
1861+ _argvec[5] = (unsigned long)(arg5); \
1862+ _argvec[6] = (unsigned long)(arg6); \
1863+ _argvec[7] = (unsigned long)(arg7); \
1864+ _argvec[8] = (unsigned long)(arg8); \
1865+ _argvec[9] = (unsigned long)(arg9); \
1866+ _argvec[10] = (unsigned long)(arg10); \
1867+ _argvec[11] = (unsigned long)(arg11); \
1868+ __asm__ volatile( \
1869+ VALGRIND_ALIGN_STACK \
1870+ "subl $4, %%esp\n\t" \
1871+ "pushl 44(%%eax)\n\t" \
1872+ "pushl 40(%%eax)\n\t" \
1873+ "pushl 36(%%eax)\n\t" \
1874+ "pushl 32(%%eax)\n\t" \
1875+ "pushl 28(%%eax)\n\t" \
1876+ "pushl 24(%%eax)\n\t" \
1877+ "pushl 20(%%eax)\n\t" \
1878+ "pushl 16(%%eax)\n\t" \
1879+ "pushl 12(%%eax)\n\t" \
1880+ "pushl 8(%%eax)\n\t" \
1881+ "pushl 4(%%eax)\n\t" \
1882+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1883+ VALGRIND_CALL_NOREDIR_EAX \
1884+ VALGRIND_RESTORE_STACK \
1885+ : /*out*/ "=a" (_res) \
1886+ : /*in*/ "a" (&_argvec[0]) \
1887+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1888+ ); \
1889+ lval = (__typeof__(lval)) _res; \
1890+ } while (0)
1891+
1892+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
1893+ arg6,arg7,arg8,arg9,arg10, \
1894+ arg11,arg12) \
1895+ do { \
1896+ volatile OrigFn _orig = (orig); \
1897+ volatile unsigned long _argvec[13]; \
1898+ volatile unsigned long _res; \
1899+ _argvec[0] = (unsigned long)_orig.nraddr; \
1900+ _argvec[1] = (unsigned long)(arg1); \
1901+ _argvec[2] = (unsigned long)(arg2); \
1902+ _argvec[3] = (unsigned long)(arg3); \
1903+ _argvec[4] = (unsigned long)(arg4); \
1904+ _argvec[5] = (unsigned long)(arg5); \
1905+ _argvec[6] = (unsigned long)(arg6); \
1906+ _argvec[7] = (unsigned long)(arg7); \
1907+ _argvec[8] = (unsigned long)(arg8); \
1908+ _argvec[9] = (unsigned long)(arg9); \
1909+ _argvec[10] = (unsigned long)(arg10); \
1910+ _argvec[11] = (unsigned long)(arg11); \
1911+ _argvec[12] = (unsigned long)(arg12); \
1912+ __asm__ volatile( \
1913+ VALGRIND_ALIGN_STACK \
1914+ "pushl 48(%%eax)\n\t" \
1915+ "pushl 44(%%eax)\n\t" \
1916+ "pushl 40(%%eax)\n\t" \
1917+ "pushl 36(%%eax)\n\t" \
1918+ "pushl 32(%%eax)\n\t" \
1919+ "pushl 28(%%eax)\n\t" \
1920+ "pushl 24(%%eax)\n\t" \
1921+ "pushl 20(%%eax)\n\t" \
1922+ "pushl 16(%%eax)\n\t" \
1923+ "pushl 12(%%eax)\n\t" \
1924+ "pushl 8(%%eax)\n\t" \
1925+ "pushl 4(%%eax)\n\t" \
1926+ "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1927+ VALGRIND_CALL_NOREDIR_EAX \
1928+ VALGRIND_RESTORE_STACK \
1929+ : /*out*/ "=a" (_res) \
1930+ : /*in*/ "a" (&_argvec[0]) \
1931+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \
1932+ ); \
1933+ lval = (__typeof__(lval)) _res; \
1934+ } while (0)
1935+
1936+#endif /* PLAT_x86_linux || PLAT_x86_darwin */
1937+
1938+/* ------------------------ amd64-{linux,darwin} --------------- */
1939+
1940+#if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin)
1941+
1942+/* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */
1943+
1944+/* These regs are trashed by the hidden call. */
1945+#define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \
1946+ "rdi", "r8", "r9", "r10", "r11"
1947+
1948+/* This is all pretty complex. It's so as to make stack unwinding
1949+ work reliably. See bug 243270. The basic problem is the sub and
1950+ add of 128 of %rsp in all of the following macros. If gcc believes
1951+ the CFA is in %rsp, then unwinding may fail, because what's at the
1952+ CFA is not what gcc "expected" when it constructs the CFIs for the
1953+ places where the macros are instantiated.
1954+
1955+ But we can't just add a CFI annotation to increase the CFA offset
1956+ by 128, to match the sub of 128 from %rsp, because we don't know
1957+ whether gcc has chosen %rsp as the CFA at that point, or whether it
1958+ has chosen some other register (eg, %rbp). In the latter case,
1959+ adding a CFI annotation to change the CFA offset is simply wrong.
1960+
1961+ So the solution is to get hold of the CFA using
1962+ __builtin_dwarf_cfa(), put it in a known register, and add a
1963+ CFI annotation to say what the register is. We choose %rbp for
1964+ this (perhaps perversely), because:
1965+
1966+ (1) %rbp is already subject to unwinding. If a new register was
1967+ chosen then the unwinder would have to unwind it in all stack
1968+ traces, which is expensive, and
1969+
1970+ (2) %rbp is already subject to precise exception updates in the
1971+ JIT. If a new register was chosen, we'd have to have precise
1972+ exceptions for it too, which reduces performance of the
1973+ generated code.
1974+
1975+ However .. one extra complication. We can't just whack the result
1976+ of __builtin_dwarf_cfa() into %rbp and then add %rbp to the
1977+ list of trashed registers at the end of the inline assembly
1978+ fragments; gcc won't allow %rbp to appear in that list. Hence
1979+ instead we need to stash %rbp in %r15 for the duration of the asm,
1980+ and say that %r15 is trashed instead. gcc seems happy to go with
1981+ that.
1982+
1983+ Oh .. and this all needs to be conditionalised so that it is
1984+ unchanged from before this commit, when compiled with older gccs
1985+ that don't support __builtin_dwarf_cfa. Furthermore, since
1986+ this header file is freestanding, it has to be independent of
1987+ config.h, and so the following conditionalisation cannot depend on
1988+ configure time checks.
1989+
1990+ Although it's not clear from
1991+ 'defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)',
1992+ this expression excludes Darwin.
1993+ .cfi directives in Darwin assembly appear to be completely
1994+ different and I haven't investigated how they work.
1995+
1996+ For even more entertainment value, note we have to use the
1997+ completely undocumented __builtin_dwarf_cfa(), which appears to
1998+ really compute the CFA, whereas __builtin_frame_address(0) claims
1999+ to but actually doesn't. See
2000+ https://bugs.kde.org/show_bug.cgi?id=243270#c47
2001+*/
2002+#if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)
2003+# define __FRAME_POINTER \
2004+ ,"r"(__builtin_dwarf_cfa())
2005+# define VALGRIND_CFI_PROLOGUE \
2006+ "movq %%rbp, %%r15\n\t" \
2007+ "movq %2, %%rbp\n\t" \
2008+ ".cfi_remember_state\n\t" \
2009+ ".cfi_def_cfa rbp, 0\n\t"
2010+# define VALGRIND_CFI_EPILOGUE \
2011+ "movq %%r15, %%rbp\n\t" \
2012+ ".cfi_restore_state\n\t"
2013+#else
2014+# define __FRAME_POINTER
2015+# define VALGRIND_CFI_PROLOGUE
2016+# define VALGRIND_CFI_EPILOGUE
2017+#endif
2018+
2019+/* Macros to save and align the stack before making a function
2020+ call and restore it afterwards as gcc may not keep the stack
2021+ pointer aligned if it doesn't realise calls are being made
2022+ to other functions. */
2023+
2024+#define VALGRIND_ALIGN_STACK \
2025+ "movq %%rsp,%%r14\n\t" \
2026+ "andq $0xfffffffffffffff0,%%rsp\n\t"
2027+#define VALGRIND_RESTORE_STACK \
2028+ "movq %%r14,%%rsp\n\t"
2029+
2030+/* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned
2031+ long) == 8. */
2032+
2033+/* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_
2034+ macros. In order not to trash the stack redzone, we need to drop
2035+ %rsp by 128 before the hidden call, and restore afterwards. The
2036+ nastyness is that it is only by luck that the stack still appears
2037+ to be unwindable during the hidden call - since then the behaviour
2038+ of any routine using this macro does not match what the CFI data
2039+ says. Sigh.
2040+
2041+ Why is this important? Imagine that a wrapper has a stack
2042+ allocated local, and passes to the hidden call, a pointer to it.
2043+ Because gcc does not know about the hidden call, it may allocate
2044+ that local in the redzone. Unfortunately the hidden call may then
2045+ trash it before it comes to use it. So we must step clear of the
2046+ redzone, for the duration of the hidden call, to make it safe.
2047+
2048+ Probably the same problem afflicts the other redzone-style ABIs too
2049+ (ppc64-linux); but for those, the stack is
2050+ self describing (none of this CFI nonsense) so at least messing
2051+ with the stack pointer doesn't give a danger of non-unwindable
2052+ stack. */
2053+
2054+#define CALL_FN_W_v(lval, orig) \
2055+ do { \
2056+ volatile OrigFn _orig = (orig); \
2057+ volatile unsigned long _argvec[1]; \
2058+ volatile unsigned long _res; \
2059+ _argvec[0] = (unsigned long)_orig.nraddr; \
2060+ __asm__ volatile( \
2061+ VALGRIND_CFI_PROLOGUE \
2062+ VALGRIND_ALIGN_STACK \
2063+ "subq $128,%%rsp\n\t" \
2064+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2065+ VALGRIND_CALL_NOREDIR_RAX \
2066+ VALGRIND_RESTORE_STACK \
2067+ VALGRIND_CFI_EPILOGUE \
2068+ : /*out*/ "=a" (_res) \
2069+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2070+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2071+ ); \
2072+ lval = (__typeof__(lval)) _res; \
2073+ } while (0)
2074+
2075+#define CALL_FN_W_W(lval, orig, arg1) \
2076+ do { \
2077+ volatile OrigFn _orig = (orig); \
2078+ volatile unsigned long _argvec[2]; \
2079+ volatile unsigned long _res; \
2080+ _argvec[0] = (unsigned long)_orig.nraddr; \
2081+ _argvec[1] = (unsigned long)(arg1); \
2082+ __asm__ volatile( \
2083+ VALGRIND_CFI_PROLOGUE \
2084+ VALGRIND_ALIGN_STACK \
2085+ "subq $128,%%rsp\n\t" \
2086+ "movq 8(%%rax), %%rdi\n\t" \
2087+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2088+ VALGRIND_CALL_NOREDIR_RAX \
2089+ VALGRIND_RESTORE_STACK \
2090+ VALGRIND_CFI_EPILOGUE \
2091+ : /*out*/ "=a" (_res) \
2092+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2093+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2094+ ); \
2095+ lval = (__typeof__(lval)) _res; \
2096+ } while (0)
2097+
2098+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
2099+ do { \
2100+ volatile OrigFn _orig = (orig); \
2101+ volatile unsigned long _argvec[3]; \
2102+ volatile unsigned long _res; \
2103+ _argvec[0] = (unsigned long)_orig.nraddr; \
2104+ _argvec[1] = (unsigned long)(arg1); \
2105+ _argvec[2] = (unsigned long)(arg2); \
2106+ __asm__ volatile( \
2107+ VALGRIND_CFI_PROLOGUE \
2108+ VALGRIND_ALIGN_STACK \
2109+ "subq $128,%%rsp\n\t" \
2110+ "movq 16(%%rax), %%rsi\n\t" \
2111+ "movq 8(%%rax), %%rdi\n\t" \
2112+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2113+ VALGRIND_CALL_NOREDIR_RAX \
2114+ VALGRIND_RESTORE_STACK \
2115+ VALGRIND_CFI_EPILOGUE \
2116+ : /*out*/ "=a" (_res) \
2117+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2118+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2119+ ); \
2120+ lval = (__typeof__(lval)) _res; \
2121+ } while (0)
2122+
2123+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
2124+ do { \
2125+ volatile OrigFn _orig = (orig); \
2126+ volatile unsigned long _argvec[4]; \
2127+ volatile unsigned long _res; \
2128+ _argvec[0] = (unsigned long)_orig.nraddr; \
2129+ _argvec[1] = (unsigned long)(arg1); \
2130+ _argvec[2] = (unsigned long)(arg2); \
2131+ _argvec[3] = (unsigned long)(arg3); \
2132+ __asm__ volatile( \
2133+ VALGRIND_CFI_PROLOGUE \
2134+ VALGRIND_ALIGN_STACK \
2135+ "subq $128,%%rsp\n\t" \
2136+ "movq 24(%%rax), %%rdx\n\t" \
2137+ "movq 16(%%rax), %%rsi\n\t" \
2138+ "movq 8(%%rax), %%rdi\n\t" \
2139+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2140+ VALGRIND_CALL_NOREDIR_RAX \
2141+ VALGRIND_RESTORE_STACK \
2142+ VALGRIND_CFI_EPILOGUE \
2143+ : /*out*/ "=a" (_res) \
2144+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2145+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2146+ ); \
2147+ lval = (__typeof__(lval)) _res; \
2148+ } while (0)
2149+
2150+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
2151+ do { \
2152+ volatile OrigFn _orig = (orig); \
2153+ volatile unsigned long _argvec[5]; \
2154+ volatile unsigned long _res; \
2155+ _argvec[0] = (unsigned long)_orig.nraddr; \
2156+ _argvec[1] = (unsigned long)(arg1); \
2157+ _argvec[2] = (unsigned long)(arg2); \
2158+ _argvec[3] = (unsigned long)(arg3); \
2159+ _argvec[4] = (unsigned long)(arg4); \
2160+ __asm__ volatile( \
2161+ VALGRIND_CFI_PROLOGUE \
2162+ VALGRIND_ALIGN_STACK \
2163+ "subq $128,%%rsp\n\t" \
2164+ "movq 32(%%rax), %%rcx\n\t" \
2165+ "movq 24(%%rax), %%rdx\n\t" \
2166+ "movq 16(%%rax), %%rsi\n\t" \
2167+ "movq 8(%%rax), %%rdi\n\t" \
2168+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2169+ VALGRIND_CALL_NOREDIR_RAX \
2170+ VALGRIND_RESTORE_STACK \
2171+ VALGRIND_CFI_EPILOGUE \
2172+ : /*out*/ "=a" (_res) \
2173+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2174+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2175+ ); \
2176+ lval = (__typeof__(lval)) _res; \
2177+ } while (0)
2178+
2179+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
2180+ do { \
2181+ volatile OrigFn _orig = (orig); \
2182+ volatile unsigned long _argvec[6]; \
2183+ volatile unsigned long _res; \
2184+ _argvec[0] = (unsigned long)_orig.nraddr; \
2185+ _argvec[1] = (unsigned long)(arg1); \
2186+ _argvec[2] = (unsigned long)(arg2); \
2187+ _argvec[3] = (unsigned long)(arg3); \
2188+ _argvec[4] = (unsigned long)(arg4); \
2189+ _argvec[5] = (unsigned long)(arg5); \
2190+ __asm__ volatile( \
2191+ VALGRIND_CFI_PROLOGUE \
2192+ VALGRIND_ALIGN_STACK \
2193+ "subq $128,%%rsp\n\t" \
2194+ "movq 40(%%rax), %%r8\n\t" \
2195+ "movq 32(%%rax), %%rcx\n\t" \
2196+ "movq 24(%%rax), %%rdx\n\t" \
2197+ "movq 16(%%rax), %%rsi\n\t" \
2198+ "movq 8(%%rax), %%rdi\n\t" \
2199+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2200+ VALGRIND_CALL_NOREDIR_RAX \
2201+ VALGRIND_RESTORE_STACK \
2202+ VALGRIND_CFI_EPILOGUE \
2203+ : /*out*/ "=a" (_res) \
2204+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2205+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2206+ ); \
2207+ lval = (__typeof__(lval)) _res; \
2208+ } while (0)
2209+
2210+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
2211+ do { \
2212+ volatile OrigFn _orig = (orig); \
2213+ volatile unsigned long _argvec[7]; \
2214+ volatile unsigned long _res; \
2215+ _argvec[0] = (unsigned long)_orig.nraddr; \
2216+ _argvec[1] = (unsigned long)(arg1); \
2217+ _argvec[2] = (unsigned long)(arg2); \
2218+ _argvec[3] = (unsigned long)(arg3); \
2219+ _argvec[4] = (unsigned long)(arg4); \
2220+ _argvec[5] = (unsigned long)(arg5); \
2221+ _argvec[6] = (unsigned long)(arg6); \
2222+ __asm__ volatile( \
2223+ VALGRIND_CFI_PROLOGUE \
2224+ VALGRIND_ALIGN_STACK \
2225+ "subq $128,%%rsp\n\t" \
2226+ "movq 48(%%rax), %%r9\n\t" \
2227+ "movq 40(%%rax), %%r8\n\t" \
2228+ "movq 32(%%rax), %%rcx\n\t" \
2229+ "movq 24(%%rax), %%rdx\n\t" \
2230+ "movq 16(%%rax), %%rsi\n\t" \
2231+ "movq 8(%%rax), %%rdi\n\t" \
2232+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2233+ VALGRIND_CALL_NOREDIR_RAX \
2234+ VALGRIND_RESTORE_STACK \
2235+ VALGRIND_CFI_EPILOGUE \
2236+ : /*out*/ "=a" (_res) \
2237+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2238+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2239+ ); \
2240+ lval = (__typeof__(lval)) _res; \
2241+ } while (0)
2242+
2243+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2244+ arg7) \
2245+ do { \
2246+ volatile OrigFn _orig = (orig); \
2247+ volatile unsigned long _argvec[8]; \
2248+ volatile unsigned long _res; \
2249+ _argvec[0] = (unsigned long)_orig.nraddr; \
2250+ _argvec[1] = (unsigned long)(arg1); \
2251+ _argvec[2] = (unsigned long)(arg2); \
2252+ _argvec[3] = (unsigned long)(arg3); \
2253+ _argvec[4] = (unsigned long)(arg4); \
2254+ _argvec[5] = (unsigned long)(arg5); \
2255+ _argvec[6] = (unsigned long)(arg6); \
2256+ _argvec[7] = (unsigned long)(arg7); \
2257+ __asm__ volatile( \
2258+ VALGRIND_CFI_PROLOGUE \
2259+ VALGRIND_ALIGN_STACK \
2260+ "subq $136,%%rsp\n\t" \
2261+ "pushq 56(%%rax)\n\t" \
2262+ "movq 48(%%rax), %%r9\n\t" \
2263+ "movq 40(%%rax), %%r8\n\t" \
2264+ "movq 32(%%rax), %%rcx\n\t" \
2265+ "movq 24(%%rax), %%rdx\n\t" \
2266+ "movq 16(%%rax), %%rsi\n\t" \
2267+ "movq 8(%%rax), %%rdi\n\t" \
2268+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2269+ VALGRIND_CALL_NOREDIR_RAX \
2270+ VALGRIND_RESTORE_STACK \
2271+ VALGRIND_CFI_EPILOGUE \
2272+ : /*out*/ "=a" (_res) \
2273+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2274+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2275+ ); \
2276+ lval = (__typeof__(lval)) _res; \
2277+ } while (0)
2278+
2279+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2280+ arg7,arg8) \
2281+ do { \
2282+ volatile OrigFn _orig = (orig); \
2283+ volatile unsigned long _argvec[9]; \
2284+ volatile unsigned long _res; \
2285+ _argvec[0] = (unsigned long)_orig.nraddr; \
2286+ _argvec[1] = (unsigned long)(arg1); \
2287+ _argvec[2] = (unsigned long)(arg2); \
2288+ _argvec[3] = (unsigned long)(arg3); \
2289+ _argvec[4] = (unsigned long)(arg4); \
2290+ _argvec[5] = (unsigned long)(arg5); \
2291+ _argvec[6] = (unsigned long)(arg6); \
2292+ _argvec[7] = (unsigned long)(arg7); \
2293+ _argvec[8] = (unsigned long)(arg8); \
2294+ __asm__ volatile( \
2295+ VALGRIND_CFI_PROLOGUE \
2296+ VALGRIND_ALIGN_STACK \
2297+ "subq $128,%%rsp\n\t" \
2298+ "pushq 64(%%rax)\n\t" \
2299+ "pushq 56(%%rax)\n\t" \
2300+ "movq 48(%%rax), %%r9\n\t" \
2301+ "movq 40(%%rax), %%r8\n\t" \
2302+ "movq 32(%%rax), %%rcx\n\t" \
2303+ "movq 24(%%rax), %%rdx\n\t" \
2304+ "movq 16(%%rax), %%rsi\n\t" \
2305+ "movq 8(%%rax), %%rdi\n\t" \
2306+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2307+ VALGRIND_CALL_NOREDIR_RAX \
2308+ VALGRIND_RESTORE_STACK \
2309+ VALGRIND_CFI_EPILOGUE \
2310+ : /*out*/ "=a" (_res) \
2311+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2312+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2313+ ); \
2314+ lval = (__typeof__(lval)) _res; \
2315+ } while (0)
2316+
2317+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2318+ arg7,arg8,arg9) \
2319+ do { \
2320+ volatile OrigFn _orig = (orig); \
2321+ volatile unsigned long _argvec[10]; \
2322+ volatile unsigned long _res; \
2323+ _argvec[0] = (unsigned long)_orig.nraddr; \
2324+ _argvec[1] = (unsigned long)(arg1); \
2325+ _argvec[2] = (unsigned long)(arg2); \
2326+ _argvec[3] = (unsigned long)(arg3); \
2327+ _argvec[4] = (unsigned long)(arg4); \
2328+ _argvec[5] = (unsigned long)(arg5); \
2329+ _argvec[6] = (unsigned long)(arg6); \
2330+ _argvec[7] = (unsigned long)(arg7); \
2331+ _argvec[8] = (unsigned long)(arg8); \
2332+ _argvec[9] = (unsigned long)(arg9); \
2333+ __asm__ volatile( \
2334+ VALGRIND_CFI_PROLOGUE \
2335+ VALGRIND_ALIGN_STACK \
2336+ "subq $136,%%rsp\n\t" \
2337+ "pushq 72(%%rax)\n\t" \
2338+ "pushq 64(%%rax)\n\t" \
2339+ "pushq 56(%%rax)\n\t" \
2340+ "movq 48(%%rax), %%r9\n\t" \
2341+ "movq 40(%%rax), %%r8\n\t" \
2342+ "movq 32(%%rax), %%rcx\n\t" \
2343+ "movq 24(%%rax), %%rdx\n\t" \
2344+ "movq 16(%%rax), %%rsi\n\t" \
2345+ "movq 8(%%rax), %%rdi\n\t" \
2346+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2347+ VALGRIND_CALL_NOREDIR_RAX \
2348+ VALGRIND_RESTORE_STACK \
2349+ VALGRIND_CFI_EPILOGUE \
2350+ : /*out*/ "=a" (_res) \
2351+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2352+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2353+ ); \
2354+ lval = (__typeof__(lval)) _res; \
2355+ } while (0)
2356+
2357+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2358+ arg7,arg8,arg9,arg10) \
2359+ do { \
2360+ volatile OrigFn _orig = (orig); \
2361+ volatile unsigned long _argvec[11]; \
2362+ volatile unsigned long _res; \
2363+ _argvec[0] = (unsigned long)_orig.nraddr; \
2364+ _argvec[1] = (unsigned long)(arg1); \
2365+ _argvec[2] = (unsigned long)(arg2); \
2366+ _argvec[3] = (unsigned long)(arg3); \
2367+ _argvec[4] = (unsigned long)(arg4); \
2368+ _argvec[5] = (unsigned long)(arg5); \
2369+ _argvec[6] = (unsigned long)(arg6); \
2370+ _argvec[7] = (unsigned long)(arg7); \
2371+ _argvec[8] = (unsigned long)(arg8); \
2372+ _argvec[9] = (unsigned long)(arg9); \
2373+ _argvec[10] = (unsigned long)(arg10); \
2374+ __asm__ volatile( \
2375+ VALGRIND_CFI_PROLOGUE \
2376+ VALGRIND_ALIGN_STACK \
2377+ "subq $128,%%rsp\n\t" \
2378+ "pushq 80(%%rax)\n\t" \
2379+ "pushq 72(%%rax)\n\t" \
2380+ "pushq 64(%%rax)\n\t" \
2381+ "pushq 56(%%rax)\n\t" \
2382+ "movq 48(%%rax), %%r9\n\t" \
2383+ "movq 40(%%rax), %%r8\n\t" \
2384+ "movq 32(%%rax), %%rcx\n\t" \
2385+ "movq 24(%%rax), %%rdx\n\t" \
2386+ "movq 16(%%rax), %%rsi\n\t" \
2387+ "movq 8(%%rax), %%rdi\n\t" \
2388+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2389+ VALGRIND_CALL_NOREDIR_RAX \
2390+ VALGRIND_RESTORE_STACK \
2391+ VALGRIND_CFI_EPILOGUE \
2392+ : /*out*/ "=a" (_res) \
2393+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2394+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2395+ ); \
2396+ lval = (__typeof__(lval)) _res; \
2397+ } while (0)
2398+
2399+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2400+ arg7,arg8,arg9,arg10,arg11) \
2401+ do { \
2402+ volatile OrigFn _orig = (orig); \
2403+ volatile unsigned long _argvec[12]; \
2404+ volatile unsigned long _res; \
2405+ _argvec[0] = (unsigned long)_orig.nraddr; \
2406+ _argvec[1] = (unsigned long)(arg1); \
2407+ _argvec[2] = (unsigned long)(arg2); \
2408+ _argvec[3] = (unsigned long)(arg3); \
2409+ _argvec[4] = (unsigned long)(arg4); \
2410+ _argvec[5] = (unsigned long)(arg5); \
2411+ _argvec[6] = (unsigned long)(arg6); \
2412+ _argvec[7] = (unsigned long)(arg7); \
2413+ _argvec[8] = (unsigned long)(arg8); \
2414+ _argvec[9] = (unsigned long)(arg9); \
2415+ _argvec[10] = (unsigned long)(arg10); \
2416+ _argvec[11] = (unsigned long)(arg11); \
2417+ __asm__ volatile( \
2418+ VALGRIND_CFI_PROLOGUE \
2419+ VALGRIND_ALIGN_STACK \
2420+ "subq $136,%%rsp\n\t" \
2421+ "pushq 88(%%rax)\n\t" \
2422+ "pushq 80(%%rax)\n\t" \
2423+ "pushq 72(%%rax)\n\t" \
2424+ "pushq 64(%%rax)\n\t" \
2425+ "pushq 56(%%rax)\n\t" \
2426+ "movq 48(%%rax), %%r9\n\t" \
2427+ "movq 40(%%rax), %%r8\n\t" \
2428+ "movq 32(%%rax), %%rcx\n\t" \
2429+ "movq 24(%%rax), %%rdx\n\t" \
2430+ "movq 16(%%rax), %%rsi\n\t" \
2431+ "movq 8(%%rax), %%rdi\n\t" \
2432+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2433+ VALGRIND_CALL_NOREDIR_RAX \
2434+ VALGRIND_RESTORE_STACK \
2435+ VALGRIND_CFI_EPILOGUE \
2436+ : /*out*/ "=a" (_res) \
2437+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2438+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2439+ ); \
2440+ lval = (__typeof__(lval)) _res; \
2441+ } while (0)
2442+
2443+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2444+ arg7,arg8,arg9,arg10,arg11,arg12) \
2445+ do { \
2446+ volatile OrigFn _orig = (orig); \
2447+ volatile unsigned long _argvec[13]; \
2448+ volatile unsigned long _res; \
2449+ _argvec[0] = (unsigned long)_orig.nraddr; \
2450+ _argvec[1] = (unsigned long)(arg1); \
2451+ _argvec[2] = (unsigned long)(arg2); \
2452+ _argvec[3] = (unsigned long)(arg3); \
2453+ _argvec[4] = (unsigned long)(arg4); \
2454+ _argvec[5] = (unsigned long)(arg5); \
2455+ _argvec[6] = (unsigned long)(arg6); \
2456+ _argvec[7] = (unsigned long)(arg7); \
2457+ _argvec[8] = (unsigned long)(arg8); \
2458+ _argvec[9] = (unsigned long)(arg9); \
2459+ _argvec[10] = (unsigned long)(arg10); \
2460+ _argvec[11] = (unsigned long)(arg11); \
2461+ _argvec[12] = (unsigned long)(arg12); \
2462+ __asm__ volatile( \
2463+ VALGRIND_CFI_PROLOGUE \
2464+ VALGRIND_ALIGN_STACK \
2465+ "subq $128,%%rsp\n\t" \
2466+ "pushq 96(%%rax)\n\t" \
2467+ "pushq 88(%%rax)\n\t" \
2468+ "pushq 80(%%rax)\n\t" \
2469+ "pushq 72(%%rax)\n\t" \
2470+ "pushq 64(%%rax)\n\t" \
2471+ "pushq 56(%%rax)\n\t" \
2472+ "movq 48(%%rax), %%r9\n\t" \
2473+ "movq 40(%%rax), %%r8\n\t" \
2474+ "movq 32(%%rax), %%rcx\n\t" \
2475+ "movq 24(%%rax), %%rdx\n\t" \
2476+ "movq 16(%%rax), %%rsi\n\t" \
2477+ "movq 8(%%rax), %%rdi\n\t" \
2478+ "movq (%%rax), %%rax\n\t" /* target->%rax */ \
2479+ VALGRIND_CALL_NOREDIR_RAX \
2480+ VALGRIND_RESTORE_STACK \
2481+ VALGRIND_CFI_EPILOGUE \
2482+ : /*out*/ "=a" (_res) \
2483+ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
2484+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \
2485+ ); \
2486+ lval = (__typeof__(lval)) _res; \
2487+ } while (0)
2488+
2489+#endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
2490+
2491+/* ------------------------ ppc32-linux ------------------------ */
2492+
2493+#if defined(PLAT_ppc32_linux)
2494+
2495+/* This is useful for finding out about the on-stack stuff:
2496+
2497+ extern int f9 ( int,int,int,int,int,int,int,int,int );
2498+ extern int f10 ( int,int,int,int,int,int,int,int,int,int );
2499+ extern int f11 ( int,int,int,int,int,int,int,int,int,int,int );
2500+ extern int f12 ( int,int,int,int,int,int,int,int,int,int,int,int );
2501+
2502+ int g9 ( void ) {
2503+ return f9(11,22,33,44,55,66,77,88,99);
2504+ }
2505+ int g10 ( void ) {
2506+ return f10(11,22,33,44,55,66,77,88,99,110);
2507+ }
2508+ int g11 ( void ) {
2509+ return f11(11,22,33,44,55,66,77,88,99,110,121);
2510+ }
2511+ int g12 ( void ) {
2512+ return f12(11,22,33,44,55,66,77,88,99,110,121,132);
2513+ }
2514+*/
2515+
2516+/* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */
2517+
2518+/* These regs are trashed by the hidden call. */
2519+#define __CALLER_SAVED_REGS \
2520+ "lr", "ctr", "xer", \
2521+ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \
2522+ "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \
2523+ "r11", "r12", "r13"
2524+
2525+/* Macros to save and align the stack before making a function
2526+ call and restore it afterwards as gcc may not keep the stack
2527+ pointer aligned if it doesn't realise calls are being made
2528+ to other functions. */
2529+
2530+#define VALGRIND_ALIGN_STACK \
2531+ "mr 28,1\n\t" \
2532+ "rlwinm 1,1,0,0,27\n\t"
2533+#define VALGRIND_RESTORE_STACK \
2534+ "mr 1,28\n\t"
2535+
2536+/* These CALL_FN_ macros assume that on ppc32-linux,
2537+ sizeof(unsigned long) == 4. */
2538+
2539+#define CALL_FN_W_v(lval, orig) \
2540+ do { \
2541+ volatile OrigFn _orig = (orig); \
2542+ volatile unsigned long _argvec[1]; \
2543+ volatile unsigned long _res; \
2544+ _argvec[0] = (unsigned long)_orig.nraddr; \
2545+ __asm__ volatile( \
2546+ VALGRIND_ALIGN_STACK \
2547+ "mr 11,%1\n\t" \
2548+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2549+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2550+ VALGRIND_RESTORE_STACK \
2551+ "mr %0,3" \
2552+ : /*out*/ "=r" (_res) \
2553+ : /*in*/ "r" (&_argvec[0]) \
2554+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2555+ ); \
2556+ lval = (__typeof__(lval)) _res; \
2557+ } while (0)
2558+
2559+#define CALL_FN_W_W(lval, orig, arg1) \
2560+ do { \
2561+ volatile OrigFn _orig = (orig); \
2562+ volatile unsigned long _argvec[2]; \
2563+ volatile unsigned long _res; \
2564+ _argvec[0] = (unsigned long)_orig.nraddr; \
2565+ _argvec[1] = (unsigned long)arg1; \
2566+ __asm__ volatile( \
2567+ VALGRIND_ALIGN_STACK \
2568+ "mr 11,%1\n\t" \
2569+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2570+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2571+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2572+ VALGRIND_RESTORE_STACK \
2573+ "mr %0,3" \
2574+ : /*out*/ "=r" (_res) \
2575+ : /*in*/ "r" (&_argvec[0]) \
2576+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2577+ ); \
2578+ lval = (__typeof__(lval)) _res; \
2579+ } while (0)
2580+
2581+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
2582+ do { \
2583+ volatile OrigFn _orig = (orig); \
2584+ volatile unsigned long _argvec[3]; \
2585+ volatile unsigned long _res; \
2586+ _argvec[0] = (unsigned long)_orig.nraddr; \
2587+ _argvec[1] = (unsigned long)arg1; \
2588+ _argvec[2] = (unsigned long)arg2; \
2589+ __asm__ volatile( \
2590+ VALGRIND_ALIGN_STACK \
2591+ "mr 11,%1\n\t" \
2592+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2593+ "lwz 4,8(11)\n\t" \
2594+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2595+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2596+ VALGRIND_RESTORE_STACK \
2597+ "mr %0,3" \
2598+ : /*out*/ "=r" (_res) \
2599+ : /*in*/ "r" (&_argvec[0]) \
2600+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2601+ ); \
2602+ lval = (__typeof__(lval)) _res; \
2603+ } while (0)
2604+
2605+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
2606+ do { \
2607+ volatile OrigFn _orig = (orig); \
2608+ volatile unsigned long _argvec[4]; \
2609+ volatile unsigned long _res; \
2610+ _argvec[0] = (unsigned long)_orig.nraddr; \
2611+ _argvec[1] = (unsigned long)arg1; \
2612+ _argvec[2] = (unsigned long)arg2; \
2613+ _argvec[3] = (unsigned long)arg3; \
2614+ __asm__ volatile( \
2615+ VALGRIND_ALIGN_STACK \
2616+ "mr 11,%1\n\t" \
2617+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2618+ "lwz 4,8(11)\n\t" \
2619+ "lwz 5,12(11)\n\t" \
2620+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2621+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2622+ VALGRIND_RESTORE_STACK \
2623+ "mr %0,3" \
2624+ : /*out*/ "=r" (_res) \
2625+ : /*in*/ "r" (&_argvec[0]) \
2626+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2627+ ); \
2628+ lval = (__typeof__(lval)) _res; \
2629+ } while (0)
2630+
2631+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
2632+ do { \
2633+ volatile OrigFn _orig = (orig); \
2634+ volatile unsigned long _argvec[5]; \
2635+ volatile unsigned long _res; \
2636+ _argvec[0] = (unsigned long)_orig.nraddr; \
2637+ _argvec[1] = (unsigned long)arg1; \
2638+ _argvec[2] = (unsigned long)arg2; \
2639+ _argvec[3] = (unsigned long)arg3; \
2640+ _argvec[4] = (unsigned long)arg4; \
2641+ __asm__ volatile( \
2642+ VALGRIND_ALIGN_STACK \
2643+ "mr 11,%1\n\t" \
2644+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2645+ "lwz 4,8(11)\n\t" \
2646+ "lwz 5,12(11)\n\t" \
2647+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2648+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2649+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2650+ VALGRIND_RESTORE_STACK \
2651+ "mr %0,3" \
2652+ : /*out*/ "=r" (_res) \
2653+ : /*in*/ "r" (&_argvec[0]) \
2654+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2655+ ); \
2656+ lval = (__typeof__(lval)) _res; \
2657+ } while (0)
2658+
2659+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
2660+ do { \
2661+ volatile OrigFn _orig = (orig); \
2662+ volatile unsigned long _argvec[6]; \
2663+ volatile unsigned long _res; \
2664+ _argvec[0] = (unsigned long)_orig.nraddr; \
2665+ _argvec[1] = (unsigned long)arg1; \
2666+ _argvec[2] = (unsigned long)arg2; \
2667+ _argvec[3] = (unsigned long)arg3; \
2668+ _argvec[4] = (unsigned long)arg4; \
2669+ _argvec[5] = (unsigned long)arg5; \
2670+ __asm__ volatile( \
2671+ VALGRIND_ALIGN_STACK \
2672+ "mr 11,%1\n\t" \
2673+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2674+ "lwz 4,8(11)\n\t" \
2675+ "lwz 5,12(11)\n\t" \
2676+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2677+ "lwz 7,20(11)\n\t" \
2678+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2679+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2680+ VALGRIND_RESTORE_STACK \
2681+ "mr %0,3" \
2682+ : /*out*/ "=r" (_res) \
2683+ : /*in*/ "r" (&_argvec[0]) \
2684+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2685+ ); \
2686+ lval = (__typeof__(lval)) _res; \
2687+ } while (0)
2688+
2689+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
2690+ do { \
2691+ volatile OrigFn _orig = (orig); \
2692+ volatile unsigned long _argvec[7]; \
2693+ volatile unsigned long _res; \
2694+ _argvec[0] = (unsigned long)_orig.nraddr; \
2695+ _argvec[1] = (unsigned long)arg1; \
2696+ _argvec[2] = (unsigned long)arg2; \
2697+ _argvec[3] = (unsigned long)arg3; \
2698+ _argvec[4] = (unsigned long)arg4; \
2699+ _argvec[5] = (unsigned long)arg5; \
2700+ _argvec[6] = (unsigned long)arg6; \
2701+ __asm__ volatile( \
2702+ VALGRIND_ALIGN_STACK \
2703+ "mr 11,%1\n\t" \
2704+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2705+ "lwz 4,8(11)\n\t" \
2706+ "lwz 5,12(11)\n\t" \
2707+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2708+ "lwz 7,20(11)\n\t" \
2709+ "lwz 8,24(11)\n\t" \
2710+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2711+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2712+ VALGRIND_RESTORE_STACK \
2713+ "mr %0,3" \
2714+ : /*out*/ "=r" (_res) \
2715+ : /*in*/ "r" (&_argvec[0]) \
2716+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2717+ ); \
2718+ lval = (__typeof__(lval)) _res; \
2719+ } while (0)
2720+
2721+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2722+ arg7) \
2723+ do { \
2724+ volatile OrigFn _orig = (orig); \
2725+ volatile unsigned long _argvec[8]; \
2726+ volatile unsigned long _res; \
2727+ _argvec[0] = (unsigned long)_orig.nraddr; \
2728+ _argvec[1] = (unsigned long)arg1; \
2729+ _argvec[2] = (unsigned long)arg2; \
2730+ _argvec[3] = (unsigned long)arg3; \
2731+ _argvec[4] = (unsigned long)arg4; \
2732+ _argvec[5] = (unsigned long)arg5; \
2733+ _argvec[6] = (unsigned long)arg6; \
2734+ _argvec[7] = (unsigned long)arg7; \
2735+ __asm__ volatile( \
2736+ VALGRIND_ALIGN_STACK \
2737+ "mr 11,%1\n\t" \
2738+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2739+ "lwz 4,8(11)\n\t" \
2740+ "lwz 5,12(11)\n\t" \
2741+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2742+ "lwz 7,20(11)\n\t" \
2743+ "lwz 8,24(11)\n\t" \
2744+ "lwz 9,28(11)\n\t" \
2745+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2746+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2747+ VALGRIND_RESTORE_STACK \
2748+ "mr %0,3" \
2749+ : /*out*/ "=r" (_res) \
2750+ : /*in*/ "r" (&_argvec[0]) \
2751+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2752+ ); \
2753+ lval = (__typeof__(lval)) _res; \
2754+ } while (0)
2755+
2756+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2757+ arg7,arg8) \
2758+ do { \
2759+ volatile OrigFn _orig = (orig); \
2760+ volatile unsigned long _argvec[9]; \
2761+ volatile unsigned long _res; \
2762+ _argvec[0] = (unsigned long)_orig.nraddr; \
2763+ _argvec[1] = (unsigned long)arg1; \
2764+ _argvec[2] = (unsigned long)arg2; \
2765+ _argvec[3] = (unsigned long)arg3; \
2766+ _argvec[4] = (unsigned long)arg4; \
2767+ _argvec[5] = (unsigned long)arg5; \
2768+ _argvec[6] = (unsigned long)arg6; \
2769+ _argvec[7] = (unsigned long)arg7; \
2770+ _argvec[8] = (unsigned long)arg8; \
2771+ __asm__ volatile( \
2772+ VALGRIND_ALIGN_STACK \
2773+ "mr 11,%1\n\t" \
2774+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2775+ "lwz 4,8(11)\n\t" \
2776+ "lwz 5,12(11)\n\t" \
2777+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2778+ "lwz 7,20(11)\n\t" \
2779+ "lwz 8,24(11)\n\t" \
2780+ "lwz 9,28(11)\n\t" \
2781+ "lwz 10,32(11)\n\t" /* arg8->r10 */ \
2782+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2783+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2784+ VALGRIND_RESTORE_STACK \
2785+ "mr %0,3" \
2786+ : /*out*/ "=r" (_res) \
2787+ : /*in*/ "r" (&_argvec[0]) \
2788+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2789+ ); \
2790+ lval = (__typeof__(lval)) _res; \
2791+ } while (0)
2792+
2793+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2794+ arg7,arg8,arg9) \
2795+ do { \
2796+ volatile OrigFn _orig = (orig); \
2797+ volatile unsigned long _argvec[10]; \
2798+ volatile unsigned long _res; \
2799+ _argvec[0] = (unsigned long)_orig.nraddr; \
2800+ _argvec[1] = (unsigned long)arg1; \
2801+ _argvec[2] = (unsigned long)arg2; \
2802+ _argvec[3] = (unsigned long)arg3; \
2803+ _argvec[4] = (unsigned long)arg4; \
2804+ _argvec[5] = (unsigned long)arg5; \
2805+ _argvec[6] = (unsigned long)arg6; \
2806+ _argvec[7] = (unsigned long)arg7; \
2807+ _argvec[8] = (unsigned long)arg8; \
2808+ _argvec[9] = (unsigned long)arg9; \
2809+ __asm__ volatile( \
2810+ VALGRIND_ALIGN_STACK \
2811+ "mr 11,%1\n\t" \
2812+ "addi 1,1,-16\n\t" \
2813+ /* arg9 */ \
2814+ "lwz 3,36(11)\n\t" \
2815+ "stw 3,8(1)\n\t" \
2816+ /* args1-8 */ \
2817+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2818+ "lwz 4,8(11)\n\t" \
2819+ "lwz 5,12(11)\n\t" \
2820+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2821+ "lwz 7,20(11)\n\t" \
2822+ "lwz 8,24(11)\n\t" \
2823+ "lwz 9,28(11)\n\t" \
2824+ "lwz 10,32(11)\n\t" /* arg8->r10 */ \
2825+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2826+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2827+ VALGRIND_RESTORE_STACK \
2828+ "mr %0,3" \
2829+ : /*out*/ "=r" (_res) \
2830+ : /*in*/ "r" (&_argvec[0]) \
2831+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2832+ ); \
2833+ lval = (__typeof__(lval)) _res; \
2834+ } while (0)
2835+
2836+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2837+ arg7,arg8,arg9,arg10) \
2838+ do { \
2839+ volatile OrigFn _orig = (orig); \
2840+ volatile unsigned long _argvec[11]; \
2841+ volatile unsigned long _res; \
2842+ _argvec[0] = (unsigned long)_orig.nraddr; \
2843+ _argvec[1] = (unsigned long)arg1; \
2844+ _argvec[2] = (unsigned long)arg2; \
2845+ _argvec[3] = (unsigned long)arg3; \
2846+ _argvec[4] = (unsigned long)arg4; \
2847+ _argvec[5] = (unsigned long)arg5; \
2848+ _argvec[6] = (unsigned long)arg6; \
2849+ _argvec[7] = (unsigned long)arg7; \
2850+ _argvec[8] = (unsigned long)arg8; \
2851+ _argvec[9] = (unsigned long)arg9; \
2852+ _argvec[10] = (unsigned long)arg10; \
2853+ __asm__ volatile( \
2854+ VALGRIND_ALIGN_STACK \
2855+ "mr 11,%1\n\t" \
2856+ "addi 1,1,-16\n\t" \
2857+ /* arg10 */ \
2858+ "lwz 3,40(11)\n\t" \
2859+ "stw 3,12(1)\n\t" \
2860+ /* arg9 */ \
2861+ "lwz 3,36(11)\n\t" \
2862+ "stw 3,8(1)\n\t" \
2863+ /* args1-8 */ \
2864+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2865+ "lwz 4,8(11)\n\t" \
2866+ "lwz 5,12(11)\n\t" \
2867+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2868+ "lwz 7,20(11)\n\t" \
2869+ "lwz 8,24(11)\n\t" \
2870+ "lwz 9,28(11)\n\t" \
2871+ "lwz 10,32(11)\n\t" /* arg8->r10 */ \
2872+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2873+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2874+ VALGRIND_RESTORE_STACK \
2875+ "mr %0,3" \
2876+ : /*out*/ "=r" (_res) \
2877+ : /*in*/ "r" (&_argvec[0]) \
2878+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2879+ ); \
2880+ lval = (__typeof__(lval)) _res; \
2881+ } while (0)
2882+
2883+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2884+ arg7,arg8,arg9,arg10,arg11) \
2885+ do { \
2886+ volatile OrigFn _orig = (orig); \
2887+ volatile unsigned long _argvec[12]; \
2888+ volatile unsigned long _res; \
2889+ _argvec[0] = (unsigned long)_orig.nraddr; \
2890+ _argvec[1] = (unsigned long)arg1; \
2891+ _argvec[2] = (unsigned long)arg2; \
2892+ _argvec[3] = (unsigned long)arg3; \
2893+ _argvec[4] = (unsigned long)arg4; \
2894+ _argvec[5] = (unsigned long)arg5; \
2895+ _argvec[6] = (unsigned long)arg6; \
2896+ _argvec[7] = (unsigned long)arg7; \
2897+ _argvec[8] = (unsigned long)arg8; \
2898+ _argvec[9] = (unsigned long)arg9; \
2899+ _argvec[10] = (unsigned long)arg10; \
2900+ _argvec[11] = (unsigned long)arg11; \
2901+ __asm__ volatile( \
2902+ VALGRIND_ALIGN_STACK \
2903+ "mr 11,%1\n\t" \
2904+ "addi 1,1,-32\n\t" \
2905+ /* arg11 */ \
2906+ "lwz 3,44(11)\n\t" \
2907+ "stw 3,16(1)\n\t" \
2908+ /* arg10 */ \
2909+ "lwz 3,40(11)\n\t" \
2910+ "stw 3,12(1)\n\t" \
2911+ /* arg9 */ \
2912+ "lwz 3,36(11)\n\t" \
2913+ "stw 3,8(1)\n\t" \
2914+ /* args1-8 */ \
2915+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2916+ "lwz 4,8(11)\n\t" \
2917+ "lwz 5,12(11)\n\t" \
2918+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2919+ "lwz 7,20(11)\n\t" \
2920+ "lwz 8,24(11)\n\t" \
2921+ "lwz 9,28(11)\n\t" \
2922+ "lwz 10,32(11)\n\t" /* arg8->r10 */ \
2923+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2924+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2925+ VALGRIND_RESTORE_STACK \
2926+ "mr %0,3" \
2927+ : /*out*/ "=r" (_res) \
2928+ : /*in*/ "r" (&_argvec[0]) \
2929+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2930+ ); \
2931+ lval = (__typeof__(lval)) _res; \
2932+ } while (0)
2933+
2934+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2935+ arg7,arg8,arg9,arg10,arg11,arg12) \
2936+ do { \
2937+ volatile OrigFn _orig = (orig); \
2938+ volatile unsigned long _argvec[13]; \
2939+ volatile unsigned long _res; \
2940+ _argvec[0] = (unsigned long)_orig.nraddr; \
2941+ _argvec[1] = (unsigned long)arg1; \
2942+ _argvec[2] = (unsigned long)arg2; \
2943+ _argvec[3] = (unsigned long)arg3; \
2944+ _argvec[4] = (unsigned long)arg4; \
2945+ _argvec[5] = (unsigned long)arg5; \
2946+ _argvec[6] = (unsigned long)arg6; \
2947+ _argvec[7] = (unsigned long)arg7; \
2948+ _argvec[8] = (unsigned long)arg8; \
2949+ _argvec[9] = (unsigned long)arg9; \
2950+ _argvec[10] = (unsigned long)arg10; \
2951+ _argvec[11] = (unsigned long)arg11; \
2952+ _argvec[12] = (unsigned long)arg12; \
2953+ __asm__ volatile( \
2954+ VALGRIND_ALIGN_STACK \
2955+ "mr 11,%1\n\t" \
2956+ "addi 1,1,-32\n\t" \
2957+ /* arg12 */ \
2958+ "lwz 3,48(11)\n\t" \
2959+ "stw 3,20(1)\n\t" \
2960+ /* arg11 */ \
2961+ "lwz 3,44(11)\n\t" \
2962+ "stw 3,16(1)\n\t" \
2963+ /* arg10 */ \
2964+ "lwz 3,40(11)\n\t" \
2965+ "stw 3,12(1)\n\t" \
2966+ /* arg9 */ \
2967+ "lwz 3,36(11)\n\t" \
2968+ "stw 3,8(1)\n\t" \
2969+ /* args1-8 */ \
2970+ "lwz 3,4(11)\n\t" /* arg1->r3 */ \
2971+ "lwz 4,8(11)\n\t" \
2972+ "lwz 5,12(11)\n\t" \
2973+ "lwz 6,16(11)\n\t" /* arg4->r6 */ \
2974+ "lwz 7,20(11)\n\t" \
2975+ "lwz 8,24(11)\n\t" \
2976+ "lwz 9,28(11)\n\t" \
2977+ "lwz 10,32(11)\n\t" /* arg8->r10 */ \
2978+ "lwz 11,0(11)\n\t" /* target->r11 */ \
2979+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2980+ VALGRIND_RESTORE_STACK \
2981+ "mr %0,3" \
2982+ : /*out*/ "=r" (_res) \
2983+ : /*in*/ "r" (&_argvec[0]) \
2984+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
2985+ ); \
2986+ lval = (__typeof__(lval)) _res; \
2987+ } while (0)
2988+
2989+#endif /* PLAT_ppc32_linux */
2990+
2991+/* ------------------------ ppc64-linux ------------------------ */
2992+
2993+#if defined(PLAT_ppc64be_linux)
2994+
2995+/* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */
2996+
2997+/* These regs are trashed by the hidden call. */
2998+#define __CALLER_SAVED_REGS \
2999+ "lr", "ctr", "xer", \
3000+ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \
3001+ "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \
3002+ "r11", "r12", "r13"
3003+
3004+/* Macros to save and align the stack before making a function
3005+ call and restore it afterwards as gcc may not keep the stack
3006+ pointer aligned if it doesn't realise calls are being made
3007+ to other functions. */
3008+
3009+#define VALGRIND_ALIGN_STACK \
3010+ "mr 28,1\n\t" \
3011+ "rldicr 1,1,0,59\n\t"
3012+#define VALGRIND_RESTORE_STACK \
3013+ "mr 1,28\n\t"
3014+
3015+/* These CALL_FN_ macros assume that on ppc64-linux, sizeof(unsigned
3016+ long) == 8. */
3017+
3018+#define CALL_FN_W_v(lval, orig) \
3019+ do { \
3020+ volatile OrigFn _orig = (orig); \
3021+ volatile unsigned long _argvec[3+0]; \
3022+ volatile unsigned long _res; \
3023+ /* _argvec[0] holds current r2 across the call */ \
3024+ _argvec[1] = (unsigned long)_orig.r2; \
3025+ _argvec[2] = (unsigned long)_orig.nraddr; \
3026+ __asm__ volatile( \
3027+ VALGRIND_ALIGN_STACK \
3028+ "mr 11,%1\n\t" \
3029+ "std 2,-16(11)\n\t" /* save tocptr */ \
3030+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3031+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3032+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3033+ "mr 11,%1\n\t" \
3034+ "mr %0,3\n\t" \
3035+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3036+ VALGRIND_RESTORE_STACK \
3037+ : /*out*/ "=r" (_res) \
3038+ : /*in*/ "r" (&_argvec[2]) \
3039+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3040+ ); \
3041+ lval = (__typeof__(lval)) _res; \
3042+ } while (0)
3043+
3044+#define CALL_FN_W_W(lval, orig, arg1) \
3045+ do { \
3046+ volatile OrigFn _orig = (orig); \
3047+ volatile unsigned long _argvec[3+1]; \
3048+ volatile unsigned long _res; \
3049+ /* _argvec[0] holds current r2 across the call */ \
3050+ _argvec[1] = (unsigned long)_orig.r2; \
3051+ _argvec[2] = (unsigned long)_orig.nraddr; \
3052+ _argvec[2+1] = (unsigned long)arg1; \
3053+ __asm__ volatile( \
3054+ VALGRIND_ALIGN_STACK \
3055+ "mr 11,%1\n\t" \
3056+ "std 2,-16(11)\n\t" /* save tocptr */ \
3057+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3058+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3059+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3060+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3061+ "mr 11,%1\n\t" \
3062+ "mr %0,3\n\t" \
3063+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3064+ VALGRIND_RESTORE_STACK \
3065+ : /*out*/ "=r" (_res) \
3066+ : /*in*/ "r" (&_argvec[2]) \
3067+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3068+ ); \
3069+ lval = (__typeof__(lval)) _res; \
3070+ } while (0)
3071+
3072+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
3073+ do { \
3074+ volatile OrigFn _orig = (orig); \
3075+ volatile unsigned long _argvec[3+2]; \
3076+ volatile unsigned long _res; \
3077+ /* _argvec[0] holds current r2 across the call */ \
3078+ _argvec[1] = (unsigned long)_orig.r2; \
3079+ _argvec[2] = (unsigned long)_orig.nraddr; \
3080+ _argvec[2+1] = (unsigned long)arg1; \
3081+ _argvec[2+2] = (unsigned long)arg2; \
3082+ __asm__ volatile( \
3083+ VALGRIND_ALIGN_STACK \
3084+ "mr 11,%1\n\t" \
3085+ "std 2,-16(11)\n\t" /* save tocptr */ \
3086+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3087+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3088+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3089+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3090+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3091+ "mr 11,%1\n\t" \
3092+ "mr %0,3\n\t" \
3093+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3094+ VALGRIND_RESTORE_STACK \
3095+ : /*out*/ "=r" (_res) \
3096+ : /*in*/ "r" (&_argvec[2]) \
3097+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3098+ ); \
3099+ lval = (__typeof__(lval)) _res; \
3100+ } while (0)
3101+
3102+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
3103+ do { \
3104+ volatile OrigFn _orig = (orig); \
3105+ volatile unsigned long _argvec[3+3]; \
3106+ volatile unsigned long _res; \
3107+ /* _argvec[0] holds current r2 across the call */ \
3108+ _argvec[1] = (unsigned long)_orig.r2; \
3109+ _argvec[2] = (unsigned long)_orig.nraddr; \
3110+ _argvec[2+1] = (unsigned long)arg1; \
3111+ _argvec[2+2] = (unsigned long)arg2; \
3112+ _argvec[2+3] = (unsigned long)arg3; \
3113+ __asm__ volatile( \
3114+ VALGRIND_ALIGN_STACK \
3115+ "mr 11,%1\n\t" \
3116+ "std 2,-16(11)\n\t" /* save tocptr */ \
3117+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3118+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3119+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3120+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3121+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3122+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3123+ "mr 11,%1\n\t" \
3124+ "mr %0,3\n\t" \
3125+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3126+ VALGRIND_RESTORE_STACK \
3127+ : /*out*/ "=r" (_res) \
3128+ : /*in*/ "r" (&_argvec[2]) \
3129+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3130+ ); \
3131+ lval = (__typeof__(lval)) _res; \
3132+ } while (0)
3133+
3134+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
3135+ do { \
3136+ volatile OrigFn _orig = (orig); \
3137+ volatile unsigned long _argvec[3+4]; \
3138+ volatile unsigned long _res; \
3139+ /* _argvec[0] holds current r2 across the call */ \
3140+ _argvec[1] = (unsigned long)_orig.r2; \
3141+ _argvec[2] = (unsigned long)_orig.nraddr; \
3142+ _argvec[2+1] = (unsigned long)arg1; \
3143+ _argvec[2+2] = (unsigned long)arg2; \
3144+ _argvec[2+3] = (unsigned long)arg3; \
3145+ _argvec[2+4] = (unsigned long)arg4; \
3146+ __asm__ volatile( \
3147+ VALGRIND_ALIGN_STACK \
3148+ "mr 11,%1\n\t" \
3149+ "std 2,-16(11)\n\t" /* save tocptr */ \
3150+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3151+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3152+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3153+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3154+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3155+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3156+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3157+ "mr 11,%1\n\t" \
3158+ "mr %0,3\n\t" \
3159+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3160+ VALGRIND_RESTORE_STACK \
3161+ : /*out*/ "=r" (_res) \
3162+ : /*in*/ "r" (&_argvec[2]) \
3163+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3164+ ); \
3165+ lval = (__typeof__(lval)) _res; \
3166+ } while (0)
3167+
3168+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
3169+ do { \
3170+ volatile OrigFn _orig = (orig); \
3171+ volatile unsigned long _argvec[3+5]; \
3172+ volatile unsigned long _res; \
3173+ /* _argvec[0] holds current r2 across the call */ \
3174+ _argvec[1] = (unsigned long)_orig.r2; \
3175+ _argvec[2] = (unsigned long)_orig.nraddr; \
3176+ _argvec[2+1] = (unsigned long)arg1; \
3177+ _argvec[2+2] = (unsigned long)arg2; \
3178+ _argvec[2+3] = (unsigned long)arg3; \
3179+ _argvec[2+4] = (unsigned long)arg4; \
3180+ _argvec[2+5] = (unsigned long)arg5; \
3181+ __asm__ volatile( \
3182+ VALGRIND_ALIGN_STACK \
3183+ "mr 11,%1\n\t" \
3184+ "std 2,-16(11)\n\t" /* save tocptr */ \
3185+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3186+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3187+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3188+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3189+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3190+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3191+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3192+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3193+ "mr 11,%1\n\t" \
3194+ "mr %0,3\n\t" \
3195+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3196+ VALGRIND_RESTORE_STACK \
3197+ : /*out*/ "=r" (_res) \
3198+ : /*in*/ "r" (&_argvec[2]) \
3199+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3200+ ); \
3201+ lval = (__typeof__(lval)) _res; \
3202+ } while (0)
3203+
3204+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
3205+ do { \
3206+ volatile OrigFn _orig = (orig); \
3207+ volatile unsigned long _argvec[3+6]; \
3208+ volatile unsigned long _res; \
3209+ /* _argvec[0] holds current r2 across the call */ \
3210+ _argvec[1] = (unsigned long)_orig.r2; \
3211+ _argvec[2] = (unsigned long)_orig.nraddr; \
3212+ _argvec[2+1] = (unsigned long)arg1; \
3213+ _argvec[2+2] = (unsigned long)arg2; \
3214+ _argvec[2+3] = (unsigned long)arg3; \
3215+ _argvec[2+4] = (unsigned long)arg4; \
3216+ _argvec[2+5] = (unsigned long)arg5; \
3217+ _argvec[2+6] = (unsigned long)arg6; \
3218+ __asm__ volatile( \
3219+ VALGRIND_ALIGN_STACK \
3220+ "mr 11,%1\n\t" \
3221+ "std 2,-16(11)\n\t" /* save tocptr */ \
3222+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3223+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3224+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3225+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3226+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3227+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3228+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3229+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3230+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3231+ "mr 11,%1\n\t" \
3232+ "mr %0,3\n\t" \
3233+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3234+ VALGRIND_RESTORE_STACK \
3235+ : /*out*/ "=r" (_res) \
3236+ : /*in*/ "r" (&_argvec[2]) \
3237+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3238+ ); \
3239+ lval = (__typeof__(lval)) _res; \
3240+ } while (0)
3241+
3242+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3243+ arg7) \
3244+ do { \
3245+ volatile OrigFn _orig = (orig); \
3246+ volatile unsigned long _argvec[3+7]; \
3247+ volatile unsigned long _res; \
3248+ /* _argvec[0] holds current r2 across the call */ \
3249+ _argvec[1] = (unsigned long)_orig.r2; \
3250+ _argvec[2] = (unsigned long)_orig.nraddr; \
3251+ _argvec[2+1] = (unsigned long)arg1; \
3252+ _argvec[2+2] = (unsigned long)arg2; \
3253+ _argvec[2+3] = (unsigned long)arg3; \
3254+ _argvec[2+4] = (unsigned long)arg4; \
3255+ _argvec[2+5] = (unsigned long)arg5; \
3256+ _argvec[2+6] = (unsigned long)arg6; \
3257+ _argvec[2+7] = (unsigned long)arg7; \
3258+ __asm__ volatile( \
3259+ VALGRIND_ALIGN_STACK \
3260+ "mr 11,%1\n\t" \
3261+ "std 2,-16(11)\n\t" /* save tocptr */ \
3262+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3263+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3264+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3265+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3266+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3267+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3268+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3269+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3270+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3271+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3272+ "mr 11,%1\n\t" \
3273+ "mr %0,3\n\t" \
3274+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3275+ VALGRIND_RESTORE_STACK \
3276+ : /*out*/ "=r" (_res) \
3277+ : /*in*/ "r" (&_argvec[2]) \
3278+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3279+ ); \
3280+ lval = (__typeof__(lval)) _res; \
3281+ } while (0)
3282+
3283+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3284+ arg7,arg8) \
3285+ do { \
3286+ volatile OrigFn _orig = (orig); \
3287+ volatile unsigned long _argvec[3+8]; \
3288+ volatile unsigned long _res; \
3289+ /* _argvec[0] holds current r2 across the call */ \
3290+ _argvec[1] = (unsigned long)_orig.r2; \
3291+ _argvec[2] = (unsigned long)_orig.nraddr; \
3292+ _argvec[2+1] = (unsigned long)arg1; \
3293+ _argvec[2+2] = (unsigned long)arg2; \
3294+ _argvec[2+3] = (unsigned long)arg3; \
3295+ _argvec[2+4] = (unsigned long)arg4; \
3296+ _argvec[2+5] = (unsigned long)arg5; \
3297+ _argvec[2+6] = (unsigned long)arg6; \
3298+ _argvec[2+7] = (unsigned long)arg7; \
3299+ _argvec[2+8] = (unsigned long)arg8; \
3300+ __asm__ volatile( \
3301+ VALGRIND_ALIGN_STACK \
3302+ "mr 11,%1\n\t" \
3303+ "std 2,-16(11)\n\t" /* save tocptr */ \
3304+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3305+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3306+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3307+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3308+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3309+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3310+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3311+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3312+ "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3313+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3314+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3315+ "mr 11,%1\n\t" \
3316+ "mr %0,3\n\t" \
3317+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3318+ VALGRIND_RESTORE_STACK \
3319+ : /*out*/ "=r" (_res) \
3320+ : /*in*/ "r" (&_argvec[2]) \
3321+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3322+ ); \
3323+ lval = (__typeof__(lval)) _res; \
3324+ } while (0)
3325+
3326+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3327+ arg7,arg8,arg9) \
3328+ do { \
3329+ volatile OrigFn _orig = (orig); \
3330+ volatile unsigned long _argvec[3+9]; \
3331+ volatile unsigned long _res; \
3332+ /* _argvec[0] holds current r2 across the call */ \
3333+ _argvec[1] = (unsigned long)_orig.r2; \
3334+ _argvec[2] = (unsigned long)_orig.nraddr; \
3335+ _argvec[2+1] = (unsigned long)arg1; \
3336+ _argvec[2+2] = (unsigned long)arg2; \
3337+ _argvec[2+3] = (unsigned long)arg3; \
3338+ _argvec[2+4] = (unsigned long)arg4; \
3339+ _argvec[2+5] = (unsigned long)arg5; \
3340+ _argvec[2+6] = (unsigned long)arg6; \
3341+ _argvec[2+7] = (unsigned long)arg7; \
3342+ _argvec[2+8] = (unsigned long)arg8; \
3343+ _argvec[2+9] = (unsigned long)arg9; \
3344+ __asm__ volatile( \
3345+ VALGRIND_ALIGN_STACK \
3346+ "mr 11,%1\n\t" \
3347+ "std 2,-16(11)\n\t" /* save tocptr */ \
3348+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3349+ "addi 1,1,-128\n\t" /* expand stack frame */ \
3350+ /* arg9 */ \
3351+ "ld 3,72(11)\n\t" \
3352+ "std 3,112(1)\n\t" \
3353+ /* args1-8 */ \
3354+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3355+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3356+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3357+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3358+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3359+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3360+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3361+ "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3362+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3363+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3364+ "mr 11,%1\n\t" \
3365+ "mr %0,3\n\t" \
3366+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3367+ VALGRIND_RESTORE_STACK \
3368+ : /*out*/ "=r" (_res) \
3369+ : /*in*/ "r" (&_argvec[2]) \
3370+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3371+ ); \
3372+ lval = (__typeof__(lval)) _res; \
3373+ } while (0)
3374+
3375+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3376+ arg7,arg8,arg9,arg10) \
3377+ do { \
3378+ volatile OrigFn _orig = (orig); \
3379+ volatile unsigned long _argvec[3+10]; \
3380+ volatile unsigned long _res; \
3381+ /* _argvec[0] holds current r2 across the call */ \
3382+ _argvec[1] = (unsigned long)_orig.r2; \
3383+ _argvec[2] = (unsigned long)_orig.nraddr; \
3384+ _argvec[2+1] = (unsigned long)arg1; \
3385+ _argvec[2+2] = (unsigned long)arg2; \
3386+ _argvec[2+3] = (unsigned long)arg3; \
3387+ _argvec[2+4] = (unsigned long)arg4; \
3388+ _argvec[2+5] = (unsigned long)arg5; \
3389+ _argvec[2+6] = (unsigned long)arg6; \
3390+ _argvec[2+7] = (unsigned long)arg7; \
3391+ _argvec[2+8] = (unsigned long)arg8; \
3392+ _argvec[2+9] = (unsigned long)arg9; \
3393+ _argvec[2+10] = (unsigned long)arg10; \
3394+ __asm__ volatile( \
3395+ VALGRIND_ALIGN_STACK \
3396+ "mr 11,%1\n\t" \
3397+ "std 2,-16(11)\n\t" /* save tocptr */ \
3398+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3399+ "addi 1,1,-128\n\t" /* expand stack frame */ \
3400+ /* arg10 */ \
3401+ "ld 3,80(11)\n\t" \
3402+ "std 3,120(1)\n\t" \
3403+ /* arg9 */ \
3404+ "ld 3,72(11)\n\t" \
3405+ "std 3,112(1)\n\t" \
3406+ /* args1-8 */ \
3407+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3408+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3409+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3410+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3411+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3412+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3413+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3414+ "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3415+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3416+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3417+ "mr 11,%1\n\t" \
3418+ "mr %0,3\n\t" \
3419+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3420+ VALGRIND_RESTORE_STACK \
3421+ : /*out*/ "=r" (_res) \
3422+ : /*in*/ "r" (&_argvec[2]) \
3423+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3424+ ); \
3425+ lval = (__typeof__(lval)) _res; \
3426+ } while (0)
3427+
3428+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3429+ arg7,arg8,arg9,arg10,arg11) \
3430+ do { \
3431+ volatile OrigFn _orig = (orig); \
3432+ volatile unsigned long _argvec[3+11]; \
3433+ volatile unsigned long _res; \
3434+ /* _argvec[0] holds current r2 across the call */ \
3435+ _argvec[1] = (unsigned long)_orig.r2; \
3436+ _argvec[2] = (unsigned long)_orig.nraddr; \
3437+ _argvec[2+1] = (unsigned long)arg1; \
3438+ _argvec[2+2] = (unsigned long)arg2; \
3439+ _argvec[2+3] = (unsigned long)arg3; \
3440+ _argvec[2+4] = (unsigned long)arg4; \
3441+ _argvec[2+5] = (unsigned long)arg5; \
3442+ _argvec[2+6] = (unsigned long)arg6; \
3443+ _argvec[2+7] = (unsigned long)arg7; \
3444+ _argvec[2+8] = (unsigned long)arg8; \
3445+ _argvec[2+9] = (unsigned long)arg9; \
3446+ _argvec[2+10] = (unsigned long)arg10; \
3447+ _argvec[2+11] = (unsigned long)arg11; \
3448+ __asm__ volatile( \
3449+ VALGRIND_ALIGN_STACK \
3450+ "mr 11,%1\n\t" \
3451+ "std 2,-16(11)\n\t" /* save tocptr */ \
3452+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3453+ "addi 1,1,-144\n\t" /* expand stack frame */ \
3454+ /* arg11 */ \
3455+ "ld 3,88(11)\n\t" \
3456+ "std 3,128(1)\n\t" \
3457+ /* arg10 */ \
3458+ "ld 3,80(11)\n\t" \
3459+ "std 3,120(1)\n\t" \
3460+ /* arg9 */ \
3461+ "ld 3,72(11)\n\t" \
3462+ "std 3,112(1)\n\t" \
3463+ /* args1-8 */ \
3464+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3465+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3466+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3467+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3468+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3469+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3470+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3471+ "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3472+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3473+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3474+ "mr 11,%1\n\t" \
3475+ "mr %0,3\n\t" \
3476+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3477+ VALGRIND_RESTORE_STACK \
3478+ : /*out*/ "=r" (_res) \
3479+ : /*in*/ "r" (&_argvec[2]) \
3480+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3481+ ); \
3482+ lval = (__typeof__(lval)) _res; \
3483+ } while (0)
3484+
3485+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3486+ arg7,arg8,arg9,arg10,arg11,arg12) \
3487+ do { \
3488+ volatile OrigFn _orig = (orig); \
3489+ volatile unsigned long _argvec[3+12]; \
3490+ volatile unsigned long _res; \
3491+ /* _argvec[0] holds current r2 across the call */ \
3492+ _argvec[1] = (unsigned long)_orig.r2; \
3493+ _argvec[2] = (unsigned long)_orig.nraddr; \
3494+ _argvec[2+1] = (unsigned long)arg1; \
3495+ _argvec[2+2] = (unsigned long)arg2; \
3496+ _argvec[2+3] = (unsigned long)arg3; \
3497+ _argvec[2+4] = (unsigned long)arg4; \
3498+ _argvec[2+5] = (unsigned long)arg5; \
3499+ _argvec[2+6] = (unsigned long)arg6; \
3500+ _argvec[2+7] = (unsigned long)arg7; \
3501+ _argvec[2+8] = (unsigned long)arg8; \
3502+ _argvec[2+9] = (unsigned long)arg9; \
3503+ _argvec[2+10] = (unsigned long)arg10; \
3504+ _argvec[2+11] = (unsigned long)arg11; \
3505+ _argvec[2+12] = (unsigned long)arg12; \
3506+ __asm__ volatile( \
3507+ VALGRIND_ALIGN_STACK \
3508+ "mr 11,%1\n\t" \
3509+ "std 2,-16(11)\n\t" /* save tocptr */ \
3510+ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3511+ "addi 1,1,-144\n\t" /* expand stack frame */ \
3512+ /* arg12 */ \
3513+ "ld 3,96(11)\n\t" \
3514+ "std 3,136(1)\n\t" \
3515+ /* arg11 */ \
3516+ "ld 3,88(11)\n\t" \
3517+ "std 3,128(1)\n\t" \
3518+ /* arg10 */ \
3519+ "ld 3,80(11)\n\t" \
3520+ "std 3,120(1)\n\t" \
3521+ /* arg9 */ \
3522+ "ld 3,72(11)\n\t" \
3523+ "std 3,112(1)\n\t" \
3524+ /* args1-8 */ \
3525+ "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3526+ "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3527+ "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3528+ "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3529+ "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3530+ "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3531+ "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3532+ "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3533+ "ld 11, 0(11)\n\t" /* target->r11 */ \
3534+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3535+ "mr 11,%1\n\t" \
3536+ "mr %0,3\n\t" \
3537+ "ld 2,-16(11)\n\t" /* restore tocptr */ \
3538+ VALGRIND_RESTORE_STACK \
3539+ : /*out*/ "=r" (_res) \
3540+ : /*in*/ "r" (&_argvec[2]) \
3541+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3542+ ); \
3543+ lval = (__typeof__(lval)) _res; \
3544+ } while (0)
3545+
3546+#endif /* PLAT_ppc64be_linux */
3547+
3548+/* ------------------------- ppc64le-linux ----------------------- */
3549+#if defined(PLAT_ppc64le_linux)
3550+
3551+/* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */
3552+
3553+/* These regs are trashed by the hidden call. */
3554+#define __CALLER_SAVED_REGS \
3555+ "lr", "ctr", "xer", \
3556+ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \
3557+ "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \
3558+ "r11", "r12", "r13"
3559+
3560+/* Macros to save and align the stack before making a function
3561+ call and restore it afterwards as gcc may not keep the stack
3562+ pointer aligned if it doesn't realise calls are being made
3563+ to other functions. */
3564+
3565+#define VALGRIND_ALIGN_STACK \
3566+ "mr 28,1\n\t" \
3567+ "rldicr 1,1,0,59\n\t"
3568+#define VALGRIND_RESTORE_STACK \
3569+ "mr 1,28\n\t"
3570+
3571+/* These CALL_FN_ macros assume that on ppc64-linux, sizeof(unsigned
3572+ long) == 8. */
3573+
3574+#define CALL_FN_W_v(lval, orig) \
3575+ do { \
3576+ volatile OrigFn _orig = (orig); \
3577+ volatile unsigned long _argvec[3+0]; \
3578+ volatile unsigned long _res; \
3579+ /* _argvec[0] holds current r2 across the call */ \
3580+ _argvec[1] = (unsigned long)_orig.r2; \
3581+ _argvec[2] = (unsigned long)_orig.nraddr; \
3582+ __asm__ volatile( \
3583+ VALGRIND_ALIGN_STACK \
3584+ "mr 12,%1\n\t" \
3585+ "std 2,-16(12)\n\t" /* save tocptr */ \
3586+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3587+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3588+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3589+ "mr 12,%1\n\t" \
3590+ "mr %0,3\n\t" \
3591+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3592+ VALGRIND_RESTORE_STACK \
3593+ : /*out*/ "=r" (_res) \
3594+ : /*in*/ "r" (&_argvec[2]) \
3595+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3596+ ); \
3597+ lval = (__typeof__(lval)) _res; \
3598+ } while (0)
3599+
3600+#define CALL_FN_W_W(lval, orig, arg1) \
3601+ do { \
3602+ volatile OrigFn _orig = (orig); \
3603+ volatile unsigned long _argvec[3+1]; \
3604+ volatile unsigned long _res; \
3605+ /* _argvec[0] holds current r2 across the call */ \
3606+ _argvec[1] = (unsigned long)_orig.r2; \
3607+ _argvec[2] = (unsigned long)_orig.nraddr; \
3608+ _argvec[2+1] = (unsigned long)arg1; \
3609+ __asm__ volatile( \
3610+ VALGRIND_ALIGN_STACK \
3611+ "mr 12,%1\n\t" \
3612+ "std 2,-16(12)\n\t" /* save tocptr */ \
3613+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3614+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3615+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3616+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3617+ "mr 12,%1\n\t" \
3618+ "mr %0,3\n\t" \
3619+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3620+ VALGRIND_RESTORE_STACK \
3621+ : /*out*/ "=r" (_res) \
3622+ : /*in*/ "r" (&_argvec[2]) \
3623+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3624+ ); \
3625+ lval = (__typeof__(lval)) _res; \
3626+ } while (0)
3627+
3628+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
3629+ do { \
3630+ volatile OrigFn _orig = (orig); \
3631+ volatile unsigned long _argvec[3+2]; \
3632+ volatile unsigned long _res; \
3633+ /* _argvec[0] holds current r2 across the call */ \
3634+ _argvec[1] = (unsigned long)_orig.r2; \
3635+ _argvec[2] = (unsigned long)_orig.nraddr; \
3636+ _argvec[2+1] = (unsigned long)arg1; \
3637+ _argvec[2+2] = (unsigned long)arg2; \
3638+ __asm__ volatile( \
3639+ VALGRIND_ALIGN_STACK \
3640+ "mr 12,%1\n\t" \
3641+ "std 2,-16(12)\n\t" /* save tocptr */ \
3642+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3643+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3644+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3645+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3646+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3647+ "mr 12,%1\n\t" \
3648+ "mr %0,3\n\t" \
3649+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3650+ VALGRIND_RESTORE_STACK \
3651+ : /*out*/ "=r" (_res) \
3652+ : /*in*/ "r" (&_argvec[2]) \
3653+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3654+ ); \
3655+ lval = (__typeof__(lval)) _res; \
3656+ } while (0)
3657+
3658+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
3659+ do { \
3660+ volatile OrigFn _orig = (orig); \
3661+ volatile unsigned long _argvec[3+3]; \
3662+ volatile unsigned long _res; \
3663+ /* _argvec[0] holds current r2 across the call */ \
3664+ _argvec[1] = (unsigned long)_orig.r2; \
3665+ _argvec[2] = (unsigned long)_orig.nraddr; \
3666+ _argvec[2+1] = (unsigned long)arg1; \
3667+ _argvec[2+2] = (unsigned long)arg2; \
3668+ _argvec[2+3] = (unsigned long)arg3; \
3669+ __asm__ volatile( \
3670+ VALGRIND_ALIGN_STACK \
3671+ "mr 12,%1\n\t" \
3672+ "std 2,-16(12)\n\t" /* save tocptr */ \
3673+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3674+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3675+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3676+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3677+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3678+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3679+ "mr 12,%1\n\t" \
3680+ "mr %0,3\n\t" \
3681+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3682+ VALGRIND_RESTORE_STACK \
3683+ : /*out*/ "=r" (_res) \
3684+ : /*in*/ "r" (&_argvec[2]) \
3685+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3686+ ); \
3687+ lval = (__typeof__(lval)) _res; \
3688+ } while (0)
3689+
3690+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
3691+ do { \
3692+ volatile OrigFn _orig = (orig); \
3693+ volatile unsigned long _argvec[3+4]; \
3694+ volatile unsigned long _res; \
3695+ /* _argvec[0] holds current r2 across the call */ \
3696+ _argvec[1] = (unsigned long)_orig.r2; \
3697+ _argvec[2] = (unsigned long)_orig.nraddr; \
3698+ _argvec[2+1] = (unsigned long)arg1; \
3699+ _argvec[2+2] = (unsigned long)arg2; \
3700+ _argvec[2+3] = (unsigned long)arg3; \
3701+ _argvec[2+4] = (unsigned long)arg4; \
3702+ __asm__ volatile( \
3703+ VALGRIND_ALIGN_STACK \
3704+ "mr 12,%1\n\t" \
3705+ "std 2,-16(12)\n\t" /* save tocptr */ \
3706+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3707+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3708+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3709+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3710+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3711+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3712+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3713+ "mr 12,%1\n\t" \
3714+ "mr %0,3\n\t" \
3715+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3716+ VALGRIND_RESTORE_STACK \
3717+ : /*out*/ "=r" (_res) \
3718+ : /*in*/ "r" (&_argvec[2]) \
3719+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3720+ ); \
3721+ lval = (__typeof__(lval)) _res; \
3722+ } while (0)
3723+
3724+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
3725+ do { \
3726+ volatile OrigFn _orig = (orig); \
3727+ volatile unsigned long _argvec[3+5]; \
3728+ volatile unsigned long _res; \
3729+ /* _argvec[0] holds current r2 across the call */ \
3730+ _argvec[1] = (unsigned long)_orig.r2; \
3731+ _argvec[2] = (unsigned long)_orig.nraddr; \
3732+ _argvec[2+1] = (unsigned long)arg1; \
3733+ _argvec[2+2] = (unsigned long)arg2; \
3734+ _argvec[2+3] = (unsigned long)arg3; \
3735+ _argvec[2+4] = (unsigned long)arg4; \
3736+ _argvec[2+5] = (unsigned long)arg5; \
3737+ __asm__ volatile( \
3738+ VALGRIND_ALIGN_STACK \
3739+ "mr 12,%1\n\t" \
3740+ "std 2,-16(12)\n\t" /* save tocptr */ \
3741+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3742+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3743+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3744+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3745+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3746+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3747+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3748+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3749+ "mr 12,%1\n\t" \
3750+ "mr %0,3\n\t" \
3751+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3752+ VALGRIND_RESTORE_STACK \
3753+ : /*out*/ "=r" (_res) \
3754+ : /*in*/ "r" (&_argvec[2]) \
3755+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3756+ ); \
3757+ lval = (__typeof__(lval)) _res; \
3758+ } while (0)
3759+
3760+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
3761+ do { \
3762+ volatile OrigFn _orig = (orig); \
3763+ volatile unsigned long _argvec[3+6]; \
3764+ volatile unsigned long _res; \
3765+ /* _argvec[0] holds current r2 across the call */ \
3766+ _argvec[1] = (unsigned long)_orig.r2; \
3767+ _argvec[2] = (unsigned long)_orig.nraddr; \
3768+ _argvec[2+1] = (unsigned long)arg1; \
3769+ _argvec[2+2] = (unsigned long)arg2; \
3770+ _argvec[2+3] = (unsigned long)arg3; \
3771+ _argvec[2+4] = (unsigned long)arg4; \
3772+ _argvec[2+5] = (unsigned long)arg5; \
3773+ _argvec[2+6] = (unsigned long)arg6; \
3774+ __asm__ volatile( \
3775+ VALGRIND_ALIGN_STACK \
3776+ "mr 12,%1\n\t" \
3777+ "std 2,-16(12)\n\t" /* save tocptr */ \
3778+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3779+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3780+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3781+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3782+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3783+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3784+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
3785+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3786+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3787+ "mr 12,%1\n\t" \
3788+ "mr %0,3\n\t" \
3789+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3790+ VALGRIND_RESTORE_STACK \
3791+ : /*out*/ "=r" (_res) \
3792+ : /*in*/ "r" (&_argvec[2]) \
3793+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3794+ ); \
3795+ lval = (__typeof__(lval)) _res; \
3796+ } while (0)
3797+
3798+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3799+ arg7) \
3800+ do { \
3801+ volatile OrigFn _orig = (orig); \
3802+ volatile unsigned long _argvec[3+7]; \
3803+ volatile unsigned long _res; \
3804+ /* _argvec[0] holds current r2 across the call */ \
3805+ _argvec[1] = (unsigned long)_orig.r2; \
3806+ _argvec[2] = (unsigned long)_orig.nraddr; \
3807+ _argvec[2+1] = (unsigned long)arg1; \
3808+ _argvec[2+2] = (unsigned long)arg2; \
3809+ _argvec[2+3] = (unsigned long)arg3; \
3810+ _argvec[2+4] = (unsigned long)arg4; \
3811+ _argvec[2+5] = (unsigned long)arg5; \
3812+ _argvec[2+6] = (unsigned long)arg6; \
3813+ _argvec[2+7] = (unsigned long)arg7; \
3814+ __asm__ volatile( \
3815+ VALGRIND_ALIGN_STACK \
3816+ "mr 12,%1\n\t" \
3817+ "std 2,-16(12)\n\t" /* save tocptr */ \
3818+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3819+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3820+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3821+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3822+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3823+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3824+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
3825+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
3826+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3827+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3828+ "mr 12,%1\n\t" \
3829+ "mr %0,3\n\t" \
3830+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3831+ VALGRIND_RESTORE_STACK \
3832+ : /*out*/ "=r" (_res) \
3833+ : /*in*/ "r" (&_argvec[2]) \
3834+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3835+ ); \
3836+ lval = (__typeof__(lval)) _res; \
3837+ } while (0)
3838+
3839+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3840+ arg7,arg8) \
3841+ do { \
3842+ volatile OrigFn _orig = (orig); \
3843+ volatile unsigned long _argvec[3+8]; \
3844+ volatile unsigned long _res; \
3845+ /* _argvec[0] holds current r2 across the call */ \
3846+ _argvec[1] = (unsigned long)_orig.r2; \
3847+ _argvec[2] = (unsigned long)_orig.nraddr; \
3848+ _argvec[2+1] = (unsigned long)arg1; \
3849+ _argvec[2+2] = (unsigned long)arg2; \
3850+ _argvec[2+3] = (unsigned long)arg3; \
3851+ _argvec[2+4] = (unsigned long)arg4; \
3852+ _argvec[2+5] = (unsigned long)arg5; \
3853+ _argvec[2+6] = (unsigned long)arg6; \
3854+ _argvec[2+7] = (unsigned long)arg7; \
3855+ _argvec[2+8] = (unsigned long)arg8; \
3856+ __asm__ volatile( \
3857+ VALGRIND_ALIGN_STACK \
3858+ "mr 12,%1\n\t" \
3859+ "std 2,-16(12)\n\t" /* save tocptr */ \
3860+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3861+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3862+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3863+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3864+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3865+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3866+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
3867+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
3868+ "ld 10, 64(12)\n\t" /* arg8->r10 */ \
3869+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3870+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3871+ "mr 12,%1\n\t" \
3872+ "mr %0,3\n\t" \
3873+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3874+ VALGRIND_RESTORE_STACK \
3875+ : /*out*/ "=r" (_res) \
3876+ : /*in*/ "r" (&_argvec[2]) \
3877+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3878+ ); \
3879+ lval = (__typeof__(lval)) _res; \
3880+ } while (0)
3881+
3882+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3883+ arg7,arg8,arg9) \
3884+ do { \
3885+ volatile OrigFn _orig = (orig); \
3886+ volatile unsigned long _argvec[3+9]; \
3887+ volatile unsigned long _res; \
3888+ /* _argvec[0] holds current r2 across the call */ \
3889+ _argvec[1] = (unsigned long)_orig.r2; \
3890+ _argvec[2] = (unsigned long)_orig.nraddr; \
3891+ _argvec[2+1] = (unsigned long)arg1; \
3892+ _argvec[2+2] = (unsigned long)arg2; \
3893+ _argvec[2+3] = (unsigned long)arg3; \
3894+ _argvec[2+4] = (unsigned long)arg4; \
3895+ _argvec[2+5] = (unsigned long)arg5; \
3896+ _argvec[2+6] = (unsigned long)arg6; \
3897+ _argvec[2+7] = (unsigned long)arg7; \
3898+ _argvec[2+8] = (unsigned long)arg8; \
3899+ _argvec[2+9] = (unsigned long)arg9; \
3900+ __asm__ volatile( \
3901+ VALGRIND_ALIGN_STACK \
3902+ "mr 12,%1\n\t" \
3903+ "std 2,-16(12)\n\t" /* save tocptr */ \
3904+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3905+ "addi 1,1,-128\n\t" /* expand stack frame */ \
3906+ /* arg9 */ \
3907+ "ld 3,72(12)\n\t" \
3908+ "std 3,96(1)\n\t" \
3909+ /* args1-8 */ \
3910+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3911+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3912+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3913+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3914+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3915+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
3916+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
3917+ "ld 10, 64(12)\n\t" /* arg8->r10 */ \
3918+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3919+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3920+ "mr 12,%1\n\t" \
3921+ "mr %0,3\n\t" \
3922+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3923+ VALGRIND_RESTORE_STACK \
3924+ : /*out*/ "=r" (_res) \
3925+ : /*in*/ "r" (&_argvec[2]) \
3926+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3927+ ); \
3928+ lval = (__typeof__(lval)) _res; \
3929+ } while (0)
3930+
3931+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3932+ arg7,arg8,arg9,arg10) \
3933+ do { \
3934+ volatile OrigFn _orig = (orig); \
3935+ volatile unsigned long _argvec[3+10]; \
3936+ volatile unsigned long _res; \
3937+ /* _argvec[0] holds current r2 across the call */ \
3938+ _argvec[1] = (unsigned long)_orig.r2; \
3939+ _argvec[2] = (unsigned long)_orig.nraddr; \
3940+ _argvec[2+1] = (unsigned long)arg1; \
3941+ _argvec[2+2] = (unsigned long)arg2; \
3942+ _argvec[2+3] = (unsigned long)arg3; \
3943+ _argvec[2+4] = (unsigned long)arg4; \
3944+ _argvec[2+5] = (unsigned long)arg5; \
3945+ _argvec[2+6] = (unsigned long)arg6; \
3946+ _argvec[2+7] = (unsigned long)arg7; \
3947+ _argvec[2+8] = (unsigned long)arg8; \
3948+ _argvec[2+9] = (unsigned long)arg9; \
3949+ _argvec[2+10] = (unsigned long)arg10; \
3950+ __asm__ volatile( \
3951+ VALGRIND_ALIGN_STACK \
3952+ "mr 12,%1\n\t" \
3953+ "std 2,-16(12)\n\t" /* save tocptr */ \
3954+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
3955+ "addi 1,1,-128\n\t" /* expand stack frame */ \
3956+ /* arg10 */ \
3957+ "ld 3,80(12)\n\t" \
3958+ "std 3,104(1)\n\t" \
3959+ /* arg9 */ \
3960+ "ld 3,72(12)\n\t" \
3961+ "std 3,96(1)\n\t" \
3962+ /* args1-8 */ \
3963+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
3964+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
3965+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
3966+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
3967+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
3968+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
3969+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
3970+ "ld 10, 64(12)\n\t" /* arg8->r10 */ \
3971+ "ld 12, 0(12)\n\t" /* target->r12 */ \
3972+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
3973+ "mr 12,%1\n\t" \
3974+ "mr %0,3\n\t" \
3975+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
3976+ VALGRIND_RESTORE_STACK \
3977+ : /*out*/ "=r" (_res) \
3978+ : /*in*/ "r" (&_argvec[2]) \
3979+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
3980+ ); \
3981+ lval = (__typeof__(lval)) _res; \
3982+ } while (0)
3983+
3984+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3985+ arg7,arg8,arg9,arg10,arg11) \
3986+ do { \
3987+ volatile OrigFn _orig = (orig); \
3988+ volatile unsigned long _argvec[3+11]; \
3989+ volatile unsigned long _res; \
3990+ /* _argvec[0] holds current r2 across the call */ \
3991+ _argvec[1] = (unsigned long)_orig.r2; \
3992+ _argvec[2] = (unsigned long)_orig.nraddr; \
3993+ _argvec[2+1] = (unsigned long)arg1; \
3994+ _argvec[2+2] = (unsigned long)arg2; \
3995+ _argvec[2+3] = (unsigned long)arg3; \
3996+ _argvec[2+4] = (unsigned long)arg4; \
3997+ _argvec[2+5] = (unsigned long)arg5; \
3998+ _argvec[2+6] = (unsigned long)arg6; \
3999+ _argvec[2+7] = (unsigned long)arg7; \
4000+ _argvec[2+8] = (unsigned long)arg8; \
4001+ _argvec[2+9] = (unsigned long)arg9; \
4002+ _argvec[2+10] = (unsigned long)arg10; \
4003+ _argvec[2+11] = (unsigned long)arg11; \
4004+ __asm__ volatile( \
4005+ VALGRIND_ALIGN_STACK \
4006+ "mr 12,%1\n\t" \
4007+ "std 2,-16(12)\n\t" /* save tocptr */ \
4008+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
4009+ "addi 1,1,-144\n\t" /* expand stack frame */ \
4010+ /* arg11 */ \
4011+ "ld 3,88(12)\n\t" \
4012+ "std 3,112(1)\n\t" \
4013+ /* arg10 */ \
4014+ "ld 3,80(12)\n\t" \
4015+ "std 3,104(1)\n\t" \
4016+ /* arg9 */ \
4017+ "ld 3,72(12)\n\t" \
4018+ "std 3,96(1)\n\t" \
4019+ /* args1-8 */ \
4020+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
4021+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
4022+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
4023+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
4024+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
4025+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
4026+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
4027+ "ld 10, 64(12)\n\t" /* arg8->r10 */ \
4028+ "ld 12, 0(12)\n\t" /* target->r12 */ \
4029+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
4030+ "mr 12,%1\n\t" \
4031+ "mr %0,3\n\t" \
4032+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
4033+ VALGRIND_RESTORE_STACK \
4034+ : /*out*/ "=r" (_res) \
4035+ : /*in*/ "r" (&_argvec[2]) \
4036+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
4037+ ); \
4038+ lval = (__typeof__(lval)) _res; \
4039+ } while (0)
4040+
4041+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4042+ arg7,arg8,arg9,arg10,arg11,arg12) \
4043+ do { \
4044+ volatile OrigFn _orig = (orig); \
4045+ volatile unsigned long _argvec[3+12]; \
4046+ volatile unsigned long _res; \
4047+ /* _argvec[0] holds current r2 across the call */ \
4048+ _argvec[1] = (unsigned long)_orig.r2; \
4049+ _argvec[2] = (unsigned long)_orig.nraddr; \
4050+ _argvec[2+1] = (unsigned long)arg1; \
4051+ _argvec[2+2] = (unsigned long)arg2; \
4052+ _argvec[2+3] = (unsigned long)arg3; \
4053+ _argvec[2+4] = (unsigned long)arg4; \
4054+ _argvec[2+5] = (unsigned long)arg5; \
4055+ _argvec[2+6] = (unsigned long)arg6; \
4056+ _argvec[2+7] = (unsigned long)arg7; \
4057+ _argvec[2+8] = (unsigned long)arg8; \
4058+ _argvec[2+9] = (unsigned long)arg9; \
4059+ _argvec[2+10] = (unsigned long)arg10; \
4060+ _argvec[2+11] = (unsigned long)arg11; \
4061+ _argvec[2+12] = (unsigned long)arg12; \
4062+ __asm__ volatile( \
4063+ VALGRIND_ALIGN_STACK \
4064+ "mr 12,%1\n\t" \
4065+ "std 2,-16(12)\n\t" /* save tocptr */ \
4066+ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \
4067+ "addi 1,1,-144\n\t" /* expand stack frame */ \
4068+ /* arg12 */ \
4069+ "ld 3,96(12)\n\t" \
4070+ "std 3,120(1)\n\t" \
4071+ /* arg11 */ \
4072+ "ld 3,88(12)\n\t" \
4073+ "std 3,112(1)\n\t" \
4074+ /* arg10 */ \
4075+ "ld 3,80(12)\n\t" \
4076+ "std 3,104(1)\n\t" \
4077+ /* arg9 */ \
4078+ "ld 3,72(12)\n\t" \
4079+ "std 3,96(1)\n\t" \
4080+ /* args1-8 */ \
4081+ "ld 3, 8(12)\n\t" /* arg1->r3 */ \
4082+ "ld 4, 16(12)\n\t" /* arg2->r4 */ \
4083+ "ld 5, 24(12)\n\t" /* arg3->r5 */ \
4084+ "ld 6, 32(12)\n\t" /* arg4->r6 */ \
4085+ "ld 7, 40(12)\n\t" /* arg5->r7 */ \
4086+ "ld 8, 48(12)\n\t" /* arg6->r8 */ \
4087+ "ld 9, 56(12)\n\t" /* arg7->r9 */ \
4088+ "ld 10, 64(12)\n\t" /* arg8->r10 */ \
4089+ "ld 12, 0(12)\n\t" /* target->r12 */ \
4090+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \
4091+ "mr 12,%1\n\t" \
4092+ "mr %0,3\n\t" \
4093+ "ld 2,-16(12)\n\t" /* restore tocptr */ \
4094+ VALGRIND_RESTORE_STACK \
4095+ : /*out*/ "=r" (_res) \
4096+ : /*in*/ "r" (&_argvec[2]) \
4097+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \
4098+ ); \
4099+ lval = (__typeof__(lval)) _res; \
4100+ } while (0)
4101+
4102+#endif /* PLAT_ppc64le_linux */
4103+
4104+/* ------------------------- arm-linux ------------------------- */
4105+
4106+#if defined(PLAT_arm_linux)
4107+
4108+/* These regs are trashed by the hidden call. */
4109+#define __CALLER_SAVED_REGS "r0", "r1", "r2", "r3","r4","r14"
4110+
4111+/* Macros to save and align the stack before making a function
4112+ call and restore it afterwards as gcc may not keep the stack
4113+ pointer aligned if it doesn't realise calls are being made
4114+ to other functions. */
4115+
4116+/* This is a bit tricky. We store the original stack pointer in r10
4117+ as it is callee-saves. gcc doesn't allow the use of r11 for some
4118+ reason. Also, we can't directly "bic" the stack pointer in thumb
4119+ mode since r13 isn't an allowed register number in that context.
4120+ So use r4 as a temporary, since that is about to get trashed
4121+ anyway, just after each use of this macro. Side effect is we need
4122+ to be very careful about any future changes, since
4123+ VALGRIND_ALIGN_STACK simply assumes r4 is usable. */
4124+#define VALGRIND_ALIGN_STACK \
4125+ "mov r10, sp\n\t" \
4126+ "mov r4, sp\n\t" \
4127+ "bic r4, r4, #7\n\t" \
4128+ "mov sp, r4\n\t"
4129+#define VALGRIND_RESTORE_STACK \
4130+ "mov sp, r10\n\t"
4131+
4132+/* These CALL_FN_ macros assume that on arm-linux, sizeof(unsigned
4133+ long) == 4. */
4134+
4135+#define CALL_FN_W_v(lval, orig) \
4136+ do { \
4137+ volatile OrigFn _orig = (orig); \
4138+ volatile unsigned long _argvec[1]; \
4139+ volatile unsigned long _res; \
4140+ _argvec[0] = (unsigned long)_orig.nraddr; \
4141+ __asm__ volatile( \
4142+ VALGRIND_ALIGN_STACK \
4143+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4144+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4145+ VALGRIND_RESTORE_STACK \
4146+ "mov %0, r0\n" \
4147+ : /*out*/ "=r" (_res) \
4148+ : /*in*/ "0" (&_argvec[0]) \
4149+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4150+ ); \
4151+ lval = (__typeof__(lval)) _res; \
4152+ } while (0)
4153+
4154+#define CALL_FN_W_W(lval, orig, arg1) \
4155+ do { \
4156+ volatile OrigFn _orig = (orig); \
4157+ volatile unsigned long _argvec[2]; \
4158+ volatile unsigned long _res; \
4159+ _argvec[0] = (unsigned long)_orig.nraddr; \
4160+ _argvec[1] = (unsigned long)(arg1); \
4161+ __asm__ volatile( \
4162+ VALGRIND_ALIGN_STACK \
4163+ "ldr r0, [%1, #4] \n\t" \
4164+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4165+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4166+ VALGRIND_RESTORE_STACK \
4167+ "mov %0, r0\n" \
4168+ : /*out*/ "=r" (_res) \
4169+ : /*in*/ "0" (&_argvec[0]) \
4170+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4171+ ); \
4172+ lval = (__typeof__(lval)) _res; \
4173+ } while (0)
4174+
4175+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
4176+ do { \
4177+ volatile OrigFn _orig = (orig); \
4178+ volatile unsigned long _argvec[3]; \
4179+ volatile unsigned long _res; \
4180+ _argvec[0] = (unsigned long)_orig.nraddr; \
4181+ _argvec[1] = (unsigned long)(arg1); \
4182+ _argvec[2] = (unsigned long)(arg2); \
4183+ __asm__ volatile( \
4184+ VALGRIND_ALIGN_STACK \
4185+ "ldr r0, [%1, #4] \n\t" \
4186+ "ldr r1, [%1, #8] \n\t" \
4187+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4188+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4189+ VALGRIND_RESTORE_STACK \
4190+ "mov %0, r0\n" \
4191+ : /*out*/ "=r" (_res) \
4192+ : /*in*/ "0" (&_argvec[0]) \
4193+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4194+ ); \
4195+ lval = (__typeof__(lval)) _res; \
4196+ } while (0)
4197+
4198+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
4199+ do { \
4200+ volatile OrigFn _orig = (orig); \
4201+ volatile unsigned long _argvec[4]; \
4202+ volatile unsigned long _res; \
4203+ _argvec[0] = (unsigned long)_orig.nraddr; \
4204+ _argvec[1] = (unsigned long)(arg1); \
4205+ _argvec[2] = (unsigned long)(arg2); \
4206+ _argvec[3] = (unsigned long)(arg3); \
4207+ __asm__ volatile( \
4208+ VALGRIND_ALIGN_STACK \
4209+ "ldr r0, [%1, #4] \n\t" \
4210+ "ldr r1, [%1, #8] \n\t" \
4211+ "ldr r2, [%1, #12] \n\t" \
4212+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4213+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4214+ VALGRIND_RESTORE_STACK \
4215+ "mov %0, r0\n" \
4216+ : /*out*/ "=r" (_res) \
4217+ : /*in*/ "0" (&_argvec[0]) \
4218+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4219+ ); \
4220+ lval = (__typeof__(lval)) _res; \
4221+ } while (0)
4222+
4223+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
4224+ do { \
4225+ volatile OrigFn _orig = (orig); \
4226+ volatile unsigned long _argvec[5]; \
4227+ volatile unsigned long _res; \
4228+ _argvec[0] = (unsigned long)_orig.nraddr; \
4229+ _argvec[1] = (unsigned long)(arg1); \
4230+ _argvec[2] = (unsigned long)(arg2); \
4231+ _argvec[3] = (unsigned long)(arg3); \
4232+ _argvec[4] = (unsigned long)(arg4); \
4233+ __asm__ volatile( \
4234+ VALGRIND_ALIGN_STACK \
4235+ "ldr r0, [%1, #4] \n\t" \
4236+ "ldr r1, [%1, #8] \n\t" \
4237+ "ldr r2, [%1, #12] \n\t" \
4238+ "ldr r3, [%1, #16] \n\t" \
4239+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4240+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4241+ VALGRIND_RESTORE_STACK \
4242+ "mov %0, r0" \
4243+ : /*out*/ "=r" (_res) \
4244+ : /*in*/ "0" (&_argvec[0]) \
4245+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4246+ ); \
4247+ lval = (__typeof__(lval)) _res; \
4248+ } while (0)
4249+
4250+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
4251+ do { \
4252+ volatile OrigFn _orig = (orig); \
4253+ volatile unsigned long _argvec[6]; \
4254+ volatile unsigned long _res; \
4255+ _argvec[0] = (unsigned long)_orig.nraddr; \
4256+ _argvec[1] = (unsigned long)(arg1); \
4257+ _argvec[2] = (unsigned long)(arg2); \
4258+ _argvec[3] = (unsigned long)(arg3); \
4259+ _argvec[4] = (unsigned long)(arg4); \
4260+ _argvec[5] = (unsigned long)(arg5); \
4261+ __asm__ volatile( \
4262+ VALGRIND_ALIGN_STACK \
4263+ "sub sp, sp, #4 \n\t" \
4264+ "ldr r0, [%1, #20] \n\t" \
4265+ "push {r0} \n\t" \
4266+ "ldr r0, [%1, #4] \n\t" \
4267+ "ldr r1, [%1, #8] \n\t" \
4268+ "ldr r2, [%1, #12] \n\t" \
4269+ "ldr r3, [%1, #16] \n\t" \
4270+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4271+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4272+ VALGRIND_RESTORE_STACK \
4273+ "mov %0, r0" \
4274+ : /*out*/ "=r" (_res) \
4275+ : /*in*/ "0" (&_argvec[0]) \
4276+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4277+ ); \
4278+ lval = (__typeof__(lval)) _res; \
4279+ } while (0)
4280+
4281+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
4282+ do { \
4283+ volatile OrigFn _orig = (orig); \
4284+ volatile unsigned long _argvec[7]; \
4285+ volatile unsigned long _res; \
4286+ _argvec[0] = (unsigned long)_orig.nraddr; \
4287+ _argvec[1] = (unsigned long)(arg1); \
4288+ _argvec[2] = (unsigned long)(arg2); \
4289+ _argvec[3] = (unsigned long)(arg3); \
4290+ _argvec[4] = (unsigned long)(arg4); \
4291+ _argvec[5] = (unsigned long)(arg5); \
4292+ _argvec[6] = (unsigned long)(arg6); \
4293+ __asm__ volatile( \
4294+ VALGRIND_ALIGN_STACK \
4295+ "ldr r0, [%1, #20] \n\t" \
4296+ "ldr r1, [%1, #24] \n\t" \
4297+ "push {r0, r1} \n\t" \
4298+ "ldr r0, [%1, #4] \n\t" \
4299+ "ldr r1, [%1, #8] \n\t" \
4300+ "ldr r2, [%1, #12] \n\t" \
4301+ "ldr r3, [%1, #16] \n\t" \
4302+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4303+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4304+ VALGRIND_RESTORE_STACK \
4305+ "mov %0, r0" \
4306+ : /*out*/ "=r" (_res) \
4307+ : /*in*/ "0" (&_argvec[0]) \
4308+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4309+ ); \
4310+ lval = (__typeof__(lval)) _res; \
4311+ } while (0)
4312+
4313+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4314+ arg7) \
4315+ do { \
4316+ volatile OrigFn _orig = (orig); \
4317+ volatile unsigned long _argvec[8]; \
4318+ volatile unsigned long _res; \
4319+ _argvec[0] = (unsigned long)_orig.nraddr; \
4320+ _argvec[1] = (unsigned long)(arg1); \
4321+ _argvec[2] = (unsigned long)(arg2); \
4322+ _argvec[3] = (unsigned long)(arg3); \
4323+ _argvec[4] = (unsigned long)(arg4); \
4324+ _argvec[5] = (unsigned long)(arg5); \
4325+ _argvec[6] = (unsigned long)(arg6); \
4326+ _argvec[7] = (unsigned long)(arg7); \
4327+ __asm__ volatile( \
4328+ VALGRIND_ALIGN_STACK \
4329+ "sub sp, sp, #4 \n\t" \
4330+ "ldr r0, [%1, #20] \n\t" \
4331+ "ldr r1, [%1, #24] \n\t" \
4332+ "ldr r2, [%1, #28] \n\t" \
4333+ "push {r0, r1, r2} \n\t" \
4334+ "ldr r0, [%1, #4] \n\t" \
4335+ "ldr r1, [%1, #8] \n\t" \
4336+ "ldr r2, [%1, #12] \n\t" \
4337+ "ldr r3, [%1, #16] \n\t" \
4338+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4339+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4340+ VALGRIND_RESTORE_STACK \
4341+ "mov %0, r0" \
4342+ : /*out*/ "=r" (_res) \
4343+ : /*in*/ "0" (&_argvec[0]) \
4344+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4345+ ); \
4346+ lval = (__typeof__(lval)) _res; \
4347+ } while (0)
4348+
4349+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4350+ arg7,arg8) \
4351+ do { \
4352+ volatile OrigFn _orig = (orig); \
4353+ volatile unsigned long _argvec[9]; \
4354+ volatile unsigned long _res; \
4355+ _argvec[0] = (unsigned long)_orig.nraddr; \
4356+ _argvec[1] = (unsigned long)(arg1); \
4357+ _argvec[2] = (unsigned long)(arg2); \
4358+ _argvec[3] = (unsigned long)(arg3); \
4359+ _argvec[4] = (unsigned long)(arg4); \
4360+ _argvec[5] = (unsigned long)(arg5); \
4361+ _argvec[6] = (unsigned long)(arg6); \
4362+ _argvec[7] = (unsigned long)(arg7); \
4363+ _argvec[8] = (unsigned long)(arg8); \
4364+ __asm__ volatile( \
4365+ VALGRIND_ALIGN_STACK \
4366+ "ldr r0, [%1, #20] \n\t" \
4367+ "ldr r1, [%1, #24] \n\t" \
4368+ "ldr r2, [%1, #28] \n\t" \
4369+ "ldr r3, [%1, #32] \n\t" \
4370+ "push {r0, r1, r2, r3} \n\t" \
4371+ "ldr r0, [%1, #4] \n\t" \
4372+ "ldr r1, [%1, #8] \n\t" \
4373+ "ldr r2, [%1, #12] \n\t" \
4374+ "ldr r3, [%1, #16] \n\t" \
4375+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4376+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4377+ VALGRIND_RESTORE_STACK \
4378+ "mov %0, r0" \
4379+ : /*out*/ "=r" (_res) \
4380+ : /*in*/ "0" (&_argvec[0]) \
4381+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4382+ ); \
4383+ lval = (__typeof__(lval)) _res; \
4384+ } while (0)
4385+
4386+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4387+ arg7,arg8,arg9) \
4388+ do { \
4389+ volatile OrigFn _orig = (orig); \
4390+ volatile unsigned long _argvec[10]; \
4391+ volatile unsigned long _res; \
4392+ _argvec[0] = (unsigned long)_orig.nraddr; \
4393+ _argvec[1] = (unsigned long)(arg1); \
4394+ _argvec[2] = (unsigned long)(arg2); \
4395+ _argvec[3] = (unsigned long)(arg3); \
4396+ _argvec[4] = (unsigned long)(arg4); \
4397+ _argvec[5] = (unsigned long)(arg5); \
4398+ _argvec[6] = (unsigned long)(arg6); \
4399+ _argvec[7] = (unsigned long)(arg7); \
4400+ _argvec[8] = (unsigned long)(arg8); \
4401+ _argvec[9] = (unsigned long)(arg9); \
4402+ __asm__ volatile( \
4403+ VALGRIND_ALIGN_STACK \
4404+ "sub sp, sp, #4 \n\t" \
4405+ "ldr r0, [%1, #20] \n\t" \
4406+ "ldr r1, [%1, #24] \n\t" \
4407+ "ldr r2, [%1, #28] \n\t" \
4408+ "ldr r3, [%1, #32] \n\t" \
4409+ "ldr r4, [%1, #36] \n\t" \
4410+ "push {r0, r1, r2, r3, r4} \n\t" \
4411+ "ldr r0, [%1, #4] \n\t" \
4412+ "ldr r1, [%1, #8] \n\t" \
4413+ "ldr r2, [%1, #12] \n\t" \
4414+ "ldr r3, [%1, #16] \n\t" \
4415+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4416+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4417+ VALGRIND_RESTORE_STACK \
4418+ "mov %0, r0" \
4419+ : /*out*/ "=r" (_res) \
4420+ : /*in*/ "0" (&_argvec[0]) \
4421+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4422+ ); \
4423+ lval = (__typeof__(lval)) _res; \
4424+ } while (0)
4425+
4426+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4427+ arg7,arg8,arg9,arg10) \
4428+ do { \
4429+ volatile OrigFn _orig = (orig); \
4430+ volatile unsigned long _argvec[11]; \
4431+ volatile unsigned long _res; \
4432+ _argvec[0] = (unsigned long)_orig.nraddr; \
4433+ _argvec[1] = (unsigned long)(arg1); \
4434+ _argvec[2] = (unsigned long)(arg2); \
4435+ _argvec[3] = (unsigned long)(arg3); \
4436+ _argvec[4] = (unsigned long)(arg4); \
4437+ _argvec[5] = (unsigned long)(arg5); \
4438+ _argvec[6] = (unsigned long)(arg6); \
4439+ _argvec[7] = (unsigned long)(arg7); \
4440+ _argvec[8] = (unsigned long)(arg8); \
4441+ _argvec[9] = (unsigned long)(arg9); \
4442+ _argvec[10] = (unsigned long)(arg10); \
4443+ __asm__ volatile( \
4444+ VALGRIND_ALIGN_STACK \
4445+ "ldr r0, [%1, #40] \n\t" \
4446+ "push {r0} \n\t" \
4447+ "ldr r0, [%1, #20] \n\t" \
4448+ "ldr r1, [%1, #24] \n\t" \
4449+ "ldr r2, [%1, #28] \n\t" \
4450+ "ldr r3, [%1, #32] \n\t" \
4451+ "ldr r4, [%1, #36] \n\t" \
4452+ "push {r0, r1, r2, r3, r4} \n\t" \
4453+ "ldr r0, [%1, #4] \n\t" \
4454+ "ldr r1, [%1, #8] \n\t" \
4455+ "ldr r2, [%1, #12] \n\t" \
4456+ "ldr r3, [%1, #16] \n\t" \
4457+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4458+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4459+ VALGRIND_RESTORE_STACK \
4460+ "mov %0, r0" \
4461+ : /*out*/ "=r" (_res) \
4462+ : /*in*/ "0" (&_argvec[0]) \
4463+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4464+ ); \
4465+ lval = (__typeof__(lval)) _res; \
4466+ } while (0)
4467+
4468+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
4469+ arg6,arg7,arg8,arg9,arg10, \
4470+ arg11) \
4471+ do { \
4472+ volatile OrigFn _orig = (orig); \
4473+ volatile unsigned long _argvec[12]; \
4474+ volatile unsigned long _res; \
4475+ _argvec[0] = (unsigned long)_orig.nraddr; \
4476+ _argvec[1] = (unsigned long)(arg1); \
4477+ _argvec[2] = (unsigned long)(arg2); \
4478+ _argvec[3] = (unsigned long)(arg3); \
4479+ _argvec[4] = (unsigned long)(arg4); \
4480+ _argvec[5] = (unsigned long)(arg5); \
4481+ _argvec[6] = (unsigned long)(arg6); \
4482+ _argvec[7] = (unsigned long)(arg7); \
4483+ _argvec[8] = (unsigned long)(arg8); \
4484+ _argvec[9] = (unsigned long)(arg9); \
4485+ _argvec[10] = (unsigned long)(arg10); \
4486+ _argvec[11] = (unsigned long)(arg11); \
4487+ __asm__ volatile( \
4488+ VALGRIND_ALIGN_STACK \
4489+ "sub sp, sp, #4 \n\t" \
4490+ "ldr r0, [%1, #40] \n\t" \
4491+ "ldr r1, [%1, #44] \n\t" \
4492+ "push {r0, r1} \n\t" \
4493+ "ldr r0, [%1, #20] \n\t" \
4494+ "ldr r1, [%1, #24] \n\t" \
4495+ "ldr r2, [%1, #28] \n\t" \
4496+ "ldr r3, [%1, #32] \n\t" \
4497+ "ldr r4, [%1, #36] \n\t" \
4498+ "push {r0, r1, r2, r3, r4} \n\t" \
4499+ "ldr r0, [%1, #4] \n\t" \
4500+ "ldr r1, [%1, #8] \n\t" \
4501+ "ldr r2, [%1, #12] \n\t" \
4502+ "ldr r3, [%1, #16] \n\t" \
4503+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4504+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4505+ VALGRIND_RESTORE_STACK \
4506+ "mov %0, r0" \
4507+ : /*out*/ "=r" (_res) \
4508+ : /*in*/ "0" (&_argvec[0]) \
4509+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4510+ ); \
4511+ lval = (__typeof__(lval)) _res; \
4512+ } while (0)
4513+
4514+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
4515+ arg6,arg7,arg8,arg9,arg10, \
4516+ arg11,arg12) \
4517+ do { \
4518+ volatile OrigFn _orig = (orig); \
4519+ volatile unsigned long _argvec[13]; \
4520+ volatile unsigned long _res; \
4521+ _argvec[0] = (unsigned long)_orig.nraddr; \
4522+ _argvec[1] = (unsigned long)(arg1); \
4523+ _argvec[2] = (unsigned long)(arg2); \
4524+ _argvec[3] = (unsigned long)(arg3); \
4525+ _argvec[4] = (unsigned long)(arg4); \
4526+ _argvec[5] = (unsigned long)(arg5); \
4527+ _argvec[6] = (unsigned long)(arg6); \
4528+ _argvec[7] = (unsigned long)(arg7); \
4529+ _argvec[8] = (unsigned long)(arg8); \
4530+ _argvec[9] = (unsigned long)(arg9); \
4531+ _argvec[10] = (unsigned long)(arg10); \
4532+ _argvec[11] = (unsigned long)(arg11); \
4533+ _argvec[12] = (unsigned long)(arg12); \
4534+ __asm__ volatile( \
4535+ VALGRIND_ALIGN_STACK \
4536+ "ldr r0, [%1, #40] \n\t" \
4537+ "ldr r1, [%1, #44] \n\t" \
4538+ "ldr r2, [%1, #48] \n\t" \
4539+ "push {r0, r1, r2} \n\t" \
4540+ "ldr r0, [%1, #20] \n\t" \
4541+ "ldr r1, [%1, #24] \n\t" \
4542+ "ldr r2, [%1, #28] \n\t" \
4543+ "ldr r3, [%1, #32] \n\t" \
4544+ "ldr r4, [%1, #36] \n\t" \
4545+ "push {r0, r1, r2, r3, r4} \n\t" \
4546+ "ldr r0, [%1, #4] \n\t" \
4547+ "ldr r1, [%1, #8] \n\t" \
4548+ "ldr r2, [%1, #12] \n\t" \
4549+ "ldr r3, [%1, #16] \n\t" \
4550+ "ldr r4, [%1] \n\t" /* target->r4 */ \
4551+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
4552+ VALGRIND_RESTORE_STACK \
4553+ "mov %0, r0" \
4554+ : /*out*/ "=r" (_res) \
4555+ : /*in*/ "0" (&_argvec[0]) \
4556+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \
4557+ ); \
4558+ lval = (__typeof__(lval)) _res; \
4559+ } while (0)
4560+
4561+#endif /* PLAT_arm_linux */
4562+
4563+/* ------------------------ arm64-linux ------------------------ */
4564+
4565+#if defined(PLAT_arm64_linux)
4566+
4567+/* These regs are trashed by the hidden call. */
4568+#define __CALLER_SAVED_REGS \
4569+ "x0", "x1", "x2", "x3","x4", "x5", "x6", "x7", "x8", "x9", \
4570+ "x10", "x11", "x12", "x13", "x14", "x15", "x16", "x17", \
4571+ "x18", "x19", "x20", "x30", \
4572+ "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9", \
4573+ "v10", "v11", "v12", "v13", "v14", "v15", "v16", "v17", \
4574+ "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", \
4575+ "v26", "v27", "v28", "v29", "v30", "v31"
4576+
4577+/* x21 is callee-saved, so we can use it to save and restore SP around
4578+ the hidden call. */
4579+#define VALGRIND_ALIGN_STACK \
4580+ "mov x21, sp\n\t" \
4581+ "bic sp, x21, #15\n\t"
4582+#define VALGRIND_RESTORE_STACK \
4583+ "mov sp, x21\n\t"
4584+
4585+/* These CALL_FN_ macros assume that on arm64-linux,
4586+ sizeof(unsigned long) == 8. */
4587+
4588+#define CALL_FN_W_v(lval, orig) \
4589+ do { \
4590+ volatile OrigFn _orig = (orig); \
4591+ volatile unsigned long _argvec[1]; \
4592+ volatile unsigned long _res; \
4593+ _argvec[0] = (unsigned long)_orig.nraddr; \
4594+ __asm__ volatile( \
4595+ VALGRIND_ALIGN_STACK \
4596+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4597+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4598+ VALGRIND_RESTORE_STACK \
4599+ "mov %0, x0\n" \
4600+ : /*out*/ "=r" (_res) \
4601+ : /*in*/ "0" (&_argvec[0]) \
4602+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4603+ ); \
4604+ lval = (__typeof__(lval)) _res; \
4605+ } while (0)
4606+
4607+#define CALL_FN_W_W(lval, orig, arg1) \
4608+ do { \
4609+ volatile OrigFn _orig = (orig); \
4610+ volatile unsigned long _argvec[2]; \
4611+ volatile unsigned long _res; \
4612+ _argvec[0] = (unsigned long)_orig.nraddr; \
4613+ _argvec[1] = (unsigned long)(arg1); \
4614+ __asm__ volatile( \
4615+ VALGRIND_ALIGN_STACK \
4616+ "ldr x0, [%1, #8] \n\t" \
4617+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4618+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4619+ VALGRIND_RESTORE_STACK \
4620+ "mov %0, x0\n" \
4621+ : /*out*/ "=r" (_res) \
4622+ : /*in*/ "0" (&_argvec[0]) \
4623+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4624+ ); \
4625+ lval = (__typeof__(lval)) _res; \
4626+ } while (0)
4627+
4628+#define CALL_FN_W_WW(lval, orig, arg1,arg2) \
4629+ do { \
4630+ volatile OrigFn _orig = (orig); \
4631+ volatile unsigned long _argvec[3]; \
4632+ volatile unsigned long _res; \
4633+ _argvec[0] = (unsigned long)_orig.nraddr; \
4634+ _argvec[1] = (unsigned long)(arg1); \
4635+ _argvec[2] = (unsigned long)(arg2); \
4636+ __asm__ volatile( \
4637+ VALGRIND_ALIGN_STACK \
4638+ "ldr x0, [%1, #8] \n\t" \
4639+ "ldr x1, [%1, #16] \n\t" \
4640+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4641+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4642+ VALGRIND_RESTORE_STACK \
4643+ "mov %0, x0\n" \
4644+ : /*out*/ "=r" (_res) \
4645+ : /*in*/ "0" (&_argvec[0]) \
4646+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4647+ ); \
4648+ lval = (__typeof__(lval)) _res; \
4649+ } while (0)
4650+
4651+#define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
4652+ do { \
4653+ volatile OrigFn _orig = (orig); \
4654+ volatile unsigned long _argvec[4]; \
4655+ volatile unsigned long _res; \
4656+ _argvec[0] = (unsigned long)_orig.nraddr; \
4657+ _argvec[1] = (unsigned long)(arg1); \
4658+ _argvec[2] = (unsigned long)(arg2); \
4659+ _argvec[3] = (unsigned long)(arg3); \
4660+ __asm__ volatile( \
4661+ VALGRIND_ALIGN_STACK \
4662+ "ldr x0, [%1, #8] \n\t" \
4663+ "ldr x1, [%1, #16] \n\t" \
4664+ "ldr x2, [%1, #24] \n\t" \
4665+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4666+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4667+ VALGRIND_RESTORE_STACK \
4668+ "mov %0, x0\n" \
4669+ : /*out*/ "=r" (_res) \
4670+ : /*in*/ "0" (&_argvec[0]) \
4671+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4672+ ); \
4673+ lval = (__typeof__(lval)) _res; \
4674+ } while (0)
4675+
4676+#define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
4677+ do { \
4678+ volatile OrigFn _orig = (orig); \
4679+ volatile unsigned long _argvec[5]; \
4680+ volatile unsigned long _res; \
4681+ _argvec[0] = (unsigned long)_orig.nraddr; \
4682+ _argvec[1] = (unsigned long)(arg1); \
4683+ _argvec[2] = (unsigned long)(arg2); \
4684+ _argvec[3] = (unsigned long)(arg3); \
4685+ _argvec[4] = (unsigned long)(arg4); \
4686+ __asm__ volatile( \
4687+ VALGRIND_ALIGN_STACK \
4688+ "ldr x0, [%1, #8] \n\t" \
4689+ "ldr x1, [%1, #16] \n\t" \
4690+ "ldr x2, [%1, #24] \n\t" \
4691+ "ldr x3, [%1, #32] \n\t" \
4692+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4693+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4694+ VALGRIND_RESTORE_STACK \
4695+ "mov %0, x0" \
4696+ : /*out*/ "=r" (_res) \
4697+ : /*in*/ "0" (&_argvec[0]) \
4698+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4699+ ); \
4700+ lval = (__typeof__(lval)) _res; \
4701+ } while (0)
4702+
4703+#define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
4704+ do { \
4705+ volatile OrigFn _orig = (orig); \
4706+ volatile unsigned long _argvec[6]; \
4707+ volatile unsigned long _res; \
4708+ _argvec[0] = (unsigned long)_orig.nraddr; \
4709+ _argvec[1] = (unsigned long)(arg1); \
4710+ _argvec[2] = (unsigned long)(arg2); \
4711+ _argvec[3] = (unsigned long)(arg3); \
4712+ _argvec[4] = (unsigned long)(arg4); \
4713+ _argvec[5] = (unsigned long)(arg5); \
4714+ __asm__ volatile( \
4715+ VALGRIND_ALIGN_STACK \
4716+ "ldr x0, [%1, #8] \n\t" \
4717+ "ldr x1, [%1, #16] \n\t" \
4718+ "ldr x2, [%1, #24] \n\t" \
4719+ "ldr x3, [%1, #32] \n\t" \
4720+ "ldr x4, [%1, #40] \n\t" \
4721+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4722+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4723+ VALGRIND_RESTORE_STACK \
4724+ "mov %0, x0" \
4725+ : /*out*/ "=r" (_res) \
4726+ : /*in*/ "0" (&_argvec[0]) \
4727+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4728+ ); \
4729+ lval = (__typeof__(lval)) _res; \
4730+ } while (0)
4731+
4732+#define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
4733+ do { \
4734+ volatile OrigFn _orig = (orig); \
4735+ volatile unsigned long _argvec[7]; \
4736+ volatile unsigned long _res; \
4737+ _argvec[0] = (unsigned long)_orig.nraddr; \
4738+ _argvec[1] = (unsigned long)(arg1); \
4739+ _argvec[2] = (unsigned long)(arg2); \
4740+ _argvec[3] = (unsigned long)(arg3); \
4741+ _argvec[4] = (unsigned long)(arg4); \
4742+ _argvec[5] = (unsigned long)(arg5); \
4743+ _argvec[6] = (unsigned long)(arg6); \
4744+ __asm__ volatile( \
4745+ VALGRIND_ALIGN_STACK \
4746+ "ldr x0, [%1, #8] \n\t" \
4747+ "ldr x1, [%1, #16] \n\t" \
4748+ "ldr x2, [%1, #24] \n\t" \
4749+ "ldr x3, [%1, #32] \n\t" \
4750+ "ldr x4, [%1, #40] \n\t" \
4751+ "ldr x5, [%1, #48] \n\t" \
4752+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4753+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4754+ VALGRIND_RESTORE_STACK \
4755+ "mov %0, x0" \
4756+ : /*out*/ "=r" (_res) \
4757+ : /*in*/ "0" (&_argvec[0]) \
4758+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4759+ ); \
4760+ lval = (__typeof__(lval)) _res; \
4761+ } while (0)
4762+
4763+#define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4764+ arg7) \
4765+ do { \
4766+ volatile OrigFn _orig = (orig); \
4767+ volatile unsigned long _argvec[8]; \
4768+ volatile unsigned long _res; \
4769+ _argvec[0] = (unsigned long)_orig.nraddr; \
4770+ _argvec[1] = (unsigned long)(arg1); \
4771+ _argvec[2] = (unsigned long)(arg2); \
4772+ _argvec[3] = (unsigned long)(arg3); \
4773+ _argvec[4] = (unsigned long)(arg4); \
4774+ _argvec[5] = (unsigned long)(arg5); \
4775+ _argvec[6] = (unsigned long)(arg6); \
4776+ _argvec[7] = (unsigned long)(arg7); \
4777+ __asm__ volatile( \
4778+ VALGRIND_ALIGN_STACK \
4779+ "ldr x0, [%1, #8] \n\t" \
4780+ "ldr x1, [%1, #16] \n\t" \
4781+ "ldr x2, [%1, #24] \n\t" \
4782+ "ldr x3, [%1, #32] \n\t" \
4783+ "ldr x4, [%1, #40] \n\t" \
4784+ "ldr x5, [%1, #48] \n\t" \
4785+ "ldr x6, [%1, #56] \n\t" \
4786+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4787+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4788+ VALGRIND_RESTORE_STACK \
4789+ "mov %0, x0" \
4790+ : /*out*/ "=r" (_res) \
4791+ : /*in*/ "0" (&_argvec[0]) \
4792+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4793+ ); \
4794+ lval = (__typeof__(lval)) _res; \
4795+ } while (0)
4796+
4797+#define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4798+ arg7,arg8) \
4799+ do { \
4800+ volatile OrigFn _orig = (orig); \
4801+ volatile unsigned long _argvec[9]; \
4802+ volatile unsigned long _res; \
4803+ _argvec[0] = (unsigned long)_orig.nraddr; \
4804+ _argvec[1] = (unsigned long)(arg1); \
4805+ _argvec[2] = (unsigned long)(arg2); \
4806+ _argvec[3] = (unsigned long)(arg3); \
4807+ _argvec[4] = (unsigned long)(arg4); \
4808+ _argvec[5] = (unsigned long)(arg5); \
4809+ _argvec[6] = (unsigned long)(arg6); \
4810+ _argvec[7] = (unsigned long)(arg7); \
4811+ _argvec[8] = (unsigned long)(arg8); \
4812+ __asm__ volatile( \
4813+ VALGRIND_ALIGN_STACK \
4814+ "ldr x0, [%1, #8] \n\t" \
4815+ "ldr x1, [%1, #16] \n\t" \
4816+ "ldr x2, [%1, #24] \n\t" \
4817+ "ldr x3, [%1, #32] \n\t" \
4818+ "ldr x4, [%1, #40] \n\t" \
4819+ "ldr x5, [%1, #48] \n\t" \
4820+ "ldr x6, [%1, #56] \n\t" \
4821+ "ldr x7, [%1, #64] \n\t" \
4822+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4823+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4824+ VALGRIND_RESTORE_STACK \
4825+ "mov %0, x0" \
4826+ : /*out*/ "=r" (_res) \
4827+ : /*in*/ "0" (&_argvec[0]) \
4828+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4829+ ); \
4830+ lval = (__typeof__(lval)) _res; \
4831+ } while (0)
4832+
4833+#define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4834+ arg7,arg8,arg9) \
4835+ do { \
4836+ volatile OrigFn _orig = (orig); \
4837+ volatile unsigned long _argvec[10]; \
4838+ volatile unsigned long _res; \
4839+ _argvec[0] = (unsigned long)_orig.nraddr; \
4840+ _argvec[1] = (unsigned long)(arg1); \
4841+ _argvec[2] = (unsigned long)(arg2); \
4842+ _argvec[3] = (unsigned long)(arg3); \
4843+ _argvec[4] = (unsigned long)(arg4); \
4844+ _argvec[5] = (unsigned long)(arg5); \
4845+ _argvec[6] = (unsigned long)(arg6); \
4846+ _argvec[7] = (unsigned long)(arg7); \
4847+ _argvec[8] = (unsigned long)(arg8); \
4848+ _argvec[9] = (unsigned long)(arg9); \
4849+ __asm__ volatile( \
4850+ VALGRIND_ALIGN_STACK \
4851+ "sub sp, sp, #0x20 \n\t" \
4852+ "ldr x0, [%1, #8] \n\t" \
4853+ "ldr x1, [%1, #16] \n\t" \
4854+ "ldr x2, [%1, #24] \n\t" \
4855+ "ldr x3, [%1, #32] \n\t" \
4856+ "ldr x4, [%1, #40] \n\t" \
4857+ "ldr x5, [%1, #48] \n\t" \
4858+ "ldr x6, [%1, #56] \n\t" \
4859+ "ldr x7, [%1, #64] \n\t" \
4860+ "ldr x8, [%1, #72] \n\t" \
4861+ "str x8, [sp, #0] \n\t" \
4862+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4863+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4864+ VALGRIND_RESTORE_STACK \
4865+ "mov %0, x0" \
4866+ : /*out*/ "=r" (_res) \
4867+ : /*in*/ "0" (&_argvec[0]) \
4868+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4869+ ); \
4870+ lval = (__typeof__(lval)) _res; \
4871+ } while (0)
4872+
4873+#define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4874+ arg7,arg8,arg9,arg10) \
4875+ do { \
4876+ volatile OrigFn _orig = (orig); \
4877+ volatile unsigned long _argvec[11]; \
4878+ volatile unsigned long _res; \
4879+ _argvec[0] = (unsigned long)_orig.nraddr; \
4880+ _argvec[1] = (unsigned long)(arg1); \
4881+ _argvec[2] = (unsigned long)(arg2); \
4882+ _argvec[3] = (unsigned long)(arg3); \
4883+ _argvec[4] = (unsigned long)(arg4); \
4884+ _argvec[5] = (unsigned long)(arg5); \
4885+ _argvec[6] = (unsigned long)(arg6); \
4886+ _argvec[7] = (unsigned long)(arg7); \
4887+ _argvec[8] = (unsigned long)(arg8); \
4888+ _argvec[9] = (unsigned long)(arg9); \
4889+ _argvec[10] = (unsigned long)(arg10); \
4890+ __asm__ volatile( \
4891+ VALGRIND_ALIGN_STACK \
4892+ "sub sp, sp, #0x20 \n\t" \
4893+ "ldr x0, [%1, #8] \n\t" \
4894+ "ldr x1, [%1, #16] \n\t" \
4895+ "ldr x2, [%1, #24] \n\t" \
4896+ "ldr x3, [%1, #32] \n\t" \
4897+ "ldr x4, [%1, #40] \n\t" \
4898+ "ldr x5, [%1, #48] \n\t" \
4899+ "ldr x6, [%1, #56] \n\t" \
4900+ "ldr x7, [%1, #64] \n\t" \
4901+ "ldr x8, [%1, #72] \n\t" \
4902+ "str x8, [sp, #0] \n\t" \
4903+ "ldr x8, [%1, #80] \n\t" \
4904+ "str x8, [sp, #8] \n\t" \
4905+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4906+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4907+ VALGRIND_RESTORE_STACK \
4908+ "mov %0, x0" \
4909+ : /*out*/ "=r" (_res) \
4910+ : /*in*/ "0" (&_argvec[0]) \
4911+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4912+ ); \
4913+ lval = (__typeof__(lval)) _res; \
4914+ } while (0)
4915+
4916+#define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4917+ arg7,arg8,arg9,arg10,arg11) \
4918+ do { \
4919+ volatile OrigFn _orig = (orig); \
4920+ volatile unsigned long _argvec[12]; \
4921+ volatile unsigned long _res; \
4922+ _argvec[0] = (unsigned long)_orig.nraddr; \
4923+ _argvec[1] = (unsigned long)(arg1); \
4924+ _argvec[2] = (unsigned long)(arg2); \
4925+ _argvec[3] = (unsigned long)(arg3); \
4926+ _argvec[4] = (unsigned long)(arg4); \
4927+ _argvec[5] = (unsigned long)(arg5); \
4928+ _argvec[6] = (unsigned long)(arg6); \
4929+ _argvec[7] = (unsigned long)(arg7); \
4930+ _argvec[8] = (unsigned long)(arg8); \
4931+ _argvec[9] = (unsigned long)(arg9); \
4932+ _argvec[10] = (unsigned long)(arg10); \
4933+ _argvec[11] = (unsigned long)(arg11); \
4934+ __asm__ volatile( \
4935+ VALGRIND_ALIGN_STACK \
4936+ "sub sp, sp, #0x30 \n\t" \
4937+ "ldr x0, [%1, #8] \n\t" \
4938+ "ldr x1, [%1, #16] \n\t" \
4939+ "ldr x2, [%1, #24] \n\t" \
4940+ "ldr x3, [%1, #32] \n\t" \
4941+ "ldr x4, [%1, #40] \n\t" \
4942+ "ldr x5, [%1, #48] \n\t" \
4943+ "ldr x6, [%1, #56] \n\t" \
4944+ "ldr x7, [%1, #64] \n\t" \
4945+ "ldr x8, [%1, #72] \n\t" \
4946+ "str x8, [sp, #0] \n\t" \
4947+ "ldr x8, [%1, #80] \n\t" \
4948+ "str x8, [sp, #8] \n\t" \
4949+ "ldr x8, [%1, #88] \n\t" \
4950+ "str x8, [sp, #16] \n\t" \
4951+ "ldr x8, [%1] \n\t" /* target->x8 */ \
4952+ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \
4953+ VALGRIND_RESTORE_STACK \
4954+ "mov %0, x0" \
4955+ : /*out*/ "=r" (_res) \
4956+ : /*in*/ "0" (&_argvec[0]) \
4957+ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \
4958+ ); \
4959+ lval = (__typeof__(lval)) _res; \
4960+ } while (0)
4961+
4962+#define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
4963+ arg7,arg8,arg9,arg10,arg11, \
4964+ arg12) \
4965+ do { \
4966+ volatile OrigFn _orig = (orig); \
4967+ volatile unsigned long _argvec[13]; \
4968+ volatile unsigned long _res; \
4969+ _argvec[0] = (unsigned long)_orig.nraddr; \
4970+ _argvec[1] = (unsigned long)(arg1); \
4971+ _argvec[2] = (unsigned long)(arg2); \
4972+ _argvec[3] = (unsigned long)(arg3); \
4973+ _argvec[4] = (unsigned long)(arg4); \
4974+ _argvec[5] = (unsigned long)(arg5); \
4975+ _argvec[6] = (unsigned long)(arg6); \
4976+ _argvec[7] = (unsigned long)(arg7); \
4977+ _argvec[8] = (unsigned long)(arg8); \
4978+ _argvec[9] = (unsigned long)(arg9); \
4979+ _argvec[10] = (unsigned long)(arg10); \
4980+ _argvec[11] = (unsigned long)(arg11); \
4981+ _argvec[12] = (unsigned long)(arg12); \
4982+ __asm__ volatile( \
4983+ VALGRIND_ALIGN_STACK \
4984+ "sub sp, sp, #0x30 \n\t" \
4985+ "ldr x0, [%1, #8] \n\t" \
4986+ "ldr x1, [%1, #16] \n\t" \
4987+ "ldr x2, [%1, #24] \n\t" \
4988+ "ldr x3, [%1, #32] \n\t" \
4989+ "ldr x4, [%1, #40] \n\t" \
4990+ "ldr x5, [%1, #48] \n\t" \
4991+ "ldr x6, [%1, #56] \n\t" \
4992+ "ldr x7, [%1, #64] \n\t" \
4993+ "ldr x8, [%1, #72] \n\t" \
4994+ "str x8, [sp, #0] \n\t" \
4995+ "ldr x8, [%1, #80] \n\t" \
4996+ "str x8, [sp, #8] \n\t" \
4997+ "ldr x8, [%1, #88] \n\t" \
4998+ "str x8, [sp, #16] \n\t" \
4999+ "ldr x8, [%1, #96] \n\t" \
5000+ "str x8, [sp, #24] \n\t" \
The diff has been truncated for viewing.
