Merge lp:~maddevelopers/mg5amcnlo/refactor_amcatnlo_run_interface into lp:~maddevelopers/mg5amcnlo/2.3.3

Proposed by Rikkert Frederix
Status: Merged
Merged at revision: 298
Proposed branch: lp:~maddevelopers/mg5amcnlo/refactor_amcatnlo_run_interface
Merge into: lp:~maddevelopers/mg5amcnlo/2.3.3
Diff against target: 2609 lines (+899/-1178)
10 files modified
Template/NLO/Source/run_config.inc (+1/-1)
Template/NLO/SubProcesses/ajob_template (+38/-99)
Template/NLO/SubProcesses/combine_results.sh (+0/-55)
Template/NLO/SubProcesses/combine_results_FO.sh (+0/-52)
Template/NLO/SubProcesses/driver_mintFO.f (+85/-114)
Template/NLO/SubProcesses/driver_mintMC.f (+12/-4)
Template/NLO/SubProcesses/sumres.py (+0/-241)
Template/NLO/SubProcesses/symmetry_fks_v3.f (+3/-10)
Template/NLO/SubProcesses/write_ajob.f (+1/-1)
madgraph/interface/amcatnlo_run_interface.py (+759/-601)
To merge this branch: bzr merge lp:~maddevelopers/mg5amcnlo/refactor_amcatnlo_run_interface
Reviewer Review Type Date Requested Status
marco zaro Approve
Review via email: mp+272714@code.launchpad.net

Description of the change

Major rewriting of the "def run()" function in amcatnlo_run_interface and of most of the functions called from within it. This gives much more precise control over which jobs need to be run and which need to be collected to obtain the final result.

One very simple improvement already applied: for fixed-order runs, the required accuracy of each job is increased by at most a given amount per pass. If the results then turn out not to be accurate enough, ONLY the jobs that are not yet precise enough are resubmitted with a higher required accuracy.
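The resubmission strategy described above can be sketched as follows. This is an illustrative Python sketch only, not the actual amcatnlo_run_interface.py code; the names (`jobs_to_resubmit`, `MAX_ACC_INCREASE`, the per-job dictionary keys) and the per-channel accuracy split are hypothetical simplifications.

```python
# Cap on how much a job's required accuracy may tighten in one pass
# (hypothetical constant, for illustration only).
MAX_ACC_INCREASE = 4.0

def jobs_to_resubmit(jobs, target_rel_acc):
    """Return only the channels whose current error exceeds the target,
    each with a new required accuracy tightened by at most MAX_ACC_INCREASE."""
    resubmit = []
    for job in jobs:
        # Simplified per-channel absolute target derived from the relative one.
        required = target_rel_acc * job['result']
        if job['error'] <= required:
            continue  # already precise enough: do not rerun this job
        # Tighten the accuracy, but never by more than MAX_ACC_INCREASE at once.
        new_acc = max(job['error'] / MAX_ACC_INCREASE, required)
        resubmit.append({'name': job['name'], 'req_acc': new_acc})
    return resubmit
```

Capping the per-pass tightening avoids spending a huge number of phase-space points on a single resubmission when the first estimate of a channel's error is far off.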

Removed "combine_results.sh", "combine_results_FO.sh" and "sumres.py": their functionality is now handled directly within the amcatnlo_run_interface.py file.

This is a first step towards "split generation" for fixed-order and MINT steps 0 and 1 in (N)LO+PS runs.
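The core of what the removed shell/Python scripts did, now absorbed into the interface, is combining per-channel integration results: summing the cross sections and adding the statistical errors in quadrature. A minimal sketch, assuming a simplified list-of-dicts input rather than the actual res.dat parsing:

```python
import math

def combine_channels(channels):
    """Combine per-channel results: sum the cross sections and
    add the Monte Carlo errors in quadrature."""
    total = sum(c['result'] for c in channels)
    error = math.sqrt(sum(c['error'] ** 2 for c in channels))
    return total, error
```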

289. By Rikkert Frederix

removed a double 'self.write_input_file()'.

290. By Rikkert Frederix

removed a double 'self.prepare_directories()' for fixed order runs.

291. By Rikkert Frederix

Small fix in the driver_mintFO: the 'RUN_MODE' was not correctly read
from the input_app.txt file

Revision history for this message
marco zaro (marco-zaro) wrote :

Hi Rik,
excellent work, please go ahead with the merge!
Cheers,

Marco

review: Approve

Preview Diff

1=== modified file 'Template/NLO/Source/run_config.inc'
2--- Template/NLO/Source/run_config.inc 2012-08-28 21:06:34 +0000
3+++ Template/NLO/Source/run_config.inc 2015-10-02 07:00:47 +0000
4@@ -34,7 +34,7 @@
5 parameter (PBS_QUE = 'madgraph')
6
7 integer ChanPerJob
8- parameter (ChanPerJob=1) !Number of channels / job for survey
9+ parameter (ChanPerJob=100000000) !Number of channels / job for survey
10
11 c integer max_np
12 c parameter (max_np=1) !Number of channels / job for refine
13
14=== modified file 'Template/NLO/SubProcesses/ajob_template'
15--- Template/NLO/SubProcesses/ajob_template 2015-03-18 14:50:37 +0000
16+++ Template/NLO/SubProcesses/ajob_template 2015-10-02 07:00:47 +0000
17@@ -26,57 +26,38 @@
18 exit
19 fi
20
21+channel=$1
22+run_mode=$2
23+runnumber=$3
24+integration_step=$4
25+
26 TAGTAGTAGTAGTAGTAGTAG for i in 1 ; do
27
28- runnumber=0
29- if [[ $1 == '0' ]]; then
30- j=$2\_G$i
31- if [[ ! -e $j ]]; then
32- mkdir $j
33- fi
34- cd $j
35- if [[ "$4" != "" ]]; then
36- if [[ -e ../$4\_G$i ]]; then
37- if [[ $1 == '0' ]]; then
38- cp -f ../$4\_G$i/mint_grids . >/dev/null 2>&1
39- cp -f ../$4\_G$i/grid.MC_integer . >/dev/null 2>&1
40- elif [[ $1 == '1' ]]; then
41- cp -f ../$4\_G$i/mint_grids . >/dev/null 2>&1
42- cp -f ../$4\_G$i/grid.MC_integer . >/dev/null 2>&1
43- fi
44- else
45- echo "Cannot find directory ../$4\_G$i/" > log.txt
46- exit
47- fi
48- fi
49- elif [[ $1 == '2' ]]; then
50- j=G$2$i
51- if [[ ! -e $j ]]; then
52- mkdir $j
53- fi
54-
55- cd $j
56- if [[ "$4" != "" ]]; then
57- if [[ "$4" == "H" ||"$4" == "S" || "$4" == "V" || "$4" == "B" || "$4" == "F" ]]; then
58- if [[ -e ../G$4$i ]]; then
59- cp -f ../G$4$i/mint_grids ./preset_mint_grids >/dev/null 2>&1
60- cp -f ../G$4$i/grid.MC_integer . >/dev/null 2>&1
61- else
62- echo "Cannot find direcotry ../G$4$i/" > log.txt
63- exit
64- fi
65- else
66- runnumber=$4
67- if [[ ! -e ../${j}_$4 ]]; then
68- mkdir ../${j}_$4
69- fi
70- cd ../${j}_$4
71- ln -sf ../${j}/mint_grids
72- ln -sf ../${j}/mint_grids_NLO
73- ln -sf ../${j}/grid.MC_integer
74- ln -sf ../${j}/res_1
75- fi
76- fi
77+ if [[ $run_mode == 'all' || $run_mode == 'born' ]] ; then
78+ j=$run_mode\_G$i
79+ else
80+ if [[ $runnumber == '0' ]] ; then
81+ j=G$run_mode$i
82+ else
83+ j=G$run_mode$i\_$runnumber
84+ fi
85+ fi
86+ cd $j
87+
88+ if [[ -e res.dat ]] ; then
89+ rm -f res.dat
90+ fi
91+ if [[ -e log.txt ]] ; then
92+ rm -f log.txt
93+ fi
94+ if [[ -e MADatNLO.top ]] ; then
95+ rm -f MADatNLO.top
96+ fi
97+ if [[ -e MADatNLO.HwU ]] ; then
98+ rm -f MADatNLO.HwU
99+ fi
100+ if [[ -e MADatNLO.root ]] ; then
101+ rm -f MADatNLO.root
102 fi
103 if [[ -e randinit ]] ; then
104 rm -f randinit
105@@ -97,67 +78,25 @@
106 link1up FKS_params.dat
107 link1up configs_and_props_info.dat
108 link1up leshouche_info.dat
109-# Not necessary anymore
110-# link1up MadLoop5_resources
111 link1up OLE_order.olc
112 link1up param_card.dat
113 link1up initial_states_map.dat
114
115-# check where is the basic file for the creation of input_app.txt
116-#
117- if [[ $1 == '0' ]]; then
118- if [[ $3 == '-1' && -e ./madinM1 ]] ; then
119- input_template=./madinM1
120- else
121- if [[ -e ../madin.$2 ]] ; then
122- input_template=../madin.$2
123- else
124- input_template=../../madin.$2
125- fi
126- fi
127- elif [[ $1 == '2' ]]; then
128- if [[ $3 == '0' || $3 == '2' ]]; then
129- if [[ -e ../madinMMC_$2.2 ]] ; then
130- input_template=../madinMMC_$2.2
131- else
132- input_template=../../madinMMC_$2.2
133- fi
134- else
135- input_template=./madinM1
136- fi
137- fi
138
139- if [[ $1 == '0' ]]; then
140- head -n 5 $input_template >& input_app.txt
141- echo $i >> input_app.txt
142- tail -n 4 $input_template >> input_app.txt
143- T="$(date +%s)"
144+ T="$(date +%s)"
145+ if [[ $run_mode == 'all' || $run_mode == 'born' ]]; then
146 ../madevent_mintFO > log.txt <input_app.txt 2>&1
147- status=$?
148- T="$(($(date +%s)-T))"
149- echo "Time in seconds: ${T}" >>log.txt
150- elif [[ $1 == '2' ]]; then
151+ else
152 if [[ $runnumber != 0 ]]; then
153- tar --extract --file=../nevents.tar nevts_${j}_$runnumber
154- mv nevts_${j}_$runnumber nevts
155 echo "$runnumber" >& moffset.dat
156 fi
157- if [[ $3 == '0' || $3 == '2' ]]; then
158- head -n 6 $input_template > input_app.txt
159- echo $i >> input_app.txt
160- tail -n 3 $input_template >> input_app.txt
161- elif [[ $3 == '1' ]]; then
162- head -n 6 $input_template > input_app.txt
163- echo $i >> input_app.txt
164- tail -n 3 $input_template >> input_app.txt
165- fi
166- T="$(date +%s)"
167 ../madevent_mintMC > log.txt <input_app.txt 2>&1
168- status=$?
169- T="$(($(date +%s)-T))"
170- echo "Time in seconds: ${T}" >>log.txt
171- cp -f log.txt log_MINT$3.txt >/dev/null 2>&1
172 fi
173+ status=$?
174+ T="$(($(date +%s)-T))"
175+ echo "Time in seconds: ${T}" >>log.txt
176+ cp -f log.txt log_MINT$integration_step.txt >/dev/null 2>&1
177+ cp -f res.dat res_$integration_step.dat >/dev/null 2>&1
178 exit $status
179 done
180
181
182=== removed file 'Template/NLO/SubProcesses/combine_results.sh'
183--- Template/NLO/SubProcesses/combine_results.sh 2013-06-28 13:57:17 +0000
184+++ Template/NLO/SubProcesses/combine_results.sh 1970-01-01 00:00:00 +0000
185@@ -1,55 +0,0 @@
186-#!/bin/bash
187-
188-# find the correct directory
189-if [[ ! -d ./SubProcesses ]]; then
190- cd ../
191-fi
192-if [[ -d ./SubProcesses ]]; then
193- cd SubProcesses
194-fi
195-
196-if [[ -e res.txt ]]; then
197- rm -f res.txt
198-fi
199-if [[ -e dirs.txt ]]; then
200- rm -f dirs.txt
201-fi
202-if [[ -e nevents_unweighted ]]; then
203- rm -f nevents_unweighted
204-fi
205-
206-arg1=$1
207-arg2=$2
208-arg3=$3
209-# shift the list of arguments by 3
210-shift
211-shift
212-shift
213-if [[ "$@" == "" ]]; then
214- echo "Please give the G directories that should be combined,"
215- echo "e.g. 'GF* GV*', as final arguments of this script"
216- exit
217-fi
218-
219-touch res.txt
220-touch dirs.txt
221-NTOT=0
222-for dir in "$@" ; do
223- N=`ls -d P*/$dir | wc -l`
224- NTOT=`expr $NTOT + $N`
225- ls -d P*/$dir >> dirs.txt
226- grep -H 'Final result' P*/$dir/res_$arg1 >> res.txt
227-done
228-echo N of directories: $NTOT
229-if [[ $arg1 == '0' ]] ; then
230- echo 'Determining the number of unweighted events per channel'
231-elif [[ $arg1 == '1' ]] ; then
232- echo 'Updating the number of unweighted events per channel'
233-fi
234-./sumres.py $NTOT $arg2 $arg3
235-
236-echo 'Integrated abs(cross-section)'
237-tail -n2 res.txt | head -n1
238-echo 'Integrated cross-section'
239-tail -n1 res.txt
240-mv res.txt res_$arg1.txt
241
242=== removed file 'Template/NLO/SubProcesses/combine_results_FO.sh'
243--- Template/NLO/SubProcesses/combine_results_FO.sh 2013-06-28 13:57:17 +0000
244+++ Template/NLO/SubProcesses/combine_results_FO.sh 1970-01-01 00:00:00 +0000
245@@ -1,52 +0,0 @@
246-#!/bin/bash
247-
248-# find the correct directory
249-if [[ ! -d ./SubProcesses ]]; then
250- cd ../
251-fi
252-if [[ -d ./SubProcesses ]]; then
253- cd SubProcesses
254-fi
255-
256-if [[ $1 == "0" ]] ; then
257- mint_mode=0
258- shift
259-elif [[ $1 == "1" ]] ; then
260- mint_mode=1
261- shift
262-elif [[ $1 == "2" ]] ; then
263- echo "Cannot combine results for mint_mode 2"
264- exit
265-else
266- mint_mode=0
267-fi
268-
269-if [[ -e res.txt ]]; then
270- rm -f res.txt
271-fi
272-if [[ -e dirs.txt ]]; then
273- rm -f dirs.txt
274-fi
275-
276-req_acc=$1
277-shift
278-
279-touch res.txt
280-touch dirs.txt
281-NTOT=0
282-for dir in "$@" ; do
283- N=`ls -d P*/$dir | wc -l`
284- NTOT=`expr $NTOT + $N`
285- ls -d P*/$dir >> dirs.txt
286- grep -H 'Final result' P*/$dir/res_$mint_mode >> res.txt
287-done
288-
289-sed -i.bak s/"\+\/\-"/" \+\/\-"/ res.txt
290-
291-echo N of directories: $NTOT
292-
293-./sumres.py $NTOT -1 $req_acc
294-
295-rm -r res.txt.bak
296-
297-tail -n1 res.txt
298
299=== modified file 'Template/NLO/SubProcesses/driver_mintFO.f'
300--- Template/NLO/SubProcesses/driver_mintFO.f 2015-03-09 18:27:17 +0000
301+++ Template/NLO/SubProcesses/driver_mintFO.f 2015-10-02 07:00:47 +0000
302@@ -18,7 +18,6 @@
303 C LOCAL
304 C
305 integer i,j,l,l1,l2,ndim
306- integer npoints
307 character*130 buf
308 c
309 c Global
310@@ -219,23 +218,7 @@
311 do j=1,nintervals_virt
312 read (12,*) (ave_virt(j,i),i=1,ndim)
313 enddo
314- if (ncall.gt.0 .and. accuracy.ne.0d0) then
315- read (12,*) ans(1),unc(1),ncall,itmax
316-c Update the number of PS points based on unc(1), ncall and accuracy
317- itmax_fl=itmax*(unc(1)/accuracy)**2
318- if (itmax_fl.le.4d0) then
319- itmax=max(nint(itmax_fl),2)
320- elseif (itmax_fl.gt.4d0 .and. itmax_fl.le.16d0) then
321- ncall=nint(ncall*itmax_fl/4d0)
322- itmax=4
323- else
324- itmax=nint(sqrt(itmax_fl))
325- ncall=nint(ncall*itmax_fl/nint(sqrt(itmax_fl)))
326- endif
327- accuracy=accuracy/ans(1) ! relative accuracy on the ABS X-section
328- else
329- read (12,*) ans(1),unc(1),dummy,dummy
330- endif
331+ read (12,*) ans(1),unc(1),dummy,dummy
332 read (12,*) virtual_fraction,average_virtual
333 close (12)
334 write (*,*) "Update iterations and points to",itmax,ncall
335@@ -253,10 +236,6 @@
336 call mint(sigint,ndim,ncall,itmax,imode,xgrid,ymax,ymax_virt
337 $ ,ans,unc,chi2)
338 call topout
339- open(unit=58,file='res_0',status='unknown')
340- write(58,*)'Final result [ABS]:',ans(1),' +/-',unc(1)
341- write(58,*)'Final result:',ans(2),' +/-',unc(2)
342- close(58)
343 write(*,*)'Final result [ABS]:',ans(1),' +/-',unc(1)
344 write(*,*)'Final result:',ans(2),' +/-',unc(2)
345 write(*,*)'chi**2 per D.o.F.:',chi2(1)
346@@ -337,6 +316,11 @@
347 write(*,*) 'Time spent in Other_tasks : ',tOther
348 write(*,*) 'Time spent in Total : ',tTot
349
350+ open (unit=12, file='res.dat',status='unknown')
351+ write (12,*)ans(1),unc(1),ans(2),unc(2),itmax,ncall,tTot
352+ close(12)
353+
354+
355 if(i_momcmp_count.ne.0)then
356 write(*,*)' '
357 write(*,*)'WARNING: genps_fks code 555555'
358@@ -680,8 +664,6 @@
359 character * 70 idstring
360 logical savegrid
361
362- character * 80 runstr
363- common/runstr/runstr
364 logical usexinteg,mint
365 common/cusexinteg/usexinteg,mint
366 logical unwgt
367@@ -692,84 +674,93 @@
368 double precision volh
369 common/mc_int2/volh,mc_hel,ihel,fillh
370
371-
372+ logical done
373+ character*100 buffer
374 c-----
375 c Begin Code
376 c-----
377 mint=.true.
378 unwgt=.false.
379- write(*,'(a)') 'Enter number of events and iterations: '
380- read(*,*) ncall,itmax
381- write(*,*) 'Number of events and iterations ',ncall,itmax
382- write(*,'(a)') 'Enter desired accuracy: '
383- read(*,*) accuracy
384- write(*,*) 'Desired absolute accuracy: ',accuracy
385-
386- write(*,'(a)') 'Enter 0 for fixed, 2 for adjustable grid: '
387- read(*,*) use_cut
388- if (use_cut .lt. 0 .or. use_cut .gt. 2) then
389- write(*,*) 'Bad choice, using 2',use_cut
390- use_cut = 2
391- endif
392-
393- write(*,10) 'Suppress amplitude (0 no, 1 yes)? '
394- read(*,*) i
395- if (i .eq. 1) then
396- multi_channel = .true.
397- write(*,*) 'Using suppressed amplitude.'
398- else
399- multi_channel = .false.
400- write(*,*) 'Using full amplitude.'
401- endif
402-
403- write(*,10) 'Exact helicity sum (0 yes, n = number/event)? '
404- read(*,*) i
405- if (i .eq. 0) then
406- mc_hel = 0
407- write(*,*) 'Explicitly summing over helicities for virt'
408- else
409- mc_hel= i
410- write(*,*) 'Summing over',i,' helicities/event for virt'
411- endif
412- isum_hel=0
413-
414- write(*,10) 'Enter Configuration Number: '
415- read(*,*) dconfig
416- iconfig = int(dconfig)
417- do i=1,mapconfig(0)
418- if (iconfig.eq.mapconfig(i)) then
419- iconfig=i
420- exit
421+ open (unit=83,file='input_app.txt',status='old')
422+ done=.false.
423+ do while (.not. done)
424+ read(83,'(a)',err=222,end=222) buffer
425+ if (buffer(1:7).eq.'NPOINTS') then
426+ buffer=buffer(10:100)
427+ read(buffer,*) ncall
428+ write (*,*) 'Number of phase-space points per iteration:',ncall
429+ elseif(buffer(1:11).eq.'NITERATIONS') then
430+ read(buffer(14:),*) itmax
431+ write (*,*) 'Maximum number of iterations is:',itmax
432+ elseif(buffer(1:8).eq.'ACCURACY') then
433+ read(buffer(11:),*) accuracy
434+ write (*,*) 'Desired accuracy is:',accuracy
435+ elseif(buffer(1:10).eq.'ADAPT_GRID') then
436+ read(buffer(13:),*) use_cut
437+ write (*,*) 'Using adaptive grids:',use_cut
438+ elseif(buffer(1:12).eq.'MULTICHANNEL') then
439+ read(buffer(15:),*) i
440+ if (i.eq.1) then
441+ multi_channel=.true.
442+ write (*,*) 'Using Multi-channel integration'
443+ else
444+ multi_channel=.false.
445+ write (*,*) 'Not using Multi-channel integration'
446+ endif
447+ elseif(buffer(1:12).eq.'SUM_HELICITY') then
448+ read(buffer(15:),*) i
449+ if (i.eq.0) then
450+ mc_hel=0
451+ write (*,*) 'Explicitly summing over helicities'
452+ else
453+ mc_hel=1
454+ write(*,*) 'Do MC over helicities for the virtuals'
455+ endif
456+ isum_hel=0
457+ elseif(buffer(1:7).eq.'CHANNEL') then
458+ read(buffer(10:),*) dconfig
459+ iconfig = int(dconfig)
460+ do i=1,mapconfig(0)
461+ if (iconfig.eq.mapconfig(i)) then
462+ iconfig=i
463+ exit
464+ endif
465+ enddo
466+ write(*,12) 'Running Configuration Number: ',iconfig
467+ elseif(buffer(1:5).eq.'SPLIT') then
468+ read(buffer(8:),*) i
469+ write (*,*) 'Splitting channel:',i
470+ elseif(buffer(1:8).eq.'RUN_MODE') then
471+ read(buffer(11:),*) abrvinput
472+ if(abrvinput(5:5).eq.'0')then
473+ nbody=.true.
474+ else
475+ nbody=.false.
476+ endif
477+ abrv=abrvinput(1:4)
478+ write (*,*) "doing the ",abrv," of this channel"
479+ if(nbody)then
480+ write (*,*) "integration Born/virtual with Sfunction=1"
481+ else
482+ write (*,*) "Normal integration (Sfunction != 1)"
483+ endif
484+ elseif(buffer(1:7).eq.'RESTART') then
485+ read(buffer(10:),*) irestart
486+ if (irestart.eq.0) then
487+ write (*,*) 'RESTART: Fresh run'
488+ elseif(irestart.eq.-1) then
489+ write (*,*) 'RESTART: Use old grids, but refil plots'
490+ elseif(irestart.eq.1) then
491+ write (*,*) 'RESTART: continue with existing run'
492+ else
493+ write (*,*) 'RESTART:',irestart
494+ endif
495 endif
496+ cycle
497+ 222 done=.true.
498 enddo
499- write(*,12) 'Running Configuration Number: ',iconfig
500-c
501-c Enter parameters that control Vegas grids
502-c
503- write(*,*)'enter id string for this run'
504- read(*,*) idstring
505- runstr=idstring
506- write(*,*)'enter 1 if you want restart files'
507- read (*,*) itmp
508- if(itmp.eq.1) then
509- savegrid = .true.
510- else
511- savegrid = .false.
512- endif
513- write(*,*)'enter 0 to exclude, 1 for new run, 2 to restart'
514- read(5,*)irestart
515+ close(83)
516
517- abrvinput=' '
518- write (*,*) "'all ', 'born', 'real', 'virt', 'novi' or 'grid'?"
519- write (*,*) "Enter 'born0' or 'virt0' to perform"
520- write (*,*) " a pure n-body integration (no S functions)"
521- read(5,*) abrvinput
522- if(abrvinput(5:5).eq.'0')then
523- nbody=.true.
524- else
525- nbody=.false.
526- endif
527- abrv=abrvinput(1:4)
528 if (fks_configs.eq.1) then
529 if (pdg_type_d(1,fks_i_d(1)).eq.-21) then
530 write (*,*) 'Process generated with [LOonly=QCD]. '/
531@@ -782,26 +773,6 @@
532 endif
533 endif
534 endif
535-c Options are way too many: make sure we understand all of them
536- if ( abrv.ne.'all '.and.abrv.ne.'born'.and.abrv.ne.'real'.and.
537- & abrv.ne.'virt'.and.
538- & abrv.ne.'viSC'.and.abrv.ne.'viLC'.and.abrv.ne.'novA'.and.
539- & abrv.ne.'novB'.and.abrv.ne.'viSA'.and.abrv.ne.'viSB') then
540- write(*,*)'Error in input: abrv is:',abrv
541- stop
542- endif
543- if(nbody.and.abrv.ne.'born'.and.abrv(1:2).ne.'vi'
544- & .and. abrv.ne.'grid')then
545- write(*,*)'Error in driver: inconsistent input',abrvinput
546- stop
547- endif
548-
549- write (*,*) "doing the ",abrv," of this channel"
550- if(nbody)then
551- write (*,*) "integration Born/virtual with Sfunction=1"
552- else
553- write (*,*) "Normal integration (Sfunction != 1)"
554- endif
555 c
556 c
557 c Here I want to set up with B.W. we map and which we don't
558
559=== modified file 'Template/NLO/SubProcesses/driver_mintMC.f'
560--- Template/NLO/SubProcesses/driver_mintMC.f 2015-08-13 12:43:02 +0000
561+++ Template/NLO/SubProcesses/driver_mintMC.f 2015-10-02 07:00:47 +0000
562@@ -189,7 +189,7 @@
563 enddo
564 else
565 c to restore grids:
566- open (unit=12, file='preset_mint_grids',status='old')
567+ open (unit=12, file='mint_grids',status='old')
568 do j=0,nintervals
569 read (12,*) (xgrid(j,i),i=1,ndim)
570 enddo
571@@ -285,7 +285,7 @@
572 close(58)
573
574 c to save grids:
575- open (unit=12, file='mint_grids_NLO',status='unknown')
576+ open (unit=12, file='mint_grids',status='unknown')
577 write (12,*) (xgrid(0,i),i=1,ndim)
578 do j=1,nintervals
579 write (12,*) (xgrid(j,i),i=1,ndim)
580@@ -301,7 +301,6 @@
581 write (12,*) virtual_fraction,average_virtual
582 close (12)
583
584-
585 c*************************************************************
586 c event generation
587 c*************************************************************
588@@ -323,7 +322,7 @@
589 ncall=nevts ! Update ncall with the number found in 'nevts'
590
591 c to restore grids:
592- open (unit=12, file='mint_grids_NLO',status='unknown')
593+ open (unit=12, file='mint_grids',status='unknown')
594 read (12,*) (xgrid(0,i),i=1,ndim)
595 do j=1,nintervals
596 read (12,*) (xgrid(j,i),i=1,ndim)
597@@ -468,6 +467,15 @@
598 write(*,*) 'Time spent in Other_tasks : ',tOther
599 write(*,*) 'Time spent in Total : ',tTot
600
601+ open (unit=12, file='res.dat',status='unknown')
602+ if (imode.eq.0) then
603+ write (12,*)ans(1),unc(1),ans(2),unc(2),itmax,ncall,tTot
604+ else
605+ write (12,*)ans(1)+ans(5),sqrt(unc(1)**2+unc(5)**2),ans(2)
606+ $ ,unc(2),itmax,ncall,tTot
607+ endif
608+ close(12)
609+
610 return
611 999 write (*,*) 'nevts file not found'
612 stop
613
614=== removed file 'Template/NLO/SubProcesses/sumres.py'
615--- Template/NLO/SubProcesses/sumres.py 2014-07-23 10:33:38 +0000
616+++ Template/NLO/SubProcesses/sumres.py 1970-01-01 00:00:00 +0000
617@@ -1,241 +0,0 @@
618-#!/usr/bin/env python
619-
620-#script to combine reults and tell the number of events that need
621-# to be generated in each channel.
622-# Replaces the sumres.f and sumres2.f files
623-# MZ, 2011-10-22
624-
625-from __future__ import division
626-import math
627-import sys
628-import random
629-import os
630-
631-nexpected=int(sys.argv[1])
632-nevents=int(sys.argv[2])
633-req_acc=float(sys.argv[3])
634-# if nevents is >=0 the script will also determine the
635-# number of events required for each process
636-
637-
638-def Mirrorprocs(p1, p2):
639- """determine if the folder names p1, p2 (with the _N already taken out)
640- correspond to the same process with
641- mirrror initial state. Returns true/false"""
642- return False
643-
644-file=open("res.txt")
645-content = file.read()
646-file.close()
647-lines = content.split("\n")
648-processes=[]
649-tot=0
650-err=0
651-totABS=0
652-errABS=0
653-
654-# open the file containing the list of directories
655-file=open("dirs.txt")
656-dirs = file.read().split("\n")
657-file.close()
658-dirs.remove('')
659-
660-# The syntax of lines should be first the ABS cross section for the
661-# channel and the line after that the cross section for the same
662-# channel.
663-for line in range(0,len(lines),2):
664- list = lines[line].split()
665- if list:
666- proc={}
667- proc['folder'] = list[0].split('/')[0]
668- proc['subproc'] = proc['folder'][0:proc['folder'].rfind('_')]
669- proc['channel'] = list[0].split('/')[1]
670- dirs.remove(os.path.join(proc['folder'], proc['channel']))
671- proc['resultABS'] = float(list[4])
672- proc['errorABS'] = float(list[6])
673- proc['err_percABS'] = proc['errorABS']/proc['resultABS']*100.
674- processes.append(proc)
675- totABS+= proc['resultABS']
676- errABS+= math.pow(proc['errorABS'],2)
677- list = lines[line+1].split()
678- if list:
679- proc['result'] = float(list[3])
680- proc['error'] = float(list[5])
681- proc['err_perc'] = proc['error']/proc['result']*100.
682- tot+= proc['result']
683- err+= math.pow(proc['error'],2)
684-if dirs:
685- print "%d jobs did not terminate correctly: " % len(dirs)
686- print '\n'.join(dirs)
687- print "The results are probably not correct. Please check the relevant log files corresponding to the above jobs for more information."
688-
689-processes.sort(key = lambda proc: -proc['errorABS'])
690-
691-correct = len(processes) == nexpected
692-print "Found %d correctly terminated jobs " %len(processes)
693-if not len(processes)==nexpected:
694- print len(processes), nexpected
695-
696-subprocs_string=[]
697-for proc in processes:
698- subprocs_string.append(proc['subproc'])
699-subprocs_string=set(subprocs_string)
700-
701-content+='\n\nCross-section per integration channel:\n'
702-for proc in processes:
703- content+='%(folder)20s %(channel)15s %(result)10.8e %(error)6.4e %(err_perc)6.4f%% \n' % proc
704-
705-content+='\n\nABS cross-section per integration channel:\n'
706-for proc in processes:
707- content+='%(folder)20s %(channel)15s %(resultABS)10.8e %(errorABS)6.4e %(err_percABS)6.4f%% \n' % proc
708-
709-content+='\n\nCross-section per subprocess:\n'
710-#for subpr in sorted(set(subprocs)):
711-subprocesses=[]
712-for sub in subprocs_string:
713- subpr={}
714- subpr['subproc']=sub
715- subpr['xsect']=0.
716- subpr['err']=0.
717- for proc in processes:
718- if proc['subproc'] == sub:
719- subpr['xsect'] += proc['result']
720- subpr['err'] += math.pow(proc['error'],2)
721- subpr['err']=math.sqrt(subpr['err'])
722- subprocesses.append(subpr)
723-
724-
725-#find and combine mirror configurations (if in v4)
726-for i1, s1 in enumerate(subprocesses):
727- for i2, s2 in enumerate(subprocesses):
728- if Mirrorprocs(s1['subproc'], s2['subproc']) and i1 >= i2:
729- s1['xsect'] += s2['xsect']
730- s1['err'] = math.sqrt(math.pow(s1['err'],2)+ math.pow(s2['err'],2))
731- s2['toremove'] = True
732-
733-new = []
734-for s in subprocesses:
735- try:
736- a= s['toremove']
737- except KeyError:
738- new.append(s)
739-subprocesses= new
740-
741-
742-subprocesses.sort(key = lambda proc: -proc['xsect'])
743-for subpr in subprocesses:
744- content+= '%(subproc)20s %(xsect)10.8e %(err)6.4e\n' % subpr
745-
746-
747-content+='\nTotal ABS and \nTotal: \n %10.8e +- %6.4e (%6.4e%%)\n %10.8e +- %6.4e (%6.4e%%)\n' %\
748- (totABS, math.sqrt(errABS), math.sqrt(errABS)/totABS *100.,tot, math.sqrt(err), math.sqrt(err)/tot *100.)
749-
750-if not correct:
751- sys.exit('ERROR: not all jobs terminated correctly\n')
752-
753-file=open("res.txt", 'w')
754-
755-file.write(content)
756-file.close()
757-
758-#determine the events for each process:
759-if nevents>=0:
760- if req_acc<0:
761- req_acc2_inv=nevents
762- else:
763- req_acc2_inv=1/(req_acc*req_acc)
764- #get the random number seed from the randinit file
765- file=open("randinit")
766- exec file
767- file.close
768- print "random seed found in 'randinit' is", r
769- random.seed(r)
770- totevts=nevents
771- for proc in processes:
772- proc['lhefile'] = os.path.join(proc['folder'], proc['channel'], 'events.lhe')
773- proc['nevents'] = 0
774- while totevts :
775- target = random.random() * totABS
776- crosssum = 0.
777- i = 0
778- while i<len(processes) and crosssum < target:
779- proc = processes[i]
780- crosssum += proc['resultABS']
781- i += 1
782- totevts -= 1
783- i -= 1
784- processes[i]['nevents'] += 1
785-
786-#check that we now have all the events in the channels
787- totevents = sum(proc['nevents'] for proc in processes)
788- if totevents != nevents:
789- sys.exit('failed to obtain the correct number of events. Required: %d, Obtained: %d' \
790- % (nevents, totevents))
791-
792- content_evts = ''
793- for proc in processes:
794- content_evts+= ' '+proc['lhefile']+' %(nevents)10d %(resultABS)10.8e 1.0 \n' % proc
795- nevts_file = open(os.path.join(proc['folder'], proc['channel'], 'nevts'),'w')
796- nevts_file.write('%10d\n' % proc['nevents'])
797- nevts_file.close()
798- if proc['channel'][1] == 'B':
799- fileinputs = open("madinMMC_B.2")
800- elif proc['channel'][1] == 'F':
801- fileinputs = open("madinMMC_F.2")
802- elif proc['channel'][1] == 'V':
803- fileinputs = open("madinMMC_V.2")
804- else:
805- sys.exit("ERROR, DONT KNOW WHICH INPUTS TO USE")
806- fileinputschannel = open(os.path.join(proc['folder'], proc['channel'], 'madinM1'),'w')
807- i=0
808- for line in fileinputs:
809- i += 1
810- if i == 2:
811- accuracy=min(math.sqrt(totABS/(req_acc2_inv*proc['resultABS'])),0.2)
812- fileinputschannel.write('%10.8e\n' % accuracy)
813- elif i == 8:
814- fileinputschannel.write('1 ! MINT mode\n')
815- else:
816- fileinputschannel.write(line)
817- fileinputschannel.close()
818- fileinputs.close()
819-
820- evts_file = open('nevents_unweighted', 'w')
821- evts_file.write(content_evts)
822- evts_file.close()
823-
824-# if nevents = -1 and req_acc >= 0, we need to determine the required
825-# accuracy in each of the channels: this is for fixed order running!
826-elif req_acc>=0 and nevents==-1:
827- req_accABS=req_acc*abs(tot)/totABS
828- content_evts = ''
829- for proc in processes:
830- if proc['channel'][0:3] == 'all':
831- fileinputs = open("madin.all")
832- elif proc['channel'][0:4] == 'novB':
833- fileinputs = open("madin.novB")
834- elif proc['channel'][0:4] == 'born':
835- fileinputs = open("madin.born")
836- elif proc['channel'][0:4] == 'grid':
837- fileinputs = open("madin.grid")
838- elif proc['channel'][0:4] == 'viSB':
839- fileinputs = open("madin.viSB")
840- elif proc['channel'][0:4] == 'virt':
841- fileinputs = open("madin.virt")
842- elif proc['channel'][0:4] == 'novi':
843- fileinputs = open("madin.novi")
844- else:
845- sys.exit("ERROR, DONT KNOW WHICH INPUTS TO USE")
846- fileinputschannel = open(os.path.join(proc['folder'], proc['channel'], 'madinM1'),'w')
847- i=0
848- for line in fileinputs:
849- i += 1
850- if i == 2:
851- accuracy=req_accABS*math.sqrt(totABS*proc['resultABS'])
852- fileinputschannel.write('%10.8e\n' % accuracy)
853- elif i == 9:
854- fileinputschannel.write('-1 ! restart from existing grids\n')
855- else:
856- fileinputschannel.write(line)
857- fileinputschannel.close()
858- fileinputs.close()
859
860=== modified file 'Template/NLO/SubProcesses/symmetry_fks_v3.f'
861--- Template/NLO/SubProcesses/symmetry_fks_v3.f 2014-06-26 08:45:41 +0000
862+++ Template/NLO/SubProcesses/symmetry_fks_v3.f 2015-10-02 07:00:47 +0000
863@@ -541,6 +541,8 @@
864 lname=4
865 mname='mg'
866 call open_bash_file(26,fname,lname,mname)
867+ call close_bash_file(26)
868+ open(unit=26,file='channels.txt',status='unknown')
869 ic = 0
870 do i=1,mapconfig(0)
871 if (use_config(i) .gt. 0) then
872@@ -567,15 +569,6 @@
873 done = .false.
874 do while (.not. done)
875 call enCode(icode,iarray,ibase,imax)
876- ic=ic+1
877- if (ic .gt. ChanPerJob) then
878- call close_bash_file(26)
879- fname='ajob'
880- lname=4
881- mname='mg'
882- call open_bash_file(26,fname,lname,mname)
883- ic = 1
884- endif
885 c write(*,*) 'mapping',ic,mapconfig(i)
886 c$$$ if (r_from_b(mapconfig(i)) .lt. 10) then
887 c$$$ write(26,'(i1$)') r_from_b(mapconfig(i))
888@@ -611,7 +604,7 @@
889 enddo
890 endif
891 enddo
892- call close_bash_file(26)
893+ close(26)
894 if (mapconfig(0) .gt. 9999) then
895 write(*,*) 'Only writing first 9999 jobs',mapconfig(0)
896 endif
897
898=== modified file 'Template/NLO/SubProcesses/write_ajob.f'
899--- Template/NLO/SubProcesses/write_ajob.f 2012-10-18 06:17:30 +0000
900+++ Template/NLO/SubProcesses/write_ajob.f 2015-10-02 07:00:47 +0000
901@@ -53,7 +53,7 @@
902 if (index(buff,'TAGTAGTAGTAGTAG').ne.0) exit
903 write(lun,15) buff
904 enddo
905- write(lun,'(a$)') 'for i in '
906+ write(lun,'(a$)') 'for i in $channel '
907 return
908 99 write (*,*) 'ajob_template or ajob_template_cluster '/
909 & /'does not have the correct format'
910
911=== modified file 'madgraph/interface/amcatnlo_run_interface.py'
912--- madgraph/interface/amcatnlo_run_interface.py 2015-10-01 16:00:08 +0000
913+++ madgraph/interface/amcatnlo_run_interface.py 2015-10-02 07:00:47 +0000
914@@ -1202,7 +1202,7 @@
915 self.compile(mode, options)
916 evt_file = self.run(mode, options)
917
918- if int(self.run_card['nevents']) == 0 and not mode in ['LO', 'NLO']:
919+ if self.run_card['nevents'] == 0 and not mode in ['LO', 'NLO']:
920 logger.info('No event file generated: grids have been set-up with a '\
921 'relative precision of %s' % self.run_card['req_acc'])
922 return
923@@ -1222,7 +1222,7 @@
924
925
926 self.update_status('', level='all', update_results=True)
927- if int(self.run_card['ickkw']) == 3 and mode in ['noshower', 'aMC@NLO']:
928+ if self.run_card['ickkw'] == 3 and mode in ['noshower', 'aMC@NLO']:
929 logger.warning("""You are running with FxFx merging enabled.
930 To be able to merge samples of various multiplicities without double counting,
931 you have to remove some events after showering 'by hand'.
932@@ -1248,59 +1248,11 @@
933
934 self.update_status('', level='all', update_results=True)
935
936- def print_results_in_shell(self, data):
937- """Have a nice results prints in the shell,
938- data should be of type: gen_crossxhtml.OneTagResults"""
939- if not data:
940- return
941- logger.info(" === Results Summary for run: %s tag: %s ===\n" % (data['run_name'],data['tag']))
942- if self.ninitial == 1:
943- logger.info(" Width : %.4g +- %.4g GeV" % (data['cross'], data['error']))
944- else:
945- logger.info(" Cross-section : %.4g +- %.4g pb" % (data['cross'], data['error']))
946- logger.info(" Nb of events : %s" % data['nb_event'] )
947- #if data['cross_pythia'] and data['nb_event_pythia']:
948- # if self.ninitial == 1:
949- # logger.info(" Matched Width : %.4g +- %.4g GeV" % (data['cross_pythia'], data['error_pythia']))
950- # else:
951- # logger.info(" Matched Cross-section : %.4g +- %.4g pb" % (data['cross_pythia'], data['error_pythia']))
952- # logger.info(" Nb of events after Matching : %s" % data['nb_event_pythia'])
953- # if self.run_card['use_syst'] in self.true:
954- # logger.info(" Be carefull that matched information are here NOT for the central value. Refer to SysCalc output for it")
955- logger.info(" " )
956-
957- def print_results_in_file(self, data, path, mode='w'):
958- """Have a nice results prints in the shell,
959- data should be of type: gen_crossxhtml.OneTagResults"""
960- if not data:
961- return
962-
963- fsock = open(path, mode)
964-
965- fsock.write(" === Results Summary for run: %s tag: %s process: %s ===\n" % \
966- (data['run_name'],data['tag'], os.path.basename(self.me_dir)))
967-
968- if self.ninitial == 1:
969- fsock.write(" Width : %.4g +- %.4g GeV\n" % (data['cross'], data['error']))
970- else:
971- fsock.write(" Cross-section : %.4g +- %.4g pb\n" % (data['cross'], data['error']))
972- fsock.write(" Nb of events : %s\n" % data['nb_event'] )
973- #if data['cross_pythia'] and data['nb_event_pythia']:
974- # if self.ninitial == 1:
975- # fsock.write(" Matched Width : %.4g +- %.4g GeV\n" % (data['cross_pythia'], data['error_pythia']))
976- # else:
977- # fsock.write(" Matched Cross-section : %.4g +- %.4g pb\n" % (data['cross_pythia'], data['error_pythia']))
978- # fsock.write(" Nb of events after Matching : %s\n" % data['nb_event_pythia'])
979- fsock.write(" \n" )
980-
981-
982-
983-
984
985 def update_random_seed(self):
986 """Update random number seed with the value from the run_card.
987 If this is 0, update the number according to a fresh one"""
988- iseed = int(self.run_card['iseed'])
989+ iseed = self.run_card['iseed']
990 if iseed == 0:
991 randinit = open(pjoin(self.me_dir, 'SubProcesses', 'randinit'))
992 iseed = int(randinit.read()[2:]) + 1
993@@ -1317,218 +1269,94 @@
994 if not 'only_generation' in options.keys():
995 options['only_generation'] = False
996
997+ # for second step in applgrid mode, do only the event generation step
998 if mode in ['LO', 'NLO'] and self.run_card['iappl'] == 2 and not options['only_generation']:
999 options['only_generation'] = True
1000 self.get_characteristics(pjoin(self.me_dir, 'SubProcesses', 'proc_characteristics'))
1001-
1002- if self.cluster_mode == 1:
1003- cluster_name = self.options['cluster_type']
1004- self.cluster = cluster.from_name[cluster_name](**self.options)
1005- if self.cluster_mode == 2:
1006- try:
1007- import multiprocessing
1008- if not self.nb_core:
1009- try:
1010- self.nb_core = int(self.options['nb_core'])
1011- except TypeError:
1012- self.nb_core = multiprocessing.cpu_count()
1013- logger.info('Using %d cores' % self.nb_core)
1014- except ImportError:
1015- self.nb_core = 1
1016- logger.warning('Impossible to detect the number of cores => Using One.\n'+
1017- 'Use set nb_core X in order to set this number and be able to'+
1018- 'run in multicore.')
1019-
1020- self.cluster = cluster.MultiCore(**self.options)
1021+ self.setup_cluster_or_multicore()
1022 self.update_random_seed()
1023 #find and keep track of all the jobs
1024 folder_names = {'LO': ['born_G*'], 'NLO': ['all_G*'],
1025 'aMC@LO': ['GB*'], 'aMC@NLO': ['GF*']}
1026 folder_names['noshower'] = folder_names['aMC@NLO']
1027 folder_names['noshowerLO'] = folder_names['aMC@LO']
1028- job_dict = {}
1029 p_dirs = [d for d in \
1030 open(pjoin(self.me_dir, 'SubProcesses', 'subproc.mg')).read().split('\n') if d]
1031- #find jobs and clean previous results
1032- if not options['only_generation'] and not options['reweightonly']:
1033- self.update_status('Cleaning previous results', level=None)
1034- for dir in p_dirs:
1035- job_dict[dir] = [file for file in \
1036- os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)) \
1037- if file.startswith('ajob')]
1038- #find old folders to be removed
1039- for obj in folder_names[mode]:
1040- to_rm = [file for file in \
1041- os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)) \
1042- if file.startswith(obj[:-1]) and \
1043- (os.path.isdir(pjoin(self.me_dir, 'SubProcesses', dir, file)) or \
1044- os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir, file)))]
1045- #always clean dirs for the splitted event generation
1046- # do not include the born_G/ grid_G which should be kept when
1047- # doing a f.o. run keeping old grids
1048- to_always_rm = [file for file in \
1049- os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)) \
1050- if file.startswith(obj[:-1]) and
1051- '_' in file and not '_G' in file and \
1052- (os.path.isdir(pjoin(self.me_dir, 'SubProcesses', dir, file)) or \
1053- os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir, file)))]
1054-
1055- if not options['only_generation'] and not options['reweightonly']:
1056- to_always_rm.extend(to_rm)
1057- if os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir,'MadLoop5_resources.tar.gz')):
1058- to_always_rm.append(pjoin(self.me_dir, 'SubProcesses', dir,'MadLoop5_resources.tar.gz'))
1059- files.rm([pjoin(self.me_dir, 'SubProcesses', dir, d) for d in to_always_rm])
1060-
1061- mcatnlo_status = ['Setting up grid', 'Computing upper envelope', 'Generating events']
1062-
1063- if self.run_card['iappl'] == 2:
1064- self.applgrid_distribute(options,mode,p_dirs)
1065+ #Clean previous results
1066+ self.clean_previous_results(options,p_dirs,folder_names[mode])
1067+
1068+ mcatnlo_status = ['Setting up grids', 'Computing upper envelope', 'Generating events']
1069+
1070
1071 if options['reweightonly']:
1072 event_norm=self.run_card['event_norm']
1073- nevents=int(self.run_card['nevents'])
1074+ nevents=self.run_card['nevents']
1075 return self.reweight_and_collect_events(options, mode, nevents, event_norm)
1076
1077 devnull = os.open(os.devnull, os.O_RDWR)
1078+
1079 if mode in ['LO', 'NLO']:
1080 # this is for fixed order runs
1081 mode_dict = {'NLO': 'all', 'LO': 'born'}
1082 logger.info('Doing fixed order %s' % mode)
1083 req_acc = self.run_card['req_acc_FO']
1084- if not options['only_generation'] and req_acc != -1:
1085- self.write_madin_file(pjoin(self.me_dir, 'SubProcesses'), mode_dict[mode], 0, '-1', '6','0.10')
1086- self.update_status('Setting up grids', level=None)
1087- self.run_all(job_dict, [['0', mode_dict[mode], '0']], 'Setting up grids')
1088- elif not options['only_generation']:
1089- npoints = self.run_card['npoints_FO_grid']
1090- niters = self.run_card['niters_FO_grid']
1091- self.write_madin_file(pjoin(self.me_dir, 'SubProcesses'), mode_dict[mode], 0, npoints, niters)
1092- self.update_status('Setting up grids', level=None)
1093- self.run_all(job_dict, [['0', mode_dict[mode], '0']], 'Setting up grids')
1094-
1095- if req_acc != -1 and req_acc <= 0.003 and not options['only_generation']:
1096- # required accuracy is rather smal. It is more
1097- # efficient to do an extra step in between with a
1098- # required accuracy of 10*req_acc, and only after that
1099- # to go to final req_acc. This is particularly true
1100- # for the plots, that otherwise might see larger
1101- # fluctuations from the first (couple of) iterations.
1102- req_accs=[min(req_acc*10,0.01),req_acc]
1103- else:
1104- req_accs=[req_acc]
1105-
1106- for req_acc in req_accs:
1107- npoints = self.run_card['npoints_FO']
1108- niters = self.run_card['niters_FO']
1109- self.write_madin_file(pjoin(self.me_dir, 'SubProcesses'), mode_dict[mode], -1, npoints, niters)
1110- # collect the results and logs
1111- self.collect_log_files(folder_names[mode], 0)
1112- p = misc.Popen(['./combine_results_FO.sh', str(req_acc), '%s_G*' % mode_dict[mode]], \
1113- stdout=subprocess.PIPE, \
1114- cwd=pjoin(self.me_dir, 'SubProcesses'))
1115- output = p.communicate()
1116-
1117- self.cross_sect_dict = self.read_results(output, mode)
1118- self.print_summary(options, 0, mode)
1119- cross, error = sum_html.make_all_html_results(self, ['%s*' % mode_dict[mode]])
1120- self.results.add_detail('cross', cross)
1121- self.results.add_detail('error', error)
1122-
1123- self.update_status('Computing cross-section', level=None)
1124- self.run_all(job_dict, [['0', mode_dict[mode], '0', mode_dict[mode]]], 'Computing cross-section')
1125-
1126- # collect the results and logs
1127- self.collect_log_files(folder_names[mode], 1)
1128- p = misc.Popen(['./combine_results_FO.sh', '-1'] + folder_names[mode], \
1129- stdout=subprocess.PIPE,
1130- cwd=pjoin(self.me_dir, 'SubProcesses'))
1131- output = p.communicate()
1132- self.cross_sect_dict = self.read_results(output, mode)
1133-
1134- # collect the scale and PDF uncertainties
1135- scale_pdf_info={}
1136- if self.run_card['reweight_scale'] or self.run_card['reweight_PDF']:
1137- data_files=[]
1138- for dir in p_dirs:
1139- for obj in folder_names[mode]:
1140- for file in os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)):
1141- if file.startswith(obj[:-1]) and \
1142- (os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir, file,'scale_pdf_dependence.dat'))):
1143- data_files.append(pjoin(dir,file,'scale_pdf_dependence.dat'))
1144- scale_pdf_info = self.pdf_scale_from_reweighting(data_files)
1145- # print the results:
1146- self.print_summary(options, 1, mode, scale_pdf_info)
1147-
1148- files.cp(pjoin(self.me_dir, 'SubProcesses', 'res.txt'),
1149- pjoin(self.me_dir, 'Events', self.run_name))
1150-
1151- if self.analyse_card['fo_analysis_format'].lower() == 'topdrawer':
1152- misc.call(['./combine_plots_FO.sh'] + folder_names[mode], \
1153- stdout=devnull,
1154- cwd=pjoin(self.me_dir, 'SubProcesses'))
1155- files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.top'),
1156- pjoin(self.me_dir, 'Events', self.run_name))
1157- logger.info('The results of this run and the TopDrawer file with the plots' + \
1158- ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1159- elif self.analyse_card['fo_analysis_format'].lower() == 'hwu':
1160- self.combine_plots_HwU(folder_names[mode])
1161- files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.HwU'),
1162- pjoin(self.me_dir, 'Events', self.run_name))
1163- files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.gnuplot'),
1164- pjoin(self.me_dir, 'Events', self.run_name))
1165- try:
1166- misc.call(['gnuplot','MADatNLO.gnuplot'],\
1167- stdout=os.open(os.devnull, os.O_RDWR),\
1168- stderr=os.open(os.devnull, os.O_RDWR),\
1169- cwd=pjoin(self.me_dir, 'Events', self.run_name))
1170- except Exception:
1171- pass
1172-
1173-
1174- logger.info('The results of this run and the HwU and GnuPlot files with the plots' + \
1175- ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1176- elif self.analyse_card['fo_analysis_format'].lower() == 'root':
1177- misc.call(['./combine_root.sh'] + folder_names[mode], \
1178- stdout=devnull,
1179- cwd=pjoin(self.me_dir, 'SubProcesses'))
1180- files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.root'),
1181- pjoin(self.me_dir, 'Events', self.run_name))
1182- logger.info('The results of this run and the ROOT file with the plots' + \
1183- ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1184- else:
1185- logger.info('The results of this run' + \
1186- ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1187-
1188- cross, error = sum_html.make_all_html_results(self, folder_names[mode])
1189- self.results.add_detail('cross', cross)
1190- self.results.add_detail('error', error)
1191- if self.run_card['iappl'] != 0:
1192- self.applgrid_combine(cross,error)
1193+
1194+ # Re-distribute the grids for the 2nd step of the applgrid
1195+ # running
1196+ if self.run_card['iappl'] == 2:
1197+ self.applgrid_distribute(options,mode_dict[mode],p_dirs)
1198+
1199+ # create a list of dictionaries "jobs_to_run" with all the
1200+ # jobs that need to be run
1201+ integration_step=-1
1202+ jobs_to_run,integration_step = self.create_jobs_to_run(options,p_dirs, \
1203+ req_acc,mode_dict[mode],integration_step,mode,fixed_order=True)
1204+ jobs_to_collect=copy.copy(jobs_to_run)
1205+ self.prepare_directories(jobs_to_run,mode)
1206+
1207+ # loop over the integration steps. After every step, check
1208+ # if we have the required accuracy. If this is the case,
1209+ # stop running, else do another step.
1210+ while True:
1211+ integration_step=integration_step+1
1212+ self.run_all_jobs(jobs_to_run,integration_step)
1213+ self.collect_log_files(jobs_to_run,integration_step)
1214+ jobs_to_run,jobs_to_collect=self.collect_the_results(options,req_acc,jobs_to_run, \
1215+ jobs_to_collect,integration_step,mode,mode_dict[mode])
1216+ if not jobs_to_run:
1217+ # there are no more jobs to run (jobs_to_run is empty)
1218+ break
1219+ # We are done.
1220+ self.finalise_run_FO(folder_names[mode],jobs_to_collect)
1221 self.update_status('Run complete', level='parton', update_results=True)
1222-
1223 return
1224
1225 elif mode in ['aMC@NLO','aMC@LO','noshower','noshowerLO']:
1226+ mode_dict = {'aMC@NLO': 'all', 'aMC@LO': 'born',\
1227+ 'noshower': 'all', 'noshowerLO': 'born'}
1228 shower = self.run_card['parton_shower'].upper()
1229- nevents = int(self.run_card['nevents'])
1230+ nevents = self.run_card['nevents']
1231 req_acc = self.run_card['req_acc']
1232- if nevents == 0 and float(req_acc) < 0 :
1233+ if nevents == 0 and req_acc < 0 :
1234 raise aMCatNLOError('Cannot determine the required accuracy from the number '\
1235 'of events, because 0 events requested. Please set '\
1236- 'the "req_acc" parameter in the run_card to a value between 0 and 1')
1237- elif float(req_acc) >1 or float(req_acc) == 0 :
1238+ 'the "req_acc" parameter in the run_card to a value '\
1239+ 'between 0 and 1')
1240+ elif req_acc >1 or req_acc == 0 :
1241 raise aMCatNLOError('Required accuracy ("req_acc" in the run_card) should '\
1242 'be between larger than 0 and smaller than 1, '\
1243- 'or set to -1 for automatic determination. Current value is %s' % req_acc)
1244+ 'or set to -1 for automatic determination. Current '\
1245+ 'value is %f' % req_acc)
1246 # For more than 1M events, set req_acc to 0.001 (except when it was explicitly set in the run_card)
1247- elif float(req_acc) < 0 and nevents > 1000000 :
1248- req_acc='0.001'
1249+ elif req_acc < 0 and nevents > 1000000 :
1250+ req_acc=0.001
1251
1252 shower_list = ['HERWIG6', 'HERWIGPP', 'PYTHIA6Q', 'PYTHIA6PT', 'PYTHIA8']
1253
1254 if not shower in shower_list:
1255- raise aMCatNLOError('%s is not a valid parton shower. Please use one of the following: %s' \
1256- % (shower, ', '.join(shower_list)))
1257+ raise aMCatNLOError('%s is not a valid parton shower. '\
1258+ 'Please use one of the following: %s' \
1259+ % (shower, ', '.join(shower_list)))
1260
1261 # check that PYTHIA6PT is not used for processes with FSR
1262 if shower == 'PYTHIA6PT' and self.proc_characteristics['has_fsr']:
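The fixed-order branch of the refactored run() above reduces to a refine-until-accurate loop: run all jobs, collect results, keep only the jobs that are still below the requested accuracy, and repeat. A minimal sketch of that control flow follows; the method names mirror those in the diff, but the `runner` object itself is a hypothetical stand-in for the interface class, not the actual API:

```python
# Sketch of the fixed-order refinement loop introduced by this branch.
# `runner` is a stand-in object; its method names follow the diff
# (run_all_jobs, collect_log_files, collect_the_results), but this is
# an illustration of the control flow, not the real implementation.

def fixed_order_loop(runner, options, req_acc, jobs_to_run,
                     jobs_to_collect, mode, run_mode):
    integration_step = -1  # as in the diff: the first pass is step 0
    while True:
        integration_step += 1
        runner.run_all_jobs(jobs_to_run, integration_step)
        runner.collect_log_files(jobs_to_run, integration_step)
        # collect_the_results() returns only the jobs still short of
        # the requested accuracy; the loop ends when none remain
        jobs_to_run, jobs_to_collect = runner.collect_the_results(
            options, req_acc, jobs_to_run, jobs_to_collect,
            integration_step, mode, run_mode)
        if not jobs_to_run:
            break
    return jobs_to_collect
```

This is what makes the "ONLY resubmit the jobs that are not yet precise enough" improvement possible: accurate jobs simply drop out of `jobs_to_run` while staying in `jobs_to_collect`.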
1263@@ -1541,98 +1369,538 @@
1264 elif options['only_generation']:
1265 logger.info('Generating events starting from existing results')
1266
1267-
1268- for i, status in enumerate(mcatnlo_status):
1269- #check if need to split jobs
1270- # at least one channel must have enough events
1271- try:
1272- nevents_unweighted = open(pjoin(self.me_dir,
1273- 'SubProcesses',
1274- 'nevents_unweighted')).read().split('\n')
1275- except IOError:
1276- nevents_unweighted = []
1277-
1278- split = i == 2 and \
1279- int(self.run_card['nevt_job']) > 0
1280-
1281- if i == 2 or not options['only_generation']:
1282- # if the number of events requested is zero,
1283- # skip mint step 2
1284- if i==2 and nevents==0:
1285- self.print_summary(options, 2,mode)
1286- return
1287-
1288- if split:
1289- # split the event generation
1290- misc.call([pjoin(self.me_dir, 'bin', 'internal', 'split_jobs.py')] + \
1291- [str(self.run_card['nevt_job'])],
1292- stdout = devnull,
1293- cwd = pjoin(self.me_dir, 'SubProcesses'))
1294- assert os.path.exists(pjoin(self.me_dir, 'SubProcesses',
1295- 'nevents_unweighted_splitted'))
1296-
1297- self.update_status(status, level='parton')
1298- if mode in ['aMC@NLO', 'noshower']:
1299- self.write_madinMMC_file(pjoin(self.me_dir, 'SubProcesses'), 'all', i)
1300- self.run_all(job_dict, [['2', 'F', '%d' % i]], status, split_jobs = split)
1301-
1302- elif mode in ['aMC@LO', 'noshowerLO']:
1303- self.write_madinMMC_file(
1304- pjoin(self.me_dir, 'SubProcesses'), 'born', i)
1305- self.run_all(job_dict,
1306- [['2', 'B', '%d' % i]],
1307- '%s at LO' % status, split_jobs = split)
1308-
1309- if (i < 2 and not options['only_generation']) or i == 1 :
1310- # collect the results and logs
1311- self.collect_log_files(folder_names[mode], i)
1312- p = misc.Popen(['./combine_results.sh'] + \
1313- ['%d' % i,'%d' % nevents, '%s' % req_acc ] + \
1314- folder_names[mode],
1315- stdout=subprocess.PIPE,
1316- cwd = pjoin(self.me_dir, 'SubProcesses'))
1317- output = p.communicate()
1318- files.cp(pjoin(self.me_dir, 'SubProcesses', 'res_%d.txt' % i), \
1319- pjoin(self.me_dir, 'Events', self.run_name))
1320-
1321- self.cross_sect_dict = self.read_results(output, mode)
1322- self.print_summary(options, i, mode)
1323-
1324- cross, error = sum_html.make_all_html_results(self, folder_names[mode])
1325- self.results.add_detail('cross', cross)
1326- self.results.add_detail('error', error)
1327-
1328- #check that split jobs are all correctly terminated
1329- if split:
1330- self.check_event_files()
1331-
1332- if self.cluster_mode == 1:
1333- #if cluster run, wait 15 sec so that event files are transferred back
1334- self.update_status(
1335+ jobs_to_run,integration_step = self.create_jobs_to_run(options,p_dirs, \
1336+ req_acc,mode_dict[mode],1,mode,fixed_order=False)
1337+ jobs_to_collect=copy.copy(jobs_to_run)
1338+ self.prepare_directories(jobs_to_run,mode,fixed_order=False)
1339+
1340+ # Make sure to update all the jobs to be ready for the event generation step
1341+ if options['only_generation']:
1342+ jobs_to_run,jobs_to_collect=self.collect_the_results(options,req_acc,jobs_to_run, \
1343+ jobs_to_collect,1,mode,mode_dict[mode],fixed_order=False)
1344+
1345+ # Main loop over the three MINT generation steps:
1346+ for mint_step, status in enumerate(mcatnlo_status):
1347+ if options['only_generation'] and mint_step < 2:
1348+ continue
1349+ self.update_status(status, level='parton')
1350+ self.run_all_jobs(jobs_to_run,mint_step,fixed_order=False)
1351+ self.collect_log_files(jobs_to_run,mint_step)
1352+ jobs_to_run,jobs_to_collect=self.collect_the_results(options,req_acc,jobs_to_run, \
1353+ jobs_to_collect,mint_step,mode,mode_dict[mode],fixed_order=False)
1354+ # Sanity check on the event files. If an error is found, the jobs are resubmitted
1355+ self.check_event_files(jobs_to_collect)
1356+
1357+ if self.cluster_mode == 1:
1358+ #if cluster run, wait 10 sec so that event files are transferred back
1359+ self.update_status(
1360 'Waiting while files are transferred back from the cluster nodes',
1361 level='parton')
1362- time.sleep(10)
1363- if split:
1364- files.cp(pjoin(self.me_dir, 'SubProcesses', 'nevents_unweighted_splitted'), \
1365- pjoin(self.me_dir, 'SubProcesses', 'nevents_unweighted'))
1366-
1367-
1368- event_norm=self.run_card['event_norm']
1369- self.collect_log_files(folder_names[mode], 2)
1370- return self.reweight_and_collect_events(options, mode, nevents, event_norm)
1371-
1372- def combine_plots_HwU(self,folder_names):
1373+ time.sleep(10)
1374+
1375+ event_norm=self.run_card['event_norm']
1376+ return self.reweight_and_collect_events(options, mode, nevents, event_norm)
1377+
1378+ def create_jobs_to_run(self,options,p_dirs,req_acc,run_mode,\
1379+ integration_step,mode,fixed_order=True):
1380+ """Creates a list of dictionaries with all the jobs to be run"""
1381+ jobs_to_run=[]
1382+ if not options['only_generation']:
1383+ # Fresh, new run. Check all the P*/channels.txt files
1384+ # (created by the 'gensym' executable) to set-up all the
1385+ # jobs using the default inputs.
1386+ npoints = self.run_card['npoints_FO_grid']
1387+ niters = self.run_card['niters_FO_grid']
1388+ for p_dir in p_dirs:
1389+ with open(pjoin(self.me_dir,'SubProcesses',p_dir,'channels.txt')) as chan_file:
1390+ channels=chan_file.readline().split()
1391+ for channel in channels:
1392+ job={}
1393+ job['p_dir']=p_dir
1394+ job['channel']=channel
1395+ job['split']=0
1396+ if fixed_order and req_acc == -1:
1397+ job['accuracy']=0
1398+ job['niters']=niters
1399+ job['npoints']=npoints
1400+ elif fixed_order and req_acc > 0:
1401+ job['accuracy']=0.10
1402+ job['niters']=6
1403+ job['npoints']=-1
1404+ elif not fixed_order:
1405+ job['accuracy']=0.03
1406+ job['niters']=12
1407+ job['npoints']=-1
1408+ else:
1409+ raise aMCatNLOError('No consistent "req_acc_FO" set. Use a value '+
1410+ 'between 0 and 1 or set it equal to -1.')
1411+ job['mint_mode']=0
1412+ job['run_mode']=run_mode
1413+ job['wgt_frac']=1.0
1414+ jobs_to_run.append(job)
1415+ else:
1416+ # if options['only_generation'] is true, we need to loop
1417+ # over all the existing G* directories and create the jobs
1418+ # from there.
1419+ name_suffix={'born' :'B', 'all':'F'}
1420+ for p_dir in p_dirs:
1421+ for chan_dir in os.listdir(pjoin(self.me_dir,'SubProcesses',p_dir)):
1422+ if ((chan_dir.startswith(run_mode+'_G') and fixed_order) or\
1423+ (chan_dir.startswith('G'+name_suffix[run_mode]) and (not fixed_order))) and \
1424+ (os.path.isdir(pjoin(self.me_dir, 'SubProcesses', p_dir, chan_dir)) or \
1425+ os.path.exists(pjoin(self.me_dir, 'SubProcesses', p_dir, chan_dir))):
1426+ job={}
1427+ job['p_dir']=p_dir
1428+ if fixed_order:
1429+ channel=chan_dir.split('_')[1]
1430+ job['channel']=channel[1:] # remove the 'G'
1431+ if len(chan_dir.split('_')) == 3:
1432+ split=int(chan_dir.split('_')[2])
1433+ else:
1434+ split=0
1435+ else:
1436+ if len(chan_dir.split('_')) == 2:
1437+ split=int(chan_dir.split('_')[1])
1438+ channel=chan_dir.split('_')[0]
1439+ job['channel']=channel[2:] # remove the 'G'
1440+ else:
1441+ job['channel']=chan_dir[2:] # remove the 'G'
1442+ split=0
1443+ job['split']=split
1444+ job['run_mode']=run_mode
1445+ job['dirname']=pjoin(self.me_dir, 'SubProcesses', p_dir, chan_dir)
1446+ job['wgt_frac']=1.0
1447+ if not fixed_order: job['mint_mode']=1
1448+ jobs_to_run.append(job)
1449+ jobs_to_collect=copy.copy(jobs_to_run) # These are all jobs
1450+ if fixed_order:
1451+ jobs_to_run,jobs_to_collect=self.collect_the_results(options,req_acc,jobs_to_run,
1452+ jobs_to_collect,integration_step,mode,run_mode)
1453+ # Update the integration_step to make sure that nothing will be overwritten
1454+ integration_step=1
1455+ for job in jobs_to_run:
1456+ while os.path.exists(pjoin(job['dirname'],'res_%s.dat' % integration_step)):
1457+ integration_step=integration_step+1
1458+ integration_step=integration_step-1
1459+ else:
1460+ self.append_the_results(jobs_to_collect,integration_step)
1461+ return jobs_to_run,integration_step
1462+
1463+ def prepare_directories(self,jobs_to_run,mode,fixed_order=True):
1464+ """Set-up the G* directories for running"""
1465+ name_suffix={'born' :'B' , 'all':'F'}
1466+ for job in jobs_to_run:
1467+ if job['split'] == 0:
1468+ if fixed_order :
1469+ dirname=pjoin(self.me_dir,'SubProcesses',job['p_dir'],
1470+ job['run_mode']+'_G'+job['channel'])
1471+ else:
1472+ dirname=pjoin(self.me_dir,'SubProcesses',job['p_dir'],
1473+ 'G'+name_suffix[job['run_mode']]+job['channel'])
1474+ else:
1475+ if fixed_order :
1476+ dirname=pjoin(self.me_dir,'SubProcesses',job['p_dir'],
1477+ job['run_mode']+'_G'+job['channel']+'_'+str(job['split']))
1478+ else:
1479+ dirname=pjoin(self.me_dir,'SubProcesses',job['p_dir'],
1480+ 'G'+name_suffix[job['run_mode']]+job['channel']+'_'+str(job['split']))
1481+ job['dirname']=dirname
1482+ if not os.path.isdir(dirname):
1483+ os.makedirs(dirname)
1484+ self.write_input_file(job,fixed_order)
1485+ if not fixed_order:
1486+ # copy the grids from the base directory to the split directory:
1487+ if job['split'] != 0:
1488+ for f in ['grid.MC_integer','mint_grids','res_1']:
1489+ if not os.path.isfile(pjoin(job['dirname'],f)):
1490+ files.ln(pjoin(job['dirname'].rsplit("_",1)[0],f),job['dirname'])
1491+
1492+
1493+ def write_input_file(self,job,fixed_order):
1494+ """write the input file for the madevent_mint* executable in the appropriate directory"""
1495+ if fixed_order:
1496+ content= \
1497+"""NPOINTS = %(npoints)s
1498+NITERATIONS = %(niters)s
1499+ACCURACY = %(accuracy)s
1500+ADAPT_GRID = 2
1501+MULTICHANNEL = 1
1502+SUM_HELICITY = 1
1503+CHANNEL = %(channel)s
1504+SPLIT = %(split)s
1505+RUN_MODE = %(run_mode)s
1506+RESTART = %(mint_mode)s
1507+""" \
1508+ % job
1509+ else:
1510+ content = \
1511+"""-1 12 ! points, iterations
1512+%(accuracy)s ! desired fractional accuracy
1513+1 -0.1 ! alpha, beta for Gsoft
1514+-1 -0.1 ! alpha, beta for Gazi
1515+1 ! Suppress amplitude (0 no, 1 yes)?
1516+1 ! Exact helicity sum (0 yes, n = number/event)?
1517+%(channel)s ! Enter Configuration Number:
1518+%(mint_mode)s ! MINT imode: 0 to set-up grids, 1 to perform integral, 2 generate events
1519+1 1 1 ! if imode is 1: Folding parameters for xi_i, phi_i and y_ij
1520+%(run_mode)s ! all, born, real, virt
1521+""" \
1522+ % job
1523+ with open(pjoin(job['dirname'], 'input_app.txt'), 'w') as input_file:
1524+ input_file.write(content)
1525+
1526+
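write_input_file() above fills the fixed-order template with plain %-formatting against the job dictionary. The rendering can be checked in isolation; the job values below are made-up examples, not output from a real run:

```python
# The fixed-order input_app.txt template from write_input_file(),
# rendered with %-formatting against a sample job dictionary.
# All values in `job` are illustrative.

template = """NPOINTS = %(npoints)s
NITERATIONS = %(niters)s
ACCURACY = %(accuracy)s
ADAPT_GRID = 2
MULTICHANNEL = 1
SUM_HELICITY = 1
CHANNEL = %(channel)s
SPLIT = %(split)s
RUN_MODE = %(run_mode)s
RESTART = %(mint_mode)s
"""

job = {'npoints': -1, 'niters': 6, 'accuracy': 0.10,
       'channel': '3', 'split': 0, 'run_mode': 'all', 'mint_mode': 0}
content = template % job
print(content.splitlines()[0])  # NPOINTS = -1
```

The sample values match the defaults set in create_jobs_to_run() for a fresh fixed-order run with req_acc > 0 (accuracy 0.10, 6 iterations, npoints -1).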
1527+ def run_all_jobs(self,jobs_to_run,integration_step,fixed_order=True):
1528+ """Loops over the jobs_to_run and executes them using the function 'run_exe'"""
1529+ if fixed_order:
1530+ if integration_step == 0:
1531+ self.update_status('Setting up grids', level=None)
1532+ else:
1533+ self.update_status('Refining results, step %i' % integration_step, level=None)
1534+ self.ijob = 0
1535+ name_suffix={'born' :'B', 'all':'F'}
1536+ for job in jobs_to_run:
1537+ executable='ajob1'
1538+ if fixed_order:
1539+ arguments=[job['channel'],job['run_mode'], \
1540+ str(job['split']),str(integration_step)]
1541+ run_type="Fixed order integration step %s" % integration_step
1542+ else:
1543+ arguments=[job['channel'],name_suffix[job['run_mode']], \
1544+ str(job['split']),str(integration_step)]
1545+ run_type="MINT step %s" % integration_step
1546+ self.run_exe(executable,arguments,run_type,
1547+ cwd=pjoin(self.me_dir,'SubProcesses',job['p_dir']))
1548+
1549+ if self.cluster_mode == 2:
1550+ time.sleep(1) # security to allow all jobs to be launched
1551+ self.njobs=len(jobs_to_run)
1552+ self.wait_for_complete(run_type)
1553+
1554+
1555+ def collect_the_results(self,options,req_acc,jobs_to_run,jobs_to_collect,\
1556+ integration_step,mode,run_mode,fixed_order=True):
1557+ """Collect the results, make HTML pages, print the summary and
1558+ determine if there are more jobs to run. Returns the list
1559+ of the jobs that still need to be run, as well as the
1560+ complete list of jobs that need to be collected to get the
1561+ final answer.
1562+ """
1563+# Get the results of the current integration/MINT step
1564+ self.append_the_results(jobs_to_run,integration_step)
1565+ self.cross_sect_dict = self.write_res_txt_file(jobs_to_collect,integration_step)
1566+# Update HTML pages
1567+ if fixed_order:
1568+ cross, error = sum_html.make_all_html_results(self, ['%s*' % run_mode])
1569+ else:
1570+ name_suffix={'born' :'B' , 'all':'F'}
1571+ cross, error = sum_html.make_all_html_results(self, ['G%s*' % name_suffix[run_mode]])
1572+ self.results.add_detail('cross', cross)
1573+ self.results.add_detail('error', error)
1574+# Set-up jobs for the next iteration/MINT step
1575+ jobs_to_run_new=self.update_jobs_to_run(req_acc,integration_step,jobs_to_run,fixed_order)
1576+ # if there are no more jobs, we are done!
1577+# Print summary
1578+ if (not jobs_to_run_new) and fixed_order:
1579+ # print final summary of results (for fixed order)
1580+ scale_pdf_info=self.collect_scale_pdf_info(options,jobs_to_collect)
1581+ self.print_summary(options,integration_step,mode,scale_pdf_info,done=True)
1582+ return jobs_to_run_new,jobs_to_collect
1583+ elif jobs_to_run_new:
1584+ # print intermediate summary of results
1585+ scale_pdf_info={}
1586+ self.print_summary(options,integration_step,mode,scale_pdf_info,done=False)
1587+ else:
1588+ # When we are done for (N)LO+PS runs, do not print
1589+ # anything yet. This will be done after the reweighting
1590+ # and collection of the events
1591+ scale_pdf_info={}
1592+# Prepare for the next integration/MINT step
1593+ if (not fixed_order) and integration_step+1 == 2 :
1594+ # next step is event generation (mint_step 2)
1595+ jobs_to_run_new,jobs_to_collect_new= \
1596+ self.check_the_need_to_split(jobs_to_run_new,jobs_to_collect)
1597+ self.prepare_directories(jobs_to_run_new,mode,fixed_order)
1598+ self.write_nevents_unweighted_file(jobs_to_collect_new)
1599+ self.write_nevts_files(jobs_to_run_new)
1600+ else:
1601+ self.prepare_directories(jobs_to_run_new,mode,fixed_order)
1602+ jobs_to_collect_new=jobs_to_collect
1603+ return jobs_to_run_new,jobs_to_collect_new
1604+
1605+
1606+ def write_nevents_unweighted_file(self,jobs):
1607+ """writes the nevents_unweighted file in the SubProcesses directory"""
1608+ content=[]
1609+ for job in jobs:
1610+ path=pjoin(job['dirname'].split('/')[-2],job['dirname'].split('/')[-1])
1611+ lhefile=pjoin(path,'events.lhe')
1612+ content.append(' %s %d %9e %9e' % \
1613+ (lhefile.ljust(40),job['nevents'],job['resultABS']*job['wgt_frac'],job['wgt_frac']))
1614+ with open(pjoin(self.me_dir,'SubProcesses',"nevents_unweighted"),'w') as f:
1615+ f.write('\n'.join(content)+'\n')
1616+
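Each line that write_nevents_unweighted_file() emits carries the events.lhe path padded to 40 characters, the number of events, the job's share of the absolute cross section (resultABS*wgt_frac), and the weight fraction itself. A standalone rendering of one such line, with an illustrative path and numbers (not from a real run):

```python
# One line of SubProcesses/nevents_unweighted in the format used by
# write_nevents_unweighted_file().  The path and the numbers are
# illustrative placeholders.

lhefile = 'P0_udx_epve/GF1/events.lhe'
line = ' %s %d %9e %9e' % (lhefile.ljust(40), 1000, 1.23456, 0.5)
print(line)
```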
1617+ def write_nevts_files(self,jobs):
1618+ """write the nevts files in the SubProcesses/P*/G*/ directories"""
1619+ for job in jobs:
1620+ with open(pjoin(job['dirname'],'nevts'),'w') as f:
1621+ f.write('%i\n' % job['nevents'])
1622+
1623+ def check_the_need_to_split(self,jobs_to_run,jobs_to_collect):
1624+ """Checks jobs_to_run to see whether the event generation step
1625+ needs to be split. Updates jobs_to_run and jobs_to_collect to
1626+ replace each split job by its splits, and removes jobs that
1627+ do not need any events.
1628+ """
1629+ nevt_job=self.run_card['nevt_job']
1630+ if nevt_job > 0:
1631+ jobs_to_collect_new=copy.copy(jobs_to_collect)
1632+ for job in jobs_to_run:
1633+ nevents=job['nevents']
1634+ if nevents == 0:
1635+ jobs_to_collect_new.remove(job)
1636+ elif nevents > nevt_job:
1637+ jobs_to_collect_new.remove(job)
1638+ nsplit=int(nevents/nevt_job)+1
1639+ for i in range(1,nsplit+1):
1640+ job_new=copy.copy(job)
1641+ left_over=nevents % nsplit
1642+ if i <= left_over:
1643+ job_new['nevents']=int(nevents/nsplit)+1
1644+ job_new['wgt_frac']=float(job_new['nevents'])/float(nevents)
1645+ else:
1646+ job_new['nevents']=int(nevents/nsplit)
1647+ job_new['wgt_frac']=float(job_new['nevents'])/float(nevents)
1648+ job_new['split']=i
1649+ job_new['dirname']=job['dirname']+'_%i' % job_new['split']
1650+ jobs_to_collect_new.append(job_new)
1651+ jobs_to_run_new=copy.copy(jobs_to_collect_new)
1652+ else:
1653+ jobs_to_run_new=copy.copy(jobs_to_collect)
1654+ for job in jobs_to_collect:
1655+ if job['nevents'] == 0:
1656+ jobs_to_run_new.remove(job)
1657+ jobs_to_collect_new=copy.copy(jobs_to_run_new)
1658+
1659+ return jobs_to_run_new,jobs_to_collect_new
1660+
1661+
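The splitting arithmetic in check_the_need_to_split() distributes a channel's nevents over nsplit = int(nevents/nevt_job)+1 sub-jobs, giving one extra event to each of the first nevents % nsplit sub-jobs, with wgt_frac recording each sub-job's share. The helper below isolates that arithmetic for checking; the function name and the numbers are illustrative only:

```python
# Event-splitting arithmetic from check_the_need_to_split(): a channel
# that must produce `nevents` events, with nevt_job set in the
# run_card, is split over nsplit = int(nevents/nevt_job)+1 sub-jobs.
# The first `nevents % nsplit` sub-jobs each get one extra event.
# `split_events` is a hypothetical helper; values are illustrative.

def split_events(nevents, nevt_job):
    nsplit = int(nevents / nevt_job) + 1
    left_over = nevents % nsplit
    # sub-job i (1-based) gets one extra event while i <= left_over
    return [int(nevents / nsplit) + (1 if i <= left_over else 0)
            for i in range(1, nsplit + 1)]

counts = split_events(2500, 1000)  # nsplit = 3
```

Since every count differs from nevents/nsplit by at most one, the per-sub-job wgt_frac = nevents_i/nevents stays very close to 1/nsplit.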
1662+ def update_jobs_to_run(self,req_acc,step,jobs,fixed_order=True):
1663+ """
1664+ For (N)LO+PS: determines the number of events and/or the required
1665+ accuracy per job.
1666+ For fixed order: determines which jobs need higher precision and
1667+ returns those with the newly requested precision.
1668+ """
1669+ err=self.cross_sect_dict['errt']
1670+ tot=self.cross_sect_dict['xsect']
1671+ errABS=self.cross_sect_dict['erra']
1672+ totABS=self.cross_sect_dict['xseca']
1673+ jobs_new=[]
1674+ if fixed_order:
1675+ if req_acc == -1:
1676+ if step == 0:
1677+ npoints = self.run_card['npoints_FO']
1678+ niters = self.run_card['niters_FO']
1679+ for job in jobs:
1680+ job['mint_mode']=-1
1681+ job['niters']=niters
1682+ job['npoints']=npoints
1683+ jobs_new.append(job)
1684+ elif step > 0:
1685+ raise aMCatNLOError('Cannot determine number of iterations and PS points '+
1686+ 'for integration step %i' % step )
1687+ elif ( req_acc > 0 and err/tot > req_acc*1.2 ) or step == 0:
1688+ req_accABS=req_acc*abs(tot)/totABS # overall relative required accuracy on ABS Xsec.
1689+ for job in jobs:
1690+ job['mint_mode']=-1
1691+ # Determine relative required accuracy on the ABS for this job
1692+ job['accuracy']=req_accABS*math.sqrt(totABS/job['resultABS'])
1693+ # If already accurate enough, skip running
1694+ if job['accuracy'] > job['errorABS']/job['resultABS'] and step != 0:
1695+ continue
1696+ # Update the number of PS points based on errorABS, ncall and accuracy
1697+ itmax_fl=job['niters_done']*math.pow(job['errorABS']/
1698+ (job['accuracy']*job['resultABS']),2)
1699+ if itmax_fl <= 4.0 :
1700+ job['niters']=max(int(round(itmax_fl)),2)
1701+ job['npoints']=job['npoints_done']*2
1702+ elif itmax_fl > 4.0 and itmax_fl <= 16.0 :
1703+ job['niters']=4
1704+ job['npoints']=int(round(job['npoints_done']*itmax_fl/4.0))*2
1705+ else:
1706+ if itmax_fl > 100.0 : itmax_fl=50.0
1707+ job['niters']=int(round(math.sqrt(itmax_fl)))
1708+ job['npoints']=int(round(job['npoints_done']*itmax_fl/
1709+ round(math.sqrt(itmax_fl))))*2
1710+ # Add the job to the list of jobs that need to be run
1711+ jobs_new.append(job)
1712+ return jobs_new
1713+ elif step+1 <= 2:
1714+ nevents=self.run_card['nevents']
1715+ # Total required accuracy for the upper bounding envelope
1716+ if req_acc<0:
1717+ req_acc2_inv=nevents
1718+ else:
1719+ req_acc2_inv=1/(req_acc*req_acc)
1720+ if step+1 == 1:
1721+ # determine the req. accuracy for each of the jobs for Mint-step = 1
1722+ for job in jobs:
1723+ accuracy=min(math.sqrt(totABS/(req_acc2_inv*job['resultABS'])),0.2)
1724+ job['accuracy']=accuracy
1725+ elif step+1 == 2:
1726+ # Randomly (based on the relative ABS Xsec of the job) determine the
1727+ # number of events each job needs to generate for MINT-step = 2.
1728+ r=self.get_randinit_seed()
1729+ random.seed(r)
1730+ totevts=nevents
1731+ for job in jobs:
1732+ job['nevents'] = 0
1733+ while totevts :
1734+ target = random.random() * totABS
1735+ crosssum = 0.
1736+ i = 0
1737+ while i<len(jobs) and crosssum < target:
1738+ job = jobs[i]
1739+ crosssum += job['resultABS']
1740+ i += 1
1741+ totevts -= 1
1742+ i -= 1
1743+ jobs[i]['nevents'] += 1
1744+ for job in jobs:
1745+ job['mint_mode']=step+1 # next step
1746+ return jobs
1747+ else:
1748+ return []
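The fixed-order update rule above assumes the Monte Carlo error falls as 1/sqrt(N), so the projected iteration count scales with the square of the ratio of current to target relative error. A minimal sketch of that rescaling, mirroring the three branches in the diff (the helper name `rescale_iterations` is hypothetical):

```python
import math

def rescale_iterations(niters_done, npoints_done, errorABS, resultABS, accuracy):
    # Projected total iterations, assuming error ~ 1/sqrt(n):
    # itmax_fl = niters_done * (current_rel_err / target_rel_err)^2
    itmax_fl = niters_done * (errorABS / (accuracy * resultABS)) ** 2
    if itmax_fl <= 4.0:
        niters = max(int(round(itmax_fl)), 2)
        npoints = npoints_done * 2
    elif itmax_fl <= 16.0:
        niters = 4
        npoints = int(round(npoints_done * itmax_fl / 4.0)) * 2
    else:
        if itmax_fl > 100.0:
            itmax_fl = 50.0
        niters = int(round(math.sqrt(itmax_fl)))
        npoints = int(round(npoints_done * itmax_fl /
                            round(math.sqrt(itmax_fl)))) * 2
    return niters, npoints
```

A job that is already at its target accuracy keeps its iteration count and only doubles its points; a job far from target grows both.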
1749+
1750+
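For MINT step 2, events are distributed over the channels by repeated weighted sampling on the absolute cross sections. A self-contained sketch of that loop (the `distribute_events` helper is illustrative only):

```python
import random

def distribute_events(jobs, nevents, seed):
    # Assign each of the nevents events to an integration channel with
    # probability proportional to its absolute cross section.
    random.seed(seed)
    totABS = sum(job['resultABS'] for job in jobs)
    for job in jobs:
        job['nevents'] = 0
    for _ in range(nevents):
        target = random.random() * totABS
        crosssum = 0.
        for job in jobs:
            crosssum += job['resultABS']
            if crosssum >= target:
                job['nevents'] += 1
                break
    return jobs
```

Seeding with the value from `randinit` (as `get_randinit_seed` does) makes the per-channel event counts reproducible across reruns.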
1751+ def get_randinit_seed(self):
1752+ """ Get the random number seed from the randinit file """
1753+ with open(pjoin(self.me_dir,"SubProcesses","randinit")) as randinit:
1754+ # format of the file is "r=%d".
1755+ iseed = int(randinit.read()[2:])
1756+ return iseed
1757+
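The `randinit` file holds the single token `r=<seed>`, which the method above strips and converts; a one-line parser sketch of the same format (hypothetical helper name):

```python
def read_randinit(text):
    # the randinit file content is "r=%d"; drop the "r=" prefix
    return int(text[2:])
```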
1758+
1759+ def append_the_results(self,jobs,integration_step):
1760+ """Appends the results for each of the jobs in the job list"""
1761+ error_found=False
1762+ for job in jobs:
1763+ try:
1764+ if integration_step >= 0 :
1765+ with open(pjoin(job['dirname'],'res_%s.dat' % integration_step)) as res_file:
1766+ results=res_file.readline().split()
1767+ else:
1768+ # should only be here when doing fixed order with the 'only_generation'
1769+ # option equal to True. Take the results from the final run done.
1770+ with open(pjoin(job['dirname'],'res.dat')) as res_file:
1771+ results=res_file.readline().split()
1772+ except IOError:
1773+ if not error_found:
1774+ error_found=True
1775+ error_log=[]
1776+ error_log.append(pjoin(job['dirname'],'log.txt'))
1777+ continue
1778+ job['resultABS']=float(results[0])
1779+ job['errorABS']=float(results[1])
1780+ job['result']=float(results[2])
1781+ job['error']=float(results[3])
1782+ job['niters_done']=int(results[4])
1783+ job['npoints_done']=int(results[5])
1784+ job['time_spend']=float(results[6])
1785+ job['err_percABS'] = job['errorABS']/job['resultABS']*100.
1786+ job['err_perc'] = job['error']/job['result']*100.
1787+ if error_found:
1788+ raise aMCatNLOError('An error occurred during the collection of results.\n' +
1789+ 'Please check the .log files inside the directories which failed:\n' +
1790+ '\n'.join(error_log)+'\n')
1791+
1792+
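Each `res_<step>.dat` file read above holds one whitespace-separated record: absolute cross section, its error, cross section, its error, iterations done, points done, and time spent. A sketch of the same parsing (the `parse_res_line` name is hypothetical):

```python
def parse_res_line(line):
    # record layout: resultABS errorABS result error niters npoints time
    f = line.split()
    rec = {'resultABS': float(f[0]), 'errorABS': float(f[1]),
           'result': float(f[2]), 'error': float(f[3]),
           'niters_done': int(f[4]), 'npoints_done': int(f[5]),
           'time_spend': float(f[6])}
    # derived relative errors in percent, as stored on the job dictionaries
    rec['err_percABS'] = rec['errorABS'] / rec['resultABS'] * 100.
    rec['err_perc'] = rec['error'] / rec['result'] * 100.
    return rec
```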
1793+
1794+ def write_res_txt_file(self,jobs,integration_step):
1795+ """writes the res.txt files in the SubProcesses dir"""
1796+ jobs.sort(key = lambda job: -job['errorABS'])
1797+ content=[]
1798+ content.append('\n\nCross-section per integration channel:')
1799+ for job in jobs:
1800+ content.append('%(p_dir)20s %(channel)15s %(result)10.8e %(error)6.4e %(err_perc)6.4f%% ' % job)
1801+ content.append('\n\nABS cross-section per integration channel:')
1802+ for job in jobs:
1803+ content.append('%(p_dir)20s %(channel)15s %(resultABS)10.8e %(errorABS)6.4e %(err_percABS)6.4f%% ' % job)
1804+ totABS=0
1805+ errABS=0
1806+ tot=0
1807+ err=0
1808+ for job in jobs:
1809+ totABS+= job['resultABS']
1810+ errABS+= math.pow(job['errorABS'],2)
1811+ tot+= job['result']
1812+ err+= math.pow(job['error'],2)
1813+ content.append('\nTotal ABS and \nTotal: \n %10.8e +- %6.4e (%6.4e%%)\n %10.8e +- %6.4e (%6.4e%%) \n' %\
1814+ (totABS, math.sqrt(errABS), math.sqrt(errABS)/totABS *100.,tot, math.sqrt(err), math.sqrt(err)/tot *100.))
1815+ with open(pjoin(self.me_dir,'SubProcesses','res_%s.txt' % integration_step),'w') as res_file:
1816+ res_file.write('\n'.join(content))
1817+ randinit=self.get_randinit_seed()
1818+ return {'xsect':tot,'xseca':totABS,'errt':math.sqrt(err),\
1819+ 'erra':math.sqrt(errABS),'randinit':randinit}
1820+
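The totals written above combine the channels in the standard way: cross sections add linearly, while the statistical errors add in quadrature. A minimal sketch (hypothetical helper name):

```python
import math

def combine_channels(jobs):
    # cross sections add linearly; independent errors add in quadrature
    tot = sum(job['result'] for job in jobs)
    err = math.sqrt(sum(job['error'] ** 2 for job in jobs))
    return tot, err
```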
1821+
1822+ def collect_scale_pdf_info(self,options,jobs):
1823+ """reads the scale_pdf_dependence.dat files and collects their results"""
1824+ scale_pdf_info={}
1825+ if self.run_card['reweight_scale'] or self.run_card['reweight_PDF']:
1826+ data_files=[]
1827+ for job in jobs:
1828+ data_files.append(pjoin(job['dirname'],'scale_pdf_dependence.dat'))
1829+ scale_pdf_info = self.pdf_scale_from_reweighting(data_files)
1830+ return scale_pdf_info
1831+
1832+
1833+ def combine_plots_FO(self,folder_name,jobs):
1834+ """combines the plots and puts them in the Events/run* directory"""
1835+ devnull = os.open(os.devnull, os.O_RDWR)
1836+ if self.analyse_card['fo_analysis_format'].lower() == 'topdrawer':
1837+ misc.call(['./combine_plots_FO.sh'] + folder_name, \
1838+ stdout=devnull,
1839+ cwd=pjoin(self.me_dir, 'SubProcesses'))
1840+ files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.top'),
1841+ pjoin(self.me_dir, 'Events', self.run_name))
1842+ logger.info('The results of this run and the TopDrawer file with the plots' + \
1843+ ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1844+ elif self.analyse_card['fo_analysis_format'].lower() == 'hwu':
1845+ self.combine_plots_HwU(jobs)
1846+ files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.HwU'),
1847+ pjoin(self.me_dir, 'Events', self.run_name))
1848+ files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.gnuplot'),
1849+ pjoin(self.me_dir, 'Events', self.run_name))
1850+ try:
1851+ misc.call(['gnuplot','MADatNLO.gnuplot'],\
1852+ stdout=devnull,stderr=devnull,\
1853+ cwd=pjoin(self.me_dir, 'Events', self.run_name))
1854+ except Exception:
1855+ pass
1856+ logger.info('The results of this run and the HwU and GnuPlot files with the plots' + \
1857+ ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1858+ elif self.analyse_card['fo_analysis_format'].lower() == 'root':
1859+ misc.call(['./combine_root.sh'] + folder_name, \
1860+ stdout=devnull,
1861+ cwd=pjoin(self.me_dir, 'SubProcesses'))
1862+ files.cp(pjoin(self.me_dir, 'SubProcesses', 'MADatNLO.root'),
1863+ pjoin(self.me_dir, 'Events', self.run_name))
1864+ logger.info('The results of this run and the ROOT file with the plots' + \
1865+ ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1866+ else:
1867+ logger.info('The results of this run' + \
1868+ ' have been saved in %s' % pjoin(self.me_dir, 'Events', self.run_name))
1869+
1870+
1871+ def combine_plots_HwU(self,jobs):
1872 """Sums all the plots in the HwU format."""
1873-
1874 logger.debug('Combining HwU plots.')
1875-
1876- with open(pjoin(self.me_dir,'SubProcesses','dirs.txt')) as dirf:
1877- all_histo_paths = dirf.readlines()
1878- all_histo_paths = [pjoin(self.me_dir,'SubProcesses',
1879- path.rstrip(),"MADatNLO.HwU") for path in all_histo_paths]
1880-
1881+ all_histo_paths=[]
1882+ for job in jobs:
1883+ all_histo_paths.append(pjoin(job['dirname'],"MADatNLO.HwU"))
1884 histogram_list = histograms.HwUList(all_histo_paths[0])
1885-
1886 for histo_path in all_histo_paths[1:]:
1887 for i, histo in enumerate(histograms.HwUList(histo_path)):
1888 # First make sure the plots have the same weight labels and such
1889@@ -1644,19 +1912,18 @@
1890 histogram_list.output(pjoin(self.me_dir,'SubProcesses',"MADatNLO"),
1891 format = 'gnuplot')
1892
1893- def applgrid_combine(self,cross,error):
1894+ def applgrid_combine(self,cross,error,jobs):
1895 """Combines the APPLgrids in all the SubProcess/P*/all_G*/ directories"""
1896 logger.debug('Combining APPLgrids \n')
1897 applcomb=pjoin(self.options['applgrid'].rstrip('applgrid-config'),
1898 'applgrid-combine')
1899- with open(pjoin(self.me_dir,'SubProcesses','dirs.txt')) as dirf:
1900- all_jobs=dirf.readlines()
1901+ all_jobs=[]
1902+ for job in jobs:
1903+ all_jobs.append(job['dirname'])
1904 ngrids=len(all_jobs)
1905- nobs =len([name for name in os.listdir(pjoin(self.me_dir,'SubProcesses',
1906- all_jobs[0].rstrip())) if name.endswith("_out.root")])
1907+ nobs =len([name for name in os.listdir(all_jobs[0]) if name.endswith("_out.root")])
1908 for obs in range(0,nobs):
1909- gdir = [pjoin(self.me_dir,'SubProcesses',job.rstrip(),"grid_obs_"+
1910- str(obs)+"_out.root") for job in all_jobs]
1911+ gdir = [pjoin(job,"grid_obs_"+str(obs)+"_out.root") for job in all_jobs]
1912 # combine APPLgrids from different channels for observable 'obs'
1913 if self.run_card["iappl"] == 1:
1914 misc.call([applcomb,'-o', pjoin(self.me_dir,"Events",self.run_name,
1915@@ -1668,8 +1935,7 @@
1916 self.run_name,"aMCfast_obs_"+str(obs)+".root"),'-s',
1917 str(unc2_inv),'--weight',str(unc2_inv)]+ gdir)
1918 for job in all_jobs:
1919- os.remove(pjoin(self.me_dir,'SubProcesses',job.rstrip(),
1920- "grid_obs_"+str(obs)+"_in.root"))
1921+ os.remove(pjoin(job,"grid_obs_"+str(obs)+"_in.root"))
1922 else:
1923 raise aMCatNLOError('iappl parameter can only be 0, 1 or 2')
1924 # after combining, delete the original grids
1925@@ -1710,14 +1976,10 @@
1926 if not hasattr(self, 'appl_start_grid') or not self.appl_start_grid:
1927 raise self.InvalidCmd('No APPLgrid name currently defined.'+
1928 'Please provide this information.')
1929- if mode == 'NLO':
1930- gdir='all_G'
1931- elif mode == 'LO':
1932- gdir='born_G'
1933 #copy the grid to all relevant directories
1934 for pdir in p_dirs:
1935 g_dirs = [file for file in os.listdir(pjoin(self.me_dir,
1936- "SubProcesses",pdir)) if file.startswith(gdir) and
1937+ "SubProcesses",pdir)) if file.startswith(mode+'_G') and
1938 os.path.isdir(pjoin(self.me_dir,"SubProcesses",pdir, file))]
1939 for g_dir in g_dirs:
1940 for grid in all_grids:
1941@@ -1726,28 +1988,20 @@
1942 'grid_obs_'+obs+'_in.root'))
1943
1944
1945- def collect_log_files(self, folders, istep):
1946+
1947+
1948+ def collect_log_files(self, jobs, integration_step):
1949 """collect the log files and put them in a single, html-friendly file
1950- inside the run_... directory"""
1951- step_list = ['Grid setting', 'Cross-section computation',
1952- 'Event generation']
1953+ inside the Events/run_.../ directory"""
1954 log_file = pjoin(self.me_dir, 'Events', self.run_name,
1955- 'alllogs_%d.html' % istep)
1956- # this keeps track of which step has been computed for which channel
1957- channel_dict = {}
1958- log_files = []
1959- for folder in folders:
1960- log_files += glob.glob(pjoin(self.me_dir, 'SubProcesses', 'P*',
1961- folder, 'log.txt'))
1962+ 'alllogs_%d.html' % integration_step)
1963+ outfile = open(log_file, 'w')
1964
1965 content = ''
1966-
1967- outfile = open(log_file, 'w')
1968-
1969 content += '<HTML><BODY>\n<font face="courier" size=2>'
1970- for log in log_files:
1971- channel_dict[os.path.dirname(log)] = [istep]
1972+ for job in jobs:
1973 # put an anchor
1974+ log=pjoin(job['dirname'],'log_MINT%s.txt' % integration_step)
1975 content += '<a name=%s></a>\n' % (os.path.dirname(log).replace(
1976 pjoin(self.me_dir,'SubProcesses'),''))
1977 # and put some nice header
1978@@ -1755,7 +2009,7 @@
1979 content += '<br>LOG file for integration channel %s, %s <br>' % \
1980 (os.path.dirname(log).replace(pjoin(self.me_dir,
1981 'SubProcesses'), ''),
1982- step_list[istep])
1983+ integration_step)
1984 content += '</font>\n'
1985 #then just flush the content of the small log inside the big log
1986 #the PRE tag prints everything verbatim
1987@@ -1768,53 +2022,78 @@
1988 outfile.close()
1989
1990
1991- def read_results(self, output, mode):
1992- """extract results (cross-section, absolute cross-section and errors)
1993- from output, which should be formatted as
1994- Found 4 correctly terminated jobs
1995- random seed found in 'randinit' is 33
1996- Integrated abs(cross-section)
1997- 7.94473937e+03 +- 2.9953e+01 (3.7702e-01%)
1998- Integrated cross-section
1999- 6.63392298e+03 +- 3.7669e+01 (5.6782e-01%)
2000- for aMC@NLO/aMC@LO, and as
2001-
2002- for NLO/LO
2003- The cross_sect_dict is returned"""
2004- res = {}
2005- if mode in ['aMC@LO', 'aMC@NLO', 'noshower', 'noshowerLO']:
2006- pat = re.compile(\
2007-'''Found (\d+) correctly terminated jobs
2008-random seed found in 'randinit' is (\d+)
2009-Integrated abs\(cross-section\)
2010-\s*(\d+\.\d+e[+-]\d+) \+\- (\d+\.\d+e[+-]\d+) \((\d+\.\d+e[+-]\d+)\%\)
2011-Integrated cross-section
2012-\s*(\-?\d+\.\d+e[+-]\d+) \+\- (\d+\.\d+e[+-]\d+) \((\-?\d+\.\d+e[+-]\d+)\%\)''')
2013- else:
2014- pat = re.compile(\
2015-'''Found (\d+) correctly terminated jobs
2016-\s*(\-?\d+\.\d+e[+-]\d+) \+\- (\d+\.\d+e[+-]\d+) \((\-?\d+\.\d+e[+-]\d+)\%\)''')
2017- pass
2018-
2019- match = re.search(pat, output[0])
2020- if not match or output[1]:
2021- logger.info('Return code of the event collection: '+str(output[1]))
2022- logger.info('Output of the event collection:\n'+output[0])
2023- raise aMCatNLOError('An error occurred during the collection of results.\n' +
2024- 'Please check the .log files inside the directories which failed.')
2025-# if int(match.groups()[0]) != self.njobs:
2026-# raise aMCatNLOError('Not all jobs terminated successfully')
2027- if mode in ['aMC@LO', 'aMC@NLO', 'noshower', 'noshowerLO']:
2028- return {'randinit' : int(match.groups()[1]),
2029- 'xseca' : float(match.groups()[2]),
2030- 'erra' : float(match.groups()[3]),
2031- 'xsect' : float(match.groups()[5]),
2032- 'errt' : float(match.groups()[6])}
2033- else:
2034- return {'xsect' : float(match.groups()[1]),
2035- 'errt' : float(match.groups()[2])}
2036-
2037- def print_summary(self, options, step, mode, scale_pdf_info={}):
2038+ def finalise_run_FO(self,folder_name,jobs):
2039+ """Combine the plots and put the res*.txt files in the Events/run.../ folder."""
2040+ # Copy the res_*.txt files to the Events/run* folder
2041+ res_files=glob.glob(pjoin(self.me_dir, 'SubProcesses', 'res_*.txt'))
2042+ for res_file in res_files:
2043+ files.mv(res_file,pjoin(self.me_dir, 'Events', self.run_name))
2044+ # Collect the plots and put them in the Events/run* folder
2045+ self.combine_plots_FO(folder_name,jobs)
2046+ # If doing the applgrid-stuff, also combine those grids
2047+ # and put those in the Events/run* folder
2048+ if self.run_card['iappl'] != 0:
2049+ self.applgrid_combine(cross,error,jobs)
2050+
2051+
2052+ def setup_cluster_or_multicore(self):
2053+ """setup the number of cores for multicore, and the cluster-type for cluster runs"""
2054+ if self.cluster_mode == 1:
2055+ cluster_name = self.options['cluster_type']
2056+ self.cluster = cluster.from_name[cluster_name](**self.options)
2057+ if self.cluster_mode == 2:
2058+ try:
2059+ import multiprocessing
2060+ if not self.nb_core:
2061+ try:
2062+ self.nb_core = int(self.options['nb_core'])
2063+ except TypeError:
2064+ self.nb_core = multiprocessing.cpu_count()
2065+ logger.info('Using %d cores' % self.nb_core)
2066+ except ImportError:
2067+ self.nb_core = 1
2068+ logger.warning('Impossible to detect the number of cores => Using One.\n'+
2069+ 'Use set nb_core X in order to set this number and be able to '+
2070+ 'run in multicore.')
2071+
2072+ self.cluster = cluster.MultiCore(**self.options)
2073+
2074+
2075+ def clean_previous_results(self,options,p_dirs,folder_name):
2076+ """Clean previous results.
2077+ o. If doing only the reweighting step, do not delete anything and return directly.
2078+ o. Always remove all the G*_* files (from split event generation).
2079+ o. Remove the G* (or born_G* or all_G*) only when NOT doing only_generation or reweight_only."""
2080+ if options['reweightonly']:
2081+ return
2082+ if not options['only_generation']:
2083+ self.update_status('Cleaning previous results', level=None)
2084+ for dir in p_dirs:
2085+ #find old folders to be removed
2086+ for obj in folder_name:
2087+ # list all the G* (or all_G* or born_G*) directories
2088+ to_rm = [file for file in \
2089+ os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)) \
2090+ if file.startswith(obj[:-1]) and \
2091+ (os.path.isdir(pjoin(self.me_dir, 'SubProcesses', dir, file)) or \
2092+ os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir, file)))]
2093+ # list all the G*_* directories (from split event generation)
2094+ to_always_rm = [file for file in \
2095+ os.listdir(pjoin(self.me_dir, 'SubProcesses', dir)) \
2096+ if file.startswith(obj[:-1]) and
2097+ '_' in file and not '_G' in file and \
2098+ (os.path.isdir(pjoin(self.me_dir, 'SubProcesses', dir, file)) or \
2099+ os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir, file)))]
2100+
2101+ if not options['only_generation']:
2102+ to_always_rm.extend(to_rm)
2103+ if os.path.exists(pjoin(self.me_dir, 'SubProcesses', dir,'MadLoop5_resources.tar.gz')):
2104+ to_always_rm.append(pjoin(self.me_dir, 'SubProcesses', dir,'MadLoop5_resources.tar.gz'))
2105+ files.rm([pjoin(self.me_dir, 'SubProcesses', dir, d) for d in to_always_rm])
2106+ return
2107+
2108+
2109+ def print_summary(self, options, step, mode, scale_pdf_info={}, done=True):
2110 """print a summary of the results contained in self.cross_sect_dict.
2111 step corresponds to the mintMC step, if =2 (i.e. after event generation)
2112 some additional infos are printed"""
2113@@ -1833,17 +2112,16 @@
2114 if mode in ['aMC@NLO', 'aMC@LO', 'noshower', 'noshowerLO']:
2115 log_GV_files = glob.glob(pjoin(self.me_dir, \
2116 'SubProcesses', 'P*','G*','log_MINT*.txt'))
2117- all_log_files = glob.glob(pjoin(self.me_dir, \
2118- 'SubProcesses', 'P*','G*','log*.txt'))
2119+ all_log_files = log_GV_files
2120 elif mode == 'NLO':
2121 log_GV_files = glob.glob(pjoin(self.me_dir, \
2122- 'SubProcesses', 'P*','all_G*','log*.txt'))
2123- all_log_files = sum([glob.glob(pjoin(self.me_dir,'SubProcesses', 'P*',
2124- '%sG*'%foldName,'log*.txt')) for foldName in ['all_']],[])
2125+ 'SubProcesses', 'P*','all_G*','log_MINT*.txt'))
2126+ all_log_files = log_GV_files
2127+
2128 elif mode == 'LO':
2129 log_GV_files = ''
2130- all_log_files = sum([glob.glob(pjoin(self.me_dir,'SubProcesses', 'P*',
2131- '%sG*'%foldName,'log*.txt')) for foldName in ['born_']],[])
2132+ all_log_files = glob.glob(pjoin(self.me_dir, \
2133+ 'SubProcesses', 'P*','born_G*','log_MINT*.txt'))
2134 else:
2135 raise aMCatNLOError, 'Running mode %s not supported.'%mode
2136
2137@@ -1886,18 +2164,22 @@
2138 misc.format_timer(time.time()-self.start_time))
2139
2140 elif mode in ['NLO', 'LO']:
2141- status = ['Results after grid setup (cross-section is non-physical):',
2142+ status = ['Results after grid setup:','Current results:',
2143 'Final results and run summary:']
2144- if step == 0:
2145- message = '\n ' + status[step] + \
2146- '\n Total cross-section: %(xsect)8.3e +- %(errt)6.1e pb' % \
2147- self.cross_sect_dict
2148- elif step == 1:
2149- message = '\n ' + status[step] + proc_info + \
2150+ if (not done) and (step == 0):
2151+ message = '\n ' + status[0] + \
2152+ '\n Total cross-section: %(xsect)8.3e +- %(errt)6.1e pb' % \
2153+ self.cross_sect_dict
2154+ elif not done:
2155+ message = '\n ' + status[1] + \
2156+ '\n Total cross-section: %(xsect)8.3e +- %(errt)6.1e pb' % \
2157+ self.cross_sect_dict
2158+ elif done:
2159+ message = '\n ' + status[2] + proc_info + \
2160 '\n Total cross-section: %(xsect)8.3e +- %(errt)6.1e pb' % \
2161 self.cross_sect_dict
2162 if self.run_card['reweight_scale']:
2163- if int(self.run_card['ickkw'])!=-1:
2164+ if self.run_card['ickkw'] != -1:
2165 message = message + \
2166 ('\n Ren. and fac. scale uncertainty: +%0.1f%% -%0.1f%%') % \
2167 (scale_pdf_info['scale_upp'], scale_pdf_info['scale_low'])
2168@@ -1910,7 +2192,7 @@
2169 ('\n PDF uncertainty: +%0.1f%% -%0.1f%%') % \
2170 (scale_pdf_info['pdf_upp'], scale_pdf_info['pdf_low'])
2171
2172- if (mode in ['NLO', 'LO'] and step!=1) or \
2173+ if (mode in ['NLO', 'LO'] and not done) or \
2174 (mode in ['aMC@NLO', 'aMC@LO', 'noshower', 'noshowerLO'] and step!=2):
2175 logger.info(message+'\n')
2176 return
2177@@ -2371,7 +2653,6 @@
2178 scale_pdf_info={}
2179 if self.run_card['reweight_scale'] or self.run_card['reweight_PDF'] :
2180 scale_pdf_info = self.run_reweight(options['reweightonly'])
2181-
2182 self.update_status('Collecting events', level='parton', update_results=True)
2183 misc.compile(['collect_events'],
2184 cwd=pjoin(self.me_dir, 'SubProcesses'))
2185@@ -2395,6 +2676,10 @@
2186 misc.gzip(pjoin(self.me_dir, 'SubProcesses', filename), stdout=evt_file)
2187 if not options['reweightonly']:
2188 self.print_summary(options, 2, mode, scale_pdf_info)
2189+ res_files=glob.glob(pjoin(self.me_dir, 'SubProcesses', 'res*.txt'))
2190+ for res_file in res_files:
2191+ files.mv(res_file,pjoin(self.me_dir, 'Events', self.run_name))
2192+
2193 logger.info('The %s file has been generated.\n' % (evt_file))
2194 self.results.add_detail('nb_event', nevents)
2195 self.update_status('Events generated', level='parton', update_results=True)
2196@@ -2415,9 +2700,9 @@
2197
2198 #check that the number of split event files divides the number of
2199 # events, otherwise set it to 1
2200- if int(int(self.banner.get_detail('run_card', 'nevents')) / \
2201+ if int(self.banner.get_detail('run_card', 'nevents') / \
2202 self.shower_card['nsplit_jobs']) * self.shower_card['nsplit_jobs'] \
2203- != int(self.banner.get_detail('run_card', 'nevents')):
2204+ != self.banner.get_detail('run_card', 'nevents'):
2205 logger.warning(\
2206 'nsplit_jobs in the shower card is not a divisor of the number of events.\n' + \
2207 'Setting it to 1.')
2208@@ -2425,7 +2710,7 @@
2209
2210 # don't split jobs if the user asks to shower only a part of the events
2211 if self.shower_card['nevents'] > 0 and \
2212- self.shower_card['nevents'] < int(self.banner.get_detail('run_card', 'nevents')) and \
2213+ self.shower_card['nevents'] < self.banner.get_detail('run_card', 'nevents') and \
2214 self.shower_card['nsplit_jobs'] != 1:
2215 logger.warning(\
2216 'Only a part of the events will be showered.\n' + \
2217@@ -3011,8 +3296,8 @@
2218 init_dict = self.get_init_dict(evt_file)
2219
2220 if nevents < 0 or \
2221- nevents > int(self.banner.get_detail('run_card', 'nevents')):
2222- nevents = int(self.banner.get_detail('run_card', 'nevents'))
2223+ nevents > self.banner.get_detail('run_card', 'nevents'):
2224+ nevents = self.banner.get_detail('run_card', 'nevents')
2225
2226 nevents = nevents / self.shower_card['nsplit_jobs']
2227
2228@@ -3024,7 +3309,7 @@
2229
2230 content = 'EVPREFIX=%s\n' % pjoin(os.path.split(evt_file)[1])
2231 content += 'NEVENTS=%d\n' % nevents
2232- content += 'NEVENTS_TOT=%d\n' % (int(self.banner.get_detail('run_card', 'nevents')) /\
2233+ content += 'NEVENTS_TOT=%d\n' % (self.banner.get_detail('run_card', 'nevents') /\
2234 self.shower_card['nsplit_jobs'])
2235 content += 'MCMODE=%s\n' % shower
2236 content += 'PDLABEL=%s\n' % pdlabel
2237@@ -3137,7 +3422,7 @@
2238
2239
2240 def run_reweight(self, only):
2241- """runs the reweight_xsec_events eecutables on each sub-event file generated
2242+ """runs the reweight_xsec_events executables on each sub-event file generated
2243 to compute on the fly scale and/or PDF uncertainities"""
2244 logger.info(' Doing reweight')
2245
2246@@ -3255,7 +3540,7 @@
2247 scale_pdf_info['scale_low'] = 0.0
2248
2249 # get the pdf uncertainty in percent (according to the Hessian method)
2250- lhaid=int(self.run_card['lhaid'])
2251+ lhaid=self.run_card['lhaid']
2252 pdf_upp=0.0
2253 pdf_low=0.0
2254 if lhaid <= 90000:
2255@@ -3270,7 +3555,6 @@
2256 else:
2257 scale_pdf_info['pdf_upp'] = 0.0
2258 scale_pdf_info['pdf_low'] = 0.0
2259-
2260 else:
2261 # use Gaussian method (NNPDF)
2262 pdf_stdev=0.0
2263@@ -3287,7 +3571,6 @@
2264
2265 def wait_for_complete(self, run_type):
2266 """this function waits for jobs on cluster to complete their run."""
2267-
2268 starttime = time.time()
2269 #logger.info(' Waiting for submitted jobs to complete')
2270 update_status = lambda i, r, f: self.update_status((i, r, f, run_type),
2271@@ -3300,29 +3583,15 @@
2272
2273 def run_all(self, job_dict, arg_list, run_type='monitor', split_jobs = False):
2274 """runs the jobs in job_dict (organized as folder: [job_list]), with arguments args"""
2275- njob_split = 0
2276 self.ijob = 0
2277-
2278- # this is to keep track, if splitting evt generation, of the various
2279- # folders/args in order to resubmit the jobs if some of them fail
2280- self.split_folders = {}
2281-
2282 if run_type != 'shower':
2283 self.njobs = sum(len(jobs) for jobs in job_dict.values()) * len(arg_list)
2284 for args in arg_list:
2285 for Pdir, jobs in job_dict.items():
2286 for job in jobs:
2287- if not split_jobs:
2288- self.run_exe(job, args, run_type, cwd=pjoin(self.me_dir, 'SubProcesses', Pdir) )
2289- else:
2290- for n in self.find_jobs_to_split(Pdir, job, args[1]):
2291- self.run_exe(job, args + [n], run_type, cwd=pjoin(self.me_dir, 'SubProcesses', Pdir) )
2292- njob_split += 1
2293- # print some statistics if running serially
2294+ self.run_exe(job, args, run_type, cwd=pjoin(self.me_dir, 'SubProcesses', Pdir) )
2295 if self.cluster_mode == 2:
2296 time.sleep(1) # security to allow all jobs to be launched
2297- if njob_split > 0:
2298- self.njobs = njob_split
2299 else:
2300 self.njobs = len(arg_list)
2301 for args in arg_list:
2302@@ -3333,37 +3602,27 @@
2303
2304
2305
2306- def check_event_files(self):
2307+ def check_event_files(self,jobs):
2308 """check the integrity of the event files after splitting, and resubmit
2309 those which are not nicely terminated"""
2310- to_resubmit = []
2311- for dir in self.split_folders.keys():
2312+ jobs_to_resubmit = []
2313+ for job in jobs:
2314 last_line = ''
2315 try:
2316 last_line = subprocess.Popen(
2317- ['tail', '-n1', pjoin(dir, 'events.lhe')], \
2318+ ['tail', '-n1', pjoin(job['dirname'], 'events.lhe')], \
2319 stdout = subprocess.PIPE).stdout.read().strip()
2320 except IOError:
2321 pass
2322-
2323 if last_line != "</LesHouchesEvents>":
2324- to_resubmit.append(dir)
2325-
2326+ jobs_to_resubmit.append(job)
2327 self.njobs = 0
2328- if to_resubmit:
2329+ if jobs_to_resubmit:
2330 run_type = 'Resubmitting broken jobs'
2331 logger.info('Some event files are broken, corresponding jobs will be resubmitted.')
2332- logger.debug('Resubmitting\n' + '\n'.join(to_resubmit) + '\n')
2333- for dir in to_resubmit:
2334- files.rm([dir])
2335- job = self.split_folders[dir][0]
2336- args = self.split_folders[dir][1:]
2337- run_type = 'monitor'
2338- cwd = os.path.split(dir)[0]
2339- self.run_exe(job, args, run_type, cwd=cwd )
2340- self.njobs +=1
2341-
2342- self.wait_for_complete(run_type)
2343+ for job in jobs_to_resubmit:
2344+ logger.debug('Resubmitting ' + job['dirname'] + '\n')
2345+ self.run_all_jobs(jobs_to_resubmit,2,fixed_order=False)
2346
2347
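The integrity check above shells out to `tail -n1` to see whether an event file was fully written; an equivalent pure-Python sketch of the same test (the `event_file_ok` name is hypothetical):

```python
def event_file_ok(path):
    # A complete LHE file ends with the closing root tag; a job that died
    # mid-write leaves a truncated file whose last line differs.
    try:
        with open(path) as evt:
            lines = evt.read().splitlines()
    except IOError:
        return False
    return bool(lines) and lines[-1].strip() == "</LesHouchesEvents>"
```

Jobs whose file fails this check are the ones collected into `jobs_to_resubmit` above.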
2348 def find_jobs_to_split(self, pdir, job, arg):
2349@@ -3436,16 +3695,16 @@
2350 # the 'standard' amcatnlo job
2351 # check if args is a list of string
2352 if type(args[0]) == str:
2353- input_files, output_files, required_output, args = self.getIO_ajob(exe,cwd, args)
2354+ input_files, output_files, required_output, args = self.getIO_ajob(exe,cwd,args)
2355 #submitting
2356 self.cluster.submit2(exe, args, cwd=cwd,
2357 input_files=input_files, output_files=output_files,
2358 required_output=required_output)
2359
2360- # keep track of folders and arguments for splitted evt gen
2361- subfolder=output_files[-1].split('/')[0]
2362- if len(args) == 4 and '_' in subfolder:
2363- self.split_folders[pjoin(cwd,subfolder)] = [exe] + args
2364+# # keep track of folders and arguments for splitted evt gen
2365+# subfolder=output_files[-1].split('/')[0]
2366+# if len(args) == 4 and '_' in subfolder:
2367+# self.split_folders[pjoin(cwd,subfolder)] = [exe] + args
2368
2369 elif 'shower' in exe:
2370 # a shower job
2371@@ -3511,7 +3770,6 @@
2372 # use local disk if possible => need to stands what are the
2373 # input/output files
2374
2375- keep_fourth_arg = False
2376 output_files = []
2377 required_output = []
2378 input_files = [pjoin(self.me_dir, 'SubProcesses', 'randinit'),
2379@@ -3536,84 +3794,48 @@
2380 dereference=True)
2381 tf.add(pjoin(cwd,'MadLoop5_resources'),arcname='MadLoop5_resources')
2382 tf.close()
2383-
2384- Ire = re.compile("for i in ([\d\s]*) ; do")
2385- try :
2386- fsock = open(exe)
2387- except IOError:
2388- fsock = open(pjoin(cwd,exe))
2389- text = fsock.read()
2390- data = Ire.findall(text)
2391- subdir = ' '.join(data).split()
2392
2393- if args[0] == '0':
2394+ if args[1] == 'born' or args[1] == 'all':
2395 # MADEVENT MINT FO MODE
2396 input_files.append(pjoin(cwd, 'madevent_mintFO'))
2397- input_files.append(pjoin(self.me_dir, 'SubProcesses','madin.%s' % args[1]))
2398- #j=$2\_G$i
2399- for i in subdir:
2400- current = '%s_G%s' % (args[1],i)
2401- if os.path.exists(pjoin(cwd,current)):
2402- input_files.append(pjoin(cwd, current))
2403- output_files.append(current)
2404+ if args[2] == '0':
2405+ current = '%s_G%s' % (args[1],args[0])
2406+ else:
2407+ current = '%s_G%s_%s' % (args[1],args[0],args[2])
2408+ if os.path.exists(pjoin(cwd,current)):
2409+ input_files.append(pjoin(cwd, current))
2410+ output_files.append(current)
2411
2412- required_output.append('%s/results.dat' % current)
2413- required_output.append('%s/log.txt' % current)
2414- required_output.append('%s/mint_grids' % current)
2415- required_output.append('%s/grid.MC_integer' % current)
2416- if len(args) == 4:
2417- required_output.append('%s/scale_pdf_dependence.dat' % current)
2418- args[2] = '-1'
2419- # use a grid train on another part
2420- base = '%s_G%s' % (args[3],i)
2421- if args[0] == '0':
2422- to_move = ['grid.MC_integer','mint_grids']
2423- elif args[0] == '1':
2424- to_move = ['mint_grids', 'grid.MC_integer']
2425- else:
2426- to_move = []
2427- if self.run_card['iappl'] == 2:
2428- for grid in glob.glob(pjoin(cwd,base,'grid_obs_*_in.root')):
2429- to_move.append(grid)
2430- if not os.path.exists(pjoin(cwd,current)):
2431- os.mkdir(pjoin(cwd,current))
2432- input_files.append(pjoin(cwd, current))
2433- for name in to_move:
2434- files.cp(pjoin(cwd,base, name),
2435- pjoin(cwd,current))
2436- files.cp(pjoin(cwd,base, 'grid.MC_integer'),
2437- pjoin(cwd,current))
2438+ required_output.append('%s/results.dat' % current)
2439+ required_output.append('%s/res_%s.dat' % (current,args[3]))
2440+ required_output.append('%s/log_MINT%s.txt' % (current,args[3]))
2441+ required_output.append('%s/mint_grids' % current)
2442+ required_output.append('%s/grid.MC_integer' % current)
2443+ if args[3] != '0':
2444+ required_output.append('%s/scale_pdf_dependence.dat' % current)
2445
2446- elif args[0] == '2':
2447+ elif args[1] == 'F' or args[1] == 'B':
2448 # MINTMC MODE
2449 input_files.append(pjoin(cwd, 'madevent_mintMC'))
2450- if args[2] in ['0','2']:
2451- input_files.append(pjoin(self.me_dir, 'SubProcesses','madinMMC_%s.2' % args[1]))
2452-
2453- for i in subdir:
2454- current = 'G%s%s' % (args[1], i)
2455- if os.path.exists(pjoin(cwd,current)):
2456- input_files.append(pjoin(cwd, current))
2457- output_files.append(current)
2458- if len(args) == 4 and args[3] in ['H','S','V','B','F']:
2459- # use a grid train on another part
2460- base = '%s_%s' % (args[3],i)
2461- files.ln(pjoin(cwd,base,'mint_grids'), name = 'preset_mint_grids',
2462- starting_dir=pjoin(cwd,current))
2463- files.ln(pjoin(cwd,base,'grid.MC_integer'),
2464- starting_dir=pjoin(cwd,current))
2465- elif len(args) ==4:
2466- keep_fourth_arg = True
2467- # this is for the split event generation
2468- output_files.append('G%s%s_%s' % (args[1], i, args[3]))
2469- required_output.append('G%s%s_%s/log_MINT%s.txt' % (args[1], i, args[3],args[2]))
2470-
2471- else:
2472- required_output.append('%s/log_MINT%s.txt' % (current,args[2]))
2473- if args[2] in ['0','1']:
2474- required_output.append('%s/results.dat' % current)
2475- if args[2] == '1':
2476- output_files.append('%s/results.dat' % current)
2477+
2478+ if args[2] == '0':
2479+ current = 'G%s%s' % (args[1],args[0])
2480+ else:
2481+ current = 'G%s%s_%s' % (args[1],args[0],args[2])
2482+ if os.path.exists(pjoin(cwd,current)):
2483+ input_files.append(pjoin(cwd, current))
2484+ output_files.append(current)
2485+ if args[2] > '0':
2486+ # this is for the split event generation
2487+ output_files.append('G%s%s_%s' % (args[1], args[0], args[2]))
2488+ required_output.append('G%s%s_%s/log_MINT%s.txt' % (args[1],args[0],args[2],args[3]))
2489+
2490+ else:
2491+ required_output.append('%s/log_MINT%s.txt' % (current,args[3]))
2492+ if args[3] in ['0','1']:
2493+ required_output.append('%s/results.dat' % current)
2494+ if args[3] == '1':
2495+ output_files.append('%s/results.dat' % current)
2496
2497 else:
2498 raise aMCatNLOError, 'not valid arguments: %s' %(', '.join(args))
2499@@ -3621,73 +3843,9 @@
2500 #Find the correct PDF input file
2501 pdfinput = self.get_pdf_input_filename()
2502 if os.path.exists(pdfinput):
2503- input_files.append(pdfinput)
2504-
2505- if len(args) == 4 and not keep_fourth_arg:
2506- args = args[:3]
2507-
2508+ input_files.append(pdfinput)
2509 return input_files, output_files, required_output, args
2510-
2511- def write_madinMMC_file(self, path, run_mode, mint_mode):
2512- """writes the madinMMC_?.2 file"""
2513- #check the validity of the arguments
2514- run_modes = ['born', 'virt', 'novi', 'all', 'viSB', 'novB']
2515- if run_mode not in run_modes:
2516- raise aMCatNLOError('%s is not a valid mode for run. Please use one of the following: %s' \
2517- % (run_mode, ', '.join(run_modes)))
2518- mint_modes = [0, 1, 2]
2519- if mint_mode not in mint_modes:
2520- raise aMCatNLOError('%s is not a valid mode for mintMC. Please use one of the following: %s' \
2521- % (mint_mode, ', '.join(mint_modes)))
2522- if run_mode in ['born']:
2523- name_suffix = 'B'
2524- elif run_mode in ['virt', 'viSB']:
2525- name_suffix = 'V'
2526- else:
2527- name_suffix = 'F'
2528-
2529- content = \
2530-"""-1 12 ! points, iterations
2531-0.03 ! desired fractional accuracy
2532-1 -0.1 ! alpha, beta for Gsoft
2533--1 -0.1 ! alpha, beta for Gazi
2534-1 ! Suppress amplitude (0 no, 1 yes)?
2535-1 ! Exact helicity sum (0 yes, n = number/event)?
2536-1 ! Enter Configuration Number:
2537-%1d ! MINT imode: 0 to set-up grids, 1 to perform integral, 2 generate events
2538-1 1 1 ! if imode is 1: Folding parameters for xi_i, phi_i and y_ij
2539-%s ! all, born, real, virt
2540-""" \
2541- % (mint_mode, run_mode)
2542- file = open(pjoin(path, 'madinMMC_%s.2' % name_suffix), 'w')
2543- file.write(content)
2544- file.close()
2545-
2546- def write_madin_file(self, path, run_mode, vegas_mode, npoints, niters, accuracy='0'):
2547- """writes the madin.run_mode file"""
2548- #check the validity of the arguments
2549- run_modes = ['born', 'virt', 'novi', 'all', 'viSB', 'novB', 'grid']
2550- if run_mode not in run_modes:
2551- raise aMCatNLOError('%s is not a valid mode for run. Please use one of the following: %s' \
2552- % (run_mode, ', '.join(run_modes)))
2553- name_suffix = run_mode
2554-
2555- content = \
2556-"""%s %s ! points, iterations
2557-%s ! accuracy
2558-2 ! 0 fixed grid 2 adjust
2559-1 ! 1 suppress amp, 0 doesnt
2560-1 ! 0 for exact hel sum
2561-1 ! hel configuration numb
2562-'test'
2563-1 ! 1 to save grids
2564-%s ! 0 to exclude, 1 for new run, 2 to restart, 3 to reset w/ keeping grid
2565-%s ! all, born, real, virt
2566-""" \
2567- % (npoints,niters,accuracy,vegas_mode,run_mode)
2568- file = open(pjoin(path, 'madin.%s' % name_suffix), 'w')
2569- file.write(content)
2570- file.close()
2571+
2572
2573 def compile(self, mode, options):
2574 """compiles aMC@NLO to compute either NLO or NLO matched to shower, as
2575@@ -3750,10 +3908,10 @@
2576
2577 self.link_lhapdf(libdir, [pjoin('SubProcesses', p) for p in p_dirs])
2578 pdfsetsdir = self.get_lhapdf_pdfsetsdir()
2579- lhaid_list = [int(self.run_card['lhaid'])]
2580+ lhaid_list = [self.run_card['lhaid']]
2581 if self.run_card['reweight_PDF']:
2582- lhaid_list.append(int(self.run_card['PDF_set_min']))
2583- lhaid_list.append(int(self.run_card['PDF_set_max']))
2584+ lhaid_list.append(self.run_card['PDF_set_min'])
2585+ lhaid_list.append(self.run_card['PDF_set_max'])
2586 self.copy_lhapdf_set(lhaid_list, pdfsetsdir)
2587
2588 else:
2589@@ -4292,9 +4450,9 @@
2590 if mode in ['LO','aMC@LO','noshowerLO']:
2591 self.run_name += '_LO'
2592 self.set_run_name(self.run_name, self.run_tag, 'parton')
2593- if int(self.run_card['ickkw']) == 3 and mode in ['LO', 'aMC@LO', 'noshowerLO']:
2594+ if self.run_card['ickkw'] == 3 and mode in ['LO', 'aMC@LO', 'noshowerLO']:
2595 raise self.InvalidCmd("""FxFx merging (ickkw=3) not allowed at LO""")
2596- elif int(self.run_card['ickkw']) == 3 and mode in ['aMC@NLO', 'noshower']:
2597+ elif self.run_card['ickkw'] == 3 and mode in ['aMC@NLO', 'noshower']:
2598 logger.warning("""You are running with FxFx merging enabled. To be able to merge
2599 samples of various multiplicities without double counting, you
2600 have to remove some events after showering 'by hand'. Please
2601@@ -4310,7 +4468,7 @@
2602 error = '''Stop opertation'''
2603 self.ask_run_configuration(mode, options)
2604 # raise aMCatNLOError(error)
2605- elif int(self.run_card['ickkw']) == -1 and mode in ['aMC@NLO', 'noshower']:
2606+ elif self.run_card['ickkw'] == -1 and mode in ['aMC@NLO', 'noshower']:
2607 # NNLL+NLO jet-veto only possible for LO event generation or fNLO runs.
2608 raise self.InvalidCmd("""NNLL+NLO jet veto runs (ickkw=-1) only possible for fNLO or LO.""")
2609 if 'aMC@' in mode or mode == 'onlyshower':
