Merge lp:~maddevelopers/mg5amcnlo/2.3.0_nopdftransfer into lp:~maddevelopers/mg5amcnlo/2.3
- 2.3.0_nopdftransfer
- Merge into 2.3
Status: | Merged |
---|---|
Merged at revision: | 313 |
Proposed branch: | lp:~maddevelopers/mg5amcnlo/2.3.0_nopdftransfer |
Merge into: | lp:~maddevelopers/mg5amcnlo/2.3 |
Diff against target: |
798 lines (+217/-168) 22 files modified
Template/LO/Source/PDF/pdfwrap_lhapdf.f (+0/-68)
Template/LO/SubProcesses/refine.sh (+5/-0)
Template/LO/SubProcesses/refine_splitted.sh (+4/-0)
Template/LO/SubProcesses/survey.sh (+5/-0)
Template/NLO/Source/PDF/opendata.f (+0/-69)
Template/NLO/SubProcesses/ajob_template (+5/-0)
Template/NLO/SubProcesses/reweight_xsec_events.local (+5/-0)
UpdateNotes.txt (+6/-2)
VERSION (+3/-2)
input/.mg5_configuration_default.txt (+4/-0)
madgraph/interface/common_run_interface.py (+69/-11)
madgraph/interface/madevent_interface.py (+3/-1)
madgraph/interface/madgraph_interface.py (+4/-1)
madgraph/iolibs/export_fks.py (+9/-0)
madgraph/iolibs/export_v4.py (+75/-4)
madgraph/iolibs/template_files/madevent_combine_events.f (+1/-1)
madgraph/iolibs/template_files/madevent_makefile_source (+3/-3)
madgraph/iolibs/template_files/pdf_opendata.f (+5/-2)
madgraph/iolibs/template_files/pdf_wrap_lhapdf.f (+2/-0)
madgraph/madevent/gen_crossxhtml.py (+6/-3)
madgraph/various/lhe_parser.py (+1/-1)
madgraph/various/process_checks.py (+2/-0)
To merge this branch: | bzr merge lp:~maddevelopers/mg5amcnlo/2.3.0_nopdftransfer |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
marco zaro | Approve | ||
Review via email: mp+252820@code.launchpad.net |
This proposal supersedes a proposal from 2015-03-12.
Commit message
Description of the change
marco zaro (marco-zaro) wrote : Posted in a previous version of this proposal | # |
Olivier Mattelaer (olivier-mattelaer) wrote : Posted in a previous version of this proposal | # |
Hi Marco,
For the first point, what can we do?
Looks like your pdf set is not part of the ones installed on each node.
Download them in a different directory? (is that possible?)
For the second point, this is not a bug; this is a feature. The information from one config file is
NEVER passed to the one of madevent/aMC@NLO. The madevent/aMC@NLO interface loads BOTH files (in order).
This allows less ambiguity.
The configuration file reading order is
0. default value in python
1. ${MADGRAPH_
2. ~/.mg5/
3. ./Cards/
4. input/mg5_
5. ./Cards/
The fact that cards 3 and 5 are the same is required, since the path to card 4 is present inside file 3.
Cheers,
Olivier
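A minimal sketch (not the actual MG5 code) of the layered configuration read described above: each file is read in order and later values override earlier ones, which is why both interfaces can load BOTH files without ambiguity. The file format shown ('name = value' with '#' comments) matches the mg5_configuration files.

```python
import os

def load_config(paths, defaults):
    """Read 'name = value' config files in order; later files win."""
    options = dict(defaults)          # step 0: default values in python
    for path in paths:
        if not os.path.exists(path):  # missing layers are simply skipped
            continue
        with open(path) as config_file:
            for line in config_file:
                line = line.split('#', 1)[0].strip()   # drop comments
                if '=' not in line:
                    continue
                name, value = (p.strip() for p in line.split('=', 1))
                options[name] = value
    return options
```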
marco zaro (marco-zaro) wrote : Posted in a previous version of this proposal | # |
Hi Olivier,
the problem is that even for files that are there, it tries and fails to download them:
ls /cvmfs/
CT10/ list.txt MSTW2008nlo68cl/ MSTW2008nnlo68c
CT10nlo/ MSTW2008lo68cl/ MSTW2008nlo68cl
CT10nnlo/ MSTW2008lo68cl_nf4/ MSTW2008nnlo68cl/
Command "launch NLO -c" interrupted with error:
MadGraph5Error : Could not download MSTW2008nlo68cl into /cvmfs/
Please report this bug on https:/
More information is found in '/nfs/scratch/
Please attach this file to your report.
Olivier Mattelaer (olivier-mattelaer) wrote : Posted in a previous version of this proposal | # |
Ok, thanks for finding this. The problem was linked to the fact that I never use the "-c" option, since I always play with the configuration file.
Thanks a lot,
Olivier
Olivier Mattelaer (olivier-mattelaer) wrote : Posted in a previous version of this proposal | # |
I just updated the target destination for this merge.
In the same line of idea as this merge, I have (in loop-induced mode) reduced the number of files that are sent back to the /nfs/ location. Those are mainly log/grid files which were not used anyway, and were therefore a waste of resources to keep.
marco zaro (marco-zaro) wrote : Posted in a previous version of this proposal | # |
Ciao Olivier,
are you sure that you want to merge 2.3 into the splittedrefine?
Just to be sure...
Cheers,
Marco
Marco Zaro
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
I did not realise that it was possible to change both the origin and the destination of a merge request. Sorry, I changed the wrong one.
Olivier
marco zaro (marco-zaro) wrote : | # |
Ciao Olivier,
it does not work on ingrid.
I have tried both with the default PDF set (224600) and with mstw2008nlo (21000), and it fails after resubmitting the job a couple of times.
The error in the logs is always that the PDF file is not found:
==== LHAPDF6 USING DEFAULT-TYPE LHAGLUE INTERFACE ====
terminate called after throwing an instance of 'LHAPDF::ReadError'
what(): Info file not found for PDF set 'MSTW2008nlo68cl'
Time in seconds: 1
Cheers,
Marco
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
Hi Marco,
Thanks for checking. This is really weird, since both Valentin and I are using this branch a lot for producing the tables of our loop-induced paper.
So I have checked again today with a lot of different configurations, and from a fully fresh version,
and they all succeed.
So I have come to suspect a configuration problem.
The main difference between your configuration and mine is that you set in your ~/.bashrc
LHAPATH=
I checked that this variable was correctly overwritten by the python code, and the environment variables associated with the condor job correctly indicate:
LHAPATH = /cvmfs/
lhapdf_config = /cvmfs/
But if I ask the bash script to print that variable, it returns:
LHAPATH = /home/fynu/
So it looks like the LHAPATH variable is overwritten by the node in some way.
Do you have any idea why this is the case? I do not.
I guess that the ~/.bashrc is read in a way that I do not understand, so I am not sure what I can do in this case.
Cheers,
Olivier
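A toy illustration of the suspected mechanism (all paths are placeholders, and a throwaway rc file stands in for the node's ~/.bashrc): when the job's shell re-sources an rc file on the node, it clobbers the LHAPATH that the submitter exported.

```shell
#!/bin/bash
rcfile=$(mktemp)
echo 'export LHAPATH=/home/placeholder/old_pdfsets' > "$rcfile"

export LHAPATH=/cvmfs/placeholder/pdfsets   # what the python code sets
# simulate the node's wrapper sourcing the rc file before running the job
seen=$(bash -c ". '$rcfile'; echo \$LHAPATH")
echo "job sees: $seen"
rm -f "$rcfile"
```

The value the job sees is the one from the rc file, not the one exported by the submitter, which matches the behaviour reported above.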
- 476. By Olivier Mattelaer: Try to set lhapath internally to fortran to fix Marco configuration issue
- 477. By Olivier Mattelaer: revert last try since the function is not supported on linux (it is on mac)
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
So I tried to set the environment variable at the fortran level.
But it fails, since the associated routine is only defined on mac/windows...
So I see two solutions:
1) warn the user if LHAPATH is defined and this mode is on (unless LHAPATH already points to the correct path)
2) save the correct value in a "madgraph"-specific environment variable (like CLUSTER_LHAPATH), and change all our bash scripts to start with:
if [ $CLUSTER_LHAPATH ]; then
    export LHAPATH=$CLUSTER_LHAPATH
fi
The second solution is not nice, but it should work.
What do you think Marco?
Cheers,
Olivier
marco zaro (marco-zaro) wrote : | # |
Ciao Olivier,
thanks for checking. The second solution should be ok, and this should set the variable only on the cluster node, right?
btw, what is the lifetime of these variables? with export, do they remain forever?
I will have a look at how the branch works when I comment out the lhapath in my .bashrc.
btw, how shall it work if I want to use one pdf set which is not among those Jerome has pre-installed?
Cheers,
Marco
- 478. By Olivier Mattelaer: apply the fix to force the node to use the correct LHAPATH
- 479. By Olivier Mattelaer: remove the dependencies in lhapdf in combine_events (test)
- 480. By Olivier Mattelaer: add the protection for NLO reweighting
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
Hi Marco,
Ok, I have implemented the fix that I was describing above.
I have checked that it is working for
MadEvent
LoopInduced
aMC@NLO
fixed-order NLO
(by defining a wrong LHAPATH in my .bashrc)
If some bash scripts are not covered, this is anyway not critical, since it is enough to remove the LHAPATH definition from the .bashrc.
Please accept the review as soon as possible; I would really like to make this available (hoping that it will reduce the ingrid latency).
Cheers,
Olivier
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
I forgot to answer your questions:
> thanks for checking, the second solution should be ok, and this should set the variable only on the cluster node, right?
> btw, what is the lifetime of these variables? with export they remain forever, right?
This is going to be contained on the node and only for the process in progress, but it is accessible by all the code launched by the bash script.
> how shall it work if I want to use one pdf set which is not among those Jerome has pre-installed?
If you use the version of lhapdf installed by Jerome, this will certainly crash, since you cannot write to the correct path.
Otherwise I'm not sure; to be honest, I do not care too much about that case.
At worst, this means that the user either has to use their own version of lhapdf or ask Jerome to add an additional pdf set.
Cheers,
Olivier
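A small sketch of the lifetime answer above (the path is a placeholder): an exported variable is inherited by every process launched from the job's bash script, but it lives only as long as that shell; nothing persists on the node afterwards.

```shell
#!/bin/bash
export CLUSTER_LHAPATH=/cvmfs/placeholder/lhapdf/pdfsets
child_sees=$(bash -c 'echo "$CLUSTER_LHAPATH"')   # children inherit it
echo "child sees: $child_sees"
```

Once the job's shell exits, the variable is gone; a fresh login on the same node would not see it.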
marco zaro (marco-zaro) wrote : | # |
Ciao Olivier,
great, NLO and aMC@NLO+shower modes are working now on ingrid (I have commented out the LHAPATH setting in my .bashrc).
Just one thing: this is printed on the screen
INFO: Using LHAPDF v6.1.5 interface for PDFs
DEBUG: [Errno 30] Read-only file system: '/cvmfs/
DEBUG: [Errno 30] Read-only file system: '/cvmfs/
/cvmfs/
The info/debug statements are OK, but the last line (/cvmfs/
Thanks a lot!
Cheers,
Marco
Olivier Mattelaer (olivier-mattelaer) wrote : | # |
Hi Marco,
I do not find the printout in the code (and do not see it). I think that this is my fault; I might have added it when I was using your login.
Thanks,
Olivier
Preview Diff
1 | === removed file 'Template/LO/Source/PDF/pdfwrap_lhapdf.f' |
2 | --- Template/LO/Source/PDF/pdfwrap_lhapdf.f 2012-11-07 05:57:53 +0000 |
3 | +++ Template/LO/Source/PDF/pdfwrap_lhapdf.f 1970-01-01 00:00:00 +0000 |
4 | @@ -1,68 +0,0 @@ |
5 | - subroutine pdfwrap |
6 | - implicit none |
7 | -C |
8 | -C INCLUDE |
9 | -C |
10 | - include 'pdf.inc' |
11 | - include '../alfas.inc' |
12 | - real*8 zmass |
13 | - data zmass/91.188d0/ |
14 | - Character*150 LHAPath |
15 | - character*20 parm(20) |
16 | - double precision value(20) |
17 | - real*8 alphasPDF |
18 | - external alphasPDF |
19 | - |
20 | - |
21 | -c------------------- |
22 | -c START THE CODE |
23 | -c------------------- |
24 | - |
25 | -c initialize the pdf set |
26 | - call FindPDFPath(LHAPath) |
27 | - CALL SetPDFPath(LHAPath) |
28 | - value(1)=lhaid |
29 | - parm(1)='DEFAULT' |
30 | - call pdfset(parm,value) |
31 | - call GetOrderAs(nloop) |
32 | - nloop=nloop+1 |
33 | - asmz=alphasPDF(zmass) |
34 | - |
35 | - return |
36 | - end |
37 | - |
38 | - |
39 | - subroutine FindPDFPath(LHAPath) |
40 | -c******************************************************************** |
41 | -c generic subroutine to open the table files in the right directories |
42 | -c******************************************************************** |
43 | - implicit none |
44 | -c |
45 | - Character LHAPath*150,up*3 |
46 | - data up/'../'/ |
47 | - logical exists |
48 | - integer i |
49 | - |
50 | -c first try in the current directory |
51 | - LHAPath='PDFsets' |
52 | - Inquire(File=LHAPath, exist=exists) |
53 | - if(exists)return |
54 | -c then try one directory up |
55 | - LHAPath=up//LHAPath |
56 | - Inquire(File=LHAPath, exist=exists) |
57 | - if(exists)return |
58 | -c finally try in the lib directory |
59 | - LHAPath='lib/PDFsets' |
60 | - Inquire(File=LHAPath, exist=exists) |
61 | - if(exists)return |
62 | - do i=1,6 |
63 | - LHAPath=up//LHAPath |
64 | - Inquire(File=LHAPath, exist=exists) |
65 | - if(exists)return |
66 | - enddo |
67 | - print*,'Could not find PDFsets directory, quitting' |
68 | - stop 1 |
69 | - |
70 | - return |
71 | - end |
72 | - |
73 | |
74 | === modified file 'Template/LO/SubProcesses/refine.sh' |
75 | --- Template/LO/SubProcesses/refine.sh 2015-03-09 21:04:25 +0000 |
76 | +++ Template/LO/SubProcesses/refine.sh 2015-03-19 03:16:56 +0000 |
77 | @@ -1,5 +1,10 @@ |
78 | #!/bin/bash |
79 | |
80 | +# For support of LHAPATH in cluster mode |
81 | +if [ $CLUSTER_LHAPATH ]; then |
82 | + export LHAPATH=$CLUSTER_LHAPATH; |
83 | +fi |
84 | + |
85 | if [[ -e MadLoop5_resources.tar.gz && ! -e MadLoop5_resources ]]; then |
86 | tar -xzf MadLoop5_resources.tar.gz |
87 | fi |
88 | |
89 | === modified file 'Template/LO/SubProcesses/refine_splitted.sh' |
90 | --- Template/LO/SubProcesses/refine_splitted.sh 2015-03-11 19:19:19 +0000 |
91 | +++ Template/LO/SubProcesses/refine_splitted.sh 2015-03-19 03:16:56 +0000 |
92 | @@ -1,5 +1,9 @@ |
93 | #!/bin/bash |
94 | |
95 | +# For support of LHAPATH in cluster mode |
96 | +if [ $CLUSTER_LHAPATH ]; then |
97 | + export LHAPATH=$CLUSTER_LHAPATH; |
98 | +fi |
99 | if [[ -e MadLoop5_resources.tar.gz && ! -e MadLoop5_resources ]]; then |
100 | tar -xzf MadLoop5_resources.tar.gz |
101 | fi |
102 | |
103 | === modified file 'Template/LO/SubProcesses/survey.sh' |
104 | --- Template/LO/SubProcesses/survey.sh 2015-03-11 19:19:19 +0000 |
105 | +++ Template/LO/SubProcesses/survey.sh 2015-03-19 03:16:56 +0000 |
106 | @@ -1,5 +1,10 @@ |
107 | #!/bin/bash |
108 | |
109 | +# For support of LHAPATH in cluster mode |
110 | +if [ $CLUSTER_LHAPATH ]; then |
111 | + export LHAPATH=$CLUSTER_LHAPATH; |
112 | +fi |
113 | + |
114 | if [[ -e MadLoop5_resources.tar.gz && ! -e MadLoop5_resources ]]; then |
115 | tar -xzf MadLoop5_resources.tar.gz; |
116 | fi |
117 | |
118 | === removed file 'Template/NLO/Source/PDF/opendata.f' |
119 | --- Template/NLO/Source/PDF/opendata.f 2014-05-16 08:09:51 +0000 |
120 | +++ Template/NLO/Source/PDF/opendata.f 1970-01-01 00:00:00 +0000 |
121 | @@ -1,69 +0,0 @@ |
122 | - Integer Function NextUnopen() |
123 | -c******************************************************************** |
124 | -C Returns an unallocated FORTRAN i/o unit. |
125 | -c******************************************************************** |
126 | - |
127 | - Logical EX |
128 | -C |
129 | - Do 10 N = 10, 300 |
130 | - INQUIRE (UNIT=N, OPENED=EX) |
131 | - If (.NOT. EX) then |
132 | - NextUnopen = N |
133 | - Return |
134 | - Endif |
135 | - 10 Continue |
136 | - Stop ' There is no available I/O unit. ' |
137 | -C ************************* |
138 | - End |
139 | - |
140 | - |
141 | - |
142 | - subroutine OpenData(Tablefile) |
143 | -c******************************************************************** |
144 | -c generic subroutine to open the table files in the right directories |
145 | -c******************************************************************** |
146 | - implicit none |
147 | -c |
148 | - Character Tablefile*(*),up*3,lib*4,dir*8,tempname*100 |
149 | - data up,lib,dir/'../','lib/','Pdfdata/'/ |
150 | - Integer IU,NextUnopen,i |
151 | - External NextUnopen |
152 | - common/IU/IU |
153 | -c |
154 | -c-- start |
155 | -c |
156 | - IU=NextUnopen() |
157 | - |
158 | -c first try in the current directory |
159 | - |
160 | - tempname=Tablefile |
161 | - open(IU,file=tempname,status='old',ERR=10) |
162 | - return |
163 | - |
164 | - 10 tempname=up//Tablefile |
165 | - open(IU,file=tempname,status='old',ERR=20) |
166 | - return |
167 | - |
168 | - 20 tempname=dir//Tablefile |
169 | - open(IU,file=tempname,status='old',ERR=30) |
170 | - return |
171 | - |
172 | - 30 tempname=lib//tempname |
173 | - open(IU,file=tempname,status='old',ERR=40) |
174 | - |
175 | - 40 continue |
176 | - do i=0,6 |
177 | - open(IU,file=tempname,status='old',ERR=50) |
178 | - return |
179 | - 50 tempname=up//tempname |
180 | - if (i.eq.6)then |
181 | - write(*,*) 'Error: PDF file ',Tablefile,' not found' |
182 | - stop |
183 | - endif |
184 | - enddo |
185 | - |
186 | - print*,'table for the pdf NOT found!!!' |
187 | - |
188 | - return |
189 | - end |
190 | - |
191 | |
192 | === modified file 'Template/NLO/SubProcesses/ajob_template' |
193 | --- Template/NLO/SubProcesses/ajob_template 2014-10-31 11:06:18 +0000 |
194 | +++ Template/NLO/SubProcesses/ajob_template 2015-03-19 03:16:56 +0000 |
195 | @@ -7,6 +7,11 @@ |
196 | fi |
197 | } |
198 | |
199 | +#Force LHAPATH to be set correctly on cluster |
200 | +if [ $CLUSTER_LHAPATH ]; then |
201 | + export LHAPATH=$CLUSTER_LHAPATH; |
202 | +fi |
203 | + |
204 | tarCounter=0 |
205 | while [[ (-f MadLoop5_resources.tar.gz) && (! -f MadLoop5_resources/HelConfigs.dat) && ($tarCounter < 10) ]]; do |
206 | if [[ $tarCounter > 0 ]]; then |
207 | |
208 | === modified file 'Template/NLO/SubProcesses/reweight_xsec_events.local' |
209 | --- Template/NLO/SubProcesses/reweight_xsec_events.local 2012-12-02 22:17:23 +0000 |
210 | +++ Template/NLO/SubProcesses/reweight_xsec_events.local 2015-03-19 03:16:56 +0000 |
211 | @@ -3,6 +3,11 @@ |
212 | event_file=$1 |
213 | save_wgts=$2 |
214 | |
215 | +# For support of LHAPATH in cluster mode |
216 | +if [ $CLUSTER_LHAPATH ]; then |
217 | + export LHAPATH=$CLUSTER_LHAPATH; |
218 | +fi |
219 | + |
220 | if [[ -e ./reweight_xsec_events ]] |
221 | then |
222 | (echo $event_file; echo $save_wgts) | ./reweight_xsec_events > reweight_xsec_events.output |
223 | |
224 | === modified file 'UpdateNotes.txt' |
225 | --- UpdateNotes.txt 2015-03-12 08:12:39 +0000 |
226 | +++ UpdateNotes.txt 2015-03-19 03:16:56 +0000 |
227 | @@ -1,13 +1,13 @@ |
228 | Update notes for MadGraph5_aMC@NLO (in reverse time order) |
229 | |
230 | -2.3.0(XX/XX/XX) OM+VH: Adding the possibility to compute cross-section for loop-induced process |
231 | +2.3.0(XX/XX/XX) OM+VH: Adding the possibility to compute cross-section/generate events for loop-induced process |
232 | JB+OM: Addign matchbox output for matching in the Matchbox framework |
233 | OM: New MultiCore class with better thread support |
234 | OM+VH: Change the handling of the run_card. |
235 | - The default value depends now of your running process |
236 | - cut_decays is now on False by default |
237 | - nhel can only take 0/1 value. 1 is a real MC over helicity (with importance sampling) |
238 | - - use_syst is set on by default (but for matching) |
239 | + - use_syst is set on by default (but for matching where it is keep off) |
240 | OM: Cuts are also applied for 1>N processes (but the default run_card doesn't have any cut). |
241 | RF: Fixed a bug in the aMCfast/APPLGrid interface introduced in version 2.2.3 |
242 | RF: Fixed a bug in the setting of the integration grids (LO process generation) for the minimum |
243 | @@ -20,6 +20,10 @@ |
244 | MZ+RF: Added 'LOonly' asNLO mode to export processes without any real and virtuals |
245 | (useful e.g. for higher multiplicities when merging) |
246 | RF: Added support for the computation of NLO+NNLL jet veto cross sections |
247 | + OM: Possibility to not transfer pdf file to the node for each job. |
248 | + This is done via a new option (cluster_local_path) which should contain the pdf set. |
249 | + This path is intented to point to a node specific filesystem. |
250 | + |
251 | |
252 | 2.2.3(10/02/15) RF: Re-factoring of the structure of the code for fNLO computations. |
253 | OM: Fix a bug in MadWeight (correlated param_card was not creating the correct input file) |
254 | |
255 | === modified file 'VERSION' |
256 | --- VERSION 2015-02-10 02:53:57 +0000 |
257 | +++ VERSION 2015-03-19 03:16:56 +0000 |
258 | @@ -1,5 +1,6 @@ |
259 | -version = 2.2.3 |
260 | -date = 2015-02-10 |
261 | +version = 2.3.0.beta |
262 | +date = 2015-03-01 |
263 | + |
264 | |
265 | |
266 | |
267 | |
268 | === modified file 'input/.mg5_configuration_default.txt' |
269 | --- input/.mg5_configuration_default.txt 2015-02-09 03:11:19 +0000 |
270 | +++ input/.mg5_configuration_default.txt 2015-03-19 03:16:56 +0000 |
271 | @@ -99,6 +99,10 @@ |
272 | #! options didn't modify condor cluster) |
273 | # cluster_temp_path = None |
274 | |
275 | +#! path to a node directory where local file can be found (typically pdf) |
276 | +#! to avoid to send them to the node (if cluster_temp_path is on True or condor) |
277 | +# cluster_local_path = /cvmfs/cp3.uclouvain.be/madgraph/ |
278 | + |
279 | #! Cluster waiting time for status update |
280 | #! First number is when the number of waiting job is higher than the number |
281 | #! of running one (time in second). The second number is in the second case. |
282 | |
283 | === modified file 'madgraph/interface/common_run_interface.py' |
284 | --- madgraph/interface/common_run_interface.py 2015-03-09 04:06:53 +0000 |
285 | +++ madgraph/interface/common_run_interface.py 2015-03-19 03:16:56 +0000 |
286 | @@ -482,6 +482,7 @@ |
287 | 'cluster_type': 'condor', |
288 | 'cluster_status_update': (600, 30), |
289 | 'cluster_nb_retry':1, |
290 | + 'cluster_local_path': "/cvmfs/cp3.uclouvain.be/madgraph/", |
291 | 'cluster_retry_wait':300} |
292 | |
293 | options_madgraph= {'stdout_level':None} |
294 | @@ -1286,6 +1287,25 @@ |
295 | def get_pdf_input_filename(self): |
296 | """return the name of the file which is used by the pdfset""" |
297 | |
298 | + if self.options["cluster_local_path"] and self.options['run_mode'] ==1: |
299 | + # no need to transfer the pdf. |
300 | + return '' |
301 | + |
302 | + def check_cluster(path): |
303 | + if not self.options["cluster_local_path"] or self.options['run_mode'] !=1: |
304 | + return path |
305 | + main = self.options["cluster_local_path"] |
306 | + if os.path.isfile(path): |
307 | + filename = os.path.basename(path) |
308 | + possible_path = [pjoin(main, filename), |
309 | + pjoin(main, "lhadpf", filename), |
310 | + pjoin(main, "Pdfdata", filename)] |
311 | + if any(os.path.exists(p) for p in possible_path): |
312 | + return " " |
313 | + else: |
314 | + return path |
315 | + |
316 | + |
317 | if hasattr(self, 'pdffile') and self.pdffile: |
318 | return self.pdffile |
319 | else: |
320 | @@ -1294,11 +1314,15 @@ |
321 | if len(data) < 4: |
322 | continue |
323 | if data[1].lower() == self.run_card['pdlabel'].lower(): |
324 | - self.pdffile = pjoin(self.me_dir, 'lib', 'Pdfdata', data[2]) |
325 | + self.pdffile = check_cluster(pjoin(self.me_dir, 'lib', 'Pdfdata', data[2])) |
326 | return self.pdffile |
327 | else: |
328 | # possible when using lhapdf |
329 | - self.pdffile = pjoin(self.me_dir, 'lib', 'PDFsets') |
330 | + path = pjoin(self.me_dir, 'lib', 'PDFsets') |
331 | + if os.path.exists(path): |
332 | + self.pdffile = path |
333 | + else: |
334 | + self.pdffile = " " |
335 | return self.pdffile |
336 | |
337 | def do_quit(self, line): |
338 | @@ -1457,6 +1481,7 @@ |
339 | """change the way to submit job 0: single core, 1: cluster, 2: multicore""" |
340 | |
341 | self.cluster_mode = run_mode |
342 | + self.options['run_mode'] = run_mode |
343 | |
344 | if run_mode == 2: |
345 | if not self.options['nb_core']: |
346 | @@ -1472,13 +1497,14 @@ |
347 | self.cluster = cluster.MultiCore( |
348 | **self.options) |
349 | self.cluster.nb_core = nb_core |
350 | - #cluster_temp_path=self.options['cluster_temp_path'], |
351 | + #cluster_temp_path=self.options['cluster_temp_path'], |
352 | |
353 | if self.cluster_mode == 1: |
354 | opt = self.options |
355 | cluster_name = opt['cluster_type'] |
356 | self.cluster = cluster.from_name[cluster_name](**opt) |
357 | |
358 | + |
359 | def check_param_card(self, path, run=True): |
360 | """Check that all the width are define in the param_card. |
361 | If some width are set on 'Auto', call the computation tools.""" |
362 | @@ -1639,6 +1665,7 @@ |
363 | # read the file and extract information |
364 | logger.info('load configuration from %s ' % config_file.name) |
365 | for line in config_file: |
366 | + |
367 | if '#' in line: |
368 | line = line.split('#',1)[0] |
369 | line = line.replace('\n','').replace('\r\n','') |
370 | @@ -1649,7 +1676,7 @@ |
371 | else: |
372 | name = name.strip() |
373 | value = value.strip() |
374 | - if name.endswith('_path'): |
375 | + if name.endswith('_path') and not name.startswith('cluster'): |
376 | path = value |
377 | if os.path.isdir(path): |
378 | self.options[name] = os.path.realpath(path) |
379 | @@ -1668,12 +1695,11 @@ |
380 | if not final: |
381 | return self.options # the return is usefull for unittest |
382 | |
383 | - |
384 | # Treat each expected input |
385 | # delphes/pythia/... path |
386 | for key in self.options: |
387 | # Final cross check for the path |
388 | - if key.endswith('path'): |
389 | + if key.endswith('path') and not key.startswith("cluster"): |
390 | path = self.options[key] |
391 | if path is None: |
392 | continue |
393 | @@ -1692,7 +1718,7 @@ |
394 | self.options[key] = None |
395 | elif key.startswith('cluster') and key != 'cluster_status_update': |
396 | if key in ('cluster_nb_retry','cluster_wait_retry'): |
397 | - self.options[key] = int(self.options[key]) |
398 | + self.options[key] = int(self.options[key]) |
399 | if hasattr(self,'cluster'): |
400 | del self.cluster |
401 | pass |
402 | @@ -1709,9 +1735,7 @@ |
403 | % key) |
404 | |
405 | # Configure the way to open a file: |
406 | - misc.open_file.configure(self.options) |
407 | self.configure_run_mode(self.options['run_mode']) |
408 | - |
409 | return self.options |
410 | |
411 | @staticmethod |
412 | @@ -1980,9 +2004,44 @@ |
413 | os.mkdir(pdfsets_dir) |
414 | except OSError: |
415 | pdfsets_dir = pjoin(self.me_dir, 'lib', 'PDFsets') |
416 | + else: |
417 | + #clean previous set of pdf used |
418 | + for name in os.listdir(pdfsets_dir): |
419 | + if name != pdfsetname: |
420 | + try: |
421 | + if os.path.isdir(pjoin(pdfsets_dir, name)): |
422 | + shutil.rmtree(pjoin(pdfsets_dir, name)) |
423 | + else: |
424 | + os.remove(pjoin(pdfsets_dir, name)) |
425 | + except Exception, error: |
426 | + logger.debug('%s', error) |
427 | + |
428 | + lhapdf_cluster_possibilities = [self.options["cluster_local_path"], |
429 | + pjoin(self.options["cluster_local_path"], "lhapdf"), |
430 | + pjoin(self.options["cluster_local_path"], "lhapdf", "pdfsets"), |
431 | + pjoin(self.options["cluster_local_path"], "..", "lhapdf"), |
432 | + pjoin(self.options["cluster_local_path"], "..", "lhapdf", "pdfsets"), |
433 | + pjoin(self.options["cluster_local_path"], "..", "lhapdf","pdfsets", "6.1") |
434 | + ] |
435 | + |
436 | + # Check if we need to copy the pdf |
437 | + if self.options["cluster_local_path"] and self.options["run_mode"] == 1 and \ |
438 | + any((os.path.exists(pjoin(d, pdfsetname)) for d in lhapdf_cluster_possibilities)): |
439 | |
440 | + os.environ["LHAPATH"] = [d for d in lhapdf_cluster_possibilities if os.path.exists(pjoin(d, pdfsetname))][0] |
441 | + os.environ["CLUSTER_LHAPATH"] = os.environ["LHAPATH"] |
442 | + # no need to copy it |
443 | + if os.path.exists(pjoin(pdfsets_dir, pdfsetname)): |
444 | + try: |
445 | + if os.path.isdir(pjoin(pdfsets_dir, name)): |
446 | + shutil.rmtree(pjoin(pdfsets_dir, name)) |
447 | + else: |
448 | + os.remove(pjoin(pdfsets_dir, name)) |
449 | + except Exception, error: |
450 | + logger.debug('%s', error) |
451 | + |
452 | #check that the pdfset is not already there |
453 | - if not os.path.exists(pjoin(self.me_dir, 'lib', 'PDFsets', pdfsetname)) and \ |
454 | + elif not os.path.exists(pjoin(self.me_dir, 'lib', 'PDFsets', pdfsetname)) and \ |
455 | not os.path.isdir(pjoin(self.me_dir, 'lib', 'PDFsets', pdfsetname)): |
456 | |
457 | if pdfsetname and not os.path.exists(pjoin(pdfsets_dir, pdfsetname)): |
458 | @@ -1993,7 +2052,6 @@ |
459 | elif os.path.exists(pjoin(os.path.dirname(pdfsets_dir), pdfsetname)): |
460 | files.cp(pjoin(os.path.dirname(pdfsets_dir), pdfsetname), pjoin(self.me_dir, 'lib', 'PDFsets')) |
461 | |
462 | - |
463 | def install_lhapdf_pdfset(self, pdfsets_dir, filename): |
464 | """idownloads and install the pdfset filename in the pdfsets_dir""" |
465 | lhapdf_version = self.get_lhapdf_version() |
466 | |
467 | === modified file 'madgraph/interface/madevent_interface.py' |
468 | --- madgraph/interface/madevent_interface.py 2015-03-12 23:21:23 +0000 |
469 | +++ madgraph/interface/madevent_interface.py 2015-03-19 03:16:56 +0000 |
470 | @@ -1653,15 +1653,17 @@ |
471 | |
472 | super(MadEventCmd,self).set_configuration(amcatnlo=amcatnlo, |
473 | final=final, **opt) |
474 | + |
475 | if not final: |
476 | return self.options # the return is usefull for unittest |
477 | |
478 | + |
479 | # Treat each expected input |
480 | # delphes/pythia/... path |
481 | # ONLY the ONE LINKED TO Madevent ONLY!!! |
482 | for key in (k for k in self.options if k.endswith('path')): |
483 | path = self.options[key] |
484 | - if path is None: |
485 | + if path is None or key.startswith("cluster"): |
486 | continue |
487 | if not os.path.isdir(path): |
488 | path = pjoin(self.me_dir, self.options[key]) |
489 | |
490 | === modified file 'madgraph/interface/madgraph_interface.py' |
491 | --- madgraph/interface/madgraph_interface.py 2015-03-11 09:51:21 +0000 |
492 | +++ madgraph/interface/madgraph_interface.py 2015-03-19 03:16:56 +0000 |
493 | @@ -2470,6 +2470,7 @@ |
494 | 'applgrid':'applgrid-config', |
495 | 'amcfast':'amcfast-config', |
496 | 'cluster_temp_path':None, |
497 | + 'cluster_local_path': '/cvmfs/cp3.uclouvain.be/madgraph/', |
498 | 'OLP': 'MadLoop', |
499 | 'cluster_nb_retry':1, |
500 | 'cluster_retry_wait':300, |
501 | @@ -5212,7 +5213,6 @@ |
502 | self.history.append('set %s %s' % (key, self.options[key])) |
503 | # Configure the way to open a file: |
504 | launch_ext.open_file.configure(self.options) |
505 | - |
506 | return self.options |
507 | |
508 | def check_for_export_dir(self, filepath): |
509 | @@ -5744,6 +5744,9 @@ |
510 | 'cluster_retry_wait', 'cluster_size']: |
511 | self.options[args[0]] = int(args[1]) |
512 | |
513 | + elif args[0] in ['cluster_local_path']: |
514 | + self.options[args[0]] = args[1].strip() |
515 | + |
516 | elif args[0] == 'cluster_status_update': |
517 | if '(' in args[1]: |
518 | data = ' '.join([a for a in args[1:] if not a.startswith('-')]) |
519 | |
520 | === modified file 'madgraph/iolibs/export_fks.py' |
521 | --- madgraph/iolibs/export_fks.py 2015-03-12 08:12:39 +0000 |
522 | +++ madgraph/iolibs/export_fks.py 2015-03-19 03:16:56 +0000 |
523 | @@ -73,6 +73,7 @@ |
524 | dir_path = self.dir_path |
525 | clean =self.opt['clean'] |
526 | |
527 | + |
528 | #First copy the full template tree if dir_path doesn't exit |
529 | if not os.path.isdir(dir_path): |
530 | if not mgme_dir: |
531 | @@ -191,6 +192,9 @@ |
532 | # Copy the different python files in the Template |
533 | self.copy_python_files() |
534 | |
535 | + # We need to create the correct open_data for the pdf |
536 | + self.write_pdf_opendata() |
537 | + |
538 | # I put it here not in optimized one, because I want to use the same makefile_loop.inc |
539 | # Also, we overload this function (i.e. it is already defined in |
540 | # LoopProcessExporterFortranSA) because the path of the template makefile |
541 | @@ -3151,6 +3155,11 @@ |
542 | |
543 | self.copy_python_files() |
544 | |
545 | + |
546 | + # We need to create the correct open_data for the pdf |
547 | + self.write_pdf_opendata() |
548 | + |
549 | + |
550 | # Return to original PWD |
551 | os.chdir(cwd) |
552 | |
553 | |
554 | === modified file 'madgraph/iolibs/export_v4.py' |
555 | --- madgraph/iolibs/export_v4.py 2015-03-05 00:14:16 +0000 |
556 | +++ madgraph/iolibs/export_v4.py 2015-03-19 03:16:56 +0000 |
557 | @@ -229,6 +229,11 @@ |
558 | files.cp(pjoin(MG5DIR,'vendor', 'DiscreteSampler', 'StringCast.f'), |
559 | pjoin(self.dir_path, 'Source')) |
560 | |
561 | + # We need to create the correct open_data for the pdf |
562 | + self.write_pdf_opendata() |
563 | + |
564 | + |
565 | + |
566 | |
567 | #=========================================================================== |
568 | # write a procdef_mg5 (an equivalent of the MG4 proc_card.dat) |
569 | @@ -298,6 +303,67 @@ |
570 | pass |
571 | |
572 | #=========================================================================== |
573 | + # write_pdf_opendata |
574 | + #=========================================================================== |
575 | + def write_pdf_opendata(self): |
576 | + """ modify the pdf opendata file, to allow direct access to cluster node |
577 | + repository if configure""" |
578 | + |
579 | + if not self.opt["cluster_local_path"]: |
580 | + changer = {"pdf_systemwide": ""} |
581 | + else: |
582 | + to_add = """ |
583 | + tempname='%(path)s'//Tablefile |
584 | + open(IU,file=tempname,status='old',ERR=1) |
585 | + return |
586 | + 1 tempname='%(path)s/Pdfdata/'//Tablefile |
587 | + open(IU,file=tempname,status='old',ERR=2) |
588 | + return |
589 | + 2 tempname='%(path)s/lhapdf'//Tablefile |
590 | + open(IU,file=tempname,status='old',ERR=3) |
591 | + return |
592 | + 3 tempname='%(path)s/../lhapdf/pdfsets/'//Tablefile |
593 | + open(IU,file=tempname,status='old',ERR=4) |
594 | + return |
595 | + 4 tempname='%(path)s/../lhapdf/pdfsets/6.1/'//Tablefile |
596 | + open(IU,file=tempname,status='old',ERR=10) |
597 | + return |
598 | + """ % {"path" : self.opt["cluster_local_path"]} |
599 | + |
600 | + changer = {"pdf_systemwide": to_add} |
601 | + |
602 | + ff = open(pjoin(self.dir_path, "Source", "PDF", "opendata.f"),"w") |
603 | + template = open(pjoin(MG5DIR, "madgraph", "iolibs", "template_files", "pdf_opendata.f"),"r").read() |
604 | + ff.write(template % changer) |
605 | + |
606 | + # Do the same for lhapdf set |
607 | + if not self.opt["cluster_local_path"]: |
608 | + changer = {"cluster_specific_path": ""} |
609 | + else: |
610 | + to_add=""" |
611 | + LHAPath='%(path)s/PDFsets' |
612 | + Inquire(File=LHAPath, exist=exists) |
613 | + if(exists)return |
614 | + LHAPath='%(path)s/../lhapdf/pdfsets/6.1/' |
615 | + Inquire(File=LHAPath, exist=exists) |
616 | + if(exists)return |
617 | + LHAPath='%(path)s/../lhapdf/pdfsets/' |
618 | + Inquire(File=LHAPath, exist=exists) |
619 | + if(exists)return |
620 | + LHAPath='./PDFsets' |
621 | + """ % {"path" : self.opt["cluster_local_path"]} |
622 | + changer = {"cluster_specific_path": to_add} |
623 | + |
624 | + ff = open(pjoin(self.dir_path, "Source", "PDF", "pdfwrap_lhapdf.f"),"w") |
625 | + template = open(pjoin(MG5DIR, "madgraph", "iolibs", "template_files", "pdf_wrap_lhapdf.f"),"r").read() |
626 | + ff.write(template % changer) |
627 | + |
628 | + |
629 | + return |
630 | + |
631 | + |
632 | + |
633 | + #=========================================================================== |
634 | # write_maxparticles_file |
635 | #=========================================================================== |
636 | def write_maxparticles_file(self, writer, matrix_elements): |
637 | @@ -5905,6 +5971,8 @@ |
638 | |
639 | group_subprocesses = cmd.options['group_subprocesses'] |
640 | |
641 | + opt = cmd.options |
642 | + |
643 | # First treat the MadLoop5 standalone case |
644 | MadLoop_SA_options = {'clean': not noclean, |
645 | 'complex_mass':cmd.options['complex_mass_scheme'], |
646 | @@ -5944,7 +6012,8 @@ |
647 | elif output_type=='amcatnlo': |
648 | import madgraph.iolibs.export_fks as export_fks |
649 | ExporterClass=None |
650 | - amcatnlo_options = MadLoop_SA_options |
651 | + amcatnlo_options = dict(opt) |
652 | + amcatnlo_options.update(MadLoop_SA_options) |
653 | amcatnlo_options['mp'] = len(cmd._fks_multi_proc.get_virt_amplitudes()) > 0 |
654 | if not cmd.options['loop_optimized_output']: |
655 | logger.info("Writing out the aMC@NLO code") |
656 | @@ -5968,19 +6037,21 @@ |
657 | |
658 | assert group_subprocesses in [True, False] |
659 | |
660 | - opt = {'clean': not noclean, |
661 | + opt = dict(opt) |
662 | + opt.update({'clean': not noclean, |
663 | 'complex_mass': cmd.options['complex_mass_scheme'], |
664 | 'export_format':cmd._export_format, |
665 | 'mp': False, |
666 | 'sa_symmetry':False, |
667 | - 'model': cmd._curr_model.get('name') } |
668 | + 'model': cmd._curr_model.get('name') }) |
669 | |
670 | format = cmd._export_format #shortcut |
671 | |
672 | if format in ['standalone_msP', 'standalone_msF', 'standalone_rw']: |
673 | opt['sa_symmetry'] = True |
674 | |
675 | - loop_induced_opt = MadLoop_SA_options |
676 | + loop_induced_opt = dict(opt) |
677 | + loop_induced_opt.update(MadLoop_SA_options) |
678 | loop_induced_opt['export_format'] = 'madloop_optimized' |
679 | loop_induced_opt['SubProc_prefix'] = 'PV' |
680 | # For loop_induced output with MadEvent, we must have access to the |
681 | |
682 | === modified file 'madgraph/iolibs/template_files/madevent_combine_events.f' |
683 | --- madgraph/iolibs/template_files/madevent_combine_events.f 2015-03-07 01:00:32 +0000 |
684 | +++ madgraph/iolibs/template_files/madevent_combine_events.f 2015-03-19 03:16:56 +0000 |
685 | @@ -87,7 +87,7 @@ |
686 | endif |
687 | c Get information for the <init> block |
688 | param_card_name = '%(param_card_name)s' |
689 | - call setrun |
690 | +c call setrun |
691 | |
692 | c nreq = 10000 |
693 | c |
694 | |
695 | === modified file 'madgraph/iolibs/template_files/madevent_makefile_source' |
696 | --- madgraph/iolibs/template_files/madevent_makefile_source 2015-02-24 23:12:53 +0000 |
697 | +++ madgraph/iolibs/template_files/madevent_makefile_source 2015-03-19 03:16:56 +0000 |
698 | @@ -17,7 +17,7 @@ |
699 | rw_events.o rw_routines.o kin_functions.o open_file.o basecode.o setrun.o \ |
700 | run_printout.o dgauss.o readgrid.o getissud.o |
701 | INCLUDEF= coupl.inc genps.inc hbook.inc DECAY/decay.inc psample.inc cluster.inc sudgrid.inc |
702 | -COMBINE = combine_events.o rw_events.o ranmar.o kin_functions.o open_file.o rw_routines.o alfas_functions.o setrun.o |
703 | +COMBINE = combine_events.o rw_events.o ranmar.o kin_functions.o open_file.o rw_routines.o |
704 | GENSUDGRID = gensudgrid.o is-sud.o setrun_gen.o rw_routines.o open_file.o |
705 | |
706 | # Locally compiled libraries |
707 | @@ -50,8 +50,8 @@ |
708 | |
709 | $(BINDIR)gen_ximprove: gen_ximprove.o ranmar.o rw_routines.o open_file.o |
710 | $(FC) $(FFLAGS) -o $@ $^ |
711 | -$(BINDIR)combine_events: $(COMBINE) $(LIBDIR)libmodel.$(libext) $(LIBDIR)libpdf.$(libext) run_card.inc |
712 | - $(FC) $(FFLAGS) -o $@ $(COMBINE) -L$(LIBDIR) -lmodel -lpdf $(lhapdf) |
713 | +$(BINDIR)combine_events: $(COMBINE) run_card.inc |
714 | + $(FC) $(FFLAGS) -o $@ $(COMBINE) |
715 | $(BINDIR)gensudgrid: $(GENSUDGRID) $(LIBDIR)libpdf.$(libext) $(LIBDIR)libcernlib.$(libext) |
716 | $(FC) $(FFLAGS) -o $@ $(GENSUDGRID) -L$(LIBDIR) -lmodel -lpdf -lcernlib $(lhapdf) |
717 | |
718 | |
719 | === renamed file 'Template/LO/Source/PDF/opendata.f' => 'madgraph/iolibs/template_files/pdf_opendata.f' |
720 | --- Template/LO/Source/PDF/opendata.f 2014-02-23 03:15:09 +0000 |
721 | +++ madgraph/iolibs/template_files/pdf_opendata.f 2015-03-19 03:16:56 +0000 |
722 | @@ -34,8 +34,11 @@ |
723 | c |
724 | IU=NextUnopen() |
725 | |
726 | -c first try in the current directory (for cluster use) |
727 | - tempname=Tablefile |
728 | +c First try system wide (for cluster if define) |
729 | + %(pdf_systemwide)s |
730 | + |
731 | +c Then try in the current directory (for cluster use) |
732 | + 5 tempname=Tablefile |
733 | open(IU,file=tempname,status='old',ERR=10) |
734 | return |
735 | |
736 | |
737 | === renamed file 'Template/NLO/Source/PDF/pdfwrap_lhapdf.f' => 'madgraph/iolibs/template_files/pdf_wrap_lhapdf.f' |
738 | --- Template/NLO/Source/PDF/pdfwrap_lhapdf.f 2012-11-07 02:20:18 +0000 |
739 | +++ madgraph/iolibs/template_files/pdf_wrap_lhapdf.f 2015-03-19 03:16:56 +0000 |
740 | @@ -43,10 +43,12 @@ |
741 | logical exists |
742 | integer i |
743 | |
744 | + |
745 | c first try in the current directory |
746 | LHAPath='./PDFsets' |
747 | Inquire(File=LHAPath, exist=exists) |
748 | if(exists)return |
749 | + %(cluster_specific_path)s |
750 | do i=1,6 |
751 | LHAPath=up//LHAPath |
752 | Inquire(File=LHAPath, exist=exists) |
753 | |
754 | === modified file 'madgraph/madevent/gen_crossxhtml.py' |
755 | --- madgraph/madevent/gen_crossxhtml.py 2015-03-06 18:09:36 +0000 |
756 | +++ madgraph/madevent/gen_crossxhtml.py 2015-03-19 03:16:56 +0000 |
757 | @@ -912,9 +912,12 @@ |
758 | elif exists(pjoin(self.me_dir, 'Events', self['run_name'], 'events.lhe')) or\ |
759 | exists(pjoin(self.me_dir, 'Events', self['run_name'], 'events.lhe.gz')): |
760 | link = './Events/%(run_name)s/events.lhe' |
761 | - level = 'parton' |
762 | - name = 'LHE' |
763 | - out += self.special_link(link, level, name) |
764 | + else: |
765 | + link = None |
766 | + if link: |
767 | + level = 'parton' |
768 | + name = 'LHE' |
769 | + out += self.special_link(link, level, name) |
770 | if 'root' in self.parton: |
771 | out += ' <a href="./Events/%(run_name)s/unweighted_events.root">rootfile</a>' |
772 | if 'plot' in self.parton: |
773 | |
774 | === modified file 'madgraph/various/lhe_parser.py' |
775 | --- madgraph/various/lhe_parser.py 2015-03-12 23:21:23 +0000 |
776 | +++ madgraph/various/lhe_parser.py 2015-03-19 03:16:56 +0000 |
777 | @@ -720,7 +720,7 @@ |
778 | try: |
779 | self.reweight_data = dict([(pid, float(value)) for (pid, value) in data |
780 | if not self.reweight_order.append(pid)]) |
781 | - # the if is to create the order file on the flight |
782 | + # the if is to create the order file on the flight |
783 | except ValueError, error: |
784 | raise Exception, 'Event File has unvalid weight. %s' % error |
785 | self.tag = self.tag[:start] + self.tag[stop+7:] |
786 | |
787 | === modified file 'madgraph/various/process_checks.py' |
788 | --- madgraph/various/process_checks.py 2015-03-10 23:52:32 +0000 |
789 | +++ madgraph/various/process_checks.py 2015-03-19 03:16:56 +0000 |
790 | @@ -718,6 +718,8 @@ |
791 | |
792 | for key, value in MLOptions.items(): |
793 | if key == "MLReductionLib": |
794 | + if isinstance(value, int): |
795 | + ml_reds = str(value) |
796 | if isinstance(value,list): |
797 | if len(value)==0: |
798 | ml_reds = '1' |
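The `write_pdf_opendata` hunk in `export_v4.py` above works by string-substituting a `%(pdf_systemwide)s` marker in a Fortran template: the marker is either blanked out or replaced with extra `open` statements pointing at the cluster-local repository. A minimal sketch of that substitution step (template text and path shortened for illustration; not the full MadGraph implementation):

```python
def fill_pdf_template(template, cluster_local_path):
    """Blank the marker, or splice in the cluster-specific lookup lines.

    Mirrors the pattern in write_pdf_opendata: an empty option leaves
    the template unchanged apart from removing the placeholder.
    """
    if not cluster_local_path:
        to_add = ""
    else:
        # Illustrative single fallback line; the real template adds a
        # whole chain of open(...,ERR=n) statements.
        to_add = "      tempname='%s'//Tablefile" % cluster_local_path
    return template % {"pdf_systemwide": to_add}


# Abbreviated stand-in for madgraph/iolibs/template_files/pdf_opendata.f
TEMPLATE = (
    "c First try system wide (for cluster if define)\n"
    "%(pdf_systemwide)s\n"
    "c Then try in the current directory (for cluster use)\n"
    " 5    tempname=Tablefile\n"
)

print(fill_pdf_template(TEMPLATE, "/cvmfs/cp3.uclouvain.be/madgraph/"))
```

With an empty `cluster_local_path` the generated `opendata.f` falls straight through to the current-directory lookup, which reproduces the old behaviour.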
Ciao Olivier,
2 things:
1) I get this error with the latest version (see at the end)...
2) in Cards/amcatnlo_configuration.txt the cluster_local_path is still commented...
Cheers,
Marco
INFO: Using LHAPDF v6.1.5 interface for PDFs
DEBUG: [Errno 30] Read-only file system: '/cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF/lhapdf.conf'
DEBUG: [Errno 30] Read-only file system: '/cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF/pdfsets.index'
INFO: Trying to download NNPDF23_nlo_as_0118_qed
ERROR: Could not write to /cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF/NNPDF23_nlo_as_0118_qed.tar.gz
ERROR: Could not write to /cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF/NNPDF23_nlo_as_0118_qed.tar.gz
ERROR: Could not write to /cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF/NNPDF23_nlo_as_0118_qed.tar.gz
Command "launch auto -c" interrupted with error:
MadGraph5Error : Could not download NNPDF23_nlo_as_0118_qed into /cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/share/LHAPDF. Please try to install it manually.
Please report this bug on https://bugs.launchpad.net/madgraph5
More information is found in '/nfs/scratch/fynu/mzaro/2.3.0_nopdftransfer/PROCNLO_loop_sm_1/run_01_tag_1_debug.log'.
Please attach this file to your report.
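The failure in the log above is that the code tries to download a PDF set into a read-only `/cvmfs` mount. A defensive check like the following (a sketch, not the actual MadGraph code; the candidate paths are illustrative) would detect the read-only install location up front and fall back to a user-writable directory instead of failing after three download attempts:

```python
import os

def writable_pdfset_dir(candidates):
    """Return the first existing directory we can actually write into,
    or None if every candidate is missing or read-only."""
    for path in candidates:
        if os.path.isdir(path) and os.access(path, os.W_OK):
            return path
    return None

# Hypothetical search order: system-wide /cvmfs copy first (read-only on
# cluster nodes), then a per-user fallback that is always writable.
dirs = ["/cvmfs/cp3.uclouvain.be/lhapdf/share/LHAPDF",
        os.path.expanduser("~/.lhapdf/pdfsets")]
target = writable_pdfset_dir(dirs)
```

`os.access(path, os.W_OK)` asks the OS the same question the failed `open` did, but before any download starts, so the error message can point the user at a usable location.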