Merge lp:~maddevelopers/mg5amcnlo/PY8_parallelization into lp:~maddevelopers/mg5amcnlo/2.5.1
Status: Superseded
Proposed branch: lp:~maddevelopers/mg5amcnlo/PY8_parallelization
Merge into: lp:~maddevelopers/mg5amcnlo/2.5.1
Diff against target: 848 lines (+446/-149), 5 files modified
- madgraph/interface/common_run_interface.py (+13/-1)
- madgraph/interface/madevent_interface.py (+368/-81)
- madgraph/various/banner.py (+13/-2)
- madgraph/various/histograms.py (+42/-62)
- madgraph/various/lhe_parser.py (+10/-3)
To merge this branch: bzr merge lp:~maddevelopers/mg5amcnlo/PY8_parallelization
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Olivier Mattelaer | | | Pending
Review via email: mp+304843@code.launchpad.net
This proposal has been superseded by a proposal from 2016-09-03.
Commit message
Description of the change
This branch implements PY8 LO parallelization.
It is fully functional and has been tested on a few cases (not yet exhaustively).
The remaining issues are:
1) [Not related to this branch] Systematics parallelization crashes when using a cluster.
2) [Not related to this branch] The PY8 HTML is screwed up past the first run/tag.
3) The .lhe splitting is slow. It would be nice to have an advanced function for this in lhe_parser.py that bypasses the (full) parsing of the event files.
Also, obtaining the number of events in the event file is slow. Again, an optimized static method bypassing the full parsing would be nice.
4) The merging of the split HEPMC files is done in a very efficient way in this branch (very important given their size). However, it uses two system calls which are not secure; they need to be made safe.
5) More testing is needed: in particular, the results of parallel and sequential runs (merged cross-sections, HwU plots and hepmc event files) must be compared so as to guarantee the correctness of the implementation.
6) The new bits of code in do_pythia8() could be refactored a bit more. In particular, the two parts of the code handling parallel submission and the merging of split results could be factored out into dedicated functions.
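Issue 3 above can in principle be addressed without a full parse. As a minimal sketch (the helper name `count_events` and its placement are assumptions for illustration, not code from this branch), counting events only requires scanning for the `<event` opening tags:

```python
import gzip

def count_events(path):
    """Count events in a (possibly gzipped) .lhe file by scanning for
    '<event' opening tags, without building any event objects."""
    # Hypothetical helper, not part of lhe_parser.py in this branch.
    opener = gzip.open if path.endswith('.gz') else open
    n_events = 0
    with opener(path, 'rt') as stream:
        for line in stream:
            # Each event block opens with an '<event' tag on its own line.
            if line.lstrip().startswith('<event'):
                n_events += 1
    return n_events
```

A splitting helper could follow the same pattern: copy raw lines to the current output chunk and switch chunks once the `</event>` closing tag has been seen the desired number of times.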
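For issue 4, the injection risk comes from building `sed`/`cat` command lines for `os.system`. A shell-free alternative could look like the sketch below (simplified with respect to the branch's header/tail handling; the `HepMC::...EVENT_LISTING` tag names are those of the HepMC2 ASCII format):

```python
def merge_hepmc(split_files, output_path):
    """Concatenate split HEPMC files into one, keeping the header lines of
    the first file and writing a single END tag at the very end.
    No shell is involved, so file names cannot inject commands."""
    # Hypothetical sketch, not the branch's actual implementation.
    with open(output_path, 'w') as out:
        end_tag = None
        for i, path in enumerate(split_files):
            with open(path) as src:
                for line in src:
                    if line.startswith('HepMC::'):
                        # Remember the END tag to emit it once, at the end;
                        # keep the other header lines of the first file only.
                        if 'END_EVENT_LISTING' in line:
                            end_tag = line
                        elif i == 0:
                            out.write(line)
                        continue
                    out.write(line)
        if end_tag is not None:
            out.write(end_tag)
```

This is slower than `sed` plus `cat` but safe; if the system-call route is kept, passing argument lists to `subprocess.call` (never a concatenated shell string) would remove the injection surface as well.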
Olivier, could you review this and already fix what you can?
If you manage to clean it all up, then don't hesitate to merge this into 2.5.1 (or even 2.5.0, since it is a nice feature and introduces some important bug fixes).
If there is still something that needs to be discussed with me, we will only be able to do so on Tuesday, since I will be mostly unavailable until then.
Thanks,
- 300. By Valentin Hirschi
-
1. Fixed a typo in an MA5 option.
2. Updated the parallelization of PY8 in the UpdateNotes.
Valentin Hirschi (valentin-hirschi) wrote:
- 301. By Olivier Mattelaer
-
faster parsing for splitting event/get number of events
- 302. By Olivier Mattelaer
-
also apply the bypass of parsing for systematics
- 303. By Valentin Hirschi
-
1. Fixed the sanity check of PY8 log file which was not ok with the parallelization.
- 304. By Valentin Hirschi
-
1. fixed an issue with the warning about failing PY8 log. (needed to close the log stream).
2. Sandboxed the HEPMC merging syscalls.
- 305. By Valentin Hirschi
-
1. Merged with latest version of 2.5.1
Unmerged revisions
- 305. By Valentin Hirschi
-
1. Merged with latest version of 2.5.1
- 304. By Valentin Hirschi
-
1. fixed an issue with the warning about failing PY8 log. (needed to close the log stream).
2. Sandboxed the HEPMC merging syscalls.
- 303. By Valentin Hirschi
-
1. Fixed the sanity check of PY8 log file which was not ok with the parallelization.
- 302. By Olivier Mattelaer
-
also apply the bypass of parsing for systematics
- 301. By Olivier Mattelaer
-
faster parsing for splitting event/get number of events
Preview Diff
1 | === modified file 'madgraph/interface/common_run_interface.py' |
2 | --- madgraph/interface/common_run_interface.py 2016-09-02 02:17:02 +0000 |
3 | +++ madgraph/interface/common_run_interface.py 2016-09-03 09:10:54 +0000 |
4 | @@ -3860,6 +3860,17 @@ |
5 | 'ilc': ['run_card lpp1 0', 'run_card lpp2 0', 'run_card ebeam1 %(0)s/2', 'run_card ebeam2 %(0)s/2'], |
6 | 'lcc':['run_card lpp1 1', 'run_card lpp2 1', 'run_card ebeam1 %(0)s*1000/2', 'run_card ebeam2 %(0)s*1000/2'], |
7 | 'fixed_scale': ['run_card fixed_fac_scale T', 'run_card fixed_ren_scale T', 'run_card scale %(0)s', 'run_card dsqrt_q2fact1 %(0)s' ,'run_card dsqrt_q2fact2 %(0)s'], |
8 | + 'simplepy8':['pythia8_card hadronlevel:all False', |
9 | + 'pythia8_card partonlevel:mpi False', |
10 | + 'pythia8_card BeamRemnants:primordialKT False', |
11 | + 'pythia8_card PartonLevel:Remnants False', |
12 | + 'pythia8_card Check:event False', |
13 | + 'pythia8_card TimeShower:QEDshowerByQ False', |
14 | + 'pythia8_card TimeShower:QEDshowerByL False', |
15 | + 'pythia8_card SpaceShower:QEDshowerByQ False', |
16 | + 'pythia8_card SpaceShower:QEDshowerByL False', |
17 | + 'pythia8_card PartonLevel:FSRinResonances False', |
18 | + 'pythia8_card ProcessLevel:resonanceDecays False'] |
19 | } |
20 | |
21 | special_shortcut_help = { |
22 | @@ -3873,7 +3884,8 @@ |
23 | ' 3 : means PDF for elastic photon emited from an electron', |
24 | 'lhc' : 'syntax: set lhc VALUE:\n Set for a proton-proton collision with that given center of mass energy (in TeV)', |
25 | 'lep' : 'syntax: set lep VALUE:\n Set for a electron-positron collision with that given center of mass energy (in GeV)', |
26 | - 'fixed_scale' : 'syntax: set fixed_scale VALUE:\n Set all scales to the give value (in GeV)', |
27 | + 'fixed_scale' : 'syntax: set fixed_scale VALUE:\n Set all scales to the give value (in GeV)', |
28 | + 'simplepy8' : 'syntax: Turn off non-perturbative slow features of PY8.' |
29 | } |
30 | |
31 | def load_default(self): |
32 | |
33 | === modified file 'madgraph/interface/madevent_interface.py' |
34 | --- madgraph/interface/madevent_interface.py 2016-09-01 20:55:35 +0000 |
35 | +++ madgraph/interface/madevent_interface.py 2016-09-03 09:10:54 +0000 |
36 | @@ -38,6 +38,7 @@ |
37 | import tarfile |
38 | import StringIO |
39 | import shutil |
40 | +import copy |
41 | import xml.dom.minidom as minidom |
42 | |
43 | try: |
44 | @@ -77,6 +78,8 @@ |
45 | import internal.sum_html as sum_html |
46 | import internal.combine_runs as combine_runs |
47 | import internal.lhe_parser as lhe_parser |
48 | + import internal.histograms as histograms |
49 | + from internal.files import ln |
50 | else: |
51 | # import from madgraph directory |
52 | MADEVENT = False |
53 | @@ -92,8 +95,10 @@ |
54 | import madgraph.various.misc as misc |
55 | import madgraph.madevent.combine_runs as combine_runs |
56 | import madgraph.various.lhe_parser as lhe_parser |
57 | + import madgraph.various.histograms as histograms |
58 | |
59 | - import models.check_param_card as check_param_card |
60 | + import models.check_param_card as check_param_card |
61 | + from madgraph.iolibs.files import ln |
62 | from madgraph import InvalidCmd, MadGraph5Error, MG5DIR, ReadWrite |
63 | |
64 | |
65 | @@ -3502,7 +3507,7 @@ |
66 | """ Setup the Pythia8 Run environment and card. In particular all the process and run specific parameters |
67 | of the card are automatically set here. This function returns the path where HEPMC events will be output, |
68 | if any.""" |
69 | - |
70 | + |
71 | HepMC_event_output = None |
72 | tag = self.run_tag |
73 | |
74 | @@ -3554,7 +3559,7 @@ |
75 | # only if it is not already user_set. |
76 | if PY8_Card['JetMatching:qCut']==-1.0: |
77 | PY8_Card.MadGraphSet('JetMatching:qCut',1.5*self.run_card['xqcut']) |
78 | - |
79 | + |
80 | if PY8_Card['JetMatching:qCut']<(1.5*self.run_card['xqcut']): |
81 | logger.error( |
82 | 'The MLM merging qCut parameter you chose (%f) is less than'%PY8_Card['JetMatching:qCut']+ |
83 | @@ -3568,6 +3573,7 @@ |
84 | # Automatically set qWeed to xqcut if not defined by the user. |
85 | if PY8_Card['SysCalc:qWeed']==-1.0: |
86 | PY8_Card.MadGraphSet('SysCalc:qWeed',self.run_card['xqcut']) |
87 | + |
88 | if PY8_Card['SysCalc:qCutList']=='auto': |
89 | if self.run_card['use_syst']: |
90 | if self.run_card['sys_matchscale']=='auto': |
91 | @@ -3580,7 +3586,7 @@ |
92 | if PY8_Card['JetMatching:qCut'] not in qCutList: |
93 | qCutList.append(PY8_Card['JetMatching:qCut']) |
94 | PY8_Card.MadGraphSet('SysCalc:qCutList', qCutList) |
95 | - |
96 | + |
97 | for scale in PY8_Card['SysCalc:qCutList']: |
98 | if scale<(1.5*self.run_card['xqcut']): |
99 | logger.error( |
100 | @@ -3728,12 +3734,11 @@ |
101 | self.options['automatic_html_opening'] = False |
102 | |
103 | if self.run_card['event_norm'] not in ['unit','average']: |
104 | - |
105 | logger.critical("Pythia8 does not support normalization to the sum. Not running Pythia8") |
106 | return |
107 | - #\n"+\ |
108 | - #"The normalisation of the hepmc output file will be wrong (i.e. non-standard).\n"+\ |
109 | - #"Please use 'event_norm = average' in the run_card to avoid this problem.") |
110 | + #\n"+\ |
111 | + #"The normalisation of the hepmc output file will be wrong (i.e. non-standard).\n"+\ |
112 | + #"Please use 'event_norm = average' in the run_card to avoid this problem.") |
113 | |
114 | # Update the banner with the pythia card |
115 | if not self.banner or len(self.banner) <=1: |
116 | @@ -3837,6 +3842,11 @@ |
117 | ( os.path.exists(HepMC_event_output) and \ |
118 | stat.S_ISFIFO(os.stat(HepMC_event_output).st_mode)) |
119 | startPY8timer = time.time() |
120 | + |
121 | + # Information that will be extracted from this PY8 run |
122 | + PY8_extracted_information={ 'sigma_m':None, 'Nacc':None, 'Ntry':None, |
123 | + 'cross_sections':{} } |
124 | + |
125 | if is_HepMC_output_fifo: |
126 | logger.info( |
127 | """Pythia8 is set to output HEPMC events to to a fifo file. |
128 | @@ -3853,17 +3863,294 @@ |
129 | %HepMC_event_output,'$MG:color:GREEN') |
130 | return |
131 | else: |
132 | - logger.info('Follow Pythia8 shower by running the '+ |
133 | - 'following command (in a separate terminal):\n tail -f %s'%pythia_log) |
134 | + if self.options['run_mode']!=0: |
135 | + # Start a parallelization instance (stored in self.cluster) |
136 | + self.configure_run_mode(self.options['run_mode']) |
137 | + if self.options['run_mode']==1: |
138 | + n_cores = max(self.options['cluster_size'],1) |
139 | + elif self.options['run_mode']==2: |
140 | + n_cores = max(self.cluster.nb_core,1) |
141 | + |
142 | + lhe_file_name = os.path.basename(PY8_Card.subruns[0]['Beams:LHEF']) |
143 | + lhe_file = lhe_parser.EventFile(pjoin(self.me_dir,'Events', |
144 | + self.run_name,PY8_Card.subruns[0]['Beams:LHEF'])) |
145 | + n_events = len(lhe_file) |
146 | |
147 | - ret_code = self.cluster.launch_and_wait(wrapper_path, |
148 | - argument= [], stdout= pythia_log, stderr=subprocess.STDOUT, |
149 | - cwd=pjoin(self.me_dir,'Events',self.run_name)) |
150 | - if ret_code != 0: |
151 | - raise self.InvalidCmd, 'Pythia8 shower interrupted with return'+\ |
152 | - ' code %d.\n'%ret_code+\ |
153 | - 'You can find more information in this log file:\n%s'%pythia_log |
154 | + # Implement a security to insure a minimum numbe of events per job |
155 | + if self.options['run_mode']==2: |
156 | + min_n_events_per_job = 100 |
157 | + elif self.options['run_mode']==1: |
158 | + min_n_events_per_job = 1000 |
159 | + min_n_core = n_events//min_n_events_per_job |
160 | + n_cores = min(min_n_core,n_cores) |
161 | + |
162 | + if self.options['run_mode']==0 or (self.options['run_mode']==2 and self.options['nb_core']==1): |
163 | + # No need for parallelization anymore |
164 | + self.cluster = None |
165 | + logger.info('Follow Pythia8 shower by running the '+ |
166 | + 'following command (in a separate terminal):\n tail -f %s'%pythia_log) |
167 | |
168 | + ret_code = self.cluster.launch_and_wait(wrapper_path, |
169 | + argument= [], stdout= pythia_log, stderr=subprocess.STDOUT, |
170 | + cwd=pjoin(self.me_dir,'Events',self.run_name)) |
171 | + if ret_code != 0: |
172 | + raise self.InvalidCmd, 'Pythia8 shower interrupted with return'+\ |
173 | + ' code %d.\n'%ret_code+\ |
174 | + 'You can find more information in this log file:\n%s'%pythia_log |
175 | + else: |
176 | + if self.run_card['event_norm']=='sum': |
177 | + logger.error("") |
178 | + logger.error("Either run in single core or change event_norm to 'average'.") |
179 | + raise InvalidCmd("Pythia8 parallelization with event_norm set to 'sum' is not supported." |
180 | + "Either run in single core or change event_norm to 'average'.") |
181 | + |
182 | + # Create the parallelization folder |
183 | + parallelization_dir = pjoin(self.me_dir,'Events',self.run_name,'PY8_parallelization') |
184 | + if os.path.isdir(parallelization_dir): |
185 | + shutil.rmtree(parallelization_dir) |
186 | + os.mkdir(parallelization_dir) |
187 | + # Copy what should be the now standalone executable for PY8 |
188 | + shutil.copy(pythia_main,parallelization_dir) |
189 | + # Add a safe card in parallelization |
190 | + ParallelPY8Card = copy.copy(PY8_Card) |
191 | + # Normalize the name of the HEPMCouput and lhe input |
192 | + if HepMC_event_output: |
193 | + ParallelPY8Card['HEPMCoutput:file']='events.hepmc' |
194 | + else: |
195 | + ParallelPY8Card['HEPMCoutput:file']='/dev/null' |
196 | + |
197 | + ParallelPY8Card.subruns[0].systemSet('Beams:LHEF','events.lhe.gz') |
198 | + ParallelPY8Card.write(pjoin(parallelization_dir,'PY8Card.dat'), |
199 | + pjoin(self.me_dir,'Cards','pythia8_card_default.dat'), |
200 | + direct_pythia_input=True) |
201 | + # Write the wrapper |
202 | + wrapper_path = pjoin(parallelization_dir,'run_PY8.sh') |
203 | + wrapper = open(wrapper_path,'w') |
204 | + if self.options['cluster_temp_path'] is None: |
205 | + exe_cmd = \ |
206 | +"""#!%s |
207 | +./%s >& PY8_log.txt |
208 | +""" |
209 | + else: |
210 | + exe_cmd = \ |
211 | +"""#!%s |
212 | +ln -s ./events_$1.lhe.gz ./events.lhe.gz |
213 | +./%s >& PY8_log.txt |
214 | +mkdir split_$1 |
215 | +if [ -f ./events.hepmc ]; |
216 | +then |
217 | + mv ./events.hepmc ./split_$1/ |
218 | +fi |
219 | +if [ -f ./pts.dat ]; |
220 | +then |
221 | + mv ./pts.dat ./split_$1/ |
222 | +fi |
223 | +if [ -f ./djrs.dat ]; |
224 | +then |
225 | + mv ./djrs.dat ./split_$1/ |
226 | +fi |
227 | +if [ -f ./PY8_log.txt ]; |
228 | +then |
229 | + mv ./PY8_log.txt ./split_$1/ |
230 | +fi |
231 | +tar -czf split_$1.tar.gz split_$1 |
232 | +""" |
233 | + exe_cmd = exe_cmd%(shell_exe,' '.join([os.path.basename(pythia_main),'PY8Card.dat'])) |
234 | + wrapper.write(exe_cmd) |
235 | + wrapper.close() |
236 | + # Set it as executable |
237 | + st = os.stat(wrapper_path) |
238 | + os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC) |
239 | + |
240 | + # Split the .lhe event file, create event partition |
241 | + partition=[n_events//n_cores]*n_cores |
242 | + for i in range(n_events%n_cores): |
243 | + partition[i] += 1 |
244 | + |
245 | + logger.info('Splitting .lhe event file for PY8 parallelization...') |
246 | + n_splits = lhe_file.split(partition=partition, cwd=parallelization_dir, zip=True) |
247 | + |
248 | + # Distribute the split events |
249 | + split_files = [] |
250 | + split_dirs = [] |
251 | + for split_id in range(n_splits): |
252 | + split_files.append('events_%s.lhe.gz'%split_id) |
253 | + split_dirs.append(pjoin(parallelization_dir,'split_%d'%split_id)) |
254 | + # Add the necessary run content |
255 | + shutil.move(pjoin(parallelization_dir,lhe_file.name+'_%d.lhe.gz'%split_id), |
256 | + pjoin(parallelization_dir,split_files[-1])) |
257 | + |
258 | + logger.info('Submitting Pythia8 jobs...') |
259 | + for i, split_file in enumerate(split_files): |
260 | + in_files = [pjoin(parallelization_dir,os.path.basename(pythia_main)), |
261 | + pjoin(parallelization_dir,'PY8Card.dat'), |
262 | + pjoin(parallelization_dir,split_file)] |
263 | + if self.options['cluster_temp_path'] is None: |
264 | + out_files = [] |
265 | + os.mkdir(pjoin(parallelization_dir,'split_%d'%i)) |
266 | + selected_cwd = pjoin(parallelization_dir,'split_%d'%i) |
267 | + for in_file in in_files+[pjoin(parallelization_dir,'run_PY8.sh')]: |
268 | + # Make sure to rename the split_file link from events_<x>.lhe.gz to events.lhe.gz |
269 | + if os.path.basename(in_file)==split_file: |
270 | + ln(in_file,selected_cwd,name='events.lhe.gz') |
271 | + else: |
272 | + ln(in_file,selected_cwd) |
273 | + in_files = [] |
274 | + else: |
275 | + out_files = ['split_%d.tar.gz'%i] |
276 | + selected_cwd = parallelization_dir |
277 | + self.cluster.submit2(wrapper_path, |
278 | + argument=[str(i)], cwd=selected_cwd, |
279 | + input_files=in_files, |
280 | + output_files=out_files, |
281 | + required_output=out_files) |
282 | + |
283 | + def wait_monitoring(Idle, Running, Done): |
284 | + if Idle+Running+Done == 0: |
285 | + return |
286 | + logger.info('Pythia8 shower jobs: %d Idle, %d Running, %d Done [%s]'\ |
287 | + %(Idle, Running, Done, misc.format_time(time.time() - startPY8timer))) |
288 | + self.cluster.wait(parallelization_dir,wait_monitoring) |
289 | + |
290 | + logger.info('Merging results from the split PY8 runs...') |
291 | + if self.options['cluster_temp_path']: |
292 | + # Decompressing the output |
293 | + for i, split_file in enumerate(split_files): |
294 | + misc.call(['tar','-xzf','split_%d.tar.gz'%i],cwd=parallelization_dir) |
295 | + os.remove(pjoin(parallelization_dir,'split_%d.tar.gz'%i)) |
296 | + |
297 | + # Now merge logs |
298 | + pythia_log_file = open(pythia_log,'w') |
299 | + |
300 | + n_added = 0 |
301 | + for split_dir in split_dirs: |
302 | + log_file = pjoin(split_dir,'PY8_log.txt') |
303 | + pythia_log_file.write('='*35+'\n') |
304 | + pythia_log_file.write(' -> Pythia8 log file for run %d <-'%i+'\n') |
305 | + pythia_log_file.write('='*35+'\n') |
306 | + pythia_log_file.write(open(log_file,'r').read()+'\n') |
307 | + if run_type in merged_run_types: |
308 | + sigma_m, Nacc, Ntry = self.parse_PY8_log_file(log_file) |
309 | + if any(elem is None for elem in [sigma_m, Nacc, Ntry]): |
310 | + continue |
311 | + n_added += 1 |
312 | + if PY8_extracted_information['sigma_m'] is None: |
313 | + PY8_extracted_information['sigma_m'] = sigma_m |
314 | + else: |
315 | + PY8_extracted_information['sigma_m'] += sigma_m |
316 | + if PY8_extracted_information['Nacc'] is None: |
317 | + PY8_extracted_information['Nacc'] = Nacc |
318 | + else: |
319 | + PY8_extracted_information['Nacc'] += Nacc |
320 | + if PY8_extracted_information['Ntry'] is None: |
321 | + PY8_extracted_information['Ntry'] = Ntry |
322 | + else: |
323 | + PY8_extracted_information['Ntry'] += Ntry |
324 | + # Normalize the values added |
325 | + if n_added>0: |
326 | + PY8_extracted_information['sigma_m'] /= float(n_added) |
327 | + |
328 | + # djr plots |
329 | + djr_HwU = None |
330 | + n_added = 0 |
331 | + for split_dir in split_dirs: |
332 | + djr_file = pjoin(split_dir,'djrs.dat') |
333 | + if not os.path.isfile(djr_file): |
334 | + continue |
335 | + xsecs = self.extract_cross_sections_from_DJR(djr_file) |
336 | + if len(xsecs)>0: |
337 | + n_added += 1 |
338 | + if len(PY8_extracted_information['cross_sections'])==0: |
339 | + PY8_extracted_information['cross_sections'] = xsecs |
340 | + # Square the error term |
341 | + for key in PY8_extracted_information['cross_sections']: |
342 | + PY8_extracted_information['cross_sections'][key][1] = \ |
343 | + PY8_extracted_information['cross_sections'][key][1]**2 |
344 | + else: |
345 | + for key, value in xsecs.items(): |
346 | + PY8_extracted_information['cross_sections'][key][0] += value[0] |
347 | + # Add error in quadrature |
348 | + PY8_extracted_information['cross_sections'][key][1] += value[1]**2 |
349 | + new_djr_HwU = histograms.HwUList(djr_file,run_id=0) |
350 | + if djr_HwU is None: |
351 | + djr_HwU = new_djr_HwU |
352 | + else: |
353 | + for i, hist in enumerate(djr_HwU): |
354 | + djr_HwU[i] = hist + new_djr_HwU[i] |
355 | + if not djr_HwU is None: |
356 | + djr_HwU.output(pjoin(self.me_dir,'Events',self.run_name,'djrs'),format='HwU') |
357 | + shutil.move(pjoin(self.me_dir,'Events',self.run_name,'djrs.HwU'), |
358 | + pjoin(self.me_dir,'Events',self.run_name,'%s_djrs.dat'%tag)) |
359 | + if n_added>0: |
360 | + for key in PY8_extracted_information['cross_sections']: |
361 | + PY8_extracted_information['cross_sections'][key][0] /= float(n_added) |
362 | + PY8_extracted_information['cross_sections'][key][1] = \ |
363 | + math.sqrt(PY8_extracted_information['cross_sections'][key][1])/float(n_added) |
364 | + |
365 | + # pts plots |
366 | + pts_HwU = None |
367 | + for split_dir in split_dirs: |
368 | + pts_file = pjoin(split_dir,'pts.dat') |
369 | + if not os.path.isfile(pts_file): |
370 | + continue |
371 | + new_pts_HwU = histograms.HwUList(pts_file,run_id=0) |
372 | + if pts_HwU is None: |
373 | + pts_HwU = new_pts_HwU |
374 | + else: |
375 | + for i, hist in enumerate(pts_HwU): |
376 | + pts_HwU[i] = hist + new_pts_HwU[i] |
377 | + if not pts_HwU is None: |
378 | + pts_HwU.output(pjoin(self.me_dir,'Events',self.run_name,'pts'),format='HwU') |
379 | + shutil.move(pjoin(self.me_dir,'Events',self.run_name,'pts.HwU'), |
380 | + pjoin(self.me_dir,'Events',self.run_name,'%s_pts.dat'%tag)) |
381 | + |
382 | + # HepMC events now. |
383 | + all_hepmc_files = [] |
384 | + for split_dir in split_dirs: |
385 | + hepmc_file = pjoin(split_dir,'events.hepmc') |
386 | + if not os.path.isfile(hepmc_file): |
387 | + continue |
388 | + all_hepmc_files.append(hepmc_file) |
389 | + |
390 | + if len(all_hepmc_files)>0: |
391 | + hepmc_output = pjoin(self.me_dir,'Events',self.run_name,HepMC_event_output) |
392 | + with misc.TMP_directory() as tmp_dir: |
393 | + # Use system calls to quickly put these together |
394 | + header = open(pjoin(tmp_dir,'header.hepmc'),'w') |
395 | + n_head = 0 |
396 | + for line in open(all_hepmc_files[0],'r'): |
397 | + if not line.startswith('E'): |
398 | + n_head += 1 |
399 | + header.write(line) |
400 | + else: |
401 | + break |
402 | + header.close() |
403 | + tail = open(pjoin(tmp_dir,'tail.hepmc'),'w') |
404 | + n_tail = 0 |
405 | + for line in misc.BackRead(all_hepmc_files[-1]): |
406 | + if line.startswith('HepMC::'): |
407 | + n_tail += 1 |
408 | + tail.write(line) |
409 | + else: |
410 | + break |
411 | + tail.close() |
412 | + if n_tail>1: |
413 | + raise MadGraph5Error,'HEPMC files should only have one trailing command.' |
414 | + ###################################################################### |
415 | + # This is the most efficient way of putting together HEPMC's, *BUT* # |
416 | + # WARNING: NEED TO RENDER THE CODE BELOW SAFE TOWARDS INJECTION # |
417 | + ###################################################################### |
418 | + for hepmc_file in all_hepmc_files: |
419 | + # Remove in an efficient way the starting and trailing HEPMC tags |
420 | + os.system(' '.join(['sed','-i',"''","'%s;$d'"% |
421 | + (';'.join('%id'%(i+1) for i in range(n_head))),hepmc_file])) |
422 | + os.system(' '.join(['cat',pjoin(tmp_dir,'header.hepmc')]+all_hepmc_files+ |
423 | + [pjoin(tmp_dir,'tail.hepmc'),'>',hepmc_output])) |
424 | + |
425 | + # We are done with the parallelization directory. Clean it. |
426 | + if os.path.isdir(parallelization_dir): |
427 | + shutil.rmtree(parallelization_dir) |
428 | + |
429 | # Properly rename the djr and pts output if present. |
430 | djr_output = pjoin(self.me_dir,'Events', self.run_name, 'djrs.dat') |
431 | if os.path.isfile(djr_output): |
432 | @@ -3875,7 +4162,7 @@ |
433 | self.run_name, '%s_pts.dat' % tag)) |
434 | |
435 | if not os.path.isfile(pythia_log) or \ |
436 | - 'PYTHIA Abort' in '\n'.join(open(pythia_log,'r').readlines()[-20]): |
437 | + 'PYTHIA Abort' in '\n'.join(open(pythia_log,'r').readlines()[:-20]): |
438 | logger.warning('Fail to produce a pythia8 output. More info in \n %s'%pythia_log) |
439 | return |
440 | |
441 | @@ -3888,70 +4175,34 @@ |
442 | |
443 | # Study matched cross-sections |
444 | if run_type in merged_run_types: |
445 | - ##### |
446 | # From the log file |
447 | - ##### |
448 | - # read the line from the bottom of the file |
449 | - pythia_log = misc.BackRead(pjoin(self.me_dir,'Events', self.run_name, |
450 | - '%s_pythia8.log' % tag)) |
451 | - # The main89 driver should be modified so as to allow for easier parsing |
452 | - pythiare = re.compile("Les Houches User Process\(es\)\s*\d+\s*\|\s*(?P<tried>\d+)\s*(?P<selected>\d+)\s*(?P<generated>\d+)\s*\|\s*(?P<xsec>[\d\.e\-\+]+)\s*(?P<xsec_error>[\d\.e\-\+]+)") |
453 | - for line in pythia_log: |
454 | - info = pythiare.search(line) |
455 | - if not info: |
456 | - continue |
457 | - try: |
458 | - # Pythia cross section in mb, we want pb |
459 | - sigma_m = float(info.group('xsec')) *1e9 |
460 | - Nacc = int(info.group('generated')) |
461 | - Ntry = int(info.group('tried')) |
462 | - if Nacc==0: |
463 | - raise self.InvalidCmd, 'Pythia8 shower failed since it'+\ |
464 | - ' did not accept any event from the MG5aMC event file.'+\ |
465 | - 'You can find more information in this log file:\n%s'%pythia_log |
466 | - |
467 | - except ValueError: |
468 | - # xsec is not float - this should not happen |
469 | - self.results.add_detail('cross_pythia', 0) |
470 | - self.results.add_detail('nb_event_pythia', 0) |
471 | - self.results.add_detail('error_pythia', 0) |
472 | - else: |
473 | - self.results.add_detail('cross_pythia', sigma_m) |
474 | - self.results.add_detail('nb_event_pythia', Nacc) |
475 | - #compute pythia error |
476 | - error = self.results[self.run_name].return_tag(self.run_tag)['error'] |
477 | - try: |
478 | - error_m = math.sqrt((error * Nacc/Ntry)**2 + sigma_m**2 *(1-Nacc/Ntry)/Nacc) |
479 | - except ZeroDivisionError: |
480 | - # Cannot compute error |
481 | - error_m = -1.0 |
482 | - # works both for fixed number of generated events and fixed accepted events |
483 | - self.results.add_detail('error_pythia', error_m) |
484 | - break |
485 | - pythia_log.close() |
486 | + if all(PY8_extracted_information[_] is None for _ in ['sigma_m','Nacc','Ntry']): |
487 | + PY8_extracted_information['sigma_m'],PY8_extracted_information['Nacc'],\ |
488 | + PY8_extracted_information['Ntry'] = self.parse_PY8_log_file( |
489 | + pjoin(self.me_dir,'Events', self.run_name,'%s_pythia8.log' % tag)) |
490 | + |
491 | + if not any(PY8_extracted_information[_] is None for _ in ['sigma_m','Nacc','Ntry']): |
492 | + self.results.add_detail('cross_pythia', PY8_extracted_information['sigma_m']) |
493 | + self.results.add_detail('nb_event_pythia', PY8_extracted_information['Nacc']) |
494 | + # Compute pythia error |
495 | + error = self.results[self.run_name].return_tag(self.run_tag)['error'] |
496 | + try: |
497 | + error_m = math.sqrt((error * Nacc/Ntry)**2 + sigma_m**2 *(1-Nacc/Ntry)/Nacc) |
498 | + except ZeroDivisionError: |
499 | + # Cannot compute error |
500 | + error_m = -1.0 |
501 | + # works both for fixed number of generated events and fixed accepted events |
502 | + self.results.add_detail('error_pythia', error_m) |
503 | + |
504 | if self.run_card['use_syst']: |
505 | self.results.add_detail('cross_pythia', -1) |
506 | self.results.add_detail('error_pythia', 0) |
507 | - ##### |
508 | + |
509 | # From the djr file generated |
510 | - ##### |
511 | djr_output = pjoin(self.me_dir,'Events',self.run_name,'%s_djrs.dat'%tag) |
512 | - cross_sections = None |
513 | - if os.path.isfile(djr_output): |
514 | - run_nodes = minidom.parse(djr_output).getElementsByTagName("run") |
515 | - all_nodes = dict((int(node.getAttribute('id')),node) for |
516 | - node in run_nodes) |
517 | - try: |
518 | - selected_run_node = all_nodes[0] |
519 | - except: |
520 | - selected_run_node = None |
521 | - if selected_run_node: |
522 | - xsections = selected_run_node.getElementsByTagName("xsection") |
523 | - # We need to translate PY8's output in mb into pb |
524 | - cross_sections = dict((xsec.getAttribute('name'), |
525 | - (float(xsec.childNodes[0].data.split()[0])*1e9, |
526 | - float(xsec.childNodes[0].data.split()[1])*1e9)) |
527 | - for xsec in xsections) |
528 | + if os.path.isfile(djr_output) and len(PY8_extracted_information['cross_sections'])==0: |
529 | + PY8_extracted_information['cross_sections'] = self.extract_cross_sections_from_DJR(djr_output) |
530 | + cross_sections = PY8_extracted_information['cross_sections'] |
531 | if cross_sections: |
532 | # Filter the cross_sections specified an keep only the ones |
533 | # with central parameters and a different merging scale |
534 | @@ -3969,11 +4220,10 @@ |
535 | self.results.add_detail('cross_pythia8', cross_sections[central_scale][0]) |
536 | self.results.add_detail('error_pythia8', cross_sections[central_scale][1]) |
537 | |
538 | - #if len(cross_sections)>0: |
539 | - # logger.info('Pythia8 merged cross-sections are:') |
540 | - # for scale in sorted(cross_sections.keys()): |
541 | - # logger.info(' > Merging scale = %-6.4g : %-11.5g +/- %-7.2g [pb]'%\ |
542 | - # (scale,cross_sections[scale][0],cross_sections[scale][1])) |
543 | + #logger.info('Pythia8 merged cross-sections are:') |
544 | + #for scale in sorted(cross_sections.keys()): |
545 | + # logger.info(' > Merging scale = %-6.4g : %-11.5g +/- %-7.2g [pb]'%\ |
546 | + # (scale,cross_sections[scale][0],cross_sections[scale][1])) |
547 | |
548 | xsecs_file = open(pjoin(self.me_dir,'Events',self.run_name, |
549 | '%s_merged_xsecs.txt'%tag),'w') |
550 | @@ -4007,6 +4257,43 @@ |
551 | self.exec_cmd('delphes --no_default', postcmd=False, printcmd=False) |
552 | self.print_results_in_shell(self.results.current) |
553 | |
554 | + def parse_PY8_log_file(self, log_file_path): |
555 | + """ Parse a log file to extract number of event and cross-section. """ |
556 | + pythiare = re.compile("Les Houches User Process\(es\)\s*\d+\s*\|\s*(?P<tried>\d+)\s*(?P<selected>\d+)\s*(?P<generated>\d+)\s*\|\s*(?P<xsec>[\d\.e\-\+]+)\s*(?P<xsec_error>[\d\.e\-\+]+)") |
557 | + for line in misc.BackRead(log_file_path): |
558 | + info = pythiare.search(line) |
559 | + if not info: |
560 | + continue |
561 | + try: |
562 | + # Pythia cross section in mb, we want pb |
563 | + sigma_m = float(info.group('xsec')) *1e9 |
564 | + Nacc = int(info.group('generated')) |
565 | + Ntry = int(info.group('tried')) |
566 | + if Nacc==0: |
567 | + raise self.InvalidCmd, 'Pythia8 shower failed since it'+\ |
568 | + ' did not accept any event from the MG5aMC event file.' |
569 | + return sigma_m, Nacc, Ntry |
570 | + except ValueError: |
571 | + return None,None,None |
572 | + raise self.InvalidCmd, "Could not find cross-section and event number information "+\ |
573 | + "in Pythia8 log\n '%s'."%log_file_path |
574 | + |
575 | + def extract_cross_sections_from_DJR(self,djr_output): |
576 | + """Extract cross-sections from a djr XML output.""" |
577 | + run_nodes = minidom.parse(djr_output).getElementsByTagName("run") |
578 | + all_nodes = dict((int(node.getAttribute('id')),node) for |
579 | + node in run_nodes) |
580 | + try: |
581 | + selected_run_node = all_nodes[0] |
582 | + except: |
583 | + return {} |
584 | + xsections = selected_run_node.getElementsByTagName("xsection") |
585 | + # We need to translate PY8's output in mb into pb |
586 | + return dict((xsec.getAttribute('name'), |
587 | + [float(xsec.childNodes[0].data.split()[0])*1e9, |
588 | + float(xsec.childNodes[0].data.split()[1])*1e9]) |
589 | + for xsec in xsections) |
590 | + |
591 | def do_pythia(self, line): |
592 | """launch pythia""" |
593 | |
594 | |
595 | === modified file 'madgraph/various/banner.py' |
596 | --- madgraph/various/banner.py 2016-09-02 02:17:02 +0000 |
597 | +++ madgraph/various/banner.py 2016-09-03 09:10:54 +0000 |
598 | @@ -1483,6 +1483,17 @@ |
599 | self.add_param("Merging:Dparameter", 0.4, hidden=True, always_write_to_card=False) |
600 | self.add_param("Merging:doPTLundMerging", False, hidden=True, always_write_to_card=False) |
601 | |
602 | + # Special Pythia8 paremeters useful to simplify the shower. |
603 | + self.add_param("BeamRemnants:primordialKT", False, hidden=True, always_write_to_card=True) |
604 | + self.add_param("PartonLevel:Remnants", False, hidden=True, always_write_to_card=True) |
605 | + self.add_param("Check:event", False, hidden=True, always_write_to_card=True) |
606 | + self.add_param("TimeShower:QEDshowerByQ", False, hidden=True, always_write_to_card=True) |
607 | + self.add_param("TimeShower:QEDshowerByL", False, hidden=True, always_write_to_card=True) |
608 | + self.add_param("SpaceShower:QEDshowerByQ", False, hidden=True, always_write_to_card=True) |
609 | + self.add_param("SpaceShower:QEDshowerByL", False, hidden=True, always_write_to_card=True) |
610 | + self.add_param("PartonLevel:FSRinResonances", False, hidden=True, always_write_to_card=True) |
611 | + self.add_param("ProcessLevel:resonanceDecays", False, hidden=True, always_write_to_card=True) |
612 | + |
613 | # Add parameters controlling the subruns execution flow. |
614 | # These parameters should not be part of PY8SubRun daughter. |
615 | self.add_default_subruns('parameters') |
616 | @@ -1613,9 +1624,9 @@ |
617 | return "%s" % value |
618 | elif formatv == 'list': |
619 | if len(value) and isinstance(value[0],float): |
620 | - return ', '.join([PY8Card.pythia8_formatting(arg, 'shortfloat') for arg in value]) |
621 | + return ','.join([PY8Card.pythia8_formatting(arg, 'shortfloat') for arg in value]) |
622 | else: |
623 | - return ', '.join([PY8Card.pythia8_formatting(arg) for arg in value]) |
624 | + return ','.join([PY8Card.pythia8_formatting(arg) for arg in value]) |
625 | |
626 | |
627 | def write(self, output_file, template, read_subrun=False, |
628 | |
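The `banner.py` hunk above also tightens PY8 list formatting from `', '` to `','` separators. A minimal sketch of that join, where `%.4f` is a hypothetical stand-in for the card's actual `shortfloat` format:

```python
def pythia8_format_list(values):
    """Render a list-valued PY8 setting as comma-separated text with no
    spaces, mirroring the separator change in the diff above."""
    if values and isinstance(values[0], float):
        # '%.4f' is a placeholder for the real 'shortfloat' formatting.
        return ','.join('%.4f' % v for v in values)
    return ','.join(str(v) for v in values)
```

For example, `pythia8_format_list([1.0, 2.5])` yields `'1.0000,2.5000'` with no space after the comma.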
629 | === modified file 'madgraph/various/histograms.py' |
630 | --- madgraph/various/histograms.py 2016-08-31 03:10:04 +0000 |
631 | +++ madgraph/various/histograms.py 2016-09-03 09:10:54 +0000 |
632 | @@ -13,7 +13,6 @@ |
633 | # For more information, visit madgraph.phys.ucl.ac.be and amcatnlo.web.cern.ch |
634 | # |
635 | ################################################################################ |
636 | - |
637 | """Module for the handling of histograms, including Monte-Carlo error per bin |
638 | and scale/PDF uncertainties.""" |
639 | |
640 | @@ -681,7 +680,7 @@ |
641 | |
642 | |
643 | def __init__(self, file_path=None, weight_header=None, |
644 | - raw_labels=False, consider_reweights='ALL', **opts): |
645 | + raw_labels=False, consider_reweights='ALL', selected_central_weight=None, **opts): |
646 | """ Read one plot from a file_path or a stream. Notice that this |
647 | constructor only reads one, and the first one, of the plots specified. |
648 | If file_path was a path in argument, it would then close the opened stream. |
649 | @@ -711,7 +710,9 @@ |
650 | weight_header = HwU.parse_weight_header(stream, raw_labels=raw_labels) |
651 | |
652 | if not self.parse_one_histo_from_stream(stream, weight_header, |
653 | - consider_reweights=consider_reweights, raw_labels=raw_labels): |
654 | + consider_reweights=consider_reweights, |
655 | + selected_central_weight=selected_central_weight, |
656 | + raw_labels=raw_labels): |
657 | # Indicate that the initialization of the histogram was unsuccessful |
658 | # by setting the BinList property to None. |
659 | super(Histogram,self).__setattr__('bins',None) |
660 | @@ -1092,7 +1093,7 @@ |
661 | return ' '.join(res) |
662 | |
663 | def parse_one_histo_from_stream(self, stream, all_weight_header, |
664 | - consider_reweights='ALL', raw_labels=False): |
665 | + consider_reweights='ALL', raw_labels=False, selected_central_weight=None): |
666 | """ Reads *one* histogram from a stream, with the mandatory specification |
667 | of the ordered list of weight names. Return True or False depending |
668 | on whether the starting definition of a new plot could be found in this |
669 | @@ -1136,10 +1137,14 @@ |
670 | if j == len(all_weight_header): |
671 | + raise HwU.ParseError, "There are more bin weights"+\
672 | + " specified than expected (%i)"%len(all_weight_header) |
673 | + if selected_central_weight == all_weight_header[j]: |
674 | + bin_weights['central'] = float(weight.group('weight')) |
675 | if all_weight_header[j] == 'boundary_xmin': |
676 | boundaries[0] = float(weight.group('weight')) |
677 | elif all_weight_header[j] == 'boundary_xmax': |
678 | - boundaries[1] = float(weight.group('weight')) |
679 | + boundaries[1] = float(weight.group('weight')) |
680 | + elif all_weight_header[j] == 'central' and not selected_central_weight is None: |
681 | + continue |
682 | elif all_weight_header[j] in weight_header: |
683 | bin_weights[all_weight_header[j]] = \ |
684 | float(weight.group('weight')) |
685 | @@ -1439,6 +1444,14 @@ |
686 | # of the two new weight label added. |
687 | return (position,labels) |
688 | |
689 | + def select_central_weight(self, selected_label): |
690 | + """ Select a specific merging scale for the central value of this Histogram. """ |
691 | + if selected_label not in self.bins.weight_labels: |
692 | + raise MadGraph5Error, "Selected weight label '%s' could not be found in this HwU."%selected_label |
693 | + |
694 | + for bin in self.bins: |
695 | + bin.wgts['central']=bin.wgts[selected_label] |
696 | + |
697 | def rebin(self, n_rebin): |
698 | """ Rebin the x-axis so as to merge n_rebin consecutive bins into a |
699 | single one. """ |
700 | @@ -1602,7 +1615,7 @@ |
701 | |
702 | def __init__(self, file_path, weight_header=None, run_id=None, |
703 | merging_scale=None, accepted_types_order=[], consider_reweights='ALL', |
704 | - raw_labels=False, **opts): |
705 | + raw_labels=False, **opts): |
706 | """ Read one plot from a file_path or a stream. |
707 | This constructor reads all plots specified in target file. |
708 | File_path can be a path or a stream in the argument. |
709 | @@ -1630,7 +1643,7 @@ |
710 | self.parse_histos_from_PY8_XML_stream(stream, run_id, |
711 | merging_scale, accepted_types_order, |
712 | consider_reweights=consider_reweights, |
713 | - raw_labels=raw_labels) |
714 | + raw_labels=raw_labels) |
715 | except XMLParsingError: |
716 | # Rewinding the stream |
717 | stream.seek(0) |
718 | @@ -1638,19 +1651,34 @@ |
719 | if not weight_header: |
720 | weight_header = HwU.parse_weight_header(stream,raw_labels=raw_labels) |
721 | |
722 | + # Select a specific merging scale if asked for: |
723 | + selected_label = None |
724 | + if not merging_scale is None: |
725 | + for label in weight_header: |
726 | + if HwU.get_HwU_wgt_label_type(label)=='merging_scale': |
727 | + if float(label[1])==merging_scale: |
728 | + selected_label = label |
729 | + break |
730 | + if selected_label is None: |
731 | + raise MadGraph5Error, "No weight could be found in the input HwU "+\ |
732 | + "for the selected merging scale '%4.2f'."%merging_scale |
733 | + |
734 | new_histo = HwU(stream, weight_header,raw_labels=raw_labels, |
735 | - consider_reweights=consider_reweights) |
736 | + consider_reweights=consider_reweights, |
737 | + selected_central_weight=selected_label) |
738 | +# new_histo.select_central_weight(selected_label) |
739 | while not new_histo.bins is None: |
740 | if accepted_types_order==[] or \ |
741 | new_histo.type in accepted_types_order: |
742 | self.append(new_histo) |
743 | new_histo = HwU(stream, weight_header, raw_labels=raw_labels, |
744 | - consider_reweights=consider_reweights) |
745 | - |
746 | - if not run_id is None: |
747 | - logger.info("The run_id '%s' was specified, but "%run_id+ |
748 | - "format of the HwU plot source is the MG5aMC"+ |
749 | - " so that the run_id information is ignored.") |
750 | + consider_reweights=consider_reweights, |
751 | + selected_central_weight=selected_label) |
752 | + |
753 | + # if not run_id is None: |
754 | + # logger.debug("The run_id '%s' was specified, but "%run_id+ |
755 | + # "format of the HwU plot source is the MG5aMC"+ |
756 | + # " so that the run_id information is ignored.") |
757 | |
758 | # Order the histograms according to their type. |
759 | titles_order = [h.title for h in self] |
760 | @@ -3229,54 +3257,6 @@ |
761 | ################################################################################ |
762 | ## matplotlib related function |
763 | ################################################################################ |
764 | -######## Routine from https://gist.github.com/thriveth/8352565 |
765 | -######## To fill for histograms data in matplotlib |
766 | -def fill_between_steps(x, y1, y2=0, h_align='right', ax=None, **kwargs): |
767 | - ''' Fills a hole in matplotlib: fill_between for step plots. |
768 | - Parameters : |
769 | - ------------ |
770 | - x : array-like |
771 | - Array/vector of index values. These are assumed to be equally-spaced. |
772 | - If not, the result will probably look weird... |
773 | - y1 : array-like |
774 | - Array/vector of values to be filled under. |
775 | - y2 : array-Like |
776 | - Array/vector or bottom values for filled area. Default is 0. |
777 | - **kwargs will be passed to the matplotlib fill_between() function. |
778 | - ''' |
779 | - # If no Axes opject given, grab the current one: |
780 | - if ax is None: |
781 | - ax = plt.gca() |
782 | - |
783 | - |
784 | - # First, duplicate the x values |
785 | - #duplicate the info # xx = numpy.repeat(2)[1:] |
786 | - xx= []; [(xx.append(d),xx.append(d)) for d in x]; xx = xx[1:] |
787 | - # Now: the average x binwidth |
788 | - xstep = x[1] -x[0] |
789 | - # Now: add one step at end of row. |
790 | - xx.append(xx[-1] + xstep) |
791 | - |
792 | - # Make it possible to change step alignment. |
793 | - if h_align == 'mid': |
794 | - xx = [X-xstep/2. for X in xx] |
795 | - elif h_align == 'right': |
796 | - xx = [X-xstep for X in xx] |
797 | - |
798 | - # Also, duplicate each y coordinate in both arrays |
799 | - yy1 = []; [(yy1.append(d),yy1.append(d)) for d in y1] |
800 | - if isinstance(y1, list): |
801 | - yy2 = []; [(yy2.append(d),yy2.append(d)) for d in y2] |
802 | - else: |
803 | - yy2=y2 |
804 | - |
805 | - # now to the plotting part: |
806 | - ax.fill_between(xx, yy1, y2=yy2, **kwargs) |
807 | - |
808 | - return ax |
809 | -######## end routine from https://gist.github.com/thriveth/835256 |
810 | - |
811 | - |
812 | def plot_ratio_from_HWU(path, ax, hwu_variable, hwu_numerator, hwu_denominator, *args, **opts): |
813 | """INPUT: |
814 | - path can be a path to HwU or an HwUList instance |
815 | |
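The new `select_central_weight` method above overwrites each bin's central value with a chosen weight. A dictionary-based sketch of the same operation (the real `HwU` bins carry richer `wgts` objects; plain dicts are assumed here):

```python
def select_central_weight(bins, selected_label):
    """Copy the weight stored under `selected_label` into the 'central'
    slot of every bin, as the HwU method in the diff above does when a
    specific merging scale is requested."""
    for wgts in bins:
        if selected_label not in wgts:
            raise KeyError(
                "Selected weight label '%s' could not be found." % selected_label)
        wgts['central'] = wgts[selected_label]

# Two bins, each with a central value and a hypothetical merging-scale weight.
bins = [{'central': 1.0, 'merging=10': 2.0},
        {'central': 3.0, 'merging=10': 4.0}]
select_central_weight(bins, 'merging=10')
```

An unknown label raises, matching the `MadGraph5Error` guard in the diff.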
816 | === modified file 'madgraph/various/lhe_parser.py' |
817 | --- madgraph/various/lhe_parser.py 2016-09-01 09:14:58 +0000 |
818 | +++ madgraph/various/lhe_parser.py 2016-09-03 09:10:54 +0000 |
819 | @@ -513,19 +513,26 @@ |
820 | else: |
821 | return out |
822 | |
823 | - def split(self, nb_event=0): |
824 | + def split(self, nb_event=0, partition=None, cwd=os.path.curdir, zip=False): |
825 | """split the file in multiple file. Do not change the weight!""" |
826 | |
827 | nb_file = -1 |
828 | for i, event in enumerate(self): |
829 | - if i % nb_event == 0: |
830 | + if (not (partition is None) and i==sum(partition[:nb_file+1])) or \ |
831 | + (partition is None and i % nb_event == 0): |
832 | if i: |
833 | #close previous file |
834 | current.write('</LesHouchesEvent>\n') |
835 | current.close() |
836 | # create the new file |
837 | nb_file +=1 |
838 | - current = open('%s_%s.lhe' % (self.name, nb_file),'w') |
839 | + # If end of partition then finish writing events here. |
840 | + if not partition is None and (nb_file+1>len(partition)): |
841 | + return nb_file+1 |
842 | + if zip: |
843 | + current = EventFile(pjoin(cwd,'%s_%s.lhe.gz' % (self.name, nb_file)),'w') |
844 | + else: |
845 | + current = open(pjoin(cwd,'%s_%s.lhe' % (self.name, nb_file)),'w') |
846 | current.write(self.banner) |
847 | current.write(str(event)) |
848 | if i!=0: |
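The `split` extension above accepts an explicit `partition` giving the event count of each output file. The chunking arithmetic can be sketched on a plain list (the real method streams LHE events and writes banners; this only illustrates the partition logic):

```python
def split_by_partition(events, partition):
    """Cut `events` into consecutive chunks whose lengths follow
    `partition`; events beyond sum(partition) are dropped, analogous to
    the early return once the partition is exhausted in the diff above."""
    chunks, start = [], 0
    for size in partition:
        chunks.append(events[start:start + size])
        start += size
    return chunks

chunks = split_by_partition(list(range(10)), [3, 3, 4])
```

With `partition=[3, 3, 4]`, ten events are divided into three files of 3, 3, and 4 events.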
To test the above on a condor cluster, one must re-install MG5aMC_PY8_interface, because I modified its installation so that it links *statically* against HEPMC2 and therefore no longer needs to find it on the worker nodes at run time.