Merge lp:~madteam/mg5amcnlo/madevent_output into lp:~madteam/mg5amcnlo/trunk

Proposed by Johan Alwall
Status: Merged
Approved by: Johan Alwall
Approved revision: 88
Merged at revision: 29
Proposed branch: lp:~madteam/mg5amcnlo/madevent_output
Merge into: lp:~madteam/mg5amcnlo/trunk
Diff against target: 22726 lines (+12376/-5459)
24 files modified
.pydevproject (+3/-2)
madgraph/VERSION (+2/-2)
madgraph/core/base_objects.py (+178/-99)
madgraph/core/color_algebra.py (+27/-27)
madgraph/core/color_amp.py (+150/-21)
madgraph/core/diagram_generation.py (+337/-87)
madgraph/core/helas_objects.py (+2037/-232)
madgraph/interface/cmd_interface.py (+311/-63)
madgraph/iolibs/drawing.py (+30/-19)
madgraph/iolibs/export_v4.py (+926/-344)
madgraph/iolibs/files.py (+21/-0)
madgraph/iolibs/import_model_v4.py (+27/-26)
madgraph/iolibs/template_files/auto_dsig_v4.inc (+77/-0)
madgraph/iolibs/template_files/matrix_madevent_v4.inc (+209/-0)
madgraph/iolibs/template_files/matrix_standalone_v4.inc (+115/-0)
tests/input_files/test_draw.obj (+3590/-3404)
tests/unit_tests/core/test_base_objects.py (+34/-24)
tests/unit_tests/core/test_color_algebra.py (+2/-1)
tests/unit_tests/core/test_color_amp.py (+105/-72)
tests/unit_tests/core/test_diagram_generation.py (+437/-114)
tests/unit_tests/core/test_helas_objects.py (+1323/-574)
tests/unit_tests/iolibs/test_drawing.py (+82/-54)
tests/unit_tests/iolibs/test_export_v4.py (+2337/-293)
tests/unit_tests/iolibs/test_import_model_v4.py (+16/-1)
To merge this branch: bzr merge lp:~madteam/mg5amcnlo/madevent_output
Reviewer Review Type Date Requested Status
Olivier Mattelaer Approve
Tim Stelzer Needs Information
Michel Herquet (community) Approve
FabioMaltoni Pending
MadTeam Pending
Review via email: mp+23603@code.launchpad.net

Description of the change

* Decay chain implementation (by Johan).
* MadEvent output (by Johan and Michel).
* A number of bugs and minor problems fixed, relating both to Helas objects and to color.

I would recommend that you divide the review work, e.g. Tim and Michel focus on the decay chains (mainly in helas_objects.py) while Fabio and Olivier take care of the MadEvent output (mainly in export_v4.py).

54. By Johan Alwall <alwall@alwall-laptop>

Added possibility for comments starting with # in the command interface

55. By Johan Alwall <alwall@alwall-laptop>

Added possibility to add a process to existing processes. Note that for decay chain processes, export cannot be done twice.

56. By Johan Alwall <alwall@alwall-laptop>

Fixed problem with adding processes after decay chain export

57. By Johan Alwall <alwall@alwall-laptop>

Fixed error discovered by Olivier when there are no valid decays

58. By Johan Alwall <alwall@alwall-laptop>

Fixed path for generate_subprocess_directory_v4_madevent, added writeout of maxamps.inc file

59. By Johan Alwall <alwall@alwall-laptop>

Added help line for export v4madevent

60. By Johan Alwall <alwall@alwall-laptop>

Updated MadEvent matrix.f output according to new standard in MG/ME v. 4.4.36

61. By Johan Alwall <alwall@alwall-laptop>

Updated MadEvent auto_dsig.f output according to new standard in MG/ME v. 4.4.36

Revision history for this message
Michel Herquet (herquet) wrote :

Honestly, I looked at it for quite some time, and I can only say… fantastic work, Johan! Every single line of code is very well documented, and even though there are many tricky parts (some of which I am not sure I understand; checking would require more time), I think the code is quite DRY. I haven't run manual tests yet, but I already give my formal approval for the quality of the code. Once again, great job...

review: Approve
62. By Johan Alwall <alwall@alwall-laptop>

Fixed bug in propagator in configs.inc for t-channel case

63. By Johan Alwall <alwall@alwall-laptop>

Fixed bug and test

64. By Johan Alwall <alwall@alwall-laptop>

Merged with Olivier's changes

65. By Johan Alwall <alwall@alwall-laptop>

Fixed the same error as rev. 62, but for wavefunctions; it should work, but I cannot find an example to test

66. By Johan Alwall <alwall@alwall-laptop>

Added emptyline() so that an empty line does not repeat the previous command
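
For context, this relies on the standard behaviour of Python's cmd module: by default, cmd.Cmd re-runs the last command when an empty line is entered, unless emptyline() is overridden. A minimal sketch (the class name is illustrative, not the actual cmd_interface class):

    import cmd

    class MG5Like(cmd.Cmd):
        """Illustrative command interface, not the real cmd_interface class."""

        def do_hello(self, line):
            print('hello ' + line)

        def emptyline(self):
            # Default cmd.Cmd behaviour would repeat the previous command here;
            # overriding with a no-op makes an empty line do nothing instead.
            pass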

67. By Michel Herquet

Fixed ordering of elements from the color basis dictionary when creating the color matrix and outputting JAMPs

68. By Johan Alwall <alwall@alwall-laptop>

First pass at initial state mirror process optimization

69. By Johan Alwall <alwall@alwall-laptop>

Reverted to rev. 67, since the mirror process optimization turned out not to give the optimal subprocess combination

70. By Michel Herquet

Fix ordering of color matrix (2nd attempt)

71. By Johan Alwall <alwall@alwall-laptop>

Fixed the tests after Michel's fix

72. By Michel Herquet

Trying to fix color matrix ordering, 3rd attempt...

73. By Michel Herquet

Fixed VERY strange bug where a color basis element was shared between two different instances (not reset in __init__)
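
For readers less familiar with this Python pitfall, the bug pattern rev. 73 describes comes from defining a mutable attribute at class level, so all instances share one object; resetting it in __init__ (as done in the diff below for _canonical_dict) gives each instance its own copy. A minimal sketch with made-up class names:

    class SharedBasis(dict):
        _canonical_dict = {}          # class-level: one dict shared by all instances

        def add(self, key, value):
            self._canonical_dict[key] = value

    class IndependentBasis(dict):
        def __init__(self):
            dict.__init__(self)
            # instance-level: each object gets its own dict
            self._canonical_dict = {}

        def add(self, key, value):
            self._canonical_dict[key] = value

    a, b = SharedBasis(), SharedBasis()
    a.add('T(1,2)', 1)
    print('T(1,2)' in b._canonical_dict)   # True -- the "VERY strange" sharing

    c, d = IndependentBasis(), IndependentBasis()
    c.add('T(1,2)', 1)
    print('T(1,2)' in d._canonical_dict)   # False -- resetting in __init__ fixes it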

74. By Johan Alwall <alwall@alwall-laptop>

First pass on >50% memory optimization. Still need to fix tests.

75. By Johan Alwall <alwall@alwall-laptop>

Second pass on >50% memory optimization, still need to fix tests.

76. By Johan Alwall <alwall@alwall-laptop>

All tests fixed for memory optimization, except one drawing test.

77. By Johan Alwall <alwall@alwall-laptop>

Oops, fixed slight bug

78. By Johan Alwall <alwall@alwall-laptop>

Fixed memory usage for decay chains, in particular problems with the deepcopied model

79. By Johan Alwall <alwall@alwall-laptop>

Merged with new version info

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) wrote :

Hi Johan,

I've not yet worked intensively on the review, but here is a first comment.

In export_v4.py, you define a lot of long strings. I think it would be clearer if you put the content of those files into files in the template_files directory.

I suggest creating one file for each of those long strings (with the corresponding MG4 name) and then loading them via the string.Template class.

Olivier

PS: Of course, this is only a suggestion.

In this sense, this breaks some of the rules of how the code should be organized.

Why not use the Templates?

review: Needs Information (suggestion)
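
A minimal sketch of what this suggestion amounts to (the file name and placeholder names here are purely illustrative, not the actual MadGraph templates): keep the long Fortran text in a file under template_files/ and fill it in with Python's string.Template.

    import string

    # Hypothetical template file; the real templates live in iolibs/template_files/
    with open('template_files/example_matrix.inc') as tmpl_file:
        template = string.Template(tmpl_file.read())

    # $nexternal and $helas_calls are hypothetical placeholders in that file
    fortran_text = template.substitute(nexternal=5,
                                       helas_calls='C  helas calls go here')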
Revision history for this message
Johan Alwall (johan-alwall) wrote :

Hi Olivier,

I just hadn't thought about the possibility of using templates. I'll try it out and see how it works. My only worry is that it might make the code slightly less clear, but on the other hand it will be more concise, so the pros probably outweigh the cons.

Johan

80. By Johan Alwall <alwall@alwall-laptop>

Fix cmd_interface error

81. By Johan Alwall <alwall@alwall-laptop>

Made Olivier's suggested move of the long file strings in export_v4.py to template_files/. Improved memory optimization by another couple of notches.

Revision history for this message
Johan Alwall (johan-alwall) wrote :

OK, I did as Olivier suggested and moved the long file strings (matrix.f and auto_dsig.f) to template files in the iolibs/template_files directory. I also improved the memory optimization by another couple of notches (by reducing the number of Legs and Vertices kept in memory).
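
For the record, the memory reduction described here works by storing each distinct Leg (and Vertex) object once and reusing it by reference across diagrams, rather than keeping duplicate copies (see trim_diagrams in the diff below). A simplified sketch, not the actual MadGraph code:

    # Deduplicate equal leg objects so every vertex shares one stored copy
    def trim(vertices):
        seen_legs = []
        for vertex in vertices:
            for i, leg in enumerate(vertex['legs']):
                try:
                    # Reuse an already-stored, equal leg object
                    vertex['legs'][i] = seen_legs[seen_legs.index(leg)]
                except ValueError:
                    seen_legs.append(leg)
        return vertices

    # Two vertices referring to equal legs now share a single object
    v1 = {'legs': [{'id': 21, 'number': 1}, {'id': 2, 'number': 2}]}
    v2 = {'legs': [{'id': 21, 'number': 1}, {'id': 2, 'number': 3}]}
    trim([v1, v2])
    assert v1['legs'][0] is v2['legs'][0]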

82. By Olivier Mattelaer

Corrected the test problem with decay chains, and updated the script creating the decay chains

83. By Olivier Mattelaer

Review by Olivier:
minor modifications (mainly removing unused variables)

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) wrote :

Thanks, Johan, for your modifications.

I've made a deeper review of export_v4.py, and I should say that everything is perfect. Thanks a lot for this beautiful work!!!

I've removed/modified some lines, mainly because they were simply not used. This doesn't change anything deep, and the tests pass afterwards.

Also, what do you think about moving the class FortranWriter out of export_v4.py? I think it would be clearer if this class were not in export_v4.py but in a dedicated module, file_writer.py (where we can later add the writer for C++ files, for example).

Thanks a lot,

Olivier

review: Approve
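
A rough sketch of the module split Olivier proposes; only the module and class names (file_writer.py, FortranWriter, a future C++ writer) come from the comment above, while the methods and their behaviour are purely illustrative:

    # file_writer.py -- illustrative layout only
    class FileWriter(object):
        """Base class wrapping an output file, one subclass per target language."""
        def __init__(self, name):
            self.fh = open(name, 'w')

        def write_line(self, line):
            self.fh.write(line + '\n')

        def close(self):
            self.fh.close()

    class FortranWriter(FileWriter):
        """Fortran-specific formatting, e.g. fixed-form 6-column indentation."""
        def write_line(self, line):
            self.fh.write(' ' * 6 + line + '\n')

    class CPPWriter(FileWriter):
        """Placeholder for the C++ writer mentioned in the comment."""
        pass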
84. By Olivier Mattelaer

minor modification

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) wrote :

Hi Johan,

Could you also look at this:
generate u u~ > t t~ / a $ a QED=1 , (t > w+ b / a $ a , w+ > l+ vl / a $ a  ) , (t~ > w- b~ / a $ a , w- > l- vl~ / a $ a  )
INFO:diagram_generation:Trying process: u u~ > t t~ / a a QED=1
...

It seems that the $ request is not correctly taken into account (at least in the information message)

review: Needs Fixing
Revision history for this message
Tim Stelzer (tstelzer) wrote :

Very impressive. If I understand things correctly, this works VERY differently from V4. You are separately calculating the production M.E. and the decay M.E. and then "attaching" them. That will certainly give great speed advantages... but it required a lot of very nice coding.

It is very well documented, and what I followed in helas_objects.py looked good. But I have not had the opportunity to run a battery of tests. (Have you already done that? Is it part of the test suite?)

That is my only remaining activity before approving.

review: Needs Information
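
To make Tim's description concrete, the "attaching" step amounts to substituting each decayed final-state particle of the production process with the final-state legs of its decay (cf. get_legs_with_decays in the diff below). A deliberately simplified sketch using bare PDG codes, not the actual helas_objects code:

    def attach_decays(production_legs, decays):
        """production_legs: list of PDG ids; decays: dict {pdg_id: [decay product ids]}"""
        legs = []
        for pdg in production_legs:
            if pdg in decays:
                legs.extend(decays[pdg])   # substitute the decay products
            else:
                legs.append(pdg)
        return legs

    # u u~ > t t~, with t > w+ b and t~ > w- b~
    print(attach_decays([6, -6], {6: [24, 5], -6: [-24, -5]}))
    # [24, 5, -24, -5]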
85. By Johan Alwall <alwall@alwall-laptop>

Fixed broken small MadEvent files, added tests

Revision history for this message
Johan Alwall (johan-alwall) wrote :

> Could you also look at this:
> generate u u~ > t t~ / a $ a QED=1 , (t > w+ b / a $ a , w+ > l+ vl / a $ a  )
> , (t~ > w- b~ / a $ a , w- > l- vl~ / a $ a  )
> INFO:diagram_generation:Trying process: u u~ > t t~ / a a QED=1

Hi Olivier,

In fact the order is fixed to $ before /, as indicated in help_generate. Please go ahead and change this if you want; at present the order of all features is completely fixed.

Best,
Johan

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) wrote :

OK, thanks.

I'll check that the mg4 -> mg5 converter respects this convention.

review: Approve
86. By Johan Alwall <alwall@alwall-laptop>

Changed line in auto_dsig from core process to complete decay process

87. By Johan Alwall <alwall@alwall-laptop>

Allow @ flag anywhere, allow arbitrary order of $ and /

88. By Johan Alwall <alwall@alwall-laptop>

Whoops, fixed a bug when @N had no characters after it

89. By Johan Alwall <alwall@alwall-laptop>

Added some better error messages to cmd_interface

Preview Diff

1=== modified file '.pydevproject'
2--- .pydevproject 2010-02-12 13:09:25 +0000
3+++ .pydevproject 2010-05-07 07:20:46 +0000
4@@ -1,9 +1,10 @@
5-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
6+<?xml version="1.0" encoding="UTF-8"?>
7 <?eclipse-pydev version="1.0"?>
8+
9 <pydev_project>
10 <pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.6</pydev_property>
11 <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
12 <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
13-<path>/MG-trunk</path>
14+<path>/madevent_output</path>
15 </pydev_pathproperty>
16 </pydev_project>
17
18=== modified file 'madgraph/VERSION'
19--- madgraph/VERSION 2010-02-12 13:07:52 +0000
20+++ madgraph/VERSION 2010-05-07 07:20:46 +0000
21@@ -1,2 +1,2 @@
22-version = 0.3.0
23-date = 2010-02-12
24+version = 0.4.0
25+date = 2010-05-01
26
27=== modified file 'madgraph/core/base_objects.py'
28--- madgraph/core/base_objects.py 2010-02-12 12:53:59 +0000
29+++ madgraph/core/base_objects.py 2010-05-07 07:20:46 +0000
30@@ -59,7 +59,7 @@
31
32 try:
33 return dict.__getitem__(self, name)
34- except:
35+ except KeyError:
36 self.is_valid_prop(name) #raise the correct error
37
38
39@@ -81,16 +81,9 @@
40
41 return True
42
43- def __getitem__(self, name):
44- """ Get the value of the property name."""
45-
46- if self.is_valid_prop(name):
47- return dict.__getitem__(self, name)
48-
49 def get(self, name):
50 """Get the value of the property name."""
51
52- #if self.is_valid_prop(name): #done automaticaly in __getitem__
53 return self[name]
54
55 def set(self, name, value):
56@@ -332,7 +325,16 @@
57 """Return the color code with a correct minus sign"""
58
59 if not self['is_part'] and self['color'] in [3, 6]:
60- return -self['color']
61+ return - self['color']
62+ else:
63+ return self['color']
64+
65+ def get_anti_color(self):
66+ """Return the color code of the antiparticle with a correct minus sign
67+ """
68+
69+ if self['is_part'] and self['color'] in [3, 6]:
70+ return - self['color']
71 else:
72 return self['color']
73
74@@ -369,6 +371,11 @@
75
76 return self['spin'] % 2 == 0
77
78+ def is_boson(self):
79+ """Returns True if this is a boson, False if fermion"""
80+
81+ return self['spin'] % 2 == 1
82+
83 #===============================================================================
84 # ParticleList
85 #===============================================================================
86@@ -799,7 +806,8 @@
87
88 self['id'] = 0
89 self['number'] = 0
90- self['state'] = 'final'
91+ # True = final, False = initial (boolean to save memory)
92+ self['state'] = True
93 self['from_group'] = True
94
95 def filter(self, name, value):
96@@ -811,17 +819,13 @@
97 "%s is not a valid integer for leg id" % str(value)
98
99 if name == 'state':
100- if not isinstance(value, str):
101- raise self.PhysicsObjectError, \
102- "%s is not a valid string for leg state" % \
103- str(value)
104- if value not in ['initial', 'final']:
105- raise self.PhysicsObjectError, \
106- "%s is not a valid leg state (initial|final)" % \
107+ if not isinstance(value, bool):
108+ raise self.PhysicsObjectError, \
109+ "%s is not a valid leg state (True|False)" % \
110 str(value)
111
112 if name == 'from_group':
113- if not isinstance(value, bool):
114+ if not isinstance(value, bool) and value != None:
115 raise self.PhysicsObjectError, \
116 "%s is not a valid boolean for leg flag from_group" % \
117 str(value)
118@@ -853,8 +857,8 @@
119
120 part = model.get('particle_dict')[self['id']]
121 return part.is_fermion() and \
122- (self.get('state') == 'initial' and part.get('is_part') or \
123- self.get('state') == 'final' and not part.get('is_part'))
124+ (self.get('state') == False and part.get('is_part') or \
125+ self.get('state') == True and not part.get('is_part'))
126
127 def is_outgoing_fermion(self, model):
128 """Returns True if leg is an outgoing fermion, i.e., initial
129@@ -866,8 +870,8 @@
130
131 part = model.get('particle_dict')[self['id']]
132 return part.is_fermion() and \
133- (self.get('state') == 'final' and part.get('is_part') or \
134- self.get('state') == 'initial' and not part.get('is_part'))
135+ (self.get('state') == True and part.get('is_part') or \
136+ self.get('state') == False and not part.get('is_part'))
137
138 #===============================================================================
139 # LegList
140@@ -906,10 +910,24 @@
141 else:
142 return False
143
144- def can_combine_to_0(self, ref_dict_to0):
145+ def can_combine_to_0(self, ref_dict_to0, is_decay_chain=False):
146 """If has at least two 'from_group' True and in ref_dict_to0,
147- return the vertex (with id from ref_dict_to0), otherwise return None
148- """
149+
150+ return the vertex (with id from ref_dict_to0), otherwise return None
151+
152+ If is_decay_chain = True, we only allow clustering of the
153+ initial leg, since we want this to be the last wavefunction to
154+ be evaluated.
155+ """
156+ if is_decay_chain:
157+ # Special treatment - here we only allow combination to 0
158+ # if the initial leg (marked by from_group = None) is
159+ # unclustered, since we want this to stay until the very
160+ # end.
161+ return any(leg.get('from_group') == None for leg in self) and \
162+ ref_dict_to0.has_key(tuple(sorted([leg.get('id') \
163+ for leg in self])))
164+
165 if self.minimum_two_from_group():
166 return ref_dict_to0.has_key(tuple(sorted([leg.get('id') for leg in self])))
167 else:
168@@ -926,7 +944,7 @@
169 return res
170
171 for leg in self:
172- if leg.get('state') == 'initial':
173+ if leg.get('state') == False:
174 res.append(model.get('particle_dict')[leg.get('id')].get_anti_pdg_code())
175 else:
176 res.append(leg.get('id'))
177@@ -944,7 +962,7 @@
178 """Default values for all properties"""
179
180 self['ids'] = []
181- self['state'] = 'final'
182+ self['state'] = True
183
184 def filter(self, name, value):
185 """Filter for valid multileg property values."""
186@@ -959,11 +977,7 @@
187 "%s is not a valid list of integers" % str(value)
188
189 if name == 'state':
190- if not isinstance(value, str):
191- raise self.PhysicsObjectError, \
192- "%s is not a valid string for leg state" % \
193- str(value)
194- if value not in ['initial', 'final']:
195+ if not isinstance(value, bool):
196 raise self.PhysicsObjectError, \
197 "%s is not a valid leg state (initial|final)" % \
198 str(value)
199@@ -1030,8 +1044,8 @@
200
201 if ninitial == 1:
202 # For one initial particle, all legs are s-channel
203- # Only need to flip particle id if state is 'initial'
204- if leg.get('state') == 'final':
205+ # Only need to flip particle id if state is False
206+ if leg.get('state') == True:
207 return leg.get('id')
208 else:
209 return model.get('particle_dict')[leg.get('id')].\
210@@ -1039,7 +1053,7 @@
211
212 # Number of initial particles is at least 2
213 if self.get('id') == 0 or \
214- leg.get('state') == 'initial':
215+ leg.get('state') == False:
216 # identity vertex or t-channel particle
217 return 0
218
219@@ -1053,7 +1067,7 @@
220
221 ## Check if the other legs are initial or final.
222 ## If the latter, return leg id, if the former, return -leg id
223- #if self.get('legs')[0].get('state') == 'final':
224+ #if self.get('legs')[0].get('state') == True:
225 # return leg.get('id')
226 #else:
227 # return model.get('particle_dict')[leg.get('id')].\
228@@ -1146,11 +1160,12 @@
229
230 return isinstance(obj, Diagram)
231
232- def nice_string(self):
233+ def nice_string(self, indent=0):
234 """Returns a nicely formatted string"""
235- mystr = str(len(self)) + ' diagrams:\n'
236- for diag in self:
237- mystr = mystr + " " + diag.nice_string() + '\n'
238+ mystr = " " * indent + str(len(self)) + ' diagrams:\n'
239+ for i, diag in enumerate(self):
240+ mystr = mystr + " " * indent + str(i+1) + " " + \
241+ diag.nice_string() + '\n'
242 return mystr[:-1]
243
244
245@@ -1175,6 +1190,9 @@
246 self['required_s_channels'] = []
247 self['forbidden_s_channels'] = []
248 self['forbidden_particles'] = []
249+ self['is_decay_chain'] = False
250+ # Decay chain processes associated with this process
251+ self['decay_chains'] = ProcessList()
252
253 def filter(self, name, value):
254 """Filter for valid process property values."""
255@@ -1220,25 +1238,43 @@
256 raise self.PhysicsObjectError, \
257 "Forbidden particles should have a positive PDG code" % str(value)
258
259+ if name == 'is_decay_chain':
260+ if not isinstance(value, bool):
261+ raise self.PhysicsObjectError, \
262+ "%s is not a valid bool" % str(value)
263+
264+ if name == 'decay_chains':
265+ if not isinstance(value, ProcessList):
266+ raise self.PhysicsObjectError, \
267+ "%s is not a valid ProcessList" % str(value)
268+
269 return True
270
271+ def set(self, name, value):
272+ """Special set for forbidden particles - set to abs value."""
273+
274+ if name == 'forbidden_particles':
275+ value = [abs(i) for i in value]
276+
277+ return super(Process, self).set(name, value) # call the mother routine
278+
279 def get_sorted_keys(self):
280 """Return process property names as a nicely sorted list."""
281
282 return ['legs', 'orders', 'model', 'id',
283 'required_s_channels', 'forbidden_s_channels',
284- 'forbidden_particles']
285+ 'forbidden_particles', 'is_decay_chain', 'decay_chains']
286
287- def nice_string(self):
288+ def nice_string(self, indent=0):
289 """Returns a nicely formated string about current process
290 content"""
291
292- mystr = "Process: "
293+ mystr = " " * indent + "Process: "
294 prevleg = None
295 for leg in self['legs']:
296 mypart = self['model'].get('particle_dict')[leg['id']]
297- if prevleg and prevleg['state'] == 'initial' \
298- and leg['state'] == 'final':
299+ if prevleg and prevleg['state'] == False \
300+ and leg['state'] == True:
301 # Separate initial and final legs by ">"
302 mystr = mystr + '> '
303 # Add required s-channels
304@@ -1267,10 +1303,35 @@
305 mystr = mystr + forbpart.get_name() + ' '
306
307 if self['orders']:
308- mystr = mystr[:-1] + "\n"
309- mystr = mystr + 'Orders: '
310- mystr = mystr + ", ".join([key + '=' + repr(self['orders'][key]) \
311+ mystr = mystr + " ".join([key + '=' + repr(self['orders'][key]) \
312 for key in self['orders']]) + ' '
313+
314+ # Remove last space
315+ mystr = mystr[:-1]
316+
317+ if not self.get('decay_chains'):
318+ return mystr
319+
320+ for decay in self['decay_chains']:
321+ mystr = mystr + '\n' + \
322+ decay.nice_string(indent + 2).replace('Process', 'Decay')
323+
324+ return mystr
325+
326+ def base_string(self):
327+ """Returns a string containing only the basic process (w/ decays)."""
328+
329+ mystr = ""
330+ prevleg = None
331+ for leg in self.get_legs_with_decays():
332+ mypart = self['model'].get('particle_dict')[leg['id']]
333+ if prevleg and prevleg['state'] == False \
334+ and leg['state'] == True:
335+ # Separate initial and final legs by ">"
336+ mystr = mystr + '> '
337+ mystr = mystr + mypart.get_name() + ' '
338+ prevleg = leg
339+
340 # Remove last space
341 return mystr[:-1]
342
343@@ -1283,8 +1344,8 @@
344 prevleg = None
345 for leg in self['legs']:
346 mypart = self['model'].get('particle_dict')[leg['id']]
347- if prevleg and prevleg['state'] == 'initial' \
348- and leg['state'] == 'final':
349+ if prevleg and prevleg['state'] == False \
350+ and leg['state'] == True:
351 # Separate initial and final legs by ">"
352 mystr = mystr + '_'
353 # Add required s-channels
354@@ -1311,6 +1372,9 @@
355 # Just to be safe, remove all spaces
356 mystr = mystr.replace(' ', '')
357
358+ for decay in self.get('decay_chains'):
359+ mystr = mystr + decay.shell_string().replace("%d_" % decay.get('id'),
360+ "_", 1)
361 return mystr
362
363 def shell_string_v4(self):
364@@ -1319,10 +1383,10 @@
365
366 mystr = "%d_" % self['id']
367 prevleg = None
368- for leg in self['legs']:
369+ for leg in self.get_legs_with_decays():
370 mypart = self['model'].get('particle_dict')[leg['id']]
371- if prevleg and prevleg['state'] == 'initial' \
372- and leg['state'] == 'final':
373+ if prevleg and prevleg['state'] == False \
374+ and leg['state'] == True:
375 # Separate initial and final legs by ">"
376 mystr = mystr + '_'
377 if mypart['is_part']:
378@@ -1338,6 +1402,54 @@
379
380 return mystr
381
382+ # Helper functions
383+
384+ def get_ninitial(self):
385+ """Gives number of initial state particles"""
386+
387+ return len(filter(lambda leg: leg.get('state') == False,
388+ self.get('legs')))
389+
390+ def get_initial_ids(self):
391+ """Gives the pdg codes for initial state particles"""
392+
393+ return [leg.get('id') for leg in \
394+ filter(lambda leg: leg.get('state') == False,
395+ self.get('legs'))]
396+
397+ def get_initial_pdg(self, number):
398+ """Return the pdg codes for initial state particles for beam number"""
399+
400+ return filter(lambda leg: leg.get('state') == False and\
401+ leg.get('number') == number,
402+ self.get('legs'))[0].get('id')
403+
404+ def get_final_legs(self):
405+ """Gives the pdg codes for initial state particles"""
406+
407+ return filter(lambda leg: leg.get('state') == True,
408+ self.get('legs'))
409+
410+ def get_legs_with_decays(self):
411+ """Return process with all decay chains substituted in."""
412+
413+ legs = copy.deepcopy(self.get('legs'))
414+ if self.get('is_decay_chain'):
415+ legs.pop(0)
416+ ileg = 0
417+ for decay in self.get('decay_chains'):
418+ while legs[ileg].get('state') == False or \
419+ legs[ileg].get('id') != decay.get('legs')[0].get('id'):
420+ ileg = ileg + 1
421+ decay_legs = decay.get_legs_with_decays()
422+ legs = legs[:ileg] + decay_legs + legs[ileg+1:]
423+ ileg = ileg + len(decay_legs)
424+
425+ for ileg, leg in enumerate(legs):
426+ leg.set('number', ileg + 1)
427+
428+ return LegList(legs)
429+
430 #===============================================================================
431 # ProcessList
432 #===============================================================================
433@@ -1353,7 +1465,7 @@
434 #===============================================================================
435 # ProcessDefinition
436 #===============================================================================
437-class ProcessDefinition(PhysicsObject):
438+class ProcessDefinition(Process):
439 """ProcessDefinition: list of multilegs (ordered)
440 dictionary of orders
441 model
442@@ -1363,13 +1475,11 @@
443 def default_setup(self):
444 """Default values for all properties"""
445
446+ super(ProcessDefinition, self).default_setup()
447+
448 self['legs'] = MultiLegList()
449- self['orders'] = {}
450- self['model'] = Model()
451- self['id'] = 0
452- self['required_s_channels'] = []
453- self['forbidden_s_channels'] = []
454- self['forbidden_particles'] = []
455+ # Decay chain processes associated with this process
456+ self['decay_chains'] = ProcessDefinitionList()
457
458 def filter(self, name, value):
459 """Filter for valid process property values."""
460@@ -1378,51 +1488,20 @@
461 if not isinstance(value, MultiLegList):
462 raise self.PhysicsObjectError, \
463 "%s is not a valid MultiLegList object" % str(value)
464- if name == 'orders':
465- Interaction.filter(Interaction(), 'orders', value)
466-
467- if name == 'model':
468- if not isinstance(value, Model):
469- raise self.PhysicsObjectError, \
470- "%s is not a valid Model object" % str(value)
471- if name is 'id':
472- if not isinstance(value, int):
473- raise self.PhysicsObjectError, \
474- "Process id %s is not an integer" % repr(value)
475-
476- if name in ['required_s_channels',
477- 'forbidden_s_channels']:
478- if not isinstance(value, list):
479- raise self.PhysicsObjectError, \
480- "%s is not a valid list" % str(value)
481- for i in value:
482- if not isinstance(i, int):
483- raise self.PhysicsObjectError, \
484- "%s is not a valid list of integers" % str(value)
485- if i == 0:
486- raise self.PhysicsObjectError, \
487- "Not valid PDG code %d for s-channel particle" % str(value)
488-
489- if name == 'forbidden_particles':
490- if not isinstance(value, list):
491- raise self.PhysicsObjectError, \
492- "%s is not a valid list" % str(value)
493- for i in value:
494- if not isinstance(i, int):
495- raise self.PhysicsObjectError, \
496- "%s is not a valid list of integers" % str(value)
497- if i <= 0:
498- raise self.PhysicsObjectError, \
499- "Forbidden particles should have a positive PDG code" % str(value)
500+ elif name == 'decay_chains':
501+ if not isinstance(value, ProcessDefinitionList):
502+ raise self.PhysicsObjectError, \
503+ "%s is not a valid ProcessDefinitionList" % str(value)
504+
505+ else:
506+ return super(ProcessDefinition, self).filter(name, value)
507
508 return True
509
510 def get_sorted_keys(self):
511 """Return process property names as a nicely sorted list."""
512
513- return ['legs', 'orders', 'model', 'id',
514- 'required_s_channels', 'forbidden_s_channels',
515- 'forbidden_particles']
516+ return super(ProcessDefinition, self).get_sorted_keys()
517
518 #===============================================================================
519 # ProcessDefinitionList
520
521=== modified file 'madgraph/core/color_algebra.py'
522--- madgraph/core/color_algebra.py 2010-02-11 13:27:47 +0000
523+++ madgraph/core/color_algebra.py 2010-05-07 07:20:46 +0000
524@@ -78,6 +78,7 @@
525
526 __copy__ = create_copy
527
528+
529 #===============================================================================
530 # Tr
531 #===============================================================================
532@@ -201,11 +202,10 @@
533
534 return None
535
536- def pair_simplify(self, col_obj, simplify_T_product=False):
537+ def pair_simplify(self, col_obj):
538 """Implement T(a,...,i,j)T(b,...,j,k) = T(a,...,b,...,i,k)
539 and T(a,x,b,i,j)T(c,x,d,k,l) = 1/2(T(a,d,i,l)T(c,b,k,j)
540- -1/Nc T(a,b,i,j)T(c,d,k,l))
541- but only if the simplify_T_product tag is True."""
542+ -1/Nc T(a,b,i,j)T(c,d,k,l))."""
543
544 if isinstance(col_obj, T):
545 ij1 = self[-2:]
546@@ -219,30 +219,29 @@
547 ij2[1]])))])])
548 # T(a,x,b,i,j)T(c,x,d,k,l) = 1/2(T(a,d,i,l)T(c,b,k,j)
549 # -1/Nc T(a,b,i,j)T(c,d,k,l))
550- if simplify_T_product:
551- for i1, index1 in enumerate(self[:-2]):
552- for i2, index2 in enumerate(col_obj[:-2]):
553- if index1 == index2:
554- a = self[:i1]
555- b = self[i1 + 1:-2]
556- c = col_obj[:i2]
557- d = col_obj[i2 + 1:-2]
558- col_str1 = ColorString([T(*(a + d + \
559- array.array('i',
560- [ij1[0], ij2[1]]))),
561- T(*(c + b + \
562- array.array('i',
563- [ij2[0], ij1[1]])))])
564- col_str2 = ColorString([T(*(a + b + \
565- array.array('i',
566- [ij1[0], ij1[1]]))),
567- T(*(c + d + \
568- array.array('i',
569- [ij2[0], ij2[1]])))])
570- col_str1.coeff = fractions.Fraction(1, 2)
571- col_str2.coeff = fractions.Fraction(-1, 2)
572- col_str2.Nc_power = -1
573- return ColorFactor([col_str1, col_str2])
574+ for i1, index1 in enumerate(self[:-2]):
575+ for i2, index2 in enumerate(col_obj[:-2]):
576+ if index1 == index2:
577+ a = self[:i1]
578+ b = self[i1 + 1:-2]
579+ c = col_obj[:i2]
580+ d = col_obj[i2 + 1:-2]
581+ col_str1 = ColorString([T(*(a + d + \
582+ array.array('i',
583+ [ij1[0], ij2[1]]))),
584+ T(*(c + b + \
585+ array.array('i',
586+ [ij2[0], ij1[1]])))])
587+ col_str2 = ColorString([T(*(a + b + \
588+ array.array('i',
589+ [ij1[0], ij1[1]]))),
590+ T(*(c + d + \
591+ array.array('i',
592+ [ij2[0], ij2[1]])))])
593+ col_str1.coeff = fractions.Fraction(1, 2)
594+ col_str2.coeff = fractions.Fraction(-1, 2)
595+ col_str2.Nc_power = -1
596+ return ColorFactor([col_str1, col_str2])
597
598 def complex_conjugate(self):
599 """Complex conjugation. Overwritten here because the two last indices
600@@ -631,3 +630,4 @@
601
602
603
604+
605
606=== modified file 'madgraph/core/color_amp.py'
607--- madgraph/core/color_amp.py 2010-02-12 09:52:24 +0000
608+++ madgraph/core/color_amp.py 2010-05-07 07:20:46 +0000
609@@ -43,6 +43,11 @@
610 # Dictionary store the raw colorize information
611 _list_color_dict = []
612
613+ class ColorBasisError(Exception):
614+ """Exception raised if an error occurs in the definition
615+ or the execution of a color basis object."""
616+ pass
617+
618 def colorize(self, diagram, model):
619 """Takes a diagram and a model and outputs a dictionary with keys being
620 color coefficient index tuples and values a color string (before
621@@ -76,6 +81,7 @@
622 for cs in res_dict.values()]):
623 res_dict = {}
624 # Return since this must be the last vertex
625+
626 return res_dict
627
628 # NORMAL VERTICES WITH ID != 0 -----------------------------------------
629@@ -86,6 +92,7 @@
630 if all([cs == color_algebra.ColorString() \
631 for cs in res_dict.values()]):
632 res_dict = {}
633+
634 return res_dict
635
636 def add_vertex(self, vertex, diagram, model,
637@@ -95,16 +102,14 @@
638 If the id0_rep list is not None, perform the requested replacement on the
639 last leg number before going further."""
640
641- # Create a list of pdg codes entering the vertex ordered as in
642- # interactions.py
643- list_pdg = [part.get_pdg_code() for part in \
644- model.get_interaction(vertex.get('id')).get('particles')]
645- # Create a dictionary pdg code --> leg(s)
646- dict_pdg_leg = {}
647+ # Create a list of (color,leg number) pairs for the vertex, where color
648+ # can be negative for anti particles
649+
650+ color_num_pairs = []
651
652 for index, leg in enumerate(vertex.get('legs')):
653 curr_num = leg.get('number')
654- curr_pdg = leg.get('id')
655+ curr_color = model.get('particle_dict')[leg.get('id')].get_color()
656
657 # If this is the next-to-last vertex and the last vertex is
658 # the special identity id=0, start by applying the replacement rule
659@@ -114,12 +119,12 @@
660 curr_num = id0_rep[id0_rep.index(curr_num) - 1]
661
662 # If this is the last leg and not the last vertex
663- # flip part/antipart. If it is not the last, AND not the next-to-last
664+ # flip color. If it is not the last, AND not the next-to-last
665 # before an id=0 vertex, replace last index by a new summed index.
666 if index == len(vertex.get('legs')) - 1 and \
667 vertex != diagram.get('vertices')[-1]:
668- curr_pdg = \
669- model.get('particle_dict')[curr_pdg].get_anti_pdg_code()
670+ curr_color = \
671+ model.get('particle_dict')[leg.get('id')].get_anti_color()
672 if not id0_rep:
673 repl_dict[curr_num] = min_index
674 min_index = min_index - 1
675@@ -130,21 +135,23 @@
676 except KeyError:
677 pass
678
679- try:
680- dict_pdg_leg[curr_pdg].append(curr_num)
681- except KeyError:
682- dict_pdg_leg[curr_pdg] = [curr_num]
683+ # Discard color singlets
684+ if curr_color != 1:
685+ color_num_pairs.append((curr_color, curr_num))
686+
687+ # Order the color/number pairs according to increasing color (assumed
688+ # to be the ordering chose in interactions.py). For identical colors,
689+ # keep the normal leg ordering.
690+ color_num_pairs = sorted(color_num_pairs, lambda p1, p2:p1[0] - p2[0])
691
692 # Create a list of associated leg number following the same order
693- list_numbers = []
694- for pdg_code in list_pdg:
695- list_numbers.append(dict_pdg_leg[pdg_code].pop(0))
696+ list_numbers = [p[1] for p in color_num_pairs]
697+
698 # ... and the associated dictionary for replacement
699 match_dict = dict(enumerate(list_numbers))
700
701 # Update the result dict using the current vertex ColorString object
702 # If more than one, create different entries
703-
704 inter_color = model.get_interaction(vertex['id'])['color']
705
706 # For colorless vertices, return a copy of res_dict
707@@ -285,6 +292,14 @@
708 or 1 arguments). If one arguments is given, it's interpreted as
709 an amplitude."""
710
711+ dict.__init__(self)
712+
713+ # Dictionary to save simplifications already done in a canonical form
714+ self._canonical_dict = {}
715+
716+ # Dictionary store the raw colorize information
717+ self._list_color_dict = []
718+
719 if len(args) not in (0, 1):
720 raise ValueError, \
721 "Object ColorBasis must be initialized with 0 or 1 arguments"
722@@ -318,6 +333,120 @@
723
724 return dict([v, k] for k, v in mydict.items())
725
726+ @staticmethod
727+ def get_color_flow_string(my_color_string, octet_indices):
728+ """Return the color_flow_string (i.e., composed only of T's with 2
729+ indices) associated to my_color_string. Take a list of the external leg
730+ color octet state indices as an input. Returns only the leading N
731+ contribution!"""
732+
733+ # Create a new color factor to allow for simplification
734+ my_cf = color_algebra.ColorFactor([my_color_string])
735+
736+ # Add one T per external octet
737+ for indices in octet_indices:
738+ my_cf[0].append(color_algebra.T(indices[0],
739+ indices[1],
740+ indices[2]))
741+
742+ # Simplify the whole thing
743+ my_cf = my_cf.full_simplify()
744+
745+ # Return the string with the highest N coefficient
746+ # (leading N decomposition), and the value of this coeff
747+ max_coeff = max([cs.Nc_power for cs in my_cf])
748+
749+ res_cs = [cs for cs in my_cf if cs.Nc_power == max_coeff]
750+
751+ # If more than one string at leading N...
752+ if len(res_cs) > 1:
753+ raise ColorBasis.ColorBasisError, \
754+ "More than one color string with leading N coeff: %s" % str(res_cs)
755+
756+ res_cs = res_cs[0]
757+
758+ # If the result string does not contain only T's with two indices
759+ for col_obj in res_cs:
760+ if col_obj.__class__.__name__ != 'T':
761+ raise ColorBasis.ColorBasisError, \
762+ "Color flow decomposition %s contains non T elements" % \
763+ str(res_cs)
764+ if len(col_obj) != 2:
765+ raise ColorBasis.ColorBasisError, \
766+ "Color flow decomposition %s contains T's w/o 2 indices" % \
767+ str(res_cs)
768+
769+ return res_cs
770+
771+ def color_flow_decomposition(self, repr_dict, ninitial):
772+ """Returns the color flow decomposition of the current basis, i.e. a
773+ list of dictionaries (one per color basis entry) with keys corresponding
774+ to external leg numbers and values tuples containing two color indices
775+ ( (0,0) for singlets, (X,0) for triplet, (0,X) for antitriplet and
776+ (X,Y) for octets). Other color representations are not yet supported
777+ here (an error is raised). Needs a dictionary with keys being external
778+ leg numbers, and value the corresponding color representation."""
779+
780+ # Offsets used to introduce fake quark indices for gluons
781+ offset1 = 1000
782+ offset2 = 2000
783+
784+ res = []
785+
786+ for col_basis_entry in sorted(self.keys()):
787+
788+ res_dict = {}
789+ fake_repl = []
790+
791+ # Rebuild a color string from a CB entry
792+ col_str = color_algebra.ColorString()
793+ col_str.from_immutable(col_basis_entry)
794+
795+ for (leg_num, leg_repr) in repr_dict.items():
796+ # By default, assign a (0,0) color flow
797+ res_dict[leg_num] = [0, 0]
798+
799+ # Raise an error if external legs contain non supported repr
800+ if leg_repr not in [1, 3, -3, 8]:
801+ raise ColorBasis.ColorBasisError, \
802+ "Particle ID=%i has an unsupported color representation" % leg_repr
803+
804+ # Build the fake indices replacements for octets
805+ if leg_repr == 8:
806+ fake_repl.append((leg_num,
807+ offset1 + leg_num,
808+ offset2 + leg_num))
809+
810+ # Get the actual color flow
811+ col_str_flow = self.get_color_flow_string(col_str, fake_repl)
812+
813+ # Offset for color flow
814+ offset = 501
815+
816+ for col_obj in col_str_flow:
817+ for i, index in enumerate(col_obj):
818+ if index < offset1:
819+ res_dict[index][i] = offset
820+ elif index > offset1 and index < offset2:
821+ res_dict[index - offset1][0] = offset
822+ elif index > offset2:
823+ res_dict[index - offset2][1] = offset
824+ offset = offset + 1
825+
826+ # Reverse ordering for initial state to stick to the (weird)
827+ # les houches convention
828+
829+ for key in res_dict.keys():
830+ if key <= ninitial:
831+ res_dict[key].reverse()
832+
833+ res.append(res_dict)
834+
835+ return res
836+
837+
838+
839+
840 #===============================================================================
841 # ColorMatrix
842 #===============================================================================
843@@ -364,11 +493,10 @@
844 to be symmetric."""
845
846 canonical_dict = {}
847-
848 for i1, struct1 in \
849- enumerate(self._col_basis1.keys()):
850+ enumerate(sorted(self._col_basis1.keys())):
851 for i2, struct2 in \
852- enumerate(self._col_basis2.keys()):
853+ enumerate(sorted(self._col_basis2.keys())):
854
855 # Only scan upper right triangle if symmetric
856 if is_symmetric and i2 < i1:
857@@ -528,3 +656,4 @@
858 def lcmm(*args):
859 """Return lcm of args."""
860 return reduce(ColorMatrix.lcm, args)
861+
862
863=== modified file 'madgraph/core/diagram_generation.py'
864--- madgraph/core/diagram_generation.py 2010-02-12 12:53:59 +0000
865+++ madgraph/core/diagram_generation.py 2010-05-07 07:20:46 +0000
866@@ -83,6 +83,25 @@
867
868 return ['process', 'diagrams']
869
870+ def get_number_of_diagrams(self):
871+ """Returns number of diagrams for this amplitude"""
872+ return len(self.get('diagrams'))
873+
874+ def get_amplitudes(self):
875+ """Return an AmplitudeList with just this amplitude.
876+ Needed for DecayChainAmplitude."""
877+
878+ return AmplitudeList([self])
879+
880+ def nice_string(self, indent=0):
881+ """Returns a nicely formatted string of the amplitude content."""
882+ return self.get('process').nice_string(indent) + "\n" + \
883+ self.get('diagrams').nice_string(indent)
884+
885+ def nice_string_processes(self, indent=0):
886+ """Returns a nicely formatted string of the amplitude process."""
887+ return self.get('process').nice_string(indent)
888+
889 def generate_diagrams(self):
890 """Generate diagrams. Algorithm:
891
892@@ -116,6 +135,11 @@
893
894 Be aware that the resulting vertices have all particles outgoing,
895 so need to flip for incoming particles when used.
896+
897+ SPECIAL CASE: For A>BC... processes which are legs in decay
898+ chains, we need to ensure that BC... combine first, giving A=A
899+ as a final vertex. This case is defined by the Process
900+ property is_decay_chain = True.
901 """
902
903 model = self['process'].get('model')
904@@ -164,7 +188,7 @@
905
906 # Need to flip part-antipart for incoming particles,
907 # so they are all outgoing
908- if leg.get('state') == 'initial':
909+ if leg.get('state') == False:
910 part = model.get('particle_dict')[leg.get('id')]
911 leg.set('id', part.get_anti_pdg_code())
912
913@@ -177,9 +201,30 @@
914 # Reduce the leg list and return the corresponding
915 # list of vertices
916
917- reduced_leglist = self.reduce_leglist(leglist,
918- max_multi_to1,
919- self.get('process').get('orders'))
920+ # Set is_decay_chain to True if this is a 1->N decay process
921+ # as part of a decay chain.
922+ is_decay_chain = self['process'].get('is_decay_chain')
923+ if is_decay_chain:
924+ part = model.get('particle_dict')[leglist[0].get('id')]
925+ # For decay chain legs, we want everything to combine to
926+ # the initial leg. This is done by only allowing the
927+ # initial leg to combine as a final identity.
928+ ref_dict_to0 = {(part.get_pdg_code(),part.get_anti_pdg_code()):0,
929+ (part.get_anti_pdg_code(),part.get_pdg_code()):0}
930+ # Need to set initial leg from_group to None, to make sure
931+ # it can only be combined at the end.
932+ leglist[0].set('from_group', None)
933+ reduced_leglist = self.reduce_leglist(leglist,
934+ max_multi_to1,
935+ ref_dict_to0,
936+ is_decay_chain,
937+ self.get('process').get('orders'))
938+ else:
939+ reduced_leglist = self.reduce_leglist(leglist,
940+ max_multi_to1,
941+ model.get('ref_dict_to0'),
942+ is_decay_chain,
943+ self.get('process').get('orders'))
944
945 for vertex_list in reduced_leglist:
946 res.append(base_objects.Diagram(
947@@ -194,7 +239,7 @@
948 # Note that we shouldn't look at the last vertex in each
949 # diagram, since that is the n->0 vertex
950 if self['process'].get('required_s_channels'):
951- ninitial = len(filter(lambda leg: leg.get('state') == 'initial',
952+ ninitial = len(filter(lambda leg: leg.get('state') == False,
953 self['process'].get('legs')))
954 res = base_objects.DiagramList(\
955 filter(lambda diagram: \
956@@ -210,7 +255,7 @@
957 # Note that we shouldn't look at the last vertex in each
958 # diagram, since that is the n->0 vertex
959 if self['process'].get('forbidden_s_channels'):
960- ninitial = len(filter(lambda leg: leg.get('state') == 'initial',
961+ ninitial = len(filter(lambda leg: leg.get('state') == False,
962 self['process'].get('legs')))
963 res = base_objects.DiagramList(\
964 filter(lambda diagram: \
965@@ -223,13 +268,16 @@
966 # Set diagrams to res
967 self['diagrams'] = res
968
969+ # Trim down number of legs and vertices used to save memory
970+ self.trim_diagrams()
971+
972 if res:
973 logger.info("Process has %d diagrams" % len(res))
974
975 return not failed_crossing
976
977- def reduce_leglist(self, curr_leglist, max_multi_to1,
978- coupling_orders=None):
979+ def reduce_leglist(self, curr_leglist, max_multi_to1, ref_dict_to0,
980+ is_decay_chain = False, coupling_orders = None):
981 """Recursive function to reduce N LegList to N-1
982 For algorithm, see doc for generate_diagrams.
983 """
984@@ -245,13 +293,13 @@
985
986 # Extract ref dict information
987 model = self['process'].get('model')
988- ref_dict_to0 = self['process'].get('model').get('ref_dict_to0')
989 ref_dict_to1 = self['process'].get('model').get('ref_dict_to1')
990
991
992 # If all legs can be combined in one single vertex, add this
993- # vertex to res and continue
994- if curr_leglist.can_combine_to_0(ref_dict_to0):
995+ # vertex to res and continue.
996+ # Special treatment for decay chain legs
997+ if curr_leglist.can_combine_to_0(ref_dict_to0, is_decay_chain):
998 # Extract the interaction id associated to the vertex
999 vertex_id = ref_dict_to0[tuple(sorted([leg.get('id') for \
1000 leg in curr_leglist]))]
1001@@ -300,6 +348,8 @@
1002 # First, reduce again the leg part
1003 reduced_diagram = self.reduce_leglist(leg_vertex_tuple[0],
1004 max_multi_to1,
1005+ ref_dict_to0,
1006+ is_decay_chain,
1007 new_coupling_orders)
1008 # If there is a reduced diagram
1009 if reduced_diagram:
1010@@ -444,11 +494,11 @@
1011 number = min([leg.get('number') for leg in entry])
1012 # 3) state is final, unless there is exactly one initial
1013 # state particle involved in the combination -> t-channel
1014- if len(filter(lambda leg: leg.get('state') == 'initial',
1015+ if len(filter(lambda leg: leg.get('state') == False,
1016 entry)) == 1:
1017- state = 'initial'
1018+ state = False
1019 else:
1020- state = 'final'
1021+ state = True
1022 # 4) from_group is True, by definition
1023
1024 # Create and add the object
1025@@ -484,7 +534,11 @@
1026 # and add it
1027 else:
1028 cp_entry = copy.copy(entry)
1029- cp_entry.set('from_group', False)
1030+ # Need special case for from_group == None; this
1031+ # is for initial state leg of decay chain process
1032+ # (see Leg.can_combine_to_0)
1033+ if cp_entry.get('from_group') != None:
1034+ cp_entry.set('from_group', False)
1035 reduced_list.append(cp_entry)
1036
1037 # Flatten the obtained leg and vertex lists
1038@@ -498,6 +552,28 @@
1039
1040 return res
1041
1042+ def trim_diagrams(self):
1043+ """Reduce the number of legs and vertices used in memory."""
1044+
1045+ legs = []
1046+ vertices = []
1047+
1048+ for diagram in self.get('diagrams'):
1049+ for ivx, vertex in enumerate(diagram.get('vertices')):
1050+ for ileg, leg in enumerate(vertex.get('legs')):
1051+ leg.set('from_group', False)
1052+ try:
1053+ index = legs.index(leg)
1054+ vertex.get('legs')[ileg] = legs[index]
1055+ except ValueError:
1056+ legs.append(leg)
1057+ try:
1058+ index = vertices.index(vertex)
1059+ diagram.get('vertices')[ivx] = vertices[index]
1060+ except ValueError:
1061+ vertices.append(vertex)
1062+
1063+
1064 #===============================================================================
1065 # AmplitudeList
1066 #===============================================================================
1067@@ -511,6 +587,143 @@
1068 return isinstance(obj, Amplitude)
1069
1070 #===============================================================================
1071+# DecayChainAmplitude
1072+#===============================================================================
1073+class DecayChainAmplitude(Amplitude):
1074+ """A list of amplitudes + a list of decay chain amplitude lists;
1075+ corresponding to a ProcessDefinition with a list of decay chains
1076+ """
1077+
1078+ def default_setup(self):
1079+ """Default values for all properties"""
1080+
1081+ self['amplitudes'] = AmplitudeList()
1082+ self['decay_chains'] = DecayChainAmplitudeList()
1083+
1084+ def __init__(self, argument=None):
1085+ """Allow initialization with Process and with ProcessDefinition"""
1086+
1087+ if isinstance(argument, base_objects.Process):
1088+ super(DecayChainAmplitude, self).__init__()
1089+ if isinstance(argument, base_objects.ProcessDefinition):
1090+ self['amplitudes'].extend(\
1091+ MultiProcess.generate_multi_amplitudes(argument))
1092+ else:
1093+ self['amplitudes'].append(Amplitude(argument))
1094+ # Clean decay chains from process, since we haven't
1095+ # combined processes with decay chains yet
1096+ process = copy.copy(self['amplitudes'][0].get('process'))
1097+ process.set('decay_chains', base_objects.ProcessList())
1098+ self['amplitudes'][0].set('process', process)
1099+ for process in argument.get('decay_chains'):
1100+ if not process.get('is_decay_chain'):
1101+ process.set('is_decay_chain',True)
1102+ if not process.get_ninitial() == 1:
1103+ raise self.PhysicsObjectError,\
1104+ "Decay chain process must have exactly one" + \
1105+ " incoming particle"
1106+ self['decay_chains'].append(\
1107+ DecayChainAmplitude(process))
1108+ elif argument != None:
1109+ # call the mother routine
1110+ super(DecayChainAmplitude, self).__init__(argument)
1111+ else:
1112+ # call the mother routine
1113+ super(DecayChainAmplitude, self).__init__()
1114+
1115+ def filter(self, name, value):
1116+ """Filter for valid amplitude property values."""
1117+
1118+ if name == 'amplitudes':
1119+ if not isinstance(value, AmplitudeList):
1120+ raise self.PhysicsObjectError, \
1121+ "%s is not a valid AmplitudeList" % str(value)
1122+ if name == 'decay_chains':
1123+ if not isinstance(value, DecayChainAmplitudeList):
1124+ raise self.PhysicsObjectError, \
1125+ "%s is not a valid DecayChainAmplitudeList object" % \
1126+ str(value)
1127+ return True
1128+
1129+ def get_sorted_keys(self):
1130+ """Return diagram property names as a nicely sorted list."""
1131+
1132+ return ['amplitudes', 'decay_chains']
1133+
1134+ # Helper functions
1135+
1136+ def get_number_of_diagrams(self):
1137+ """Returns number of diagrams for this amplitude"""
1138+ return sum(len(a.get('diagrams')) for a in self.get('amplitudes')) \
1139+ + sum(d.get_number_of_diagrams() for d in \
1140+ self.get('decay_chains'))
1141+
1142+ def nice_string(self, indent = 0):
1143+ """Returns a nicely formatted string of the amplitude content."""
1144+ mystr = ""
1145+ for amplitude in self.get('amplitudes'):
1146+ mystr = mystr + amplitude.nice_string(indent) + "\n"
1147+
1148+ if self.get('decay_chains'):
1149+ mystr = mystr + " " * indent + "Decays:\n"
1150+ for dec in self.get('decay_chains'):
1151+ mystr = mystr + dec.nice_string(indent + 2) + "\n"
1152+
1153+ return mystr[:-1]
1154+
1155+ def nice_string_processes(self, indent = 0):
1156+ """Returns a nicely formatted string of the amplitude processes."""
1157+ mystr = ""
1158+ for amplitude in self.get('amplitudes'):
1159+ mystr = mystr + amplitude.nice_string_processes(indent) + "\n"
1160+
1161+ if self.get('decay_chains'):
1162+ mystr = mystr + " " * indent + "Decays:\n"
1163+ for dec in self.get('decay_chains'):
1164+ mystr = mystr + dec.nice_string_processes(indent + 2) + "\n"
1165+
1166+ return mystr[:-1]
1167+
1168+ def get_decay_ids(self):
1169+ """Returns a set of all particle ids for which a decay is defined"""
1170+
1171+ decay_ids = []
1172+
1173+ # Get all amplitudes for the decay processes
1174+ for amp in sum([dc.get('amplitudes') for dc \
1175+ in self['decay_chains']], []):
1176+ # For each amplitude, find the initial state leg
1177+ decay_ids.append(amp.get('process').get_initial_ids()[0])
1178+
1179+ # Return a list with unique ids
1180+ return list(set(decay_ids))
1181+
1182+ def get_amplitudes(self):
1183+ """Recursive function to extract all amplitudes for this process"""
1184+
1185+ amplitudes = AmplitudeList()
1186+
1187+ amplitudes.extend(self.get('amplitudes'))
1188+ for decay in self.get('decay_chains'):
1189+ amplitudes.extend(decay.get_amplitudes())
1190+
1191+ return amplitudes
1192+
1193+
1194+#===============================================================================
1195+# DecayChainAmplitudeList
1196+#===============================================================================
1197+class DecayChainAmplitudeList(base_objects.PhysicsObjectList):
1198+ """List of DecayChainAmplitude objects
1199+ """
1200+
1201+ def is_valid_element(self, obj):
1202+ """Test if object obj is a valid DecayChainAmplitude for the list."""
1203+
1204+ return isinstance(obj, DecayChainAmplitude)
1205+
1206+
1207+#===============================================================================
1208 # MultiProcess
1209 #===============================================================================
1210 class MultiProcess(base_objects.PhysicsObject):
1211@@ -523,8 +736,30 @@
1212 """Default values for all properties"""
1213
1214 self['process_definitions'] = base_objects.ProcessDefinitionList()
1215+ # self['amplitudes'] can be an AmplitudeList or a
1216+ # DecayChainAmplitudeList, depending on whether there are
1217+ # decay chains in the process definitions or not.
1218 self['amplitudes'] = AmplitudeList()
1219
1220+ def __init__(self, argument=None):
1221+ """Allow initialization with ProcessDefinition or
1222+ ProcessDefinitionList"""
1223+
1224+ if isinstance(argument, base_objects.ProcessDefinition):
1225+ super(MultiProcess, self).__init__()
1226+ self['process_definitions'].append(argument)
1227+ self.get('amplitudes')
1228+ elif isinstance(argument, base_objects.ProcessDefinitionList):
1229+ super(MultiProcess, self).__init__()
1230+ self['process_definitions'] = argument
1231+ self.get('amplitudes')
1232+ elif argument != None:
1233+ # call the mother routine
1234+ super(MultiProcess, self).__init__(argument)
1235+ else:
1236+ # call the mother routine
1237+ super(MultiProcess, self).__init__()
1238+
1239 def filter(self, name, value):
1240 """Filter for valid process property values."""
1241
1242@@ -544,8 +779,15 @@
1243 """Get the value of the property name."""
1244
1245 if (name == 'amplitudes') and not self[name]:
1246- if self.get('process_definitions'):
1247- self.generate_amplitudes()
1248+ for process_def in self.get('process_definitions'):
1249+ if process_def.get('decay_chains'):
1250+ # This is a decay chain process
1251+ # Store amplitude(s) as DecayChainAmplitude
1252+ self['amplitudes'].append(\
1253+ DecayChainAmplitude(process_def))
1254+ else:
1255+ self['amplitudes'].extend(\
1256+ MultiProcess.generate_multi_amplitudes(process_def))
1257
1258 return MultiProcess.__bases__[0].get(self, name) # call the mother routine
1259
1260@@ -554,7 +796,8 @@
1261
1262 return ['process_definitions', 'amplitudes']
1263
1264- def generate_amplitudes(self):
1265+ @staticmethod
1266+ def generate_multi_amplitudes(process_definition):
1267 """Generate amplitudes in a semi-efficient way.
1268 Make use of crossing symmetry for processes that fail diagram
1269 generation, but not for processes that succeed diagram
1270@@ -562,79 +805,86 @@
1271 identify processes with identical amplitudes.
1272 """
1273
1274- # Loop over all process definitions and multilegs
1275+ if not isinstance(process_definition, base_objects.ProcessDefinition):
1276+ raise base_objects.PhysicsObjectError,\
1277+ "%s not valid ProcessDefinition object" % \
1278+ repr(process_definition)
1279+
1280 processes = base_objects.ProcessList()
1281-
1282- for process_def in self['process_definitions']:
1283-
1284- # failed_procs are processes that have already failed
1285- # based on crossing symmetry
1286- failed_procs = []
1287-
1288- model = process_def['model']
1289-
1290- isids = [leg['ids'] for leg in \
1291- filter(lambda leg: leg['state'] == 'initial', process_def['legs'])]
1292- fsids = [leg['ids'] for leg in \
1293- filter(lambda leg: leg['state'] == 'final', process_def['legs'])]
1294-
1295- # Generate all combinations for the initial state
1296-
1297- for prod in apply(itertools.product, isids):
1298- islegs = [\
1299- base_objects.Leg({'id':id, 'state': 'initial'}) \
1300+ amplitudes = AmplitudeList()
1301+
1302+ # failed_procs are processes that have already failed
1303+ # based on crossing symmetry
1304+ failed_procs = []
1305+
1306+ model = process_definition['model']
1307+
1308+ isids = [leg['ids'] for leg in \
1309+ filter(lambda leg: leg['state'] == False, process_definition['legs'])]
1310+ fsids = [leg['ids'] for leg in \
1311+ filter(lambda leg: leg['state'] == True, process_definition['legs'])]
1312+
1313+ # Generate all combinations for the initial state
1314+
1315+ for prod in apply(itertools.product, isids):
1316+ islegs = [\
1317+ base_objects.Leg({'id':id, 'state': False}) \
1318 for id in prod]
1319
1320- # Generate all combinations for the final state, and make
1321- # sure to remove double counting
1322-
1323- red_fsidlist = []
1324-
1325- for prod in apply(itertools.product, fsids):
1326-
1327- # Remove double counting between final states
1328- if tuple(sorted(prod)) in red_fsidlist:
1329- continue
1330-
1331- red_fsidlist.append(tuple(sorted(prod)));
1332-
1333- # Generate leg list for process
1334- leg_list = [copy.copy(leg) for leg in islegs]
1335-
1336- leg_list.extend([\
1337- base_objects.Leg({'id':id, 'state': 'final'}) \
1338+ # Generate all combinations for the final state, and make
1339+ # sure to remove double counting
1340+
1341+ red_fsidlist = []
1342+
1343+ for prod in apply(itertools.product, fsids):
1344+
1345+ # Remove double counting between final states
1346+ if tuple(sorted(prod)) in red_fsidlist:
1347+ continue
1348+
1349+ red_fsidlist.append(tuple(sorted(prod)));
1350+
1351+ # Generate leg list for process
1352+ leg_list = [copy.copy(leg) for leg in islegs]
1353+
1354+ leg_list.extend([\
1355+ base_objects.Leg({'id':id, 'state': True}) \
1356 for id in prod])
1357-
1358- legs = base_objects.LegList(leg_list)
1359-
1360- # Setup process
1361- process = base_objects.Process({\
1362- 'legs':legs,
1363- 'model':process_def.get('model'),
1364- 'id': process_def.get('id'),
1365- 'orders': process_def.get('orders'),
1366- 'required_s_channels': \
1367- process_def.get('required_s_channels'),
1368- 'forbidden_s_channels': \
1369- process_def.get('forbidden_s_channels'),
1370- 'forbidden_particles': \
1371- process_def.get('forbidden_particles')})
1372-
1373- # Check for crossed processes
1374- sorted_legs = sorted(legs.get_outgoing_id_list(model))
1375- # Check if crossed process has already failed
1376- # In that case don't check process
1377- if tuple(sorted_legs) in failed_procs:
1378- continue
1379-
1380- amplitude = Amplitude({"process": process})
1381- if not amplitude.generate_diagrams():
1382- # Add process to failed_procs
1383- failed_procs.append(tuple(sorted_legs))
1384- if amplitude.get('diagrams'):
1385- self['amplitudes'].append(amplitude)
1386-
1387-
1388+
1389+ legs = base_objects.LegList(leg_list)
1390+
1391+ # Setup process
1392+ process = base_objects.Process({\
1393+ 'legs':legs,
1394+ 'model':process_definition.get('model'),
1395+ 'id': process_definition.get('id'),
1396+ 'orders': process_definition.get('orders'),
1397+ 'required_s_channels': \
1398+ process_definition.get('required_s_channels'),
1399+ 'forbidden_s_channels': \
1400+ process_definition.get('forbidden_s_channels'),
1401+ 'forbidden_particles': \
1402+ process_definition.get('forbidden_particles'),
1403+ 'is_decay_chain': \
1404+ process_definition.get('is_decay_chain')})
1405+
1406+ # Check for crossed processes
1407+ sorted_legs = sorted(legs.get_outgoing_id_list(model))
1408+ # Check if crossed process has already failed
1409+ # In that case don't check process
1410+ if tuple(sorted_legs) in failed_procs:
1411+ continue
1412+
1413+ amplitude = Amplitude({"process": process})
1414+ if not amplitude.generate_diagrams():
1415+ # Add process to failed_procs
1416+ failed_procs.append(tuple(sorted_legs))
1417+ if amplitude.get('diagrams'):
1418+ amplitudes.append(amplitude)
1419+
1420+ # Return the produced amplitudes
1421+ return amplitudes
1422+
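The static method above builds one candidate process per combination of multiparticle ids using itertools.product, drops permutations of the same final state, and skips any leg combination whose sorted outgoing-id signature has already failed diagram generation (crossing symmetry). Below is a minimal standalone sketch of that bookkeeping, with a hypothetical diagrams_exist callable standing in for Amplitude.generate_diagrams, not the MadGraph classes:

    import itertools

    def sketch_generate_combinations(isids, fsids, diagrams_exist):
        """Toy version of the combination loop above.
        isids/fsids: allowed PDG ids per initial/final leg, e.g. [[2, -2], [21]].
        diagrams_exist: stand-in for Amplitude.generate_diagrams().
        The signature uses a simple sign flip for initial ids; the real code
        asks the model for the outgoing id list."""
        failed_procs = set()
        kept = []
        for iprod in itertools.product(*isids):
            red_fsidlist = set()
            for fprod in itertools.product(*fsids):
                # Remove double counting between final states
                if tuple(sorted(fprod)) in red_fsidlist:
                    continue
                red_fsidlist.add(tuple(sorted(fprod)))
                # Skip processes whose crossed version already failed
                signature = tuple(sorted([-pid for pid in iprod] + list(fprod)))
                if signature in failed_procs:
                    continue
                if diagrams_exist(iprod, fprod):
                    kept.append((iprod, fprod))
                else:
                    failed_procs.add(signature)
        return kept

    # Example with made-up id lists: u/u~ beams into a two-gluon final state
    combos = sketch_generate_combinations([[2, -2], [-2, 2]], [[21], [21]],
                                          lambda i, f: True)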
1423 #===============================================================================
1424 # Global helper methods
1425 #===============================================================================
1426
1427=== modified file 'madgraph/core/helas_objects.py'
1428--- madgraph/core/helas_objects.py 2010-02-13 04:20:50 +0000
1429+++ madgraph/core/helas_objects.py 2010-05-07 07:20:46 +0000
1430@@ -15,6 +15,7 @@
1431
1432 import array
1433 import copy
1434+import gc
1435 import logging
1436 import re
1437 import itertools
1438@@ -64,15 +65,9 @@
1439 # width = 'zero'
1440 # is_part = 'true' Particle not antiparticle
1441 # self_antipart='false' gluon, photo, h, or majorana would be true
1442- self['pdg_code'] = 0
1443- self['name'] = 'none'
1444- self['antiname'] = 'none'
1445- self['spin'] = 1
1446- self['color'] = 1
1447- self['mass'] = 'zero'
1448- self['width'] = 'zero'
1449+ self['particle'] = base_objects.Particle()
1450+ self['antiparticle'] = base_objects.Particle()
1451 self['is_part'] = True
1452- self['self_antipart'] = False
1453 # Properties related to the interaction generating the propagator
1454 # For an e- produced from an e+e-A vertex would have the following
1455 # proporties:
1456@@ -92,6 +87,8 @@
1457 # state = initial/final (for external bosons),
1458 # intermediate (for intermediate bosons),
1459 # incoming/outgoing (for fermions)
1460+ # leg_state = initial/final for initial/final legs
1461+ # intermediate for non-external wavefunctions
1462 # number_external = the 'number' property of the corresponding Leg,
1463 # corresponds to the number of the first external
1464 # particle contributing to this leg
1465@@ -99,10 +96,18 @@
1466 # -1 is used only if there is a fermion flow clash
1467 # due to a Majorana particle
1468 self['state'] = 'incoming'
1469+ self['leg_state'] = True
1470 self['mothers'] = HelasWavefunctionList()
1471 self['number_external'] = 0
1472 self['number'] = 0
1473 self['fermionflow'] = 1
1474+ # The decay flag is used in processes with defined decay chains,
1475+ # to indicate that this wavefunction has a decay defined
1476+ self['decay'] = False
1477+ # The onshell flag is used in processes with defined decay
1478+ # chains, to indicate that this wavefunction is decayed and
1479+ # should be onshell
1480+ self['onshell'] = False
1481
1482 # Customized constructor
1483 def __init__(self, *arguments):
1484@@ -117,17 +122,30 @@
1485 leg = arguments[0]
1486 interaction_id = arguments[1]
1487 model = arguments[2]
1488- self.set('pdg_code', leg.get('id'), model)
1489+ # decay_ids is the pdg codes for particles with decay
1490+ # chains defined
1491+ decay_ids = []
1492+ if len(arguments) > 3:
1493+ decay_ids = arguments[3]
1494+ self.set('particle', leg.get('id'), model)
1495 self.set('number_external', leg.get('number'))
1496 self.set('number', leg.get('number'))
1497- self.set('state', leg.get('state'))
1498+ self.set('state', {False: 'initial', True: 'final'}[leg.get('state')])
1499+ self.set('leg_state', leg.get('state'))
1500+ # Need to set 'decay' to True for particles which will be
1501+ # decayed later, in order to not combine such processes
1502+ # although they might have identical matrix elements before
1503+ # the decay is applied
1504+ if self['state'] == 'final' and self.get('pdg_code') in decay_ids:
1505+ self.set('decay', True)
1506+
1507 # Set fermion flow state. Initial particle and final
1508 # antiparticle are incoming, and vice versa for
1509 # outgoing
1510 if self.is_fermion():
1511- if leg.get('state') == 'initial' and \
1512+ if leg.get('state') == False and \
1513 self.get('is_part') or \
1514- leg.get('state') == 'final' and \
1515+ leg.get('state') == True and \
1516 not self.get('is_part'):
1517 self.set('state', 'incoming')
1518 else:
1519@@ -141,45 +159,12 @@
1520 def filter(self, name, value):
1521 """Filter for valid wavefunction property values."""
1522
1523- if name == 'pdg_code':
1524- if not isinstance(value, int):
1525- raise self.PhysicsObjectError, \
1526- "%s is not a valid pdg_code for wavefunction" % \
1527- str(value)
1528-
1529- if name in ['name', 'antiname']:
1530- # Must start with a letter, followed by letters, digits,
1531- # - and + only
1532- p = re.compile('\A[a-zA-Z]+[\w]*[\-\+]*~?\Z')
1533- if not p.match(value):
1534- raise self.PhysicsObjectError, \
1535- "%s is not a valid particle name" % value
1536-
1537- if name is 'spin':
1538- if not isinstance(value, int):
1539- raise self.PhysicsObjectError, \
1540- "Spin %s is not an integer" % repr(value)
1541- if value < 1 or value > 5:
1542- raise self.PhysicsObjectError, \
1543- "Spin %i is smaller than one" % value
1544-
1545- if name is 'color':
1546- if not isinstance(value, int):
1547- raise self.PhysicsObjectError, \
1548- "Color %s is not an integer" % repr(value)
1549- if value not in [1, 3, 6, 8]:
1550- raise self.PhysicsObjectError, \
1551- "Color %i is not valid" % value
1552-
1553- if name in ['mass', 'width']:
1554- # Must start with a letter, followed by letters, digits or _
1555- p = re.compile('\A[a-zA-Z]+[\w\_]*\Z')
1556- if not p.match(value):
1557- raise self.PhysicsObjectError, \
1558- "%s is not a valid name for mass/width variable" % \
1559- value
1560-
1561- if name in ['is_part', 'self_antipart']:
1562+ if name in ['particle', 'antiparticle']:
1563+ if not isinstance(value, base_objects.Particle):
1564+ raise self.PhysicsObjectError, \
1565+ "%s tag %s is not a particle" % (name, repr(value))
1566+
1567+ if name == 'is_part':
1568 if not isinstance(value, bool):
1569 raise self.PhysicsObjectError, \
1570 "%s tag %s is not a boolean" % (name, repr(value))
1571@@ -220,7 +205,7 @@
1572 str(value)
1573
1574 if name == 'coupl_key':
1575- if not isinstance(value, tuple):
1576+ if value and not isinstance(value, tuple):
1577 raise self.PhysicsObjectError, \
1578 "%s is not a valid tuple" % str(value)
1579 if len(value) != 2:
1580@@ -240,6 +225,11 @@
1581 raise self.PhysicsObjectError, \
1582 "%s is not a valid wavefunction " % str(value) + \
1583 "state (incoming|outgoing|intermediate)"
1584+ if name == 'leg_state':
1585+ if value not in [False, True]:
1586+ raise self.PhysicsObjectError, \
1587+ "%s is not a valid wavefunction " % str(value) + \
1588+ "state (incoming|outgoing|intermediate)"
1589 if name in ['fermionflow']:
1590 if not isinstance(value, int):
1591 raise self.PhysicsObjectError, \
1592@@ -260,8 +250,33 @@
1593 "%s is not a valid list of mothers for wavefunction" % \
1594 str(value)
1595
1596+ if name in ['decay', 'onshell']:
1597+ if not isinstance(value, bool):
1598+ raise self.PhysicsObjectError, \
1599+ "%s is not a valid bool" % str(value) + \
1600+ " for decay or onshell"
1601+
1602 return True
1603
1604+ # Enhanced get function, where we can directly call the properties of the particle
1605+ def get(self, name):
1606+ """When calling any property related to the particle,
1607+ automatically call the corresponding property of the particle."""
1608+
1609+ if name in ['spin', 'mass', 'width', 'self_antipart']:
1610+ return self['particle'].get(name)
1611+ elif name == 'pdg_code':
1612+ return self['particle'].get_pdg_code()
1613+ elif name == 'color':
1614+ return self['particle'].get_color()
1615+ elif name == 'name':
1616+ return self['particle'].get_name()
1617+ elif name == 'antiname':
1618+ return self['particle'].get_anti_name()
1619+ else:
1620+ return super(HelasWavefunction, self).get(name)
1621+
1622+
1623 # Enhanced set function, where we can append a model
1624
1625 def set(self, *arguments):
1626@@ -296,17 +311,13 @@
1627 if inter.get('couplings'):
1628 self.set('coupling', inter.get('couplings').values()[0])
1629 return True
1630- elif name == 'pdg_code':
1631- self.set('pdg_code', value)
1632- part = model.get('particle_dict')[value]
1633- self.set('name', part.get('name'))
1634- self.set('antiname', part.get('antiname'))
1635- self.set('spin', part.get('spin'))
1636- self.set('color', part.get('color'))
1637- self.set('mass', part.get('mass'))
1638- self.set('width', part.get('width'))
1639- self.set('is_part', part.get('is_part'))
1640- self.set('self_antipart', part.get('self_antipart'))
1641+ elif name == 'particle':
1642+ self.set('particle', model.get('particle_dict')[value])
1643+ self.set('is_part', self['particle'].get('is_part'))
1644+ if self['particle'].get('self_antipart'):
1645+ self.set('antiparticle', self['particle'])
1646+ else:
1647+ self.set('antiparticle', model.get('particle_dict')[-value])
1648 return True
1649 else:
1650 raise self.PhysicsObjectError, \
1651@@ -317,14 +328,19 @@
1652 def get_sorted_keys(self):
1653 """Return particle property names as a nicely sorted list."""
1654
1655- return ['pdg_code', 'name', 'antiname', 'spin', 'color',
1656- 'mass', 'width', 'is_part', 'self_antipart',
1657+ return ['particle', 'antiparticle', 'is_part',
1658 'interaction_id', 'pdg_codes', 'inter_color', 'lorentz',
1659 'coupling', 'coupl_key', 'state', 'number_external',
1660 'number', 'fermionflow', 'mothers']
1661
1662 # Helper functions
1663
1664+ def flip_part_antipart(self):
1665+ """Flip between particle and antiparticle."""
1666+ part = self.get('particle')
1667+ self.set('particle', self.get('antiparticle'))
1668+ self.set('antiparticle', part)
1669+
1670 def is_fermion(self):
1671 return self.get('spin') % 2 == 0
1672
1673@@ -343,29 +359,17 @@
1674 # Lorentz or color structures
1675 array_rep.extend(list(self['coupl_key']))
1676 # Finally, the mother numbers
1677- array_rep.extend([mother.get('number') for \
1678+ array_rep.extend([mother['number'] for \
1679 mother in self['mothers']])
1680 return array_rep
1681
1682- def get_pdg_code_outgoing(self):
1683+ def get_pdg_code(self):
1684 """Generate the corresponding pdg_code for an outgoing particle,
1685 taking into account fermion flow, for mother wavefunctions"""
1686
1687- if self.get('self_antipart'):
1688- #This is its own antiparticle e.g. a gluon
1689- return self.get('pdg_code')
1690-
1691- if self.is_boson():
1692- # This is a boson
1693- return self.get('pdg_code')
1694-
1695- if (self.get('state') == 'incoming' and self.get('is_part') \
1696- or self.get('state') == 'outgoing' and not self.get('is_part')):
1697- return - self.get('pdg_code')
1698- else:
1699- return self.get('pdg_code')
1700-
1701- def get_pdg_code_incoming(self):
1702+ return self.get('pdg_code')
1703+
1704+ def get_anti_pdg_code(self):
1705 """Generate the corresponding pdg_code for an incoming particle,
1706 taking into account fermion flow, for mother wavefunctions"""
1707
1708@@ -373,15 +377,7 @@
1709 #This is its own antiparticle e.g. gluon
1710 return self.get('pdg_code')
1711
1712- if self.is_boson():
1713- # This is a boson
1714- return - self.get('pdg_code')
1715-
1716- if (self.get('state') == 'outgoing' and self.get('is_part') \
1717- or self.get('state') == 'incoming' and not self.get('is_part')):
1718- return - self.get('pdg_code')
1719- else:
1720- return self.get('pdg_code')
1721+ return - self.get('pdg_code')
1722
1723 def set_scalar_coupling_sign(self, model):
1724 """Check if we need to add a minus sign due to non-identical
1725@@ -408,6 +404,16 @@
1726 raise self.PhysicsObjectError, \
1727 "%s is not a valid model for call to set_state_and_particle" \
1728 % repr(model)
1729+
1730+ # leg_state is final, unless there is exactly one initial
1731+ # state particle involved in the combination -> t-channel
1732+ if len(filter(lambda mother: mother.get('leg_state') == False,
1733+ self.get('mothers'))) == 1:
1734+ leg_state = False
1735+ else:
1736+ leg_state = True
1737+ self.set('leg_state', leg_state)
1738+
1739 # Start by setting the state of the wavefunction
1740 if self.is_boson():
1741 # For boson, set state to intermediate
1742@@ -449,14 +455,6 @@
1743 self.set('state', mother.get_with_flow('state'))
1744 self.set('is_part', mother.get_with_flow('is_part'))
1745
1746- # We want the particle created here to go into the next
1747- # vertex, so we need to flip identity for incoming
1748- # antiparticle and outgoing particle.
1749- if not self.get('self_antipart') and \
1750- (self.get('state') == 'incoming' and not self.get('is_part') \
1751- or self.get('state') == 'outgoing' and self.get('is_part')):
1752- self.set('pdg_code', -self.get('pdg_code'), model)
1753-
1754 return True
1755
1756 def check_and_fix_fermion_flow(self,
1757@@ -497,13 +495,14 @@
1758 external_wavefunctions,
1759 self.get_with_flow('state'),
1760 wf_number)
1761+
1762 return self, wf_number
1763
1764 def check_majorana_and_flip_flow(self, found_majorana,
1765 wavefunctions,
1766 diagram_wavefunctions,
1767 external_wavefunctions,
1768- wf_number):
1769+ wf_number, force_flip_flow=False):
1770 """Recursive function. Check for Majorana fermion. If found,
1771 continue down to external leg, then flip all the fermion flows
1772 on the way back up, in the correct way:
1773@@ -511,6 +510,15 @@
1774 wavefunctions before the last Majorana fermion, instead flip
1775 particle identities and state. Return the new (or old)
1776 wavefunction, and the present wavefunction number.
1777+
1778+ Arguments:
1779+ found_majorana: boolean
1780+ wavefunctions: HelasWavefunctionList with previously
1781+ defined wavefunctions
1782+ diagram_wavefunctions: HelasWavefunctionList with the wavefunctions
1783+ already defined in this diagram
1784+        external_wavefunctions: dictionary from leg number to external wf
1785+ wf_number: The present wavefunction number
1786 """
1787
1788 if not found_majorana:
1789@@ -523,7 +531,9 @@
1790 # Stop recursion at the external leg
1791 mothers = copy.copy(self.get('mothers'))
1792 if not mothers:
1793- if not self.get('self_antipart'):
1794+ if force_flip_flow:
1795+ flip_flow = True
1796+ elif not self.get('self_antipart'):
1797 flip_flow = found_majorana
1798 else:
1799 flip_sign = found_majorana
1800@@ -538,17 +548,20 @@
1801 raise self.PhysicsObjectError, \
1802 "6-fermion vertices not yet implemented"
1803 if len(fermion_mother) == 0:
1804- raise self.PhysicsObjectError, \
1805- "Previous unresolved fermion flow in mother chain"
1806-
1807- # Perform recursion by calling on mother
1808- new_mother, wf_number = fermion_mother[0].\
1809- check_majorana_and_flip_flow(\
1810- found_majorana,
1811- wavefunctions,
1812- diagram_wavefunctions,
1813- external_wavefunctions,
1814- wf_number)
1815+ # Fermion flow already resolved for mothers
1816+ fermion_mother = filter(lambda wf: wf.is_fermion(),
1817+ mothers)
1818+ new_mother = fermion_mother[0]
1819+ else:
1820+ # Perform recursion by calling on mother
1821+ new_mother, wf_number = fermion_mother[0].\
1822+ check_majorana_and_flip_flow(\
1823+ found_majorana,
1824+ wavefunctions,
1825+ diagram_wavefunctions,
1826+ external_wavefunctions,
1827+ wf_number,
1828+ force_flip_flow)
1829
1830 # If this is Majorana and mother has different fermion
1831 # flow, it means that we should from now on in the chain
1832@@ -575,8 +588,29 @@
1833 # Update wavefunction number
1834 wf_number = wf_number + 1
1835 new_wf.set('number', wf_number)
1836- new_wf.set('mothers', mothers)
1837- diagram_wavefunctions.append(new_wf)
1838+ try:
1839+ # In call from insert_decay, we want to replace
1840+ # also identical wavefunctions in the same diagram
1841+ old_wf_index = diagram_wavefunctions.index(self)
1842+ diagram_wavefunctions[old_wf_index] = new_wf
1843+ except ValueError:
1844+ diagram_wavefunctions.append(new_wf)
1845+ # Make sure that new_wf comes before any wavefunction
1846+ # which has it as mother
1847+ for i, wf in enumerate(diagram_wavefunctions):
1848+ if self in wf.get('mothers'):
1849+ # Remove new_wf, in order to insert it below
1850+ diagram_wavefunctions.pop()
1851+ # Update wf numbers
1852+ new_wf.set('number', wf.get('number'))
1853+ for w in diagram_wavefunctions[i:]:
1854+ w.set('number', w.get('number') + 1)
1855+ # Insert wavefunction
1856+ diagram_wavefunctions.insert(i, new_wf)
1857+ break
1858+
1859+ # Set new mothers
1860+ new_wf.set('mothers', mothers)
1861
1862 # Now flip flow or sign
1863 if flip_flow:
1864@@ -590,14 +624,17 @@
1865 state != new_wf.get('state'),
1866 ['incoming', 'outgoing'])[0])
1867 new_wf.set('is_part', not new_wf.get('is_part'))
1868- if not new_wf.get('self_antipart'):
1869- new_wf.set('pdg_code', -new_wf.get('pdg_code'))
1870
1871 try:
1872 # Use the copy in wavefunctions instead.
1873 # Remove this copy from diagram_wavefunctions
1874 new_wf = wavefunctions[wavefunctions.index(new_wf)]
1875- diagram_wavefunctions.remove(new_wf)
1876+ index = diagram_wavefunctions.index(new_wf)
1877+ diagram_wavefunctions.pop(index)
1878+ # We need to decrease the wf number for later
1879+ # diagram wavefunctions
1880+ for wf in diagram_wavefunctions[index:]:
1881+ wf.set('number', wf.get('number') - 1)
1882 # Since we reuse the old wavefunction, reset wf_number
1883 wf_number = wf_number - 1
1884 except ValueError:
1885@@ -659,7 +696,7 @@
1886 fermion_number_list.extend(in_list[1])
1887 fermion_number_list.extend(out_list[1])
1888 elif len(in_fermions) != len(out_fermions):
1889- raise self.HelasWavefunctionError, \
1890+ raise self.PhysicsObjectError, \
1891 "Error: %d incoming fermions != %d outgoing fermions" % \
1892 (len(in_fermions), len(out_fermions))
1893
1894@@ -718,6 +755,211 @@
1895
1896 return (tuple(res), self.get('lorentz'))
1897
1898+ def get_base_vertices(self, wf_dict = {}, vx_list = [], optimization = 1):
1899+ """Recursive method to get a base_objects.VertexList
1900+ corresponding to this wavefunction and its mothers."""
1901+
1902+ vertices = base_objects.VertexList()
1903+
1904+ if not self.get('mothers'):
1905+ return vertices
1906+
1907+ # Add vertices for all mothers
1908+ for mother in self.get('mothers'):
1909+ # This is where recursion happens
1910+ vertices.extend(mother.get_base_vertices(wf_dict, vx_list,
1911+ optimization))
1912+ # Generate last vertex
1913+ legs = base_objects.LegList()
1914+
1915+ # We use the from_group flag to indicate whether this outgoing
1916+ # leg corresponds to a decaying (onshell) particle or not
1917+ try:
1918+ lastleg = wf_dict[self.get('number')]
1919+ except KeyError:
1920+ lastleg = base_objects.Leg({
1921+ 'id': self.get_pdg_code(),
1922+ 'number': self.get('number_external'),
1923+ 'state': self.get('leg_state'),
1924+ 'from_group': self.get('onshell')
1925+ })
1926+ if optimization != 0:
1927+ wf_dict[self.get('number')] = lastleg
1928+
1929+ for mother in self.get('mothers'):
1930+ try:
1931+ leg = wf_dict[mother.get('number')]
1932+ except KeyError:
1933+ leg = base_objects.Leg({
1934+ 'id': mother.get_pdg_code(),
1935+ 'number': mother.get('number_external'),
1936+ 'state': mother.get('leg_state'),
1937+ 'from_group': mother.get('onshell')
1938+ })
1939+ if optimization != 0:
1940+ wf_dict[mother.get('number')] = leg
1941+ legs.append(leg)
1942+
1943+ legs.append(lastleg)
1944+
1945+ vertex = base_objects.Vertex({
1946+ 'id': self.get('interaction_id'),
1947+ 'legs': legs})
1948+
1949+ try:
1950+ index = vx_list.index(vertex)
1951+ vertex = vx_list[index]
1952+ except ValueError:
1953+ pass
1954+
1955+ vertices.append(vertex)
1956+
1957+ return vertices
1958+
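get_base_vertices above rebuilds a base_objects.VertexList by recursing through the mothers and, when optimization is on, reuses one Leg per wavefunction number via the wf_dict cache, so that wavefunctions shared between diagrams map to the same leg object. A stripped-down sketch of that cached recursion, on plain dicts instead of HelasWavefunction/Leg/Vertex:

    def sketch_base_vertices(wf, leg_cache):
        """Toy recursion: wf is a dict with 'number', 'id' and 'mothers';
        leg_cache plays the role of wf_dict."""
        vertices = []
        if not wf['mothers']:
            return vertices
        for mother in wf['mothers']:
            vertices.extend(sketch_base_vertices(mother, leg_cache))
        def leg_for(w):
            # Reuse the same leg dict for a wavefunction seen before
            return leg_cache.setdefault(w['number'],
                                        {'id': w['id'], 'number': w['number']})
        legs = [leg_for(m) for m in wf['mothers']] + [leg_for(wf)]
        vertices.append({'legs': legs})
        return vertices

    # Two external wavefunctions combined into internal wavefunction 3
    ext1 = {'number': 1, 'id': 2, 'mothers': []}
    ext2 = {'number': 2, 'id': -2, 'mothers': []}
    internal = {'number': 3, 'id': 21, 'mothers': [ext1, ext2]}
    assert len(sketch_base_vertices(internal, {})) == 1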
1959+ def get_color_indices(self):
1960+ """Recursive method to get the color indices corresponding to
1961+ this wavefunction and its mothers."""
1962+
1963+ if not self.get('mothers'):
1964+ return []
1965+
1966+ color_indices = []
1967+
1968+ # Add color indices for all mothers
1969+ for mother in self.get('mothers'):
1970+ # This is where recursion happens
1971+ color_indices.extend(mother.get_color_indices())
1972+ # Add this wf's color index
1973+ color_indices.append(self.get('coupl_key')[0])
1974+
1975+ return color_indices
1976+
1977+ def get_s_and_t_channels(self, ninitial, mother_leg):
1978+ """Returns two lists of vertices corresponding to the s- and
1979+ t-channels that can be traced from this wavefunction, ordered
1980+ from the outermost s-channel and in/down towards the highest
1981+        from the outermost s-channel, going in and down towards the
1982+        highest-number initial state leg."""
1983+ schannels = base_objects.VertexList()
1984+ tchannels = base_objects.VertexList()
1985+
1986+ mother_leg = copy.copy(mother_leg)
1987+
1988+ # Add vertices for all s-channel mothers
1989+ final_mothers = filter(lambda wf: wf.get('number_external') > ninitial,
1990+ self.get('mothers'))
1991+
1992+ for mother in final_mothers:
1993+ schannels.extend(mother.get_base_vertices(optimization = 0))
1994+
1995+ # Extract initial state mothers
1996+ init_mothers = filter(lambda wf: wf.get('number_external') <= ninitial,
1997+ self.get('mothers'))
1998+
1999+ if len(init_mothers) > 2:
2000+ raise self.PhysicsObjectError, \
2001+ "get_s_and_t_channels can only handle up to 2 initial states"
2002+
2003+ if len(init_mothers) == 1:
2004+ # This is an s-channel or t-channel leg. Add vertex and
2005+ # continue stepping down towards external initial state
2006+ legs = base_objects.LegList()
2007+
2008+ if ninitial == 1 or init_mothers[0].get('number_external') == 2 \
2009+ or init_mothers[0].get('leg_state') == True:
2010+ # This is an s-channel or a leg on its way towards the
2011+ # final vertex
2012+ mothers = final_mothers + init_mothers
2013+ else:
2014+ # This is a t-channel leg going up towards leg number 1
2015+ mothers = init_mothers + final_mothers
2016+
2017+ for mother in mothers:
2018+ legs.append(base_objects.Leg({
2019+ 'id': mother.get_pdg_code(),
2020+ 'number': mother.get('number_external'),
2021+ 'state': mother.get('leg_state'),
2022+ 'from_group': False
2023+ }))
2024+
2025+ if ninitial == 1 or (init_mothers[0].get('number_external') == 1 \
2026+ and init_mothers[0].get('leg_state') == False):
2027+ # For decay processes or if the mother is going
2028+ # towards external leg 1, mother leg is resulting wf
2029+ legs.append(mother_leg)
2030+ else:
2031+ # If this is an s-channel leg or we are going towards
2032+ # external leg 2, mother leg is one of the mothers
2033+ legs.insert(-1, mother_leg)
2034+
2035+ # Renumber resulting leg according to minimum leg number
2036+ legs[-1].set('number', min([l.get('number') for l in legs[:-1]]))
2037+
2038+ vertex = base_objects.Vertex({
2039+ 'id': self.get('interaction_id'),
2040+ 'legs': legs})
2041+
2042+ # Add s- and t-channels from further down
2043+ mother_s, tchannels = \
2044+ init_mothers[0].get_s_and_t_channels(ninitial, legs[-1])
2045+
2046+ schannels.extend(mother_s)
2047+
2048+ if ninitial == 1 or init_mothers[0].get('leg_state') == True:
2049+ # This leg is s-channel
2050+ schannels.append(vertex)
2051+ elif init_mothers[0].get('number_external') == 1:
2052+ # If we are going towards external leg 1, add to
2053+ # t-channels, at end
2054+ tchannels.append(vertex)
2055+ else:
2056+ # If we are going towards external leg 2, add to
2057+ # t-channels, at start
2058+ tchannels.insert(0, vertex)
2059+
2060+ elif len(init_mothers) == 2:
2061+ # This is a t-channel junction. Start with the leg going
2062+ # towards external particle 1, and then do external
2063+ # particle 2
2064+ init_mothers1 = filter(lambda wf: wf.get('number_external') == 1,
2065+ init_mothers)[0]
2066+ init_mothers2 = filter(lambda wf: wf.get('number_external') == 2,
2067+ init_mothers)[0]
2068+
2069+ # Create vertex
2070+ legs = base_objects.LegList()
2071+ for mother in final_mothers + [init_mothers1, init_mothers2]:
2072+ legs.append(base_objects.Leg({
2073+ 'id': mother.get_pdg_code(),
2074+ 'number': mother.get('number_external'),
2075+ 'state': mother.get('leg_state'),
2076+ 'from_group': False
2077+ }))
2078+ legs.insert(-1, mother_leg)
2079+
2080+ # Renumber resulting leg according to minimum leg number
2081+ legs[-1].set('number', min([l.get('number') for l in legs[:-1]]))
2082+
2083+ vertex = base_objects.Vertex({
2084+ 'id': self.get('interaction_id'),
2085+ 'legs': legs})
2086+
2087+ # Add s- and t-channels going down towards leg 1
2088+ mother_s, tchannels = \
2089+ init_mothers1.get_s_and_t_channels(ninitial, legs[0])
2090+ schannels.extend(mother_s)
2091+
2092+ # Add vertex
2093+ tchannels.append(vertex)
2094+
2095+ # Add s- and t-channels going down towards leg 2
2096+ mother_s, mother_t = \
2097+ init_mothers2.get_s_and_t_channels(ninitial, legs[-1])
2098+ schannels.extend(mother_s)
2099+ tchannels.extend(mother_t)
2100+
2101+ return schannels, tchannels
2102+
2103 # Overloaded operators
2104
2105 def __eq__(self, other):
2106@@ -733,20 +975,24 @@
2107
2108 # Check relevant directly defined properties
2109 if self['number_external'] != other['number_external'] or \
2110- self['spin'] != other['spin'] or \
2111- self['self_antipart'] != other['self_antipart'] or \
2112 self['fermionflow'] != other['fermionflow'] or \
2113- self['mass'] != other['mass'] or \
2114- self['width'] != other['width'] or \
2115- self['color'] != other['color'] or \
2116+ self['coupl_key'] != other['coupl_key'] or \
2117 self['lorentz'] != other['lorentz'] or \
2118 self['coupling'] != other['coupling'] or \
2119- self['state'] != other['state']:
2120+ self['state'] != other['state'] or \
2121+ self['onshell'] != other['onshell'] or \
2122+ self.get('spin') != other.get('spin') or \
2123+ self.get('self_antipart') != other.get('self_antipart') or \
2124+ self.get('mass') != other.get('mass') or \
2125+ self.get('width') != other.get('width') or \
2126+ self.get('color') != other.get('color') or \
2127+ self['decay'] != other['decay'] or \
2128+ self['decay'] and self['particle'] != other['particle']:
2129 return False
2130
2131 # Check that mothers have the same numbers (only relevant info)
2132- return [ mother.get('number') for mother in self['mothers'] ] == \
2133- [ mother.get('number') for mother in other['mothers'] ]
2134+ return sorted([mother['number'] for mother in self['mothers']]) == \
2135+ sorted([mother['number'] for mother in other['mothers']])
2136
2137 def __ne__(self, other):
2138 """Overloading the nonequality operator, to make comparison easy"""
2139@@ -768,12 +1014,16 @@
2140
2141 # Helper functions
2142
2143+ def to_array(self):
2144+ return array.array('i', [w['number'] for w in self])
2145+
2146 def check_and_fix_fermion_flow(self,
2147 wavefunctions,
2148 diagram_wavefunctions,
2149 external_wavefunctions,
2150 my_state,
2151- wf_number):
2152+ wf_number,
2153+ force_flip_flow=False):
2154 """Check for clashing fermion flow (N(incoming) !=
2155 N(outgoing)). If found, we need to trace back through the
2156 mother structure (only looking at fermions), until we find a
2157@@ -813,6 +1063,11 @@
2158 Please decompose your vertex into 2-fermion
2159 vertices to get fermion flow correct."""
2160
2161+ # Note that the order of mothers is given by the external
2162+        # number, meaning that the order will in general be different
2163+ # for t- and s-channel diagrams, hence giving duplication of
2164+ # wave functions. I don't think there's anything to do about
2165+ # this.
2166 for mother in fermion_mothers:
2167 if Nincoming > Noutgoing and \
2168 mother.get_with_flow('state') == 'outgoing' or \
2169@@ -824,14 +1079,16 @@
2170
2171 # Call recursive function to check for Majorana fermions
2172 # and flip fermionflow if found
2173+
2174 found_majorana = False
2175-
2176 new_mother, wf_number = mother.check_majorana_and_flip_flow(\
2177 found_majorana,
2178 wavefunctions,
2179 diagram_wavefunctions,
2180 external_wavefunctions,
2181- wf_number)
2182+ wf_number,
2183+ force_flip_flow)
2184+
2185 # Replace old mother with new mother
2186 self[self.index(mother)] = new_mother
2187
2188@@ -849,12 +1106,41 @@
2189 mother_states))
2190
2191 if Nincoming != Noutgoing:
2192- raise self.PhysicsObjectListError, \
2193- "Failed to fix fermion flow, %d != %d" % \
2194- (Nincoming, Noutgoing)
2195+ # No Majorana fermion in any relevant legs - try again,
2196+ # but simply use the first relevant leg
2197+ force_flip_flow = True
2198+ wf_number = self.check_and_fix_fermion_flow(\
2199+ wavefunctions,
2200+ diagram_wavefunctions,
2201+ external_wavefunctions,
2202+ my_state,
2203+ wf_number,
2204+ force_flip_flow)
2205
2206 return wf_number
2207
2208+ def insert_own_mothers(self):
2209+ """Recursively go through a wavefunction list and insert the
2210+ mothers of all wavefunctions, return the result.
2211+ Assumes that all wavefunctions have unique numbers."""
2212+
2213+ res = copy.copy(self)
2214+ # Recursively build up res
2215+ for wf in self:
2216+ index = res.index(wf)
2217+ res = res[:index] + wf.get('mothers').insert_own_mothers() \
2218+ + res[index:]
2219+
2220+ # Make sure no wavefunctions occur twice, by removing doublets
2221+ # from the back
2222+ i = len(res) - 1
2223+ while res[:i]:
2224+ if res[i].get('number') in [w.get('number') for w in res[:i]]:
2225+ res.pop(i)
2226+ i = i - 1
2227+
2228+ return res
2229+
2230 #===============================================================================
2231 # HelasAmplitude
2232 #===============================================================================
2233@@ -873,6 +1159,8 @@
2234 self['inter_color'] = None
2235 self['lorentz'] = ''
2236 self['coupling'] = 'none'
2237+        # The color and Lorentz indices used in this amplitude
2238+ self['coupl_key'] = (0, 0)
2239 # Properties relating to the vertex
2240 self['number'] = 0
2241 self['fermionfactor'] = 0
2242@@ -933,6 +1221,17 @@
2243 "%s is not a valid coupling string" % \
2244 str(value)
2245
2246+ if name == 'coupl_key':
2247+ if value and not isinstance(value, tuple):
2248+ raise self.PhysicsObjectError, \
2249+ "%s is not a valid tuple" % str(value)
2250+ if len(value) != 2:
2251+ raise self.PhysicsObjectError, \
2252+ "%s is not a valid tuple with 2 elements" % str(value)
2253+ if not isinstance(value[0], int) or not isinstance(value[1], int):
2254+ raise self.PhysicsObjectError, \
2255+ "%s is not a valid tuple of integer" % str(value)
2256+
2257 if name == 'number':
2258 if not isinstance(value, int):
2259 raise self.PhysicsObjectError, \
2260@@ -1019,8 +1318,8 @@
2261 """Return particle property names as a nicely sorted list."""
2262
2263 return ['interaction_id', 'pdg_codes', 'inter_color', 'lorentz',
2264- 'coupling', 'number', 'color_indices', 'fermionfactor',
2265- 'mothers']
2266+ 'coupling', 'coupl_key', 'number', 'color_indices',
2267+ 'fermionfactor', 'mothers']
2268
2269
2270 # Helper functions
2271@@ -1039,9 +1338,10 @@
2272 wavefunctions,
2273 diagram_wavefunctions,
2274 external_wavefunctions,
2275- 'nostate',
2276+ "Nostate",
2277 wf_number)
2278
2279+
2280 def needs_hermitian_conjugate(self):
2281 """Returns true if any of the mothers have negative
2282 fermionflow"""
2283@@ -1099,7 +1399,7 @@
2284 fermion_number_list.extend(in_list[1])
2285 fermion_number_list.extend(out_list[1])
2286 elif len(in_fermions) != len(out_fermions):
2287- raise self.HelasWavefunctionError, \
2288+ raise self.PhysicsObjectError, \
2289 "Error: %d incoming fermions != %d outgoing fermions" % \
2290 (len(in_fermions), len(out_fermions))
2291
2292@@ -1124,6 +1424,180 @@
2293
2294 return (-1) ** nflips
2295
2296+ def get_base_diagram(self, wf_dict = {}, vx_list = [], optimization = 1):
2297+ """Return the base_objects.Diagram which corresponds to this
2298+ amplitude, using a recursive method for the wavefunctions."""
2299+
2300+ vertices = base_objects.VertexList()
2301+
2302+ # Add vertices for all mothers
2303+ for mother in self.get('mothers'):
2304+ vertices.extend(mother.get_base_vertices(wf_dict, vx_list,
2305+ optimization))
2306+ # Generate last vertex
2307+ legs = base_objects.LegList()
2308+ for mother in self.get('mothers'):
2309+ try:
2310+ leg = wf_dict[mother.get('number')]
2311+ except KeyError:
2312+ leg = base_objects.Leg({
2313+ 'id': mother.get_pdg_code(),
2314+ 'number': mother.get('number_external'),
2315+ 'state': mother.get('leg_state'),
2316+ 'from_group': mother.get('onshell')
2317+ })
2318+ if optimization != 0:
2319+ wf_dict[mother.get('number')] = leg
2320+ legs.append(leg)
2321+
2322+ vertices.append(base_objects.Vertex({
2323+ 'id': self.get('interaction_id'),
2324+ 'legs': legs}))
2325+
2326+ return base_objects.Diagram({'vertices': vertices})
2327+
2328+ def get_s_and_t_channels(self, ninitial):
2329+ """Returns two lists of vertices corresponding to the s- and
2330+        t-channels of this amplitude/diagram, ordered from the outermost
2331+        s-channel, going in and down towards the highest-number initial
2332+        state leg."""
2333+
2334+ schannels = base_objects.VertexList()
2335+ tchannels = base_objects.VertexList()
2336+
2337+ # Add vertices for all s-channel mothers
2338+ final_mothers = filter(lambda wf: wf.get('number_external') > ninitial,
2339+ self.get('mothers'))
2340+
2341+ for mother in final_mothers:
2342+ schannels.extend(mother.get_base_vertices(optimization = 0))
2343+
2344+ # Extract initial state mothers
2345+ init_mothers = filter(lambda wf: wf.get('number_external') <= ninitial,
2346+ self.get('mothers'))
2347+
2348+ if len(init_mothers) > 2:
2349+ raise self.PhysicsObjectError, \
2350+ "get_s_and_t_channels can only handle up to 2 initial states"
2351+
2352+ if len(init_mothers) == 1:
2353+ # This is an s-channel leg. Add vertex and start stepping down
2354+ # towards initial state
2355+
2356+ # Create vertex
2357+ legs = base_objects.LegList()
2358+ for mother in final_mothers + init_mothers:
2359+ legs.append(base_objects.Leg({
2360+ 'id': mother.get_pdg_code(),
2361+ 'number': mother.get('number_external'),
2362+ 'state': mother.get('leg_state'),
2363+ 'from_group': False
2364+ }))
2365+
2366+ # Renumber resulting leg according to minimum leg number
2367+ legs[-1].set('number', min([l.get('number') for l in legs[:-1]]))
2368+
2369+ # Add vertex to s-channels
2370+ schannels.append(base_objects.Vertex({
2371+ 'id': self.get('interaction_id'),
2372+ 'legs': legs}))
2373+
2374+ # Add s- and t-channels from further down
2375+ mother_s, tchannels = init_mothers[0].\
2376+ get_s_and_t_channels(ninitial, legs[-1])
2377+
2378+ schannels.extend(mother_s)
2379+ else:
2380+ # This is a t-channel leg. Start with the leg going
2381+ # towards external particle 1, and then do external
2382+ # particle 2
2383+ init_mothers1 = filter(lambda wf: wf.get('number_external') == 1,
2384+ init_mothers)[0]
2385+ init_mothers2 = filter(lambda wf: wf.get('number_external') == 2,
2386+ init_mothers)[0]
2387+
2388+ # Create vertex
2389+ legs = base_objects.LegList()
2390+ for mother in [init_mothers1] + final_mothers + [init_mothers2]:
2391+ legs.append(base_objects.Leg({
2392+ 'id': mother.get_pdg_code(),
2393+ 'number': mother.get('number_external'),
2394+ 'state': mother.get('leg_state'),
2395+ 'from_group': False
2396+ }))
2397+ # Sort the legs
2398+ legs = base_objects.LegList(\
2399+ sorted(legs[:-1], lambda l1, l2: l1.get('number') - \
2400+ l2.get('number')) + legs[-1:])
2401+ # Renumber resulting leg according to minimum leg number
2402+ legs[-1].set('number', min([l.get('number') for l in legs[:-1]]))
2403+
2404+ vertex = base_objects.Vertex({
2405+ 'id': self.get('interaction_id'),
2406+ 'legs': legs})
2407+
2408+ # Add s- and t-channels going down towards leg 1
2409+ mother_s, tchannels = \
2410+ init_mothers1.get_s_and_t_channels(ninitial, legs[0])
2411+
2412+ schannels.extend(mother_s)
2413+
2414+ # Add vertex to t-channels
2415+ tchannels.append(vertex)
2416+
2417+ # Add s- and t-channels going down towards leg 2
2418+ mother_s, mother_t = \
2419+ init_mothers2.get_s_and_t_channels(ninitial, legs[-1])
2420+ schannels.extend(mother_s)
2421+ tchannels.extend(mother_t)
2422+
2423+ # Finally go through all vertices, sort the legs and replace
2424+ # leg number with propagator number -1, -2, ...
2425+ number_dict = {}
2426+ nprop = 0
2427+ for vertex in schannels + tchannels:
2428+ # Sort the legs
2429+ legs = vertex.get('legs')[:-1]
2430+ if vertex in schannels:
2431+ legs.sort(lambda l1, l2: l2.get('number') - \
2432+ l1.get('number'))
2433+ else:
2434+ legs.sort(lambda l1, l2: l1.get('number') - \
2435+ l2.get('number'))
2436+ for leg in legs:
2437+ try:
2438+ leg.set('number', number_dict[leg.get('number')])
2439+ except KeyError:
2440+ pass
2441+ nprop = nprop - 1
2442+ last_leg = vertex.get('legs')[-1]
2443+ number_dict[last_leg.get('number')] = nprop
2444+ last_leg.set('number', nprop)
2445+ legs.append(last_leg)
2446+ vertex.set('legs', base_objects.LegList(legs))
2447+
2448+ return schannels, tchannels
2449+
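The closing loop above sorts each vertex's legs and then replaces the resulting leg number by a negative propagator number (-1, -2, ...), propagating that number into later vertices which use the propagator as a mother. A condensed sketch of just the renumbering step, on dicts holding plain leg numbers:

    def sketch_renumber_propagators(vertices):
        """Replace the last-leg number of each vertex with -1, -2, ...
        and update any later occurrence of the old number (toy version
        of the loop above; the sorting of legs is omitted)."""
        number_dict = {}
        nprop = 0
        for vertex in vertices:
            legs = vertex['legs']
            # Update mother legs that were themselves propagators
            for i, number in enumerate(legs[:-1]):
                legs[i] = number_dict.get(number, number)
            nprop -= 1
            number_dict[legs[-1]] = nprop
            legs[-1] = nprop
        return vertices

    # Example: two chained vertices; leg 5 of the first becomes propagator -1
    chain = [{'legs': [3, 4, 5]}, {'legs': [5, 2, 6]}]
    assert sketch_renumber_propagators(chain) == \
           [{'legs': [3, 4, -1]}, {'legs': [-1, 2, -2]}]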
2450+ def get_color_indices(self):
2451+ """Get the color indices corresponding to
2452+ this amplitude and its mothers, using a recursive function."""
2453+
2454+ if not self.get('mothers'):
2455+ return []
2456+
2457+ color_indices = []
2458+
2459+ # Add color indices for all mothers
2460+ for mother in self.get('mothers'):
2461+ # This is where recursion happens
2462+ color_indices.extend(mother.get_color_indices())
2463+
2464+ # Add this amp's color index
2465+ if self.get('interaction_id'):
2466+ color_indices.append(self.get('coupl_key')[0])
2467+
2468+ return color_indices
2469+
2470 # Comparison between different amplitudes, to allow check for
2471 # identical processes. Note that we are then not interested in
2472 # interaction id, but in all other properties.
2473@@ -1142,8 +1616,8 @@
2474 return False
2475
2476 # Check that mothers have the same numbers (only relevant info)
2477- return [ mother.get('number') for mother in self['mothers'] ] == \
2478- [ mother.get('number') for mother in other['mothers'] ]
2479+ return sorted([mother['number'] for mother in self['mothers']]) == \
2480+ sorted([mother['number'] for mother in other['mothers']])
2481
2482 def __ne__(self, other):
2483 """Overloading the nonequality operator, to make comparison easy"""
2484@@ -1178,6 +1652,7 @@
2485 # different Lorentz or color structures associated with this
2486 # diagram
2487 self['amplitudes'] = HelasAmplitudeList()
2488+ self['number'] = 0
2489
2490 def filter(self, name, value):
2491 """Filter for valid diagram property values."""
2492@@ -1241,8 +1716,13 @@
2493
2494 self['processes'] = base_objects.ProcessList()
2495 self['diagrams'] = HelasDiagramList()
2496+ self['identical_particle_factor'] = 0
2497 self['color_basis'] = color_amp.ColorBasis()
2498 self['color_matrix'] = color_amp.ColorMatrix(color_amp.ColorBasis())
2499+ # base_amplitude is the Amplitude to be used in color
2500+ # generation, drawing etc. For decay chain processes, this is
2501+ # the Amplitude which corresponds to the combined process.
2502+ self['base_amplitude'] = None
2503
2504 def filter(self, name, value):
2505 """Filter for valid diagram property values."""
2506@@ -1255,6 +1735,10 @@
2507 if not isinstance(value, HelasDiagramList):
2508 raise self.PhysicsObjectError, \
2509 "%s is not a valid HelasDiagramList object" % str(value)
2510+ if name == 'identical_particle_factor':
2511+ if not isinstance(value, int):
2512+ raise self.PhysicsObjectError, \
2513+ "%s is not a valid int object" % str(value)
2514 if name == 'color_basis':
2515 if not isinstance(value, color_amp.ColorBasis):
2516 raise self.PhysicsObjectError, \
2517@@ -1263,15 +1747,32 @@
2518 if not isinstance(value, color_amp.ColorMatrix):
2519 raise self.PhysicsObjectError, \
2520 "%s is not a valid ColorMatrix object" % str(value)
2521+ if name == 'base_amplitude':
2522+ if value != None and not \
2523+ isinstance(value, diagram_generation.Amplitude):
2524+ raise self.PhysicsObjectError, \
2525+ "%s is not a valid Amplitude object" % str(value)
2526 return True
2527
2528 def get_sorted_keys(self):
2529 """Return particle property names as a nicely sorted list."""
2530
2531- return ['processes', 'diagrams', 'color_basis', 'color_matrix']
2532+ return ['processes', 'identical_particle_factor',
2533+ 'diagrams', 'color_basis', 'color_matrix',
2534+ 'base_amplitude']
2535+
2536+ # Enhanced get function
2537+ def get(self, name):
2538+ """Get the value of the property name."""
2539+
2540+ if name == 'base_amplitude' and not self[name]:
2541+ self['base_amplitude'] = self.get_base_amplitude()
2542+
2543+ return super(HelasMatrixElement, self).get(name)
2544
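The overridden get above makes base_amplitude a lazily built property: it is reconstructed on first access and cached afterwards. The same pattern in miniature, with a hypothetical builder standing in for get_base_amplitude:

    class LazyElement(dict):
        """Toy lazy-initialization pattern, as in the enhanced get above."""
        def get(self, name):
            if name == 'base_amplitude' and not self[name]:
                self['base_amplitude'] = self._build_base_amplitude()
            return self[name]
        def _build_base_amplitude(self):
            return 'built-on-first-access'  # stand-in for get_base_amplitude()

    elem = LazyElement({'base_amplitude': None})
    assert elem.get('base_amplitude') == 'built-on-first-access'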
2545 # Customized constructor
2546- def __init__(self, amplitude=None, optimization=1, gen_color=True):
2547+ def __init__(self, amplitude=None, optimization=1,
2548+ decay_ids=[], gen_color=True):
2549 """Constructor for the HelasMatrixElement. In particular allows
2550 generating a HelasMatrixElement from an Amplitude, with
2551 automatic generation of the necessary wavefunctions
2552@@ -1281,10 +1782,11 @@
2553 if isinstance(amplitude, diagram_generation.Amplitude):
2554 super(HelasMatrixElement, self).__init__()
2555 self.get('processes').append(amplitude.get('process'))
2556- self.generate_helas_diagrams(amplitude, optimization)
2557+ self.generate_helas_diagrams(amplitude, optimization, decay_ids)
2558 self.calculate_fermionfactors()
2559+ self.calculate_identical_particle_factors()
2560 if gen_color:
2561- self.get('color_basis').build(amplitude)
2562+ self.get('color_basis').build(self.get('base_amplitude'))
2563 self.set('color_matrix',
2564 color_amp.ColorMatrix(self.get('color_basis')))
2565 else:
2566@@ -1304,10 +1806,20 @@
2567 if not isinstance(other, HelasMatrixElement):
2568 return False
2569
2570+ # If no processes, this is an empty matrix element
2571+ if not self['processes'] and not other['processes']:
2572+ return True
2573+
2574 # Should only check if diagrams and process id are identical
2575+ # Except in case of decay processes: then also initial state
2576+ # must be the same
2577 if self['processes'] and not other['processes'] or \
2578 self['processes'] and \
2579 self['processes'][0]['id'] != other['processes'][0]['id'] or \
2580+ self['processes'][0]['is_decay_chain'] or \
2581+ other['processes'][0]['is_decay_chain'] or \
2582+ self['identical_particle_factor'] != \
2583+ other['identical_particle_factor'] or \
2584 self['diagrams'] != other['diagrams']:
2585 return False
2586
2587@@ -1317,13 +1829,17 @@
2588 """Overloading the nonequality operator, to make comparison easy"""
2589 return not self.__eq__(other)
2590
2591- def generate_helas_diagrams(self, amplitude, optimization=1):
2592+ def generate_helas_diagrams(self, amplitude, optimization=1,
2593+ decay_ids=[]):
2594 """Starting from a list of Diagrams from the diagram
2595 generation, generate the corresponding HelasDiagrams, i.e.,
2596 the wave functions and amplitudes. Choose between default
2597 optimization (= 1, maximum recycling of wavefunctions) or no
2598 optimization (= 0, no recycling of wavefunctions, useful for
2599 GPU calculations with very restricted memory).
2600+
2601+ Note that we need special treatment for decay chains, since
2602+ the end product then is a wavefunction, not an amplitude.
2603 """
2604
2605 if not isinstance(amplitude, diagram_generation.Amplitude) or \
2606@@ -1333,6 +1849,7 @@
2607
2608 diagram_list = amplitude.get('diagrams')
2609 process = amplitude.get('process')
2610+
2611 model = process.get('model')
2612 if not diagram_list:
2613 return
2614@@ -1347,26 +1864,36 @@
2615
2616 # Generate wavefunctions for the external particles
2617 external_wavefunctions = dict([(leg.get('number'),
2618- HelasWavefunction(leg, 0, model)) \
2619+ HelasWavefunction(leg, 0, model,
2620+ decay_ids)) \
2621 for leg in process.get('legs')])
2622+
2623 # Initially, have one wavefunction for each external leg.
2624 wf_number = len(process.get('legs'))
2625
2626- # For initial state bosons, need to flip PDG code (if has antipart)
2627+ # For initial state bosons, need to flip part-antipart
2628 # since all bosons should be treated as outgoing
2629 for key in external_wavefunctions.keys():
2630 wf = external_wavefunctions[key]
2631 if wf.is_boson() and wf.get('state') == 'initial' and \
2632 not wf.get('self_antipart'):
2633- wf.set('pdg_code', -wf.get('pdg_code'))
2634 wf.set('is_part', not wf.get('is_part'))
2635
2636+ # For initial state particles, need to flip PDG code (if has
2637+ # antipart)
2638+ for key in external_wavefunctions.keys():
2639+ wf = external_wavefunctions[key]
2640+ if wf.get('leg_state') == False and \
2641+ not wf.get('self_antipart'):
2642+ wf.flip_part_antipart()
2643+
2644 # Now go through the diagrams, looking for undefined wavefunctions
2645
2646 helas_diagrams = HelasDiagramList()
2647
2648- # Keep track of amplitude number
2649+ # Keep track of amplitude number and diagram number
2650 amplitude_number = 0
2651+ diagram_number = 0
2652
2653 for diagram in diagram_list:
2654
2655@@ -1387,7 +1914,7 @@
2656 lastvx = vertices.pop()
2657
2658 # Check if last vertex is identity vertex
2659- if lastvx.get('id') == 0:
2660+ if not process.get('is_decay_chain') and lastvx.get('id') == 0:
2661 # Need to "glue together" last and next-to-last
2662 # vertext, by replacing the (incoming) last leg of the
2663 # next-to-last vertex with the (outgoing) leg in the
2664@@ -1432,6 +1959,7 @@
2665 # Need one amplitude for each Lorentz/color structure,
2666 # i.e. for each coupling
2667 for coupl_key in inter.get('couplings').keys():
2668+
2669 wf = HelasWavefunction(last_leg, vertex.get('id'), model)
2670 wf.set('coupling', inter.get('couplings')[coupl_key])
2671 # Special feature: For HVS vertices with the two
2672@@ -1462,7 +1990,10 @@
2673 new_number_wf_dict = copy.copy(number_wf_dict)
2674
2675 # Store wavefunction
2676- if not wf in diagram_wavefunctions:
2677+ try:
2678+ wf = diagram_wavefunctions[\
2679+ diagram_wavefunctions.index(wf)]
2680+ except ValueError:
2681 # Update wf number
2682 wf_number = wf_number + 1
2683 wf.set('number', wf_number)
2684@@ -1477,7 +2008,7 @@
2685 except ValueError:
2686 diagram_wavefunctions.append(wf)
2687
2688- new_number_wf_dict[last_leg.get('number')] = wf
2689+ new_number_wf_dict[last_leg.get('number')] = wf
2690
2691 # Store the new copy of number_wf_dict
2692 new_number_to_wavefunctions.append(\
2693@@ -1493,6 +2024,8 @@
2694 # Generate all amplitudes corresponding to the different
2695 # copies of this diagram
2696 helas_diagram = HelasDiagram()
2697+ diagram_number = diagram_number + 1
2698+ helas_diagram.set('number', diagram_number)
2699 for number_wf_dict, color_list in zip(number_to_wavefunctions,
2700 color_lists):
2701 # Find mothers for the amplitude
2702@@ -1506,29 +2039,42 @@
2703 wf_number = mothers.check_and_fix_fermion_flow(wavefunctions,
2704 diagram_wavefunctions,
2705 external_wavefunctions,
2706- 'nostate',
2707+ "Nostate",
2708 wf_number)
2709+
2710 # Sort the wavefunctions according to number
2711 diagram_wavefunctions.sort(lambda wf1, wf2: \
2712 wf1.get('number') - wf2.get('number'))
2713
2714 # Now generate HelasAmplitudes from the last vertex.
2715- inter = model.get('interaction_dict')[lastvx.get('id')]
2716- for i, coupl_key in enumerate(inter.get('couplings').keys()):
2717+ if lastvx.get('id'):
2718+ inter = model.get('interaction_dict')[lastvx.get('id')]
2719+ keys = inter.get('couplings').keys()
2720+ else:
2721+ # Special case for decay chain - amplitude is just a
2722+ # placeholder for replaced wavefunction
2723+ inter = None
2724+ keys = [(0, 0)]
2725+ for i, coupl_key in enumerate(keys):
2726 amp = HelasAmplitude(lastvx, model)
2727- amp.set('coupling', inter.get('couplings')[coupl_key])
2728- if inter.get('color'):
2729- amp.set('inter_color', inter.get('color')[coupl_key[0]])
2730- amp.set('lorentz', inter.get('lorentz')[coupl_key[1]])
2731+ if inter:
2732+ amp.set('coupling', inter.get('couplings')[coupl_key])
2733+ amp.set('lorentz', inter.get('lorentz')[\
2734+ coupl_key[1]])
2735+ if inter.get('color'):
2736+ amp.set('inter_color', inter.get('color')[\
2737+ coupl_key[0]])
2738+ amp.set('coupl_key', coupl_key)
2739 amp.set('mothers', mothers)
2740 amplitude_number = amplitude_number + 1
2741 amp.set('number', amplitude_number)
2742 # Add the list with color indices to the amplitude
2743 new_color_list = copy.copy(color_list)
2744- new_color_list.append(coupl_key[0])
2745+ if inter:
2746+ new_color_list.append(coupl_key[0])
2747 amp.set('color_indices', new_color_list)
2748+
2749 # Generate HelasDiagram
2750-
2751 helas_diagram.get('amplitudes').append(amp)
2752 if diagram_wavefunctions and not \
2753 helas_diagram.get('wavefunctions'):
2754@@ -1546,15 +2092,714 @@
2755
2756
2757 self.set('diagrams', helas_diagrams)
2758+ # Sort all mothers according to the order wanted in Helas calls
2759+
2760+ for wf in self.get_all_wavefunctions():
2761+ wf.set('mothers', HelasMatrixElement.sorted_mothers(wf))
2762+ for amp in self.get_all_amplitudes():
2763+ amp.set('mothers', HelasMatrixElement.sorted_mothers(amp))
2764+ amp.set('color_indices', amp.get_color_indices())
2765+
2766+ def insert_decay_chains(self, decay_dict):
2767+        """Iteratively insert decay chains into this matrix
2768+ element.
2769+ * decay_dict: a dictionary from external leg number
2770+ to decay matrix element.
2771+ """
2772+
2773+ # We need to keep track of how the
2774+ # wavefunction numbers change
2775+ replace_dict = {}
2776+ for number in decay_dict.keys():
2777+ # Find all wavefunctions corresponding to this external
2778+ # leg number
2779+ replace_dict[number] = [wf for wf in \
2780+ filter(lambda wf: not wf.get('mothers') and \
2781+ wf.get('number_external') == number,
2782+ self.get_all_wavefunctions())]
2783+
2784+ # Keep track of wavefunction and amplitude numbers, to ensure
2785+ # unique numbers for all new wfs and amps during manipulations
2786+ numbers = [self.get_all_wavefunctions()[-1].get('number'),
2787+ self.get_all_amplitudes()[-1].get('number')]
2788+
2789+ # Check if there are any Majorana particles present in any diagrams
2790+ got_majoranas = False
2791+ for wf in self.get_all_wavefunctions() + \
2792+ sum([d.get_all_wavefunctions() for d in \
2793+ decay_dict.values()], []):
2794+ if wf.get('self_antipart') and wf.is_fermion():
2795+ got_majoranas = True
2796+
2797+ # Now insert decays for all legs that have decays
2798+ for number in decay_dict.keys():
2799+
2800+ self.insert_decay(replace_dict[number],
2801+ decay_dict[number],
2802+ numbers,
2803+ got_majoranas)
2804+
2805+        # Finally, clean out duplicate wavefunctions - needed only if
2806+ # we have multiple fermion flows, i.e., either multiple replaced
2807+ # wavefunctions or majorana fermions and multiple diagrams
2808+ flows = reduce(lambda i1, i2: i1 * i2,
2809+ [len(replace_dict[i]) for i in decay_dict.keys()], 1)
2810+ diagrams = reduce(lambda i1, i2: i1 * i2,
2811+ [len(decay_dict[i].get('diagrams')) for i in \
2812+ decay_dict.keys()], 1)
2813+
2814+ if flows > 1 or (diagrams > 1 and got_majoranas):
2815+
2816+ # Clean out any doublet wavefunctions
2817+
2818+ earlier_wfs = []
2819+
2820+ earlier_wf_arrays = []
2821+
2822+ mothers = self.get_all_wavefunctions() + self.get_all_amplitudes()
2823+ mother_arrays = [w['mothers'].to_array() \
2824+ for w in mothers]
2825+
2826+ for diagram in self.get('diagrams'):
2827+
2828+ if diagram.get('number') > 1:
2829+ earlier_wfs.extend(self.get('diagrams')[\
2830+ diagram.get('number') - 2].get('wavefunctions'))
2831+
2832+ i = 0
2833+ diag_wfs = diagram.get('wavefunctions')
2834+
2835+
2836+ # Remove wavefunctions and replace mothers
2837+ while diag_wfs[i:]:
2838+ try:
2839+ new_wf = earlier_wfs[\
2840+ earlier_wfs.index(diag_wfs[i])]
2841+ wf = diag_wfs.pop(i)
2842+
2843+ self.update_later_mothers(wf, new_wf, mothers,
2844+ mother_arrays)
2845+ except ValueError:
2846+ i = i + 1
2847+
2848+ # When we are done with all decays, set wavefunction and
2849+ # amplitude numbers
2850+ for i, wf in enumerate(self.get_all_wavefunctions()):
2851+ wf.set('number', i + 1)
2852+ for i, amp in enumerate(self.get_all_amplitudes()):
2853+ amp.set('number', i + 1)
2854+ # Update fermion factors for all amplitudes
2855+ amp.calculate_fermionfactor()
2856+ # Update color indices
2857+ amp.set('color_indices', amp.get_color_indices())
2858+
2859+ # Calculate identical particle factors for
2860+ # this matrix element
2861+ self.identical_decay_chain_factor(decay_dict.values())
2862+
2863+ def insert_decay(self, old_wfs, decay, numbers, got_majoranas):
2864+ """Insert a decay chain matrix element into the matrix element.
2865+ * old_wfs: the wavefunctions to be replaced.
2866+ They all correspond to the same external particle, but might
2867+ have different fermion flow directions
2868+ * decay: the matrix element for the decay chain
2869+ * numbers: the present wavefunction and amplitude number,
2870+ to allow for unique numbering
2871+
2872+ Note that:
2873+ 1) All amplitudes and all wavefunctions using the decaying wf
2874+ must be copied as many times as there are amplitudes in the
2875+ decay matrix element
2876+ 2) In the presence of Majorana particles, we must make sure
2877+ to flip fermion flow for the decay process if needed.
2878+
2879+ The algorithm is the following:
2880+ 1) Multiply the diagrams with the number of diagrams Ndiag in
2881+ the decay element
2882+ 2) For each diagram in the decay element, work on the diagrams
2883+ which corresponds to it
2884+ 3) Flip fermion flow for the decay wavefunctions if needed
2885+ 4) Insert all auxiliary wavefunctions into the diagram (i.e., all
2886+ except the final wavefunctions, which directly replace the
2887+ original final state wavefunctions)
2888+        5) Replace the wavefunctions recursively, so that we always replace
2889+           each old wavefunction with Namp new ones, where Namp is
2890+ the number of amplitudes in this decay element
2891+ diagram. Do recursion for wavefunctions which have this
2892+ wavefunction as mother. Simultaneously replace any
2893+ amplitudes which have this wavefunction as mother.
2894+ """
2895+
2896+ len_decay = len(decay.get('diagrams'))
2897+
2898+ number_external = old_wfs[0].get('number_external')
2899+
2900+ # Insert the decay process in the process
2901+ for process in self.get('processes'):
2902+ process.get('decay_chains').append(\
2903+ decay.get('processes')[0])
2904+
2905+ # We need one copy of the decay element diagrams for each
2906+ # old_wf to be replaced, since we need different wavefunction
2907+ # numbers for them
2908+ decay_elements = [copy.deepcopy(d) for d in \
2909+ [ decay.get('diagrams') ] * len(old_wfs)]
2910+
2911+        # Need to replace Particle in all wavefunctions to avoid
2912+        # keeping deep copies of the Particle objects
2913+ idecay = 0
2914+ for decay_element in decay_elements:
2915+ for idiag, diagram in enumerate(decay.get('diagrams')):
2916+ wfs = diagram.get('wavefunctions')
2917+ decay_diag = decay_element[idiag]
2918+ for i, wf in enumerate(decay_diag.get('wavefunctions')):
2919+ wf.set('particle', wfs[i].get('particle'))
2920+ wf.set('antiparticle', wfs[i].get('antiparticle'))
2921+
2922+ for decay_element in decay_elements:
2923+
2924+ # Remove the unwanted initial state wavefunctions from decay
2925+ for decay_diag in decay_element:
2926+ for wf in filter(lambda wf: wf.get('number_external') == 1,
2927+ decay_diag.get('wavefunctions')):
2928+ decay_diag.get('wavefunctions').remove(wf)
2929+
2930+ decay_wfs = sum([d.get('wavefunctions') for d in decay_element], [])
2931+
2932+ # External wavefunction offset for new wfs
2933+ incr_new = number_external - \
2934+ decay_wfs[0].get('number_external')
2935+
2936+ for wf in decay_wfs:
2937+ # Set number_external for new wavefunctions
2938+ wf.set('number_external', wf.get('number_external') + incr_new)
2939+ # Set unique number for new wavefunctions
2940+ numbers[0] = numbers[0] + 1
2941+ wf.set('number', numbers[0])
2942+
2949+
2950+ # External wavefunction offset for old wfs, only the first
2951+ # time this external wavefunction is replaced
2952+ (nex, nin) = decay.get_nexternal_ninitial()
2953+ incr_old = nex - 2 # due to the incoming particle
2954+ wavefunctions = self.get_all_wavefunctions()
2955+ for wf in wavefunctions:
2956+ # Only modify number_external for wavefunctions above old_wf
2957+ if wf.get('number_external') > number_external:
2958+ wf.set('number_external',
2959+ wf.get('number_external') + incr_old)
2960+
2961+ # Multiply the diagrams by Ndiag
2962+
2963+ diagrams = HelasDiagramList()
2964+ for diagram in self.get('diagrams'):
2965+ new_diagrams = [copy.copy(diag) for diag in \
2966+ [ diagram ] * (len_decay - 1)]
2967+ # Update diagram number
2968+ diagram.set('number', (diagram.get('number') - 1) * \
2969+ len_decay + 1)
2970+
2971+ for i, diag in enumerate(new_diagrams):
2972+ # Set diagram number
2973+ diag.set('number', diagram.get('number') + i + 1)
2974+ # Copy over all wavefunctions
2975+ diag.set('wavefunctions',
2976+ copy.copy(diagram.get('wavefunctions')))
2977+ # Copy over the amplitudes
2978+ amplitudes = HelasAmplitudeList(\
2979+ [copy.copy(amp) for amp in \
2980+ diag.get('amplitudes')])
2981+ # Renumber amplitudes
2982+ for amp in amplitudes:
2983+ numbers[1] = numbers[1] + 1
2984+ amp.set('number', numbers[1])
2985+ diag.set('amplitudes', amplitudes)
2986+ # Add old and new diagram to diagrams
2987+ diagrams.append(diagram)
2988+ diagrams.extend(new_diagrams)
2989+
2990+ self.set('diagrams', diagrams)
2991+
2992+ # Now we work by decay process diagram, parameterized by numdecay
2993+ for numdecay in range(len_decay):
2994+
2995+ # Pick out the diagrams which correspond to this decay diagram
2996+ diagrams = [self.get('diagrams')[i] for i in \
2997+ range(numdecay, len(self.get('diagrams')), len_decay)]
2998+
2999+ # Perform replacement for each of the wavefunctions in old_wfs
3000+ for decay_element, old_wf in zip(decay_elements, old_wfs):
3001+
3002+ decay_diag = decay_element[numdecay]
3003+
3004+ # Find the diagrams which have old_wf
3005+ my_diagrams = filter(lambda diag: (old_wf.get('number') in \
3006+ [wf.get('number') for wf in \
3007+ diag.get('wavefunctions')]),
3008+ diagrams)
3009+
3010+ # Ignore the possibility of unoptimized generation for now
3011+ if len(my_diagrams) > 1:
3012+ raise self.PhysicsObjectError, \
3013+ "Decay chains not yet prepared for GPU"
3014+
3015+ for diagram in my_diagrams:
3016+
3017+ if got_majoranas:
3018+ # If there are Majorana particles in any of
3019+ # the matrix elements, we need to check for
3020+ # fermion flow
3021+
3022+ # Earlier wavefunctions, which will be used for the fermion flow check
3023+ index = [d.get('number') for d in diagrams].\
3024+ index(diagram.get('number'))
3025+ earlier_wavefunctions = \
3026+ sum([d.get('wavefunctions') for d in \
3027+ diagrams[:index]], [])
3028+
3029+ # Don't want to affect original decay
3030+ # wavefunctions, so need to deepcopy
3031+ decay_diag_wfs = copy.deepcopy(\
3032+ decay_diag.get('wavefunctions'))
3033+ # Need to replace Particle in all
3034+ # wavefunctions to avoid deepcopy
3035+ idecay = 0
3036+ for i, wf in enumerate(decay_diag.get('wavefunctions')):
3037+ decay_diag_wfs[i].set('particle', \
3038+ wf.get('particle'))
3039+ decay_diag_wfs[i].set('antiparticle', \
3040+ wf.get('antiparticle'))
3041+
3042+ # Complete decay_diag_wfs with the mother wavefunctions
3043+ # to allow for independent fermion flow flips
3044+ decay_diag_wfs = decay_diag_wfs.insert_own_mothers()
3045+
3046+ # These are the wavefunctions which directly replace old_wf
3047+ final_decay_wfs = [amp.get('mothers')[1] for amp in \
3048+ decay_diag.get('amplitudes')]
3049+
3050+ # Since we made a deepcopy, we need to synchronize
3051+ for i, wf in enumerate(final_decay_wfs):
3052+ final_decay_wfs[i] = \
3053+ decay_diag_wfs[decay_diag_wfs.index(wf)]
3054+ # Remove final wavefunctions from decay_diag_wfs,
3055+ # since these will be replaced separately by
3056+ # replace_wavefunctions
3057+ for wf in final_decay_wfs:
3058+ decay_diag_wfs.remove(wf)
3059+
3060+ # Check fermion flow direction
3061+ if old_wf.is_fermion() and \
3062+ old_wf.get_with_flow('state') != \
3063+ final_decay_wfs[0].get_with_flow('state'):
3064+
3065+ # Not same flow state - need to flip flow of wf
3066+
3067+ for i, wf in enumerate(final_decay_wfs):
3068+
3069+ # We use the function
3070+ # check_majorana_and_flip_flow, as in the
3071+ # helas diagram generation. Since we have
3072+ # different flow, there is already a Majorana
3073+ # particle along the fermion line.
3074+
3075+ final_decay_wfs[i], numbers[0] = \
3076+ wf.check_majorana_and_flip_flow(\
3077+ True,
3078+ earlier_wavefunctions,
3079+ decay_diag_wfs,
3080+ {},
3081+ numbers[0])
3082+
3083+ # Remove wavefunctions which are already present in
3084+ # earlier_wavefunctions
3085+ i = 0
3086+ earlier_wavefunctions = \
3087+ sum([d.get('wavefunctions') for d in \
3088+ self.get('diagrams')[:diagram.get('number') - 1]], \
3089+ [])
3090+ earlier_wf_numbers = [wf.get('number') for wf in \
3091+ earlier_wavefunctions]
3092+
3093+ i = 0
3094+ mother_arrays = [w.get('mothers').to_array() for \
3095+ w in final_decay_wfs]
3096+ while decay_diag_wfs[i:]:
3097+ wf = decay_diag_wfs[i]
3098+ try:
3099+ new_wf = earlier_wavefunctions[\
3100+ earlier_wf_numbers.index(wf.get('number'))]
3101+ # If the wavefunctions are not identical,
3102+ # then we should keep this wavefunction,
3103+ # and update its number so it is unique
3104+ if wf != new_wf:
3105+ numbers[0] = numbers[0] + 1
3106+ wf.set('number', numbers[0])
3107+ continue
3108+ decay_diag_wfs.pop(i)
3109+ pres_mother_arrays = [w.get('mothers').to_array() for \
3110+ w in decay_diag_wfs[i:]] + \
3111+ mother_arrays
3112+ self.update_later_mothers(wf, new_wf,
3113+ decay_diag_wfs[i:] + \
3114+ final_decay_wfs,
3115+ pres_mother_arrays)
3116+ except ValueError:
3117+ i = i + 1
3118+
3119+ # Since we made deepcopy, go through mothers and make
3120+ # sure we are using the ones in earlier_wavefunctions
3121+ for decay_wf in decay_diag_wfs + final_decay_wfs:
3122+ mothers = decay_wf.get('mothers')
3123+ for i, wf in enumerate(mothers):
3124+ try:
3125+ mothers[i] = earlier_wavefunctions[\
3126+ earlier_wf_numbers.index(wf.get('number'))]
3127+ except ValueError:
3128+ pass
3129+ else:
3130+ # If there are no Majorana particles, the
3131+ # treatment is much simpler
3132+ decay_diag_wfs = \
3133+ copy.copy(decay_diag.get('wavefunctions'))
3134+
3135+ # These are the wavefunctions which directly
3136+ # replace old_wf
3137+ final_decay_wfs = [amp.get('mothers')[1] for amp in \
3138+ decay_diag.get('amplitudes')]
3139+
3140+ # Remove final wavefunctions from decay_diag_wfs,
3141+ # since these will be replaced separately by
3142+ # replace_wavefunctions
3143+ for wf in final_decay_wfs:
3144+ decay_diag_wfs.remove(wf)
3145+
3146+
3147+ diagram_wfs = diagram.get('wavefunctions')
3148+
3149+ old_wf_index = [wf.get('number') for wf in \
3150+ diagram_wfs].index(old_wf.get('number'))
3151+
3152+ diagram_wfs = diagram_wfs[0:old_wf_index] + \
3153+ decay_diag_wfs + diagram_wfs[old_wf_index:]
3154+
3155+ diagram.set('wavefunctions', HelasWavefunctionList(diagram_wfs))
3156+
3157+ # Set the decay flag for final_decay_wfs, to
3158+ # indicate that these correspond to decayed
3159+ # particles
3160+ for wf in final_decay_wfs:
3161+ wf.set('onshell', True)
3162+
3163+ if len_decay == 1:
3164+ # Can use simplified treatment, by just modifying old_wf
3165+ self.replace_single_wavefunction(old_wf,
3166+ final_decay_wfs[0])
3167+ else:
3168+ # Multiply wavefunctions and amplitudes and
3169+ # insert mothers using a recursive function
3170+ self.replace_wavefunctions(old_wf,
3171+ final_decay_wfs,
3172+ diagrams,
3173+ numbers)
3174+
3175+ # Now that we are done with this set of diagrams, we need
3176+ # to clean out duplicate wavefunctions (i.e., remove
3177+ # identical wavefunctions which are already present in
3178+ # earlier diagrams)
3179+ for diagram in diagrams:
3180+
3181+ # We can have duplicate wfs only from previous copies of
3182+ # this diagram
3183+ earlier_wfs = sum([d.get('wavefunctions') for d in \
3184+ self.get('diagrams')[\
3185+ diagram.get('number') - numdecay - 1:\
3186+ diagram.get('number') - 1]], [])
3187+
3188+ later_wfs = sum([d.get('wavefunctions') for d in \
3189+ self.get('diagrams')[\
3190+ diagram.get('number'):]], [])
3191+
3192+ later_amps = sum([d.get('amplitudes') for d in \
3193+ self.get('diagrams')[\
3194+ diagram.get('number') - 1:]], [])
3195+
3196+ i = 0
3197+ diag_wfs = diagram.get('wavefunctions')
3198+
3199+ # Remove wavefunctions and replace mothers, to make
3200+ # sure we only have one copy of each wavefunction
3201+ # number
3202+
3203+ mother_arrays = [w.get('mothers').to_array() for \
3204+ w in later_wfs + later_amps]
3205+
3206+ while diag_wfs[i:]:
3207+ try:
3208+ index = [w.get('number') for w in earlier_wfs].\
3209+ index(diag_wfs[i].get('number'))
3210+ wf = diag_wfs.pop(i)
3211+ pres_mother_arrays = [w.get('mothers').to_array() for \
3212+ w in diag_wfs[i:]] + \
3213+ mother_arrays
3214+ self.update_later_mothers(wf, earlier_wfs[index],
3215+ diag_wfs[i:] + later_wfs + later_amps,
3216+ pres_mother_arrays)
3217+ except ValueError:
3218+ i = i + 1
3219+
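To make the diagram bookkeeping above concrete: each core diagram is multiplied by the number of diagrams in the decay element, and the copies are renumbered so that the copies of core diagram n occupy slots (n-1)*len_decay+1 through n*len_decay. A minimal standalone sketch of just that renumbering, using toy integers in place of HelasDiagram objects (renumber_diagrams is a made-up name for the illustration):

    def renumber_diagrams(n_core, len_decay):
        """Return the new diagram numbers, grouped per core diagram."""
        groups = []
        for n in range(1, n_core + 1):
            first = (n - 1) * len_decay + 1
            groups.append(list(range(first, first + len_decay)))
        return groups

    print(renumber_diagrams(3, 2))
    # [[1, 2], [3, 4], [5, 6]]: core diagram 2 becomes diagrams 3 and 4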
3220+ def update_later_mothers(self, wf, new_wf, later_wfs, later_wf_arrays):
3221+ """Update mothers for all later wavefunctions"""
3222+
3223+ daughters = filter(lambda tup: wf.get('number') in tup[1],
3224+ enumerate(later_wf_arrays))
3225+
3226+ for (index, mothers) in daughters:
3227+ try:
3228+ # Replace mother
3229+ later_wfs[index].get('mothers')[\
3230+ mothers.index(wf.get('number'))] = new_wf
3231+ except ValueError:
3232+ pass
3233+
3234+ def replace_wavefunctions(self, old_wf, new_wfs,
3235+ diagrams, numbers):
3236+ """Recursive function to replace old_wf with new_wfs, and
3237+ multiply all wavefunctions or amplitudes that use old_wf
3238+
3239+ * old_wf: The wavefunction to be replaced
3240+ * new_wfs: The replacing wavefunctions
3241+ * diagrams: the diagrams that are relevant for these new
3242+ wavefunctions.
3243+ * numbers: the present wavefunction and amplitude number,
3244+ to allow for unique numbering
3245+ """
3246+
3247+ # Pick out the diagrams which have the old_wf
3248+ my_diagrams = filter(lambda diag: old_wf.get('number') in \
3249+ [wf.get('number') for wf in diag.get('wavefunctions')],
3250+ diagrams)
3251+
3252+ # Replace old_wf with new_wfs in the diagrams
3253+ for diagram in my_diagrams:
3254+
3255+ diagram_wfs = diagram.get('wavefunctions')
3256+
3257+ old_wf_index = [wf.get('number') for wf in \
3258+ diagram.get('wavefunctions')].index(old_wf.get('number'))
3259+ diagram_wfs = diagram_wfs[:old_wf_index] + \
3260+ new_wfs + diagram_wfs[old_wf_index + 1:]
3261+
3262+ diagram.set('wavefunctions', HelasWavefunctionList(diagram_wfs))
3263+
3264+ # Now need to take care of amplitudes and wavefunctions which
3265+ # are daughters of old_wf (only among the relevant diagrams)
3266+
3267+ # Pick out diagrams with amplitudes which are daughters of old_wf
3268+ amp_diagrams = filter(lambda diag: old_wf.get('number') in \
3269+ sum([[wf.get('number') for wf in amp.get('mothers')] \
3270+ for amp in diag.get('amplitudes')], []),
3271+ diagrams)
3272+
3273+ for diagram in amp_diagrams:
3274+
3275+ # Amplitudes in this diagram that are daughters of old_wf
3276+ daughter_amps = filter(lambda amp: old_wf.get('number') in \
3277+ [wf.get('number') for wf in amp.get('mothers')],
3278+ diagram.get('amplitudes'))
3279+
3280+ new_amplitudes = copy.copy(diagram.get('amplitudes'))
3281+
3282+ # Loop over daughter_amps, to multiply each amp by the
3283+ # number of replacement wavefunctions and substitute mothers
3284+ for old_amp in daughter_amps:
3285+ # Create copies of this amp
3286+ new_amps = [copy.copy(amp) for amp in \
3287+ [ old_amp ] * len(new_wfs)]
3288+ # Replace the old mother with the new ones
3289+ for i, (new_amp, new_wf) in enumerate(zip(new_amps, new_wfs)):
3290+ mothers = copy.copy(new_amp.get('mothers'))
3291+ old_wf_index = [wf.get('number') for wf in mothers].index(\
3292+ old_wf.get('number'))
3293+ # Update mother
3294+ mothers[old_wf_index] = new_wf
3295+ new_amp.set('mothers', mothers)
3296+ # Update amp numbers for replaced amp
3297+ numbers[1] = numbers[1] + 1
3298+ new_amp.set('number', numbers[1])
3299+
3300+ # Insert the new amplitudes in diagram amplitudes
3301+ index = [a.get('number') for a in new_amplitudes].\
3302+ index(old_amp.get('number'))
3303+ new_amplitudes = new_amplitudes[:index] + \
3304+ new_amps + new_amplitudes[index + 1:]
3305+
3306+ # Replace diagram amplitudes with the new ones
3307+ diagram.set('amplitudes', HelasAmplitudeList(new_amplitudes))
3308+
3309+ # Find wavefunctions that are daughters of old_wf
3310+ daughter_wfs = filter(lambda wf: old_wf.get('number') in \
3311+ [wf1.get('number') for wf1 in wf.get('mothers')],
3312+ sum([diag.get('wavefunctions') for diag in \
3313+ diagrams], []))
3314+
3315+ # Loop over daughter_wfs, multiply them and replace mothers
3316+ for daughter_wf in daughter_wfs:
3317+
3318+ # Pick out the diagrams where daughter_wf occurs
3319+ wf_diagrams = filter(lambda diag: daughter_wf.get('number') in \
3320+ [wf.get('number') for wf in \
3321+ diag.get('wavefunctions')],
3322+ diagrams)
3323+
3324+ if len(wf_diagrams) > 1:
3325+ raise self.PhysicsObjectError, \
3326+ "Decay chains not yet prepared for GPU"
3327+
3328+ for diagram in wf_diagrams:
3329+
3330+ # Now create new wfs with updated mothers
3331+ replace_daughters = [ copy.copy(wf) for wf in \
3332+ [daughter_wf] * len(new_wfs) ]
3333+
3334+ index = [wf.get('number') for wf in \
3335+ daughter_wf.get('mothers')].index(old_wf.get('number'))
3336+
3337+ # Replace the old mother with the new ones, update wf numbers
3338+ for i, (new_daughter, new_wf) in \
3339+ enumerate(zip(replace_daughters, new_wfs)):
3340+ mothers = copy.copy(new_daughter.get('mothers'))
3341+ mothers[index] = new_wf
3342+ new_daughter.set('mothers', mothers)
3343+ numbers[0] = numbers[0] + 1
3344+ new_daughter.set('number', numbers[0])
3345+
3346+ # This is where recursion happens. We need to replace
3347+ # the daughter wavefunction, and fix amplitudes and
3348+ # wavefunctions which have it as mothers.
3349+
3350+ self.replace_wavefunctions(daughter_wf,
3351+ replace_daughters,
3352+ diagrams,
3353+ numbers)
3354+
3355+ def replace_single_wavefunction(self, old_wf, new_wf):
3356+ """Insert the decay chain by simply modifying the wavefunction. This
3357+ is possible only if there is only one diagram in the decay."""
3358+
3359+ for key in old_wf.keys():
3360+ old_wf.set(key, new_wf.get(key))
3361+
3362+ def identical_decay_chain_factor(self, decay_chains):
3363+ """Calculate the denominator factor from identical decay chains"""
3364+
3365+ final_legs = [leg.get('id') for leg in \
3366+ filter(lambda leg: leg.get('state') == True, \
3367+ self.get('processes')[0].get('legs'))]
3368+
3369+ # Leg ids for legs being replaced by decay chains
3370+ decay_ids = [decay.get('legs')[0].get('id') for decay in \
3371+ self.get('processes')[0].get('decay_chains')]
3372+
3373+ # Find all leg ids which are not being replaced by decay chains
3374+ non_decay_legs = filter(lambda id: id not in decay_ids,
3375+ final_legs)
3376+
3377+ # Identical particle factor for legs not being decayed
3378+ identical_indices = {}
3379+ for id in non_decay_legs:
3380+ if id in identical_indices:
3381+ identical_indices[id] = \
3382+ identical_indices[id] + 1
3383+ else:
3384+ identical_indices[id] = 1
3385+ non_chain_factor = reduce(lambda x, y: x * y,
3386+ [ math.factorial(val) for val in \
3387+ identical_indices.values() ], 1)
3388+
3389+ # Identical particle factor for decay chains
3390+ # Go through chains to find identical ones
3391+ chains = copy.copy(decay_chains)
3392+ iden_chains_factor = 1
3393+ while chains:
3394+ ident_copies = 1
3395+ first_chain = chains.pop(0)
3396+ i = 0
3397+ while i < len(chains):
3398+ chain = chains[i]
3399+ if HelasMatrixElement.check_equal_decay_processes(\
3400+ first_chain, chain):
3401+ ident_copies = ident_copies + 1
3402+ chains.pop(i)
3403+ else:
3404+ i = i + 1
3405+ iden_chains_factor = iden_chains_factor * \
3406+ math.factorial(ident_copies)
3407+
3408+ self['identical_particle_factor'] = non_chain_factor * \
3409+ iden_chains_factor * \
3410+ reduce(lambda x1, x2: x1 * x2,
3411+ [me.get('identical_particle_factor') \
3412+ for me in decay_chains], 1)
3413
3414 def calculate_fermionfactors(self):
3415- """Generate the fermion factors for all diagrams in the amplitude
3416+ """Generate the fermion factors for all diagrams in the matrix element
3417 """
3418
3419 for diagram in self.get('diagrams'):
3420 for amplitude in diagram.get('amplitudes'):
3421 amplitude.get('fermionfactor')
3422
3423+ def calculate_identical_particle_factors(self):
3424+ """Calculate the denominator factor for identical final state particles
3425+ """
3426+
3427+ final_legs = filter(lambda leg: leg.get('state') == True, \
3428+ self.get('processes')[0].get('legs'))
3429+
3430+ identical_indices = {}
3431+ for leg in final_legs:
3432+ if leg.get('id') in identical_indices:
3433+ identical_indices[leg.get('id')] = \
3434+ identical_indices[leg.get('id')] + 1
3435+ else:
3436+ identical_indices[leg.get('id')] = 1
3437+ self["identical_particle_factor"] = reduce(lambda x, y: x * y,
3438+ [ math.factorial(val) for val in \
3439+ identical_indices.values() ])
3440+
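Both identical-particle factors above boil down to a product of factorials over the multiplicities of identical final-state ids. A self-contained sketch working on plain pdg-code lists (toy input; the real code reads the ids off the process legs, and the helper name is made up here):

    import math

    def identical_particle_factor(final_ids):
        """Product of n! for each distinct id occurring n times."""
        counts = {}
        for pdg in final_ids:
            counts[pdg] = counts.get(pdg, 0) + 1
        factor = 1
        for n in counts.values():
            factor *= math.factorial(n)
        return factor

    print(identical_particle_factor([21, 21, 5, -5]))  # 2: two identical gluons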
3441+ def get_base_amplitude(self):
3442+ """Generate a diagram_generation.Amplitude from a
3443+ HelasMatrixElement. This is used to generate both color
3444+ amplitudes and diagram drawing."""
3445+
3446+ # Need to take care of diagram numbering for decay chains
3447+ # before this can be used for those!
3448+
3449+ optimization = 1
3450+ if len(filter(lambda wf: wf.get('number') == 1,
3451+ self.get_all_wavefunctions())) > 1:
3452+ optimization = 0
3453+
3454+ model = self.get('processes')[0].get('model')
3455+
3456+ wf_dict = {}
3457+ vx_list = []
3458+ diagrams = base_objects.DiagramList()
3459+ for diag in self.get('diagrams'):
3460+ diagrams.append(diag.get('amplitudes')[0].get_base_diagram(\
3461+ wf_dict, vx_list, optimization))
3462+
3463+ return diagram_generation.Amplitude({\
3464+ 'process': self.get('processes')[0],
3465+ 'diagrams': diagrams})
3466+
3467 # Helper methods
3468
3469 def getmothers(self, legs, number_to_wavefunctions,
3470@@ -1585,6 +2830,36 @@
3471 return sum([ len(d.get('wavefunctions')) for d in \
3472 self.get('diagrams')])
3473
3474+ def get_all_wavefunctions(self):
3475+ """Gives a list of all wavefunctions for this ME"""
3476+
3477+ return sum([d.get('wavefunctions') for d in \
3478+ self.get('diagrams')], [])
3479+
3480+ def get_all_amplitudes(self):
3481+ """Gives a list of all amplitudes for this ME"""
3482+
3483+ return sum([d.get('amplitudes') for d in \
3484+ self.get('diagrams')], [])
3485+
3486+ def get_external_wavefunctions(self):
3487+ """Gives the external wavefunctions for this ME"""
3488+
3489+ external_wfs = filter(lambda wf: not wf.get('mothers'),
3490+ self.get('diagrams')[0].get('wavefunctions'))
3491+
3492+ external_wfs.sort(lambda w1, w2: w1.get('number_external') - \
3493+ w2.get('number_external'))
3494+
3495+ i = 1
3496+ while i < len(external_wfs):
3497+ if external_wfs[i].get('number_external') == \
3498+ external_wfs[i - 1].get('number_external'):
3499+ external_wfs.pop(i)
3500+ else:
3501+ i = i + 1
3502+ return external_wfs
3503+
3504 def get_number_of_amplitudes(self):
3505 """Gives the total number of amplitudes for this ME"""
3506
3507@@ -1595,9 +2870,15 @@
3508 """Gives (number of external particles, number of
3509 incoming particles)"""
3510
3511- return (len(self.get('processes')[0].get('legs')),
3512- len(filter(lambda leg: leg.get('state') == 'initial',
3513- self.get('processes')[0].get('legs'))))
3514+ external_wfs = filter(lambda wf: wf.get('leg_state') != \
3515+ 'intermediate',
3516+ self.get_all_wavefunctions())
3517+
3518+ return (len(set([wf.get('number_external') for wf in \
3519+ external_wfs])),
3520+ len(set([wf.get('number_external') for wf in \
3521+ filter(lambda wf: wf.get('leg_state') == False,
3522+ external_wfs)])))
3523
3524 def get_helicity_combinations(self):
3525 """Gives the number of helicity combinations for external
3526@@ -1609,9 +2890,9 @@
3527 model = self.get('processes')[0].get('model')
3528
3529 return reduce(lambda x, y: x * y,
3530- [ len(model.get('particle_dict')[leg.get('id')].\
3531+ [ len(model.get('particle_dict')[wf.get('pdg_code')].\
3532 get_helicity_states())\
3533- for leg in self.get('processes')[0].get('legs') ])
3534+ for wf in self.get_external_wavefunctions() ], 1)
3535
3536 def get_helicity_matrix(self):
3537 """Gives the helicity matrix for external wavefunctions"""
3538@@ -1623,8 +2904,8 @@
3539 model = process.get('model')
3540
3541 return apply(itertools.product, [ model.get('particle_dict')[\
3542- leg.get('id')].get_helicity_states()\
3543- for leg in process.get('legs') ])
3544+ wf.get('pdg_code')].get_helicity_states()\
3545+ for wf in self.get_external_wavefunctions()])
3546
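The helicity handling above is a product over the external wavefunctions: the number of combinations is the product of the lengths of the helicity-state lists, and the helicity matrix is their Cartesian product. A short sketch with hard-coded states (the real states come from the model's particle_dict; the lists below are assumed toy input):

    import itertools

    # e.g. e+ e- > mu+ mu-: two helicity states per external fermion
    helicity_states = [[-1, 1], [-1, 1], [-1, 1], [-1, 1]]

    n_comb = 1
    for states in helicity_states:
        n_comb *= len(states)
    print(n_comb)  # 16, as in get_helicity_combinations()

    # Each row of the helicity matrix is one combination
    for helicities in itertools.product(*helicity_states):
        pass  # e.g. (-1, -1, -1, -1), (-1, -1, -1, 1), ...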
3547 def get_denominator_factor(self):
3548 """Calculate the denominator factor due to:
3549@@ -1633,7 +2914,7 @@
3550
3551 model = self.get('processes')[0].get('model')
3552
3553- initial_legs = filter(lambda leg: leg.get('state') == 'initial', \
3554+ initial_legs = filter(lambda leg: leg.get('state') == False, \
3555 self.get('processes')[0].get('legs'))
3556
3557 spin_factor = reduce(lambda x, y: x * y,
3558@@ -1646,21 +2927,244 @@
3559 get('color')\
3560 for leg in initial_legs ])
3561
3562- final_legs = filter(lambda leg: leg.get('state') == 'final', \
3563- self.get('processes')[0].get('legs'))
3564-
3565- identical_indices = {}
3566- for leg in final_legs:
3567- try:
3568- identical_indices[leg.get('id')] = \
3569- identical_indices[leg.get('id')] + 1
3570- except KeyError:
3571- identical_indices[leg.get('id')] = 1
3572- identical_factor = reduce(lambda x, y: x * y,
3573- [ math.factorial(val) for val in \
3574- identical_indices.values() ])
3575-
3576- return spin_factor * color_factor * identical_factor
3577+ return spin_factor * color_factor * self['identical_particle_factor']
3578+
3579+ def get_color_amplitudes(self):
3580+ """Return a list of (coefficient, amplitude number) lists,
3581+ corresponding to the JAMPs for this matrix element. The
3582+ coefficients are given in the format (fermion factor, color
3583+ coeff (frac), imaginary, Nc power)."""
3584+
3585+ if not self.get('color_basis'):
3586+ # No color, simply add all amplitudes with correct factor
3587+ # for first color amplitude
3588+ col_amp = []
3589+ for diagram in self.get('diagrams'):
3590+ for amplitude in diagram.get('amplitudes'):
3591+ col_amp.append(((amplitude.get('fermionfactor'),
3592+ 1, False, 0),
3593+ amplitude.get('number')))
3594+ return [col_amp]
3595+
3596+ # There is a color basis - create a list of coefficients and
3597+ # amplitude numbers
3598+
3599+ col_amp_list = []
3600+ for i, col_basis_elem in \
3601+ enumerate(sorted(self.get('color_basis').keys())):
3602+
3603+ col_amp = []
3604+ for diag_tuple in self.get('color_basis')[col_basis_elem]:
3605+ res_amp = filter(lambda amp: \
3606+ tuple(amp.get('color_indices')) == diag_tuple[1],
3607+ self.get('diagrams')[diag_tuple[0]].get('amplitudes'))
3608+ if not res_amp:
3609+ raise FortranWriter.FortranWriterError, \
3610+ """No amplitude found for color structure
3611+ %s and color index chain (%s) (diagram %i)""" % \
3612+ (col_basis_elem,
3613+ str(diag_tuple[1]),
3614+ diag_tuple[0])
3615+
3616+ if len(res_amp) > 1:
3617+ raise FortranWriter.FortranWriterError, \
3618+ """More than one amplitude found for color structure
3619+ %s and color index chain (%s) (diagram %i)""" % \
3620+ (col_basis_elem,
3621+ str(diag_tuple[1]),
3622+ diag_tuple[0])
3623+
3624+ col_amp.append(((res_amp[0].get('fermionfactor'),
3625+ diag_tuple[2],
3626+ diag_tuple[3],
3627+ diag_tuple[4]),
3628+ res_amp[0].get('number')))
3629+
3630+ col_amp_list.append(col_amp)
3631+
3632+ return col_amp_list
3633+
3634+
3635+
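The (fermion factor, color coefficient, imaginary, Nc power) tuples returned above are enough to evaluate a JAMP once the amplitude values are known. A toy evaluation, with made-up amplitude values and Nc fixed to 3 (both assumptions made only for this illustration):

    from fractions import Fraction

    # One JAMP: ((fermion factor, color coeff, imaginary?, Nc power), amp number)
    col_amp = [((1, Fraction(1, 3), False, 1), 1),
               ((-1, Fraction(1, 2), True, 0), 2)]
    amp_values = {1: 1.0 + 0j, 2: 0.5j}  # toy amplitude values

    jamp = 0j
    for (ferm, coeff, imag, nc_power), number in col_amp:
        factor = ferm * coeff * 3 ** nc_power  # Nc = 3 assumed
        if imag:
            factor *= 1j
        jamp += factor * amp_values[number]
    print(jamp)  # (1.25+0j)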
3636+ @staticmethod
3637+ def check_equal_decay_processes(decay1, decay2):
3638+ """Check if two single-sided decay processes
3639+ (HelasMatrixElements) are equal.
3640+
3641+ Note that this has to be called before any combination of
3642+ processes has occurred.
3643+
3644+ Since a decay process for a decay chain is always generated
3645+ such that all final state legs are completely contracted
3646+ before the initial state leg is included, all the diagrams
3647+ will have identical wavefunctions, independently of the order
3648+ of final state particles.
3649+
3650+ Note that we assume that the process definitions have all
3651+ external particles, corresponding to the external
3652+ wavefunctions.
3653+ """
3654+
3655+ if len(decay1.get('processes')) != 1 or \
3656+ len(decay2.get('processes')) != 1:
3657+ raise HelasMatrixElement.PhysicsObjectError, \
3658+ "Can compare only single process HelasMatrixElements"
3659+
3660+ if len(filter(lambda leg: leg.get('state') == False, \
3661+ decay1.get('processes')[0].get('legs'))) != 1 or \
3662+ len(filter(lambda leg: leg.get('state') == False, \
3663+ decay2.get('processes')[0].get('legs'))) != 1:
3664+ raise HelasMatrixElement.PhysicsObjectError, \
3665+ "Call to check_equal_decay_processes requires " + \
3666+ "both processes to be decay processes"
3667+
3668+ # Compare bulk process properties (number of external legs,
3669+ # identity factors, number of diagrams, number of wavefunctions,
3670+ # initial leg, final state legs)
3671+ if len(decay1.get('processes')[0].get("legs")) != \
3672+ len(decay2.get('processes')[0].get("legs")) or \
3673+ len(decay1.get('diagrams')) != len(decay2.get('diagrams')) or \
3674+ decay1.get('identical_particle_factor') != \
3675+ decay2.get('identical_particle_factor') or \
3676+ sum(len(d.get('wavefunctions')) for d in \
3677+ decay1.get('diagrams')) != \
3678+ sum(len(d.get('wavefunctions')) for d in \
3679+ decay2.get('diagrams')) or \
3680+ decay1.get('processes')[0].get('legs')[0].get('id') != \
3681+ decay2.get('processes')[0].get('legs')[0].get('id') or \
3682+ sorted([leg.get('id') for leg in \
3683+ decay1.get('processes')[0].get('legs')[1:]]) != \
3684+ sorted([leg.get('id') for leg in \
3685+ decay2.get('processes')[0].get('legs')[1:]]):
3686+ return False
3687+
3688+ # Run a quick check to see if the processes are already
3689+ # identical (i.e., the wavefunctions are in the same order)
3690+ if [leg.get('id') for leg in \
3691+ decay1.get('processes')[0].get('legs')] == \
3692+ [leg.get('id') for leg in \
3693+ decay2.get('processes')[0].get('legs')] and \
3694+ decay1 == decay2:
3695+ return True
3696+
3697+ # Now check if all diagrams are identical. This is done by a
3698+ # recursive function starting from the last wavefunction
3699+ # (corresponding to the initial state), since the same steps
3700+ # are repeated at each level of mother wavefunctions
3701+
3702+ amplitudes2 = copy.copy(reduce(lambda a1, d2: a1 + \
3703+ d2.get('amplitudes'),
3704+ decay2.get('diagrams'), []))
3705+
3706+ for amplitude1 in reduce(lambda a1, d2: a1 + d2.get('amplitudes'),
3707+ decay1.get('diagrams'), []):
3708+ foundamplitude = False
3709+ for amplitude2 in amplitudes2:
3710+ if HelasMatrixElement.check_equal_wavefunctions(\
3711+ amplitude1.get('mothers')[-1],
3712+ amplitude2.get('mothers')[-1]):
3713+ foundamplitude = True
3714+ # Remove amplitude2, since it has already been matched
3715+ amplitudes2.remove(amplitude2)
3716+ break
3717+ if not foundamplitude:
3718+ return False
3719+
3720+ return True
3721+
3722+ @staticmethod
3723+ def check_equal_wavefunctions(wf1, wf2):
3724+ """Recursive function to check if two wavefunctions are equal.
3725+ First check that mothers have identical pdg codes, then repeat for
3726+ all mothers with identical pdg codes."""
3727+
3728+ # End recursion with False if the wavefunctions do not have
3729+ # the same mother pdgs
3730+ if sorted([wf.get('pdg_code') for wf in wf1.get('mothers')]) != \
3731+ sorted([wf.get('pdg_code') for wf in wf2.get('mothers')]):
3732+ return False
3733+
3734+ # End recursion with True if these are external wavefunctions
3735+ # (note that we have already checked that the pdgs are
3736+ # identical)
3737+ if not wf1.get('mothers') and not wf2.get('mothers'):
3738+ return True
3739+
3740+ mothers2 = copy.copy(wf2.get('mothers'))
3741+
3742+ for mother1 in wf1.get('mothers'):
3743+ # Compare mother1 with all mothers in wf2 that have not
3744+ # yet been used and have identical pdg codes
3745+ equalmothers = filter(lambda wf: wf.get('pdg_code') == \
3746+ mother1.get('pdg_code'),
3747+ mothers2)
3748+ foundmother = False
3749+ for mother2 in equalmothers:
3750+ if HelasMatrixElement.check_equal_wavefunctions(\
3751+ mother1, mother2):
3752+ foundmother = True
3753+ # Remove mother2, since it has already been matched
3754+ mothers2.remove(mother2)
3755+ break
3756+ if not foundmother:
3757+ return False
3758+
3759+ return True
3760+
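check_equal_wavefunctions is an unordered, recursive comparison: match each mother of wf1 against a not-yet-used mother of wf2 with the same pdg code, recursing into their mothers. The same idea on plain (pdg, mothers) tuples, with toy pdg codes (a sketch of the technique, not the HelasWavefunction API):

    def equal_unordered(tree1, tree2):
        """Compare nested (pdg, mothers) tuples, ignoring mother order."""
        pdg1, mothers1 = tree1
        pdg2, mothers2 = tree2
        if pdg1 != pdg2:
            return False
        remaining = list(mothers2)
        for m1 in mothers1:
            for m2 in remaining:
                if equal_unordered(m1, m2):
                    remaining.remove(m2)
                    break
            else:
                return False
        return not remaining

    t1 = (6, [(5, []), (24, [(-11, []), (12, [])])])
    t2 = (6, [(24, [(12, []), (-11, [])]), (5, [])])
    print(equal_unordered(t1, t2))  # True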
3761+ # This gives the order in which the different spin states will be
3762+ # written in all Helas calls.
3763+ sort_spin_dict = {1: 1, -2: 4, 2: 3, 3: 2, 5: 0}
3764+
3765+ @staticmethod
3766+ def sorted_mothers(arg):
3767+ """Gives a list of mother wavefunctions sorted according to
3768+ 1. the spin order needed in the Fortran Helas calls and
3769+ 2. the order of the particles in the interaction (cyclic)"""
3770+
3771+ if not isinstance(arg, HelasWavefunction) and \
3772+ not isinstance(arg, HelasAmplitude):
3773+ raise base_objects.PhysicsObject.PhysicsObjectError, \
3774+ "%s is not a valid HelasWavefunction or HelasAmplitude" % \
3775+ repr(arg)
3776+
3777+ if not arg.get('interaction_id'):
3778+ return arg.get('mothers')
3779+
3780+ sorted_mothers1 = copy.copy(arg.get('mothers'))
3781+
3782+ # Next sort according to interaction pdg codes
3783+
3784+ mother_codes = [ wf.get_pdg_code() for wf \
3785+ in sorted_mothers1 ]
3786+ pdg_codes = copy.copy(arg.get('pdg_codes'))
3787+ if isinstance(arg, HelasWavefunction):
3788+ my_code = arg.get_anti_pdg_code()
3789+ # We need to create the cyclic pdg_codes
3790+ missing_index = pdg_codes.index(my_code)
3791+ pdg_codes_cycl = pdg_codes[missing_index + 1:] + \
3792+ pdg_codes[:missing_index]
3793+ else:
3794+ pdg_codes_cycl = pdg_codes
3795+
3796+ sorted_mothers2 = HelasWavefunctionList()
3797+ for code in pdg_codes_cycl:
3798+ index = mother_codes.index(code)
3799+ mother_codes.pop(index)
3800+ sorted_mothers2.append(sorted_mothers1.pop(index))
3801+
3802+ if sorted_mothers1:
3803+ raise base_objects.PhysicsObject.PhysicsObjectError, \
3804+ "Mismatch of pdg codes, %s != %s" % \
3805+ (repr(mother_codes), repr(pdg_codes_cycl))
3806+
3807+ # Next sort according to spin_state_number
3808+ return HelasWavefunctionList(\
3809+ sorted(sorted_mothers2, lambda wf1, wf2: \
3810+ HelasMatrixElement.sort_spin_dict[\
3811+ wf2.get_spin_state_number()]\
3812+ - HelasMatrixElement.sort_spin_dict[\
3813+ wf1.get_spin_state_number()]))
3814+
3815
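The cyclic reordering in sorted_mothers can be isolated: drop the wavefunction's own (anti) pdg code from the interaction's pdg_codes, continue cyclically from that position, and pick the mothers in the resulting order. A toy version with bare pdg codes (the final spin-order sort is left out; the helper name is made up):

    def cyclic_codes(pdg_codes, my_code):
        """Interaction pdg codes, cyclically reordered around my_code."""
        missing_index = pdg_codes.index(my_code)
        return pdg_codes[missing_index + 1:] + pdg_codes[:missing_index]

    print(cyclic_codes([-11, 11, 22], 22))  # [-11, 11]

    # Mothers (bare pdg codes here) are then picked in that cyclic order
    mothers = [11, -11]
    ordered = []
    for code in cyclic_codes([-11, 11, 22], 22):
        ordered.append(mothers.pop(mothers.index(code)))
    print(ordered)  # [-11, 11]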
3816 #===============================================================================
3817 # HelasMatrixElementList
3818@@ -1675,6 +3179,284 @@
3819 return isinstance(obj, HelasMatrixElement)
3820
3821 #===============================================================================
3822+# HelasDecayChainProcess
3823+#===============================================================================
3824+class HelasDecayChainProcess(base_objects.PhysicsObject):
3825+ """HelasDecayChainProcess: If initiated with a DecayChainAmplitude
3826+ object, generates the HelasMatrixElements for the core process(es)
3827+ and decay chains. Then call combine_decay_chain_processes in order
3828+ to generate the matrix elements for all combined processes."""
3829+
3830+ def default_setup(self):
3831+ """Default values for all properties"""
3832+
3833+ self['core_processes'] = HelasMatrixElementList()
3834+ self['decay_chains'] = HelasDecayChainProcessList()
3835+
3836+ def filter(self, name, value):
3837+ """Filter for valid process property values."""
3838+
3839+ if name == 'core_processes':
3840+ if not isinstance(value, HelasMatrixElementList):
3841+ raise self.PhysicsObjectError, \
3842+ "%s is not a valid HelasMatrixElementList object" % \
3843+ str(value)
3844+
3845+ if name == 'decay_chains':
3846+ if not isinstance(value, HelasDecayChainProcessList):
3847+ raise self.PhysicsObjectError, \
3848+ "%s is not a valid HelasDecayChainProcessList object" % \
3849+ str(value)
3850+
3851+ return True
3852+
3853+ def get_sorted_keys(self):
3854+ """Return process property names as a nicely sorted list."""
3855+
3856+ return ['core_processes', 'decay_chains']
3857+
3858+ def __init__(self, argument=None):
3859+ """Allow initialization with DecayChainAmplitude"""
3860+
3861+ if isinstance(argument, diagram_generation.DecayChainAmplitude):
3862+ super(HelasDecayChainProcess, self).__init__()
3863+ self.generate_matrix_elements(argument)
3864+ elif argument:
3865+ # call the mother routine
3866+ super(HelasDecayChainProcess, self).__init__(argument)
3867+ else:
3868+ # call the mother routine
3869+ super(HelasDecayChainProcess, self).__init__()
3870+
3871+ def generate_matrix_elements(self, dc_amplitude):
3872+ """Generate the HelasMatrixElements for the core processes and
3873+ decay processes (separately)"""
3874+
3875+ if not isinstance(dc_amplitude, diagram_generation.DecayChainAmplitude):
3876+ raise base_objects.PhysicsObjectError, \
3877+ "%s is not a valid DecayChainAmplitude" % dc_amplitude
3878+
3879+ matrix_elements = self['core_processes']
3880+
3881+ # Extract the pdg codes of all particles decayed by decay chains
3882+ # since these should not be combined in a MultiProcess
3883+ decay_ids = dc_amplitude.get_decay_ids()
3884+
3885+ while dc_amplitude.get('amplitudes'):
3886+ # Pop the amplitude to save memory space
3887+ amplitude = dc_amplitude.get('amplitudes').pop(0)
3888+
3889+ logger.info("Generating Helas calls for %s" % \
3890+ amplitude.get('process').nice_string().\
3891+ replace('Process', 'process'))
3892+ matrix_element = HelasMatrixElement(amplitude,
3893+ decay_ids=decay_ids,
3894+ gen_color=False)
3895+
3896+ try:
3897+ # If an identical matrix element is already in the list,
3898+ # then simply add this process to the list of
3899+ # processes for that matrix element
3900+ other_processes = matrix_elements[\
3901+ matrix_elements.index(matrix_element)].get('processes')
3902+ logger.info("Combining process with %s" % \
3903+ other_processes[0].nice_string().replace('Process: ', ''))
3904+ other_processes.append(amplitude.get('process'))
3905+ except ValueError:
3906+ # Otherwise, if the matrix element has any diagrams,
3907+ # add this matrix element.
3908+ if matrix_element.get('processes') and \
3909+ matrix_element.get('diagrams'):
3910+ matrix_elements.append(matrix_element)
3911+
3912+ while dc_amplitude.get('decay_chains'):
3913+ # Pop the amplitude to save memory space
3914+ decay_chain = dc_amplitude.get('decay_chains').pop(0)
3915+ self['decay_chains'].append(HelasDecayChainProcess(\
3916+ decay_chain))
3917+
3918+ def combine_decay_chain_processes(self):
3919+ """Recursive function to generate complete
3920+ HelasMatrixElements, combining the core process with the decay
3921+ chains.
3922+
3923+ * If there are several identical final state particles and only
3924+ one decay chain defined, apply this decay chain to all
3925+ copies.
3926+ * If there are several decay chains defined for the same
3927+ particle, apply them in order of the FS particles and the
3928+ defined decay chains."""
3929+
3930+ # End recursion when there are no more decay chains
3931+ if not self['decay_chains']:
3932+ # Just return the list of matrix elements
3933+ return self['core_processes']
3934+
3935+ # decay_elements is a list of HelasMatrixElementLists with
3936+ # all decay processes
3937+ decay_elements = []
3938+
3939+ for decay_chain in self['decay_chains']:
3940+ # This is where recursion happens
3941+ decay_elements.append(decay_chain.combine_decay_chain_processes())
3942+
3943+ # Store the result in matrix_elements
3944+ matrix_elements = HelasMatrixElementList()
3945+
3946+ # List of list of ids for the initial state legs in all decay
3947+ # processes
3948+ decay_is_ids = [[element.get('processes')[0].get_initial_ids()[0] \
3949+ for element in elements]
3950+ for elements in decay_elements]
3951+
3952+ while self['core_processes']:
3953+ # Pop the process to save memory space
3954+ core_process = self['core_processes'].pop(0)
3955+ # Get all final state legs
3956+ fs_legs = core_process.get('processes')[0].get_final_legs()
3957+ fs_ids = [leg.get('id') for leg in fs_legs]
3958+ decay_lists = []
3959+ # Loop over unique final state particle ids
3960+ for fs_id in set(fs_ids):
3961+ # Check if the particle id for this leg has a decay
3962+ # chain defined
3963+ if not any([any([id == fs_id for id \
3964+ in is_ids]) for is_ids in decay_is_ids]):
3965+ continue
3966+ # decay_list has the leg numbers and decays for this
3967+ # fs particle id
3968+ decay_list = []
3969+ # Now check if the number of decay chains with
3970+ # this particle id is the same as the number of
3971+ # identical particles in the core process - if so,
3972+ # use one chain for each of the identical
3973+ # particles. Otherwise, use all combinations of
3974+ # decay chains for the particles.
3975+
3976+ # Indices for the decay chain lists which contain at
3977+ # least one decay for this final state
3978+ chain_indices = filter(lambda index: fs_id in \
3979+ decay_is_ids[index],
3980+ range(len(decay_is_ids)))
3981+
3982+ my_fs_legs = filter(lambda leg: leg.get('id') == fs_id,
3983+ fs_legs)
3984+ leg_numbers = [leg.get('number') for leg in my_fs_legs]
3985+
3986+ if len(leg_numbers) > 1 and \
3987+ len(leg_numbers) == len(chain_indices):
3988+
3989+ # The decay of the different fs parts is given
3990+ # by the different decay chains, respectively
3991+ # Chains is a list of matrix element lists
3992+ chains = []
3993+ for index in chain_indices:
3994+ decay_chains = decay_elements[index]
3995+ chains.append(filter(lambda me: \
3996+ me.get('processes')[0].\
3997+ get_initial_ids()[0] == fs_id,
3998+ decay_chains))
3999+
4000+ # Combine decays for this final state type
4001+ for element in itertools.product(*chains):
4002+ decay_list.append([[n, d] for [n, d] in \
4003+ zip(leg_numbers, element)])
4004+ else:
4005+ # We let the particles decay according to the
4006+ # first decay list only
4007+ proc_index = chain_indices[0]
4008+
4009+ # Generate all combinations of decay chains with
4010+ # the given decays, without double counting
4011+ decay_indices = filter(lambda index: fs_id == \
4012+ decay_is_ids[proc_index][index],
4013+ range(len(decay_is_ids[proc_index])))
4014+
4015+
4016+ red_decay_ids = []
4017+ decay_ids = [decay_indices] * len(leg_numbers)
4018+ # Combine all decays for this final state type,
4019+ # without double counting
4020+ for prod in itertools.product(*decay_ids):
4021+
4022+ # Remove double counting between final states
4023+ if tuple(sorted(prod)) in red_decay_ids:
4024+ continue
4025+
4026+ # Register this combination to avoid double counting
4027+ red_decay_ids.append(tuple(sorted(prod)))
4028+
4029+ # Pick out the decays for this iteration
4030+ decays = [decay_elements[proc_index][chain_index] \
4031+ for chain_index in prod]
4032+
4033+ decay_list.append([[n, d] for [n, d] in \
4034+ zip(leg_numbers, decays)])
4035+
4036+ decay_lists.append(decay_list)
4037+
4038+ # Finally, combine all the decays for this process,
4039+ # decay by decay
4040+ for decays in itertools.product(*decay_lists):
4041+
4042+ # Generate a dictionary from leg number to decay process
4043+ decay_dict = dict(sum(decays, []))
4044+
4045+ # Make sure to not modify the original matrix element
4046+ matrix_element = copy.deepcopy(core_process)
4047+ # Avoid Python copying the complete model every time
4048+ for i, process in enumerate(matrix_element.get('processes')):
4049+ process.set('model',
4050+ core_process.get('processes')[i].get('model'))
4051+ # Need to replace Particle in all wavefunctions to avoid
4052+ # deepcopy
4053+ idecay = 0
4054+ org_wfs = core_process.get_all_wavefunctions()
4055+ for i, wf in enumerate(matrix_element.get_all_wavefunctions()):
4056+ wf.set('particle', org_wfs[i].get('particle'))
4057+ wf.set('antiparticle', org_wfs[i].get('antiparticle'))
4058+
4059+ # Insert the decay chains
4060+ logger.info("Combine %s with decays %s" % \
4061+ (core_process.get('processes')[0].nice_string().\
4062+ replace('Process: ', ''), \
4063+ ", ".join([d.get('processes')[0].nice_string().\
4064+ replace('Process: ', '') \
4065+ for d in decay_dict.values()])))
4066+
4067+ matrix_element.insert_decay_chains(decay_dict)
4068+
4069+ try:
4070+ # If an identical matrix element is already in the list,
4071+ # then simply add this process to the list of
4072+ # processes for that matrix element
4073+ other_processes = matrix_elements[\
4074+ matrix_elements.index(matrix_element)].get('processes')
4075+ logger.info("Combining process with %s" % \
4076+ other_processes[0].nice_string().replace('Process: ', ''))
4077+ other_processes.extend(matrix_element.get('processes'))
4078+ except ValueError:
4079+ # Otherwise, if the matrix element has any diagrams,
4080+ # add this matrix element.
4081+ if matrix_element.get('processes') and \
4082+ matrix_element.get('diagrams'):
4083+ matrix_elements.append(matrix_element)
4084+
4085+ return matrix_elements
4086+
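The double-counting removal above (identical final-state legs decayed by the same list of chains) is itertools.product filtered through tuple(sorted(...)). In isolation, with two identical legs and three possible decays (toy numbers, standing in for the actual leg numbers and chain indices):

    import itertools

    leg_numbers = [3, 4]       # two identical final-state particles
    decay_indices = [0, 1, 2]  # three decay chains defined for them

    red_decay_ids = []
    assignments = []
    for prod in itertools.product(*([decay_indices] * len(leg_numbers))):
        if tuple(sorted(prod)) in red_decay_ids:
            continue  # permutation of an assignment we already have
        red_decay_ids.append(tuple(sorted(prod)))
        assignments.append(list(zip(leg_numbers, prod)))

    print(len(assignments))  # 6 distinct assignments instead of 3 * 3 = 9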
4087+#===============================================================================
4088+# HelasDecayChainProcessList
4089+#===============================================================================
4090+class HelasDecayChainProcessList(base_objects.PhysicsObjectList):
4091+ """List of HelasDecayChainProcess objects
4092+ """
4093+
4094+ def is_valid_element(self, obj):
4095+ """Test if object obj is a valid HelasDecayChainProcess for the list."""
4096+
4097+ return isinstance(obj, HelasDecayChainProcess)
4098+
4099+#===============================================================================
4100 # HelasMultiProcess
4101 #===============================================================================
4102 class HelasMultiProcess(base_objects.PhysicsObject):
4103@@ -1711,6 +3493,10 @@
4104 elif isinstance(argument, diagram_generation.MultiProcess):
4105 super(HelasMultiProcess, self).__init__()
4106 self.generate_matrix_elements(argument.get('amplitudes'))
4107+ elif isinstance(argument, diagram_generation.Amplitude):
4108+ super(HelasMultiProcess, self).__init__()
4109+ self.generate_matrix_elements(\
4110+ diagram_generation.AmplitudeList([argument]))
4111 elif argument:
4112 # call the mother routine
4113 super(HelasMultiProcess, self).__init__(argument)
4114@@ -1724,7 +3510,7 @@
4115 defined by HelasMatrixElement.__eq__"""
4116
4117 if not isinstance(amplitudes, diagram_generation.AmplitudeList):
4118- raise self.HelasMultiProcessError, \
4119+ raise self.PhysicsObjectError, \
4120 "%s is not valid AmplitudeList" % repr(amplitudes)
4121
4122 # Keep track of already generated color objects, to reuse as
4123@@ -1735,56 +3521,73 @@
4124
4125 matrix_elements = self.get('matrix_elements')
4126
4127- for amplitude in amplitudes:
4128- logger.info("Generating Helas calls for %s" % \
4129- amplitude.get('process').nice_string().replace('Process', 'process'))
4130- matrix_element = HelasMatrixElement(amplitude, gen_color=False)
4131- try:
4132- # If an identical matrix element is already in the list,
4133- # then simply add this process to the list of
4134- # processes for that matrix element
4135- other_processes = matrix_elements[\
4136+ while amplitudes:
4137+ # Pop the amplitude to save memory space
4138+ amplitude = amplitudes.pop(0)
4139+ if isinstance(amplitude, diagram_generation.DecayChainAmplitude):
4140+ matrix_element_list = HelasDecayChainProcess(amplitude).\
4141+ combine_decay_chain_processes()
4142+ else:
4143+ logger.info("Generating Helas calls for %s" % \
4144+ amplitude.get('process').nice_string().\
4145+ replace('Process', 'process'))
4146+ matrix_element_list = [HelasMatrixElement(amplitude,
4147+ gen_color=False)]
4148+ for matrix_element in matrix_element_list:
4149+ if not isinstance(matrix_element, HelasMatrixElement):
4150+ raise self.PhysicsObjectError, \
4151+ "Not a HelasMatrixElement: %s" % matrix_element
4152+ try:
4153+ # If an identical matrix element is already in the list,
4154+ # then simply add this process to the list of
4155+ # processes for that matrix element
4156+ other_processes = matrix_elements[\
4157 matrix_elements.index(matrix_element)].get('processes')
4158- logger.info("Combining process with %s" % \
4159- other_processes[0].nice_string().replace('Process: ', ''))
4160- other_processes.append(amplitude.get('process'))
4161-
4162- except ValueError:
4163- # Otherwise, if the matrix element has any diagrams,
4164- # add this matrix element.
4165- if matrix_element.get('processes') and \
4166- matrix_element.get('diagrams'):
4167- matrix_elements.append(matrix_element)
4168-
4169- # Always create an empty color basis, and the list of raw
4170- # colorize objects (before simplification) associated with amplitude
4171- col_basis = color_amp.ColorBasis()
4172- colorize_obj = col_basis.create_color_dict_list(amplitude)
4173-
4174- try:
4175- # If the color configuration of the ME has already been
4176- # considered before, recycle the information
4177- col_index = list_colorize.index(colorize_obj)
4178- logger.info(\
4179- "Reusing existing color information for %s" % \
4180- amplitude.get('process').nice_string().replace('Process',
4181- 'process'))
4182+ logger.info("Combining process with %s" % \
4183+ other_processes[0].nice_string().replace('Process: ', ''))
4184+ other_processes.extend(matrix_element.get('processes'))
4185 except ValueError:
4186- # If not, create color basis and color matrix accordingly
4187- list_colorize.append(colorize_obj)
4188- col_basis.build()
4189- list_color_basis.append(col_basis)
4190- col_matrix = color_amp.ColorMatrix(col_basis)
4191- list_color_matrices.append(col_matrix)
4192- col_index = -1
4193- logger.info(\
4194- "Processing color information for %s" % \
4195- amplitude.get('process').nice_string().replace('Process',
4196- 'process'))
4197+ # Otherwise, if the matrix element has any diagrams,
4198+ # add this matrix element.
4199+ if matrix_element.get('processes') and \
4200+ matrix_element.get('diagrams'):
4201+ matrix_elements.append(matrix_element)
4202+
4203+ # Always create an empty color basis, and the
4204+ # list of raw colorize objects (before
4205+ # simplification) associated with amplitude
4206+ col_basis = color_amp.ColorBasis()
4207+ new_amp = matrix_element.get_base_amplitude()
4208+ matrix_element.set('base_amplitude', new_amp)
4209+ colorize_obj = col_basis.create_color_dict_list(new_amp)
4210+
4211+ try:
4212+ # If the color configuration of the ME has
4213+ # already been considered before, recycle
4214+ # the information
4215+ col_index = list_colorize.index(colorize_obj)
4216+ logger.info(\
4217+ "Reusing existing color information for %s" % \
4218+ matrix_element.get('processes')[0].nice_string().\
4219+ replace('Process', 'process'))
4220+ except ValueError:
4221+ # If not, create color basis and color
4222+ # matrix accordingly
4223+ list_colorize.append(colorize_obj)
4224+ col_basis.build()
4225+ list_color_basis.append(col_basis)
4226+ col_matrix = color_amp.ColorMatrix(col_basis)
4227+ list_color_matrices.append(col_matrix)
4228+ col_index = -1
4229+ logger.info(\
4230+ "Processing color information for %s" % \
4231+ matrix_element.get('processes')[0].nice_string().\
4232+ replace('Process', 'process'))
4233
4234 matrix_element.set('color_basis', list_color_basis[col_index])
4235- matrix_element.set('color_matrix', list_color_matrices[col_index])
4236-
4237+ matrix_element.set('color_matrix',
4238+ list_color_matrices[col_index])
4239+
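The color reuse above is a look-up-or-build pattern: the raw colorize object is searched for in a list of previously seen ones, and only on a miss are the color basis and matrix built and stored. The bare pattern, with a stand-in expensive_build function (both function and key strings are made up for the illustration):

    seen_keys = []
    stored_results = []

    def expensive_build(key):
        """Stand-in for building a ColorBasis/ColorMatrix."""
        return "result for %s" % key

    def get_result(key):
        try:
            index = seen_keys.index(key)  # reuse existing information
        except ValueError:
            seen_keys.append(key)
            stored_results.append(expensive_build(key))
            index = -1                    # the entry just appended
        return stored_results[index]

    print(get_result("u u~ > d d~"))  # built on the first call
    print(get_result("u u~ > d d~"))  # reused on the second call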
4240 #===============================================================================
4241 # HelasModel
4242 #===============================================================================
4243@@ -1850,10 +3653,11 @@
4244 repr(matrix_element)
4245
4246 res = []
4247- for n, diagram in enumerate(matrix_element.get('diagrams')):
4248+ for diagram in matrix_element.get('diagrams'):
4249 res.extend([ self.get_wavefunction_call(wf) for \
4250 wf in diagram.get('wavefunctions') ])
4251- res.append("# Amplitude(s) for diagram number %d" % (n + 1))
4252+ res.append("# Amplitude(s) for diagram number %d" % \
4253+ diagram.get('number'))
4254 for amplitude in diagram.get('amplitudes'):
4255 res.append(self.get_amplitude_call(amplitude))
4256
4257@@ -1926,3 +3730,4 @@
4258 self.set('name', argument.get('name'))
4259 else:
4260 super(HelasModel, self).__init__(argument)
4261+
4262
4263=== modified file 'madgraph/interface/cmd_interface.py'
4264--- madgraph/interface/cmd_interface.py 2010-02-12 01:58:30 +0000
4265+++ madgraph/interface/cmd_interface.py 2010-05-07 07:20:46 +0000
4266@@ -18,6 +18,7 @@
4267 """
4268
4269 import cmd
4270+import copy
4271 import os
4272 import subprocess
4273 import sys
4274@@ -59,10 +60,17 @@
4275 'interactions',
4276 'processes',
4277 'multiparticles']
4278+ __add_opts = ['process']
4279 __save_opts = ['model',
4280 'processes']
4281 __import_formats = ['v4']
4282- __export_formats = ['v4standalone', 'v4sa_dirs']
4283+ __export_formats = ['v4standalone', 'v4sa_dirs', 'v4madevent']
4284+
4285+
4286+ class MadGraphCmdError(Exception):
4287+ """Exception raised if an error occurs in the execution
4288+ of command."""
4289+ pass
4290
4291 def split_arg(self, line):
4292 """Split a line of arguments"""
4293@@ -145,6 +153,30 @@
4294 "* *\n" + \
4295 "************************************************************"
4296
4297+ def precmd(self, line):
4298+ """Force the printing of the line when commands are read from stdin"""
4299+ if not self.use_rawinput:
4300+ print line
4301+
4302+ return line
4303+
4304+
4305+ def emptyline(self):
4306+ """If empty line, do nothing. Default is repeat previous command."""
4307+
4308+ pass
4309+
4310+ def default(self, line):
4311+ """Default action if line is not recognized"""
4312+
4313+ if line[0] == "#":
4314+ # This is a comment - do nothing
4315+ return
4316+ else:
4317+ # Faulty command
4318+ print "Command \"%s\" not recognized, please try again" % \
4319+ line.split()[0]
4320+
4321 # Import files
4322 def do_import(self, line):
4323 """Import files with external formats"""
4324@@ -231,18 +263,18 @@
4325 if self.__curr_model:
4326 #save_model.save_model(args[1], self.__curr_model)
4327 if save_load_object.save_to_file(args[1], self.__curr_model):
4328- print 'Saved model to file ',args[1]
4329+ print 'Saved model to file ', args[1]
4330 else:
4331 print 'No model to save!'
4332 elif args[0] == 'processes':
4333 if self.__curr_amps:
4334 if save_load_object.save_to_file(args[1], self.__curr_amps):
4335- print 'Saved processes to file ',args[1]
4336+ print 'Saved processes to file ', args[1]
4337 else:
4338 print 'No processes to save!'
4339 else:
4340 self.help_save()
4341-
4342+
4343 def do_load(self, line):
4344 """Load information from file"""
4345
4346@@ -260,7 +292,7 @@
4347 print "Loaded model from file in %0.3f s" % \
4348 (cpu_time2 - cpu_time1)
4349 else:
4350- print 'Error: Could not load model from file ',args[1]
4351+ print 'Error: Could not load model from file ', args[1]
4352 elif args[0] == 'processes':
4353 self.__curr_amps = save_load_object.load_from_file(args[1])
4354 if isinstance(self.__curr_amps, diagram_generation.AmplitudeList):
4355@@ -272,10 +304,10 @@
4356 get('process').get('model')
4357 print "Model set from process."
4358 else:
4359- print 'Error: Could not load processes from file ',args[1]
4360+ print 'Error: Could not load processes from file ', args[1]
4361 else:
4362 self.help_save()
4363-
4364+
4365 def complete_save(self, text, line, begidx, endidx):
4366 "Complete the save command"
4367
4368@@ -320,10 +352,10 @@
4369
4370 #option
4371 if len(self.split_arg(line[0:begidx])) >= 2:
4372- option=['external=', 'horizontal=', 'add_gap=','max_size=', \
4373+ option = ['external=', 'horizontal=', 'add_gap=', 'max_size=', \
4374 'contract_non_propagating=']
4375 return self.list_completion(text, option)
4376-
4377+
4378 # Display
4379 def do_display(self, line):
4380 """Display current internal status"""
4381@@ -362,9 +394,7 @@
4382
4383 if args[0] == 'processes':
4384 for amp in self.__curr_amps:
4385- print amp.get('process').nice_string()
4386- print amp.get('diagrams').nice_string()
4387-
4388+ print amp.nice_string()
4389 if args[0] == 'multiparticles':
4390 print 'Multiparticle labels:'
4391 for key in self.__multiparticles:
4392@@ -404,17 +434,129 @@
4393 " please create one first!"
4394 return False
4395
4396+ # Reset Helas matrix elements
4397+ self.__curr_matrix_elements = helas_objects.HelasMultiProcess()
4398+
4399+ try:
4400+ if line.find(',') == -1:
4401+ myprocdef = self.extract_process(line)
4402+ else:
4403+ myprocdef, line = self.extract_decay_chain_process(line)
4404+ except self.MadGraphCmdError as error:
4405+ print "Empty or wrong format process, please try again. Error:\n" \
4406+ + str(error)
4407+ myprocdef = None
4408+
4409+ if myprocdef:
4410+
4411+ cpu_time1 = time.time()
4412+
4413+ myproc = diagram_generation.MultiProcess(myprocdef)
4414+
4415+ self.__curr_amps = myproc.get('amplitudes')
4416+
4417+ cpu_time2 = time.time()
4418+
4419+ nprocs = len(self.__curr_amps)
4420+ ndiags = sum([amp.get_number_of_diagrams() for \
4421+ amp in self.__curr_amps])
4422+ print "%i processes with %i diagrams generated in %0.3f s" % \
4423+ (nprocs, ndiags, (cpu_time2 - cpu_time1))
4424+
4425+ else:
4426+ print "Empty or wrong format process, please try again."
4427+
4428+
4429+ # Helper functions
4430+ def extract_decay_chain_process(self, line, level_down = False):
4431+ """Recursively extract a decay chain process definition from a
4432+ string. Returns a tuple (ProcessDefinition, remaining line)."""
4433+
4434+ # Start with process number (identified by "@")
4435+ proc_number_pattern = re.compile("^(.+)@\s*(\d+)\s*(.*)$")
4436+ proc_number_re = proc_number_pattern.match(line)
4437+ proc_number = 0
4438+ if proc_number_re:
4439+ proc_number = int(proc_number_re.group(2))
4440+ line = proc_number_re.group(1) + \
4441+ proc_number_re.group(3)
4442+ print line
4443+
4444+ index_comma = line.find(",")
4445+ index_par = line.find(")")
4446+ min_index = index_comma
4447+ if index_par > -1 and (index_par < min_index or min_index == -1):
4448+ min_index = index_par
4449+
4450+ if min_index > -1:
4451+ core_process = self.extract_process(line[:min_index], proc_number)
4452+ else:
4453+ core_process = self.extract_process(line, proc_number)
4454+
4455+ #level_down = False
4456+
4457+ while index_comma > -1:
4458+ line = line[index_comma + 1:]
4459+ index_par = line.find(')')
4460+ if line.lstrip()[0] == '(':
4461+ # Go down one level in process hierarchy
4462+ #level_down = True
4463+ line = line.lstrip()[1:]
4464+ # This is where recursion happens
4465+ decay_process, line = \
4466+ self.extract_decay_chain_process(line,
4467+ level_down = True)
4468+ index_comma = line.find(",")
4469+ index_par = line.find(')')
4470+ else:
4471+ index_comma = line.find(",")
4472+ min_index = index_comma
4473+ if index_par > -1 and \
4474+ (index_par < min_index or min_index == -1):
4475+ min_index = index_par
4476+ if min_index > -1:
4477+ decay_process = self.extract_process(line[:min_index])
4478+ else:
4479+ decay_process = self.extract_process(line)
4480+
4481+ core_process.get('decay_chains').append(decay_process)
4482+
4483+ if level_down:
4484+ if index_par == -1:
4485+ raise self.MadGraphCmdError,\
4486+ "Missing ending parenthesis for decay process"
4487+
4488+ if index_par < index_comma:
4489+ line = line[index_par + 1:]
4490+ level_down = False
4491+ break
4492+
4493+ if level_down:
4494+ index_par = line.find(')')
4495+ if index_par == -1:
4496+ raise self.MadGraphCmdError,\
4497+ "Missing ending parenthesis for decay process"
4498+ line = line[index_par + 1:]
4499+
4500+ # Return the core process (ends recursion when there are no
4501+ # more decays)
4502+ return core_process, line
4503+
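
    # Illustration (not part of the patch): a minimal standalone sketch of the
    # same comma/parenthesis scan that extract_decay_chain_process performs,
    # returning simple nested tuples instead of ProcessDefinition objects.
    # The helper names parse_chain and _split_core are invented for this
    # example only.

    def _split_core(s):
        """Return (text up to the first ',' or ')', remainder of the string)."""
        for i, char in enumerate(s):
            if char in ',)':
                return s[:i].strip(), s[i:]
        return s.strip(), ''

    def parse_chain(s):
        """Return ((core process, [decay nodes]), remaining text)."""
        core, rest = _split_core(s)
        decays = []
        while rest.startswith(','):
            rest = rest[1:].lstrip()
            if rest.startswith('('):
                # Go down one level in the decay hierarchy (recursion)
                node, rest = parse_chain(rest[1:])
                if not rest.startswith(')'):
                    raise ValueError("Missing ending parenthesis for decay process")
                rest = rest[1:].lstrip()
            else:
                sub_core, rest = _split_core(rest)
                node = (sub_core, [])
            decays.append(node)
        return (core, decays), rest

    # Example, using the decay chain from the help text:
    print(parse_chain("g g > t~ t, (t~ > w- b~, w- > e- ve~), t > w+ b")[0])
    # ('g g > t~ t', [('t~ > w- b~', [('w- > e- ve~', [])]), ('t > w+ b', [])])
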
4504+ def extract_process(self, line, proc_number = 0):
4505+ """Extract a process definition from a string. Returns
4506+ a ProcessDefinition, or None if no valid process is found."""
4507+
4508 # Use regular expressions to extract s-channel propagators,
4509 # forbidden s-channel propagators/particles, coupling orders
4510 # and process number, starting from the back
4511
4512 # Start with process number (identified by "@")
4513- proc_number_pattern = re.compile("^(.+)\s*@\s*(\d+)\s*$")
4514+ proc_number_pattern = re.compile("^(.+)@\s*(\d+)\s*(.*)$")
4515 proc_number_re = proc_number_pattern.match(line)
4516- proc_number = 0
4517 if proc_number_re:
4518 proc_number = int(proc_number_re.group(2))
4519- line = proc_number_re.group(1)
4520+ line = proc_number_re.group(1) + \
4521+ proc_number_re.group(3)
4522
4523 # Start with coupling orders (identified by "=")
4524 order_pattern = re.compile("^(.+)\s+(\w+)\s*=\s*(\d+)\s*$")
4525@@ -429,11 +571,19 @@
4526 line = line.lower()
4527
4528 # Now check for forbidden particles, specified using "/"
4529- forbidden_particles_re = re.match("^(.+)\s*/\s*(.+)\s*$", line)
4530+ slash = line.find("/")
4531+ dollar = line.find("$")
4532 forbidden_particles = ""
4533- if forbidden_particles_re:
4534- forbidden_particles = forbidden_particles_re.group(2)
4535- line = forbidden_particles_re.group(1)
4536+ if slash > 0:
4537+ if dollar > slash:
4538+ forbidden_particles_re = re.match("^(.+)\s*/\s*(.+\s*)(\$.*)$", line)
4539+ else:
4540+ forbidden_particles_re = re.match("^(.+)\s*/\s*(.+\s*)$", line)
4541+ if forbidden_particles_re:
4542+ forbidden_particles = forbidden_particles_re.group(2)
4543+ line = forbidden_particles_re.group(1)
4544+ if len(forbidden_particles_re.groups()) > 2:
4545+ line = line + forbidden_particles_re.group(3)
4546
4547 # Now check for forbidden schannels, specified using "$"
4548 forbidden_schannels_re = re.match("^(.+)\s*\$\s*(.+)\s*$", line)
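
    # Illustration (not part of the patch): the regular expressions above peel
    # the optional pieces off a process line one at a time, starting from the
    # back. A standalone sketch of the "@N" process-number and "COUP=n"
    # coupling-order extraction; the function name strip_options is invented
    # for this example only.

    import re

    def strip_options(line):
        """Return (cleaned process line, process number, coupling orders)."""
        proc_number = 0
        match = re.match(r"^(.+)@\s*(\d+)\s*(.*)$", line)
        if match:
            proc_number = int(match.group(2))
            line = match.group(1) + match.group(3)
        orders = {}
        match = re.match(r"^(.+)\s+(\w+)\s*=\s*(\d+)\s*$", line)
        while match:
            orders[match.group(2)] = int(match.group(3))
            line = match.group(1)
            match = re.match(r"^(.+)\s+(\w+)\s*=\s*(\d+)\s*$", line)
        return line.strip(), proc_number, orders

    print(strip_options("u d~ > w+ > m+ vm g QED=3 QCD=0 @1"))
    # -> ('u d~ > w+ > m+ vm g', 1, {'QED': 3, 'QCD': 0})  (dict order may vary)
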
4549@@ -452,21 +602,18 @@
4550
4551 args = self.split_arg(line)
4552
4553- # Reset Helas matrix elements
4554- self.__curr_matrix_elements = helas_objects.HelasMultiProcess()
4555-
4556 myleglist = base_objects.MultiLegList()
4557- state = 'initial'
4558+ state = False
4559 number = 1
4560
4561 # Extract process
4562 for part_name in args:
4563
4564 if part_name == '>':
4565- if not len(myleglist):
4566- print "Empty or wrong format process, please try again."
4567- return False
4568- state = 'final'
4569+ if not myleglist:
4570+ raise self.MadGraphCmdError, \
4571+ "No particles before '>' in process"
4572+ state = True
4573 continue
4574
4575 mylegids = []
4576@@ -481,9 +628,10 @@
4577 myleglist.append(base_objects.MultiLeg({'ids':mylegids,
4578 'state':state}))
4579 else:
4580- print "No particle %s in model: skipped" % part_name
4581+ raise self.MadGraphCmdError,\
4582+ "No particle %s in model" % part_name
4583
4584- if filter(lambda leg: leg.get('state') == 'final', myleglist):
4585+ if filter(lambda leg: leg.get('state') == True, myleglist):
4586 # We have a valid process
4587
4588 # Now extract restrictions
4589@@ -491,6 +639,10 @@
4590 forbidden_schannel_ids = []
4591 required_schannel_ids = []
4592
4593+ decay_process = len(filter(lambda leg: \
4594+ leg.get('state') == False,
4595+ myleglist)) == 1
4596+
4597 if forbidden_particles:
4598 args = self.split_arg(forbidden_particles)
4599 for part_name in args:
4600@@ -521,37 +673,96 @@
4601 if mypart:
4602 required_schannel_ids.append(mypart.get_pdg_code())
4603
4604-
4605-
4606- myprocdef = base_objects.ProcessDefinitionList([\
4607+ return \
4608 base_objects.ProcessDefinition({'legs': myleglist,
4609- 'model': self.__curr_model,
4610- 'id': proc_number,
4611- 'orders': orders,
4612- 'forbidden_particles': forbidden_particle_ids,
4613- 'forbidden_s_channels': forbidden_schannel_ids,
4614- 'required_s_channels': required_schannel_ids \
4615- })])
4616- myproc = diagram_generation.MultiProcess({'process_definitions':\
4617- myprocdef})
4618-
4619- cpu_time1 = time.time()
4620-
4621- self.__curr_amps = myproc.get('amplitudes')
4622-
4623- cpu_time2 = time.time()
4624-
4625- nprocs = len(filter(lambda amp: amp.get("diagrams"),
4626- self.__curr_amps))
4627- ndiags = sum([len(amp.get('diagrams')) for \
4628- amp in self.__curr_amps])
4629- print "%i processes with %i diagrams generated in %0.3f s" % \
4630- (nprocs, ndiags, (cpu_time2 - cpu_time1))
4631-
4632+ 'model': self.__curr_model,
4633+ 'id': proc_number,
4634+ 'orders': orders,
4635+ 'forbidden_particles': forbidden_particle_ids,
4636+ 'forbidden_s_channels': forbidden_schannel_ids,
4637+ 'required_s_channels': required_schannel_ids
4638+ })
4639+ # 'is_decay_chain': decay_process\
4640 else:
4641- print "Empty or wrong format process, please try again."
4642+ return None
4643
4644+ # Add a process to the existing multiprocess definition
4645 # Generate a new amplitude
4646+ def do_add(self, line):
4647+ """Generate an amplitude for a given process and add to
4648+ existing amplitudes"""
4649+
4650+ if len(line) < 1:
4651+ self.help_add()
4652+ return False
4653+
4654+ if len(self.__curr_model['particles']) == 0:
4655+ print "No particle list currently active, please create one first!"
4656+ return False
4657+
4658+ if len(self.__curr_model['interactions']) == 0:
4659+ print "No interaction list currently active," + \
4660+ " please create one first!"
4661+ return False
4662+
4663+ args = self.split_arg(line)
4664+ if len(args) < 2:
4665+ self.help_import()
4666+ return False
4667+
4668+ if args[0] == 'process':
4669+ # Rejoin line
4670+ line = ' '.join(args[1:])
4671+
4672+ # Reset Helas matrix elements
4673+ self.__curr_matrix_elements = helas_objects.HelasMultiProcess()
4674+
4675+ try:
4676+ if line.find(',') == -1:
4677+ myprocdef = self.extract_process(line)
4678+ else:
4679+ myprocdef, line = self.extract_decay_chain_process(line)
4680+ except self.MadGraphCmdError as error:
4681+ print "Empty or wrong format process, please try again. Error:\n" \
4682+ + str(error)
4683+ myprocdef = None
4684+
4685+ if myprocdef:
4686+
4687+ cpu_time1 = time.time()
4688+
4689+ myproc = diagram_generation.MultiProcess(myprocdef)
4690+
4691+ for amp in myproc.get('amplitudes'):
4692+ if amp not in self.__curr_amps:
4693+ self.__curr_amps.append(amp)
4694+ else:
4695+ print "Warning: Already in processes:"
4696+ print amp.nice_string_processes()
4697+
4698+ cpu_time2 = time.time()
4699+
4700+ nprocs = len(myproc.get('amplitudes'))
4701+ ndiags = sum([amp.get_number_of_diagrams() for \
4702+ amp in myproc.get('amplitudes')])
4703+ print "%i processes with %i diagrams generated in %0.3f s" % \
4704+ (nprocs, ndiags, (cpu_time2 - cpu_time1))
4705+ ndiags = sum([amp.get_number_of_diagrams() for \
4706+ amp in self.__curr_amps])
4707+ print "Total: %i processes with %i diagrams" % \
4708+ (len(self.__curr_amps), ndiags)
4709+ else:
4710+ print "Empty or wrong format process, please try again."
4711+
4712+
4713+ def complete_add(self, text, line, begidx, endidx):
4714+ "Complete the add command"
4715+
4716+ # Format
4717+ if len(self.split_arg(line[0:begidx])) == 1:
4718+ return self.list_completion(text, self.__add_opts)
4719+
4720+ # Export a matrix element
4721 def do_export(self, line):
4722 """Export a generated amplitude to file"""
4723
4724@@ -584,7 +795,7 @@
4725 self.help_export()
4726 return False
4727
4728- if not filter(lambda amp: amp.get("diagrams"), self.__curr_amps):
4729+ if not self.__curr_amps:
4730 print "No process generated, please generate a process!"
4731 return False
4732
4733@@ -594,7 +805,7 @@
4734 ndiags, cpu_time = generate_matrix_elements(self)
4735 calls = 0
4736 path = args[1]
4737-
4738+
4739 if args[0] == 'v4standalone':
4740 for me in self.__curr_matrix_elements.get('matrix_elements'):
4741 filename = os.path.join(path, 'matrix_' + \
4742@@ -615,13 +826,26 @@
4743 export_v4.generate_subprocess_directory_v4_standalone(\
4744 me, self.__curr_fortran_model, path)
4745
4746+ if args[0] == 'v4madevent':
4747+ for me in self.__curr_matrix_elements.get('matrix_elements'):
4748+ calls = calls + \
4749+ export_v4.generate_subprocess_directory_v4_madevent(\
4750+ me, self.__curr_fortran_model, path)
4751+
4752 print ("Generated helas calls for %d subprocesses " + \
4753 "(%d diagrams) in %0.3f s") % \
4754 (len(self.__curr_matrix_elements.get('matrix_elements')),
4755 ndiags, cpu_time)
4756-
4757+
4758 print "Wrote %d helas calls" % calls
4759
4760+ # Replace the amplitudes with the actual amplitudes from the
4761+ # matrix elements, which also allows proper diagram drawing of
4762+ # decay chain processes
4763+ self.__curr_amps = diagram_generation.AmplitudeList(\
4764+ [me.get('base_amplitude') for me in \
4765+ self.__curr_matrix_elements.get('matrix_elements')])
4766+
4767 def complete_export(self, text, line, begidx, endidx):
4768 "Complete the export command"
4769
4770@@ -683,7 +907,7 @@
4771 self.help_draw()
4772 return False
4773
4774- if not filter(lambda amp: amp.get("diagrams"), self.__curr_amps):
4775+ if not self.__curr_amps:
4776 print "No process generated, please generate a process!"
4777 return False
4778
4779@@ -700,9 +924,15 @@
4780 print "invalid syntax: '%s'. Please try again" % data
4781 self.help_draw()
4782 return False
4783- option.set(key,value)
4784+ option.set(key, value)
4785+
4786+ # Collect amplitudes
4787+ amplitudes = diagram_generation.AmplitudeList()
4788
4789 for amp in self.__curr_amps:
4790+ amplitudes.extend(amp.get_amplitudes())
4791+
4792+ for amp in amplitudes:
4793 filename = os.path.join(args[0], 'diagrams_' + \
4794 amp.get('process').shell_string() + ".eps")
4795 plot = draw.MultiEpsDiagramDrawer(amp['diagrams'],
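
    # Illustration (not part of the patch): a decay-chain amplitude holds
    # nested amplitudes, so the draw command above first flattens everything
    # with get_amplitudes() and only then loops over drawable amplitudes.
    # The two classes below are simplified stand-ins, not the MadGraph ones.

    class Amplitude(object):
        def get_amplitudes(self):
            # A plain amplitude is its own single drawable amplitude
            return [self]

    class DecayChainAmplitude(Amplitude):
        def __init__(self, amplitudes):
            self.amplitudes = amplitudes
        def get_amplitudes(self):
            # Recursively collect the amplitudes of the core process and decays
            result = []
            for amp in self.amplitudes:
                result.extend(amp.get_amplitudes())
            return result

    top = DecayChainAmplitude([Amplitude(),
                               DecayChainAmplitude([Amplitude(), Amplitude()])])
    print(len(top.get_amplitudes()))  # 3 drawable amplitudes
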
4796@@ -745,7 +975,21 @@
4797
4798 print "syntax: generate INITIAL STATE > REQ S-CHANNEL > FINAL STATE $ EXCL S-CHANNEL / FORBIDDEN PARTICLES COUP1=ORDER1 COUP2=ORDER2"
4799 print "-- generate diagrams for a given process"
4800- print " Example: u d~ > w+ > m+ vm g $ a / z h QED=3 QCD=0"
4801+ print " Example: u d~ > w+ > m+ vm g $ a / z h QED=3 QCD=0 @1"
4802+ print "Decay chain syntax:"
4803+ print " core process, decay1, (decay2, (decay3, ...)), ... etc"
4804+ print " Example: g g > t~ t @2, (t~ > W- b~, W- > e- ve~), t > W+ b"
4805+ print " Note that identical particles will all be decayed"
4806+
4807+ def help_add(self):
4808+
4809+ print "syntax: add process INITIAL STATE > REQ S-CHANNEL > FINAL STATE $ EXCL S-CHANNEL / FORBIDDEN PARTICLES COUP1=ORDER1 COUP2=ORDER2"
4810+ print "-- generate diagrams for a process and add to existing processes"
4811+ print " Syntax example: u d~ > w+ > m+ vm g $ a / z h QED=3 QCD=0 @1"
4812+ print "Decay chain syntax:"
4813+ print " core process, decay1, (decay2, (decay3, ...)), ... etc"
4814+ print " Example: g g > t~ t @2, (t~ > W- b~, W- > e- ve~), t > W+ b"
4815+ print " Note that identical particles will all be decayed"
4816
4817 def help_define(self):
4818 print "syntax: define multipart_name [ part_name_list ]"
4819@@ -757,7 +1001,11 @@
4820 " FILEPATH"
4821 print """-- export matrix elements. For v4standalone, the resulting
4822 files will be FILEPATH/matrix_\"process_string\".f. For v4sa_dirs,
4823- the result is a set of complete MG4 Standalone process directories."""
4824+ the result is a set of complete MG4 Standalone process directories.
4825+ For v4madevent, the path must point to a MadEvent SubProcesses
4826+ directory; the result is one Pxxx directory per subprocess
4827+ (including the diagram .ps and .jpg files) together with a
4828+ correctly generated subproc.mg file."""
4829
4830 def help_draw(self):
4831 print "syntax: draw FILEPATH [option=value]"
4832
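
To make the new commands concrete, here is a schematic interactive session
using the syntax documented in the help texts above (the prompt, the paths and
the assumption that a model has already been imported are illustrative only;
the process lines are adapted from the help examples):

    mg5> generate g g > t~ t @1, (t~ > w- b~, w- > e- ve~), t > w+ b
    mg5> add process u u~ > t~ t @2, (t~ > w- b~, w- > e- ve~), t > w+ b
    mg5> export v4madevent /path/to/MyProcess/SubProcesses
    mg5> draw /path/to/diagram_dir
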
4833=== modified file 'madgraph/iolibs/drawing.py'
4834--- madgraph/iolibs/drawing.py 2010-01-29 06:40:30 +0000
4835+++ madgraph/iolibs/drawing.py 2010-05-07 07:20:46 +0000
4836@@ -86,12 +86,12 @@
4837 return
4838
4839 def def_end_point(self, vertex):
4840- """-Re-Define the starting point of the line."""
4841+ """Re-define the end point of the line, with a check on the vertex type."""
4842
4843 if not isinstance(vertex, VertexPoint):
4844 raise self.FeynmanLineError, 'The end point should be a ' + \
4845 'Vertex_Point object'
4846-
4847+
4848 self.end = vertex
4849 vertex.add_line(self)
4850 return
4851@@ -741,7 +741,7 @@
4852 vertex_point = VertexPoint(vertex)
4853 self.vertexList.append(vertex_point)
4854 # If initial state particle, we will need to flip begin-end
4855- if line.get('state') == 'initial':
4856+ if line.get('state') == False:
4857 if line.start:
4858 line.inverse_begin_end()
4859 line.def_begin_point(vertex_point)
4860@@ -797,7 +797,7 @@
4861 2) Add this vertex in vertexList of the diagram
4862 3) Update vertex.line list. (first update the leg into line if needed)
4863 4) assign line.start[end] to this vertex. (in end if start is already
4864- assigned to another vertex). the start-end will be flipped later
4865+ assigned to another vertex). the start-end will be flipped later
4866 if needed."""
4867
4868 #1) Extend to a vertex point
4869@@ -875,20 +875,32 @@
4870 #consequence we replace in line2 the common vertex by the second vertex
4871 #present in line1, such that the new line2 is the sum of the two
4872 #previous lines.
4873+ to_del = pos1
4874 if line1.end is line2.start:
4875 line2.def_begin_point(line1.start)
4876 else:
4877- line2.def_end_point(line1.end)
4878+ if line1.end:
4879+ # Standard case
4880+ line2.def_end_point(line1.end)
4881+ else:
4882+ # line1 is an initial/final line. We need to keep its status
4883+ # so we will delete line2 instead of line1
4884+ to_del = pos2
4885+ line1.def_begin_point(line2.end)
4886 # Remove line completely
4887- line1.start.remove_line(line1)
4888- del self.lineList[pos1]
4889+ if to_del == pos1:
4890+ line1.start.remove_line(line1)
4891+ del self.lineList[pos1]
4892+ else:
4893+ line2.start.remove_line(line2)
4894+ del self.lineList[pos2]
4895 # Remove last_vertex
4896 self.vertexList.remove(last_vertex)
4897
4898 def deal_last_line(self, last_line):
4899 """The line of the last vertex breaks the rules that line before
4900 '>' exist previously and the one after don't. The last one can also
4901- already exist and for the one befor the '>' sometimes they arrive
4902+ already exist and for the one before the '>' sometimes they arrive
4903 with a second object which is equivalent to another one but not
4904 the same object. discover those case and treat this properly."""
4905
4906@@ -898,7 +910,6 @@
4907 id1 = self.find_leg_id(last_line)
4908 # Find if they are a second call to this line
4909 id2 = self.find_leg_id(last_line, end=len(self._treated_legs) - id1)
4910-
4911 if id2 is not None:
4912 # Line is duplicate in linelist => remove this duplication
4913 line = self.lineList[id2]
4914@@ -920,7 +931,7 @@
4915 else:
4916 line.def_start_point(last_line.start)
4917 # Remove last_line from the vertex
4918- last_line.begin.remove_line(last_line)
4919+ last_line.start.remove_line(last_line)
4920
4921 # Remove last_line
4922 self.lineList.remove(last_line)
4923@@ -977,7 +988,7 @@
4924 continue
4925 # Check if T-channel or not. Note that T-channel tag is wrongly
4926 #define if only one particle in initial state.
4927- if line.get('state') == 'initial':
4928+ if line.get('state') == False:
4929 # This is T vertex. => level is 1
4930 line.end.def_level(1)
4931 else:
4932@@ -1016,7 +1027,7 @@
4933 evolution direction (which will be the wrong one at the next step)."""
4934
4935 for line in t_vertex.line:
4936- if line.get('state') == 'initial' and line.start is t_vertex:
4937+ if line.get('state') == False and line.start is t_vertex:
4938 return line.end
4939
4940 def find_vertex_at_level(self, previous_level):
4941@@ -1035,7 +1046,7 @@
4942 continue
4943
4944 for line in vertex.line:
4945- if line.start is vertex and line.get('state') == 'final':
4946+ if line.start is vertex and line.get('state') == True:
4947 vertex_at_level.append(line.end)
4948
4949 return vertex_at_level
4950@@ -1233,8 +1244,8 @@
4951 This occur for 1>X diagram where T-channel are wrongly define."""
4952
4953 for line in self.lineList:
4954- if line.get('state') == 'initial':
4955- line.set('state','final')
4956+ if line.get('state') == False:
4957+ line.set('state', True)
4958
4959
4960 def solve_line_direction(self):
4961@@ -1246,7 +1257,7 @@
4962
4963 # Use the basic rules. Assigns correctly but for T-channel
4964 for line in self.lineList:
4965- if line.get('state') == 'final':
4966+ if line.get('state') == True:
4967 line.define_line_orientation()
4968 # The define line orientation use level information and in consequence
4969 #fails on T-Channel. So in consequence we still have to fix T-channel
4970@@ -1269,7 +1280,7 @@
4971 for line in t_vertex.line:
4972
4973 # Identify the next T-channel particles
4974- if line.get('state') == 'initial' and t_old is not line and \
4975+ if line.get('state') == False and t_old is not line and \
4976 line.start is t_vertex:
4977 t_next = line
4978
4979@@ -1320,7 +1331,7 @@
4980 if line.is_external():
4981 # Check the size of final particles to restrict to the max_size
4982 #constraints.
4983- if line.get('state') == 'initial' or not line.is_external():
4984+ if line.get('state') == False or not line.is_external():
4985 continue
4986 size = line.get_length() * self.max_level
4987 if size > finalsize:
4988@@ -1874,4 +1885,4 @@
4989 """Convert the value in a number"""
4990
4991 return float(value)
4992-
4993\ No newline at end of file
4994+
4995
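
A recurring change in the drawing.py hunks above is that a line's 'state' is
now stored as a boolean (False for initial state, True for final state)
instead of the strings 'initial'/'final'. A minimal sketch of the new
convention, using plain dicts as stand-ins for leg/line objects:

    # False = initial-state leg, True = final-state leg (new convention)
    legs = [{'id': 2, 'state': False},    # incoming u quark
            {'id': -2, 'state': False},   # incoming u~ quark
            {'id': 23, 'state': True}]    # outgoing Z boson

    initial = [leg for leg in legs if leg['state'] == False]
    final = [leg for leg in legs if leg['state'] == True]
    print("%i initial and %i final state legs" % (len(initial), len(final)))
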
4996=== modified file 'madgraph/iolibs/export_v4.py'
4997--- madgraph/iolibs/export_v4.py 2010-02-12 12:53:59 +0000
4998+++ madgraph/iolibs/export_v4.py 2010-05-07 07:20:46 +0000
4999@@ -15,17 +15,18 @@
5000
The diff has been truncated for viewing.
