Merge lp:~madnklo/mg5amcnlo/MadNkLO_ISR into lp:~madnklo/mg5amcnlo/MadNkLO

Proposed by Valentin Hirschi on 2018-08-09
Status: Merged
Merged at revision: 551
Proposed branch: lp:~madnklo/mg5amcnlo/MadNkLO_ISR
Merge into: lp:~madnklo/mg5amcnlo/MadNkLO
Diff against target: 20935 lines (+13076/-2772)
50 files modified
Template/ME7/Cards/run_card.dat (+2/-0)
madgraph/core/accessors.py (+67/-36)
madgraph/core/base_objects.py (+254/-17)
madgraph/core/contributions.py (+977/-353)
madgraph/core/subtraction.py (+865/-136)
madgraph/integrator/ME7_integrands.py (+2608/-719)
madgraph/integrator/cuba_interface.py (+432/-0)
madgraph/integrator/mappings.py (+265/-31)
madgraph/integrator/phase_space_generators.py (+129/-13)
madgraph/integrator/phase_space_generators.pyx (+2/-2)
madgraph/integrator/pyCubaIntegrator.py (+444/-91)
madgraph/integrator/vectors.py (+88/-13)
madgraph/integrator/vegas3_integrator.py (+18/-16)
madgraph/integrator/walkers.py (+43/-22)
madgraph/interface/ME7_interface.py (+321/-149)
madgraph/interface/loop_interface.py (+23/-10)
madgraph/interface/madgraph_interface.py (+492/-160)
madgraph/iolibs/export_ME7.py (+336/-29)
madgraph/iolibs/template_files/loop_optimized/loop_matrix_standalone.inc (+3/-3)
madgraph/iolibs/template_files/loop_optimized/user_access_subroutines.inc (+108/-10)
madgraph/iolibs/template_files/matrix_standalone_splitOrders_v4.inc (+25/-9)
madgraph/iolibs/template_files/matrix_standalone_v4.inc (+23/-9)
madgraph/iolibs/template_files/subtraction/QCD_local_currents.py (+297/-21)
madgraph/iolibs/template_files/subtraction/cataniseymour/NLO/local_currents.py (+12/-11)
madgraph/iolibs/template_files/subtraction/cataniseymour/NNLO/local_currents.py (+395/-353)
madgraph/iolibs/template_files/subtraction/colorful/NLO/beam_factorization.py (+619/-0)
madgraph/iolibs/template_files/subtraction/colorful/NLO/integrated_currents.py (+11/-21)
madgraph/iolibs/template_files/subtraction/colorful/NLO/local_currents.py (+34/-10)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/beam_factorization.py (+639/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/ginacg.cpp (+22/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/ginacgcompile.sh (+3/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/hardcoded_integrated_currents.py (+195/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/integrated_currents.py (+768/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NLO/local_currents.py (+914/-0)
madgraph/iolibs/template_files/subtraction/colorful_pp/NNLO/local_currents.py (+869/-0)
madgraph/iolibs/template_files/subtraction/subtraction_current_implementations_utils.py (+129/-33)
madgraph/loop/loop_diagram_generation.py (+11/-3)
madgraph/loop/loop_exporters.py (+8/-0)
madgraph/various/banner.py (+1/-0)
madgraph/various/misc.py (+21/-1)
tests/acceptance_tests/test_currents.py (+7/-7)
tests/input_files/IOTestsComparison/ME7ContributionTest/current_generation_and_access/Counterterms_V.txt (+90/-78)
tests/input_files/IOTestsComparison/ME7ContributionTest/current_generation_and_access/Currents_Integ.txt (+50/-31)
tests/input_files/IOTestsComparison/ME7ContributionTest/current_generation_and_access/Currents_Local.txt (+30/-18)
tests/parallel_tests/compare_ME7_with_ME6.py (+1/-1)
tests/time_db (+348/-323)
tests/unit_tests/core/test_contributions.py (+16/-5)
tests/unit_tests/core/test_subtraction.py (+4/-4)
tests/unit_tests/integrator/test_mappings.py (+33/-0)
tests/unit_tests/integrator/test_phase_space_generators.py (+24/-24)
To merge this branch: bzr merge lp:~madnklo/mg5amcnlo/MadNkLO_ISR
Reviewer Review Type Date Requested Status
Nicolas Deutschmann 2018-08-09 Pending
Simone Lionetti 2018-08-09 Pending
Review via email: mp+352812@code.launchpad.net

Commit message

At long last, here is the alpha version of the ISR.
There are still quite a few checks to be done, and the option "group_subprocesses" still needs testing.
That being said, everything is in place and one can already investigate NLO processes.
NNLO processes and beyond should already be structurally OK, but that needs further testing too.

In any case, for now the following can be tested/run:

MG5_aMC>set subtraction_currents_scheme colorful
MG5_aMC>import model loop_sm
MG5_aMC>set group_subprocesses False
MG5_aMC>define p = u d
MG5_aMC>define pbar = u~ d~
MG5_aMC>generate p pbar > z QED=1 --NLO --beam_types=proton@(1,2,3,4,-1,-2,-3,-4,21)
MG5_aMC>display contributions
MG5_aMC>output TEST_ISR_ddx_z
MG5_aMC>display contributions --format=4
MG5_aMC>display contributions --format=2
MG5_aMC>launch TEST_ISR_ddx_z -i
MG5_aMC>display integrands --format=4
ME7@NLO::TEST_ISR_ddx_z > test_IR_poles --integrands=BF1 --counterterms=all --set_PDFs_to_unity=False
ME7@NLO::TEST_ISR_ddx_z > test_IR_limits --limits=F(1) --integrands=BF1 --counterterms=all --set_PDFs_to_unity=False

You can also already integrate the B, V, BF1 and BF2 contributions, while for the R one we're still missing appropriate mappings for 2>1.

I'll update this post with progress on the remaining issues, but ISR is structurally done now!

Valentin Hirschi (valentin-hirschi) wrote :

I should add that the process definition is overly complicated purely to illustrate the new option '--beam_types'.
But it's perfectly possible to simply type:

MG5_aMC>generate p p > z QED=1 --NLO

and things will be set automatically.

lp:~madnklo/mg5amcnlo/MadNkLO_ISR updated on 2018-08-30
531. By Valentin Hirschi on 2018-08-20

1. Fix to integrands
2. Skeleton for p p > X soft mapping and walker

532. By Nicolas Deutschmann on 2018-08-27

Implemented the Soft vs Initial-state mapping to higher/lower multiplicity. All methods raise a NotImplementedError for now as they are *not* tested yet.

533. By Nicolas Deutschmann on 2018-08-27

Soft vs Initial mapping corrected to pass the invertibility unit test. Commutativity and associativity tests removed. Still raises a NotImplementedError until validated with test_IR_limits.

534. By Nicolas Deutschmann on 2018-08-28

Debugged the evaluation of the Initial-final soft-collinears. pp->Z soft limit validated. pp->Z collinear has issues with cuts.

535. By Valentin Hirschi on 2018-08-30

1. Fixed the definition of the mapping_variable `Q` in InitialLorentzOneMapping. @Simone: CHECK.
2. Fixed the soft-initial-collinear current so as to correctly define pC as 'pC = pC + pS' and use the `is_cut` function.
3. Improved the display of plots from test_IR_limits: now a grid of subplots on a single page as opposed to multiple windows.
4. Tested that all limits and poles work on 'u u~ > z g'.

536. By Valentin Hirschi on 2018-08-30

1. Merged with the latest version of the trunk.

Valentin Hirschi (valentin-hirschi) wrote :

New contributions dubbed 'beam-factorization' contributions (BF) are introduced, together with their corresponding PDF counterterms.
These are generated automatically by inspecting existing contributions at lower orders, so that this part of the code is, in principle, already N^kLO-ready.

The workflow of the __call__ and sigma functions of the ME7Integrands has been significantly modified and improved in order to streamline the processing of the flavour mapping and back-propagation in ISR counterterms.
This improvement rests on a new concept: the 'sigma' function no longer returns a single weight but instead a collection of 'ME7Event' objects, each containing a list of weights for all the flavour structures it can adopt.

The application of the cuts and observables is then much more elegant, as they are applied directly to the overall list of events generated for one specific set of input variables in the unit hypercube.
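The event-based flow described above can be sketched schematically as follows. This is a heavily simplified illustration under assumed names: only 'ME7Event' and 'sigma' come from the text; the attributes, methods, and helper signatures are hypothetical, not the actual ME7 implementation.

```python
# Sketch: one kinematic configuration carrying a weight per flavor structure,
# with cuts applied to the whole collection of events at once.
# All names besides ME7Event and sigma are illustrative placeholders.

class ME7Event(object):
    """One phase-space configuration with weights for each flavor structure."""

    def __init__(self, ps_point, flavor_weights):
        self.ps_point = ps_point  # list of four-momenta
        # {(initial_pdgs, final_pdgs): weight}
        self.flavor_weights = dict(flavor_weights)

    def apply_cut(self, passes_cut):
        """Return None if the event is cut, otherwise the event itself."""
        return self if passes_cut(self.ps_point) else None

    def total_weight(self):
        return sum(self.flavor_weights.values())

def sigma(random_variables, generate_events, passes_cut):
    """Generate all events for one point in the unit hypercube, apply the
    cut to the whole collection, and return the total surviving weight."""
    events = generate_events(random_variables)
    surviving = [e for e in events if e.apply_cut(passes_cut) is not None]
    return sum(e.total_weight() for e in surviving)
```

The key design point is that 'sigma' returns (here: reduces over) a list of events rather than a single weight, so cuts and observables act uniformly on every event produced from one sample point.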

Given this large structural change, it is desirable to merge this branch into the MadNkLO trunk sooner rather than later. It is however not complete, as it must still be tested beyond 2>1 and at NNLO, and the finite part of the integrated soft and soft-collinear counterterms is still missing.

This is however mostly irrelevant w.r.t. this merge, because the structural changes that were necessary for ISR are now established.

lp:~madnklo/mg5amcnlo/MadNkLO_ISR updated on 2019-01-14
537. By Valentin Hirschi on 2018-08-31

1. Improved and fixed the interface of MadNkLO to Cuba@Vegas, which is now a viable alternative to Vegas3.
2. Cuba@Vegas supports the same options as Vegas (--save_grids, --load_grids, etc...) plus some others.
3. Integrator options should now be supplied using the new command 'set_integrator_options <integrator_name> options'
4. When considering observables, the status for Cuba@Vegas is the same as for Vegas3: OK during refine only and without parallelization (which must therefore be done by hand)

538. By Valentin Hirschi on 2018-09-03

1. Improved pyCuba@Vegas to better handle saving and loading grids.
2. Improved handling of interrupts during parallel runs (however not satisfactory yet).

539. By Valentin Hirschi on 2018-09-03

1. Slight improvement of the KeyboardInterrupt handling.
2. Added the last integration result to the pyCuba@Vegas grid output file, so that it can easily be recovered with max_eval==max_eval_survey==0.

540. By Valentin Hirschi on 2018-09-03

1. Fixed a recently introduced bug which prevented the correct generation of chsi.

541. By Valentin Hirschi on 2018-09-06

1. First attempt at implementing BS contributions

542. By Valentin Hirschi on 2018-09-07

1. Implemented the BS contribution which is correctly generated when using the colorful scheme with a mapping recoiling soft against initial states.
2. Completed the implementation so that the BS integrand and its counterterms are correctly processed. This is running but not tested yet.
3. TODO: Adapt the phase-space generator so as to generate correlated chsi values.

543. By Valentin Hirschi on 2018-09-08

1. Finished the implementation of the ME7Integrand for the BS contribution and tested that test_IR_limits works for it.

544. By Valentin Hirschi on 2018-09-09

1. Finalized the implementation of the ME7_integrand_BS, including test_IR_poles and the integration.
2. Minor fixes for pyCuba@Vegas; still not perfect and has some quirks.

545. By Valentin Hirschi on 2018-09-09

1. Fixed a typo in the last push

546. By Valentin Hirschi on 2018-09-09

1. Fixed a typo in the last push and solved a problem in the integration of the end-point beam-soft integrated CT.

547. By Valentin Hirschi on 2018-09-10

1. Fix on how to determine correlated beam currents for IntegratedBeamCurrents

548. By Valentin Hirschi on 2018-09-10

1. Corrected the spelling chsi->xi

549. By Nicolas Deutschmann on 2018-09-10

Added missing argument (walker) in definitions of scale_configuration

550. By Valentin Hirschi on 2018-09-10

1. Changed '\xi' to 'xi', as the backslash was sometimes interpreted as an escape character.

551. By Valentin Hirschi on 2018-09-11

1. Finally, flavor mapping and mirroring are now supported.
2. The problem of overcounting integrated ISR has been addressed.
3. In theory, all should now be working and 'p p > z QED=1 --NLO' is tested
   to be integrating. Next step is validation.

552. By Valentin Hirschi on 2018-09-12

1. Fixed the passing of options to the Vegas3 integrator

553. By Nicolas Deutschmann on 2018-09-13

Changed the output format to JSON; the default is now to append results to the existing file; new launch option --run_name.

554. By Valentin Hirschi on 2018-09-13

Fixed issues that prevented p p > z j from running fine. Now it does.

555. By Valentin Hirschi on 2018-09-13

Fixed some issues to generate NNLO ISR, e.g. p p > z --NNLO with option '--ignore_integrated_counterterms=RR' for the output

556. By Valentin Hirschi on 2018-09-14

Fixed severe bug in the subtraction of EpsilonExpansion

557. By Nicolas Deutschmann on 2018-09-17

Added the integrated soft and soft-collinear counterterms computed using Vittorio's Soft vs IS mapping.

558. By Nicolas Deutschmann on 2018-09-17

Corrected the scaling of the matrix element when using Vittorio's Soft vs IS mapping

559. By Nicolas Deutschmann on 2018-09-18

Corrected a typo in colorful.NLO.beam_factorization: need to use filtered_flavor_matrix

560. By Nicolas Deutschmann on 2018-09-18

Changes to ME7Event.convolve_flavor:
- the doc now reflects that a flavor matrix's second entry (end_flavors) is actually a tuple, to allow multiple flavors at once
- unspecified flavor matrix entries are set to 0, not 1

Added ME7Event.is_empty: checks whether the convolution was with an empty matrix.

In ME7Integrand.convolve_event_with_beam_factorization_currents:
- after the convolution, check whether it has made the event empty; if so, return None as the event, which causes it to be ignored.

In test_IR_poles_for_process: weights are now summed, not events. This avoids issues with mapped events that have different kinematics (swapped leg orders), which cannot be summed.
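The convolution semantics described in this commit (tuple-valued end_flavors, unspecified entries counting as 0, and an emptiness check) can be sketched as follows. The function names mirror the ones mentioned above, but the signatures and data layout are illustrative guesses, not the actual ME7 API.

```python
# Sketch of the flavor-convolution rules from this commit; the flavor matrix
# maps a start flavor to {tuple_of_end_flavors: factor}, and entries not
# present in the matrix contribute 0 (not 1).

def convolve_flavor(flavor_weights, flavor_matrix):
    """Convolve {flavor: weight} with {start: {(end_flavors): factor}}."""
    new_weights = {}
    for start, weight in flavor_weights.items():
        # Unspecified entries are treated as 0, hence the empty default.
        for end_flavors, factor in flavor_matrix.get(start, {}).items():
            for end in end_flavors:  # second entry is a *tuple* of end flavors
                new_weights[end] = new_weights.get(end, 0.0) + weight * factor
    return new_weights

def is_empty(flavor_weights):
    """True if the convolution left no non-zero weight (event to be ignored)."""
    return all(w == 0.0 for w in flavor_weights.values())
```

With this convention, a convolution against an empty (or non-overlapping) matrix yields an empty weight map, which the caller can detect with is_empty and drop, mirroring the return-None behaviour described above.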

561. By Nicolas Deutschmann on 2018-09-18

Refactored flavor masking into a separate function and corrected it (it did not account for end_flavors being tuples).

562. By Valentin Hirschi on 2018-09-21

1. BUG FIX: Real-emission local counterterms were not convolved with the right PDFs.
2. BUG FIX: Sigma failed to correctly add the counterterms to the collection of events generated, because of an indentation error.
3. Removed the ugly loop-induced-specific class and refactored MadNkLO so that it handles loop-induced processes exactly like other processes.
4. Added options at generation stage to overwrite the process definition used for any contribution.
5. Improved MadLoop standalone loop-induced output so as to add the missing F2PY function hooks that the MadLoopAccessors needed.
6. Corrected how the flag 'requires_mirrorring' is set for integrated counterterms.
7. Fixed how the mirrored events are generated, so as to really swap all attributes relevant to both initial states.
8. Improved the printouts of test_IR_poles.
9. Fixed when the cuts (higher and lower) are called in the test_* functions depending on the options specified by the user.
Conclusion: This is the first revision that has a chance of yielding correct ISR NLO QCD cross-sections.

563. By Valentin Hirschi on 2018-09-21

1. Small fix in Vegas3 integrator options

564. By Valentin Hirschi on 2018-09-23

1. Fixed how the mirrored events of ISR collinears are generated, as they need boosting since the mappings send them off the c.o.m. frame.

565. By Valentin Hirschi on 2018-09-24

1. Fixed how the flavour map is applied to the integrated collinear counterterms and [F] currents.

566. By Nicolas Deutschmann on 2018-09-24

Added dispatch in softs recoiling against IS according to dipole type (II, IF, FF). Corrected the II integrated soft. Added a sanity check to avoid handling massive emitters.

567. By Nicolas Deutschmann on 2018-09-24

Corrected the Bjorken x bound to avoid crashing when doing e+ e- annihilation (x<=1 instead of x<1).

568. By Valentin Hirschi on 2018-09-24

1. Improved the printout of test_IR_poles
2. FIXED: the PS point specified when creating counter-events did not have the right kinematics.
3. FIXED: the PS point of the counter-event *must* be boosted back to the c.o.m. frame for distributions to be correct
   (this is because the xi rescalings must *not* apply with the change of variables chosen by MadNkLO to generate them)

569. By Nicolas Deutschmann on 2018-09-25

Corrected the printout when creating a new folder for results and improved comments on the result folder appending/creating logic

570. By Valentin Hirschi on 2018-09-28

1. Fixed support for the option --loop_filter
2. Fixed LO beam type guessing
3. Fixed implementation of cuts at LO
4. Fixed return 0. -> continue in __call__ of the integrand base class

571. By Valentin Hirschi on 2018-10-01

1. Fixed how to implement user-based cuts.

572. By Valentin Hirschi on 2018-10-01

1. Fixed a bug in the function that constructs the flavor cuts

573. By Valentin Hirschi on 2018-10-01

1. Fix bug in handling process options

574. By Valentin Hirschi on 2018-10-01

1. Trivial bug fixed in flavor cut building function

575. By Valentin Hirschi on 2018-10-01

1. Correctly apply Gabor's cut to initial state local collinear CT.
2. Compute Q directly in evaluate_(integrated)_counterterm and pass it to currents (possibly overwriting what was computed in the mappings).

576. By Valentin Hirschi on 2018-10-02

1. Also passed Q to the F-type currents, which is structurally necessary even though they do not depend on it.

577. By Valentin Hirschi on 2018-10-02

1. Fixed the sum of initial momentum 'Q' passed to the PDF counterterms and integrated initial-state collinear counterterms so that they do not include the xi rescalings.

578. By Valentin Hirschi on 2018-10-02

1. Fixed the '+' distributions in the integrated ISR collinear CT so that they also apply to the 'Q^2/(xi1*xi2)' prefactor.
   This brings MadNkLO into full agreement with MadFKS for the process 'p p > z'! :)
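The point of the fix can be illustrated with the standard plus-distribution identity: int_0^1 dxi [f(xi)]_+ t(xi) = int_0^1 dxi f(xi) (t(xi) - t(1)). Whatever multiplies the distribution, including a xi-dependent prefactor such as Q^2/(xi1*xi2), belongs to the test function t and must therefore also enter the endpoint subtraction t(1). A minimal numerical sketch (illustrative only, not the MadNkLO code):

```python
def plus_distribution_integral(f, t, n=200000):
    """Evaluate int_0^1 dxi [f(xi)]_+ t(xi) = int_0^1 dxi f(xi)*(t(xi) - t(1))
    with a midpoint rule. The *whole* test function t, including any
    xi-dependent prefactor, enters the endpoint subtraction t(1)."""
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) / n  # midpoints avoid the xi -> 1 endpoint
        total += f(xi) * (t(xi) - t(1.0)) / n
    return total
```

For instance, with f(xi) = 1/(1-xi) and t(xi) = xi the subtracted integrand is identically -1, so the integral is -1 and stays finite; dropping part of the prefactor from the t(1) term would leave an uncancelled endpoint divergence.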

579. By Valentin Hirschi on 2018-10-03

1. Fixed last implementation of the log(Q^2)_plus so that it does not crash on endpoint evaluation.

580. By Valentin Hirschi on 2018-10-03

Fixed a typo in the last push

581. By Valentin Hirschi on 2018-10-03

1. Fixed the definition of NF in the beam factorization currents.

582. By Nicolas Deutschmann on 2018-10-05

Added a new current scheme to separate colorful and colorful with Vittorio's mapping

583. By Valentin Hirschi on 2018-10-05

1. Improved printouts of the counterterm structure of the events generated, also in test_IR_poles.

584. By Valentin Hirschi on 2018-10-05

1. Safeguarded the normalization factor in test_IR_poles.

585. By Nicolas Deutschmann on 2018-10-05

Inserted forgotten multiplicity factor in integrated counterterms

586. By Nicolas Deutschmann on 2018-10-05

Added the possibility to have 1, 2 or 3 quarks in protons.
A warning is displayed when using a single quark: debug only.

587. By Nicolas Deutschmann on 2018-10-05

Bug correction: identifying colorful_pp as a scheme using BS contributions and correcting selection of initial-initial eikonals

588. By Simone Lionetti <email address hidden> on 2018-10-05

Added safeguard for loading libcuba.so

589. By Valentin Hirschi on 2018-10-05

1. Fixed obvious mistake in the recursion setup for function get_reduced_flavor of the CountertermNode.

590. By Valentin Hirschi on 2018-10-07

1. Fixed issue with mapped processes having non-consecutive leg numbers.
2. Ensured that the beam factorization treatment is completely skipped when no contribution has an active beam.

591. By Nicolas Deutschmann on 2018-10-11

ProcessKey now always sorts final-state PDGs. The sort_PDGs option only applies to the initial state.
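The sorting convention described here can be sketched with a hypothetical helper (not the actual ProcessKey code; PDG codes shown in the test are the usual ones, e.g. 21 for the gluon and 23 for the Z):

```python
def make_pdg_key(initial_pdgs, final_pdgs, sort_pdgs=True):
    """Build a hashable PDG key for process lookup.
    Final-state PDGs are *always* sorted, so that processes differing only
    by final-state leg order map to the same key; the sort_pdgs flag only
    controls whether the initial state is sorted as well."""
    initial = tuple(sorted(initial_pdgs)) if sort_pdgs else tuple(initial_pdgs)
    return (initial, tuple(sorted(final_pdgs)))
```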

592. By Simone Lionetti <email address hidden> on 2018-10-11

Improved error message of get_parent

593. By Simone Lionetti <email address hidden> on 2018-10-11

Partially fixed counterterm handling in test_IR_limits

594. By Simone Lionetti <email address hidden> on 2018-10-12

Improved handling of flavor assignments and cut events in test_IR_limits

595. By Simone Lionetti <email address hidden> on 2018-10-12

Commented out unused momenta from CataniSeymour currents that could cause failures.

596. By Simone Lionetti <email address hidden> on 2018-11-26

Removed a printout of the soft PS point during mapping

597. By Simone Lionetti <email address hidden> on 2018-11-26

Implemented currents for the subtraction of 'e+ e- > u d u~ d~'
with distributed softs.

598. By Simone Lionetti <email address hidden> on 2018-11-26

Improvements to ME7 while coding qqQQ subtractions

These include:
- Saving of plots in grid format
- Print statements for debugging updated (and commented)
- ME-def line is not plotted by default (meaningless with reshuffled counterterms)
- Formatting and cleanup

599. By Simone Lionetti <email address hidden> on 2018-11-27

Fixed a bug in the combination of spin-correlations, some formatting

600. By Valentin Hirschi on 2018-11-28

1. Fixed spin-correlated MEs

601. By Valentin Hirschi on 2018-11-29

1. Forced the fastjet clustering algorithm to be genkt

602. By Valentin Hirschi on 2018-11-30

1. Ironed out some quirks in test_IR_limits

603. By Valentin Hirschi on 2018-11-30

1. Forced the absence of a factor in the initial collinear currents of colorful.

604. By Valentin Hirschi on 2018-11-30

1. Force the massive Q ref. collinear variables for colorful.

605. By Valentin Hirschi on 2018-11-30

1. Added a new attribute FROZEN_DIMENSIONS to the ME7Integrand base class that allows
   one to freeze integration variables to desired values so as to artificially lower the
   dimensionality of the phase-space integral to perform.
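The mechanism can be sketched as follows; only the attribute name FROZEN_DIMENSIONS comes from the commit, while the wrapper class and its methods are hypothetical:

```python
class IntegrandWithFrozenDims(object):
    """Wrap an n-dimensional integrand, freezing selected unit-hypercube
    variables to fixed values so the integrator samples only the rest,
    artificially lowering the dimensionality of the integral."""

    # e.g. {2: 0.35} freezes the third input variable to 0.35
    FROZEN_DIMENSIONS = {}

    def __init__(self, full_integrand, n_dims):
        self.full_integrand = full_integrand
        self.n_dims = n_dims

    def __call__(self, sampled_vars):
        """Reassemble the full variable list, inserting frozen values at
        their dimensions and the integrator's samples everywhere else."""
        full_vars, it = [], iter(sampled_vars)
        for d in range(self.n_dims):
            if d in self.FROZEN_DIMENSIONS:
                full_vars.append(self.FROZEN_DIMENSIONS[d])
            else:
                full_vars.append(next(it))
        return self.full_integrand(full_vars)
```

The integrator then only sees (n_dims - len(FROZEN_DIMENSIONS)) variables, which is useful for debugging one integration variable at a time.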

606. By Nicolas Deutschmann <email address hidden> on 2018-12-07

removed colorful cut and factors in local soft and SC counterterms

607. By Nicolas Deutschmann <email address hidden> on 2018-12-07

Temporary xi^2 correction factor in the colorful_pp BS bulk to correct the flux factor. A more permanent change is warranted.

608. By Nicolas Deutschmann <email address hidden> on 2018-12-07

Corrected the Q value in the colorful_pp BS counterterm.

609. By Nicolas Deutschmann <email address hidden> on 2018-12-10

Now using the right prefactor for colorful_pp beam_soft

610. By Valentin Hirschi on 2018-12-10

1. Reduced the slowdown from the dot functions in the Vector classes.

611. By Valentin Hirschi on 2019-01-07

1. Added a minimal fix for not applying the Bjorken x's xi boundaries on the counterterm part of the beam convolution CTs. Will need to be generalised.

612. By Valentin Hirschi on 2019-01-14

1. Cleanly implemented the upper bound based on xi for the Bjorken x's in the convolution kernels that are not of type 'counterterms'.
2. Fixed the multiplicity of integrated ISR CT when doing flavor grouping.

613. By Valentin Hirschi on 2019-01-14

1. Merged with latest version of the MadNkLO trunk.

614. By Valentin Hirschi on 2019-01-14

1. Fixed all MadNkLO tests

Preview Diff

1=== modified file 'Template/ME7/Cards/run_card.dat'
2--- Template/ME7/Cards/run_card.dat 2018-07-26 20:20:01 +0000
3+++ Template/ME7/Cards/run_card.dat 2019-01-14 20:53:09 +0000
4@@ -68,6 +68,8 @@
5 #*********************************************************************
6 # Standard Cuts *
7 #*********************************************************************
8+ %(flavor_cuts)s = flavor_cuts ! function string, or keyword: 'hardcoded'
9+#*********************************************************************
10 # Minimum and maximum pt's (for max, -1 means no cut) *
11 #*********************************************************************
12 %(ptj)s = ptj ! minimum pt for the jets
13
14=== modified file 'madgraph/core/accessors.py'
15--- madgraph/core/accessors.py 2018-05-25 08:29:19 +0000
16+++ madgraph/core/accessors.py 2019-01-14 20:53:09 +0000
17@@ -45,6 +45,20 @@
18
19 cache_active = False
20
21+ #######################################################################################
22+ # WARNING:
23+ # The squared orders and coupling orders (those that correspond to the perturbed couplings
24+ # of the reduced processes of counterterms is not correctly set for now. It is typically
25+ # only relevant when performing mixed QCD-QED corrections which MadNkLO does not support
26+ # yet. So for now the option below disables the order_matching in the process key for
27+ # the orders specified in the variable below. Eventually however, all orders should be
28+ # matched correctly, except 'WEIGHTED' maybe, which is typically only used for generation
29+ # performance optimizations.
30+ _DISABLED_ORDERS_FOR_PROCESS_MATCHING = ['QCD','QED','WEIGHTED']
31+ #######################################################################################
32+ # Disable process coupling orders-matching
33+
34+
35 def get_key_for_cache(self,
36 process, PDGs, sort_PDGs, vetoed_attributes, allowed_attributes, opts):
37 """ Generates the look-up key for the dynamically created dictionary 'process_key_cache'
38@@ -69,7 +83,7 @@
39 # intputs from this __init__
40 sort_PDGs = True,
41 # Exclude certain process attributes when creating the key for a certain process.
42- vetoed_attributes = ['model','legs','uid','has_mirror_process','legs_with_decays'],
43+ vetoed_attributes = ['model','legs','uid','has_mirror_process','legs_with_decays','beam_factorization'],
44 # Specify only selected attributes to end up in the ProcessKey.
45 # If None, this filter is deactivated.
46 allowed_attributes = None,
47@@ -114,7 +128,10 @@
48 " via the options 'PDGs', 'legs' or 'process' (with precedence in this order).")
49
50 if sort_PDGs:
51- self.key_dict['PDGs'] = ( tuple(sorted(self.key_dict['PDGs'][0])), tuple(sorted(self.key_dict['PDGs'][1])) )
52+ self.key_dict['PDGs'] = ( tuple(sorted(self.key_dict['PDGs'][0])), tuple(sorted(self.key_dict['PDGs'][1])))
53+ else: # The final state still needs to be sorted
54+ self.key_dict['PDGs'] = ( self.key_dict['PDGs'][0], tuple(sorted(self.key_dict['PDGs'][1])))
55+
56
57 # Now make sure to instantiate a default process if the user didn't select one
58 if not process:
59@@ -175,12 +192,11 @@
60 # overlap and cannot be constructed unambiguously in reduced processes for counterterms
61 elif proc_attr in ['orders']:
62 self.key_dict[proc_attr] = hash_dict(dict((k, v) for k, v in value.items()
63- if k!='WEIGHTED'), proc_attr)
64+ if k not in self._DISABLED_ORDERS_FOR_PROCESS_MATCHING), proc_attr)
65 elif proc_attr in ['sqorders_types', 'squared_orders']:
66 self.key_dict[proc_attr] = hash_dict(dict((k, v) for k, v in value.items() if \
67- ( k!='WEIGHTED' and ( isinstance(process, subtraction.Current) or
68- (k not in process['sqorders_types']) or
69- process['sqorders_types'][k]=='==') )
70+ ( k not in self._DISABLED_ORDERS_FOR_PROCESS_MATCHING and ( isinstance(process, subtraction.Current) or
71+ (k not in process['sqorders_types']) or process['sqorders_types'][k]=='==') )
72 ), proc_attr)
73
74 elif proc_attr == 'parent_subtraction_leg':
75@@ -192,7 +208,7 @@
76 self.key_dict[proc_attr] = hash_list(sorted(value), proc_attr)
77
78 # Now generically treat other attributes
79- elif isinstance(value, (int, str, tuple, bool)):
80+ elif value is None or isinstance(value, (int, str, tuple, bool)):
81 self.key_dict[proc_attr] = value
82 elif isinstance(value, list):
83 self.key_dict[proc_attr] = hash_list(value, proc_attr)
84@@ -884,7 +900,7 @@
85 def call_subtraction_current(self, *args, **opts):
86 """ Wrapper around the actual call of the subtraction currents, so as to be
87 able to easily time it with a profiler."""
88-
89+
90 return self.subtraction_current_instance.evaluate_subtraction_current(*args, **opts)
91
92 def __call__(self, current, **opts):
93@@ -937,7 +953,7 @@
94 # Update the cache with the new results produced
95 if not cache_key is None:
96 all_evaluations = self.cache.add_result(all_evaluations, **cache_key)
97-
98+
99 # Return both the specific evaluation asked for
100 # and all results available for this cache_key
101 return self.evaluation_class(evaluation_asked_for), self.result_class(all_evaluations)
102@@ -979,14 +995,10 @@
103
104 if cls is F2PYMEAccessor:
105 target_class = None
106- n_loops = process.get('n_loops')
107- if process.get('perturbation_couplings') and n_loops==1:
108+ if process.get('perturbation_couplings') and process.get('NLO_mode') in ['virt','sqrvirt']:
109 target_class = F2PYMEAccessorMadLoop
110- elif n_loops==0:
111+ else:
112 target_class = F2PYMEAccessor
113- else:
114- raise MadGraph5Error("Could not determine the type of F2PYMEAccessor suited for "+
115- " %s with %d loops."%(process.nice_string(), n_loops))
116
117 return super(F2PYMEAccessor, cls).__new__(target_class,
118 process, f2py_module_path, slha_card_path, **opts)
119@@ -1109,7 +1121,7 @@
120 self.f2py_module = reload(self.f2py_module)
121
122 # Now gather various properties about the Matrix Elements
123- # Get all helicities orderin
124+ # Get all helicities ordering
125 self.helicity_configurations = dict((tuple(hels),i+1) for i, hels in enumerate(
126 self.get_function('get_helicity_definitions')() ))
127 # Reversed map
128@@ -1498,25 +1510,30 @@
129
130 cache_active = False
131
132- def __init__(self, process, f2py_module_path, slha_card_path, madloop_resources_path=None, **opts):
133+ def __init__(self, process, f2py_module_path, slha_card_path,
134+ madloop_resources_path=None, root_path='', **opts):
135 """ Use the MadLoop resources path for MadLoop outputs """
136
137- super(F2PYMEAccessorMadLoop, self).__init__(process,
138- f2py_module_path, slha_card_path, **opts)
139-
140 # Add the additional inputs to the back-up to be used later for the dump
141- # (if not already added by the mother)
142- if not 'madloop_resources_path' in self.initialization_inputs['opts']:
143- self.initialization_inputs['opts']['madloop_resources_path'] = madloop_resources_path
144-
145+ # (if not already present). It is important that this path is set before the
146+ # call to the mother's __init__ implementation because this path must have been
147+ # already set at the time synchronize() is called.
148 if madloop_resources_path:
149 # Save the location of MadLoop resources directory path
150 if not os.path.isabs(madloop_resources_path):
151 raise MadGraph5Error("The initialization of F2PYMEAccessorMadLoop necessitates an "+
152 "absolute path for madloop_resources_path.")
153- self.madloop_resources_path = os.path.relpath(madloop_resources_path, self.root_path)
154+ self.madloop_resources_path = os.path.relpath(madloop_resources_path, root_path)
155 else:
156 self.madloop_resources_path = None
157+
158+ super(F2PYMEAccessorMadLoop, self).__init__(process,
159+ f2py_module_path, slha_card_path, root_path=root_path, **opts)
160+
161+ # Add the input madloop_resouces_path to the initialization inputs necessary for the
162+ # dump (if not already present).
163+ if not 'madloop_resources_path' in self.initialization_inputs['opts']:
164+ self.initialization_inputs['opts']['madloop_resources_path'] = madloop_resources_path
165
166 if self.f2py_module is None:
167 # We are at the generation stage and the f2py module has never been compiled yet.
168@@ -1528,6 +1545,9 @@
169 self.loop_squared_orders = {None: 0}
170 if self.has_function('get_nsqso_loop'):
171 n_squared_orders = self.get_function('get_nsqso_loop')()
172+ # For loop-induced processes, the split order names would not have been set yet, so it is
173+ # always OK to re-import it here.
174+ self.split_order_names = [''.join(name).strip() for name in self.get_function('get_split_order_names')()]
175 for i_sq_order in range(1,n_squared_orders+1):
176 key = tuple(sorted([(self.split_order_names[i], value) for i, value in enumerate(
177 self.get_function('ml5get_squared_orders_for_soindex')(i_sq_order) )]))
178@@ -1553,6 +1573,11 @@
179 """ Synchronizes this accessor with the possibly updated value of parameter cards and ME source code.
180 Must be defined by daughter classes."""
181
182+ # Before the whole synchronization make sure that the current MadLoop resource path
183+ # is correctly propagated to fortran.
184+ if self.madloop_resources_path:
185+ self.f2py_module.initialise_madloop_path(pjoin(self.root_path, self.madloop_resources_path))
186+
187 ret_value = super(F2PYMEAccessorMadLoop, self).synchronize(ME7_options, from_init=from_init,**opts)
188
189 # Never do anything when called from the __init__ function
190@@ -1665,8 +1690,6 @@
191 if not self.module_initialized:
192 # These functions are not prefixed, so we should not ask to get them via the accessor self.get_function
193 self.f2py_module.initialise(pjoin(self.root_path, self.slha_card_path))
194- if self.madloop_resources_path:
195- self.f2py_module.initialise_madloop_path(pjoin(self.root_path, self.madloop_resources_path))
196 self.module_initialized = True
197
198
199@@ -1684,6 +1707,11 @@
200 'estimated_accuracies': estimated_accuracies,
201 'return_code': return_code}
202
203+ if __debug__ and return_code%100 ==4:
204+ logger.debug("MadLoop library '%s@%s' could not reach target accuracy %.2e for PS_point:\n%s"%(
205+ self.f2py_module_path[1], self.f2py_module_path[0],
206+ required_accuracy, str(PS_point) ))
207+ logger.debug("Accuracies obtained: %s"%str(estimated_accuracies))
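As used in the hunk above, a MadLoop return code whose value modulo 100 equals 4 signals that the requested target accuracy could not be reached. A minimal sketch of that check (the helper name `accuracy_not_reached` is ours, not part of MadLoop's API):

```python
def accuracy_not_reached(return_code):
    # Mirrors the check in the hunk above: a return code equal to 4
    # modulo 100 flags that the target accuracy was not reached.
    return return_code % 100 == 4

# Example: 104 flags a missed target accuracy, 100 does not.
```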
208 # Gather additional newly generated output_data to be returned and placed in the cache.
209 output_datas = self.gather_output_datas(main_output, new_opts)
210
211@@ -1936,18 +1964,21 @@
212
213 return new_color_correlations
214
215- def format_spin_correlation(self, process, spin_correlations):
216- """ Synchronize the numbers in the spin correlation specifier to the leg_number in the process."""
217+ def format_spin_correlation(self, process, spin_correlation):
218+ """Synchronize the leg numbers in the spin correlation specifier
219+ to the leg numbers in the _reduced_ process.
220+ """
221
222 leg_number_to_pos_dict = {}
223 for leg_pos, leg in enumerate(process.get_initial_legs()+process.get_final_legs()):
224 leg_number_to_pos_dict[leg.get('number')] = leg_pos+1
225-
226- new_spin_correlations = []
227- for spin_correlation in spin_correlations:
228- new_spin_correlations.append( ( leg_number_to_pos_dict[spin_correlation[0]], spin_correlation[1] ) )
229-
230- return new_spin_correlations
231+
232+ new_spin_correlation = []
233+ for pol_vector in spin_correlation:
234+ new_spin_correlation.append(
235+ (leg_number_to_pos_dict[pol_vector[0]], pol_vector[1],))
236+
237+ return new_spin_correlation
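The remapping performed by `format_spin_correlation` can be sketched with plain tuples: each leg number is replaced by its 1-based position among the legs of the reduced process. The helper below and its inputs are illustrative; they stand in for the actual `Process` API:

```python
def remap_spin_correlation(leg_numbers, spin_correlation):
    """Map each (leg_number, polarization_vectors) pair to
    (position_in_process, polarization_vectors), positions being 1-based."""
    leg_number_to_pos = {n: pos + 1 for pos, n in enumerate(leg_numbers)}
    return [(leg_number_to_pos[pv[0]], pv[1]) for pv in spin_correlation]

# Legs numbered 3, 5 and 7 sit at positions 1, 2 and 3 of the reduced process.
```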
238
239 def __call__(self, *args, **opts):
240 """ Quick access to directly calling a Matrix Element. The first argument should always be the MEdictionary key
241@@ -2030,7 +2061,7 @@
242 args[0].accessor[key] = (ME_accessor, call_options)
243 else:
244 args[0].accessor = {key:(ME_accessor, call_options)}
245-
246+
247 return ME_accessor(*call_args, **call_options)
248
249 def add_MEAccessor(self, ME_accessor, allow_overwrite=False):
250
251=== modified file 'madgraph/core/base_objects.py'
252--- madgraph/core/base_objects.py 2018-09-14 10:14:26 +0000
253+++ madgraph/core/base_objects.py 2019-01-14 20:53:09 +0000
254@@ -56,6 +56,7 @@
255
256 for item in init_dict.keys():
257 self.set(item, init_dict[item])
258+
259
260
261 def __getitem__(self, name):
262@@ -2813,6 +2814,15 @@
263 # states are interchanged. This is only used for post-processing by ME7
264 self['has_mirror_process'] = False
265
266+        # The following will store the beam factorization currents that apply
267+        # to this particular process. The elements of the list have the following format
268+ # [{ 'beam_one': <BeamCurrent>,
269+ # 'beam_two': <BeamCurrent> }]
270+ # while None in place of a current indicates no convolution.
271+ # It may be necessary to have several instances of BeamCurrent, for example for the
272+ # combination F1^(1)F2^(0) and F1^(0)F2^(1)
273+ self['beam_factorization'] = [{'beam_one': None, 'beam_two': None},]
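The shape of the 'beam_factorization' entry set above (and validated by `filter` further down) can be exercised on plain data; the sketch below checks only the container shape, with `None` standing in for actual `BeamCurrent` instances:

```python
def is_valid_beam_factorization(value):
    # Mirrors the shape enforced by Process.filter: a list of dictionaries
    # whose keys are exactly 'beam_one' and 'beam_two'. The real filter
    # additionally requires each value to be None or a BeamCurrent instance.
    if not isinstance(value, list):
        return False
    return all(isinstance(bf, dict) and set(bf.keys()) == {'beam_one', 'beam_two'}
               for bf in value)
```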
274+
275 def filter(self, name, value):
276 """Filter for valid process property values."""
277
278@@ -2838,6 +2848,21 @@
279 raise self.PhysicsObjectError, \
280 "%s is not a valid string" % str(value)
281
282+ if name == 'beam_factorization':
283+ if not isinstance(value, list):
284+ raise self.PhysicsObjectError, \
285+ "%s is not a valid list" % str(value)
286+ for bf in value:
287+ if not isinstance(bf, dict) or \
288+ not set(bf.keys())==set(['beam_one','beam_two']):
289+ raise self.PhysicsObjectError, \
290+                    "%s is not a valid list of dictionaries with keys ['beam_one','beam_two']" % str(bf)
291+ from madgraph.core.subtraction import BeamCurrent
292+ for key in ['beam_one','beam_two']:
293+ if (not bf[key] is None) and (not isinstance(bf[key],BeamCurrent)):
294+ raise self.PhysicsObjectError, \
295+ "%s is not None or a valid instance of a BeamCurrent" % str(bf[key])
296+
297 if name == 'split_orders':
298 if not isinstance(value, list):
299 raise self.PhysicsObjectError, \
300@@ -2948,10 +2973,6 @@
301 if value and isinstance(value, list) and \
302 not isinstance(value[0], list):
303 value = [value]
304-
305- if name == 'perturbation_couplings':
306- if value and self['n_loops']==0:
307- self['n_loops']=1
308
309 return super(Process, self).set(name, value) # call the mother routine
310
311@@ -2989,7 +3010,8 @@
312 'forbidden_onsh_s_channels', 'forbidden_s_channels',
313 'forbidden_particles', 'is_decay_chain', 'decay_chains',
314 'legs_with_decays', 'perturbation_couplings', 'has_born',
315- 'NLO_mode','split_orders','n_loops', 'has_mirror_process']
316+ 'NLO_mode','split_orders','n_loops', 'has_mirror_process',
317+ 'beam_factorization']
318
319 def nice_string(self, indent = 0, print_weighted = True, prefix = True):
320 """Returns a nicely formatted string about current process
321@@ -3117,12 +3139,33 @@
322 if self['has_mirror_process']:
323 mystr = mystr + ' (mirrored)'
324
325- if not self.get('decay_chains'):
326- return mystr
327+ if self.get('decay_chains'):
328+ for decay in self['decay_chains']:
329+ mystr = mystr + '\n' + \
330+ decay.nice_string(indent + 2).replace('Process', 'Decay')
331
332- for decay in self['decay_chains']:
333- mystr = mystr + '\n' + \
334- decay.nice_string(indent + 2).replace('Process', 'Decay')
335+ # Finally add beam factorization
336+ if len(self['beam_factorization'])>1 or \
337+ ( len(self['beam_factorization'])>0 and
338+ ( (not self['beam_factorization'][0]['beam_one'] is None) or \
339+ (not self['beam_factorization'][0]['beam_two'] is None)) ):
340+ beam_factorization_terms_str = []
341+ for bft in self['beam_factorization']:
342+ if (not bft['beam_one'] is None) and \
343+ (not bft['beam_two'] is None):
344+ parenthesis = '{%s}'
345+ else:
346+                    # It is aesthetically nicer to always include the curly brackets
347+ parenthesis = '{%s}'
348+ beam_factorization_terms_str.append(parenthesis%('|'.join([
349+ bc.__str__(print_n=True, print_pdg=False, print_state=False,
350+ print_loops=True, print_orders=False) for bc in
351+ [bft['beam_one'],bft['beam_two']] if not bc is None
352+ ])))
353+ if len(self['beam_factorization'])>1:
354+ mystr += ' x [%s]'%(' + '.join(beam_factorization_terms_str))
355+ else:
356+ mystr += ' x %s'%(' + '.join(beam_factorization_terms_str))
357
358 return mystr
359
360@@ -3411,6 +3454,35 @@
361 self.initial_final_pdgs = (initial_pdgs, final_pdgs)
362 return self.initial_final_pdgs
363
364+ def get_n_loops_in_beam_factorization(self):
365+        """ Returns the total number of loops summed over the two beam factorization currents;
366+        this sum is expected to be identical for each beam factorization term."""
367+
368+ bft = self['beam_factorization'][0]
369+ n_loops = 0
370+ for beam_name in ['beam_one','beam_two']:
371+ if not bft[beam_name] is None:
372+ n_loops += bft[beam_name]['n_loops']
373+
374+ return n_loops
375+
376+ def get_squared_orders_in_beam_factorization(self):
377+        """ Returns the total squared orders summed over the two beam factorization currents;
378+        this sum is expected to be identical for each beam factorization term."""
379+
380+ bft = self['beam_factorization'][0]
381+ squared_orders = {}
382+
383+ for beam_name in ['beam_one','beam_two']:
384+ if not bft[beam_name] is None:
385+ for squared_order, value in bft[beam_name]['squared_orders'].items():
386+ if squared_order in squared_orders:
387+ squared_orders[squared_order] += value
388+ else:
389+ squared_orders[squared_order] = value
390+
391+ return squared_orders
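The accumulation above is a plain dictionary merge over the two beams. A self-contained sketch, with each beam factorization current reduced to a bare dict of squared orders (the function name is ours):

```python
def merge_squared_orders(beam_one_orders, beam_two_orders):
    """Sum the squared orders of the two beam factorization currents,
    treating None as an absent (non-factorized) beam."""
    squared_orders = {}
    for orders in (beam_one_orders, beam_two_orders):
        if orders is None:
            continue
        for name, value in orders.items():
            squared_orders[name] = squared_orders.get(name, 0) + value
    return squared_orders
```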
392+
393 def format_PS_point_for_ME_call(self, PS_point):
394 """ From a dictionary formatted PS point and a process, returns the PS point as a flat list, ordered as
395 the legs in the process."""
396@@ -3542,7 +3614,8 @@
397
398 return ( tuple( l.get('id') for l in self.get('legs') if not l.get('state') ),
399 tuple( l.get('id') for l in self.get('legs') if l.get('state') ),
400- self.get('id'), self.get('n_loops'), tuple(self.get('perturbation_couplings')) )
401+ self.get('id'), self.get('n_loops'), tuple(self.get('perturbation_couplings')),
402+ tuple( (str(bf['beam_one']),str(bf['beam_two'])) for bf in self.get('beam_factorization')) )
403
404 def compare_for_sort(self, other):
405 """Sorting routine which allows to sort processes for
406@@ -3937,7 +4010,8 @@
407 'split_orders': self.get('split_orders'),
408 'NLO_mode': self.get('NLO_mode'),
409 'n_loops': self.get('n_loops'),
410- 'has_mirror_process': self.get('has_mirror_process')
411+ 'has_mirror_process': self.get('has_mirror_process'),
412+ 'beam_factorization': self.get('beam_factorization'),
413 })
414
415 def get_process(self, initial_state_ids, final_state_ids):
416@@ -4016,7 +4090,13 @@
417 correction_couplings = [],
418 n_unresolved_particles = 0,
419 n_loops = -1,
420- squared_orders_constraints = {}
421+ squared_orders_constraints = {},
422+ beam_types = (None, None),
423+            # Active simply means that a convolution is in place.
424+ beam_factorization_active = (False, False),
425+ # Correlated beam convolutions means that both beams are convoluted with
426+ # the same variable xi
427+ correlated_beam_convolution = False
428 ):
429 """ Instantiate a contribution definition with all necessary information
430 to generate the actual contribution. """
431@@ -4028,8 +4108,47 @@
432 self.correction_order = correction_order
433 self.correction_couplings = correction_couplings
434 self.n_unresolved_particles = n_unresolved_particles
435- # Make sure that n_loops matches the one in the process definition. We however keep
436- # this attribute in ContributionDefinition as well for convenience.
437+
438+ # Values of the dictionary 'beam_factorization' are dicts of the form:
439+ # { 'beam_type' : beam_type_name_str
440+ # 'beam_PDGs' : (beam_PDGs,)
441+ # }
442+ self.beam_factorization = {}
443+ for i, beam_name in enumerate(['beam_one', 'beam_two']):
444+ if beam_types[i] is None:
445+ self.beam_factorization[beam_name] = None
446+ else:
447+ self.beam_factorization[beam_name] = {
448+ 'beam_type' : beam_types[i][0],
449+ 'beam_PDGs' : tuple(beam_types[i][1]),
450+ # Active means that a convolution with beam factorization is in place.
451+ 'active' : beam_factorization_active[i]
452+ }
453+ if correlated_beam_convolution and beam_factorization_active != (True, True):
454+ raise MadGraph5Error("The beam convolution can be specified to be correlated only if"+
455+ " both beams are convoluted.")
456+ self.correlated_beam_convolution = correlated_beam_convolution
457+
458+        # Extract n_loops from the one in the process definition. However, these need *not*
459+        # be the same: for example, a loop-induced contribution 'RV' would have a process
460+        # definition with n_loops=2 while the contribution itself has n_loops=1, because the
461+        # contribution's n_loops must always match the number of loops expected for such a
462+        # contribution (i.e. RV in this example) in the case of a non-loop-induced process.
463+ if n_loops == -1:
464+ self.n_loops = process_definition.get('n_loops')
465+ else:
466+ self.n_loops = n_loops
467+
468+ # Possibly extract n_loops from the one in the process definition. Notice that the
469+ # attribute 'n_loops' of the contribution and of the process *must* be the same
470+ # even in the case of loop-induced processes. This attribute 'n_loops' is meant to
471+ # be used only for the whole MadNkLO construction where it *must* be equivalent to
472+ # labeling the number of loops one would find in this contribution or the processes
473+ # hosted *in the situation where the lowest order is tree-level*.
474+ # In order to specify the number of loops the process must really have when its
475+ # ME code is exported on disk, one must either set the 'NLO_mode' of the process
476+ # to be 'virt' or 'sqrvirt', or use the squared order constraint 'NLOOP' when using
477+ # multi-loop UFO form factors.
478 if n_loops == -1:
479 self.n_loops = process_definition.get('n_loops')
480 else:
481@@ -4038,12 +4157,60 @@
482 "should be instantiated with a ProcessDefinition with the same n_loops.")
483 else:
484 self.n_loops = n_loops
485+
486 # Squared orders to consider. Note that this is not the same as the attribute
487 # of the process definition, since it can be {'QED':[2,4]} to indicates that all
488 # squared order for QED between 2 and 4 must be considered. A constraint of
489 # {'QED':[2,2]} means that QED^2 must be exactly equal to 2.
490 self.squared_orders_constraints = squared_orders_constraints
491
492+ def get_copy(self):
493+ """ Returns a partially-shallow copy of self."""
494+ return ContributionDefinition(
495+ process_definition = self.process_definition.get_copy(),
496+ overall_correction_order = self.overall_correction_order,
497+ correction_order = self.correction_order,
498+ correction_couplings = self.correction_couplings,
499+ n_unresolved_particles = self.n_unresolved_particles,
500+ n_loops = self.n_loops,
501+ squared_orders_constraints = dict(self.squared_orders_constraints),
502+ beam_factorization = copy.deepcopy(self.beam_factorization),
503+ correlated_beam_convolution = self.correlated_beam_convolution
504+ )
505+
506+ def get_beam_types(self):
507+ """ Return the beam types in a tuple format."""
508+
509+ return (
510+ ( self.beam_factorization['beam_one']['beam_type'],
511+ self.beam_factorization['beam_one']['beam_PDGs']) if
512+ (not self.beam_factorization['beam_one'] is None) else None,
513+ ( self.beam_factorization['beam_two']['beam_type'],
514+ self.beam_factorization['beam_two']['beam_PDGs']) if
515+ (not self.beam_factorization['beam_two'] is None) else None )
516+
517+ def has_beam_factorization(self):
518+ """ Checks whether this contribution has as any beam factorization active."""
519+
520+ return ( ( (not self.beam_factorization['beam_one'] is None) and
521+ self.beam_factorization['beam_one']['active'] ) or
522+ ( (not self.beam_factorization['beam_two'] is None) and
523+ self.beam_factorization['beam_two']['active'] ) )
524+
525+ def is_beam_active(self, beam_name):
526+ """ Checks whether factorization of the beam 'beam_name' is active or not."""
527+ return (not self.beam_factorization[beam_name] is None) and \
528+ self.beam_factorization[beam_name]['active']
529+
530+ def is_loop_induced(self):
531+ """ Returns whether this contribution hosts a 'loop-induced' process. This is not
532+ so important in the whole MadNkLO construction and for now we simply use the
533+ mode of the process definition, but later we'll also have to investigate the
534+        coupling orders (to see if any is 'NLOOP') as (multi-)loops in Matrix Elements can also
535+ be obtained with UFO form factors."""
536+ n_process_loops = 1 if (self.process_definition.get('NLO_mode') in ['virt','sqrvirt']) else 0
537+ return ((n_process_loops - self.n_loops) > 0)
538+
539 def nice_string(self):
540 """ Nice representation of a contribution definition."""
541 res = []
542@@ -4054,7 +4221,48 @@
543 res.append('%-30s: %s'%('n_loops',self.n_loops))
544 res.append('%-30s: %s'%('squared_orders_constraints',self.squared_orders_constraints))
545 res.append('%-30s: %s'%('process_definition',self.process_definition.nice_string().replace('Process: ','')))
546+ res.append('%-30s: %s'%('beam_one',self.get_beam_factorization_nice_string(
547+ self.process_definition['model'],self.beam_factorization['beam_one'])))
548+ res.append('%-30s: %s'%('beam_two',self.get_beam_factorization_nice_string(
549+ self.process_definition['model'],self.beam_factorization['beam_two'])))
550 return '\n'.join(res)
551+
552+ def get_beam_factorization_nice_string(self, model, beam_factorization_specification):
553+        """ Returns a nice representation of the beam factorization specification given
554+        in argument."""
555+
556+
557+ if beam_factorization_specification is None:
558+ return ''
559+
560+ res = beam_factorization_specification['beam_type']
561+
562+ if beam_factorization_specification['active']:
563+ if self.correlated_beam_convolution:
564+ res += ' (correlated convolution)'
565+ else:
566+ res += ' (convoluted)'
567+
568+ res += ' [including: %s]'%(' '.join(model.get_particle(pdg).get_name() for pdg
569+ in beam_factorization_specification['beam_PDGs']))
570+ return res
571+
572+ def get_beam_factorization_short_name(self):
573+ """ Return a short string identifying the beam factorization applied to this
574+ contribution. For instance it can be:
575+ F1 --> PDF CT applied to beam one
576+ F2 --> PDF CT applied to beam two
577+ F1F2 --> PDF CT applied to both beams
578+ S --> Correlated convolution of both beams
579+ """
580+ if self.correlated_beam_convolution:
581+ return 'S'
582+ res = ''
583+ for i_beam, beam_name in enumerate(['beam_one', 'beam_two']):
584+ if not self.beam_factorization[beam_name] is None and \
585+ self.beam_factorization[beam_name]['active']:
586+ res += 'F%d'%(i_beam+1)
587+ return res
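The naming rule spelled out in the docstring above can be captured by a small standalone function, with bare booleans in place of the contribution definition attributes (the function name is ours):

```python
def beam_factorization_short_name(beam_one_active, beam_two_active,
                                  correlated_beam_convolution=False):
    # 'S' labels a correlated convolution of both beams; otherwise an
    # 'F<i>' label is appended for each beam with an active PDF counterterm.
    if correlated_beam_convolution:
        return 'S'
    res = ''
    if beam_one_active:
        res += 'F1'
    if beam_two_active:
        res += 'F2'
    return res
```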
588
589 def short_name(self):
590 """ Returns a short-hand notation for that contribution."""
591@@ -4062,6 +4270,8 @@
592 'V'*self.n_loops
593 if shortname=='':
594 shortname = 'B'
595+ # Add factorization labels
596+ shortname += self.get_beam_factorization_short_name()
597 return shortname
598
599 def get_shell_name(self):
600@@ -4079,6 +4289,9 @@
601 shell_name += 'B'
602 else:
603 shell_name += 'V'*nV_in_conjugated_amplitude
604+ beam_factorization_label = self.get_beam_factorization_short_name()
605+ if beam_factorization_label != '':
606+ shell_name += '_%s'%beam_factorization_label
607 return shell_name
608
609 class EpsilonExpansion(dict):
610@@ -4152,6 +4365,11 @@
611 __truediv__ = __div__
612
613 def __radd__(self, other):
614+ # The check below is useful for this overloaded __radd__ to work with the Python
615+ # intrinsic sum() function.
616+        if other == 0:
617+            return self
619 for k, v in other.items():
620 try:
621 self[k] += v
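The `other == 0` special case exists so that Python's builtin `sum()`, which starts from the integer 0, can fold a list of expansions. A stripped-down sketch of the idea, reproducing only the additive behaviour of `EpsilonExpansion` (the class name `MiniExpansion` is ours):

```python
class MiniExpansion(dict):
    """Toy version of EpsilonExpansion keeping only addition."""
    def __add__(self, other):
        res = MiniExpansion(self)
        for k, v in other.items():
            res[k] = res.get(k, 0) + v
        return res
    def __radd__(self, other):
        # sum() starts from the integer 0: returning self here lets
        # sum([e1, e2, ...]) work on a list of expansions.
        if other == 0:
            return self
        return self.__add__(other)

total = sum([MiniExpansion({-1: 1.0, 0: 2.0}), MiniExpansion({0: 3.0})])
```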
622@@ -4163,7 +4381,7 @@
623 try:
624 self[k] -= v
625 except KeyError:
626- self[k] = -v
627+ self[k] = -v
628
629 def __rmul__(self, other):
630 if not isinstance(other, EpsilonExpansion):
631@@ -4191,7 +4409,26 @@
632 raise NotImplementedError
633
634 __rtruediv__ = __rdiv__
635-
636+
637+ def get_term(self, term_specifier):
638+        """ Get one particular term of the expansion, specified either by a string or by
639+        an integer."""
640+        dict_key = 0
641+        if isinstance(term_specifier, str):
642+            term_specifier = term_specifier.lower()
643+ if term_specifier == 'sum_all':
644+ return sum(v for k,v in self.items() if isinstance(k, int))
645+ if term_specifier == 'finite':
646+ dict_key = 0
647+ elif term_specifier.startswith('eps^'):
648+ dict_key = int(term_specifier[4:])
649+ else:
650+ raise MadGraph5Error('Incorrect specification of the epsilon expansion term to access: %s'%term_specifier)
651+ else:
652+ dict_key = term_specifier
653+
654+ return self[dict_key]
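The accepted term specifiers can be exercised on a bare dictionary keyed by integer powers of epsilon; the sketch below mirrors the lookup logic of `get_term`, applying `.lower()` only to string specifiers (the function name and the use of `ValueError` are ours):

```python
def get_expansion_term(expansion, term_specifier):
    """expansion maps integer powers of eps to coefficients; a term can be
    addressed by that integer, or by 'finite', 'eps^<n>' or 'sum_all'."""
    if isinstance(term_specifier, str):
        term_specifier = term_specifier.lower()
        if term_specifier == 'sum_all':
            return sum(v for k, v in expansion.items() if isinstance(k, int))
        if term_specifier == 'finite':
            dict_key = 0
        elif term_specifier.startswith('eps^'):
            dict_key = int(term_specifier[4:])
        else:
            raise ValueError('Incorrect term specifier: %s' % term_specifier)
    else:
        dict_key = term_specifier
    return expansion[dict_key]

exp = {-2: 0.5, -1: 1.5, 0: 4.0}
```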
655+
656 def to_human_readable_dict(self):
657 """ Transforms this expansion into a dictionary with human readable keys.
658 Typically to be used in MEEvaluation results dictionaries. """
659
660=== modified file 'madgraph/core/contributions.py'
661--- madgraph/core/contributions.py 2018-06-16 19:41:18 +0000
662+++ madgraph/core/contributions.py 2019-01-14 20:53:09 +0000
663@@ -50,6 +50,14 @@
664
665 logger = logging.getLogger('contributions')
666
667+# The initial state collinear counterterms may be combined because the
668+# BeamCurrents that implement them return the full backward evolution flavor
669+# matrix, so that several initial state collinear counterterms that differ only
670+# by the resulting backward evolved flavor (e.g. 'd g <- d' and 'g d~ <- d') could
671+# be combined. This option is not implemented as of now, so the only valid value
672+# for the global parameter below is False.
673+_COMBINE_INITIAL_STATE_INTEGRATED_COLLINEAR_COUNTERTERMS = False
674+
675 #===============================================================================
676 # Contribution mother class
677 #===============================================================================
678@@ -64,35 +72,39 @@
679
680 if cls is Contribution:
681 target_type = 'Unknown'
682- if contribution_definition.correction_order == 'LO':
683- if contribution_definition.n_loops == 0 and \
684- contribution_definition.n_unresolved_particles == 0:
685- target_type = 'Born'
686- elif contribution_definition.n_loops == 1 and \
687- contribution_definition.n_unresolved_particles == 0:
688- target_type = 'LoopInduced_Born'
689- elif contribution_definition.correction_order == 'NLO':
690- if contribution_definition.n_loops == 1 and \
691- contribution_definition.n_unresolved_particles == 0:
692+ correction_order = contribution_definition.correction_order.count('N')
693+ beam_factorization_count = 0
694+ if contribution_definition.is_beam_active('beam_one'):
695+ beam_factorization_count += 1
696+ if contribution_definition.is_beam_active('beam_two'):
697+ beam_factorization_count += 1
698+ n_loops = contribution_definition.n_loops
699+ n_unresolved_particles = contribution_definition.n_unresolved_particles
700+ # Beam factorization contributions are automatically of type RV because
701+ # they must both generate local counterterms (for the form factors) and
702+ # accept integrated ISR ones.
703+ if beam_factorization_count > 0:
704+ if contribution_definition.correlated_beam_convolution:
705+ target_type = 'BeamSoft'
706+ else:
707+ target_type = 'RealVirtual'
708+ elif n_loops == 0 and n_unresolved_particles == 0:
709+ target_type = 'Born'
710+ elif n_loops == 1 and n_unresolved_particles == 0:
711+ if correction_order < 1:
712+ target_type = 'LoopInduced_Born'
713+ else:
714 target_type = 'Virtual'
715- elif contribution_definition.n_loops == 0 and \
716- contribution_definition.n_unresolved_particles == 1:
717- target_type = 'SingleReals'
718- elif contribution_definition.correction_order == 'NNLO':
719- if contribution_definition.n_loops == 0 and \
720- contribution_definition.n_unresolved_particles == 2:
721- target_type = 'DoubleReals'
722- else:
723- raise MadGraph5Error("Some NNLO type of contributions are not implemented yet.")
724- elif contribution_definition.correction_order == 'NNNLO':
725- if contribution_definition.n_loops == 0 and \
726- contribution_definition.n_unresolved_particles == 3:
727- target_type = 'TripleReals'
728- else:
729- raise MadGraph5Error("Some NNNLO type of contributions are not implemented yet.")
730+ elif n_loops == 0 and n_unresolved_particles == 1:
731+ target_type = 'SingleReals'
732+ elif n_loops == 0 and n_unresolved_particles == 2:
733+ target_type = 'DoubleReals'
734+ elif n_loops == 0 and n_unresolved_particles == 3:
735+ target_type = 'TripleReals'
736+ else:
737+ raise MadGraph5Error("Some %s type of contributions are not implemented yet."%
738+ contribution_definition.correction_order)
739
740- else:
741- target_type = 'Unknown'
742 target_class = Contribution_classes_map[target_type]
743 if not target_class:
744 raise MadGraph5Error("Could not determine the class for contribution of type '%s' to be added for"%target_type+
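The dispatch rewritten above reduces to a pure function of the contribution characteristics. The sketch below reproduces that branching with plain arguments in place of the `ContributionDefinition` attributes (the function name is ours):

```python
def contribution_target_type(n_loops, n_unresolved, correction_order,
                             n_factorized_beams=0, correlated_beam_convolution=False):
    """Return the Contribution class key selected in Contribution.__new__.
    correction_order counts the 'N's of 'LO', 'NLO', 'NNLO', ..."""
    # Beam factorization contributions are of type RV (or BeamSoft when the
    # convolution of the two beams is correlated), as in the hunk above.
    if n_factorized_beams > 0:
        return 'BeamSoft' if correlated_beam_convolution else 'RealVirtual'
    if n_loops == 0 and n_unresolved == 0:
        return 'Born'
    if n_loops == 1 and n_unresolved == 0:
        return 'LoopInduced_Born' if correction_order < 1 else 'Virtual'
    if n_loops == 0 and n_unresolved == 1:
        return 'SingleReals'
    if n_loops == 0 and n_unresolved == 2:
        return 'DoubleReals'
    if n_loops == 0 and n_unresolved == 3:
        return 'TripleReals'
    raise NotImplementedError('This type of contribution is not implemented yet.')
```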
745@@ -147,15 +159,21 @@
746
747 # Initialize an IR subtraction module if necessary
748 self.IR_subtraction = None
749- if self.contribution_definition.n_unresolved_particles > 0:
750+
751+ if self.contribution_definition.n_unresolved_particles > 0 or \
752+ self.contribution_definition.has_beam_factorization():
753 self.IR_subtraction = subtraction.IRSubtraction(
754 self.model,
755- coupling_types=self.contribution_definition.correction_couplings,
756- n_unresolved=self.contribution_definition.n_unresolved_particles )
757+ coupling_types = self.contribution_definition.correction_couplings,
758+ n_unresolved = self.contribution_definition.n_unresolved_particles,
759+ beam_types = self.contribution_definition.get_beam_types(),
760+ currents_scheme = self.options['subtraction_currents_scheme'],
761+ mappings_scheme = self.options['subtraction_mappings_scheme'] )
762
763- # The following two attributes dicate the type of Exporter which will be assigned to this contribution
764+ # The following two attributes dictate the type of Exporter which will be assigned to this contribution
765 self.output_type = 'default'
766 self.format = 'standalone'
767+
768 self.export_dir = 'None'
769 if self.options['group_subprocesses'] == 'Auto':
770 self.collect_mirror_procs = True
771@@ -166,22 +184,98 @@
772
773 # Options relevant only for LO diagram generation
774 self.ignore_six_quark_processes = False
775- self.diagram_filter = False
776+ if 'diagram_filter' in opts:
777+ self.diagram_filter = opts['diagram_filter']
778+ else:
779+ self.diagram_filter = False
780 self.optimize = False
781 # Flag to indicate whether this contribution supports decay chains
782 # Typically only LO contributions support decay chain amplitudes
783 self.supports_decay_chain = False
784
785+ if 'loop_filter' in opts:
786+ self.loop_filter = opts['loop_filter']
787+ else:
788+ self.loop_filter = None
789+
790 # Specifies if this contribution output requires its own MODEL in Source
791 self.requires_its_own_model = False
792
793+ # The following two attributes dictate the type of Exporter which will be assigned to this contribution
794+ self.output_type = self.get_output_type()
795+ self.format = self.get_output_format()
796+
797 # Specify the MultiProcessClass to use to generate amplitudes
798- self.MultiProcessClass = diagram_generation.MultiProcess
799+ self.MultiProcessClass = self.get_multi_process_class()
800
801 # Add default phase-space topology specifiers
802- self.processes_to_topologies = None
803- self.topologies_to_processes = None
804-
805+ self.processes_to_topologies = {}
806+ self.topologies_to_processes = {}
807+
808+ # Add an empty processes map, specifying that it is obtained before anything
809+ # was generated yet
810+ self.processes_map = ({
811+ 'had_amplitudes' : False,
812+ 'had_matrix_elements' : False
813+ }, {})
814+
815+ def get_multi_process_class(self):
816+ """ Function returning the appropriate multiprocess class to instantiate during
817+ the amplitude generation for this particular contribution."""
818+
819+ if self.output_type == 'madloop':
820+ # Make sure to adjust the MultiProcessClass to be used in the case of a 'madloop' output
821+ return loop_diagram_generation.LoopMultiProcess
822+ elif self.output_type == 'default':
823+ return diagram_generation.MultiProcess
824+ else:
825+ raise MadGraph5Error(
826+                "Output type '%s' not recognized by the base Contribution class."%self.output_type+
827+                " It must be implemented explicitly in the daughter class.")
828+
829+ def get_output_type(self):
830+ """ Function returning the appropriate output type to be used when outputting
831+ the amplitudes from this particular contribution."""
832+
833+ # Decide whether a MadLoop output is required based on the process definition
834+ # underlying this contribution.
835+ if (self.contribution_definition.process_definition.get('NLO_mode') in ['virt','sqrvirt']):
836+ return 'madloop'
837+ else:
838+ return 'default'
839+
840+ def get_output_format(self):
841+ """ Function returning the appropriate output format to be used when outputting
842+ the amplitudes from this particular contribution."""
843+ return 'standalone'
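The output-type decision above reduces to inspecting the process definition's `NLO_mode`. A minimal sketch, with a string argument in place of the process definition object (the function name is ours):

```python
def select_output_type(nlo_mode):
    # A MadLoop output is requested for loop ('virt') or squared-loop
    # ('sqrvirt') matrix elements; otherwise the default tree-level
    # output is used.
    if nlo_mode in ('virt', 'sqrvirt'):
        return 'madloop'
    return 'default'
```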
844+
845+ def add_content_to_global_ME7_resources(self, global_ME7_dir, **opts):
846+ """ Possibly add content to the global ME7 resources pool, depending on the output
847+ format requested."""
848+
849+ if self.format == 'standalone' and self.output_type == 'madloop':
850+ destination = pjoin(global_ME7_dir,'Cards','MadLoopParams.dat')
851+ if not os.path.exists(destination):
852+ shutil.copyfile(pjoin(self.export_dir,'Cards','MadLoopParams.dat'), destination)
853+
854+
855+ def set_helas_model(self):
856+ """ Instantiate properly the helas model according to the exporter and the
857+ output type."""
858+
859+ if self.exporter.exporter == 'cpp':
860+ self.helas_model = helas_call_writers.CPPUFOHelasCallWriter(self.model)
861+ else:
862+ assert self.exporter.exporter == 'v4'
863+ if self.output_type == 'madloop':
864+ assert (not self.options['_model_v4_path'])
865+ self.helas_model = helas_call_writers.FortranUFOHelasCallWriter(self.model)
866+ else:
867+ if self.options['_model_v4_path']:
868+ self.helas_model = helas_call_writers.FortranHelasCallWriter(self.model)
869+ else:
870+ self.helas_model = helas_call_writers.FortranUFOHelasCallWriter(self.model)
871+
872 def add_integrated_counterterm(self, integrated_CT_properties):
873 """By default, do not support adding integrated counterterms."""
874
875@@ -189,6 +283,10 @@
876 "The contribution of type %s cannot receive" % type(self) +
877 " contributions from integrated counterterms." )
878
879+ def combine_initial_state_counterterms(self, *args, **opts):
880+ """ By default, does nothing. For inheritance purposes only."""
881+ pass
882+
883 def set_export_dir(self, prefix):
884 """Assigns an export directory name."""
885
886@@ -237,7 +335,42 @@
887 return self.exporter.pass_information_from_cmd(cmd_interface)
888
889 def generate_matrix_elements(self, group_processes=True):
890- """Generate the Helas matrix elements before exporting. Uses the main function argument
891+ """ Generate the matrix element in the manner appropriate to the selected output type."""
892+
893+ if self.output_type == 'madloop':
894+ return self.generate_loop_matrix_elements(group_processes=group_processes)
895+ else:
896+ return self.generate_tree_matrix_elements(group_processes=group_processes)
897+
898+ def generate_loop_matrix_elements(self, group_processes=True):
899+ """Generate the *loop* Helas matrix elements before exporting. Uses the main function argument
900+ 'group_processes' to decide whether to use group_subprocess or not."""
901+
902+ cpu_time1 = time.time()
903+ ndiags = 0
904+
905+ generation_mode = {'optimized_output': self.options['loop_optimized_output']}
906+
907+ self.all_matrix_elements = loop_helas_objects.LoopHelasProcess(
908+ self.amplitudes,
909+ compute_loop_nc = True,
910+ matrix_element_opts = generation_mode,
911+ optimized_output = self.options['loop_optimized_output'],
912+ combine_matrix_elements=group_processes
913+ )
914+ ndiags = sum([len(me.get('diagrams')) for me in self.all_matrix_elements.get_matrix_elements()])
915+
916+ # assign a unique id number to all process
917+ uid = 0
918+ for me in self.all_matrix_elements.get_matrix_elements():
919+ uid += 1 # update the identification number
920+ me.get('processes')[0].set('uid', uid)
921+
922+ cpu_time2 = time.time()
923+ return ndiags, cpu_time2 - cpu_time1
924+
925+ def generate_tree_matrix_elements(self, group_processes=True):
926+ """Generate the *tree* Helas matrix elements before exporting. Uses the main function argument
927 'group_processes' to decide whether to use group_subprocess or not."""
928
929 cpu_time1 = time.time()
930@@ -300,39 +433,51 @@
931
932 return ndiags, cpu_time2 - cpu_time1
933
934- def set_helas_model(self):
935- """ Instantiate properly the helas model """
936-
937- if self.exporter.exporter == 'cpp':
938- self.helas_model = helas_call_writers.CPPUFOHelasCallWriter(self.model)
939- elif self.options['_model_v4_path']:
940- assert self.exporter.exporter == 'v4'
941- self.helas_model = helas_call_writers.FortranHelasCallWriter(self.model)
942- else:
943- assert self.exporter.exporter == 'v4'
944- self.helas_model = helas_call_writers.FortranUFOHelasCallWriter(self.model)
945-
946 def generate_code(self):
947 """ Assuming the Helas Matrix Elements are now generated, we can write out the corresponding code."""
948
949 matrix_elements = self.all_matrix_elements.get_matrix_elements()
950-
951 calls=0
952 cpu_time_start = time.time()
953- if isinstance(self.all_matrix_elements, group_subprocs.SubProcessGroupList) and \
954- self.exporter.grouped_mode:
955- modified, self.all_matrix_elements = self.exporter.modify_grouping(self.all_matrix_elements)
956- for me_number, me in enumerate(self.all_matrix_elements):
957- calls = calls + self.exporter.generate_subprocess_directory(me, self.helas_model, me_number)
958-
959- else: # Non-grouped mode
960- for nb, me in enumerate(matrix_elements[:]):
961- new_calls = self.exporter.generate_subprocess_directory(me, self.helas_model, nb)
962- if isinstance(new_calls, int):
963- if new_calls ==0:
964- matrix_elements.remove(me)
965- else:
966- calls = calls + new_calls
967+
968+ if self.output_type == 'madloop':
969+ for me in matrix_elements:
970+ # Choose the group number to be the unique id so that the output prefix
971+ # is nicely P<proc_id>_<unique_id>_
972+ calls = calls + self.exporter.generate_loop_subprocess(me,
973+ self.helas_model,
974+ group_number = me.get('processes')[0].get('uid'),
975+ proc_id = None,
976+ config_map=None,
977+ unique_id=me.get('processes')[0].get('uid'))
978+
979+ # If the MEs do not all share the same maximum loop vertex rank and the
980+ # same maximum loop wavefunction size, we need to set the maxima
981+ # in coef_specs.inc of the HELAS Source. The SubProcesses/P* directories
982+ # all link this file, so it is properly propagated
983+ if self.options['loop_optimized_output'] and len(matrix_elements)>1:
984+ max_lwfspins = [m.get_max_loop_particle_spin() for m in \
985+ matrix_elements]
986+ max_loop_vert_ranks = [me.get_max_loop_vertex_rank() for me in \
987+ matrix_elements]
988+ if len(set(max_lwfspins))>1 or len(set(max_loop_vert_ranks))>1:
989+ self.exporter.fix_coef_specs(max(max_lwfspins),\
990+ max(max_loop_vert_ranks))
991+ else:
992+ if isinstance(self.all_matrix_elements, group_subprocs.SubProcessGroupList) and \
993+ self.exporter.grouped_mode:
994+ modified, self.all_matrix_elements = self.exporter.modify_grouping(self.all_matrix_elements)
995+ for me_number, me in enumerate(self.all_matrix_elements):
996+ calls = calls + self.exporter.generate_subprocess_directory(me, self.helas_model, me_number)
997+
998+ else: # Non-grouped mode
999+ for nb, me in enumerate(matrix_elements[:]):
1000+ new_calls = self.exporter.generate_subprocess_directory(me, self.helas_model, nb)
1001+ if isinstance(new_calls, int):
1002+ if new_calls ==0:
1003+ matrix_elements.remove(me)
1004+ else:
1005+ calls = calls + new_calls
1006
1007 return calls, time.time() - cpu_time_start
1008
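In the madloop branch above, coef_specs.inc is only patched when the matrix elements disagree on their maxima. That decision rule can be isolated as follows (`coef_specs_override` is a hypothetical helper name introduced for illustration, not part of the exporter API):

```python
def coef_specs_override(max_lwfspins, max_loop_vert_ranks):
    # Return the (spin, rank) maxima to write into coef_specs.inc, or None
    # when all matrix elements already agree and no override is needed.
    if len(set(max_lwfspins)) > 1 or len(set(max_loop_vert_ranks)) > 1:
        return max(max_lwfspins), max(max_loop_vert_ranks)
    return None
```

When the override is None, each subprocess directory can keep the values it was generated with.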
1009@@ -344,7 +489,18 @@
1010 for amplitude in self.amplitudes:
1011 max_order = max(max_order, amplitude.get('diagrams').get_max_order(order))
1012 return max_order
1013+
1014+ def subtract(self, ignore_integrated_counterterms=False, **opts):
1015+ """ Generate and export all necessary subtraction counterterms and currents
1016+ (including beam factorisation counterterms; the 'bulk' ones are already generated
1017+ at this stage). Nothing to do in this base class."""
1018+
1019+ # Assign an empty map of counterterms to this contribution for now
1020+ self.counterterms = {}
1021
1022+ # No information needs to be returned at this stage since nothing was generated
1023+ return {}
1024+
1025 def export(self, nojpeg=False, group_processes=True, args=[], **opts):
1026 """ Perform the export duties, that include generation of the HelasMatrixElements and
1027 the actual output of the matrix element code by the exporter."""
1028@@ -384,7 +540,7 @@
1029 # Now investigate phase-space topologies and identify the list of kinematic configurations present
1030 # and, for each process key in the process map, what configuration it includes and with which defining diagrams.
1031 self.set_phase_space_topologies()
1032-
1033+
1034 # The printout below summarizes the topologies identified and the information gathered about the
1035 # matrix elements and their diagrams
1036 # misc.sprint('='*50)
1037@@ -550,14 +706,16 @@
1038 """ Adds all MEAccessors for the matrix elements generated as part of this contribution."""
1039
1040 MEAccessors = []
1041+
1042+ # This is not needed for factorization contributions as they always use processes already
1043+ # defined in other contributions.
1044+ if self.has_beam_factorization():
1045+ return MEAccessors
1046+
1047 for process_key, (defining_process, mapped_processes) in self.get_processes_map().items():
1048 # The PDGs of the hashed representations correspond to entry [0][0]
1049- mapped_process_pdgs = [
1050- (proc.get_initial_ids(), proc.get_final_ids())
1051- for proc in mapped_processes ]
1052- proc_dir = pjoin(
1053- self.export_dir, 'SubProcesses',
1054- 'P%s' % self.process_dir_name(defining_process) )
1055+ mapped_process_pdgs = [ (proc.get_initial_ids(), proc.get_final_ids()) for proc in mapped_processes ]
1056+ proc_dir = pjoin(self.export_dir, 'SubProcesses', 'P%s' % self.process_dir_name(defining_process) )
1057 if not os.path.isdir(proc_dir):
1058 raise MadGraph5Error(
1059 "Cannot find the process directory '%s' for process %s." %
1060@@ -588,6 +746,46 @@
1061 all_MEAccessors.add_MEAccessors(
1062 MEAccessors, allow_overwrite=(not self.group_subprocesses) )
1063
1064+ def add_current_accessors(
1065+ self, model, all_MEAccessors, root_path, current_set, currents_to_consider):
1066+ """Generate and add all subtraction current accessors to the MEAccessorDict."""
1067+
1068+ # Generate the computer code and export it on disk for the remaining new currents
1069+ current_exporter = subtraction.SubtractionCurrentExporter(model, root_path, current_set)
1070+ mapped_currents = current_exporter.export(currents_to_consider)
1071+ # Print to the debug log which currents were exported
1072+ log_string = "The following subtraction current implementations are exported "+\
1073+ "in contribution '%s':\n"%self.short_name()
1074+ for (module_path, class_name, _), current_properties in mapped_currents.items():
1075+ if class_name != 'DefaultCurrentImplementation':
1076+ quote_class_name = "'%s'" % class_name
1077+ defining_current_str = str(current_properties['defining_current'])
1078+ line_pars = (quote_class_name, defining_current_str)
1079+ log_string += " > %-35s for representative current '%s'\n" % line_pars
1080+ else:
1081+ quote_default_name = "'DefaultCurrentImplementation'"
1082+ number_of_currents = len(current_properties['mapped_process_keys'])
1083+ line_pars = (quote_default_name, number_of_currents)
1084+ log_string += " > %-35s for a total of %d currents.\n" % line_pars
1085+ logger.debug(log_string)
1086+ # Instantiate the CurrentAccessors corresponding
1087+ # to all current implementations identified and needed
1088+ all_current_accessors = []
1089+ for (module_path, class_name, _), current_properties in mapped_currents.items():
1090+ all_current_accessors.append(accessors.VirtualMEAccessor(
1091+ current_properties['defining_current'],
1092+ module_path,
1093+ class_name,
1094+ '%s.subtraction_current_implementations_utils'%current_exporter.main_module_name,
1095+ current_properties['instantiation_options'],
1096+ mapped_process_keys=current_properties['mapped_process_keys'],
1097+ root_path=root_path,
1098+ model=model
1099+ ))
1100+ # Add MEAccessors
1101+ all_MEAccessors.add_MEAccessors(all_current_accessors)
1102+ return mapped_currents
1103+
1104 def can_processes_be_integrated_together(self, processA, processB):
1105 """ Investigates whether processA and processB can be integrated together for this contribution."""
1106
1107@@ -636,10 +834,6 @@
1108 all_integrands.extend(self.get_integrands_for_process_map(integrand_process_map, *args))
1109
1110 return all_integrands
1111-
1112- def add_content_to_global_ME7_resources(self, global_ME7_dir, **opts):
1113- """ Possibly add content to the global ME7 resources pool."""
1114- pass
1115
1116 def remove_superfluous_content(self):
1117 """At the end of the export of this contribution,
1118@@ -669,6 +863,11 @@
1119 # This stores the wanted couplings which should be exported by the overall ME7 exporter.
1120 global_wanted_couplings = []
1121
1122+ # For some contributions, like the beam factorization ones, there is no additional
1123+ # export to be performed.
1124+ if self.has_beam_factorization():
1125+ return global_wanted_couplings
1126+
1127 # Handling of the model
1128 if self.options['_model_v4_path']:
1129 logger.info(
1130@@ -700,6 +899,11 @@
1131
1132 def link_global_ME7_resources(self, global_ME7_dir):
1133 """ Remove some of the local resources and instead link the global ones from the ME7 exporters."""
1134+
1135+ # For some contributions, like the beam factorization ones, there is no additional
1136+ # export to be performed.
1137+ if self.has_beam_factorization():
1138+ return
1139
1140 # Link to the global model if necessary
1141 if not self.requires_its_own_model:
1142@@ -714,6 +918,12 @@
1143
1144 def make_model_symbolic_link(self):
1145 """ Links some of the MODEL resources to the other folders exported within this contribution."""
1146+
1147+ # For some contributions, like the beam factorization ones, there is no additional
1148+ # export to be performed.
1149+ if self.has_beam_factorization():
1150+ return
1151+
1152 self.exporter.make_model_symbolic_link()
1153
1154 def get_inverse_processes_map(self, force=False):
1155@@ -725,11 +935,7 @@
1156
1157 # Sort PDGs only when processes have been grouped
1158 sort_PDGs = self.group_subprocesses
1159-
1160- if not self.all_matrix_elements.get_matrix_elements():
1161- raise MadGraph5Error("The processes reverse map can only be built *after*"+
1162- " the matrix element have been exported.")
1163-
1164+
1165 if hasattr(self, 'inverse_processes_map') and not force:
1166 return self.inverse_processes_map
1167
1168@@ -761,9 +967,7 @@
1169 where defining_process is an actual instance of Process and [mapped processes] is a list
1170 of processes instances mapped to that defining process.
1171 """
1172- if not self.amplitudes:
1173- return {}
1174-
1175+
1176 # Attempt reusing a cached version
1177 if hasattr(self, 'processes_map') and not force:
1178 if (self.processes_map[0]['had_amplitudes'] == bool(self.amplitudes)) and \
1179@@ -771,6 +975,9 @@
1180 self.all_matrix_elements.get_matrix_elements())):
1181 return self.processes_map[1]
1182
1183+ if not self.amplitudes:
1184+ return {}
1185+
1186 all_defining_procs = [amp.get('process') for amp in self.amplitudes]
1187
1188 # The values are (defining_process_instance, list_of_mapped_process_instances)
1189@@ -803,7 +1010,11 @@
1190 all_defining_procs[proc][1].extend([all_procs_pdgs[p] for p in all_procs_pdgs
1191 if p!=proc and p not in all_defining_procs[proc][1]])
1192
1193-
1194+ # Ensure that all processes have leg numbers that are consecutive
1195+ for process_key, (defining_process, mapped_processes) in all_defining_procs.items():
1196+ for p in [defining_process,]+mapped_processes:
1197+ for i, leg in enumerate(p.get('legs')):
1198+ leg.set('number', i+1)
1199
1200 # Cache the process map
1201 self.processes_map = ({
1202@@ -815,12 +1026,21 @@
1203
1204 def process_dir_name(self, process):
1205 """ Given a specified process, return the directory name in which it is output."""
1206- # This function is essentially useful in order to be shadowed by daughter classes
1207- return process.shell_string()
1208-
1209+ # This function is essentially useful in order to be shadowed by daughter classes
1210+ if self.output_type == 'madloop':
1211+ return "%d_%s_%s"%(process.get('id'), process.get('uid'),
1212+ process.shell_string(print_id=False))
1213+ else:
1214+ return process.shell_string()
1215+
1216 def compile(self):
1217 """ Compiles the f2py shared library to provide easy access to this contribution."""
1218
1219+ # This is not needed for factorization contributions as they always use processes already
1220+ # defined in other contributions.
1221+ if self.has_beam_factorization():
1222+ return
1223+
1224 processes_map = self.get_processes_map()
1225 i_process=0
1226 for process_key, (defining_process, mapped_processes) in processes_map.items():
1227@@ -837,15 +1057,16 @@
1228 if not os.path.isfile(pjoin(Pdir, 'matrix_%s_py.so'%name)):
1229 raise InvalidCmd("The f2py compilation of subprocess '%s' failed.\n"%Pdir+
1230 "Try running 'make matrix2py.so' by hand in this directory.")
1231- ln(pjoin(Pdir, 'matrix_%s_py.so'%name), starting_dir=self.export_dir)
1232-
1233+ ln(pjoin(Pdir, 'matrix_%s_py.so'%name), starting_dir=self.export_dir)
1234+
1235 def get_nice_string_process_line(self, process_key, defining_process, format=0):
1236 """ Return a nicely formatted process line for the function nice_string of this
1237 contribution."""
1238 GREEN = '\033[92m'
1239- ENDC = '\033[0m'
1240- return GREEN+' %s'%defining_process.nice_string(print_weighted=False).\
1241- replace('Process: ','')+ENDC
1242+ ENDC = '\033[0m'
1243+
1244+ return GREEN+' %s'%(
1245+ defining_process.nice_string(print_weighted=False).replace('Process: ',''))+ENDC
1246
1247 def get_additional_nice_string_printout_lines(self, format=0):
1248 """ Return additional information lines for the function nice_string of this contribution."""
1249@@ -860,7 +1081,8 @@
1250 BLUE = '\033[94m'
1251 GREEN = '\033[92m'
1252 ENDC = '\033[0m'
1253- res = ['%-30s: %s'%('contribution_type',type(self))]
1254+ res = ['< %s%s%s >'%(BLUE,self.short_name(),ENDC)]
1255+ res.append('%-30s: %s'%('contribution_type',type(self)))
1256 res.extend([self.contribution_definition.nice_string()])
1257 if not self.topologies_to_processes is None:
1258 res.append('%-30s: %d'%('Number of topologies',
1259@@ -875,7 +1097,8 @@
1260 for amp in self.amplitudes:
1261 res.append(GREEN+' %s'%amp.get('process').nice_string(print_weighted=False).\
1262 replace('Process: ','')+ENDC)
1263- elif self.amplitudes and self.all_matrix_elements.get_matrix_elements():
1264+ elif self.amplitudes and self.all_matrix_elements.get_matrix_elements() or \
1265+ self.has_beam_factorization():
1266 processes_map = self.get_processes_map()
1267 if format < 1:
1268 res.append('Generated and mapped processes for this contribution: %d (+%d mapped)'%
1269@@ -892,9 +1115,157 @@
1270 res.append(BLUE+'No amplitudes generated yet.'+ENDC)
1271
1272 return '\n'.join(res).encode('utf-8')
1273+
1274+ def has_beam_factorization(self):
1275+ """ Checks whether this contribution has any beam factorization active."""
1276+
1277+ return self.contribution_definition.has_beam_factorization()
1278+
1279+ def import_topologies_from_contrib(self, process_key, contrib):
1280+ """ Add to the current topologies of this contribution the ones defined for the
1281+ given process key of the contribution passed in argument."""
1282+
1283+ # TODO a proper combination needs to be performed here. For now, I simply add them
1284+ # to the attribute processes_to_topologies and topologies_to_processes, but that
1285+ # doesn't work because the topology_key used is based on the
1286+ # (subprocess_group_number, config_number)
1287+ # which will clash between the various contributions added. So matching configurations
1288+ # must be combined and the different ones must be added with a topology key upgraded
1289+ # so as to make them unique (for example using a combination of:
1290+ # contrib.contribution_definition.process_definition.get('id')
1291+ # and
1292+ # contrib.contribution_definition.n_loops
1293+ # which is best done when generating the topologies in the first place anyway.
1294+ # But they will need to be combined anyway.
1295+ #
1296+ # For now we implement a minimal behaviour below which is supposed to work for
1297+ # NLO with a simple 'add process' command, where we can basically copy the topologies
1298+ # directly from the Born
1299+ self.processes_to_topologies[process_key] = contrib.processes_to_topologies[process_key]
1300+
1301+ for (topology_key, processes) in contrib.topologies_to_processes.items():
1302+ self.topologies_to_processes[topology_key] = processes
1303+
1304+ def add_beam_factorization_processes_from_contribution(self, contrib, beam_factorization_order):
1305+ """ 'Import' the processes_map and all generated attributes of the contribution
1306+ passed in argument to assign them to this beam factorization contribution, and
1307+ generate all necessary "bulk" BeamCurrents as we go along.
1308+ """
1309+
1310+ beam_factorization_configuration = (
1311+ (not self.contribution_definition.beam_factorization['beam_one'] is None) and \
1312+ self.contribution_definition.beam_factorization['beam_one']['active'],
1313+ (not self.contribution_definition.beam_factorization['beam_two'] is None) and \
1314+ self.contribution_definition.beam_factorization['beam_two']['active'] )
1315+
1316+ # If both beams must be assigned a factorisation term, then make sure that the
1317+ # beam_factorization_order is at least two
1318+ if beam_factorization_configuration == (True,True) and beam_factorization_order < 2:
1319+ # We have nothing to add from this contribution.
1320+ return
1321+
1322+ # Make sure the source contribution does have a factorizing beam for the active
1323+ # ones of this contribution.
1324+ if ( (beam_factorization_configuration[0] and
1325+ (contrib.contribution_definition.beam_factorization['beam_one'] is None)) or
1326+ (beam_factorization_configuration[1] and
1327+ (contrib.contribution_definition.beam_factorization['beam_two'] is None)) ):
1328+ return
1329+
1330+ # Now fetch the process map to load from
1331+ source_processes_map = contrib.get_processes_map()
1332+ processes_map = self.processes_map[1]
1333+ # To the new contribution instances, also assign references to the generated attributes
1334+ # of the original contributions (no deep copy necessary here)
1335+ for process_key, (defining_process, mapped_processes) in source_processes_map.items():
1336+
1337+ # First add this process to the current processes map
1338+ if process_key in processes_map:
1339+ raise MadGraph5Error('MadNkLO attempts to add beam factorization terms for'+
1340+ ' the %s a second time. This should never happen.'%(
1341+ defining_process.nice_string().replace('Process','process')))
1342+ processes_map[process_key] = (
1343+ defining_process.get_copy(), [mp.get_copy() for mp in mapped_processes] )
1344+
1345+ # Create the corresponding BeamCurrents
1346+ all_beam_currents = []
1347+
1348+ beam_one_type = contrib.contribution_definition.beam_factorization['beam_one']['beam_type']
1349+ beam_two_type = contrib.contribution_definition.beam_factorization['beam_two']['beam_type']
1350+ beam_one_PDGs = contrib.contribution_definition.beam_factorization['beam_one']['beam_PDGs']
1351+ beam_two_PDGs = contrib.contribution_definition.beam_factorization['beam_two']['beam_PDGs']
1352+ # The double for-loop below creates all compatible assignments of n_loops to the beam
1353+ # factorisation currents applied to the first and second beam, e.g. (2,1) and (1,2) at N^3LO.
1354+ # In principle it would also be possible here to further differentiate P^(0) x P^(0) from P^(1), but
1355+ # we do not do this for now.
1356+ for factorization_order_beam_one in range(beam_factorization_order+1):
1357+ if factorization_order_beam_one > 0 and not beam_factorization_configuration[0]:
1358+ break
1359+ for factorization_order_beam_two in range(beam_factorization_order-factorization_order_beam_one+1):
1360+ if factorization_order_beam_one == factorization_order_beam_two == 0:
1361+ continue
1362+ if factorization_order_beam_two > 0 and not beam_factorization_configuration[1]:
1363+ break
1364+ initial_legs = { l.get('number'): l for l in defining_process.get_initial_legs() }
1365+ all_beam_currents.append({
1366+ 'beam_one' : None if factorization_order_beam_one == 0 else
1367+ subtraction.BeamCurrent({
1368+ 'beam_type' : beam_one_type,
1369+ 'beam_PDGs' : beam_one_PDGs,
1370+ 'distribution_type' : 'bulk',
1371+ 'n_loops' : factorization_order_beam_one-1,
1372+ 'perturbation_couplings' : self.contribution_definition.correction_couplings,
1373+ 'squared_orders' : { order : factorization_order_beam_one*2 for
1374+ order in self.contribution_definition.correction_couplings },
1375+ 'sqorders_types' : { order : '<=' for order in
1376+ self.contribution_definition.correction_couplings },
1377+ 'singular_structure': subtraction.SingularStructure(
1378+ substructures = [ subtraction.BeamStructure(
1379+ legs = [initial_legs[1],]), ]
1380+ ),
1381+ }),
1382+ 'beam_two' : None if factorization_order_beam_two == 0 else
1383+ subtraction.BeamCurrent({
1384+ 'beam_type' : beam_two_type,
1385+ 'beam_PDGs' : beam_two_PDGs,
1386+ 'distribution_type' : 'bulk',
1387+ 'n_loops' : factorization_order_beam_two-1,
1388+ 'perturbation_couplings' : self.contribution_definition.correction_couplings,
1389+ 'squared_orders' : { order : factorization_order_beam_two*2 for
1390+ order in self.contribution_definition.correction_couplings },
1391+ 'sqorders_types' : { order : '<=' for order in
1392+ self.contribution_definition.correction_couplings },
1393+ 'singular_structure': subtraction.SingularStructure(
1394+ substructures = [ subtraction.BeamStructure(
1395+ legs = [initial_legs[2],]), ]
1396+ ),
1397+ })
1398+ })
1399+
1400+ # Add these beam currents to the processes just added to this contribution
1401+ processes_map[process_key][0].set('beam_factorization', all_beam_currents)
1402+ for mp in processes_map[process_key][1]:
1403+ mp.set('beam_factorization', all_beam_currents)
1404+
1405+ # Also add the corresponding topologies
1406+ self.import_topologies_from_contrib(process_key, contrib)
1407+
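The nested loop above distributes the total beam factorization order over the two beams. Isolated as a standalone sketch (a hypothetical helper assuming the same break/continue logic as the loop above):

```python
def beam_order_assignments(order, beam_one_active, beam_two_active):
    # Enumerate (n1, n2) with 0 < n1 + n2 <= order, allowing a non-zero
    # order only on a beam whose factorization is active.
    assignments = []
    for n1 in range(order + 1):
        if n1 > 0 and not beam_one_active:
            break
        for n2 in range(order - n1 + 1):
            if n1 == n2 == 0:
                continue
            if n2 > 0 and not beam_two_active:
                break
            assignments.append((n1, n2))
    return assignments
```

At order 2 with both beams active this yields (0,1), (0,2), (1,0), (1,1) and (2,0); with only beam one active it reduces to (1,0) and (2,0).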
1408+ def generate_all_counterterms(self,
1409+ ignore_integrated_counterterms=False, group_processes=True, **opts):
1410+ """ Generate all counterterms associated to the processes in this contribution.
1411+ Since we are in the mother class Contribution here, we only consider generating the
1412+ beam factorisation counterterm of type 'local_CT'."""
1413+
1414+ # Initialise the local counterterms to an empty map and the integrated ones to an empty list
1415+ self.counterterms = {}
1416+ all_integrated_counterterms = []
1417+
1418+ # Nothing done by default
1419+ return all_integrated_counterterms
1420
1421 def generate_amplitudes(self, force=False):
1422- """ Generates the relevant amplitudes for this contribution."""
1423+ """ Generates the relevant amplitudes for this contribution and constructs
1424+ the instances of currents for the beam factorisation terms."""
1425
1426 # First check if the amplitude was not already generated
1427 if self.amplitudes and not force:
1428@@ -903,7 +1274,8 @@
1429 myproc = self.MultiProcessClass(self.contribution_definition.process_definition,
1430 collect_mirror_procs = self.collect_mirror_procs,
1431 ignore_six_quark_processes = self.ignore_six_quark_processes,
1432- optimize=self.optimize, diagram_filter=self.diagram_filter)
1433+ optimize=self.optimize, diagram_filter=self.diagram_filter,
1434+ loop_filter=self.loop_filter)
1435
1436 for amp in myproc.get('amplitudes'):
1437 if amp not in self.amplitudes:
1438@@ -919,16 +1291,10 @@
1439 super(Contribution_B,self).__init__(contribution_definition, cmd_interface, **opts)
1440 self.ignore_six_quark_processes = self.options['ignore_six_quark_processes'] if \
1441 "ignore_six_quark_processes" in self.options else []
1442- if 'diagram_filter' in opts:
1443- self.diagram_filter = opts['diagram_filter']
1444 if 'optimize' in opts:
1445 self.optimize = opts['optimize']
1446 self.supports_decay_chain = True
1447
1448-class Contribution_LIB(Contribution_B):
1449- """ Implements the handling of loop-induced Born-type of contribution."""
1450- pass
1451-
1452 class Contribution_R(Contribution):
1453 """ Implements the handling of a single real-emission type of contribution."""
1454
1455@@ -953,8 +1319,8 @@
1456
1457 GREEN = '\033[92m'
1458 ENDC = '\033[0m'
1459- res = GREEN+' %s'%defining_process.nice_string(print_weighted=False).\
1460- replace('Process: ','')+ENDC
1461+ res = GREEN+' %s'%(defining_process.nice_string(print_weighted=False).\
1462+ replace('Process: ',''))+ENDC
1463
1464 if not self.counterterms:
1465 return res
1466@@ -983,10 +1349,11 @@
1467
1468 def generate_all_counterterms(self, ignore_integrated_counterterms=False, group_processes=True):
1469 """ Generate all counterterms associated to the processes in this contribution."""
1470-
1471- self.counterterms = {}
1472+
1473 all_integrated_counterterms = []
1474+
1475 for process_key, (defining_process, mapped_processes) in self.get_processes_map().items():
1476+
1477 local_counterterms, integrated_counterterms = self.IR_subtraction.get_all_counterterms(
1478 defining_process, ignore_integrated_counterterms=ignore_integrated_counterterms)
1479
1480@@ -1001,13 +1368,19 @@
1481 # matrix elements themselves are not added here
1482 local_counterterms = [ct for ct in local_counterterms if ct.is_singular()]
1483
1484+ # Set the reduced flavor map of all the counterterms
1485+ for CT in integrated_counterterms:
1486+ CT.set_reduced_flavors_map(defining_process, mapped_processes, self.IR_subtraction)
1487+ for CT in local_counterterms:
1488+ CT.set_reduced_flavors_map(defining_process, mapped_processes, self.IR_subtraction)
1489+
1490 # misc.sprint('Local CTs for %s'%(defining_process.nice_string()))
1491 # for lc in local_counterterms:
1492 # misc.sprint(str(lc))
1493 # misc.sprint('Integrated CTs for %s'%(defining_process.nice_string()))
1494 # for ic in integrated_counterterms:
1495 # misc.sprint(str(ic))
1496-
1497+
1498 self.counterterms[process_key] = local_counterterms
1499
1500 # Now add additional information about the integrated_counterterms that will
1501@@ -1026,20 +1399,22 @@
1502 initial_pdgs = process.get_initial_ids()
1503 final_pdgs = tuple(process.get_final_ids_after_decay())
1504 flavors_combinations.append( ( ( tuple(initial_pdgs),final_pdgs ), leg_numbers ) )
1505-# It is not completely clear how to account for mirrored process at this stage,
1506-# so we anticipate for now that this will only need to be processed at the
1507-# integrand evaluation level.
1508-# The problem with the line below is that for the process u d~ > u d~ a for instance
1509-# the counterterm C(1,3) is only valid if leg #1 is u and leg #2 is d~.
1510+# Mirrored processes are not accounted for at this stage, as these are processed at the
1511+# integrand evaluation level where mirrored copies of events are generated.
1512+# Notice that this mirroring shouldn't be done simply as in the line below.
1513+##
1514+## if process.get('has_mirror_process'):
1515+## flavors_combinations.append(
1516+## ( ( tuple(reversed(initial_pdgs)), final_pdgs ), leg_numbers ) )
1517+##
1518+# because if one takes the following process for instance:
1519+# u d~ > u d~ a
1520+# then the counterterm C(1,3) is only valid if leg #1 is u and leg #2 is d~.
1521 # So swapping u(1) d~(2) for u(2) d~(1) would require propagating the indices
1522 # substitution in the entire counterterm (most importantly, its singular structure)
1523 # and creating a copy of it. This is pretty heavy (and the idea of the 'use_mirror_process'
1524-# option is precisely to avoid this work), so we can hopefully bypass this
1525+# option is precisely to avoid this work), so this is entirely bypassed
1526 # by implementing this symmetrisation during the integrand evaluation.
1527-## if process.get('has_mirror_process'):
1528-## flavors_combinations.append(
1529-## ( ( tuple(reversed(initial_pdgs)), final_pdgs ), leg_numbers ) )
1530-
1531
1532 # misc.sprint('doing process:',defining_process.nice_string())
1533 # misc.sprint('flavors_combinations',flavors_combinations)
1534@@ -1072,14 +1447,13 @@
1535 except KeyError:
1536 # Register the new integrated current topology
1537 integrated_current_topologies[integrated_current_topology] = [integrated_counterterm,]
1538-
1539-
1540+
1541 # For each topology, count the number of counterterms that are associated to
1542 # various list of external leg numbers. Within a given topology, one expects
1543+ # that there is the same number of counterterm repetitions for each combination of external legs
1544 # Example: resolved process is
1545 # e+(1) e-(2) > g(3) g(4) g(5) d(6) d~(7)
1546- # All the following counterterm belong to the same topology, which we group here according
1547+ # All the following counterterms belong to the same topology, which we group here according
1548 # to which list of external leg numbers they include:
1549 # (3,4,6) : C(C(S(3),4),6), C(C(S(4),3),6)
1550 # (3,5,6) : C(C(S(3),5),6), C(C(S(5),3),6)
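The per-leg-combination repetition count described in this comment amounts to a simple tally; a sketch using the two combinations listed above:

```python
from collections import Counter

# (external legs of the singular structure, counterterm) pairs from the example above
counterterms = [
    ((3, 4, 6), 'C(C(S(3),4),6)'), ((3, 4, 6), 'C(C(S(4),3),6)'),
    ((3, 5, 6), 'C(C(S(3),5),6)'), ((3, 5, 6), 'C(C(S(5),3),6)'),
]
repetitions = Counter(legs for legs, _ in counterterms)
```

Within one topology every combination of external legs is expected to appear the same number of times, here twice.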
1551@@ -1125,6 +1499,32 @@
1552 # flavors combination and the value is the multiplication factor.
1553 reduced_flavors_combinations = {}
1554 resolved_flavors_combinations = {}
1555+ # One subtlety occurs because of the backward flavor evolution in the initial
1556+ # states which will be registered in the event since it must be convoluted with
1557+ # the correct PDFs.
1558+ # Consider the following process:
1559+ # g(1) s(2) > s(3) c(4) c~(5) u(6) u~(7)
1560+ # + mapped g(1) q(2) > q(3) qprime(4) qprime~(5) u(6) u~(7)
1561+ # and the counterterm C(2,3)C(4,5) which yields the following reduced mapped flavor:
1562+ # g g > g u u~
1563+ # with a multiplicity of naively n_f**2. However, the actual events that will be generated
1564+ # will be the following:
1565+ # g s > g u u~
1566+ # g d > g u u~
1567+ # ...
1568+ # It is then clear that each of these flavor configurations in the ME7 event should not be
1569+ # multiplied by n_f**2 but instead just n_f. For this reason, the function
1570+ # generate_all_counterterms of the contribution class also produces the dictionary
1571+ # 'reduced_flavors_with_resolved_initial_states_combinations' which, in the example above,
1572+ # would be (nf=5):
1573+ # { (21,3),(21,2,-2) : 5,
1574+ # (21,1),(21,2,-2) : 5,
1575+ # ...
1576+ # }
1577+ # which allows the weight of each of these flavor configurations to be divided at the
1578+ # very end by the corresponding multiplicity factor n_f.
1579+ reduced_flavors_with_resolved_initial_states_combinations = {}
1580+
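The bookkeeping this dictionary performs can be sketched with schematic PDG tuples (an illustration of the counting only, not the actual flavor structures; nf=5 as in the comment above):

```python
from collections import Counter

quarks = [1, 2, 3, 4, 5]  # d u s c b
# Resolved configurations g q > q q' q'~ u u~ mapped by C(2,3)C(4,5)
resolved = [((21, q), (q, qp, -qp, 2, -2)) for q in quarks for qp in quarks]
reduced_final = (21, 2, -2)  # g u u~ after the collinear mappings

# Naive multiplicity of the reduced flavor configuration: nf**2
naive = Counter(reduced_final for _ in resolved)
# Multiplicity keyed on (resolved initial states, reduced final states): nf each
mixed = Counter((initial, reduced_final) for initial, _final in resolved)
```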
1581 # Each integrated counterterm contributes proportionally to the ratio
1582 # of the symmetry factors of the resolved process and reduced one,
1583 # further divided by the symmetry factor of each group external leg
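misc.symmetry_factor is not shown in this diff; consistent with its use here, it can be read as the product of n! over each group of n identical particles (an assumed stand-in, not the actual implementation):

```python
from collections import Counter
from math import factorial

def symmetry_factor(pdgs):
    # Product of n! over each group of n identical PDG codes.
    result = 1
    for count in Counter(pdgs).values():
        result *= factorial(count)
    return result
```

For instance, the final state g g g d d~ of the resolved process e+ e- > g g g d d~ gives a factor 3! = 6.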
1584@@ -1147,14 +1547,15 @@
1585 # There should always be a unique value of S_t for each reduced_flavor
1586 symmetry_factors = {}
1587 for flavor_combination, leg_numbers in flavors_combinations:
1588+ # Access the reduced flavors
1589+ reduced_flavors = integrated_counterterm.get_reduced_flavors(
1590+ defining_flavors = flavor_combination )
1591 # Create a map from leg number to flavor
1592 number_to_flavor_map = dict(
1593 [ ( number, flavor_combination[0][i] ) for i, number in enumerate(leg_numbers[0]) ] +
1594 [ ( number, flavor_combination[1][i] ) for i, number in enumerate(leg_numbers[1]) ]
1595 )
1596- reduced_flavors = integrated_counterterm.get_reduced_flavors(
1597- defining_flavors=number_to_flavor_map, IR_subtraction=self.IR_subtraction)
1598-# misc.sprint('From resolved flavor:',flavor_combination)
1599+# misc.sprint('From resolved flavor:',number_to_flavor_map)
1600 # misc.sprint('I get:',reduced_flavors)
1601
1602 resolved_symmetry_factor = misc.symmetry_factor(list(flavor_combination[1]))
1603@@ -1169,15 +1570,15 @@
1604 l in singular_legs if l.state==subtraction.SubtractionLeg.FINAL]
1605 singular_legs_symmetry_factor *= misc.symmetry_factor(singular_external_leg_pdgs)
1606
1607- # misc.sprint('Counterterm: %s'%str(integrated_counterterm))
1608- # misc.sprint(' -> multiplicity = %d'%len(integrated_counterterms))
1609- # misc.sprint(' -> CT with same topologies = %s'%(' | '.join(str(CT) for CT in integrated_counterterms)) )
1610- # misc.sprint(' -> resolved_flavors = %s'%str(flavor_combination))
1611- # misc.sprint(' -> reduced = %s'%str(reduced_flavors))
1612- # misc.sprint(' -> resolved_symmetry_factor = %d'%resolved_symmetry_factor)
1613- # misc.sprint(' -> reduced_symmetry_factor = %d'%reduced_symmetry_factor)
1614- # misc.sprint(' -> singular_legs_symmetry_factor = %f'%singular_legs_symmetry_factor)
1615- # misc.sprint(' -> counterterm_repetition_factor = %f'%repetitions_for_topology[integrated_current_topology])
1616+# misc.sprint('Counterterm: %s'%str(integrated_counterterm))
1617+# misc.sprint(' -> multiplicity = %d'%len(integrated_counterterms))
1618+# misc.sprint(' -> CT with same topologies = %s'%(' | '.join(str(CT) for CT in integrated_counterterms)) )
1619+# misc.sprint(' -> resolved_flavors = %s'%str(flavor_combination))
1620+# misc.sprint(' -> reduced = %s'%str(reduced_flavors))
1621+# misc.sprint(' -> resolved_symmetry_factor = %d'%resolved_symmetry_factor)
1622+# misc.sprint(' -> reduced_symmetry_factor = %d'%reduced_symmetry_factor)
1623+# misc.sprint(' -> singular_legs_symmetry_factor = %f'%singular_legs_symmetry_factor)
1624+# misc.sprint(' -> counterterm_repetition_factor = %f'%repetitions_for_topology[integrated_current_topology])
1625
1626 # Now correct for the repetition number in that topology
1627 # Notice it should always be an integer, the float here is just so that
1628@@ -1192,6 +1593,16 @@
1629 reduced_flavors_combinations[reduced_flavors] = 1
1630 resolved_flavors_combinations[flavor_combination] = reduced_flavors
1631
1632+ # Now also build a multiplicity factor for combination of resolved initial-state
1633+ # flavors and reduced final-state ones.
1634+ reduced_flavors_with_resolved_initial_states = (flavor_combination[0], reduced_flavors[1])
1635+ try:
1636+ reduced_flavors_with_resolved_initial_states_combinations[
1637+ reduced_flavors_with_resolved_initial_states] += 1
1638+ except KeyError:
1639+ reduced_flavors_with_resolved_initial_states_combinations[
1640+ reduced_flavors_with_resolved_initial_states] = 1
1641+
1642 if reduced_flavors in symmetry_factors:
1643 # Sanity check, all reduced flavors should share the same S_t
1644 if overall_symmetry_factor != symmetry_factors[reduced_flavors]:
1645@@ -1217,6 +1628,8 @@
1646 'integrated_counterterm' : integrated_counterterm,
1647 'resolved_flavors_combinations' : resolved_flavors_combinations,
1648 'reduced_flavors_combinations' : reduced_flavors_combinations,
1649+ 'reduced_flavors_with_resolved_initial_states_combinations' :
1650+ reduced_flavors_with_resolved_initial_states_combinations,
1651 'symmetry_factor' : symmetry_factors.values()[0],
1652 'multiplicity' : len(integrated_counterterms)
1653 })
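
The `try`/`except KeyError` bookkeeping added in this hunk to tally flavor-combination multiplicities is the standard dict-counting pattern. A minimal standalone sketch (the flavor tuples below are hypothetical, not taken from the diff):

```python
from collections import Counter

# Hypothetical (initial_state_pdgs, final_state_pdgs) combinations.
resolved = [((1, -1), (21, 21)), ((21, -1), (1, 21)), ((1, -1), (21, 21))]

# Equivalent to the try/except KeyError increment used in the diff:
counts = {}
for combo in resolved:
    try:
        counts[combo] += 1
    except KeyError:
        counts[combo] = 1

# The same result can be obtained with collections.Counter.
assert counts == dict(Counter(resolved))
```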
1654@@ -1228,25 +1641,46 @@
1655 """Given the list of available reduced processes encoded in the MEAccessorDict 'all_MEAccessors'
1656 given in argument, remove all the counterterms whose underlying reduced process does not exist."""
1657
1658- for counterterm in list(counterterms):
1659+ all_removed = True
1660+ for counterterm_info in list(counterterms):
1661+ # Integrated counterterms come as dictionaries with additional metadata
1662+ if isinstance(counterterm_info, dict):
1663+ counterterm = counterterm_info['integrated_counterterm']
1664+ else:
1665+ counterterm = counterterm_info
1666 # Of course don't remove any counterterm that would be the real-emission itself.
1667 if not counterterm.is_singular():
1668 continue
1669 try:
1670 all_MEAccessors.get_MEAccessor(counterterm.process)
1671+ all_removed = False
1672 except MadGraph5Error:
1673 # This means that the reduced process could not be found and
1674 # consequently, the corresponding counterterm must be removed.
1675 # Example: C(5,6) in e+ e- > g g d d~
1676- counterterms.remove(counterterm)
1677-
1678- def export(self, ignore_integrated_counterterms=False, **opts):
1679- """ Overloads export so as to export subtraction currents as well."""
1680- ret_value = super(Contribution_R, self).export(**opts)
1681+ # Useful debug lines below
1682+# misc.sprint('-'*50)
1683+# import pprint
1684+# misc.sprint('Process key:\n%s'%(
1685+# pprint.pformat(dict(ProcessKey(counterterm.process).get_canonical_key()))))
1686+# misc.sprint('-'*50)
1687+# misc.sprint('All process keys in the accessor:\n')
1688+# for key in all_MEAccessors:
1689+# misc.sprint(pprint.pformat(key))
1690+# misc.sprint('-'*50)
1691+ counterterms.remove(counterterm_info)
1692+
1693+ return all_removed
1694+
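
The filter-and-report pattern introduced above (prune entries whose lookup fails, and report whether everything was pruned so the caller can warn) can be sketched in isolation; the names below are illustrative, not the MadNkLO API:

```python
def prune_missing(items, available):
    """Remove items not present in 'available'; return True only if
    every item ended up being removed (mirrors the all_removed flag)."""
    all_removed = True
    for item in list(items):  # iterate over a copy while mutating the list
        if item in available:
            all_removed = False
        else:
            items.remove(item)
    return all_removed
```

As in the diff, an empty input list reports `True`, which is why the caller only raises the warning when there were counterterms to remove to begin with.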
1695+ def subtract(self, ignore_integrated_counterterms=False, **opts):
1696+ """ Generate and export all necessary subtraction counterterms and currents
1697+ (including beam factorization integrated counterterms [bulk ones are already generated
1698+ at this stage])."""
1699+
1700+ ret_value = super(Contribution_R,self).subtract(ignore_integrated_counterterms=ignore_integrated_counterterms, **opts)
1701
1702 # Fish out the group_processes option as it could be used when attempting to
1703 # generate all currents.
1704-
1705 if 'group_processes' in opts:
1706 group_processes = opts['group_processes']
1707 else:
1708@@ -1258,7 +1692,7 @@
1709
1710 # Add the integrated counterterms to be passed to the exporter
1711 ret_value.update({'integrated_counterterms': integrated_counterterms})
1712-
1713+
1714 return ret_value
1715
1716 def get_all_necessary_local_currents(self, all_MEAccessors):
1717@@ -1276,55 +1710,50 @@
1718 if copied_current not in all_currents:
1719 all_currents.append(copied_current)
1720
1721+ # Also add the beam_factorization currents from the process itself, if it has any
1722+ for process_key, (defining_process, mapped_processes) in self.get_processes_map().items():
1723+ for beam_factorization_currents in defining_process['beam_factorization']:
1724+ for bfc in beam_factorization_currents.values():
1725+ if bfc is not None:
1726+ all_currents.append(bfc)
1727+
1728 # Now further remove currents that are already in all_MEAccessors
1729- all_currents = [current
1730- for current in all_currents
1731- if current.get_key().get_canonical_key() not in all_MEAccessors]
1732- return all_currents
1733-
1734- @classmethod
1735- def add_current_accessors(
1736- cls, model, all_MEAccessors, root_path, current_set, currents_to_consider):
1737- """Generate and add all subtraction current accessors to the MEAccessorDict."""
1738-
1739- # Generate the computer code and export it on disk for the remaining new currents
1740- current_exporter = subtraction.SubtractionCurrentExporter(
1741- model, root_path, current_set)
1742- mapped_currents = current_exporter.export(currents_to_consider)
1743- # Print to the debug log which currents were exported
1744- log_string = "The following subtraction current implementation are exported:\n"
1745- for (module_path, class_name, _), current_properties in mapped_currents.items():
1746- if class_name != 'DefaultCurrentImplementation':
1747- quote_class_name = "'%s'" % class_name
1748- defining_current_str = str(current_properties['defining_current'])
1749- line_pars = (quote_class_name, defining_current_str)
1750- log_string += " > %-35s for representative current '%s'\n" % line_pars
1751- else:
1752- quote_default_name = "'DefaultCurrentImplementation'"
1753- number_of_currents = len(current_properties['mapped_process_keys'])
1754- line_pars = (quote_default_name, number_of_currents)
1755- log_string += " > %-35s for a total of %d currents." % line_pars
1756- logger.debug(log_string)
1757- # Instantiate the CurrentAccessors corresponding
1758- # to all current implementations identified and needed
1759- all_current_accessors = []
1760- for (module_path, class_name, _), current_properties in mapped_currents.items():
1761- all_current_accessors.append(accessors.VirtualMEAccessor(
1762- current_properties['defining_current'],
1763- module_path,
1764- class_name,
1765- '%s.subtraction_current_implementations_utils'%current_exporter.main_module_name,
1766- current_properties['instantiation_options'],
1767- mapped_process_keys=current_properties['mapped_process_keys'],
1768- root_path=root_path,
1769- model=model
1770- ))
1771- # Add MEAccessors
1772- all_MEAccessors.add_MEAccessors(all_current_accessors)
1773- return mapped_currents
1774-
1775- def remove_zero_counterterms(self, all_ME_accessors):
1776-
1777+ all_necessary_currents = {}
1778+ for current in all_currents:
1779+ current_key = current.get_key().get_canonical_key()
1780+ if current_key not in all_MEAccessors and current_key not in all_necessary_currents:
1781+ all_necessary_currents[current_key] = current
1782+
1783+ return all_necessary_currents.values()
1784+
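
The replacement of the list-comprehension filter by a dict keyed on canonical keys deduplicates currents while skipping those already registered in the accessor dictionary. A self-contained sketch (generic key function, hypothetical data):

```python
def unique_new_currents(currents, key, already_registered):
    """Keep the first current per key, dropping keys already registered."""
    unique = {}
    for current in currents:
        k = key(current)
        if k not in already_registered and k not in unique:
            unique[k] = current
    return list(unique.values())
```

Insertion order is preserved, so the first occurrence of each current survives, as in the dict-based filter of the diff.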
1785+ def add_ME_accessors(self, all_MEAccessors, root_path):
1786+ """ Adds all MEAccessors for the matrix elements and currents generated as part of this contribution."""
1787+
1788+ # Get the basic accessors for the matrix elements
1789+ super(Contribution_R, self).add_ME_accessors(all_MEAccessors, root_path)
1790+
1791+ # Only initialize all_counterterms_removed to True if there are CTs to remove to begin with
1792+ all_counterterms_removed = any(len(CTs)>0 for CTs in self.counterterms.values())
1793+ for process_key, counterterms in self.counterterms.items():
1794+ # Remove counterterms with non-existing underlying Born processes
1795+ if not self.remove_counterterms_with_no_reduced_process(all_MEAccessors, counterterms):
1796+ all_counterterms_removed = False
1797+ # Sanity check
1798+ if all_counterterms_removed:
1799+ logger.warning("All local counterterms from contribution '%s' were removed because"%self.short_name()+
1800+ " their corresponding reduced processes were not found in the list of processes exported.\n"+
1801+ " This is likely wrong and originates from a mismatch in the process definitions of the various"+
1802+ " contributions building this higher order computation.")
1803+
1804+ # Obtain all necessary currents
1805+ current_set = self.options['subtraction_currents_scheme']
1806+ currents_to_consider = self.get_all_necessary_local_currents(all_MEAccessors)
1807+ self.add_current_accessors(
1808+ self.model, all_MEAccessors, root_path, current_set, currents_to_consider )
1809+
1810+ def remove_local_counterterms_set_to_zero(self, all_ME_accessors):
1811+ """ Remove all local counterterms involving currents whose implementation has set
1812+ them to zero."""
1813 for process_key, counterterms in self.counterterms.items():
1814 for counterterm in list(counterterms):
1815 # misc.sprint("Considering CT %s" % str(counterterm))
1816@@ -1336,32 +1765,17 @@
1817 counterterms.remove(counterterm)
1818 break
1819
1820- def add_ME_accessors(self, all_MEAccessors, root_path):
1821- """ Adds all MEAccessors for the matrix elements and currents generated as part of this contribution."""
1822-
1823- # Get the basic accessors for the matrix elements
1824- super(Contribution_R, self).add_ME_accessors(all_MEAccessors, root_path)
1825-
1826- for process_key, counterterms in self.counterterms.items():
1827- # Remove counterterms with non-existing underlying Born processes
1828- self.remove_counterterms_with_no_reduced_process(all_MEAccessors, counterterms)
1829-
1830- # Obtain all necessary currents
1831- current_set = self.options['subtraction_currents_scheme']
1832- currents_to_consider = self.get_all_necessary_local_currents(all_MEAccessors)
1833- self.add_current_accessors(
1834- self.model, all_MEAccessors, root_path, current_set, currents_to_consider )
1835- self.remove_zero_counterterms(all_MEAccessors)
1836-
1837 def get_integrands_for_process_map(self, process_map, model, run_card, all_MEAccessors, ME7_configuration):
1838 """ Returns all the integrands implementing this contribution for the specified process_map.
1839 The instance of MEAccessorDict is necessary so as to be passed to the integrand instances.
1840 """
1841
1842 relevant_counterterms = {}
1843- self.remove_zero_counterterms(all_MEAccessors)
1844- for process_key in process_map:
1845- relevant_counterterms[process_key] = self.counterterms[process_key]
1846+ self.remove_local_counterterms_set_to_zero(all_MEAccessors)
1847+
1848+ if self.counterterms:
1849+ for process_key in process_map:
1850+ relevant_counterterms[process_key] = self.counterterms[process_key]
1851
1852 return [
1853 ME7_integrands.ME7Integrand(
1854@@ -1387,10 +1801,7 @@
1855 def __init__(self, contribution_definition, cmd_interface, **opts):
1856 """ Bring in the couple of modifications necessary for this type of contributions."""
1857 super(Contribution_V,self).__init__(contribution_definition, cmd_interface, **opts)
1858- # Make sure to adjust the MultiProcessClass to be used
1859- self.MultiProcessClass = loop_diagram_generation.LoopMultiProcess
1860- self.output_type = 'madloop'
1861-
1862+
1863 # Store values being integration counterterms and their properties (as a dictionary),
1864 # with the ProcessKey they are attached to as keys of this attribute's dictionary.
1865 self.integrated_counterterms = {}
1866@@ -1408,8 +1819,8 @@
1867 contribution."""
1868 GREEN = '\033[92m'
1869 ENDC = '\033[0m'
1870- res = GREEN+' %s'%defining_process.nice_string(print_weighted=False).\
1871- replace('Process: ','')+ENDC
1872+ res = GREEN+' %s'%(defining_process.nice_string(print_weighted=False)
1873+ .replace('Process: ',''))+ENDC
1874
1875 if not self.integrated_counterterms:
1876 return res
1877@@ -1440,66 +1851,6 @@
1878 res += '\n'.join(long_res)
1879
1880 return res
1881-
1882- @classmethod
1883- def remove_counterterms_with_no_reduced_process(cls, all_MEAccessors, CT_properties_list):
1884- """ Given the list of available reduced processes encoded in the MEAccessorDict 'all_MEAccessors'
1885- given in argument, remove all the counterterms whose underlying reduced process does not exist."""
1886-
1887- for CT_properties in list(CT_properties_list):
1888- counterterm = CT_properties['integrated_counterterm']
1889- # Of course don't remove any counterterm that would be the real-emission itself.
1890- if not counterterm.is_singular():
1891- continue
1892- try:
1893- all_MEAccessors.get_MEAccessor(counterterm.process)
1894- except MadGraph5Error:
1895- # This means that the reduced process could not be found and
1896- # consequently, the corresponding counterterm must be removed.
1897- # Example: C(5,6) in e+ e- > g g d d~
1898- CT_properties_list.remove(CT_properties)
1899-
1900- def generate_matrix_elements(self, group_processes=True):
1901- """Generate the Helas matrix elements before exporting. Uses the main function argument
1902- 'group_processes' to decide whether to use group_subprocess or not."""
1903-
1904- cpu_time1 = time.time()
1905- ndiags = 0
1906-
1907- generation_mode = {'optimized_output': self.options['loop_optimized_output']}
1908-
1909- self.all_matrix_elements = loop_helas_objects.LoopHelasProcess(
1910- self.amplitudes,
1911- compute_loop_nc = True,
1912- matrix_element_opts = generation_mode,
1913- optimized_output = self.options['loop_optimized_output'],
1914- combine_matrix_elements=group_processes
1915- )
1916- ndiags = sum([len(me.get('diagrams')) for me in self.all_matrix_elements.get_matrix_elements()])
1917-
1918- # assign a unique id number to all process
1919- uid = 0
1920- for me in self.all_matrix_elements.get_matrix_elements():
1921- uid += 1 # update the identification number
1922- me.get('processes')[0].set('uid', uid)
1923-
1924- cpu_time2 = time.time()
1925- return ndiags, cpu_time2 - cpu_time1
1926-
1927- def add_content_to_global_ME7_resources(self, global_ME7_dir, **opts):
1928- """ Pass the MadLoopParams.dat card to the global ME7 resources."""
1929- super(Contribution_V, self).add_content_to_global_ME7_resources(global_ME7_dir, **opts)
1930-
1931- destination = pjoin(global_ME7_dir,'Cards','MadLoopParams.dat')
1932- if not os.path.exists(destination):
1933- shutil.copyfile(pjoin(self.export_dir,'Cards','MadLoopParams.dat'), destination)
1934-
1935- def set_helas_model(self):
1936- """ Instantiate properly the helas model """
1937-
1938- assert self.exporter.exporter == 'v4'
1939- assert (not self.options['_model_v4_path'])
1940- self.helas_model = helas_call_writers.FortranUFOHelasCallWriter(self.model)
1941
1942 def get_all_necessary_integrated_currents(self, all_MEAccessors):
1943 """ Given the list of currents already available encoded in the all_MEAccessors,
1944@@ -1525,14 +1876,29 @@
1945 return all_currents
1946
1947 @classmethod
1948- def add_current_accessors(
1949- cls, model, all_MEAccessors, root_path, current_set, currents_to_consider ):
1950- """Generate and add all integrated current accessors to the MEAccessorDict.
1951+ def remove_counterterms_with_no_reduced_process(cls, all_MEAccessors, counterterms):
1952+ """Given the list of available reduced processes encoded in the MEAccessorDict 'all_MEAccessors'
1953+ given in argument, remove all the counterterms whose underlying reduced process does not exist.
1954 For now we can recycle the implementation of the Contribution_R class.
1955 """
1956-
1957- return Contribution_R.add_current_accessors(
1958- model, all_MEAccessors, root_path, current_set, currents_to_consider)
1959+ return Contribution_R.remove_counterterms_with_no_reduced_process(all_MEAccessors, counterterms)
1960+
1961+ def remove_integrated_counterterms_set_to_zero(self, all_ME_accessors):
1962+ """ Remove all integrated counterterms involving currents whose implementation has set
1963+ them to zero."""
1964+
1965+ for process_key, counterterms in self.integrated_counterterms.items():
1966+ for integrated_counterterm_properties in list(counterterms):
1967+ integrated_counterterm = integrated_counterterm_properties['integrated_counterterm']
1968+ # misc.sprint("Considering integrated CT %s" % str(integrated_counterterm))
1969+ if integrated_counterterm.is_singular():
1970+ for current in integrated_counterterm.get_all_currents():
1971+ accessor, _ = all_ME_accessors[current]
1972+ if accessor.subtraction_current_instance.is_zero:
1973+ # misc.sprint("Zero current found in integrated CT %s" % str(integrated_counterterm))
1974+ logger.debug("Zero current found in integrated CT %s, removing it now."%str(integrated_counterterm))
1975+ counterterms.remove(integrated_counterterm_properties)
1976+ break
1977
1978 def get_integrands_for_process_map(self, process_map, model, run_card, all_MEAccessors, ME7_configuration):
1979 """ Returns all the integrands implementing this contribution for the specified process_map.
1980@@ -1540,6 +1906,7 @@
1981 """
1982
1983 relevant_counterterms = {}
1984+ self.remove_integrated_counterterms_set_to_zero(all_MEAccessors)
1985 if self.integrated_counterterms:
1986 for process_key in process_map:
1987 relevant_counterterms[process_key] = self.integrated_counterterms[process_key]
1988@@ -1561,10 +1928,20 @@
1989 # Get the basic accessors for the matrix elements
1990 super(Contribution_V, self).add_ME_accessors(all_MEAccessors, root_path)
1991
1992+ # Only initialize all_counterterms_removed to True if there are CTs to remove to begin with
1993+ all_counterterms_removed = any(len(CTs)>0 for CTs in self.integrated_counterterms.values())
1994 for process_key, CT_properties in self.integrated_counterterms.items():
1995 # Remove integrated counterterms with non-existing underlying Born processes
1996- self.remove_counterterms_with_no_reduced_process(all_MEAccessors, CT_properties)
1997-
1998+ if not self.remove_counterterms_with_no_reduced_process(all_MEAccessors, CT_properties):
1999+ all_counterterms_removed = False
2000+
2001+ # Sanity check
2002+ if all_counterterms_removed:
2003+ logger.warning("All integrated counterterms from contribution '%s' were removed because"%self.short_name()+
2004+ " their corresponding reduced processes were not found in the list of processes exported.\n"+
2005+ " This is likely wrong and originates from a mismatch in the process definitions of the various"+
2006+ " contributions building this higher order computation.")
2007+
2008 # Obtain all necessary currents
2009 current_set = self.options['subtraction_currents_scheme']
2010 currents_to_consider = self.get_all_necessary_integrated_currents(all_MEAccessors)
2011@@ -1576,7 +1953,6 @@
2012 """ Figures out the permutation to apply to map the origin pdg orders
2013 (a 2-tuple of initial and final state pdgs) to the destination pdg orders (same format)."""
2014
2015-
2016 # Create a look_up list from the user-provided list of PDGs, whose elements will progressively be set to zero
2017 # as they are being mapped
2018 look_up_list = list(list(a_tuple) for a_tuple in origin_pdg_orders)
2019@@ -1721,6 +2097,77 @@
2020
2021 return all_mappings
2022
2023+ def combine_initial_state_counterterms(self):
2024+ """ The initial-state beam factorization currents, including the integrated collinear
2025+ ones, return a flavor matrix that represents all possible backward evolutions of the
2026+ initial state. For instance, if the BF1 process is:
2027+ d d~ > z
2028+ then there will be two [C(1,3)] coming from the real processes d d~ > z g and g d~ > z
2029+ which correspond to the two possible backward evolutions to a d-quark and a gluon
2030+ respectively. Given that those are accounted for at once in the beam-factorization
2031+ current via the flavor matrix, we must make sure to combine all such integrated
2032+ initial-state counterterms so as to avoid double-counting.
2033+ Alternatively, one can disable this combination and always backward evolve only to
2034+ the particular initial-state flavor specified in this particular integrated counterterm.
2035+
2036+ WARNING: Eventually this entire construction of the 'allowed_backward_evolved_flavors'
2037+ should be dropped in favour of further differentiation of the integrated collinear ISR
2038+ currents, so that each applies only to a particular backward evolution (for instance the one
2039+ *only* backward-evolving to a gluon). Once this is done, we will be able to
2040+ drop this construction altogether.
2041+ """
2042+
2043+ for process_key, (process, mapped_processes) in self.get_processes_map().items():
2044+ # It may be that a particular process does not have any counterterms (typically when
2045+ # forcing a beam-type by hand at generation time for debugging purposes, for
2046+ # example:
2047+ # generate e+ e- > j j j / a QED=2 --NLO=QCD --beam_types=proton@(1,-1,21)
2048+ # In this case, self.integrated_counterterms would be empty and this combination
2049+ # should be skipped.
2050+ if process_key not in self.integrated_counterterms:
2051+ continue
2052+ for counterterm_characteristics in self.integrated_counterterms[process_key]:
2053+ # One should perform here the combination of the values of each key in
2054+ # self.integrated_counterterms.
2055+
2056+ # By default, include all possible backward evolutions.
2057+ counterterm_characteristics['allowed_backward_evolved_flavors'] = \
2058+ {'beam_one': 'ALL', 'beam_two': 'ALL',}
2059+ if _COMBINE_INITIAL_STATE_INTEGRATED_COLLINEAR_COUNTERTERMS:
2060+ raise MadGraph5Error('Combination of integrated initial-state collinear '+
2061+ 'counterterms not implemented yet')
2062+ else:
2063+ # Since in this case we don't combine different initial state integrated
2064+ # collinear counterterms that differ only by the backward evolved flavor,
2065+ # we must restrict each one to the particular backward evolved flavor they
2066+ # correspond to.
2067+ integrated_counterterm = counterterm_characteristics['integrated_counterterm']
2068+ if integrated_counterterm.does_require_correlated_beam_convolution():
2069+ # In this case the beam currents have a diagonal flavor matrix anyway
2070+ continue
2071+ beam_number_map = {'beam_one': 0, 'beam_two': 1}
2072+ beam_currents = integrated_counterterm.get_beam_currents()
2073+ active_beams = set([])
2074+ # Check if the counterterm contains any current that could have a
2075+ # flavor matrix attached to either beam_one or beam_two
2076+ for bc in beam_currents:
2077+ for beam_name in beam_number_map:
2078+ # If it is a pure integrated mass collinear counterterm, then
2079+ # no masking should be applied
2080+ if (bc[beam_name] is None) or all(ss.name()=='F' for ss in
2081+ bc[beam_name].get('singular_structure').decompose()):
2082+ continue
2083+ # Now we can add this beam as a one for which a masking of the
2084+ # allowed initial state flavours must be specified.
2085+ active_beams.add(beam_name)
2086+ # Now, for each beam identified as potentially having a flavour matrix,
2087+ # identify which flavours remain allowed after
2088+ # the application of the flavor matrix
2089+ for beam_name in active_beams:
2090+ counterterm_characteristics['allowed_backward_evolved_flavors'][beam_name] = \
2091+ tuple(set(fl[0][beam_number_map[beam_name]] for fl in
2092+ counterterm_characteristics['resolved_flavors_combinations']))
2093+
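
The masking constructed at the end of this method collects, per beam, the set of resolved initial-state PDGs seen among the flavor combinations. A standalone sketch with hypothetical combinations:

```python
# Hypothetical resolved combinations: ((beam1_pdg, beam2_pdg), final_pdgs).
resolved_flavors = [((1, -1), (23,)), ((21, -1), (23, 1))]
beam_index = {'beam_one': 0, 'beam_two': 1}

# Mirrors the per-beam construction of 'allowed_backward_evolved_flavors'
# (sorted here only to make the result deterministic for inspection).
allowed = {
    beam: tuple(sorted(set(fl[0][idx] for fl in resolved_flavors)))
    for beam, idx in beam_index.items()
}
```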
2094 def add_integrated_counterterm(self, integrated_CT_properties):
2095 """ Virtual contributions can receive integrated counterterms and they will
2096 be stored in the attribute list self.integrated_counterterms."""
2097@@ -1728,6 +2175,7 @@
2098 # Extract quantities from integrated_counterterm_properties
2099 integrated_counterterm = integrated_CT_properties['integrated_counterterm']
2100 # flavors_combinations = integrated_CT_properties['flavors_combinations']
2101+
2102
2103 # Sort PDGs only when processes have been grouped
2104 sort_PDGs = self.group_subprocesses
2105@@ -1748,23 +2196,42 @@
2106 # Use ordered PDGs since this is what is used in the inverse_processes_map
2107 # Also overwrite some of the process properties to make it match the
2108 # loop process definitions in this contribution
2109+ n_loops_of_reduced_process = self.contribution_definition.n_loops - \
2110+ integrated_counterterm.get_n_loops_in_beam_factorization_of_host_contribution()
2111+
2112 counterterm_reduced_process_key = ProcessKey(integrated_counterterm.process,
2113- sort_PDGs=sort_PDGs, n_loops=1, NLO_mode='virt',
2114- perturbation_couplings=self.contribution_definition.correction_couplings)\
2115- .get_canonical_key()
2116+ sort_PDGs = sort_PDGs,
2117+ n_loops = n_loops_of_reduced_process).get_canonical_key()
2118
2119 # misc.sprint(inverse_processes_map.keys(), len(inverse_processes_map.keys()))
2120 # misc.sprint(counterterm_reduced_process_key)
2121- if counterterm_reduced_process_key not in inverse_processes_map:
2122+
2123+ # Now look for a matching process key using only a limited set of criteria
2124+ potential_matches = []
2125+ criteria_considered = ['PDGs','n_loops','id']
2126+ for process_key in inverse_processes_map:
2127+ if all(pk[1] == crpk[1] for (pk, crpk) in zip(process_key,
2128+ counterterm_reduced_process_key) if pk[0] in criteria_considered):
2129+ # Now we can simply add this integrated counterterm in the group of the
2130+ # defining process to which the reduced process is mapped
2131+ potential_matches.append(inverse_processes_map[process_key])
2132+
2133+ if len(potential_matches)==0:
2134 # The reduced process of the integrated counterterm is not included in this
2135 # contribution so we return here False, letting the ME7 exporter know that
2136- # we cannot host this integrated counterterm.
2137+ # we cannot host this integrated counterterm here.
2138 return False
2139
2140- # Now we can simply add this integrated counterterm in the group of the
2141- # defining process to which the reduced process is mapped
2142- defining_key, virtual_process_instance = \
2143- inverse_processes_map[counterterm_reduced_process_key]
2144+ if len(potential_matches)>1:
2145+ logger.warning(
2146+ ('MadNkLO found the following processes:\n%s\nto be potential matches '+
2147+ 'to this integrated counterterm:\n%s\nThe first matching process will be'+
2148+ ' selected, but this may indicate that the current list of criteria considered'+
2149+ ' (%s) should be extended.')%(
2150+ '\n'.join(p[1].nice_string() for p in potential_matches),
2151+ integrated_counterterm.nice_string(), ','.join(criteria_considered)) )
2152+
2153+ defining_key, virtual_process_instance = potential_matches[0]
2154
2155 integrated_counterterm_properties = dict(integrated_CT_properties)
2156
2157@@ -1795,6 +2262,23 @@
2158 # Obtain the corresponding permutation
2159 basic_permutation = self.get_basic_permutation(origin_pdgs_order,destination_pdgs_order)
2160
2161+ # If the integrated counterterm requires only a single active beam factorization, then
2162+ # this beam-factorization contribution should indeed host this counterterm only if:
2163+ # a) The initial-state legs *do* require swapping and the factorized beam number of
2164+ # this integrated CT does *not* match the factorized beam number of this contribution
2165+ # b) The initial-state legs do *not* require swapping and the factorized beam number
2166+ # of this integrated CT *does* match the factorized beam number of this contribution
2167+ active_beams = {k:self.contribution_definition.is_beam_active(k) for k in ['beam_one','beam_two'] }
2168+ integrated_CT_active_beams = integrated_counterterm.get_necessary_beam_convolutions()
2169+ if active_beams.values().count(True)==1 and len(integrated_CT_active_beams)==1:
2170+ require_initial_state_swapping = ( basic_permutation[0] == 1 and
2171+ basic_permutation[1] == 0 )
2172+ integrated_CT_active_beam = list(integrated_CT_active_beams)[0]
2173+ if (require_initial_state_swapping and active_beams[integrated_CT_active_beam]==True) or \
2174+ ((not require_initial_state_swapping) and active_beams[integrated_CT_active_beam]==False):
2175+ # Then this integrated CT should not be hosted in this contribution
2176+ return False
2177+
2178 # Now we need to distribute all flavors that are parents of the integrated current
2179 # in all possible ways. In the example below, the parent of C(3,7) is a d-quark and
2180 # we must assign it as both leg #3 and leg #5 of the following reduced integrated
2181@@ -1822,11 +2306,13 @@
2182
2183 # Now obtain all possible ways of distributing these parent flavors over the
2184 # flavors of the underlying reduced process.
2185+ # Initial-state legs are always distinguishable (and they were treated as such when
2186+ # generating the list of integrated counterterms to dispatch) so that we must not
2187+ # distribute over identical initial_state_parent_flavors:
2188+ initial_state_parent_flavors = {}
2189 flavor_permutations = self.distribute_parent_flavors(
2190- (initial_state_parent_flavors, final_state_parent_flavors),
2191- destination_pdgs_order)
2192-# misc.sprint('For integrated counterterm %s, flavor permutations are: %s'%(str(integrated_counterterm),str(flavor_permutations)))
2193-
2194+ (initial_state_parent_flavors, final_state_parent_flavors), destination_pdgs_order)
2195+
2196 # Now combine these permutations with the basic permutation to obtain the final
2197 # list of permutations to consider
2198 for flavor_permutation in flavor_permutations:
2199@@ -1835,6 +2321,17 @@
2200 combined_permuation[index] = basic_permutation[flavor_permutation.get(index, index)]
2201 permutations.append(combined_permuation)
2202
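
The combination loop above composes the flavor permutation with the basic permutation, applying the flavor permutation first. A minimal sketch with hypothetical index maps:

```python
basic = {0: 1, 1: 0, 2: 2}   # e.g. swap the two initial-state legs
flavor = {1: 2, 2: 1}        # exchange two identical-flavor legs

# Mirrors: combined_permuation[index] = basic_permutation[flavor_permutation.get(index, index)]
combined = {i: basic[flavor.get(i, i)] for i in basic}
```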
2203+# Example of how to print out information of a specific contribution.
2204+# if isinstance(self, Contribution_RV) and False:
2205+# misc.sprint('-'*50)
2206+# misc.sprint(momenta_dict)
2207+# misc.sprint(initial_state_parent_flavors)
2208+# misc.sprint(final_state_parent_flavors)
2209+# misc.sprint(destination_pdgs_order)
2210+# misc.sprint(integrated_counterterm.process.nice_string())
2211+# misc.sprint('For integrated counterterm %s, flavor permutations are: %s'%(str(integrated_counterterm),str(permutations)))
2212+# misc.sprint('-'*50)
2213+
2214 # Store the mapping to apply to the virtual ME inputs
2215 integrated_counterterm_properties['input_mappings'] = permutations
2216
2217@@ -1858,69 +2355,177 @@
2218 # Let the ME7 exporter that we could successfully host this integrated counterterm
2219 return True
2220
2221- def process_dir_name(self, process):
2222- """ Given a specified process, return the directory name in which it is output."""
2223-
2224- # For MadLoop ME's there is extra prefixes to avoid symbol and resource file clashes.
2225- return "%d_%s_%s"%(process.get('id'), process.get('uid'),
2226- process.shell_string(print_id=False))
2227-
2228-
2229- def generate_code(self):
2230- """ Assuming the Helas Matrix Elements are now generated, we can write out the corresponding code."""
2231-
2232- matrix_elements = self.all_matrix_elements.get_matrix_elements()
2233-
2234- cpu_time_start = time.time()
2235-
2236- # Pick out the matrix elements in a list
2237- matrix_elements = self.all_matrix_elements.get_matrix_elements()
2238- # MadLoop standalone fortran output
2239- calls = 0
2240- for me in matrix_elements:
2241- # Choose the group number to be the unique id so that the output prefix
2242- # is nicely P<proc_id>_<unique_id>_
2243- calls = calls + self.exporter.generate_loop_subprocess(me,
2244- self.helas_model,
2245- group_number = me.get('processes')[0].get('uid'),
2246- proc_id = None,
2247- config_map=None,
2248- unique_id=me.get('processes')[0].get('uid'))
2249-
2250- # If all ME's do not share the same maximum loop vertex rank and the
2251- # same loop maximum wavefunction size, we need to set the maximum
2252- # in coef_specs.inc of the HELAS Source. The SubProcesses/P* directory
2253- # all link this file, so it should be properly propagated
2254- if self.options['loop_optimized_output'] and len(matrix_elements)>1:
2255- max_lwfspins = [m.get_max_loop_particle_spin() for m in \
2256- matrix_elements]
2257- max_loop_vert_ranks = [me.get_max_loop_vertex_rank() for me in \
2258- matrix_elements]
2259- if len(set(max_lwfspins))>1 or len(set(max_loop_vert_ranks))>1:
2260- self.exporter.fix_coef_specs(max(max_lwfspins),\
2261- max(max_loop_vert_ranks))
2262-
2263- return calls, time.time() - cpu_time_start
2264-
2265-
2266 class Contribution_RV(Contribution_R, Contribution_V):
2267- """ Implements the handling of a real-virtual type of contribution."""
2268- pass
2269+ """ Implements the general handling of contributions with an arbitrary number of real,
2270+ virtual and beam factorization components."""
2271+
2272+ def get_integrands_for_process_map(self, process_map, model, run_card, all_MEAccessors, ME7_configuration):
2273+ """ Returns all the integrands implementing this contribution for the specified process_map.
2274+ The instance of MEAccessorDict is necessary so as to be passed to the integrand instances.
2275+ """
2276+
2277+ relevant_counterterms = {}
2278+ self.remove_local_counterterms_set_to_zero(all_MEAccessors)
2279+ if self.counterterms:
2280+ for process_key in process_map:
2281+ relevant_counterterms[process_key] = self.counterterms[process_key]
2282+
2283+ relevant_integrated_counterterms = {}
2284+ self.remove_integrated_counterterms_set_to_zero(all_MEAccessors)
2285+ if self.integrated_counterterms:
2286+ for process_key in process_map:
2287+ relevant_integrated_counterterms[process_key] = self.integrated_counterterms[process_key]
2288+
2289+ return [ ME7_integrands.ME7Integrand(model, run_card,
2290+ self.contribution_definition,
2291+ process_map,
2292+ self.topologies_to_processes,
2293+ self.processes_to_topologies,
2294+ all_MEAccessors,
2295+ ME7_configuration,
2296+ subtraction_mappings_scheme=self.options['subtraction_mappings_scheme'],
2297+ counterterms=relevant_counterterms,
2298+ integrated_counterterms=relevant_integrated_counterterms
2299+ ) ]
2300+
2301+ def get_additional_nice_string_printout_lines(self, format=0):
2302+ """ Return additional information lines for the function nice_string of this contribution."""
2303+ res = Contribution_R.get_additional_nice_string_printout_lines(self, format=format)
2304+ res.extend(Contribution_V.get_additional_nice_string_printout_lines(self, format=format))
2305+ return res
2306+
2307+ def get_nice_string_process_line(self, process_key, defining_process, format=0):
2308+ """ Return a nicely formatted process line for the function nice_string of this
2309+ contribution."""
2310+ GREEN = '\033[92m'
2311+ ENDC = '\033[0m'
2312+ res = GREEN+' %s'%(defining_process.nice_string(print_weighted=False)
2313+ .replace('Process: ',''))+ENDC
2314+
2315+ if self.integrated_counterterms is None or self.counterterms is None:
2316+ return res
2317+
2318+ if format<2:
2319+ if process_key in self.counterterms:
2320+ res += ' | %d local counterterms'%len([ 1
2321+ for CT in self.counterterms[process_key] if CT.is_singular() ])
2322+ else:
2323+ res += ' | 0 local counterterm'
2324+ if process_key in self.integrated_counterterms:
2325+ res += ' and %d integrated counterterms'%len(self.integrated_counterterms[process_key])
2326+ else:
2327+ res += ' and 0 integrated counterterm'
2328+ else:
2329+ long_res = [' | with the following local and integrated counterterms:']
2330+ for CT in self.counterterms[process_key]:
2331+ if CT.is_singular():
2332+ if format==2:
2333+ long_res.append( ' | %s' % str(CT))
2334+ elif format==3:
2335+ long_res.append( ' | %s' % CT.__str__(
2336+ print_n=True, print_pdg=True, print_state=True ) )
2337+ elif format>3:
2338+ long_res.append(CT.nice_string(" | "))
2339+ for CT_properties in self.integrated_counterterms[process_key]:
2340+ CT = CT_properties['integrated_counterterm']
2341+ if format==2:
2342+ long_res.append( ' | %s' % str(CT))
2343+ elif format==3:
2344+ long_res.append( ' | %s' % CT.__str__(
2345+ print_n=True, print_pdg=True, print_state=True ))
2346+ elif format==4:
2347+ long_res.append(CT.nice_string(" | "))
2348+ elif format>4:
2349+ long_res.append(CT.nice_string(" | "))
2350+ for key, value in CT_properties.items():
2351+ if not key in ['integrated_counterterm', 'matching_process_key']:
2352+ long_res.append( ' + %s : %s'%(key, str(value)))
2353+ res += '\n'.join(long_res)
2354+
2355+ return res
2356+
2357+
2358+class Contribution_BS(Contribution_RV):
2359+ """ A class implementing the Beam-Soft factorization contributions (BS) originating
2360+ from soft counterterms in the colorful currents scheme which can recoil against the
2361+ initial states, as when using the SoftBeamsRecoilNLOWalker mappings."""
2362+
2363+ def __init__(self, contribution_definition, cmd_interface, **opts):
2364+ """ Make sure that this contribution is instantiated by a contribution definition
2365+ that enforces a correlated two-beams convolution."""
2366+
2367+ if not contribution_definition.correlated_beam_convolution:
2368+ raise MadGraph5Error("A soft beam convolution contribution must be instantiated"+
2369+ " from a contribution definition that specifies a *correlated* convolution "+
2370+ "of the two beams.")
2371+ super(Contribution_RV,self).__init__(contribution_definition, cmd_interface, **opts)
2372+
2373+ def add_beam_factorization_processes_from_contribution(self, contrib, beam_factorization_order):
2374+ """ 'Import' the processes_map and all generated attributes of the contribution
2375+ passed in argument and assign them to this beam factorization contribution. Contrary
2376+ to the implementation of this function in the base class, here we do not instantiate
2377+ 'bulk' beam factorization currents because they are not needed. The BS contribution is
2378+ only meant to be a receptacle for the integrated soft counterterm in the 'colorful'
2379+ currents scheme when the 'ppToOneWalker' mappings scheme is used (where the soft
2380+ momentum recoils equally against both initial states).
2381+ """
2382+
2383+ # Make sure the source contribution has beam factorization active for both
2384+ # beams, as required by this contribution.
2385+ if ( contrib.contribution_definition.beam_factorization['beam_one'] is None ) or \
2386+ ( contrib.contribution_definition.beam_factorization['beam_two'] is None ):
2387+ return
2388+
2389+ # Now fetch the process map to load from
2390+ source_processes_map = contrib.get_processes_map()
2391+ processes_map = self.processes_map[1]
2392+ # To the new contribution instances, also assign references to the generated attributes
2393+ # of the original contributions (no deep copy necessary here)
2394+ for process_key, (defining_process, mapped_processes) in source_processes_map.items():
2395+
2396+ # First add this process to the current processes map
2397+ if process_key in processes_map:
2398+ raise MadGraph5Error('MadNkLO attempts to add beam factorization terms for'+
2399+ ' the %s a second time. This should never happen.'%(
2400+ defining_process.nice_string().replace('Process','process')))
2401+ processes_map[process_key] = (
2402+ defining_process.get_copy(), [mp.get_copy() for mp in mapped_processes] )
2403+
2404+ # Contrary to what is done in the base class, no 'bulk' beam factorization
2405+ # currents need to be added here
2406+
2407+ # Also add the corresponding topologies
2408+ self.import_topologies_from_contrib(process_key, contrib)
2409+
2410+ def generate_all_counterterms(self, ignore_integrated_counterterms=False, group_processes=True, *args, **opts):
2411+ """ Contrary to the other contributions, the BS one has no physical contribution that requires
2412+ subtraction and is only a place-holder to receive the integrated soft counterterms. Therefore
2413+ nothing needs to be done at this stage. """
2414+
2415+ for process_key, (defining_process, mapped_processes) in self.get_processes_map().items():
2416+ self.counterterms[process_key] = []
2417+ all_integrated_counterterms = []
2418+ return all_integrated_counterterms
2419
2420 class ContributionList(base_objects.PhysicsObjectList):
2421 """ A container for storing a list of contributions."""
2422
2423 contributions_natural_order = [
2424- ('LO', (Contribution_B, Contribution_LIB) ),
2425- ('NLO', (Contribution_R, Contribution_V) ),
2426- ('NNLO', (Contribution_RR, ) ),
2427- ('NNNLO', (Contribution_RRR, ) )
2428+ ('LO', (Contribution_B,) ),
2429+ ('NLO', (Contribution_R, Contribution_V, Contribution_RV) ),
2430+ ('NNLO', (Contribution_RR, Contribution_RV) ),
2431+ ('NNNLO', (Contribution_RRR, Contribution_RV) )
2432 ]
2433
2434 def is_valid_element(self, obj):
2435 """Test if object obj is a valid instance of Contribution."""
2436 return isinstance(obj, Contribution)
2437
2438+ def get_loop_induced_contributions(self):
2439+ """ Return a list of loop-induced contributions."""
2440+ return ContributionList([contrib for contrib in self if
2441+ contrib.contribution_definition.is_loop_induced()])
2442+
2443 def get_contributions_of_order(self, correction_order):
2444 """ Returns a list of all contributions of a certain correction_order in argument."""
2445 return ContributionList([contrib for contrib in self if
2446@@ -1933,13 +2538,19 @@
2447 correction_classes = tuple(correction_classes)
2448 else:
2449 correction_classes = (correction_classes,)
2450- return ContributionList([contrib for contrib in self if isinstance(contrib, correction_classes)])
2451+ return ContributionList([contrib for contrib in self if type(contrib) in correction_classes])
2452
2453 def nice_string(self, format=0):
2454 """ A nice representation of a list of contributions.
2455 We can reuse the function from ContributionDefinitions."""
2456 return base_objects.ContributionDefinitionList.contrib_list_string(self, format=format)
2457
2458+ def get_max_correction_order(self):
2459+ """ Returns the maximum correction order found among these contributions."""
2460+
2461+ return 'N'*(max(contrib.contribution_definition.correction_order.count('N')
2462+ for contrib in self))+'LO'
2463+
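The one-liner in `get_max_correction_order` rebuilds the order string from the largest count of 'N's found among the contributions; a minimal standalone sketch of the same idiom over plain strings (not actual `ContributionList` objects):

```python
# Sketch: recover the maximum correction order among a set of order
# strings by counting 'N's, as done in get_max_correction_order.
orders = ['LO', 'NNLO', 'NLO']
max_order = 'N' * max(o.count('N') for o in orders) + 'LO'
```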
2464 def sort_contributions(self):
2465 """ Sort contributions according to the order dictated by the class_attribute contributions_natural_order"""
2466
2467@@ -1948,7 +2559,14 @@
2468 for contrib_type in contribution_types:
2469 selected_contribs = self.get_contributions_of_order(correction_order).\
2470 get_contributions_of_type(contrib_type)
2471- new_order.extend(selected_contribs)
2472+ # Now sort contributions of this type using the ordering F1, F2, F1F2
2473+ factorization_order = [(False,False),(True, False),(False, True), (True, True)]
2474+ new_order.extend(sorted(selected_contribs,
2475+ key = lambda contrib: factorization_order.index((
2476+ contrib.contribution_definition.is_beam_active('beam_one'),
2477+ contrib.contribution_definition.is_beam_active('beam_two')
2478+ ))
2479+ ))
2480 for contrib in selected_contribs:
2481 self.pop(self.index(contrib))
2482
2483@@ -1961,7 +2579,6 @@
2484
2485 # Keep track of the return values
2486 return_values = []
2487-
2488 remaining_contribs = list(self)
2489 for correction_order, contribution_types in self.contributions_natural_order:
2490 for contrib_type in contribution_types:
2491@@ -1992,13 +2609,19 @@
2492 if log: logger.info('%s the %d contribution%s of unknown type...'%(log,
2493 len(remaining_contribs), 's' if len(remaining_contribs)>1 else ''))
2494 for i, contrib in enumerate(remaining_contribs):
2495- if log: logger.info('%s %d/%d'%(contrib_type.__name__, i+1, len(remaining_contribs)))
2496+ if log: logger.info('%s %d/%d'%(type(contrib).__name__, i+1, len(remaining_contribs)))
2497 try:
2498 contrib_function = getattr(contrib, method)
2499 except AttributeError:
2500 raise MadGraph5Error("The contribution\n%s\n does not have function '%s' defined."%(
2501 contrib.nice_string(), method))
2502- return_values.append((contrib, contrib_function(*method_args, **method_opts)))
2503+ # If method opts depend on the contribution, it can be passed as a callable
2504+ # that can be evaluated here
2505+ if not isinstance(method_opts, dict) and callable(method_opts):
2506+ method_options = method_opts(contrib)
2507+ else:
2508+ method_options = dict(method_opts)
2509+ return_values.append((contrib, contrib_function(*method_args, **method_options)))
2510
2511 return return_values
2512
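`apply_method_to_all_contribs` now accepts `method_opts` either as a plain dict shared by all contributions or as a callable evaluated once per contribution. A minimal sketch of that dispatch pattern, using a hypothetical `apply_to_all` helper rather than the real method:

```python
# Sketch: per-item options can be a shared dict or a callable producing
# a dict for each item, mirroring the method_opts handling above.
def apply_to_all(items, func, method_opts=None):
    results = []
    for item in items:
        if not isinstance(method_opts, dict) and callable(method_opts):
            opts = method_opts(item)        # item-dependent options
        else:
            opts = dict(method_opts or {})  # shared options, copied
        results.append(func(item, **opts))
    return results

scaled = apply_to_all([1, 2, 3], lambda x, scale=1: x * scale,
                      method_opts=lambda item: {'scale': item})
```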
2513@@ -2006,9 +2629,10 @@
2514 # by the interface when using a PLUGIN system where the user can define his own class of Contribution.
2515 # Notice that this Contribution must be placed after all the Contribution daughter classes have been declared.
2516 Contribution_classes_map = {'Born': Contribution_B,
2517- 'LoopInduced_Born': Contribution_LIB,
2518 'Virtual': Contribution_V,
2519 'SingleReals': Contribution_R,
2520+ 'RealVirtual': Contribution_RV,
2521 'DoubleReals': Contribution_RR,
2522 'TripleReals': Contribution_RRR,
2523+ 'BeamSoft': Contribution_BS,
2524 'Unknown': None}
2525
2526=== modified file 'madgraph/core/subtraction.py'
2527--- madgraph/core/subtraction.py 2018-06-19 17:36:59 +0000
2528+++ madgraph/core/subtraction.py 2019-01-14 20:53:09 +0000
2529@@ -25,6 +25,8 @@
2530 import os
2531 import importlib
2532 import shutil
2533+import copy
2534+import traceback
2535
2536 if __name__ == '__main__':
2537 sys.path.append(os.path.join(os.path.dirname(
2538@@ -43,6 +45,12 @@
2539 logger = logging.getLogger('madgraph')
2540 pjoin = os.path.join
2541
2542+# Specify here the list of subtraction currents schemes and subtraction_mappings_scheme values
2543+# that use soft counterterms recoiling against the initial states. This is necessary in
2544+# order to decide when a contribution with a correlated convolution of both beams must be set up.
2545+_currents_schemes_requiring_soft_beam_factorization = ['colorful', 'colorful_pp']
2546+_mappings_schemes_requiring_soft_beam_factorization = ['SoftBeamsRecoilNLO', ]
2547+
2548 #=========================================================================================
2549 # Multinomial function
2550 #=========================================================================================
2551@@ -347,6 +355,7 @@
2552 class SingularStructure(object):
2553 """Object that represents a hierarchical structure of IR singularities."""
2554
2555+
2556 def __init__(self, *args, **opts):
2557 """Initialize a hierarchical singular structure."""
2558
2559@@ -399,6 +408,26 @@
2560 tmp_str += ")"
2561 return tmp_str
2562
2563+ def does_require_correlated_beam_convolution(self):
2564+ """ Checks whether this singular structure requires a host contribution featuring a
2565+ correlated convolution of the beams (i.e. 'BS', 'VS', etc... contributions).
2566+ This is the case only for structures whose disjoint substructures each contain
2567+ at least one soft *BeamCurrent* originating from a colorful subtraction currents
2568+ scheme with mappings such as ppToOneWalker that recoil the soft momentum equally
2569+ against both initial-state beams."""
2570+
2571+ # One probably needs to do something more refined for higher orders, since the
2572+ # structure C(3,4,S(5)) probably requires a collinear mapping and not a soft one.
2573+ # However, maybe for such cases the mappings yielding correlated beam convolutions
2574+ # can be avoided altogether since one would be guaranteed to have final-state recoilers
2575+ # for the singly-unresolved limit S(5).
2576+ # The result is also cached, since this is called by the ME7Integrand at runtime.
2577+ if not hasattr(self,'does_require_correlated_beam_convolution_cached'):
2578+ self.does_require_correlated_beam_convolution_cached = \
2579+ all(any(c.name()=='S' for c in sub_ss.decompose()) for sub_ss in self.substructures)
2580+
2581+ return self.does_require_correlated_beam_convolution_cached
2582+
2583 def get_subtraction_prefactor(self):
2584 """Determine the prefactor due to the nested subtraction technique."""
2585
2586@@ -420,6 +449,14 @@
2587
2588 return pref
2589
2590+ def get_beam_factorization_legs(self):
2591+ """ Return the set of all the beam factorization legs involved in this singular
2592+ structure. Note that beam factorization structures should always be at the top level
2593+ and never nested."""
2594+ return set(sum([
2595+ [ l.n for l in struct.get_all_legs() ]
2596+ for struct in self.substructures if isinstance(struct, BeamStructure)],[]))
2597+
2598 def annihilate(self):
2599 """When an operator cannot act on this structure,
2600 remove this structure altogether
2601@@ -454,6 +491,14 @@
2602 total_unresolved += substructure.count_unresolved()
2603 return total_unresolved
2604
2605+ def count_couplings(self):
2606+ """Count the number of couplings covered by this structure."""
2607+
2608+ total_couplings = 0
2609+ for substructure in self.substructures:
2610+ total_couplings += substructure.count_couplings()
2611+ return total_couplings
2612+
2613 def get_canonical_representation(self, track_leg_numbers=True):
2614 """Creates a canonical hashable representation of self."""
2615
2616@@ -508,12 +553,6 @@
2617 inner.append(foo)
2618 return inner
2619
2620- name_dictionary = {
2621- '' : 'SingularStructure',
2622- 'S': 'SoftStructure',
2623- 'C': 'CollStructure'
2624- }
2625-
2626 @staticmethod
2627 def from_string(string, process, log=False):
2628 """Reconstruct a singular structure from a string representation, e.g.
2629@@ -522,13 +561,20 @@
2630 :return: The reconstructed singular structure if successful, else None.
2631 """
2632
2633+ singular_structures_name_dictionary = {
2634+ '' : SingularStructure,
2635+ 'S': SoftStructure,
2636+ 'C': CollStructure,
2637+ 'F': BeamStructure
2638+ }
2639+
2640 msg = "SingularStructure.from_string:"
2641 openpos = string.find('(')
2642 closepos = string.rfind(')')
2643 name = string[:openpos]
2644- if not SingularStructure.name_dictionary.has_key(name):
2645+ if not singular_structures_name_dictionary.has_key(name):
2646 if log:
2647- print msg, "no singular structure with name", name
2648+ logger.warning('%s %s: %s'%(msg, "no singular structure with name", name))
2649 return None
2650 legs = []
2651 substructures = []
2652@@ -558,7 +604,7 @@
2653 leg_n = int(before_comma)
2654 except:
2655 if log:
2656- print msg, "could not convert", before_comma, "to integer"
2657+ logger.warning('%s %s %s %s'%(msg, "could not convert", before_comma, "to integer"))
2658 return None
2659 for leg in process['legs']:
2660 if leg['number'] == leg_n:
2661@@ -567,33 +613,45 @@
2662 break
2663 if before_comma:
2664 if log:
2665- print msg, "no leg number", leg_n, "found"
2666+ logger.warning('%s %s %d %s'%(msg, "no leg number", leg_n, "found"))
2667 return None
2668 if before_comma:
2669 if log:
2670- print msg, "bracket matching failed for", before_comma[:-1]
2671+ logger.warning("%s %s '%s'"%(msg, "bracket matching failed for:", before_comma[:-1]))
2672 return None
2673- return eval(SingularStructure.name_dictionary[name])(
2674- substructures=substructures, legs=legs)
2675+ return singular_structures_name_dictionary[name](substructures=substructures, legs=legs)
2676+
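`from_string` now maps structure-name prefixes straight to classes instead of `eval`-ing class names from a string dictionary, which is both safer and simpler. The dispatch pattern in isolation, with toy classes standing in for the real ones:

```python
# Sketch: dispatch a structure-name prefix to a class via a dict,
# replacing the old eval(name_dictionary[name]) pattern.
class SingularStructure(object):
    def name(self):
        return ''

class SoftStructure(SingularStructure):
    def name(self):
        return 'S'

class CollStructure(SingularStructure):
    def name(self):
        return 'C'

name_to_class = {'': SingularStructure, 'S': SoftStructure, 'C': CollStructure}

def build(name):
    if name not in name_to_class:
        return None  # unknown structure name, mirroring the parser's failure mode
    return name_to_class[name]()

soft = build('S')
```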
2677+class BeamStructure(SingularStructure):
2678+
2679+ def count_couplings(self):
2680+ return 1
2681+
2682+ def count_unresolved(self):
2683+ return 0
2684+
2685+ def name(self):
2686+ return "F"
2687
2688 class SoftStructure(SingularStructure):
2689
2690+ def count_couplings(self):
2691+ return self.count_unresolved()
2692+
2693 def count_unresolved(self):
2694-
2695 return len(self.get_all_legs())
2696
2697 def name(self):
2698-
2699 return "S"
2700
2701 class CollStructure(SingularStructure):
2702
2703+ def count_couplings(self):
2704+ return self.count_unresolved()
2705+
2706 def count_unresolved(self):
2707-
2708 return len(self.get_all_legs())-1
2709
2710 def name(self):
2711-
2712 return "C"
2713
2714 #=========================================================================================
2715@@ -762,6 +820,30 @@
2716 # WARNING: this statement has to be checked
2717 # By default, not skipping overlaps ensures correct subtraction
2718 return True
2719+
2720+class BeamOperator(SingularOperator):
2721+ """Object that represents a beam factorization elementary singular operator."""
2722+
2723+ def name(self):
2724+
2725+ return "F"
2726+
2727+ def get_structure(self):
2728+
2729+ return BeamStructure(legs=self)
2730+
2731+ def __init__(self, *args, **opts):
2732+
2733+ super(BeamOperator, self).__init__(*args, **opts)
2734+
2735+ def act_here_needed(self, structure):
2736+
2737+ assert isinstance(structure, SingularStructure)
2738+
2739+ # Beam factorization limits should never be needed within any other singular structure
2740+ # WARNING: this statement has to be checked
2741+
2742+ return False
2743
2744 #=========================================================================================
2745 # SingularOperatorList
2746@@ -796,7 +878,6 @@
2747 #=========================================================================================
2748 # Current
2749 #=========================================================================================
2750-
2751 class Current(base_objects.Process):
2752
2753 def default_setup(self):
2754@@ -824,6 +905,11 @@
2755
2756 return self['singular_structure'].count_unresolved()
2757
2758+ def count_couplings(self):
2759+ """Count the number of couplings expected in this current."""
2760+
2761+ return self['singular_structure'].count_couplings()
2762+
2763 def discard_leg_numbers(self,discard_initial_leg_numbers=True):
2764 """Discard all leg numbers in the singular_structure
2765 and in the parent_subtraction_leg,
2766@@ -852,31 +938,63 @@
2767 readable_string += " " + str(self.get('orders'))
2768 return readable_string
2769
2770+ def does_require_correlated_beam_convolution(self):
2771+ """ Checks whether this integrated counterterm requires a host contribution featuring
2772+ correlated convolution of the beams (i.e. 'BS', 'VS', etc... contributions).
2773+ This is the case only for integrated counterterms whose disjoint structures each contain
2774+ at least one soft *BeamCurrent* originating from a colorful subtraction currents
2775+ scheme with mappings such as ppToOneWalker that recoil the soft momentum equally
2776+ against both initial-state beams."""
2777+
2778+ # For basic currents, this is never the case
2779+ return False
2780+
2781 def get_reduced_flavors(self, defining_flavors, IR_subtraction_module, routing_dict):
2782 """Given the defining flavors dictionary specifying the flavors of some leg numbers,
2783 add entries corresponding to the subtraction legs in this node and using the function
2784 get_parent_PDGs from the subtraction module."""
2785
2786- def get_parent(PDGs):
2787+ get_particle = IR_subtraction_module.model.get_particle
2788+
2789+ def get_parent(PDGs, is_initial):
2790 res = IR_subtraction_module.parent_PDGs_from_PDGs(PDGs)
2791 if len(res)!=1:
2792- raise MadGraph5Error("Multiple mother PDGs assignment not supported"+
2793- " yet by the function get_reduced_flavors")
2794- return res[0]
2795+ str1 = "Multiple mother PDGs assignment "
2796+ str2 = " is not yet supported by the function get_reduced_flavors"
2797+ misc.sprint("get_parent called with PDGs = " + str(PDGs))
2798+ raise MadGraph5Error(str1 + str(res) + str2)
2799+ # Now flip back the identity of the parent PDG if in the initial state
2800+ if is_initial:
2801+ return get_particle(res[0]).get_anti_pdg_code()
2802+ else:
2803+ return res[0]
2804
2805 all_legs = self['singular_structure'].get_all_legs()
2806+
2807+ # If all legs are unresolved, as is the case in the pure soft configurations,
2808+ # then nothing needs to be done
2809+ if self.count_unresolved()==len(all_legs):
2810+ return
2811+
2812 leg_numbers = [leg.n for leg in all_legs]
2813- get_particle = IR_subtraction_module.model.get_particle
2814- # Swap the flavors of the initial states
2815- leg_flavors = [ (
2816- get_particle(defining_flavors[leg.n]).get_anti_pdg_code() if
2817- leg.state==SubtractionLeg.INITIAL else get_particle(defining_flavors[leg.n]).get_pdg_code() )
2818- for leg in all_legs ]
2819
2820- if len(leg_numbers)>0:
2821- mother_number = Counterterm.get_ancestor(frozenset(leg_numbers), routing_dict)
2822- if mother_number not in defining_flavors:
2823- defining_flavors[mother_number] = get_parent(leg_flavors)
2824+ try:
2825+ # Swap the flavors of the initial states
2826+ leg_flavors = [ (
2827+ get_particle(defining_flavors[leg.n]).get_anti_pdg_code() if
2828+ leg.state==SubtractionLeg.INITIAL else get_particle(defining_flavors[leg.n]).get_pdg_code() )
2829+ for leg in all_legs ]
2830+
2831+ if len(leg_numbers)>0:
2832+ mother_number = Counterterm.get_ancestor(frozenset(leg_numbers), routing_dict)
2833+ if mother_number not in defining_flavors:
2834+ defining_flavors[mother_number] = get_parent(leg_flavors,
2835+ any(leg.state==SubtractionLeg.INITIAL for leg in all_legs) )
2836+ except Exception as e:
2837+ raise MadGraph5Error('\nError when computing reduced quantities for current:\n%s\n'%str(self)+
2838+ 'Defining flavors:\n%s\n'%str(defining_flavors)+
2839+ 'routing dict:\n%s\n'%str(routing_dict)+
2840+ 'Exception encountered:\n%s'%traceback.format_exc())
2841
2842 def get_key(self, track_leg_numbers=False):
2843 """Return the ProcessKey associated to this current."""
2844@@ -888,9 +1006,48 @@
2845 'singular_structure',
2846 'n_loops',
2847 'squared_orders',
2848+ 'perturbation_couplings',
2849 'resolve_mother_spin_and_color' ],
2850 track_leg_numbers = track_leg_numbers)
2851
2852+ def get_initial_state_leg_map(self, leg_numbers_map):
2853+ """ Returns None if this current does not involve initial states; otherwise it returns
2854+ a pair of SubtractionLeg instances in the form of the 2-tuple
2855+ (mother_IS_leg, daughter_IS_leg)
2856+ Notice that beam factorization across more than one (i.e. two) beams must be
2857+ factorized into two currents, so we are guaranteed to return at most
2858+ one pair of IS legs here."""
2859+
2860+
2861+ # First get the ancestors of all outermost leg numbers of this singular structure
2862+ ancestors = Counterterm.get_ancestor(
2863+ [ leg.n for leg in self['singular_structure'].get_all_legs() ], leg_numbers_map )
2864+
2865+ # Filter initial state legs and make sure there is only one
2866+ ancestors = [leg for leg in ancestors if leg.state==SubtractionLeg.INITIAL]
2867+
2868+ # This current has no initial-state
2869+ if len(ancestors) == 0:
2870+ return None
2871+
2872+ if len(ancestors)!=1:
2873+ raise MadGraph5Error('Beam factorization current should involve'+
2874+ ' exactly one ancestor initial-state leg (factorization across both beams must be split into separate currents).')
2875+ initial_state_ancestor = ancestors[0]
2876+
2877+ # Now find which external initial-state leg is connected to the initial
2878+ # state ancestor identified.
2879+ daughters = Counterterm.get_daughter_legs(
2880+ initial_state_ancestor, leg_numbers_map)
2881+ # Filter initial state legs and make sure there is only one
2882+ daughters = [leg for leg in daughters if leg.state==SubtractionLeg.INITIAL]
2883+ if len(daughters)!=1:
2884+ raise CurrentImplementationError('Beam factorization current should involve'+
2885+ ' exactly one daughter initial state leg (the two beam factorization must be factorized).')
2886+ initial_state_daughter = daughters[0]
2887+
2888+ return (initial_state_ancestor, initial_state_daughter)
2889+
2890 def get_copy(self, copied_attributes=()):
2891 """Return a copy of this current with a deep-copy of its singular structure."""
2892
2893@@ -901,22 +1058,92 @@
2894
2895 return copied_current
2896
2897+ def get_number_of_couplings(self):
2898+ """ Return a naive count of the number of couplings involved in this current."""
2899+ return self.get('n_loops') + self['singular_structure'].count_couplings()
2900+
2901 def __eq__(self, other):
2902 """Compare two currents using their ProcessKey's."""
2903
2904 return self.get_key().key_dict == other.get_key().key_dict
2905
2906 #=========================================================================================
2907+# BeamCurrent
2908+#=========================================================================================
2909+class BeamCurrent(Current):
2910+ """ A special class of current representing beam factorization terms
2911+ (aka PDF counterterms).
2912+
2913+ If we have:
2914+ PDF_CT(xi) = F_+(xi) + [F] \delta(xi-1)
2915+ (which can also be explicitly written)
2916+ PDF_CT(xi) = F(xi) + {F(xi)} \delta(xi-1) + [F] \delta(xi-1)
2917+
2918+ Then this current represents the two pieces of the xi-dependent distribution F_+(xi),
2919+ the first piece (F(xi)) with 'distribution_type' set to 'bulk' and the latter
2920+ ({F(xi)} \delta(xi-1)) with this attribute set to 'counterterm'.
2921+ """
2922+
2923+ def default_setup(self):
2924+ """Default values for all properties specific to the BeamCurrent,
2925+ additional to Process.
2926+ """
2927+ super(BeamCurrent, self).default_setup()
2928+ self['n_loops'] = -1
2929+ self['squared_orders'] = {}
2930+ self['resolve_mother_spin_and_color'] = True
2931+ self['singular_structure'] = SingularStructure()
2932+
2933+ # Which piece of the plus distribution is intended to be returned
2934+ # This can be 'bulk' or 'counterterm'.
2935+ self['distribution_type'] = 'bulk'
2936+
2937+ # List of the pdgs of the particles making up this beam
2938+ self['beam_PDGs'] = []
2939+ # Name of the beam factorization type that should be employed
2940+ self['beam_type'] = None
2941+
2942+ def does_require_correlated_beam_convolution(self):
2943+ """ Checks whether this integrated counterterm requires a host contribution featuring
2944+ correlated convolution of the beams (i.e. 'BS', 'VS', etc... contributions).
2945+ This is the case only for integrated counterterms whose disjoint structures each contain
2946+ at least one soft *BeamCurrent* originating from a colorful subtraction currents
2947+ scheme with mappings such as ppToOneWalker that recoil the soft momentum equally
2948+ against both initial-state beams."""
2949+
2950+ # Return true only if the singular structures of this current are pure soft.
2951+ return self['singular_structure'].does_require_correlated_beam_convolution()
2952+
2953+ def get_key(self, track_leg_numbers=False):
2954+ """Return the ProcessKey associated to this current."""
2955+
2956+ from madgraph.core.accessors import ProcessKey
2957+ return ProcessKey(
2958+ self,
2959+ allowed_attributes = [
2960+ 'singular_structure',
2961+ 'n_loops',
2962+ 'squared_orders',
2963+ 'perturbation_couplings',
2964+ 'resolve_mother_spin_and_color',
2965+ 'beam_PDGs',
2966+ 'beam_type',
2967+ 'distribution_type' ],
2968+ track_leg_numbers = track_leg_numbers)
2969+
2970+ def __str__(self, **opts):
2971+ base_str = Current.__str__(self, **opts)
2972+ # Denote "bulk" contributions embedded within colons ':%s:'
2973+ if self['distribution_type']=='bulk':
2974+ return ':%s:'%base_str
2975+ return base_str
2976+
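The 'bulk'/'counterterm' split documented in the BeamCurrent docstring mirrors the usual plus-distribution identity \int_0^1 dxi [F(xi)]_+ g(xi) = \int_0^1 dxi F(xi) (g(xi) - g(1)): the 'bulk' piece multiplies g(xi) while the delta-localized 'counterterm' piece subtracts g(1). A hedged numerical sketch with a hypothetical F and test function g (not the MadNkLO current API):

```python
# Sketch: integrate a plus distribution [F]_+ against a test function g
# by combining the 'bulk' piece F(xi)*g(xi) with the delta-localized
# 'counterterm' piece -F(xi)*g(1), using a midpoint rule.
def plus_distribution_integral(F, g, n_steps=100000):
    step = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        xi = (i + 0.5) * step           # midpoints avoid the xi=1 endpoint
        total += (F(xi) * g(xi) - F(xi) * g(1.0)) * step
    return total

# [1/(1-xi)]_+ against g(xi)=xi: the integrand reduces to -1 pointwise,
# so the integral is -1.
result = plus_distribution_integral(lambda xi: 1.0 / (1.0 - xi),
                                    lambda xi: xi)
```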
2977+#=========================================================================================
2978 # Integrated current
2979 #=========================================================================================
2980-
2981 class IntegratedCurrent(Current):
2982 """Class for integrated currents."""
2983
2984- # TODO
2985- # For now, it behaves exactly as a local 4D current,
2986- # but it is conceptually different.
2987-
2988 def __str__(
2989 self,
2990 print_n=True, print_pdg=True, print_state=True,
2991@@ -928,7 +1155,50 @@
2992 print_n=print_n, print_pdg=print_pdg, print_state=print_state,
2993 print_loops=print_loops, print_orders=print_orders
2994 )
2995- return '[integrated] %s' % res
2996+ # Denote integrated contributions embedded within '[%s]'
2997+ return '[%s]' % res
2998+
2999+class IntegratedBeamCurrent(IntegratedCurrent):
3000+ """ A special class of current representing the integrated beam factorization terms
3001+
3002+ If we have:
3003+ PDF_CT(xi) = F_+(xi) + [F] \delta(xi-1)
3004+ (which can also be written explicitly as)
3005+ PDF_CT(xi) = F(xi) + :F(xi): \delta(xi-1) + [F] \delta(xi-1)
3006+
3007+ Then this current returns the end-point of the distribution: [F] \delta(xi-1)
3008+ """
3009+
3010+ def default_setup(self):
3011+ """Default values for all properties specific to the IntegratedBeamCurrent,
3012+ additional to Process.
3013+ """
3014+ super(IntegratedBeamCurrent, self).default_setup()
3015+ self['n_loops'] = 0
3016+ self['squared_orders'] = {}
3017+ self['resolve_mother_spin_and_color'] = True
3018+ self['singular_structure'] = SingularStructure()
3019+
3020+ # List of the pdgs of the particles making up this beam
3021+ self['beam_PDGs'] = []
3022+ # Name of the beam factorization type that should be employed
3023+ self['beam_type'] = None
3024+
3025+ def get_key(self, track_leg_numbers=False):
3026+ """Return the ProcessKey associated to this current."""
3027+
3028+ from madgraph.core.accessors import ProcessKey
3029+ return ProcessKey(
3030+ self,
3031+ allowed_attributes = [
3032+ 'singular_structure',
3033+ 'n_loops',
3034+ 'squared_orders',
3035+ 'perturbation_couplings',
3036+ 'resolve_mother_spin_and_color',
3037+ 'beam_PDGs',
3038+ 'beam_type' ],
3039+ track_leg_numbers = track_leg_numbers)
3040
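The distributional decomposition quoted in the `IntegratedBeamCurrent` docstring above can be checked numerically. A minimal sketch (the helper name `convolve_plus_dist` is illustrative, not part of the code base) of integrating a plus-distribution against a test function, which is precisely the role played by the `:F(xi):` end-point subtraction:

```python
def convolve_plus_dist(F, g, n=50000):
    """Integrate [F(xi)]_+ against a test function g on (0, 1):
        int dxi [F]_+(xi) g(xi) = int dxi F(xi) * (g(xi) - g(1))
    The g(1) subtraction is the ':F(xi): delta(xi-1)' end-point piece
    of the decomposition quoted above."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) * h  # midpoint rule, never evaluates the xi=1 end-point
        total += F(xi) * (g(xi) - g(1.0)) * h
    return total

# [1/(1-xi)]_+ convolved with g(xi) = xi equals -1 analytically,
# since F(xi)*(g(xi)-g(1)) = (xi-1)/(1-xi) = -1 for every xi.
print(convolve_plus_dist(lambda xi: 1.0 / (1.0 - xi), lambda xi: xi))
```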
3041 #=========================================================================================
3042 # CountertermNode
3043@@ -999,6 +1269,14 @@
3044 for node in self.nodes:
3045 total_unresolved += node.count_unresolved()
3046 return total_unresolved
3047+
3048+ def get_number_of_couplings(self):
3049+ """Count the number of couplings expected in the currents of this counterterm node."""
3050+
3051+ total_couplings = self.current.get_number_of_couplings()
3052+ for node in self.nodes:
3053+ total_couplings += node.get_number_of_couplings()
3054+ return total_couplings
3055
3056 def find_leg(self, number):
3057 """Find the SubtractionLeg with number specified."""
3058@@ -1027,13 +1305,12 @@
3059 add entries corresponding to the subtraction legs in this node and using the function
3060 get_parent_PDGs from the subtraction module."""
3061
3062+ for node in self.nodes:
3063+ node.get_reduced_flavors(defining_flavors, IR_subtraction_module, routing_dict)
3064+
3065 self.current.get_reduced_flavors(
3066 defining_flavors, IR_subtraction_module, routing_dict)
3067
3068- for node in self.nodes:
3069- node.current.get_reduced_flavors(
3070- defining_flavors, IR_subtraction_module, routing_dict)
3071-
3072 def recursive_singular_structure(self, intermediate_leg_ns):
3073 """Reconstruct recursively the singular structure of this CountertermNode."""
3074
3075@@ -1048,11 +1325,11 @@
3076 ])
3077 return type(structure)(legs=legs, substructures=substructures)
3078
3079- def split_loops(self, n_loops):
3080+ def distribute_loops(self, n_loops):
3081 """Split a counterterm in several ones according to the individual
3082 loop orders of its currents.
3083 """
3084-
3085+
3086 assert isinstance(n_loops, int)
3087
3088 # Generate all combinations of nodes with total number of loops
3089@@ -1067,7 +1344,7 @@
3090 # at any loop number between 0 and n_loops
3091 first_without_loops = self.nodes[0]
3092 for loop_n in range(n_loops + 1):
3093- for first_with_loops in first_without_loops.split_loops(loop_n):
3094+ for first_with_loops in first_without_loops.distribute_loops(loop_n):
3095 node_combinations += [[first_with_loops, ], ]
3096 # Add the other nodes one by one recursively,
3097 # taking care of never exceeding n_loops
3098@@ -1081,7 +1358,7 @@
3099 # Distribute the order of this node
3100 # in all possible ways within the current itself
3101 # (recursion step)
3102- for ith_node in self.nodes[i_current].split_loops(new_loop_n):
3103+ for ith_node in self.nodes[i_current].distribute_loops(new_loop_n):
3104 new_node_combinations.append(
3105 combination + [ith_node, ] )
3106 node_combinations = new_node_combinations
3107@@ -1100,7 +1377,7 @@
3108
3109 return result
3110
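The recursion implemented by `distribute_loops` above amounts to enumerating all compositions of `n_loops` over the nodes. A standalone sketch (illustrative function, not the actual `CountertermNode` API):

```python
def loop_assignments(n_loops, n_currents):
    """All tuples of non-negative loop orders summing to n_loops,
    one entry per current, in the order distribute_loops visits them."""
    if n_currents == 1:
        return [(n_loops,)]
    assignments = []
    for first in range(n_loops + 1):  # loops assigned to the first current
        for rest in loop_assignments(n_loops - first, n_currents - 1):
            assignments.append((first,) + rest)
    return assignments

print(loop_assignments(2, 2))  # [(0, 2), (1, 1), (2, 0)]
```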
3111- def split_orders(self, target_squared_orders):
3112+ def distribute_orders(self, target_squared_orders):
3113 """Split a counterterm in several ones according to the individual
3114 coupling orders of its currents.
3115 """
3116@@ -1110,17 +1387,12 @@
3117 # Eventually, it should create new instances, set orders there, and return the
3118 # new list of counterterm copies with orders
3119 # TODO actually create a new counterterm with orders set
3120-
3121 for node in self.nodes:
3122- node.split_orders(target_squared_orders)
3123+ node.distribute_orders(target_squared_orders)
3124
3125 if isinstance(self.current, Current):
3126 self.current.set(
3127- 'squared_orders',
3128- { 'QCD':
3129- (self.current.get('n_loops') * 2 +
3130- self.current[
3131- 'singular_structure'].count_unresolved() * 2) } )
3132+ 'squared_orders', { 'QCD': self.current.get_number_of_couplings() * 2 } )
3133 else:
3134 # Here the squared order reduced process should be changed accordingly
3135 # and decreased for each squared order "sucked up" by the currents
3136@@ -1178,10 +1450,21 @@
3137 except KeyError:
3138 self.prefactor = self.get_prefactor(**opts)
3139
3140+ # The attribute below will be set via a call to `set_reduced_flavors_map`
3141+ # during the generation of the counterterms.
3142+ # The dictionary below takes the form:
3143+ # { ( (resolved_initial_state_PDGs,), (resolved_final_state_PDGs, ) ) :
3144+ # ( (reduced_initial_state_PDGs,), (reduced_final_state_PDGs, ) ) }
3145+ # In the future, the values can be upgraded to lists of mapped reduced flavors,
3146+ # but for now there can only be one at a time.
3147+ # Also, an additional 'None' key is added which points to the reduced flavor
3148+ # configuration of the defining process.
3149+ self.reduced_flavors_map = None
3150+
3151 @classmethod
3152 def get_ancestor(cls, particles, momentum_dict):
3153 """Recursively explore the momentum dictionary to find an ancestor
3154- of the given particle set.
3155+ of the given particle set (specified by its leg numbers).
3156 """
3157
3158 try:
3159@@ -1204,13 +1487,18 @@
3160 "in this momentum routing dictionary:\n" + str(momentum_dict) )
3161
3162 def __str__(self, print_n=True, print_pdg=False, print_state=False):
3163-
3164 return self.reconstruct_complete_singular_structure().__str__(
3165 print_n=print_n, print_pdg=print_pdg, print_state=print_state )
3166
3167 def nice_string(self, lead=" ", tab=" "):
3168+
3169+ CT_type = ''
3170+ if type(self) == Counterterm:
3171+ CT_type = '[local] '
3172+ elif type(self) == IntegratedCounterterm:
3173+ CT_type = '[integrated] '
3174
3175- tmp_str = lead + self.process.nice_string(0, True, False)
3176+ tmp_str = lead + CT_type + self.process.nice_string(0, True, False)
3177 tmp_str += " ("
3178 tmp_str += " ".join(
3179 str(leg['number'])
3180@@ -1273,23 +1561,32 @@
3181
3182 return None
3183
3184- def get_daughter_pdgs(self, leg_number, state):
3185- """Walk down the tree of currents to find the pdgs 'attached'
3186- to a given leg number of the reduced process.
3187- """
3188+ @classmethod
3189+ def get_daughter_legs(cls, leg_number, momentum_dict):
3190+ """ Walk down the momentum dictionary tree to find the outermost daughter legs
3191+ coming from the leg_number specified."""
3192
3193 external_leg_numbers = []
3194 intermediate_leg_number = [leg_number, ]
3195 while len(intermediate_leg_number) > 0:
3196 next_leg = intermediate_leg_number.pop(0)
3197- # Check if this leg is final
3198- daughters = self.momenta_dict[next_leg]
3199+ # Check if this leg is external
3200+ daughters = momentum_dict[next_leg]
3201 if daughters == frozenset([next_leg,]):
3202 external_leg_numbers.append(next_leg)
3203 else:
3204 intermediate_leg_number = list(daughters) + intermediate_leg_number
3205
3206- # Now that we have the external leg numbers, find the corresponding pdg
3207+ return external_leg_numbers
3208+
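The leaf walk implemented by `get_daughter_legs` above can be exercised standalone with a toy momentum routing dictionary (keys are leg numbers; a leg mapping to the singleton frozenset of itself is external):

```python
def get_daughter_legs(leg_number, momentum_dict):
    """Walk the momentum routing dictionary down to the external daughter legs."""
    external_leg_numbers = []
    intermediate = [leg_number]
    while intermediate:
        next_leg = intermediate.pop(0)
        daughters = momentum_dict[next_leg]
        if daughters == frozenset([next_leg]):  # external leg: maps to itself
            external_leg_numbers.append(next_leg)
        else:
            intermediate = list(daughters) + intermediate
    return external_leg_numbers

# Toy routing: leg 6 is the ancestor of {5, 2}, and leg 5 of {3, 4}
momenta_dict = {2: frozenset([2]), 3: frozenset([3]), 4: frozenset([4]),
                5: frozenset([3, 4]), 6: frozenset([5, 2])}
print(sorted(get_daughter_legs(6, momenta_dict)))  # [2, 3, 4]
```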
3209+ def get_daughter_pdgs(self, leg_number, state):
3210+ """Walk down the tree of currents to find the pdgs 'attached'
3211+ to a given leg number of the reduced process.
3212+ """
3213+
3214+ external_leg_numbers = self.get_daughter_legs(leg_number, self.momenta_dict)
3215+
3216+ # Now that we have the external leg numbers, find the corresponding PDGs
3217 pdgs = []
3218 for n in external_leg_numbers:
3219 leg = self.find_leg(n)
3220@@ -1394,6 +1691,14 @@
3221 total_unresolved += node.count_unresolved()
3222 return total_unresolved
3223
3224+ def get_number_of_couplings(self):
3225+ """Count the number of couplings expected in the currents of this counterterm."""
3226+
3227+ total_couplings = 0
3228+ for node in self.nodes:
3229+ total_couplings += node.get_number_of_couplings()
3230+ return total_couplings
3231+
3232 def get_all_currents(self):
3233 """Return a list of all currents involved in this counterterm."""
3234
3235@@ -1402,36 +1707,90 @@
3236 currents += node.get_all_currents()
3237 return currents
3238
3239- def get_reduced_flavors(self, defining_flavors=None, IR_subtraction=None):
3240- """Given the defining flavors corresponding to the resolved process (as a dictionary),
3241- return a *list* of flavors corresponding to the flavor assignment of the reduced process
3242- given the defining flavors.
3243- """
3244+ def get_beam_currents(self):
3245+ """ Returns a list of dictionaries of the form
3246+ { 'beam_one' : beam_current_1 # None for no convolution
3247+ 'beam_two' : beam_current_2 # None for no convolution
3248+ } """
3249+
3250+ # Copy the beam currents from the reduced process to avoid side effects
3251+ # when further post-processing them below.
3252+ beam_currents = [dict(bf) for bf in self.current['beam_factorization']]
3253+
3254+ # First fetch all currents making up this counterterm
3255+ for current in self.get_all_currents():
3256+ if type(current) not in [BeamCurrent]:
3257+ continue
3258+
3259+ # Now get all legs of its singular structure
3260+ all_legs = current['singular_structure'].get_all_legs()
3261+
3262+ if any(l.n==1 for l in all_legs):
3263+ for bcs in beam_currents:
3264+ if bcs['beam_one'] is not None:
3265+ raise MadGraph5Error('The beam factorization currents from the reduced'+
3266+ ' process and the ones in the counterterms must never apply to the same beam (#1).')
3267+ bcs['beam_one'] = current
3268+ if any(l.n==2 for l in all_legs):
3269+ for bcs in beam_currents:
3270+ if bcs['beam_two'] is not None:
3271+ raise MadGraph5Error('The beam factorization currents from the reduced'+
3272+ ' process and the ones in the counterterms must never apply to the same beam (#2).')
3273+ bcs['beam_two'] = current
3274+
3275+ return beam_currents
3276+
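The per-beam slotting logic above (raise if two currents claim the same beam) can be sketched in isolation; the names here are illustrative, not the actual MadNkLO API:

```python
def assign_beam_current(beam_currents, current, leg_numbers):
    """Slot `current` into 'beam_one'/'beam_two' according to which beam legs
    (1 and/or 2) appear in its singular structure, refusing to overwrite a
    slot already occupied (e.g. by the reduced process)."""
    for slot, beam_leg in (('beam_one', 1), ('beam_two', 2)):
        if beam_leg in leg_numbers:
            for bcs in beam_currents:
                if bcs[slot] is not None:
                    raise ValueError("Two currents apply to the same beam: %s" % slot)
                bcs[slot] = current
    return beam_currents

slots = [{'beam_one': None, 'beam_two': None}]
assign_beam_current(slots, 'F(beam 1)', [1])
print(slots)  # [{'beam_one': 'F(beam 1)', 'beam_two': None}]
```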
3277+ def set_reduced_flavors_map(self, defining_process, mapped_processes, IR_subtraction):
3278+ """Given the defining and mapped processes subject to this counterterm,
3279+ this function sets the instance attribute 'reduced_flavor_map' which can be
3280+ used later to easily access the corresponding reduced flavor."""
3281+
3282 if self.count_unresolved() > 1 and any(
3283 any(substruct.name()=='S' for substruct in current['singular_structure'].substructures)
3284- for current in self.get_all_currents()):
3285+ for current in self.get_all_currents() if isinstance(current,(
3286+ IntegratedBeamCurrent,IntegratedCurrent,BeamCurrent))):
3287 raise MadGraph5Error("The function 'get_reduced_flavors' of the class Counterterm"+
3288- " does not yet support NNLO soft structures such as: %s"%str(self))
3289-
3290- # Now construct the reduced flavors list
3291- if defining_flavors is None or IR_subtraction is None:
3292- # If no defining flavors are specified then simply use the flavors already
3293- # filled in the reduced process.
3294- reduced_flavors = self.process.get_cached_initial_final_pdgs()
3295- else:
3296- number_to_flavors_map = dict(defining_flavors)
3297+ " does not yet support integrated NNLO soft structures such as: %s"%str(self))
3298+
3299+ self.reduced_flavors_map = {}
3300+ for i_proc, process in enumerate([defining_process,]+mapped_processes):
3301+ number_to_flavors_map = { l['number'] : l['id'] for l in process.get('legs') }
3302+
3303 # Update the number_to_flavors_map dictionary by walking through the nodes
3304 for node in self.nodes:
3305 node.get_reduced_flavors(
3306 number_to_flavors_map, IR_subtraction, self.momenta_dict)
3307 reduced_process_leg_numbers = self.process.get_cached_initial_final_numbers()
3308-
3309 reduced_flavors = (
3310 tuple( number_to_flavors_map[n] for n in reduced_process_leg_numbers[0] ),
3311 tuple( number_to_flavors_map[n] for n in reduced_process_leg_numbers[1] )
3312 )
3313-
3314- return reduced_flavors
3315+ self.reduced_flavors_map[process.get_cached_initial_final_pdgs()] = reduced_flavors
3316+ # Also add a None key for the default reduced flavors corresponding to the
3317+ # resolved ones of the defining process.
3318+ if i_proc==0:
3319+ self.reduced_flavors_map[None] = reduced_flavors
3320+
3321+ def get_reduced_flavors(self, defining_flavors=None):
3322+ """Given the defining flavors as a tuple (initial_state_PDGs, final_state_PDGs),
3323+ returns in the same format the flavors corresponding to the flavor assignment of
3324+ the reduced process given the defining flavors.
3325+ If the defining defining_flavors is set to None, then the default reduced_flavors
3326+ will be returned, corresponding to the defining process that yielded this counterterm.
3327+ """
3328+
3329+ if self.reduced_flavors_map is None:
3330+ raise MadGraph5Error("The function 'set_reduced_flavors_map' of the counterterm"+
3331+ " should be called before the function 'get_reduced_flavors'.")
3332+
3333+ try:
3334+ return self.reduced_flavors_map[defining_flavors]
3335+ except KeyError:
3336+ raise MadGraph5Error(
3337+ "The reduced flavor configuration for the resolved configuration "+
3338+ "'%s' was not found in the"%str(defining_flavors)+
3339+ " reduced flavors map:\n%s\nof this counterterm:\n%s"%(
3340+ str(self.reduced_flavors_map),self.nice_string()))
3341
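The contract of `reduced_flavors_map` and `get_reduced_flavors` above can be illustrated with made-up PDG assignments (hypothetical values, for illustration only): keys are the resolved (initial, final) PDG tuples, values the corresponding reduced ones, and the `None` key stores the default of the defining process.

```python
# Illustrative contents of a counterterm's reduced_flavors_map (made-up PDGs):
reduced_flavors_map = {
    ((21, 21), (1, -1, 21)): ((21, 21), (1, -1)),  # g g > d d~ g  ->  g g > d d~
    ((21, 21), (2, -2, 21)): ((21, 21), (2, -2)),  # g g > u u~ g  ->  g g > u u~
    None:                    ((21, 21), (1, -1)),  # defining-process default
}

def get_reduced_flavors(defining_flavors=None):
    """Mimics the lookup semantics of Counterterm.get_reduced_flavors."""
    try:
        return reduced_flavors_map[defining_flavors]
    except KeyError:
        raise KeyError("No reduced flavors for %s" % str(defining_flavors))

print(get_reduced_flavors())                         # ((21, 21), (1, -1))
print(get_reduced_flavors(((21, 21), (2, -2, 21))))  # ((21, 21), (2, -2))
```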
3342 def get_reduced_quantities(self, reduced_PS_dict, defining_flavors=None):
3343 """Given the PS *dictionary* providing the reduced kinematics returned
3344@@ -1462,25 +1821,134 @@
3345 class IntegratedCounterterm(Counterterm):
3346 """A class for the integrated counterterm."""
3347
3348+ def __str__(self, print_n=True, print_pdg=False, print_state=False):
3349+ """ A nice short string for the structure of this integrated CT."""
3350+
3351+ reconstructed_ss = self.reconstruct_complete_singular_structure()
3352+
3353+ # We deconstruct the reconstructed ss so as to be able to render the information about
3354+ # whether beam currents are local counterterms or bulk ones.
3355+ res = []
3356+ for i, node in enumerate(self.nodes):
3357+ if isinstance(node.current, BeamCurrent) and node.current['distribution_type']=='bulk':
3358+ template = ':%s:,'
3359+ else:
3360+ template = '%s,'
3361+ res.append(template%(reconstructed_ss.substructures[i].__str__(
3362+ print_n=print_n, print_pdg=print_pdg, print_state=print_state )))
3363+
3364+ return '[%s]'%(''.join(res))
3365+
3366 def get_integrated_current(self):
3367 """ This function only makes sense in the current context where we don't allow
3368 the factorization of the integrated counterterm into several integrated currents."""
3369
3370 # For now we only support a basic integrated counterterm which is not broken
3371 # down into subcurrents but contains a single CountertermNode with a single
3372- # current in it that contains the whole singular structure describing this
3373- # integrated counterterm.
3374- if len(self.nodes) != 1 or len(self.nodes[0].nodes) != 0:
3375+ # current *or* a BeamCurrent and no CountertermNode
3376+ if len(self.nodes) == 1 and len(self.nodes[0].nodes) == 0:
3377+ assert isinstance(self.nodes[0].current, (IntegratedCurrent, BeamCurrent, IntegratedBeamCurrent))
3378+ return self.nodes[0].current
3379+ else:
3380 raise MadGraph5Error(
3381- """For now, MadEvent7 only support simple integrated
3382+ """For now, MadNkLO only supports simple integrated
3383 counterterms that consists of single current encompassing the full
3384- singular structure that must be analytically integrated over.""")
3385- assert(isinstance(self.nodes[0].current, IntegratedCurrent))
3386-
3387- return self.nodes[0].current
3388+ singular structure that must be analytically integrated over, not:\n%s"""%str(self))
3389+
3390+
3391+ def get_beam_currents(self):
3392+ """ Returns a list of dictionaries of the form
3393+ { 'beam_one' : beam_current_1 # None for no convolution
3394+ 'beam_two' : beam_current_2 # None for no convolution
3395+ <optional> 'correlated_convolution' : beam_current # None for no correlated convolution
3396+ }
3397+ Note that whenever 'correlated_convolution' is not set to None, the other beam_currents
3398+ will be None.
3399+ """
3400+
3401+ beam_currents = [dict(bf) for bf in self.current['beam_factorization']]
3402+
3403+ # First fetch all integrated currents making up this integrated counterterm
3404+ # In the current implementation, there can only be one for now.
3405+ integrated_current = self.get_integrated_current()
3406+
3407+ if type(integrated_current) not in [BeamCurrent, IntegratedBeamCurrent]:
3408+ return beam_currents
3409+
3410+ if integrated_current['singular_structure'].does_require_correlated_beam_convolution():
3411+ for bcs in beam_currents:
3412+ if bcs['beam_one'] is not None or bcs['beam_two'] is not None:
3413+ raise MadGraph5Error('The beam factorization currents from the reduced'+
3414+ ' process should all be None if the integrated counterterm necessitates a correlated'+
3415+ ' beam convolution.')
3416+ bcs['correlated_convolution'] = integrated_current
3417+ return beam_currents
3418+
3419+ # Now get all legs of its singular structure
3420+ all_legs = integrated_current['singular_structure'].get_all_legs()
3421+
3422+ if any(l.n==1 for l in all_legs):
3423+ for bcs in beam_currents:
3424+ if bcs['beam_one'] is not None:
3425+ raise MadGraph5Error('The beam factorization currents from the reduced'+
3426+ ' process and the ones in the integrated CT must never apply to the same beam (#1).')
3427+ bcs['beam_one'] = integrated_current
3428+ if any(l.n==2 for l in all_legs):
3429+ for bcs in beam_currents:
3430+ if bcs['beam_two'] is not None:
3431+ raise MadGraph5Error('The beam factorization currents from the reduced'+
3432+ ' process and the ones in the integrated CT must never apply to the same beam (#2).')
3433+ bcs['beam_two'] = integrated_current
3434+
3435+ return beam_currents
3436+
3437+ def does_require_correlated_beam_convolution(self):
3438+ """ Checks whether this integrated counterterm requires a host contribution featuring
3439+ correlated convolution of the beams (i.e. 'BS', 'VS', etc... contributions).
3440+ This is the case only for integrated counterterms with a pure-soft *BeamCurrent* that
3441+ originated from a colorful subtraction currents scheme with mappings such as
3442+ ppToOneWalker, which recoil the soft momentum equally against both initial-state beams."""
3443+
3444+ # Fetch all integrated currents making up this integrated counterterm
3445+ # In the current implementation, there can only be one for now.
3446+ integrated_current = self.get_integrated_current()
3447+ return integrated_current.does_require_correlated_beam_convolution()
3448+
3449+ def get_necessary_beam_convolutions(self):
3450+ """ Returns a set of beam names ('beam_one' or 'beam_two') that must be active
3451+ in the contribution that will host this counterterm"""
3452+
3453+ necessary_beam_convolutions = set([])
3454+
3455+ # First analyze the reduced process
3456+ for bft in self.process['beam_factorization']:
3457+ for beam_name, beam_current in bft.items():
3458+ if beam_current is not None:
3459+ necessary_beam_convolutions.add(beam_name)
3460+
3461+ # Then fetch all integrated currents making up this integrated counterterm
3462+ # In the current implementation, there can only be one for now.
3463+ integrated_current = self.get_integrated_current()
3464+
3465+ if type(integrated_current) not in [BeamCurrent,]:
3466+ return necessary_beam_convolutions
3467+
3468+ # Check if this integrated counterterm necessitates a soft recoil that demands
3469+ # a correlated convolution of both beams
3470+ if integrated_current.does_require_correlated_beam_convolution():
3471+ return set(['beam_one','beam_two'])
3472+
3473+ # Now get all legs of its singular structure
3474+ all_legs = integrated_current['singular_structure'].get_all_legs()
3475+
3476+ if any(l.n==1 for l in all_legs):
3477+ necessary_beam_convolutions.add('beam_one')
3478+ if any(l.n==2 for l in all_legs):
3479+ necessary_beam_convolutions.add('beam_two')
3480+ return necessary_beam_convolutions
3481
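The decision flow of `get_necessary_beam_convolutions` above can be condensed into a standalone sketch (illustrative function and arguments, not the actual counterterm API):

```python
def necessary_beam_convolutions(reduced_bft, integrated_current_legs,
                                correlated_soft_recoil):
    """Which of 'beam_one'/'beam_two' must be active in the host contribution.
    `reduced_bft` is a list of {'beam_one': ..., 'beam_two': ...} dicts
    (None = no convolution); `integrated_current_legs` lists the beam leg
    numbers touched by the integrated current; `correlated_soft_recoil` marks
    a ppToOneWalker-style soft recoil forcing a correlated convolution."""
    needed = set()
    # Beam convolutions already required by the reduced process
    for bft in reduced_bft:
        for name, current in bft.items():
            if current is not None:
                needed.add(name)
    # A correlated soft recoil demands both beams regardless of the rest
    if correlated_soft_recoil:
        return {'beam_one', 'beam_two'}
    if 1 in integrated_current_legs:
        needed.add('beam_one')
    if 2 in integrated_current_legs:
        needed.add('beam_two')
    return needed
```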
3482 def get_reduced_kinematics(self, input_reduced_PS):
3483- """Given the PS *dictionary* providing the reduced kinematics corresponding to
3484+ """Given the PS specifying the reduced kinematics corresponding to
3485 the current PS point thrown at the virtual contribution, return a *dictionary* of
3486 momenta corresponding to the reduced kinematics."""
3487
3488@@ -1510,20 +1978,128 @@
3489
3490 return reduced_PS
3491
3492+ def n_loops_in_host_contribution(self):
3493+ """ Returns the number of loops that the contribution hosting this integrated counterterm
3494+ is supposed to have. Factorization contributions render this quantity more subtle than
3495+ the naive n_loops() call to self."""
3496+
3497+ # First account for all the loops of the building blocks
3498+ host_contrib_n_loops = self.n_loops()
3499+ host_contrib_n_loops += self.process.get_n_loops_in_beam_factorization()
3500+
3501+ # We must then add one loop for each unresolved parton of this integrated CT.
3502+ host_contrib_n_loops += self.count_unresolved()
3503+
3504+
3505+ # We must now adjust the number of loops expected in the host contributions because
3506+ # some integrated counterterms don't yield loops in the host contributions, but instead
3507+ # beam factorisation convolutions.
3508+ beam_numbers_for_which_n_loops_is_already_decremented = []
3509+ for current in self.get_all_currents():
3510+ if not type(current)==BeamCurrent:
3511+ continue
3512+ # All the soft structures appearing as 'BeamCurrents' come from the colorful currents scheme
3513+ # when a mapping such as ppToOneWalker is used, which recoils equally against both beams.
3514+ # In this case, the corresponding integrated soft CT is placed in dedicated contributions
3515+ # such as 'BS', 'VS' etc... which are defined without an increment on n_loops corresponding
3516+ # to the unresolved soft particle, so it must be decremented here
3517+ for c in current['singular_structure'].decompose():
3518+ if c.name()=='S':
3519+ host_contrib_n_loops -= 1
3520+
3521+ if not current.does_require_correlated_beam_convolution():
3522+ # Finally, the ISR beamCurrent contributions do have a non-zero number of unresolved legs,
3523+ # which increased the loop count in the line above. However, given that those must be placed
3524+ # together with the beam factorization currents, we should not account for these loops.
3525+ beam_numbers_in_currents = set(sum([[l.n for l in ss.get_all_legs() if l.n in [1,2]]
3526+ for ss in current['singular_structure'].substructures if ss.name()=='C'],[]))
3527+ for beam_number in beam_numbers_in_currents:
3528+ if beam_number not in beam_numbers_for_which_n_loops_is_already_decremented:
3529+ beam_numbers_for_which_n_loops_is_already_decremented.append(beam_number)
3530+ host_contrib_n_loops -= 1
3531+
3532+ # Similarly, the *integrated* beamCurrent F terms have zero unresolved legs,
3533+ # but they must be placed together with the virtuals which have a loop count. We must
3534+ # therefore increase the loop count by *at most* one for each of the two beams that has
3535+ # at least one integratedBeamCurrent F structure attached.
3536+ beam_numbers_for_which_n_loops_is_already_incremented = []
3537+ for current in self.get_all_currents():
3538+ if not type(current)==IntegratedBeamCurrent:
3539+ continue
3540+
3541+ # We don't need to look recursively inside the singular structures since the beam
3542+ # factorization ones are supposed to be at the top level, given that they factorize.
3543+ beam_numbers_in_currents = set(sum([[l.n for l in ss.legs] for ss in
3544+ current['singular_structure'].substructures if ss.name()=='F'],[]))
3545+ for beam_number in beam_numbers_in_currents:
3546+ assert (beam_number in [1,2]), "Inconsistent beam number found."
3547+ if beam_number not in beam_numbers_for_which_n_loops_is_already_incremented:
3548+ beam_numbers_for_which_n_loops_is_already_incremented.append(beam_number)
3549+ host_contrib_n_loops += 1
3550+
3551+ return host_contrib_n_loops
3552+
3553+ def get_n_loops_in_beam_factorization_of_host_contribution(self):
3554+ """ Return the effective number of loops which will be part of the beam factorization
3555+ of the host contribution."""
3556+
3557+ # Start by counting the number of loops in the beam factorization of the
3558+ # reduced process.
3559+ n_loops = self.process.get_n_loops_in_beam_factorization()
3560+
3561+ # Then add those of the xi-dependent integrated ISR counterterm and local F ones.
3562+ beam_numbers_in_currents = set([])
3563+ soft_beam_factorisation_n_loop = 0
3564+ for current in self.get_all_currents():
3565+ if not type(current)==BeamCurrent:
3566+ continue
3567+ # For the integrated soft counterterms that go in the BS, VS etc... contributions,
3568+ # we must only subtract exactly one, as this corresponds to exactly one convolution,
3569+ # irrespective of the number of unresolved particles of the counterterm.
3570+ if self.does_require_correlated_beam_convolution():
3571+ soft_beam_factorisation_n_loop = 1
3572+ else:
3573+ # We don't need to look recursively inside the singular structures since the beam
3574+ # factorization ones are supposed to be at the top level, given that they factorize.
3575+ beam_numbers_in_current = set(sum([[l.n for l in ss.legs if l.n in [1,2]] for ss in
3576+ current['singular_structure'].substructures if ss.name()=='C'],[]))
3577+ beam_numbers_in_currents |= beam_numbers_in_current
3578+ n_loops += current['n_loops'] + current.count_unresolved()
3579+ # We must remove one loop per beam number encountered in integrated ISR collinear CT
3580+ # since the leading beam factorization counterterm has zero loop but count_unresolved()
3581+ # counts for one there.
3582+ n_loops -= (len(beam_numbers_in_currents) + soft_beam_factorisation_n_loop)
3583+
3584+ return n_loops
3585+
3586 #=========================================================================================
3587 # IRSubtraction
3588 #=========================================================================================
3589
3590 class IRSubtraction(object):
3591
3592- def __init__(self, model, n_unresolved, coupling_types=('QCD', )):
3593+ _allowed_model_names = ['sm', 'loop_sm', 'loopsm',
3594+ 'simple_qcd','loop_qcd_qed_sm','hc_nlo_x0_ufo']
3595+
3596+ def __init__(self, model, n_unresolved, coupling_types=('QCD', ), beam_types=(None,None),
3597+ currents_scheme = 'colorful', mappings_scheme = 'LorentzNLO'):
3598 """Initialize a IR subtractions for a given model,
3599 correction order and type.
3600 """
3601-
3602- self.model = model
3603- self.n_unresolved = n_unresolved
3604+ self.model = model
3605+ self.n_unresolved = n_unresolved
3606 self.coupling_types = coupling_types
3607+ self.beam_types = beam_types
3608+
3609+ self.currents_scheme = currents_scheme
3610+ self.mappings_scheme = mappings_scheme
3611+ # Decide whether the soft momentum recoils against the initial states
3612+ if self.currents_scheme in _currents_schemes_requiring_soft_beam_factorization and \
3613+ self.mappings_scheme in _mappings_schemes_requiring_soft_beam_factorization:
3614+ self.soft_do_recoil_against_initial_states = True
3615+ else:
3616+ self.soft_do_recoil_against_initial_states = False
3617+
3618 # Map perturbed coupling orders to the corresponding interactions and particles.
3619 # The entries of the dictionary are
3620 # 'interactions', 'pert_particles' and 'soft_particles'
3621@@ -1561,7 +2137,8 @@
3622
3623 if not any(
3624 self.model.get('name').lower().startswith(name)
3625- for name in ['sm', 'loop_sm', 'loopsm', 'simple_qcd'] ):
3626+ for name in self._allowed_model_names
3627+ ):
3628 raise InvalidCmd(
3629 "parent_PDGs_from_PDGs is implemented for SM only, "
3630 "not in model %s." % self.model.get('name') )
3631@@ -1607,8 +2184,10 @@
3632 pdgs.append(cur)
3633 else:
3634 pdgs.append(self.model.get_particle(cur).get_anti_pdg_code())
3635+
3636 # Get the parent PDG of this leg set
3637 parent_PDGs = self.parent_PDGs_from_PDGs(pdgs)
3638+
3639 # Cross back if the leg set referred to initial-state
3640 final_PDGs = []
3641 for parent_PDG in parent_PDGs:
3642@@ -1618,6 +2197,7 @@
3643 )
3644 else:
3645 final_PDGs.append(parent_PDG)
3646+
3647 return final_PDGs
3648
3649 def can_become_soft(self, legs):
3650@@ -1683,6 +2263,18 @@
3651 coll_set = (coll_initial_leg, ) + coll_initial_set
3652 if self.can_become_collinear(coll_set):
3653 elementary_operator_list.append(CollOperator(*coll_set))
3654+
3655+ # Finally add an elementary BeamOperator if the process acted on has factorization
3656+ # Beam currents attached to it.
3657+ beam_names_added = []
3658+ for pbft in process.get('beam_factorization'):
3659+ for beam_name in ['beam_one','beam_two']:
3660+ if beam_name not in beam_names_added and (pbft[beam_name] is not None):
3661+ beam_names_added.append(beam_name)
3662+ elementary_operator_list.append(
3663+ BeamOperator(*pbft[beam_name]['singular_structure'].get_all_legs()))
3664+
3665+
3666 return SingularOperatorList(elementary_operator_list)
3667
3668
3669@@ -1767,8 +2359,10 @@
3670
3671 # The squared orders of the reduced process will be set correctly later
3672 reduced_process = reduced_process.get_copy(
3673- ['legs', 'n_loops', 'legs_with_decays']
3674+ ['legs', 'n_loops', 'legs_with_decays','beam_factorization']
3675 )
3676+ # We need a deep copy of the beam_factorization here.
3677+ reduced_process['beam_factorization'] = copy.deepcopy(reduced_process['beam_factorization'])
3678 # Empty legs_with_decays: it will be regenerated automatically when asked for
3679 reduced_process['legs_with_decays'][:] = []
3680 # TODO If resetting n_loops, what about orders?
3681@@ -1778,7 +2372,6 @@
3682 nodes = []
3683
3684 # 2. Recursively look into substructures
3685-
3686 current_args = set(structure.legs)
3687 for substructure in structure.substructures:
3688 node = self.get_counterterm(
3689@@ -1808,12 +2401,24 @@
3690 # Replace soft structures with their flattened versions
3691 elif current_structure.name() == "S":
3692 current_args.add(current_structure)
3693+ # Set all non-identity 'bulk' beam factorization currents of that leg in the
3694+ # reduced process to the identity operator (i.e. None).
3695+ elif current_structure.name() == "F":
3696+ if len(current_structure.legs)!=1 or current_structure.legs[0].n not in [1,2]:
3697+ raise MadGraph5Error("Only beam factorizations attached to leg number 1 or"+
3698+ " 2 are supported for now; not %s."%str(current_structure))
3699+ current_args.add(current_structure)
3700+ beam_names = {1:'beam_one', 2:'beam_two'}
3701+ # Change the distribution type of all beam_factorization terms of that beam
3702+ # in the reduced process to be the identity (i.e. None).
3703+ for bft in reduced_process.get('beam_factorization'):
3704+ beam_name = beam_names[current_structure.legs[0].n]
3705+ if bft[beam_name] is not None:
3706+ bft[beam_name] = None
3707+
3708 # Other structures need to be implemented
3709 else:
3710- raise MadGraph5Error(
3711- "Unrecognized current of type %s" %
3712- str(type(current_structure))
3713- )
3714+ raise MadGraph5Error("Unrecognized current of type %s" % str(type(current_structure)))
3715 # Add this node
3716 nodes.append(node)
3717
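The hunk above notes that a deep copy of the 'beam_factorization' entry is needed after `get_copy`. A minimal sketch (with a hypothetical plain-dict stand-in for the process record, not the actual MadNkLO classes) of why a shallow copy alone would let the `bft[beam_name]=None` mutation leak back into the original process:

```python
import copy

# Hypothetical stand-in for a process record whose get_copy() only copies the
# top-level mapping: without the explicit deepcopy performed in the diff above,
# mutating a nested 'beam_factorization' entry would leak back into the original.
original = {'beam_factorization': [{'beam_one': 'bulk', 'beam_two': None}]}

reduced = dict(original)  # shallow copy: nested dicts are still shared
reduced['beam_factorization'] = copy.deepcopy(original['beam_factorization'])
reduced['beam_factorization'][0]['beam_one'] = None  # set to the identity

# The original process is untouched only because of the deepcopy
untouched = original['beam_factorization'][0]['beam_one']
```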
3718@@ -1833,12 +2438,30 @@
3719 # 3. Else build the current and update
3720 # the reduced process as well as the dictionary
3721 current_type = type(structure)(*current_args)
3722- current = Current({
3723- 'singular_structure': current_type })
3724+ if current_type.name()=='F':
3725+ # The beam factorization singular structure always has a single leg whose
3726+ # number matches the beam number.
3727+ beam_number = current_type.legs[0].n
3728+ assert (beam_number in [1,2]), "Beam factorization structures are expected to "+\
3729+ "have a single leg with number in [1,2]."
3730+ current = BeamCurrent({
3731+ 'beam_type' : self.beam_types[beam_number-1][0],
3732+ 'beam_PDGs' : self.beam_types[beam_number-1][1],
3733+ 'distribution_type' : 'counterterm',
3734+ 'singular_structure': current_type })
3735+ else:
3736+ current = Current({'singular_structure': current_type })
3737 structure_legs = current_type.get_all_legs()
3738 structure_leg_ns = frozenset(leg.n for leg in structure_legs)
3739+ if current_type.name()=='F':
3740+ # We must not remove legs from the reduced process for beam factorization
3741+ # contributions
3742+ structure_leg_ns = frozenset()
3743+ else:
3744+ structure_leg_ns = frozenset(leg.n for leg in structure_legs)
3745+
3746 parent = None
3747- if structure.name() == "C":
3748+ if structure.name() in ["C"]:
3749 # Add entry to dictionary
3750 parent_index = len(momenta_dict_so_far) + 1
3751 momenta_dict_so_far[parent_index] = structure_leg_ns
3752@@ -1853,10 +2476,11 @@
3753 elif structure.name() == "S":
3754 # No parent
3755 pass
3756+ elif structure.name() == "F":
3757+ # No need to propagate parents for F structures
3758+ pass
3759 else:
3760- raise MadGraph5Error(
3761- "Building unrecognized current of type %s" %
3762- str(type(structure)) )
3763+ raise MadGraph5Error("Building unrecognized current of type %s" %str(type(structure)) )
3764 # Remove legs of this structure
3765 legs_to_remove = []
3766 for leg in reduced_process['legs']:
3767@@ -1888,13 +2512,15 @@
3768 # First check that the local counterterm is singular, because if not then we
3769 # should of course not return any integrated counterterm.
3770 if not local_counterterm.is_singular():
3771- return None
3772-
3773- complete_singular_structure = local_counterterm. \
3774- reconstruct_complete_singular_structure()
3775+ return []
3776+
3777+ complete_singular_structure = local_counterterm.reconstruct_complete_singular_structure()
3778+
3779+ # Useful to also have access to the flattened complete singular structure
3780+ flatten_singular_structure = complete_singular_structure.decompose()
3781
3782 reduced_process = local_counterterm.process.get_copy(
3783- ['legs', 'n_loops', 'legs_with_decays', 'squared_orders'] )
3784+ ['legs', 'n_loops', 'legs_with_decays', 'squared_orders', 'beam_factorization'] )
3785
3786 # The following sums all the loop numbers in all nodes and reduced process,
3787 # the latter of which must then be removed
3788@@ -1923,17 +2549,117 @@
3789 #
3790 ########
3791
3792- integrated_current = IntegratedCurrent({
3793- 'n_loops': n_loops,
3794- 'squared_orders': squared_orders,
3795- 'resolve_mother_spin_and_color': True,
3796- 'singular_structure': complete_singular_structure })
3797-
3798- return IntegratedCounterterm(
3799- process=reduced_process,
3800- nodes=[CountertermNode(current=integrated_current), ],
3801- momenta_dict=bidict(local_counterterm.momenta_dict),
3802- prefactor=-1. * local_counterterm.prefactor )
3803+ ########
3804+ #
3805+ # TODO handle the mixed final-initial local counterterm case. This issue is more general, however, and pertains
3806+ # to the definition of the integrated counterterm(s) of any set of disjoint local counterterms.
3807+ # For now only the case of a single initial-state collinear local CT is supported.
3808+ #
3809+ ########
3810+
3811+ # First tackle the special case of single initial-state collinear structure
3812+ initial_state_legs = [leg for leg in complete_singular_structure.substructures[0].legs if
3813+ leg.state == SubtractionLeg.INITIAL]
3814+
3815+ # A list to hold all integrated CTs generated here.
3816+ integrated_CTs = []
3817+
3818+ # Check if this local counterterm has a soft component that recoils symmetrically
3819+ # against both beams.
3820+ has_soft_symmetric_ISR_recoil = (self.soft_do_recoil_against_initial_states and
3821+ complete_singular_structure.does_require_correlated_beam_convolution())
3822+
3823+ # Handle the specific case of single initial-state pure collinear counterterm or
3824+ # soft recoil against IS.
3825+ if (len(initial_state_legs) == 1 and not any(ss.name()=='S'
3826+ for ss in flatten_singular_structure)) or has_soft_symmetric_ISR_recoil:
3827+
3828+ if has_soft_symmetric_ISR_recoil:
3829+ # In the case of soft recoiling against initial states, the beam_type should not matter.
3830+ beam_type = None
3831+ beam_PDGs = None
3832+ else:
3833+ beam_number = initial_state_legs[0].n
3834+ beam_names = {1:'beam_one', 2:'beam_two'}
3835+
3836+ assert (beam_number in [1,2]), "MadNkLO only supports initial state legs with number in [1,2]."
3837+ beam_type = self.beam_types[beam_number-1][0] if self.beam_types[beam_number-1] else None
3838+ beam_PDGs = self.beam_types[beam_number-1][1] if self.beam_types[beam_number-1] else None
3839+
3840+ beam_current_options = {
3841+ 'beam_type' : beam_type,
3842+ 'beam_PDGs' : beam_PDGs,
3843+ 'n_loops' : n_loops,
3844+ 'squared_orders': squared_orders,
3845+ 'singular_structure': complete_singular_structure
3846+ }
3847+
3848+ # For now there will be no additional CT node, but for NNLO there may be because
3849+ # the integrated ISR has to be combined with local FSR counterterms.
3850+ additional_CT_nodes = []
3851+
3852+ # Handle the specific case of NLO single beam factorization counterterm
3853+ if complete_singular_structure.substructures[0].name() == 'F':
3854+
3855+ integrated_CTs.append(IntegratedCounterterm(
3856+ process = reduced_process,
3857+ nodes = [
3858+ CountertermNode(current=IntegratedBeamCurrent(beam_current_options)),
3859+ ]+additional_CT_nodes,
3860+ momenta_dict= bidict(local_counterterm.momenta_dict),
3861+ prefactor = -1. * local_counterterm.prefactor ))
3862+
3863+ # Handle the specific case of ISR recoils, arising either from ISR collinear or
3864+ # soft counterterms in the colorful scheme and ppToOneWalker mapping.
3865+ elif has_soft_symmetric_ISR_recoil or complete_singular_structure.substructures[0].name() == 'C':
3866+
3867+ # Now add the three pieces of the integrated ISR. Ultimately we may choose to instead
3868+ # generate only the first piece ('bulk') and the other two via an iterative application
3869+ # of the subtraction procedure. For now however, we will generate all three at once.
3870+
3871+ # Starting with the integrated endpoint:
3872+ integrated_CTs.append(IntegratedCounterterm(
3873+ process = reduced_process,
3874+ nodes = [
3875+ CountertermNode(current=IntegratedBeamCurrent(beam_current_options)),
3876+ ]+additional_CT_nodes,
3877+ momenta_dict= bidict(local_counterterm.momenta_dict),
3878+ prefactor = -1. * local_counterterm.prefactor ))
3879+
3880+ # Then the bulk and counterterm integrated ISR pieces, which are both xi dependent.
3881+ beam_current_options['distribution_type'] = 'bulk'
3882+ integrated_CTs.append(IntegratedCounterterm(
3883+ process = reduced_process,
3884+ nodes = [
3885+ CountertermNode(current=BeamCurrent(beam_current_options)),
3886+ ]+additional_CT_nodes,
3887+ momenta_dict= bidict(local_counterterm.momenta_dict),
3888+ prefactor = -1. * local_counterterm.prefactor ))
3889+
3890+ beam_current_options['distribution_type'] = 'counterterm'
3891+ integrated_CTs.append(IntegratedCounterterm(
3892+ process = reduced_process,
3893+ nodes = [
3894+ CountertermNode(current=BeamCurrent(beam_current_options)),
3895+ ]+additional_CT_nodes,
3896+ momenta_dict= bidict(local_counterterm.momenta_dict),
3897+ prefactor = +1. * local_counterterm.prefactor ))
3898+
3899+ else:
3900+ # Here is the general solution chosen for arbitrary final state local CT
3901+ integrated_current = IntegratedCurrent({
3902+ 'n_loops': n_loops,
3903+ 'squared_orders': squared_orders,
3904+ 'resolve_mother_spin_and_color': True,
3905+ 'singular_structure': complete_singular_structure })
3906+
3907+ integrated_CTs.append(IntegratedCounterterm(
3908+ process = reduced_process,
3909+ nodes = [CountertermNode(current=integrated_current), ],
3910+ momenta_dict= bidict(local_counterterm.momenta_dict),
3911+ prefactor = -1. * local_counterterm.prefactor ))
3912+
3913+ return integrated_CTs
3914
3915 @staticmethod
3916 def get_all_currents(counterterms):
3917@@ -1953,7 +2679,7 @@
3918 ignore_integrated_counterterms=False):
3919 """Generate all counterterms for the corrections specified in this module
3920 and the process given in argument."""
3921-
3922+
3923 if max_unresolved_in_elementary is None:
3924 max_unresolved_in_elementary = self.n_unresolved
3925 if max_unresolved_in_combination is None:
3926@@ -1962,33 +2688,34 @@
3927 process, max_unresolved_in_elementary)
3928 combinations = self.get_all_combinations(
3929 elementary_operators, max_unresolved_in_combination)
3930+
3931 all_counterterms = []
3932 all_integrated_counterterms = []
3933 for combination in combinations:
3934 template_counterterm = self.get_counterterm(combination, process)
3935 if not ignore_integrated_counterterms:
3936- template_integrated_counterterm = \
3937+ template_integrated_counterterms = \
3938 self.get_integrated_counterterm(template_counterterm)
3939 else:
3940- template_integrated_counterterm = None
3941+ template_integrated_counterterms = []
3942
3943- counterterms_with_loops = template_counterterm.split_loops(process['n_loops'])
3944+ counterterms_with_loops = template_counterterm.distribute_loops(process['n_loops'])
3945 # TODO
3946- # For the time being, split_loops is given None instead of the squared orders
3947+ # For the time being, distribute_loops is given None instead of the squared orders
3948 # because they should be retrieved from the process by looking at individual
3949 # matrix elements
3950 # That is, every process has a list of possible coupling orders assignations
3951 # so we should loop over them
3952 for counterterm_with_loops in counterterms_with_loops:
3953- all_counterterms.extend( counterterm_with_loops.split_orders(None) )
3954- # Now also distribute the template integrated counterterm if it is not None
3955- if not template_integrated_counterterm is None:
3956- integrated_counterterms_with_loops = template_integrated_counterterm.split_loops(
3957+ all_counterterms.extend( counterterm_with_loops.distribute_orders(None) )
3958+
3959+ # Now also distribute the template integrated counterterms
3960+ for template_integrated_counterterm in template_integrated_counterterms:
3961+ integrated_counterterms_with_loops = template_integrated_counterterm.distribute_loops(
3962 process['n_loops'] )
3963 for integrated_counterterm_with_loops in integrated_counterterms_with_loops:
3964 all_integrated_counterterms.extend(
3965- integrated_counterterm_with_loops.split_orders(None) )
3966-
3967+ integrated_counterterm_with_loops.distribute_orders(None) )
3968 return all_counterterms, all_integrated_counterterms
3969
3970 #=========================================================================================
3971@@ -2118,8 +2845,10 @@
3972 for current in currents:
3973 found_current_class = None
3974 for (dir_name, module_path, class_name, implementation_class) in all_classes:
3975+# misc.sprint('Testing class %s for current: %s'%(class_name,str(current)))
3976 instantiation_options = implementation_class.does_implement_this_current(
3977 current, self.model )
3978+# misc.sprint('Result: %s'%str(instantiation_options))
3979 if instantiation_options is None:
3980 continue
3981 if found_current_class is not None:
3982@@ -2165,7 +2894,7 @@
3983 # (it should never be used in production)
3984 if currents_with_default_implementation:
3985 currents_str = '\n'.join(
3986- ' > %s' % str(crt)
3987+ ' > %s (type: %s)' % (str(crt), type(crt) )
3988 for crt in currents_with_default_implementation )
3989 msg = """No implementation was found for the following subtraction currents:
3990 %s
3991
3992=== modified file 'madgraph/integrator/ME7_integrands.py' (properties changed: +x to -x)
3993--- madgraph/integrator/ME7_integrands.py 2018-08-13 20:45:37 +0000
3994+++ madgraph/integrator/ME7_integrands.py 2019-01-14 20:53:09 +0000
3995@@ -73,6 +73,7 @@
3996 import madgraph.integrator.phase_space_generators as phase_space_generators
3997 import madgraph.integrator.pyCubaIntegrator as pyCubaIntegrator
3998 import madgraph.integrator.vegas3_integrator as vegas3_integrator
3999+import madgraph.integrator.vectors as vectors
4000 # import madgraph.various.histograms as histograms # imported later to not slow down the loading of the code
4001 import models.check_param_card as check_param_card
4002 import models.model_reader as model_reader
4003@@ -81,8 +82,6 @@
4004 from madgraph.iolibs.files import ln
4005 from madgraph import InvalidCmd, MadGraph5Error, MG5DIR, ReadWrite
4006
4007-
4008-
4009 class MadEvent7Error(Exception):
4010 pass
4011
4012@@ -90,6 +89,551 @@
4013 pass
4014
4015 #===============================================================================
4016+# ME7Events
4017+#===============================================================================
4018+class ME7Event(object):
4019+ """ A class to store a particular configuration produced by the integrands
4020+ when evaluated, and which has an undefined flavor configuration to begin with."""
4021+
4022+ def __init__(self, PS_point, weights_per_flavor_configurations,
4023+ requires_mirroring = False,
4024+ host_contribution_definition = 'N/A',
4025+ # This entry is mostly for debugging purposes, but it is useful to be able
4026+ # to know the list of defining PDGs of the resolved process from
4027+ # which this counterterm originates. For this reason the counterterm_structure
4028+ # is (possibly a list of) tuple(s) of the form:
4029+ # (
4030+ # integrated_CT_instance,
4031+ # list_of_all_combinations_of_PDGs_of_the_resolved_process_this_integrated_CT_can_come_from
4032+ # )
4033+ counterterm_structure = None,
4034+ Bjorken_xs = (1.0,1.0),
4035+ Bjorken_x_rescalings = (1.0,1.0),
4036+ is_a_mirrored_event = False,
4037+ # The user is free to add below any additional information that
4038+ # plays no structural role in the MadNkLO construction. Example:
4039+ # test_IR_poles will store here information such as the input_mapping used
4040+ # and the convolutional mask employed for nicer printouts.
4041+ additional_information = None
4042+ ):
4043+ """Initialize this particular event with a unique PS point
4044+ and a dictionary of the form:
4045+ ( (initial_state_flavors,), (final_state_flavors,) ) : weight
4046+ to specify all possible flavor configurations applying to this event.
4047+ For PDF counterterms and integrated collinear ISR counterterms, we must
4048+ provide the possibility of applying a rescaling to the Bjorken x used to
4049+ evaluate each of the two beams. By default this rescaling is of course one."""
4050+
4051+ self.PS_point = PS_point.to_list()
4052+ self.weights_per_flavor_configurations = weights_per_flavor_configurations
4053+ self.requires_mirroring = requires_mirroring
4054+ self.is_a_mirrored_event = is_a_mirrored_event
4055+ self.host_contribution_definition = host_contribution_definition
4056+ self.counterterm_structure = counterterm_structure
4057+ # This can either be None or a dictionary of the format:
4058+ # { 'beam_one' : 'ALL' or (PDG_allowed_1, PDG_allowed_2, PDG_allowed_3, ...)
4059+ # 'beam_two' : 'ALL' or (PDG_allowed_1, PDG_allowed_2, PDG_allowed_3, ...)
4060+ # }
4061+ self.Bjorken_x_rescalings = Bjorken_x_rescalings
4062+ self.Bjorken_xs = Bjorken_xs
4063+ # A None default (instead of a mutable {}) avoids sharing one dict across instances
4063+ self.additional_information = additional_information if additional_information is not None else {}
4064+
4065+ # This entry specifies which flavor has been selected for this event.
4066+ # None implies that it has not been selected yet.
4067+ self.selected_flavors = None
4068+
4069+ def get_copy(self):
4070+ """ Returns a shallow copy of self, except for the
4071+ 'weights_per_flavor_configurations' attribute. """
4072+
4073+ return ME7Event(
4074+ self.PS_point, dict(self.weights_per_flavor_configurations),
4075+ requires_mirroring = self.requires_mirroring,
4076+ host_contribution_definition = self.host_contribution_definition,
4077+ counterterm_structure = self.counterterm_structure,
4078+ Bjorken_x_rescalings = self.Bjorken_x_rescalings,
4079+ Bjorken_xs = self.Bjorken_xs,
4080+ is_a_mirrored_event = self.is_a_mirrored_event,
4081+ additional_information = copy.deepcopy(self.additional_information)
4082+ )
4083+
4084+ def set_Bjorken_rescalings(self, *args):
4085+ """ Assigns the specified Bjorken x rescaling to this event."""
4086+ self.Bjorken_x_rescalings = tuple(args)
4087+
4088+ def set_Bjorken_xs(self, *args):
4089+ """ Assigns the specified Bjorken x's."""
4090+ self.Bjorken_xs = tuple(args)
4091+
4092+ def apply_PDF_convolution(self, pdf_accessor, all_pdf, all_mu_f_squared):
4093+ """ Convolve the weights_per_flavor_configurations with the PDFs."""
4094+
4095+ # In order for the + distributions of the PDF counterterms and integrated
4096+ # collinear ISR counterterms to hit the PDF only (and not the matrix elements or
4097+ # observable functions), a change of variable is necessary: xb_1' = xb_1 * xi1.
4098+ # In this case, the xi1 factor to apply is given by the Bjorken_x_rescalings
4099+ # factor, which must be used both for rescaling the argument of the PDF as well
4100+ # as bringing a factor 1/xi for the jacobian of the change of variable.
4101+ for flavors in self.weights_per_flavor_configurations:
4102+ PDFs = 1.
4103+ for i, flavor in enumerate(flavors[0]):
4104+ PDFs *= pdf_accessor(all_pdf[i], flavor,
4105+ self.Bjorken_xs[i]*self.Bjorken_x_rescalings[i], all_mu_f_squared[i])
4106+ PDFs *= self.Bjorken_x_rescalings[i]
4107+ self.weights_per_flavor_configurations[flavors] *= PDFs
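The comment in `apply_PDF_convolution` describes the change of variable xb_1' = xb_1 * xi1: the PDF is probed at the rescaled momentum fraction and a rescaling factor supplies the jacobian. A toy numerical check (a made-up smooth function stands in for a real PDF; this is not the LHAPDF interface):

```python
# Probing f at the rescaled point x*s and multiplying by the rescaling s is the
# substitution u = x*s, i.e. int_0^1 dx f(x*s)*s == int_0^s du f(u).
def rescaled_convolution(f, s, n=100000):
    # pattern used in apply_PDF_convolution: rescaled argument times jacobian
    h = 1.0 / n
    return sum(f(((i + 0.5) * h) * s) * s * h for i in range(n))

def direct_integral(f, s, n=100000):
    # the same integral after the substitution u = x*s
    h = s / n
    return sum(f((i + 0.5) * h) * h for i in range(n))

f = lambda u: (1.0 - u) ** 3   # toy stand-in for a PDF
lhs = rescaled_convolution(f, 0.4)
rhs = direct_integral(f, 0.4)  # analytic value: (1 - 0.6**4)/4 = 0.2176
```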
4108+
4109+ def store_information(self, key, value):
4110+ """ Registers a new piece of additional information that will be attached
4111+ to this event."""
4112+ self.additional_information[key] = value
4113+
4114+ def get_information(self, key):
4115+ """ Tries to fetch a particular piece of additional information registered in this event.
4116+ If not present, it returns None."""
4117+ return self.additional_information.get(key, None)
4118+
4119+ def get_total_weight(self):
4120+ """ Return the total weight of this event, summed over all flavor configurations."""
4121+
4122+ summed_weight = None
4123+ for wgt in self.weights_per_flavor_configurations.values():
4124+ if summed_weight is None:
4125+ summed_weight = copy.copy(wgt)
4126+ else:
4127+ summed_weight += wgt
4128+ if summed_weight is None:
4129+ return 0.
4130+
4131+ return summed_weight
4132+
4133+ def generate_mirrored_event(self):
4134+ """ Creates a mirror event of self if necessary."""
4135+ if not self.requires_mirroring or self.is_empty():
4136+ return None
4137+
4138+
4139+ # Now that the mirroring was applied to this event, it no longer needs any mirroring
4140+ self.requires_mirroring = False
4141+ # Neither does the mirrored event we instantiate below
4142+ mirrored_event = self.get_copy()
4143+ # Set the mirrored event flag "is_a_mirrored_event" appropriately
4144+ mirrored_event.is_a_mirrored_event = True
4145+
4146+ # First swap initial state flavors
4147+ mirrored_event.weights_per_flavor_configurations = {}
4148+ for flavor_config, wgt in self.weights_per_flavor_configurations.items():
4149+ swapped_flavor_config = ( (flavor_config[0][1],flavor_config[0][0]),
4150+ flavor_config[1] )
4151+ mirrored_event.weights_per_flavor_configurations[swapped_flavor_config] = wgt
4152+
4153+ # Then swap the kinematic configurations:
4154+ # a) Initial-state momenta must be swapped
4155+ # b) Final-state momenta must have their z-component swapped (for this to be correct
4156+ # however, first make sure that initial momenta are indeed on the z-axis in the c.o.m frame)
4157+ # Normally all events generated should be in the c.o.m frame already, but it may be
4158+ # that in test_IR_limits, for debugging purposes only, the option 'boost_back_to_com' was
4159+ # set to False, in which case we must here first *temporarily boost* it back to the c.o.m.
4160+ if __debug__:
4161+ # Two initial states
4162+ assert(len(self.weights_per_flavor_configurations.keys()[0][0])==2)
4163+ sqrts = math.sqrt((self.PS_point[0]+self.PS_point[1]).square())
4164+ # Assert initial states along the z axis
4165+ assert(abs(self.PS_point[0][1]/sqrts)<1.0e-9)
4166+ assert(abs(self.PS_point[1][1]/sqrts)<1.0e-9)
4167+ assert(abs(self.PS_point[0][2]/sqrts)<1.0e-9)
4168+ assert(abs(self.PS_point[1][2]/sqrts)<1.0e-9)
4169+ # If initial states are back to back we can directly proceed with a simple swap of the
4170+ # z-axis, otherwise we must first boost to the c.o.m
4171+ PS_point_to_swap = self.PS_point.get_copy()
4172+ boost_vector = None
4173+ if abs((self.PS_point[0]+self.PS_point[1])[3]/sqrts)>1.0e-9:
4174+ boost_vector = (PS_point_to_swap[0]+PS_point_to_swap[1]).boostVector()
4175+ for vector in PS_point_to_swap:
4176+ vector.boost(-boost_vector)
4177+ # debug: now make sure the event is back to back
4178+ if __debug__:
4179+ sqrts = math.sqrt((self.PS_point[0]+self.PS_point[1]).square())
4180+ assert(abs((PS_point_to_swap[0]+PS_point_to_swap[1])[3]/sqrts)<1.0e-9)
4181+ mirrored_event.PS_point = vectors.LorentzVectorList([ PS_point_to_swap[1],PS_point_to_swap[0] ])
4182+ for vector in PS_point_to_swap[2:]:
4183+ mirrored_event.PS_point.append(vectors.LorentzVector(
4184+ [vector[0],vector[1],vector[2],-vector[3]]))
4185+ # If we had boosted the event, we must now boost it back
4186+ if boost_vector is not None:
4187+ for vector in PS_point_to_swap:
4188+ vector.boost(boost_vector)
4189+
4190+ # Then swap Bjorken x's and rescaling.
4191+ mirrored_event.Bjorken_x_rescalings = (self.Bjorken_x_rescalings[1], self.Bjorken_x_rescalings[0])
4192+ mirrored_event.Bjorken_xs = (self.Bjorken_xs[1], self.Bjorken_xs[0])
4193+
4194+ return mirrored_event
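The kinematic part of `generate_mirrored_event` swaps the two initial-state momenta and flips the z component of the final-state ones. A minimal sketch, with plain `[E, px, py, pz]` lists standing in for `vectors.LorentzVector` (assumed layout: z is entry 3, as in the `vector[3]` accessed in the code) and no boost handling:

```python
# Mirror a c.o.m., z-axis phase-space point: swap beams, flip final-state z.
def mirror_ps_point(ps_point):
    # swap the two initial-state momenta
    mirrored = [ps_point[1], ps_point[0]]
    # flip the z component of each final-state momentum
    for e, px, py, pz in ps_point[2:]:
        mirrored.append([e, px, py, -pz])
    return mirrored

com_event = [
    [50.0, 0.0, 0.0, 50.0],     # beam 1
    [50.0, 0.0, 0.0, -50.0],    # beam 2
    [50.0, 30.0, 0.0, 40.0],    # massless final-state momentum
    [50.0, -30.0, 0.0, -40.0],  # massless final-state momentum
]
mirrored = mirror_ps_point(com_event)
```

The real method additionally boosts non-back-to-back configurations to the c.o.m frame before swapping, as its comments explain.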
4195+
4196+ def filter_flavor_configurations(self, flavor_cut, **opts):
4197+ """ Apply the flavor cut on this event for each flavor configuration, removing
4198+ all those that fail. If none are left, this returns False."""
4199+
4200+ new_weights_per_flavor_configurations = {}
4201+ for flavors, weight in self.weights_per_flavor_configurations.items():
4202+ if flavor_cut(self.PS_point, flavors, xb_1=self.Bjorken_xs[0],
4203+ xb_2=self.Bjorken_xs[1], **opts):
4204+ new_weights_per_flavor_configurations[flavors] = weight
4205+
4206+ self.weights_per_flavor_configurations = new_weights_per_flavor_configurations
4207+
4208+ return (not self.is_empty())
4209+
4210+ def apply_flavor_blind_cuts(self, flavor_blind_cut, *args, **opts):
4211+ """ Apply the flavor-blind cut to this event, returning False if it failed."""
4212+
4213+ return flavor_blind_cut(self.PS_point, *args, xb_1=self.Bjorken_xs[0],
4214+ xb_2=self.Bjorken_xs[1], **opts)
4215+
4216+ def select_a_flavor_configuration(self):
4217+ """ Select a particular flavor configuration for this event which may be used
4218+ by the flavor-sensitive observables. The selection is performed randomly with
4219+ a probability proportional to the weight of each flavor configuration."""
4220+
4221+ weights_by_flavor_configurations = self.weights_per_flavor_configurations.items()
4222+ index_selected=0
4223+ abs_flavor_wgts_sum = sum(abs(value) for value in
4224+ self.weights_per_flavor_configurations.values())
4225+ running_sum = abs(weights_by_flavor_configurations[index_selected][1])
4226+ rv_flavor = random.random()*abs_flavor_wgts_sum
4227+ while rv_flavor > running_sum:
4228+ index_selected +=1
4229+ running_sum += abs(weights_by_flavor_configurations[index_selected][1])
4230+ self.selected_flavors = weights_by_flavor_configurations[index_selected][0]
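`select_a_flavor_configuration` samples one flavor configuration with probability proportional to the absolute value of its weight. A condensed, self-contained sketch of the same running-sum technique (the PDG keys and weights below are made up for illustration; `rng` is injectable so the selection is testable):

```python
import random

# |weight|-proportional selection over a dict of flavor configurations.
def select_flavor_configuration(weights_per_config, rng=random.random):
    configs = list(weights_per_config.items())
    abs_total = sum(abs(w) for _, w in configs)
    threshold = rng() * abs_total
    running = 0.0
    for config, weight in configs:
        running += abs(weight)
        if threshold <= running:
            return config
    return configs[-1][0]  # guard against rounding at threshold == abs_total

weights = {((21, 21), (1, -1)): 0.75, ((2, -2), (21, 21)): -0.25}
```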
4231+
4232+ def select_epsilon_expansion_term(self, term_name):
4233+ """ Select a particular EpsilonExpansion term in the weights of the flavor
4234+ matrix of this event. If the weights are floats already, then they will be
4235+ dropped unless the term_name is 'finite'."""
4236+
4237+ if not isinstance(term_name, int):
4238+ is_finite_specified = (term_name.lower()=='finite')
4239+ for flavors, weight in self.weights_per_flavor_configurations.items():
4240+ if isinstance(weight, float):
4241+ if not is_finite_specified:
4242+ del self.weights_per_flavor_configurations[flavors]
4243+ else:
4244+ self.weights_per_flavor_configurations[flavors] = weight.get_term(term_name)
4245+
4246+ def convolve_flavors(self, flavor_matrix, leg_index, initial_state=True):
4247+ """ Convolves the flavor matrix in argument (of the form:
4248+ { start_flavor_a : { end_flavors_1 : wgt_x,
4249+ end_flavors_2 : wgt_y,
4250+ [...]
4251+ },
4252+ start_flavor_b : { end_flavors_1 : wgt_z,
4253+ end_flavors_2 : wgt_w,
4254+ [...]
4255+ },
4256+ [...]
4257+ }
4258+ where 'wgt_x' can be an EpsilonExpansion instance,
4259+ and modifies self.weights_per_flavor_configurations with the convolved result.
4260+ leg_index specifies which leg must be convolved
4261+ (e.g. beam #1 corresponds to leg_index=0 and initial_state=True).
4262+ Note that end_flavors is a tuple of flavors with identical matrix elements (such as in q -> q g, which is flavor independent).
4263+ """
4264+
4265+ def substitute_flavor(flavor_config, new_flavor):
4266+ """ Substitute, in the flavor config, the convolved flavor with the one in
4267+ argument and return the corresponding new flavor configuration as two tuples,
4268+ for the initial and final states respectively."""
4269+ return (
4270+ tuple( (pdg if i!=leg_index else new_flavor)
4271+ for i, pdg in enumerate(flavor_config[0]) ) if initial_state else flavor_config[0],
4272+ tuple( (pdg if i!=leg_index else new_flavor)
4273+ for i, pdg in enumerate(flavor_config[1]) ) if (not initial_state) else flavor_config[1]
4274+ )
4275+
4276+ if len(flavor_matrix)==0:
4277+ # shortcut in case an empty convolutional flavor matrix is specified which can happen
4278+ # if the beam currents are zero but don't have the attribute is_zero = True
4279+ self.weights_per_flavor_configurations = {}
4280+ return
4281+
4282+ new_flavor_configurations = {}
4283+ for flavor_config, wgt in self.weights_per_flavor_configurations.items():
4284+ start_flavor = flavor_config[0][leg_index] if initial_state else flavor_config[1][leg_index]
4285+ if start_flavor in flavor_matrix:
4286+ new_flavor_configs = []
4287+ for end_flavors, matrix_wgt in flavor_matrix[start_flavor].items():
4288+ new_flavor_configs.extend([
4289+ ( substitute_flavor(flavor_config, end_flavor),
4290+ base_objects.EpsilonExpansion(matrix_wgt)*wgt )
4291+ for end_flavor in end_flavors ])
4292+
4293+ for new_flavor_config, new_wgt in new_flavor_configs:
4294+ try:
4295+ new_flavor_configurations[new_flavor_config] += new_wgt
4296+ except KeyError:
4297+ new_flavor_configurations[new_flavor_config] = new_wgt
4298+
4299+ # Now assign the newly created flavor configurations
4300+ self.weights_per_flavor_configurations = new_flavor_configurations
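The docstring of `convolve_flavors` describes fanning each starting flavor of the convolved leg out into the end flavors allowed by the matrix, multiplying weights. A reduced sketch with plain float weights instead of `EpsilonExpansion` (all PDG keys and weights below are made up for illustration):

```python
# Convolve a {start_flavor: {end_flavors_tuple: weight}} matrix into a
# {(initial_pdgs, final_pdgs): weight} dictionary on one initial-state leg.
def convolve(weights_per_config, flavor_matrix, leg_index):
    new_configs = {}
    for (initial, final), wgt in weights_per_config.items():
        start = initial[leg_index]
        for end_flavors, matrix_wgt in flavor_matrix.get(start, {}).items():
            for end in end_flavors:
                # substitute the convolved leg's flavor, accumulate weights
                new_initial = tuple(
                    end if i == leg_index else pdg for i, pdg in enumerate(initial))
                key = (new_initial, final)
                new_configs[key] = new_configs.get(key, 0.0) + matrix_wgt * wgt
    return new_configs

weights = {((2, 21), (2, 21)): 1.0}
# the beam-1 quark can stay a quark (P_qq-like entry) or become a gluon (P_gq-like)
matrix = {2: {(2,): 0.5, (21,): 0.25}}
result = convolve(weights, matrix, leg_index=0)
```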
4301+
4302+ def is_empty(self):
4303+ """
4304+ Check if the weight vector (weights_per_flavor_configurations) is an empty dictionary.
4305+ For now does not check if all weights are zero
4306+ :return: bool
4307+ """
4308+ return len(self.weights_per_flavor_configurations) == 0
4309+
4310+ def __add__(self, other):
4311+ """ overload the '+' operator."""
4312+ new_event = self.get_copy()
4313+ return new_event.__iadd__(other)
4314+
4315+ def __iadd__(self, other):
4316+ """ overload the '+=' operator."""
4317+ if __debug__: assert(self.requires_mirroring == other.requires_mirroring)
4318+ if __debug__: assert(self.PS_point == other.PS_point)
4319+ if __debug__: assert(self.Bjorken_x_rescalings == other.Bjorken_x_rescalings)
4320+ if __debug__: assert(self.Bjorken_xs == other.Bjorken_xs)
4321+ for flavor_configuration, wgt in other.weights_per_flavor_configurations.items():
4322+ try:
4323+ self.weights_per_flavor_configurations[flavor_configuration] += wgt
4324+ except KeyError:
4325+ self.weights_per_flavor_configurations[flavor_configuration] = wgt
4326+ return self
4327+
4328+ def __mul__(self, multiplier):
4329+ """ overload the '*' operator with a float or an EpsilonExpansion. """
4330+ new_event = self.get_copy()
4331+ return new_event.__imul__(multiplier)
4332+
4333+ def __imul__(self, multiplier):
4334+ """ overload the '*=' operator with a float or an EpsilonExpansion. """
4335+ if __debug__: assert( isinstance(multiplier, (float, base_objects.EpsilonExpansion)) )
4336+ for flavor_configuration, wgt in self.weights_per_flavor_configurations.items():
4337+ try:
4338+ self.weights_per_flavor_configurations[flavor_configuration] = multiplier*wgt
4339+ except KeyError:
4340+ self.weights_per_flavor_configurations[flavor_configuration] = multiplier*wgt
4341+ return self
4342+
4343+ def __str__(self):
4344+ return self.nice_string()
4345+
4346+ def nice_string(self):
4347+ """ Returns a nice and short string representation of this event."""
4348+ BLUE = misc.bcolors.BLUE
4349+ GREEN = misc.bcolors.GREEN
4350+ ENDC = misc.bcolors.ENDC
4351+ res = []
4352+ if self.counterterm_structure is None:
4353+ res.append("%s%svent from %s contribution '%s'%s%s:"%(
4354+ GREEN,
4355+ 'E' if not self.is_a_mirrored_event else 'Mirrored e',
4356+ self.host_contribution_definition.correction_order,
4357+ self.host_contribution_definition.short_name(),
4358+ ' (requires mirroring)' if self.requires_mirroring else '',
4359+ ENDC))
4360+ else:
4361+ if isinstance(self.counterterm_structure, list):
4362+ all_ct_strucs = self.counterterm_structure
4363+ else:
4364+ all_ct_strucs = [self.counterterm_structure,]
4365+
4366+ counterterm_structure_str_elems = []
4367+ for ct_struct, all_resolved_PDGs in all_ct_strucs:
4368+ # We show only one representative resolved_PDGs structure: the first one.
4369+ if isinstance(all_resolved_PDGs, dict):
4370+ representative_resolved_PDGs = all_resolved_PDGs.keys()[0]
4371+ else:
4372+ representative_resolved_PDGs = all_resolved_PDGs[0]
4373+ counterterm_structure_str_elems.append('%s@%s'%(str(ct_struct),
4374+ str(representative_resolved_PDGs).replace(' ','')))
4375+
4376+ if len(all_ct_strucs)>1:
4377+ counterterm_structure_str = '( %s )'%(' + '.join(counterterm_structure_str_elems))
4378+ else:
4379+ counterterm_structure_str = counterterm_structure_str_elems[0]
4380+
4381+ res.append('%s%sounterterm event from limit%s %s of %s contribution %s%s'%(
4382+ GREEN,
4383+ 'C' if not self.is_a_mirrored_event else 'Mirrored c',
4384+ 's' if isinstance(self.counterterm_structure, list) and len(self.counterterm_structure)>1 else '',
4385+ counterterm_structure_str,
4386+ self.host_contribution_definition.correction_order,
4387+ self.host_contribution_definition.short_name(),
4388+ ENDC))
4389+
4390+ res.append('%s Kinematic configuration:%s'%(BLUE, ENDC))
4391+ res.extend(' %s'%line for line in str(self.PS_point).split('\n'))
4392+ if not any(bs is None for bs in self.Bjorken_x_rescalings):
4393+ res.append(" Bjorken x's scalings: %s"%(', '.join('%.16e'%bs for bs in self.Bjorken_x_rescalings)))
4394+ if not any(bs is None for bs in self.Bjorken_xs):
4395+ res.append(" Bjorken x's : %s"%(', '.join('%.16e'%bs for bs in self.Bjorken_xs)))
4396+ res.append('%s Flavors configurations:%s'%(BLUE, ENDC))
4397+ for flavors in sorted(self.weights_per_flavor_configurations.keys()):
4398+ wgt = self.weights_per_flavor_configurations[flavors]
4399+ res.append('%s | %s'%('%-25s'%(' %s -> %s'%(
4400+ ' '.join('%s'%pdg for pdg in flavors[0]),
4401+ ' '.join('%s'%pdg for pdg in flavors[1]))),
4402+ wgt.__str__(format='.16e') if isinstance(wgt,base_objects.EpsilonExpansion)
4403+ else '%.16e'%wgt ))
4404+        if self.selected_flavors is not None:
4405+ res.append('%s Selected flavors configuration : %s%s -> %s'%(BLUE, ENDC,
4406+ ' '.join('%s'%pdg for pdg in self.selected_flavors[0]),
4407+ ' '.join('%s'%pdg for pdg in self.selected_flavors[1]) ))
4408+
4409+ if len(self.additional_information)>0:
4410+ res.append('%s Additional information:%s'%(BLUE, ENDC))
4411+ for key, value in self.additional_information.items():
4412+ res.append(' %s : %s'%(str(key), str(value)))
4413+
4414+ return '\n'.join(res)
4415+
4416+ def counterterm_structure_short_string(self):
4417+ """ Return a short string specifying the counterterm structure(s) of this event."""
4418+
4419+ if self.counterterm_structure is None:
4420+ if not self.is_a_mirrored_event:
4421+ return 'Event'
4422+ else:
4423+ return '<->Event'
4424+
4425+ event_str = str(self.counterterm_structure)
4426+ if isinstance(self.counterterm_structure, list):
4427+ event_ct_structs = self.counterterm_structure
4428+ else:
4429+ event_ct_structs = [self.counterterm_structure,]
4430+
4431+ event_str_elems = []
4432+ for ct_struct, all_resolved_PDGs in event_ct_structs:
4433+ # We show only one representative resolved_PDGs structure: the first one.
4434+ if isinstance(all_resolved_PDGs, dict):
4435+ representative_resolved_PDGs = all_resolved_PDGs.keys()[0]
4436+ else:
4437+ representative_resolved_PDGs = all_resolved_PDGs[0]
4438+ str_ct_struct = str(ct_struct)
4439+ # Clean-up of the embedding overall parenthesis for the short string
4440+ if str_ct_struct.startswith("("):
4441+ str_ct_struct = str_ct_struct[1:]
4442+ if str_ct_struct.endswith(",)"):
4443+ str_ct_struct = str_ct_struct[:-2]
4444+ event_str_elems.append('%s@%s'%(str_ct_struct,
4445+ str(representative_resolved_PDGs).replace(' ','')))
4446+
4447+ if len(event_ct_structs)>1:
4448+ event_str = '( %s )'%(' + '.join(event_str_elems))
4449+ else:
4450+ event_str = event_str_elems[0]
4451+ input_mapping = self.get_information('input_mapping')
4452+ if input_mapping is not None:
4453+ event_str += 'x{%s}'%(','.join('%d:%d'%(k,input_mapping[k]) for k in
4454+ sorted(input_mapping.keys()) ))
4455+ if self.is_a_mirrored_event:
4456+ event_str = '<->%s'%event_str
4457+ beam_convolution_masks = self.get_information('beam_convolution_masks')
4458+ if beam_convolution_masks is not None:
4459+ for (beam_name, beam_short_name) in [('beam_one','F1'),('beam_two','F2')]:
4460+ if beam_convolution_masks[beam_name] != 'ALL':
4461+ event_str += '@%s<-(%s)'%(beam_short_name,'|'.join('%d'%pdg for pdg in
4462+ beam_convolution_masks[beam_name]))
4463+ return event_str
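The trimming of the enclosing tuple-style parenthesis above is easy to get wrong; the following standalone sketch (the helper name is hypothetical, not part of the module) mirrors the clean-up done in `counterterm_structure_short_string`:

```python
def strip_enclosing_parens(s):
    # Drop the leading "(" and trailing ",)" that a one-element tuple repr
    # leaves around a counterterm structure string.
    if s.startswith("("):
        s = s[1:]
    if s.endswith(",)"):
        s = s[:-2]
    return s
```

Strings without the tuple wrapping pass through unchanged, which is why both checks are applied independently.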
4464+
4465+class ME7EventList(list):
4466+    """ A class handling a collection of ME7Events, with helper functions to
4467+ collectively act on them. This customized list is mainly intended to
4468+ store a list of ME7Events that all originate from the same integrand
4469+ evaluation call."""
4470+
4471+ def __init__(self, *args, **opts):
4472+ super(ME7EventList,self).__init__(*args, **opts)
4473+
4474+ def apply_PDF_convolution(self, *args,**opts):
4475+ """ Convolute the weights_per_flavor_configurations of each event
4476+ with the PDF luminosity."""
4477+ for event in self:
4478+ event.apply_PDF_convolution(*args, **opts)
4479+
4480+ def filter_flavor_configurations(self, flavor_cut, **opts):
4481+ """ Apply the flavor cut on each event in this list, removing those with
4482+        no valid flavor configuration left. This returns False if there are no events left."""
4483+
4484+ new_event_list = []
4485+ for event in self:
4486+ if event.filter_flavor_configurations(flavor_cut, **opts):
4487+ new_event_list.append(event)
4488+ self[:] = new_event_list
4489+
4490+ return len(self) != 0
4491+
4492+ def filter_with_flavor_blind_cuts(self, cut_function, *args, **opts):
4493+        """ Apply the kinematic flavor-blind cut on each event in this list, removing those
4494+        failing the cut. This returns False if there are no events left."""
4495+
4496+ new_event_list = []
4497+ for event in self:
4498+ if event.apply_flavor_blind_cuts(cut_function, *args, **opts):
4499+ new_event_list.append(event)
4500+ self[:] = new_event_list
4501+
4502+ return len(self) != 0
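Both filters above mutate the list in place through slice assignment (`self[:] = new_event_list`), so any external reference to the same ME7EventList sees the filtered result. A minimal sketch of that idiom, with hypothetical names:

```python
class EventList(list):
    # Stand-in mirroring ME7EventList's filtering methods.
    def filter_in_place(self, predicate):
        kept = [event for event in self if predicate(event)]
        # Slice assignment replaces the contents without rebinding the object,
        # so aliases held elsewhere stay in sync with the filtered list.
        self[:] = kept
        return len(self) != 0
```

As in the methods above, the return value is False exactly when no event survives the cut.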
4503+
4504+ def get_total_weight(self):
4505+ """ Return the total weight over all events in this list."""
4506+ if len(self) > 0:
4507+ return sum(event.get_total_weight() for event in self)
4508+ else:
4509+ return 0.
4510+
4511+ def select_a_flavor_configuration(self):
4512+ """ Select a particular flavor configuration to be used in the flavor-sensitive
4513+ observables for each event. """
4514+
4515+ for event in self:
4516+ event.select_a_flavor_configuration()
4517+
4518+ def select_epsilon_expansion_term(self, term_name):
4519+ """ Select a particular EpsilonExpansion term in the weights of the flavor
4520+        matrix of these events. If the weights are already floats, then they will be
4521+ set to zero unless the term_name is 'finite'."""
4522+
4523+ for event in self:
4524+ event.select_epsilon_expansion_term(term_name)
4525+
4526+ def apply_observables(self, observable_list, integrator_weight=1.):
4527+ """ Applies the observable list to the entire set of events of this list."""
4528+
4529+ for event in self:
4530+ observable_list.apply_observables(event, integrator_weight)
4531+
4532+ def generate_mirrored_events(self):
4533+ """ Creates the mirror configurations if necessary."""
4534+ for event in list(self):
4535+ mirrored_event = event.generate_mirrored_event()
4536+ if mirrored_event:
4537+ self.append(mirrored_event)
4538+
4539+ def __str__(self):
4540+ return self.nice_string()
4541+
4542+ def nice_string(self):
4543+        """ Returns a nice string representation of this list of events."""
4544+
4545+ BLUE = misc.bcolors.BLUE
4546+ GREEN = misc.bcolors.GREEN
4547+ ENDC = misc.bcolors.ENDC
4548+
4549+ n_events = len(self)
4550+ res=['%s>> Start of a list of %d event%s:%s'%(
4551+ GREEN, n_events, 's' if n_events > 1 else '',ENDC)]
4552+ res.append('')
4553+ for i_event, event in enumerate(self):
4554+ res.append('#%d '%(i_event+1) + event.nice_string())
4555+ res.append('')
4556+ res.append('%s>> End of a list of %d event%s.%s'%(
4557+ GREEN, n_events, 's' if n_events > 1 else '',ENDC))
4558+ return '\n'.join(res)
4559+
4560+#===============================================================================
4561 # ME7Integrand
4562 #===============================================================================
4563 class ME7Integrand(integrands.VirtualIntegrand):
4564@@ -98,6 +642,8 @@
4565 # Maximum size of the cache for PDF calls
4566 PDF_cache_max_size = 1000
4567
4568+ FROZEN_DIMENSIONS = {}#{'x_1': 0.13, 'x_2': 0.26, 'x_3': 0.35}
4569+
4570 def __new__(cls, model,
4571 run_card,
4572 contribution_definition,
4573@@ -108,40 +654,45 @@
4574 ME7_configuration, **opt):
4575 all_args = [model, run_card, contribution_definition, processes_map,
4576 all_MEAccessors, ME7_configuration ]
4577+
4578 if cls is ME7Integrand:
4579 target_type = 'Unknown'
4580- if contribution_definition.correction_order == 'LO':
4581- if contribution_definition.n_loops == 0 and \
4582- contribution_definition.n_unresolved_particles == 0:
4583- target_type = 'Born'
4584- elif contribution_definition.n_loops == 1 and \
4585- contribution_definition.n_unresolved_particles == 0:
4586- target_type = 'LoopInduced_Born'
4587- elif contribution_definition.correction_order == 'NLO':
4588- if contribution_definition.n_loops == 1 and \
4589- contribution_definition.n_unresolved_particles == 0:
4590+ correction_order = contribution_definition.correction_order.count('N')
4591+ beam_factorization_count = 0
4592+ if contribution_definition.is_beam_active('beam_one'):
4593+ beam_factorization_count += 1
4594+ if contribution_definition.is_beam_active('beam_two'):
4595+ beam_factorization_count += 1
4596+ n_loops = contribution_definition.n_loops
4597+ n_unresolved_particles = contribution_definition.n_unresolved_particles
4598+ # Beam factorization contributions are automatically of type RV because
4599+ # they must both generate local counterterms (for the form factors) and
4600+ # accept integrated ISR ones.
4601+ if beam_factorization_count > 0:
4602+ if contribution_definition.correlated_beam_convolution:
4603+ target_type = 'BeamSoft'
4604+ else:
4605+ target_type = 'RealVirtual'
4606+ elif n_loops == 0 and n_unresolved_particles == 0:
4607+ target_type = 'Born'
4608+ elif n_loops == 1 and n_unresolved_particles == 0:
4609+ if correction_order < 1:
4610+ target_type = 'LoopInduced_Born'
4611+ else:
4612 target_type = 'Virtual'
4613- elif contribution_definition.n_loops == 0 and \
4614- contribution_definition.n_unresolved_particles == 1:
4615- target_type = 'SingleReals'
4616- elif contribution_definition.correction_order == 'NNLO':
4617- if contribution_definition.n_loops == 0 and \
4618- contribution_definition.n_unresolved_particles == 2:
4619- target_type = 'DoubleReals'
4620- else:
4621- raise MadGraph5Error("Some NNLO type of integrands are not implemented yet.")
4622- elif contribution_definition.correction_order == 'NNNLO':
4623- if contribution_definition.n_loops == 0 and \
4624- contribution_definition.n_unresolved_particles == 3:
4625- target_type = 'TripleReals'
4626- else:
4627- raise MadGraph5Error("Some NNNLO type of integrands are not implemented yet.")
4628+ elif n_loops == 0 and n_unresolved_particles == 1:
4629+ target_type = 'SingleReals'
4630+ elif n_loops == 0 and n_unresolved_particles == 2:
4631+ target_type = 'DoubleReals'
4632+ elif n_loops == 0 and n_unresolved_particles == 3:
4633+ target_type = 'TripleReals'
4634 else:
4635- target_type = 'Unknown'
4636+ raise MadGraph5Error("Some %s type of ME7Integrands are not implemented yet."%
4637+ contribution_definition.correction_order)
4638 target_class = ME7Integrand_classes_map[target_type]
4639
4640 if not target_class:
4641- raise MadGraph5Error("Could not determine the class of integrand of type '%s' to be added for"%target_type+
4642+ raise MadGraph5Error("Could not determine the class of ME7Integrand of type '%s' to be added for"%target_type+
4643 " the contribution definiton:\n%s"%str(contribution_definition.nice_string()))
4644
4645 return super(ME7Integrand, cls).__new__(target_class, *all_args, **opt)
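The `__new__` above acts as a factory: when invoked on the base class it inspects the contribution definition and instantiates the matching subclass from `ME7Integrand_classes_map`. A stripped-down sketch of the same dispatch pattern (class names here are illustrative, not the module's):

```python
class Integrand(object):
    """Base class acting as a factory, mirroring ME7Integrand.__new__."""
    _type_map = {}  # filled once the concrete subclasses exist

    def __new__(cls, n_loops, n_unresolved):
        if cls is Integrand:
            # Select the concrete subclass from the contribution properties.
            if n_loops == 0 and n_unresolved == 0:
                target = 'Born'
            elif n_loops == 1 and n_unresolved == 0:
                target = 'Virtual'
            else:
                target = 'SingleReals'
            return super(Integrand, cls).__new__(Integrand._type_map[target])
        # Direct instantiation of a concrete subclass bypasses the dispatch.
        return super(Integrand, cls).__new__(cls)

    def __init__(self, n_loops, n_unresolved):
        self.n_loops = n_loops
        self.n_unresolved = n_unresolved

class Born(Integrand):
    pass

class Virtual(Integrand):
    pass

class SingleReals(Integrand):
    pass

Integrand._type_map = {'Born': Born, 'Virtual': Virtual, 'SingleReals': SingleReals}
```

Because `__new__` returns an instance of the target subclass, Python still calls `__init__` on it with the original arguments, which is why the real code forwards `*all_args, **opt`.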
4646@@ -176,6 +727,7 @@
4647
4648 # The original ContributionDefinition instance at the origin this integrand
4649 self.contribution_definition = contribution_definition
4650+
4651 # The process map of the Contribution instance at the origin of this integrand.
4652 # The format is identical to the one generated from the function 'get_process_map' of a contribution.
4653 self.processes_map = processes_map
4654@@ -197,13 +749,17 @@
4655 # This counter is incremented for each time self.__call__ is called and reinitialized in self.synchronize
4656 self.n_observable_calls = 0
4657
4658- def nice_string(self):
4659- """ For now simply use the contribution_definition and class name for a nice readable representation."""
4660-
4661- res = []
4662- res.append("Instance of class '%s', with the following contribution definition:"%(self.__class__.__name__))
4663- res.append('\n'.join(' > %s'%line for line in self.contribution_definition.nice_string().split('\n')))
4664- return '\n'.join(res)
4665+ # Store a flavor cut definition, which the ME7 interface can overwrite. This will be used
4666+        # by the function pass_flavor_sensitive_cuts and is None by default.
4667+ self.flavor_cut_function = None
4668+
4669+ def has_integrated_counterterms(self):
4670+ """ A property of the class, indicating whether it can contain integrated counterterms."""
4671+ return False
4672+
4673+ def has_local_counterterms(self):
4674+ """ A property of the class, indicating whether it can contain local counterterms."""
4675+ return False
4676
4677 def get_additional_nice_string_printout_lines(self):
4678 """ Additional printout information for nice_string.
4679@@ -218,12 +774,18 @@
4680 return GREEN+' %s'%defining_process.nice_string(print_weighted=False).\
4681 replace('Process: ','')+ENDC
4682
4683+ def get_short_name(self):
4684+ """ Returns the short-name for this integrand, typically extracted from the one
4685+ of the contribution definition."""
4686+ return self.contribution_definition.short_name()
4687+
4688 def nice_string(self, format=0):
4689 """ Nice string representation of self."""
4690 BLUE = '\033[94m'
4691 GREEN = '\033[92m'
4692 ENDC = '\033[0m'
4693- res = ['%-30s: %s'%('ME7Integrand_type',type(self))]
4694+ res = ['< %s%s%s >'%(BLUE,self.get_short_name(),ENDC)]
4695+ res.append('%-30s: %s'%('ME7Integrand_type',type(self)))
4696 res.extend([self.contribution_definition.nice_string()])
4697 if not self.topologies_to_processes is None:
4698 res.append('%-30s: %d'%('Number of topologies',
4699@@ -244,6 +806,155 @@
4700
4701 return '\n'.join(res).encode('utf-8')
4702
4703+ def __str__(self):
4704+ return self.nice_string()
4705+
4706+    # Not a @staticmethod: the dictionary branch below needs self.get_short_name().
4707+    def build_flavor_cut_function(self, flavor_cut_string):
4708+ """ Given a string defining a flavor cut function specification (taken from the run
4709+ card), build here the flavor cut function to apply at run-time. Example of a complicated
4710+ cut function:
4711+
4712+ "{
4713+ 'R':'* 21 > (21,2)x3 * | * 21 > (21,3)x3 *',
4714+ 'V':'21 (-2,2) > 21x2 (range(1,7)+range(-1,-7,-1))x5
4715+ }"
4716+
4717+ Means that the real must have:
4718+        a) anything as the first incoming particle and a gluon as the second incoming particle
4719+ b) Three 'gluons or up quarks' in the final state + anything
4720+ *or* Three 'gluons or strange quarks' in the final state + anything
4721+
4722+ Means that the virtual must have:
4723+        a) a gluon and an 'up quark or anti-up quark' as the first and second particles
4724+        b) 2 gluons and 5 quarks or antiquarks *exactly* (every particle must fit in at least one category)
4725+
4726+        Notice that one can also specify just a string, in which case the cut will apply
4727+ to all contributions.
4728+
4729+ """
4730+
4731+ # First check if the format is a dictionary that specifies a different function
4732+ # for different integrand or if it applies to all integrands.
4733+ try:
4734+ flavor_func_dic = eval(flavor_cut_string)
4735+ is_a_dict = isinstance(flavor_func_dic, dict)
4736+ except:
4737+ is_a_dict = False
4738+ if is_a_dict:
4739+ if self.get_short_name() in flavor_func_dic:
4740+ flavor_cut_string = flavor_func_dic[self.get_short_name()]
4741+ else:
4742+ return None
4743+
4744+ if flavor_cut_string.lower() == 'none':
4745+ return None
4746+ if flavor_cut_string.lower() == 'hardcoded':
4747+ return 'hardcoded'
4748+
4749+ patterns = [p.strip() for p in flavor_cut_string.split('|')]
4750+ processed_patterns = []
4751+ for pattern in patterns:
4752+ processed_pattern = {}
4753+ try:
4754+ initial_state_pattern, final_state_pattern = pattern.split('>')
4755+ initial_state_pattern = initial_state_pattern.strip()
4756+ final_state_pattern = final_state_pattern.strip()
4757+ except:
4758+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%pattern)
4759+ initial_state_patterns = [isp.strip() for isp in re.split(r'\s+',initial_state_pattern)]
4760+ processed_pattern['initial_states'] = []
4761+ for isp in initial_state_patterns:
4762+ if isp=='*':
4763+ processed_pattern['initial_states'].append(None)
4764+ else:
4765+ try:
4766+ specifier = eval(isp)
4767+ except:
4768+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%isp)
4769+ if isinstance(specifier,int):
4770+ processed_pattern['initial_states'].append((specifier,))
4771+ elif isinstance(specifier, (tuple, list)):
4772+ processed_pattern['initial_states'].append(tuple(specifier))
4773+ else:
4774+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%isp)
4775+ processed_pattern['final_states'] = []
4776+ processed_pattern['needs_exact_final_states'] = True
4777+ final_state_patterns = [fsp.strip() for fsp in re.split(r'\s+',final_state_pattern)]
4778+ for fsp in final_state_patterns:
4779+ try:
4780+ matcher, multiplier = fsp.split('x')
4781+ except:
4782+ multiplier = '1'
4783+ matcher = fsp
4784+ try:
4785+ multiplier = int(multiplier.strip())
4786+ except:
4787+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%multiplier)
4788+ matcher = matcher.strip()
4789+ if matcher=='*':
4790+ processed_pattern['needs_exact_final_states'] = False
4791+ else:
4792+ try:
4793+ specifier = eval(matcher)
4794+ except:
4795+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%matcher)
4796+ if isinstance(specifier,int):
4797+ processed_pattern['final_states'].append(((specifier,),multiplier))
4798+ elif isinstance(specifier, (tuple, list)):
4799+                    processed_pattern['final_states'].append((tuple(specifier),multiplier))
4800+ else:
4801+ raise MadEvent7Error('Malformed flavor-cut pattern: %s.'%matcher)
4802+ processed_patterns.append(processed_pattern)
4803+
4804+ # Ok, now that we've processed the patterns to consider, we can create the function
4805+ # that checks it:
4806+ def pass_flavor_cut_func(flavor_config):
4807+ initial_pdgs, final_pdgs = flavor_config[0], flavor_config[1]
4808+ # Loop over all capturing patterns
4809+ for pattern in processed_patterns:
4810+ # First check the initial states
4811+ if len(initial_pdgs)!=len(pattern['initial_states']):
4812+ continue
4813+ must_continue = False
4814+ for i, pdg in enumerate(initial_pdgs):
4815+ if (pattern['initial_states'][i] is not None) and \
4816+ (pdg not in pattern['initial_states'][i]):
4817+ must_continue = True
4818+ break
4819+ if must_continue:
4820+ continue
4821+
4822+ # Then check the final states.
4823+ # First make sure all constraints specified by the user apply
4824+ must_continue = False
4825+ for (matcher, multiplier) in pattern['final_states']:
4826+ if len([1 for pdg in final_pdgs if pdg in matcher])!=multiplier:
4827+ must_continue = True
4828+ break
4829+ if must_continue:
4830+ continue
4831+ # And if 'needs_exact_final_states' then also make sure that all final
4832+ # states belong in at least one category
4833+ if pattern['needs_exact_final_states']:
4834+ must_continue = False
4835+ for pdg in final_pdgs:
4836+ if not any(pdg in matcher for (matcher, multiplier) in pattern['final_states']):
4837+ must_continue = True
4838+ break
4839+ if must_continue:
4840+ continue
4841+
4842+ # If we haven't "continued" by now, then it means that the pattern was
4843+ # capturing and we can return True
4844+ return True
4845+
4846+            # No pattern matched up to this point, so we must return False
4847+ return False
4848+
4849+ # We can now return the flavor cut function created
4850+ return pass_flavor_cut_func
4851+
4852 def synchronize(self, model, run_card, ME7_configuration):
4853 """ Synchronize this integrand with the most recent run_card and model."""
4854
4855@@ -258,6 +969,8 @@
4856 # A RunCardME7 instance, properly initialized with the values of the run_card.dat of this run
4857 self.run_card = run_card
4858
4859+        self.flavor_cut_function = self.build_flavor_cut_function(self.run_card['flavor_cuts'])
4860+
4861 # Set external masses
4862 all_processes = [p[0] for p in self.processes_map.values()]
4863 self.masses = all_processes[0].get_external_masses(self.model)
4864@@ -279,22 +992,34 @@
4865 (self.run_card['lpp1'], self.run_card['lpp2']))
4866
4867 # Always initialize the basic flat PS generator. It can be overwritten later if necessary.
4868+ simplified_beam_types = (
4869+ 0 if self.contribution_definition.beam_factorization['beam_one'] is None else 1,
4870+ 0 if self.contribution_definition.beam_factorization['beam_two'] is None else 1,
4871+ )
4872 self.phase_space_generator = phase_space_generators.FlatInvertiblePhasespace(
4873 self.masses[0], self.masses[1],
4874- beam_Es = (self.run_card['ebeam1'], self.run_card['ebeam2']),
4875- beam_types = (self.run_card['lpp1'], self.run_card['lpp2']),
4876+ beam_Es = (self.run_card['ebeam1'], self.run_card['ebeam2']),
4877+ beam_types = simplified_beam_types,
4878+ is_beam_factorization_active =
4879+ ( self.contribution_definition.is_beam_active('beam_one'),
4880+ self.contribution_definition.is_beam_active('beam_two') ),
4881+ correlated_beam_convolution = self.contribution_definition.correlated_beam_convolution
4882 )
4883
4884 # Add a copy of the PS generator dimensions here.
4885 # Notice however that we could add more dimensions pertaining to this integrand only, and PS generation.
4886- # This is in particular true for discrete integration dimension like sectors, helicities, etc...
4887- self.set_dimensions(integrands.DimensionList(self.phase_space_generator.dimensions))
4888+ # This is in particular true for discrete integration dimension like sectors, helicities, etc...
4889+ integrand_dimensions = integrands.DimensionList(self.phase_space_generator.dimensions)
4890+
4891+ # Now remove the frozen dimensions from the list of continuous ones.
4892+ integrand_dimensions = integrands.DimensionList(
4893+ d for d in integrand_dimensions if d.name not in self.FROZEN_DIMENSIONS)
4894+ self.set_dimensions(integrand_dimensions)
4895 self.dim_ordered_names = [d.name for d in self.get_dimensions()]
4896 self.dim_name_to_position = dict((name,i) for i, name in enumerate(self.dim_ordered_names))
4897 self.position_to_dim_name = dict((v,k) for (k,v) in self.dim_name_to_position.items())
4898
4899 self.collider_energy = self.run_card['ebeam1'] + self.run_card['ebeam2']
4900-
4901 # Set the seed
4902 if self.run_card['iseed'] > 0:
4903 random.seed(self.run_card['iseed'])
4904@@ -462,8 +1187,8 @@
4905
4906 return False
4907
4908- def find_counterterms_matching_regexp(
4909- self, counterterms, limit_pattern=None):
4910+ def find_counterterms_matching_regexp(self,
4911+ counterterms, limit_pattern=None, integrated_CT=False):
4912         """ Find all mappings that match a particular limit given in argument
4913         (takes a random one if left to None). This function is placed here given that
4914         it can be useful for both ME7Integrand_V and ME7Integrand_R."""
4915@@ -471,7 +1196,11 @@
4916 # First select only the counterterms which are not pure matrix elements
4917 # (i.e. they have singular structures) and also exclude here soft-collinear
4918 # counterterms since they do not have an approach limit function.
4919- selected_counterterms = [ ct for ct in counterterms if ct.is_singular() ]
4920+ if not integrated_CT:
4921+ selected_counterterms = [ ct for ct in counterterms if ct.is_singular() ]
4922+ else:
4923+ selected_counterterms = [ ct for ct in counterterms if ct['integrated_counterterm'].is_singular() ]
4924+
4925 if len(selected_counterterms)==0:
4926 return []
4927
4928@@ -499,13 +1228,18 @@
4929 list_limit_pattern = [limit_pattern]
4930 new_list_limit_pattern = []
4931 for limit_pattern in list_limit_pattern:
4932- if not limit_pattern.startswith('('):
4933+ if not integrated_CT and not limit_pattern.startswith('('):
4934 # If not specified as a raw string, we take the liberty of adding
4935 # the enclosing parenthesis.
4936 limit_pattern = '(%s,)' % limit_pattern
4937+ elif integrated_CT and not limit_pattern.startswith('['):
4938+ # If not specified as a raw string, we take the liberty of adding
4939+                # the enclosing brackets.
4940+ limit_pattern = '[%s,]' % limit_pattern
4941 # We also take the liberty of escaping the parenthesis
4942 # since this is presumably what the user expects.
4943 limit_pattern = limit_pattern.replace('(', '\(').replace(')', '\)')
4944+ limit_pattern = limit_pattern.replace('[', '\[').replace(']', '\]')
4945 new_list_limit_pattern.append(limit_pattern)
4946 limit_pattern_re = re.compile(r'^(%s)$'%(
4947 '|'.join(limit_pattern for limit_pattern in new_list_limit_pattern) ))
4948@@ -515,19 +1249,34 @@
4949 returned_counterterms.append(random.choice(selected_counterterms))
4950 else:
4951 for counterterm in selected_counterterms:
4952- if re.match(limit_pattern_re, str(counterterm)):
4953+ ct = counterterm if not integrated_CT else counterterm['integrated_counterterm']
4954+ if re.match(limit_pattern_re, str(ct)):
4955 returned_counterterms.append(counterterm)
4956
4957 return returned_counterterms
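The wrapping and escaping of user-supplied limit patterns above can be sketched as a standalone helper (an illustrative name, not the module's API), showing why local counterterms get `(...)` delimiters and integrated ones `[...]`:

```python
import re

def compile_limit_patterns(patterns, integrated_CT=False):
    # Mirror find_counterterms_matching_regexp: wrap bare patterns in the
    # enclosing delimiters, escape them, and join into one alternation.
    processed = []
    for p in patterns:
        if not integrated_CT and not p.startswith('('):
            p = '(%s,)' % p
        elif integrated_CT and not p.startswith('['):
            p = '[%s,]' % p
        # Escape the delimiters so they match literally in the regex.
        p = p.replace('(', r'\(').replace(')', r'\)')
        p = p.replace('[', r'\[').replace(']', r'\]')
        processed.append(p)
    return re.compile(r'^(%s)$' % '|'.join(processed))
```

The compiled regex is then matched against `str(counterterm)`, so a bare user pattern like `'C(1,4)'` captures exactly the counterterm whose string form is `'(C(1,4),)'`.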
4958
4959- def pass_flavor_blind_cuts(self, PS_point, process_pdgs, n_jets_allowed_to_be_clustered = None):
4960+ def pass_all_cuts(self, PS_point, flavors,
4961+ n_jets_allowed_to_be_clustered = None, xb_1 = None, xb_2 = None):
4962+ """ Calls both the flavor-blind and flavor-sensitive cuts."""
4963+ return self.pass_flavor_blind_cuts(PS_point, flavors,
4964+ n_jets_allowed_to_be_clustered = n_jets_allowed_to_be_clustered,
4965+ xb_1 = xb_1, xb_2 = xb_2 ) and \
4966+ self.pass_flavor_sensitive_cuts(PS_point, flavors,
4967+ xb_1 = xb_1, xb_2 = xb_2 )
4968+
4969+ def pass_flavor_blind_cuts(self, PS_point, process_pdgs,
4970+ n_jets_allowed_to_be_clustered = None, xb_1 = None, xb_2 = None):
4971 """ Implementation of a minimal set of isolation cuts. This can be made much nicer in the future and
4972 will probably be taken outside of this class so as to ease user-specific cuts, fastjet, etc...
4973 We consider here a two-level cuts system, this first one of which is flavour blind.
4974 The 'n_jets_allowed_to_be_clustered' is an option that allows to overwrite the
4975 maximum number of jets that can be clustered and which is by default taken to be:
4976 self.contribution_definition.n_unresolved_particles
4977- This is useful when using this function to apply cuts to the reduced PS of the CTs."""
4978+ This is useful when using this function to apply cuts to the reduced PS of the CTs.
4979+        Finally, the options 'xb_1' and 'xb_2' allow one to specify the boost bringing
4980+ the PS point back from the c.o.m. to the lab frame, necessary if some cuts are not
4981+ invariant under a boost along the beam axis."""
4982+
4983 debug_cuts = False
4984 # This is a temporary function anyway which should eventually be replaced by a full
4985 # fledged module for handling generation level cuts, which would also make use of fjcore.
4986@@ -535,6 +1284,12 @@
4987
4988 from madgraph.integrator.vectors import LorentzVectorDict, LorentzVectorList, LorentzVector
4989
4990+ # If the cuts depend on the boost to the lab frame in case of hadronic collision
4991+ # then the quantity below can be used:
4992+# boost_vector_to_lab_frame = None
4993+# if (xb_1 is not None) and (xb_2 is not None) and (xb_1, xb_2)!=(1.,1.):
4994+# boost_vector_to_lab_frame = PS_point.get_boost_vector_to_lab_frame(xb_1, xb_2)
4995+
4996 # These cuts are not allowed to resolve flavour, but only whether a particle is a jet or not
4997 def is_a_jet(pdg):
4998 return abs(pdg) in range(1,self.run_card['maxjetflavor']+1)+[21]
4999@@ -554,6 +1309,19 @@
5000 if n_jets_allowed_to_be_clustered is None:
The diff has been truncated for viewing.
