Hi Valentin,

Thanks a lot for your comments. That's what I call a good review.

> A1) What does this question really imply when launching the process?
>
> 5 Add weight to events based on coupling parameters reweight=OFF
>
> I thought that you were multiplying/dividing by the matrix element of the original and new hypothesis, and not just the ratio of particular couplings (which would be faster but much less general). So is this some hybrid implementation?

You are correct: this is multiplying/dividing by the matrix element of the original and new hypothesis. The wording is mainly there to distinguish it from the SysCalc type of reweighting.

> I see the xml tag '' added when turning this on, what is it used for?

I do not see this tag for NLO events. At LO, this is the tag for SysCalc, so it has nothing to do with this branch.

> A2) What is the status of the parallelization? In the code it says that it's unstable, but is it still the case with f2py?

The parallelization needs to be handled in a completely different way for f2py. The first problem is the GIL: I am not sure how it is going to handle the f2py-generated functions. If they are treated as standard python functions, then we are basically in trouble. So it is not going to work.

> A3) What about adding the possibility of an 'unweight' command to the reweight_card.dat to allow the user to decide to create a new event file with only the unweighted events corresponding to the new hypothesis?

The option already exists to create a new weighted sample (the command is "change output 2").

> A4) How is the indicative error computed?

This has two parts:
1) the original statistical error (rescaled by the ratio of the cross-sections);
2) a component proportional to the variance of the weight factor (variance/math.sqrt(event_nb) * orig_cross).
I combine those linearly (a short sketch is given after B4 below).

> B1) Debug left at [common_run_interface.py at line 3089]

Removed.

> B2) INFO: storring files of Previous run -> INFO: Storing files of previous run
> INFO: Do remember that the reweighting -> INFO: Remember that the reweighting

Thanks.

> change model loop_sm
> change process g g > h [virt=QCD]
> launch
> /Users/valentin/Documents/Work/MG5/unleashed_reweighting/TEST_LI_gg_hg/Cards/param_card2.dat
>
> and it crashed in a weird way telling me that the param_card I used was identical to the older one.

For me it crashes with the expected message:

Command "generate_events run_01" interrupted with error:
InvalidCmd : NLO processes can't be reweight (for Loop induced reweighting use [sqrvirt =])

> But even then it crashed because the file:
> "template_files/loop_optimized/check_py.f"

File added.

> Even then, I would put a warning and let the code go on with the same card because it is a good check to make sure that with an identical card one recovers the same result, and users might want to do this to get confidence in the tool.

Ok, good point.

> B4) I tried NLO reweighting for p p > e+ ve [QCD] by changing the irrelevant top Yukawa with
> set ymt 200.0
> and it first crashed because my f2py is called f2py-2.7. The crash was not helpful to the average user:

Ok, now I run with the first available of those three: f2py, f2py-2.6, f2py-2.7. I also prevent the code from running at all if f2py is not installed.
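For concreteness, here is a minimal python sketch of the combination described in A4 above; the function and variable names (indicative_error, weights, ...) are illustrative, not the actual ones in the code:

import math

def indicative_error(orig_cross, orig_error, weights):
    # 'weights' are the per-event reweighting factors |M_new|^2 / |M_old|^2
    event_nb = len(weights)
    mean_w = sum(weights) / event_nb
    new_cross = orig_cross * mean_w
    # part 1: the original statistical error, rescaled by the
    # ratio of the cross-sections
    part1 = orig_error * new_cross / orig_cross
    # part 2: proportional to the variance of the weight factor
    variance = sum((w - mean_w) ** 2 for w in weights) / event_nb
    part2 = variance / math.sqrt(event_nb) * orig_cross
    # the two components are combined linearly
    return part1 + part2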
> B5) Now the same trial as above worked, but with this reweighting it didn't:
>
> launch
> set ymt 200.0
> # SPECIFY A PATH OR USE THE SET COMMAND like
> set sminputs 1 100 # modify 1/alpha_EW
> change model mssm
> launch
> /Users/valentin/Documents/Work/MG5/unleashed_reweighting/pp_epve_MSSM/Cards/param_card.dat

I have tried to make it work (and found and fixed an independent bug in the process), but in the end we would have to play with module reloading (since we re-create a new python module from f2py). This sounds too complicated to me (and potentially too dangerous). So I put an explicit error stating that such a command needs to come first. Since you can also run the reweighting module independently of the generation, this is not really a problem. (Note that you will have side effects if you do not quit the interface between the two runs.)

> Also, is it allowed to change process multiple times or, like model, is it only once at the beginning?

Same problem of module loading, so only once (but this is even less problematic in practice).

> B6) Since you already make a diff of the card to see what changed, you could add a warning when the user tries to change 'as' to tell him that it is not going to be applied.

Ok, added.

> B7) Still on u u~ > d d~ and I am now trying to include EW effects, i.e.
>
> change process u u~ > d d~ QCD=99 QED=99
> launch
>
> And I get a basically identical cross section. I tried with QCD=0 to really have a numerically big difference between the two hypotheses but the two cross-sections remained basically identical, i.e.
>
> INFO: new cross-section is : 16022.2 pb (indicative error: 82.0026 pb)
> INFO: Original cross-section: 16069.999782 +- 48.060999348 pb
> INFO: Computed cross-section:
> INFO: 2 : 16022.216079 +- 82.0026420156 pb

First comment: this is clearly outside the validity region of the method, especially due to the Z peak, which is quite hard to probe a priori. This also explains why the indicative error increases (admittedly not alarmingly enough, but ok). So I would not be too worried about failing to get the correct number here. In addition, it looks like the tail is quite different and drops much more quickly for QCD compared to QED, so that part of the tail is not probed at all for QED. If I run standard MG/ME I get:

u u~ > d d~ : 1.607e4
u u~ > d d~ QED=99: 1.629e4
u u~ > d d~ / Z QED=99: 1.516e+04 ± 48

So your result does not sound too bad. For QCD=0, the exact result is 1408 ± 3.2, and via the reweighting:

INFO: new cross-section is : 1011.03 pb (indicative error: 42.3097 pb)
INFO: Original cross-section: 16069.999782 +- 48.060999348 pb
INFO: Computed cross-section:
INFO: 1 : 1011.02625362 +- 42.3097474513 pb

You see that the result is quite off. This is mainly because the QCD shape drops much more quickly than the QED one, and therefore you miss a huge contribution coming from the tail. If I increase the number of events to 500k (so 50k more) then I have:

INFO: new cross-section is : 1197.37 pb (indicative error: 8.62098 pb)
INFO: Original cross-section: 16041.534118 +- 5.6026372868 pb
INFO: Computed cross-section:
INFO: 1 : 1197.36689035 +- 8.62097545251 pb

Not yet perfect (actually far from it, but going in the correct direction). A schematic of what goes wrong in the tail is sketched below.
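To make the tail issue concrete, here is a schematic (not the actual implementation) of what the matrix-element reweighting computes; matrix_new/matrix_old stand for the squared matrix elements of the two hypotheses evaluated on the event kinematics, and the event weight convention is an assumption:

def reweighted_cross_section(events, matrix_new, matrix_old):
    # each event weight is multiplied by the ratio of the new and
    # original squared matrix elements (assuming the per-event
    # weights sum to the original cross-section)
    total = 0.0
    for event in events:
        ratio = matrix_new(event) / matrix_old(event)
        total += event.weight * ratio
    # regions of phase-space where the original sample has (almost) no
    # events receive (almost) no contribution, whatever the ratio would
    # be there: a tail populated only by the new hypothesis is missed
    return total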
> B8) I tried now with the full dijet, i.e. p p > j j, and I wanted to reweight with process definitions using a custom multiparticle label definition, but one cannot, right? (i.e. define not supported). So I went on and tried starting from this dijet run:
>
> define p = g u u~
> define j = p
> generate p p > j j; output; launch; 5;
>
> and then I used the reweight card:
>
> change process u u~ > u u~ QED=99
> change process u u > u u QED=99
> change process u~ u~ > u~ u~ QED=99
> change process u u~ > g g --add
> change process u g > u g --add
> change process g g > u u~ --add
> change process u~ g > u~ g --add
> change process g g > g g --add
> launch
>
> and it worked but once again the cross-section was identical to the original one (see B7)

The same comment as for B7 applies, so I would not worry about the value. Concerning the "define", indeed it is not supported here. I can add it if really needed (I would rather do a "change multi particles p =" command). Otherwise, note that some of your processes do not have the "--add" and are therefore not included; I would expect the code to crash if such an event occurs.

> C1) Why having modified all MadLoop messages so as to start with "##"? I think I should write them using a log function so as to make such modifications less painful in the future.

That was for the fortran implementation, where I was parsing the stdout, so it was nice to have all of them formatted in the same way so that they could be bypassed. It is not needed anymore, but I keep the change.

> C2) In get_LO_definition_from_NLO. I'm not sure I understand the treatment of the coupling orders.
> Also, if the process being reweighted is, say p p > t t~ j [QCD], I don't see how you will correctly generate the real u u~ > t t~ d d~ (i.e. gluon splitting into d d~) with your type of construction.

I have added a bunch of comments, but the idea is to use FKS to know the list of particles which can be soft (i.e. to recover the list of particles that should be in "p"). We name those particles pert_QCD and then generate

p p > t t~ j pert_QCD

which indeed includes u u~ > t t~ d d~.

Cheers and thanks for this very deep review.

Olivier

PS: I will take a new look at your branch ;-)

On 18 Jul 2015, at 02:56, Valentin Hirschi