Merge lp:~christopher-hunt08/maus/minor_tweaks into lp:maus/merge

Proposed by Christopher Hunt
Status: Merged
Merged at revision: 1223
Proposed branch: lp:~christopher-hunt08/maus/minor_tweaks
Merge into: lp:maus/merge
Diff against target: 1554 lines (+1260/-39)
9 files modified
bin/utilities/data_tracker_verify.py (+6/-1)
src/common_py/analysis/beam_sampling.py (+1025/-0)
src/common_py/analysis/covariances.py (+9/-8)
src/common_py/analysis/hit_types.py (+23/-2)
src/common_py/analysis/inspectors.py (+154/-20)
src/common_py/analysis/tools.py (+4/-0)
src/common_py/event_loader.py (+17/-5)
src/legacy/Simulation/FillMaterials.cc (+19/-0)
third_party/bash/40python_extras.bash (+3/-3)
To merge this branch: bzr merge lp:~christopher-hunt08/maus/minor_tweaks
Reviewer Review Type Date Requested Status
Adam Dobbs Needs Fixing
Review via email: mp+329995@code.launchpad.net

Description of the change

Some small changes to v2.9.1

Revision history for this message
Adam Dobbs (ajdobbs) wrote :

Thanks Chris, do you have a Jenkins build for this branch?

Revision history for this message
Christopher Hunt (christopher-hunt08) wrote :

Sure, it's:
MAUS_Hunt_v2.6.1_tweaks

There should be more to come; I think these are just some little things I found.

Chris


--

Room: Blackett-514 Ext: 41625
E-mail: <email address hidden>
Imperial College London

Revision history for this message
Adam Dobbs (ajdobbs) wrote :

OK, I just tried to do a clean build on lx02, after merging it into a clean merge branch:

bzr branch lp:maus/merge chunt
cd chunt
bzr merge lp:~christopher-hunt08/maus/minor_tweaks
./install_build_test.bash -j 4 --use-system-gcc true

I get the following error when it reaches the start of the unit tests:

Install file: "tests/integration/test_simulation/test_tof/tof_mc_plotter" as "build/tests/tof_mc_plotter"
Install file: "tests/cpp_unit/test_cpp_unit" as "build/tests/test_cpp_unit"
scons: done building targets.

INFO: Running the unit tests

Failure: ImportError (No module named maus_cpp.globals) ... ERROR

======================================================================
ERROR: Failure: ImportError (No module named maus_cpp.globals)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/vols/build/mice/adobbs/MAUS/maus/chunt/third_party/install/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName
    addr.filename, addr.module)
  File "/vols/build/mice/adobbs/MAUS/maus/chunt/third_party/install/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/vols/build/mice/adobbs/MAUS/maus/chunt/third_party/install/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/vols/build/mice/adobbs/MAUS/maus/chunt/build/tests/test_maus_cpp/test_simulation.py", line 27, in <module>
    import maus_cpp.globals
ImportError: No module named maus_cpp.globals

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (errors=1)

^C
adobbs@lx02:~$

Any thoughts?

review: Needs Fixing
Revision history for this message
Christopher Hunt (christopher-hunt08) wrote :

I have no idea.

The unit tests run fine for me and on Jenkins.

I've seen the Python includes cause trouble before, but I have no idea why. Can you perform the import on the command line?

Chris



Revision history for this message
Christopher Hunt (christopher-hunt08) wrote :

Interestingly, I don't have a build/tests/ directory either, and I don't think Jenkins does.

Yet yours does, for some reason?

Chris



Revision history for this message
chrisrogers1234 (chris-rogers) wrote :

FYI, import maus_cpp.globals looks for $MAUS_ROOT_DIR/build/maus_cpp/globals.so

If you do

nm build/maus_cpp/globals.so | grep init

you should see a line like

0000000000004070 T initglobals

If that symbol doesn't exist then the compile step failed somewhere. The file being compiled is

src/py_cpp/PyGlobals.cc

which is done by the method

build_python_modules(env)

in src/common_py/maus_build_tools/core_builder.py

hope that helps!

Revision history for this message
Adam Dobbs (ajdobbs) wrote :

Dobbs's fault: on my last push I missed a ':' when setting PYTHONPATH in configure. Not sure why that push didn't break Jenkins. Tests are running on your branch now, Chris.
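The missing-separator failure is easy to see in isolation. A hypothetical sketch (the paths below are illustrative, not the real MAUS layout): dropping the ':' when prepending a directory to PYTHONPATH fuses two search entries into one nonexistent path, so modules under both stop importing.

```python
# Hypothetical paths -- the point is the dropped ':' separator, not the layout.
build_dir = "/maus/build"
existing = "/usr/lib/python2.7/site-packages"

good = build_dir + ":" + existing   # two search entries, both importable
bad = build_dir + existing          # ':' dropped: one bogus fused entry

print(good.split(":"))  # ['/maus/build', '/usr/lib/python2.7/site-packages']
print(bad.split(":"))   # ['/maus/build/usr/lib/python2.7/site-packages']
```

With the fused entry, the interpreter never searches the build directory, which matches the "No module named maus_cpp.globals" symptom even though the module was built correctly.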

Preview Diff

1=== modified file 'bin/utilities/data_tracker_verify.py'
2--- bin/utilities/data_tracker_verify.py 2017-05-11 16:12:21 +0000
3+++ bin/utilities/data_tracker_verify.py 2017-08-31 10:53:10 +0000
4@@ -36,6 +36,7 @@
5 import event_loader
6 from analysis import tools
7 import ROOT
8+import glob
9
10 # Useful Constants and configuration
11 REFERENCE_PLANE = 0
12@@ -1010,7 +1011,11 @@
13
14 ##### 2. Load SciFi Events ####################################################
15 print "\nLoading Spills...\n"
16- file_reader = event_loader.maus_reader(namespace.maus_root_files)
17+ files = []
18+ for filename in namespace.maus_root_files :
19+ files += glob.glob(filename)
20+
21+ file_reader = event_loader.maus_reader(files)
22
23 try :
24 while file_reader.next_event() and \
25
26=== added file 'src/common_py/analysis/beam_sampling.py'
27--- src/common_py/analysis/beam_sampling.py 1970-01-01 00:00:00 +0000
28+++ src/common_py/analysis/beam_sampling.py 2017-08-31 10:53:10 +0000
29@@ -0,0 +1,1025 @@
30+#!/usr/bin/env python
31+
32+# This file is part of MAUS: http://micewww.pp.rl.ac.uk/projects/maus
33+#
34+# MAUS is free software: you can redistribute it and/or modify
35+# it under the terms of the GNU General Public License as published by
36+# the Free Software Foundation, either version 3 of the License, or
37+# (at your option) any later version.
38+#
39+# MAUS is distributed in the hope that it will be useful,
40+# but WITHOUT ANY WARRANTY; without even the implied warranty of
41+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
42+# GNU General Public License for more details.
43+#
44+# You should have received a copy of the GNU General Public License
45+# along with MAUS. If not, see <http://www.gnu.org/licenses/>.
46+#
47+
48+"""
49+ Used for selecting distributions of particles from a larger
50+ distribution.
51+
52+ The parent distribution must be known prior to invocation as both the
53+ covariance matrix and number of particles is required.
54+"""
55+
56+# pylint: disable = W0311, E1101, W0102, R0902, C0103, R0201, W0223
57+# pylint: disable = R0912, R0913, R0914, R0915, W0221, C0302
58+
59+import numpy
60+from scipy.stats import chi2
61+import math
62+import ROOT
63+
64+import analysis.hit_types
65+import analysis.inspectors
66+import analysis.covariances
67+
68+
69+MAX_STATISTICAL_WEIGHT = 10.0
70+
71+
72+################################################################################
73+
74+def gaussian(x, mean, var) :
75+ """
76+ Analytically calculate values for a Gaussian distribution with given mean
77+ and variance
78+ """
79+ var_inv = 1.0/var
80+ vector = x - mean
81+ const = 1.0 / math.sqrt( 2.0*math.pi * var )
82+ exp = -0.5*( vector*vector*var_inv )
83+
84+ return const * math.exp( exp )
85+
86+def multivariate_gaussian(x, mean, cov) :
87+ """
88+ Analytically calculate vectors for a multivariate Gaussian distribution
89+ with given mean vector and covariance matrix
90+ """
91+ cov_inv = numpy.linalg.inv(cov)
92+ vector = x - mean
93+ const = 1.0 / math.sqrt( numpy.linalg.det( 2.0*math.pi * cov ) )
94+ exp = -0.5*( vector.transpose().dot(cov_inv.dot(vector)) )
95+
96+ return const * math.exp( exp )
97+
98+################################################################################
99+
100+
101+class Sampler_base(object) :
102+ """
103+ Base class for sampler-type object definitions.
104+ Outlines the interface.
105+ """
106+ def __init__(self) :
107+ pass
108+
109+
110+ def accept(self, hit) :
111+ """
112+ Test method. Returns true if the supplied hit should be accepted into the
113+ daughter distribution.
114+ """
115+ raise NotImplementedError("Sampler_base.accept()"+\
116+ " Method not overridden by derived class")
117+
118+
119+ def weight(self, hit) :
120+ """
121+ Test method. Returns the statistical weight of the hit for sampling.
122+ """
123+ raise NotImplementedError("Sampler_base.weight()"+\
124+ " Method not overridden by derived class")
125+
126+
127+ def calculate_parent(self, x) :
128+ """
129+ Returns the value of the parent distribution at a position x
130+ """
131+ raise NotImplementedError("Sampler_base.calculate_parent()"+\
132+ " Method not overridden by derived class")
133+
134+
135+ def calculate_daughter(self, x) :
136+ """
137+ Returns the value of the daughter distribution at a position x
138+ """
139+ raise NotImplementedError("Sampler_base.calculate_daughter()"+\
140+ " Method not overridden by derived class")
141+
142+ def get_plots(self) :
143+ """
144+ Returns a dictionary of plots for analysis.
145+ """
146+ return {}
147+
148+
149+
150+
151+class GaussianMomentumSampler(Sampler_base) :
152+ """
153+ Select a gaussian momentum distribution
154+ """
155+ def __init__(self, parent_dist, mean, std) :
156+ """
157+ Init.
158+ Requires parent TH1F momentum distribution
159+ Mean of daughter distribution
160+ STD of daughter distribution
161+ """
162+ Sampler_base.__init__(self)
163+ self.__mean = mean
164+ self.__var = std*std
165+
166+ self.__parent_distribution = parent_dist
167+ self.__parent_distribution.Scale(\
168+ 1.0 / self.__parent_distribution.GetEntries())
169+
170+ self.__max = 5.0*std
171+ self.__normalisation = 1.0e+20
172+ self.__weight_normalisation = 0.0
173+
174+
175+ # ACCEPT / REJECT
176+ for bin_i in range(self.__parent_distribution.GetNbinsX()) :
177+ x = self.__parent_distribution.GetBinCenter(bin_i)
178+ if abs(x - mean) > self.__max :
179+ continue
180+
181+ if self.calculate_daughter(x) > 0.0 : #Divide by zero errors
182+ norm = self.calculate_parent(x) / self.calculate_daughter(x)
183+ else :
184+ norm = self.__normalisation
185+
186+ if norm < self.__normalisation :
187+ self.__normalisation = norm
188+
189+
190+ # STATISTICAL WEIGHTING
191+ max_daughter = 0.0
192+ max_parent = 0.0
193+
194+ for bin_i in range(self.__parent_distribution.GetNbinsX()) :
195+ x = self.__parent_distribution.GetBinCenter(bin_i)
196+ if abs(x - mean) > self.__max :
197+ continue
198+
199+ daughter = self.calculate_daughter(x)
200+ parent = self.calculate_parent(x)
201+ if daughter > max_daughter :
202+ max_daughter = daughter
203+ max_parent = parent
204+
205+ self.__weight_normalisation = max_parent / max_daughter
206+
207+
208+ def calculate_parent(self, x) :
209+ """
210+ Calculate parent probability for momentum
211+ """
212+ found_bin = self.__parent_distribution.FindBin(x)
213+ return self.__parent_distribution.GetBinContent(found_bin)
214+
215+
216+ def calculate_daughter(self, x) :
217+ """
218+ Calculate daughter probability for momentum
219+ """
220+ return gaussian( x, self.__mean, self.__var )
221+
222+
223+ def accept(self, hit) :
224+ p = hit.get_p()
225+
226+ if abs(p - self.__mean) > self.__max :
227+ return False
228+
229+ u = numpy.random.sample() #Uniform random number (0,1)
230+ g_y = self.calculate_daughter(p)*self.__normalisation
231+ f_y = self.calculate_parent(p)
232+
233+ if ( u >= (g_y / f_y) ) :
234+ return False
235+ else :
236+ return True
237+
238+
239+ def weight(self, hit) :
240+ p = hit.get_p()
241+
242+ if abs(p - self.__mean) > self.__max :
243+ return 0.0
244+
245+ g_y = self.calculate_daughter(p)*self.__weight_normalisation
246+ f_y = self.calculate_parent(p)
247+
248+ ratio = (g_y / f_y)
249+ return ratio
250+
251+
252+
253+class MomentumWindowSampler(Sampler_base) :
254+ """
255+ Sample a square momentum selection
256+ """
257+ def __init__(self, parent_dist, mean, window) :
258+ Sampler_base.__init__(self)
259+ self.__mean = mean
260+ self.__window = window/2.0
261+
262+ self.__parent_distribution = parent_dist
263+
264+
265+ def calculate_parent(self, x) :
266+ """
267+ Nothing to do
268+ """
269+ return None
270+
271+
272+ def calculate_daughter(self, x) :
273+ """
274+ Nothing to do
275+ """
276+ return None
277+
278+
279+ def accept(self, hit) :
280+ p = hit.get_p()
281+
282+ if abs(p - self.__mean) > self.__window :
283+ return False
284+ else :
285+ return True
286+
287+
288+
289+
290+class GaussianXYSampler(Sampler_base) :
291+ """
292+ Multivariate Gaussian Sampling class.
293+ Assumes both the parent and daughter distributions are gaussian.
294+ """
295+
296+ def __init__(self, parent_means, parent_cov, means, cov) :
297+ """
298+ Expects the parent means vector, parent covariance matrix, daughter means vector and daughter covariance matrix.
299+ """
300+ Sampler_base.__init__(self)
301+ if parent_means.shape != (2,) :
302+ raise ValueError("Parent Means must be a 2x1 vector")
303+ if parent_cov.shape != (2, 2) :
304+ raise ValueError("Parent Covariance must be a 2x2 matrix")
305+ if means.shape != (2,) :
306+ raise ValueError("Means must be a 2x1 vector")
307+ if cov.shape != (2,2) :
308+ raise ValueError("Covariance must be a 2x2 matrix")
309+
310+ self.__parent_means = parent_means
311+ self.__parent_covariance = parent_cov
312+ self.__means = means
313+ self.__covariance = cov
314+
315+ self.__normalisation = math.sqrt(numpy.linalg.det(self.__covariance) / \
316+ numpy.linalg.det(self.__parent_covariance))
317+
318+
319+ def calculate_parent(self, x) :
320+ """
321+ Returns the value of the parent distribution at a position x
322+ """
323+ return multivariate_gaussian(x, self.__parent_means, \
324+ self.__parent_covariance)
325+
326+
327+ def calculate_daughter(self, x) :
328+ """
329+ Returns the value of the daughter distribution at a position x
330+ """
331+ return multivariate_gaussian(x, self.__means, self.__covariance)
332+
333+
334+ def accept(self, hit) :
335+ """
336+ Returns true if the hit-vector is to be accepted into the daughter
337+ distribution, otherwise returns false.
338+ """
339+ vector = [ hit.get_x(), hit.get_y() ]
340+
341+ u = numpy.random.sample() #Uniform random number (0,1)
342+ g_y = self.calculate_daughter(vector)*self.__normalisation
343+ f_y = self.calculate_parent(vector)
344+
345+ if ( u >= (g_y / f_y) ) :
346+ return False
347+ else :
348+ return True
349+
350+
351+
352+
353+class Gaussian4DSampler(Sampler_base) :
354+ """
355+ Multivariate Gaussian Sampling class.
356+ Assumes both the parent and daughter distributions are gaussian.
357+ """
358+
359+ def __init__(self, parent_means, parent_cov, means, cov) :
360+ """
361+ Expects the parent means vector, parent covariance matrix, daughter means vector and daughter covariance matrix.
362+ """
363+ Sampler_base.__init__(self)
364+ if parent_means.shape != (4,) :
365+ raise ValueError("Parent Means must be a 4x1 vector")
366+ if parent_cov.shape != (4, 4) :
367+ raise ValueError("Parent Covariance must be a 4x4 matrix")
368+ if means.shape != (4,) :
369+ raise ValueError("Means must be a 4x1 vector")
370+ if cov.shape != (4,4) :
371+ raise ValueError("Covariance must be a 4x4 matrix")
372+
373+ self.__parent_means = parent_means
374+ self.__parent_covariance = parent_cov
375+ self.__means = means
376+ self.__covariance = cov
377+
378+ self.__normalisation = math.sqrt(numpy.linalg.det(self.__covariance) / \
379+ numpy.linalg.det(self.__parent_covariance))
380+
381+
382+ def calculate_parent(self, x) :
383+ """
384+ Returns the value of the parent distribution at a position x
385+ """
386+ return multivariate_gaussian(x, self.__parent_means, \
387+ self.__parent_covariance)
388+
389+
390+ def calculate_daughter(self, x) :
391+ """
392+ Returns the value of the daughter distribution at a position x
393+ """
394+ return multivariate_gaussian(x, self.__means, self.__covariance)
395+
396+
397+ def accept(self, hit) :
398+ """
399+ Returns true if the hit-vector is to be accepted into the daughter
400+ distribution, otherwise returns false.
401+ """
402+ vector = hit.get_as_vector()[2:6]
403+
404+ u = numpy.random.sample() #Uniform random number (0,1)
405+ g_y = self.calculate_daughter(vector)*self.__normalisation
406+ f_y = self.calculate_parent(vector)
407+
408+ if ( u >= (g_y / f_y) ) :
409+ return False
410+ else :
411+ return True
412+
413+
414+class Amplitude4DSampler(Sampler_base) :
415+ """
416+ 1D Chi-Squared Distribution Sampler
417+ Assumes both the parent and daughter distributions are Chi-Squared
418+ distributed.
419+ """
420+
421+ def __init__(self, parent_distribution, covariance, emittance, max_x=1.0e12) :
422+ """
423+ Set up the Amplitude4D Sampler. Requires the parent amplitude distribution,
424+ parent covariance matrix, required emittance and a maximum amplitude
425+ range.
426+ """
427+ Sampler_base.__init__(self)
428+ if type(parent_distribution) != ROOT.TH1F :
429+ raise ValueError("Parent distribution must be a ROOT::TH1F")
430+ if covariance.shape != (4,4) :
431+ raise ValueError("Covariance must be a 4x4 matrix")
432+
433+ self.__parent_distribution = parent_distribution
434+ self.__parent_distribution.Scale(\
435+ 1.0 / self.__parent_distribution.GetEntries())
436+ self.__weight_normalisation = 0.0
437+ self.__normalisation = 1.0e+20
438+ self.__parent_emittance =\
439+ analysis.covariances.emittance_from_matrix(covariance)
440+ self.__emittance = emittance
441+ self.__parent_covariance = covariance
442+ self.__parent_covariance_inv = numpy.linalg.inv(covariance)
443+ self.__ndf = 4
444+ self.__max = max_x
445+
446+ # ACCEPT / REJECT
447+ for bin_i in range(self.__parent_distribution.GetNbinsX()) :
448+ x = self.__parent_distribution.GetBinCenter(bin_i)
449+ if x > self.__max :
450+ break
451+
452+ if self.calculate_daughter(x) > 0.0 : #Divide by zero errors
453+ norm = self.calculate_parent(x) / self.calculate_daughter(x)
454+ else :
455+ norm = self.__normalisation
456+
457+ if norm < self.__normalisation :
458+ self.__normalisation = norm
459+
460+
461+ # STATISTICAL WEIGHTING
462+ max_daughter = 0.0
463+ max_parent = 0.0
464+
465+ for bin_i in range(self.__parent_distribution.GetNbinsX()) :
466+ x = self.__parent_distribution.GetBinCenter(bin_i)
467+ if x > self.__max :
468+ continue
469+
470+ daughter = self.calculate_daughter(x)
471+ parent = self.calculate_parent(x)
472+ if daughter > max_daughter :
473+ max_daughter = daughter
474+ max_parent = parent
475+
476+ self.__weight_normalisation = max_parent / max_daughter
477+
478+ print "Parent =", self.__parent_emittance
479+ print "Select =", self.__emittance
480+
481+
482+ def _get_amplitude(self, x) :
483+ """
484+ Return the amplitude of a particle given the state vector.
485+ """
486+ amplitude = self.__parent_emittance*x.transpose().dot(\
487+ self.__parent_covariance_inv.dot(x))
488+ return amplitude
489+
490+
491+ def calculate_parent(self, x) :
492+ """
493+ Calculate parent probability for amplitude
494+ """
495+ found_bin = self.__parent_distribution.FindBin(x)
496+ return self.__parent_distribution.GetBinContent(found_bin)
497+
498+
499+ def calculate_daughter(self, x) :
500+ """
501+ Calculate daughter probability for amplitude
502+ """
503+# c = x*self.__emittance/(self.__parent_emittance**2)
504+ c = x/self.__emittance
505+ return chi2.pdf(c, self.__ndf)
506+
507+
508+ def accept(self, hit) :
509+ """
510+ Returns true if the hit-vector is to be accepted into the daughter
511+ distribution, otherwise returns false.
512+ """
513+
514+ x = self._get_amplitude(numpy.array(hit.get_as_vector()[2:6]))
515+
516+ if x > self.__max :
517+ return False
518+
519+ u = numpy.random.sample() #Uniform random number (0,1)
520+ g_y = self.calculate_daughter(x)*self.__normalisation
521+ f_y = self.calculate_parent(x)
522+
523+ if ( u >= (g_y / f_y) ) :
524+ return False
525+ else :
526+ return True
527+
528+
529+ def weight(self, hit) :
530+ x = self._get_amplitude(numpy.array(hit.get_as_vector()[2:6]))
531+
532+ if x > self.__max :
533+ return 0.0
534+
535+ g_y = self.calculate_daughter(x)*self.__weight_normalisation
536+ f_y = self.calculate_parent(x)
537+
538+ ratio = (g_y / f_y)
539+ return ratio
540+
541+
542+
543+
544+class UniformAmplitude4DSampler(Sampler_base) :
545+ """
546+ 1D Flat Amplitude Distribution Sampler
547+ """
548+
549+ def __init__(self, parent_distribution, covariance, amplitude) :
550+ """
551+ Requires the parent amplitude distribution, the parent covariance matrix and the maximum amplitude of the flat daughter distribution.
552+ """
553+ Sampler_base.__init__(self)
554+ if type(parent_distribution) != ROOT.TH1F :
555+ raise ValueError("Parent distribution must be a ROOT::TH1F")
556+ if covariance.shape != (4,4) :
557+ raise ValueError("Covariance must be a 4x4 matrix")
558+
559+ self.__parent_distribution = parent_distribution
560+ self.__parent_distribution.Scale(\
561+ 1.0 / self.__parent_distribution.GetEntries())
562+ self.__weight_normalisation = 0.0
563+ self.__normalisation = 1.0e+20
564+ self.__parent_emittance =\
565+ analysis.covariances.emittance_from_matrix(covariance)
566+ self.__max_amplitude = amplitude
567+ self.__square = 1.0 / amplitude
568+ self.__parent_covariance = covariance
569+ self.__parent_covariance_inv = numpy.linalg.inv(covariance)
570+
571+ # ACCEPT / REJECT
572+ for bin_i in range(1, self.__parent_distribution.GetNbinsX()) :
573+ x = self.__parent_distribution.GetBinCenter(bin_i)
574+ if x > self.__max_amplitude :
575+ break
576+
577+ if self.calculate_daughter(x) > 0.0 : #Divide by zero errors
578+ norm = self.calculate_parent(x) / self.calculate_daughter(x)
579+ else :
580+ norm = self.__normalisation
581+
582+ if norm < self.__normalisation :
583+ self.__normalisation = norm
584+
585+ # STATISTICAL WEIGHTING
586+ x = self.__max_amplitude * 0.5
587+ max_daughter = self.calculate_daughter(x)
588+ max_parent = self.calculate_parent(x)
589+ self.__weight_normalisation = max_parent / max_daughter
590+
591+
592+ def _get_amplitude(self, x) :
593+ """
594+ Calculate the amplitude of a particle given the state vector
595+ """
596+ amplitude = self.__parent_emittance*x.transpose().dot(\
597+ self.__parent_covariance_inv.dot(x))
598+ return amplitude
599+
600+
601+ def calculate_parent(self, x) :
602+ """
603+ Calculate parent probability for amplitude
604+ """
605+ found_bin = self.__parent_distribution.FindBin(x)
606+ return self.__parent_distribution.GetBinContent(found_bin)
607+
608+
609+ def calculate_daughter(self, x) :
610+ """
611+ Calculate daughter probability for amplitude
612+ """
613+ if x > self.__max_amplitude :
614+ return 0.0
615+ else :
616+ return self.__square
617+
618+
619+ def accept(self, hit) :
620+ """
621+ Returns true if the hit-vector is to be accepted into the daughter
622+ distribution, otherwise returns false.
623+ """
624+
625+ x = self._get_amplitude(numpy.array(hit.get_as_vector()[2:6]))
626+
627+ if x > self.__max_amplitude :
628+ return False
629+
630+ u = numpy.random.sample() #Uniform random number (0,1)
631+ g_y = self.calculate_daughter(x)*self.__normalisation
632+ f_y = self.calculate_parent(x)
633+
634+ if ( u >= (g_y / f_y) ) :
635+ return False
636+ else :
637+ return True
638+
639+
640+ def weight(self, hit) :
641+ x = self._get_amplitude(numpy.array(hit.get_as_vector()[2:6]))
642+
643+ if x > self.__max_amplitude :
644+ return 0.0
645+
646+ g_y = self.calculate_daughter(x)*self.__weight_normalisation
647+ f_y = self.calculate_parent(x)
648+
649+ ratio = (g_y / f_y)
650+ return ratio
651+
652+
653+
654+class XYPhaseSpaceSampler(Sampler_base) :
655+ """
656+ 1D Chi-Squared Distribution Sampler
657+ Assumes both the parent and daughter distributions are Chi-Squared
658+ distributed.
659+ """
660+
661+ def __init__(self, parent_distribution_x, parent_distribution_y,\
662+ means, covariance) :
663+ """
664+ Initialise the transverse phase space sampler. Requires the x-px phase
665+ space distribution, the y-py phase space distribution, the means of the
666+ distributions and the covariance matrix.
667+ """
668+ Sampler_base.__init__(self)
669+ if type(parent_distribution_x) != ROOT.TH2F :
670+ raise ValueError("Parent distributions must be a ROOT::TH2F")
671+ if type(parent_distribution_y) != ROOT.TH2F :
672+ raise ValueError("Parent distributions must be a ROOT::TH2F")
673+ if means.shape != (2,) :
674+ raise ValueError("Means must be a 2x1 vector")
675+ if covariance.shape != (2, 2) :
676+ raise ValueError("Covariance must be a 2x2 matrix")
677+
678+ self.__parent_distribution_x = parent_distribution_x
679+ self.__parent_distribution_y = parent_distribution_y
680+ self.__parent_distribution_x.Scale(\
681+ 1.0 / self.__parent_distribution_x.GetEntries())
682+ self.__parent_distribution_y.Scale(\
683+ 1.0 / self.__parent_distribution_y.GetEntries())
684+ self.__weight_normalisation_x = 0.0
685+ self.__weight_normalisation_y = 0.0
686+ self.__normalisation_x = 1.0e+20
687+ self.__normalisation_y = 1.0e+20
688+ self.__max = 1.0e+20
689+ self.__means = means
690+ self.__covariance = covariance
691+ self.__covariance_inv = numpy.linalg.inv(covariance)
692+
693+ self.__max = 9.0
694+
695+ # ACCEPT / REJECT
696+ for bin_i in range(self.__parent_distribution_x.GetNbinsX()) :
697+ for bin_j in range(self.__parent_distribution_x.GetNbinsY()) :
698+ x = self.__parent_distribution_x.GetXaxis().GetBinCenter(bin_i)
699+ px = self.__parent_distribution_x.GetYaxis().GetBinCenter(bin_j)
700+ vec = numpy.array([x,px])
701+ if not self._is_contained(vec) :
702+ continue
703+
704+ if self.calculate_daughter(vec) > 0.0 : #Divide by zero errors
705+ norm = self.calculate_parent(vec, self.__parent_distribution_x) / \
706+ self.calculate_daughter(numpy.array(vec))
707+ else :
708+ norm = self.__normalisation_x
709+
710+ if norm < self.__normalisation_x :
711+ self.__normalisation_x = norm
712+
713+ for bin_i in range(self.__parent_distribution_y.GetNbinsX()) :
714+ for bin_j in range(self.__parent_distribution_y.GetNbinsY()) :
715+ x = self.__parent_distribution_y.GetXaxis().GetBinCenter(bin_i)
716+ px = self.__parent_distribution_y.GetYaxis().GetBinCenter(bin_j)
717+ vec = numpy.array([x,px])
718+ if not self._is_contained(vec) :
719+ continue
720+
721+ if self.calculate_daughter(vec) > 0.0 : #Divide by zero errors
722+ norm = self.calculate_parent(vec, self.__parent_distribution_y) / \
723+ self.calculate_daughter(numpy.array(vec))
724+ else :
725+ norm = self.__normalisation_y
726+
727+ if norm < self.__normalisation_y :
728+ self.__normalisation_y = norm
729+
730+
731+ # STATISTICAL WEIGHTING
732+ max_daughter = 0.0
733+ max_parent = 0.0
734+
735+ for bin_i in range(self.__parent_distribution_x.GetNbinsX()) :
736+ for bin_j in range(self.__parent_distribution_x.GetNbinsY()) :
737+ x = self.__parent_distribution_x.GetXaxis().GetBinCenter(bin_i)
738+ px = self.__parent_distribution_x.GetYaxis().GetBinCenter(bin_j)
739+ vec = numpy.array([x,px])
740+ if not self._is_contained(vec) :
741+ continue
742+
743+ daughter = self.calculate_daughter(vec)
744+ parent = self.calculate_parent(vec)
745+ if daughter > max_daughter :
746+ max_daughter = daughter
747+ max_parent = parent
748+
749+ self.__weight_normalisation_x = max_parent / max_daughter
750+
751+ max_daughter = 0.0
752+ max_parent = 0.0
753+
754+ for bin_i in range(self.__parent_distribution_y.GetNbinsX()) :
755+ for bin_j in range(self.__parent_distribution_y.GetNbinsY()) :
756+ x = self.__parent_distribution_y.GetXaxis().GetBinCenter(bin_i)
757+ px = self.__parent_distribution_y.GetYaxis().GetBinCenter(bin_j)
758+ vec = numpy.array([x,px])
759+ if not self._is_contained(vec) :
760+ continue
761+
762+ daughter = self.calculate_daughter(vec)
763+ parent = self.calculate_parent(vec)
764+ if daughter > max_daughter :
765+ max_daughter = daughter
766+ max_parent = parent
767+
768+ self.__weight_normalisation_y = max_parent / max_daughter
769+
770+
771+
772+ def _is_contained(self, x) :
773+ """
774+ Function to determine whether or not a particle's state vector is
775+ contained within the amplitude maximum
776+ """
777+ amp = x.transpose().dot(self.__covariance_inv.dot(x))
778+# print amp
779+ if amp < self.__max :
780+ return True
781+ else :
782+ return False
783+
784+
785+ def calculate_parent(self, x, distribution=None) :
786+ """
787+ Calculate parent probability for the vector x, given the parent
788+ distribution
789+ """
790+ if distribution is None :
791+ distribution = self.__parent_distribution_x
792+
793+ found_bin = distribution.FindBin(x[0], x[1])
794+ return distribution.GetBinContent(found_bin)
795+
796+
797+ def calculate_daughter(self, x) :
798+ """
799+ Calculate daughter probability for a 4D vector
800+ """
801+ return multivariate_gaussian(x, self.__means, self.__covariance)
802+
803+
804+ def accept(self, hit) :
805+ """
806+ Returns true if the hit-vector is to be accepted into the daughter
807+ distribution, otherwise returns false.
808+ """
809+ x = numpy.array([ hit.get_x(), hit.get_px() ])
810+ y = numpy.array([ hit.get_y(), hit.get_py() ])
811+
812+ if not self._is_contained(x) or not self._is_contained(y) :
813+ return False
814+
815+ u = numpy.random.sample() #Uniform random number (0,1)
816+ g_y_x = self.calculate_daughter(x)*self.__normalisation_x
817+ f_y_x = self.calculate_parent(x, self.__parent_distribution_x)
818+
819+ g_y_y = self.calculate_daughter(y)*self.__normalisation_y
820+ f_y_y = self.calculate_parent(y, self.__parent_distribution_y)
821+
822+ if ( u >= (g_y_x / f_y_x) ) and ( u >= (g_y_y / f_y_y) ) :
823+ return False
824+ else :
825+ return True
826+
827+
828+ def weight(self, hit) :
829+ x = numpy.array([ hit.get_x(), hit.get_px() ])
830+ y = numpy.array([ hit.get_y(), hit.get_py() ])
831+
832+ if not self._is_contained(x) or not self._is_contained(y) :
833+ return 0.0
834+
835+ g_y_x = self.calculate_daughter(x)*self.__weight_normalisation_x
836+ f_y_x = self.calculate_parent(x, self.__parent_distribution_x)
837+
838+ g_y_y = self.calculate_daughter(y)*self.__weight_normalisation_y
839+ f_y_y = self.calculate_parent(y, self.__parent_distribution_y)
840+
841+ ratio_x = (g_y_x / f_y_x)
842+ ratio_y = (g_y_y / f_y_y)
843+ return ratio_x*ratio_y
844+
845+
846+
847+class XY4DPhaseSpaceSampler(Sampler_base) :
848+ """
849+ 4D sampler for beams that requires the six 2D phase-space projections
850+ from which the sample is drawn.
851+ """
852+
853+ def __init__(self, x_px, x_py, y_px, y_py, x_y, px_py, means, covariance) :
854+ """
855+ Initialise the sampler from the six 2D parent projections, the 4D means vector and the 4x4 covariance matrix.
856+ """
857+ Sampler_base.__init__(self)
858+ if type(x_px) != ROOT.TH2F :
859+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
860+ +str(type(x_px)))
861+ if type(x_py) != ROOT.TH2F :
862+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
863+ +str(type(x_py)))
864+ if type(y_py) != ROOT.TH2F :
865+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
866+ +str(type(y_py)))
867+ if type(y_px) != ROOT.TH2F :
868+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
869+ +str(type(y_px)))
870+ if type(px_py) != ROOT.TH2F :
871+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
872+ +str(type(px_py)))
873+ if type(x_y) != ROOT.TH2F :
874+ raise ValueError("Parent distributions must be a ROOT::TH2F. Not "\
875+ +str(type(x_y)))
876+
877+ if means.shape != (4,) :
878+ raise ValueError("Means must be a 4x1 vector")
879+ if covariance.shape != (4, 4) :
880+ raise ValueError("Covariance must be a 4x4 matrix")
881+
882+ self.__all_cells = [(0, 1), (0, 2), (0, 3), (2, 1), (1, 3), (2, 3)]
883+
884+ self.__distributions = [ [ None for _ in range(4) ] for _ in range(4) ]
885+ self.__distributions[0][1] = x_px
886+ self.__distributions[0][2] = x_y
887+ self.__distributions[0][3] = x_py
888+ self.__distributions[2][1] = y_px
889+ self.__distributions[1][3] = px_py
890+ self.__distributions[2][3] = y_py
891+
892+ self.__normalisations = [ [ None for _ in range(4) ] for _ in range(4) ]
893+ for a, b in self.__all_cells:
894+ self.__normalisations[a][b] = 1.0e+20
895+
896+ self.__weight_normalisations =\
897+ [ [ None for _ in range(4) ] for _ in range(4) ]
898+ for a, b in self.__all_cells:
899+ self.__weight_normalisations[a][b] = 0.0
900+
901+ for a, b in self.__all_cells:
902+ self.__distributions[a][b].Scale(\
903+ 1.0/(self.__distributions[a][b].GetEntries()**2) )
904+
905+ self.__means = means
906+ self.__covariance = covariance
907+ self.__covariance_inv = numpy.linalg.inv(covariance)
908+
909+ self.__max = 12.0
910+ self.__parent_min = 0.5
911+
912+ # ACCEPT / REJECT
913+ while True :
914+ for a, b in self.__all_cells :
915+ dist = self.__distributions[a][b]
916+ self.__normalisations[a][b] = 1.0e+20
917+
918+ for bin_i in range(dist.GetNbinsX()) :
919+ for bin_j in range(dist.GetNbinsY()) :
920+ x = dist.GetXaxis().GetBinCenter(bin_i)
921+ px = dist.GetYaxis().GetBinCenter(bin_j)
922+
923+ vec = numpy.array([0.0, 0.0, 0.0, 0.0])
924+ vec[a] = x
925+ vec[b] = px
926+
927+ if not self._is_contained(vec) :
928+ continue
929+
930+ if self.calculate_daughter(vec) > 0.0 : #Divide by zero errors
931+ norm = self.calculate_parent(numpy.array([x, px]), dist) / \
932+ self.calculate_daughter(vec)
933+ else :
934+ norm = self.__normalisations[a][b]
935+
936+ if norm < self.__normalisations[a][b] :
937+ self.__normalisations[a][b] = norm
938+
939+ if self.__normalisations[a][b] < self.__parent_min :
940+ self.__max = self.__max - 1.0
941+ break
942+
943+ else :
944+ break
945+
946+ # STATISTICAL WEIGHTING
947+ for a, b in self.__all_cells :
948+ dist = self.__distributions[a][b]
949+ self.__weight_normalisations[a][b] = 0.0
950+ max_daughter = 0.0
951+ max_parent = 0.0
952+
953+ for bin_i in range(dist.GetNbinsX()) :
954+ for bin_j in range(dist.GetNbinsY()) :
955+ x = dist.GetXaxis().GetBinCenter(bin_i)
956+ px = dist.GetYaxis().GetBinCenter(bin_j)
957+
958+ vec = numpy.array([0.0, 0.0, 0.0, 0.0])
959+ vec[a] = x
960+ vec[b] = px
961+
962+ if not self._is_contained(vec) :
963+ continue
964+
965+ daughter = self.calculate_daughter(vec)
966+ parent = self.calculate_parent(numpy.array([x, px]), dist)
967+
968+ if daughter > max_daughter :
969+ max_daughter = daughter
970+ max_parent = parent
971+
972+ self.__weight_normalisations[a][b] = max_parent / max_daughter
973+
974+
975+ def _is_contained(self, x) :
976+ """
977+ Function to determine whether or not a particle's state vector is
978+ contained within the amplitude maximum
979+ """
980+ amp = x.transpose().dot(self.__covariance_inv.dot(x))
981+ if amp < self.__max :
982+# print amp, x
983+ return True
984+ else :
985+ return False
986+
987+
988+ def calculate_parent(self, x, distribution=None) :
989+ """
990+ Calculate parent probability for a vector x, given the parent
991+ distribution
992+ """
993+ if distribution is None :
994+ distribution = self.__distributions[0][2]
995+
996+ found_bin = distribution.FindBin(x[0], x[1])
997+ return distribution.GetBinContent(found_bin)
998+
999+
1000+ def calculate_daughter(self, x) :
1001+ """
1002+ Calculate daughter probability for a 4D vector
1003+ """
1004+ return multivariate_gaussian(x, self.__means, self.__covariance)
1005+
1006+
1007+ def accept(self, hit) :
1008+ """
1009+ Returns true if the hit-vector is to be accepted into the daughter
1010+ distribution, otherwise returns false.
1011+ """
1012+
1013+ vec = numpy.array(hit.get_as_vector()[2:6])
1014+
1015+# if not self._is_contained(vec) :
1016+# return False
1017+
1018+ u = numpy.random.sample() #Uniform random number (0,1)
1019+ daughter = self.calculate_daughter(vec)
1020+
1021+ for a, b in self.__all_cells :
1022+ dist = self.__distributions[a][b]
1023+ vector = [ vec[a], vec[b] ]
1024+ parent_result = self.calculate_parent(vector, dist)
1025+
1026+ result = (daughter*self.__normalisations[a][b] / parent_result)
1027+ if (u < result ) :
1028+ continue
1029+ else :
1030+ break
1031+ else :
1032+# self.__selection_inspector.add_hit(vec)
1033+ return True
1034+
1035+ return False
1036+
1037+
1038+ def weight(self, hit) :
1039+ vec = numpy.array(hit.get_as_vector()[2:6])
1040+
1041+ daughter = self.calculate_daughter(vec)
1042+ ratio = 1.0
1043+
1044+ for a, b in self.__all_cells :
1045+ dist = self.__distributions[a][b]
1046+ vector = [ vec[a], vec[b] ]
1047+ parent_result = self.calculate_parent(vector, dist)
1048+
1049+ result = (daughter*self.__weight_normalisations[a][b] / parent_result)
1050+
1051+ ratio *= result
1052+
1053+ return ratio
1054+
1055
1056=== modified file 'src/common_py/analysis/covariances.py'
1057--- src/common_py/analysis/covariances.py 2017-05-10 10:29:14 +0000
1058+++ src/common_py/analysis/covariances.py 2017-08-31 10:53:10 +0000
1059@@ -119,7 +119,7 @@
1060 self._mean_vector = numpy.zeros( self._numVars )
1061 self._product_matrix = numpy.zeros( ( self._numVars, self._numVars ) )
1062 self._mean_momentum = 0.0
1063- self._num_particles = 0
1064+ self._num_particles = 0.0
1065
1066 self._correction_matrix = None
1067
1068@@ -147,7 +147,7 @@
1069 self._mean_vector = numpy.zeros( self._numVars )
1070 self._product_matrix = numpy.zeros( ( self._numVars, self._numVars ) )
1071 self._mean_momentum = 0.0
1072- self._num_particles = 0
1073+ self._num_particles = 0.0
1074
1075 def add_hit_scifi( self, scifi_hit ) :
1076 """
1077@@ -162,15 +162,16 @@
1078 Add a hit and extract the information into the class.
1079 Accepts both xboa type hits and the local emittance analysis type hits.
1080 """
1081+ weight = hit.get_weight()
1082 for row, rowvar in enumerate( VARIABLE_LIST ) :
1083 for col, colvar in enumerate( VARIABLE_LIST ) :
1084 self._product_matrix[row][col] += hit.get( rowvar ) *\
1085- hit.get( colvar )
1086-
1087- self._mean_vector[row] += hit.get( rowvar )
1088-
1089- self._mean_momentum += hit.get_p()
1090- self._num_particles += 1
1091+ hit.get( colvar ) * weight**2.0
1092+
1093+ self._mean_vector[row] += hit.get( rowvar ) * weight
1094+
1095+ self._mean_momentum += hit.get_p()*weight
1096+ self._num_particles += 1.0*weight
1097
1098
1099 def add_dict( self, dictionary ) :
1100
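The covariance change above folds each hit's statistical weight into the accumulated means and products. For reference, the standard frequency-weighted mean and covariance (first-power weights in both sums, one common convention; the weighting of the product matrix is an analysis-specific choice) look like this:

```python
import numpy as np

def weighted_mean_covariance(hits, weights):
    """Weighted sample mean and covariance of an (n, d) array of hit
    vectors, with one statistical weight per hit."""
    hits = np.asarray(hits, dtype=float)
    w = np.asarray(weights, dtype=float)
    w_sum = w.sum()
    mean = (w[:, None] * hits).sum(axis=0) / w_sum
    centred = hits - mean
    cov = (w[:, None] * centred).T.dot(centred) / w_sum
    return mean, cov

mean, cov = weighted_mean_covariance(
    [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]], [1.0, 1.0, 2.0])
```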
1101=== modified file 'src/common_py/analysis/hit_types.py'
1102--- src/common_py/analysis/hit_types.py 2017-05-10 10:29:14 +0000
1103+++ src/common_py/analysis/hit_types.py 2017-08-31 10:53:10 +0000
1104@@ -39,6 +39,7 @@
1105 def __init__( self, x = 0.0, y = 0.0, z = 0.0, \
1106 px = 0.0, py = 0.0, pz = 0.0, station = 0, \
1107 time = 0.0, mass = 105.6583715, p_value = 1.0, pid = 13, \
1108+ weight = 1.0, \
1109 scifi_track_point = None, virtual_track_point = None ) :
1110 """
1111 Initialise the object. this can be done in three ways:
1112@@ -58,6 +59,7 @@
1113 self.__p_value = p_value
1114 self.__pid = pid
1115 self.__station = station
1116+ self.__weight = weight
1117 # self.__reference = reference
1118
1119 elif scifi_track_point is not None and virtual_track_point is None :
1120@@ -73,13 +75,12 @@
1121 self.__pid = pid
1122 self.__station = tools.calculate_plane_id(scifi_track_point.tracker(), \
1123 scifi_track_point.station(), scifi_track_point.plane())
1124+ self.__weight = weight
1125 # if reference is None :
1126 # self.__reference = str(scifi_track_point.tracker())+"."+\
1127 # str(scifi_track_point.station())+"."+str(scifi_track_point.plane())
1128 # else :
1129 # self.__reference = reference
1130- if math.isnan(self.__x) :
1131- raise ValueError("NaN Values Received from Scifi Track Point")
1132
1133 elif scifi_track_point is None and virtual_track_point is not None :
1134 self.__x = virtual_track_point.GetPosition().x()
1135@@ -93,6 +94,7 @@
1136 self.__p_value = 1.0
1137 self.__pid = virtual_track_point.GetParticleId()
1138 self.__station = virtual_track_point.GetStationId()
1139+ self.__weight = weight
1140 # self.__reference = reference
1141 # if reference is None :
1142 # self.__reference = virtual_track_point.GetStationId()
1143@@ -104,6 +106,10 @@
1144 raise ValueError( 'Please supply precisely one of "virtual_track_point"'+\
1145 ' or "scifi_track_point", or specify all values explicitly.' )
1146
1147+# if math.isnan(self.__x) or math.isnan(self.__px) or math.isnan(self.__y) or math.isnan(self.__py) :
1148+ if math.isnan(self.__x) or math.isnan(self.__px) or math.isinf(self.__x) or math.isinf(self.__px) :
1149+ raise ValueError("NaN Values Received from Scifi Track Point")
1150+
1151
1152 def __str__( self ) :
1153 """
1154@@ -202,9 +208,16 @@
1155 """
1156 self.__station = station
1157
1158+ def set_weight( self, w ) :
1159+ """
1160+ Set the statistical weight of the hit
1161+ """
1162+ self.__weight = w
1163+
1164 # def set_reference( self, string ) :
1165 # self.__reference = string
1166
1167+
1168 def get_x( self ) :
1169 """
1170 Get the x position
1171@@ -253,6 +266,7 @@
1172 """
1173 if math.isnan(math.sqrt( self.__px**2 + self.__py**2 + self.__pz**2 )) :
1174 print "ISNAN!!!", self
1175+ raise ValueError("NaN Values Received from Scifi Track Point")
1176 return math.sqrt( self.__px**2 + self.__py**2 + self.__pz**2 )
1177
1178 def get_r( self ) :
1179@@ -322,6 +336,13 @@
1180 return self.__station
1181
1182
1183+ def get_weight( self ) :
1184+ """
1185+ Return the statistical weight of the hit
1186+ """
1187+ return self.__weight
1188+
1189+
1190 # def get_reference( self ) :
1191 # return self.__reference
1192
1193
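The hit-type changes above now raise on non-finite track-point values rather than printing and continuing. The guard amounts to the following standalone sketch (function name is illustrative, not MAUS API):

```python
import math

def momentum_magnitude(px, py, pz):
    """Return |p| for a track point, refusing NaN or infinite components
    in the same spirit as the checks added to the hit class above."""
    p = math.sqrt(px * px + py * py + pz * pz)
    if math.isnan(p) or math.isinf(p):
        raise ValueError("Non-finite momentum received from track point")
    return p
```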
1194=== modified file 'src/common_py/analysis/inspectors.py'
1195--- src/common_py/analysis/inspectors.py 2017-05-10 10:29:14 +0000
1196+++ src/common_py/analysis/inspectors.py 2017-08-31 10:53:10 +0000
1197@@ -17,7 +17,7 @@
1198 #
1199
1200 # pylint: disable = W0311, E1101, W0613, C0111, R0911, W0621, C0103, R0902
1201-# pylint: disable = W0102, R0915
1202+# pylint: disable = W0102, R0915, R0201
1203
1204
1205 import analysis
1206@@ -71,13 +71,9 @@
1207 'inspected_phi_phasespace_{0}'.format(self.plane), '#phi-Phasespace', \
1208 100, -4.0, 4.0 )
1209 self.theta_phasespace_plot = ROOT.TH1F(\
1210- 'inspected_phi_phasespace_{0}'.format(self.plane), '#theta-Phasespace', \
1211+ 'inspected_theta_phasespace_{0}'.format(self.plane), '#theta-Phasespace', \
1212 100, -4.0, 4.0 )
1213
1214- self.amplitude_momentum_plot = ROOT.TH2F(\
1215- 'inspected_A_p_phasespace_{0}'.format(self.plane), 'A-p-Phasespace' , \
1216- 200, 0.0, 100.0, 260, 130.0, 260.0 )
1217-
1218 self.pz_plot = ROOT.TH1F(\
1219 'inspected_pz_{0}'.format(self.plane), 'p_{z}', 400, 0.0, 400.0 )
1220 self.p_plot = ROOT.TH1F(\
1221@@ -112,7 +108,7 @@
1222 'inspected_beta_y_{0}'.format(self.plane), 'Y Beta Function', \
1223 3000, 0.0, 3000.0)
1224 self.total_momentum_plot = ROOT.TH1F(\
1225- 'inspected_momentum_{0}'.format(self.plane), 'Momentum', \
1226+ 'inspected_total_momentum_{0}'.format(self.plane), 'Momentum', \
1227 1000, 0.0, 500.0)
1228
1229 self.amplitude_plot = ROOT.TH1F(\
1230@@ -153,22 +149,22 @@
1231 if self.parent_covariance_inv is not None :
1232 vector = numpy.array(hit.get_as_vector()[2:6])
1233 amplitude = self.parent_emittance*vector.transpose().dot(self.parent_covariance_inv.dot(vector))
1234- self.amplitude_plot.Fill(amplitude)
1235- self.amplitude_momentum_plot.Fill(amplitude, hit.get_p())
1236+ self.amplitude_plot.Fill(amplitude, hit.get_weight())
1237+ self.amplitude_momentum_plot.Fill(amplitude, hit.get_p(), hit.get_weight())
1238
1239 self.covariance.add_hit(hit)
1240
1241- self.position_plot.Fill(hit.get_x(), hit.get_y())
1242- self.momentum_plot.Fill(hit.get_px(), hit.get_py())
1243- self.x_phasespace_plot.Fill(hit.get_x(), hit.get_px())
1244- self.y_phasespace_plot.Fill(hit.get_y(), hit.get_py())
1245- self.xy_phasespace_plot.Fill(hit.get_x(), hit.get_py())
1246- self.yx_phasespace_plot.Fill(hit.get_y(), hit.get_px())
1247- self.rpt_phasespace_plot.Fill(hit.get_r(), hit.get_pt())
1248- self.phi_phasespace_plot.Fill(hit.get_phi())
1249- self.theta_phasespace_plot.Fill(hit.get_theta())
1250- self.pz_plot.Fill(hit.get_pz())
1251- self.p_plot.Fill(hit.get_p())
1252+ self.position_plot.Fill(hit.get_x(), hit.get_y(), hit.get_weight())
1253+ self.momentum_plot.Fill(hit.get_px(), hit.get_py(), hit.get_weight())
1254+ self.x_phasespace_plot.Fill(hit.get_x(), hit.get_px(), hit.get_weight())
1255+ self.y_phasespace_plot.Fill(hit.get_y(), hit.get_py(), hit.get_weight())
1256+ self.xy_phasespace_plot.Fill(hit.get_x(), hit.get_py(), hit.get_weight())
1257+ self.yx_phasespace_plot.Fill(hit.get_y(), hit.get_px(), hit.get_weight())
1258+ self.rpt_phasespace_plot.Fill(hit.get_r(), hit.get_pt(), hit.get_weight())
1259+ self.phi_phasespace_plot.Fill(hit.get_phi(), hit.get_weight())
1260+ self.theta_phasespace_plot.Fill(hit.get_theta(), hit.get_weight())
1261+ self.pz_plot.Fill(hit.get_pz(), hit.get_weight())
1262+ self.p_plot.Fill(hit.get_p(), hit.get_weight())
1263 self.position = hit.get_z()
1264
1265 if self.ensemble_size > 1 :
1266@@ -312,3 +308,141 @@
1267 return ins_data
1268
1269
1270+####################################################################################################
1271+
1272+
1273+class CoolingInspector() :
1274+
1275+ def __init__(self) :
1276+ self.position = 0.0
1277+ self.number_particles = 0
1278+
1279+# self.momentum_bias = None
1280+# self.momentum_correlation = None
1281+
1282+ self.upstream_parent_covariance = None
1283+ self.upstream_parent_covariance_inv = None
1284+ self.upstream_parent_emittance = None
1285+
1286+ self.downstream_parent_covariance = None
1287+ self.downstream_parent_covariance_inv = None
1288+ self.downstream_parent_emittance = None
1289+
1290+ self.upstream_amplitude_plot = ROOT.TH1F(\
1291+ 'upstream_reference_amplitude', 'Amplitude', 200, 0.0, 100.0)
1292+ self.downstream_amplitude_plot = ROOT.TH1F(\
1293+ 'downstream_reference_amplitude', 'Amplitude', 200, 0.0, 100.0)
1294+
1295+ self.upstream_amplitude_transmitted_plot = ROOT.TH1F(\
1296+ 'upstream_reference_amplitude_transmitted', 'Amplitude', 200, 0.0, 100.0)
1297+ self.downstream_amplitude_transmitted_plot = ROOT.TH1F(\
1298+ 'downstream_reference_amplitude_transmitted', 'Amplitude', 200, 0.0, 100.0)
1299+
1300+ self.upstream_amplitude_scraped_plot = ROOT.TH1F(\
1301+ 'upstream_reference_amplitude_scraped', 'Amplitude', 200, 0.0, 100.0)
1302+ self.downstream_amplitude_scraped_plot = ROOT.TH1F(\
1303+ 'downstream_reference_amplitude_scraped', 'Amplitude', 200, 0.0, 100.0)
1304+
1305+ self.amplitude_change_plot = ROOT.TH1F(\
1306+ 'reference_amplitude_change', 'Amplitude Change', 200, -100.0, 100.0)
1307+ self.amplitude_change_amplitude_plot = ROOT.TH2F(\
1308+ 'reference_amplitude_change_amplitude', 'Amplitude Change', \
1309+ 200, 0.0, 100.0, 200, -100.0, 100.0 )
1310+ self.amplitude_mobility_plot = ROOT.TH2F(\
1311+ 'reference_amplitude_mobility', 'Amplitude Mobility', \
1312+ 100, 0.0, 100.0, 100, 0.0, 100.0 )
1313+
1314+
1315+# def set_covariance_correction(self, correction, axes=['x', 'px', 'y', 'py']) :
1316+# self.covariance.set_covariance_correction( correction, axes )
1317+# self.ensemble_covariance.set_covariance_correction( correction, axes )
1318+
1319+
1320+ def set_upstream_covariance(self, matrix) :
1321+ self.upstream_parent_covariance = numpy.array(matrix)
1322+ self.upstream_parent_covariance_inv = numpy.linalg.inv(self.upstream_parent_covariance)
1323+ self.upstream_parent_emittance = analysis.covariances.emittance_from_matrix(self.upstream_parent_covariance)
1324+
1325+
1326+ def set_downstream_covariance(self, matrix) :
1327+ self.downstream_parent_covariance = numpy.array(matrix)
1328+ self.downstream_parent_covariance_inv = numpy.linalg.inv(self.downstream_parent_covariance)
1329+ self.downstream_parent_emittance = analysis.covariances.emittance_from_matrix(self.downstream_parent_covariance)
1330+
1331+
1332+# def set_momentum_correction(self, constant, gradient) :
1333+# self.momentum_bias = constant
1334+# self.momentum_correlation = gradient
1335+
1336+
1337+ def add_hits(self, up_hit, down_hit) :
1338+# if self.momentum_bias is not None :
1339+# p = hit.get_p()
1340+## correction = 1.0 + (self.momentum_bias + self.momentum_correlation*p) / p
1341+# correction = ((p - self.momentum_bias) / \
1342+# (1.0 - self.momentum_correlation)) / p
1343+# hit.set_px(hit.get_px()*correction)
1344+# hit.set_py(hit.get_py()*correction)
1345+# hit.set_pz(hit.get_pz()*correction)
1346+
1347+ if (self.upstream_parent_covariance_inv is not None) and (self.downstream_parent_covariance_inv is not None) :
1348+ self.number_particles += 1
1349+
1350+ if up_hit is not None :
1351+ vector = numpy.array(up_hit.get_as_vector()[2:6])
1352+ up_amplitude = self.upstream_parent_emittance*vector.transpose().dot(self.upstream_parent_covariance_inv.dot(vector))
1353+ self.upstream_amplitude_plot.Fill(up_amplitude)
1354+
1355+ if down_hit is None :
1356+ self.upstream_amplitude_scraped_plot.Fill(up_amplitude)
1357+ else :
1358+ self.upstream_amplitude_transmitted_plot.Fill(up_amplitude)
1359+
1360+ if down_hit is not None :
1361+ vector = numpy.array(down_hit.get_as_vector()[2:6])
1362+ down_amplitude = self.downstream_parent_emittance*vector.transpose().dot(self.downstream_parent_covariance_inv.dot(vector))
1363+ self.downstream_amplitude_plot.Fill(down_amplitude)
1364+
1365+ if up_hit is None :
1366+ self.downstream_amplitude_scraped_plot.Fill(down_amplitude)
1367+ else :
1368+ self.downstream_amplitude_transmitted_plot.Fill(down_amplitude)
1369+
1370+
1371+ if (up_hit is not None) and (down_hit is not None) :
1372+ DA = down_amplitude - up_amplitude
1373+ self.amplitude_change_plot.Fill( DA )
1374+ self.amplitude_change_amplitude_plot.Fill( up_amplitude, DA )
1375+ self.amplitude_mobility_plot.Fill( up_amplitude, down_amplitude )
1376+
1377+
1378+ def fill_plots(self) :
1379+ pass
1380+
1381+
1382+ def get_length(self) :
1383+ return self.number_particles
1384+
1385+
1386+ def get_plot_dictionary(self) :
1387+ ins_plots = {}
1388+
1389+ ins_plots['upstream_amplitude'] = self.upstream_amplitude_plot
1390+ ins_plots['downstream_amplitude'] = self.downstream_amplitude_plot
1391+ ins_plots['upstream_amplitude_transmitted'] = self.upstream_amplitude_transmitted_plot
1392+ ins_plots['downstream_amplitude_transmitted'] = self.downstream_amplitude_transmitted_plot
1393+ ins_plots['upstream_amplitude_scraped'] = self.upstream_amplitude_scraped_plot
1394+ ins_plots['downstream_amplitude_scraped'] = self.downstream_amplitude_scraped_plot
1395+
1396+ ins_plots['amplitude_change'] = self.amplitude_change_plot
1397+ ins_plots['amplitude_change_amplitude'] = self.amplitude_change_amplitude_plot
1398+ ins_plots['amplitude_mobility'] = self.amplitude_mobility_plot
1399+
1400+ return ins_plots
1401+
1402+
1403+ def get_data_dictionary(self) :
1404+ ins_data = {}
1405+
1406+ return ins_data
1407+
1408
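The inspectors above form a single-particle amplitude as the parent emittance times the quadratic form of the 4D phase-space vector with the inverse parent covariance. In isolation (assuming, as the code above does, that the vector is already relative to the beam centroid):

```python
import numpy as np

def amplitude(vector, covariance, emittance):
    """Single-particle amplitude: emittance * v^T Sigma^-1 v, as built
    by the inspectors above from the parent covariance matrix."""
    cov_inv = np.linalg.inv(np.asarray(covariance, dtype=float))
    v = np.asarray(vector, dtype=float)
    return emittance * v.transpose().dot(cov_inv.dot(v))

# Diagonal covariance: only the x-variance of 4.0 enters for this vector.
amp = amplitude([1.0, 0.0, 0.0, 0.0], np.diag([4.0, 1.0, 1.0, 1.0]), 2.0)
```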
1409=== modified file 'src/common_py/analysis/tools.py'
1410--- src/common_py/analysis/tools.py 2017-02-17 12:38:12 +0000
1411+++ src/common_py/analysis/tools.py 2017-08-31 10:53:10 +0000
1412@@ -216,6 +216,10 @@
1413 new_location = os.path.join( location, key )
1414 print_plots( plot_dict[key], new_location )
1415
1416+ elif type( plot_dict[key] ) is ROOT.TH2F :
1417+ canvas = ROOT.TCanvas( key+'_canvas' )
1418+ plot_dict[key].Draw("COL")
1419+ canvas.SaveAs( os.path.join(location, key ) + ".pdf", "pdf" )
1420 else :
1421 canvas = ROOT.TCanvas( key+'_canvas' )
1422 plot_dict[key].Draw()
1423
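The `print_plots` branch above special-cases `ROOT.TH2F` so 2D histograms are drawn with the `COL` option before saving. The recursive walk over the nested plot dictionary can be sketched without ROOT (drawing omitted; the `.pdf` suffix mirrors the save path built above):

```python
import os

def collect_plot_paths(plot_dict, location=""):
    """Flatten a nested plot dictionary into the output paths that the
    recursive print_plots walk above would write."""
    paths = []
    for key, value in plot_dict.items():
        if isinstance(value, dict):
            paths.extend(collect_plot_paths(value, os.path.join(location, key)))
        else:
            paths.append(os.path.join(location, key) + ".pdf")
    return paths

paths = collect_plot_paths({"tracker0": {"x_phasespace": object()},
                            "momentum": object()})
```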
1424=== modified file 'src/common_py/event_loader.py'
1425--- src/common_py/event_loader.py 2017-05-04 14:09:47 +0000
1426+++ src/common_py/event_loader.py 2017-08-31 10:53:10 +0000
1427@@ -49,6 +49,7 @@
1428 self.__filenames = [ filename ]
1429
1430 self.__current_filenumber = -1
1431+ self.__current_statistical_weight = 1.0
1432
1433 self.__tree = None
1434 self.__data = None
1435@@ -120,6 +121,13 @@
1436 str(print_progress) )
1437
1438
1439+ def get_current_statistical_weight( self ) :
1440+ """
1441+ Returns the statistical weight of the last loaded event.
1442+ """
1443+ return self.__current_statistical_weight
1444+
1445+
1446 def select_events( self, selected_events_dict ) :
1447 """
1448 Only loads events that are present in the supplied dictionary.
1449@@ -139,7 +147,7 @@
1450 return False
1451
1452
1453- def save_event( self ) :
1454+ def save_event( self, weight=1.0 ) :
1455 """
1456 Add the event location to an internal list of good events.
1457 """
1458@@ -149,9 +157,9 @@
1459 if filename not in self.__saved_events :
1460 self.__saved_events[filename] = {}
1461 if spill not in self.__saved_events[filename] :
1462- self.__saved_events[filename][spill] = []
1463+ self.__saved_events[filename][spill] = {}
1464
1465- self.__saved_events[filename][spill].append(event)
1466+ self.__saved_events[filename][spill][event] = weight
1467
1468
1469 def get_saved_events( self ) :
1470@@ -252,10 +260,13 @@
1471 else :
1472 while self.next_event():
1473 spill = str(self.__current_spill_num)
1474+ event = str(self.__current_event_num)
1475 if self.__current_filename in self.__selected_events :
1476 if spill in self.__selected_events[self.__current_filename] :
1477- if self.__current_event_num in \
1478+ if event in \
1479 self.__selected_events[self.__current_filename][spill] :
1480+ weight = self.__selected_events[self.__current_filename][spill][event]
1481+ self.__current_statistical_weight = weight
1482 return True
1483 else:
1484 return False
1485@@ -290,7 +301,8 @@
1486
1487 self.__tree.GetEntry( self.__current_spill_num )
1488 self.__spill = self.__data.GetSpill()
1489- self.__current_num_events = self.__spill.GetReconEvents().size()
1490+ self.__current_num_events = max( self.__spill.GetReconEvents().size(), \
1491+ self.__spill.GetMCEvents().size() )
1492 self.__current_event_num = 0
1493
1494 if self.__spill.GetDaqEventType() == "physics_event" :
1495
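The event-loader changes above replace the per-spill event list with a mapping from event number to statistical weight. The nested structure can be exercised in isolation (helper names are hypothetical, not loader API):

```python
def save_event(saved_events, filename, spill, event, weight=1.0):
    """Record an event location and its weight in the nested
    filename -> spill -> {event: weight} dictionary used above."""
    saved_events.setdefault(filename, {}).setdefault(spill, {})[event] = weight

def lookup_weight(saved_events, filename, spill, event):
    """Return the stored weight, or None if the event was never saved."""
    return saved_events.get(filename, {}).get(spill, {}).get(event)

saved = {}
save_event(saved, "run1.root", "12", "3", 0.5)
```

Note that spill and event keys are strings, matching the `str(...)` conversions applied in the loader above before lookup.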
1496=== modified file 'src/legacy/Simulation/FillMaterials.cc'
1497--- src/legacy/Simulation/FillMaterials.cc 2017-08-22 21:21:20 +0000
1498+++ src/legacy/Simulation/FillMaterials.cc 2017-08-31 10:53:10 +0000
1499@@ -385,6 +385,25 @@
1500 TufsetPU->AddElement(elO, 20.60*perCent);
1501 materials_list->addMaterial( TufsetPU, name );
1502
1503+ // POLYVINYL_TOLUENE_
1504+ density = 1.032 * g/cm3;
1505+ name = "POLYVINYL_TOLUENE";
1506+ G4Material* PVT = new G4Material(name, density, nComp=2, kStateSolid);
1507+ PVT->AddElement(elC, 91.5023*perCent);
1508+ PVT->AddElement(elH, 8.4977*perCent);
1509+ materials_list->addMaterial( PVT, name );
1510+
1511+ G4Element* elCl = man->FindOrBuildElement("Cl");
1512+
1513+ // POLYVINYL_CHLORIDE_LOWD
1514+ density = 0.12 * g/cm3;
1515+ name = "POLYVINYL_CHLORIDE_LOWD";
1516+ G4Material* PVC_LD = new G4Material(name, density, nComp=3, kStateSolid);
1517+ PVC_LD->AddElement(elC, 38.436*perCent);
1518+ PVC_LD->AddElement(elH, 4.838*perCent);
1519+ PVC_LD->AddElement(elCl, 56.726*perCent);
1520+ materials_list->addMaterial( PVC_LD, name );
1521+
1522 // TUFNOL
1523
1524 // now fill in the other maps of the material properties
1525
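The PVC mass fractions added above follow from the C2H3Cl monomer stoichiometry. A quick cross-check (atomic masses are assumed standard values; this is not MAUS code):

```python
# Verify the POLYVINYL_CHLORIDE_LOWD mass fractions against the C2H3Cl
# monomer, using standard atomic masses.
atomic_mass = {"C": 12.011, "H": 1.008, "Cl": 35.453}
monomer = {"C": 2, "H": 3, "Cl": 1}

total = sum(atomic_mass[el] * n for el, n in monomer.items())
fractions = {el: 100.0 * atomic_mass[el] * n / total
             for el, n in monomer.items()}
```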
1526=== modified file 'third_party/bash/40python_extras.bash'
1527--- third_party/bash/40python_extras.bash 2017-02-16 23:33:58 +0000
1528+++ third_party/bash/40python_extras.bash 2017-08-31 10:53:10 +0000
1529@@ -25,7 +25,7 @@
1530
1531 # The packages to download for the MAUS third party tarball
1532 download_package_list="\
1533- numpy anyjson python-dateutil>=1.5,<2.0 amqplib>=1.0 six>=1.4.0 \
1534+ numpy scipy anyjson python-dateutil>=1.5,<2.0 amqplib>=1.0 six>=1.4.0 \
1535 logilab-common logilab-astng suds validictory nose==1.1 nose-exclude \
1536 coverage doxypy astroid isort pylint==1.6.3 bitarray \
1537 pymongo==2.3 readline matplotlib==2.0.0 \
1538@@ -35,7 +35,7 @@
1539 # The packages to install
1540 # Note: pil is no longer on PyPI so we only use our existing tarball
1541 package_list="\
1542- numpy anyjson python-dateutil amqplib six \
1543+ numpy scipy anyjson python-dateutil amqplib six \
1544 logilab-common logilab-astng suds validictory nose==1.1 nose-exclude \
1545 coverage doxypy pylint==1.6.3 bitarray \
1546 pymongo readline matplotlib \
1547@@ -43,7 +43,7 @@
1548 "
1549
1550 # The modules to test
1551-module_test_list="numpy suds validictory nose coverage \
1552+module_test_list="numpy scipy suds validictory nose coverage \
1553 pylint bitarray matplotlib pymongo \
1554 Image django magickwand celery" #Image is pil bottom level
1555
